Background
DHS Responsibilities for Ensuring the Security of U.S.-Bound Flights from Foreign Countries
Shortly after the September 11, 2001, terrorist attacks, Congress passed and the President signed into law the Aviation and Transportation Security Act (ATSA), which established TSA and gave the agency responsibility for securing all modes of transportation, including the nation's civil aviation system. That system encompasses the operations of U.S. and foreign-flagged air carriers to, from, and within the United States, as well as the foreign point-to-point operations of U.S.-flagged carriers. Consistent with ATSA and in accordance with existing statutory requirements, TSA is to assess the effectiveness of security measures at foreign airports (1) served by a U.S. air carrier, (2) from which a foreign air carrier serves the United States, (3) that pose a high risk of introducing danger to international air travel, and (4) that are otherwise deemed appropriate by the Secretary of Homeland Security. The statute also identifies measures that the Secretary must take upon determining, based on TSA assessments, that an airport is not maintaining and carrying out effective security measures. In addition, consistent with ATSA and in accordance with existing statutory requirements, TSA is to conduct inspections of U.S. air carriers and foreign air carriers serving the United States from foreign airports to ensure that they meet applicable security requirements, including those set forth in an air carrier's TSA-approved security program.
The Secretary of DHS delegated to the TSA Administrator the responsibility for conducting foreign airport assessments but retained responsibility for making the determination that a foreign airport does not maintain and carry out effective security measures. Currently, the Global Compliance Directorate, within OGS, is responsible for conducting foreign airport assessments and air carrier inspections. Table 1 highlights the roles and responsibilities of certain TSA positions within OGS that are responsible for implementing the foreign airport assessment and air carrier inspection programs.
TSA’s Process for Conducting Foreign Airport Assessments and Air Carrier Inspections
TSA assesses the effectiveness of security measures at foreign airports using select aviation security standards and recommended practices adopted by ICAO, a United Nations organization representing 191 countries. ICAO standards and recommended practices (referred to collectively in this report as ICAO standards unless otherwise noted) address operational issues at an airport, such as ensuring that passengers and baggage are properly screened and that unauthorized individuals do not have access to restricted areas of an airport. ICAO standards also address non-operational issues, such as whether a foreign government has implemented a national civil aviation security program for regulating security procedures at its airports and whether airport officials implementing security controls are subject to background investigations, are appropriately trained, and are certified according to a foreign government's national civil aviation security program. In conducting its foreign airport assessments, TSA uses the 44 ICAO standards it considers most critical, which cover the following areas: airport operations; quality control; access control; aircraft security; passenger and cabin baggage screening; hold baggage screening; security measures relating to cargo, mail, and other goods; security measures relating to special categories of passengers; prevention; and security measures relating to the landside.
TSA uses a risk-informed approach to schedule foreign airport assessments by categorizing airports into three risk tiers, with high-risk airports assessed more frequently than medium- and low-risk airports. TSA's assessments of foreign airports are conducted by a team of inspectors, which generally includes one team leader and one team member. According to TSA, it generally takes 3 to 7 days to complete a foreign airport assessment. However, the amount of time and number of team members required to conduct an assessment varies based on several factors, including the size of the airport, the number of air carrier inspections to be conducted at the airport, and the threat level to civil aviation in the host country.
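To illustrate the kind of risk-informed scheduling described above, the following minimal Python sketch maps risk tiers to assessment intervals. The tier-to-interval mapping is an invented assumption for illustration only; the report states merely that high-risk airports are assessed more frequently than medium- and low-risk airports.

```python
from datetime import date, timedelta

# Illustrative assumption: assessment intervals by risk tier. The report
# does not specify TSA's actual interval lengths.
INTERVAL_BY_TIER = {
    "high": timedelta(days=365),
    "medium": timedelta(days=2 * 365),
    "low": timedelta(days=3 * 365),
}

def next_assessment_due(last_assessed: date, risk_tier: str) -> date:
    """Return the date by which the airport's next assessment is due."""
    return last_assessed + INTERVAL_BY_TIER[risk_tier]

# Example: a high-risk airport last assessed on March 1, 2016.
print(next_assessment_due(date(2016, 3, 1), "high"))  # 2017-03-01
```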
TSA uses a multistep process to plan, conduct, and record assessments of foreign airports. Specifically, the TSAR must obtain approval from the host government to allow TSA to conduct an airport assessment, and schedule the date for the on-site assessment. After conducting an entry briefing with State, host country officials, and airport officials, the team conducts an on-site visit to the airport. During the assessment, the team of inspectors uses several methods to determine a foreign airport’s level of compliance with ICAO standards, including conducting interviews with airport officials, examining documents pertaining to the airport’s security measures, and conducting a physical inspection of the airport. For example, inspectors are to examine the integrity of fences, lighting, and locks by walking the grounds of the airport. Inspectors also make observations on access control procedures, such as examining employee and vehicle identification methods in secure areas, as well as monitoring passenger and baggage screening procedures in the airport. At the close of an airport assessment, inspectors brief foreign airport and government officials on the results. TSA inspectors also prepare a report detailing their findings on the airport’s overall security posture and security measures, which may contain recommendations for corrective action and must be reviewed by the TSAR, the ROC manager, and TSA headquarters officials. Afterward, a summary of the results is shared with the foreign airport and host government officials. In some cases, TSA requires air carriers to adopt security procedures, such as additional passenger screening, to compensate for deficiencies that TSA identified during a foreign airport assessment.
Along with conducting airport assessments, the same TSA inspection team also conducts air carrier inspections when visiting a foreign airport to ensure that air carriers are in compliance with TSA security requirements. The frequency of air carrier inspections at each airport depends on a risk-informed approach and is influenced, in part, by the airport’s vulnerability to security breaches, since the security posture of each airport varies. In general, TSA procedures require TSA to inspect all air carriers at each airport annually or semi-annually depending on the vulnerability level of the airport, with some exceptions. For example, TSA may elect to inspect all air carriers at a particular airport on an 18-month cycle if the airport has no documented vulnerabilities for the three previous visits and all air carriers at that location have demonstrated full compliance over the past five years. When conducting inspections, TSA inspectors examine compliance with applicable security requirements, including TSA-approved security programs, security directives, and emergency amendments to the security programs.
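The inspection-cycle rules described in this paragraph can be summarized as a short decision function. The Python sketch below is a hedged reconstruction of those rules as the report describes them; the field names (high_vulnerability, clean_visits, years_fully_compliant) and the mapping of vulnerability levels to annual versus semi-annual cycles are assumptions, not TSA's actual procedure.

```python
from types import SimpleNamespace

def inspection_cycle_months(airport) -> int:
    """Length in months of the inspection cycle for all air carriers at
    an airport, per the rules described above (field names assumed)."""
    # Exception: an 18-month cycle is allowed if the airport had no
    # documented vulnerabilities over the three previous visits and all
    # carriers demonstrated full compliance over the past five years.
    if airport.clean_visits >= 3 and airport.years_fully_compliant >= 5:
        return 18
    # Otherwise annual or semi-annual, depending on the airport's
    # vulnerability level (assumed mapping).
    return 6 if airport.high_vulnerability else 12

# Example: an airport that qualifies for the 18-month exception.
example = SimpleNamespace(high_vulnerability=False,
                          clean_visits=3, years_fully_compliant=5)
print(inspection_cycle_months(example))  # 18
```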
As in the case of airport assessments, air carrier inspections are conducted by a team of inspectors, which generally includes one team leader and one team member. An inspection of an air carrier typically takes 1 or 2 days, but can take longer depending on the extent of service by the air carrier. Inspection teams may spend several days at a foreign airport inspecting air carriers if there are multiple carriers serving the United States from that location. During an air carrier inspection, inspectors are to review applicable security manuals, procedures, and records; interview air carrier station personnel; and observe air carrier employees processing passengers from at least one flight from passenger check-in until the flight departs the gate to ensure that the air carrier is in compliance with applicable requirements. Inspectors evaluate a variety of security measures, such as passenger processing (e.g., use of No Fly and Selectee lists), checked baggage acceptance and control, aircraft security, passenger screening, cargo and mail screening, and catering security. Inspectors record inspection results into TSA’s Performance and Results Information System (PARIS), a database containing security compliance information on TSA-regulated entities. If an inspector finds that an air carrier is violating any applicable security requirements, additional steps are to be taken to record those specific violations and, in some cases, pursue them with further investigation.
GAO’s 2011 Review of TSA Foreign Airport Assessment Program
In 2011, we reported on TSA's foreign airport assessment program, including steps TSA had taken to enhance the program, the results of TSA's foreign airport assessments, and opportunities for TSA to make program improvements in several key areas, such as developing criteria and guidance for determining foreign airport vulnerability ratings. We reported that TSA had not taken steps to evaluate its assessment results to identify regional and other trends over time. In addition, we found that TSA had not developed criteria or guidance for determining foreign airport vulnerability ratings. We also reported that there were opportunities for TSA to increase program efficiency and effectiveness by, for example, conducting more targeted foreign airport assessments and systematically compiling and analyzing security best practices. As a result, we recommended that TSA (1) develop a mechanism for trend analysis, (2) establish criteria and guidance to help decision makers with vulnerability ratings, and (3) consider the feasibility of conducting more targeted foreign airport assessments and compiling best practices. DHS concurred with all three recommendations and has since taken several actions to address them, including developing a mechanism to compile and analyze best practices.
Since 2011, TSA Has Taken Various Steps to Strengthen its Foreign Airport Assessment and Air Carrier Inspection Programs
TSA Has Taken Steps to Better Target Program Resources Based on Risk
TSA established the Northern Virginia ROC. In 2012, TSA created a dedicated ROC in Northern Virginia to oversee North Africa and the Middle East given the high risk associated with many airports in the region. The creation of the Northern Virginia ROC alleviated resource burdens on the Frankfurt ROC, which previously had oversight for both the Europe and Africa-Middle East regions. In addition, the Northern Virginia ROC Manager stated that the small size of the ROC has facilitated strong working relationships because foreign airport officials in the region tend to meet with the same inspectors more frequently.
TSA created the Analysis and Risk Mitigation (ARM) Directorate. In 2013, TSA established a working group to evaluate ways to better integrate risk management in the foreign airport assessment and air carrier inspection programs. This working group developed a risk framework, which, according to TSA documentation, provides a systematic approach for analyzing risk at international airports, supports OGS decision making, and informs efforts to mitigate security deficiencies. In 2015, OGS created the ARM Directorate, which formalized the risk mitigation responsibilities of the working group and serves as the data analysis and evaluation arm of OGS. OGS officials stated that ARM helps the program focus its resources based on risk. For example, ARM analyzes and prioritizes activities, such as training, that are designed to mitigate security vulnerabilities at foreign airports.
TSA conducts more targeted foreign airport assessments. Based on a recommendation in our 2011 report, TSA has taken actions to conduct more targeted foreign airport assessments. For example, TSA developed the Pre-Visit Questionnaire, which host foreign airport officials fill out prior to TSA’s visit. This information enables each TSA foreign airport assessment team to tailor the on-site assessment at each airport and focus TSA’s assessment efforts on specific areas of concern. Additionally, TSA implemented more focused airport assessments, known as targeted risk assessments, in locations where risk is high or there are other factors that require a more focused evaluation of the site’s security posture. For the focused assessments, inspection teams place emphasis on observations, interviews, document reviews, and thorough analysis of specific ICAO standards.
TSA implemented cross-directorate reviews. In 2015, TSA implemented cross-directorate reviews, which bring together experts across the OGS components, such as inspectors and TSARs, to identify critical vulnerabilities at foreign airports and outline an initial plan to mitigate those vulnerabilities. Overall, TSA completed 28 cross-directorate reviews in 2015 and 2016.
TSA Has Taken Steps to Strengthen Foreign Airport Access and the Comprehensiveness of Its Evaluations
TSA took steps to resolve foreign airport access issues. Since our 2011 review, TSA has faced delays in scheduling some foreign airport assessments and obstacles in obtaining full access to airport operations at certain locations. According to TSA officials, TSA has used several tactics to resolve access issues, including deploying the same inspectors over multiple assessments to build rapport with foreign airport officials. For example, in one country in the Western Hemisphere region, TSA's access to airport operations was initially limited by the host government. However, over time, TSA used a small pool of inspectors who officials said were able to build trust with the host government and gain better access, including the ability to conduct interviews of airport officials and take photographs of the security environment. Additionally, in 2011, we reported on TSA's challenges in obtaining access to airports in Venezuela. Specifically, we reported that TSA had not been able to assess airports in Venezuela or conduct TSA compliance inspections for air carriers, including U.S. carriers, flying from Venezuela to the United States since 2006. According to TSA officials, in 2014, TSA regained access to airports in Venezuela after establishing a dialogue with the new government and emphasizing the benefits of the evaluation process.
TSA increased the number of joint airport assessments in Europe. In 2011, we reported that TSA took a number of actions to assess foreign airports in Europe, including conducting joint assessments with the EC, performing bilateral assessments, and executing table-top reviews in place of on-site airport visits. According to EC officials, the main goal under this arrangement was to better leverage resources and reduce the number of TSA visits per year to European airports because of concerns from EU member states about the frequency of visits from EC and U.S. audit teams. However, since our previous review, TSA has limited the use of table-top reviews and now primarily assesses foreign airports in Europe through joint assessments with the EC. Frankfurt ROC officials we met with indicated that TSA's strong relationship with the EC has afforded the agency excellent access to foreign airports in Europe and a better understanding of vulnerabilities at these locations, which has resulted in more comprehensive assessments. For example, according to TSA, through the joint assessments, inspectors have better access to airport training documents, the ability to observe tests conducted by EC inspectors, and more time at checkpoints to observe screening operations.
TSA developed airport assessment and air carrier inspection job aids. In 2012, TSA developed job aids that provide inspectors with a set of detailed areas to assess for each ICAO standard. For example, a job aid for passenger and cabin baggage screening includes several prompts related to screening roles and responsibilities, the resolution process if a suspicious item is detected, and alternative procedures if screening equipment is not working as intended. TSA also developed job aids for the air carrier inspection process to better ensure that inspectors cover all requirements associated with air carrier security programs. According to OGS officials, these actions have led to more comprehensive evaluations and a better understanding of foreign airport and air carrier vulnerabilities.
TSA Has Worked to Create Operational Efficiencies
TSA established the Honolulu ROC. In 2012, TSA eliminated the Los Angeles ROC and established the Honolulu ROC given its proximity to the Pacific Islands, which allowed the agency to reduce costs and travel time to airports in these locations. Specifically, according to TSA documentation, inspectors in the Los Angeles ROC often spent more than 20 hours traveling to and from sites in the Asia-Pacific region because of in-flight transit time and connection requirements. With the creation of the Honolulu ROC, TSA officials told us that inspectors have been better able to meet deadlines for completing foreign airport assessment reports and conduct follow-up visits to resolve noted issues.
TSA developed the Global Risk Analysis and Decision Support System. In 2012, TSA developed the Global Risk Analysis and Decision Support System (GRADS) to streamline the assessment report writing process and strengthen OGS's ability to analyze foreign airport assessment results. According to TSA officials, GRADS has provided OGS personnel with a number of benefits, including the ability to run standardized reports, extract and analyze key data, and manage airport operational information, such as data on security screening equipment. According to TSA documentation, prior to 2012, the agency captured the results of its foreign airport assessments in narrative form that often amounted to more than 80 pages, hampering the ability to perform data analysis.
TSA standardized processes. Between 2012 and 2016, TSA deployed standardization teams, called Standardization Effort Teams, to help ensure more consistency among inspectors when conducting air carrier inspections and airport assessments, and to identify and develop best practices in areas such as training, among others. For example, in 2016, a team developed a tool to facilitate performance evaluations of inspectors.
TSA Foreign Airport Assessment Data Showed Variations in Compliance by Region and Across ICAO Standards, while Air Carrier Inspection Data Showed That Most Inspections Were Fully Compliant
Foreign Airports Differed in Level of Compliance by Region and Across ICAO Standards
TSA assesses the overall vulnerability level at each foreign airport using a rating system, ranging from a category “1,” which represents full compliance with ICAO standards, to a “4” or “5,” which involve more serious or egregious issues. Based on our analysis of TSA’s foreign airport assessment data, we found that compliance with ICAO standards varied by region. For example, our analysis showed that some regions of the world had a higher percentage of airports in vulnerability categories 4 and 5. Our analysis also showed that there are differences in compliance across the ICAO standards. Specific information related to TSA’s airport assessment results is deemed Sensitive Security Information.
According to TSA officials, it is difficult to draw conclusions about the cumulative foreign airport assessment results—such as whether the results are generally positive or negative—because the primary concern is not whether security deficiencies are identified, but whether foreign countries are capable and willing to address security deficiencies. Specifically, there is considerable regional variation in the level of compliance because some foreign countries face challenges due to lack of resources or technical knowledge, among other factors. TSA officials stated that while these challenges are not easy to overcome, agency efforts, such as training host country staff, can help foreign airports reduce their vulnerability scores over time. Our analysis of TSA's foreign airport assessment data confirms that point. Specifically, we found that, of the foreign airports assigned a vulnerability rating of 4 or 5 in fiscal year 2012, the majority improved their vulnerability score in at least one follow-up assessment during fiscal years 2012 through 2016. According to TSA documentation, in some cases, foreign airports are able to take immediate measures to resolve security deficiencies. In other situations, foreign airports may struggle to take corrective actions or sustain improvements over time. Accordingly, TSA's regulatory authority over air carriers is an important tool. TSA officials indicated that the agency commonly requires air carriers to adopt security procedures, such as passenger screening, to compensate for foreign airport security deficiencies. Moreover, if appropriate, DHS can take secretarial action, which includes the option to prohibit air carriers operating at a foreign airport from providing last point of departure flights to the United States.
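As a hedged illustration of the improvement analysis described above, the Python sketch below counts how many airports rated 4 or 5 in fiscal year 2012 improved their score in at least one later assessment. The records shown are hypothetical; TSA's actual assessment results are Sensitive Security Information.

```python
# Hypothetical records: {airport_id: {fiscal_year: vulnerability_score}}.
scores = {
    "AAA": {2012: 4, 2014: 3, 2016: 2},
    "BBB": {2012: 5, 2013: 5, 2015: 4},
    "CCC": {2012: 4, 2015: 4},  # no improvement observed
}

# Airports rated 4 or 5 in fiscal year 2012.
at_risk_2012 = {a for a, s in scores.items() if s.get(2012) in (4, 5)}

# Airports whose score improved (decreased) in any later assessment.
improved = {
    a for a in at_risk_2012
    if any(score < scores[a][2012]
           for fy, score in scores[a].items() if fy > 2012)
}
print(f"{len(improved)} of {len(at_risk_2012)} airports improved")  # 2 of 3
```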
Most Air Carrier Inspections Were Fully Compliant and TSA Used On-the-Spot Counseling to Resolve the Majority of Deficiencies
According to air carrier inspection data maintained by TSA, between fiscal years 2012 and 2016, air carriers providing last point of departure service to the United States from foreign airports complied with all TSA security requirements in most inspections. For those inspections that identified noncompliance, data from TSA showed that the majority of violations were corrected or addressed immediately through on-the-spot counseling. Inspectors submitted a certain number of violations for investigation because the violations were considered serious enough to potentially warrant an enforcement action. TSA can impose two general types of enforcement actions on air carriers that violate security requirements—an administrative action, such as a warning notice, or a monetary civil penalty. Based on information included in TSA’s investigation module within PARIS, TSA took administrative action in the majority of cases and levied 44 fines during fiscal years 2012 through 2016, which totaled about $575,000 and ranged from $1,000 to $40,500. According to TSA officials, they rely on a system of progressive enforcement and carefully consider whether a civil penalty is warranted based on the compliance history of an air carrier, among other factors.
TSA Addresses Security Deficiencies through Various Capacity Development Efforts, but Enhanced Data Management Could Strengthen Analysis and Decision Making
TSA Assists Foreign Airports and Air Carriers in Addressing Identified Security Deficiencies in Various Ways
Foreign Airports
As part of assisting foreign airports, inspectors work to transfer knowledge on how to mitigate identified airport security deficiencies to foreign airport officials and provide TSA program officials with suggestions for capacity development that could be effective in addressing these deficiencies. Specifically, TSA capacity development assistance to foreign airports includes on-the-spot counseling, training, technical assistance and consultation, and provision of security equipment.
Inspectors counsel foreign airport staff on-the-spot. According to TSA officials, inspectors typically offer counseling during airport assessments when they discover deficiencies, usually of an infrequent, less serious, or technical nature, that can be addressed immediately. For example, during a 2013 assessment of an airport in the Europe region, inspectors observed a total of 53 employees within the restricted area, of which one was not displaying his badge. Airport officials immediately requested that the individual display his badge and informed the TSA inspection team that they would remind all staff to properly display their airport media while in the restricted area. For the remainder of the airport visit, no badge display issues were noted. In another example, during an assessment in the Western Hemisphere region, inspectors observed persons entering a restricted area without undergoing screening. The inspectors counseled the airport's security officials on the importance of adhering to the airport's security program, and observed the airport officials take immediate action by implementing escort and screening procedures.
TSA provides security training. TSA may provide training to foreign airport staff to address deeper problems with staff security knowledge or to strengthen staff knowledge in an evolving threat environment. Training may take several forms, including traditional classroom courses or interactive workshops, and can range in length from one or two days to more than one week. Course topics include risk management, screening operations, and airport security, with a broad variety of sub-topics, such as insider risk, cargo security, and inspection techniques. According to TSA, new courses are in development to meet the changing security landscape. New course topics include landside security, behavioral awareness, and the effective use of canines.
TSA arranges for technical assistance and consultation. TSA assists foreign governments in securing technical assistance and consultation, provided by TSA and other U.S. and foreign government agencies, to help improve security at foreign airports, particularly after security incidents or at airports in developing countries. For example, after the 2016 terrorist attack on Brussels Airport, TSA was invited by airport officials to provide on-site consultation during the reconstitution of the airport facilities. In another example, TSA provided a country in the Africa-Middle East region with on-site technical assistance for configuring and testing explosives detection equipment at baggage screening checkpoints. In addition, State's Anti-Terrorism Assistance Program augments TSA's resources in building the aviation security capacity of foreign governments. For instance, State provides recipient nations with courses focused on airport security management, quality control, and fraudulent document recognition, as well as multi-day passenger and cargo security consultations. TSA also collaborates with other countries on capacity development. Partners may promote common aviation security goals to other countries when political considerations preclude TSA from doing so, or combine resources with TSA for joint efforts. For example, in one collaboration, a country in the Asia-Pacific region provided resources and facilities, while TSA provided staff so that neighboring countries could attend aviation security training.
TSA loans and donates security equipment. TSA may loan or donate security equipment such as explosives detection devices and metal detection hand wands to lower-income countries. Since fiscal year 2012, TSA has loaned X-ray screening equipment and explosives detection devices to five countries. Enacted in July 2016, the Aviation Security Act expressly authorizes TSA to donate security screening equipment to a foreign last point of departure airport if such equipment can be reasonably expected to mitigate a specific vulnerability to the security of the United States or U.S. citizens. TSA may also provide staff at foreign airports with demonstrations for using equipment that has been loaned or donated by TSA, as well as equipment otherwise acquired by host governments. For instance, in 2016 TSA provided operator training and maintenance assistance to a country in the Africa-Middle East region that had procured passenger body scanners.
Air Carriers
TSA also takes steps to help air carriers address security deficiencies identified during air carrier inspections. TSA primarily offers capacity development support to air carriers through on-the-spot counseling and consultation with IIRs.
Inspectors counsel air carrier representatives on-the-spot. TSA assists air carrier representatives in addressing security deficiencies identified during air carrier inspections. According to TSA, since carriers have TSA-approved security programs, additional training may not be necessary to correct small issues. Rather, officials said that counseling air carrier staff on the proper procedures, followed by observing staff practice those procedures, may suffice. TSA data show that of the instances in which inspectors identified noncompliance with TSA security requirements during fiscal years 2012 through 2016, the majority were resolved through counseling—that is, the security deficiencies were resolved with on-site assistance or consultation provided by TSA. For example, during an air carrier inspection in the Europe region, inspectors observed that a passenger wearing sandals was not screened properly. TSA counseled the screening staff that footwear screening requirements apply to all shoes, including sandals. The inspectors then observed proper rescreening of the passenger. TSA also discussed the matter with airline security representatives, who concurred with TSA.
IIRs assist air carriers with compliance. In addition to counseling provided by inspectors when deficiencies are identified, TSA assigns each air carrier to a representative who assists the carriers in complying with TSA security requirements. Although these representatives, called IIRs, do not participate in air carrier inspections, they do receive inspection results for the carriers with whom they work. IIRs counsel the air carriers and provide clarification regarding TSA security requirements when necessary. For example, they provide air carriers with clarification on the requirements contained in security directives and emergency amendments issued by TSA. In other instances, when an air carrier cannot comply with a TSA security requirement—such as when complying with a TSA security requirement would cause the air carrier to violate a host government security requirement—the air carrier works with its IIR to develop alternative security procedures in a manner consistent with TSA regulations. With alternative procedures, air carriers can deviate from their TSA-approved security program while still meeting the intent of TSA requirements. According to some IIRs with whom we spoke, these alternative procedures are intended to provide a level of security that is equivalent to the level of security provided by TSA's standard requirements while also affording air carriers some flexibility in how they achieve the intended security benefit of the TSA requirement. Alternative security procedures are reviewed by the IIR, who submits them to TSA headquarters and field officials for final review and approval.
TSA Has Taken Steps to Leverage Information for Capacity Development, but Could Enhance Data Management
Leveraging Information for Capacity Development
TSA has taken a number of steps to strengthen its analytical processes and better understand the impact of the foreign airport assessment and air carrier inspection programs. According to OGS officials, the establishment and evolution of the ARM Directorate has facilitated better data analysis and enhanced decision making pertaining to capacity development. Specifically, TSA now conducts regional strategy meetings, produces regional risk reports, and approves requests for assistance based on risk.
OGS conducts regional strategy meetings. Since fiscal year 2012, OGS has held strategy meetings to address aviation security threats and vulnerabilities within each region. During these meetings, OGS officials examine trend data for both airport assessments and air carrier inspections, including vulnerability ratings over a multi-year period, identify common areas of non-compliance, and develop capacity building approaches customized to each region. According to agency documentation, these meetings led OGS to recognize that each geographic region faces its own particular challenges and risks and requires unique mitigation approaches, such as at the country or airport level.
ARM develops regional risk reports. In 2016, the ARM Directorate began producing regional risk reports for use by other teams within OGS. The purpose of these reports is to provide OGS personnel operating within each of the four regions with an understanding of known vulnerabilities in the region and their associated risk in order to inform mitigation planning efforts. These reports include such information as key risks at each location and region-wide trends on vulnerabilities. For example, the reports show patterns in noncompliance related to critical ICAO standards. In addition, the reports compare airports by risk level and examine how individual airports compare to a regional average. According to ARM staff, one of the top priorities this year is to centralize analysis results within a web portal that allows users across OGS to sort and filter data. ARM expects the portal to include comprehensive airport profiles that capture the primary details for each location, such as the largest carriers and main risks.
OGS approves requests for assistance based on risk. Requests for capacity development assistance are submitted by OGS personnel, including TSARs and inspectors. TSA's Capacity Development Branch (CDB) in ARM assesses these requests against standardized criteria that include an airport's past and present vulnerabilities, the root causes of these vulnerabilities, the timing of the assistance delivery, and the suitability of the intended recipient. For instance, TSA assesses the capabilities of the government or airport that would receive the assistance, and considers such factors as whether the intended recipient has the commitment necessary to institutionalize TSA-sponsored training and the technical expertise to use any equipment that may be loaned or donated by TSA. In addition, according to TSA officials, TSA considers the extent to which the intended recipient has been a cooperative partner in the past and implemented TSA's previous security recommendations. After CDB's risk-based assessment of assistance requests, OGS management makes a final determination regarding the provision of assistance.
TSA Could Enhance Data Management
While TSA has taken steps to leverage the results of foreign airport assessments and air carrier inspections to monitor system-wide vulnerabilities and inform capacity development, TSA lacks key information for decision making. For instance, we found that the Open Standards and Recommended Practices Findings Tool (OSFT), a database for tracking the resolution status of identified foreign airport deficiencies, has data gaps, and that its categorization scheme does not capture sufficiently specific information on the root causes of security deficiencies and the associated corrective actions.
Root causes represent the underlying reason why an airport is not meeting an ICAO standard and, according to TSA documentation, fall into three general categories: lack of knowledge, lack of infrastructure, and lack of will. For example, a foreign airport might fail to meet an ICAO standard because of lack of knowledge stemming from insufficient training programs or a high rate of staff turnover. According to OGS officials, an understanding of root causes is important because the challenges to addressing security deficiencies at foreign airports vary extensively from country to country and corrective actions need to be tailored to addressing the unique root causes of deficiencies that TSA identifies. Corrective actions are efforts to mitigate security deficiencies and might include training and other capacity building efforts. Corrective actions can be designed to help a foreign airport add a new security capability, enhance an existing capability, or increase the deployment of security measures.
Although root causes and corrective actions are important variables for decision making, we found that the OSFT has gaps in this information. TSARs—the primary liaisons between the U.S. government and foreign governments on transportation security issues—are responsible for following up on progress made by foreign officials in addressing security deficiencies identified during TSA assessments. Specifically, the Foreign Airport Assessment Program SOP states that, for each foreign airport assessed, the assigned TSAR is responsible for entering and updating key information in the OSFT, including root cause and corrective action information. According to the SOP, a thorough understanding of the underlying reasons for each deficiency is critical to selecting the appropriate mitigation activities. However, we found that around two-thirds of the fiscal year 2016 records in the OSFT had empty fields for root cause or recommended corrective action. More specifically, root cause data and recommended corrective action data were each not recorded for 70 percent of findings.
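A field-completeness check of the kind underlying this finding can be expressed in a few lines of Python. In the sketch below, the OSFT record layout (root_cause and corrective_action keys) is an assumption made for illustration, and the records are hypothetical.

```python
# Hypothetical OSFT finding records; empty strings denote missing fields.
findings = [
    {"id": 1, "root_cause": "insufficient training", "corrective_action": ""},
    {"id": 2, "root_cause": "", "corrective_action": ""},
    {"id": 3, "root_cause": "", "corrective_action": "enhance fencing"},
]

def pct_missing(records, field):
    """Percentage of records whose `field` is empty or whitespace."""
    missing = sum(1 for r in records if not r[field].strip())
    return 100 * missing / len(records)

print(f"root cause missing: {pct_missing(findings, 'root_cause'):.0f}%")
print(f"corrective action missing: "
      f"{pct_missing(findings, 'corrective_action'):.0f}%")
```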
During our interviews with TSARs, half (4 out of 8) indicated that they believed the OSFT to be a cumbersome tool that has limitations for recording status updates, among other issues, or that they preferred to use other mechanisms, such as spreadsheets stored locally, in order to avoid using the OSFT for certain functions. TSA headquarters officials indicated that OGS began requiring staff to record root cause and corrective action information in 2015 and that institutionalizing this requirement to facilitate consistent data entry will take time. However, complete data on root causes and corrective actions would help TSA systematically monitor airport performance in addressing deficiencies and leverage information for decision making regarding capacity development. For example, with complete information TSA would be in a better position to determine the extent to which airports were able to effectively close security vulnerabilities based on TSA’s capacity building efforts, as well as conduct trend analysis within and across its four regions, including identifying potential linkages between root causes and corrective actions. Specifically, TSA could determine the extent to which corrective actions seem to align best with certain root causes. For example, while training might be an appropriate remedy if foreign airport personnel lack knowledge, it might not be an appropriate solution for lack of will.
We also found that the OSFT has limitations related to the categorization of root causes and corrective actions. The Foreign Airport Assessment Program SOP indicates that root causes may relate to three broad categories, as explained earlier, and twelve subcategories: aviation security infrastructure, communication, cultural factors, human factors, management systems, physical infrastructure, procedures, quality control, resources, supervision, technology, and training. However, the OSFT does not include a field to categorize root causes according to these subcategories or other more specific areas. As a result, it does not capture more granular information that would better explain the specific root cause of an identified security issue.
Moreover, information on recommended corrective actions is stored entirely in OSFT narrative fields without a drop-down list or other type of categorization mechanism. For example, according to OSFT data, in one Western Hemisphere region country, inspectors observed insufficient employee screening and access control. The recommended corrective action—"Fencing around the terminal area will be enhanced and airport personnel counseled about employee screening"—would be difficult to include in quantitative analysis without manual intervention. The OSFT also includes a field for the final corrective action—how an airport ultimately resolved a security issue. However, the categories in the OSFT for final corrective action do not account for many key types of TSA's mitigation efforts (e.g., training, loaning or donating equipment, and directing an air carrier to mitigate an airport vulnerability). Specifically, for fiscal year 2016, we found that the OSFT only included data for three high-level categories of final corrective actions: "airport authorities resolved," "national authorities resolved," and "other."
ARM staff stated that they recognize that the classification of data currently contained in the OSFT could be improved, but that they have not had an opportunity to address the issues because they have been focused on developing the newest release of GRADS. TSA staff also indicated that they are exploring opportunities to better classify data in future releases of GRADS. However, according to the Foreign Airport Assessment Program SOP, a thorough understanding of the underlying reasons for each deficiency is critical to properly selecting the appropriate mitigation activities. Moreover, federal internal control standards suggest that agencies should design information systems to obtain and process information to meet each operational process's data requirements and to respond to the entity's objectives and risks. By classifying information on root causes and corrective actions with additional specificity, and through a standard system of categorization that would allow for system-wide analysis, TSA would be better positioned to assure that corrective actions accurately address the specific, underlying reasons for security vulnerabilities.
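To make concrete what a more specific categorization could look like, the Python sketch below encodes the three broad root-cause categories and twelve subcategories named in the Foreign Airport Assessment Program SOP as enumerations, along with an illustrative, expanded set of final corrective-action categories reflecting the mitigation types the report mentions. The enumeration design is our own illustration, not TSA's schema.

```python
from enum import Enum

class RootCauseCategory(Enum):
    # The three broad categories identified in TSA documentation.
    LACK_OF_KNOWLEDGE = "lack of knowledge"
    LACK_OF_INFRASTRUCTURE = "lack of infrastructure"
    LACK_OF_WILL = "lack of will"

class RootCauseSubcategory(Enum):
    # The twelve subcategories named in the Foreign Airport
    # Assessment Program SOP.
    AVIATION_SECURITY_INFRASTRUCTURE = "aviation security infrastructure"
    COMMUNICATION = "communication"
    CULTURAL_FACTORS = "cultural factors"
    HUMAN_FACTORS = "human factors"
    MANAGEMENT_SYSTEMS = "management systems"
    PHYSICAL_INFRASTRUCTURE = "physical infrastructure"
    PROCEDURES = "procedures"
    QUALITY_CONTROL = "quality control"
    RESOURCES = "resources"
    SUPERVISION = "supervision"
    TECHNOLOGY = "technology"
    TRAINING = "training"

class FinalCorrectiveAction(Enum):
    # Illustrative categories reflecting mitigation types the report
    # mentions; the OSFT currently records only the last three values.
    TRAINING_PROVIDED = "training provided"
    EQUIPMENT_LOANED_OR_DONATED = "equipment loaned or donated"
    AIR_CARRIER_DIRECTED_TO_MITIGATE = "air carrier directed to mitigate"
    AIRPORT_AUTHORITIES_RESOLVED = "airport authorities resolved"
    NATIONAL_AUTHORITIES_RESOLVED = "national authorities resolved"
    OTHER = "other"
```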
Conclusions
TSA’s foreign airport assessment and air carrier inspection programs play a vital role in ensuring the security of the aviation system. TSA has taken a number of steps to enhance foreign airport assessments and air carrier inspections since 2011, including targeting resources based on risk, strengthening access to foreign airports and the comprehensiveness of its assessments and inspections, and creating operational efficiencies. While TSA does not have authority to impose or otherwise enforce security requirements at foreign airports, the agency makes a concerted effort to help foreign airports improve their security posture and address security deficiencies identified during assessments. Moreover, TSA is commonly able to resolve air carrier security deficiencies with on-the-spot counseling.
While TSA uses various mechanisms for capacity building, better data management would help strengthen analysis and decision making. Specifically, fully capturing and more specifically categorizing data on the root causes of security deficiencies that TSA identifies and the associated corrective actions would provide the agency with a more comprehensive understanding of the security environment at foreign airports. For example, TSA could leverage this information for trend analysis, including evaluating potential linkages between root causes and corrective actions, and determining the extent to which airports that received specific types of capacity development services were able to close security vulnerabilities. Accordingly, TSA would have better visibility over the different types of capacity development that the agency offers and the overall return on investment for these efforts.
Recommendations for Executive Action
We are making the following two recommendations to TSA:
The Assistant Administrator for the Office of Global Strategies should ensure that data regarding the root causes of security deficiencies and corrective actions are consistently captured in accordance with TSA guidance. (Recommendation 1)
The Assistant Administrator for the Office of Global Strategies should update TSA’s data systems to include more specific categories for TSA’s data on the root causes and corrective actions related to security deficiencies. (Recommendation 2)
Agency Comments and Our Evaluation
We provided a draft of our report to DHS for its review and comment. DHS provided written comments, which are noted below and reproduced in full in appendix II. DHS concurred with both recommendations in the report and described actions underway or planned to address them. With regard to the first recommendation that TSA ensure that data regarding the root causes of security deficiencies and corrective actions are consistently captured in accordance with TSA guidance, DHS concurred and stated that TSA will use a new tool, the Vulnerability Resolution Tool (VRT), to capture and categorize root causes and corrective actions. During the next fiscal year, TSA plans to train its staff in the use and importance of the VRT, and estimates that it will complete this process by October 31, 2018. If TSA consistently captures root causes and corrective actions in the new tool, TSA’s planned actions would address the intent of the recommendation. With regard to the second recommendation that TSA update TSA’s data systems to include more specific categories for TSA’s data on the root causes and corrective actions related to security deficiencies, DHS concurred and stated that TSA plans to include more specific categories for root causes and corrective actions in a future iteration of GRADS, and expects to complete the updates by October 31, 2018. If fully implemented, these actions should address the intent of the recommendation.
We are sending copies of this report to interested congressional committees and the Secretary of Homeland Security, the Secretary of State, the Administrator of the Transportation Security Administration, and the TSA Assistant Administrator for the Office of Global Strategies. In addition, the report is available at no charge on the GAO website at http://gao.gov.
If you or your staff members have any questions about this report, please contact Jennifer Grover at (202) 512-7141 or [email protected], or Jessica Farb at (202) 512-6991 or [email protected]. Key contributors to this report are listed in appendix III.
Appendix I: Objectives, Scope, and Methodology
The Aviation Security Act of 2016 includes a provision for GAO to review the efforts, capabilities, and effectiveness of TSA to enhance security capabilities at foreign airports and determine if the implementation of such efforts and capabilities effectively secures international-inbound aviation. This report (1) describes steps TSA has taken to enhance foreign airport assessments and air carrier inspections since 2011, (2) describes the results of TSA’s foreign airport assessments and air carrier inspections, and (3) examines steps TSA takes to address any deficiencies identified during foreign airport assessments and air carrier inspections.
To collectively address all three objectives, we reviewed the relevant laws and regulations pursuant to which TSA conducts foreign airport assessments and air carrier inspections. We reviewed various TSA documents on program management and strategic planning and interviewed TSA officials located at TSA headquarters and in the field. We interviewed other federal and nonfederal stakeholders, such as the Department of State (State), the European Commission (EC), and airport and air carrier representatives. We outline the specific steps taken to answer each objective below.
To obtain a greater understanding of the foreign airport assessment and air carrier inspection processes, including how TSA works with host nation officials and air carrier representatives, we accompanied a team of TSA inspectors during an air carrier inspection at an airport in Europe. We based our site selection on several factors, including the air carrier locations TSA had plans to inspect during the course of our audit work and host government willingness to allow us to accompany TSA. In addition, we spoke with officials at a separate European airport, including the airport operator and representatives from two air carriers.
To understand how TSA assesses and manages its foreign airport and air carrier risk information, we obtained and reviewed documents on TSA’s methodology for assigning individual risk rankings (called tier rankings) to each foreign airport it assesses. TSA’s rankings are based on the likelihood of a location being targeted, the protective measures in place at that location, and the potential impact of an attack on the international transportation system. Airports are then categorized as high, medium, or low risk. We also reviewed TSA’s methodology for grouping air carriers based on risk, which is influenced by the foreign airport risk tiers.
To describe the steps that TSA has taken to enhance foreign airport assessments and air carrier inspections since 2011, we reviewed various TSA documents on program management and strategic planning. Specifically, we reviewed TSA’s 2016 Foreign Airport Assessment Program Standard Operating Procedures (SOP), which prescribes program and operational guidance for assessing security measures at foreign airports, and informs TSA personnel at all levels of what is expected of them in the implementation of the program. We also reviewed the job aids that TSA inspectors use during each assessment and inspection, which ensure that the TSA-specified International Civil Aviation Organization (ICAO) aviation security standards and recommended practices (referred to collectively in this report as ICAO standards unless otherwise noted) and air carrier security program requirements are fully evaluated during each assessment. In addition, we reviewed TSA’s Office of Global Strategies (OGS) Strategic Plan for fiscal years 2014 through 2018, and documents describing changes to the OGS organizational structure since 2011.
To obtain stakeholder views and perspectives on steps TSA has taken to enhance its foreign airport assessment program since 2011, we interviewed and obtained information from various federal stakeholders. Specifically, we interviewed OGS officials located in the Global Compliance (GC), Global Affairs, and Analysis and Risk Mitigation (ARM) directorates. In addition, we conducted site visits to three of the six TSA regional operations centers (ROC), located in Reston, Miami, and Frankfurt, where we met with ROC managers, transportation security specialists (henceforth referred to as inspectors) who conduct TSA's foreign airport assessments and air carrier inspections, TSARs who follow up on host governments' progress in addressing identified security deficiencies, international industry representatives (IIR) who liaise with air carriers, and regional directors (RD). We based our site visit selections on the number and type of staff available at each location and geographic dispersion. We also conducted telephone interviews with personnel from the Honolulu ROC and other OGS staff stationed worldwide. In total, we interviewed 4 of the 6 ROC managers, 19 of the 94 inspectors, 8 of the 29 TSARs, 8 of the 16 IIRs, and all 4 RDs. During these interviews, we discussed these officials' responsibilities related to the assessment and inspection programs.
To describe the results of TSA’s foreign airport assessments and air carrier inspections, we interviewed TSA officials on the results of its evaluations, obtained and reviewed relevant program documents, and conducted our own independent analysis of TSA’s assessment and inspection results. Specifically, we obtained and reviewed TSA’s foreign airport assessment program vulnerability results tracking sheet used by GC to compile and track current and prior-year assessment results. This tracking sheet included records of TSA’s compliance assessments for each airport that TSA assessed from fiscal years 2012 through 2016. Specifically, the tracking sheet recorded assessment results for each of the ICAO standards used in the airport assessments, as well as an overall vulnerability score of 1 through 5 assigned after each assessment. This overall vulnerability score is a representation of compliance or noncompliance with all the ICAO standards against which TSA assesses foreign airports. We interviewed OGS officials on the steps taken to develop the tracking sheet, including how TSA manages and updates data. In addition, we conducted our own independent analysis of TSA’s assessment results from fiscal years 2012 through 2016, the five-year period since our previous review. Specifically, we analyzed data from TSA’s foreign airport assessment program vulnerability results tracking sheet to identify the number of airports in each vulnerability category by region. We also analyzed TSA assessment results data to determine the frequency with which foreign airports complied with particular ICAO standards, such as access control, quality control, passenger screening, and baggage screening, among others.
For air carrier inspection results, we analyzed data from PARIS on each air carrier that TSA inspected from fiscal years 2012 through 2016. Our analysis included the overall level of compliance, as well as the frequency with which each air carrier complied with particular security program requirements, such as aircraft search and passenger screening. We also interviewed TSA managers, inspectors, and TSARs about their roles and responsibilities in determining and documenting assessment and inspection results. To assess the reliability of TSA's assessment and inspection data, we reviewed program documentation on system controls, interviewed knowledgeable officials from OGS, and checked TSA's data for potential gaps and errors. Based on our overall analysis of the data, we determined that the data were sufficiently reliable to provide a general indication, by type or category, of the standards TSA assesses against and the level and frequency of compliance for TSA's foreign airport assessments and air carrier inspections over the period of our analysis.
To examine the steps TSA takes to address deficiencies identified during foreign airport assessments and air carrier inspections, we interviewed ARM and other TSA staff. Specifically, we discussed the full range of options that are available to TSA for addressing airport and air carrier security deficiencies, including a variety of capacity development tools and collaboration with domestic agencies, such as State, and foreign partners, such as Australia, Canada, Chile, New Zealand, Singapore, South Africa, and the United Kingdom. During these interviews, we discussed the circumstances in which each option is typically used and the factors determining when an option is used. We also reviewed program management tools TSA uses to track and manage the status of foreign airport security deficiencies and records pertaining to capacity development assistance deliveries from fiscal years 2012 through 2016, including equipment loaned or donated, training courses provided, and technical assistance delivered.
To obtain information on the extent to which TSA provided oversight of its assessment and inspection efforts, we obtained and reviewed various TSA program management documents and tools that TSA uses to track and manage information for the programs. Specifically, we reviewed the fiscal year 2017 Global Compliance Master Work Plan, which TSA uses to track its foreign airport assessment schedule, including when various airports are due to be assessed. We also reviewed the Open Standards and Recommended Practices Findings Tool, which the TSA Representatives (TSAR) use to monitor and track a foreign airport’s progress in resolving security deficiencies identified by TSA inspectors during previous assessments. In addition, we reviewed the tracking sheet TSA uses to compile and track airport assessment results, including individual airport vulnerability scores and information on which specific ICAO standards were in noncompliance. Finally, we reviewed the results of air carrier inspections that are contained in the inspections and investigations modules of TSA’s Performance and Results Information System (PARIS).
To identify challenges affecting TSA’s foreign airport assessment program, we interviewed TSA officials, such as TSA’s Director of Global Compliance, and field officials located at the TSA ROCs about the challenges they experience obtaining access to foreign airports to conduct assessments, the performance of data management systems, and the provision of aviation security capacity development assistance to foreign governments. We also obtained their perspectives on foreign governments that have been reluctant to allow TSA inspectors to visit their airports. We also interviewed TSA’s Director of Global Compliance and headquarters and field staff on the agency’s use of databases and other tracking mechanisms to manage assessment and inspection results. In addition, we obtained the perspective of TSARs on challenges to ensuring that foreign airports address security deficiencies. We also interviewed officials within TSA’s Capacity Development Branch to better understand the scope and types of requests for assistance that they receive from foreign countries, the challenges that they experience in attempting to provide assistance, and their experience collaborating with State.
We met with State officials to better understand how they coordinate with TSA through their Office of Anti-Terrorism Assistance and other related efforts aimed at building foreign partners' capacity to secure their airports. In addition, we met with officials from the European Commission (EC) and the International Air Transport Association to discuss efforts and programs these organizations have in place to enhance international aviation security.
In addition, during our interviews with ARM staff, we discussed the extent to which TSA uses information at its disposal to inform capacity development efforts. We also compared these efforts to criteria for obtaining and processing information in federal internal control standards. To identify opportunities for TSA to better leverage information to inform capacity development, we reviewed relevant program management documentation and tools that TSA uses to track and analyze assessment results. Specifically, we reviewed the 2016 Foreign Airport Assessment Program SOP and program management tools TSA uses to track and manage the status of foreign airport security deficiencies. We also reviewed our prior work concerning how risk-informed and priority-driven decisions can help inform agency decision makers in allocating finite resources to the areas of greatest need.
Information from our interviews with government officials and members of the aviation industry provides insight into their perspectives on TSA's foreign airport assessment and air carrier inspection programs. However, this information cannot be generalized beyond those with whom we spoke because we did not use statistical sampling techniques in selecting individuals to interview.
The performance audit upon which this report is based was conducted from August 2016 to October 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We subsequently worked with TSA from September 2017 to December 2017 to prepare this nonsensitive version of the original sensitive report for public release. This public version was also prepared in accordance with these standards.
Appendix II: Comments from the Department of Homeland Security
Appendix III: GAO Contacts and Staff Acknowledgments
GAO Contacts
Staff Acknowledgments
In addition to the contacts above, Jason Bair and Chris Ferencik (Assistant Directors); Anthony C. Fernandez (Analyst-in-Charge); Bryan Bourgault; Elizabeth Dretsch; Jesse Elrod; Eric Hauswirth; Christopher Lee; Tom Lombardi; Amanda Miller; and Adam Vogt made key contributions to this report. | Why GAO Did This Study
Approximately 300 foreign airports offer last point of departure flights to the United States. TSA is the federal agency with primary responsibility for securing the nation's civil aviation system and assesses foreign airports and inspects air carriers to ensure they have in place effective security measures. While TSA is authorized under U.S. law to conduct foreign airport assessments, it does not have authority to impose or otherwise enforce security requirements at foreign airports. TSA is authorized to impose and enforce requirements on air carriers. The Aviation Security Act of 2016 includes a provision for GAO to review TSA's effort to enhance security at foreign airports.
This report addresses (1) steps TSA has taken to enhance foreign airport assessments and air carrier inspections since 2011, (2) the results of TSA's foreign airport assessments and air carrier inspections, and (3) steps TSA takes to address any deficiencies identified during foreign airport assessments and air carrier inspections. GAO reviewed TSA program data, interviewed TSA officials, and conducted site visits to TSA field locations that manage assessments and inspections.
What GAO Found
The Transportation Security Administration (TSA) has taken steps to enhance its foreign airport assessments and air carrier inspections since 2011, including aligning resources based on risk, resolving airport access issues, making evaluations more comprehensive, and creating operational efficiencies. For example, TSA has implemented targeted foreign airport assessments in locations where risk is high and developed the Global Risk Analysis and Decision Support System to strengthen data analysis. In addition, TSA has increased the number of joint airport assessments with the European Commission. Specifically, TSA officials GAO met with indicated that TSA's strong relationship with the European Commission has afforded the agency excellent access to foreign airports in Europe and a better understanding of vulnerabilities at these locations, which has resulted in more comprehensive assessments.
In its analysis of TSA foreign airport assessment results, GAO found that during fiscal years 2012 through 2016 there was considerable regional variation among last point of departure airports in the level of compliance with select International Civil Aviation Organization security standards and recommended practices. TSA attributed this regional variation to lack of airport resources or technical knowledge, among other factors. TSA officials also stated that while these challenges are not easy to overcome, agency efforts, such as training host country staff, can help foreign airports reduce their vulnerability scores over time. GAO's analysis of TSA's foreign airport assessment data confirmed that point by demonstrating that most foreign airports categorized with poor vulnerability ratings in fiscal year 2012 improved their vulnerability score in at least one follow-up assessment during fiscal years 2012 through 2016.
Meanwhile, U.S. and foreign-flagged air carriers providing last point of departure service to the United States from foreign airports complied with all TSA security requirements in most inspections, and TSA was able to resolve the majority of security deficiencies it identified with on-the-spot counseling. In some cases, TSA inspectors submitted violations for investigation because the violations were considered serious enough to potentially warrant an enforcement action.
TSA addresses identified deficiencies at foreign airports through capacity development, such as training and on-the-spot counseling. However, GAO found that TSA's database for tracking the resolution status of security deficiencies did not have comprehensive data on security deficiencies' root causes and corrective actions. In addition, the database lacked adequate categorization mechanisms. For example, while it captures three broad categories of root causes (e.g., lack of knowledge), it does not capture subcategories (e.g., supervision) that would better explain the root causes of security deficiencies. Fully collecting these data and improving the specificity of categorization would help TSA strengthen analysis and decision making. For example, TSA would be better positioned to determine the extent to which airports that received particular types of capacity development assistance were able to close security vulnerabilities. This is a public version of a sensitive report issued in October 2017. Information that TSA deemed to be sensitive is omitted from this report.
What GAO Recommends
To help strengthen TSA's analysis and decision making, GAO recommends that TSA fully capture and more specifically categorize data on the root causes of, and corrective actions for, the security deficiencies that it identifies. TSA concurred with the recommendations.
Background
The defense lab enterprise consists of 63 labs, warfare centers, and engineering centers across the Departments of the Army, Navy, and Air Force, as shown in Figure 1 below. About 50,000 federally employed scientists and engineers work at these defense labs to support warfighter needs and develop transformative capabilities. Defense labs are managed and operated within the military service chain of command.
Defense Lab Funding Models
DOD budgets for technology and product development activities under its research, development, test, and evaluation budget, which DOD groups into seven budget activity categories for its annual budget estimates. Air Force and Army labs rely on appropriated funding provided from the service—often referred to as mission funding—or from customers (or some combination thereof). Customers, such as program offices, provide funding to defense labs for technology development activities and related research. The Air Force and Army funding structure is in contrast to Navy research and development activities, which operate under the Navy Working Capital Fund—a revolving fund that finances Department of the Navy activities on a reimbursable basis. Under this funding model, the Navy employs a Capital Investment Program to obtain capital assets, including minor military construction projects for labs. The program provides the framework for planning, coordinating, and controlling Navy working capital funds and expenditures to obtain capital assets. Figure 2 illustrates the varying funding models used by the military service labs.
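To make the contrast between these funding models concrete, the following minimal Python sketch models a revolving working capital fund. The class, method names, and dollar figures are illustrative assumptions, not Navy accounting mechanics.

```python
# Minimal sketch contrasting the funding models described above: mission-funded
# labs draw directly on appropriations, while a working capital fund fronts the
# cost of customer work and is reimbursed on delivery. Figures are hypothetical.

class WorkingCapitalFund:
    def __init__(self, balance):
        self.balance = balance  # revolving balance, not an annual appropriation

    def perform_work(self, cost):
        self.balance -= cost    # the fund finances customer work up front...

    def reimburse(self, amount):
        self.balance += amount  # ...and customer payments replenish it

fund = WorkingCapitalFund(balance=50e6)  # hypothetical starting balance
fund.perform_work(2e6)                   # lab executes a customer project
fund.reimburse(2e6)                      # customer reimburses the fund
print(fund.balance == 50e6)              # True: the balance revolves for reuse
```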
Other DOD-Sponsored Science and Technology Entities
In addition to its labs, DOD sponsors other entities to provide for its technology development needs. Specifically, these include:
FFRDCs are operated by universities, other not-for-profit or nonprofit organizations, or private firms under long-term contracts and provide special research and development services that generally cannot be readily satisfied by government personnel or private contractors. For example, the Massachusetts Institute of Technology Lincoln Laboratory develops key radar and electronic warfare technologies for integrated air and missile defense systems. In addition, the Software Engineering Institute operated by Carnegie Mellon University provides cybersecurity solutions for defense entities. While DOD sponsors 10 FFRDCs in total, it designates 3 FFRDCs as research and development labs, which maintain long-term competencies in key technology areas. In addition to these, DOD sponsors 2 systems engineering and integration FFRDCs and 5 studies and analysis FFRDCs.
UARCs provide specialized research and development services similar to FFRDCs and also operate under long-term contracts.
However, unlike FFRDCs, DOD requires that UARCs be affiliated with a university. Generally, UARCs may not compete against industry in response to a competitive Request for Proposals for development or production that involves engineering expertise. DOD currently sponsors 13 UARCs.
Key Offices Responsible for Oversight of Defense Labs
Key DOD offices provide oversight to the defense labs:
The Under Secretary of Defense for Research and Engineering (USD(R&E))—the principal advisor to the Secretary of Defense for research, engineering, and technology development activities and programs—serves as DOD’s chief technology officer. The powers and duties of this office include establishing policies and providing oversight for DOD’s research, engineering, and technology development activities.
The Defense Laboratories Office—within the Office of the USD(R&E)—supports DOD’s research and engineering mission by helping to ensure comprehensive department-level insight into the activities and capabilities of the defense labs. This office carries out a range of core functions related to the defense labs, including analysis of capabilities, alignment of activities, and advocacy.
Defense Lab Authorities
Congress has granted authorities that address hiring, infrastructure, and technology transition challenges to defense labs since 1995. These authorities provide defense lab directors with certain flexibilities within the established legal framework to manage their operations. While Congress has provided a number of authorities, in this report we focus on four authorities that our prior work on best practices in science and technology management and expedited lab hiring has shown are, or have the potential to be, the most crucial for supporting innovation within DOD labs.
Laboratory Initiated Research Authority. This authority provides lab directors with the means to fund some of the research projects that the lab will pursue. Section 219 of the Duncan Hunter National Defense Authorization Act for Fiscal Year 2009, as implemented, provides lab directors with a means to fund projects they consider to be a priority in four allowable categories: (1) basic and applied research, (2) technology transition, (3) workforce development, and (4) revitalization, recapitalization, or repair or minor construction of lab infrastructure. These projects include those not specifically tied to defined requirements, outside of the normal 2-year budget planning process. The authority directs the Secretary of Defense to establish mechanisms under which lab directors may use an amount of funds equal to not less than 2 percent and not more than 4 percent of all funds available to the defense lab for projects under the four allowable categories. Further, lab directors are permitted to obtain additional funding by charging customers a fixed percentage fee that may not exceed 4 percent of costs. The sketch below illustrates this arithmetic.
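This is a minimal Python sketch of the funding bands just described; the function name and dollar figures are hypothetical assumptions, not drawn from any lab's budget.

```python
# Sketch of the Section 219 funding bands described above. The 2-4 percent
# set-aside and 4 percent maximum customer fee are from the authority as
# described; the lab's dollar figures are hypothetical.

def section_219_funding(available_funds, set_aside_rate, customer_costs, fee_rate):
    """Return annual lab-initiated research funding under the stated bands."""
    if not 0.02 <= set_aside_rate <= 0.04:
        raise ValueError("set-aside must be between 2 and 4 percent")
    if not 0.0 <= fee_rate <= 0.04:
        raise ValueError("customer fee may not exceed 4 percent of costs")
    return available_funds * set_aside_rate + customer_costs * fee_rate

# Hypothetical lab: $500M in available funds and $200M in customer-funded work.
low = section_219_funding(500e6, 0.02, 200e6, 0.0)    # minimum set-aside, no fee
high = section_219_funding(500e6, 0.04, 200e6, 0.04)  # statutory maximums
print(f"${low / 1e6:.0f}M to ${high / 1e6:.0f}M per year")  # $10M to $28M per year
```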
Direct Hire Authorities. These authorities provide lab directors with a streamlined and accelerated hiring process. Congress has enacted four types of direct hire authorities since 2008, which help labs compete with private industry and academia for high-quality scientific, engineering, and technical talent. Specific types of direct hire authorities include hiring: (1) candidates with advanced degrees; (2) candidates with bachelor's degrees; (3) veterans; and (4) students currently enrolled in graduate or undergraduate science, technology, engineering, and mathematics (STEM) programs.
Laboratory Enhancement Pilot Program. This authority provides methods for effective lab management operations. Section 233 of the National Defense Authorization Act for Fiscal Year 2017 established a pilot program for lab directors to propose alternative and innovative methods that might lead to more effectively managing labs, and authorized lab directors to waive any regulation, restriction, requirement, guidance, policy, procedure, or departmental instruction that would affect implementation of these methods, unless such implementation would be prohibited by a provision of an existing statute or common law.
Micro-purchase Authority. This authority facilitates the purchasing process for labs. The FAR states a preference for government agencies to purchase and pay for micro-purchases of supplies or services using the government-wide commercial purchase card up to and at the micro-purchase threshold, but micro-purchases may be conducted using any of the simplified acquisition methods. This facilitates the ability of lab officials to quickly and easily acquire needed items for their activities and reduce the administrative costs associated with such small purchases. While the FAR micro-purchase threshold was generally $3,500 during our review, Congress increased it to $10,000 for activities of the science and technology reinvention labs in Section 217 of the National Defense Authorization Act for Fiscal Year 2017.
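A minimal sketch of how the raised threshold changes which purchases qualify for the simplified, purchase-card process appears below; the thresholds are those cited above, while the item and its price are hypothetical.

```python
# Sketch of how the raised micro-purchase threshold changes which purchases
# qualify for simplified, purchase-card procedures. Thresholds are those cited
# above; the item and its price are hypothetical.

GENERAL_FAR_THRESHOLD = 3_500  # threshold generally in effect during the review
STRL_THRESHOLD = 10_000        # science and technology reinvention lab threshold

def is_micro_purchase(price, threshold):
    """A purchase at or under the threshold can use the simplified process."""
    return price <= threshold

specialized_microscope = 7_800  # hypothetical price
print(is_micro_purchase(specialized_microscope, GENERAL_FAR_THRESHOLD))  # False
print(is_micro_purchase(specialized_microscope, STRL_THRESHOLD))         # True
```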
Major Federal Research Agency Investments
As we found in June 2018, the federal government spends approximately $137 billion annually government-wide on research and development (R&D) to help further agencies’ missions, including at federal labs. From fiscal years 2015 to 2017, DOD, Energy, and NASA represented three of the top four federal agencies with the highest annual federal R&D spending, accounting for about 66 percent of total federal R&D spending on average, as shown in Figure 3.
While the labs primarily support the agencies that directly fund them, DOD, Energy, and NASA research entities also collaborate extensively to support activities of shared interest. For example, DOD and NASA research centers have collaborated to develop hypersonic vehicle capabilities. Further, Energy’s national labs help provide critical national security capabilities for DOD and support NASA’s deep space mission radioisotope requirements. In 2017, Energy reported performing about $2.6 billion of work per year from fiscal years 2011 through 2015 for other federal agencies and other customers, including DOD.
Defense Labs Have Used Selected Authorities, but Their Use Has Been Offset by Other Military Service Policies and Interests
Most defense labs have used the selected authorities since 2008, but their use has sometimes been limited by DOD legal and policy restrictions and stakeholder concerns, according to lab directors. For example:
Use of the laboratory initiated research authority was limited by DOD’s military construction funding and financial management policies.
Use of the direct hire authority was limited, in part, by personnel-related delays, security clearance challenges, and military hiring restrictions.
Use of the laboratory enhancement pilot program was limited by stakeholder uncertainty about how to use this authority effectively.
Use of the increased micro-purchase authority was limited by stakeholder concerns about the authority’s potential effect on small businesses.
Most Defense Labs Have Used the Laboratory Initiated Research Authority, but Less than the Maximum Allowed
We found that most defense labs have used the laboratory initiated research authority. Twenty-three of 31 respondents to our survey—about 74 percent—reported obligating funds under this authority. However, we found that most labs are not using the full 4 percent of all funds available to each lab, or charging customers the full fixed percentage fee of 4 percent of costs, as allowed by law. Specifically, we found that, as of September 2018:
Navy labs reported charging customers a percentage fee of about 2 percent of costs as of fiscal year 2018. Prior to this, Navy labs only charged a 1 percent fixed fee on these costs. Because Navy labs are working capital funded organizations, they can use payments from customers for goods delivered or services performed.
Army labs reported using between 2 and 3 percent of all funds available to the lab for projects under the four allowable categories and charging customers a fixed fee of between zero and 3 percent of costs to fund such activities.
Only the Air Force Research Laboratory reported using the full 4 percent of all funds available to the lab. According to agency officials, the lab is using 3 percent of all funds available to the lab and is allowing individual technology directorates the option to use the additional 1 percent of funds available. In fiscal year 2018, three of the lab’s nine technology directorates chose to use this additional 1 percent. However, the lab has not charged customers a fixed percentage fee on their costs at all.
As Figure 4 shows, in fiscal year 2017, the aggregate fixed percentage fee charged by labs in each of the military departments totaled under the full 4 percent allowed by law for each funding source. Decisions to charge lower percentages are decisions to forego additional potential funding, although agencies have various reasons for doing so, as we discuss later.
In total, DOD reported that this authority provided almost $300 million to labs in fiscal year 2017 and funded more than 1,750 projects across the four allowable categories, as Figure 5 illustrates.
We previously found, in June 2017, that the laboratory initiated research authority provides defense lab directors with limited flexibility to initiate science and technology projects. These projects include those that are not road-mapped or tied to defined requirements outside of the normal 2-year budget planning process, and are focused on both near- and long-term needs.
For this review, defense lab officials we interviewed stated that the laboratory initiated research authority enables their scientists and researchers to pursue projects not necessarily tied to requirements and provides necessary funds for workforce development and lab infrastructure projects. Further, as shown in Figure 6, lab directors we surveyed generally view the authority as both fostering innovation and increasing efficiency across the four allowable categories on which funds can be used.
In accordance with one of the statutory purposes for the use of the funds, lab directors have developed new, innovative technologies using this authority. For example, DOD reported the following: In fiscal year 2017, the Naval Surface Warfare Center, Crane Division, developed and fielded a solution to an urgent requirement for defeating small unmanned aerial vehicles that attack Navy assets or surveil naval activities. The center delivered this technology to the warfighter in May 2017, just 7 weeks after the Navy submitted the requirement.
The Army Research Laboratory used the authority to fund a project that eventually developed a material that could increase the speed and lower the power needs of future generations of computer chips, thereby supporting Army networks.
The Navy invested more than $700,000 in laboratory initiated research authority funds to commission a Ballast Water Research Lab at Naval Surface Warfare Center, Carderock Division. Through the use of this new facility, engineers will be able to study ways to treat ballast water to prevent the introduction of non-native aquatic species into new environments, which can be disastrous for the marine life already inhabiting those environments, and to ensure that the Navy is able to meet various port regulations around the world for its ships.
The Air Force Research Laboratory invested funds in fiscal year 2017 to renovate an existing facility to provide high performance computing capability to aid the rapid development of “game-changing” technologies and weapon systems.
Officials at the Army’s Space and Missile Defense Command Technical Center noted they used the laboratory initiated research authority for the first time in fiscal year 2018 because the current executive director, who assumed the position in 2017, prioritized implementing this authority. Most of the Center’s planned investments are focused on workforce development and laboratory infrastructure projects; officials cited a high energy laser technology lab as one of the projects being supported by the revitalization, recapitalization, or minor military construction portion of this authority.
Although the majority of defense labs reported using the laboratory initiated research authority, interviews we conducted throughout our review, along with other DOD reports, identified certain obstacles that have, at times, impeded wider usage.
DOD-wide military construction funding restrictions. DOD restrictions limit the amount of laboratory initiated research authority funds that labs can spend on lab infrastructure. DOD’s limit is $6 million for the revitalization and recapitalization projects that can be funded under the laboratory initiated research authority. Lab officials stated that this amount is often insufficient to construct advanced lab facilities. Air Force Research Laboratory officials indicated that it is nearly impossible to construct lab facilities for less than $6 million. Officials at the Army’s Aviation and Missile Research, Development and Engineering Center echoed this sentiment and noted that they have primarily used funds to renovate existing buildings rather than fund new lab facility construction. In January 2017, the Defense Science Board identified lab infrastructure challenges, including that the average age of research and development facilities was nearly 50 years. Further, the Board reported that the labs are usually not successful in competing against broader service needs for military construction funds.
Air Force does not charge customers a fixed percentage fee of costs. The Air Force Research Laboratory reported that it is not charging customers the allowable fixed percentage fee of costs to fund science and technology activities because it does not have a mechanism in place to do so. Air Force Research Laboratory officials estimated the lab would collect approximately $3 million a year if the lab charged customer activities the maximum allowable fee (4 percent). Air Force financial management officials stated that the service’s accounting system does not currently have an automated capability to transfer the allowable percentage fee of costs to a central account at the Air Force Research Laboratory. This lack of capability, officials noted, creates a significant administrative burden for charging these fees. The officials stated that they have not yet estimated the cost to add an automated capability.
Although it is possible for the Air Force Research Laboratory to charge customer work orders manually—outside of the Air Force’s accounting system—officials with the Office of the Assistant Secretary of the Air Force for Financial Management and Comptroller perceive that the resources (time and people) required to manage such a process would be cost prohibitive. However, according to these officials, the Air Force has not assessed the costs required to improve the accounting system to do so, nor has it identified the potential benefits any improvements would provide. Federal internal control standards state that changes in condition affecting an entity and its environment often require changes to the entity’s internal control system, as existing controls may not be effective for meeting objectives (or addressing risks) under changed conditions. Further, these standards state that any internal control deficiencies require further evaluation and remediation by management. By not assessing the potential costs and benefits related to the options for collecting these allowable fees, the Air Force could be missing out on a potential source of funding to support its needs.
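The following Python sketch illustrates the kind of cost-benefit comparison described above, which the Air Force has not yet performed. Only the $3 million annual fee estimate comes from the officials cited; the upgrade cost, processing costs, and planning horizon are hypothetical placeholders.

```python
# Sketch of the cost-benefit comparison described above, which the Air Force
# has not yet performed. Only the $3M annual fee estimate comes from officials;
# the upgrade cost, processing costs, and horizon are hypothetical placeholders.

ANNUAL_FEE_REVENUE = 3_000_000  # officials' estimate at the maximum 4 percent fee

def net_value(years, annual_revenue, annual_cost, one_time_cost):
    """Net fee collections over a planning horizon, less processing costs."""
    return years * (annual_revenue - annual_cost) - one_time_cost

horizon = 10  # hypothetical planning horizon, in years
automated = net_value(horizon, ANNUAL_FEE_REVENUE, 50_000, 1_500_000)  # system upgrade
manual = net_value(horizon, ANNUAL_FEE_REVENUE, 400_000, 0)            # manual billing
status_quo = 0.0                                                       # no fees collected
print(f"automated ${automated / 1e6:.1f}M, manual ${manual / 1e6:.1f}M, "
      f"status quo ${status_quo / 1e6:.1f}M over {horizon} years")
```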
DOD lacks clear guidance on how the Navy should use the laboratory initiated research authority for some infrastructure investments within the Capital Investment Program. In our review of DOD documentation, we found that, among the military departments, Navy labs funded recapitalization and revitalization projects using the laboratory initiated research authority the least. As recently as early 2017, a DOD-commissioned study found that defense labs face substantial infrastructure deficiencies that DOD has not yet identified funding to address. In fiscal year 2017, Navy labs invested $7.3 million in lab recapitalization projects, compared to $32.9 million and $53.7 million at the Air Force and Army, respectively. Navy lab officials told us that their ability to fund lab recapitalization and revitalization projects using funds available under the laboratory initiated research authority is limited because they have not been provided with clear guidance as to whether and how to use the laboratory initiated research authority within the Capital Investment Program of the Navy Working Capital Fund.
Some Navy lab officials stated that they have found ways to use the laboratory initiated research authority for certain infrastructure investments. These officials stated that they used the authority outside of the Capital Investment Program of the Navy Working Capital Fund, for instance, for projects below applicable thresholds, because using the authority within the Program creates a bureaucratic and financial burden for them. For example, officials at two separate warfare centers—Naval Surface Warfare Center, Crane Division, and the Naval Air Warfare Center, Aircraft Division—noted that they did not expend funds in either fiscal year 2016 or fiscal year 2017 for recapitalization and revitalization projects. Both cited the Capital Investment Program as a significant barrier to their desired use of the laboratory initiated research authority.
Officials from the Office of Budget, within the Office of the Assistant Secretary of the Navy for Financial Management and Comptroller, agreed that, to date, clarifying guidance on the use of the laboratory initiated research authority within the Capital Investment Program has not been issued, effectively limiting the extent to which the labs can use it for infrastructure needs. According to these officials, the Office of the Secretary of Defense (OSD) Comptroller—in coordination with the Office of Financial Policy and Systems within the Office of the Assistant Secretary of the Navy for Financial Management and Comptroller—is responsible for developing the clarifying guidance their office has sought. This persistent lack of guidance on whether or how Navy labs should use the laboratory initiated research authority within the context of the Capital Investment Program presents an opportunity cost. Namely, the Navy's labs have missed, and continue to miss, opportunities to invest in needed improvements to their aging lab infrastructure.
The Army requires its laboratories to apply similar percentages to what it refers to as "Army direct appropriations" and "customer funds." The Army requires that the percentage fee applied to direct appropriations not vary from the percentage fee applied to customer funds by more than 1 percent. The Army implemented this policy to maximize the laboratory initiated research authority's effect on its 17 laboratories. However, the Office of the USD(R&E) reported in March 2018 that the policy was having a significant limiting effect on the breadth and scope of activities executed under this authority. Similarly, we found that the policy may, in practice, create a disincentive for Army lab directors to use the authority. In their responses to our survey, Army lab directors, representing key capability areas, acknowledged their concern that the percentage fee they assessed on customer funds could affect their ability to increase or maintain their customer bases. Further, some Army lab directors reported assessing a lower percentage fee on customer funds than allowed, which could help retain customers that might otherwise be driven away by higher fees. As a result, these labs generally are setting a lower percentage fee on their directly appropriated funds, thereby lowering the overall laboratory initiated research funding available to them. Nonetheless, the Army has not assessed its policy to determine whether changes are needed to eliminate these disincentives. Continuing to operate without such an assessment could result in Army labs using the laboratory initiated research authority to fund fewer self-initiated projects—with the downstream effect that fewer new technologies are available for warfighters.
The Navy applies a consistent fixed percentage fee of costs across its labs. Within the Navy, senior leadership has set the fixed percentage fee of costs the labs charge on customer funds at 2 percent. A senior Navy science and technology official stated that Navy leadership set a uniform fixed percentage fee to charge to customer activities across the Navy lab enterprise, in part, to ensure the labs were not inadvertently competing against one another for customer funds. For example, without a uniform rate, a Navy warfare center could offer a lower fee to entice a customer to use it rather than another center. The use of a fixed percentage fee facilitates program offices selecting warfare centers on the basis of best available match in capabilities. On the other hand, the Navy's fixed 2 percent fee of costs does limit—by half, compared to the maximum 4 percent allowable—the amount of fees that Navy labs can collect. Consequently, several Navy lab directors told us that they would like the ability to increase the fixed percentage fee of costs above the Navy's 2 percent to provide their labs with additional resources they said they need for innovation-related investments.
DOD Labs Have Used Direct Hire Authorities to Hire Qualified Candidates for Key Scientific Positions but Experienced Delays
Among the lab directors that responded to our survey, 30 of 31 replied that their lab had used, since fiscal year 2014, at least one of the four types of direct hire authorities previously discussed. Officials view direct hire authority as allowing the labs to compete with private industry for qualified applicants. Lab directors reported they generally believe that each type of direct hire authority is extremely or very useful for fostering innovation and increasing efficiency, as shown in Figure 7.
Selected Officials’ Testimony on the Value of Direct Hire Authority: The U.S. Army Engineer Research and Development Center “was able to meet this important goal [of annually hiring more than 160 new researchers] in large part because of its direct hiring authorities, which save time, effort, and costs, and allow the organization to more effectively hire the best and brightest minds available.” – Dr. Jeffrey P. Holland, Past Director, U.S. Army Engineer Research and Development Center, in testimony before the Senate Committee on Armed Services (Emerging Threats and Capabilities Subcommittee), May 3, 2017. “The Air Force’s ability to recruit, retain, and develop the STEM workforce is vital toward building the future Air Force; Congress has been greatly supportive of these efforts…the addition of direct hire for candidates has been extremely useful in hiring qualified scientists and engineers in less than half the time of traditional hiring methods.” – Jeffrey Stanley, Air Force Deputy Assistant Secretary— Science, Technology and Engineering in testimony before the House Committee on Armed Services (Emerging Threats and Capabilities Subcommittee), March 14, 2018.
Although participation in the laboratory enhancement pilot program is open to the DOD labs—and 19 of the 31 lab directors (61 percent) who responded to our survey reported they were participating—to date, only the Navy has formally established a pilot program for its labs. The Army and Air Force have not yet used this relatively new authority. A senior Navy science and technology official told us the Navy took important steps to facilitate the implementation of that service's pilot program. According to the Navy official:
The Office of the Deputy Assistant Secretary of the Navy for Research, Development, Test and Evaluation led the effort across the Navy labs, compiling—from each lab’s submission—a single list of proposals to forward to Navy leadership that would apply to all participating Navy labs.
The Navy pursued a three-phased approach with its pilot program, with Phase 1 primarily focused on contracting and acquisition policy-related matters. Senior Navy research and development officials perceived these matters as the easiest for obtaining buy-in from Navy policy officials and attorneys, as well as Navy leadership. Phase 2 will include proposals related to information technology systems for research and development networks, while Phase 3 will most likely address personnel issues.
Navy research and development officials deferred proposals—including information technology network enhancements—that might require extensive discussions with policy officials and attorneys across the Navy. These proposals were pushed back to allow time for those stakeholders to see how the pilot program was being implemented and executed by the labs.
None of the Army and Air Force labs has yet established a laboratory enhancement pilot program. Consistent with Army policy, the Medical Research and Materiel Command and the Space and Missile Defense Command Technical Center submitted proposals; however, they have yet to establish a pilot program. The Army's Research, Development and Engineering Command, with input from its subordinate labs and engineering centers, developed a list of lab enhancement proposals but, as of September 2018, had yet to formally submit these final proposals to Army leadership for approval. These include initiatives in business operations, contracting, finance, information technology, and personnel management. A senior Army science and technology official acknowledged that organizations across the military department have concerns about providing the labs with too much autonomy to use this new authority.
Air Force Research Laboratory officials said they previously submitted a list of approximately 30 proposals to the Defense Laboratories Office in September 2017, but ultimately pulled back those requests because of stakeholder concerns within the Air Force. Specifically, officials with the Office of the Deputy Assistant Secretary of the Air Force for Science, Technology, and Engineering stated that the Air Force Materiel Command, to which the lab is a subordinate organization, had not seen the proposals before they were submitted. In addition, these officials identified concerns about how various stakeholders throughout the Air Force—such as those from financial management and personnel—would react to these proposals, which could potentially sidestep those stakeholders' oversight of related lab activities. A senior Air Force Research Laboratory official stated that the lab re-submitted its proposals to the Air Force Materiel Command and that Air Force leadership was still reviewing them at the time of this report.
Defense Labs Have Widely Implemented Micro-Purchase Authority
Twenty-six of 31 lab directors—84 percent—reported having used the $10,000 micro-purchase threshold authority granted by Congress in 2016. However, we found that contracting and small business management officials' concerns with this authority have created implementation challenges at some defense labs. For instance, a senior Navy official indicated that multiple stakeholders from across the Navy—including its Office of Small Business Programs—raised concerns about the authority's potential impact on small businesses, as micro-purchasing allows defense labs to bypass small business set-asides. Several labs reported similar stakeholder concerns that prevented implementation of the micro-purchase threshold increase.
At the same time, however, lab officials we interviewed expressed the view that the increased threshold will be beneficial, consistent with their opinions about the laboratory enhancement pilot program. For example, officials at the Naval Research Laboratory stated that increasing the threshold to $10,000 allows their scientists and engineers to directly purchase necessary equipment and materials through simplified procedures. They identified examples of projects that had been delayed by as much as several months when scientists and engineers had to use other than simplified acquisition procedures to purchase a relatively inexpensive piece of equipment, such as a specialized microscope, because its cost exceeded the previous threshold of $3,500.
Similarly, the Army’s Armament Research, Development and Engineering Center reported that the micro-purchase threshold increase enables the lab to use simplified acquisition procedures for more items. As a result, they noted that the new authority increases efficiency by reducing contracting time and cost for those additional items. The Navy’s Space and Naval Warfare Systems Center Atlantic similarly reported that requirements, which were previously procured using other than simplified acquisition procedures, took up to 60 to 90 days to procure, while it took as little as 3 to 4 days under this new authority, which enabled its scientists and engineers to purchase materials needed for critical, time sensitive projects. However, lab officials acknowledged that the $10,000 micro-purchase threshold authority—like the laboratory enhancement pilot program—is too new to fully understand how it will increase efficiency and foster innovation over the long term.
DOD Gains Scientific Expertise from Research Centers Governed through Noncompetitive Contracts
DOD sponsors several research centers, which are governed through noncompetitive agreements, including contracts. These centers provide the department with access to scientific experts employed by universities and other non-profit organizations. Scientists employed by these research centers, which are external to DOD—specifically, three lab FFRDCs and 13 UARCs—execute DOD-funded science and technology development projects in emerging technical areas. DOD staff oversee these centers using routine oversight of funded research tasks and comprehensive reviews, which help DOD determine whether the centers' funding should continue. DOD and research center officials told us that their ability to authorize work at the FFRDCs that DOD sponsors is limited by legislative restrictions on the staffing levels at these centers, as well as by infrastructure modernization challenges they face.
External DOD Research Centers Are Funded by the Government and Established under Noncompetitive Procedures
DOD sponsors three research and development FFRDC labs that were established under noncompetitive procedures. Two of the three lab FFRDCs are operated by universities and one is operated by a nonprofit company. DOD also has contracts with 13 UARCs that fulfill a similar scientific role as the lab FFRDCs, while also differing from them in other respects. These differences are described in more detail in table 1.
DOD’s contractor-operated research centers received about $1.3 billion annually in DOD funding in fiscal year 2016 and fiscal year 2017, according to DOD data. The two largest research and development FFRDCs, the Lincoln Laboratory and the Software Engineering Institute, received about 67 percent of total research center funding from DOD in 2017. UARCs received an average of $27 million in DOD funding, which was a 15 percent decrease from 2016. Research centers may also receive work and funding from other federal departments and private companies after obtaining DOD sponsor approval. Appendix II provides an overview of DOD FFRDC and UARC funding in fiscal years 2016 and 2017.
DOD Sponsorship and Contract Awards: We reported in 2014 that FFRDCs in the federal government are defined through the sponsoring agreement between the agency and the contractor retained to operate the FFRDC. A written agreement of sponsorship between the government and the FFRDC must be prepared when the FFRDC is established, which may be included in a contract between the government and the FFRDC, or in another legal instrument under which an FFRDC accomplishes effort, or it may be in a separate written agreement. Historically, DOD sponsors retain contractors for many years or decades as FFRDC operators. We found that research centers undertake DOD-sponsored projects and, in some limited instances, scientific projects initiated by centers that are overseen by DOD staff. Individual sponsors enter into noncompetitive contracts with FFRDCs and UARCs. DOD uses noncompetitive contracts to establish or maintain an essential engineering, research, or development capability to be provided by an educational or other nonprofit institution or a federally funded research and development center.
Scientific Project Funding: We found that project sponsors provide funding to existing contracts. For example, the government issues orders for requirements under Lincoln Laboratory's indefinite delivery indefinite quantity base contract as funding sponsors approve new projects. Individual project sponsors, along with the primary sponsor, oversee how project funds are spent by the centers. Project sponsors decide whether they will continue to work with these entities based on perceived performance success. This effectively provides an incentive for FFRDCs and UARCs to perform successfully. This work and review cycle is described in Figure 9 below.
FFRDCs and UARCs also partner with DOD government-operated labs to plan and execute technology development projects. For example, according to Navy officials, Naval Surface Warfare Center, Carderock Division collaborated with Navy-sponsored UARCs, such as Penn State’s Applied Research Laboratory, to help develop Navy submarine propeller and propulsion designs.
Self-initiated Projects: Research center officials said that DOD provides some research centers with limited funds to self-initiate innovative projects. This funding helps the centers ensure that development projects are not limited to just satisfying near-term DOD requirements. Instead, future generations of DOD technologies can be funded. For example, officials at Johns Hopkins University Applied Physics Laboratory proactively conducted work on advanced naval defense technologies in response to similar technology development in adversary countries. Although Navy sponsors did not fund this initial work, they subsequently provided funding in this area after Hopkins’ research identified a risk reduction strategy for the Navy, according to the Johns Hopkins officials. This allowed the UARC to move relatively quickly on a new science and technology project idea.
Research Centers Provide DOD with Access to Scientific Expertise
DOD uses 13 UARCs and three lab FFRDCs to obtain direct access to scientific expertise in emerging technical areas, supplementing research conducted at DOD’s government-owned and operated labs. These research centers provide DOD with additional scientific capabilities and the ability to expand quickly into new technical fields.
Hiring Scientific Personnel: Although FFRDCs are largely federally funded, they are generally operated, managed, and administered by either a university or consortium of universities, other not-for-profit or nonprofit organization, or an industrial firm, as an autonomous organization or as an identifiable separate operating unit of a parent organization. The contractor operating the FFRDC exercises primary control over its FFRDC's business concerns, such as personnel policies and compensation. DOD-funded research centers have flexibility in hiring scientists that leverage a parent institution's expertise in emerging scientific fields. For example, leadership officials at the Army Institute for Soldier Nanotechnologies UARC at the Massachusetts Institute of Technology (MIT) and the Software Engineering Institute FFRDC at Carnegie Mellon University noted that projects they have conducted for DOD have benefitted from university experts in fields such as dark matter physics and artificial intelligence.
Personnel Compensation: Research center officials we spoke with noted that their workforce policies permit them to flexibly hire, fire, and compensate staff as needed. Although employee salaries are established separately from the government schedule, they are approved by the government. Further, officials noted that university centers typically offer salaries in line with the labor market, but do not attempt to compete on a salary basis with relatively high, unaffordable private sector company salaries. Instead, they compete on the basis of other factors, such as offering scientists the opportunity to work for a prestigious university conducting science and technology research.
Research Center Infrastructure: As with personnel matters, research centers have discretion to manage infrastructure in accordance with the policies and procedures of their parent institutions. While one center, Lincoln Laboratory, is located on government property, others primarily reside on property owned or leased by their parent institutions. According to agency officials, DOD contributes funding for the use and repair of these facilities through their contracts with research centers. Officials noted that Lincoln Laboratory uses military construction funding to pay for new buildings as it is located on government property.
Trusted Advisor Role: FFRDCs and UARCs function as trusted advisors for the government and operate in the public interest with objectivity and independence. FFRDCs are independent, private-sector organizational units required to be free from personal or organizational conflicts of interest, as they answer to the government customer. As a result, DOD's lab FFRDCs perform tasks that are closely associated with the performance of inherently governmental functions and have access to sensitive and proprietary data.
DOD-Sponsored Research Centers Are Limited in the Amount of Work They Can Perform for DOD, According to Research Center Officials
Research center officials noted challenges limiting their work providing scientific expertise to DOD. FFRDCs are also limited in executing infrastructure investments.
Limitation on Available Work Hours: DOD FFRDCs are limited by an annual ceiling set by Congress on the amount of staff years of technical effort (STE) that may be funded for defense FFRDCs. We previously found in October 2008 that these limits were imposed in response to concerns that DOD was inefficiently using its FFRDCs. We found that the STE workload limitation aimed to ensure that FFRDC work was appropriate and that limited resources were being used for DOD's highest priorities. As a result, Software Engineering Institute officials said they decline many DOD programs' requests for assistance due to the annual work hour limitation. Further, officials at the Office of the Secretary of Defense's Studies and FFRDC Management Office reported that this limit significantly constrains the use of DOD's FFRDCs and that DOD customer demand for their services is significantly greater than the annual STE limit. OSD officials indicated that FFRDC-related work must be deferred to later years when these limits are reached, since there are no other legally compliant alternatives capable of fulfilling these requirements.
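A minimal Python sketch of the deferral dynamic these officials described appears below: demand above the annual STE ceiling rolls into later years. The ceiling and demand figures are hypothetical.

```python
# Sketch of the deferral dynamic described above: customer demand above the
# annual staff-year (STE) ceiling cannot be worked off and rolls into later
# years. The ceiling and demand figures are hypothetical.

def plan_year(demand, ceiling, backlog):
    """Return (STEs worked this year, backlog deferred into the next year)."""
    total_requested = backlog + demand
    worked = min(total_requested, ceiling)
    return worked, total_requested - worked

backlog = 0
for year, demand in enumerate([120, 140, 135], start=2017):  # hypothetical STEs
    worked, backlog = plan_year(demand, ceiling=100, backlog=backlog)
    print(year, worked, backlog)
# 2017: 100 worked, 20 deferred; 2018: 100 worked, 60; 2019: 100 worked, 95
```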
Infrastructure: FFRDC officials we interviewed identified infrastructure challenges—including aging facilities and equipment—as hindering their research and development efforts. For example, many buildings at the MIT Lincoln Laboratory are over 60 years old; MIT considers over half of them to be in substandard condition. According to an MIT official, these facilities, located on government property, were not structurally designed for modern research and have relatively poor vibration isolation, resulting in inefficient workarounds or work that could not be performed. Officials from the Defense Laboratories Office noted that the MIT Lincoln Laboratory is unique among DOD's FFRDCs in that it is operated on government-owned property.
A 2013 study, conducted on behalf of the White House Office of Science and Technology Policy, found that lab infrastructure project funding proposals must compete with hospitals, barracks, runways, and roads and, therefore, tend to be lower on the priority list for military construction funding. A 2017 Defense Science Board report and DOD officials we spoke with indicated this continues to be true. While contract research centers have significant flexibility to execute infrastructure work, they are still affected by limited availability of military construction funding. Officials at another center noted that in some instances, DOD sponsors have been unable or slow to provide required secure facilities and equipment within needed time frames. Delays of this nature can affect the research centers’ ability to deliver the technologies or related services needed by DOD.
Energy and Space Research Centers Follow Different Governance Approaches, but Exhibit Similar Benefits and Challenges
The Department of Energy (Energy) primarily relies on contractor-operated FFRDCs to operate its labs, while the majority of NASA labs and centers are government-operated. Energy's national labs form the core of the agency's scientific work and mission. This is in contrast to DOD-funded labs, which constitute a relatively small aspect of DOD's overall mission. We have previously found that Energy's labs can use funding for minor infrastructure improvements. NASA centers can also approve and fund certain facility projects, in accordance with NASA policies, and they have encountered significant challenges with aging infrastructure. Also, in some cases, Energy and NASA research centers have significant challenges with hiring replacement staff and competing with private-sector employers for staff. Energy's labs can hire scientific personnel with the flexibility of private companies, while NASA centers were provided hiring flexibilities by Congress in 2004 to facilitate staff hiring.
While Energy and NASA’s research entities follow their specific governance models, there are broad characteristics common across these agencies as well as DOD. Table 2 illustrates that while research centers are largely government-owned, the government is not always the operator.
Department of Energy Primarily Uses Contractors to Operate National Labs and Manage Scientific Expertise
As we have reported, Department of Energy national labs are primarily operated by for-profit, non-profit, and university FFRDC contractors using management and operating contracts, which are competed on a limited basis. Energy's funding sponsors and headquarters officials are required by federal acquisition regulations to reevaluate FFRDC performance in increments not to exceed 5 years, which informs future decisions to renew the agreement. In 1990, we designated Energy's contract management—including both contract administration and project management—a high-risk area because of Energy's inadequate management and oversight of contractors, leaving the department vulnerable to fraud, waste, abuse, and mismanagement. In 2009, we subsequently narrowed the focus of Energy's high-risk designation to the National Nuclear Security Administration and Office of Environmental Management, which together oversee four national labs. Further, in our 2017 High Risk report, we found that these two agencies had made progress in addressing our contract management concerns, but we identified continued problems with whether the agencies have sufficient capacity to mitigate contract and project management risks. Also, we found that they had demonstrated little progress in addressing contract management challenges, particularly in the area of financial management.
Energy’s Lab Contractors Manage Nearly All the Agency’s Scientific Expertise
The Department of Energy uses performance-based management and operating contracts, which have been subject to limited competition, with universities, non-profit companies, and for-profit companies to operate the national labs on government-owned property. These contractor-operated FFRDCs provide the vast majority of Energy's science and technology capacity, rather than supplementing the work of government-operated labs, as DOD's FFRDCs do. Energy has depended on the expertise of private organizations to execute its science and technology work since the Manhattan Project produced the first atomic bomb during World War II.
The Spallation Neutron Source is an experimental research facility at Oak Ridge National Laboratory—a government-owned, contractor-operated laboratory. The Spallation Neutron Source includes the world's most powerful pulsed-neutron sources and provides information about the structure and properties of materials that cannot be obtained by other means. The Spallation Neutron Source is a user facility whereby researchers from universities, national laboratories, and industry submit proposals, which are peer-reviewed and must compete for time at the user facility.
The primary focus of each lab varies based on its expertise and facilities. Energy largely oversees its lab contractors through its headquarters program offices, which include the National Nuclear Security Administration (NNSA), the Office of Science, and the Office of Fossil Energy, as well as through co-located government field offices. Office of Science-sponsored labs primarily support scientific research for energy and physical sciences, while NNSA-sponsored labs primarily focus on nuclear weapons and related science and technologies. Energy also oversees its lab contractors' activities through on-site Energy oversight offices that work alongside lab management at each FFRDC. Some labs specialize in earlier-phase science, while other labs work on later-phase nuclear weapons technologies in addition to earlier-phase science. As Figure 10 shows, these labs are spread across the United States.
Energy has only one government-operated and government-owned lab, the National Energy Technology Laboratory. Key differences between Energy’s contractor-operated and government-operated governance models are described in table 3.
Energy’s FFRDCs use their own personnel systems, which Energy officials stated provide more flexibility for hiring and retaining qualified staff. Management within these FFRDCs can move staff in or out of scientific areas more quickly than government labs can, thereby providing greater agility to meet Energy’s needs in emerging science areas. For example, Energy’s lab oversight staff at Oak Ridge National Laboratory told us that use of lab contractors’ human resources management systems allows for workforce flexibilities to meet Energy’s needs. While these contractors have leeway in managing their human resources systems, Energy’s headquarters maintains oversight—through its contracting officers—over employee compensation.
FFRDC Contractors Manage Most Energy Labs’ Infrastructure
Energy’s FFRDC contractors manage and operate nearly all of the department’s government-owned national lab facilities—including day-to- day management of government-controlled facilities and real property. Lab operators used funding to complete minor construction projects, which cost $10 million or less. This funding comes from a percentage of science and technology projects’ funding, requires local Energy oversight office approval, and has streamlined project management requirements. In contrast, major infrastructure upgrades are funded through relatively long and complex line-item funding processes, and projects over $50 million are subject to more rigorous project management requirements.
Energy’s Lab Contractors Have Limited Discretion to Initiate Scientific Projects
Energy’s labs use a small portion of their funding to initiate discretionary projects for science and technologies that will benefit sponsors in the long-term by maintaining the scientific and technical vitality of the laboratories. To maintain and enhance lab expertise, the National Defense Authorization Act for Fiscal Year 1991 authorized Energy’s contractor-operated labs receiving funding for national security programs to use a percentage of lab funds to perform lab-directed R&D of a creative and innovative nature. The actual percentages allowed to be used for lab-directed R&D are subject to Energy’s approval.
Energy’s entities sponsor most national lab projects based on their needs and lab expertise. Typically, earlier foundational science projects are funded through a process whereby funding sponsors issue calls for proposals to Energy’s national labs. Interested scientific teams at labs provide proposals to conduct these projects for sponsor consideration. Sponsors then assess proposals for scientific merit and decide which teams receive funding to execute their projects. NNSA provides funding for later-phase nuclear technology development projects to its labs after agreement is made regarding objectives and deliverables for specific projects, according to Lawrence Livermore National Laboratory officials.
Energy’s Lab Officials Identified Challenges Despite Management Flexibilities
Despite their flexibilities with regard to hiring and infrastructure decisions compared to government-operated labs, Energy's lab leadership and government oversight officials noted challenges related to human resources and facilities, such as:
Sufficiently compensating staff located in high-cost-of-living areas. For example, the labor market of the San Francisco area, where several Department of Energy national labs are located, is highly competitive for employers. Commercial firms offer salaries and compensation that typically exceed those of government-funded, contractor-operated labs, although Energy's contractors have more pay flexibility than is allowed for Energy's government employees.
Obtaining government clearances in a timely manner. Energy’s NNSA oversight officials and lab management staff, in particular, cited this challenge, which they stated has led to a backlog of people needing clearances.
Government hiring freeze constraining overall hiring. Officials at Energy’s government-operated National Energy Technology Laboratory reported that as a result of a government hiring freeze, the lab has increasingly hired private contractor staff to the point that more than half of the total lab staff is now comprised of contractor employees.
Major infrastructure challenges at Energy labs. Energy reported in July 2018 that over half of all national lab buildings are in either substandard or inadequate condition. The Energy Inspector General also identified infrastructure modernization as one of Energy’s top management challenges. This finding followed a mandated commission’s report in 2015 that facilities and infrastructure across Energy’s national lab network were hampered by high levels of deferred maintenance and excess facilities.
NASA Research Centers Are Primarily Governed as Government-Operated Entities
The majority of NASA’s science and technology facilities are operated within the governance framework of government-operated research centers, similarly to most DOD labs. While government-operated, they have been granted additional legislative flexibilities for hiring employees beyond those normally available to government entities.
NASA locates its science and technology staff at four government-operated research centers, one contractor-operated FFRDC, and five NASA centers assisting space and space flight development. These centers and the Jet Propulsion Laboratory—NASA's sole sponsored FFRDC—execute NASA's research missions, including technology development in exploration and aeronautics. The differences between these two governance approaches are described in table 4. NASA also works with the Johns Hopkins University Applied Physics Lab, a UARC, to develop major space flight missions.
The NASA Glenn Research Center—a government-operated laboratory—is currently developing solar electric propulsion technologies intended to allow manned and unmanned spacecraft to be propelled far beyond earth orbit using solar power. This project is developing large, flexible, radiation-resistant solar arrays that can be unfurled to capture solar energy powering fuel-efficient electrostatic thrusters. Scientists expect a system-level flight test within the next decade to demonstrate key technologies supporting NASA's Lunar Orbital Platform-Gateway project, a platform to mature necessary short- and long-duration deep space exploration capabilities.
Under both governance approaches, oversight is provided by NASA headquarters, including funding sponsors providing oversight for their individual projects, under authorities in Title 51, Chapters 201 and 203 of the U.S. Code; for the Jet Propulsion Laboratory, 10 U.S.C. § 2304(c)(3)(B) and Federal Acquisition Regulation § 35.017 also apply. The Jet Propulsion Laboratory is operated by a university contractor under a sole-source, 5-year contract renewable to 10 years total; it occupies government-owned property (originally part of DOD) and is not permitted to compete against industry, except for operation of an FFRDC. "Lab" as used in this context refers to science and technology organizations equivalent to NASA research centers and DOD UARCs and FFRDCs.
Mission leadership officials at NASA Headquarters—including the Associate Administrators for Aeronautics Research, Human Exploration, Science, and Space Technology—oversee NASA's research centers as well as the Jet Propulsion Laboratory. These officials are responsible for technology programs providing funds to research centers and the Jet Propulsion Laboratory to support their specific mission areas. NASA's science and technology project portfolios are based on the requirements and priorities established by NASA's leaders in collaboration with key stakeholders in academia and industry, among others. In planning their science and technology work, NASA's Glenn Research Center officials noted that NASA research center directors consider the capabilities and resources—including staff and facilities—of other research centers to minimize redundant work.
NASA Has Flexibilities for Managing Its Scientist Workforce
NASA depends on a highly skilled civil servant and contractor workforce to plan and execute its missions. Congress provided NASA with additional human resource authorities beyond those otherwise allowed for federal government personnel through the NASA Flexibility Act of 2004. We found in September 2008 that NASA sought this flexibility to ensure that it could hire and retain the workforce it desired. This law consisted of multiple provisions to address a range of human capital challenges and to strengthen all levels of the workforce. The provisions included incentives—including compensation—to allow NASA to compete successfully in the labor market with the private sector and reshape its workforce more effectively to support the Agency’s mission. NASA also employs a significant contractor workforce across its different centers.
NASA Scientific Projects Are Mostly Funded According to NASA’s Priorities
Glenn Research Center officials we interviewed stated that their portfolio of science and technology projects—and funding—mostly aligns with NASA's top requirements and priorities. They, along with NASA sponsors, create technology roadmaps and investment plans to determine their future projects. NASA policy requires that NASA's scientific teams offer proposals for potential research and science and technology projects. This is similar in some ways to how many DOD and Energy centers must find sponsors willing to fund specific technology development projects, rather than receiving technology development funding for a given year. Peer review teams review these proposals and identify for selecting officials those they believe have the most scientific merit. Ames Research Center officials said they believe this process can foster innovation, encourage employees to keep skills honed, and mitigate complacency.
Glenn Research Center officials said that while most of the work they conduct is for sponsored applied research or advanced technology development, about 2 percent of their science and technology budget is spent on early-stage scientific innovation. Recommended projects of this nature proposed by the research center are typically approved by headquarters officials, according to these Glenn officials. NASA provides technical grants for basic research and applied science to university scientists nationwide on a competitive basis, and also funds similar research done internally at research centers.
NASA Officials Identified Workforce and Infrastructure Challenges
As with DOD and Energy’s research centers, NASA officials have identified some key operating challenges, including:
Aging infrastructure and facilities. The NASA Inspector General listed infrastructure as one of the top five management and performance challenges facing NASA. Further, the Inspector General identified deficiencies with facilities planning and reported that about 80 percent of facilities at three of four NASA research centers are over 50 years old, while about half of the facilities at the Jet Propulsion Laboratory and the fourth research center are that old. Infrastructure projects and upgrades of $1 million or less are undertaken by research center management rather than at the NASA headquarters level. Construction above this threshold has significantly more requirements and is approved by NASA headquarters. Glenn Research Center officials indicated it is difficult to obtain funding for projects that exceed the minor infrastructure threshold, in part due to competition with major facility construction proposals from across the agency for limited funds. As a result, they put most of their efforts into sustaining existing infrastructure.
Workforce shortages in key technical areas. As we found in May 2018, NASA has experienced workforce challenges on several major projects, such as the Mars 2020 and Europa Clipper projects. Also, over 40 percent of NASA's workforce is either eligible to retire now or will be eligible in the next 5 years. NASA headquarters officials noted that NASA's workforce is aging because NASA has a low attrition rate—about 4 percent annually—and high numbers of staff stay several years beyond retirement. Further, in 2017, the NASA Inspector General found gaps in NASA's workforce planning for specific capability areas and in how workforce plans would meet future needs, and recommended that NASA establish standardized guidance defining the data and analyses for these planning efforts. NASA concurred with the recommendation and identified a plan to implement it. However, NASA has not yet implemented the recommendation, according to the NASA Inspector General's latest semiannual report to Congress.
Conclusions
Congress provided DOD lab directors with key authorities to foster targeted, timely investments in the most pressing technology areas. Lab directors have used these authorities—such as laboratory initiated research and direct hire authorities—to varying degrees, but more needs to be done to facilitate innovation and efficiency. Specifically, service-specific obstacles in the Air Force, Navy, and Army impede lab directors from capitalizing on the laboratory initiated research authority to a greater extent. Service leadership can take actions to better understand and potentially remove barriers to fuller use of laboratory initiated research tools.
Recommendations for Executive Action
We are making the following three recommendations to DOD: The Secretary of the Air Force should assess the potential costs and benefits of implementing accounting system improvements that would allow the Air Force Research Laboratory to charge customers a fixed percentage fee on provided science and technology activities to the extent allowed under the laboratory initiated research authority. (Recommendation 1)
The Secretary of the Navy should clarify whether and how to use the laboratory initiated research authority within the Capital Investment Program. (Recommendation 2)
The Secretary of the Army should assess existing Army policy for laboratory initiated research authority and determine whether to implement changes to eliminate disincentives for lab usage of the authority. (Recommendation 3)
Agency Comments
We provided a draft of this report to DOD, Energy, and NASA for review and comment. Energy and NASA did not provide any comments on the draft report. In DOD’s written comments, reproduced in appendix III, DOD concurred with our three recommendations. Further, in its response to our third recommendation, DOD stated that the Army plans to initiate a study by January 2, 2019, regarding its use of the laboratory initiated research authority. According to DOD, the Army’s study will identify potential opportunities for policy improvements.
We are sending copies of this report to the appropriate congressional committees and offices; the Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; the Secretary of Energy; and the NASA Administrator. In addition, the report will be made available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions concerning this report, please contact me at (202) 512-4841. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff members making key contributions to the report are listed in appendix IV.
Appendix I: Objectives, Scope, and Methodology
This report (1) examines how Department of Defense (DOD) labs have used selected legislative authorities to foster innovation and efficiency and identifies barriers that impede their use; (2) identifies and describes governance models used by selected DOD-sponsored federally funded research centers and university affiliated research centers; and (3) identifies and describes governance models used by non-defense labs, specifically at the Department of Energy (Energy) and the National Aeronautics and Space Administration (NASA).
To address the first objective, we selected four specific authorities for our review based on previous work identifying science and technology best practices and expedited lab hiring:
Laboratory Initiated Research Authority. Section 219 of the Duncan Hunter National Defense Authorization Act for Fiscal Year 2009, as implemented, provides lab directors with flexibility to fund projects in four allowable categories: basic and applied research; technology transition; workforce development; and revitalization, recapitalization, or repair or minor military construction of lab infrastructure.
Laboratory Enhancement Pilot Program. Section 233 of the National Defense Authorization Act for Fiscal Year 2017 established a pilot program for lab directors to propose alternative and innovative methods that might lead to more effectively managing and operating labs and authorized lab directors to waive any regulation, restriction, requirement, guidance, policy, procedure, or departmental instruction that would affect implementation of these methods unless such implementation would be prohibited by a provision of an existing statute or common law.
Direct Hire Authority. Four types of direct hire authorities authorized by Congress since 2008 are intended to provide a streamlined and accelerated hiring process to allow the labs to successfully compete with private industry and academia for high-quality scientific, engineering, and technician talent.
Micro-purchase Authority. The Federal Acquisition Regulation (FAR) states a preference for government agencies to purchase and pay for micro-purchases of supplies or services using the government-wide commercial purchase card at or below the micro-purchase threshold, but micro-purchases may be conducted using any of the simplified acquisition methods. While the FAR micro-purchase threshold was generally $3,500 at the time of our review, Congress increased this threshold to $10,000 for activities of DOD science and technology reinvention laboratories in Section 217 of the National Defense Authorization Act for Fiscal Year 2017.
Although Congress has provided additional legislative authorities to defense lab directors to address hiring, infrastructure, and technology transition challenges, the authorities that we covered in our review are the ones that our prior and current work have shown are currently, or have the potential to be, the most critical for supporting science and technology reinvention laboratories’ innovation mission within DOD labs. DOD lab leaders use these authorities to flexibly fund projects intended to facilitate research and development; propose alternative and innovative methods that might lead to more effective lab management; directly hire personnel at DOD labs including students currently enrolled in science, technology, engineering, and mathematics (STEM) programs; and expand critical science and technology purchases using simplified acquisition methods.
To identify the extent to which DOD laboratories have used these authorities, as well as to identify what potential barriers existed to using them, we administered a survey to 44 science and technology reinvention laboratory (STRL) directors (or their equivalent) to collect information on the use of these specific authorities, their perceptions about the effectiveness of those authorities, and their perceptions about any barriers to using these authorities. The members of the population surveyed were the 44 defense laboratories defined as science and technology reinvention laboratories. For the purposes of our review, we defined laboratories as inclusive of Air Force technical directorates (10), Army warfare centers (17), and Navy warfare centers (17). We emailed questionnaires to the laboratories beginning in late March 2018, and survey data collection ended in early May 2018, with 31 labs returning completed questionnaires, for an overall response rate of about 70 percent at the laboratory level.
We took steps to minimize the potential errors that the practical difficulties of conducting any survey may introduce. Nonresponse error can result when a survey fails to capture information from all population members selected into a survey sample. Of the 13 questionnaires not returned, 4 were from Army warfare centers and 9 were from Air Force research directorates. Throughout the data collection period, we made multiple follow-up attempts by email and phone to those labs not yet responding. The Air Force Research Laboratory (AFRL) provided a single survey response for the entire laboratory enterprise. Not all returned questionnaires contained answers to every question applicable to a respondent; however, this question-level nonresponse did not exceed one for any of the questions applicable to all 31 labs. Because we selected the entire population of laboratories for our survey, our estimates are not subject to sampling error. We developed our list of the 44 labs in our population in consultation with DOD and are confident that none were left out, so our survey has no known sources of coverage error. We conducted pretests of the draft questionnaire with 3 laboratories in the population and made revisions to reduce the possibility of measurement error from differences in how questions were interpreted and the sources of information available to respondents. After reviewing the answers received, we also followed up as necessary with respondents to clarify apparent inconsistencies or other possible misreports, and made changes to responses where corrections were needed. A second, independent analyst checked the accuracy of all computer analyses to minimize the likelihood of errors in data processing.
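To make the survey tallies above easy to trace, the short sketch below reproduces the population and response counts reported in this appendix; it is our own illustration, and the rounding of the response rate is ours.

```python
# Population of science and technology reinvention laboratories surveyed.
population = {
    "Air Force technical directorates": 10,
    "Army warfare centers": 17,
    "Navy warfare centers": 17,
}
total_labs = sum(population.values())  # 44 labs surveyed

# Questionnaires not returned, by service, as described above.
not_returned = {"Army warfare centers": 4, "Air Force research directorates": 9}
returned = total_labs - sum(not_returned.values())  # 31 completed questionnaires

response_rate = returned / total_labs
print(f"{returned} of {total_labs} labs responded ({response_rate:.1%})")
# prints: 31 of 44 labs responded (70.5%)
```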
To obtain additional information on this objective, we reviewed relevant legislation that established or amended these authorities and reviewed applicable DOD and service policy documentation. Further, we collected military service-related information on the usage of two authorities:
Spending data on the use of the laboratory initiated research authority. We gathered this information from DOD-mandated reports to Congress on the use of this authority and military service officials. We determined these data to be reliable based on reviews of agency documentation collected and interviews with agency officials.
Data on the usage of direct hire authorities by the service laboratories.
We collected direct hire data from each of the military services including the number of direct hire authority candidates hired as well as the number of direct hire positions the laboratories were authorized to hire. We determined these data to be reliable based on reviews of agency documentation collected and interviews with agency officials. We also used select findings from our May 2018 report where we evaluated DOD’s use of hiring authorities, including direct hire authority. More information about the scope and methodology of our prior work can be found in that report.
In addition, we collected information on military service proposals to utilize the laboratory enhancement pilot program authority.
To obtain further information on department- and service-level involvement in and perspectives on defense laboratory authorities and challenges, we interviewed officials responsible for the management, execution, and oversight of DOD's science and technology enterprise, including military service labs. At the Office of the Secretary of Defense and military department headquarters level, where officials are responsible for the management and oversight of science and technology activities, we met with officials from the:
Office of the Assistant Secretary of Defense for Research and Engineering;
DOD Defense Laboratories Office;
Office of the Deputy Assistant Secretary of the Army for Research and Technology;
Office of the Deputy Assistant Secretary of the Air Force for Science, Technology, and Engineering;
Office of the Assistant Secretary of the Air Force for Financial Management and Comptroller;
Office of the Deputy Assistant Secretary of the Navy for Research, Development, Test, and Evaluation; and
Office of the Budget, within the Office of the Assistant Secretary of the Navy for Financial Management and Comptroller.
We also met with military department lab officials responsible for the management and execution of science and technology activities from the:
Army Research, Development and Engineering Command;
Army Research Laboratory;
Army Aviation and Missile Research, Development, and Engineering Center;
Air Force Research Laboratory;
Naval Research Laboratory;
Naval Surface Warfare Center, Headquarters; and
Naval Surface Warfare Center, Carderock Division.
To identify and describe governance models used by selected DOD-sponsored federally funded research centers (FFRDCs) and university affiliated research centers (UARCs), we focused our review on the 3 FFRDCs designated as research and development labs as well as all 13 UARCs sponsored by DOD entities. We reviewed appropriate sections of the FAR related to FFRDCs and UARCs, DOD guidance for working with FFRDCs and UARCs, relevant contracts, and performance assessments. Further, we met with officials from the Office of the Deputy Director, OSD Studies and Federally Funded Research and Development Centers Management Office, to discuss overall FFRDC and UARC management, policies, and challenges facing FFRDCs and UARCs. We interviewed officials at selected research and development FFRDCs and UARCs to discuss their experience conducting DOD research and interactions with their customers, such as defense program executive offices. We met with officials at the two major research and development lab FFRDCs—the Lincoln Laboratory at the Massachusetts Institute of Technology (MIT) and the Software Engineering Institute at Carnegie Mellon University. We also selected two university affiliated research centers, sponsored by the Navy and the Army, respectively: the Applied Physics Laboratory at Johns Hopkins University and the Institute for Soldier Nanotechnologies, also at MIT.
To identify and describe governance models used by non-defense labs, we selected Energy and NASA as the focus of our efforts. We identified 17 Energy national labs and 4 NASA research centers conducting basic and applied research similar to that of DOD labs. These agencies, along with DOD, represent 3 of the top 4 agencies in terms of average federal research and development spending from fiscal years 2015 to 2017. In our August 2016 GAO Technology Readiness Assessment Guide, we drew heavily from DOD, NASA, and Energy for best practices, terminology, and examples. This contributed to our decision to focus on Energy and NASA's research entities in this laboratory governance review. We did not include the fourth agency—the National Institutes of Health—in our review because its work is not as similar to DOD's. We also reviewed relevant Energy and NASA guidance as well as relevant FAR sections.
At Energy, we met with officials from the National Nuclear Security Administration, which is a semiautonomous entity within Energy responsible for managing the nation's nuclear weapons and nuclear security. We also met with officials from the Office of Science, a program office responsible for supporting energy-related fundamental science and research. To gain further insights on operating structures, funding arrangements, and their overall experience, we met with lab leadership at selected Energy labs, which were chosen based on initial discussions with agency officials and our review of past GAO work:
Oak Ridge National Laboratory,
Lawrence Berkeley National Laboratory,
Lawrence Livermore National Laboratory, and
National Energy Technology Laboratory (the sole Energy government-owned and government-operated laboratory).
We also met with leadership from Battelle Memorial Institute, which is the sole or joint contract manager for five Energy national labs including Oak Ridge National Laboratory. In addition, Battelle is an integrated subcontractor at Lawrence Livermore National Laboratory.
At NASA, we met with officials from NASA's Science Mission Directorate and Mission Support Directorate to discuss overall research center management and operations. We also leveraged ongoing and recently completed GAO work to gain additional insight into NASA's operations, such as human capital management. Almost all of NASA's research, space, and space flight centers conduct research and development activities. However, we focused our review on four research centers where NASA primarily conducts its aeronautics research, which has substantial overlap with DOD activities. To gain additional insight into the experience of lab leaders at NASA research centers, we met with officials at NASA's Glenn Research Center and Ames Research Center. In addition, we met with officials at the Jet Propulsion Laboratory, which is the only NASA-sponsored FFRDC.
We conducted this performance audit from July 2017 to December 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Funding for Selected External DOD Sponsored Research Centers
Office of the Secretary of Defense (OSD)
Appendix III: Comments from the Department of Defense
Appendix IV: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Christopher R. Durbin (Assistant Director); Charlie Shivers, III (Analyst-in-Charge); Emily Bond; Lorraine Ettaro; Carl Ramirez; Sylvia Schatz; Sean Seales; Brian Smith; and Robin Wilson made significant contributions to this report.
Why GAO Did This Study
Congress created several authorities that provide DOD research labs with ways to increase efficiency and foster innovation.
Senate report 114-255 contained a provision for GAO to study governance models used by federal labs. This report evaluates DOD labs' use of authorities to foster innovation and efficiency.
GAO selected four authorities that recent work on best practices for science and technology management and expedited defense lab hiring have shown to be the most crucial for supporting innovation; administered a survey to 44 lab directors to gain insight into their use of the authorities; interviewed key lab officials and contractors; and reviewed relevant policies and guidance.
What GAO Found
Congress has provided the Department of Defense's (DOD) research labs with several authorities to enhance management and operations. Four authorities that GAO examined provide lab directors with greater ability to make their own decisions regarding the funding of projects, hiring, lab management, and purchasing of equipment or services.
1. Laboratory initiated research authority. This authority, as implemented, provides labs with a means to fund new science and technology projects that they consider a priority. Labs may use a percentage of all funds available to the lab and are permitted to charge customers of the lab a percentage fee of the costs for activities performed by the lab for the customer.
2. Direct hire authority. This authority enables labs to compete with private industry for high-quality talent. For example, it provides for streamlined hiring of applicants with relevant advanced degrees, or students enrolled in science, technology, engineering, and mathematics programs.
3. Laboratory enhancement pilot program authority. This authority generally allows lab directors to propose alternative methods that might lead to more effective lab management, and waive certain policies or procedures that might affect implementation of these methods.
4. Micro-purchase authority. This authority raises the threshold for small purchases for DOD research lab activities from $3,500 to $10,000 to facilitate acquisitions.
While labs have used these authorities, their use has sometimes been limited, particularly with the laboratory initiated research authority. DOD lab directors at the Air Force, Navy, and Army cited several obstacles that impede wider use of that authority, specifically:
Air Force: Financial management officials at the Air Force stated that the service's accounting system does not currently have an automated capability to transfer the allowable percentage fee of costs to a central account at the Air Force Research Laboratory. This lack of capability, officials noted, creates a significant administrative burden related to charging these fees.
Navy: In fiscal year 2017, Navy labs invested $7.3 million in lab infrastructure projects, compared to $32.9 million and $53.7 million at the Air Force and Army, respectively. Navy lab officials told us that they were restricted in their use of infrastructure funds available under the laboratory initiated research authority due to a lack of clear guidance as to whether and how to use this authority within the Capital Investment Program of the Navy Working Capital Fund.
Army: The Army requires its labs to use a similar percentage of funds from two sources: (1) what it refers to as directly appropriated funds and (2) funds labs charge for customer activities. Some Army lab directors reported assessing a lower rate on customer funds than allowed so as not to drive customers away. The labs then generally charge a lower than desired rate on their directly appropriated funds, which further constrains the total funding available to them.
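The constraint described above can be illustrated with hypothetical numbers; the dollar amounts and percentage rates in the sketch below are invented for illustration and do not appear in this report.

```python
# Hypothetical illustration of the Army's matching-rate requirement.
direct_funds = 100_000_000    # directly appropriated funds (assumed)
customer_funds = 50_000_000   # funds charged for customer activities (assumed)

def lab_directed_funding(rate: float) -> float:
    # As described in this report, the Army requires labs to apply a
    # similar percentage to both funding sources.
    return rate * (direct_funds + customer_funds)

allowed_rate = 0.03  # hypothetical maximum rate a lab could assess
charged_rate = 0.01  # hypothetical lower rate chosen to keep customers

shortfall = lab_directed_funding(allowed_rate) - lab_directed_funding(charged_rate)
print(f"Funding forgone by charging the lower rate: ${shortfall:,.0f}")
# prints: Funding forgone by charging the lower rate: $3,000,000
```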
What GAO Recommends
GAO is making three recommendations to enhance DOD's use of laboratory initiated research authority, including that the Air Force assess potential accounting system improvements, the Navy clarify how labs can use the authority for infrastructure improvements, and the Army assess its policy to determine whether changes are needed to remove disincentives for labs to use the authority. DOD concurred with the recommendations.
Background
Oversight of 2014 Nuclear Enterprise Reviews’ Recommendations
In November 2014, the Secretary of Defense directed DOD to address the recommendations from the 2014 nuclear enterprise reviews and directed CAPE to track and assess these implementation efforts. The Joint Staff, the Navy, the Air Force, offices within the Office of the Secretary of Defense, and U.S. Strategic Command have supported CAPE’s efforts. CAPE compiled the recommendations from the 2014 nuclear enterprise reviews. In total, CAPE identified 175 distinct recommendations from the three documents. CAPE then identified 247 sub-recommendations within those recommendations, which were directed to multiple military services or other DOD components. For example, if a recommendation was directed to the Air Force and the Navy, then one sub-recommendation was made to the Air Force and one to the Navy.
CAPE then worked with the military services to identify offices of primary responsibility for implementing actions to address the recommendations, any offices with coordinating responsibility, and any resources necessary to implement each recommendation. CAPE has developed a centralized tracking tool to collect information on progress in meeting milestones and metrics. As shown in figure 1, the tracking tool includes fields for the underlying problem statement, or root cause, for the recommendation; time frames with milestones for implementing the recommendation; and performance measures (referred to as metrics in the tracking tool) to assess the effectiveness of the actions taken.
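To make the tracking tool's structure concrete, the sketch below represents one sub-recommendation record with the fields this report describes (root cause, milestones with time frames, and metrics). This is our own illustration, not CAPE's actual system; the field names and sample values are assumptions.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Milestone:
    description: str
    target_date: str      # time frame for completing the action
    completed: bool = False

@dataclass
class SubRecommendation:
    """Illustrative record mirroring the tracking-tool fields described
    in this report; not the actual CAPE tool."""
    lead_component: str               # office of primary responsibility
    problem_statement: str            # underlying root cause
    milestones: List[Milestone] = field(default_factory=list)
    metrics: List[str] = field(default_factory=list)
    status: str = "open"              # "open" or "closed"

# A hypothetical record, loosely based on a recommendation discussed below.
record = SubRecommendation(
    lead_component="Air Force",
    problem_statement="Nuclear inspection teams are not properly sized",
    milestones=[Milestone("Consolidate overlapping inspections", "2018-01")],
    metrics=["Reduction in redundant inspection activity"],
)
```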
The tracking tool currently contains hundreds of unique milestones and metrics and, according to CAPE officials, additional milestones and metrics are added as they are identified. The Air Force and the Navy also have developed their own methods of tracking their service-specific recommendations. In December 2016, the Deputy Secretary of Defense issued a memorandum that directed the transition of the tracking and analysis responsibilities related to implementing the recommendations of the 2014 nuclear enterprise reviews from CAPE to the military departments and other DOD components. However, CAPE remains responsible for providing guidance to inform the analyses conducted by other DOD entities, overseeing these analyses, and assessing recommendations for closure. The aim of these changes was to enhance ownership and embed the principles of robust analysis, continuous monitoring, and responsibility throughout the department.
NC3 Systems
NC3 is a large and complex system composed of numerous land-, air-, and space-based components used to ensure connectivity between the President and nuclear forces. NC3 is managed by the military departments, nuclear force commanders, and the defense agencies; it provides the President with the means to authorize the use of nuclear weapons in a crisis.
NC3 systems support five important functions:
Force management: assignment, training, deployment, maintenance, and logistics support of nuclear forces before, during, and after any crisis.
Planning: development and modification of plans for the employment of nuclear weapons and other options.
Situation monitoring: collection, maintenance, assessment, and dissemination of information on friendly forces, adversary forces and possible targets, emerging nuclear powers, and worldwide events of interest.
Decision making: assessment, review, and consultation that occur when the employment or movement of nuclear weapons is considered.
Force direction: implementation of decisions regarding the execution, termination, destruction, and disablement of nuclear weapons.
Oversight of the 2015 NC3 Report Recommendations
As recommended in the 2015 NC3 report, the NLC3S Council has taken a lead role in providing oversight and making the final determination on the implementation status of that report’s 13 recommendations. The NLC3S Council is co-chaired by the Under Secretary of Defense for Acquisition and Sustainment and the Vice Chairman of the Joint Chiefs of Staff. Members of the council include the Under Secretary of Defense for Policy; the Commander, U.S. Strategic Command; the Commander, North American Aerospace Defense Command/U.S. Northern Command; the Director, National Security Agency; and the DOD CIO. The DOD CIO also serves as the Secretariat for the NLC3S Council and tracks the implementation of recommendations from the 2015 NC3 report, among other activities. Additional organizations, such as the Office of the Under Secretary of Defense for Intelligence, may participate in the NLC3S Council’s meetings to provide subject matter expertise. Regular participants in the NLC3S Council include the Office of the Under Secretary of Defense (Comptroller); senior leaders from the Army, the Navy, and the Air Force; the Defense Information Systems Agency; the White House Military Office; and CAPE.
Key Nuclear Oversight Organizations
DOD has established or participated in a number of oversight organizations that aid in the management of the defense nuclear enterprise. These organizations include the following:
NDERG: Established in 2014 by the Secretary of Defense to ensure the long-term health of the nuclear enterprise by addressing resourcing, personnel, organizational, and enterprise policy issues identified in the 2014 nuclear enterprise reviews. The NDERG consists of a group of senior officials chaired by the Deputy Secretary of Defense, including the Vice Chairman of the Joint Chiefs of Staff. The NDERG is supported by a Nuclear Deterrent Working Group, which meets biweekly and reviews the status of the recommendations of the nuclear enterprise reviews, and a Nuclear Deterrent Senior Oversight Group, which meets quarterly and reviews any recommendations that the Working Group believes are ready for the NDERG to close. The Nuclear Deterrent Senior Oversight Group also receives annual briefings on component assessments, reviews organizational changes, and discusses other cross-service issues. The Deputy Secretary of Defense updates the Secretary of Defense on the NDERG’s progress as requested.
NLC3S Council: A DOD council established by statute that is responsible for the oversight of the command, control, and communications system for the national leadership of the United States. Additionally, as recommended in the 2015 NC3 report, the NLC3S Council reviews the recommendations from the report and assesses them for closure. The NLC3S Council is supported by the National Leadership Command Capabilities Executive Management Board, which comprises a Senior Steering Group and four working groups—Stakeholders, Resources, Assessments, and Nuclear Command and Control Issues. The Executive Management Board ensures that the Council is informed of and presents issues that need principal-level decisions.
Nuclear Weapons Council: A joint DOD and Department of Energy council established by statute that is responsible for managing aspects of the U.S. nuclear weapons stockpile and programs. The Under Secretary of Defense for Acquisition and Sustainment is designated as the chair of the Nuclear Weapons Council, and the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs serves as the staff director of the Council. The Nuclear Weapons Council is supported by a senior executive-level Standing and Safety Committee and a subordinate, working-level Action Officers Group. The Action Officers Group performs detailed analyses of issues and provides those analyses to the Standing and Safety Committee, which reviews them and formulates decision packages for final Council review and decision.
Nuclear Matters: An office under the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs; it is headed by the Deputy Assistant Secretary of Defense for Nuclear Matters and serves as a focal point for DOD activities and initiatives to sustain a safe, secure, and effective nuclear deterrent and counter the threat from nuclear terrorism and nuclear proliferation.
Nuclear and Missile Defense Policy: An office supporting the Under Secretary of Defense for Policy and the Assistant Secretary of Defense for Strategy, Plans, and Capabilities. Nuclear and Missile Defense Policy participates in the development of strategies, creation of policies, and conduct of oversight of national nuclear policy, treaty negotiations, and missile defense policy.
U.S. Strategic Command: DOD functional combatant command responsible for planning for and employment of U.S. nuclear weapons and for certain matters related to NC3.
DOD Has Made Progress in Implementing and Tracking Recommendations, Including Evaluating and Documenting Key Risks
DOD continues to make progress in implementing the recommendations from the 2014 nuclear enterprise reviews and has made improvements in tracking and evaluating this progress. Specifically, the military services and other DOD components have begun identifying and documenting risks associated with implementing recommendations from the 2014 reviews, based on guidance that was issued by CAPE in January 2018. DOD has also made progress in implementing the recommendations from the 2015 NC3 report. For example, the DOD CIO issued guidance in July 2018 to improve the tracking and evaluation of DOD’s progress in implementing the recommendations of the NC3 report.
DOD Has Made Progress Implementing Recommendations from the 2014 Nuclear Enterprise Reviews
DOD continues to make progress in implementing the recommendations of the 2014 nuclear enterprise reviews. As of our last report, in October 2017, DOD had closed 77 sub-recommendations. Based on our review of CAPE's centralized tracking tool, the NDERG has closed 74 additional sub-recommendations since then. As a result, according to the CAPE tracking tool, the NDERG has closed 151 of the 247 sub-recommendations as of September 2018 (see fig. 2).
Since October 2017, DOD has closed sub-recommendations related to a number of issues identified in the 2014 nuclear enterprise reviews. For example, in January 2018, the NDERG closed a sub-recommendation originating from the Internal Assessment of the Department of Defense Nuclear Enterprise that the Air Force should ensure its nuclear inspection teams are properly sized and that inspection efforts are coordinated. In response to the recommendation, the Air Force worked to reduce the footprint of inspectors, to the extent possible, and improve consolidation of inspections to avoid redundancy. Meanwhile, in January 2018, the NDERG also closed a sub-recommendation that originated from the Independent Review of the Department of Defense Nuclear Enterprise that the Navy improve its readiness reporting system to provide better information about manning and personnel costs. In response to the recommendation, the Navy has made improvements in its readiness reporting by having ballistic missile submarine fleet commanders report additional readiness information about manning and personnel costs through the Navy’s readiness reporting.
The Air Force, the Navy, and CAPE have described some of the remaining open recommendations as enduring issues for the enterprise, and tracking progress toward these recommendations will aid in monitoring the overall health of the defense nuclear enterprise. These recommendations include ongoing sustainment and maintenance efforts and improving the morale of the nuclear forces. As we have previously reported, CAPE officials stated that it would take years to implement the great majority of these recommendations and measure whether they have had their intended effect. For example, CAPE and military service officials have noted that it would take years for some of the recommended cultural changes to manifest.
Military Services Have Begun to Track and Evaluate the Risks Associated with Open 2014 Recommendations
The military departments and other DOD components are responsible for tracking and evaluating the implementation status of the 2014 nuclear enterprise reviews’ recommendations; CAPE is providing guidance to aid these efforts. As we previously reported, CAPE had been responsible for tracking this progress until, in December 2016, the Deputy Secretary of Defense issued a memorandum that transitioned this responsibility from CAPE to the military departments and other DOD components. However, CAPE remains responsible for providing guidance to inform the analyses conducted by other DOD entities, overseeing the analyses, and assessing recommendations for closure. In January 2018, in response to our 2017 recommendation, CAPE issued additional guidance to improve the identification, assessment, and documentation of risks related to implementing the 2014 nuclear enterprise reviews’ recommendations.
CAPE’s January 2018 guidance includes specific instructions that military departments and other DOD components should follow when identifying, assessing, and documenting risks. Specifically, the guidance instructs the responsible components to identify any key risks associated with the open recommendations and to document those key risks. The January 2018 guidance defines key risks as those that require mitigation by the leadership of the DOD component (e.g., a risk that requires mitigation by senior Air Force or Navy leadership) or those that cannot be mitigated within a component’s existing authorities and resources (e.g., a risk that cannot be mitigated within the Air Force or Navy that must be raised to a higher authority). Additionally, the guidance indicates that risks that do not rise to the level of being key risks should also be tracked according to the component’s own assessment methodology and, if a component’s approach to a recommendation does not carry any key risks, this should be documented.
The guidance identifies some risk assessment tools for components to use, as appropriate, but specifically states that components should consider the following questions:
What are the risks if the recommendation is not implemented?
What are the risks in the approach to implementing the recommendation?
What flexibility does the approach have to respond to unintended consequences?
What are the controls and actions needed to mitigate risk to an acceptable level?
The guidance also notes that components should update risk assessments periodically as progress is made and new data become available.
According to the CAPE tracking tool, as of September 2018, key risks—or the absence of key risks—are documented for 85 of the 96 open sub-recommendations in the centralized tracking tool. Of the 85 sub-recommendations for which risk information is identified in the centralized tracking tool's "Key Risks and Issues" field, key risks are identified for 50. For the remaining 35, no risks are identified as rising to the level of being a key risk. Based on information in the tracking tool, the Air Force and the Navy have lead responsibility for the 85 sub-recommendations for which risk information is identified in the tracking tool. U.S. Strategic Command, the Joint Staff, and the Office of the Secretary of Defense have not yet included any risk information for the remaining 11 open sub-recommendations for which they have lead responsibility.
In addition to updated risk information in CAPE’s central tracking tool, the Air Force has updated its internal tracking tool. According to Air Force officials, the Air Force tracking tool includes both key risks—risks that require Air Force leadership to mitigate them—and low-level risks—risks that do not rise to the level where Air Force leadership should mitigate them—for each of the 60 remaining sub-recommendations for which it has the lead. For example, for the recommendation concerning Air Force nuclear personnel shortages, the Air Force’s internal tracker notes the risk that over-prioritizing the nuclear enterprise could affect the Air Force’s ability to conduct conventional operations. Additionally, the Air Force has identified areas where there is no key risk. For example, for the recommendation concerning intercontinental ballistic missile sustainment, the Air Force’s internal tracker noted that there was no key risk but that there was a low-level risk that using limited resources to support legacy systems could lead to underfunding modernization efforts.
The Navy, in addition to documenting risk information in CAPE's centralized tracking tool, has documented risks for many of its open sub-recommendations in an internal document called the Navy Nuclear Deterrent Review Plan of Actions and Milestones, which tracks the Navy recommendations by categories that the Navy created. For example, when discussing risks for maintaining Navy NC3 systems, the Navy Nuclear Deterrent Review Plan of Actions and Milestones states that the Navy monitors availability across several levels, including sustainment and modernization efforts. Additionally, controls are in place at various levels to manage risks to the availability of NC3 assets. The Navy Nuclear Deterrent Review Plan of Actions and Milestones acknowledges that if the Navy does not continue to use these controls, the risk to the NC3 mission may be unacceptable. According to Navy officials, risk is also examined during the Navy's internal process for closing recommendations through a review by the Navy Nuclear Deterrent Mission Oversight Council. For example, the Council was briefed on actions to mitigate the risk that insufficient personnel strength at some maintenance facilities poses to the operational availability of Ohio-class submarines.
DOD Has Made Progress in Implementing Recommendations from the 2015 NC3 Report
DOD continues to make progress in implementing the recommendations of the 2015 NC3 report. Since we last reported, in October 2017, DOD has closed 3 additional recommendations. In total, as of August 2018, the NLC3S Council has closed 5 of the 13 recommendations from the NC3 report (see fig. 3).
According to tracking information from the DOD CIO, the Navy has completed its portion of two of the open recommendations, but the Air Force still has tasks it needs to complete before each recommendation can be reviewed and closed by the NLC3S Council. As a result, these two recommendations will remain “in progress” until the Air Force also completes its portion of the implementation. In addition, a DOD component has recommended that an additional 2 of the 13 recommendations be closed; however, these have not yet been reviewed by the NLC3S Council.
In July 2018, in response to our October 2017 recommendation, the DOD CIO issued guidance to improve the tracking and evaluation of DOD’s progress in implementing the recommendations of the 2015 NC3 report.
This guidance provides instructions to the military departments and DOD components with responsibility for implementation of the 2015 NC3 report recommendations to identify and provide key milestones, metrics utilized to track progress, and information about recent progress—including an assessment of progress, required decisions and guidance, and key risks and other issues.
Information on the status of the 2015 NC3 report’s recommendations is collected in a layout similar to that developed by CAPE for the 2014 recommendations. The responsible organizations are in the process of updating the information they have provided to the DOD CIO to respond to the new guidance. The guidance directs the responsible organizations to provide quarterly updates on the remaining, open recommendations beginning in August 2018. According to a DOD CIO official, these regular updates will continue until the recommendations are closed.
DOD Has Taken Steps to Improve Oversight of the Nuclear Enterprise, but Key Oversight Groups Lack Clearly Defined Roles and Responsibilities and Methods for Communication and Collaboration
Military Services Have Taken Steps to Improve Oversight of the Nuclear Enterprise
DOD and the military services have taken steps to improve oversight of the defense nuclear enterprise, in part in response to recommendations from the 2014 nuclear enterprise reviews. DOD plans to use the NDERG to oversee long-term and enduring issues affecting the nuclear enterprise. However, the NDERG does not have formally defined roles and responsibilities, and DOD has not established methods for how the NDERG will communicate and collaborate with the other nuclear enterprise oversight organizations. Further, DOD NC3 oversight guidance has not been updated to reflect evolving NC3 oversight roles and responsibilities and to include methods for communicating and collaborating with other nuclear enterprise oversight groups.
The military services have taken steps to improve oversight of the nuclear enterprise in response to the concerns raised by the 2014 nuclear enterprise reviews. The reviews noted a lack of comprehensive oversight of the defense nuclear enterprise and a need for increased visibility for senior leaders. Specifically,
Since 2014, the Air Force has realigned responsibilities, authorities, and accountability for its nuclear forces to improve oversight of the nuclear enterprise. For example, the Air Force implemented two recommendations from the Internal Assessment of the Department of Defense Nuclear Enterprise to elevate senior Air Force leadership positions in the nuclear enterprise. Air Force Global Strike Command was upgraded from a three-star to a four-star major command. According to officials from Air Force Global Strike Command, the elevation of the command to a four-star major command has helped ensure support from the Air Force for funding and management of the nuclear enterprise. In 2016, Air Force Global Strike Command created the Air Force NC3 Center to manage portions of the Air Force NC3 weapon system that are owned by the command and—according to Air Force NC3 Center officials—to provide oversight of the organize, train, and equip function for all of the Air Force's NC3 missions. The Air Force also upgraded the position of Deputy Chief of Staff for Strategic Deterrence and Nuclear Integration (A10), Headquarters Air Force, from a two-star to a three-star position. The elevation of both the Air Force Global Strike Command and A10 leadership was authorized by the Secretary of Defense to ensure that their rank is commensurate with the importance of the nuclear mission.
The Navy oversees its leg of the nuclear triad using the Navy Nuclear Deterrent Mission Oversight Council. The Council is a senior Department of the Navy forum that is responsible for coordinating the Navy’s nuclear weapon activities (safety, security, reliability, and nuclear weapons incident response), operations, personnel, policy, material support, and oversight functions. According to Navy officials, the Navy Nuclear Deterrent Mission Oversight Council addresses long-term issues affecting the Navy’s nuclear enterprise and identifies and monitors risks associated with those issues, including the actions taken in response to the 2014 nuclear enterprise reviews.
The NDERG Lacks Clearly Defined Roles and Responsibilities and Approaches for Communicating and Collaborating with Other Nuclear Oversight Groups
While the Deputy Secretary of Defense was designated as chairman of the NDERG, DOD guidance does not define the membership, roles, and responsibilities of the NDERG or identify methods for how the NDERG and its working and oversight groups should communicate and collaborate with other nuclear enterprise oversight groups. In July 2018, the Deputy Secretary of Defense issued a memorandum directing a series of changes intended to make the NDERG an enduring, principal-level forum to track risks, issues, and opportunities associated with the health of the defense nuclear enterprise. The memorandum directed the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs to serve as the NDERG secretariat and, with the Director of CAPE, co-chair the Nuclear Deterrent Senior Oversight Group. In addition, within 60 days of the issuance of the memorandum, the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs is to provide a draft NDERG charter for coordination. The charter will serve as an interim step while the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs prepares a DOD directive; it will also specify the NDERG's functions, organization, and responsibilities. The new role as secretariat of the NDERG and co-chair of the Nuclear Deterrent Senior Oversight Group will expand the current responsibilities of the Office of the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs with regard to nuclear enterprise oversight.
However, it is not clear whether the charter under consideration will adequately incorporate the roles and responsibilities of the entities on the NDERG, particularly given the new long-term role of the NDERG. According to DOD officials, they have not determined to what extent NDERG roles and responsibilities will be articulated in the charter. Further, prior to issuance of the July 2018 memorandum, officials stated that they had not created a charter for the NDERG because senior leaders within the department were still deciding what ongoing role the NDERG should take in monitoring the health of the nuclear enterprise. The July memorandum helps to clarify this role, but it does not make clear all of the associated roles and responsibilities of the NDERG and its participants. For example, DOD has not determined whether the charter will identify the NDERG's responsibilities for issues that are not directly related to the 2014 nuclear enterprise reviews or what the NDERG's long-term role will be once most or all of the recommendations from the 2014 nuclear enterprise reviews are implemented. The July memorandum does indicate that the charter will include a plan to confirm that NDERG-approved actions have the expected effects and do not result in unintended consequences or recurrence of the initial issue. However, the memorandum does not specify how or when the NDERG should address new issues and does not indicate that the charter or DOD directive will do so either.
Standards for Internal Control in the Federal Government states that management should establish an organizational structure, assign responsibility, and delegate authority to achieve an entity's objectives. Specifically, the standards call for management to develop an organizational structure with an understanding of the organization's overall responsibilities and assign these responsibilities to enable the organization to operate in an efficient and effective manner, comply with applicable laws and regulations, and reliably report quality information. In the 2014 nuclear enterprise reviews, DOD identified a lack of comprehensive oversight of the defense nuclear enterprise. To ensure greater awareness among senior DOD leaders, the internal review recommended that DOD create a single, senior-level position to oversee the nuclear enterprise, provide the Secretary of Defense with additional routine visibility into the nuclear enterprise, and marshal the authority of the Secretary to resolve identified issues. DOD did not implement the internal review team's recommendation to establish a senior oversight position for the nuclear enterprise because, according to CAPE officials, the Secretary of Defense considered the NDERG to be sufficient to address the recommendation. However, four years after the NDERG was established, its roles and responsibilities have not been clearly articulated. DOD now plans to develop a charter and subsequent DOD directive for the NDERG, but it remains unclear whether these documents will provide clear roles and responsibilities for the NDERG to effectively function as the comprehensive oversight body for the enterprise—in part because, according to officials, they are in the early stages of development.
In addition, DOD has not clearly defined how the NDERG will communicate and collaborate with the other oversight groups. DOD uses other groups, such as the Nuclear Weapons Council and the NLC3S Council, to oversee portions of the nuclear enterprise and coordinate among various DOD entities and with the Department of Energy. Many of the same individuals and organizations are represented in two or all three of the oversight organizations. For example, four DOD senior leaders—the Vice Chairman of the Joint Chiefs of Staff; the Under Secretary of Defense for Acquisition and Sustainment; the Under Secretary of Defense for Policy; and the Commander, U.S. Strategic Command—participate in both the Nuclear Weapons Council and the NLC3S Council, which are statutorily responsible for oversight of aspects of the defense nuclear enterprise. Figure 4 shows the roles and responsibilities of some of the nuclear enterprise oversight groups and DOD components.
The NDERG, the Nuclear Weapons Council, and the NLC3S Council have lower-level management and working groups that include participants from many of the same organizations. For example, the Air Force’s Office of Strategic Deterrence and Nuclear Integration is represented in the NDERG’s Nuclear Deterrent Senior Oversight Group and on the Nuclear Weapons Council’s Standing and Safety Committee. The Army, Navy, and Air Force also participate in all three oversight groups’ working groups. Unlike the NDERG—which will have no formally defined roles and responsibilities until its charter and the eventual directive are finalized—the Nuclear Weapons Council and the NLC3S Council are statutorily responsible for overseeing specific aspects of the nuclear enterprise.
According to officials from the Office of the Deputy Assistant Secretary of Defense for Nuclear Matters, in response to updated presidential guidance, a charter is being drafted for a new nuclear enterprise oversight group—the Security Incident Response Council. According to these officials, the council will be an interagency group that will have oversight of plans for responding to potential security incidents involving nuclear weapons and will bring together officials from across all relevant departments and agencies.
The Deputy Secretary of Defense's July 2018 memorandum, previously discussed, does not address how the NDERG should collaborate with other nuclear enterprise oversight groups with overlapping responsibilities. According to the memorandum, issues falling under the purview of other existing nuclear enterprise oversight groups will be addressed by those groups, but the memorandum acknowledges that the groups may interact. Specifically, the memorandum states that the Nuclear Weapons Council, the NLC3S Council, the Nuclear Posture Review Implementation group, and the Security Incident Response Council may recommend issues for the NDERG. However, the memorandum does not describe how the NDERG should exchange the necessary quality information with other oversight groups, including criteria for determining which issues should be recommended or otherwise communicated to the NDERG, or when those groups should recommend issues for the NDERG's consideration. Further, the other oversight groups will not fall under the authority of the NDERG charter, so stating that the groups may recommend issues for the NDERG does not ensure that they will do so. As previously stated, it is not clear whether these issues will be addressed in either the NDERG's charter or the subsequent DOD directive.
As we have previously reported, leading practices for enhancing interagency collaboration include agreeing on roles and responsibilities and having written guidance and agreements. Specifically, collaborating agencies should work together to define and agree on their respective roles and responsibilities. In doing so, agencies can clarify who will do what, organize their joint and individual efforts, and facilitate decision making. Additionally, Standards for Internal Control in the Federal Government states that management should use quality information to achieve an entity’s objectives and internally and externally communicate the necessary quality information to achieve the objectives. These standards call for management to communicate quality information with appropriate methods of communication and consider a variety of factors in selecting an appropriate method of communication, such as the audience and the nature of the information.
The 2014 independent nuclear enterprise review found that the difficulty of defining the defense nuclear enterprise complicates senior DOD leaders’ ability to take ownership of the enterprise. Specifically, the independent review noted that senior leaders within the Office of the Secretary of Defense and the military services referred to the “nuclear enterprise” as if there were a coherent, integrated structure and set of activities supporting the nuclear forces. However, the review team did not find a coherent, integrated structure and synchronized set of activities that could be characterized as a DOD “nuclear enterprise.” Further, the independent review team found that there was a loose federation of separate nuclear activities scattered across multiple organizations with no clearly defined responsibility or accountability.
In response to the challenges the independent review identified in 2014, the review recommended that the loosely federated nuclear activities within OSD and the Air Force be brought together into a coherent and synchronized structure that focuses on direction and support for the nuclear forces. In addition, the internal review noted as one of its most important findings that the problems of the nuclear enterprise did not exist in isolation and would require a coordinated, holistic approach to resolve. In particular, the internal review team concluded that, because the issues they identified in each of the military services were interdependent, the ultimate solutions in many instances would have to be cultural, structural, and sustained over the long term. Identifying oversight groups’ roles and responsibilities and identifying and establishing methods for communicating and collaborating among groups could help mitigate the problems identified in the 2014 reviews.
In the absence of defined roles and responsibilities or methods for how the NDERG is to communicate and collaborate with other existing oversight organizations, the NDERG may be unable to effectively oversee the defense nuclear enterprise in a coordinated, holistic manner that would address problems identified by the 2014 nuclear enterprise reviews or other issues it may need to address in the future. Additionally, clear roles and responsibilities and methods for communication and collaboration could better position senior leaders to effectively manage resourcing and risk across the department. Officials from CAPE; the Office of the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs; and the military services agreed that clarifying roles and responsibilities and identifying methods for communication would be helpful in addressing long-standing issues and guiding the NDERG in the future.
Additionally, with increased funding and prioritization of the nuclear enterprise, as called for in the 2018 Nuclear Posture Review, there is an increased need for the kind of coordinated, holistic oversight of the nuclear enterprise that was recommended in the 2014 Internal Assessment of the Department of Defense Nuclear Enterprise. For example, the Nuclear Posture Review’s goal of replacing legacy nuclear systems beginning in the mid-2020s will require senior leaders from across the defense nuclear enterprise to make decisions regarding resource allocation and prioritization—for both the new systems and the existing systems that are not being replaced. Collaboration among the various nuclear enterprise oversight groups can help to make this resource allocation and prioritization effective.
DOD Guidance Does Not Reflect Evolving NC3 Oversight Roles and Responsibilities and Methods for Communicating and Collaborating with Other Nuclear Oversight Groups
As a result of the 2018 Nuclear Posture Review, NC3 roles, responsibilities, and authorities are evolving as DOD is in the process of making changes to the NC3 governance construct. The Nuclear Posture Review directed the Chairman of the Joint Chiefs of Staff to develop a plan to reform NC3 governance to ensure its effective functioning and modernization. The following key documents outline the proposed changes to NC3 roles, responsibilities, and authorities:
2018 Nuclear Posture Review, February 2018: To improve NC3 governance, the Nuclear Posture Review directed the Chairman of the Joint Chiefs of Staff, in consultation with key DOD stakeholders, to deliver to the Secretary of Defense, no later than May 1, 2018, a plan to reform NC3 governance to ensure its effective functioning and modernization.
NC3 Governance Reform Initiative, February – May 2018: In response to the Nuclear Posture Review, the Joint Staff conducted a review of NC3 governance identifying problems with the current NC3 enterprise governance construct and suggested changes to address these problems.
Chairman of the Joint Chiefs of Staff memorandum, May 2018: Following the NC3 Governance Reform Initiative review, the Chairman of the Joint Chiefs of Staff provided the Secretary of Defense a memorandum recommending a new NC3 governance construct that would make the Commander of U.S. Strategic Command the operational commander of the NC3 enterprise. Under this new construct, specifically, the Commander of Strategic Command would be designated as the NC3 enterprise lead and would have increased responsibilities for operations, requirements, and systems engineering and integration. In addition, to support the new role of the Commander of U.S. Strategic Command, the Office of the Under Secretary of Defense for Acquisition and Sustainment would be designated as the NC3 enterprise capability portfolio manager and given increased responsibilities for resources and acquisition. The memorandum also proposes that the Chairman and the Deputy Secretary of Defense would provide leadership and oversight, which would include providing enterprise-level guidance to the department.
U.S. Strategic Command Commander’s Estimate, May 2018: At the direction of the Chairman of the Joint Chiefs of Staff, U.S. Strategic Command developed the NC3 Governance Reform – Commander’s Estimate (Commander’s Estimate) with a recommended course of action to implement the new NC3 governance roles, responsibilities, and authorities. This Commander’s Estimate was provided to the Secretary of Defense along with the Chairman’s May memorandum. Concurrently, U.S. Strategic Command is developing an implementation plan.
U.S. Strategic Command NC3 implementation plan, expected fall 2018: According to a Strategic Command official, an NC3 implementation plan is currently being drafted to implement the proposed changes to NC3 governance. Initial operating capability for the new roles, responsibilities, and authorities is expected to occur within six months of the approval of U.S. Strategic Command’s implementation plan.
If the changes to NC3 governance are approved, as proposed in the Commander's Estimate, the Commander of U.S. Strategic Command would have the operational lead for NC3 and would be delegated the authorities and assigned the resources necessary to perform the following functions: operating the NC3 enterprise; assessing and managing NC3 enterprise operational performance; defining NC3 enterprise requirements and prioritization; conducting systems engineering and analysis to integrate current and future NC3 enterprise architectures; approving NC3 enterprise developmental tests and operations; overseeing NC3 enterprise acquisition and service/national programs; and leading NC3 enterprise advocacy across DOD's processes and governance forums, such as the NLC3S.

Changes to NC3 roles, responsibilities, and authorities would necessitate changing existing NC3-related guidance documents. The current NC3 oversight structure is documented in statutes and presidential and departmental guidance. For example, the NLC3S Council's roles and responsibilities are defined in statute and in charters for the Council and its National Leadership Command Capability Executive Management Board. DOD issuances also establish policy and assign responsibilities for matters related to the NC3 system to organizations throughout DOD, including U.S. Strategic Command. The changes proposed in the Commander's Estimate, if implemented, would result in DOD having to update its own guidance and determine whether there is a need to request a change in the statutory language or presidential guidance. According to a U.S. Strategic Command official, work still needs to be done to help align authorities within the NC3 enterprise. The Commander's Estimate states that any changes to NC3 oversight authorities that may result from implementing the suggested changes in the Commander's Estimate will be annotated in existing applicable policy and guidance documents.
As we have previously reported and as we have noted in this report, leading practices for enhancing interagency collaboration include agreeing on roles and responsibilities and having written guidance and agreements. Additionally, Standards for Internal Control in the Federal Government calls for management to develop an organizational structure with an understanding of the organization's overall responsibilities, and assign these responsibilities to enable the organization to operate in an efficient and effective manner, comply with applicable laws and regulations, and reliably report quality information. To achieve this, management should assign responsibility and delegate authority to key roles throughout the organization. Further, federal internal control standards call for identifying appropriate methods for communicating both internally and externally. However, DOD has not clearly defined roles and responsibilities under the evolving NC3 governance construct. Additionally, DOD has not developed written guidance and agreements that establish how the NLC3S Council, U.S. Strategic Command, and other organizations responsible for NC3 governance will collaborate with each other, or identified methods of communication. Further, DOD has not determined how these entities will collaborate with other oversight groups that need to have visibility over any problems or resourcing decisions related to the NC3 enterprise, such as the NDERG and other entities with responsibility for the nuclear enterprise as a whole.
The 2015 NC3 report made recommendations to address diffused responsibility in the NC3 enterprise; however, based on our interviews with officials, these issues still persist. According to DOD officials, 3 years later the management of the NC3 enterprise remains a problem, which led the Secretary of Defense to include the need to reform NC3 governance in the 2018 Nuclear Posture Review. Specifically, the 2018 Nuclear Posture Review recognized the broad diffusion of NC3 system governance authority and responsibility within DOD as an area of particular concern. To address these concerns, the department is increasing the oversight roles of a number of organizations. However, these changes may further complicate long-standing issues associated with the governance of the NC3 enterprise unless the department clearly articulates how all of the NC3 oversight bodies are to collaborate.
As DOD identifies changes that must be made to guidance for implementing the new NC3 governance construct, it has an opportunity to make improvements to enhance collaboration and communication among NC3 oversight groups and other nuclear enterprise groups. Updating its guidance to clarify changes to the roles and responsibilities of the many entities involved in the oversight and governance of NC3—and establishing methods for how those entities should communicate and collaborate—would better position senior leaders to effectively manage resourcing and risk across the NC3 enterprise. The NC3 enterprise is a large and complex system, and without clearly identified roles and responsibilities for an effective oversight structure, problems similar to those identified in 2014 as negatively affecting the management of the entirety of the defense nuclear enterprise may continue to limit effective management of the NC3 enterprise.
Conclusions
DOD has continued to take steps to improve the defense nuclear enterprise in response to the 2014 nuclear enterprise reviews and the 2015 NC3 report. By including risk identification, assessment, and documentation, CAPE has strengthened its framework for monitoring the department's efforts to address the many issues identified in 2014—including those enduring issues that must be watched for years to come. The DOD CIO's adoption of a similar framework to monitor the implementation of recommendations from the 2015 NC3 report has also set up a structure to track and evaluate progress. The responsible military services and DOD components' use of these structures should aid them in assessing their efforts, including providing means to reassess and re-evaluate individual efforts and their relationship to the health of the defense nuclear enterprise as a whole. The efforts the department has taken and has under way should improve senior leaders' visibility into these issues and better position them to ensure that progress continues to be made, underlying problems are addressed, and risks are mitigated or accepted after considering the predictable and desirable results. However, for these changes to be effective, the department must clearly articulate the roles and responsibilities for a comprehensive oversight structure. Unless DOD is able to align the roles and responsibilities of the many entities now charged with oversight functions, the department's leadership may not be in a position to be informed of issues affecting the nuclear enterprise or the NC3 enterprise and may be unable to make effective resourcing decisions. The creation of both a charter and a DOD directive for the NDERG, as well as DOD's efforts to reform NC3 governance, provides DOD with opportunities to create comprehensive oversight structures—with defined roles and responsibilities and methods for communication among oversight groups—for the defense nuclear enterprise to address enduring leadership problems and help the department to move forward in its governance of the nuclear enterprise. Further, by establishing methods for communication and collaboration among these organizations, the department could better avoid unnecessary overlap and duplication of effort, important issues falling through the seams between organizations, or enterprise-wide risks not being identified or addressed through a holistic approach.
Recommendations for Executive Action
We are making four recommendations to the Secretary of Defense:

The Secretary of Defense should ensure that the Deputy Secretary of Defense—in coordination with the military departments; U.S. Strategic Command; the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs; CAPE; and other relevant components of DOD—identify in the planned charter and DOD directive clear roles and responsibilities for the members of the NDERG. (Recommendation 1)
The Secretary of Defense should ensure that the Deputy Secretary of Defense—in coordination with the military departments; U.S. Strategic Command; the Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs; CAPE; and other relevant components of DOD—establish in the planned charter and DOD directive methods for the NDERG to communicate and collaborate with other organizations that have oversight responsibilities for portions of the nuclear enterprise. (Recommendation 2)
The Secretary of Defense should ensure that the Deputy Secretary of Defense and Chairman of the Joint Chiefs of Staff—in coordination with the Vice Chairman of the Joint Chiefs of Staff, the Under Secretary of Defense for Acquisition and Sustainment (as NLC3S Council co-chairs), and U.S. Strategic Command—update the applicable DOD guidance (such as the NLC3S Council’s and Executive Management Board’s charters) and identify whether there is a need to request changes to statutory or presidential guidance in order to clarify changes to roles and responsibilities for NC3 oversight. (Recommendation 3)
The Secretary of Defense should ensure that the Deputy Secretary of Defense and Chairman of the Joint Chiefs of Staff—in coordination with the Vice Chairman of the Joint Chiefs of Staff, the Under Secretary of Defense for Acquisition and Sustainment (as NLC3S Council co-chairs), and U.S. Strategic Command—update the applicable guidance to establish methods for communication and collaboration among organizations that have oversight responsibilities for portions of the nuclear enterprise as changes are considered for charters, guidance, and laws to reflect the changes to NC3 oversight. (Recommendation 4)
Agency Comments and Our Evaluation
We provided a draft of this report to DOD for comment. In its comments, reproduced in appendix I, DOD concurred with all four of our recommendations. DOD also provided technical comments, which we incorporated as appropriate.
In concurring with our first and second recommendations, DOD stated that it will clearly identify roles and responsibilities in the NDERG charter and stated that the charter will also direct NDERG stakeholders to coordinate on the prioritization of issues that involve other organizations that have oversight responsibilities for portions of the nuclear enterprise.
In concurring with our third and fourth recommendations, DOD stated that U.S. Strategic Command, in coordination with other DOD components, has developed an NC3 Governance Improvement Implementation Plan that outlines the required updates and revisions that need to be requested for statutory guidance as well as implemented for NC3 governance body charters, DOD issuances, and Chairman of the Joint Chiefs of Staff issuances to clarify the new roles and responsibilities for NC3 oversight. Further, DOD noted that these updates and revisions will establish methods and provide direction for communication and collaboration among organizations that have nuclear enterprise oversight roles and responsibilities.
We are encouraged that DOD is planning to take these actions to address all four of our recommendations. We believe that, once DOD implements our recommendations, the department’s leadership will be better positioned to be informed of issues affecting the nuclear enterprise or the NC3 enterprise and better organized to make effective resourcing decisions.
We are providing copies of this report to the appropriate congressional committees, and to the Secretary of Defense; the Under Secretary of Defense for Acquisition and Sustainment; the Chairman of the Joint Chiefs of Staff; the Secretaries of the Army, of the Navy, and of the Air Force; the Commandant of the Marine Corps; the Commander, U.S. Strategic Command; the Department of Defense Chief Information Officer; and the Director of the Office of Cost Assessment and Program Evaluation.
If you or your staff have any questions about this report, please contact me at (202) 512-9971 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.
Appendix I: Comments from the Department of Defense
Appendix II: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, key contributors to this report were Penney Harwell Caramia, Assistant Director; R. Scott Fletcher; Jonathan Gill; Susannah Hawthorne; Brent Helt; Joanne Landesman; Amie Lesser; K. Ryan Lester; Ned Malone; and Michael Shaughnessy.
Related GAO Products
Defense Nuclear Enterprise: Processes to Monitor Progress on Implementing Recommendations and Managing Risks Could Be Improved. GAO-18-144. Washington, D.C.: Oct. 5, 2017.
Nuclear Weapons Sustainment: Budget Estimates Report Contains More Information than in Prior Fiscal Years, but Transparency Can Be Improved. GAO-17-557. Washington, D.C.: July 20, 2017.
Nuclear Weapons: DOD Assessed the Need for Each Leg of the Strategic Triad and Considered Other Reductions to Nuclear Force. GAO-16-740. Washington, D.C.: Sept. 22, 2016.
Defense Nuclear Enterprise: DOD Has Established Processes for Implementing and Tracking Recommendations to Improve Leadership, Morale, and Operations. GAO-16-597R. Washington, D.C.: July 14, 2016.
Nuclear Weapons Council: Enhancing Interagency Collaboration Could Help with Implementation of Expanded Responsibilities. GAO-15-446. Washington, D.C.: May 21, 2015.

Why GAO Did This Study
In 2014, the Secretary of Defense directed two reviews of DOD's nuclear enterprise. These reviews identified problems with leadership, organization, investment, morale, policy, and procedures, as well as other shortcomings that adversely affected the nuclear deterrence mission. The reviews also made recommendations to address these problems. In 2015, DOD conducted a review focused on NC3 systems, which resulted in additional recommendations.
The National Defense Authorization Act for Fiscal Year 2017 includes a provision for GAO to review DOD's processes for addressing these recommendations. This report addresses the extent to which DOD and the military services have (1) made progress in the implementation, tracking, and evaluation—including identifying and documenting risk—of the recommendations of the 2014 nuclear enterprise reviews and the 2015 NC3 report and (2) improved oversight of the defense nuclear enterprise and managed roles, responsibilities, and collaboration among various organizations. GAO reviewed relevant documents and interviewed agency officials from DOD and the military services.
What GAO Found
The Department of Defense (DOD) has made progress in implementing the recommendations from the 2014 nuclear enterprise reviews and a 2015 nuclear command, control, and communications (NC3) review and has improved its tracking and evaluation of this progress. For example, since GAO last reported—in October 2017—an additional 74 of the 247 sub-recommendations from the 2014 reviews have been closed; 96 remain open. In January 2018, in response to a GAO recommendation, the Office of Cost Assessment and Program Evaluation (CAPE) issued guidance to aid the military services in identifying, assessing, and documenting risks associated with the 2014 recommendations, such as unintended consequences from their implementation. The guidance calls on them to update their risk assessments periodically as new data become available. The Air Force and Navy have begun to provide risk information in CAPE's and their own tracking tools. In July 2018, in response to a GAO recommendation, DOD's Chief Information Officer issued guidance to improve tracking and evaluation of progress in implementing the 2015 recommendations.
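The status figures above imply a simple running tally. As a minimal sketch, the Python arithmetic below derives the cumulative and pre-October 2017 closure counts from only the three reported numbers; the derived values are inferences from those figures, not separately reported data.

```python
# Tally of the 2014 reviews' sub-recommendations, using only the figures
# reported above; the derived counts are inferred, not separately reported.
TOTAL_SUB_RECOMMENDATIONS = 247
OPEN_AS_OF_MID_2018 = 96
CLOSED_SINCE_OCT_2017 = 74

closed_to_date = TOTAL_SUB_RECOMMENDATIONS - OPEN_AS_OF_MID_2018  # 151
closed_before_oct_2017 = closed_to_date - CLOSED_SINCE_OCT_2017   # 77

print(f"Closed to date: {closed_to_date}; "
      f"closed before October 2017: {closed_before_oct_2017}")
```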
DOD and the military services have taken steps to improve oversight of the nuclear enterprise in response to the 2014 reviews but lack clear roles and responsibilities and methods for collaboration. The Secretary of Defense created the Nuclear Deterrent Enterprise Review Group (NDERG) in 2014 to ensure the long-term health of the nuclear enterprise by addressing resourcing, personnel, organizational, and enterprise policy issues. However, DOD guidance has not clearly defined roles and responsibilities for the NDERG or provided methods for the NDERG to communicate and collaborate with other nuclear oversight organizations, including those shown in the figure. Nor has NC3 oversight guidance been updated to reflect changes in roles and responsibilities and to include methods for communication and collaboration among NC3 oversight groups. In the absence of defined roles and responsibilities for the NDERG and NC3 oversight bodies and methods for how the NDERG and NC3 oversight groups are to communicate and collaborate, senior leaders may not be in a position to effectively manage resourcing and risk across the department.
What GAO Recommends
GAO makes four recommendations for DOD to clarify roles, responsibilities, and methods of communication and collaboration for both the NDERG and NC3 oversight bodies. DOD concurred with all four recommendations and provided information about planned actions to implement them. |
Background
PTC systems are required by law to prevent certain types of accidents or incidents. In particular, a PTC system must be designed to prevent train-to-train collisions, derailments due to excessive speed, incursions into work zone limits, and the movement of a train through a switch left in the wrong position. While railroads may implement any PTC system that meets these requirements, the majority of passenger railroads are implementing one of four types of systems. PTC's intended safety benefits can be fully achieved nationwide when all required railroads have successfully installed PTC components, tested that these components work together and the systems function as designed, and are interoperable with other host and tenant railroads' PTC systems that share track. Interoperability means the locomotives of any host railroad and tenant railroad operating over the same track segment will communicate with and respond to the PTC system, allowing uninterrupted movements over property boundaries. Interoperability is critical to PTC functioning properly given the complexity of the rail network in the United States. In much of the country, Class I freight railroads function as hosts for Amtrak and commuter railroads. For example, one of the seven major Class I freight railroads reports that 24 tenant railroads operate over its PTC-equipped tracks, including freight, Amtrak, and commuter railroads. A notable exception to this is the Northeast Corridor, which runs from Washington, D.C., to Boston, Massachusetts, which Amtrak predominantly owns and over which six freight and seven commuter railroads operate as tenants.
PTC implementation involves multiple stages to achieve full implementation, including planning and system development, equipment installation and testing, system certification, and full deployment, including interoperability. Each railroad must develop an FRA-approved PTC implementation plan that includes project schedules and milestones for certain activities, such as equipment installation. The equipment installation stage involves many components, including communication systems; hardware on locomotives and along the side of the track (called “wayside equipment”); and software in centralized office locations as well as onboard the train and along the track. Each railroad is required to report quarterly and annually to FRA on its PTC implementation status relative to its implementation plan. A railroad can also revise its implementation plan to reflect changes to the project, which then must be reviewed and approved by FRA.
In addition, railroads must demonstrate that the PTC systems are deployed safely and meet functional requirements through multiple stages of testing. Before initiating testing on the general rail system, railroads must submit a formal test request for FRA approval that includes, among other things, the specific test procedures, dates and locations for testing, and the effect the tests will have on current operations. The multiple stages of PTC testing include:
Laboratory testing: locomotive and wayside equipment testing in a lab environment to verify that individual components function as designed.
Field testing: includes several different tests of individual components and the overall system, such as testing of each locomotive type to verify that it meets functional requirements and field integration testing—a key implementation milestone to verify that each PTC component is integrated and functioning safely as designed.
Revenue service demonstration (RSD): an advanced form of field testing in which the railroad operates PTC-equipped trains in regular service under specific conditions. RSD is intended to validate the performance of the PTC system as a whole and to test the system under normal, real-world operations.
Interoperability testing: host and tenant railroads that operate on the same track must work together to test interoperability to ensure each railroad can operate seamlessly across property boundaries. Almost all of the 40 railroads currently required to implement PTC must demonstrate interoperability with at least one other railroad’s PTC system.
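Because interoperability obligations arise pairwise between a host and each tenant operating over its track, they can be represented as a simple host-to-tenant mapping. The Python sketch below illustrates the idea; the railroad names are hypothetical placeholders, not data from FRA or any railroad.

```python
# Illustrative host -> tenants mapping; all railroad names are hypothetical.
host_tenants = {
    "Host Freight Railroad A": ["Commuter Railroad X", "Intercity Passenger Railroad"],
    "Host Freight Railroad B": ["Commuter Railroad Y"],
    "Intercity Passenger Railroad": ["Commuter Railroad X", "Freight Railroad Z"],
}

# Each host-tenant pair sharing track must demonstrate interoperability.
pairs = {(host, tenant)
         for host, tenants in host_tenants.items()
         for tenant in tenants}
for host, tenant in sorted(pairs):
    print(f"Interoperability testing required: {host} <-> {tenant}")
```

A real inventory would be far denser; as noted above, one Class I host alone reports 24 tenant railroads, so the number of required pairings grows quickly in areas with complex rail networks.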
Using results from field and RSD testing, combined with other information, host railroads must then submit a safety plan to FRA for approval. We have previously reported that these safety plans are about 5,000 pages in length. Once FRA approves a safety plan, the railroad receives PTC system certification, which is required for full implementation, and is authorized to operate the PTC system in revenue service. According to FRA officials, FRA may impose conditions on its approval of a PTC safety plan as necessary to ensure safety, resulting in a conditional certification.
Railroads may receive a maximum 2-year extension from FRA past the December 31, 2018, deadline if they meet six criteria set forth in statute. Specifically, railroads must demonstrate, to the satisfaction of FRA, that they have: (1) installed all PTC system hardware consistent with the total amounts identified in the railroad’s implementation plan; (2) acquired all necessary spectrum consistent with the implementation plan; (3) completed required employee training; (4) included in a revised implementation plan an alternative schedule and sequence for implementing the PTC system as soon as practicable but no later than December 31, 2020; (5) certified to FRA that they will be in full compliance with PTC statutory requirements by the date provided in the alternative schedule and sequence; and (6) for Class I railroads and Amtrak, initiated RSD or implemented a PTC system on more than 50 percent of the track they own or control that is required to have PTC. For commuter and Class II and III railroads, the sixth statutory criterion is to have either initiated RSD on at least one territory required to have operations governed by a PTC system or “met any other criteria established by the Secretary,” which FRA refers to as “substitute” criteria.
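Because the extension criteria are enumerated in statute, they lend themselves to a checklist. The Python sketch below is a rough, illustrative encoding of that logic, not an FRA tool: the record and its field names are hypothetical, and the branch on railroad type reflects the differing sixth criterion described above.

```python
# Hypothetical status record for a railroad; all field names are illustrative.
railroad = {
    "type": "commuter_or_class_ii_iii",    # or "class_i_or_amtrak"
    "all_hardware_installed": True,        # criterion 1
    "spectrum_acquired": True,             # criterion 2
    "training_complete": True,             # criterion 3
    "alt_schedule_by_dec_2020": True,      # criterion 4
    "full_compliance_certified": True,     # criterion 5
    # Criterion 6 options, depending on railroad type:
    "rsd_or_50_percent_of_track": False,   # Class I railroads and Amtrak
    "rsd_on_one_territory": False,         # commuter and Class II/III railroads
    "substitute_criteria_approved": True,  # FRA-approved "substitute" criteria
}

def meets_extension_criteria(r):
    """Rough encoding of the six statutory criteria described above."""
    common = all(r[k] for k in ("all_hardware_installed", "spectrum_acquired",
                                "training_complete", "alt_schedule_by_dec_2020",
                                "full_compliance_certified"))
    if r["type"] == "class_i_or_amtrak":
        sixth = r["rsd_or_50_percent_of_track"]
    else:
        sixth = r["rsd_on_one_territory"] or r["substitute_criteria_approved"]
    return common and sixth

print(meets_extension_criteria(railroad))  # True for this hypothetical commuter record
```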
FRA is responsible for overseeing railroads' implementation of PTC, and the agency monitors progress and provides direct assistance to railroads implementing PTC. For example, FRA officials provide technical assistance to railroads, address questions, and review railroad-submitted documentation. FRA has a PTC Staff Director, designated PTC specialists in the eight FRA regions, and additional engineers and test monitors responsible for overseeing technical and engineering aspects of implementation and reviewing railroads' submissions and requests, as well as programmatic support staff. In anticipation of the upcoming implementation deadline, in May 2017, FRA began to send notification letters to railroads it determined were at risk of both not meeting the December 31, 2018, implementation deadline and not completing the requirements necessary to qualify for an extension. FRA identified "at-risk" railroads by comparing a railroad's hardware installation status to the total hardware required for PTC implementation, according to the railroad's implementation plan. FRA has increased the "at-risk" threshold percentage over time as the deadline approaches. (See Table 1.)
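The at-risk designation is, at bottom, a threshold comparison between a railroad's reported hardware installation and the total called for in its implementation plan. A minimal Python sketch follows; the threshold values and the railroad's figures are hypothetical stand-ins, since the actual thresholds appear in Table 1 rather than in this discussion.

```python
# Hypothetical escalating thresholds by notification round; real values are in Table 1.
thresholds_by_round = {"May 2017": 0.50, "Dec 2017": 0.65, "Jun 2018": 0.90}

def is_at_risk(installed_units, required_units, threshold):
    """Flag a railroad whose installation share falls below the current threshold."""
    return (installed_units / required_units) < threshold

# Hypothetical railroad: 180 of 250 required hardware units installed (72 percent).
print(is_at_risk(180, 250, thresholds_by_round["Jun 2018"]))  # True: 0.72 < 0.90
```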
FRA has additional oversight tools, which include use of its general civil penalty enforcement authority for failure to meet certain statutory PTC requirements. FRA has used this authority in 2017 and 2018 to assess civil penalties, primarily against passenger railroads that failed to comply with the equipment installation milestones, the spectrum acquisition milestones, or both, that the railroads had established in their implementation plans for the end of 2016 and 2017.
As part of our body of work on PTC, we found that railroads face numerous PTC implementation challenges and made recommendations to FRA to improve its oversight of implementation. Specifically, in 2013 and 2015 we found that many railroads were struggling to make progress due to a number of complex and interrelated challenges, such as developing system components and identifying and correcting issues discovered during testing. More recently, in March 2018, we found that FRA had not systematically communicated information or used a risk-based approach to help commuter railroads prepare for the 2018 deadline or to qualify for an extension. We also found that many railroads were concerned about FRA's ability to review submitted documentation in a timely manner, particularly given the length of some required documentation such as safety plans and FRA's limited resources for document review. In March 2018, we recommended FRA identify and adopt a method for systematically communicating information to railroads and use a risk-based approach to prioritize its resources and workload.
FRA agreed with our recommendations. Most recently, in September 2018, we testified on the status of railroads’ implementation of PTC.
Many Passenger Railroads Remain in Early Stages of PTC Implementation and FRA Has Clarified Extension Requirements

Passenger Railroads Continue to Install and to Test PTC Systems, and Report Previously Identified Implementation Challenges
As of June 30, 2018, many passenger railroads reported that they remain in the equipment installation and field-testing stages, which are early stages of PTC implementation. However, since we testified in March 2018, railroads have made progress on equipment installation. Based on our analysis of the 40 railroads’ reported status as of June 30, 2018, about half of the railroads have completed equipment installation, and many others are nearing completion of this stage. Specifically, 20 of the 29 passenger railroads reported being more than 90 percent complete with locomotive equipment installation. Nearly two-thirds of passenger railroads that must install wayside equipment reported being more than 90 percent complete. One-third of passenger railroads are among those designated by FRA as at-risk of both not meeting the end of 2018 implementation deadline and not completing the requirements necessary to qualify for an extension. Specifically, in August 2018, FRA identified nine railroads—all commuter railroads—as at-risk, fewer than the 12 railroads FRA had previously designated as at-risk in its June 2018 letters to railroads.
Since we reported in March 2018, Amtrak reported that it has initiated both field testing and RSD, but most commuter railroads reported slower progress with testing, especially with RSD. For example:
Laboratory and initial field testing: 19 of 28 commuter railroads reported having initiated this testing as of June 30, 2018; this number represents six more commuter railroads than the 13 we previously reported as having initiated field testing as of September 30, 2017.
RSD testing: Eight of 28 commuter railroads reported initiating RSD testing as of June 30, 2018; this number represents two more commuter railroads than the six we previously reported as having entered RSD testing as of September 30, 2017. As noted earlier, unless a commuter railroad receives approval for using substitute criteria, the railroad must initiate RSD, a final stage of PTC testing, on at least one territory by December 31, 2018, to qualify for an extension.
Passenger railroad representatives reported that they continued to face many of the same challenges we have previously identified, including limited industry-wide availability of vendors and expertise and software defects. For example, in response to our questionnaire, 12 of 29 passenger railroads reported challenges with PTC vendors and contractors. One passenger railroad noted that because its contractor manages PTC projects across the country with the same deadline and requirements, it can be difficult for all railroads to get the resources they need from their contractor. We previously reported that there are a limited number of vendors available to design PTC systems, provide software and hardware, and conduct testing. For example, we reported in 2015 that, according to railroad industry representatives, there were two vendors for the onboard train management computer and three vendors for the wayside equipment. One small passenger railroad recently testified that, because a single manufacturer was providing PTC equipment and software to many railroads across the country, it had to wait over a year for PTC equipment to be delivered and installed. We also previously reported that railroads face software challenges, and noted that railroads had concerns with the number of defects identified during software testing, since these take time to address. In response to our questionnaire, nine passenger railroads reported encountering challenges related to maturity of the PTC software systems, such as working through software bugs or defects during testing.
As passenger railroads work to complete PTC implementation activities, some have made service or schedule adjustments to accommodate the need to install equipment or perform testing. Moreover, several passenger railroads told us that as PTC implementation schedules become more compressed, avoiding effects on passengers becomes more difficult. We identified 10 passenger railroads that have made changes to their operations due, in part, to PTC implementation, including the six largest commuter railroads in the country, which collectively reported over 400 million passenger trips in 2017. These changes had effects such as reduced service or longer travel times. For example, one of the largest passenger railroads in the country reduced service on certain routes and eliminated some express trains to accommodate schedules enabling it to complete PTC equipment installation prior to the December 2018 deadline. Another large passenger railroad has shut down weekend service—providing bus service to transport passengers between stations—for PTC testing. Several passenger railroads had to reduce service for equipment or track installation or testing, resulting in fewer locomotives or less track available for service.
FRA Has Recently Clarified Extension Requirements
In June, July, and August 2018, FRA held three PTC symposiums that were attended by representatives from all 40 railroads and that focused on the extension process and substitute criteria, PTC testing, and safety plans, respectively. FRA’s June 2018 symposium covered information consistent with our March 2018 recommendation that the agency adopt a method for systematically communicating information related to the requirements and process for an extension to railroads. Specifically, FRA presented information on the procedures for requesting and obtaining FRA’s approval for an extension to implement PTC beyond the December 2018 deadline including FRA’s review process. FRA also clarified that for commuter railroads, initiating field testing was one approach that could potentially qualify as substitute criteria, rather than initiating RSD.
Representatives we interviewed from the passenger railroads that participated in the symposiums found them to be helpful, and some passenger railroads reported that the information presented led them to adjust their approach to meeting the December 2018 deadline. For example, one passenger railroad representative we spoke to said that until the symposium, he was unaware that using field testing as substitute criteria was a potential option. Some passenger railroads we met with also told us they are re-evaluating what activities and documentation need to be revised and submitted to FRA before the December 2018 deadline based on the information presented at the symposiums. For example, representatives from one passenger railroad we met with said that FRA officials encouraged them to update their PTC implementation plan right away with current equipment installation totals, to ensure consistency across all required documentation by the end of 2018. A couple of passenger railroads noted that the information presented at the symposiums clarified many questions and would have been beneficial to know a year or two earlier in the implementation process.
In addition, in recent months FRA has continued to provide assistance to railroads and has taken a series of steps to better prepare railroads for the 2018 deadline. These steps include meeting regularly with individual railroads and developing approaches intended to help many railroads meet the requirements necessary for a deadline extension. For example, representatives from one commuter railroad said agency officials have been willing to share lessons learned, clarify requirements, and review draft documentation to provide informal feedback.
Passenger Railroads and FRA Are Working toward Extensions, Leaving Substantial Work to Be Completed Beyond 2018

Most Passenger Railroads Anticipate Needing an Extension, and Many Plan to Start RSD Testing Beyond 2018
Almost three-quarters of passenger railroads (21 of 29) reported to us that they plan to apply for an extension. Five passenger railroads reported to us that they planned to submit their extension request by the end of September 2018, but as of September 21, 2018, only one had submitted the request and required documentation. However, FRA officials noted that with the exception of possibly one or two railroads, they anticipate that all passenger and freight railroads will likely need an extension, and that railroads must submit their requests by the end of the year to be considered in compliance with PTC requirements. A railroad must demonstrate that it has met all of the statutory criteria necessary to qualify before, or when, it formally requests an extension. And as previously discussed, many railroads remain in the early stages of PTC implementation. Of the eight passenger railroads that anticipate reaching full implementation by December 31, 2018, six are already operating under conditionally certified safety plans; one has submitted its safety plan for review; one plans to submit its safety plan to FRA in fall 2018 for certification. FRA officials stated that it is unclear whether the passenger railroads that have obtained conditional PTC System Certification will have achieved full implementation on all route miles by December 31, 2018.
Of the 21 passenger railroads that intend to apply for an extension, more than half—all commuters—reported that they plan to use substitute criteria to qualify. Moreover, two-thirds of the commuter railroads (8 of 12) that plan to use substitute criteria intend to apply to use their initiation of field integration or functional testing as substitute criteria, and many of these will apply to begin field testing on only a portion of their track.
Figure 1 depicts the stage of PTC implementation that passenger railroads expect, at a minimum, to reach by December 31, 2018, in order to be in compliance with the deadline, based on railroads' responses to our July-August 2018 questionnaire.
Although FRA has recently made clear that it is authorized to grant extensions based on initiating field testing or other FRA-approved substitute criteria, this approach defers time-intensive RSD testing into 2019 and beyond. For example, one commuter railroad we met with has applied for, and was granted approval by FRA to use, the initiation of field testing on a 16.5-mile segment of track as substitute criteria to qualify for an extension. That railroad must ultimately implement PTC over 321 miles of track that it owns and operates over, meaning that it will need to complete field testing, RSD, and interoperability testing on the remaining 95 percent of its track and achieve system certification prior to the 2020 deadline. In March 2018, we testified that FRA officials told us that moving from the start of field testing to the start of RSD can take between 1 and 3 years, and has averaged about 2 years for those railroads that have completed that stage. We also reported that FRA officials believe that most railroads underestimate the amount of time needed for testing. FRA officials told us that they do not consider railroads that are approved for an extension under substitute criteria to be necessarily at a higher risk of not completing PTC implementation by 2020. However, in light of these time estimates and the unknown challenges that railroads may face during testing, railroads that are in the early field-testing stage moving into 2019 could face challenges completing PTC implementation by the extended December 2020 deadline.
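The timing risk in the preceding paragraph can be made concrete with date arithmetic. Using FRA officials' reported range of 1 to 3 years (about 2 on average) from the start of field testing to the start of RSD, the Python sketch below projects RSD start dates from a hypothetical early-2019 field-testing start against the extended December 31, 2020, deadline.

```python
from datetime import date

EXTENDED_DEADLINE = date(2020, 12, 31)
field_test_start = date(2019, 3, 1)  # hypothetical field-testing start date

for years in (1, 2, 3):  # reported range; about 2 years on average
    rsd_start = field_test_start.replace(year=field_test_start.year + years)
    status = "before" if rsd_start <= EXTENDED_DEADLINE else "after"
    print(f"{years}-year estimate: RSD would begin {rsd_start}, {status} the deadline")
```

Even in the 1-year case, RSD itself, interoperability testing, and safety plan certification would all remain to be completed in the months left before the deadline.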
Railroads further behind in PTC implementation may need to apply for an extension due to factors such as compressed implementation schedules, as well as the time needed for FRA approvals. For example, representatives from one commuter railroad said that they hope to reach RSD before the December 31, 2018, deadline, but that it would be difficult to meet the extension requirements, apply for, and receive an extension given the volume of paperwork FRA will be receiving at the end of the year. Instead, the railroad plans to submit an extension request using substitute criteria consisting of field testing in order to be in compliance at the end of the year. Such an approach involves first applying for and receiving approval for substitute criteria and then formally requesting an extension and submitting supporting documentation to FRA before the end of the year. Entering RSD prior to the deadline could be difficult given that FRA officials told us they have advised railroads to allow at least a month for FRA’s review of test requests, which must be approved prior to initiating field testing and RSD.
Some passenger railroads also reported challenges regarding host and tenant responsibilities, including coordination and interoperability—which are likely to continue beyond 2018. Some passenger railroads told us that coordinating with host or tenant railroads that are in different implementation stages as the 2018 deadline approaches poses several challenges. For example, a few passenger railroads told us that they are unable to conduct interoperability testing because their host or tenant railroad has not yet reached that stage of implementation. Additionally, officials from Amtrak—which interoperates with 21 other railroads—noted that the host-tenant relationship can be complicated and requires a high level of coordination to resolve issues between railroads. Amtrak officials also told us they were conducting risk assessments to determine whether and how to continue service in situations where their host or tenant railroad has not completed PTC implementation or met the requirements necessary for an extension. While few passenger railroads have reached the interoperability stage, one railroad association stated that interoperability is, and will continue to be, a substantial challenge for metropolitan areas with dense and complex rail networks with several host-tenant relationships. For example, according to one passenger railroad, 14 different freight and passenger railroads will need to interoperate in the Chicago area.
FRA’s Substantial Workload Remains a Concern
FRA’s already substantial workload is expected to increase as railroads continue to submit documentation necessary for extensions and continue PTC implementation activities. FRA is focused on ensuring railroads are in compliance by the December 2018 deadline—whether via an extension or by completing implementation. While FRA officials report that they anticipate almost all railroads will likely request an extension, only one passenger railroad had submitted an application for an extension as of September 21, 2018. FRA will need to review and approve all related documentation associated with each extension request and make a determination within 90 days, meaning if a railroad were to submit its extension request on December 31, 2018, FRA would have until the end of March 2019 to approve or deny the railroad’s extension request. In addition to extension requests and supporting documentation, many passenger railroads will also be submitting to FRA: requests for substitute criteria, test requests to initiate field testing or RSD, revisions to PTC implementation plans, and PTC safety plans. Some of these documents can be lengthy and require back and forth between FRA and railroads before approval. For example, we previously reported that PTC safety plans are about 5,000 pages in length and take between 6 and 12 months for FRA to review.
To help manage the forthcoming influx of documentation, FRA officials have offered to review draft documentation, such as substitute criteria requests and test requests, and have advised railroads to take FRA’s review times into account prior to submitting required documentation. FRA officials told us that in trying to manage their workload, they initially told railroads they did not have time to review draft submittals. However, they found that taking the time to conduct draft reviews ultimately led to higher quality formal submittals and accelerated the overall review process. In addition, FRA officials said that their goal is to not delay any railroad that is ready to move into testing, and that they advised railroads to build 30 to 45 days for test request reviews into their project schedules.
Despite these efforts, some passenger railroads remain concerned about the agency's ability to manage the PTC workload in the coming months and beyond 2018. For example, seven of 29 passenger railroads identified FRA's resources and review times as a challenge leading up to the December 2018 deadline. In addition, three passenger railroads reported that they would complete all the requirements for full PTC implementation by the December 31, 2018, deadline, but planned to apply for an extension due to concerns that FRA would not be able to review and certify their safety plans to enable them to reach full implementation prior to the deadline. Based on similar concerns, in March 2018, we recommended FRA develop an approach to prioritize the allocation of resources to address areas of greatest risk as railroads work to complete PTC implementation. FRA has acknowledged the railroads' concern given the surge of submissions requiring FRA approval in 2018 and has reported that the agency is reallocating existing expertise and expanding the PTC workforce by providing training, expanding contracts with existing support contractors, and initiating one additional contract to provide technical support. For example, FRA officials told us that they reallocated resources to shift PTC specialists' responsibilities to focus exclusively on testing-related activities because their involvement is critical for the testing stage. Taking steps to prioritize limited resources will only increase in importance as the amount of documentation needing FRA review continues to grow in 2019 and 2020, as railroads move through testing and submit complex and lengthy safety plans.
Although FRA has taken steps to provide key extension information to railroads and to help ensure railroads' compliance with PTC deadlines, uncertainty remains, particularly in regard to FRA's enforcement strategy for railroads that are noncompliant with the PTC implementation requirements, such as those that fail to apply for an extension by the deadline. Representatives from all railroads implementing PTC with whom we met told us that FRA's planned enforcement approach for any railroad that fails to meet the requirements for an extension beyond 2018 is unclear. FRA officials told us they have shared the range of applicable civil penalties with railroads for years, but that any policy decision about how potential fines will be levied for non-compliant railroads has not yet been made. In addition, it is also unclear how the agency would approach enforcement for railroads that have a host or tenant operating on their tracks that has not completed implementation or met the requirements necessary for an extension. Ten of the 13 passenger railroads we met with told us they do not currently have contingency plans or see a need to develop them. For example, representatives from one passenger railroad said they did not have a contingency plan because FRA has made clear it is committed to helping railroads comply with the 2018 deadline. FRA officials said that the goal of enforcement is to help bring all railroads into compliance and that they would look at the specific circumstances for any host-tenant issues before assessing a fine.
In conclusion, almost all passenger railroads will likely request an extension beyond 2018, which will require FRA approval. Many commuter railroads plan to request substitute criteria, which may result in those railroads remaining in the early stages of PTC implementation at the start of 2019. However, given that only one passenger railroad has submitted an extension request, it is unlikely we will know how many railroads will be granted an extension by the December 31, 2018, deadline. While few passenger railroads had developed contingency plans when we met with them, as December nears and schedules become further compressed, additional railroads may have to make service or schedule adjustments to help them reach compliance with the deadline. Although FRA has reported taking some actions in response to our March 2018 recommendation that FRA better prioritize resources, FRA resources and review times remain a significant concern—both for near-term efforts such as extension requests and for the safety plans that need to be reviewed and certified prior to the end of 2020. These issues—combined with the ongoing implementation, testing, and interoperability challenges that a number of railroads reported to us—raise questions about the extent to which FRA and the nation's passenger railroads are poised for full PTC implementation by December 31, 2020.
Chairman Thune, Ranking Member Nelson, and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have at this time.
GAO Contact and Staff Acknowledgments
If you or your staff have any questions about this testimony, please contact Susan Fleming, Director, Physical Infrastructure, at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Susan Zimmerman (Assistant Director); Katherine Blair; Greg Hanna; Delwen Jones; Emily Larson; Joanie Lofgren; SaraAnn Moessbauer; Maria Wallace; and Crystal Wesco.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study
Forty railroads are currently required by statute to implement PTC, a communications-based system designed to slow or stop a train that is not being operated safely. Of these, 29 passenger railroads collectively provide over 500 million passenger trips annually. Although the deadline for PTC implementation is December 31, 2018, railroads may receive a maximum 2-year extension to December 31, 2020, if they meet certain statutory criteria.
GAO was asked to review passenger railroads' progress toward PTC implementation. This statement discusses (1) passenger railroads' PTC progress and FRA's steps to assist them, and (2) how passenger railroads and FRA plan to approach the 2018 and 2020 deadlines. GAO analyzed railroads' most recent quarterly reports covering activities through June 30, 2018; sent a brief questionnaire to all 40 railroads; and interviewed officials from FRA and 16 railroads, selected in part based on those identified as at-risk by FRA.
What GAO Found
As of June 30, 2018, passenger railroads (28 commuter railroads and Amtrak) generally remained in the early stages of positive train control (PTC) implementation—including equipment installation and early field testing. However, many passenger railroads are nearing completion of the equipment installation stage. For example, two-thirds of passenger railroads reported being more than 90 percent complete with equipment installation. With regard to testing, Amtrak has reported that it has initiated both field testing and revenue service demonstration (RSD), an advanced form of field testing that is required to fully implement PTC. However, most commuter railroads reported slower progress with testing. Of the 28 commuter railroads required to implement PTC, 19 reported initiating field testing, but only eight reported initiating RSD. The Federal Railroad Administration (FRA) recently clarified the criteria railroads must meet to qualify for a 2-year extension past the December 31, 2018, PTC implementation deadline. To receive an extension, railroads must meet six statutory criteria. For the sixth criterion, commuter railroads are authorized to either initiate RSD on at least one track segment or use FRA-approved substitute criteria. FRA clarified these and other requirements at three PTC symposiums hosted for railroads in summer 2018. For example, FRA officials said that initiating field testing instead of RSD was one approach that commuter railroads could potentially take to receive FRA's approval of substitute criteria. FRA's actions are consistent with GAO's March 2018 recommendation that the agency communicate to railroads the requirements and process for an extension.
Challenges related to PTC implementation and FRA's resources raise questions about the extent to which FRA and the passenger railroad industry are poised for full PTC implementation by December 31, 2020. Most passenger railroads anticipate needing an extension, leaving substantial work for both railroads and FRA to complete before the end of 2020. Almost three-quarters of passenger railroads (21 of 29) reported that they, or the railroad that owns the track on which they operate, will apply for an extension. More than half of these railroads reported planning to apply for an extension using substitute criteria, and of these, eight intend to apply for substitute criteria based on field testing. Though use of substitute criteria is authorized in law, this approach defers time-intensive RSD testing into 2019 and beyond. In addition, passenger railroads reported that they continue to face many of the same challenges GAO previously identified, such as software defects and limited industry-wide availability of vendors. Further, passenger railroads expressed concern that FRA's workload will markedly increase as railroads submit requests for extension approvals. FRA has acknowledged concerns about the pending surge of submissions, and agency officials said they have taken recent steps to help manage the forthcoming influx of documentation, such as reallocating resources. However, as of September 21, 2018, only one passenger railroad had applied for an extension. It remains unclear how many extension requests FRA will receive or what FRA's enforcement strategy will be for noncompliance with the statute, such as for railroads that fail to apply for an extension by the deadline.
What GAO Recommends
In March 2018, GAO recommended FRA take steps to systematically communicate extension information to railroads and to use a risk-based approach to prioritize agency resources and workload. FRA has taken some steps to address these recommendations, such as recently communicating and clarifying extension requirements to all railroads during three symposiums. GAO will continue to monitor FRA's progress. |
Background
This section provides a brief background on nutrient pollution, federal and state activities to address water pollution, and nutrient credit trading.
Nutrient Pollution
According to EPA, nutrient pollution is one of America’s most widespread, costly, and challenging environmental problems. Nutrients are natural parts of aquatic ecosystems that support the growth of algae and aquatic plants, which provide food and habitat for fish, shellfish, and smaller organisms that live in water. However, when too many nutrients enter the environment, often as the direct result of human activities, the air and water can become polluted. The primary sources of nutrient pollution are fertilizer, animal manure, wastewater treatment plants, power plants, storm water runoff, cars, detergents, failing septic tanks, and pet waste. (See fig. 1.)
Too much nitrogen and phosphorus in surface waters can cause algae to grow faster than ecosystems can handle. Significant increases in algae can harm water quality and habitats. Large growths of algae, called algal blooms, can severely reduce or eliminate oxygen in the water, leading to illness and death among large numbers of fish. Some algal blooms are harmful to humans because they produce elevated levels of toxins and bacteria that can make people sick if they come into contact with or drink contaminated water or consume tainted fish or shellfish. According to a 2016 memorandum from EPA, nutrient pollution contributes to a trend of increasing numbers of harmful algal blooms in surface waters and, consequently, a growing threat to public health and local economies. For instance, in 2016, algal blooms occurred along U.S. coastlines from Alaska to Florida, closing beaches, affecting tourism and local economies, and resulting in a state of emergency declaration in four coastal counties in Florida and more than 250 health advisories nationwide.
Federal and State Activities to Address Water Pollution
The Clean Water Act establishes a nationwide approach to improving and maintaining the quality of rivers, streams, lakes, and other surface water bodies. Under this approach, states—overseen by EPA—are to set water quality standards, monitor water quality, and assess water quality against the applicable standards. Water quality standards define the water quality goals of a water body, or portion thereof, by designating the use or uses to be made of the water and by setting criteria necessary to protect the uses. These standards establish an additional legal basis for controlling pollution entering the waters of the United States from point sources, such as wastewater treatment plants. Water quality standards include the following, among other things:
designated uses of the water body, such as the protection and propagation of fish, shellfish, and wildlife;
criteria to protect designated uses, such as specific criteria or levels for toxic or nutrient pollutants that could harm aquatic life;
anti-degradation requirements that describe the conditions under which water quality may be lowered in surface waters while still protecting existing uses and high-quality waters; and
other general policies to address implementation issues.
To protect a water body’s designated uses, a state must establish numeric criteria, or, where numeric criteria cannot be established or as a supplement to them, narrative or biomonitoring criteria. EPA has encouraged states to incorporate numeric criteria into water quality standards and TMDLs for water bodies with nutrient impairments because they require less interpretation to implement than narrative criteria. Numeric criteria express precise, measurable levels of particular chemicals or conditions allowable in a water body. In contrast, narrative criteria express in a qualitative form how to protect a designated use of a water body. Narrative criteria often describe the desired conditions of a water body as being “free from” certain negative conditions. For instance, to protect a designated use, narrative criteria could require that a particular water body be free from floating non-petroleum oils of vegetable or animal origin. According to EPA, under most circumstances, water quality criteria that limit specific toxic pollutants are expressed numerically. However, according to EPA officials, most water quality criteria that limit nutrient pollutants are expressed narratively. EPA has provided support to states on how to develop numeric criteria through written guidance, webinars, and workshops. According to EPA officials and data, however, there has been limited state progress in developing numeric criteria for nutrients. As of 2017, six states had at least one statewide numeric criterion for either nitrogen or phosphorus for some water bodies.
Through the monitoring and assessment process, states are to identify water bodies that do not meet established water quality standards and are therefore considered to be impaired. The Clean Water Act generally requires—for each water body that a state has identified as impaired—that the state develop a TMDL for each pollutant impairing the water body. A TMDL reflects the calculation of the maximum amount of a pollutant that a water body can receive while continuing to meet water quality standards for that particular pollutant. A TMDL determines a pollutant reduction target and allocates the load reductions necessary to meet that target to both point and nonpoint sources of the pollutant, although under the Clean Water Act only point sources can be required to reduce pollutants. For a point source, legal discharge limits based on the targets identified in the TMDL are incorporated into a National Pollutant Discharge Elimination System (NPDES) permit. An NPDES permit can be issued as an individual permit to a single facility, written to reflect site-specific conditions of that facility, or as a general permit for multiple facilities with similar operations and types of discharges. For example, Connecticut uses a general permit to implement the Long Island Sound TMDL. This permit authorizes 79 wastewater treatment facilities to discharge nitrogen into the sound and includes a specific nitrogen limit for each facility.
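To make the allocation arithmetic concrete, the following minimal Python sketch uses hypothetical loads; it reflects the common formulation in which a TMDL equals the sum of wasteload allocations for point sources and load allocations for nonpoint sources, plus a margin of safety for uncertainty:

```python
# Illustrative TMDL allocation arithmetic; all figures are hypothetical.
wasteload_allocations = [40_000, 25_000]  # lb/year allocated to point sources
load_allocations = [30_000]               # lb/year allocated to nonpoint sources
margin_of_safety = 5_000                  # lb/year reserved for uncertainty

# The TMDL is the maximum load the water body can receive while still
# meeting water quality standards for the pollutant.
tmdl = sum(wasteload_allocations) + sum(load_allocations) + margin_of_safety
print(f"TMDL: {tmdl:,} lb/year")  # TMDL: 100,000 lb/year
```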
Under the Clean Water Act and EPA’s regulations, states or EPA can typically determine the most appropriate geographic area and pollutants for each TMDL. The Chesapeake Bay TMDL is the largest TMDL that EPA has developed. This TMDL identifies the necessary nutrient pollution reductions across the bay jurisdictions, which encompass seven states in a 64,000-square-mile watershed, and comprise 276 smaller TMDLs for 92 individual Chesapeake Bay tributaries. Similarly, the Long Island Sound TMDL identifies the necessary nitrogen pollution reductions for parts of Connecticut and New York that discharge into the sound. In contrast, many TMDLs cover a single water body, such as a lake or a segment of a river.
Unlike its approach for point sources, the Clean Water Act's approach to curtailing nonpoint source pollution is largely voluntary. One of the primary ways that EPA addresses nonpoint source nutrient pollution is with the section 319 program. Through this grant-based program, EPA funds voluntary projects aimed at reducing nonpoint source pollution, particularly runoff from agricultural production. Grants from this program support a wide variety of activities, including the development and implementation of best management practices (BMPs), which are used to reduce or eliminate the introduction of pollutants into receiving waters. Some common agricultural BMPs include planting strips of trees or shrubs along stream banks to serve as buffers or planting cover crops, such as clover, in fields near water bodies to reduce nutrient runoff.
Nutrient Credit Trading
EPA also encourages states to use nutrient credit trading to help address nutrient pollution. Nutrient credit trading programs are designed to allow a point source to purchase pollutant reduction credits from another point source or a nonpoint source in the same watershed with the intent of meeting the discharge limits established in an NPDES permit. These limits establish a baseline that credit generators must discharge below before they can sell credits. According to EPA guidance, point sources that exceed their discharge limit can buy credits to be compliant with their permits, and point sources that have discharged below their limits can sell credits. Because the Clean Water Act does not require nonpoint sources to meet nutrient reduction targets established in a TMDL, nonpoint sources have no compliance-driven need to buy credits. However, nonpoint sources can sell credits in some programs once these sources have reduced pollution below the targets established in the TMDL for the watershed or geographic area. To provide states with guidance on developing and implementing trading programs, EPA issued its Water Quality Trading Policy in 2003 and its Water Quality Trading Toolkit for Permit Writers in 2007. According to the EPA toolkit, states have the flexibility to structure a trading program to meet state needs, including the types of entities allowed to trade; the types of pollutants traded, such as nutrients; and the mechanism for carrying out the trades. Additionally, the legal and policy framework for trading programs can vary.
The Clean Water Act does not explicitly identify trading as an option to comply with NPDES permits. According to EPA's guidance, however, the act provides authority for EPA and states to develop a variety of programs and activities to control pollution, including trading programs, provided that these programs are consistent with the act. For instance, trading must not violate any of the act's provisions, such as the anti-degradation policy, which maintains and protects the existing uses of water bodies, or the anti-backsliding policy, which prohibits modifying existing NPDES permits to include standards less stringent than those established in the previous permit.
Eleven States Had Nutrient Credit Trading Programs in 2014, and Trading Provided Flexibility for Some Point Sources to Meet Nutrient Discharge Limits in the 3 States We Reviewed
According to EPA data and interviews with EPA officials, in 2014, a total of 19 nutrient credit trading programs existed in 11 states. The majority of nutrient credit trades occurred in 3 states—Connecticut, Pennsylvania, and Virginia. Most point sources participating in these 3 state programs in 2014 did not purchase credits. However, EPA and state officials and stakeholders told us that trading provided point sources with flexibility that allowed them to manage risk, reduce the cost of compliance, and better manage the timing of upgrades of their nutrient removal technology.
Eleven States Had a Total of 19 Trading Programs in 2014, and 3 States Accounted for the Majority of Trades, According to EPA
In 2014, a total of 19 nutrient credit trading programs existed in 11 states, according to EPA data and interviews with EPA officials. These 11 states were California, Connecticut, Florida, Georgia, Idaho, Minnesota, North Carolina, Ohio, Pennsylvania, South Carolina, and Virginia. Three of the states—Georgia, Minnesota, and North Carolina—had more than one nutrient credit trading program. Each program covered a specific watershed, portion of a watershed, municipality, or permit holder (see fig. 2). See appendix II for a list of the 19 programs.
EPA documents and officials indicated that trading may be less viable in some locations than in others. EPA’s documentation discusses factors that can affect the viability of trading. For example, trading should occur within an area—such as a watershed—that is appropriately defined to ensure that trades will maintain water quality standards within that area.
In a 2008 evaluation of water quality trading, EPA identified other location-specific conditions that influence whether trading occurs, including the regulatory environment, the nature of participants, and watershed characteristics. EPA officials in Region 9 explained, for example, that they do not see strong demand for nutrient credit trading in their region because there are not many nutrient-impaired watersheds with a favorable combination of point sources that need credits and willing credit generators.
Trading activity varied among the 19 programs. According to EPA data, not every state with a trading program had trades in 2014. According to EPA data and officials, the majority of nutrient credit trades occurred in 3 states—Connecticut, Pennsylvania, and Virginia—which were also the largest programs in terms of the number of participating point sources. According to state data and officials, the number of trades in these states in 2014 ranged from 31 to 151. (See table 1.)
Under EPA guidance, each state has the flexibility to establish or approve a nutrient credit trading program or programs to suit its own circumstances. The three programs we reviewed are each structured somewhat differently. Specifically, see the following:
Connecticut adopted legislation for a nutrient trading program in 2001. The state also issued a general permit in 2002 that allows 79 point sources in the Long Island Sound watershed to trade nitrogen credits. Connecticut's program does not allow nonpoint sources to generate credits. All nutrient credit trades are automatically processed annually by the state credit exchange, known as Connecticut's Nitrogen Credit Exchange Program. Connecticut state officials explained that, at the end of the year, the exchange compares each point source's total pounds of nitrogen discharged to its discharge limit. Each point source that discharges less than its limit receives a payment from the exchange. Each point source that discharges more than its limit—and thus would be out of compliance with the general permit if it failed to secure credits in a timely manner—is billed for the credits needed to bring it into compliance with its discharge limits. (A brief sketch after this list illustrates this year-end settlement rule.) Because these transactions are conducted annually, the number of trades reported for Connecticut in 2014 is the same as the number of participating point sources that purchased credits in 2014.
Pennsylvania established its trading policy and guidance in 2005. The state issues individual NPDES permits to point sources that allow for trading both nitrogen and phosphorus credits in the Chesapeake Bay watershed. In this program, both point sources and nonpoint sources may generate credits to sell to point sources for compliance with permit limits. Like Connecticut, Pennsylvania has an exchange for buying and selling credits, which is called PENNVEST. Unlike Connecticut, the exchange does not automatically conduct trades at the end of the year. Instead, point sources and nonpoint sources can choose whether to use the exchange to buy or sell credits, or whether to conduct sales outside the exchange. Pennsylvania officials told us that sales typically occur outside the exchange. According to Pennsylvania officials, the proportion of trades going through the exchange has been less than 10 percent annually since 2014.
Virginia established its trading program through state legislation in 2005. The state uses a general NPDES permit that allows point sources within the Virginia portion of the Chesapeake Bay watershed to trade nitrogen and phosphorus credits. The general permit does not normally allow point sources to use credits generated by nonpoint sources for compliance with the general permit. Point sources covered under this permit generally trade with each other through the Virginia Nutrient Credit Exchange Association, although there can be a handful of bilateral trades, according to Virginia officials and state data.
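The year-end reconciliation described in the Connecticut item above amounts to a simple settlement rule: compare each facility's total discharge with its limit, pay facilities that are under, and bill facilities that are over. The following minimal Python sketch illustrates that rule; the facility names, discharge figures, and uniform credit price are hypothetical and are not drawn from program data:

```python
# Minimal sketch of the year-end settlement logic described above.
# All facility data and the uniform credit price are hypothetical.
CREDIT_PRICE = 5.0  # assumed dollars per pound of nitrogen

facilities = {
    # name: (annual nitrogen discharge in pounds, permitted limit in pounds)
    "Plant A": (90_000, 100_000),   # under its limit: generates credits
    "Plant B": (120_000, 110_000),  # over its limit: must buy credits
}

for name, (discharge, limit) in facilities.items():
    balance = limit - discharge  # positive means credits were generated
    if balance >= 0:
        print(f"{name}: paid for {balance:,} credits "
              f"(${balance * CREDIT_PRICE:,.0f})")
    else:
        needed = -balance
        print(f"{name}: billed for {needed:,} credits "
              f"(${needed * CREDIT_PRICE:,.0f}) to come into compliance")
```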
Most Point Sources Participating in the Three State Programs We Reviewed Did Not Purchase Credits, but Trading Provided Flexibility, According to Officials
In the three states we reviewed, most point sources participating in the trading programs did not purchase credits to meet nutrient discharge limits, according to state data and officials. Officials from each state explained that many point sources have upgraded their nutrient removal technology in order to help them meet discharge limits. For example, from 2002, when Connecticut’s trading program began, through 2014, 53 of the 79 point sources in Connecticut’s trading program had invested in new technology to improve nutrient removal, according to state documents. As a result, many of those point sources generate nutrient reductions that they can sell as credits and do not usually need to purchase credits, according to state data and officials. Most point sources in the three states we reviewed did not purchase credits in 2014. (See table 2.)
The percentage of point sources in those trading programs that did purchase credits to meet discharge limits ranged from 14 to 49 percent, depending on the state. Specifically, see the following:
In Virginia, 14 percent of point sources in the trading program purchased credits in 2014—the lowest percentage in the states we reviewed. Virginia officials told us that few point sources purchased credits because many point sources had upgraded their nutrient removal technology before the TMDL was implemented, in anticipation of the stricter discharge limits, and were able to meet those limits without purchasing credits.
In Pennsylvania, 29 percent of point sources in the trading program purchased credits in 2014. Officials in Pennsylvania told us, however, that the demand for credits has continued to drop as point sources upgrade their nutrient removal technology. They said that most point sources that were planning to upgrade have done so.
In Connecticut, 49 percent of point sources in the trading program purchased credits in 2014—the highest percentage of the states we reviewed. According to Connecticut's 2014/2015 program report, the relatively high number of point sources that bought credits in 2014 was attributable to (1) increased discharges from three large wastewater treatment facilities that were under construction that year and (2) cold weather that affected the ability of point sources to remove nutrients from their discharges using biological processes. For comparison, 35 percent of point sources bought credits in Connecticut in 2015. A member of the Nutrient Credit Exchange Advisory Board in Connecticut told us that since the program began in 2002, the number of point sources that have needed to buy credits has generally decreased over time as these facilities have upgraded their nutrient removal technology. State officials expect this trend to continue in the future as more point sources complete their technology upgrades.
For the point sources that did purchase credits in 2014, state officials in the three states we reviewed told us that the total amount (in pounds) of nutrients that point sources purchased as credits to meet their individual discharge limits was generally small relative to the aggregate discharge limits (see table 3). In addition, the number of credits purchased by point sources was generally much less than the number of credits generated (see table 4). However, because the three programs collect data differently, we could not make comparisons across all three states for both measures. Specifically, for two of the states—Connecticut and Virginia—we were able to compare the amount (in pounds) of nutrients purchased to the aggregate discharge limit, but we did not have comparable data for Pennsylvania. For the number of credits purchased relative to the number of credits available, we were able to compare the data for Pennsylvania and Virginia, but we could not make the comparison for Connecticut. Nevertheless, the available state data show that the amount (in pounds) of nutrient credits purchased in these three programs in 2014 was generally small.
Specifically, the 2014 state data show the following (a brief calculation after this list illustrates the percentages):
Point sources participating in Connecticut’s nutrient credit trading program in 2014 purchased about 645,000 pounds of nitrogen credits to meet individual discharge limits. In total, point sources in the program had an aggregate discharge limit of about 3.3 million pounds for nitrogen. Point sources in Connecticut purchased the most pounds relative to the aggregate discharge limit among the states we reviewed—about 20 percent. However, in 2014, point sources removed far more nutrients—5.3 million pounds of nitrogen—than the 645,000 pounds purchased.
Point sources participating in Virginia’s nutrient credit trading program in 2014 purchased about 164,000 pounds of nitrogen credits and 35,000 pounds of phosphorus credits to meet individual discharge limits. In total, point sources in the program had an aggregate discharge limit of about 19 million pounds for nitrogen and 1.6 million pounds for phosphorus. Therefore, the pounds of nitrogen and phosphorus traded in Virginia in 2014 represented about 1 percent and 2 percent, respectively, of the aggregate discharge limit for these nutrients. In addition, the number of credits purchased by point sources in Virginia was less than the number of credits generated. Specifically, point sources in Virginia purchased about 164,000 nitrogen credits out of 6 million nitrogen credits generated, and about 35,000 phosphorus credits out of 797,000 phosphorus credits generated.
Officials in Pennsylvania told us that the amount of nutrients traded in their program was small relative to the aggregate discharge limits, but they could not provide data in terms of pounds that we could use to make the comparison. However, data from Pennsylvania show that the number of credits purchased by point sources was generally much less than the number of credits generated. Specifically, point sources in Pennsylvania purchased about 805,000 nitrogen credits out of 1.9 million nitrogen credits generated, and about 85,000 phosphorus credits out of 111,000 phosphorus credits generated.
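As a quick arithmetic check of the percentages cited above, the following minimal Python sketch recomputes each share from the reported pounds purchased and aggregate discharge limits:

```python
# Recomputing the percentages cited above from the reported 2014 state data:
# pounds of credits purchased versus aggregate discharge limits.
purchases = {
    "Connecticut, nitrogen": (645_000, 3_300_000),
    "Virginia, nitrogen": (164_000, 19_000_000),
    "Virginia, phosphorus": (35_000, 1_600_000),
}

for label, (pounds_purchased, aggregate_limit) in purchases.items():
    share = 100 * pounds_purchased / aggregate_limit
    print(f"{label}: {share:.0f} percent of the aggregate limit")
# Connecticut, nitrogen: 20 percent of the aggregate limit
# Virginia, nitrogen: 1 percent of the aggregate limit
# Virginia, phosphorus: 2 percent of the aggregate limit
```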
In the three states we reviewed, most credits sold were generated by point sources, not nonpoint sources. As previously discussed, Pennsylvania was the only state we reviewed that allowed nonpoint sources to generate and sell credits. Of the credits sold in Pennsylvania, a relatively small percentage was sold by nonpoint sources. Specifically, nonpoint sources sold 36 percent of all nitrogen credits purchased in 2014 and 11 percent of all phosphorus credits. According to state officials, there were seven nonpoint source sellers of credits, including at least four sellers that aggregate credits generated by multiple agricultural operations.
Although most point sources in these states did not buy credits in 2014, EPA officials, state officials, and point source stakeholders told us that nutrient credit trading was important because it gave point sources flexibility in meeting nutrient discharge limits. According to officials and stakeholders, this flexibility allowed point sources to manage risk, reduce the cost of compliance, and better manage the timing of upgrades to their nutrient removal technology. Specifically, see the following:
Managing risk. Although each point source's permit contains specific discharge limits, a point source's actual discharge varies from year to year. For example, an official from the Virginia Nutrient Credit Exchange Association explained that point sources will forecast their anticipated discharge over a 5-year period. However, there can be considerable variance from the forecast for any given year because of, for example, unpredictable weather, which can upset biological nutrient removal processes. Therefore, nutrient trading gives point sources insurance against unexpectedly high discharges by allowing them to "true up" at the end of the year by buying credits from point sources that discharged below their limits. This reduces the risk that an individual point source will fall out of compliance with its permitted limit.
Reducing the cost of compliance. Stakeholders said that upgrading nutrient removal technology to meet discharge limits is economically feasible for some point sources but is potentially unaffordable for point sources with fewer financial resources and smaller economies of scale. For example, one point source credit buyer in Connecticut told us that the buyer’s facilities had invested in upgrading nutrient removal technology, but any additional upgrades to meet the discharge limits would not be economically feasible. The buyer explained that, within a trading program, those point sources with lower pollution control costs can generate additional reductions in pollution, which they can use to generate credits to sell to those point sources with higher pollution control costs. As a result, trading can make nutrient reduction efforts more cost-efficient system-wide.
Managing the timing of upgrades. Trading helps point sources better manage the timing of upgrades to their nutrient removal technology, according to state officials and point source stakeholders. For example, a point source stakeholder in Virginia told us that it would have been difficult for all point sources to upgrade at once to meet the new discharge limits established in the NPDES permit under the TMDL, since there was a limited pool of engineers and construction companies that could install these upgrades, and that trading gave point sources time to schedule upgrades over several years. Additionally, in Pennsylvania, a point source credit buyer explained that the point source planned to complete a multi-year $34 million upgrade of its facilities in 2017 to meet discharge limits that came into effect in October 2012. To meet discharge limits in the meantime, the point source developed a program to purchase nitrogen credits from local nonpoint sources that would implement cover crop conservation practices to generate the necessary reductions. Therefore, trading allowed the point source to meet discharge limits during the period it was planning and completing the upgrade.
Although nutrient credit trading has provided point sources with flexibility in meeting discharge limits, trading itself is not responsible for reducing nutrient pollution, according to EPA officials, state officials, and other stakeholders. These stakeholders told us that pollution reduction largely results from nutrient discharge limits in permits and the nutrient removal technology that point sources invest in to meet or reduce below those limits.
States Oversee Nutrient Credit Trading Programs by Approving and Verifying Credit Generation, and EPA Reviews Permits That Allow for Trading
States oversee nutrient credit trading programs by approving and verifying credit generation to ensure that credits represent real nutrient pollution reductions. EPA reviews permits, conducts periodic evaluations of point source facilities to ensure that trading is consistent with the Clean Water Act, and issues national-level guidance for nutrient credit trading.
States Approve and Verify Credit Generation
States oversee nutrient credit trading programs by approving and verifying credit generation to ensure that credits represent real nutrient pollution reductions. A state’s approval and verification process varies depending on whether the credit generator is a point or nonpoint source. For point sources, the states we reviewed followed a process for verifying credits that is based on the existing oversight process for NPDES permits. Because nonpoint sources do not have NPDES permits, states use a separate process to approve and verify that nonpoint sources’ pollution reduction activities have generated credits for trading.
Process for Approving and Verifying Point Source Credits
The states we reviewed approve credit generation by point sources by including language that allows for trading in point sources' individual or general NPDES permits. In Connecticut and Virginia, point sources covered under the states' general permits are automatically approved to generate nutrient credits for trading. In Pennsylvania, point source facilities with language that allows for trading in their individual permits and that meet requirements in the state's watershed implementation plan are approved to generate credits. In all three states, the language that allows for trading in these permits includes the individual discharge limit for each point source, which, for trading purposes, is called a baseline. An approved point source is able to generate credits when it reduces its discharge below its baseline.
To verify point source credits, the states we reviewed each use an oversight process, based on its NPDES authority to oversee permits, that includes discharge monitoring and reporting, and inspections. Federal regulations require point sources with NPDES permits to periodically monitor compliance with the effluent limitations established in their permits and report the results to the permitting authority. Specific monitoring and reporting requirements, including the frequency of monitoring, are included in each permit. State officials in the three states we reviewed all told us that they use discharge monitoring reports to determine how many credits a point source has generated. For example, according to the terms of the general permit for nutrient discharges in Virginia, point sources must sample nitrogen and phosphorus from once per month to three times per week, depending on the volume of discharge. By February 1 of each year, point sources must submit total annual nitrogen and phosphorus discharges to the Virginia Department of Environmental Quality using a discharge monitoring report, which covers discharges during the previous calendar year. State officials in Virginia told us that they review these reports for data quality and determine which point sources generated credits and which point sources must buy credits to meet discharge limits. Any credits that point sources intend to use for compliance during the previous calendar year must be purchased by June 1.
In addition, state officials in all three states told us that they conduct periodic inspections of point source facilities to ensure that facilities are appropriately monitoring and reporting nutrient discharges as required under their permits. For example, officials in Pennsylvania told us that for point sources, the state’s Department of Environmental Protection conducts periodic inspections of point sources to ensure that they are meeting requirements that allow them to generate credits. These officials said that they generally inspect each facility at least once per year.
Process for Approving and Verifying Nonpoint Source Credits
In Pennsylvania, according to state officials and program documents, such as state regulations, a nonpoint source that seeks to generate credits must submit a request for credit certification. The request includes a description of how the nonpoint source intends to reduce nutrient pollution, such as through a BMP, and information about steps the nonpoint source will take to verify the credits including any relevant calculations, maps, and photographs. State officials review the request for technical acceptability and consistency with program requirements before approving credit generation.
To verify nonpoint source credits after the credit-generating activity has taken place, officials in Pennsylvania told us that they review information about the performance of that activity, such as a BMP. According to the Pennsylvania Department of Environmental Protection's website, officials review documentation to ensure that the credit-generating activity was implemented as described in the verification plan submitted with the certification request, and that all program requirements are met. In addition to reviewing documentation, officials may conduct activities such as monitoring the credit-generating activity, inspecting sites, and performing compliance audits. For example, an official from a nonpoint source credit generator told us that, as part of the verification process, they had to provide before-and-after photos of the cover crop that was intended to prevent nutrient pollution in a local water body. They said that they provided documentation that the crops were planted at a certain time and were the appropriate types of crops. In addition, they provided calculations related to the crops planted and the types of soil they were planted in, before the credits could be verified.
EPA Reviews Permits and Conducts Periodic Evaluations of Point Source Facilities to Ensure That Trading Is Consistent with the Clean Water Act
EPA oversees trading programs as part of its oversight of the NPDES program to ensure that they are fully consistent with the Clean Water Act and its implementing regulations, in particular when questions or concerns arise, according to EPA policy. EPA officials told us that they conduct oversight primarily through the regional offices, which (1) review NPDES permits; (2) review and comment on state regulatory frameworks for trading; and (3) evaluate point source facilities by collecting discharge information and conducting periodic on-site inspections to ensure, for example, that sampling and recordkeeping practices are in order. Additionally, EPA headquarters provides national-level guidance and training to state programs and stakeholders.
Review of NPDES Permits
According to EPA officials, EPA’s regional offices review NPDES permits that allow for trading to ensure that these permits meet the standards of the Clean Water Act and are consistent with EPA’s policy and guidance on trading. The regional offices can object to these permits, if necessary. EPA can request changes to permits to ensure that they align with federal requirements. Although EPA does not review every NPDES permit, it will generally review permits that allow for trading because these permits could be considered more complicated, controversial, or challenging, according to EPA officials.
In the states we reviewed, officials told us that EPA has reviewed NPDES permits that allow for trading and has at times requested that states make changes to the permits. For example, officials in Pennsylvania told us that EPA has reviewed 180 permits from large facilities in the state’s trading program and objected to 14 of them, requiring state officials to modify those permits. Officials in Virginia said that EPA has reviewed its general permit that allows for nutrient credit trading. Virginia officials said that, during the most recent EPA review, the agency issued a formal objection to the permit and asked the state to increase the sampling frequency in the permit’s monitoring guidelines. As a result, Virginia modified the permit to satisfy EPA’s request.
Review of State Regulatory Frameworks for Trading and Evaluation of Facilities
In addition to reviewing NPDES permits, EPA regional officials told us that they review and comment on states’ regulatory frameworks for trading. Officials said that they review these frameworks to identify any issues in developing and implementing the programs and that they request that state permitting agencies make changes when necessary. For example, in 2012, EPA Region 3 completed reviews of all six states and the District of Columbia in the Chesapeake Bay watershed, including the trading programs for both Virginia and Pennsylvania. After reviewing Pennsylvania’s trading program, EPA raised concerns about the state’s calculation of the baseline for nonpoint source credit generation. In response to EPA’s concerns, officials in Pennsylvania told us that they made changes in the way nonpoint source credits are calculated.
EPA’s involvement in reviewing state trading frameworks can vary, according to EPA and state officials. For example, because of specific authorities written into the Chesapeake Bay TMDL, EPA Region 3 plays a very active role in reviewing state trading programs, according to officials from Region 3. By comparison, Connecticut state officials told us that since EPA Region 1 granted its initial approval of Connecticut’s trading program, there has been little direct involvement by EPA in overseeing the program.
Stakeholders in the states we reviewed and EPA regional officials told us that EPA conducts periodic evaluations of point source facilities by collecting discharge monitoring data and conducting inspections. Officials at EPA Region 3 told us that they conduct inspections of facilities, review records and sampling procedures, and evaluate credit generators. A nutrient credit generator in Pennsylvania told us that EPA has audited the facility’s process for converting nutrient-rich manure into energy, mineral products, and nutrient credits. State officials in Virginia and Connecticut told us that they report nutrient discharge data to EPA for review.
EPA Provides National-Level Oversight
In addition to oversight activities conducted by the regions, EPA conducts some oversight of nutrient credit trading at the national level. This oversight involves (1) setting national guidance for trading, (2) offering training on nutrient credit trading to state officials and stakeholders, and (3) periodically collecting some data on nutrient credit trading programs. Specifically, see the following:
Guidance. EPA has issued three documents that provide guidance to states to assist them in developing and implementing nutrient credit trading programs: EPA’s 2003 Water Quality Trading Policy; the 2004 Water Quality Trading Assessment Handbook; and the 2007 Water Quality Trading Toolkit for Permit Writers, which EPA updated in 2009.
Training. EPA has offered training for NPDES permit writers to help them better understand how to write NPDES permits that incorporate provisions for nutrient credit trading, according to EPA officials. EPA and USDA also sponsored a 3-day water quality trading workshop in September 2015 in Lincoln, Nebraska, on a range of different subjects related to water quality trading. According to the workshop’s summary document, over 200 attendees participated, including water resource professionals; third-party environmental market makers; academics; representatives of federal, state, and local governments; representatives of non-governmental organizations; and agricultural and environmental stakeholders.
Data collection. According to EPA officials, there is no requirement for permittees to report data about trading programs at a national level and EPA has no systematic way to collect this information. However, EPA manually collects some trading data, such as the names of programs with permits that allow for trading, which provides the agency with a general understanding of the extent to which trading is being used nationally. Officials told us that they plan to update national trading data at least every 2 years and make them available online in the fall of 2017.
The Presence of Discharge Limits and the Challenges of Measuring Nonpoint Sources’ Nutrient Reductions Affect Participation in Trading Programs, According to Stakeholders
Stakeholders cited two key factors that have affected participation in nutrient credit trading—the presence of discharge limits for nutrients and the challenges of measuring nutrient reductions resulting from nonpoint sources’ implementation of BMPs.
First, officials from the three states we reviewed, and other stakeholders we interviewed, cited the importance of discharge limits for nutrients as a driver to create demand for nutrient credit trading. Without such a driver, point sources have little incentive to purchase nutrient credits. According to EPA guidance, discharge limits—most commonly established in a TMDL—are the leading driver for nutrient credit trading markets. For the Pennsylvania and Virginia programs, the nutrient discharge limits are established in the Chesapeake Bay TMDL. For the Connecticut program, nutrient discharge limits are established in the Long Island Sound TMDL. The TMDL nutrient discharge limits are ultimately translated into discharge limits in the NPDES permits for point sources.
Pennsylvania officials explained how discharge limits serve as a driver for trading. Officials stated that although the state established its nutrient trading program in 2005, the TMDL for Chesapeake Bay was not established until 2010. Officials noted that in the first years of the program, little trading took place because point sources did not have to meet nutrient discharge limits. Once EPA established the TMDL for the Chesapeake Bay—and Pennsylvania established discharge limits for point sources in the NPDES permits—demand for nutrient credit trading increased, according to Pennsylvania officials. Officials explained that if point sources had not yet upgraded their nutrient removal technology, and could not meet the NPDES permit discharge limits, they could buy nutrient credits to comply with discharge limits. EPA officials added that demand for trading could increase over the long term because of economic or population growth.
In addition to programs in the three states, we reviewed a program in the Ohio River Basin where nutrient credit trading activity has been limited, according to program officials. This multi-state trading program allows point and nonpoint sources in Ohio, Indiana, and Kentucky to generate and sell nutrient credits, and was designed as a pilot to test nutrient credit trading in case discharge limits were established. Program officials told us that while some credits have been generated and sold, participation in the program has been limited because there is no requirement—in either a TMDL or numeric water quality standards—for the point sources in these states to meet discharge limits. They said that, as the program is currently implemented, credits are purchased not by point sources to comply with discharge limits but rather by corporations to meet internal sustainability goals or by philanthropists who want to invest in BMPs that address nutrient pollution in the Ohio River Basin.
Unlike point sources, the Clean Water Act does not require nonpoint sources to meet nutrient discharge limits established in TMDLs or numeric water quality standards, and as a result, EPA said there is no federal regulatory driver creating demand for nonpoint sources to participate in nutrient credit trading programs.
The second factor affecting participation in trading programs relates to the challenges of measuring nutrient reductions that result from nonpoint sources' implementation of BMPs. According to EPA officials and guidance, federal and state agencies typically do not directly monitor nonpoint source pollution or the effectiveness of BMPs because the diffuse nature of nonpoint source pollution makes monitoring costly and impractical. Instead, agencies and other stakeholders rely on models to estimate the amount of pollution discharged by nonpoint sources and the effectiveness of BMPs. These models incorporate information about variables such as land use, soil type, and precipitation to estimate the amount of nutrients that will be reduced as the result of implementing a specific BMP. Even with these models, EPA guidance recommends that programs use a rule that calls for nonpoint source credit generators to generate credits on a greater-than-one-to-one basis to account for uncertainties in modeling. According to this guidance, such a rule can also mitigate other uncertainties, such as how well BMPs are designed and maintained and the risk of a BMP failing to produce the expected results.
In part because of this uncertainty, two of the states we reviewed did not allow nonpoint sources to generate credits in their programs. State officials in Connecticut told us that it was easier for Connecticut to implement nutrient trading with point sources, as their discharges are easy to quantify. State officials in Virginia told us that point source to nonpoint source trading is complicated and they felt that they could meet their TMDL reduction goals solely with point source reductions.
Pennsylvania does allow nonpoint sources to generate and sell credits, but the state has developed a rule to help address some of these uncertainties. Specifically, Pennsylvania implemented a rule in 2016 requiring nonpoint sources to generate three nutrient credits for every nutrient credit sold. This rule was developed as an interim step to address EPA's concern that the state's calculation of the baseline for nonpoint source credit generation was not consistent with the reductions needed to meet the Chesapeake Bay TMDL goals. Pennsylvania's rule, however, appears to have reduced the use of nonpoint source credits. State program data show that in 2016 approximately 115,000 nitrogen credits were available from nonpoint sources after the implementation of the rule, almost one-third of the approximately 381,000 nitrogen credits that were available in 2014. An official at a wastewater treatment facility in south central Pennsylvania told us that the rule increased the cost to generate nonpoint source credits and reduced the number of nonpoint source credits available in Pennsylvania's trading program. Specifically, to meet its discharge limits in 2014, this facility purchased approximately 75,000 nitrogen credits, 52,000 of which were generated by local farmers who installed BMPs on their land. In 2016, after the rule was implemented, the same facility purchased 95,000 nitrogen credits, only 5,000 of which were generated by local farmers. According to the point source officials, they could no longer rely solely on purchasing credits generated by local farmers because there were fewer nonpoint source credits available to purchase in 2016. To meet the discharge limit, this facility purchased the remaining credits it needed from other point sources because nonpoint source credits were not available. Pennsylvania officials told us that the decline in the number of nonpoint source credits is mostly due to the new rule. However, they said that other factors, such as the low price of credits, have also decreased the incentive to generate nonpoint source credits.
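To illustrate how a greater-than-one-to-one requirement shrinks the supply of sellable nonpoint source credits, the following minimal Python sketch applies a 3:1 ratio, a simplified reading of Pennsylvania's rule, to a hypothetical modeled reduction:

```python
# Simplified illustration of a 3:1 nonpoint source credit rule: three
# credits must be generated for every one credit sold, so only one-third
# of a modeled reduction becomes sellable credits. Figures are hypothetical.
def sellable_credits(modeled_reduction_lbs: float, ratio: float = 3.0) -> int:
    """Credits a nonpoint source can sell after applying the trading ratio."""
    return int(modeled_reduction_lbs / ratio)

# A hypothetical BMP, such as a cover crop, modeled to reduce nitrogen
# loading by 9,000 pounds in a year.
print(sellable_credits(9_000))  # 3000 credits sellable under the 3:1 rule
```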
According to EPA officials, the program should implement a stricter baseline, based on the pollution reduction targets established in the Chesapeake Bay TMDL. Pennsylvania officials told us that if they make the baseline requirements stricter, there may be no incentive for nonpoint sources to generate credits because it would be much more difficult to meet the minimum requirements and the cost of generating credits would be prohibitive.
State officials and stakeholders also told us that even if a program allows nonpoint sources to trade, point sources often prefer to trade with other point sources because they have similar permit and monitoring requirements and are both legally liable for meeting discharge limits. Trading between point sources provides buyers with the assurance that the credits they purchase represent actual reductions and can be used for compliance with an NPDES permit.
Agency Comments
On September 12, 2017, we provided a draft of this report to EPA for review and comment. On September 29, 2017, EPA responded by email stating that its Office of Water had reviewed the draft report and EPA had no comments.
We are sending copies of this report to the appropriate congressional committees, the Administrator of the Environmental Protection Agency, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix III.
Appendix I: Objectives, Scope, and Methodology
This report (1) examines the extent to which nutrient credit trading programs have been used and what the outcomes of the programs have been, (2) describes how states and the Environmental Protection Agency (EPA) oversee nutrient credit programs, and (3) describes what key factors stakeholders view as affecting participation in nutrient credit trading.
To examine the extent to which nutrient credit trading programs have been used and what the outcomes of the programs have been, we first spoke with EPA headquarters and EPA regional officials and reviewed EPA data. EPA does not have a formal definition for water quality trading programs, of which nutrient credit trading is a subcategory, and is not required to keep information on these programs. EPA periodically gathers some limited information on trading programs, including the type of trading program, location, facilities participating, and estimated trades. The most recent data EPA had at the time we conducted our review were for 2014. EPA officials explained that the completeness and consistency of the data reported by states to EPA varied somewhat. For example, not all programs reported trading data for calendar year 2014. To verify the accuracy of EPA’s list of trading programs, we interviewed or e-mailed officials from all 10 EPA regions to confirm the presence or absence of trading programs in each state in 2014. For the 7 EPA regions with some form of trading program, we interviewed regional officials to gather more information about the type of trading conducted and whether there was trading activity in 2014.
Using EPA’s information as a starting point, we developed a modified list of nutrient credit trading programs that existed in 2014. For our modified list, we excluded from EPA’s data two programs, one from Region 5 and one from Region 10, that did not trade nutrient credits. Based on our discussion with EPA officials, we also excluded trading programs that let residential septic system owners “trade” credits to encourage wastewater treatment facilities to take their systems online. We also excluded one program that included three states—the Ohio River Basin Interstate Water Quality Trading Project—because none of the participating states have discharge limits in their permits. In the process of interviewing EPA regions, we also added one program from Region 4 and two programs from Region 5 that EPA officials told us had been inadvertently left off EPA’s 2014 list.
From this list, we then selected, for more detailed examination, a nongeneralizable sample of three nutrient credit trading programs—those in Connecticut, Pennsylvania, and Virginia—which appeared to have done the most trading and had the most participating point sources in 2014. Because these programs were judgmentally selected, the results of our review of these programs cannot be generalized. For these three programs, we reviewed state laws and regulations, National Pollutant Discharge Elimination System (NPDES) permits, watershed implementation plans, program rules and policies, annual summaries of nutrient credit purchases and sales, and assessments of state trading programs when available. We also interviewed state and program officials and other stakeholders, such as point source and nonpoint source credit generators and buyers, to gather information on the programs, including structure, participants, number and type of trades, authorizing mechanisms, and outcomes.
Specifically, to determine the number of trades, we asked each state for its official list of trades from 2014, the most recent year for which we could get complete data from all programs. We counted each time a point source purchased credits as one trade. In addition, we asked states to provide us with the number of point sources that purchased credits and the number of point sources in the trading programs, which we used to determine the percentage of point sources that purchased credits. The states post the number of point sources that purchased credits online, and the number of point sources in the program is identified in state documents.
We also asked the states for the number of credits purchased and the aggregate discharge limit for point sources to determine the percentage of credits purchased in pounds of nutrients relative to the aggregate limit. The aggregate discharge limit is the maximum allowable discharge for point sources in the program. Because this limit represents the maximum amount of pollution allowable to meet water quality standards, it served as a point of reference for comparing the amount of discharge that was traded. We took these numbers from state records, and they are derived from the total maximum daily load, according to EPA policy. According to Virginia and Connecticut officials, in their programs one credit is equal to one equalized or delivered pound of pollution—that is, a pound of pollution after accounting for the delivery ratio. Pennsylvania could not provide us with the number of pounds purchased. According to Pennsylvania officials, a nutrient credit does not equal a pound of pollution in their program because they use trading ratios, such as delivery ratios; this means that credits generated from different sources represent different nutrient reductions depending on where they are relative to the polluted water body. We note, however, that the aggregate discharge limit does not represent the pounds of nutrients that could have been traded, since the volume of trading was limited by the supply of credits, which was less than the aggregate discharge limit in Virginia and Pennsylvania. To show this, we used state data on the number of credits generated and compared them with the number of credits purchased. Connecticut does not have data on the number of credits available. To assess the reliability of the state data, we visually reviewed the data for completeness and interviewed state officials responsible for collecting and using data about their quality assurance protocols and their confidence in the data. We found the data to be sufficiently reliable for our purposes and confirmed all final numbers with state officials.
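The two percentages described above reduce to simple arithmetic. The following sketch uses hypothetical figures, not actual program data, and assumes a program in which one credit equals one delivered pound of pollution, as Virginia and Connecticut officials described for their programs.

```python
# Hypothetical state-reported figures for a single program year.
point_sources_in_program = 80
point_sources_that_bought = 12
credits_purchased_lbs = 150_000           # assumes 1 credit = 1 delivered pound
aggregate_discharge_limit_lbs = 4_000_000

pct_sources_buying = 100 * point_sources_that_bought / point_sources_in_program
pct_of_limit_purchased = 100 * credits_purchased_lbs / aggregate_discharge_limit_lbs

print(f"{pct_sources_buying:.1f}% of point sources purchased credits")           # 15.0%
print(f"purchases equaled {pct_of_limit_purchased:.2f}% of the aggregate limit")  # 3.75%
```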
We interviewed state program officials in all three states to better understand the extent to which nutrient credit trading programs have been used and what the outcomes have been. During these interviews, we discussed the management of the programs, reviewed state trading data, and discussed the benefits and challenges of nutrient credit trading. We visited Pennsylvania to meet with program officials and stakeholders. Specifically, we met with a representative of a credit aggregator that buys credits from nonpoint source generators and resells them, and we toured a wastewater treatment facility that generates credits and a facility that generates nutrient credits by processing chicken manure into energy. We also spoke with buyers and sellers of nutrient credits in Connecticut and officials from the nutrient credit exchange in all three states. We did not audit these state trading programs or analyze their effectiveness or efficiency in meeting discharge limits or water quality standards.
We also conducted a literature review of academic and economic journals. We searched peer-reviewed journals for articles published from 2011 through 2016 discussing water quality trading or nutrient credit trading. We did not find any additional trading programs in the United States that had not already been identified.
To describe how states and EPA oversee nutrient credit programs, we reviewed relevant federal laws, regulations, and EPA policies and guidance related to nutrient credit trading. We reviewed state requirements for implementing the NPDES program under the Clean Water Act and its implementing regulations, which define responsibilities applicable to states that serve as permitting authorities for overseeing point source permittees’ monitoring and reporting. These same authorities are used by states to oversee state trading programs. The Clean Water Act does not specifically authorize water quality trading, according to EPA officials; however, EPA has developed trading guidance for states interested in developing trading programs. We reviewed this guidance, specifically EPA’s 2003 Water Quality Trading Policy and 2007 Water Quality Trading Toolkit for Permit Writers. We also reviewed state documents, such as watershed implementation plans, that identify trading program rules, and interviewed state officials and other stakeholders for our nongeneralizable sample of three nutrient credit trading programs. In our interviews we asked state officials how they oversee their trading programs. In particular, we asked how they approve point and nonpoint sources to generate credits, verify that a credit represents a real reduction in nutrient pollution, and monitor the buying and selling of credits to ensure that permit obligations are met. We also interviewed officials from EPA’s Office of Water and the 7 EPA regions with nutrient credit trading programs and asked them to describe EPA’s oversight role at the regional and national level.
To describe what key factors stakeholders view as affecting participation in nutrient credit trading, we spoke with officials from EPA’s Office of Water, the 7 EPA regions with nutrient credit trading programs, and officials and stakeholders from the nongeneralizable sample of three nutrient credit trading programs. In addition, we reviewed documents and interviewed officials for one nongeneralizable multi-state trading program in the Ohio River Basin. We reviewed this program to better understand the key factors that can affect participation in nutrient trading programs. We interviewed officials from the institute that developed the program and corresponded with state officials from Kentucky and Ohio, two of the states involved in the Ohio Basin program. Finally, we interviewed representatives of stakeholder groups, such as those representing wastewater treatment facilities, national organizations for water issues, and agricultural conservation districts to get a broad perspective on the key factors that affect participation in nutrient credit trading programs.
Appendix II: Nutrient Credit Trading in the United States in 2014
We identified 19 nutrient credit trading programs in 11 states and seven Environmental Protection Agency regions in 2014. The 11 states that had programs are California, Connecticut, Florida, Georgia, Idaho, Minnesota, North Carolina, Ohio, Pennsylvania, South Carolina, and Virginia (see table 5).
Appendix III: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the individual named above, Janet Frisch (Assistant Director), Chuck Bausell, Mark Braza, Ellen Fried, Patrick Harner, Karen Howard, Greg Marchand, Emily Ryan, Jason Trentacoste, and Daniel Will made key contributions to this report.

Why GAO Did This Study
Nutrient pollution—caused by excess nitrogen and phosphorus entering water bodies—poses significant risks to the nation's water quality. Nutrients can enter water bodies from point sources and nonpoint sources. The Clean Water Act establishes the basic structure for regulating discharges of pollutants, including excess nutrients. Under the act, authorized states—assisted and overseen by EPA—set limits on nutrients impairing a water body and limits on point source discharges. EPA encourages states to use nutrient credit trading to address nutrient pollution. According to EPA, trading allows a point source to meet nutrient discharge limits by buying pollutant credits from a source that has reduced its discharges more than required.
GAO was asked to examine nutrient credit trading programs. This report describes (1) the extent to which nutrient credit trading programs have been used and what the outcomes of the programs have been, (2) how states and EPA oversee nutrient credit trading programs, and (3) what key factors stakeholders view as affecting participation in nutrient credit trading. GAO reviewed EPA documents and interviewed EPA officials to gather information on trading programs. GAO then selected a nongeneralizable sample of three programs with the most trades in 2014 (based on the most recent available data); reviewed program documents; and interviewed EPA, state, and program officials and other stakeholders about the programs.
What GAO Found
In 2014, 11 states had 19 nutrient credit trading programs, and trading provided flexibility for some point sources, such as wastewater treatment plants, to meet nutrient discharge limits, according to Environmental Protection Agency (EPA) data and officials. The majority of nutrient credit trading during 2014 occurred in three state programs—programs in Connecticut, Pennsylvania, and Virginia. A review of trading data from these programs showed that most point sources participating in the three state programs did not purchase credits in 2014 to meet their discharge limits, which are established in National Pollutant Discharge Elimination System (NPDES) permits under the Clean Water Act. For the point sources that did purchase credits in 2014, state officials in the three states told GAO that the total amount in pounds of nutrients that point sources purchased as credits was generally small. Nevertheless, state officials explained that nutrient credit trading was useful because it allowed point sources to manage risk, reduce the cost of compliance, and better manage the timing of upgrades of nutrient removal technology.
States oversee nutrient credit trading programs, and EPA helps ensure that programs are consistent with the act. States oversee nutrient credit trading programs by approving and verifying the generation of credits to ensure that credits represent real reductions in nutrient pollution. A state's approval and verification process varies depending on whether the credit generator is a point or nonpoint source, such as runoff from agricultural and urban areas. For point sources, the states GAO reviewed followed a process for verifying credits that is based on the existing oversight process for NPDES permits. Because nonpoint sources do not have NPDES permits, states use a separate process to approve and verify that nonpoint sources' pollution reduction activities have generated credits for trading. When questions or concerns arise, EPA uses its oversight authority to ensure that trades and trading programs are fully consistent with the act. EPA officials told GAO that they conduct oversight primarily through the regional offices, which (1) review NPDES permits, (2) review and comment on state regulatory frameworks for trading, (3) conduct periodic on-site inspections, and (4) provide national-level guidance and training to state programs and stakeholders.
According to stakeholders, two key factors have affected participation in nutrient credit trading—the presence of discharge limits for nutrients and the challenges of measuring the results of nonpoint sources' nutrient reduction activities. Officials from the three states GAO reviewed and other stakeholders cited the importance of discharge limits for nutrients as a driver to create demand for trading. Without such a driver, point sources have little incentive to purchase nutrient credits. The challenges of measuring nutrient reductions by nonpoint sources create uncertainties about the value of credits generated by nonpoint sources. In part, because of these uncertainties, the states GAO reviewed either did not allow nonpoint sources to trade or created special rules for nonpoint sources. State officials and stakeholders also told GAO that even if a program allows nonpoint sources to trade, point sources often prefer to trade with other point sources because they have similar permit and monitoring requirements. |
Background
This section describes agency responsibilities, the history of the federal timber export and substitution ban, and changes to the timber economy since restrictions on timber export and substitution were first implemented.
Federal Land Management Agency Responsibilities
Under the National Forest Management Act and the Federal Land Policy and Management Act of 1976, respectively, the Forest Service and BLM manage federal lands under their jurisdiction for various uses such as protection of fish and wildlife habitat, recreation, mineral production, and timber harvesting. As part of the agencies’ management of timber harvesting on public lands, both the Forest Service and BLM conduct timber sales. Timber sale activities include identifying the sale area, conducting the required environmental analyses, soliciting bids, preparing the timber sale contract, marking the sale boundary and the trees to be cut or left, and monitoring the harvest operations and reforestation activities. The agencies monitor harvest operations to help ensure that, for example, the trees are harvested from the agreed-upon area and the logs are hauled on the route agreed upon in the timber sale contract. The agencies have developed policies for general timber sale activities, as well as policies specific to preventing, detecting, and responding to illegal federal timber export and substitution.
History of the Federal Timber Export and Substitution Ban
Since the late 1960s, four primary laws have been enacted prohibiting federal timber export and substitution: the Foreign Assistance Act of 1968, the Interior and Related Agencies Appropriations Act of 1974, the Forest Resources Conservation and Shortage Relief Act of 1990, and the Forest Resources Conservation and Shortage Relief Act of 1997.
In 1968, an amendment to the Foreign Assistance Act of 1968—commonly referred to as the “Morse Amendment”—restricted the volume of timber that could be harvested and exported from federal lands in unprocessed form. This legislation was enacted after the Secretaries of Agriculture and the Interior issued joint orders calling for this restriction, deeming it necessary to maintain a viable domestic wood-processing industry. As we previously found, in the early 1960s, export of federal timber was generally not viewed as a concern, but as exports of federal, private, and other timber increased, public and private concerns grew about the effect of unrestricted log exports on the domestic wood-processing industry. For example, the percentage of timber harvested in Oregon and Washington that was exported grew from approximately 6 percent in 1965 to about 18 percent in 1972.
In 1973, a provision was included in the Interior and Related Agencies Appropriations Act of 1974 that, in effect, prohibited the export of unprocessed timber harvested from federal lands west of the 100th meridian in the contiguous 48 states. (Figure 1 shows the location of the 100th meridian and Forest Service- and BLM-managed lands.) The 1973 provision also prohibited purchasers from using timber harvested from federal lands in their processing facilities while exporting nonfederal unprocessed timber that could have been used in those facilities, an activity referred to as substitution. The provision also stated that the limitation on export and substitution did not apply to species of timber the agencies have determined to be surplus to domestic lumber and plywood manufacturing needs.
In 1990, the Forest Resources Conservation and Shortage Relief Act of 1990 made permanent the ban on exporting unprocessed logs from western federal lands and provided for greater restrictions on substitution. Under the 1990 act, however, it is not considered substitution if a company purchases federal timber from within a particular “sourcing area” and exports nonfederal timber harvested from areas outside the sourcing area. For example, firms with timber operations in both Oregon and Washington could purchase federal timber from a sourcing area in eastern Oregon for manufacture while also purchasing private timber in Washington for export. The 1990 act required the Forest Service and Interior to issue, in consultation with each other, coordinated and consistent regulations implementing the act on the lands under their respective jurisdictions.
The Forest Service issued a series of regulations to implement the 1990 act, the most comprehensive of which was issued September 8, 1995. In a provision contained in the act providing appropriations to the Forest Service for fiscal year 1996, Congress effectively suspended implementation of the 1995 regulation to allow the administration, Congress, and affected parties more time to address policy issues with respect to the 1990 act. The Forest Service’s fiscal year 1997 appropriation act contained a similar provision. BLM did not issue regulations implementing the 1990 act.
In 1997, Congress amended the 1990 act. Among other things, the Forest Resources Conservation and Shortage Relief Act of 1997 relaxed substitution restrictions in Washington State and allowed the Forest Service and BLM to reduce the penalties imposed for violating the act by taking into account “all relevant mitigating factors, including mistake, inadvertence, and error.” The 1997 act also suspended the Forest Service’s 1995 regulations implementing the 1990 act and directed the agencies to issue new coordinated and consistent regulations implementing the act by June 1998. The law requires the agencies to implement their regulations in effect prior to September 8, 1995, until new regulations are issued.
Changes to the Timber Economy since the 1960s
Since restrictions on timber export and substitution were first implemented in the late 1960s, the timber economy has continued to change. Domestically, the volume of timber harvested from Forest Service lands each year has declined from about 12.4 billion board feet in 1973 to 2.6 billion board feet in 2017. The number of domestic mills along the Pacific Coast has also decreased, mostly through mill closures. For example, from 1996 to 2016, the number of mills in Washington State declined from 186 to 88. In addition, since the 1990s, the structure of the corporate timber industry has changed. For example, many of the corporate timber companies that once owned both mills and the private lands to supply those mills have divested some or all of their private timberlands. Additionally, the value of U.S. softwood log exports has grown since 2007, with China, Japan, and Canada the three largest importers of these logs. According to information from the Foreign Agricultural Service, the value of U.S. softwood log exports grew from approximately $949 million in 2007 to approximately $1.4 billion in 2017 (in constant 2017 dollars).
Forest Service and BLM Found No Violations of the Export and Substitution Ban from 2007 through 2017, and Officials and Stakeholders Said the Likelihood of Violations is Low
According to Forest Service and BLM officials, the agencies found no violations of the ban on federal timber export and substitution from 2007 through 2017. Forest Service officials described instances in which the agency responded to reports of potential violations, but the reports were not substantiated. All agency officials and stakeholders we interviewed said that the likelihood of illegal timber export and substitution is low. However, several officials acknowledged that some risk of violations exists under certain circumstances.
From 2007 through 2017, the Forest Service and BLM found no violations of the federal timber export and substitution ban. Forest Service officials identified four instances in which the agency investigated potential violations. For example, in one instance, the Forest Service’s Pacific Southwest region investigated an incident in 2017 at the Port of Richmond near Oakland, California. According to the associated investigation report, an employee at the port’s export facility noticed four logs were marked as coming from a federal timber sale and reported it to the Forest Service. Forest Service law enforcement officials conducted an investigation and determined that the logs came from the Sierra National Forest and were placed at the facility in error. The purchaser subsequently delivered the logs to the intended recipient and the agency took no further action. Forest Service officials said that because the logs had not been exported, but had been placed at the facility in error with no intent to export them, the agency determined that there was no violation of the export ban. In another instance, officials from the agency’s Southwestern Region said that, in 2010, they identified a case in which a purchaser cut federal logs, removed the bark, and then exported the logs to Mexico for use as telephone poles. The officials investigated to determine whether that type of exporting was legal. The Forest Service concluded that the purchaser’s activities constituted processing the logs into end products and therefore the logs were being legally exported. BLM officials we interviewed did not describe any instances in which they identified, or were made aware of, potential violations.
All Forest Service and BLM officials and stakeholders we interviewed said the likelihood of timber export and substitution violations is low due to a combination of several factors, including economic factors associated with log markets and changes in the organizational structure of timber companies. However, several officials acknowledged that some risk of violations exists under certain circumstances.
Economic factors within log markets. Several agency officials and stakeholders said smaller trees of a lower quality are being harvested from federal lands compared to the trees harvested in the 1990s. Several of these officials and stakeholders said there is less demand and lower value in overseas markets for logs with such characteristics. A senior official from the Klamath National Forest in California, for example, said that trees harvested from the forest in the 1980s had log diameters of 35 to 42 inches, but by 2017 the diameter had decreased to 14 to 18 inches. Additionally, according to statistics from the State of California, old-growth trees—generally, trees more than 150 years old—represented nearly 70 percent of timber harvested in California in 1979, but by 1999 the proportion had fallen to less than 10 percent. As we have found, old-growth trees can have more attractive grain characteristics and can be used for higher-value products compared to young-growth trees, which may make the former more attractive for export. Several officials and stakeholders also said that the decrease over time in the amount of federal timber available for sale has made violations less likely. For example, Oregon Department of Forestry information shows that the volume of timber harvested on BLM-managed lands in Oregon declined from about 1.5 billion board feet in 1973 to 182 million board feet in 2016. Some of these officials and stakeholders said that federal timber is an important part of domestic sawmill operators’ timber supply, and, given the reduced amount of federal timber available, sawmill operators would have little incentive to export logs because doing so would further reduce their own timber supply.
Changes in timber company organizational structure. Several officials and stakeholders said that changes in timber company organizational structure have also made substitution less likely. Several officials and stakeholders noted that many Pacific Northwest timber companies once owned both sawmills and timberland from which they harvested timber to supply their mills. According to some officials, under those conditions, the likelihood of substitution was greater because these companies could have benefitted by exporting logs from their own lands for a high price while supplying their sawmill operation with federal timber purchased at a lower price. However, many timber companies have sold or reorganized over the past 2 decades, resulting in few companies now owning both sawmills and timberlands, according to some agency officials. In 2009, Oregon State University reported on this change, noting that “almost all large, publicly traded forest product companies have shed their timber lands in the past 20 years, a reflection of global economic pressures, new tax laws, and other forces.” A 2014 report from the Department of Agriculture likewise noted this change. Some agency officials said that, as a result, sawmills generally must buy all of their timber—whether privately sourced or federal—on the open market, which provides less incentive for substitution than if these sawmills were using timber they already owned.
Several officials also said, however, that some risk of violations remains, particularly under certain circumstances. For example, some Forest Service regional officials said that some national forests could be vulnerable to illegal timber export if log prices or demand for certain tree species increase in the future. Additionally, several Forest Service officials expressed concern about having sufficient staff to monitor timber sales for compliance with relevant requirements, including the ban on export and substitution, especially in light of potential increases in timber sales. In particular, officials from four of the six national forests included in our review said the Forest Service increased the volume of timber their national forest is expected to offer for sale beginning in fiscal year 2018. For example, a Boise National Forest official said the forest’s timber sale target increased from 50 million board feet per year, which has been consistent over the last decade, to 74 million board feet in fiscal year 2018, with a goal of 96 million board feet per year by fiscal year 2021. According to some Forest Service officials, higher timber sale targets could reduce the ability of agency staff to carry out timber sale responsibilities, including monitoring, that help guard against illegal timber export and substitution. Several Forest Service “Timber and Log Accountability Audits”—internal evaluations of regional and forest-level timber sale activities—also noted that reduced staffing levels and experience were areas of concern in carrying out forests’ timber sale programs.
The Forest Service and BLM Did Not Issue New Regulations Required by Law and Some Policies Are Outdated or Unclear
The Forest Service and BLM neither issued new regulations as required by the Forest Resources Conservation and Shortage Relief Act of 1997 nor obtained legislative relief from the requirement. The agencies have policies and practices to help prevent, detect, and respond to illegal timber export and substitution. However, some policies are outdated or unclear, and the agencies have not reviewed their policies for continued relevance and effectiveness.
The Agencies Did Not Issue New Regulations As Required by the 1997 Act
As noted previously, in 1997, Congress amended the Forest Resources Conservation and Shortage Relief Act of 1990 to, among other things, relax substitution restrictions in Washington State. The 1997 act included other provisions such as allowing the agencies to reduce the penalties imposed for violating the ban. The act also states that the agencies “shall, in consultation, each prescribe new coordinated and consistent regulations to implement the act” and required the agencies to issue these regulations by June 1, 1998. The act also states that, until new regulations are issued, regulations that were in effect prior to September 8, 1995, are to remain in effect. However, because neither agency issued regulations as required by the act, their regulations currently in use do not reflect changes made by the 1997 act.
Forest Service. The Forest Service drafted regulations to implement the 1997 act, but as of June 2018, the agency had not finalized them.
According to Forest Service headquarters officials, the agency did not finalize the draft regulations because of competing priorities. The officials did not provide an estimate as to when the draft regulations would be made final. Because the draft regulations have not been made final, Forest Service regulations from the early 1990s remain in effect but do not reflect the changes made by the 1997 act.
BLM. According to BLM headquarters officials, BLM began drafting regulations in 2010 to implement the 1997 act but did not complete that effort because of insufficient resources and competing priorities. Because BLM did not issue new regulations, it is required by law to rely on its regulations issued prior to September 8, 1995. These regulations reflect timber export and substitution laws from the 1970s because, according to officials, BLM never issued regulations implementing the 1990 act, owing to competing priorities at that time. Consequently, BLM regulations currently in use do not reflect the changes made by the 1997 act.
Forest Service officials said their agency did not seek legislative relief from the requirement to issue new regulations, and BLM officials said they have no record that their agency sought legislative relief but could not be certain that the agency had not done so. Without issuing new coordinated and consistent regulations as required by the 1997 act, or obtaining legislative relief, the agencies will continue to be out of compliance with this provision of the act.
Some Agency Policies Related to Illegal Export and Substitution Are Outdated or Unclear, and the Agencies Have Not Reviewed Their Policies for Relevance and Effectiveness
We identified several areas in which either the Forest Service or BLM or both have policies to help prevent, detect, and respond to illegal federal timber export and substitution. For example:
Timber sale contract provisions. Both agencies have policies that require timber sale contracts to include a statement about the prohibition on federal timber export and substitution, which can help ensure timber purchasers are aware of the prohibition. We reviewed the standard timber sale contract forms used by both agencies at the time of our review and found that the forms include provisions with this statement.
Marking of unprocessed logs. Both agencies generally require purchasers to mark unprocessed logs originating from federal lands subject to the ban with a spot of yellow paint and an identifying mark known as a hammer brand before the logs are removed from the timber sale area. According to the agencies’ policies, marking the logs is intended to help identify them as being prohibited from export. Figure 2 shows an example of marked federal logs.
Forest Service regulations generally require that both ends of each unprocessed log be marked, but agency policy allows agency officials to waive the requirement under certain circumstances if officials determine that the risk of export or substitution is low. For example, for certain timber sales the Pacific Southwest Region does not require that logs smaller than 10 inches in diameter be painted and branded.
BLM policy directs that one end of most unprocessed logs be painted and branded. Specifically, it calls for painting and branding one end of each log with a diameter of more than 10 inches. Likewise, when a log truck carries 10 or fewer logs (regardless of the logs’ diameter), all logs on the truck are to be painted and branded. For truckloads of 11 logs or more, a minimum of 10 logs must be painted and branded on one end, regardless of the logs’ diameter. (These minimum marking thresholds are illustrated in the sketch following this list.) BLM policy allows contracting officers to implement more stringent requirements, such as requiring purchasers to paint and brand all logs harvested on an individual timber sale regardless of size or number, but it does not allow contracting officers to waive the marking requirement.
Penalizing violators. Both agencies have penalties for violating the export and substitution ban. Forest Service penalties are described in agency policy and in agency contract provisions, and include imposing penalties, cancelling contracts, and debarring purchasers from bidding on future Forest Service timber sales. BLM penalties are described in agency contract provisions only, and include contract cancellation and recovery of damages.
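To make the BLM marking thresholds concrete, the following sketch computes the minimum number of logs on a single truckload that must be painted and branded under a simplified reading of the policy described above. The function name and the per-truckload framing are ours, and actual contract terms may be stricter.

```python
def minimum_logs_to_mark(log_diameters_inches):
    """Minimum logs on one truckload to paint and brand (simplified).

    Encodes three rules from BLM policy as described in this report:
    every log over 10 inches in diameter must be marked; loads of 10
    or fewer logs are marked in full; and loads of 11 or more must
    have at least 10 marked logs, regardless of diameter.
    """
    n = len(log_diameters_inches)
    over_10_inches = sum(1 for d in log_diameters_inches if d > 10)
    if n <= 10:
        return n                    # small loads: mark every log
    return max(over_10_inches, 10)  # per-log rule plus the 10-log floor

# A 14-log load with six logs over 10 inches still requires 10 marked logs.
load = [8, 9, 12, 14, 9, 11, 13, 8, 7, 12, 9, 15, 8, 9]
print(minimum_logs_to_mark(load))  # 10
```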
In addition, many Forest Service and BLM officials said that general timber sale administration policies—those aimed at managing timber sales generally, regardless of export issues—help address the risk of illegal federal timber export and substitution. Both agencies’ policies for timber sale administration include mechanisms for monitoring various activities associated with federal timber sales, including periodically inspecting timber harvest operations at active logging sites and observing log trucks carrying cut timber from logging sites to ensure they follow designated haul routes. Many officials we spoke with from both agencies said that such periodic inspections and consistent contact with logging operators help prevent and detect illegal export or substitution of federal timber.
However, Forest Service and BLM policies related to three areas—surveillance, certification requirements, and investigating potential violations—are outdated or unclear, or in some cases have not been fully implemented. The agencies also have not reviewed their policies for continued relevance and effectiveness as called for by federal internal control standards.
Surveillance. Forest Service policy directs each Forest Service region with forests subject to the export ban to conduct surveillance and establish procedures, training, and other controls for the surveillance program in the region—stating that, at a minimum, regional standards must include monthly surveillance. However, three of the six regions subject to the ban have not established surveillance procedures because, according to regional officials, they have no access to ports and therefore the policy is not relevant to them. However, Forest Service headquarters officials said the requirement is relevant to all regions having forests subject to the ban, because federal logs originating from regions without ports could be transported across regions and exported from another region. These headquarters officials said that more clarity in the agency’s policy about establishing regional surveillance procedures may be helpful to the regions.
The remaining Forest Service regions subject to the ban—the Pacific Southwest, Pacific Northwest, and Northern regions, each of which contains log export facilities—established procedures as called for by national policy but do not conduct surveillance on a monthly basis. The Pacific Southwest Region’s procedures call for monthly surveillance of export facilities in accordance with national policy. However, the Pacific Northwest Region’s procedures call for quarterly surveillance rather than monthly surveillance. The Northern Region delegates responsibility for surveillance to a national forest in the Pacific Northwest Region. We reviewed surveillance inspection reports from calendar year 2017 and found that, during that year, the Pacific Southwest Region conducted from one to nine inspections of each of the six facilities regional officials identified as exporting logs—less than the monthly surveillance called for by regional and national policy. Officials from the Pacific Northwest Region provided us calendar year 2017 surveillance information for one of the region’s six facilities that exported logs that year. For that facility, Forest Service officials conducted surveillance seven times in 2017, including at least one inspection per quarter, which is in accordance with regional policy but not national policy. Officials from both regions said they view the frequency with which they conduct surveillance to be appropriate. For example, officials from the Pacific Southwest Region said that when a port is actively exporting timber, they conduct surveillance at least once per month, as required by policy. Officials from the Pacific Northwest Region said their frequency of surveillance is appropriate because they view the likelihood of export violations as low and they have competing agency priorities. (A simple way to check inspection records against the monthly standard is shown in the sketch at the end of this discussion.)
BLM policy does not call for surveillance of log export facilities. However, officials from BLM’s Coos Bay District, which has two log export facilities, have conducted surveillance since the 1970s as a way to help detect illegal timber export, according to BLM documents and officials. Based on our review of 2017 surveillance inspection reports, BLM officials inspected one export facility twice and the other facility seven times during that year. Figure 3 shows an example of unprocessed logs at one of the export facilities in Coos Bay, Oregon.
Some officials from both agencies said they may in some cases be unable to conduct surveillance within export facilities because they do not have clear authority to enter these facilities. BLM headquarters officials said BLM did not develop a policy calling for surveillance because the agency did not know whether it had the authority to enter log export facilities and therefore was not confident that such a policy could be carried out. Some officials from both agencies said they generally have been granted access but noted that this is subject to the willingness of the facility owners. Forest Service and BLM headquarters officials similarly said the agencies generally do not have legal authority to board ships or to inspect closed shipping containers to look for federal logs.
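As a simple illustration of how inspection records could be checked against the monthly surveillance standard discussed above, the following sketch flags the months of a year in which a facility had no recorded inspection. It is a hypothetical checking aid of our own construction, not an agency tool, and the inspection dates are invented.

```python
from datetime import date

def months_without_inspection(inspection_dates, year):
    """Return the months of the given year with no recorded inspection."""
    covered = {d.month for d in inspection_dates if d.year == year}
    return sorted(set(range(1, 13)) - covered)

# Seven inspections spread across 2017, at least one per quarter,
# still leave five months with no surveillance.
inspections = [date(2017, m, 15) for m in (1, 3, 4, 6, 9, 11, 12)]
print(months_without_inspection(inspections, 2017))  # [2, 5, 7, 8, 10]
```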
Certification Requirements. Both agencies’ policies direct the agencies to collect certification forms to help them determine whether timber purchasers are engaged in export or substitution. However, the agencies’ forms are outdated—the Forest Service’s certification form expired, and some BLM forms reflect legal requirements that are no longer in effect. Nevertheless, the agencies have not updated their forms or changed their policies requiring collection of these forms.
Forest Service policy states that “Prior to award, during the life of the contract, and for a period of 3 years from the termination date, the purchaser must furnish, upon request, the volume and geographic origin of unprocessed timber from private lands that was exported or sold for export.” The purchaser may submit the information on a specified Forest Service certification form or “other appropriate forms.” Forest Service regional officials from three of the six regions subject to the ban said they do not collect this information because the certification form, approved by the Office of Management and Budget, expired in 1999. Some Forest Service officials said updating and collecting the form could help prevent and detect illegal timber export and substitution by providing agency officials with information about purchasers’ activities. One senior headquarters official, however, noted that the information provided on the form relies on the purchaser’s self-certification, making it difficult for agency officials to verify.
BLM policy requires agency staff to collect a minimum of two certification forms for each timber sale. One is to be collected before the sale is approved, to determine whether the timber sale purchaser has substituted federal timber for exported unprocessed private timber within a specified time frame. The other is to be collected after the harvest is completed and before the contract is terminated, to determine whether purchasers are exporting BLM timber. Two additional certification forms may be collected when applicable—one prior to the sale and the other after the harvest is completed—but are not required for all sales. We reviewed documentation from a sample of 22 BLM timber sale contracts that closed in 2017 in the five western Oregon BLM districts and found that BLM collected the required certification forms for 21 of the 22 contracts. The remaining contract file was missing a required form. BLM officials said the missing form could not be located.
However, the two certification forms BLM can collect before approving a timber sale reflect legal requirements that are no longer in effect. According to the 1997 act, a purchaser may not purchase unprocessed federal timber if “such person has, during the preceding 24-month period, exported unprocessed timber originating from private lands.” However, the two BLM certification forms instruct the purchaser to provide this information for the preceding 12-month period. Senior BLM officials acknowledged the inconsistency between these forms and the current legal requirement. They said that the 12-month time frame specified in the certification forms reflects the BLM regulations issued to implement the appropriations act export restrictions in the 1970s.
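The practical effect of the 24-month versus 12-month discrepancy can be shown with a short sketch. This is a simplified eligibility check of our own construction, approximating months as 30 days; it is not an agency procedure or the statute's exact terms.

```python
from datetime import date, timedelta

def eligible_under_lookback(last_private_export, purchase_date, lookback_months):
    """True if the most recent private-land export falls outside the window."""
    window_start = purchase_date - timedelta(days=30 * lookback_months)
    return last_private_export < window_start

purchase = date(2018, 6, 1)
last_export = date(2017, 1, 15)   # roughly 16 months before the purchase

print(eligible_under_lookback(last_export, purchase, 24))  # False: barred by the 1997 act
print(eligible_under_lookback(last_export, purchase, 12))  # True: a 12-month form misses it
```

As the example shows, a purchaser who exported private timber 16 months before a sale would pass the 12-month check on BLM's current forms even though the 1997 act's 24-month restriction applies.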
Investigating Potential Violations. Both agencies have policies for investigating potential export violations. The Forest Service’s policy for investigating export violations states that, upon finding a violation, the contracting officer should contact law enforcement and prepare a report about the violation, including any planned follow-up actions. Forest Service headquarters officials said that it is unclear whether this policy applies only in cases where export violations have been substantiated or is to be used in instances where violations are suspected but not confirmed. BLM headquarters officials said that their personnel are to use policies detailed in the agency’s standard contract administration procedures, which cover all timber sale administration violations, to investigate potential and substantiated export violations. These procedures provide officials discretion in the actions they take. For example, the procedures state that “many such violations may simply be corrected with good verbal communications between the BLM and purchaser representatives. Other violations require more forceful action and complete documentation of such actions.”
The agencies differ in the extent to which they define what conduct constitutes an export violation. Forest Service policies do not define export; however, its regulations do, stating that export can occur at any of several points—when a person enters into an agreement to convey logs to another country, when logs are placed in an export facility in preparation for shipment outside the United States, or when logs are placed on a ship, train, or other transport destined for a foreign country. BLM policies and regulations do not define the term export or state what constitutes an export violation. Officials from both agencies said that determining whether a violation has occurred requires judgment on the part of agency staff. For example, according to these officials, finding logs in an export facility may constitute a violation, but would require the agency to determine whether the logs were being prepared for shipment outside the United States. Officials from both agencies said they would benefit from a clear definition of export violation.
In addition, the agencies do not have up-to-date information about sourcing areas, which is used to determine substitution violations. Under the 1997 act, manufacturers may not engage in substitution—that is, exporting timber from private lands while purchasing federal timber to supply their mills. However, it is not considered substitution if a company purchases federal timber from within a particular “sourcing area” and exports nonfederal timber harvested from areas outside the sourcing area. Sourcing areas outside Washington State are subject to Forest Service or BLM approval, and the agencies are required by law to review them at least every 5 years. Forest Service headquarters officials said they had not reviewed sourcing areas for at least 20 years, and said that over this time, many timber companies with approved sourcing areas have gone out of business or no longer purchase national forest timber. Forest Service headquarters officials said that they did not maintain lists of sourcing areas, and none of the six Forest Service regions subject to the ban had information about sourcing areas. BLM provided us a list of sourcing areas identified by the Forest Service, but the list dates to 1992. Moreover, many Forest Service and BLM officials we interviewed said they were unfamiliar with the concept of substitution and sourcing areas. A few officials said identifying sourcing areas may no longer be relevant given the changes in the organizational structure of timber companies and the resulting lower likelihood of substitution.
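One way to see the sourcing-area exception is as a simple boolean rule. The sketch below encodes a simplified reading of the substitution provisions described above; it ignores the acts' many qualifications, including the special treatment of Washington State, and is illustrative only.

```python
def is_prohibited_substitution(purchases_federal_timber,
                               exports_private_timber,
                               private_timber_from_inside_sourcing_area):
    """Simplified reading: exporting private timber while purchasing
    federal timber is substitution, unless the exported private timber
    was harvested outside the purchaser's approved sourcing area."""
    if not (purchases_federal_timber and exports_private_timber):
        return False
    return private_timber_from_inside_sourcing_area

# A firm buying federal timber in eastern Oregon while exporting private
# timber harvested in Washington, outside its sourcing area, is not
# engaged in prohibited substitution under this reading.
print(is_prohibited_substitution(True, True, False))  # False
print(is_prohibited_substitution(True, True, True))   # True
```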
According to the Standards for Internal Control in the Federal Government, management should implement control activities through policies, including by periodically reviewing policies, procedures, and related control activities for continued relevance and effectiveness in achieving an entity’s objectives or addressing related risks. Forest Service officials said the agency has not reviewed its policies specific to export and substitution since the enactment of the 1997 act, largely because of competing priorities and the officials’ view that the likelihood of illegal export or substitution is low. Nevertheless, these officials agreed that it would be beneficial for the Forest Service to review and update its policies, especially in light of the significant changes to the timber economy in the past 2 decades. BLM officials said they reviewed the agency’s export regulations in 2010, but this effort did not include a review of log export policies. They said they did not believe such a review would be useful until new regulations are issued, since it is important that policies conform with regulations. These officials noted that BLM’s Oregon/Washington State Office updated some of its policies in 2016, but the officials did not indicate the extent to which the policies were reviewed for relevance and effectiveness—and, as noted, some BLM policies appear unclear or are inconsistent with the 1997 act. By reviewing agency policies and making changes to them as necessary, in accordance with applicable regulations, the Forest Service and BLM will have better assurance that their policies are relevant and effective for addressing the risk of illegal timber export and substitution.
Conclusions
For 50 years, Congress has restricted the export and substitution of federal timber from the western United States. Since the restrictions were put in place, substantial changes to the timber economy have occurred, and agency officials and stakeholders view the likelihood of illegal timber export and substitution as low. The Forest Service and BLM have various regulations, policies, and procedures to carry out the ban. However, the agencies did not issue new regulations as required by the Forest Resources Conservation and Shortage Relief Act of 1997 and have not obtained legislative relief from this requirement. As a result, the agencies are relying on regulations issued before 1995. Without issuing new coordinated and consistent regulations or obtaining legislative relief, the Forest Service and BLM will continue to be out of compliance with the regulation provisions of the 1997 act.
Further, some agency policies are outdated or unclear. For example, Forest Service policy calls for collecting a certification form that expired in 1999, and BLM policy does not clearly define what constitutes a violation of the export ban. The Forest Service and BLM have not reviewed their policies for continued relevance and effectiveness, consistent with federal internal control standards. By reviewing agency policies and making changes to them as necessary, the Forest Service and BLM will have better assurance that their policies are relevant and effective for addressing the risk of illegal timber export and substitution.
Recommendations for Executive Action
We are making four recommendations, including two to the Forest Service and two to the BLM:
The Chief of the Forest Service should determine whether new regulations governing timber export and substitution are appropriate. If the agency determines new regulations are appropriate, it should issue them in accordance with the 1997 act, in consultation with BLM. Otherwise, the agency should seek legislative relief from the act’s requirement. (Recommendation 1)
The Director of the BLM should determine whether new regulations governing timber export and substitution are appropriate. If the agency determines new regulations are appropriate, it should issue them in accordance with the 1997 act, in consultation with the Forest Service. Otherwise, the agency should seek legislative relief from the act’s requirement. (Recommendation 2)
The Chief of the Forest Service should review agency policies for continued relevance and effectiveness in addressing the risk of illegal timber export and substitution, and based on that review—and in accordance with applicable regulations—should issue new policies as necessary. (Recommendation 3)
The Director of the BLM should review agency policies for continued relevance and effectiveness in addressing the risk of illegal timber export and substitution, and based on that review—and in accordance with applicable regulations—should issue new policies as necessary. (Recommendation 4)
Agency Comments
We provided a draft of this report for review and comment to the Departments of Agriculture and the Interior. The departments provided written comments, which are reproduced in appendixes I and II of this report. The Forest Service, responding on behalf of the Department of Agriculture, stated in its written comments, and in a subsequent e-mail from the Forest Service audit liaison, that it generally concurred with our findings and recommendations. The Forest Service stated that it will coordinate with BLM to determine the next best steps in moving ahead in administering the export law.
In its written comments, the Department of the Interior concurred with the recommendations we directed to BLM. Regarding our recommendation related to regulations, Interior stated that BLM will review its regulations to identify inconsistencies with the 1997 act, and if it determines new regulations are appropriate, will begin consultation with the Forest Service to maximize consistency between the agencies to minimize the impact to federal timber purchasers. Regarding our recommendation related to policies, Interior stated that BLM will review its export and substitution policies as well as its relevant contracts and forms for any immediate updates needed to conform with the 1997 act, and will ensure the policies are updated in conjunction with any new regulations.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretaries of Agriculture and the Interior, the Chief of the Forest Service, the Director of the BLM, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff members have any questions regarding this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to this report are listed in appendix III.
Appendix I: Comments from the Department of Agriculture
Appendix II: Comments from the Department of the Interior
Appendix III: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the individual named above, Steve Gaty (Assistant Director), Ulana M. Bihun (Analyst-in-Charge), Mark Braza, Justin Fisher, Richard P. Johnson, and Kyle Stetler made key contributions to this report. Important contributions were also made by Tara Congdon, Barb El Osta, Kimberly Gianopoulos, and Dan Royer.
Each year, the federal government sells millions of dollars of timber from federal forests. Federal law generally prohibits the export of unprocessed logs harvested from federal lands in the western United States. It also prohibits substitution of federal logs for privately sourced timber in domestic mills when the privately sourced timber is exported without processing.
GAO was asked to examine the issue of illegal federal timber export and substitution. This report (1) describes the extent to which the Forest Service and BLM identified violations of the timber export and substitution ban that occurred from 2007 through 2017 and the likelihood of violations, and (2) examines the agencies' regulations, policies, and practices to help prevent, detect, and respond to illegal timber export and substitution.
GAO reviewed laws, regulations, and policies regarding illegal timber export and substitution; compared agency regulations with laws, and agency policies with federal internal control standards; and interviewed agency officials and stakeholders—such as trade groups and state officials—selected to provide a range of perspectives.
What GAO Found
The Forest Service, within the Department of Agriculture, and the Bureau of Land Management (BLM), within the Department of the Interior, found no violations of the ban on federal timber export and substitution from 2007 through 2017, according to agency documents and officials. All agency officials and stakeholders GAO interviewed said the likelihood of illegal timber export and substitution is low, citing several reasons, including economic factors associated with log markets, which have changed over the years. For example, many officials and stakeholders said the timber harvested from federal lands is smaller and of lower quality compared to what was harvested in the 1990s, making it less likely to be exported.
The Forest Service and BLM did not issue new regulations related to illegal federal timber export and substitution, and some agency policies related to export and substitution are outdated or unclear. The agencies did not issue regulations to implement the Forest Resources Conservation and Shortage Relief Act of 1997, as required by the act. Without issuing new regulations or obtaining legislative relief from this requirement, the agencies will continue to be out of compliance with the act. The agencies have policies to help prevent, detect, and respond to illegal timber export and substitution, such as policies that require the marking of logs to identify them as coming from federal lands. However, the agencies have not reviewed their policies for continued relevance and effectiveness as called for by federal standards for internal control, and some policies are outdated or unclear. For example, Forest Service policy calls for the collection of a certification form to help determine whether timber purchasers are engaged in export or substitution, but the form expired in 1999. Also, it is unclear what BLM considers a violation of the export ban because agency policy does not define what constitutes a violation. Forest Service officials said the agency has not reviewed its policies since 1997, largely due to competing priorities, but agreed it would be beneficial to do so. BLM officials said they reviewed the agency's export regulations in 2010, but this effort did not include a review of timber export policies. By reviewing agency policies and making changes as necessary, the agencies will have better assurance that their policies are relevant and effective for addressing the risk of illegal timber export and substitution.
What GAO Recommends
GAO recommends that the Forest Service and BLM issue new regulations or seek legislative relief from the requirement to do so, and review their policies for relevance and effectiveness and issue new policies as necessary. The agencies generally agreed with GAO's recommendations. |
Background
Federal agencies and our nation’s critical infrastructures—such as energy, transportation systems, communications, and financial services— are dependent on computerized (cyber) information systems and electronic data to carry out operations and to process, maintain, and report essential information. The information systems and networks that support federal operations are highly complex and dynamic, technologically diverse, and often geographically dispersed. This complexity increases the difficulty in identifying, managing, and protecting the myriad of operating systems, applications, and devices comprising the systems and networks.
Cybersecurity professionals can help to prevent or mitigate the vulnerabilities that could allow malicious individuals and groups access to federal IT systems. The ability to secure federal systems depends on the knowledge, skills, and abilities of the federal and contractor workforce that uses, implements, secures, and maintains these systems.
However, the Office of Management and Budget (OMB) has noted that the federal government and private industry face a persistent shortage of cybersecurity and IT talent to implement and oversee information security protections to combat cyber threats. In addition, the RAND Corporation and the Partnership for Public Service have reported that there is a nationwide shortage of cybersecurity experts, in particular in the federal government. According to these reports, this shortage of cybersecurity professionals makes securing the nation’s networks more challenging and may leave federal IT systems vulnerable to malicious attacks. The persistent shortage of cyber-related talent has given rise to efforts to identify and assess the federal cybersecurity workforce.
The National Initiative for Cybersecurity Education (NICE) Created a Framework for Defining Cybersecurity Workforce Positions
NICE, led by the National Institute of Standards and Technology (NIST), is a partnership among government, academia, and the private sector focused on cybersecurity education, training, and workforce development. The mission of NICE is to energize and promote a robust network and an ecosystem of cybersecurity education, training, and workforce development. NICE fulfills this mission by coordinating with government, academic, and industry partners to build on existing successful programs, facilitate change and innovation, and bring leadership and vision to increase the number of skilled cybersecurity professionals that are helping to keep our nation secure. NICE issued an initial draft of the National Cybersecurity Workforce Framework (National Framework) for public comment in September 2011 and the final version 1.0 in April 2013. The National Framework was intended to help identify, describe, and assess all cybersecurity roles within an organization. The National Framework organized cybersecurity job functions into 7 categories and 31 specialty areas:
Category: a high-level grouping of common cybersecurity functions. Categories group together work and workers that share common major functions, regardless of job titles or other occupational terms.
Specialty area: an area of concentrated work, or function, within cybersecurity and related work. Related specialty areas are grouped together into categories. In version 1.0 of the National Framework, each specialty area was also associated with a distinct set of cybersecurity-related tasks and knowledge, skills, and abilities.
In November 2016, NIST issued draft special publication 800-181 which revised and replaced earlier versions of the National Framework. The draft was co-authored by NIST, the Department of Defense (DOD), and DHS and was renamed the NICE Cybersecurity Workforce Framework (NICE Framework). In August 2017, NIST published the final version of the special publication.
The NICE Framework is intended to help the federal government better identify cybersecurity workforce needs by enabling agencies to examine specific cybersecurity work roles, and identify personnel skills gaps, rather than merely examine the number of vacancies by job series. The NICE Framework added 2 specialty areas within the 7 categories. Figure 1 identifies the 7 categories and the 33 specialty areas in the NICE Framework.
The NICE Framework also introduced the concept of work roles as the third component of cybersecurity job functions. Work roles provide a more detailed description of the roles and responsibilities of cybersecurity job functions than do the category and specialty area components of the NICE Framework. The NICE Framework defines one or more work roles within each specialty area. For example, as depicted in figure 2, the NICE Framework defined 11 work roles within the 7 specialty areas in the “Securely Provision” category.
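To make the framework's three-tier structure concrete, the following sketch models the category, specialty area, and work role relationship as a simple nested data structure. This is only an illustration of the hierarchy described above, not an official NICE artifact; the specialty area and work role names shown are examples drawn from the "Securely Provision" category.

```python
from dataclasses import dataclass, field

@dataclass
class WorkRole:
    """Most granular unit: a detailed set of cybersecurity responsibilities."""
    name: str

@dataclass
class SpecialtyArea:
    """An area of concentrated cybersecurity work; contains one or more work roles."""
    name: str
    work_roles: list = field(default_factory=list)

@dataclass
class Category:
    """High-level grouping of common cybersecurity functions."""
    name: str
    specialty_areas: list = field(default_factory=list)

# Illustrative slice of the "Securely Provision" category; the specialty area
# and work roles below are examples, not the full NICE list.
securely_provision = Category(
    name="Securely Provision",
    specialty_areas=[
        SpecialtyArea(
            name="Systems Architecture",  # example specialty area
            work_roles=[
                WorkRole("Enterprise Architect"),  # example work role
                WorkRole("Security Architect"),    # example work role
            ],
        ),
    ],
)

# Walk the hierarchy: category -> specialty areas -> work roles.
for area in securely_provision.specialty_areas:
    for role in area.work_roles:
        print(f"{securely_provision.name} / {area.name} / {role.name}")
```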
OPM Has Led Several Efforts to Assess the Federal Cybersecurity Workforce
In October 2012, in coordination with a NICE interagency working group, OPM published a cybersecurity employment coding structure that aligned with the initial draft version of the National Cybersecurity Workforce Framework. The coding structure assigned a unique 2-digit cybersecurity employment code to each category and specialty area in the NICE Framework. According to OPM, the coding of federal positions with cybersecurity functions was intended to enhance agencies’ ability to identify critical cybersecurity workforce needs, recruit and hire employees with needed skills, and provide appropriate training and development opportunities to cybersecurity employees.
In July 2013, OPM initiated the Special Cybersecurity Workforce Project to support federal efforts to reduce the cybersecurity workforce skills gaps across agencies. Agencies were to use the definitions of cybersecurity work, as described in the National Cybersecurity Workforce Framework, along with OPM’s cybersecurity coding structure, to code positions performing cybersecurity work by the end of fiscal year 2014. The project was intended to enable agencies to identify and address their needs for cybersecurity skill sets to meet their missions.
In July 2016, OPM and OMB issued the Federal Cybersecurity Workforce Strategy. The strategy details government-wide actions to identify, expand, recruit, develop, retain, and sustain a capable and competent workforce in key functional areas to address complex and ever-evolving cyber threats. The strategy identifies a number of actions intended to address cybersecurity workforce challenges in: (1) identifying cybersecurity workforce needs, (2) expanding the cybersecurity workforce through education and training, (3) recruiting and hiring highly skilled talent, and (4) retaining and developing highly skilled talent.
The strategy states that OPM is to expand cybersecurity position coding and agencies are to conduct strategic workforce planning. These actions are related to the requirements of the Federal Cybersecurity Workforce Assessment Act of 2015, under which OPM is to establish an employment coding structure and agencies are to identify and report on cybersecurity workforce critical needs.
Figure 3 depicts a timeline of recent efforts to assess the federal cybersecurity workforce.
OPM Issued a Cybersecurity Position Coding Structure, Procedures, and Progress Report Later Than the Deadlines Established in the Act
OPM Developed a 3-digit Cybersecurity Coding Structure
As required by the Federal Cybersecurity Workforce Assessment Act of 2015, OPM developed a cybersecurity coding structure under NICE, issued guidance to implement the coding structure to identify all federal civilian cybersecurity positions, and provided a progress report to Congress on the implementation of the act. However, the coding structure and procedures were issued later than the act’s deadlines because OPM was working with NIST to align the structure and procedures with the draft version of the NICE Cybersecurity Workforce Framework, which NIST issued later than planned. The delays in issuing the coding structure and procedures have extended the expected time frames for implementing subsequent provisions of the act.
The Federal Cybersecurity Workforce Assessment Act of 2015 (the act) required OPM, in coordination with NIST, to develop a cybersecurity coding structure by June 15, 2016.
OPM addressed this requirement by developing a 3-digit cybersecurity employment coding structure that fully aligns with the NICE Cybersecurity Workforce Framework. OPM issued version 1 of the coding structure on November 15, 2016, 5 months after the deadline established in the act.
The coding structure assigns a unique 3-digit cybersecurity employment code to each work role outlined in the draft version of the NICE Cybersecurity Workforce Framework. Table 1 presents an example of the 3-digit employment codes associated with one category—”Securely Provision”—and its component specialty areas and work roles.
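As an illustration of how the 3-digit structure can be applied, the sketch below maps a few work roles to employment codes and performs a simple lookup. The specific code values shown are placeholders assumed for illustration only and should not be read as OPM's authoritative published codes.

```python
# Illustrative mapping of NICE work roles to 3-digit OPM employment codes.
# The code values below are assumed for illustration; consult OPM's published
# coding structure for the authoritative assignments.
EMPLOYMENT_CODES = {
    "Authorizing Official": "611",       # assumed value
    "Security Control Assessor": "612",  # assumed value
    "Security Architect": "652",         # assumed value
}

def lookup_code(work_role: str) -> str:
    """Return the 3-digit employment code for a cybersecurity work role.

    Positions that perform no cybersecurity functions receive code "000"
    under the OPM coding guidance discussed later in this report.
    """
    return EMPLOYMENT_CODES.get(work_role, "000")

print(lookup_code("Security Architect"))  # "652"
print(lookup_code("Budget Analyst"))      # "000" -- not a cybersecurity role
```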
Although the act had called for the coding structure to be established by June 15, 2016, OPM officials explained that the coding structure was issued 5 months later than the established deadline because the structure was to be aligned with the NICE Cybersecurity Workforce Framework. However, the draft version of the NICE Framework was not issued until November 2, 2016.
According to NIST officials, the issuance of the draft NICE Framework was delayed because some of the knowledge, skills, and abilities (KSA) and task statements that had been originally developed by the intelligence community were marked as sensitive. NIST delayed publication of the draft NICE Framework until officials in the intelligence community had removed any sensitivity designations on the KSAs and task statements.
OPM Developed Government-wide Procedures for Assigning Codes to Civilian Cybersecurity Positions
The act required OPM, in coordination with NIST, DHS, and the Office of the Director of National Intelligence (ODNI), to establish procedures to assist agencies in implementing the cybersecurity coding structure. OPM was to develop the procedures no later than September 18, 2016.
In accordance with this requirement, OPM coordinated with NIST, DHS, and ODNI to develop its Guidance for Assigning New Cybersecurity Codes to Positions with Information Technology, Cybersecurity, and Cyber-Related Functions. The guidance provides instructions on how agencies are to assign the 3-digit cybersecurity employment codes to filled and vacant positions, including required activities for identifying and assigning codes to cybersecurity positions. The guidance also referenced additional updates and guidance that were to be posted on OMB’s MAX website.
OPM posted the guidance on the Chief Human Capital Officers Council website on January 4, 2017, 4 months after the deadline established in the act. OPM officials said they delayed issuance of the guidance so that it could be released in coordination with the cybersecurity coding structure, which was dependent on the release of the draft NICE Framework.
OPM Submitted a Progress Report to Congress
The act required OPM to report on the progress of agencies’ implementation of the act’s requirements, as well as OPM’s efforts to develop a coding structure and government-wide coding procedures. OPM was to submit the progress report to the appropriate congressional committees no later than June 15, 2016.
OPM prepared and submitted its progress report to the congressional committees identified in the act on July 12, 2016, about 1 month after the act’s deadline. Among other things, the report stated the following:
OPM was coordinating closely with NICE to revise the cybersecurity coding structure to align with the latest version of the NICE Framework, which was scheduled to be finalized in September 2016.
OPM had begun an education campaign to inform the federal community of the act and its requirements and was collaborating with stakeholders and interagency partners on ideas for how to implement the requirements of the act.
An official in OPM’s Employee Services division stated that OPM was delayed in completing and submitting the report to congressional committees due to the agency’s internal review process.
OPM’s Delays in Completing Required Activities Have Resulted in Later Implementation of Other Provisions of the Act
Because the deadlines for agencies to implement certain provisions of the act are contingent on the completion of earlier activities, delays by OPM in issuing the revised cybersecurity coding structure and the government-wide coding procedures have extended the due dates for agencies to implement other provisions of the act by about 4 months. Specifically:
The act required agencies to establish procedures for identifying all IT or cybersecurity positions and for assigning the appropriate employment code to each position no later than 3 months after OPM issued the government-wide coding procedures. If OPM had issued the coding procedures by September 2016 as the act required, agencies would have been required to establish their coding procedures by December 2016. However, because OPM did not issue the government-wide procedures until January 2017, agencies did not have to develop their coding procedures until April 2017.
Similarly, agencies were to assign employment codes to all of their cybersecurity positions no later than 1 year after establishing their coding procedures. Had agencies been required to establish their procedures by December 2016, they would have been required to assign the employment codes by December 2017. However, because they did not have to develop coding procedures until April 2017, they were therefore required to complete the assignment of employment codes by April 2018.
Further, agencies are required to identify and report on cybersecurity work roles of critical need beginning 1 year after the employment codes are assigned. If agencies had been required to assign employment codes by December 2017, they would have to begin reporting on their critical needs by December 2018. However, because they did not have to complete the assignment of employment codes until April 2018, they are therefore required to identify and begin reporting on critical needs by April 2019.
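The cascading deadlines described above reduce to simple date arithmetic: each downstream due date shifts by the same amount as the upstream delay. The following is a minimal sketch of that arithmetic, using the actual January 2017 issuance of OPM's government-wide procedures as the starting point; the month-shifting helper is our own simplification for illustration.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day-of-month edge cases are
    omitted for this illustration)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# OPM issued government-wide coding procedures in January 2017.
opm_procedures = date(2017, 1, 1)

agency_procedures = add_months(opm_procedures, 3)      # agency procedures: April 2017
codes_assigned = add_months(agency_procedures, 12)     # codes assigned: April 2018
critical_needs = add_months(codes_assigned, 12)        # critical-needs reporting: April 2019

for label, due in [("Agency coding procedures due", agency_procedures),
                   ("Employment codes assigned by", codes_assigned),
                   ("Critical-needs reporting begins", critical_needs)]:
    print(f"{label}: {due:%B %Y}")
```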
Figure 4 depicts how delays in earlier activities have resulted in later implementation of subsequent provisions of the act.
Most CFO Act Agencies Submitted Baseline Assessments, but the Results May Not Be Reliable
Most of the 24 CFO Act agencies conducted baseline assessments identifying the extent to which their cybersecurity employees held certifications and submitted them to Congress as required by the act. However, 3 agencies did not complete the assessments for various reasons, such as a lack of resources and tools to do so. Further, of the 21 agencies that did complete the assessments, 4 agencies did not address all of the reportable information, such as the extent to which personnel without certifications were ready to obtain them or strategies for mitigating any gaps. In addition, the assessments conducted by the 21 agencies did not contain complete, comprehensive, or consistent information on the certifications held by agencies’ cybersecurity employees due to limitations in the ability of the agencies to collect the needed information. As a result, the information collected and reported by most agencies about the certifications held by agency cybersecurity personnel may be of limited value for assessing the credentials and qualifications of their cybersecurity workforces.
Most CFO Agencies Conducted Baseline Assessments but Several Agencies Did Not Include All Reportable Information
The Federal Cybersecurity Workforce Assessment Act of 2015 required agencies to prepare baseline assessment reports identifying the extent to which their cybersecurity workforces held industry-recognized certifications as identified under NICE. OPM’s August 2016 memorandum on the requirements and time frames of the act further stated that agencies were to report the results of the assessments to the appropriate congressional committees of jurisdiction by December 2016.
In the absence of a NICE-defined list of appropriate industry-recognized certifications, 21 of the 24 agencies covered by the CFO Act had conducted baseline assessments of the certifications held by their cybersecurity workforces and submitted the baseline assessment reports to Congress as of March 2018. Table 2 shows the status of the agencies’ submissions of the baseline assessments as of March 2018.
Three agencies did not conduct baseline assessments:
Instead of conducting a baseline assessment as called for by the act, DHS submitted its 2016 Comprehensive Cybersecurity Workforce Update to Congress in March 2017. However, this report did not include a baseline assessment of the department’s workforce as called for by the act. The report noted that DHS’s Office of the Chief Human Capital Officer lacked the ability to view or easily produce consolidated reports on employee certifications from all DHS components, and lacked consistent and detailed information about the readiness of additional employees to complete certification exams and specific certifications identified by components as being required for success in their positions. The report further noted that the department was working with cybersecurity subject matter experts from each component to revalidate the certifications most important to the work of their organizations and to organize the information according to the NICE Framework.
The Department of Housing and Urban Development (HUD) prepared an assessment of IT specialist skills, but did not conduct a baseline assessment that identified the extent to which its cybersecurity workforce held industry-recognized certifications. Officials in HUD’s Office of the Chief Information Officer (CIO) and Office of the Chief Human Capital Officer stated that the department intends to conduct a workforce assessment of its cybersecurity employees. The officials did not provide a time frame for when the assessment would be conducted.
The CIO and Chief Human Capital Officer of the Small Business Administration (SBA) stated that the agency has been unable to complete a baseline assessment due to resource constraints. The officials added that the agency intends to conduct workforce planning efforts in the future. However, they did not provide a time frame for when the assessment would be conducted.
By not conducting baseline assessments, DHS, HUD, and SBA lack valuable information about the knowledge and skills of their cybersecurity employees. This lack of information limits the agencies’ ability to effectively gauge the competency of individuals who are charged with ensuring the confidentiality, integrity, and availability of federal information and information systems. Additionally, by not conducting or reporting on the assessment, the agencies have not provided Congress the information it required in the act regarding existing credentials and certifications of personnel with information technology, cybersecurity, or other cyber-related job functions.
Not All Agencies That Prepared Baseline Assessment Reports Addressed Reportable Information
The act required agencies’ baseline assessment reports to identify the following:
the percentage of personnel with cybersecurity job functions who held the appropriate industry-recognized certifications as identified under NICE;
the level of preparedness of cybersecurity personnel without existing credentials to take certification exams; and
a strategy for mitigating any gaps in (1) personnel holding industry-recognized certifications and (2) the preparedness of personnel without existing credentials to take certification exams.
In September 2016, OPM provided a template that agencies could use in reporting on their baseline assessments. Using the template, agencies could report on the number and percentage of surveyed staff with current certifications and the number and percentage of staff without such certifications that were planning to obtain them within the next year. Human resource strategists and program management officials in OPM’s Employee Services division stated that the template was a guide to help agencies with the reporting process; however, agencies were not required to use the template or report their results in the format described in the template.
The 21 CFO agencies that prepared baseline assessment reports did not always address the reportable information in their baseline assessments. Specifically, of the 21 assessments that the CFO agencies had prepared, all of the assessments included information on the percentage of cybersecurity personnel holding certifications; 17 assessments discussed the level of preparedness for personnel without certifications to take certification exams; and 20 included strategies for mitigating certification gaps. Table 3 shows the extent to which the 21 agencies’ assessments reported this information.
Moreover, 4 of the 21 agencies did not address all reportable information in their baseline assessments. Specifically:
The Department of Commerce did not assess and did not report information on (1) the level of preparedness for personnel who did not hold certifications to take certification exams or (2) strategies for mitigating gaps. Officials in Commerce’s Office of Human Resources Management and Office of the CIO stated that information on the level of preparedness and gaps was not readily available because they have not fully identified and coded the department’s cybersecurity workforce, and there is no federal requirement for cybersecurity personnel to hold certifications. The officials stated that they did not have the time or resources to assess these reporting requirements.
Officials in the Department of Energy’s Office of the Chief Human Capital Officer stated that they did not assess the level of preparedness for personnel without certifications to take certification exams because the department does not require its cybersecurity personnel to hold certifications. As a result, they did not have criteria for identifying personnel who are prepared to take certification exams.
According to the Department of the Interior’s Principal Deputy Assistant Secretary for Policy, Management, and Budget, the department did not assess the level of preparedness for personnel without certifications to take certification exams because neither OPM nor the department currently requires certifications for these cybersecurity positions. However, the department's Office of Human Resources and Office of the Chief Information Officer are exploring options to determine the level of preparedness across its IT workforce.
According to the National Aeronautics and Space Administration’s (NASA) baseline assessment report, the agency did not assess the level of preparedness for personnel without certifications to take certification exams because the agency does not require its cybersecurity personnel to maintain certifications. The agency did not know how many of its personnel were planning to seek certifications on their own.
Data regarding the number of cybersecurity employees that hold certifications and the level of preparedness of personnel without certifications can be a useful indicator of the skills and knowledge of an agency’s cybersecurity workforce. In addition, strategies for addressing gaps can help an agency increase the skills and knowledge of its cybersecurity workforce. By not including all reportable information in the assessments, these four agencies may lack valuable information that could help them identify and meet the certification and training needs of their cybersecurity employees who are charged with protecting federal information and information systems from cyberattacks. However, as discussed later in this report, the absence of NICE identified appropriate industry-recognized certifications may have also contributed to uncertainty for agencies in their efforts to comply with the requirements of the act.
Limitations in Agency Baseline Assessments Raise Concerns About the Reliability of Information about Certifications Held by Agencies’ Cybersecurity Employees
Limitations in the 21 agencies’ baseline assessments raise concerns about the reliability of the assessments, thus constraining the conclusions that can be drawn from their results about the federal cybersecurity workforce’s certifications. The 21 agencies in our review that conducted assessments were not able to collect complete, comprehensive, or consistent information about the certifications held by their cybersecurity workforces for various reasons. As a result, these agencies had limited assurance that the certification information contained in their baseline assessment reports was reliable, thereby diminishing the usefulness of the assessments in determining the certification and training needs of their cybersecurity employees.
Agencies Were Required to Assess Cyber Employees’ Certifications before They Had Fully Defined Their Cybersecurity Workforces
As previously noted, OPM’s August 2016 memorandum on the requirements of the act stated that agencies were to report their baseline assessments to Congress by December 2016. However, according to OPM’s January 2017 coding guidance, agencies were not required to complete the assignment of the appropriate 3-digit employment codes to each position until April 2018. Consequently, agencies were required to submit their reports on the percentage of personnel performing cybersecurity functions who possessed certifications before the agencies had identified all members of their cybersecurity workforce and assigned the 3-digit cybersecurity employment codes to each position.
Because the agencies had not yet fully defined their cybersecurity workforces using the NICE Framework and the 3-digit coding structure, the 21 agencies in our review that prepared assessments did not use consistent criteria to define the population of personnel with cybersecurity job functions that were included in their baseline assessments. Examples of the criteria that these agencies used to define the target populations for their assessments included:
cybersecurity employees who had been coded with the 2-digit cybersecurity employment codes during the 2013 Special Cybersecurity Workforce Project;
employees within certain occupational series, such as the 2210 Information Technology Management series;
personnel within certain roles or organizations, such as the Office of Information Security or the Office of the CIO; or
personnel who performed cybersecurity duties for a defined percentage of the time.
As a result of not having fully defined their cybersecurity workforces prior to conducting their baseline assessments, the agencies have limited assurance that their baseline assessments reflected all relevant agency positions or personnel performing cybersecurity functions as defined by the NICE Framework.
Agencies Were Not Always Able to Obtain Certification Information from All Relevant Employees
Several agencies reported that they were not able to obtain information on certifications from all of the employees they surveyed when conducting their baseline assessments. Specifically, 6 of the 21 agencies that prepared assessments reported response rates of between 15 and 42 percent to their surveys or data calls to employees for such information. Also, officials from two agencies told us that employees’ responses to their information requests were voluntary due to union and legal concerns. As a result, these agencies have limited assurance that their baseline assessment reports conveyed comprehensive information about all agency cybersecurity personnel and the certifications that they held because of the limited response from employees.
NICE Had Not Defined Appropriate Industry-Recognized Certifications
Although the act required agencies to report on the percentage of personnel who held appropriate industry-recognized certifications as identified under NICE, NICE had not defined such a list of certifications as of the agencies’ reporting deadline of December 2016. In August 2017, a NICE official told us that the organization did not believe it was appropriate for NICE, which is led by NIST, to identify industry appropriate certifications because doing so may be perceived as endorsing certain private certifications over other certifications. Currently, the NICE website describes an effort under a NICE working group—which includes representatives from government, academia, and the private sector—to map industry-recognized certifications to work roles based on the updated NICE Framework. However, this effort has not yet been completed. According to NICE officials, the mapping of certifications to the NICE Framework is expected to be completed by November 2018.
In the absence of a defined list of industry-recognized certifications, the agencies in our review developed their own approaches for determining the certifications on which they based their assessments. Examples of agencies’ approaches included:
asking that cybersecurity staff provide input on any or all certifications using a list of certifications developed by the DHS National Initiative for Cybersecurity Careers and Studies, which was referenced in OPM’s reporting template;
using certifications identified in DOD’s Information Assurance Workforce Improvement Program; or
having the agency Office of the CIO or cybersecurity workforce-planning workgroup identify certifications to include in the assessment.
Because the baseline assessments were not based on a defined list of certifications, there is limited assurance that the assessments consistently or accurately conveyed the extent to which federal cybersecurity professionals held industry-recognized certifications that are appropriate for their job functions.
Most Agencies Did Not Require Cybersecurity Personnel to Hold Certifications
In addition, no government-wide requirement exists for cybersecurity personnel to hold certifications, and most of the agencies in our review did not require certifications. Specifically:
Although OPM guidance states that agencies may use certifications as a selective factor for some positions where specific qualifications are required, no government-wide requirement exists for positions performing cybersecurity related functions to hold certifications.
Most agencies did not require IT or cybersecurity personnel to hold certifications. Only 6 of the 24 agencies reported that they had requirements for personnel to hold an industry-recognized certification, while only one agency—DOD—required certifications for all cybersecurity positions.
As a result, the information collected by most agencies about the certifications held by agency cybersecurity personnel may be of limited value for assessing the qualifications and skills of their cybersecurity workforces.
Most CFO Act Agencies Established Coding Procedures, but Six Agencies’ Procedures Only Partially Addressed Activities Required by OPM
Almost all of the CFO Act agencies established procedures to identify all of their civilian positions and assign the appropriate cybersecurity employment codes to the positions as called for by the act. However, 6 agencies’ procedures did not fully address 1 or more of 7 activities required by OPM, such as the activities to review all encumbered and vacant positions and annotate reviewed position descriptions with the appropriate employment code. Additionally, DOD did not establish procedures for coding noncivilian cybersecurity positions. By not developing coding procedures that address all of the required activities in their procedures, these agencies may not have reasonable assurance that they will fully realize the benefits of (1) comprehensively identifying the cybersecurity workforce, and (2) applying the employment codes to meet the intended goal of defining the workforce and helping to address critical mission needs.
Most Agencies Established Coding Procedures as Required by the Act
The act required agencies to establish procedures for identifying cybersecurity positions and assigning employment codes to each position. In January 2017, OPM issued a memorandum that required agencies to establish their coding procedures by April 2017. The memorandum also required agencies to perform a number of activities to identify and assign codes to cybersecurity positions. Among others, the memorandum stated that agencies were to:
use the updated cybersecurity coding structure to find the appropriate cybersecurity employment code(s);
identify encumbered and vacant positions with cybersecurity functions;
have their CIO staff, managers, and human resources (HR) and classification staff work together to identify cybersecurity positions;
annotate reviewed position descriptions with the appropriate employment code(s);
account for the fact that cybersecurity positions will extend beyond the Information Technology Management 2210 (GS-2210) occupational series;
assign code “000” to positions that do not perform cybersecurity functions; and
assign up to three employment codes to each position, in the order of the level of criticality (these last two rules are illustrated in the sketch following this list).
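The sketch below illustrates how the last two coding rules might be checked programmatically. It is a minimal illustration only, assuming a position is represented by the list of 3-digit employment codes assigned to it; the validation function and the code values in the usage examples are our own assumptions, not part of OPM's guidance, and the criticality ranking itself is not checked here.

```python
def validate_position_codes(codes: list[str]) -> list[str]:
    """Return a list of rule violations for a position's assigned codes.

    Checks two of the OPM-required activities listed above: positions with
    no cybersecurity functions must carry code "000" (so an empty list is
    flagged), and a position may carry at most three employment codes.
    """
    errors = []
    if not codes:
        errors.append('positions with no cybersecurity functions must carry code "000"')
    if len(codes) > 3:
        errors.append("a position may carry at most three employment codes")
    for code in codes:
        if not (len(code) == 3 and code.isdigit()):
            errors.append(f"malformed employment code: {code!r}")
    return errors

print(validate_position_codes(["652"]))   # [] -> valid cybersecurity position
print(validate_position_codes(["000"]))   # [] -> valid non-cyber position
print(validate_position_codes(["611", "612", "652", "671"]))  # too many codes
```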
Most of the agencies in our review had established coding procedures. Specifically, of the 24 CFO Act agencies, 23 had established procedures. Fourteen of these 23 agencies established their procedures by April 2017 as OPM required, while the remaining 9 agencies established their procedures by March 2018.
Officials from the 9 agencies that did not complete their procedures by April 2017 gave several reasons for their late development or completion of the procedures. For example:
General Services Administration officials said that the procedures were delayed due to their internal review processes.
DOD officials said that the procedures were delayed because of the size and complexity of the processes required to identify and code the large number of civilian cybersecurity positions across the department, and because of the length and complexity of the department’s policy review processes.
In October 2017, an official in DHS’s Office of the Chief Human Capital Officer stated that the department did not plan to develop procedures until the National Finance Center (NFC) payroll systems were updated to accept the 3-digit cybersecurity codes. The NFC systems were updated to accept the new codes in December 2017, and DHS issued its procedures in March 2018.
One agency—the Department of Energy—had not established coding procedures:
An official in the Department of Energy’s Office of the Chief Human Capital Officer stated that, because responsibility for IT is not centralized under the department-level CIO organization (but rather, is distributed throughout the component agencies), the official had not determined who had the authority to issue coding procedures for the entire department. By not establishing coding procedures, the Department of Energy faces increased risk that it will not fully identify its cybersecurity workforce or assign the appropriate employment codes to each position, limiting its ability to identify cybersecurity skills gaps or work roles of critical need.
Agency Procedures Did Not Always Address Required Coding Activities
The agencies that developed coding procedures generally, but did not always, address the seven required activities that we reviewed in their procedures. Specifically, 17 of the 23 agencies that developed procedures addressed all 7 activities in their procedures, while the remaining 6 agencies partially addressed or did not address 1 or more of the 7 activities. Table 4 describes the extent to which agency procedures addressed the activities required by OPM.
The six agencies that did not address all activities required by OPM cited a variety of reasons for not including them in their coding procedures. For example:
An official in the Department of Education’s Office of Human Resources explained that it was not necessary for the coding procedures that were provided to each component to address assigning code “000” to noncybersecurity positions because the Office of Human Resources would assign the “000” code to any position that did not have an assigned code.
An official from the National Science Foundation’s Division of Human Resources Management stated that not addressing all activities may have been an oversight by the agency.
Officials in NASA’s Office of Human Capital Management and its Office of the CIO said they felt that it was unnecessary to address assigning code “000” to noncybersecurity positions in the agency’s coding procedures because the agency’s existing guidance for assigning the old 2-digit codes specified that such positions should be coded with “00.”
By not addressing all of the activities required by OPM in their procedures, these 6 agencies lack assurance that the activities will be performed or performed consistently throughout their organizations.
DOD Did Not Establish Coding Procedures for Noncivilian Cybersecurity Positions
In addition to developing procedures for civilian positions, the act required DOD to establish government-wide procedures for identifying and assigning employment codes to noncivilian (i.e., military) positions with cybersecurity job functions by June 2017. The act also required DOD to establish its internal departmental procedures for military positions by September 30, 2017.
According to officials in the department’s Office of the CIO and Office of the Chief Human Capital Officer, the only military personnel not currently within DOD are in the Coast Guard (which resides within DHS). Therefore, the department planned to fulfill its requirements to establish government-wide procedures and internal departmental procedures for identifying and coding military positions by establishing a single consolidated procedure. The officials added that the consolidated procedure is to include procedures for DHS to implement the coding structure for uniformed Coast Guard personnel along with the internal DOD procedures.
However, as of February 2018, DOD had not finalized its consolidated coding procedures. In February 2018, an official in the department’s Office of the CIO stated that, because the military services use multiple human resources systems that all have to be updated to accommodate the cybersecurity employment codes, the office was working with each of the military services on guidance to meet the act’s deadlines while the services develop implementation plans for updating their human resources systems. Until DOD establishes both government-wide and DOD-specific procedures for identifying and coding noncivilian cybersecurity positions, increased risk exists that DOD and DHS will not be able to identify and code all positions in their noncivilian cybersecurity workforce, limiting the departments’ ability to identify cybersecurity skills gaps or work roles of critical need in that workforce.
Conclusions
To implement the objectives of the Federal Cybersecurity Workforce Assessment Act of 2015, OPM and NIST, although delayed, have revised the coding structure and cybersecurity workforce framework, and developed coding procedures to support the identification and assignment of codes to federal cybersecurity positions. In addition, most CFO Act agencies have developed baseline assessments to identify cybersecurity personnel within their agencies that held certifications. Having information on the certifications held by cybersecurity employees can be a useful indicator of the skills and knowledge of an agency’s cybersecurity workforce. However, because agencies have not consistently defined the workforce and NICE had not developed a list of appropriate certifications, efforts such as conducting the baseline assessment to determine the percentage of cybersecurity personnel that hold appropriate certifications have yielded inconsistent and potentially unreliable results. By not conducting assessments or including all required information in the assessments, some of these agencies may lack valuable information that could help them identify the certification and training needs of their cybersecurity employees that are charged with protecting federal information and information systems from cyberattacks.
Lastly, while most CFO agencies have developed procedures for assigning cybersecurity codes to positions, several agencies did not address activities required by OPM. Unless those agencies address all of the activities, they may not have reasonable assurance that they are comprehensively identifying the cybersecurity workforce and applying the correct employment codes. As such, increased risk exists that the federal government will not meet its intended goal to define the cybersecurity workforce and address the critical mission needs for a qualified cybersecurity workforce.
Recommendations for Executive Action
We are making a total of 30 recommendations to 13 agencies in our review to develop and submit their baseline assessments and to fully address the required activities in OPM’s guidance in their procedures for assigning employment codes to cybersecurity positions. Specifically:
The Secretary of Commerce should evaluate the level of preparedness for cybersecurity personnel not currently holding certifications to take certification exams, identify strategies for mitigating any gaps identified, and report this information to Congress. (Recommendation 1)
The Secretary of Defense should develop, document, and implement government-wide procedures for identifying IT, cybersecurity, and cyber- related noncivilian positions and assigning employment codes to those positions. (Recommendation 2)
The Secretary of Defense should develop, document, and implement internal departmental procedures for identifying IT, cybersecurity, and cyber-related noncivilian positions and assigning employment codes to those positions. (Recommendation 3)
The Secretary of Education should include requirements to assign code “000” to positions that do not perform IT, cybersecurity, and cyber-related functions in departmental procedures. (Recommendation 4)
The Secretary of Energy should evaluate the level of preparedness for cybersecurity personnel not currently holding certifications to take certification exams and report this information to Congress. (Recommendation 5)
The Secretary of Energy should develop, document, and implement departmental procedures for identifying IT, cybersecurity, and cyber- related positions and assigning employment codes to those positions, taking into account the key elements described in OPM’s instructions for agencies’ procedures. (Recommendation 6)
The Secretary of Homeland Security should conduct a baseline assessment of the department’s cybersecurity workforce that includes (1) the percentage of personnel with IT, cybersecurity, or other cyber-related job functions who hold certifications; (2) the level of preparedness of other cyber personnel without existing credentials to take certification exams; and (3) a strategy for mitigating any gaps identified with appropriate training and certification for existing personnel. (Recommendation 7)
The Secretary of Homeland Security should submit a report of the department’s baseline assessment of its existing cybersecurity workforce to the appropriate congressional committees of jurisdiction. (Recommendation 8)
The Secretary of Housing and Urban Development should conduct a baseline assessment of the department’s cybersecurity workforce that includes (1) the percentage of personnel with IT, cybersecurity, or other cyber-related job functions who hold certifications; (2) the level of preparedness of other cyber personnel without existing credentials to take certification exams; and (3) a strategy for mitigating any gaps identified with appropriate training and certification for existing personnel. (Recommendation 9)
The Secretary of Housing and Urban Development should submit a report of the department’s baseline assessment of its existing cybersecurity workforce to the appropriate congressional committees of jurisdiction. (Recommendation 10)
The Secretary of the Interior should evaluate the level of preparedness for cybersecurity personnel not currently holding certifications to take certification exams and report this information to Congress. (Recommendation 11)
The Secretary of Labor should include requirements to annotate reviewed position descriptions with the appropriate cybersecurity data standard code(s) in departmental procedures. (Recommendation 12)
The Secretary of Labor should ensure that departmental procedures fully account for the fact that IT, cybersecurity, and cyber-related positions will extend beyond the Information Technology Management 2210 occupational series. (Recommendation 13)
The Secretary of Labor should fully clarify requirements to assign code “000” to positions that do not perform IT, cybersecurity, and cyber-related functions in departmental procedures. (Recommendation 14)
The Secretary of Labor should include requirements to assign up to three employment codes per position in order of their criticality in departmental procedures. (Recommendation 15)
The Administrator of the National Aeronautics and Space Administration should evaluate the level of preparedness for cybersecurity personnel not currently holding certifications to take certification exams and report this information to Congress. (Recommendation 16)
The Administrator of the National Aeronautics and Space Administration should include requirements to assign code “000” to positions that do not perform IT, cybersecurity, and cyber-related functions in agency procedures. (Recommendation 17)
The Director of the National Science Foundation should fully clarify requirements to review all encumbered and vacant positions performing IT, cybersecurity, and cyber-related functions in agency procedures. (Recommendation 18)
The Director of the National Science Foundation should include requirements to annotate reviewed position descriptions with the appropriate cybersecurity data standard code(s) in agency procedures. (Recommendation 19)
The Director of the National Science Foundation should ensure that agency procedures account for the fact that IT, cybersecurity, and cyber- related positions will extend beyond the Information Technology Management 2210 occupational series. (Recommendation 20)
The Director of the National Science Foundation should include requirements to assign code “000” to positions that do not perform IT, cybersecurity, and cyber-related functions in agency procedures. (Recommendation 21)
The Director of the National Science Foundation should include requirements to assign up to three employment codes per position in order of their criticality in agency procedures. (Recommendation 22)
The Chairman of the Nuclear Regulatory Commission should ensure that agency procedures account for the fact that IT, cybersecurity, and cyber- related positions will extend beyond the Information Technology Management 2210 occupational series. (Recommendation 23)
The Chairman of the Nuclear Regulatory Commission should fully clarify requirements to assign up to three employment codes per position in order of their criticality in agency procedures. (Recommendation 24)
The Administrator of the Small Business Administration should conduct a baseline assessment of the agency’s cybersecurity workforce that includes (1) the percentage of personnel with IT, cybersecurity, or other cyber-related job functions who hold certifications; (2) the level of preparedness of other cyber personnel without existing credentials to take certification exams; and (3) a strategy for mitigating any gaps identified with appropriate training and certification for existing personnel. (Recommendation 25)
The Administrator of the Small Business Administration should submit a report of its baseline assessment of its existing cybersecurity workforce to the appropriate congressional committees of jurisdiction. (Recommendation 26)
The Administrator of the U.S. Agency for International Development should fully clarify requirements to review all encumbered and vacant positions performing IT, cybersecurity, and cyber-related functions in agency procedures. (Recommendation 27)
The Administrator of the U.S. Agency for International Development should fully clarify requirements to annotate reviewed position descriptions with the appropriate cybersecurity data standard code(s) in agency procedures. (Recommendation 28)
The Administrator of the U.S. Agency for International Development should include requirements to assign code “000” to positions that do not perform IT, cybersecurity, and cyber-related functions in agency procedures. (Recommendation 29)
The Administrator of the U.S. Agency for International Development should include requirements to assign up to three employment codes per position in order of their criticality in agency procedures. (Recommendation 30)
Agency Comments and Our Evaluation
We provided a draft of this report to the 24 CFO Act agencies for their review and comment. Of the 13 agencies to which we made recommendations, 7 agencies stated that they agreed with all of the recommendations directed to them; 1 agency agreed with one recommendation and disagreed with the other; 2 agencies provided comments but did not state whether they agreed or disagreed with the recommendations; 2 agencies stated that they had no comments; and 1 agency—DOD—did not respond to our request for comments on the report.
In addition, of the 11 agencies to which we did not make recommendations, 2 provided comments on the report and 9 responded that they had no comments on the report. We also received technical comments from 2 agencies, which we have incorporated into the report as appropriate.
The following seven agencies agreed with our recommendations:
In its written comments (reprinted in appendix II), the Department of Commerce agreed with our recommendation. The department stated that it will evaluate the level of preparedness for cybersecurity personnel who do not hold certifications to take certification exams, identify strategies for mitigating any gaps identified, and report this information to Congress.
In its written comments (reprinted in appendix III), the Department of Education agreed with our recommendation. The department stated that it had updated its coding guidance to require that positions that do not perform substantial work in information technology, cybersecurity, or cyber-related functions be assigned a code of “000.”
In its written comments (reprinted in appendix IV), the Department of Energy agreed with our recommendations and stated that it has planned, or taken steps to address them. Specifically, with regard to our recommendation concerning cybersecurity certification, the department stated that it plans to conduct a department-wide evaluation of the level of preparedness for its cybersecurity personnel without existing credentials to take certification exams and will report the information to Congress.
In addition, the department stated that it had developed and issued procedures for identifying and coding IT, cybersecurity, and cyber- related positions, as we recommended, and that it had since completed its coding of applicable positions across the department. The department also provided us its updated coding procedures, along with its written comments.
In its written comments (reprinted in appendix V), the Department of Homeland Security agreed with our recommendations. Regarding the recommendation to conduct a baseline assessment of its cybersecurity workforce, the department stated that it is taking steps to collect data about certifications relevant to DHS cybersecurity work. The department also stated that it plans to identify the percentage of its cybersecurity workforce that holds certifications, the percentage prepared to take a relevant certification exam, and strategies for mitigating any gaps. The department added that it plans to provide this information to Congress, as we recommended. The department also provided technical comments, which we have incorporated into this report as appropriate.
In its written comments (reprinted in appendix VI), the Department of the Interior stated that it agreed with our recommendation. The department also indicated that it is exploring options to determine the extent to which its cybersecurity employees who currently do not hold certifications are prepared to take certification exams.
In its written comments (reprinted in appendix VII), the Small Business Administration agreed with our recommendations. The agency also stated that it had recently completed an assessment of its IT workforce and reported on existing skills gaps, and that it plans to execute its IT workforce plan to address the requirements of the Federal Cybersecurity Workforce Assessment Act of 2015.
In comments on a draft of this report provided via email on May 15, 2018, a Program Analyst in the National Science Foundation’s Office of Integrative Activities stated that the agency concurred with our recommendations.
One agency did not agree with one of the two recommendations directed to it:
In its written comments (reprinted in appendix VIII), the National Aeronautics and Space Administration did not agree with our first recommendation and agreed with the second. Specifically:
NASA did not concur with our recommendation to evaluate the level of preparedness for cybersecurity personnel not currently holding certifications to take certification exams and report this information to Congress. The agency stated that there is no federal or NASA requirement for employees in cybersecurity positions to hold and/or maintain a certification, and therefore the agency has no plans to assess the readiness of its cybersecurity personnel to take certification exams.
Nonetheless, we continue to believe our recommendation remains valid because the level of preparedness of personnel without certifications to take certification examinations can be a useful indicator of the skills and knowledge of an agency’s cybersecurity workforce. In addition, this information could help the agency identify and meet the certification and training needs of its cybersecurity employees who are charged with protecting NASA’s information and information systems from cyberattacks. Moreover, the act contains provisions that demonstrate congressional interest in assessing agency use of professional certifications.
NASA concurred with our recommendation to include in the agency's coding procedures requirements to assign code "000" to positions that do not perform IT, cybersecurity, and cyber-related functions. The agency stated that it planned to update its procedures to include this requirement and indicated that supervisors and human resource specialists had been trained to assign cybersecurity codes to all positions, including code "000."
The following two agencies provided comments but did not state whether they agreed or disagreed with our recommendations:
In its written comments (reprinted in appendix IX), the Nuclear Regulatory Commission stated that it was in general agreement with the overall content of the draft report. However, the agency asked that we revise the final report to reflect that the Nuclear Regulatory Commission had updated its cybersecurity coding procedures to include language explaining that IT, cybersecurity, and cyber-related positions will extend beyond the GS-2210 occupational series, and to outline the requirement that positions can be assigned up to three different employment codes in order of criticality. The agency provided its updated coding procedures along with its written comments.
In its written comments (reprinted in appendix X), the U.S. Agency for International Development stated that it had completed various actions related to coding its cybersecurity positions that addressed our four recommendations. For example, among other actions, the agency said it had updated its plan for coding cybersecurity positions to include procedures for assigning codes for multiple functional areas, with the predominant functional area being coded first.
In addition, two agencies to which we made recommendations—the Departments of Housing and Urban Development and Labor—stated via email that they did not have comments on the report. The agencies did not state whether they agreed or disagreed with our recommendations.
Of the agencies to which we did not make recommendations, the Social Security Administration also provided a letter acknowledging its review of the report. The agency’s letter is reprinted in appendix XI.
The remaining nine agencies to which we did not make recommendations—the Departments of Agriculture, Health and Human Services, Justice, State, Transportation, and Treasury; the Environmental Protection Agency; the General Services Administration; and the Office of Personnel Management—stated that they did not have any comments on our report.
We are sending copies of this report to interested congressional committees, the Director of the Office of Management and Budget, the secretaries and agency heads of the departments and agencies addressed in this report, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov.
If you have any questions regarding this report, please contact me at (202) 512-6244 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix XII.
Appendix I: Objectives, Scope, and Methodology
Our objectives were to determine whether (1) OPM developed a coding structure and procedures for assigning codes to cybersecurity positions and submitted a progress report to Congress, (2) Chief Financial Officers (CFO) Act agencies submitted complete and reliable baseline assessment reports of their cybersecurity workforces, and (3) CFO Act agencies established procedures to identify and assign codes to cybersecurity positions.
The scope of our review included the 24 departments and agencies (hereafter referred to as agencies) covered by the Chief Financial Officers Act of 1990. It also included OPM, DOD, DHS, and NIST in their roles related to the development of a cybersecurity coding structure and related guidance. Our work focused on the agencies’ cybersecurity positions and on workforce planning actions that the act required the agencies to complete by November 2017.
To address the first objective, we obtained and compared OPM's federal cybersecurity employment coding structure, issued in November 2016, to the work roles described in the National Initiative for Cybersecurity Education (NICE) Cybersecurity Workforce Framework, issued in draft form by NIST in November 2016. We also examined OPM memorandums to determine whether and when OPM had issued procedures to agencies for identifying cybersecurity positions and assigning employment codes. Additionally, we reviewed any progress reports submitted by OPM to Congress on the implementation of the act. We compared the issuance date of each of these documents to the deadlines by which OPM was to issue them, as established in the act. Also, we interviewed OPM and NIST officials about their efforts to develop these documents and the reasons for any delays.
To address the second objective, we obtained available baseline assessments from each of the 24 CFO Act agencies and evaluated them against the act’s requirements to include information on (1) cybersecurity personnel holding certifications, (2) the level of preparedness of other personnel to take certification exams, and (3) strategies for mitigating any gaps identified. We also obtained agencies’ letters transmitting their assessments to the relevant congressional committees and evaluated them against the reporting deadline established in OPM guidance. In addition, we analyzed other relevant agency documentation and interviewed cognizant agency officials about their efforts to identify the appropriate certifications, identify relevant personnel, and collect information on employee certifications. We obtained the officials’ views on the reasons for any delays in agencies’ submissions of the assessments and the reliability of assessment results.
To address the third objective, we obtained and analyzed available cybersecurity coding procedures established by each of the 24 CFO Act agencies. We reviewed the required coding activities described in OPM's Guidance for Assigning New Cybersecurity Codes to Positions with Information Technology, Cybersecurity, and Cyber-Related Functions. We judgmentally selected seven of the activities that we determined to be particularly important for effectively identifying and coding all relevant encumbered and vacant cybersecurity positions. We then evaluated each agency's procedures against these seven required coding activities. We also compared the issuance date of the procedures to the deadline established in OPM's coding guidance for agencies to issue the procedures. In addition, we interviewed agency officials about their efforts to complete the procedures by the required deadline and the reasons for any delays.
Further, the Federal Cybersecurity Workforce Assessment Act of 2015 established a separate requirement and deadline for DOD to develop government-wide procedures for implementing the coding structure for federal noncivilian cyber positions. Accordingly, we reviewed relevant documentation and interviewed cognizant officials from the Department of Defense's Office of the Chief Information Officer and Office of the Under Secretary for Personnel and Readiness about their efforts to establish coding procedures for both civilian and noncivilian positions by the deadlines set forth in the act.
We conducted this performance audit from October 2016 to June 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Comments from the Department of Commerce
Appendix III: Comments from the Department of Education
Appendix IV: Comments from the Department of Energy
Appendix V: Comments from the Department of Homeland Security
Appendix VI: Comments from the Department of the Interior
Appendix VII: Comments from the Small Business Administration
Appendix VIII: Comments from the National Aeronautics and Space Administration
Appendix IX: Comments from the Nuclear Regulatory Commission
Appendix X: Comments from the United States Agency for International Development
Appendix XI: Comments from the Social Security Administration
Appendix XII: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the individual named above, Nick Marinos (director), Tammi Kalugdan (assistant director), William Cook (analyst in charge), Chris Businsky, Virginia Chanley, Wayne Emilien, Lisa Maine, David Plocher, Priscilla Smith, Dwayne Staten, Daniel Wexler, and Merry Woo made significant contributions to this report. | Why GAO Did This Study
A key component of mitigating and responding to cyber threats is having a qualified, well-trained cybersecurity workforce. The Federal Cybersecurity Workforce Assessment Act of 2015 requires OPM and federal agencies to take several actions related to cybersecurity workforce planning.
GAO is to monitor agencies' progress in implementing the act's requirements. For this report, GAO assessed whether: (1) OPM developed a coding structure and procedures for assigning codes to cybersecurity positions and submitted a progress report to Congress; (2) CFO Act agencies submitted complete, reliable baseline assessments of their cybersecurity workforces; and (3) CFO Act agencies established procedures to assign codes to cybersecurity positions. GAO examined OPM's coding procedures and progress report on the act's implementation, and baseline assessments and coding procedures from the 24 CFO Act agencies. GAO also interviewed relevant OPM and agency officials about efforts to address the act's requirements.
What GAO Found
As required by the Federal Cybersecurity Workforce Assessment Act of 2015 (act), the Office of Personnel Management (OPM) developed a cybersecurity coding structure under the National Initiative for Cybersecurity Education (NICE) as well as procedures for assigning codes to federal civilian cybersecurity positions. However, OPM issued the coding structure and procedures 5 and 4 months later, respectively, than the act's deadlines because OPM was working with the National Institute of Standards and Technology (NIST) to align the structure and procedures with the draft NICE Cybersecurity Workforce Framework, which NIST issued later than planned. OPM also submitted a progress report to Congress on the implementation of the act 1 month after it was due. The delays in issuing the coding structure and procedures have extended the expected time frames for implementing subsequent provisions of the act.
Most of the 24 agencies covered by the Chief Financial Officers (CFO) Act submitted baseline assessment reports to Congress, but the results may not be reliable. As of March 2018, 21 of the 24 CFO Act agencies had conducted baseline assessments identifying the extent to which their cybersecurity employees held professional certifications and had submitted the assessment reports to Congress as required by the act. Three agencies had not conducted the assessments for various reasons, such as a lack of resources and tools to do so. Of the 21 agencies that did, 4 did not address all of the reportable information, such as the extent to which personnel without professional certifications were ready to obtain them or strategies for mitigating any gaps. Additionally, agencies were limited in their ability to obtain complete or consistent information about their cybersecurity employees and the certifications they held. This was because agencies had not yet fully identified all members of their cybersecurity workforces or did not have a consistent list of appropriate certifications for cybersecurity positions. As a result, the agencies had limited assurance that their assessment results accurately reflected all relevant employees or the extent to which those employees held appropriate certifications. This diminishes the usefulness of the assessments in determining the certification and training needs of these agencies' cybersecurity employees.
Most of the 24 CFO Act agencies established coding procedures, but 6 agencies only partially addressed certain activities required by OPM in their procedures. Of the 24 agencies reviewed, 23 had established procedures to identify their civilian cybersecurity positions and assign the appropriate employment codes to the positions as called for by the act. However, 6 of the 23 agencies did not address one or more of 7 activities required by OPM in their procedures, such as the activities to review all filled and vacant positions and annotate reviewed position descriptions with the appropriate employment code. These 6 agencies cited a variety of reasons for not addressing all of the required activities in their coding procedures. For example, these agencies stated that they addressed the activities in existing guidance or did not include activities that their components did not have the responsibility to perform. By not addressing all of the required activities in their coding procedures, the 6 agencies lack assurance that the activities will be performed or performed consistently throughout their agency.
What GAO Recommends
GAO is making 30 recommendations to 13 agencies to fully implement two of the act's requirements on baseline assessments and coding procedures. Of the 12 agencies that received recommendations and provided comments on the report, 7 agreed with the recommendations made to them, 4 did not state whether they agreed or disagreed, and 1 did not agree with one of the two recommendations made to it. GAO continues to believe that the recommendation is valid, as discussed in this report.
Accelerated by E-Commerce, Changes in the Counterfeits Market Present Challenges to U.S. Agencies, Consumers, and the Private Sector
E-Commerce Has Contributed to a Shift in the Market for Counterfeit Goods
The rise of e-commerce has contributed to a fundamental change in the market for counterfeit goods, according to our analysis of documents from CBP, ICE, and international organizations and our interviews with CBP and ICE officials. U.S. agencies and international organizations have observed a shift in the sale of counterfeit goods from “underground” or secondary markets, such as flea markets or sidewalk vendors, to primary markets, including e-commerce websites, corporate and government supply chains, and traditional retail stores. Whereas secondary markets are often characterized by consumers who are knowingly purchasing counterfeits, primary markets involve counterfeiters who try to deceive consumers into purchasing goods they believe are authentic.
This shift has been accompanied by changes in the ways in which counterfeit goods are sold. In the past, consumers could often rely on indicators such as the location of sale or the goods’ appearance or price to identify counterfeit goods in the marketplace. However, counterfeiters have now adopted new ways to deceive consumers. For example, as consumers increasingly purchase goods online, counterfeiters may exploit third-party online marketplaces to gain an appearance of legitimacy and access to consumers. When selling online, counterfeiters may post pictures of authentic goods on the websites where they are selling counterfeits and may post pseudonymous reviews of their products or businesses in order to appear legitimate. Additionally, by setting the price of a counterfeit at, or close to, the retail price of a genuine good, counterfeiters may deceive consumers, who will pay the higher price because they believe the goods are real or who believe that they are getting a slight bargain on genuine goods.
CBP Data Indicate Changes in Several Key Characteristics of Counterfeit Goods Seized
According to CBP seizure data and CBP officials, the volume, variety, and methods of shipment of counterfeit goods seized by CBP and ICE have changed in recent years. CBP reports indicate that the number of IPR seizures increased by 38 percent in fiscal years 2012 through 2016. According to CBP data, approximately 88 percent of IPR seizures made during this period were shipped from China and Hong Kong. The variety of products being counterfeited has also increased, according to CBP officials. CBP and ICE officials noted that, while many consumers may think of luxury handbags or watches as the most commonly counterfeited goods, counterfeiting occurs in nearly every industry and across a broad range of products. In addition, according to CBP data we reviewed and officials we spoke to, the methods of importing counterfeit goods into the United States have changed in recent years. Specifically, express carriers and international mail have become the predominant form of transportation for IPR-infringing goods entering the United States, constituting approximately 90 percent of all IPR seizures in fiscal years 2015 and 2016, according to CBP data.
Twenty of 47 Items Purchased from Third-Party Sellers on Popular E-Commerce Websites Were Counterfeits, Highlighting Potential Risks to Consumers
In an attempt to illustrate the risk that consumers may unknowingly encounter counterfeit products online, we purchased a nongeneralizable sample of four types of consumer products—shoes, travel mugs, cosmetics, and phone chargers—from third-party sellers on five popular e-commerce websites. According to CBP data we reviewed and officials we spoke to, CBP often seizes IPR-infringing counterfeits of these types of products. As table 1 shows, the rights holders for the four selected products we purchased determined that 20 of the 47 items were counterfeit.
We did not identify any clear reasons for the variation among the counterfeit and authentic items that we purchased based on the products that they represented, the e-commerce websites where we bought the items, or the third-party sellers from whom we bought them. For three of the four product types, at least one item we purchased was determined to be counterfeit, with results varying considerably by product. Representatives of the rights holders also could not provide a specific explanation for the variation among authentic and counterfeit goods that we received. They noted that the results of covert test purchases can fluctuate depending on enforcement activities and the variety of goods and sellers on a particular website on a given day. Rights-holder testing also showed that we purchased at least one counterfeit item and one authentic item from each of the five e-commerce websites. In addition, our analysis of the customer ratings of third-party sellers from whom we bought the items did not provide any clear indications that could warn consumers that a product marketed online may be counterfeit. For example, we received both counterfeit and authentic items from third-party sellers with ratings that were less than 70 percent positive as well as sellers with ratings that were up to 100 percent positive.
Rights holders were able to determine that items we purchased were not authentic on the basis of inferior quality, incorrect markings or construction, and incorrect labeling. Some counterfeit items we purchased were easily identifiable as likely counterfeit once we received them. For example, one item contained misspellings of “Austin, TX” and “Made in China.” Other items could be more difficult for a typical consumer to identify as counterfeit. For example, the rights holder for a cosmetic product we purchased identified one counterfeit item on the basis of discrepancies in the color, composition, and design of the authentic and counterfeit items’ packaging. Counterfeit goods may also lack key elements of certification markings and other identifiers. For example, on a counterfeit phone charger we purchased, the UL certification mark did not include all components of the authentic mark. Figure 1 shows examples of these counterfeit items.
The risks associated with the types of counterfeit goods we purchased can extend beyond the infringement of a company’s IPR. For example, a UL investigation of counterfeit iPhone adapters found a 99 percent failure rate in 400 counterfeit adapters tested for safety, fire, and shock hazards and found that 12 of the adapters tested posed a risk of lethal electrocution to the user. Similarly, according to a rights holder representative, counterfeits of common consumer goods, such as Yeti travel mugs, may contain higher-than-approved concentrations of dangerous chemicals such as lead, posing health risks to consumers. According to ICE, seized counterfeit cosmetics have been found to contain hazardous substances, including cyanide, arsenic, mercury, lead, urine, and rat droppings.
Representatives of rights holders and e-commerce websites whom we interviewed reported taking independent action to try to protect IPR within their areas of responsibility. For example, both rights holders and e-commerce websites maintain IPR protection teams that work with one another and with law enforcement to address infringement issues. E-commerce websites may also take a variety of steps to block and remove counterfeit items listed by third-party sellers. These efforts rely on data collected through a variety of means, including consumer reporting of counterfeits, rights-holder notifications of IPR infringement, and corporate efforts to vet potential third-party sellers, according to private sector representatives.
Our January 2018 report includes information on steps that consumer protection organizations and government agencies recommend consumers take to limit the risk of purchasing counterfeits online. These steps include, for example, buying only from authorized retailers online, avoiding prices that look “too good to be true,” and reporting counterfeit purchases.
Changes in the Marketplace Can Pose Challenges to U.S. Agencies and the Private Sector
We identified a number of key challenges that the changes in the market for counterfeit goods can pose to CBP and ICE as well as to the private sector. First, the increasing sophistication of counterfeits can make it difficult for law enforcement officers to distinguish between legitimate and counterfeit goods. Second, as the range of counterfeit goods expands, CBP has a wider variety of goods to screen, which requires CBP officials to have in-depth knowledge of a broad range of products and of how to identify counterfeits. Third, counterfeiters may break up large shipments into multiple smaller express carrier or mail packages to decrease the risk of losing significant quantities of merchandise to a single seizure. This shift toward smaller express shipments of counterfeit goods to the United States poses challenges to CBP and ICE because, according to CBP officials, seizure processing requires roughly the same amount of time and resources regardless of shipment size or value.
The changing marketplace also presents challenges to the private sector, according to representatives from rights holders and e-commerce websites. For example, it is more difficult for rights holders and e-commerce websites to identify and investigate individual counterfeit cases, because e-commerce websites face a growing inventory from a larger registry of sellers. Tracking goods from known counterfeiters through various website fulfillment and delivery mechanisms is also a significant challenge for the private sector. Furthermore, the growth of e-commerce has accelerated the pace at which counterfeiters can gain access to consumers or reinvent themselves if shut down.
CBP and ICE Engage in Activities to Enhance IPR Enforcement, but CBP Has Not Fully Evaluated the Results of Its Activities
CBP and ICE engage in a number of activities to enhance IPR enforcement; however, while ICE has assessed some of its efforts, CBP has taken limited steps to do so. CBP's and ICE's IPR enforcement activities broadly include detecting imports of potentially IPR-infringing goods, conducting special operations at U.S. ports, engaging with international partners, and undertaking localized pilot programs or port-led initiatives. CBP and ICE have collected some performance data on activities we reviewed, and ICE has taken some steps to better understand the impact of its efforts, such as creating a process to track cases it deems significant. However, we found that CBP has conducted limited evaluation of its efforts to enhance IPR enforcement. Consequently, we concluded that CBP may lack information needed to ensure it is investing its resources in the most efficient and effective activities. We recommended in our report that CBP take steps to evaluate the effectiveness of its IPR enforcement efforts; CBP concurred with this recommendation.
CBP and ICE Generally Collaborate on IPR Enforcement, but CBP Is Restricted in Sharing Information with the Private Sector
Our analysis showed that CBP and ICE interagency collaboration on IPR enforcement is generally consistent with the following selected key practices for effective interagency collaboration: (1) define and articulate a common outcome; (2) establish mutually reinforcing or joint strategies; (3) identify and address needs by leveraging resources; (4) agree on roles and responsibilities; and (5) establish compatible policies, procedures, and other means to operate across agency boundaries. For example, the agencies may leverage resources by collocating staff or sharing their expertise. CBP and ICE have also issued guidance and developed standard operating procedures to clarify roles and responsibilities. CBP and ICE also coordinate with the private sector in a variety of ways, such as obtaining private sector assistance to determine whether detained goods are authentic and to conduct training.
Representatives of rights holders and e-commerce websites noted that information shared by law enforcement entities is critical to private sector IPR enforcement, such as pursuing civil action against a counterfeiter or removing counterfeit items from websites. In the Trade Facilitation and Trade Enforcement Act of 2015, Congress provided CBP with explicit authority to share certain information with trademark and copyright owners before completing a seizure. CBP officials stated that they share information about identified counterfeits with e-commerce websites and rights holders to the extent possible under current regulations. However, according to private sector representatives we spoke to, restrictions on the amount and type of information about seized items shared by CBP limit the ability of rights holders and e-commerce websites to protect IPR. CBP officials noted that there are legal limitations to the amount and type of information they can share, particularly if the e-commerce website is not listed as the importer on forms submitted to CBP.
Several private sector representatives stated that receiving additional information from CBP would enhance their ability to protect IPR. Representatives of one website noted that information on the exterior of seized packages, such as business identifiers on packages destined for distribution centers, would be helpful for identifying groups of counterfeit merchandise from the same seller. However, according to CBP officials, CBP cannot provide such information to e-commerce websites. Representatives of one e-commerce website noted that ICE sometimes shares information related to an investigation, but that ICE’s involvement in the enforcement process begins only after CBP has identified and seized counterfeit items. Representatives of two e-commerce websites stated that, because of the limited information shared by CBP, they may not be aware of IPR-infringing goods offered for sale on their websites, even if CBP has seized related items from the same seller.
According to CBP officials, CBP is reviewing options for sharing additional information with rights holders and e-commerce websites and is assessing what, if any, additional information would be beneficial to share with private sector entities. CBP officials stated that they have not yet determined whether changes to the amount and types of information provided to e-commerce websites would require regulatory changes or additional legal authorities. These officials also said that they have discussed differences in CBP’s and ICE’s information sharing with ICE officials. In our report, we recommended that CBP, in consultation with ICE, assess what, if any, additional information would be beneficial to share with the private sector and, as appropriate, take action to enhance information sharing where possible. CBP concurred with this recommendation.
Chairman Hatch, Ranking Member Wyden, and Members of the Committee, this concludes my prepared statement. I would be pleased to answer any questions that you may have at this time.
GAO Contact and Staff Acknowledgments
If you or your staff have any questions about this testimony, please contact Kimberly Gianopoulos, Director, International Affairs and Trade, at (202) 512-8612 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Joyee Dasgupta, Kara Marshall, Katie Bassion, Kristen Timko, Reid Lowe, Sarah Collins, Neil Doherty, Ramon Rodriguez, Helina Wong, Julie Spetz, Kevin Loh, Wayne McElrath, Grace Lui, James Murphy, Mary Moutsos, Justin Fisher, Rachel Stoiko, and Sarah Veale.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Why GAO Did This Study
This testimony summarizes the information contained in GAO's January 2018 report, entitled Intellectual Property: Agencies Can Improve Efforts to Address Risks Posed by Changing Counterfeits Market (GAO-18-216).
What GAO Found
Changes in the market for counterfeit goods entering the United States pose new challenges for consumers, the private sector, and U.S. agencies that enforce intellectual property rights (IPR). Specifically, growth in e-commerce has contributed to a shift in the sale of counterfeit goods in the United States, with consumers increasingly purchasing goods online and counterfeiters producing a wider variety of goods that may be sold on websites alongside authentic products. For example, 20 of 47 items GAO purchased from third-party sellers on popular consumer websites were counterfeit, according to testing by the products' rights holders (see table), highlighting potential risks to consumers. The changes in the market for counterfeit goods can also pose challenges to the private sector—for example, the challenge of distinguishing counterfeit from authentic goods listed for sale online—and complicate the enforcement efforts of U.S. Customs and Border Protection (CBP) and U.S. Immigration and Customs Enforcement (ICE).
CBP and ICE engage in a number of activities to enhance IPR enforcement; however, while ICE has assessed some of its efforts, CBP has taken limited steps to do so. CBP's and ICE's IPR enforcement activities broadly include detecting imports of potentially IPR-infringing goods, conducting special operations at U.S. ports, engaging with international partners, and undertaking localized pilot programs or port-led initiatives. CBP and ICE have collected some performance data for each of the eight activities GAO reviewed, and ICE has taken some steps to understand the impact of its efforts. However, CBP has conducted limited evaluation of its efforts to enhance IPR enforcement. Consequently, CBP may lack information needed to ensure it is investing its resources in the most efficient and effective activities.
CBP and ICE generally collaborate on IPR enforcement, but according to private sector representatives, restrictions on CBP's information sharing limit private sector enforcement efforts. GAO found that CBP and ICE have undertaken efforts that align with selected key practices for interagency collaboration, such as participating in developing a national IPR enforcement strategy and agreeing on roles and responsibilities. However, sharing additional information about seized items with rights-holding companies and e-commerce websites could improve enforcement, according to private sector representatives. CBP officials said they share information to the extent allowed under current regulations, but CBP has not completed an assessment of what, if any, additional information would be beneficial to share with private sector entities. Without such an assessment, CBP will not know if sharing additional information requires regulatory or legal changes.
Background
Indian Health Service
IHS was established within the Public Health Service in 1955 to provide health services to members of AI/AN tribes, primarily in rural areas on or near reservations. IHS provides these services directly through a network of hospitals, clinics, and health stations, while also providing funds to tribally operated facilities. These federally and tribally operated facilities are located primarily in service areas that are rural, isolated, and underserved. In fiscal year 2017, IHS allocated about $1.9 billion for health services provided by federally and tribally operated facilities. Federally operated IHS facilities, which received over 5.2 million outpatient visits and over 15,000 inpatient admissions in 2016, provide mostly primary and emergency care, as well as some ancillary and specialty services in 26 hospitals, 55 health centers, and 21 health stations. According to IHS, federally operated IHS hospitals range in size from 4 to 133 beds and generally are open 24 hours a day for emergency care needs; health centers offer a range of care, including primary care services and some ancillary services, such as pharmacy, laboratory, and X-ray services, and are open for at least 40 hours a week; and health stations offer only primary care services on a regularly scheduled basis and are open fewer than 40 hours a week.
The 12 IHS area offices are responsible for distributing funds to the facilities in their areas, monitoring their operation, and providing guidance and technical assistance (see fig. 1). In addition, five human resources regional offices assist the area offices in the recruitment and hiring of providers.
IHS federally operated facilities employ both federal civil service personnel and Commissioned Corps officers. IHS may pay higher salaries for certain federal civil service providers through the development and implementation of special pay tables, which specify the ranges of salaries that these providers can receive. According to IHS officials, the Commissioned Corps officers follow the same process for applying for positions at IHS as federal civil service employees. However, the Commissioned Corps officers are uniformed health professionals whose pay and allowances differ from those of federal civil service employees. IHS also supplements its workforce capacity with both temporary and long-term contracts with individual physicians or a medical staffing company.
IHS downloads information on all funded and active positions from the Capital Human Resource Management System, an HHS data system used for personnel and payment transactions that IHS began using in 2016 to track all employee vacancies. According to IHS officials, the accuracy of the data is verified quarterly by regional human resources officers. Because the IHS health care workforce also includes Commissioned Corps officers—who have a separate personnel system—the information on Commissioned Corps officers assigned to IHS is entered into the Capital Human Resource Management System manually, according to IHS officials.
Rural Health Care Delivery Challenges
According to the National Rural Health Association, the challenges of rural health care delivery are different from those in urban areas. These challenges include those related to more complex patient health status and poorer socioeconomic conditions, as well as physician workforce shortages. According to the Agency for Healthcare Research and Quality, compared with their urban counterparts, residents of rural counties are older, poorer, more likely to be overweight or obese, and sicker. Those living in rural areas also have greater transportation difficulties reaching health care providers, often traveling great distances to reach a doctor or hospital. Exacerbating these challenges is a relative scarcity of medical providers in rural areas compared to urban areas. For example, the National Center for Health Statistics reported the primary care physician-to-patient ratio in rural areas in 2012 was 39.8 physicians per 100,000 people, compared to 53.3 physicians per 100,000 in urban areas.
IHS Data Show Sizeable Provider Vacancies and Officials Identified Various Challenges to Filling Them
IHS Data Demonstrate Sizeable Provider Vacancies
IHS data demonstrate large percentages of vacancies for providers in the 8 areas in which IHS has substantial direct care responsibilities. As of November 2017, the overall percentage of vacancies for physicians, nurses, nurse practitioners, CRNAs, certified nurse midwives, physician assistants, dentists, and pharmacists in these areas was 25 percent, ranging from 13 to 31 percent across the areas. (See fig. 2.)
However, variation in vacancy rates existed among provider types across IHS areas. For example, while the overall percentage of vacancies for physicians, nurses, nurse practitioners, dentists, and physician assistants each exceeded 25 percent, the vacancy rate for pharmacists was less than 25 percent. In addition, for certain provider types in some areas, more than one-third of the positions were vacant. For example, although 29 percent of the total positions for physicians across these 8 areas were vacant, the vacancy rate ranged from 21 percent in the Oklahoma City area to 46 percent in the Bemidji and Billings areas. (See fig. 3.)
As another example, although 27 percent of the total positions for nurses across these 8 areas were vacant, the vacancy rate ranged from 10 percent in the Oklahoma City area to 36 percent in the Albuquerque and Bemidji areas. (See fig. 4.)
Similarly, across these 8 areas:
32 percent of the total positions for nurse practitioners were vacant, ranging from 12 percent in the Oklahoma City area to 47 percent in the Albuquerque area;
27 percent of the total positions for dentists were vacant, ranging from 14 percent in the Phoenix area to 39 percent in the Bemidji area; and
30 percent of the total positions for physician assistants were vacant, and although 4 of the areas had few such positions (the Albuquerque, Bemidji, Oklahoma City, and Portland areas each had 7 or fewer positions), the percentage of vacancies in the 4 areas with 15 or more such positions ranged from 21 percent in the Phoenix area to 40 percent in the Billings area.
In contrast, 13 percent of the total positions for pharmacists were vacant, ranging from 3 percent in the Bemidji area to 17 percent in the Albuquerque area. For more information about the vacancies for specific clinical positions, see appendix I.
While sizeable vacancies existed across provider types and areas, the majority of positions in all eight areas were occupied by civilians, and about 13 percent were filled by Commissioned Corps officers who are fulfilling assignments with a minimum 2-year term. The percentages of positions by IHS area that were vacant, filled by civilians, and filled by Commissioned Corps officers as of November 2017 are shown in figure 5.
IHS Officials Identified Challenges to Filling Provider Vacancies, As Well As Negative Effects of Vacancies on Patient Care and Provider Satisfaction
IHS officials told us they have experienced considerable challenges in filling vacancies for providers—as well as negative effects on patient care and provider satisfaction when positions are vacant. According to IHS officials, the rural locations and geographic isolation of some IHS facilities create recruitment and retention difficulties. IHS data indicate that 36 of the 102 IHS facilities, including four hospitals, are identified as isolated hardship (ISOHAR) posts. Agency documentation describes ISOHAR posts as "unusually difficult, which may present moderate to severe physical hardships for individuals assigned to that geographic location," and states that physical hardships may include crime or violence, pollution, isolation, a harsh climate, scarcity of goods on the local market, and other problems. In addition, IHS has reported that insufficient housing, substandard schools, lack of entertainment opportunities, and shopping centers located more than three hours away are all typical not only of ISOHAR posts, but also of many other IHS facility locations. Officials stated that, especially for job candidates and employees with families, these can be critical factors in choosing whether or not to accept or stay in a position. For example, officials from the Portland Area office told us the Colville Service Unit has experienced challenges recruiting physicians because the service unit is 110 miles away from Spokane, and many of the smaller towns nearby have limited amenities—including limited employment opportunities for spouses and school systems that may not meet the expectations of some prospective employees.
In addition to hardships generally associated with rural locations, IHS facilities can experience additional challenges specific to recruiting and retaining providers for facilities on tribal lands. For example, Navajo area officials told us that providers who are non-native or are not married to a tribal member generally must go off the reservation to find housing if it is not provided by IHS. According to IHS, the Navajo Nation is one of the largest Indian reservations in the United States, consisting of more than 25,000 contiguous square miles and three satellite communities, and extending into portions of Arizona, New Mexico, and Utah. Living off the reservation can result in long commutes, contributing to a difficult work- life balance. Furthermore, IHS officials noted, public transportation such as buses or trains do not exist in proximity to most IHS facilities.
IHS facility staff told us long-standing vacancies have a direct negative effect on patient access to quality health care, as well as employee morale. Officials from multiple facilities we visited told us they have had to cut certain patient services due to ongoing provider vacancies. For example, officials from the Phoenix Area office told us the Nevada Skies Youth Wellness Center, an adolescent substance abuse treatment center, decreased the number of beds available due to staffing vacancies. Similarly, officials from the Rosebud Hospital stated the facility has diverted obstetrics patients to other facilities since July 2016 due to a shortage of physicians, nurses, and nurse anesthetists. During the diversion, those patients were referred to other hospitals in Valentine, Nebraska, and Winner, South Dakota—about 45 miles away. An official from the Sioux San Hospital said that because of vacancies in the diagnostic testing laboratory, the hospital stopped conducting chlamydia tests in-house and instead sends specimens out to another laboratory for testing. As a result, the official stated it takes about a week longer to get the test results, which can delay treatment. In addition, facility staff we interviewed told us the increased stress and fatigue of providers working to make up for staffing shortages results in decreased employee morale. These staff stated that, in some cases, this stress and fatigue has caused providers to leave IHS. One doctor we spoke with described this dynamic of vacancies begetting additional vacancies as a "never-ending cycle" for the facility.
IHS Uses Multiple Strategies to Recruit and Retain Providers
In an effort to recruit and retain permanent employees, IHS has used strategies that are similar to strategies used by VHA and tribal facilities in our review. Specifically, IHS has provided financial incentives, professional development opportunities, and some access to housing. The agency has also taken steps to recruit students and connect with potential applicants through webinars, career fairs, and conferences.
Salaries and Other Financial Incentives
IHS offers increased special salary rates for certain health care positions, as well as other financial incentives, such as recruitment and retention bonuses. IHS also offers student loan repayments, in return for health professionals’ commitment to work at IHS for a specified period of time.
Special salary rates. IHS offers special higher salary rates for physicians, dentists, nurses, CRNAs, certified nurse midwives, nurse practitioners, optometrists, pharmacists, and physician assistants. IHS officials stated that special salary rates are an important recruitment and retention tool for providers, and that without them, federally operated IHS facilities would be at a competitive disadvantage with the private sector, VHA, and tribally operated facilities. In 2015 IHS reported that recruiting and retaining CRNAs was “an ongoing problem for IHS—mostly due to pay,” and the agency rarely had “a sufficient applicant pool.” IHS reported “CRNA services were integral to IHS operations” and without the ability to recruit and retain these providers, IHS was “at risk of having to curtail services to clients.” As a result, according to IHS officials, the agency developed special salary rates for CRNAs, which became effective on December 31, 2015. As of November 2017, IHS had no CRNA vacancies.
However, according to IHS officials, the agency has only developed seven national special pay tables and two local special pay tables for Alaska, as of January 2018, due to a lack of human resources personnel trained in this process. Officials told us only one human resources staff person at IHS is experienced with developing special pay tables, which takes a substantial amount of work. However, they stated that this task is only one of her job responsibilities, and she can complete about one special pay table each year. In comparison, according to an official, VHA has developed and regularly revises over 3,000 special salary rates based on local market conditions. For example, IHS officials stated that Phoenix Indian Medical Center cannot offer salaries that are competitive with VHA because salaries for providers in the Phoenix area are relatively high compared to national salaries, and IHS has not developed local salary rates in the Phoenix market. For example, using pay rates effective January 7, 2018, a nurse just starting a career in the Phoenix area could make $63,871 at VHA (local pay table), versus $44,835 at IHS (national pay table).
Although offering increased salaries is an important strategy that IHS uses for recruitment, IHS still experiences challenges in offering competitive salaries. Officials from two area offices told us the maximum amounts for physician and certain nursing salaries were not enough for some potential hires, who sought employment elsewhere. While IHS may seek approval from HHS to exceed the maximum salary of certain pay tables, IHS officials said the approval process can be lengthy, which has resulted in the loss of promising candidates—including emergency medicine, general surgery, radiology, and anesthesiologist providers. Similarly, officials from one area office stated that federally operated IHS facilities have experienced challenges competing with other health care systems, including tribally operated facilities, in recruiting local health care providers. For example, officials from the Oklahoma City area office told us their area has four of the largest American Indian tribes in the country running their own health systems. According to these officials, in addition to IHS funds, these tribes use money from other sources to pay health care salaries. IHS officials explained that, as a result, tribes can pay higher salaries and may be able to offer other incentives that IHS is unable to provide.
Recruitment, relocation, and retention incentives. IHS may offer recruitment, relocation, and retention incentives. Specifically, for positions that are difficult to fill or for individuals who are unlikely to accept the position without an incentive, IHS may offer potential employees a recruitment incentive up to 25 percent of their annual salary. IHS may also pay a relocation incentive for a current employee who must relocate for a position that would otherwise be difficult to fill. In addition, IHS may pay a retention incentive of up to 25 percent of an employee's current salary if he or she (1) has unusually high or unique qualifications or if there is a special need of the agency, which makes retention essential, or (2) is likely to leave IHS without the retention incentive. Officials from the Phoenix area office told us IHS facilities use the retention bonuses extensively for nursing staff, in particular, to help match the market pay. IHS also analyzed the recruitment and retention of nurses and, as a result of this analysis, requested an exception to the 25 percent limit on recruitment, relocation, and retention incentives from the Office of Personnel Management (OPM). In December 2017, OPM approved IHS's request to offer incentives up to 50 percent, and IHS officials told us that they are currently reviewing implementation options.
Loan repayment. IHS’s Loan Repayment Program pays provider education loans in exchange for an initial two-year service commitment to practice in health facilities serving AI/AN communities. Recipients agree to serve two years in exchange for up to $20,000 per year in loan repayment funding and up to an additional $5,000 per year to offset tax liability, which IHS pays directly to the Internal Revenue Service. Loan repayment recipients can extend their initial two-year contract on an annual basis until their original approved educational loan debt is paid. In fiscal year 2017, a total of 1,267 providers—about 8 percent of the federal IHS workforce—were receiving IHS loan repayments. This included 434 new two-year contracts, 396 one-year extension contracts, and 437 providers starting the second year of their fiscal year 2016 two-year contract. However, IHS’s Loan Repayment Program is not able to pay for the loans of all providers who request it due to limited funding. According to officials in one area office, this has caused providers to either decline a job offer or leave IHS. According to IHS’s fiscal year 2019 budget justification, in fiscal year 2017, 412 providers employed by IHS who applied for loan repayment, did not receive one. An additional 376 applicants either declined a job offer because they did not receive loan repayment funding or were unable to find a suitable assignment meeting their personal or professional needs. Officials in the Billings Area Office told us several physicians stated during exit interviews that they were leaving because they did not receive the loan repayment funding they hoped to receive. According to area office officials, the Billings area lost 5 physicians in 2 weeks because they were not awarded loan repayments.
In addition to its own loan repayment program, IHS has worked with HHS’s Health Resources and Services Administration (HRSA) to increase opportunities for providers to apply for loan repayment through the National Health Service Corps. Specifically, IHS worked with HRSA to increase the number of facilities deemed medically underserved and therefore designated Health Professional Shortage Areas. According to IHS, this resulted in 684 health care delivery sites for placement of National Health Service Corps providers, and the number of placements increased to 443 providers as of August 2016. As of January 2018, according to IHS officials, there were 499 providers serving at 797 eligible sites. Applicants cannot receive loan repayment from more than one program concurrently.
Professional Development Opportunities
Officials from several facilities told us they provide access to professional development opportunities for IHS employees as a retention tool. For example, Northern Navajo Medical Facility (Shiprock) officials said they are sending nurse managers and two to three potential future leaders to the American Organization of Nurse Executives trainings. Officials told us this training allows the nurses to network with private executives and look at fellowships. In addition, Chinle Comprehensive Health Care Facility officials told us they paid for a 2-year residency at the University of Texas Health Science Center so one of their dentists could obtain additional training in pediatric dentistry. Officials told us that, in return, the dentist agreed to stay at the Chinle Comprehensive Health Care Facility for 6 years. In addition, Shiprock service unit officials told us they have offered their providers, through a partnership with the University of New Mexico, an online Master of Science in Public Health program in health management.
Housing
When housing is limited near IHS facilities, IHS has made some housing available to assist with recruitment and retention of providers. Area officials told us federally operated IHS facilities in the Albuquerque, Great Plains, Phoenix, Billings, and Navajo areas provide some government- subsidized housing for providers and their families. At four of the seven facilities we visited—the Kayenta Health Center, Chinle Comprehensive Health Care Facility, Rosebud Hospital, and Pine Ridge Hospital—we observed some staff housing.
Kayenta Health Center. Officials from Kayenta Health Center told us that they provide 158 housing units, from 1 bedroom to 4 bedrooms. In addition, the facility has a 19-unit building, similar to a hotel (fully furnished), for temporary contract providers. Officials said they are considering opening units in this building to permanent employees.
Chinle Comprehensive Health Care Facility. Officials from Chinle Comprehensive Health Care Facility told us there are 264 housing units, ranging from 1 to 4 bedrooms, available for providers both on its campus and nearby. IHS officials also told us they provide access to 19 parking spaces for camping vehicles.
Rosebud Hospital. Officials from Rosebud Hospital stated they provide 150 housing units and are also constructing a 19-unit hotel-style building. They said that most, if not all, candidates from outside of the area ask about housing unit availability when deciding whether to accept a position.
Pine Ridge Hospital. Officials from Pine Ridge Hospital told us that IHS also provides 105 housing units for its employees. IHS officials explained that on-site housing is a necessity for on-call providers because staff without it must commute long distances, in very harsh conditions, to reach housing outside of reservation boundaries.
See figure 6 for examples of government-subsidized provider housing near the Kayenta Health Center, Chinle Comprehensive Health Care Facility, Rosebud Hospital, and Pine Ridge Hospital. See appendix II for information about housing provided by one selected tribe.
However, there is a greater demand for housing than IHS can provide. During our site visit, Chinle Health Care Facility officials stated that the availability of government-subsidized housing to meet employee demand is severely limited at all three of their facilities, and the availability of private housing in the community is “non-existent.” As a result, IHS officials from Chinle told us that some providers commute 60 to 90 minutes each way to work daily. IHS officials told us that, after conducting a needs assessment in 2016, they determined the unmet need for housing at IHS facilities was 1,100 units. According to these officials, the needs assessment also helped them identify some of the greatest needs for housing. The President’s fiscal year 2017 budget proposal for IHS requested $12 million to build new staff housing units “in isolated and remote locations for healthcare professionals to enhance recruitment and retention.” According to agency officials, based on its needs assessment, HHS provided $24 million to build new staff housing units at the Rosebud and Pine Ridge hospitals in the Great Plains area, at the Crownpoint and Chinle health care facilities in the Navajo area, and at the Supai clinic in the Phoenix area.
Student Recruitment Efforts
IHS has also taken steps to recruit future providers by providing scholarships, externships, internships, and residency rotations to health professional students.
Scholarships. IHS’s scholarship program provides financial support to qualified AI/AN candidates in exchange for a minimum 2-year service commitment within an Indian health program. Nearly 7,000 AI/AN students have received scholarship awards since the program started in 1978. The awards include (1) scholarships for candidates enrolled in preparatory or undergraduate prerequisite courses in preparation for entry to a health professions school, (2) pre-graduate scholarships for candidates enrolled in courses leading to a bachelor’s degree, including pre-medicine, pre-dentistry, and pre-podiatry, and (3) health professions scholarships for candidates who are enrolled in an eligible health profession degree program. According to IHS, 805 new scholarship applications were submitted in fiscal year 2017. After evaluating the applications, 331 were deemed eligible for funding, and the program was able to fund 108 new awards. The IHS Scholarship Program also reviewed applications from previously awarded scholars who were continuing their education; in fiscal year 2017, 154 continuation awards were funded. In addition to the scholarship program, according to IHS officials, the agency funds two medical students enrolled at the Uniformed Services University of the Health Sciences each year. Each graduate agrees to a 10-year obligation to IHS after medical school graduation and completion of training. In future years, IHS intends to fund two additional medical students at the Uniformed Services University of the Health Sciences.
Externships and internships. IHS provides scholarship recipients with opportunities to receive clinical experience in IHS facilities. In fiscal year 2017, the agency funded 94 students, who were employed for 30 to 120 workdays per calendar year. In addition, IHS provides externships to students temporarily called to active duty as Commissioned Corps officers through the Commissioned Officer Student Training and Extern Program (COSTEP). IHS officials said that the agency funded about 60-70 students in COSTEP in 2016. IHS also offers a Virtual Internship program through a partnership with the Department of State. Virtual interns spend 10 hours a week from September through May working remotely on their projects, which have included producing bilingual Navajo and English videos for rural health clinics, developing Navajo-specific health education materials on palliative care, improving behavioral health data collection methods, and creating social media strategies and campaigns for health promotion. For the 2017-2018 academic year, about 15 students are participating in virtual internships with IHS.
Residency rotations. IHS service units offer rotation opportunities for medical, nursing, optometry, dental, and pharmacy residents as a recruitment tool because research shows students are likely to stay and practice medicine in the area where they studied. For example, the Oklahoma City area has a Memorandum of Agreement with the Oklahoma State College of Medicine, which permits area officials to annually recruit up to two residents from the current year’s residency class to become federal employees while completing their residency program. For every year that IHS sponsors a resident’s position at the university, the resident has a one-year service obligation. In addition, IHS officials from Chinle stated that the service unit participates in educational agreements with numerous universities and residency programs to host medical students, nursing students, and medical residents for rotations. According to officials, recent graduates from residency programs applying for permanent positions with the Chinle Comprehensive Health Care Facility often cite prior rotations at the service unit, or word of mouth from students or residents who have rotated through the service unit, as a reason for applying. The IHS Pharmacy Residency Program is another recruitment program that offers residency training to pharmacists who are willing to serve in high-need locations. Pharmacy residents who are Commissioned Corps officers are required to complete 2 years of service at an IHS federal or tribal facility. Twenty-six Commissioned Corps and civilian pharmacists participate in the Pharmacy Residency Program. See appendix II for information on residency programs at tribally operated facilities.
Connecting with Potential Applicants
IHS officials said they have conducted webinars and career fairs in an attempt to connect with health professional students. For example, in 2016, IHS conducted two informational webinars to recruit Commissioned Corps applicants to facilities in the Great Plains area with critical clinical vacancies. According to IHS officials, approximately 60 applicants attended the two webinars, resulting in 15 nurse hires. In addition, Nashville area officials stated that the area office conducted a marketing campaign at the National Congress of American Indians Conference.
Officials explained that the area office provided information about desirable aspects of living in the Nashville area and collected e-mail addresses and areas of interest from potential job candidates. IHS’s Office of Human Resources also partners with HRSA’s Bureau of Health Workforce by participating in nationwide virtual career fairs to promote the National Health Service Corps scholarship and loan repayment opportunities.
IHS has also worked with the Office of the Surgeon General to increase the recruitment and retention of Commissioned Corps officers. In May 2017, the Office of the Surgeon General gave IHS priority access to new Commissioned Corps leads—meaning IHS has at least 30 days to make contact with potential applicants to the Commissioned Corps before other agencies have the opportunity to contact them. According to IHS officials, since being given priority access to Commissioned Corps leads, the agency has made 20 direct clinical care selections, of which 15 have entered on duty.
IHS Uses Strategies to Maintain Patient Access to Services and Reduce Provider Burnout When Positions Are Vacant, But Lacks Agency-wide Data on Use of Temporary Providers

In addition to its recruitment and retention strategies, IHS uses strategies to mitigate the negative effects of vacancies by helping to maintain patient access to services and helping to reduce provider burnout when positions are vacant. Specifically, IHS provides telehealth services; implements alternative staffing models, including hiring nurse practitioners and physician assistants in lieu of physicians; temporarily assigns Commissioned Corps officers to alternate duty stations as needed; and contracts with temporary providers.

Providing Telehealth Services
IHS’s telehealth services include two agency-wide programs that provide teleophthalmology and telebehavioral health services.
Teleophthalmology. The IHS Joslin Vision Network (IHS-JVN) Teleophthalmology Program provides annual diabetic eye exams to AI/AN patients in almost all IHS areas with federally operated facilities. According to IHS, patients’ retinal images are scanned locally and sent to a reading center, where doctors interpret the images and report back. Officials told us the IHS-JVN program examined 22,000 patients in 2016.
Telebehavioral health. The Telebehavioral Health Center of Excellence provides direct care services through video conferencing to patients at remote facilities from providers at IHS facilities that are able to provide the services. These services are provided in all IHS areas with federally operated facilities, and more than 5,800 patient visits occurred in 2016. Additionally, officials told us there are regional telebehavioral health programs, such as one in the Oklahoma City area that, combined with the Telebehavioral Health Center of Excellence, saw over 10,000 patients in 2016. IHS officials stated that patients appreciate the telebehavioral services in their communities because, in many communities, they are the only behavioral health services available. The IHS psychiatrist who provides services is located in Oklahoma City because, according to IHS officials, it is easier to recruit providers to a more urban location.
In addition to these agency-wide telehealth programs, IHS officials identified multiple other local telehealth arrangements that facility staff have developed to help maintain patient access to medical services. For example, there is a diabetes consultant for the Portland area who conducts telenutrition services. There is also a teledermatology program for the Phoenix Area federal facilities operated out of the Phoenix Indian Medical Center. Additionally, several service units—including Pine Ridge Hospital, Rosebud Hospital, and the Sioux San Medical Center—have contracts for emergency department telehealth services. Figure 7 shows telehealth equipment in the Rosebud Hospital emergency department.
Implementing Alternative Staffing Models
Staff from multiple facilities told us they have implemented alternative staffing models to focus on hiring for non-physician practitioner positions because these positions are slightly easier to fill. For example, Northern Navajo Medical Center officials told us the facility, facing an emergency department physician shortage, hired physician assistants and nurse practitioners instead. These officials said they converted two physician positions into four physician assistant and nurse practitioner positions. In addition, Chinle officials stated that they added two physician assistants to the urgent care department due to complaints about patient wait times, and patient wait times have decreased as a result. Officials also mentioned dental therapists as an additional type of clinical professional who may be added to the Chinle Health Care Facility staffing model because the service unit has been unable to recruit and retain enough dentists to meet patient need.
Commissioned Corps Deployments and Temporary Duty Assignments
IHS officials stated that they have worked with the Office of the Surgeon General to deploy Commissioned Corps officers, mainly to the Great Plains area, and have also coordinated voluntary temporary duty assignments of Commissioned Corps officers (within IHS and from other agencies) to temporarily fill staffing shortages or meet other mission-critical needs. IHS officials stated that Commissioned Corps officers may also be temporarily assigned to an IHS site to provide services, such as behavioral health support during a suicide cluster.
Temporary Contract Providers
IHS officials from 9 of the 10 geographic areas with federally operated facilities and all seven facilities in our review told us they regularly use temporary contract providers—such as through locum tenens contracts and contracts with university fellowship programs—to maintain patient access to care when positions are vacant.
Locum tenens. Officials from the Kayenta Health Center said they contract with temporary providers to compensate for vacancies, and the facility contracts with about 9 providers who rotate to fill 3 vacant emergency department positions. Officials from the Portland area stated that they use temporary providers when there is a provider staffing shortage. They explained that the Portland area has provider vacancies that have been open for years, and temporary providers fill these vacancies for an extended period of time, usually with a rotating series of providers. Chinle Health Care Facility officials said temporary providers, when of sufficiently high quality, have been recruited to join the permanent corps of civilian service staff. However, they told us locum tenens providers can cost $50,000 to $200,000 more annually than permanent physicians’ salaries, exclusive of benefits, depending on the specialties and hourly rates associated with the contracts. They said they are finding that increasingly higher hourly rates are needed to ensure a sufficient supply of high-quality temporary providers.
IHS officials at all levels of the agency told us they prefer to hire permanent providers, rather than use locum tenens contracts. Facility officials explained that persistent turnover in temporary staff may jeopardize continuity of care. For example, Sioux San Medical Center officials expressed concern about the quality of the care provided by temporary contractors, as well as the consistency of the care provided because the contractors rotate frequently. IHS officials told us that many providers prefer to be on contract due to the higher compensation rates as a contractor, even when taking federal benefits into account.
University physicians. IHS officials explained that area offices may also contract with university fellowship programs to provide visiting providers. For example, according to IHS, the Chinle Health Care Facility has entered into long-term contractual agreements with two academic fellowship programs—University of California-San Francisco Health Program and the University of Utah Global Health Fellowship. Officials told us these programs provide U.S. residency-trained, board-certified physicians interested in global health to work 6-month assignments alternating with another fellow at an international site. In addition, IHS officials stated that the Navajo area office is collaborating with the University of California-San Francisco and its global health fellowship to assign global health fellows to a Navajo Area site for 6 months out of each year. The officials explained that 24 fellows were placed in Navajo-area facilities in 2017 at costs substantially lower than those of locum tenens contracts. According to IHS, the Great Plains area office has collaborated with the University of Washington’s global health fellowship program to assign global health fellows in Internal Medicine to Pine Ridge Hospital for 11-month placements.
Agency-wide information on the extent to which facilities use these temporary providers, and the amount spent on them, is not readily available to IHS leadership. While IHS has agency-wide information on vacancies through the Capital Human Resource Management System, IHS delegates the acquisitions process for temporary provider contracts to the head of each area-level Contracting Office. Therefore, agency-wide information on the number of full-time equivalent employees that are temporary providers working at IHS facilities, as well as the cost of these providers, is not readily available. As discussed, officials we spoke with at IHS facilities told us that temporary providers can cost more depending on the specialties and hourly rates. Without agency-wide information on the extent to which such providers are used, IHS is not fully informed about facilities’ reliance and expenditures on temporary providers or their potential effect on patient care, which is inconsistent with federal internal control standards regarding the availability of relevant information to facilitate management decision making and performance monitoring. Specifically, federal internal controls standards state that agency management should obtain, process, and use quality information to make informed decisions and evaluate the agency’s performance in achieving key objectives and addressing risks. IHS’s lack of agency-wide information on the costs and number of temporary providers used at its facilities impedes its ability to make decisions about how best to target its resources to address gaps in provider staffing and ensure that health services are available and accessible across IHS facilities.
Conclusions
Maintaining a stable clinical workforce capable of providing quality and timely care is critical for IHS to ensure that comprehensive health services are available and accessible to American Indian/Alaska Native people. However, despite efforts to recruit and retain providers, IHS continues to face considerable challenges, such as geographic isolation and limited amenities, in its long-standing struggle to fill sizeable provider vacancies. Although IHS is authorized to offer recruitment and retention incentives, such as loan repayments and subsidized housing, the demand for these incentives has been greater than the agency can meet due to resource constraints. More complete information on contract providers could help IHS officials decide where to better target the agency’s limited resources to address gaps in provider staffing and ensure that health services are available and accessible to American Indian/Alaska Native people across IHS facilities.
Recommendation for Executive Action
We are making the following recommendation to IHS: The Director of IHS should obtain, on an agency-wide basis, information on temporary provider contractors, including their associated cost and number of full-time equivalents, and use this information to inform decisions about resource allocation and provider staffing. (Recommendation 1)
Agency Comments
We provided a draft of this report to HHS and the Department of Veterans Affairs (VA) for review and comment. We received written comments from HHS that are reprinted in appendix III. HHS concurred with our recommendation.
In its comments, HHS stated that IHS plans to update its policies by December 2018 to include a centralized reporting mechanism requirement for all temporary contracts issued for providers. HHS also stated that, upon finalization of the policy, IHS will broadly incorporate and implement the reporting mechanism agency-wide and maintain it on an annual basis. HHS also provided technical comments, which we incorporated as appropriate.
VA provided comments on a draft of this report in an email, stating that VA officials continue to work to improve recruitment and retention of providers at VHA to ensure that they have the correct number of providers with the appropriate skills.
We are sending copies of this report to HHS, the Department of Veterans Affairs, and appropriate congressional committees. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov/.
If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix IV.
Appendix I: Provider Vacancies with the Indian Health Service (IHS)
IHS data collected in November 2017 included the number of positions and vacancies for several types of providers, including physicians, nurses, dentists, pharmacists, nurse practitioners, certified registered nurse anesthetists, certified nurse midwives, and physician assistants. Most of these positions are in the 8 of 12 IHS areas in which IHS has substantial direct care responsibilities. Vacancies for nurse practitioners, certified nurse midwives, dentists, pharmacists, and physician assistants are provided in this appendix.
Nurse practitioners. Nationwide, 97 of 303 positions were vacant in November 2017, and vacancy rates in the 8 areas in which IHS has substantial direct care responsibilities ranged from 12 percent in the Oklahoma City area to 47 percent in the Albuquerque area. (See fig. 8.)
Certified nurse midwives. Nationwide, 8 of 55 positions were vacant in November 2017. See table 1.
Dentists. Nationwide, 81 of 306 positions were vacant in November 2017, and vacancy rates in the 8 areas in which IHS has substantial direct care responsibilities ranged from 14 percent in the Phoenix area to 39 percent in the Bemidji area. (See fig. 9.)

Pharmacists. Nationwide, 80 of 637 positions were vacant in November 2017, and vacancy rates in the 8 areas in which IHS has substantial direct care responsibilities ranged from 3 percent in the Bemidji area to 17 percent in the Albuquerque area. (See fig. 10.)
Physician assistants. Nationwide, 37 of 125 positions were vacant in November 2017. See table 2.
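For readers who want to reproduce the nationwide vacancy rates implied by these counts, the short sketch below computes them. It is purely illustrative and does not reflect any IHS system; the provider types and counts are taken directly from this appendix.

```python
# Minimal sketch (not IHS's actual methodology): nationwide vacancy rates
# computed from the November 2017 counts reported in this appendix.
counts = {
    "Nurse practitioners": (97, 303),
    "Certified nurse midwives": (8, 55),
    "Dentists": (81, 306),
    "Pharmacists": (80, 637),
    "Physician assistants": (37, 125),
}

for provider, (vacant, total) in counts.items():
    rate = vacant / total * 100  # vacancy rate as a percentage
    print(f"{provider}: {vacant} of {total} positions vacant ({rate:.0f} percent)")
```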
Appendix II: Tribal Strategies of Housing Units and Physician Residency Programs to Recruit and Retain Healthcare Providers
Tribal officials from the Chickasaw Nation and Choctaw Nation described their use of strategies to address vacancies, which were very similar to strategies used by the Indian Health Service (IHS). Like the IHS, one tribe uses the availability of housing units near its medical facility as a recruitment tool for health care providers. Both tribes that described their strategies to recruit and retain providers told us they use their physician residency program in Family Medicine as a recruitment tool.
Availability of housing units near the medical facility. Tribal officials from the Choctaw Nation told us the tribe uses housing units—58 housing units that range from studio apartments to multi-room houses—as a recruitment strategy for providers. The provider housing units are occupied by physicians, as well as by physician residents who need housing during their residency or for medical students doing clinical rotations through the facility. According to tribal officials, a factor they considered in making housing units available for providers was the location of the tribe’s hospital in a rural area of Oklahoma, in a town with a population of about 1,000, which lacks sufficient housing.
In September 2017, tribal officials told us all the available housing units were occupied, and the tribe was in the process of constructing at least two 4-bedroom houses. See fig. 11 for photos of a completed multi-room house and one under construction. Offering the housing units to provider staff is also part of the tribe’s overall strategy of offering quality-of-life benefits to attract and retain providers.
Implementing Accredited Physician Residency Programs. Tribal officials we interviewed noted that they developed physician training programs—specifically graduate medical education, commonly known as residency training—which they use as an important recruitment tool for physicians. One tribe has implemented its Family Medicine residency program, while the other tribe intends for its Family Medicine residency program to be operational in July 2018. Both residency programs are accredited by the American Osteopathic Association, in addition to the American College of Osteopathic Family Physicians for one tribe and the Accreditation Council for Graduate Medical Education for the other tribe. One program is accredited for 3 resident physicians per year, for a total of 9 physician residents at a time, while the other program is accredited for 4 resident physicians per year.
We previously found that physicians may practice in geographic areas similar to those where they complete their residency training. Tribal officials with the implemented Family Medicine residency program told us it is successful in that they hired 7 of the 9 residents who completed the residency program. There is also a retention benefit—current providers have the opportunity to stay up-to-date on the latest medical treatment methods by serving as either mentors or as faculty for the residents.
Appendix III: Comments from the Department of Health and Human Services
Appendix IV: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Kathleen M. King (Director), Ann Tynan (Assistant Director), Kelly DeMots (Assistant Director/Analyst-in-Charge), Sam Amrhein, Kristen Anderson, Muriel Brown, Kaitlin Farquharson, Peter Mann-King, Maria Ralenkotter, Lisa Rogers, and Jennifer Whitworth made key contributions to this report.
IHS is charged with providing health care to AI/AN people who are members or descendants of 573 tribes. According to IHS, AI/AN people born today have a life expectancy that is 5.5 years less than that of all races in the United States, and they die at higher rates than other Americans from preventable causes. The ability to recruit and retain a stable clinical workforce capable of providing quality and timely care is critical for IHS. GAO was asked to review provider vacancies at IHS.
This report examines (1) IHS provider vacancies and challenges filling them; (2) strategies IHS has used to recruit and retain providers; and (3) strategies IHS has used to mitigate the negative effects of provider vacancies. GAO reviewed IHS human resources data for the provider positions that the agency tracks. GAO also reviewed policies, federal internal control standards, and legal authorities related to providers in federally operated IHS facilities. GAO interviewed IHS officials at the headquarters and area level and at selected facilities. GAO selected facilities based on variation in their number of direct care outpatient visits and inpatient hospital beds in 2014.
What GAO Found
Indian Health Service (IHS) data show sizeable vacancy rates for clinical care providers in the eight IHS geographic areas where the agency provides substantial direct care to American Indian/Alaska Native (AI/AN) people. The overall vacancy rate for providers—physicians, nurses, nurse practitioners, certified registered nurse anesthetists, certified nurse midwives, physician assistants, dentists, and pharmacists—was 25 percent, ranging from 13 to 31 percent across the areas.
IHS officials told GAO that challenges to filling these vacancies include the rural location of many IHS facilities and insufficient housing for providers. Officials said long-standing vacancies have a negative effect on patient access, quality of care, and employee morale.
IHS uses multiple strategies to recruit and retain providers, including offering increased salaries for certain positions, but it still faces challenges matching local market salaries. IHS also offers other financial incentives, and has made some housing available when possible. In addition, IHS uses strategies, such as contracting with temporary providers, to maintain patient access to services and reduce provider burnout. Officials said these temporary providers are more costly than salaried employees and can interrupt patients' continuity of care. However, IHS lacks agency-wide information on the costs and number of temporary providers used at its facilities, which impedes IHS officials' ability to target its resources to address gaps in provider staffing and ensure access to health services across IHS facilities.
What GAO Recommends
GAO recommends that IHS obtain, on an agency-wide basis, information on temporary provider contractors, including their associated cost and number of full-time equivalents, and use this information to inform decisions about resource allocation and provider staffing.
IHS concurred with GAO's recommendation. |
DOD Has Addressed One Statutory Requirement since June 2018, but Has Not Addressed Five Remaining Requirements
DOD has addressed one additional statutory requirement of section 911 of the NDAA for Fiscal Year 2017 since our June 2018 report. However, DOD has still not addressed five other requirements, including (1) issuing its organizational strategy, (2) issuing guidance on cross-functional teams, (3) providing training on cross-functional teams for team members and their supervisors, (4) providing training for presidential appointees, and (5) taking actions to streamline the Office of the Secretary of Defense, as shown in table 1.
DOD addressed one of the statutory requirements in section 911 by submitting a report to Congress on the establishment of cross-functional teams on June 21, 2018. The report described the number of cross-functional teams established to date and the design and function of those teams, consistent with the requirements in section 911.
OCMO officials told us that DOD plans to address three of the five remaining requirements by March 2019. Specifically, the department plans to take the following actions.
Issue DOD’s organizational strategy. DOD has drafted, but not issued, its organizational strategy, which section 911 required to be issued by September 1, 2017. In June 2018, we reported that OCMO officials had revised the draft strategy to address the recommendation from our February 2018 report, including identifying potential action steps for the department that align with our leading practices for mergers and organizational transformations. OCMO officials have again revised the draft organizational strategy, incorporating, among other things, the criteria that distinguish cross-functional teams established under section 911 from other cross-functional working groups, committees, integrated product teams, and task forces, as required by section 918 of the NDAA for Fiscal Year 2019. The officials said they expect the Secretary of Defense to issue the strategy in March 2019—18 months later than required by section 911.
Take actions to streamline the Office of the Secretary of Defense.
OCMO officials have revised the draft organizational strategy to identify the actions the department has taken that it views as responsive to this requirement. For example, the draft strategy states that DOD has delegated authority to approve certain global force management actions to the Chairman of the Joint Chiefs of Staff and certain acquisition oversight functions to the military departments. Section 911 required DOD to take these actions by June 23, 2018. As noted above, however, the organizational strategy has not been finalized. We will assess these actions against the requirements of section 911 after the organizational strategy has been issued.
Issue guidance on cross-functional teams. DOD has drafted, but not issued, guidance on cross-functional teams, which section 911 required to be issued by September 30, 2017. In June 2018, we reported that OCMO officials had revised the draft guidance to address the recommendation from our February 2018 report. OCMO officials stated that they have no other planned revisions and that they expect the Secretary of Defense to issue the guidance in March 2019—18 months later than required by section 911.
Further, OCMO officials told us that DOD plans to finalize the draft curricula and provide training to fulfill two additional section 911 requirements after the organizational strategy is issued.
Training for cross-functional team members and their supervisors.
OCMO has not provided the required training to cross-functional team members and their supervisors. OCMO officials stated that they plan to send the draft training curriculum for cross-functional team members and their supervisors to the Secretary after they send the strategy. In February 2018, we reported that the draft training curriculum addressed the section 911 requirements; OCMO officials told us they plan no further revisions to the curriculum. After the Secretary approves the curriculum, the officials stated, they plan to offer the training to cross-functional team members. Some cross-functional team members we met with stated that receiving training on cross-functional teams earlier would have been helpful for them to understand how to operate in a cross-functional team environment, such as reporting to both the team leader and to their home organization.
Training for presidential appointees. OCMO has not provided the required training to individuals filling presidentially appointed, Senate-confirmed positions in the Office of the Secretary of Defense. Section 911 requires these individuals to complete the training within 3 months of their appointment, or for DOD to request waivers. However, as of January 2, 2019, 23 of 35 such officials had been in their positions for more than 3 months, and none had received the training or been granted a training waiver. In our February 2018 report, we found that the draft curriculum met only one of the four required elements in section 911. We recommended, and DOD concurred, that the CMO should either (1) provide training that includes all of the required elements in section 911 or (2) develop criteria for obtaining a waiver and have the Secretary of Defense request such a waiver from the President for these required elements. In October 2018, an OCMO official stated that OCMO had revised the draft training curriculum for presidential appointees to include all of the required elements in section 911. The official also stated that OCMO plans to send the draft training curriculum to the Secretary of Defense for review after OCMO sends the organizational strategy. Once the curriculum is approved, the official stated that OCMO plans to recommend to the Secretary of Defense that all presidential appointees in the Office of the Secretary of Defense receive the training and does not plan to request waivers.
As described above, we have previously recommended that DOD take actions to improve its implementation of the section 911 requirements related to the organizational strategy, guidance, and training. As we have reported before, addressing our recommendations and fully implementing the remaining requirements would better position DOD to effectively implement its cross-functional teams and advance a collaborative culture, as required by the NDAA. We will continue to monitor DOD’s progress in addressing these statutory requirements and our related recommendations.
DOD Plans to Establish One Cross-Functional Team, Disestablish Another, and Will No Longer Consider Nine Business Reform Teams as Responsive to Section 911
DOD is establishing a new cross-functional team to address growing challenges in the electronic warfare mission area. Section 918 of the NDAA for Fiscal Year 2019 requires DOD to establish this cross-functional team by November 11, 2018, to identify gaps in electronic warfare and joint electromagnetic spectrum operations, capabilities, and capacities within the department across personnel, procedural, and equipment areas. In January 2019, an OCMO official stated that the Office of the Under Secretary of Defense for Acquisition and Sustainment had drafted the team's charter and that it had been sent to the Secretary of Defense for review and approval.
In addition, DOD plans to disestablish the first cross-functional team established in response to section 911 to address challenges with personnel vetting and background investigations. This team was responsible for managing the transfer of background investigations for certain DOD personnel from the Office of Personnel Management to DOD. However, Office of the Under Secretary of Defense for Intelligence officials stated that DOD plans to subsume the roles and responsibilities of the team into a new Personnel Vetting Transformation Office. According to the officials, the new office will be responsible for managing the administration’s proposed transfer of background investigations for all executive branch personnel from the Office of Personnel Management to DOD. As a result, the cross-functional team’s roles and responsibilities would overlap with those of the Personnel Vetting Transformation Office, the officials stated. The officials expect to formally disestablish the cross-functional team in the first quarter of fiscal year 2019 after DOD issues the charter for the Personnel Vetting Transformation Office.
Last, DOD continues to implement its nine cross-functional teams dedicated to reforming and improving business operations, but plans to no longer consider these teams as responsive to section 911. The National Defense Business Operations Plan for Fiscal Years 2018-2022, issued in May 2018, stated that these teams were established pursuant to section 911. As of October 2018, however, DOD’s draft organizational strategy states that these teams were not established in response to section 911. Instead, it describes them as a second layer of cross-functional coordination that will aid in ensuring broader implementation of collaborative and team-oriented practices in the department. We describe these teams’ efforts to improve DOD’s enterprise business operations below and in appendix III.
DOD’s Enterprise Business Reform Is Largely Driven by Nine Cross-Functional Teams, but Progress Has Been Uneven
Nine Cross-Functional Teams Are Key to DOD’s Enterprise Business Reform
The National Defense Business Operations Plan for Fiscal Years 2018-2022 highlights nine cross-functional teams as key mechanisms for implementing the plan’s strategic objective to improve and strengthen business operations through a move to enterprise or shared services. From October 2017 through January 2018, the Deputy Secretary of Defense, at the direction of the Secretary, established these nine teams to implement initiatives intended to improve the quality and productivity of the department’s business operations, including moving toward more use of enterprise services. According to memoranda appointing the team leaders, the teams support the Secretary of Defense’s focus on creating a more lethal and effective force by allowing the department to reallocate resources from business operations to readiness and to recapitalization of the combat force. These nine teams—hereafter referred to as business reform teams and whose leaders report to the CMO—address community services management, financial management, health care management, human resources, information technology and business systems, real property management, service contracts and category management, supply chain and logistics, and testing and evaluation. They are described in more detail in appendix III.
The Fiscal Year 2019 DOD Annual Performance Plan identifies performance goals and measures to achieve the strategic goals and objectives described in the National Defense Business Operations Plan, including the goal of reforming the department’s business practices. It designates several business reform team leaders as responsible for meeting the performance goals and associated performance measures. For example, the leader of the information technology and business systems reform team is responsible for the performance goal to transform how the department delivers secure, stable, and resilient information technology infrastructure in support of warfighter lethality. This goal is consistent with the team’s overarching objective to plan and execute the transformation of all business systems affecting support areas within the department.
The Annual Performance Plan’s objectives and timeframes related to the business reform teams, however, do not fully align with some of the initiatives that the teams are pursuing. For example, according to the plan, the leader of the community services management team is responsible for developing a strategic plan for armed forces retirement home reform by the second quarter of 2018. However, according to a list of the team’s current initiatives as of September 2018, the team was not pursuing this initiative. In October 2018, OCMO officials stated that Washington Headquarters Service is currently leading the armed forces retirement home reform effort. When we asked these officials how they view the relationship between performance measures in the plan and those of the business reform teams’ initiatives, they acknowledged that the teams’ initiatives have evolved since the plan’s development and that the teams have identified additional initiatives that may not be reflected in the plan. They also noted that OCMO drafted the content for the Fiscal Year 2019 DOD Annual Performance Plan before most of the teams were fully staffed and operational. As of October 2018, the officials stated that OCMO was coordinating with the team leaders to review the Fiscal Year 2019 DOD Annual Performance Plan and, as appropriate, to modify or develop new performance measures and targets for the Fiscal Year 2020 DOD Annual Performance Plan. Given DOD’s efforts to address this issue, we are not making a recommendation at this time, but will continue to monitor their efforts as part of our ongoing work on the high-risk nature of DOD’s business transformation efforts.
The Progress of the Business Reform Teams Has Been Uneven, and Some Teams Lack Resources to Fully Implement Their Initiatives
DOD has made some progress establishing and organizing the business reform teams, but implementation of the teams’ initiatives has been uneven. We found that implementation of the business reform teams has demonstrated some key characteristics of leading practices for implementing effective cross-functional teams that we have identified in our prior work. For example, across all the teams we spoke with, members were responsible for leading the development of their team’s initiatives and communicating with their home organizations to obtain input, demonstrating a well-defined team structure. In addition, the business reform teams are structured to facilitate open and regular communication, another leading practice. For example, the teams are generally co-located with each other, which enables direct communication among team members and between teams, members stated. Further, members from most of the teams we spoke with were supportive of their team leaders and viewed them as effective in their roles, demonstrating an inclusive team environment. Team leaders across all teams also stated that they regularly interact with senior management, such as through weekly one-on-one meetings with the CMO or Deputy CMO. This engagement reflects a key characteristic that states team leaders should regularly interact with senior management.
However, we found that the business reform teams’ efforts have not proceeded according to early plans outlined by the department. DOD’s August 2017 report to Congress on restructuring the CMO organization stated that the teams were intended to help modify processes to move toward enterprise service delivery. According to the report, the department would transition to DOD enterprise services by the end of fiscal year 2018. In July 2018, OCMO officials acknowledged that they were behind schedule, but told us they expected to catch up to this deadline by the end of fiscal year 2018, as originally planned. That deadline was not realized. According to OCMO officials, the teams are identifying new milestones for implementing initiatives, some of which will contribute to a move toward enterprise services.
In addition, the business reform teams vary in the number of initiatives they are pursuing. As of September 2018, OCMO reported that the teams were pursuing a total of 135 initiatives and that the number of initiatives per team ranged from 2 to 38. For example, the community services management team was developing 2 initiatives—1 to examine the feasibility of merging DOD’s three military exchange services and the Defense Commissary Agency into a single resale enterprise, and the other to streamline the inventory of DOD lodging. In contrast, the supply chain and logistics team was developing 21 short- and long-term initiatives, such as reducing the footprint of underutilized warehouses and developing better data interoperability throughout the supply chain and logistics enterprise.
Further, the teams’ progress in advancing their initiatives to the implementation and monitoring phase has varied. The Reform Management Group oversees the business reform teams. The Deputy Secretary of Defense chairs the Reform Management Group, and the CMO facilitates regular meetings of the group. The Reform Management Group authorizes the business reform teams to proceed with their initiatives through five gates—0 through 4. These gates trace initiatives from conception to implementation and monitoring. Before proceeding from one gate to the next, the teams must submit certain deliverables to the Reform Management Group for review and approval. For example, before an initiative can proceed to gate 1, OCMO requires the teams to submit a charter for the initiative, which can identify, among other things, the problem or opportunity statement, the project scope, expected outcomes and risk analysis, and preliminary performance measures. Figure 1 provides an overview of the five gates and the status of initiatives by gate, as of September 2018.
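The gate structure described above lends itself to a simple schematic summary. The sketch below is a hypothetical illustration only, not DOD's actual process, which is administered by the Reform Management Group rather than by software. The gate numbers, the gate-1 charter deliverable, and gate 3 as the implementation phase come from this report; the remaining gate labels and the function name are assumptions made for illustration.

```python
# Illustrative sketch only: models an initiative moving through the
# five-gate review process (gates 0-4) described in this report.
# Labels other than the gate-1 charter and gate-3 implementation
# phase are approximations.
GATES = {
    0: "Concept identified",
    1: "Charter approved (problem statement, scope, outcomes, risks, measures)",
    2: "Detailed planning approved",
    3: "Implementation underway",
    4: "Implementation complete; monitoring outcomes",
}

def advance(initiative: dict) -> dict:
    """Move an initiative to the next gate once its deliverables are approved."""
    if initiative["gate"] < 4 and initiative["deliverables_approved"]:
        initiative["gate"] += 1
        initiative["deliverables_approved"] = False  # next gate needs a new review
    return initiative

initiative = {"name": "Example initiative", "gate": 0, "deliverables_approved": True}
initiative = advance(initiative)
print(initiative["gate"], "-", GATES[initiative["gate"]])  # 1 - Charter approved ...
```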
As shown in figure 1, while some teams have successfully advanced several initiatives to gate 4, others have not yet progressed initiatives past gate 2. Specifically, as of September 2018, DOD reported that 104 of the teams’ 135 initiatives had not yet reached gate 3, the implementation phase. According to the teams we interviewed, several factors may affect the progress of an initiative, such as its complexity or a team’s approach to developing initiatives. For example, the community services management team leader stated that the team is primarily focused on the consolidation of the defense commissaries and exchanges, an initiative that is relatively large in scope and complexity. According to the team leader, this initiative involves a number of internal stakeholders, including all of the military services, as well as outreach to external stakeholders, such as veterans’ organizations. In addition, the leader stated that the team would need legislative changes to fully implement the initiative. As a result of the large scope and complexity, the leader expects the initiative to take longer to implement than others. Some teams have pursued a proof-of-concept approach to developing their initiatives, which involves pilots to test initiatives to prove their value prior to department-wide implementation. For example, the health care management team is conducting a regional pilot to test the feasibility of consolidating the purchasing of services across the military health system.
DOD has asserted that some of its initiatives have produced benefits through savings or efficiencies. For example, according to a September 2018 DOD report on the department’s investments in support of the National Defense Strategy, the department achieved $1.61 billion in benefits by implementing private-sector best practices in purchasing goods and service contracts in the Air Force and defense agencies. In addition, DOD reported that the department saved $297 million through commercial information technology solutions, department-wide network management, and optimized data centers. Further, according to the report, consolidating four health care enterprises improved patient care and medical readiness, with an estimated savings of more than $2.5 billion annually by 2023. OCMO officials stated that they are still in the process of working with the Office of the Under Secretary of Defense for Comptroller to document savings generated from the business reform teams’ initiatives. Given that OCMO officials stated they are taking steps to document savings generated from the teams’ initiatives, we are not making a recommendation at this time, but will continue to monitor their efforts as part of our ongoing work on the high-risk nature of DOD’s business transformation efforts.
One senior DOD official involved in the reform effort acknowledged that the teams’ progress has been uneven. He cited a number of factors that can affect teams’ implementation, including the degree to which the teams have support from the highest levels of department leadership to operate independently and advance changes that may be unpopular with internal or external stakeholders, and the ability of teams to tackle longstanding systemic challenges, such as inaccurate cost data throughout the department. This official and several teams we met with cited the importance of the team leader’s commitment to driving team success.
We found that uncertainty with funding for initiatives may be an additional factor inhibiting some teams’ progress. In some cases, the business reform teams need funding to further develop and implement their initiatives, such as the supply chain and logistics team’s requirement for $2.4 million to conduct a pilot project that included conducting three site visits for warehouse and labor assessments in support of one of its initiatives. According to OCMO officials, the business reform teams can request funding from OCMO to further develop their initiatives, or if funding is not available from OCMO, the teams can seek funding from functional organizations. However, even in the early stages of their implementation, some teams told us that they did not have access to sufficient funding to fully develop and implement some of their approved initiatives or that the process for obtaining the funding was uncertain. For example, in June 2018, one team leader told us that the team did not have sufficient funding to implement four initiatives. The leader also stated that the team was not alerted to the lack of funding until immediately prior to its planned implementation of these initiatives. Members from another team stated that the Reform Management Group wanted the team to implement its initiatives more quickly, which increased the amount of funding the team needed for implementation. When the team requested additional funding, however, OCMO did not have it available. Further, OCMO officials told us that the teams submitted nine requests for funding in fiscal year 2018, but OCMO did not have funding to support four of these requests as of the end of fiscal year 2018.
As the teams continue to develop and implement their initiatives, the number of requests for funding may increase in the future. Our prior work on efficiency initiatives has found that up-front investments may often be required to realize long-term efficiencies and savings. In this regard, OCMO officials told us that, as of September 2018, the nine teams had planned investments of about $6.7 billion to implement their initiatives from fiscal years 2018 through 2024. OCMO officials stated that this amount is a projection from the teams, and DOD has not yet identified sources for this funding. In addition, officials stated that more investment could be needed as the teams continue to develop initiatives and more enter the implementation phase. However, according to DOD’s budget materials for fiscal year 2019, requested funding for OCMO—a source used to fund the development of some of the teams’ initiatives—will decrease from about $48 million in fiscal year 2018, to about $36 million in fiscal year 2019.
Leading practices for implementing effective cross-functional teams highlight the importance of senior management providing teams with access to resources. These leading practices also state that teams should have well-defined team operations with established rules and procedures. Further, the findings from a study contracted by DOD in August 2017 to determine how best to implement effective cross-functional teams identified actions for DOD to consider for supporting the implementation of its cross-functional teams, including identifying funding mechanisms to fully support cross-functional teams. The study suggested that language outlining the preferred mechanisms and authorities for this purpose can be included in cross-functional team guidance.
OCMO officials told us that the office maintains a list of funding requests from the teams and prioritizes which initiatives to fund based on several factors including estimated yield, feasibility, and available resources for implementation. However, OCMO did not have a process for identifying and prioritizing available funding for implementing the initiatives planned by the business reform teams for fiscal year 2018, and has not established one for fiscal year 2019. According to OCMO officials, the department initially planned to use available funding from OCMO or the savings generated by the initiatives to fund the development and implementation of other initiatives. However, OCMO officials have since recognized that funding is needed and they are in the early stages of developing an approach to do so. Specifically, OCMO officials said they are working with the Office of the Under Secretary of Defense for Comptroller to identify funding for initiatives in fiscal year 2020. While there will likely be initiatives that cannot be funded given limited resources, OCMO and the reform teams could benefit from a clear process for identifying and prioritizing available funding. Without such a process, OCMO and the reform teams may not be able to adequately plan for and execute their initiatives.
Conclusions
Section 911 of the NDAA for Fiscal Year 2017 called for organizational and management reforms to assist DOD in addressing challenges that have hindered collaboration and integration across the department. While the department has taken some steps to implement the section 911 requirements, it has still not met statutory due dates for implementing key requirements intended to support its cross-functional teams and to advance a more collaborative culture within the department. We continue to believe it is important for senior leadership to demonstrate their commitment to fulfilling section 911 by addressing our prior related recommendations and by completing the remaining requirements.
Further, section 921 of the NDAA for Fiscal Year 2019 requires DOD to reform its enterprise business operations to increase the effectiveness and efficiency of mission execution. DOD has highlighted its nine cross-functional teams dedicated to improving the department’s business operations as key to achieving enterprise business reform. However, this effort has been marked by a slow start and uneven progress, and teams face a number of challenges. One key challenge is the teams’ lack of resources to drive their initiatives forward. OCMO has not established a process for identifying and prioritizing available funding for the development and implementation of the teams’ initiatives, which has hampered the success of some of the enterprise reform efforts.
Recommendation for Executive Action
The Secretary of Defense should ensure that the Chief Management Officer establishes a process for identifying and prioritizing available funding to develop and implement initiatives from the cross-functional reform teams. (Recommendation 1)
Agency Comments
We provided a draft of this report to DOD for review and comment. In its written comments, which are reproduced in appendix V, DOD concurred with our recommendation and described ongoing and planned actions to address it.
We are sending copies of this report to the appropriate congressional committees, the Acting Secretary of Defense, and DOD’s Acting Chief Management Officer. In addition, the report is available at no charge on our website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-2775 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.
Appendix I: Prior GAO Reports on the Department of Defense’s (DOD) Implementation of Section 911 of the National Defense Authorization Act (NDAA) for Fiscal Year 2017
Section 911 of the NDAA for Fiscal Year 2017 included a provision for us—every 6 months after the date of enactment on December 23, 2016, through December 31, 2019—to submit to the defense committees a report. Each report is to set forth a comprehensive assessment of the actions that DOD has taken pursuant to section 911 during each 6-month period and cumulatively since the NDAA’s enactment. We issued our first report in June 2017, and did not make recommendations. We issued our second report in February 2018, and made four recommendations to improve DOD’s implementation of section 911. We issued our third report in June 2018, and did not make recommendations. Table 2 identifies our three prior reports on DOD’s implementation of section 911 and the status of the four recommendations from our February 2018 report.
Appendix II: Summary of Requirements in Section 911 of the National Defense Authorization Act for Fiscal Year 2017
Section 911 of the National Defense Authorization Act for Fiscal Year 2017 requires the Secretary of Defense to take several actions. Table 3 summarizes these requirements, the due date, and the date completed, if applicable, as of December 2018.
Appendix III: Overview of the Department of Defense’s (DOD) Nine Cross-Functional Teams Implementing Business Reform Initiatives
The Deputy Secretary of Defense has established nine cross-functional teams since October 2017 to implement reform initiatives intended to improve the quality and productivity of the department’s business operations, including moving toward more use of enterprise services. According to the memoranda appointing the team leaders, these teams support the Secretary of Defense’s focus on creating a more lethal and effective force by allowing the department to reallocate resources from business operations to readiness and to recapitalization of the combat force.
As of September 2018, these nine cross-functional teams varied in size, ranging from 5 to 31 members. According to OCMO officials, the size of the teams can vary based on the knowledge and expertise needed to implement the teams’ initiatives. The team leaders are either presidential appointees or members of the Senior Executive Service. In addition, the Deputy Secretary of Defense directed the military departments and functional organizations to appoint reform team members, and the teams include representatives from the military departments, functional organizations relevant to the reform topic, and external experts. At the time we met with the teams, most reported that they were the appropriate size and had the right skills and expertise represented on the team. Figure 2 provides additional details on the composition of these nine cross-functional teams, as of September 2018.
Appendix IV: Leading Practices for Implementing Effective Cross-Functional Teams
In February 2018, we reported on eight leading practices for implementing effective cross-functional teams. Table 4 identifies these leading practices and their related key characteristics.
Appendix V: Comments from the Department of Defense
Appendix VI: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Margaret Best (Assistant Director), Tracy Barnes, Arkelga Braxton, William Carpluk, Michael Holland, William Lamping, Chad Johnson, Matthew Kienzle, Amie Lesser, Ned Malone, Judy McCloskey, Sheila Miller, Sally Newman, Richard Powelson, Daniel Ramsey, Ron Schwenn, Jared Sippel, Susan Tindall, and Sarah Veale made key contributions to this report.
Why GAO Did This Study
DOD continues to confront organizational challenges that hinder collaboration. To address these challenges, section 911 of the NDAA for FY 2017 directed the Secretary of Defense to issue an organizational strategy that identifies critical objectives that span multiple functional boundaries; establish cross-functional teams to support this strategy and provide related guidance and training; and take actions to streamline the Office of the Secretary of Defense. Further, section 921 of the NDAA for FY 2019 calls for the Secretary of Defense to reform the department's enterprise business operations.
The NDAAs for FY 2017 and 2019 also included provisions for GAO to assess DOD's actions in response to sections 911 and 921, respectively. This report assesses the extent to which DOD has made progress in (1) addressing the requirements of section 911, and (2) reforming the department's enterprise business operations under section 921. GAO reviewed documentation on DOD's implementation of sections 911 and 921; interviewed cross-functional team leaders, members, and other DOD officials; and compared DOD's implementation of its cross-functional teams to GAO's key practices.
What GAO Found
The Department of Defense (DOD) has implemented four statutory requirements in section 911 of the National Defense Authorization Act (NDAA) for Fiscal Year (FY) 2017, but has not addressed five requirements intended to support cross-functional teams and promote department-wide collaboration (see table).
For two of these requirements, DOD has missed the statutory deadline by more than a year. GAO previously recommended that DOD take actions to improve its implementation of section 911, and DOD reported it is doing so, such as revising its draft cross-functional team guidance to address statutory requirements. Fully implementing GAO's prior recommendations and the remaining statutory requirements would better position DOD to effectively implement its cross-functional teams and advance a collaborative culture, as required by the NDAA.
Nine cross-functional teams are driving DOD's enterprise business reform efforts under section 921 of the FY 2019 NDAA, but the teams' progress has been uneven. As of September 2018, DOD reported that these nine teams were pursuing a total of 135 business reform initiatives. However, 104 of these initiatives have not reached the implementation phase. A key challenge facing the teams is that some lack resources to fully implement their approved initiatives. For example, DOD officials stated that the department did not fulfill four of nine funding requests from the teams in fiscal year 2018 to implement their initiatives. As of September 2018, DOD officials estimated that the teams need about $6.7 billion to implement their initiatives from FYs 2018 through 2024, but DOD has not identified sources for this funding. GAO's prior work on efficiency initiatives found that up-front investments may be required to realize long-term savings. In addition, GAO's prior work on leading practices for implementing effective cross-functional teams highlights the importance of providing teams with access to resources and having well-defined team operations with established rules and procedures. However, DOD has not established a process for identifying and prioritizing available funding for implementing the teams' initiatives. Without such a process, DOD and the teams may not be able to adequately plan for and execute their reform initiatives.
What GAO Recommends
GAO recommends that DOD establish a process to identify and prioritize funding for implementing its cross-functional teams' business reform initiatives. DOD concurred with this recommendation. |
Background
The attrition among VHA physicians has been of particular concern given that the Health Resources and Services Administration (HRSA) anticipates that by 2025 the national demand for physician services will exceed supply. HRSA’s Office of Rural Health Policy reported, in 2017, that physician shortages were exacerbated in rural areas, where communities struggle to attract and keep well-trained providers. This difficulty has posed a particular challenge for VHA, as approximately one in four VAMCs is located in a rural area.
Most physicians providing care at VAMCs are employed by VHA. VHA also supplements the capacity of its employed physician staff by acquiring additional physician services through fee-basis arrangements or contracts. Under fee-basis arrangements, providers are paid a pre-agreed-upon amount for each service provided. Under contracts, physician services may be obtained on a short-term basis; for example, through sole-source contracts with academic affiliates. VAMCs may also use physicians who volunteer their time, who are referred to as work-without-compensation providers.
In addition to VHA-employed, contract, and fee-basis physicians, VAMCs often supplement their capacity by using physician trainees, who include medical residents and advanced fellows. In 2016, 135 of the 170 VAMCs had active physician training programs. According to VHA officials, there were 43,768 medical residents who trained at a VAMC in 2016. VHA has been expanding its physician training program, as directed by the Veterans Access, Choice, and Accountability Act of 2014, as amended. In 2017, VHA added 175 physician trainee positions across VAMCs nationwide, including 3 VAMCs that did not have physician trainees prior to this expansion. VHA’s objective is to add 953 additional physician trainee positions to its VAMCs by 2025 in order to improve access and hire additional physicians. Further, VHA officials told us they want to continue to add new positions that would eventually allow all VAMCs access to physician trainees.
VHA Lacked Information on the Total Number of Mission-Critical Physicians Who Provided Care at VAMCs and Does Not Plan to Collect This Information
In our October 2017 report, we found that VHA’s data on physicians who provided care at VAMCs were incomplete. Specifically, we found that VHA had data on the number of mission-critical physicians it employed (more than 11,000) and who provided services on a fee-basis (about 2,800), but lacked data on the number of contract physicians and physician trainees. As a result, VHA did not have data on the extent to which VAMCs used these arrangements and thus, underestimated its physician use overall. Therefore, VHA was unable to ensure that its workforce planning processes sufficiently addressed any gaps in staffing.
All six VAMCs included in our review used at least one type of arrangement other than employment for physicians, and five of the six used contract physicians or physician trainees. (See fig. 1.) On average, contract and fee-basis physicians made up 5 to 40 percent of the physicians in a given mission-critical physician occupation at each VAMC in our review. For example, officials from a large, highly complex VAMC told us that, in March 2017, they augmented the 86 employed primary care physicians with eight contract and three fee-basis physicians, which represented about 16 percent of their primary care physician workforce. Further, this VAMC also had about 64 primary care physician trainees providing certain medical services under the supervision of a senior physician.
During the course of our work for the October 2017 report, VHA officials told us that its personnel databases were designed to manage VHA's payroll systems, but that these databases did not contain information on contract physicians or physician trainees. VHA officials told us they were working to include information on physician trainees in a new human resources (HR) database—HR Smart—which, at the time of our review, was scheduled to be implemented in 2017. However, these officials were not aware of plans to add information to the database on contract physicians. Instead, VAMC leaders used locally devised methods to identify and track contract physicians, fee-basis physicians, and physician trainees. For example, one VAMC in our October 2017 review used a locally maintained spreadsheet to track its physicians under arrangements other than employment, while another VAMC asked department leaders to identify how many such physicians provided care within their respective departments. At each of the six VAMCs in our review, we found that department leaders were generally knowledgeable about the total number of physicians that provided care within the departments they managed. However, this locally maintained information was not readily accessible by VHA officials.
To address the limitations in VHA’s data, we recommended in our October 2017 report that VHA develop and implement a process to accurately count all physicians providing care at each of its VAMCs, including physicians not employed by VHA. VHA did not concur with this recommendation, stating that it uses other tools for workforce planning. However, a VHA official acknowledged that data sources used for workforce planning may not include all types of contract physicians or work-without-compensation physicians.
As we discussed in our prior report, implementing such a systematic process would eliminate the need for individual VAMCs to use their own mechanisms, such as a locally developed and maintained spreadsheet to track its physician workforce, as was done by one VAMC in our prior review. Further, local mechanisms may not be readily accessible to VHA officials engaged in workforce planning, resulting in incomplete information for decision-making purposes.
Since our report, VHA officials told us that they have completed implementation of HR Smart, which provides the capability to track every position with a unique position number, and each employee’s full employment history. However, VHA officials told us they do not plan to enhance the capability of HR Smart to track contractors.
We continue to believe that having a systematic and consistent process to account for all physicians who provide care across VAMCs, including physicians not employed by VHA, would help address concerns that VHA is unable to identify all physicians providing care at its VAMCs.
VHA Has Begun to Develop Guidance for Determining Its Staffing Needs for All Physicians
In our October 2017 report, we found that VHA gave responsibility for determining staffing needs to its VAMCs and provided its facilities with guidance, through policies and directives, on how to determine the number of physicians and support staff needed for some physician occupations. Specifically, VHA provided this guidance for primary care, mental health, and emergency medicine, but lacked sufficient guidance for its medical and surgical specialties, including occupations such as gastroenterology and orthopedic surgery. For these occupations, VHA provided guidance on the minimum number of physicians, but did not provide information on how to determine appropriate staffing levels for physicians or support staff based on the need for care.
Specifically, the VHA guidance available at the time set a minimum requirement that VAMCs of a certain complexity level have at least one gastroenterologist and one orthopedic surgeon that is available within 15 minutes by phone or 60 minutes in person 24 hours a day, 7 days a week. VHA guidance did not include information on how to use data, such as workload data, to manage the demand for care or help inform staffing levels for these physician occupations beyond this minimum requirement. Officials from four of the six VAMCs we reviewed for our October 2017 report told us that because they lacked (1) guidance on how to determine the number of physicians and support staff needed, and (2) data on how their staffing levels compared with those of similar VAMCs, they were sometimes unsure whether their staffing levels were adequate.
In our October 2017 report, we discussed that VHA had previously established, in 2016, a specialty physician staffing workgroup that examined the relationships between staffing levels, provider workload and productivity, veterans’ access, and cost across VAMCs for its medical and surgical specialties, including gastroenterology and orthopedic surgery. This group’s work culminated in a January 2017 report that found VHA was unable to assess and report on the staffing at each VAMC, as required by the Veterans Access, Choice, and Accountability Act of 2014, because a staffing model for specialty care had not been established and applied across VAMCs. This report made a number of recommendations, including that VHA provide guidance to its VAMCs on what level of staffing is appropriate for its mission-critical physician occupations. However, as we noted in our October 2017 report, VHA leadership had not yet taken steps to develop such staffing guidance. We reported that, according to a VHA official, other priorities were taking precedence and continued work in this area had not yet been approved by VHA leadership. Although VHA officials agreed that further steps should be taken, they did not indicate when these would occur. In our report, we concluded that until VHA issues guidance on staffing levels for certain physician occupations that provide specialty care to veterans, there would continue to be ambiguity for VAMCs on how to determine appropriate staffing levels.
To address this, we recommended that VHA develop and issue guidance to VAMCs on determining appropriate staffing levels for all mission-critical physician occupations. VHA concurred with our recommendation and reported it would evaluate and develop staffing guidance for its medical and surgical specialties.
Since our report, VHA officials told us that on November 27, 2017, the Executive-in-Charge for VHA signed the specialty care workgroup charter. The primary goal of the workgroup is to develop a specialty care staffing model that will include staffing information for all specialty care. VHA anticipates completing its work and issuing staffing guidance by December 2018.
VHA Used Multiple Strategies for Physician Recruitment and Retention, but Has Not Comprehensively Evaluated Them to Assess Effectiveness
In our October 2017 report, we found that VHA used various strategies to recruit and retain its physician workforce, including providing assistance recruiting for mission-critical physician occupations through the National Recruitment Program; policies and guidance; financial incentives to enhance hiring and retention offers; and a national physician training program. (See table 1.)
In our October 2017 report, we found that VHA faced challenges using its strategies for recruiting and retaining physicians. For example, according to VHA officials, budget shortfalls in the Education Debt Reduction Program—which reimburses qualifying education loan debt for employees, including physicians, in hard-to-recruit positions—reduced VAMCs’ ability to offer this recruitment incentive to physician candidates. In addition, the relatively small number of physician recruiters in VHA’s National Recruitment Program—19 recruiters for the 170 VAMCs at the time of our report—limited their ability to understand the particular nuances of some markets, particularly in rural areas.
Further, despite VHA’s large and expanding graduate medical training program, VAMCs experienced difficulties hiring physicians who received training through its residency and fellowship programs. VHA did not track the number of physician trainees who were hired following graduation, but officials told us that the number was small in comparison to the almost 44,000 physician trainees educated at VAMCs each year.
We found that VAMCs faced challenges hiring physician trainees, in part, because VHA did not share information on graduating physician trainees for recruitment purposes with VAMCs across the system. VHA officials told us that recruitment efforts could be improved by developing and maintaining a database of physician trainees, but said that VHA had no such database. According to VHA officials, information sharing could help both VAMCs in geographically remote locations that do not have a residency program and help identify trainees who want to work at VHA after graduating, but who received no offers from the VAMC they trained at due to the lack of vacancies in their specialty.
We also reported in October 2017 that VHA did not have complete information on whether its recruitment and retention strategies were meeting its needs. VHA had gathered feedback on barriers VAMCs face when offering financial incentives to physician candidates through its Education Debt Reduction Program and created a workgroup to look at its overall use of physician retention strategies, although it had not completed a comprehensive review of its recruitment and retention strategies to identify any areas for improvement. As a result, VHA did not have complete information on the underlying causes of the difficulties VAMCs faced or whether its recruitment and retention strategies met its objective of having a robust physician workforce to meet the health care needs of veterans.
To address these issues, we recommended that VHA (1) establish a system-wide method to share information about physician trainees to help fill vacancies across VAMCs, and (2) conduct a comprehensive, system-wide evaluation of its physician recruitment and retention efforts, and establish an ongoing monitoring program. VHA concurred with our recommendations, and reported it planned to enhance its personnel database, HR Smart, to include physician trainees. Additionally, VHA said it planned to complete a comprehensive, system-wide evaluation of the physician recruitment and retention strategies.
Since our report, VHA reported taking some steps to address these recommendations. Specifically, officials told us they are working to include information in the newly implemented HR Smart database on work-without-compensation employees, such as physician trainees, and anticipate conducting pilot projects at various sites before fully implementing this capability by September 30, 2019. Additionally, officials said that they are in the process of completing a review of physician recruitment and retention incentives. Furthermore, according to VHA officials, beginning in October 2017, VHA's Office of Workforce Management and Consulting partnered with the Partnered Evidence-based Policy Resource Center—an internal VHA resource center—to evaluate and recommend a systematic approach for allocating workforce management resources, such as the Education Debt Reduction Program. VHA expects to complete its efforts by September 2018.
Chairman Dunn, Ranking Member Brownley, and Members of the Subcommittee, this concludes my statement. I would be pleased to respond to any questions you may have.
GAO Contact and Staff Acknowledgments
For further information about this statement, please contact Debra A. Draper at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Key contributors to this statement were Janina Austin (Assistant Director), Sarah Harvey (Analyst-in-Charge), Jennie Apter, Frederick Caison, Alexander Cattran, and Krister Friday.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study
As the demand for VHA's services grows—due, in part, to increasing demand from servicemembers returning from the United States' military operations in Afghanistan and Iraq and the growing needs of an aging veteran population—attracting, hiring, and retaining top talent is critical to VHA's mission to provide high quality and timely care for the nation's veterans.
Physicians—who provide and supervise a broad range of care including primary and specialty care—serve an integral role in VHA's mission. Certain physician types are consistently among the most difficult to recruit and retain, and are thus considered mission-critical by VHA.
Over the past two decades, GAO and others have expressed concern about VHA's ability to ensure that it has the appropriate clinical workforce, including physicians, to meet the current and future needs of veterans.
This statement is based on GAO's October 2017 report and examines (1) VHA information on how many mission-critical physicians provided care at VAMCs, (2) VHA guidance for determining its physician staffing needs, and (3) the strategies VHA used to support the recruitment and retention of physicians at VAMCs, and the extent to which it has evaluated these strategies to determine their effectiveness.
For this statement, GAO updated the information from its October 2017 report and obtained information from VHA officials in June 2018 about steps they have taken to implement the 2017 recommendations.
What GAO Found
The Department of Veterans Affairs (VA) Veterans Health Administration (VHA) continues to face challenges related to physician staffing, recruitment, and retention, though it has begun work to implement recommendations made in GAO's October 2017 report. Specifically, GAO's report found the following:
VHA's data on the number of physicians that provided care at VA medical centers (VAMC) were incomplete. GAO found that data were incomplete because they did not include data on the number of contract physicians and contained only limited data on the number of physician trainees—two types of physicians that augment the care provided by physicians employed by VHA. Thus, VHA data underestimated the total number of physicians providing care in its medical centers, leaving it unable to ensure that its workforce planning processes sufficiently addressed gaps in staffing. GAO recommended that VHA implement a process to accurately count all its physicians. VHA did not concur with this recommendation, stating that it used other tools for workforce planning. VHA has since implemented a new human resources (HR) database—HR Smart—that has the capability to track each position at its VAMCs. However, VHA officials told GAO they do not plan to include information on physician contractors in this database.
VHA provided VAMCs with guidance on how to determine the number of physicians and support staff needed for some physician occupations, although it lacked sufficient guidance for its medical and surgical specialties. GAO recommended that VHA issue guidance to VAMCs on determining appropriate staffing levels for all physicians. VHA concurred and reported it would develop staffing guidance for its medical and surgical specialties. VHA officials told GAO that VHA signed a specialty care workgroup charter on November 27, 2017; the primary goal of the workgroup was to develop a specialty care staffing model that would include staffing information for all specialty care. VHA anticipates completing its work and issuing staffing guidance by December 2018.
VHA used various strategies to recruit and retain its physician workforce, but had not comprehensively evaluated them to assess effectiveness. Without such an evaluation, VHA did not have complete information on the underlying causes of the difficulties VAMCs face, or whether its recruitment and retention strategies were meeting physician workforce needs. GAO recommended VHA (1) establish a system-wide method to share information about physician trainees to help fill vacancies across VAMCs and (2) conduct a comprehensive, system-wide evaluation of VAMCs' physician recruitment and retention efforts and establish an ongoing monitoring program. VHA concurred and reported it has since taken steps to address the recommendations. For example, VHA's Office of Workforce Management and Consulting has partnered with its Partnered Evidence-based Policy Resource Center to evaluate and recommend a systematic approach for allocating workforce management resources. In addition, VHA has added the capability to track physician trainees to its HR Smart database. VHA expects to complete its efforts by September 2018 and September 2019, respectively.
Background
Job Corps Eligibility Criteria and Program Services
To be eligible for the Job Corps program, an individual must generally be 16 to 24 years old at the time of enrollment; be low income; and have an additional barrier to education and employment, such as being homeless, a high school dropout, or in foster care. See table 1 for characteristics of students served by Job Corps during program year 2016.
Once enrolled in the program, youth are assigned to a specific Job Corps center, usually one located nearest their home and which offers a job training program of interest. The vast majority of students live at Job Corps centers in a residential setting, while the remaining students commute daily from their homes to their respective centers. This residential structure is unique among federal youth programs and enables Job Corps to provide a comprehensive array of services, including housing, meals, clothing, academic instruction, and job training. In program year 2016, about 16,000 students received a high school equivalency and about 28,000 students completed a career technical training program, according to ETA officials.
Job Corps Structure and Operations
ETA administers Job Corps’ 123 centers through its national Office of Job Corps under the leadership of a national director and a field network of six regional offices located in Atlanta, Boston, Chicago, Dallas, Philadelphia, and San Francisco (see fig. 1). Job Corps is operated primarily through contracts, which according to ETA officials, is unique among ETA’s employment and training programs (other such programs are generally operated through grants to states). Among the 123 centers, 98 are operated under contracts with large and small businesses, nonprofit organizations, and Native American tribes. The remaining 25 centers (called Civilian Conservation Centers) are operated by the U.S. Department of Agriculture’s (USDA) Forest Service through an interagency agreement with DOL. Job Corps center contractors and the USDA Forest Service employ center staff who provide program services to students. The President’s fiscal year 2019 budget seeks to end USDA’s role in the program, thereby unifying responsibility under DOL. The Administration reported that it was proposing this action because workforce development is not a core mission of USDA, and the 25 centers it operates are overrepresented in the lowest performing cohort of centers. According to ETA officials, the Office of Job Corps has oversight and monitoring responsibility to ensure that center operators follow Job Corps’ Policy and Requirements Handbook, including the safety and security provisions. Job Corps regional office staff are largely responsible for these duties.
Requirements for Job Corps Centers Related to Incident Reporting
Job Corps’ Policy and Requirements Handbook requires centers to report certain significant incidents to the national Office of Job Corps and to regional offices using SIRS. Centers are required to report numerous categories of incidents, including assaults, alcohol and drug-related incidents, and serious illnesses and injuries (see appendix II for definitions of these categories of incidents). Within the Policy and Requirements Handbook, ETA establishes student standards of conduct that specify actions centers must take in response to certain incidents. In some cases, the incident categories in SIRS are related to the specific infractions defined in the Policy and Requirements Handbook, which are classified according to their level of severity. Level I infractions are the most serious, and includes infractions such as arrest for a felony or violent misdemeanor or possession of a weapon, and are required to be reported in SIRS. Level II includes infractions such as possession of a potentially dangerous item like a box cutter, or arrest for a non-violent misdemeanor. The majority of these infractions are required to be reported in SIRS. Minor infractions—the lowest level—include failure to follow center rules, and are not required to be reported in SIRS.
Centers must report incidents involving both Job Corps students and staff, and incidents that occur onsite at centers as well as those that occur at offsite locations. According to ETA officials, the agency and its center operators must take steps to protect the safety and security of Job Corps students when students are under Job Corps supervision. Students are under Job Corps supervision when they are onsite at Job Corps centers and when they are offsite and engaged in center-sponsored activities, such as work-based learning or community service. According to ETA officials, the agency and its contractors are not responsible for protecting the safety and security of Job Corps students when students are offsite and not under Job Corps supervision, such as when students are at home on leave. However, when offsite safety and security incidents of any type occur, Job Corps center operators are responsible for enforcing the student conduct policy. For example, if a student is arrested for a felony offsite while not under Job Corps supervision, the arrest may result in a Level I infraction and dismissal from the program.
Job Corps Student Satisfaction Survey
Since 2002, ETA used its student satisfaction survey to periodically obtain views from enrolled Job Corps students on various aspects of the program, including career development services, interactions between students and staff, access to alcohol and drugs, and overall satisfaction with the program. The survey's 49 questions remained the same over time and included 12 questions on students' perceptions of safety and security at centers.
ETA used the responses to the 12 safety-related survey questions to calculate a center safety rating, which represented the percentage of Job Corps students who reported feeling safe at each center, as well as a national safety rating, which represented the percentage of Job Corps students who reported feeling safe nationwide. ETA officials said they used these ratings to assess students’ perceptions of safety at individual centers and nationwide, to monitor and evaluate center operators, and to determine whether ETA needed to take action to better address students’ safety and security concerns. In 2018, ETA will pilot a stand-alone survey for safety related topics and remove the safety questions from the student satisfaction survey.
Job Corps Centers Reported Nearly 14,000 Incidents of Various Types during Program Year 2016, Which Mainly Occurred Onsite and Involved Recently Enrolled Males under Age 20
Almost Half of the Reported Onsite and Offsite Incidents Involved Drugs or Assaults
Our analysis of ETA’s data from the Significant Incident Reporting System (SIRS) showed that Job Corps centers reported 13,673 safety and security incidents involving students, including those that occurred both onsite and offsite, in program year 2016. During this time period (July 1, 2016, through June 30, 2017), approximately 79,000 students were served by the program, according to ETA officials. Drug-related incidents (29 percent) and assaults (19 percent) accounted for 48 percent of all reported incidents involving students. The remaining 52 percent of reported incidents involving students included breaches of security and safety (12 percent), alcohol-related incidents (6 percent), serious illness and injury (6 percent), theft or damage to property (5 percent), danger to self or others (5 percent), and all other types of incidents (18 percent) (see fig. 2). According to ETA officials, about half of the 3,926 drug- related incidents are due to positive drug test results among students that are administered drug tests about 40 days after entering the program.
We found that about 20 percent of reported onsite and offsite incidents in program year 2016 were of a violent nature, which we define as homicides, sexual assaults, and assaults. There were two reported homicide incidents in program year 2016 and both occurred while students were offsite and not under Job Corps supervision. Also, centers reported 177 sexual assaults and 2,593 assaults involving students during program year 2016. For each reported sexual assault and assault, SIRS provides an additional description of the incident (see table 2).
In our June 2017 testimony, we stated that 49,836 onsite and offsite safety and security incidents of various types were reported by Job Corps centers between January 1, 2007, and June 30, 2016, based on our preliminary analysis of ETA’s SIRS data. We cannot compare our analysis of safety and security incidents in our June 2017 testimony to the analysis contained in this report for program year 2016 due to a policy change by ETA beginning July 1, 2016, which affected the categorization and number of reportable incidents. Specifically, ETA changed the way some incidents are defined, and required that some incidents be reported in SIRS that previously had no such requirement. Anecdotally, officials from one ETA regional office and two Job Corps centers that we visited said that the number of reported incidents has increased since July 1, 2016, due to these changes. In its December 2017 report, the DOL OIG compared the number of safety and security incidents reported to the OIG for the same 8-month periods in 2016 and 2017 and found an increase of 134 percent. According to the DOL OIG, this increase is likely due to more accurate incident reporting as a result of the recent policy change. In addition, the DOL OIG said an actual increase in incidents is also possible.
Most Reported Incidents Occurred Onsite, but Arrests and Deaths Most Frequently Occurred Offsite While Students Were Not Under Job Corps Supervision
Our analysis of SIRS data found that in program year 2016, 90 percent of the 13,673 reported safety and security incidents involving students occurred onsite at Job Corps centers, and 10 percent occurred at offsite locations (see fig. 3). For example, 99 percent of drug-related incidents, 96 percent of assault incidents, and 84 percent of alcohol-related incidents occurred onsite. While most reported incidents occurred onsite, our analysis showed that the majority of reported arrests, deaths, and motor vehicle accidents occurred offsite. For example, of the 21 student deaths, 18 occurred at offsite locations and 3 occurred onsite. In our June 2017 testimony, we reported that from January 1, 2007, through June 30, 2016, 76 percent of the reported safety and security incidents occurred onsite at Job Corps centers, and 24 percent occurred at offsite locations based on our preliminary analysis of ETA's SIRS data. However, as previously noted, that analysis is not comparable to the analysis in this report for program year 2016 due to ETA's July 1, 2016, policy change that impacted the categorization and number of reportable incidents.
Of the 13,673 total incidents reported in program year 2016, we analyzed the 1,406 that were reported to have taken place offsite to determine if the students involved were on duty (i.e., under Job Corps supervision) or off duty (i.e., not under Job Corps supervision). We found that for offsite incidents, similar percentages of student victims and perpetrators were on duty and off duty. Specifically, we found that 50 percent of student victims were on duty, 44 percent were off duty, and we were unable to determine the duty status of 6 percent. For student perpetrators, we found that 45 percent of students were on duty, 45 percent were off duty, and we were unable to determine the duty status of 10 percent. Some types of reported incidents occurred more frequently when students were offsite and off duty. For example, of the reported arrest incidents that occurred offsite, 76 percent of student perpetrators were off duty. Of the reported death-related incidents that occurred offsite, student duty status was reported as off duty for 16 of 18 incidents.
We were unable to determine the duty status for all students involved in offsite incidents due to inconsistencies in ETA’s data. Of the 1,406 offsite incidents reported in SIRS, there were 178 instances in which a student’s duty status location conflicted with the incident location. For example, the student’s duty status was listed as onsite and on duty, but the incident location was listed as offsite. We asked ETA officials why these inconsistencies existed and they were unable to explain all instances in which these inconsistencies occurred. ETA officials did state, however, that these inconsistences can sometimes occur when centers enter information in SIRS based on the student’s duty status at the time the incident report is completed instead of the student’s duty status at the time the incident occurred. Due to this data limitation, we were unable to determine if the 178 students involved in those incidents were on duty or off duty.
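To illustrate the kind of cross-field consistency check at issue, the following is a minimal sketch in Python; the record structure and field names are our own illustrative assumptions, not the actual SIRS schema.

```python
# A minimal sketch, using hypothetical field names rather than the actual
# SIRS schema, of flagging records in which a student's recorded duty
# status conflicts with the reported incident location.
incidents = [
    {"id": 1, "incident_location": "offsite", "duty_status": "onsite, on duty"},
    {"id": 2, "incident_location": "offsite", "duty_status": "offsite, off duty"},
]

# An offsite incident paired with an onsite duty status is internally
# inconsistent, so the student's duty status cannot be determined.
conflicts = [r for r in incidents
             if r["incident_location"] == "offsite"
             and r["duty_status"].startswith("onsite")]

print(f"{len(conflicts)} inconsistent record(s): {[r['id'] for r in conflicts]}")
# 1 inconsistent record(s): [1]
```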
Student Victims and Perpetrators Most Often Were Recently Enrolled Males under Age 20, Reflective of the Job Corps Population
We analyzed SIRS data to determine the characteristics of students involved in reported safety and security incidents and found that about 17,000 students were reported as victims or perpetrators of all onsite and offsite incidents in program year 2016. The total number of students reported as victims or perpetrators is 22 percent of the students served in program year 2016. The number of student victims and perpetrators varied across incident types (see fig. 4).
In program year 2016, we found that about 5,000 students (6 percent of students served) were reported as victims of various types of onsite and offsite incidents. We separately examined the gender, age, and enrollment time of reported student victims and found that for all reported incidents the majority of student victims were male, under age 20, and enrolled in Job Corps for less than 4 months (see fig. 5). These characteristics are somewhat similar to the overall Job Corps student population, which is primarily male and under age 20, as previously noted. For example, 65 percent of reported assault victims and 73 percent of reported theft victims were male. However, the number of female victims exceeded the number of male victims within some reported incident categories, such as sexual assault, inappropriate sexual behavior, and missing persons. Students under age 20 were victims of 67 percent of reported assault incidents and 63 percent of danger to self or others incidents. According to ETA officials, 18 percent of students served in program year 2016 were enrolled for less than 4 months; however, across all reported incidents 56 percent of student victims were enrolled for less than 4 months. For example, about 60 percent of student victims of reported assault and danger to self or other incidents were enrolled in Job Corps for less than 4 months.
Our analysis of SIRS data shows that about 13,000 students (17 percent of students served) were reported as perpetrators of various types of onsite and offsite incidents in program year 2016. The most commonly reported incidents—drug-related and assaults—also had the highest numbers of student perpetrators. We found that 6 percent and 5 percent of students served in program year 2016 were perpetrators of reported drug-related and assault incidents, respectively. Similar to our analysis of student victims, we separately examined student characteristics and found that the majority of reported student perpetrators of all reported incidents were male, under age 20, and enrolled in Job Corps for less than 4 months (see fig. 6).
Students Generally Reported Feeling Safe; ETA Plans to Create a New, Expanded Survey
Most Students Reported Feeling Safe, but Fewer Reported Feeling Safe on Selected Questions
Our analysis of ETA’s student satisfaction survey data from program year 2016 showed that while students generally reported feeling safe at Job Corps centers, a smaller proportion reported feeling safe in certain situations. ETA considers students to feel safe if they provide certain responses to each of the 12 safety-related survey questions, some of which are phrased as statements. For example, if a student provided a response of “mostly false” or “very false” to the statement “I thought about leaving Job Corps because of a personal safety concern,” that student would be counted as feeling safe on that survey question. On 6 of the 12 safety-related survey questions in program year 2016, at least 70 percent of responding students indicated that they felt safe (see table 3). For example, 74 percent of students responded that they did not ever or in the last month carry a weapon, and 83 percent of students responded that it was very or mostly true that a student would be terminated from Job Corps for having a weapon at the center. These are responses that ETA considered to indicate feeling safe. At the two centers we visited, students that we interviewed said that they felt safe onsite at their center. For example, students at one center said that they felt safe because absolutely no weapons, fighting, or drugs were allowed at the center.
A smaller number of students reported feeling safe on questions that dealt with hearing threats or hearing things from other students that made them feel unimportant. For example, 36 percent of students reported they had not ever or in the last month heard a student threaten another student at the center, which is considered safe according to ETA policy. Meanwhile, 49 percent reported that they had heard a student threaten another student at least once in the last month, and ETA considered these responses to indicate that students felt unsafe. Another 15 percent chose “don’t know / does not apply.” On another question, 53 percent of students reported that other students had not ever or in the last month said things that made them feel like they were not important, which ETA considered as feeling safe. Yet 30 percent reported that others made them feel unimportant at least once in the last month—which ETA considered as feeling unsafe—and 17 percent chose “don’t know / does not apply.”
In response to a question about the student conduct policy, 35 percent of students indicated that the policy was not applied equally to all students. At the two centers we visited, students that we interviewed had varying views on applying the student conduct policy. Students from one center said that staff have applied the policy in a fair way. Yet at another center, students told us that they have occasionally perceived that staff have not applied the student conduct policy fairly. They mentioned that they were aware of favoritism in a few recent incidents when staff applied the policy’s disciplinary consequences for certain students but not others. For example, they said that a student they perceived as the perpetrator remained in Job Corps while a student they perceived as innocent was dismissed.
Our June 2017 testimony contained similar observations about students’ perceptions of their safety, with students generally reporting that they felt safe at their Job Corps centers. For example, most students reported feeling safe because a student found with a weapon at the center would be terminated. In that testimony, we also noted that students reported feeling less safe on such questions as hearing threats or applying the student conduct policy.
In addition to the 12 safety-related questions, we examined data on the 2 questions about access to alcohol or drugs, and found that almost two-thirds of survey respondents said that it was mostly or very false that they could access alcohol or drugs at their Job Corps center. Although a large number of reported incidents in program year 2016 involved drugs or alcohol, less than 15 percent of survey respondents said that it was mostly or very true that they could access alcohol or drugs at their Job Corps center.
National Measures of Safety and Security Have Been Developed
Based on students’ responses to the 12 safety-related questions, ETA determined that 88 percent of students indicated that they felt safe in program year 2016. ETA calculated its national measure of safety— referred to as a safety rating—to summarize and track students’ perceptions of their safety and to determine the need for additional action, as noted previously. Similarly, it calculated a safety measure for each center.
However, we calculated a national measure differently and found that an average of 73 percent of students reported feeling safe in program year 2016. Our national measure reflected the average of how safe each student felt on the 12 safety-related survey questions. We estimated that one key difference accounted for about 11 of the 15 percentage points between our and ETA’s measure. (See table 7 in appendix I.) Specifically, we calculated our measure based on a numeric average for each student without rounding. For example, if a student answered all 12 safety questions with 6 responses that he felt safe and another 6 that he felt unsafe, we counted this student as half safe (0.5). Meanwhile, ETA rounded the average to either safe or unsafe, so that ETA counted a student with 6 safe responses and 6 unsafe responses as feeling safe.
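The following minimal sketch illustrates how the two calculation approaches can diverge; the sample response counts are hypothetical, and the code is our illustration of the methodological difference, not ETA's or GAO's actual calculation.

```python
# Hypothetical counts of "safe" responses (out of 12 safety questions)
# for five illustrative students -- not actual survey data.
safe_counts = [6, 12, 9, 3, 12]

# Unrounded approach: average each student's share of safe responses.
# A student with 6 of 12 safe responses contributes 0.5 ("half safe").
unrounded = sum(n / 12 for n in safe_counts) / len(safe_counts)

# Rounded-per-student approach: classify each student as safe (1) or
# unsafe (0) first, rounding 6 of 12 up to "safe," then take the share
# of students classified as safe.
rounded = sum(1 if n / 12 >= 0.5 else 0 for n in safe_counts) / len(safe_counts)

print(f"Unrounded average: {unrounded:.0%}")       # 70%
print(f"Rounded-per-student rate: {rounded:.0%}")  # 80%
```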
In addition to differences in calculations, we developed our own national measure of safety because it is important to assess and track students’ perceptions for the program as a whole, as ETA has noted. Also, a national measure facilitates analysis of groups of students, such as male or female students or younger or older students, as described below.
We examined whether our national measure differed by age, gender, time in program, center size, or operator type and found statistically significant and meaningful differences in our national measure by students’ length of time in the program. In particular, an average of 78 percent of students in the program for less than 4 months responded that they felt safe, compared to an average of 71 percent for students in the program for at least 4 months. According to ETA officials, differences in responses based on length of time in the program may relate to new students being less aware about life at the center because they begin the program with other newly arrived students for up to 2 months. For example, ETA officials said that new students may live in a dormitory specifically for new students. Thus, they are not yet fully integrated into the larger student body. Although differences were also statistically significant between age groups, center size, and operator type, such differences were not meaningful in a practical manner (i.e., around 3 percentage points or less). Differences in our national measure by gender were not statistically significant.
When we analyzed the survey's separate question about overall satisfaction with Job Corps, we found that students who reported they were satisfied with the Job Corps program responded that they felt safer than students who were not satisfied. In program year 2016, about two-thirds of students said it was very or mostly true that they would recommend Job Corps to a friend, which ETA uses to gauge overall satisfaction with the program. Of the 65 percent of students who would recommend Job Corps to a friend, 79 percent said they felt safe. Of the 11 percent of students who would not recommend Job Corps to a friend, 52 percent felt safe.
ETA’s New Web-based Survey Is Designed to Be More Timely and Detailed
ETA officials said that the agency is creating a new expanded safety survey to improve upon the prior survey. With Job Corps’ heightened attention to safety and security, the new survey—the Student Safety Assessment—is focused solely on safety and security issues and is designed to provide more timely and more detailed information.
More timely information. ETA plans to administer the new safety survey monthly to a random sample of students rather than twice per year to all enrolled students. Also, it will be web-based, rather than the current paper-based survey. As a result, ETA officials said that they will receive more timely information from students because it will take less time to administer the survey and analyze the responses.
More detailed information. The number of questions about center safety will increase from 12 to about 50—pending finalization of the survey—which is about the same number of questions on the current student satisfaction survey. For example, the new questions will ask about sexual assaults and harassment or the types of drugs bought or used at the center, which were not topics covered by the prior survey.
ETA continues to work with its contractor with survey expertise to develop, test, and administer the new survey in 2018, according to ETA officials. To develop the new survey, ETA and its contractor have considered, incorporated, and revised questions from other existing surveys. For example, they have drawn from safety surveys of teenage students and postsecondary students. ETA plans to continue developing and refining the survey and its administration in 2018, including conducting monthly pilots from January to June 2018, assessing response rates, and developing a new way to calculate national and center-level safety measures. Additionally, ETA officials said that, in 2018, they will seek to obtain comments and approval on the survey from the Office of Management and Budget. ETA officials told us that they plan to administer the new survey nationally by January 2019. As ETA refines and administers this new survey, officials told us they plan to develop a new way to measure student safety based on the more detailed survey.
ETA Initiated Multiple Actions to Improve Center Safety and Security, but the New Monitoring Strategy Was Implemented Inconsistently and ETA Lacks a Comprehensive Plan
ETA Initiated Multiple Actions to Improve Center Safety and Security
In 2014, ETA launched multiple actions to improve safety and security at Job Corps centers in response to DOL OIG recommendations (see table 4). For example, in 2015 the DOL OIG found ETA's oversight of Job Corps centers ineffective, in part, because ETA's student conduct policy excluded some violent offenses. As a result, ETA revised its student conduct policy by elevating several infractions previously classified as Level II to Level I (the most severe) and by adding several new categories of reportable incidents. Under the revised student conduct policy, assault, a Level I infraction, now includes fighting, which was previously a Level II infraction. In addition, the DOL OIG found that ETA did not monitor centers regularly enough to ensure center consistency in administering Job Corps disciplinary policies. In response, ETA implemented a risk-based monitoring strategy that identifies potential safety and security issues before they occur.
Staff from five ETA regional offices and at one Job Corps center we visited said that ETA’s actions overall helped to improve center safety and security. For example, staff from five regional offices said that the changes to the student conduct policy that were implemented in July 2016 clearly describe the penalties for infractions and eliminate grey areas that previously allowed center staff to use their professional judgement. Staff from four regional offices also said these changes resulted in tradeoffs that reduced center staff discretion in imposing penalties. In addition, at one center we visited, the Director of Safety and Security told us he updated the center’s security-related standard operating procedures in response to ETA’s guidance. ETA’s guidance was part of the 2017 updates to the Policy and Requirements Handbook in response to DOL OIG concerns about reporting potentially serious criminal misconduct to law enforcement.
ETA Officials Reported That Some New Actions Improved Center Monitoring, but That Actions Were Inconsistently Implemented and May Create Reporting Overlaps
ETA national officials said that the new risk-based monitoring strategy has improved center monitoring because it has allowed them to more effectively direct resources to areas of greatest need. Officials in five ETA regional offices agreed that the new strategy improved their ability to monitor centers. The new monitoring strategy shifted the focus from addressing problems after they have occurred to a data-driven strategy that tracks center performance and identifies emerging problems. This strategy provides ETA and center operators an opportunity to address problems before they occur, according to ETA national officials. For example, the new monitoring strategy features new tools, including the Risk Management Dashboard. The dashboard is a summary analysis tool that conducts trend analysis using center data and allows regional staff to engage in targeted interventions at centers with potential safety and security concerns. In addition, under the new monitoring strategy, instead of only conducting scheduled monitoring visits to a center at set times, regional staff conduct unannounced visits based on data indicating a decline in center performance or other triggers. See appendix VI for additional information on the new monitoring strategy.
Although the new risk-based monitoring strategy has improved center monitoring, it is not consistently implemented across regional offices, according to ETA national officials. They told us that similar problems identified at centers may be treated with different levels of focus or intensity from one region to another. In addition, national and regional officials told us that regional office staff have relied on professional judgment to determine the appropriate response to centers that may be at risk of noncompliance with safety and security policies, which could lead to inconsistencies. For example, when problems are identified at centers, the type of assessment to conduct is left to regional office staff discretion. As a result, staff in one region may decide that the most comprehensive assessment, the Regional Office Center Assessment, is needed, while another region’s staff would select a targeted assessment, which is more limited in scope. ETA national officials said that although each determination could be justified based on resource constraints and competing priorities, they would like to increase implementation consistency in this area.
To address regional inconsistencies, ETA national and regional office staff said that guidance in the form of standard operating procedures (SOP) would be helpful. These procedures would promote consistency in how policies are interpreted and applied and would help ensure that centers are held to the same standards, according to ETA national officials. For example, SOPs could specify which type of assessment to conduct in response to specific problems identified at centers. Internal control standards state that managers should document in policies each unit’s responsibility for an operational process.
Regional office staff said that they previously had a helpful tool, the Program Assessment Guide, that linked policies in the Policy and Requirements Handbook to the monitoring assessment process. Regional office staff said they used the Program Assessment Guide to prepare for center monitoring visits and that it was a helpful training tool for new staff. Our review of ETA documentation found that the Program Assessment Guide included specific questions to ask center staff about how they meet safety and security requirements and suggested where to look for information to determine center compliance with policies. However, the Program Assessment Guide, which has not been updated since 2013, does not include recent changes to the Policy and Requirements Handbook, such as the updated student conduct policy. ETA national officials told us that limited staffing has made it difficult to update the Program Assessment Guide as frequently as changes are made to the Policy and Requirements Handbook.
In February 2018, ETA national officials told us they plan to issue a variety of SOPs related to monitoring center safety and security issues (see table 5). ETA officials initially said these SOPs would be completed in August or November 2018 and later revised their plans with a goal of completing all SOPs by August 2018. However, in August 2017, ETA officials had told the DOL OIG that these SOPs would be completed in the March to July 2018 timeframe. ETA officials said that a staffing shortage in the Office of Job Corps' Division of Regional Operations and Program Integrity delayed development of the SOPs. This Division—established in 2015 to coordinate regional operations and strengthen communications and quality assurance—includes eight staff positions; however, as of January 2018, only two of those positions were filled. ETA officials said that they have not yet received departmental approval to fill the six vacant positions in the Division.
Given this uncertainty, it is questionable whether ETA’s revised timeframes will be met. Without SOPs or other relevant guidance, ETA cannot ensure that monitoring for center safety and security will be carried out uniformly across the program. As a result, centers may be held to different standards, and the program may not achieve its center safety and security goals.
In addition to inconsistencies in monitoring and a lack of sufficient guidance, staff in all six regional offices told us that components of ETA's risk-based monitoring strategy created reporting overlaps. As part of the new monitoring strategy, regional staff complete additional reports—such as the Risk Management Dashboard Action report and the Corrective Action Tracker—about potential safety and security problems or actual violations found at centers. Some regional staff said the desk monitoring report includes information similar to that in the Risk Management Dashboard and Corrective Action Tracker reports, which regional offices submit to the ETA national office. Staff in one regional office said that they enter the same information about the status of center safety and security violations multiple times on the Corrective Action Tracker because the time between reporting periods is too short to allow for meaningful action to be taken. Staff from four regional offices said completing duplicative reports reduces time that could be used to conduct additional center monitoring, such as onsite visits, or to perform other key duties.
ETA national officials disagreed that overlap exists among monitoring reports. They said that although reports may appear to overlap, the reports are complementary rather than duplicative and are used at different points in the monitoring process (see fig. 7 for an overview of ETA's monitoring process). For example, ETA national staff told us that desk monitoring reports are primarily used by regional staff at the beginning of the monitoring process to identify potential problems and are not substantially reviewed by the national office. ETA national officials also said that the Risk Management Dashboard report is used at the beginning of the monitoring process to identify problems, whereas the Corrective Action Tracker is used later in the process, after violations have been identified and corrective actions have been planned to bring the center back into compliance. In addition, ETA national officials noted that regional staff are not asked to complete all reports every month. For example, regional staff complete a Risk Management Dashboard Action report only for those centers with potential safety and security concerns.
We compared the information included in five monitoring reports—the Center Culture and Safety Assessment, Corrective Action Tracker, Desk Audit, Regional Office Center Assessment, and Risk Management Dashboard Action report—and found opportunities for streamlining. For example, we found that the Center Culture and Safety Assessment, Corrective Action Tracker, and Regional Office Center Assessment all include a narrative description of the violations identified by regional staff, categorized according to the corresponding requirement in the Policy and Requirements Handbook. In addition, ETA regional office staff said the Corrective Action Tracker, a Microsoft Excel spreadsheet, is cumbersome to use and requires them to attach and submit additional documentation within the spreadsheet. ETA national officials agreed that streamlining or automating monitoring tools would be helpful for regional staff, along with additional training to help staff understand the different reports and how to write the required narratives. ETA national officials also told us that they did not systematically review existing reports before creating additional ones for the new risk-based monitoring process. Officials said they have lacked the resources to make some improvements that could reduce the time regional office staff spend on reporting.
Standards for internal control state that managers should identify the organizational level at which information is needed and the degree of specificity required, and should review information needs on an ongoing basis. Streamlining or automating reporting requirements can help centralize documentation relevant to monitoring center safety and security, possibly eliminate seemingly duplicative reporting requirements, and help regional staff manage their workloads.
ETA Lacks a Comprehensive Plan to Link Its Various Efforts to Improve Center Safety and Security
While ETA initiated multiple actions to address various safety and security issues, the agency does not have a comprehensive plan to improve center safety and security. A comprehensive plan describes the organization's long-term goals, its strategy and timelines for achieving those goals, and the measures that will be used to assess its performance in relation to those goals. It can also guide decision-making to achieve desired outcomes, including the priority with which to implement these efforts. ETA officials told us that although they do not have a single document that reflects a formal comprehensive plan, they have employed a comprehensive approach to improve center safety and security. However, in prior work, GAO established the importance of comprehensive planning to ensure agencies effectively execute their missions and are accountable for results.
GAO has also identified leading practices that help ensure organizations achieve their objectives. These leading practices include developing goals, strategies to achieve goals, plans to assess progress toward goals, and leadership and stakeholder involvement in plan development (see table 6).
ETA officials agreed that a comprehensive plan is needed, but told us that limited staff capacity and lack of expertise have hindered their ability to produce a comprehensive plan. In particular, the Division of Regional Operations and Program Integrity would have a role in developing the agency’s comprehensive plan. As previously mentioned, ETA officials told us that they did not have approval to fill the six vacant positions in the Division. With only two of the eight positions filled, ETA officials said that they prioritized correcting the deficiencies identified by the DOL OIG and responding to immediate safety and security concerns. ETA officials told us they plan to produce a comprehensive plan when they have secured the staff to do so. However, at this time, ETA does not have a specific timeframe for producing such a plan.
When the agency begins developing a comprehensive plan, it could consider using the leading practices outlined above and drawing on the expertise of the government-wide Performance Improvement Council. In the absence of a comprehensive plan for safety and security, ETA risks the success of its new initiatives because they are not linked in an overall framework that demonstrates how they are aligned or contribute to goals for improving center safety and security.
Conclusions
It is important that Job Corps students be provided with a safe and secure learning environment. For the last several years, however, numerous incidents have threatened the safety and security of students. ETA has taken steps to improve center safety and security, but its efforts could be strengthened by ensuring regional office staff responsible for monitoring Job Corps centers are better supported with additional guidance and streamlined reporting requirements. Without providing regional staff with this additional support, the full potential of the new monitoring strategy may not be realized. While ETA has implemented several actions to address safety and security concerns, it does not have a comprehensive plan to guide all of its efforts. Without a comprehensive plan, ETA will not be able to assess its overall effectiveness in addressing center safety and security.
Recommendations for Executive Action
We are making the following three recommendations to ETA:

The Assistant Secretary of ETA should ensure the Office of Job Corps expeditiously develops additional guidance, such as SOPs or updates to the Program Assessment Guide, to ensure regional offices consistently implement the risk-based monitoring strategy. (Recommendation 1)
The Assistant Secretary of ETA should ensure the Office of Job Corps streamlines the monitoring reports completed by regional office staff. This streamlining could include automating monitoring tools, consolidating monitoring reports, or taking other appropriate action. (Recommendation 2)
The Assistant Secretary of ETA should ensure the Office of Job Corps commits to a deadline for developing a comprehensive plan for Job Corps center safety and security that aligns with leading planning practices, such as including a mission statement with goals, timelines, and performance measures. This could also include developing the planning expertise within the Office of Job Corps, leveraging planning experts within other agencies in DOL, or seeking out external experts, such as the government-wide Performance Improvement Council. (Recommendation 3)
Agency Comments and Our Evaluation
We provided a draft of this report to DOL for review and comment. We received written comments from DOL, which are reprinted in appendix VII. DOL concurred with our three recommendations. The department stated that it will move forward to develop standard operating procedures for its risk-based monitoring strategy, review and streamline existing monitoring reports, and provide additional training for its regional office staff. The department also plans to develop a formal written comprehensive plan for Job Corps safety and security. DOL also provided technical comments that we have incorporated in the report as appropriate.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and the Secretary of Labor. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII.
Appendix I: Additional Information about Our Methodology
The objectives of this review were to examine (1) what is known about the number and types of reported incidents involving the safety and security of Job Corps students in program year 2016; (2) what is known about student perceptions of safety and security at Job Corps centers, and what steps, if any, is the Employment and Training Administration (ETA) taking to improve the survey used to collect this information; and (3) the extent to which ETA has taken steps to address safety and security at Job Corps centers.
To address all three objectives, we reviewed agency policies and procedures, such as the Job Corps Policy and Requirements Handbook and guidance issued to center operators and ETA staff. In addition, we interviewed ETA officials, including Office of Job Corps national staff, Office of Job Corps regional directors, and staff in all six regional offices. We also conducted site visits at the Woodstock Job Corps Center in Woodstock, Maryland, and the Potomac Job Corps Center in Washington, D.C. We selected these two centers because they were within geographical proximity to Washington, D.C., operated by different contractors, and had over 100 reported safety and security incidents each in program year 2016. At each center, we interviewed the Center Director, Head of Safety and Security, a group of staff members, and a group of students. The staff and students we spoke with were selected by the centers. While these two site visits are not generalizable to all Job Corps centers, they provide examples of student and staff experiences with safety and security.
Analysis of Safety and Security Incidents at Job Corps Centers
To determine the number and types of safety and security incidents reported by Job Corps centers, we analyzed ETA’s incident data for program year 2016 (July 1, 2016 to June 30, 2017). This was the most recent year of Job Corps data available at the time of our review. ETA captures these data in its Significant Incident Reporting System (SIRS). Centers must report incidents involving both Job Corps students and staff, and incidents that occur at onsite and offsite locations. ETA has 20 categories of incidents in SIRS. See appendix II for incident category definitions. The incident categories and definitions in this report are taken directly from ETA documents and represent how ETA categorizes these incidents. We did not assess these categories and definitions.
In this report, we present information on reported safety and security incidents in program year 2016 involving at least one student victim or perpetrator. There were 13,673 reported incidents involving students; additional incidents are reported in SIRS that did not involve students.
When these additional incidents are included, a total of 14,704 safety and security incidents were reported in program year 2016. See appendix III for further information on the total number of incidents reported.
To calculate the number and types of reported incidents, we analyzed the primary incident type that was assigned to each incident reported in SIRS. To provide additional information on reported assaults and sexual assaults, we also analyzed the secondary incident type that was assigned to each reported assault and sexual assault in SIRS. To calculate the total number and types of reported deaths, we analyzed both primary incident types and secondary incident types. In SIRS, deaths can be reported under three different primary incident types (“death”, “assault”, and “danger to self or others”). When an incident is assigned to any of these primary incident types, it may also be assigned a secondary incident type of “homicide,” among other secondary incident types.
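For illustration only, the following minimal sketch (in Python, using hypothetical records and assumed column and value names rather than ETA's actual SIRS schema) shows one plausible reading of this death-counting logic:

```python
import pandas as pd

# Hypothetical SIRS-style records; the column names and values here are
# assumptions for illustration, not ETA's actual schema.
incidents = pd.DataFrame({
    "primary_type":   ["death", "assault", "danger to self or others", "assault"],
    "secondary_type": ["illness", "homicide", "suicide", "fighting"],
})

# An incident counts as a death if its primary type is "death", or if its
# primary type is one of the other two death-related categories and its
# secondary type indicates a death (e.g., "homicide").
is_death = (incidents["primary_type"] == "death") | (
    incidents["primary_type"].isin(["assault", "danger to self or others"])
    & incidents["secondary_type"].isin(["homicide", "suicide"])
)

print(int(is_death.sum()))  # 3 total reported deaths under this reading
```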
In addition, we analyzed the duty status for student victims and perpetrators of offsite incidents. In SIRS, students are described as being either (1) on duty, which means that they are onsite at a center or in a Job Corps supervised offsite activity; or (2) off duty, which means they are offsite and not under Job Corps supervision. For the 1,406 offsite incidents, we were unable to determine student duty status in 178 instances due to inconsistencies in ETA’s data.
This report focuses on reported safety and security incidents in program year 2016, which was from July 1, 2016, to June 30, 2017. On July 1, 2016, ETA implemented policy changes that impacted the categorization and number of reportable safety and security incidents. Accordingly, incident data after July 1, 2016, are not comparable with earlier incident data, including incident data we reported in a June 2017 testimony.
We assessed the reliability of SIRS data by reviewing relevant agency documentation about the data and the system that produced them and interviewing ETA and Department of Labor Office of Inspector General (DOL OIG) officials knowledgeable about the data. We determined the data were sufficiently reliable to report the minimum number of incidents that occurred in program year 2016. It is likely that the actual number of incidents was greater than the number reported in SIRS because the information is reported by Job Corps centers and the DOL OIG previously found instances of underreporting by a non-generalizable sample of center operators. In its March 2017 report, DOL OIG found that 12 of 125 Job Corps centers did not report 34 percent of significant incidents in SIRS from January 1, 2014, through June 30, 2015. ETA has recently taken steps to improve center reporting of significant incidents, such as revising the student conduct policy to more clearly define behavior infractions and conducting system-wide training to ensure uniform understanding and enforcement of student conduct policies. However, DOL OIG officials told us in January 2018 that it is too early to determine if these steps have resolved the DOL OIG’s concerns regarding center underreporting.
Analysis of Student Perceptions of Safety
Survey Response Rate and Reliability
To examine what is known about student perceptions of their safety and security at Job Corps centers, we analyzed students' responses to the student satisfaction survey administered twice during program year 2016: in September 2016 and in March 2017. We analyzed responses from both surveys in program year 2016, which was the most recent year for which data were available. ETA provided centers with the standardized paper-based survey to administer to students in person during designated weeks. The survey of 49 closed-ended questions contained 12 questions that ETA used to assess students' safety. In addition to questions on student safety, the survey includes questions on other topics, including student demographics, overall satisfaction with Job Corps, and access to drugs and alcohol on center.
According to data from ETA, the response rate for each survey was approximately 90 percent of all enrolled students. ETA calculated the response rate by dividing the number of students who responded to the survey by the number of enrolled students during the week of survey administration. Students responded anonymously to the survey.
Because about 90 percent of students provided responses and about 10 percent did not, we analyzed the potential for non-response bias based on several student characteristics. If those who did not respond would have answered relevant safety questions differently from those who did, results calculated solely from respondents may be biased because they exclude parts of the population with different characteristics or views. We compared age, time in program, race, and gender—key characteristics available for the population of enrollees and respondents—to determine areas for potential bias.
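For illustration only, a minimal sketch (in Python, with hypothetical counts and categories) of the kind of respondent-to-population comparison described above:

```python
import pandas as pd

# Hypothetical shares of enrollees and respondents by age group; the actual
# analysis used ETA enrollment records and anonymous survey responses.
population  = pd.Series({"16-19": 0.55, "20-24": 0.45})  # all enrolled students
respondents = pd.Series({"16-19": 0.50, "20-24": 0.50})  # survey respondents

# ETA's response rate: survey respondents divided by students enrolled
# during the survey week (e.g., 45,000 / 50,000 = 0.90, about 90 percent).
response_rate = 45_000 / 50_000

# A negative gap means a group is under-represented among respondents --
# a potential source of bias if that group's safety views differ.
gap = respondents - population
print(response_rate, gap.to_dict())
```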
We determined that the potential for non-response biases existed for particular groups of students: younger students and those enrolled in the program for at least 6 months. For race, the potential for non-response bias was unclear. We found no potential bias for gender. Specifically, we found the following:
Age. Younger students were under-represented, and older students were over-represented among survey respondents. Thus, to the extent that non-responding younger students would have answered safety questions differently than responding younger students, the potential for bias existed in the survey results we analyzed. When we asked ETA officials about such a potential bias, they responded that they did not have evidence or documentation suggesting that age is a predictor of students’ level of perceived safety in the program.
Length of time in the program. Students in the program less than 6 months were over-represented among survey respondents, and students enrolled in the program over 6 months were under-represented in the survey. To the extent that non-responding students would have answered safety questions differently based on length of time enrolled, the potential for bias existed in the survey results we analyzed. When we asked ETA officials about such a potential bias, they noted that new students may be less aware of life at the center because they begin the program with other newly arrived students for up to 2 months and thus are not yet fully integrated into the larger student body. Otherwise, ETA officials did not have evidence or documentation suggesting that length of time in the program correlates with students' level of perceived safety.
Race. It is unclear whether the distribution of race for respondents differs from that of the population. Specifically, ignoring item non-response, about 7 percent of respondents selected "Other." If those respondents were in fact Black/African American, the respondent and population distributions would be similar, since the respondent race percentage would then be close to 50 percent, like the population of enrollees. If respondents who selected "Other" were instead distributed across the race categories, the respondent and population race/ethnicity characteristics would differ, and to the extent that students' responses to safety questions differ by race, this could bias the respondent survey results we analyzed. We analyzed race for purposes of potential non-response bias, and not as part of the statistical tests of survey results described below.
Gender. We found no potential non-response bias for gender because the distribution of gender for respondents was similar to that in the population of students enrolled in the program.
In addition to our non-response bias analysis, we assessed the reliability of the survey data by reviewing relevant agency documentation about the data and the system that produced them, testing data electronically, and interviewing ETA officials knowledgeable about the data. We determined that the student survey data were sufficiently reliable for our purposes.
Calculations of Safety for Individual Survey Questions and for National Measures
For the 12 safety-related survey questions, Job Corps policy specified the responses that the agency counted as safe or unsafe, and we followed that specification. As noted previously, ETA considers students to feel safe if they provided certain responses to each of the 12 safety-related survey questions, some of which are phrased as statements. For example, if a student provided a response of "mostly false" or "very false" to the statement "I thought about leaving Job Corps because of a personal safety concern," that student would be counted as feeling safe on that survey question (see table 3). The percentages that we calculated are not comparable to those in prior publications, including ETA reports, because, for example, ETA revised (i.e., recoded) students' responses in certain circumstances, as explained below in table 7, whereas we used the original responses that students provided and did not revise them. In addition, ETA excluded responses of "don't know / does not apply" from its percentages. As a result, our percentages are not comparable with those reported by ETA.
We also calculated national measures of safety for the program and for particular demographic groups of students (e.g., male, female). Our calculation was similar to ETA's national safety rating in certain respects. For example, like ETA, we used the individual student as the unit of analysis, determining how safe each student felt. As a result, the national measures of GAO and ETA may not equal the average of the 12 questions because, for example, not all students answered every safety question.
However, in other respects, we produced our national measure differently than ETA. Table 7 explains the three ways that our calculation differed from ETA’s.
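For illustration only, the following minimal sketch (in Python, with hypothetical responses already recoded as safe, unsafe, or unanswered) shows the student-level calculation described above; it is not the actual computation used by GAO or ETA:

```python
import numpy as np
import pandas as pd

# Hypothetical recoded answers for three students on 3 of the 12 questions:
# 1.0 = response counted as safe, 0.0 = not safe, NaN = question unanswered.
safe = pd.DataFrame({
    "q1": [1.0, 0.0, 1.0],
    "q2": [1.0, 1.0, np.nan],
    "q3": [0.0, 1.0, 1.0],
})

# The student is the unit of analysis: each student's score is the share
# of the safety questions he or she answered that were counted as safe.
student_scores = safe.mean(axis=1)  # unanswered (NaN) items are skipped

# A national measure averages across students; because unanswered items
# are skipped per student, the result need not equal the average of the
# question-level percentages.
print(student_scores.mean())
```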
Although the student safety surveys were an attempt to survey a census of the population of participants, we treated the survey as a sample in certain respects due to the non-response of about 10 percent of students as well as the ongoing nature of the regularly repeated survey. Therefore, we considered these data as a random sample from a theoretical population of students in this program and used statistical tests to assess any differences.
Treating the data as a statistical sample, we carried out statistical tests of differences in safety measures across student characteristics (e.g., age, gender, length of time in the program). Because of the large sample size, smaller differences may be detected as statistically significant: statistical significance is a function both of the magnitude of the true difference (tests are more likely to detect differences when the true values are very different) and of the sample size (all else being equal, larger samples can detect smaller differences than smaller samples can). Accordingly, we considered statistical significance in conjunction with whether the detected differences were meaningful or important in a practical sense. In particular, we used a series of F-tests to test, at the alpha = 0.05 level, for differences in the average safety measure across categories of age, gender, time in program, center size, and operator type.
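For illustration only, a minimal sketch of one such F-test (in Python, using SciPy's f_oneway function and hypothetical per-student safety scores for two categories of a characteristic):

```python
from scipy import stats

# Hypothetical per-student safety scores for two groups (e.g., two age
# categories); the actual tests ran across age, gender, time in program,
# center size, and operator type.
group_a = [0.92, 0.85, 0.88, 0.95, 0.80, 0.90]
group_b = [0.75, 0.82, 0.78, 0.88, 0.70, 0.77]

# One-way F-test of the difference in average safety measure.
f_stat, p_value = stats.f_oneway(group_a, group_b)

# Statistical significance at alpha = 0.05 is then weighed against whether
# the detected difference is meaningful in a practical sense.
print(f"F = {f_stat:.2f}, p = {p_value:.4f}, significant: {p_value < 0.05}")
```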
Appendix II: Categories of Incidents in the Significant Incident Reporting System (SIRS)
Appendix III: All Significant Incidents Reported by Job Corps Centers in Program Year 2016
Our analysis of the Employment and Training Administration’s (ETA) Significant Incident Reporting System (SIRS) data showed that there were 14,704 reported safety and security incidents at Job Corps centers in program year 2016, which include incidents involving students, staff, and non-Job Corps individuals. See table 9.
Appendix IV: Reported Safety and Security Incidents Involving Students by Job Corps Center, Program Year 2016
Job Corps centers reported 13,673 safety and security incidents involving students, including those that occurred both onsite and offsite, in program year 2016. See table 10 for information on each Job Corps center, including the number of incidents involving students reported in program year 2016.
Appendix V: GAO Safety Measure for Job Corps Centers, March 2017
We calculated safety measures for each Job Corps center, based on student responses to the safety-related questions on the student satisfaction survey (see table 11). We used the methodology described in appendix I to calculate safety measures for the centers. Results in table 11 are from the March 2017 survey, the most recent for program year 2016. The percentages in this table are not comparable and should not be analyzed with the numbers of reported incidents at each center because they are distinct measures that cover different periods of time.
Appendix VI: ETA’s Monitoring of Job Corps Centers
The Employment and Training Administration's (ETA) risk-based monitoring strategy is designed to identify emerging problems that place a Job Corps center at risk for safety and security problems. The strategy is largely implemented by regional office staff, who work with the Office of Job Corps' newly formed Division of Regional Operations and Program Integrity and use a variety of tools to assess, track, and report on center performance (see table 12).
Appendix VII: Comments from the Department of Labor
Appendix VIII: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Mary Crenshaw (Assistant Director), Andrea Dawson (Analyst-in-Charge), Sandra Baxter, and Matthew Saradjian made key contributions to this report. Additional assistance was provided by Alex Galuten, Gretta Goodwin, Benjamin Licht, Grant Mallie, Mimi Nguyen, Nhi Nguyen, Monica Savoy, Almeta Spencer, Manuel Valverde, Kathleen van Gelder, and Sonya Vartivarian. | Why GAO Did This Study
Deficiencies identified in multiple DOL Inspector General audits since 2009 and two student deaths in 2015 have raised concerns regarding the safety and security of Job Corps students. GAO was asked to review safety and security of students in the Job Corps program. GAO's June 2017 testimony summarized preliminary observations. This report further examines (1) the number and types of reported safety and security incidents involving Job Corps students; (2) student perceptions of their safety at Job Corps centers; and (3) the extent to which ETA has taken steps to address safety and security at Job Corps centers.
GAO analyzed ETA's reported incident data for Job Corps centers from July 1, 2016, through June 30, 2017. GAO also analyzed ETA's student survey data from the same period, reviewed relevant documentation, and interviewed ETA officials at its national office and all six regions. GAO also visited two Job Corps centers that had different operators and at least 100 recent incidents. These two centers are not generalizable to all centers.
What GAO Found
Job Corps centers reported 13,673 safety and security incidents involving students from July 2016 to June 2017, according to GAO's analysis of the Department of Labor's (DOL) Employment and Training Administration's (ETA) data. Most reported incidents occurred onsite and involved recently enrolled male students under age 20. During that time, the program served about 79,000 students at 125 Job Corps centers, according to ETA officials. ETA's Office of Job Corps administers the program, which is the nation's largest residential, educational, and career and technical training program for low-income youth generally between the ages of 16 and 24. Drug-related incidents and assaults accounted for 48 percent of all reported incidents (see fig.).
Students generally felt safe at Job Corps centers, yet fewer felt safe in some situations, based on GAO's analysis of ETA's September 2016 and March 2017 Job Corps student satisfaction surveys. At least 70 percent of students reported that they felt safe on half of the 12 safety-related questions in the 49-question survey about their experiences in the Job Corps program, but fewer students reported feeling safe when asked if they were made to feel unimportant or if they heard students threaten each other. ETA plans to administer a new survey nationally by January 2019 that focuses solely on safety and security issues.
ETA has initiated several actions to improve safety and security at Job Corps centers, but insufficient guidance for its monitoring staff and absence of a comprehensive plan for safety and security may put the success of these actions at risk. Among its actions, ETA adopted a new risk-based monitoring strategy to identify emerging problems at the centers. Officials GAO spoke with in five of ETA's regional offices said that the new strategy has improved monitoring, but that more guidance on how to interpret and apply safety and security policies is needed to promote consistency across centers. Also, ETA lacks a comprehensive plan linking its new efforts to an overall safety and security framework. ETA officials told GAO that limited staff capacity and lack of expertise have hindered their efforts in developing such a plan. Without a comprehensive plan, ETA runs the risk that its new efforts will not be successful.
What GAO Recommends
GAO is making three recommendations to DOL, including that ETA develop additional monitoring guidance and a comprehensive plan for safety and security. DOL agreed with GAO's three recommendations. |
Background
Federal agencies and our nation’s critical infrastructures—such as energy, transportation systems, communications, and financial services— are dependent on computerized (cyber) information systems and electronic data to carry out operations and to process, maintain, and report essential information. The information systems and networks that support federal operations are highly complex and dynamic, technologically diverse, and often geographically dispersed. This complexity increases the difficulty in identifying, managing, and protecting the myriad of operating systems, applications, and devices comprising the systems and networks.
Cybersecurity professionals can help to prevent or mitigate the vulnerabilities that could allow malicious individuals and groups access to federal information technology (IT) systems. The ability to secure federal systems depends on the knowledge, skills, and abilities of the federal and contractor workforce that designs, develops, implements, secures, maintains, and uses these systems. This includes federal and contractor employees who use the systems in the course of their work, as well as the designers, developers, programmers, and administrators of the programs and systems.
However, the Office of Management and Budget has noted that the federal government and private industry face a persistent shortage of cybersecurity and IT talent to implement and oversee information security protections to combat cyber threats. This shortage of cybersecurity professionals makes securing the nation’s networks more challenging and may leave federal IT systems vulnerable to malicious attacks. Having experienced and qualified cybersecurity professionals is important for DHS to help mitigate vulnerabilities in its own and other agencies’ computer systems as a result of cyber threats.
Federal Initiative and Guidance Are Intended to Improve Cybersecurity Workforces
In recent years, the federal government has taken various steps aimed at improving the cybersecurity workforce. These include establishing a national initiative to promote cybersecurity training and skills and developing guidance to address cybersecurity workforce challenges.
The National Initiative for Cybersecurity Education (NICE): This initiative, which began in March 2010, is a partnership among government, academia, and the private sector. It is coordinated by the National Institute of Standards and Technology (NIST) to help improve cybersecurity education. According to NICE, its mission includes promoting cybersecurity education, training, and workforce development, and coordinating with government, academic, and industry partners to build on existing successful programs and facilitate change and innovation. The initiative’s goal is to increase the number of skilled cybersecurity professionals in order to boost national IT security.
National Cybersecurity Workforce Framework: In April 2013, NICE published the National Cybersecurity Workforce Framework, which is intended to provide a consistent way to define and describe cybersecurity work at any public or private organization, including federal agencies. The initial framework defined 31 cybersecurity-related specialty areas that were organized into 7 categories. In August 2017, the framework was revised to include 33 cybersecurity-related specialty areas. The 7 categories are: securely provision, operate and maintain, protect and defend, investigate, collect and operate, analyze, and oversee and govern. For example, in the oversee and govern category, a specialty area is cybersecurity management, which covers the management of personnel, infrastructure, policy, and security awareness. Further, in the protect and defend category, the vulnerability assessment and management specialty area covers conducting assessments of threats and vulnerabilities and recommending appropriate mitigation countermeasures in order to protect information systems from threats.
In August 2017, NIST also revised the framework to define work roles within each specialty area and describe cybersecurity tasks for each work role. The revision also described the knowledge, skills, and abilities that a person should have in order to perform each work role. The revised framework is intended to enable agencies to examine specific IT and cybersecurity-related work roles and identify personnel skills gaps.
OPM Guidance for Assigning Employment Codes to Cybersecurity Positions: OPM sets data standards for federal job classifications, including cybersecurity positions. The data standards, issued by OPM in November 2014, created a 2-digit employment code for each work category and specialty area defined in the initial 2013 NICE cybersecurity workforce framework. Federal agencies use the codes to identify cybersecurity positions in personnel systems, such as the National Finance Center's personnel and payroll system. According to OPM, assigning codes to federal cybersecurity positions is intended to lay the groundwork for a consistent governmentwide count of the federal cybersecurity workforce. Use of these codes is intended to enable OPM and federal agencies to more effectively identify the cybersecurity workforce; determine baseline capabilities; examine hiring trends; identify skill gaps; and recruit, hire, train, develop, and retain an effective cybersecurity workforce. (See appendix II for a description of the specialty areas defined in the NICE Cybersecurity Workforce Framework and their corresponding OPM codes.)
In January 2017, OPM issued new guidance to agencies for assigning employment codes to cyber-related positions. This guidance created a unique 3-digit employment code for each cybersecurity work role identified in a draft version of the 2017 NICE cybersecurity workforce framework. To enhance the recruiting and hiring of workers with needed skills, agencies are to use the new 3-digit employment codes to identify critical needs and provide training and development opportunities for cybersecurity personnel. In October 2017, NIST issued guidance, which reflected the finalized 2017 NICE framework and included a crosswalk of the 2-digit employment codes to the 3-digit employment codes.
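For illustration only, such a crosswalk is essentially a one-to-many mapping, since a single 2-digit specialty-area code can correspond to several 3-digit work-role codes. A minimal sketch in Python (the code values below are placeholders, not OPM's actual employment codes):

```python
# Placeholder crosswalk entries; OPM's actual code values differ.
CROSSWALK = {
    "41": ["411", "412"],  # one 2-digit specialty area -> multiple work roles
    "52": ["521"],
}

def candidate_work_roles(two_digit_code):
    """Return the 3-digit work-role codes mapped to a 2-digit code."""
    return CROSSWALK.get(two_digit_code, [])

# Recoding a position is therefore not a mechanical one-for-one
# substitution; an agency must choose among the candidate work roles.
print(candidate_work_roles("41"))  # ['411', '412']
```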
DHS’s Cybersecurity Workforce Performs a Wide Range of Critical Missions
DHS is the third largest department in the federal government, employing approximately 240,000 people and with an annual budget of about $60 billion—$6.4 billion of which was spent on IT in fiscal year 2017. The department leads the federal government’s efforts to secure our nation’s public and private critical infrastructure information systems. For example, DHS collects and shares information related to cyber threats and cybersecurity risks and incidents with other federal partners to enable real-time actions to address these risks and incidents.
DHS is made up of 15 components: 7 front-line, or operational, components, and 8 support components. The operational components lead the department’s front-line activities to protect the nation, while the support components are to provide the resources, analysis, equipment, services, and other support to ensure that the operational components have the tools and resources to accomplish the department’s mission. The 15 operational and support components, including the 6 that we reviewed, are identified in figure 1.
The components perform a diverse range of cybersecurity functions. These functions include combating cybercrime; responding to cyber incidents; sharing cyber-related information, including threats and best practices; providing cybersecurity training and education; and securing both privately owned critical infrastructure and non-military federal networks. The missions and cybersecurity functions for the six components selected for our review are described in table 1.
Federal Laws Require DHS to Assess Its Cybersecurity Workforce
HSCWAA required DHS to perform several workforce assessment-related activities. Specifically, the department was to:

1. Establish procedures for identifying and categorizing cybersecurity positions and assigning codes to those positions. This was to be done within 90 days of the law's enactment.

2. Identify all positions with cybersecurity functions and determine the work category and specialty areas of each position. DHS was required to identify all cybersecurity positions—both filled and vacant—within the department. In addition, it was to determine the cybersecurity work category and specialty areas for each such position. Work categories and specialty areas are defined in the NICE Cybersecurity Workforce Framework.

3. Assign codes to all filled and vacant cybersecurity positions. The department was to assign the appropriate 2-digit employment code, as set forth in OPM's Guide to Data Standards, to each position based on the position's primary cybersecurity work category and specialty areas.
In addition, after completing the aforementioned activities, the department was to:

4. Identify the cybersecurity work categories and specialty areas of critical need in the department's cybersecurity workforce and report to Congress.

5. Submit to OPM an annual report through 2021 that describes the work categories and specialty areas of critical need and substantiates the critical need designations.
The act required DHS to complete the majority of the activities by specific due dates between March 2015 and September 2016 (see table 2).
Beyond HSCWAA, the Federal Cybersecurity Workforce Assessment Act of 2015 was enacted in December 2015. It assigned specific workforce planning-related activities to all federal agencies, including DHS. Specifically, the law requires all federal agencies to identify all positions that perform information technology, cybersecurity, or other cyber-related functions and assign the appropriate employment code to each position. Similar to HSCWAA, the federal act also requires all federal agencies, including DHS, to identify and report to OPM on their cybersecurity work roles of critical need; each agency also is to submit to Congress a progress report on identifying cyber-related work roles of critical need. According to OPM officials within Employee Services, which oversees the federal cybersecurity workforce activities and implementation, agencies are not expected to continue coding to the 2-digit data standard; instead, they are to adopt the 3-digit data standard and complete coding to it by April 2018.
DHS Has Not Fully Identified Cybersecurity Positions or Assigned Employment Codes in a Complete and Reliable Manner
As defined in OPM's guidance and required by HSCWAA, DHS has begun identifying and categorizing its cybersecurity positions and assigning them the appropriate employment codes. However, DHS has not completed all of these activities, as required. Specifically, the department did not develop timely and complete procedures or review its components' procedures. In addition, it did not completely and reliably identify positions and assign employment codes because its processes were manual, undocumented, and resource-intensive.
As indicated in table 3, the department did not complete any of the activities associated with establishing procedures and identifying and assigning employment codes to positions by the statutorily defined due dates, and two of these efforts are still ongoing.
DHS Did Not Ensure Cybersecurity Workforce Procedures Were Timely, Complete, or Reviewed
HSCWAA required DHS to establish procedures to identify and assign the appropriate employment code to all of the department’s filled and vacant positions with cybersecurity functions, in accordance with OPM’s Guide to Data Standards by March 2015. In addition, DHS’s April 2016 Cybersecurity Workforce Coding guidance stated that components should ensure procedures are in place to monitor and to update the employment codes as positions change over time. Further, Standards for Internal Control in the Federal Government recommends that management assign responsibility and delegate authority to key roles and that each component develop individual procedures to implement objectives. The standard also recommends that management periodically review such procedures to see that they are developed, relevant, and effective.
Toward this end, OCHCO has developed procedures and recommended implementation steps for coding positions with cybersecurity functions for the department’s components. The procedures include criteria to be used in identifying cybersecurity positions. For example, the procedures state that any position that performs cybersecurity work at least 25 percent of the time should be identified as a cybersecurity position. The procedures also include information on how components are to select the appropriate data element codes.
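For illustration only, a minimal sketch (in Python, with hypothetical positions and field names) of how the 25 percent criterion might be applied:

```python
# Hypothetical positions with the estimated share of time spent on
# cybersecurity work; titles and fields are illustrative assumptions.
positions = [
    {"title": "Network administrator", "pct_cyber_work": 40},
    {"title": "Budget analyst",        "pct_cyber_work": 5},
    {"title": "Incident responder",    "pct_cyber_work": 100},
]

# Under the procedures, any position performing cybersecurity work at
# least 25 percent of the time is identified as a cybersecurity position.
cyber_positions = [p for p in positions if p["pct_cyber_work"] >= 25]
print([p["title"] for p in cyber_positions])  # excludes the budget analyst
```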
Nevertheless, although OCHCO developed procedures for identifying positions and assigning codes, the procedures were not timely. Specifically, DHS did not include in its procedures information on identifying positions and assigning codes to address the act’s requirements until April 2016—13 months after the due date.
In addition, the procedures were not complete in that they did not include information related to identifying and coding vacant positions, as the act required. For example, while the National Finance Center system, which is DHS’s system of record for employment codes assigned to cybersecurity employees, was modified to capture the codes for filled positions, the system was not modified to capture data on vacant positions. (For an explanation of National Finance Center’s system and how DHS relates to it, see footnote 12.) In addition, the department’s procedures did not address how to identify or code vacant positions, or where such information should be reported in a standardized manner across the department.
Moreover, the departmental procedures did not identify the individual within each DHS component who was responsible for leading and overseeing the identification and coding of the component’s cybersecurity positions. For example, the procedures did not identify a responsible individual for leading the effort to identify and code CBP’s cybersecurity positions. Because there was no identified individual responsible for the entirety of the CBP cybersecurity workforce identification efforts, CBP officials told us they were unable to comment on, or provide a status update on, where they were on the cybersecurity coding process.
Further, although components were able to supplement the departmental procedures by developing their own component-specific procedures for identifying and coding their cybersecurity positions, DHS did not review selected components’ procedures for consistency with departmental guidance. The department could not provide documentation that OCHCO had verified or reviewed component-developed procedures. OCHCO officials acknowledged that they had not reviewed the components’ procedures and had not developed a process for conducting such reviews.
OCHCO officials identified several factors that they said limited their ability to develop timely and complete procedures for identifying and coding cybersecurity positions, and to review the supplemental procedures developed by the components. For example, they stated that:
DHS did not complete its update of the procedures for identifying cybersecurity positions and assigning codes until April 2016 because the department could not decide whether or not certain positions within the department should be considered cybersecurity positions;

each component had the best understanding of its human capital systems and processes, so the development of tailored procedures was best left up to each component;

each of the six selected DHS components recorded and tracked vacant positions differently; therefore, the department's human capital office could not issue department-wide guidance on vacant positions;

the cybersecurity specialty areas for vacant positions were not known until a position description was developed or verified and a hiring action was imminent; and

DHS did not assign responsibilities for, or review, components' procedures because, as noted previously, the department believed that its components had the best understanding of their specific human capital systems; thus, what the components included in their own procedures was best left up to them.
OCHCO officials said that they plan to work with their internal accountability team to review component-developed procedures, but they had not established a time frame for doing so. Without assurance that procedures are timely, complete, and reviewed, DHS cannot be certain that components are effectively prepared to identify and code all positions with cybersecurity functions, as required by the act.
DHS Has Not Yet Completed Required Identification Activities
HSCWAA required DHS to identify all cybersecurity positions, including vacant positions, by September 2015 in order to meet the act’s other deadlines. Further, the act called for the department to use OPM’s Guide to Data Standards to categorize the identified positions and determine the work category or specialty area of each position.
As of December 2016, the department reported that it had identified 10,725 cybersecurity positions, including 6,734 federal civilian positions, 584 military positions, and 3,407 contractor positions. However, as of November 2017, the department had not completed identifying all of its cybersecurity positions or determining the work categories or specialty areas of the positions. For example, three of the six DHS components we reviewed had not identified their vacant cybersecurity positions. OCHCO officials stated that components varied in reporting their identified vacant positions because the department did not have a system to track vacancies.
DHS also reported that it most commonly determined that the work category or specialty area of its cybersecurity positions were in the “protect and defend,” “securely provision,” and “oversight and development” work categories, and in the “security program management” and “vulnerability assessment and management” specialty areas of the NICE framework. DHS reported at least 12 of 15 DHS components as having cybersecurity positions in these categories and specialty areas. However, DHS could not provide data to show the actual numbers of positions in each of these categories and specialty areas. According to OCHCO officials, the department was still in the process of identifying positions for the 2-digit codes and would continue this effort until the 3-digit codes were available in the National Finance Center personnel and payroll system in December 2017. At that time, OCHCO officials stated that the department intends to start developing procedures for identifying and coding positions using the 3-digit codes.
DHS Has Not Completely and Accurately Assigned Employment Codes
In addition to identifying all of its positions with cybersecurity functions and determining the work categories and specialty areas of each position consistent with the NICE framework, HSCWAA required DHS to assign positions codes to all such identified positions by September 2015. According to the Office of Management and Budget, having complete data consistent with the framework will help agencies to effectively examine the cybersecurity workforce; identify skill gaps; and improve workforce planning. Further, Standards for Internal Control in the Federal Government states that agencies should obtain relevant data from reliable sources that are accurate.
DHS has not completely and accurately assigned employment codes to its cybersecurity workforce. As of August 2017—23 months after the due date—the department had not completed the process of assigning the 2-digit employment codes to all of its identified cybersecurity positions. For example, five of the six components we selected for review had not completed the coding of their cyber positions.
In addition, DHS did not completely or accurately assign codes to all filled and vacant cybersecurity positions as required by the act. In August 2017, OPM provided a progress report to Congress containing DHS data that stated that 95 percent of DHS-identified cybersecurity positions had been coded. However, our analysis determined that the department had assigned cybersecurity position codes to approximately 79 percent, rather than the reported 95 percent, of identified federal civilian cybersecurity positions. See figure 2 below. DHS could not demonstrate that it had assigned codes to 95 percent of its positions, as reported, since its coding progress data never indicated such a percentage.
The percentage of coded positions reported for DHS was overstated because it was not based on complete information. Specifically, the percentage reflected progress in coding filled federal civilian cybersecurity positions but excluded vacant positions, even though the act required DHS to report those positions as well. Among the six components we selected for review, five had not yet completed coding their positions.
Figure 2 compares the results of our analysis of DHS's progress in coding its cybersecurity positions, which considered both filled and vacant federal civilian cybersecurity positions, with the figures the department identified, which were based on incomplete data covering only filled positions.
In addition to being incomplete, DHS’s results were not accurate. Specifically, OCHCO developed a bi-monthly dashboard to monitor and report coding progress; however, the office did not have assurance that its data were accurate. OCHCO officials stated they did not verify the components’ data for accuracy. For example, while no more than 100 percent of identified positions should be coded, OCHCO reported 122.7 percent of positions as being coded for the Office of the Chief Information Officer. Such anomalies were due to DHS components reporting the total number of identified cybersecurity positions on a semi-annual basis, while OCHCO determined positions coded on a bi-monthly basis using data from the National Finance Center personnel and payroll system. Yet, OCHCO analyzed and reported these numbers together, even though they were representative of different time periods. This produced unreliable results that were not representative of actual progress.
Table 4 provides examples of components’ coding progress, as reflected in DHS’s August 29, 2017 dashboard report, which showed one component that had more cybersecurity positions coded than were identified.
OCHCO officials reported several factors related to their processes and systems that had limited their ability to collect and use data that were complete and accurate. Specifically, the officials stated that OCHCO did not have documented processes to collect and verify data from the components. The officials also stated that the components did not report vacancies consistently, and that the department does not have a system to track the vacancies. The officials further stated that the cybersecurity workforce amounts frequently changed, and that they could not review workforce data for reliability, as such a review was a resource-intensive activity.
However, unless DHS ensures that processes are in place to obtain and use data that are complete (including vacant positions) and accurate, the department cannot be assured of an accurate understanding of its internal coding progress. Without the ability to code its cybersecurity positions in a complete and accurate manner, DHS will not be able to effectively examine the cybersecurity workforce, identify skill gaps, and improve workforce planning.
DHS Has Not Identified or Reported Its Department-wide Cybersecurity Workforce Areas of Critical Need
While DHS has identified workforce capacity and capability gaps, it has not identified or reported to Congress its department-wide cybersecurity critical needs that align with the NICE framework. Additionally, the department has not reported its critical needs annually to OPM, nor has it developed plans and time frames for completing the priority actions needed for that annual reporting. Further, as indicated in table 5, the department did not address any of the required activities by the statutorily defined due dates.
DHS Has Not Identified Critical Needs in Alignment with the NICE Framework or Provided Guidance to Components
HSCWAA required DHS to identify its cybersecurity work categories and specialty areas of critical need in alignment with the NICE framework and to report this information to the appropriate congressional committees by June 2016. In addition, according to a DHS directive, the DHS Chief Human Capital Officer is responsible for providing guidance to the department’s components on human resources standards, such as identifying workforce needs. According to GAO’s leading practices on strategic workforce planning, developing and providing guidance could help agencies identify their critical needs in order to effectively recruit, hire, train, and retain cybersecurity personnel.
Although required to do so by June 2016, DHS has not yet identified its cybersecurity work categories and specialty areas of critical need in alignment with the NICE framework. The department identified workforce skills gaps and included this information in a report that it submitted to congressional committees in March 2017. However, the department did not align the workforce skills gaps report to the NICE framework’s work categories and specialty areas as required by HSCWAA. (The categories and specialty areas are described in appendix II.)
Specifically, although DHS was required to align critical needs with specific specialty areas, it did not align the skills gaps to particular specialty areas in the NICE framework. For example, DHS identified a skill gap called development operations, which is related to 12 different specialty areas in the NICE framework. This skill gap also overlaps with other DHS skill gaps and creates the potential for double-counting critical needs. Furthermore, although three selected components reported in our questionnaires that they were able to identify critical needs that aligned to the framework, they did not report this information to OCHCO.
According to OCHCO officials, DHS has not identified department-wide cybersecurity critical needs that align with the framework partly because OPM had not provided DHS with guidance for identifying cybersecurity critical needs. According to OPM officials, however, they provided oral guidance to DHS on using the 2-digit codes for identifying its critical needs during four meetings in 2016 and 2017. The OPM officials also stated that they planned to develop governmentwide guidance for using the 3-digit codes to identify cybersecurity critical needs by March 2018 to fulfill the requirements of the Federal Cybersecurity Workforce Assessment Act of 2015. According to OPM, agencies such as DHS are required to identify critical needs for the 3-digit codes by April 2019. DHS OCHCO officials said that DHS plans to transition to identifying cyber-related work roles of critical need once it has completed the 3-digit coding efforts under the 2015 federal act mentioned previously.
Further, DHS has not developed and provided guidance to help its component-level agencies to identify their critical needs that align to the NICE framework. Specifically, DHS did not include guidance in its procedures that instructed components on how to report on their critical needs or to align to the NICE framework work categories and specialty areas. Two selected components’ officials told us they required guidance from OCHCO on how best to identify critical needs.
According to OCHCO officials, they did not provide components guidance on critical needs that align with the NICE framework because the components were in the best position to determine their critical needs. Further, OCHCO officials stated that the components do not generally view critical skills gaps in terms of the categories or specialty areas as defined in the NICE framework; instead, they describe their skills gaps using position titles that are familiar to them. For example, one selected component identified security engineering as a skills gap familiar to it. However, according to OCHCO officials, this gap may align to five different specialty areas in the NICE framework's securely provision work category. As mentioned previously, critical needs must be aligned with a specific specialty area.
In September 2017, OCHCO developed a draft document that crosswalks identified department-wide cybersecurity skills gaps to one or more specialty areas in the NICE framework. However, the document does not adequately help components identify their critical needs by aligning their gaps with the NICE framework. Half of the DHS skills gaps overlap with two or more work categories, but the National Finance Center payroll system allows components to enter only one code per position. Further, the document does not provide additional decision rules to help components determine a critical need in cases in which a skills gap is mapped to multiple work categories.
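The crosswalk problem can be modeled as a simple mapping from skills gaps to specialty areas. In the minimal sketch below, the gap names and their counts of related specialty areas come from this report, but the particular specialty areas listed are illustrative assumptions; the point is that any gap mapped to more than one area cannot be coded unambiguously in a payroll system that accepts a single code per position.

```python
# Illustrative crosswalk from DHS-identified skills gaps to NICE
# framework specialty areas. The gap names and the counts of related
# specialty areas come from this report; the specific specialty areas
# listed are assumptions for illustration only.
crosswalk = {
    "development operations": [          # report: related to 12 areas
        "software assurance and security engineering",
        "systems development",
        "test and evaluation",
        # ...nine more candidate specialty areas omitted for brevity
    ],
    "security engineering": [            # report: may align to 5 areas
        "systems security architecture",
        "systems requirements planning",
        # ...three more candidate specialty areas omitted for brevity
    ],
}

# The National Finance Center payroll system accepts one code per
# position, so any gap mapped to more than one specialty area needs a
# decision rule before positions supporting it can be coded.
for gap, areas in crosswalk.items():
    if len(areas) > 1:
        print(f"'{gap}': ambiguous, {len(areas)}+ candidate specialty areas")
```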
Without providing relevant guidance to help components identify their critical needs, DHS and the components are hindered from effectively identifying and prioritizing workforce efforts to recruit, hire, train, develop, and retain cybersecurity personnel across the department.
DHS Did Not Report Critical Needs Annually to OPM or Develop Plans and Time Frames for Completing Priority Actions
HSCWAA required that, annually from September 2016 through September 2021, DHS, in consultation with OPM, submit a report to OPM that describes and substantiates critical need designations. In addition, Standards for Internal Control in the Federal Government states that management should develop plans to achieve objectives. Developing plans to report critical needs is a control activity that could help capture and sequence all of the activities that DHS must complete in order to report critical needs. This involves clearly defining what is to be achieved, who is to achieve it, how it will be achieved, and the time frames for achievement.
DHS did not report cybersecurity critical needs to OPM in September 2016 or September 2017 as required. Instead, the department first reported its cybersecurity coding progress and skills gaps in the March 2017 report that it sent to OPM and Congress addressing several of the HSCWAA requirements. The report did not describe or substantiate critical need designations because DHS has not yet identified them. OCHCO officials stated that the department plans to submit another report to OPM; however, they did not indicate whether critical needs will be included in the report, and did not have a time frame for when they plan to submit the report to OPM.
Additionally, DHS has not developed plans or time frames to complete priority actions that OCHCO officials said must be completed before it can report its cybersecurity critical needs to OPM. DHS’s Comprehensive Cybersecurity Workforce Update reported two priority actions to identify, describe, and substantiate cybersecurity critical needs—developing a DHS cybersecurity workforce strategy and completing its initial cybersecurity workforce research—by the end of fiscal year 2017. However, DHS did not complete the priority actions by the end of fiscal year 2017, as planned.
As of September 2017, the department was still in the process of finalizing the DHS cybersecurity workforce strategy and had not yet completed the initial cybersecurity workforce research. OCHCO officials said that the strategy is to be influenced by ongoing efforts to finalize the DHS comprehensive cybersecurity mission strategy, provide DHS reports required by the May 2017 cybersecurity-related presidential executive order, and finalize and implement the new cybersecurity-focused personnel system. According to OCHCO officials, the department plans to conduct additional interviews and focus groups in fiscal year 2018.
According to DHS OCHCO officials, the department did not develop plans or schedules with time frames to report cybersecurity critical needs. These officials stated that the report that the department submitted to Congress in March 2017 had contained plans and schedules. However, it did not capture and sequence all of the activities that DHS officials said must be completed in order to report critical needs. For example, the report did not include a schedule for completing the cybersecurity workforce strategy or conducting additional interviews and focus groups to complete the initial cybersecurity workforce research.
Until DHS develops plans and schedules with time frames for reporting its cybersecurity critical needs, the department may not have important insight into its needs for ensuring that it has the workforce necessary to carry out its critical role of helping to secure the nation’s cyberspace. Further, OPM may be hindered from using DHS’s reports to understand critical needs consistently on a governmentwide basis.
Conclusions
DHS has begun the required workforce assessment activities to identify, categorize, and assign codes to its cybersecurity positions. However, the department did not complete the activities by their statutorily defined due dates and efforts are still ongoing. Specifically, the department did not develop timely and complete procedures or review its components’ procedures. In addition, DHS’s efforts to identify, categorize, and code cybersecurity positions were incomplete and unreliable. Without the ability to identify, categorize, and code its cybersecurity positions in a complete and accurate manner, DHS will not be able to effectively examine the cybersecurity workforce, identify skill gaps, and improve workforce planning.
DHS has identified critical gaps in its cybersecurity workforce, but these gaps did not align with the NICE framework work categories and specialty areas of critical need, as required by the act. Specifically, DHS has not developed guidance to help its component agencies and offices identify their cybersecurity critical needs. Moreover, DHS lacks plans with defined time frames for completing its required annual reporting to OPM. Until the department addresses these issues, it may continue to miss reporting deadlines and be hindered from effectively identifying and prioritizing critical workforce efforts to recruit, hire, train, develop, and retain cybersecurity personnel across its multiple components. In addition, DHS may not have cybersecurity personnel with the required skills to better protect federal networks and national critical infrastructure from threats.
The commitment of DHS’s leadership is essential to successfully addressing these issues and the associated management weaknesses. By taking urgent and diligent action now, DHS will be better positioned to fulfill the requirements of HSCWAA and to identify and code its filled and vacant cybersecurity positions accurately when it transitions to using the revised NICE framework.
Recommendations for Executive Action
We are making the following six recommendations to DHS:

The Secretary of Homeland Security should develop procedures on how to identify and code vacant cybersecurity positions. (Recommendation 1)
The Secretary of Homeland Security should identify the individual in each component who is responsible for leading that component’s efforts in identifying and coding cybersecurity positions. (Recommendation 2)
The Secretary of Homeland Security should establish and implement a process to periodically review each component’s procedures for identifying component cybersecurity positions and maintaining accurate coding. (Recommendation 3)
The Secretary of Homeland Security should ensure OCHCO collects complete and accurate data from its components on all filled and vacant cybersecurity positions when it conducts its cybersecurity identification and coding efforts. (Recommendation 4)
The Secretary of Homeland Security should develop guidance to assist DHS components in identifying their cybersecurity work categories and specialty areas of critical need that align to the NICE framework. (Recommendation 5)
The Secretary of Homeland Security should develop plans with time frames to identify priority actions to report on specialty areas of critical need. (Recommendation 6)
Agency Comments and Our Evaluation
We received written comments on a draft of this report from DHS. In the comments (reprinted in appendix III), the department concurred with our six recommendations and provided estimated completion dates for implementing each of them.
With regard to recommendations 1 and 2, DHS stated that, by February 28, 2018, it plans to finalize and disseminate an updated version of its cybersecurity position identification and coding guidance to address vacant positions, as well as issue a memorandum requiring its components to designate a lead for reporting progress to OCHCO. Further, by April 30, 2018, the department said it plans to address recommendation 3 by disseminating a memorandum that includes a process for periodically reviewing component procedures and instructions for components to report related data and documents.
DHS also stated that, by June 29, 2018, it plans to issue memorandums to its components that provide instructions, guidance, and plans to address recommendations 4 through 6. The department added that it intends to (1) periodically review compliance and cybersecurity workforce data concerns with component leads to ensure data accuracy; (2) disseminate a reporting schedule for identifying cybersecurity critical needs; and (3) develop and disseminate a project plan with milestones, due dates, and responsibilities for reviewing progress and reporting on workforce planning actions in fiscal years 2018 and 2019.
The aforementioned actions, if implemented effectively, should help DHS address the intent of our recommendations. In addition, we received technical comments from the department, which we have incorporated, as appropriate.
We also provided a draft of this report for OPM’s review and comments. In response, an OPM program analyst stated, via email, that the agency had no edits, comments, or revisions to the draft report.
We are sending copies of this report to appropriate congressional committees, the Secretary of Homeland Security, and the Director of the Office of Personnel Management. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact Gregory C. Wilshusen at (202) 512-6244 or [email protected], or Chris Currie at (404) 679-1875 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
Appendix I: Objectives, Scope, and Methodology
Our objectives were to determine the extent to which DHS has (1) identified, categorized, and assigned employment codes to its cybersecurity positions, and (2) identified its cybersecurity workforce areas of critical need.
To address both objectives, we examined Department of Homeland Security (DHS) Office of the Chief Human Capital Officer (OCHCO) and component cybersecurity workforce data and documentation and interviewed OCHCO and component officials. In addition, we reviewed Standards for Internal Control in the Federal Government and Key Principles for Effective Strategic Workforce Planning, and compared the cybersecurity workforce internal controls and project management processes that DHS implemented to address the act against these criteria.
We also administered a questionnaire and data collection instrument (DCI) to a nonprobability sample of 6 of 15 DHS components. To select the 6 components, we used OPM's Enterprise Human Resources Integration-Statistical Data Mart data on DHS civilian positions. We segmented the 15 components into 3 groups based on their reported total number of cybersecurity personnel in DHS: high, medium, and low. From each group, we selected the 2 DHS components with the highest number of cybersecurity functions, as reported by DHS. Where components or offices in the same tier had equivalent cybersecurity functions, we selected the DHS component or office with the highest share of cybersecurity employees. (A simplified sketch of this selection logic appears after the component list below.) This approach resulted in the selection of the following DHS components:
U.S. Customs and Border Protection,
Departmental Management and Operations,
National Protection and Programs Directorate,
U.S. Secret Service,
Science & Technology Directorate, and
U.S. Citizenship and Immigration Services.
The results of this analysis are not generalizable to all DHS components.
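The tiering-and-selection rule described above can be expressed as a short script. In the sketch below, the component names and workforce counts are hypothetical placeholders (9 components are shown rather than DHS's 15); only the logic (three tiers by reported cybersecurity workforce size, with the top two components drawn from each tier) reflects our methodology.

```python
# Minimal sketch of the component selection methodology: rank components
# by reported cybersecurity workforce size, segment them into
# high/medium/low tiers, and take the top 2 from each tier.
# Component names and counts here are hypothetical placeholders.

components = {  # component: reported cybersecurity positions
    "Component A": 4200, "Component B": 3900, "Component C": 1500,
    "Component D": 1200, "Component E": 900, "Component F": 450,
    "Component G": 300, "Component H": 120, "Component I": 60,
}

ranked = sorted(components, key=components.get, reverse=True)
tier = len(ranked) // 3
tiers = {
    "high": ranked[:tier],
    "medium": ranked[tier:2 * tier],
    "low": ranked[2 * tier:],
}

# Top two per tier; in practice, ties were broken by the higher share
# of cybersecurity employees, per the methodology described above.
selected = [name for group in tiers.values() for name in group[:2]]
print(selected)  # six components, two from each tier
```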
In both the questionnaire and DCI, we asked questions related to the status of DHS’s identification, categorization and assignment of employment codes to cybersecurity positions, and identification of its cybersecurity workforce areas of critical need. To minimize errors that might occur from respondents interpreting our questions differently from our intended purpose, we performed a preliminary review of the questionnaire and DCI with OCHCO officials.
The selection of OCHCO officials for preliminary review was based on OCHCO’s oversight role in the implementation of the Homeland Security Cybersecurity Workforce Assessment Act of 2014 (HSCWAA). During this review, we interviewed the officials to ensure that the questions were applicable, clear, unambiguous, and easy to understand. We then revised our questionnaire and DCI based on the feedback provided during the preliminary review. All respondents completed the final questionnaire and DCI, although not all survey respondents answered every question. We then reviewed the responses and interviewed relevant component officials in order to get clarification and validation of their responses.
We determined that the data obtained from the questionnaire and DCI are sufficiently reliable for the purpose of reporting DHS's progress in assigning cybersecurity codes. However, these data have the following limitations: component responses may be from a particular program or office and not cover the breadth of the program, and component-reported data may be estimated or unavailable.
To address our first objective, we reviewed and analyzed DHS’s department-level cybersecurity workforce procedures and communications and organizational documents for identifying cybersecurity positions and assigning work-position codes in accordance with the act. Further, we examined department-level data from the Department of Agriculture’s National Finance Center, DHS dashboard reports, and DHS progress reports to the Office of Personnel Management (OPM) and Congress. To assess the reliability of OCHCO and component cybersecurity workforce data, we compared them with data from OPM’s Enterprise Human Resources Integration-Statistical Data Mart data on DHS civilian positions and against the National Finance Center personnel and payroll system data on the cybersecurity coding of DHS civilian positions as appropriate. In addition, we reviewed and analyzed component-level cybersecurity workforce procedures, as well as cybersecurity workforce data and documentation, including data calls to selected component-level offices in DHS. We evaluated these documents against the act’s requirements and Standards for Internal Control in the Federal Government to ensure that DHS’s processes addressed leading practices.
To address our second objective, we reviewed and analyzed DHS’s planned actions for identifying its cybersecurity workforce areas of critical need, including data calls to components, and DHS progress reports to OPM and Congress. We also examined OCHCO and component cybersecurity workforce data and department-level workforce planning documentation to evaluate the status of the department’s efforts to identify its cybersecurity workforce areas of critical need. We compared these documents against the act’s requirements, DHS-wide and component-specific workforce planning processes, the National Initiative for Cybersecurity Education (NICE) framework categories and specialty areas, and Standards for Internal Control in the Federal Government to ensure DHS met its requirements.
To assess the reliability of OPM’s Enterprise Human Resources Integration-Statistical Data Mart data on DHS civilian positions, we reviewed the data for obvious errors as well as compared OPM’s written responses to our data reliability questionnaire regarding the generation and use of the data. We determined that the data were sufficiently reliable for the purpose of helping inform our selection of a nonprobability sample of 6 DHS components as described above.
To assess the reliability of National Finance Center personnel and payroll system data on the cybersecurity coding of DHS civilian positions, we examined the data for outliers and obvious errors and compared those data to data and documentation from DHS components. In addition, we interviewed DHS officials and observed them generating and using the National Finance Center data. We determined that the data were sufficiently reliable for the purposes of reporting DHS cybersecurity workforce coding progress. The data are limited in that only filled federal civilian positions were reported in the National Finance Center system; vacancies, contractors, and military personnel were not included in those data.
To assess the reliability of DHS’s OCHCO and component human capital systems data on the DHS civilian cybersecurity workforce, we reviewed the data for outliers and obvious errors, and compared them against data from the National Finance Center personnel and payroll system. We also interviewed officials from OCHCO and selected DHS components regarding the generation and use of the data. We determined that the data were sufficiently reliable for the purpose of reporting DHS’ progress in assigning cybersecurity codes. However, the data have the following limitations: component responses may be from a particular program or office and not cover the breadth of the program, data may be estimated by components, and data may be measured at different intervals—for example, total cybersecurity workforce may be measured at a different point in time than cybersecurity workforce positions coded.
For both objectives, we supplemented the information and knowledge obtained from our assessments by holding discussions with relevant DHS OCHCO and the six components’ officials to evaluate the status of the department’s efforts to implement the act.
We conducted this performance audit from March 2017 to February 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: National Initiative for Cybersecurity Education (NICE) Cybersecurity Workforce Framework Categories and Specialty Areas
Securely Provision category

Information Assurance Compliance: Oversees, evaluates, and supports the documentation, validation, assessment, and authorization processes necessary to assure that existing and new information technology (IT) systems meet the organization's cybersecurity and risk requirements. Ensures appropriate treatment of risk, compliance, and assurance from internal and external perspectives.

Software Assurance and Security Engineering: Develops and writes/codes new (or modifies existing) computer applications, software, or specialized utility programs following software assurance best practices.

Systems Development: Works on the development phases of the systems development life cycle.

Systems Requirements Planning: Consults with customers to gather and evaluate functional requirements and translates these requirements into technical solutions. Provides guidance to customers about applicability of information systems to meet business needs.

Systems Security Architecture: Develops system concepts and works on the capabilities phases of the systems development life cycle; translates technology and environmental conditions (e.g., law and regulation) into system and security designs and processes.

Technology Research and Development: Conducts technology assessment and integration processes; provides and supports a prototype capability and/or evaluates its utility.

Test and Evaluation: Develops and conducts tests of systems to evaluate compliance with specifications and requirements by applying principles and methods for cost-effective planning, evaluating, verifying, and validating of technical, functional, and performance characteristics (including interoperability) of systems or elements of systems incorporating IT.

Operate and Maintain category

Customer Service and Technical Support: Addresses problems; installs, configures, troubleshoots, and provides maintenance and training in response to customer requirements or inquiries (e.g., tiered-level customer support).

Data Administration: Develops and administers databases and/or data management systems that allow for the storage, query, and utilization of data.

Knowledge Management: Manages and administers processes and tools that enable the organization to identify, document, and access intellectual capital and information content.

Network Services: Installs, configures, tests, operates, maintains, and manages networks and their firewalls, including hardware (e.g., hubs, bridges, switches, multiplexers, routers, cables, proxy servers, and protective distributor systems) and software that permit the sharing and transmission of all spectrum transmissions of information to support the security of information and information systems.

System Administration: Installs, configures, troubleshoots, and maintains server configurations (hardware and software) to ensure their confidentiality, integrity, and availability. Also, manages accounts, firewalls, and patches. Responsible for access control, passwords, and account creation and administration.

Systems Analysis: Conducts the integration/testing, operations, and maintenance of systems security.

Oversight and Development category

Education and Training: Conducts training of personnel within pertinent subject domain. Develops, plans, coordinates, delivers and/or evaluates training courses, methods, and techniques as appropriate.

Acquisition and Program/Project Management: Applies knowledge of data, information, processes, organizational interactions, skills, and analytical expertise, as well as systems, networks, and information exchange capabilities to manage acquisition programs. Executes duties governing hardware, software, and information system acquisition programs and other program management policies. Provides direct support for acquisitions that use information technology (IT) (including National Security Systems), applying IT-related laws and policies, and provides IT-related guidance throughout the total acquisition life cycle.

Legal Advice and Advocacy: Provides legally sound advice and recommendations to leadership and staff on a variety of relevant topics within the pertinent subject domain. Advocates legal and policy changes, and makes a case on behalf of client via a wide range of written and oral work products, including legal briefs and proceedings.

Security Program Management: Oversees the cybersecurity program of an information system or network; including managing information security implications within the organization, specific program, or other area of responsibility, to include strategic, personnel, infrastructure, requirements, policy enforcement, emergency planning, security awareness, and other resources.

Strategic Planning and Policy Development: Develops policies and plans and/or advocates for changes in policy that supports organizational cyberspace initiatives or required changes/enhancements.

Cybersecurity Management: Supervises, manages, and/or leads work and workers performing cybersecurity work.

Protect and Defend category

Computer Network Defense Analysis: Uses defensive measures and information collected from a variety of sources to identify, analyze, and report events that occur or might occur within the network in order to protect information, information systems, and networks from threats.

Computer Network Defense Infrastructure Support: Tests, implements, deploys, maintains, reviews, and administers the infrastructure hardware and software that are required to effectively manage the computer network defense service provider network and resources. Monitors network to actively remediate unauthorized activities.

Incident Response: Responds to crises or urgent situations within the pertinent domain to mitigate immediate and potential threats. Uses mitigation, preparedness, and response and recovery approaches, as needed, to maximize survival of life, preservation of property, and information security. Investigates and analyzes all relevant response activities.

Vulnerability Assessment and Management: Conducts assessments of threats and vulnerabilities; determines deviations from acceptable configurations, enterprise or local policy; assesses the level of risk; and develops and/or recommends appropriate mitigation countermeasures in operational and nonoperational situations.

Analyze category

All-Source Analysis: Analyzes threat information from multiple sources, disciplines, and agencies across the intelligence community. Synthesizes and places intelligence information in context; draws insights about the possible implications.

Exploitation Analysis: Analyzes collected information to identify vulnerabilities and potential for exploitation.

Targets: Applies current knowledge of one or more regions, countries, non-state entities, and/or technologies.

Threat Analysis: Identifies and assesses the capabilities and activities of cybersecurity criminals or foreign intelligence entities; produces findings to help initialize or support law enforcement and counterintelligence investigations or activities.

Language Analysis: Applies language, cultural, and technical expertise to support information collection, analysis, and other cybersecurity activities.

Collect and Operate category

Collection Operations: Executes collection using appropriate strategies and within the priorities established through the collection management process.

Cyber Operations: Performs activities to gather evidence on criminal or foreign intelligence entities in order to mitigate possible or real-time threats, protect against espionage or insider threats, foreign sabotage, international terrorist activities, or to support other intelligence activities.

Cyber Operations Planning: Performs in-depth joint targeting and cybersecurity planning process. Gathers information and develops detailed Operational Plans and Orders supporting requirements. Conducts strategic and operational-level planning across the full range of operations for integrated information and cyberspace operations.

Investigate category

Digital Forensics: Collects, processes, preserves, analyzes, and presents computer-related evidence in support of network vulnerability mitigation, and/or criminal, fraud, counterintelligence or law enforcement investigations.

Investigation: Applies tactics, techniques, and procedures for a full range of investigative tools and processes to include, but not limited to, interview and interrogation techniques, surveillance, counter surveillance, and surveillance detection, and appropriately balances the benefits of prosecution versus intelligence gathering.
OPM guidance states that individuals primarily engaged in project or program management for cybersecurity projects or tasks should be coded with the Cybersecurity Program/Project Management value (80).
Appendix III: Comments from the Department of Homeland Security
Appendix IV: GAO Contacts and Staff Acknowledgments
GAO Contacts
Staff Acknowledgments
In addition to the contacts above, Ben Atwater (assistant director), Tammi Kalugdan (assistant director), David Hong (analyst-in-charge), Christy Abuyan, Alexander Anderegg, David Blanding, Jr., Chris Businsky, Wayne Emilien, Jr., David Plocher, Luis E. Rodriguez, and Priscilla Smith made significant contributions to this report.

Why GAO Did This Study
DHS is the lead agency tasked with protecting the nation's critical infrastructure from cyber threats. The Homeland Security Cybersecurity Workforce Assessment Act of 2014 required DHS to identify, categorize, and assign employment codes to all of the department's cybersecurity workforce positions. These codes define work roles and tasks for cybersecurity specialty areas such as program management and system administration. Further, the act required DHS to identify and report its cybersecurity workforce critical needs.
The act included a provision for GAO to analyze and monitor DHS's implementation of the requirements. GAO's objectives were to assess the extent to which DHS has (1) identified, categorized, and assigned employment codes to its cybersecurity positions and (2) identified its cybersecurity workforce areas of critical need. GAO analyzed DHS and OPM workforce documentation and administered a data collection instrument to six major DHS components. GAO also interviewed relevant DHS and OPM officials.
What GAO Found
The Department of Homeland Security (DHS) has taken actions to identify, categorize, and assign employment codes to its cybersecurity positions, as required by the Homeland Security Cybersecurity Workforce Assessment Act of 2014; however, its actions have not been timely and complete. For example, DHS did not establish timely and complete procedures to identify, categorize, and code its cybersecurity position vacancies and responsibilities. Further, DHS has not yet completed its efforts to identify all of the department's cybersecurity positions and accurately assign codes to all filled and vacant cybersecurity positions. In August 2017, DHS reported to the Congress that it had coded 95 percent of the department's identified cybersecurity positions. However, GAO's analysis determined that the department had, at that time, coded approximately 79 percent of the positions. DHS's 95 percent estimate was overstated primarily because it excluded vacant positions, even though the act required DHS to report these positions.
In addition, although DHS has taken steps to identify its workforce capability gaps, it has not identified or reported to the Congress on its department-wide cybersecurity critical needs that align with specialty areas. The department also has not reported annually its cybersecurity critical needs to the Office of Personnel Management (OPM), as required, and has not developed plans with clearly defined time frames for doing so. (See table).
Without ensuring that its procedures are complete and that its progress in identifying and assigning codes to its positions is accurately reported, DHS will not be positioned to effectively examine its cybersecurity workforce, identify its critical skill gaps, or improve its workforce planning. Further, until DHS establishes plans and time frames for reporting on its critical needs, the department may not be able to ensure that it has the necessary cybersecurity personnel to help protect the department's and the nation's federal networks and critical infrastructure from cyber threats. The commitment of DHS's leadership to addressing these matters is essential to helping the department fulfill the act's requirements.
What GAO Recommends
GAO recommends that DHS take six actions, including ensuring that its cybersecurity workforce procedures identify position vacancies and responsibilities; reported workforce data are complete and accurate; and plans for reporting on critical needs are developed. DHS concurred with GAO's six recommendations and described actions the department plans to take to address them. OPM did not have any comments.
Background
In 1961, the Navy commissioned the first and only Enterprise-class aircraft carrier, CVN 65, which was the world’s first nuclear-powered aircraft carrier. CVN 65 served the Navy’s needs for 51 years, deploying 25 times and sailing more than 1 million miles during that time. The carrier, which was powered by eight nuclear reactors, was the predecessor of the two-reactor Nimitz-class aircraft carriers that followed it into service. The Navy plans to begin retiring the Nimitz-class carriers in the next decade.
Following the retirement of CVN 65 in 2012, the Navy began preparing the ship for dismantlement and disposal in a process called inactivation. These inactivation activities—which Navy officials stated cost $863 million to complete—included removing the nuclear fuel from the ship’s reactors and taking off equipment and other materials in preparation for dismantlement of the ship. The Navy’s next steps include planning efforts to meet the environmental requirements associated with dismantling and disposing of a nuclear-powered ship, such as handling of radioactive and other hazardous materials. The final step for CVN 65 will be dismantlement, including the recycling of non-nuclear portions of the ship and safe disposal of nuclear and other hazardous materials. Figure 1 provides a timeline of CVN 65 events.
CVN 65 is the largest nuclear-powered ship that has been retired by the Navy. Figure 2 compares the size of CVN 65 to previous and future Navy vessels requiring dismantlement and disposal, as well as other relatable structures.
Puget Sound Naval Shipyard Dismantlement and Disposal Activities
In 1990, the Navy authorized a program to recycle decommissioned submarines at Puget Sound Naval Shipyard in Bremerton, Washington. According to Navy officials, the Department of Energy’s low-level waste site in Hanford, Washington, was the only practical site at the time for disposal of the defueled submarine reactor compartments, which included low-level radioactive waste. Puget Sound Naval Shipyard is the largest shipyard on the U.S. West Coast, and while it is equipped and staffed to work on all classes of Navy vessels, it primarily conducts maintenance on nuclear-powered aircraft carriers and submarines, which the Navy considers a priority. This shipyard has the only dry dock on the West Coast capable of servicing an aircraft carrier and is the Navy’s only site for dismantlement and disposal of nuclear-powered ships.
Since 1990, the Navy has inactivated over 130 nuclear-powered vessels. Inactivation is the process used to prepare a ship for disposing of the compartments that house the reactors and recycling the hull or for safe storage pending dismantlement and disposal at a later date. Inactivation includes draining hydraulic systems and tanks, and removing hazardous and expendable materials, tools, spare parts, and furnishings from the ship. The removal of the spent fuel from a ship’s nuclear reactor(s), referred to as defueling, usually happens as part of inactivation. Historically, when a ship is dismantled at Puget Sound Naval Shipyard, the reactor compartments are removed and packaged for transport to the Hanford low-level radioactive waste disposal site. Figure 3 shows the typical path followed for dismantlement and disposal at the shipyard.
Dismantlement and Disposal by Commercial Industry
The Navy often uses commercial industry to dismantle and recycle its non-nuclear ships, including aircraft carriers, such as ex-USS Constellation and ex-USS Ranger completed in 2017. Navy officials noted that the cost to the government in recycling recent ships has been minimal—ranging from 1¢ to $6 million—because of the resale value of their scrap metal.
Commercial companies have decommissioned 32 civilian nuclear reactor plants—work that the Navy has noted is comparable to nuclear-powered ship dismantlement and disposal. Commercial industry uses a component-based process for commercial nuclear plant decommissioning. This process breaks the reactor down into smaller components for transport and disposal, and separates nuclear waste from non-nuclear waste as much as possible to reduce disposal costs.
Requirements Related to Dismantlement and Disposal of Nuclear-Powered Ships
Several laws and an executive order have established the regulatory authority and requirements underlying the dismantlement and disposal of nuclear-powered Navy vessels. The Atomic Energy Commission exercised control of nuclear technology primarily for military purposes until 1954, when the Atomic Energy Act was amended. These amendments allowed for the possibility of a privatized nuclear energy industry. Twenty years later, the Atomic Energy Commission was abolished and split into the Nuclear Regulatory Commission (NRC) and the Energy Research and Development Administration—which was later absorbed into the Department of Energy.
Under this structure, NRC is responsible for overseeing commercial nuclear reactor safety, licensing reactors, and establishing regulations and guidelines for radioactive waste disposal for the commercial nuclear industry. The National Nuclear Security Administration, a separately organized agency within the Department of Energy, is responsible for the management and security of the nation’s nuclear weapons, as well as nonproliferation programs. The Naval Nuclear Propulsion Program—also known as Naval Reactors—is a joint program of the Department of Energy and DOD that has cradle-to-grave responsibility for all naval nuclear propulsion matters. Figure 4 provides a brief description of laws and orders related to nuclear materials.
In addition to the nuclear-specific requirements guiding the dismantlement and disposal process, the Navy must comply with the National Environmental Policy Act. Specifically, this act requires federal agencies to evaluate the likely environmental effects of projects they are proposing, generally by preparing either an environmental assessment or a more detailed environmental impact statement. An environmental impact statement must, among other things, (1) describe the environment that will be affected, (2) identify alternatives to the proposed action and identify the agency's preferred alternative, (3) present the environmental impacts of the proposed action and alternatives, and (4) identify any adverse environmental impacts that cannot be avoided should the proposed action be implemented. The Act's requirements are invoked for major federal actions, such as the construction of buildings or highways, or the dismantlement and disposal of reactor compartments from nuclear-powered vessels.
Since 1996, nuclear-related dismantlement and disposal activities performed by Puget Sound Naval Shipyard have been based on the same environmental impact statement—which addresses the effects of disposing of submarine and cruiser reactor compartments. In 2012, the Navy produced an environmental assessment analyzing the effects of removing and preparing the reactor compartments of CVN 65 for disposal at Puget Sound Naval Shipyard and transporting the compartment packages to the Hanford site for disposal. It found that these activities would have no significant impact on the environment beyond existing activity. Naval Reactors subsequently decided, however, that a new environmental impact statement is required for CVN 65 because the alternatives identified for dismantling and disposing of the ship could potentially have significant impacts on the environment that are not captured by the existing environmental assessment. As part of the new statement for CVN 65, Navy officials said environmental factors that account for the naval shipyard and full commercial options will be reviewed, as well as indefinite waterborne storage of the ship pending dismantlement and disposal at a later date.
Naval Shipyard Option for CVN 65 Is More Defined than Full Commercial Option but May Pose Challenges for Meeting Navy Priorities
The Navy is weighing a number of considerations before making a decision for CVN 65 dismantlement and disposal. The naval shipyard option offers well-established processes for dismantlement and disposal of the ship’s nuclear material and better understood cost and schedule estimates than the full commercial option. Our analysis of available data, however, found that the naval shipyard option would contribute to existing workload backlogs and exacerbate facility challenges at the shipyard that could affect its work maintaining the active fleet—a Navy priority. While the Navy has not defined its requirements for the full commercial option, industry does not expect to face workload or facility challenges. Navy officials also believe that the full commercial option potentially could shorten the timeline for completing the work and reduce the total cost.
Naval Shipyard Option Is Based on a Well-Established Process, While Navy Has Yet to Characterize Full Commercial Option
Although CVN 65 is the first nuclear-powered aircraft carrier requiring dismantlement and disposal, the Navy has well-established processes for dismantling and disposing of nuclear-powered submarines and cruisers. Navy officials explained that the shipyard's extensive dismantlement and disposal experience with these vessels has resulted in a strong understanding of how to accomplish the work. Further, the Navy has been working on plans to address the ship-specific needs of CVN 65 for many years. If the Navy chooses the naval shipyard option for CVN 65, it expects to adapt and use these well-established processes to dismantle the 28,000-ton nuclear propulsion space section at Puget Sound Naval Shipyard. This section would contain the 8 defueled reactors and all other nuclear-related material that remains on the ship. To separate the propulsion space from the ship, a commercial company would perform "ship-shaping" to create a dedicated ship section for all of the nuclear-related work. This activity would minimize the portion of the ship transported to the naval shipyard for dismantlement and disposal. The remaining ship sections would be commercially recycled. The shipyard is evaluating two designs for reactor compartment packages that could be used for transport and disposal of the ship's nuclear material. One design—based on a package previously used for cruiser reactors—would involve the shipyard preparing 8 single reactor packages. The other includes a new design that would enclose 2 reactors in dual reactor packages. Figure 5 shows how the Navy anticipates the ship would be divided into sections through this process.
In contrast, the Navy formally began considering the potential for a full commercial option for CVN 65 within the past 4 years. According to Navy officials, although information received through previous requests for information and hosting discussions with commercial industry helped shape their understanding of the potential for a commercial ship dismantlement, they ultimately have had relatively limited interaction with commercial companies to determine their potential plans and processes for CVN 65 dismantlement and disposal. Naval Reactors officials stated they are waiting for the environmental impact statement process to officially begin before further engaging with prospective commercial companies and the public.
Many of the details for a full commercial option will depend upon Navy requirements, such as standards, technology, or specific procedures required to do the work; data and analysis in the environmental impact statement; and preferred work practices and facilities of prospective companies. Officials we interviewed from companies with potential interest in the work stated that because the Navy has not communicated its CVN 65 requirements for a full commercial option, any commercial approach described for the work would be hypothetical at this point, relying on their extensive prior experience with nuclear materials handling, packaging, shipping, and disposal—including nuclear ship maintenance and decommissioning of commercial reactors—or ship recycling. Commercial company officials noted that despite the lack of definitive information available, they would anticipate employing typical practices used for commercial nuclear reactor decommissioning, ship dismantlement, and control of nuclear materials to complete CVN 65 work. In terms of locations for the work, Naval Reactors officials noted that many coastal sites in the United States could potentially accommodate CVN 65 dismantlement activities, and the location of the work site would affect the proposed disposal site or sites. Table 1 provides characteristics of the two options that the Navy is considering for CVN 65 dismantlement and disposal.
Estimates for Both Dismantlement Options Require Further Development
Cost and schedule estimates for both CVN 65 options have yet to be formally established by the Navy. Puget Sound Naval Shipyard has been refining CVN 65 plans and estimates over many years. However, its most recent estimates for cost and schedule still may not fully account for uncertainties in completing the work because it represents a first-of-its-kind project with an unprecedented scale. The Navy's notional estimates for the commercial option are a first step in establishing expectations and will evolve as requirements for the work are better understood. The Navy awarded a contract in July 2018 to the Center for Naval Analyses—a federally funded research and development center serving the Navy and other defense agencies—to complete a cost analysis for the full commercial option. This effort is expected to provide the Navy with a cost estimate for CVN 65 in October 2018, followed by a model through which the Navy can develop cost estimates for future Nimitz-class dismantlement and disposal efforts. The findings from the CVN 65 environmental impact statement may contribute to the final cost and schedule estimates for either option.
Better Fidelity in Existing Naval Shipyard Option Estimates

Puget Sound Naval Shipyard officials explained that as their planning has progressed, they have refined their cost and schedule estimates for CVN 65 dismantlement and disposal. Overall, the Navy's cost estimates have increased significantly from initial estimates but have been relatively stable since 2016. The schedule went through similar fluctuations but has steadied. Table 2 outlines changes in the shipyard's plans and how they affected cost and schedule.
The schedule for starting the work at the naval shipyard also changed. Navy officials stated that as a result of the Navy’s decision in early 2017 to reassess its options for CVN 65, it delayed the expected start date for the naval shipyard option from 2019 to 2034 based on analysis of the workload at the naval shipyard, which we discuss below.
Although Puget Sound Naval Shipyard officials noted their cost estimate includes some margin to account for CVN 65 being the first project of its kind, it may not adequately account for the extent of unknown facts or circumstances that could affect cost. For example, unrecognized hazardous materials may exist in inaccessible areas of the CVN 65 propulsion space section that will only be discovered once the work is underway, which could affect cost and schedule. Execution of the work in support of a new dual reactor compartment package design also could lead to unanticipated challenges that cause deviations from estimates.
No Formal Estimates for Full Commercial Option

The Navy has notionally estimated cost and schedule for a full commercial option to be $750 million to $1.4 billion and about 5 years to complete. These estimates suggest that the commercial option could cost less and take less time to complete than the naval shipyard option. Navy officials stated that the notional cost estimate is derived from data reported by nuclear power plant operators, with differences in size and scope for the nuclear reactors incorporated. They also said that the notional estimate will be updated once the Navy receives additional information from industry during the planning process.
Navy officials told us they expect the cost per reactor for CVN 65 would be significantly less than the NRC decommissioning average for a commercial facility because CVN 65 reactor compartments are smaller, the reactors are more compact, and they have already been through the costly defueling activity. A 2016 international study on the cost of decommissioning nuclear power plants identified several high-level categories and their contribution to total costs for reactors decommissioned in the United States, such as project management, site restoration, and waste packaging, transportation, and disposal. According to this study, about 25 percent of decommissioning costs can be attributed to reactor decontamination and dismantling. Using this percentage and the average cost to decommission a commercial nuclear reactor, we estimate the cost to dismantle the eight CVN 65 defueled reactors to range from $1.2 billion to $1.3 billion, which is at the higher end of the Navy's notional estimated range for the full commercial option.
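This arithmetic is straightforward to reproduce. In the sketch below, the eight-reactor count and the 25 percent dismantling share come from the text above; the per-reactor commercial decommissioning cost inputs are assumptions, back-computed so that the result falls in the $1.2 billion to $1.3 billion range cited here.

```python
# Rough reproduction of the reactor dismantlement cost estimate. The
# eight-reactor count and the 25 percent dismantling share come from
# the text; the per-reactor commercial decommissioning costs are
# assumed inputs chosen to reproduce the $1.2B-$1.3B range.

REACTORS = 8
DISMANTLE_SHARE = 0.25  # share of total decommissioning cost (2016 study)

# Assumed average cost to decommission one commercial reactor (USD).
avg_low, avg_high = 600e6, 650e6

low = REACTORS * DISMANTLE_SHARE * avg_low    # $1.2 billion
high = REACTORS * DISMANTLE_SHARE * avg_high  # $1.3 billion
print(f"estimated range: ${low / 1e9:.1f} billion to ${high / 1e9:.1f} billion")
```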
In addition to the potential cost, the Navy initially projected about a 5-year period of performance for the full commercial option based on limited industry input. Navy officials told us the full commercial option start date, beginning no earlier than 2024, is contingent on the finalization of the environmental impact statement and a record of decision that chooses this option as the Navy’s path forward. The Navy’s intent would be to award a contract shortly after the environmental impact statement is completed if the Navy decides to pursue the full commercial option. Commercial officials told us they do not anticipate a need for significant lead time before starting work, though the need will be better understood once the Navy outlines requirements for the work.
Finally, the cost for a full commercial option could be influenced by the contract type selected by the Navy. Contract type selection is a key factor in determining how cost risk is shared between the Navy and the contractor. Firm-fixed-price contracts are suitable for situations where the risk involved is minimal or can be predicted with an acceptable degree of certainty. Conversely, cost-type contracts are used when either requirements are not sufficiently defined or uncertainties with contract performance do not permit costs to be sufficiently estimated to use a fixed-price contract. Although no decision has been made, Navy officials told us they are interested in using a firm-fixed-price contract—a contract type that has been used for commercial reactor decommissioning. Under a firm-fixed-price contract, the contractor agrees to perform the work for a price that is not subject to change based on the contractor's cost experience in performing the contract, placing full responsibility for all costs and resulting profit on the contractor. Navy officials stated that because CVN 65 is the first nuclear-powered aircraft carrier to be disposed of, the scope of the effort will need to be better defined before they could reliably conclude that firm-fixed-price contracting would be appropriate. Specifically, insufficiently understood risks may make potential contractors unwilling to accept the risks associated with a firm-fixed-price contract.
The Navy’s Priorities for Puget Sound Naval Shipyard Present Challenges Not Expected for Full Commercial Option
The Navy has stated its priority for Puget Sound Naval Shipyard is the work associated with maintaining nuclear-powered aircraft carriers and submarines currently in the fleet. However, as we reported in 2017, Puget Sound Naval Shipyard has had significant fleet maintenance delays since fiscal year 2000. These delays resulted in 4,720 lost operational days for nuclear-powered aircraft carriers and submarines. The addition of CVN 65 would contribute to challenges in the naval shipyard’s ability to meet workload demands and further constrain its available facilities. In comparison, despite the lack of detail about the Navy’s requirements, commercial company officials we interviewed stated they currently do not anticipate any major workload challenges or conflicts with other ongoing or future work in completing the work on CVN 65 based on their existing workforce and potential facilities for performing the work.
Puget Sound Naval Shipyard Workload and Facility Challenges
Based on our analysis of workload and resources data from Puget Sound Naval Shipyard, we found that the shipyard consistently operates at its maximum annual workload level and this likely will continue regardless of the Navy's decision for CVN 65. A Naval Reactors analysis of the shipyard's workload data also shows the workload meeting or exceeding capacity for the foreseeable future. The shipyard's workload projections that we reviewed show it will be working at or near capacity through fiscal year 2025—the last year for which data were available. Adding the work associated with dismantlement and disposal of CVN 65 would put the shipyard over current workload capacity.
Shipyard officials explained that historically, the workload projection for a given year matures as that year approaches, and the dips that sometimes are depicted in future-year workload projections generally vanish. Workload maturity or growth can be attributed to changes in the Navy's maintenance plans, deferred maintenance, growth from the previous year, and overall shipyard productivity. The condition of a ship when it arrives for maintenance can also contribute to growth if inspections of systems or components reveal a need for unplanned repairs. To account for historical variability and improve projections of overall workload, in 2015 shipyard officials began including a 10 percent unallocated workload allowance in their projections.
In reviewing the shipyard workload and resources data, we also found that the shipyard regularly underestimates workload for future years—especially 5 years or more out—with workload growth for future years consistently exceeding 15 percent. Even without the CVN 65 work at the naval shipyard, projections show its workload with average notional growth will meet or exceed the workforce available to complete the work, as shown in figure 6.
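A simple calculation illustrates the scale of the mismatch between the projection margin and observed growth. The baseline figure in the sketch below is a hypothetical placeholder, not actual shipyard data.

```python
# Hypothetical illustration of why a 10 percent unallocated-workload margin
# can still understate demand when realized growth exceeds 15 percent.
# The baseline workload figure is a placeholder, not shipyard data.

baseline = 100_000        # projected workdays for a year 5+ out (hypothetical)
margin = 0.10             # unallocated workload added to projections since 2015
observed_growth = 0.15    # low end of the growth GAO found for future years

planned = baseline * (1 + margin)
realized = baseline * (1 + observed_growth)
print(f"planned with margin: {planned:,.0f} workdays")
print(f"realized at 15% growth: {realized:,.0f} workdays")
print(f"gap to absorb via overtime, loans, or deferral: {realized - planned:,.0f}+")
```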
According to the Navy, it is typical for naval shipyards to continually shift resources across projects to align worker-specific trade skills to the type of work executed on any hull in the shipyard, at any particular time. To achieve a level and sustainable workforce across the fiscal years, the number of full-time employees required to support planned work is sized as part of the total workforce. The shipyard mitigates peaks in workload (above the available workforce) through the use of additional overtime, loans from other naval shipyards, and contracting. When that cannot occur, the shipyard will defer workload until it can be executed.
The CVN 65 dismantlement and disposal work could affect the shipyard’s ability to complete active fleet maintenance. We found that the addition of the CVN 65 dismantlement and disposal would add almost a year’s worth of work across the estimated 10-year dismantlement and disposal period to an already busy shipyard that has demonstrated difficulties in accurately projecting its future work. The Navy prioritizes maintenance of the active fleet, but the scale of the CVN 65 work would reduce the shipyard’s ability to delay or reprioritize dismantlement and disposal. Shipyard officials noted that the Navy often defers planned dismantlement and disposal to address higher-priority active fleet maintenance. For example, smaller submarines prepared for dismantlement can instead be stored at the shipyard until workforce and space are available to complete the work. However, an aircraft carrier—even when reduced to a propulsion space section as proposed for CVN 65—would not offer the same level of flexibility to defer work. CVN 65 would involve a more extensive resource commitment because of its increased size relative to past ship dismantlement projects and would occupy limited facilities at the shipyard. Specifically, current plans require 3 years pier side to prepare the propulsion space section for dismantlement and reactor compartment disposal and about 5 years in a dry dock for the actual dismantlement.
Further, the shipyard expects a significant increase in its submarine inactivation and reactor compartment disposal and hull recycling workload due to the end of service for an additional class of submarines—specifically, the Ohio-class submarines starting in 2027. The estimated increase in inactivation and reactor recycling workload would overlap with the planned start for CVN 65 dismantlement and disposal in 2034, if the Navy elects to pursue this option. In addition, the shipyard already has a backlog of 10 submarines and the ex-USS Long Beach cruiser in storage awaiting disposal and recycling at its long-term storage facility for defueled, decommissioned, and inactivated nuclear-powered ships. Another 3 submarines are pier-side at Puget Sound Naval Shipyard. This backlog is not expected to subside as submarines continue to be retired, and each vessel represents thousands of workdays that the shipyard has to commit to its dismantlement and disposal.
Navy and Industry Expect Full Commercial Option to Face Fewer Challenges
While the Navy has not established specific requirements for the full commercial option, Navy officials maintain that it does not present the same workload and facility challenges that exist for the naval shipyard option. Commercial companies have flexibility in selecting a location for CVN 65 dismantlement activities based on facility and workforce availability considerations. Some company officials we spoke with also noted they have existing worksites—which are audited and approved by Naval Reactors—where they process, package, and transport low-level radioactive waste or operate low-level radioactive waste disposal sites licensed by NRC. These include facilities for radioactive waste processing and decontamination of materials for recycling. Additionally, company officials said they anticipate that a substantial amount of the work could be performed with the ship in the water—similar to the traditional approach used to dismantle non-nuclear vessels for recycling—and existing contractor facilities likely would not require major upgrades or improvements other than to provide for the radiological-based waste handling and packaging considerations.
Commercial company officials told us that they would not expect significant additional hiring needs based on their limited understanding of the potential CVN 65 work and their existing workforce capacity. They added that the nuclear dismantlement and disposal industry has an available, qualified workforce that could easily be employed if additional workforce were needed. Given the early stage of the Navy’s planning for CVN 65 and the Navy’s lack of formal engagement with commercial companies at the time of our review, we did not assess the current or future commercial workforce capacity. Any details on potential CVN 65 facility and workforce plans from commercial companies will be hypothetical until the Navy formally begins efforts to seek input from commercial companies and communicate requirements.
Budget Documentation and Reporting Does Not Include Sufficient Information to Facilitate Transparency and Oversight for CVN 65
The Navy’s approach typically used to budget for and report on ship dismantlement and disposal does not provide sufficient information to support decision makers’ oversight of CVN 65—a multi-year project that may require more than $1 billion to complete. We found the Navy is not required to provide detailed budget information or report dismantlement and disposal cost, schedule, and programmatic information to decision makers. Providing additional information through budget requests and reporting would help ensure that decision makers have sufficient information to oversee CVN 65 dismantlement and disposal activities and to support future decisions.
Budget Exhibits for Dismantlement and Disposal Lack Ship- Specific Details
The Navy uses budget exhibits to provide congressional decision makers information about dismantlement and disposal efforts. If no changes are made to the information provided within the Operation and Maintenance, Navy (OMN) budget exhibits, the CVN 65 dismantlement and disposal budget request will include limited details for planned work, funding needs, and total estimated costs. The bulk of the Navy's past dismantlement and disposal work consists of comparatively low-cost projects—particularly submarines—with limited resource demands compared to a nuclear-powered aircraft carrier like CVN 65, a multi-year project with a cost that will potentially exceed $1 billion. For example, nuclear-powered submarines have an average dismantlement and disposal cost of about $26 million and average about 50,000 workdays. Federal internal control standards recommend that agency management communicate with external stakeholders the necessary quality information—such as complete cost and schedule information for CVN 65 dismantlement and disposal—to achieve objectives. Budget exhibits are a primary source of information about all programs and other activities during budget planning and congressional appropriation decisions. Well-prepared budget exhibits help provide a rationale for the amount and timing of funding requests. Given that this multi-year, large-scale project is the first of its kind, more detailed information would facilitate greater transparency and oversight of cost, schedule, and performance.
Limited Budget Information Provided for Dismantlement and Disposal
The Navy uses the OMN appropriation account to fund dismantlement and disposal activities. The Navy's Financial Management Policy Manual provides overall summary guidance on OMN budget formulation, but it does not provide specific guidance on reporting criteria for dismantlement and disposal of Navy ships. Budget exhibits are prepared to justify appropriation requests and are key documents that can be used to support congressional oversight. DOD acquisition training materials state that well-prepared budget exhibits make programs more defensible. However, in assessing the OMN budget exhibits associated with dismantlement activities for fiscal years 2007-2018, we found they provide little ship-specific detail that could be used to monitor a significant project such as the planned effort for CVN 65 dismantlement and disposal, which may begin requesting funding as soon as fiscal year 2023.
Specifically, we reviewed the dismantlement and disposal funding requests from the past several years, which reside within the Navy’s OMN budget exhibits under the Ship Activations/Inactivations sub-activity group of the Mobilization budget activity. In doing so, we found these exhibits generally contain high-level information with a summary of funding changes for the current fiscal year and the requested funding estimate for the budget year. We could not definitively identify or track dismantlement and disposal of specific ships because key work activities are not described by ship, cost and schedule for individual ships are not presented, and prior year costs and cost to complete a specific ship’s dismantlement and disposal are not provided.
In reviewing programmatic documentation other than the budget requests, such as Puget Sound Naval Shipyard dismantlement planning documents and the Navy's long-range shipbuilding plans, we found instances of submarine inactivation costs significantly exceeding estimates and notable delays to the start dates for work activities. Although the Navy is not required to report this information, it was not reflected in the budget exhibit documents we reviewed. As another example, as we previously noted, Navy officials stated that CVN 65 inactivation—already completed in December 2017—cost $863 million. We could not track this cost from the budget exhibits because of their limited detail. As a consequence of the general lack of detail in the budget exhibits, decision makers cannot readily identify whether cost growth occurred or whether a specific ship was dismantled when planned, hindering oversight of dismantlement and disposal projects.
The Navy's OMN annual appropriations fund work activities on a year-by-year basis, which does not necessarily allow for tracking of the full resource commitment of a project over time or enable monitoring of cost growth to determine if additional funds are needed. Navy officials stated that they fully fund dismantlement and disposal efforts that span multiple fiscal years. They added that for CVN 65, the Navy may divide the work into multiple discrete phases that are separately funded due to the lengthy projected schedule. This approach could require the Navy to seek OMN appropriations in several non-consecutive years. Such an approach could make tracking CVN 65 dismantlement and disposal funding challenging, as the total cost and any changes would be obscured among the multiple funded activities that collectively compose the total dismantlement and disposal effort. Navy officials acknowledged that they could provide further information, such as total project cost and an overall schedule for CVN 65, in the OMN budget exhibits. However, without direction from DOD leadership or Congress, Navy officials stated that they have no plans to deviate from providing the traditional OMN budget exhibit information. Providing additional information in the CVN 65 budget exhibit could enable decision makers to track total cost, any cost changes, schedule progress, and general performance for the CVN 65 dismantlement and disposal.
Navy Could Provide More Budget Details for CVN 65
While the Navy funds ship dismantlement and disposal from the OMN account, budget exhibits for other accounts—such as the Shipbuilding and Conversion, Navy (SCN) account typically used for major investment items—offer examples of how to provide decision makers with more detailed information. Budget exhibits for SCN appropriations are structured to identify major elements of cost and track those costs over time, consistent with DOD Financial Management Regulations. For example, the SCN budget exhibits typically contain specific information for each ship being procured with a distinct funding line for major cost categories such as basic construction, propulsion, and electronics. Additionally, these budget exhibits describe the program with specific plans for the upcoming budget year and estimate across 5 fiscal years (known as DOD's Future Years Defense Program), including the total cost to complete the program. While some of the SCN budget exhibit elements are not applicable to dismantlement and disposal, others could be adapted and used in an OMN budget exhibit for CVN 65 to provide information that would enable better oversight, such as work activities planned and performed by fiscal year; prior years' funding data; future years' funding plans; cost to complete dismantlement; schedule of key events; and information on the contractor(s), contract type, and contract award and completion dates.
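To make the adaptation concrete, the sketch below arranges those elements into a single notional record. All field names and values are hypothetical illustrations of the kind of structure that could support oversight; none are drawn from any actual Navy exhibit.

```python
# Notional sketch of an enhanced OMN budget exhibit entry for CVN 65,
# adapting SCN-style elements. Every field name and value is hypothetical,
# intended only to show the kind of structure that could support oversight.

cvn65_exhibit_entry = {
    "project": "CVN 65 dismantlement and disposal",
    "activities_by_fiscal_year": {
        "FY2023": "planning and contract preparation",   # hypothetical
        "FY2024": "pier-side preparation",               # hypothetical
    },
    "prior_years_funding": 0,                 # dollars appropriated to date
    "future_years_funding_plan": {            # Future Years Defense Program
        "FY2024": 150e6, "FY2025": 180e6,     # hypothetical amounts
    },
    "cost_to_complete": 1.0e9,                # hypothetical remaining estimate
    "schedule_of_key_events": ["contract award", "dry dock entry",
                               "reactor disposal complete"],
    "contract_information": {"type": None, "awardee": None,
                             "award_value": None, "award_date": None,
                             "completion_date_estimate": None},
}
print(sorted(cvn65_exhibit_entry))  # the elements named in the text above
```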
Navy officials said they typically would not provide the level of detail found in SCN budget exhibits or the exhibits for other DOD acquisition programs because OMN exhibits are not designed to support the same level of oversight. Unlike DOD acquisition programs, DOD projects completed with operation and maintenance funds typically are not investment programs and generally do not require the same level of oversight. However, as we previously indicated, Navy officials noted that if DOD leadership or Congress provided clear direction on what additional details related to CVN 65 dismantlement and disposal should be included in OMN budget exhibits, it could provide that additional information to support oversight. Navy officials stated that given the considerable funding needs and congressional interest with CVN 65, they were assessing options for providing specific detail in the OMN budget exhibit for its dismantlement and disposal activities. They added that no specific decisions had been made on what additional information, if any, would be included for CVN 65.
Lack of Reporting Requirements Limits Opportunities for Insight into CVN 65 Dismantlement and Disposal Cost, Schedule, and Performance
Although dismantlement and disposal are part of the final phase in a program's life cycle, we found no specific reporting requirement related to the cost, schedule, risk management, and general performance of these activities in DOD or Navy policy that would support oversight by DOD or Congress. Officials from Naval Reactors and the Naval Sea Systems Command confirmed that there is no reporting requirement for performance of dismantlement and disposal of Navy ships. Navy officials noted that dismantlement and disposal activities are included in their annual briefings to Congress that support the Navy's budget requests, but acknowledged that the typical comparatively low-cost ship dismantlement and disposal activities are generally of less interest when combined with a briefing on shipbuilding and other high-dollar acquisition investments. This approach may be appropriate for submarine dismantlement and disposal activities that have lower costs, shorter periods of performance, and a well-established history. However, the magnitude of CVN 65's anticipated cost of dismantlement and disposal is comparable to that of large DOD acquisition programs. Such programs generally are expected to provide more information to decision makers within DOD and Congress through formal reporting on plans, activities, and performance to support accountability than what has traditionally been provided with respect to Navy dismantlement and disposal activities.
The precedent-setting nature of the CVN 65 dismantlement and disposal adds a level of risk and heightens the importance of having sufficient accountability measures to facilitate oversight. There is greater potential for unexpected challenges to arise because a nuclear-powered aircraft carrier has not been dismantled and disposed of before. Additionally, CVN 65 provides an opportunity to establish a foundation for management and oversight of future aircraft carrier dismantlement and disposal efforts, with the first of 10 Nimitz-class carriers expected to reach the end of its service life in the next decade. Standards for internal control in the federal government state that, in order to identify and mitigate risk, program objectives, such as a baseline for cost and schedule, should be clearly defined in measurable terms so that performance in attempting to achieve those objectives can be assessed. Doing so would also provide the Navy with the ability to collect important historical cost data that could be used to inform cost estimates for future aircraft carrier dismantlement and disposal efforts.
DOD acquisition programs could serve as a model to identify appropriate cost and schedule objectives for the CVN 65 dismantlement and disposal, even though it is not an acquisition program and not subject to these requirements. DOD acquisition programs with significant resource commitments comparable to that expected of CVN 65 are generally subject to structured oversight and have reporting requirements to support performance transparency and accountability. As discussed earlier, preliminary cost estimates for CVN 65 dismantlement and disposal may exceed $1 billion, regardless of the option the Navy ultimately selects. While many requirements for DOD acquisition programs are not relevant to dismantlement and disposal, even when costs may reach similar levels, we found elements of the reporting requirements associated with larger DOD acquisition category (ACAT) programs that the Navy could leverage to facilitate oversight of CVN 65 dismantlement and disposal. For example, ACAT II programs—which have estimated costs comparable to CVN 65 dismantlement and disposal cost expectations—are required by DOD policy to establish a program cost and schedule baseline prior to program start and report any significant deviations from the established baseline. They also are required by statute to provide information on risk management. Table 3 highlights some DOD acquisition program reporting elements that could support oversight of CVN 65 dismantlement and disposal.
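In essence, the baseline reporting that such programs perform reduces to comparing current estimates against an approved baseline and flagging breaches of a threshold. The generic sketch below illustrates this; the 10 percent threshold and the dollar figures are hypothetical placeholders, not DOD-prescribed values or CVN 65 estimates.

```python
# Generic illustration of baseline-deviation reporting like that required of
# larger ACAT programs. The 10 percent threshold and the dollar figures are
# hypothetical placeholders, not DOD-prescribed values or CVN 65 estimates.

def check_deviation(baseline: float, current: float, threshold: float = 0.10):
    """Return fractional deviation from baseline and whether it breaches
    the reporting threshold."""
    deviation = (current - baseline) / baseline
    return deviation, abs(deviation) > threshold

baseline_cost = 1.0e9        # approved cost baseline (hypothetical)
current_estimate = 1.15e9    # latest estimate at completion (hypothetical)

dev, breached = check_deviation(baseline_cost, current_estimate)
status = "report deviation to decision makers" if breached else "within baseline"
print(f"cost deviation {dev:+.1%}: {status}")
```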
For example, once a cost baseline is established, comparison to an independent cost estimate or assessment could provide greater assurance that the risks associated with performing CVN 65’s large-scale, first-of-a-kind dismantlement activities were adequately considered and appropriately estimated. GAO’s Cost Estimating and Assessment Guide states an independent review of a program’s cost estimate is crucial to establishing confidence in the estimate. It provides an unbiased test of whether the program cost estimate is reasonable and can be used to identify risks related to budget shortfalls or excesses. The Naval Center for Cost Analysis is responsible for developing independent cost assessments for ACAT II Navy programs, while the Office of the Secretary of Defense’s Office of Cost Assessment and Program Evaluation develops independent cost estimates for major defense programs.
As noted earlier, the Navy continues to refine its cost estimate of the naval shipyard option and expects to receive an estimate for the full commercial option from the Center for Naval Analyses in October 2018. The Navy stated it considers the anticipated cost estimate and model from the Center for Naval Analyses to be the independent cost estimate for the full commercial option. We view this estimate as a valuable step in establishing cost expectations for the full commercial option, but it does not fulfill the purpose of an independent assessment because it will establish the Navy's cost expectations rather than validate an existing estimate. For the naval shipyard option, the Navy indicated it has no plans for an independent cost estimate as it continues to refine the current cost estimate prior to a decision for CVN 65. Completing an independent cost estimate for both CVN 65 options prior to a Navy decision on its dismantlement and disposal approach would provide additional information to inform a decision that could have repercussions for carrier dismantlement and disposal activities for years to come. Adapting certain acquisition program requirements to the CVN 65 effort, as described above, would help the Navy establish baselines that can be tracked by decision makers to assess cost and schedule, and help identify deviations, if any. These types of reporting requirements would provide decision makers with greater information to support their oversight and hold the Navy accountable for meeting CVN 65 dismantlement and disposal expectations.
The Navy’s Evaluation of CVN 65 Dismantlement and Disposal Options Is Hampered by a Regulatory Authority Disagreement
The regulatory authority determines the rules, procedures, and oversight that will guide the dismantlement and disposal process for CVN 65. The Navy is considering three regulatory authority scenarios related to the naval shipyard or full commercial options, as discussed in table 4.
The Navy Has Regulatory Precedent for the Naval Shipyard Option
If the Navy chooses the naval shipyard option, it can rely on Naval Reactors’ extensive experience serving as the regulatory authority for dismantlement activities conducted at Puget Sound Naval Shipyard. Naval Reactors has overseen the dismantlement and disposal of roughly 130 reactors from submarines and cruisers by the naval shipyard. Many shipyard oversight organizations and activities, as well as on-site Naval Reactors personnel, help control environmental and human health exposures. For example, the Radiological Controls Office is responsible for monitoring radiation exposure to the workforce and ensuring radioactivity is confined to controlled work areas. The Nuclear Quality Division employs nuclear auditors who review performance, processes, and instructions for all nuclear work at the shipyard.
Although the scale and design of CVN 65 create some unique dismantlement and disposal considerations as compared to the submarine and cruiser activities, Navy officials stated they plan to use the same shipyard-based organizations and practices to oversee performance if they decide to complete the CVN 65 work at the shipyard. The environmental impact statement planned for CVN 65 is expected to outline the different needs that the aircraft carrier presents for the dismantlement process and disposal path, such as changes related to the transportation of CVN 65 reactor packages required if the Navy chooses to use four larger dual reactor compartment disposal packages instead of eight single packages to dispose of the carrier's reactors. While the Navy can rely on familiar regulatory practices to support the naval shipyard option, as discussed earlier, this option includes potential workload and schedule disadvantages.
Disagreement Persists about the Appropriate Regulatory Authority for the Full Commercial Option
Agreement State Program
The Atomic Energy Act gives the Nuclear Regulatory Commission (NRC) authority over domestic industrial, medical, and research uses of radioactive materials. The act also authorizes NRC to enter into agreements with states (called agreement states) so they assume, and NRC relinquishes, regulatory authority over specified radioactive materials. Specifically, NRC is authorized to enter into agreements to allow states to assume regulatory authority over source, byproduct, and special nuclear materials in quantities insufficient to form a critical mass. NRC must find a state program adequate to protect public health and safety and compatible with NRC's program for regulating such materials before entering into these agreements. The mechanism for the transfer of NRC's authority to a state is an agreement signed by the governor of the state and the chair of the Commission.
Naval Reactors' position is that a commercial company could dismantle and dispose of CVN 65 under the regulatory authority of NRC or an agreement state. According to Naval Reactors officials, the full commercial option would represent a continuation of Naval Reactors' long history of nuclear-related activities with vendors licensed and regulated by NRC or agreement states. For example, Naval Reactors officials noted they commonly have used facilities licensed by NRC or agreement states for a range of manufacturing, processing, and disposal activities available for naval nuclear materials. Naval Reactors officials specifically assert that, as CVN 65 has already been defueled, such a facility should be able to process the byproduct material on the ship. However, NRC stated its disagreement that it or an agreement state is able to serve as the regulatory authority for CVN 65, emphasizing that regulatory responsibility for the safe processing and disposal of Navy ships falls to Naval Reactors under its Department of Energy authority. NRC officials also noted that Naval Reactors has been regulating nuclear-powered ship dismantlement and disposal activities exclusively at Puget Sound Naval Shipyard for decades.
Coordination between Naval Reactors and NRC to identify the applicable regulatory authority and establish a regulatory plan for the CVN 65 full commercial option would help ensure accountability for safe dismantlement and disposal of CVN 65 under the full commercial option. It would also enable the Navy and commercial companies to effectively estimate costs. Without a resolution, the Navy could face challenges in estimating the cost and completing a comprehensive business case analysis of costs, benefits, and risks for the full commercial option if it is unsure of which regulatory authority will be responsible for enforcement. Furthermore, companies with potential interest in the CVN 65 work may not be able to effectively estimate the workload and associated cost without a clearly identified regulatory authority. Resolution of this disagreement also has relevance for other future ship dismantlement and disposal activities, such as with the Surface Ship Support Barge in the near term and the Nimitz-class aircraft carriers in the long term.
Navy Surface Ship Support Barge
The Surface Ship Support Barge is a dockside refueling facility constructed from a converted Navy tanker vessel used to disassemble spent nuclear fuel for shipment within a water pool. Naval Reactors noted this facility is now obsolete, with no further use planned, and the Navy is interested in dismantling and disposing of it commercially. According to Naval Reactors officials, the barge contains very low radioactivity in the water pool and fluid systems, which requires appropriate dismantlement and disposal measures. The Navy halted its pursuit of a contract award to dismantle and dispose of the barge in early 2017 based on NRC formally stating it has no regulatory authority over the dismantlement and disposal of naval vessels. A Naval Reactors official stated a request for information may be issued in 2018 to solicit input from commercial companies for dismantlement and disposal of this barge, but plans remain unsettled.
Naval Reactors could use its own authority to regulate a full commercial dismantlement of CVN 65. Naval Reactors officials stated, however, that NRC or agreement states—which regulate industrial, medical, and research uses of radioactive materials—also have authority to regulate commercial dismantlement and disposal of CVN 65, and the Navy would benefit from leveraging their regulatory experience and structure. In particular, Naval Reactors officials stated that for the full commercial option, their responsibility to provide for processing and disposal of the byproduct material—which Naval Reactors indicated is what remains on CVN 65—can be best met by contracting with commercial companies licensed by NRC or an agreement state.
According to Naval Reactors officials, even if NRC maintains that it cannot regulate material from CVN 65, some states may do so under their own authority. Specifically, Naval Reactors’ position is that states that had agreements with the old Atomic Energy Commission prior to its abolishment and the creation of NRC in 1974 were granted—and continue to maintain—authority to process naval nuclear propulsion waste. Accordingly, Naval Reactors officials stated that these states could serve as the sole regulatory authority over commercially-performed CVN 65 dismantlement and disposal.
Naval Reactors officials also asserted specific potential advantages of having NRC or an agreement state regulate commercial dismantlement and disposal of CVN 65. First, they said the regulatory structure that NRC and agreement states apply to commercial nuclear-related activities includes an enforcement process to impose fines for violations, which Naval Reactors does not have.
Additionally, Naval Reactors officials noted the Navy's contract strategy options could be improved if NRC or an agreement state serves as the regulatory authority for CVN 65. Specifically, they stated that a reason for the Navy's interest in using NRC or agreement state authority is the possibility of emulating the firm-fixed-price contract currently being used to decommission a commercial nuclear power plant. In this example, the operating license was transferred from the utility that owns the plant and site to a dismantlement contractor to more quickly complete the decommissioning. This effectively gave a dismantlement and disposal company the power plant owner's responsibility for the safe dismantlement and disposal of the power plant, with NRC continuing to act as the regulatory authority. According to Naval Reactors, the firm-fixed-price contract used in this case was viable because the dismantlement contractor had total responsibility independent of the plant owner to perform the work in accordance with the regulations and requirements of NRC. Naval Reactors officials stated that the firm-fixed-price contract created an incentive for the company to thoroughly understand what the work entailed and perform the work efficiently to maximize its profit.
Naval Reactors officials stated that a total separation of the owner from regulatory decisions and interpretations, like the one currently being used for the commercial nuclear power plant, is the Navy's best means to facilitate the potential use of a firm-fixed-price dismantlement contract for CVN 65. They further stated that an approach wherein Naval Reactors retained regulatory authority could undercut the prospect of a firm-fixed-price contract by eliminating the clear division between regulator and owner. In taking this position, Naval Reactors officials suggest that a conflict of interest exists in being both the owner who wants to establish a fixed price for the work as well as the regulator with the potential to affect costs. Naval Reactors officials also noted that if Naval Reactors were the regulator, with no experience in regulating this type of work, commercial companies could have difficulty pricing such regulatory risk. In contrast, they stated that in NRC-regulated commercial plant dismantlement, as well as agreement state-regulated, large-scale radioactive waste processing work, commercial companies have demonstrated that they are willing to accept this regulatory risk, agreeing to contracts on a fixed-price basis.
Nuclear Regulatory Commission's Position
In February 2017, NRC formally stated its position in a letter responding to a congressional inquiry that it has no regulatory authority for Navy ships, such as CVN 65. NRC said that under the Atomic Energy Act it is the responsibility of the Department of Energy, and accordingly Naval Reactors, to provide for processing and disposal of naval nuclear propulsion waste. NRC stated that agreement states also lack jurisdiction because their authority derives from NRC's authority. NRC officials we interviewed also disputed Naval Reactors' position that states have independent authority to process naval nuclear propulsion waste for two reasons. First, they stated that regulation of reactor dismantlement is not an activity that can be relinquished to the states. Second, they pointed out that the authorities that can be relinquished to the states under the Atomic Energy Act are licensing activities conducted under specific provisions of the act, and that the responsibility to safely process and dispose of naval nuclear propulsion waste is conducted under a different set of provisions which are not subject to licensing.
NRC officials acknowledged that naval nuclear propulsion waste has been processed at facilities licensed by NRC or an agreement state, but distinguished such examples from CVN 65. Specifically, they noted that no additional regulatory oversight was required to process incidental amounts of such waste at facilities licensed to process commercial waste, but CVN 65 is not licensed by NRC or an agreement state and would involve only naval nuclear propulsion waste. NRC officials emphasized that the additional work that would be required to regulate the dismantlement of CVN 65—an unlicensed facility—puts it beyond NRC’s jurisdiction. Additionally, NRC stated that while such work could be carried out by a contractor, including a contractor with an NRC or agreement state license, the work would not be covered by that license, as NRC and agreement states do not have authority to regulate such activity. Essentially, NRC’s position is that while Naval Reactors can contract to have the dismantlement and disposal performed by a commercial entity, Naval Reactors would retain its own regulatory responsibility for enforcing that contract.
NRC stated that if Naval Reactors desired technical support in regulating a commercial dismantlement, NRC or an agreement state could provide such services through a contract. This approach, according to NRC officials, would offer Naval Reactors a regulatory consultant familiar with commercial dismantlement while maintaining Naval Reactors as the regulatory enforcement authority. In such an arrangement, NRC or an agreement state could identify regulatory concerns, but Naval Reactors would be responsible for determining what corrective action is taken to address those concerns. Naval Reactors officials stated they are in ongoing discussions with NRC about this potential approach. They also asserted that this potential approach is not optimal because, as previously discussed, it could create regulatory uncertainty for commercial companies by preventing a clear separation of the regulator and owner.
Naval Reactors Lacks Regulatory Experience and Structure for Commercial Dismantlement and Disposal
Since Naval Reactors has its own authority as part of the Department of Energy, it could choose to regulate a CVN 65 commercial dismantlement. However, with Puget Sound Naval Shipyard having performed the dismantlement and disposal work for previous nuclear-powered vessels, Naval Reactors lacks experience to draw upon for a full commercial option. It also cannot rely on the organizational structure and practices in place at Puget Sound Naval Shipyard to support a commercial CVN 65 dismantlement that will be conducted at an offsite facility. If Naval Reactors serves as the regulatory authority for a full commercial dismantlement, it will have to determine what mechanisms are needed to provide sufficient monitoring of the work and how it will fulfill the roles and responsibilities typically filled by the naval shipyard's support. These mechanisms may include elements similar to those used by the naval shipyard as well as new ones unique to the dismantlement practices used by commercial companies.
A significant consideration for Naval Reactors when working to establish an approach to monitor commercial dismantlement and disposal is the component-based dismantlement process that companies may use. This process, which is commonly used to dismantle commercial nuclear power reactors, involves segmenting reactor components (i.e., cutting to reduce in size) so the pieces can be put in standardized containers for transport and disposal. This process is a contrast to the traditional dismantlement approach that Naval Reactors uses at Puget Sound Naval Shipyard—an approach that would leave CVN 65's reactors largely intact by encasing them in packages for disposal. As noted by Naval Reactors officials, commercial dismantlement practices potentially could require the Navy to decide whether to adjust its standard radiological work practices to better align with different dismantlement and disposal activities or use the same practices it uses for work performed at Puget Sound Naval Shipyard. Using the same practices could affect cost expectations for the Navy and commercial companies by changing the way the work is performed. As an example, applying the Navy's standard practices for total containment of radionuclides to a dismantlement process that involves increased cutting could require additional measures to control the work environment.
Conclusions
Over 50 years ago, CVN 65 set a precedent as the Navy's first nuclear-powered aircraft carrier. The Navy's plans and decisions for this aircraft carrier's dismantlement and disposal represent an opportunity to create a standard that the Navy may use for decades to come as the Nimitz-class carriers enter retirement. As the Navy considers how to proceed, it will be critical to ensure that there is sufficient oversight and accountability for what likely will be an effort greater than $1 billion that lasts the better part of a decade. Budget exhibits are a primary tool to aid Congress in making well-informed funding decisions; without additional details, transparency and the ability to assess CVN 65 progress could be limited. In particular, a more robust budget exhibit for CVN 65 that includes cost and schedule information across the Future Years Defense Program, as well as the status of activities—including any contract awards and tracking of high-level changes in cost and schedule—could help increase transparency for oversight.
Reporting requirements for DOD acquisition programs, which are not required or currently planned for CVN 65 dismantlement and disposal, provide examples of the types of information that decision makers can use to ensure that resource-intensive programs are meeting expectations or make changes as necessary. Without establishing a cost and schedule baseline that has been validated by an independent cost estimate or assessment, it will be difficult for decision makers to track cost and schedule performance or have confidence in CVN 65 costs. The Navy has indicated it is receptive to providing additional information to support oversight that is commensurate with other Navy programs of a similar funding level. However, the Navy also stated that it requires clear direction from DOD leadership or Congress on what additional accountability measures are desired before it would make any changes to current budget exhibits and reporting.
Naval Reactors is charged with cradle-to-grave responsibility for our nation’s naval nuclear propulsion material. The disagreement between Naval Reactors and NRC about the regulatory authority for commercial dismantlement and disposal of Navy nuclear ships persists. Coordination between the two agencies to identify the applicable regulatory authority for a full commercial dismantlement and disposal of CVN 65 and to develop a regulatory plan would help establish which practices and standards will apply to uphold nuclear safety and security. It would also help ensure the Navy’s selection of a dismantlement and disposal plan for CVN 65 is informed by well understood regulatory expectations and cost and schedule estimates that reflect those expectations.
Matter for Congressional Consideration
We are making one matter for congressional consideration.
Congress should consider requiring Naval Reactors to coordinate with the Nuclear Regulatory Commission for any CVN 65 dismantlement and disposal performed commercially to identify the applicable regulatory authority. In the event that an entity other than Naval Reactors will serve as the regulatory authority, Naval Reactors should submit to Congress a plan that identifies the regulatory authority for CVN 65 activities, and includes acknowledgement from that regulatory entity of its agreement with Naval Reactors and the legal basis for its authority. If the regulatory entity is an agreement state, such acknowledgment should be coordinated with the Nuclear Regulatory Commission. (Matter 1)
Recommendations for Executive Action
We are making the following four recommendations to DOD.
The Secretary of Defense should ensure that the Navy provides additional information in the annual President’s budget exhibits associated with CVN 65 dismantlement and disposal to facilitate improved transparency and accountability. Additions should, at a minimum, include the CVN 65 funding estimate across the Future Years Defense Program, activities planned or performed for applicable fiscal years, tracking of total cost and high level changes in cost and schedule from the prior year with explanations for changes, and if applicable, contract type, awardee, award value, and award and completion date estimates. (Recommendation 1)
The Secretary of Defense should require the Navy to obtain an independent cost estimate, performed by DOD's Office of Cost Assessment and Program Evaluation or the Naval Center for Cost Analysis, for both the naval shipyard and full commercial options before choosing a dismantlement and disposal approach for CVN 65. (Recommendation 2)
The Secretary of Defense should require the Navy to complete a risk management plan prior to beginning the CVN 65 dismantlement and disposal. (Recommendation 3)
The Secretary of Defense should require the Navy to approve a cost and schedule baseline prior to beginning the CVN 65 dismantlement and disposal. (Recommendation 4)
Agency Comments
We provided a draft of this report to DOD and NRC for comment. Both DOD and NRC agreed with the draft report and its findings, and DOD concurred with the four recommendations we directed to the department. DOD and NRC provided written comments, which have been reproduced in appendix II and appendix III, respectively. DOD and NRC also provided technical comments, which we incorporated as appropriate.
We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of the Navy, the Nuclear Regulatory Commission, and other interested parties. This report will also be available at no charge on GAO's website at http://www.gao.gov.
If you or your staff members have any questions regarding this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to the report are listed in appendix IV.
Appendix I – Objectives, Scope, and Methodology
This report (1) describes the differences between the dismantlement and disposal options under consideration, including cost and schedule as well as workload and facilities; (2) evaluates the Navy’s funding and reporting practices for dismantlement and disposal activities; and (3) assesses the effect that nuclear regulatory authority considerations have on dismantlement and disposal options for CVN 65.
To identify the differences between the potential CVN 65 dismantlement and disposal options, we reviewed Navy documentation on prior, ongoing, and future dismantlement and disposal activities, as well as information related to the different options the Navy has considered or is considering for CVN 65. We interviewed Navy officials and reviewed documentation from the Naval Sea Systems Command, which includes Naval Reactors, and Puget Sound Naval Shipyard and Intermediate Maintenance Facility (hereafter referred to as Puget Sound Naval Shipyard). To obtain an understanding of the full commercial dismantlement and disposal approach, including work practices and potential work sites, we interviewed officials and reviewed documentation from commercial companies that the Navy identified as having involvement in shipbreaking or nuclear-related industries and potential interest in CVN 65. These companies include Atkins Global; EnergySolutions; Huntington Ingalls Industries (HII – Nuclear); International Shipbreaking Limited; NorthStar Group Services; and Waste Control Specialists.
For CVN 65 cost and schedule estimates, the Navy considers all estimates to still be preliminary because the Navy has yet to formally begin the environmental impact statement process and remains years away from a decision on its dismantlement and disposal approach. As a result, we did not formally evaluate the reasonableness of any cost or schedule estimates. However, we did review the initial estimates to gain insight on historical and current cost expectations. To assess the Navy’s preliminary cost estimates for the naval shipyard option, we reviewed Navy data on the basis for the cost estimates, particularly estimates since 2011. This included reviewing the cost factors that contributed to each estimate to understand how the shipyard’s increasing knowledge of CVN 65’s ship characteristics and changes to the planned dismantlement approach fed into the different estimates. For the Navy’s notional cost estimate of the CVN 65 full commercial option, we reviewed the data and approach used by the Navy to develop initial cost information. This included commercial decommissioning data, which the Navy used to establish a rough order of magnitude cost estimate based on the limited information available that is comparable to CVN 65 dismantlement and disposal.
We used the same data to generate our own notional estimated cost range based on a Nuclear Regulatory Commission (NRC) cost formula, as well as published data from the Organisation for Economic Co-operation and Development's Nuclear Energy Agency. This included analysis of costs reported by operating power reactor licensees in NRC's 2015 decommissioning funding status report to comply with decommissioning financial assurance reporting requirements. Our review of historical data from the Nuclear Energy Agency and a 2011 report on nuclear decommissioning by an independent panel established by the California Public Utilities Commission helped us identify cost drivers and categories of costs attributed to specific activities that occur when decommissioning commercial power plants.
To assess workload and facility considerations related to CVN 65, we analyzed Puget Sound Naval Shipyard workload and resource requirements data for fiscal years 2006 through 2025, and facility data for fiscal years 2018 through 2035. To assess the reliability of these data, we interviewed knowledgeable officials and reviewed documentation to verify the controls and measures used to validate and maintain the data. We determined these data to be reliable for our purposes of discussing the existing and planned workload at Puget Sound Naval Shipyard. We compared projections to actual workload when available to identify differences and compared the average amount of annual projected workload to the average amount of annual projected workforce available. We also reviewed a 2018 report on the Navy's strategic plan for addressing the infrastructure deficiencies at the public naval shipyards as well as the Navy's long-range shipbuilding plans for fiscal years 2011, 2016, and 2019. Additionally, we reviewed past GAO reports that addressed operation and maintenance activities at naval shipyards and the related workload demands and facility requirements.
To identify the Navy's funding and reporting practices for dismantlement and disposal activities, we reviewed Navy documentation on prior, ongoing, and future ship dismantlement and disposal activities, as well as Navy procurement and operation and maintenance budget exhibits—fiscal years 2016 and 2017 for procurement exhibits and fiscal years 2007 through 2017 for operation and maintenance budget exhibits. We also reviewed the Federal Acquisition Regulation, Office of Management and Budget guidance on budget information, and Department of Defense and Navy acquisition regulations. We interviewed officials from Naval Reactors and the Program Executive Office for Aircraft Carriers. Based on these efforts, we evaluated the Navy's historical approach for funding, conducting oversight, and reporting on dismantlement and disposal activities. We assessed the Navy's approach against federal standards for internal control. Additionally, we assessed how funding and typical reporting requirements for Department of Defense acquisition programs align with the potential need to facilitate oversight for CVN 65 dismantlement and disposal.
To determine the effect that nuclear regulatory authority considerations have on dismantlement and disposal for CVN 65, we examined applicable laws, regulations, executive orders, policies, and guidance documents related to nuclear-powered ships. We also reviewed past GAO reports related to environmental and nuclear requirements. We reviewed Navy documentation on prior, ongoing, and future ship dismantlement and disposal activities. We also interviewed officials and reviewed documentation from Naval Reactors; the Assistant Secretary of the Navy for Energy, Installations, and Environment; the Chief of Naval Operations Environmental Readiness Division; the Program Executive Office for Aircraft Carriers; Puget Sound Naval Shipyard; and the Nuclear Regulatory Commission. Additionally, we interviewed officials from the Washington State Departments of Health and Ecology, the Texas Commission on Environmental Quality, and the Texas Department of State Health Services—two states in which ship dismantlement activities have recently occurred and that have nuclear waste disposal sites.
We conducted this performance audit from August 2017 to August 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II – Comments from the Department of Defense
Appendix III – Comments from the Nuclear Regulatory Commission
Appendix IV – GAO Contact and Staff Acknowledgments
GAO Contact
Shelby S. Oakley, (202) 512-4841 or [email protected].
Staff Acknowledgments
In addition to the contact named above, key contributors to this report were Diana Moldafsky, Assistant Director; Antoinette Capaccio; Kurt Gurka; Stephanie Gustafson; Kristine Hassinger; Jean Lee; Sean Merrill; LeAnna Parkey; Karen Richey; and Roxanna Sun. | Why GAO Did This Study
The Navy is planning to dismantle and dispose of CVN 65 after 51 years of service. In 2013, the estimated cost to complete the CVN 65 work as originally planned increased to well over $1 billion, leading the Navy to consider different dismantlement and disposal options.
The Senate Report accompanying a bill for the National Defense Authorization Act for Fiscal Year 2018 included a provision for GAO to review the Navy's plans for CVN 65. This report addresses (1) dismantlement and disposal options under consideration; (2) nuclear regulatory authority considerations; and (3) funding and reporting practices.
GAO reviewed budget, cost, and schedule documentation, as well as applicable laws, regulations, executive orders, policies, and guidance. GAO interviewed officials from the Navy and commercial companies about the dismantlement and disposal options, and NRC and state agencies about regulatory considerations.
What GAO Found
The Navy is assessing two options to dismantle and dispose of its first nuclear-powered aircraft carrier—ex-USS Enterprise (also known as CVN 65). CVN 65 dismantlement and disposal will set precedents for processes and oversight that may inform future aircraft carrier dismantlement decisions.
[Figure: Characteristics of the Navy's Potential CVN 65 Dismantlement and Disposal Options. Source: GAO analysis of Navy and Nuclear Regulatory Commission information. | GAO-18-523]
The Navy could rely on its extensive regulatory experience for the naval shipyard option. However, the Navy's ability to effectively evaluate the full commercial option is hampered by a disagreement with the Nuclear Regulatory Commission (NRC), which oversees the commercial nuclear industry. Naval Reactors officials assert that NRC's regulatory authority should apply to the full commercial option. NRC disagrees with this position. Coordination between the two agencies to identify the applicable regulatory authority and craft a regulatory plan would help ensure accountability, solidify cost estimates, and facilitate a CVN 65 decision.
The budget documentation and reporting that the Navy typically uses for ship dismantlement and disposal projects will not enable adequate oversight of CVN 65—a multi-year project with a cost that may exceed $1 billion. The documents that support Navy budget requests for dismantlement and disposal funding do not provide data that decision makers can readily use to track dismantlement costs against an established baseline or to evaluate funding plans for future years. Further, the Navy has no reporting requirements to support accountability for CVN 65 activities. Large defense acquisition programs generally are required to submit more detailed budget information and report on cost, schedule, and performance. These practices could be adapted for CVN 65 to provide information that will facilitate oversight commensurate with the scale of the effort.
What GAO Recommends
Congress should consider requiring Naval Reactors to coordinate with NRC to identify the applicable regulatory authority for a CVN 65 commercial dismantlement and disposal. GAO is also making four recommendations, including that the Navy take action to provide additional budget information and reporting to facilitate improved transparency and accountability for the CVN 65 cost, schedule, and risks. The Department of Defense agreed with all four recommendations. |
Enacted in 1970, NEPA, along with subsequent CEQ implementing regulations, sets out an environmental review process that has two principal purposes: (1) to ensure that an agency carefully considers information concerning the potential environmental effects of proposed projects; and (2) to ensure that this information is made available to the public. DOT’s Federal Highway Administration (FHWA) and Federal Transit Administration are generally the federal agencies responsible for NEPA compliance for federally funded highway and transit projects. Project sponsors—typically state DOTs and local transit agencies—may receive DOT funds, oversee the construction of highway and transit projects, develop the environmental review documents that are approved by federal agencies, and collaborate with federal and state stakeholders.
In addition, the Clean Water Act and the Endangered Species Act are two key substantive federal environmental protection laws that may be triggered by a proposed transportation project and that may require the federal resource agencies to issue permit decisions or perform consultations before a project can proceed.
Permits under Section 404 of the Clean Water Act
Section 404 of the Clean Water Act generally prohibits the discharge of dredged or fill material, such as clay, soil, or construction debris, into the waters of the United States, except as authorized through permits issued by the U.S. Army Corps of Engineers (Corps). Before the Corps can issue a section 404 permit, it must determine that the discharge of material is in compliance with guidelines established by the Environmental Protection Agency.
The Corps issues two types of permits:
Individual permits: issued as a standard permit for individual projects, following a case-by-case evaluation of a specific project involving the proposed discharge of dredged or fill material and/or work or structures in navigable water.
General permits: issued for categories of projects the Corps has identified as being similar in nature and causing minimal individual and cumulative adverse environmental impacts. General permits may be issued on a state, regional, or nationwide basis.
In fiscal year 2016, the Corps completed approximately 250 individual permits and 10,750 general permits for transportation projects, based on agency data. The Corps is not required to complete its permit reviews within a specified time frame; however, it has performance metrics, including target time frames for issuing permit decisions based on permit type.
Consultations under Section 7 of the Endangered Species Act
The purpose of the Endangered Species Act is to conserve threatened and endangered species and the ecosystems upon which they depend. Section 7 of the Act directs federal agencies to consult with the U.S. Fish and Wildlife Service (FWS) or the National Marine Fisheries Service (NMFS) when an action they authorize, fund, or carry out, such as a highway or transit project, could affect listed species or their critical habitat. Section 7 also applies if non-federal entities receive federal funding to carry out actions that may affect listed species.
Before authorizing, funding, or carrying out an action, such as a highway or transit project, lead federal agencies must determine whether the action may affect a listed species or its critical habitat. If a lead federal agency determines a proposed action may affect a listed species or its critical habitat, formal consultation is required unless the agency finds, with FWS' or NMFS' written concurrence, that the proposed action is not likely to adversely affect the species. Formal consultation is initiated when FWS or NMFS receives a complete application from the lead agency, which may include a biological assessment and other relevant documentation describing the proposed action and its likely effects. The formal consultation usually ends with FWS or NMFS issuing a biological opinion, which generally must be completed within time frames specified in the Endangered Species Act and in its implementing regulations. Specifically, FWS and NMFS have 135 days to complete a formal consultation and provide a biological opinion to the lead federal agency and project sponsor in order for the project to proceed. The consultation period can be extended by mutual agreement of the lead federal agency and FWS or NMFS. In fiscal year 2016, FWS completed 179 formal consultations and NMFS completed 29 formal consultations for federally funded highway and transit projects, based on agency data.
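The 135-day clock and any mutually agreed extension amount to simple date arithmetic. As a minimal illustration, the Python sketch below computes the date a biological opinion would be due; the function name and example dates are illustrative and are not drawn from any agency system.

```python
from datetime import date, timedelta

FORMAL_CONSULTATION_DAYS = 135  # statutory window to complete a formal consultation

def biological_opinion_due(initiation: date, extension_days: int = 0) -> date:
    """Return the date a biological opinion is due, counting from the
    initiation of formal consultation plus any mutually agreed extension."""
    return initiation + timedelta(days=FORMAL_CONSULTATION_DAYS + extension_days)

# Hypothetical example: a consultation initiated March 1, 2016.
print(biological_opinion_due(date(2016, 3, 1)))      # 2016-07-14, no extension
print(biological_opinion_due(date(2016, 3, 1), 30))  # 2016-08-13, 30-day extension
```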
The three most recent transportation reauthorization acts include provisions that are intended to streamline various aspects of the environmental review process for highway and transit projects. We identified 18 statutory provisions from these acts that could potentially affect time frames for the environmental permitting and consulting processes for highway and transit projects. Based on our review, we grouped the provisions into two general categories: Administrative and Coordination Changes and NEPA Assignment. See appendix II for a complete list and descriptions of the 18 provisions that we identified.
The 16 Administrative and Coordination Changes provisions are process oriented. These provisions, for example: (1) establish time frames for the environmental review process, (2) encourage the use of planning documents and programmatic agreements, and (3) seek to avoid duplication in the preparation of environmental review documents. The two NEPA Assignment provisions authorize DOT to assign its NEPA responsibility to states.
Agency Experience Suggests Streamlining Provisions Had Some Positive Effect, but Lack of Reliable Data Hinders Impact Assessment
Resource agency and state DOT officials told us they believe that some actions called for by the 18 provisions we identified, such as programmatic agreements, have helped streamline the consulting and permitting processes. However, a lack of reliable agency data regarding permitting and consulting time frames hinders a quantitative analysis of the provisions’ impact. Further, limitations in FWS and NMFS data, such as missing or incorrect data and inconsistent data entry, could impair the agencies’ ability to determine whether the agencies are meeting statutory and regulatory requirements, such as the extent to which the agencies complete formal consultations and provide biological opinions within 135 days. FWS and NMFS have limited controls that would help ensure the completeness and accuracy of their data.
Officials at Resource Agencies and State DOTs Identified Some Actions That Are Called for by Streamlining Provisions That May Accelerate Environmental Reviews
Resource agency and state DOT officials we interviewed told us they believe that some actions called for by the provisions we identified have helped streamline the consulting and permitting processes. While these officials generally did not quantify or estimate the number of days review times may have been reduced, they did generally explain how the review processes were accelerated, depending upon the action being taken, for example:
Programmatic agreements: Officials from 18 of the 23 state DOTs and federal resource agency field offices we spoke with told us that using programmatic agreements has generally helped reduce review times. Programmatic agreements can standardize the consulting and permitting processes for projects that are relatively routine in nature (e.g., repaving an existing highway). For example, one state DOT and an FWS field office have an agreement that establishes a consistent consultation process to address projects, such as pavement marking, that have either a minimal or no effect on certain federally protected species and their critical habitat. Programmatic agreements may contain review time targets that are shorter than those for reviews not subject to the agreements. For example, officials from one FWS field office said that they typically met the 60-day time limit that was established in one such agreement, compared to the standard 135-day period for completing formal consultations and issuing biological opinions. In part, DOT has assisted in establishing programmatic agreements affecting consultation and permit review processes. For example, according to DOT, its Every Day Counts initiative has helped create scores of programmatic agreements through efforts such as identifying best practices, performing outreach, developing new approaches, and improving existing ones. In our 2018 report on highway and transit project delivery, 39 of 52 state DOTs in our survey reported that programmatic agreements had sped up project delivery within their states.
Federal liaison positions: Officials from 21 of the 23 selected state DOT and federal resource agency field offices told us that liaison positions at resource agency offices, which are positions held by federal employees who work on consultation and permit reviews for state DOTs, have streamlined the consultation and permit review processes. According to almost all of the selected officials, these positions provide benefits, such as dedicating staff to process the state DOTs’ applications for permits and consultations, allowing state DOTs to prioritize projects, and enabling enhanced coordination between agencies to avoid conflicts and delays in the review process.
For example, officials from one state DOT said that having a dedicated liaison at an FWS field office gave the state DOT a responsive point of contact, helped address workload concerns at the FWS field office, and enabled FWS office staff to attend interagency coordination meetings. According to DOT, as of November 2017, states had 43 full-time equivalent positions at FWS and 11 at NMFS. Corps officials stated that states had more than 40 full-time equivalent positions at the Corps in fiscal year 2017. In our 2018 report on highway and transit project delivery, 32 of 52 state DOTs in our survey reported that they had used this provision. We found that 23 of those state DOTs reported that it had sped up project delivery within their states.
Early coordination: Officials from 18 of the 23 state DOT and federal resource agency field offices we spoke with told us that early coordination in consultation and permit review processes has generally reduced review times. According to most selected state DOT and resource agency officials, this early coordination can provide benefits, such as improving the quality of applications, avoiding later delays by identifying concerns early in the process, and allowing permitting to be considered in the design phase of projects. For example, officials at one of the Corps’ district offices told us that they routinely hold pre-application meetings with state, DOT, and resource agency contacts to define what the Corps needs to process the application quickly and to avoid later problems. Similarly, in our 2018 report on highway and transit project delivery, 43 of 52 state DOTs in our survey reported that they had used this provision, and 27 of those reported that the provision had sped up project delivery within their states.
Although selected federal resource agency and state DOT officials were able to identify actions called for by the provisions that they believe have helped streamline the consulting and permitting processes, officials from all three resource agencies said that their agencies had not analyzed the impact of the streamlining provisions on permit review or consultation time frames and did not have plans to do so in the future.
Lack of Resource Agencies’ Data Hinders Analysis of Whether Streamlining Provisions Reduced Time to Conduct Reviews
For two reasons, we were unable to quantify the impact the 18 streamlining provisions had on the three federal resource agencies’ consultation and permit review time frames. First, factors other than the streamlining provisions may have also affected review times, limiting our ability to discern the extent to which the provisions had an impact. Second, the resource agencies could not provide enough reliable data for us to analyze changes in consultation and permit review durations over time.
With respect to the first reason, factors other than the streamlining provisions can influence the durations of permit reviews and consultations, a situation that would make it difficult to establish whether the streamlining provisions in the reauthorization acts had a direct impact. In particular, officials from resource agencies and state DOTs we interviewed told us that some offices had taken actions called for by various streamlining provisions before the three transportation reauthorizations were enacted. For example, officials at one FWS field office said that the office completed a programmatic agreement in 2004. Officials at one state DOT said that they had funded positions at resource agency offices for two decades. Corps officials said that the Corps implemented early coordination before the provision requiring this action was enacted. DOT officials also said that the provisions generally codified and expanded on existing actions. Further, factors such as staffing shortages at state DOTs and resource agency offices may also affect the length of consultations and permit reviews. Therefore, even if the durations of permit reviews and consultations could be evaluated over time with enough reliable data, it could be difficult to connect changes in the durations to the streamlining provisions with any confidence.
Second, none of the three resource agencies could provide enough reliable data to evaluate trends in the duration of consultations and permit reviews after the 15 provisions were introduced in the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU) and the Moving Ahead for Progress in the 21st Century Act (MAP-21), and the Fixing America's Surface Transportation Act (FAST Act) was enacted too recently to evaluate any trends following the 3 provisions it introduced. To evaluate trends in permit review and consultation durations before and after the provisions were enacted, we would need sufficient data before and after their enactment. The SAFETEA-LU, MAP-21, and FAST Act provisions were enacted in August 2005, July 2012, and December 2015, respectively.
Available Corps’ data could not be used to determine trends in permit review durations before and after the SAFETEA-LU and MAP-21 provisions were enacted. Specifically, Corps officials told us that their data prior to October 2010 should not be used to evaluate trends due to changes in the Corps’ data tracking system and data entry practices. The Corps did not provide more than one full fiscal year of data prior to 2012, and we would need more than one year of data to establish an adequate baseline in order to control for variations that may occur from year to year.
Further, FWS and NMFS could not provide reliable data to evaluate trends in the durations of consultations before or after enactment of SAFETEA-LU and MAP-21. FWS and NMFS officials informed us of limitations in their agencies' consultation data that rendered the data incomplete prior to fiscal year 2009 and calendar year 2012, respectively, a circumstance that would prevent us from evaluating trends following SAFETEA-LU. Specifically, FWS officials told us that use of its data tracking system was not mandatory in all regions for consultation activities prior to fiscal year 2009. NMFS officials told us that data from its tracking system are incomplete prior to 2012 because some prior records did not transfer properly during a migration to a newer version of the database. Further, the weaknesses in more recent FWS and NMFS data that we identify below would also limit an analysis of changes in consultation durations following MAP-21.
Finally, since the three agencies provided data through fiscal year 2016, we had less than one fiscal year of data following the December 2015 enactment of the FAST Act, an amount that was insufficient to evaluate trends in consultation and permit review durations following the Act’s enactment.
Weaknesses in FWS and NMFS Data Would Limit Analysis of Consultation Time Frames
We identified limitations, such as incorrect or missing data and inconsistent data entry practices, in more recent FWS and NMFS data, and such limitations would limit future analysis of trends in the duration of consultations. We did not identify similar limitations in Corps data. These limitations could also hinder analyses of the extent to which the agencies meet statutory and regulatory requirements, such as the extent to which the agencies completed formal consultations and issued biological opinions within 135 days. Standards for internal control in the federal government state that agency management should use quality information to achieve the agency’s objectives and should design appropriate controls for information systems that ensure that all transactions are completely and accurately recorded. Information systems should include controls to achieve validity, completeness, and accuracy of data during processing, including input, processing, and output controls. However, we identified errors in consultation data provided by FWS and NMFS officials. For example, FWS’s data included 1,568 unique transportation-related formal consultations that started and concluded within fiscal years 2009 through 2016. Of those records, 27 had formal consultation initiation dates that followed the conclusion date, resulting in a negative duration; 113 lacked an initiation date, precluding a determination of the duration; and 19 had formal consultation initiation dates that preceded the dates on which FWS could begin work. NMFS officials said that records cannot be removed from the database once saved—including duplicate, incomplete, withdrawn, or otherwise bad records—and that the database does not always retain corrections after they are made. As a result, data exported from the database are manually reviewed for errors, according to NMFS officials. However, data provided to us after this manual review process still contained errors.
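The record-level errors described above lend themselves to automated screening. The following sketch, which assumes the pandas library and hypothetical column names (actual FWS and NMFS schemas differ), flags the three error patterns GAO found: negative durations, missing initiation dates, and initiation dates that precede the date work could begin.

```python
import pandas as pd

# Hypothetical records with made-up IDs, dates, and column names.
records = pd.DataFrame({
    "consultation_id": ["A1", "A2", "A3", "A4"],
    "initiation_date": pd.to_datetime(["2015-04-01", None, "2015-06-10", "2015-02-01"]),
    "conclusion_date": pd.to_datetime(["2015-03-01", "2015-05-01", "2015-09-01", "2015-05-15"]),
    "work_start_date": pd.to_datetime(["2015-01-01", "2015-01-01", "2015-07-01", "2015-01-01"]),
})

negative_duration = records["conclusion_date"] < records["initiation_date"]
missing_initiation = records["initiation_date"].isna()
initiated_before_work = records["initiation_date"] < records["work_start_date"]

# Records matching any rule need follow-up before duration analysis.
flagged = records[negative_duration | missing_initiation | initiated_before_work]
print(flagged["consultation_id"].tolist())  # ['A1', 'A2', 'A3']
```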
Further, FWS and NMFS officials described limited controls to ensure the completeness and accuracy of their data. FWS officials said that they do not currently conduct systematic reviews to examine the accuracy of the data. The officials also said that they do not have procedures for follow-up when errors are found, although regional or headquarters staff may conduct outreach to an affected office if errors are found. FWS officials also acknowledged that the database lacks sufficient electronic safeguards on all fields to prevent errors. Similarly, NMFS officials said that NMFS has not tracked the accuracy of its data and that many fields in NMFS’s database do not have safeguards to limit data entry errors.
FWS and NMFS also lack procedures to ensure that they consistently track all data associated with consultation time frames. For example, FWS and NMFS officials could not provide data on whether formal consultations and the issuance of biological opinions that exceeded 135 days obtained extensions, data that officials would need to track the extent to which their agencies comply with the requirement to complete consultations and issue biological opinions within 135 days absent an extension. The officials said that the agencies do not require their staff to enter extension data, and that some staff enter extension dates but others do not. In addition, although hundreds of projects may be reviewed under a single programmatic agreement, FWS and NMFS do not record all projects reviewed under programmatic agreements. For example, NMFS officials told us that the agency’s system is not designed for staff to enter individual actions reviewed under programmatic agreements. This process prevents comparisons of review time frames for individual projects under programmatic agreements with projects not reviewed under those agreements. FWS’s database also does not require some critical information for determining consultation time frames, such as the initiation dates for formal consultations. Further, FWS headquarters officials acknowledged that differing field office procedures had contributed to varying record-keeping methods, and officials at five of the seven FWS field offices we interviewed told us that FWS’s database is not used consistently among field offices.
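Electronic safeguards of the kind at issue here are input controls: they reject an invalid record at entry rather than leaving errors to be discovered during later analysis. The sketch below is a generic illustration only; the field names and validation rules are assumptions for illustration, not either agency's actual schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class ConsultationRecord:
    consultation_id: str
    initiation_date: date            # required field: cannot be left blank
    conclusion_date: Optional[date]  # blank while the consultation is open
    extension_days: int = 0          # 0 means no mutually agreed extension

    def __post_init__(self):
        # Reject at entry the error patterns that would otherwise surface later.
        if not self.consultation_id:
            raise ValueError("consultation_id is required")
        if self.conclusion_date is not None and self.conclusion_date < self.initiation_date:
            raise ValueError("conclusion date precedes initiation date")
        if self.extension_days < 0:
            raise ValueError("extension days cannot be negative")

# Example: an out-of-order record fails immediately instead of entering the data.
# ConsultationRecord("C-42", date(2015, 4, 1), date(2015, 3, 1))  # raises ValueError
```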
The quality of FWS’s and NMFS’s consultation data may limit the ability of the agencies to determine whether they are completing consultations within required time frames, as described above, and may also impact other internal and external uses of the data. For example, the quality of the data may limit the agencies’ evaluation and management of their consultation processes. FWS officials said that FWS uses its data internally in calculating annual performance measures and to answer questions from senior leadership, among other purposes. NMFS officials said that NMFS uses its data internally to examine the agency’s Section 7 workload, help set agency funding priorities, and track projects through the consultation process. FWS and NMFS will also have to ensure that their data systems can provide reliable data to comply with an executive order requiring federal agencies to track major infrastructure projects, including the time required to complete the processing of environmental reviews. The August 2017 executive order directed the Office of Management and Budget, in coordination with the Federal Permitting Improvement Steering Council, to issue guidance for establishing a system to track agencies’ performance in conducting environmental reviews for certain major infrastructure projects. To meet this directive, this system is to include assessments of the time and costs for each agency to complete environmental reviews and authorizations for those projects, among other things. According to a multi-agency plan, system implementation is planned to begin in the fourth quarter of fiscal year 2018, and publishing of performance indicator data is planned to begin in the first quarter of fiscal year 2019. In addition, FWS has provided consultation data to outside researchers who have publicly reported them in a study and a web portal. NMFS makes some data for completed consultations publicly available through the internet.
NMFS and FWS officials we interviewed said that the agencies are developing new versions of their databases, and FWS officials said that they will develop new standard-operating procedures and guidance for data entry. Specifically, FWS officials said that they have discussed the development of a new version of their database that would better track consultations chronologically and ensure greater data accuracy and consistency, but that effort is still in the planning stage. Those officials also said that they have formed a team to explore the development of new standard-operating procedures, training, and guidance for consistent data entry and that they are considering how to include data on whether consultations received extensions in the new system. NMFS officials said that the agency is modernizing its database, including improving data entry, error prevention, maintenance, and tracking of actions under programmatic agreements. However, FWS and NMFS officials could not provide specific time frames for implementation or documentation of these efforts. Therefore, it is not clear whether these efforts will include internal controls that address all of the types of issues we identified.
Federal and State Officials Identified Additional Actions That Helped Resource Agencies Streamline Processes
Some Officials at Resource Agencies and State DOTs Took Actions to Improve Applications
Officials at 19 of the 23 federal resource agency field offices and state DOTs we spoke with generally mentioned two additional actions, beyond the 18 provisions we identified, for streamlining the consultation and permitting process: field office assistance to lead federal agencies and project sponsors, including state DOTs, to improve applications for permits and consultations; and electronic systems for environmental screening and document submission.
First, officials from some of the 16 federal resource agency field offices we spoke with stated that they provide assistance to lead federal agencies and project sponsors to clarify the information required in permit and consultation applications before they are submitted to the resource agency. Officials from 8 of those 16 offices stated that they provided that assistance in order to improve the quality and completeness of information included in the applications. Resource agency officials stated that the permit or consultation process is delayed when the lead federal agency or project sponsor does not initially provide the quantity or quality of information necessary for resource agencies’ field office staff to complete permits and consultations. These staff must then request additional information from the lead federal agency or project sponsor, extending the permit or consultation reviews. Therefore, officials at 16 of the 23 federal resource agency field offices and state DOTs we spoke with said that field office staff provided training to state DOT staff to specify the information field offices required for initial permit or consultation applications. In addition, officials at 6 of the 23 resource agency field offices and state DOTs we spoke with created or were in the process of creating documents, such as application templates or checklists, that specify information required initially by field offices for applications. For example, according to officials at one FWS field office, a staff member created a standardized form letter for consultation applications that includes information for the state DOT to submit with its applications.
Second, officials at federal resource agency field offices and state DOTs also identified electronic systems for environmental screening and document submission as helpful streamlining actions. Some state agencies created electronic systems for permitting and consultation applications, according to officials at 6 of the 23 resource agency field offices and state DOTs we spoke with. Some of those state agencies created systems for submitting application documentation, which can include multiple reports and studies related to an endangered species or its critical habitat. In addition, some of those state agencies created electronic tools that screen potential transportation project areas for environmental impacts. For example, in Pennsylvania, state agencies created two electronic systems. The first system allows application materials to be shared with multiple state and federal agencies while the second allows applicants to screen project areas for potential impacts on endangered species. The Pennsylvania Natural Heritage Program, a partnership between four state agencies, created a system that allows lead federal agencies or project sponsors to determine what potential environmental impacts, if any, exist in a proposed project’s geographic area (fig. 1). According to field office officials who use this resource, it saves time and improves agency coordination on transportation projects. Officials at two additional offices stated that their state agencies were in the process of establishing such electronic systems. In addition, FWS has piloted additional capabilities for its existing electronic system that screens for species information. According to FWS officials, the current pilot is restricted to specific species included in existing programmatic agreements, but this updated system would guide applicants through the consultation application and allow electronic document submission.
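Conceptually, screening tools like these intersect a proposed project footprint with mapped resource layers. The sketch below, which assumes the shapely geometry library and entirely made-up habitat polygons and coordinates, shows the core overlap test; production systems apply the same operation to authoritative geospatial datasets.

```python
from shapely.geometry import Polygon, box

# Hypothetical critical-habitat polygons (coordinates are invented).
habitats = {
    "species A critical habitat": Polygon([(0, 0), (10, 0), (10, 10), (0, 10)]),
    "species B critical habitat": Polygon([(20, 20), (30, 20), (30, 30), (20, 30)]),
}

# A proposed highway corridor footprint, represented here as a rectangle.
project_footprint = box(8, 8, 12, 12)

# Screening: list every mapped resource the project area overlaps.
hits = [name for name, area in habitats.items() if project_footprint.intersects(area)]
print(hits)  # ['species A critical habitat'] -> consultation may be needed
```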
The federal resource agencies continue to seek out additional opportunities for their field offices to streamline the permitting and consultation processes, according to officials at 11 of the 16 field offices. Officials at four of those offices stated that they discuss additional streamlining opportunities at regular transportation-related meetings with other federal and state agency offices. However, beyond the streamlining actions and provisions cited above, officials at resource agency field offices and state DOTs did not identify additional opportunities used by multiple field offices to streamline permits and consultations.
DOT Supports Actions to Streamline the NEPA Process
DOT has a role in streamlining the overall NEPA process for transportation projects. Officials from DOT and its modal administrations, in coordination with federal resource agencies, participate in or support several efforts, including the following, to streamline the NEPA process:
Coordination meetings: DOT officials participate in some early or regular coordination efforts, according to officials at some federal resource agency field offices and state DOTs we spoke with. For instance, according to officials at one Corps district office, DOT officials participate in some monthly meetings between federal and state agencies to discuss both specific transportation projects and recurring issues that may present streamlining opportunities.
Transportation liaisons: As mentioned above, recipients of DOT funds may partially fund the transportation liaison positions at federal resource agency field offices. Officials at some resource agency field offices and state DOTs we spoke with stated that liaisons implemented streamlining actions at those offices. For example, officials at one FWS field office stated that the office’s transportation liaisons are responsible for creating and maintaining programmatic agreements with the state DOT. In addition, DOT currently has interagency agreements to provide national transportation liaisons at resource agencies—including the Corps, FWS, and NMFS—who lead nationwide efforts, such as meetings among field offices where officials can share streamlining actions.
Streamlining resource database: DOT maintains an online database of resources created by DOT and transportation liaisons for streamlining the NEPA process. The database, which is part of the Transportation Liaison Community of Practice online portal, includes programmatic agreements, regional streamlining efforts, and liaison-funding agreements, among other resources. The purpose of this database is to provide examples of streamlining actions for transportation liaisons and state DOT officials to use in implementing these actions with state and federal agency offices to streamline NEPA processes.
DOT also participates in multi-agency efforts to identify recommendations for streamlining the NEPA process. Those efforts produced two multi-agency reports that have identified best practices for improving streamlining of the NEPA process:
Red Book: In 2015, DOT coordinated with multiple federal agencies, including the resource agencies, to update the Red Book, a resource to help both federal and state agencies conduct concurrent environmental review processes and to improve coordination in the NEPA process for major transportation and other infrastructure projects. For instance, the Red Book recommended electronic information systems, including systems that share geographic information with the agencies involved, as a way to streamline the NEPA process.
Annual interagency report: DOT and multiple federal agencies, including the resource agencies, contribute to the Federal Permitting Improvement Steering Council’s annual report on recommended actions for federal agencies. In the reports for fiscal years 2017 and 2018, those recommended steps included actions taken by some resource agency field offices. For example, recommended steps in the 2017 report included the creation of electronic application submission systems and training to improve permit and consultation applications.
DOT officials stated that they continue to seek additional streamlining opportunities with federal and state entities, including federal resource agencies and state DOTs, through outreach to those agencies. For example, the officials told us that they had reached out to the resource agencies and provided training to help them identify what basic application information is needed early in the design process, when certain types of projects are unlikely to be fully designed. DOT officials also suggested that expanding the current streamlining actions that resource agencies have taken, such as utilizing the transportation liaison positions, would help streamline the process.
The Council on Environmental Quality Has Issued Regulations and Guidance to Streamline NEPA Reviews
CEQ oversees NEPA implementation, reviews and approves federal agency NEPA procedures, and issues regulations and guidance documents that govern and guide federal agencies’ interpretation and implementation of NEPA. In addition, CEQ has focused some of its efforts on furthering the goal of streamlining environmental reviews. Those efforts have included publication of various guidance and memorandums on the effective use of programmatic reviews, according to CEQ officials. For example, CEQ issued regulations that direct agencies, to the fullest extent possible, to integrate the NEPA process into project planning at the earliest possible time to avoid delays and resolve potential issues, and to perform coordinated and concurrent environmental reviews to the extent possible to minimize duplication of effort. CEQ officials also noted that CEQ continues to co-chair the Transportation Rapid Response Team, a working group of federal agencies that facilitates interagency coordination and seeks to improve surface transportation project delivery consistent with environmental guidelines.
CEQ periodically reviews and assesses its guidance and regulations to improve the effectiveness and timeliness of NEPA reviews, according to a CEQ official. For example, CEQ reviewed the environmental review processes of selected agencies in 2015 to identify model approaches that simplify the NEPA process and reduce the time and cost involved in preparing NEPA documents. CEQ used this review to identify and recommend changes to modernize NEPA’s implementation, including using information technology, such as a web-based application that identifies environmental data from federal, state, and local sources within a specific location, to improve the efficiency of environmental reviews.
On August 15, 2017, the President signed an executive order that directed CEQ to develop a list of actions it will take to enhance and modernize the environmental review and authorization process. In September 2017, CEQ outlined its actions to respond to the executive order in a Federal Register Notice. According to CEQ officials, in response to the executive order, CEQ is in the process of reviewing its existing regulations on the implementation of the provisions of NEPA to identify changes needed to update and clarify its regulations. In June 2018, CEQ published an advance notice of proposed rulemaking to solicit public comment on potential revisions to its regulations to ensure a more efficient, timely, and effective NEPA process consistent with the national environmental policy. In addition, CEQ, along with the Office of Management and Budget, issued guidance for federal agencies for processing environmental reviews and authorizations in accordance with the executive order’s goal of reducing the time for completing environmental reviews for major infrastructure projects. Finally, CEQ officials stated that CEQ is leading an interagency working group, which includes representatives from the resource agencies, to review agency regulations and policies to identify impediments to the processing of environmental review and permitting decisions. CEQ anticipates the working group findings will address a number of issues relating to environmental reviews, including the environmental consulting and permitting processes.
Conclusions
The federal government has enacted a number of statutory provisions aimed at streamlining the environmental review process for highway and transit projects. However, while Corps, FWS, and NMFS officials believe that these provisions have helped streamline their permit reviews and consultations, the lack of data hinders quantification of any trends in the duration of those reviews. Furthermore, agency and government-wide efforts to track major infrastructure projects, such as the planned Office of Management and Budget performance tracking system, will be hindered without accurate and reliable data. FWS and NMFS do not have adequate internal control procedures in place to ensure accurate and reliable data and cannot accurately assess their ability to meet statutory and regulatory requirements for completing consultations and issuing biological opinions. Although FWS and NMFS are in the process of upgrading their data systems, the agencies do not have documented plans or time frames that identify what controls they will use to ensure accurate data on the time taken for consultation reviews.
Recommendations for Executive Action
We are making a total of two recommendations, one to the Fish and Wildlife Service and one to the National Marine Fisheries Service.
Specifically, we are making the following recommendation to the Fish and Wildlife Service: The Principal Deputy Director of the Fish and Wildlife Service should direct the Fish and Wildlife Service to develop plans and time frames for improving its new consultation tracking system and develop appropriate internal controls, such as electronic safeguards and other data-entry procedures, to ensure accurate data on the time taken for consultations. (Recommendation 1)
We are making the following recommendation to the National Marine Fisheries Service: The Assistant Administrator for Fisheries should direct the National Marine Fisheries Service to develop plans and time frames for improving its new consultation tracking system and develop appropriate internal controls, such as electronic safeguards and other data-entry procedures, to ensure accurate data on the time taken for consultations. (Recommendation 2)
Agency Comments
We provided a draft of the report to the Departments of Transportation, Defense, Commerce, and Interior and the Council on Environmental Quality. The Departments of Commerce and Interior each provided written responses, which are reprinted in appendixes III and IV, respectively. The Departments of Commerce and Interior agreed with our recommendations. In addition, the Departments of Transportation, Defense, Commerce, and Interior and the Council on Environmental Quality provided technical comments, which we incorporated as appropriate.
We are sending copies of this report to appropriate congressional committees, the Secretary of the Department of Transportation, Secretary of the Department of Defense, Secretary of the Department of the Interior, Secretary of the Department of Commerce, and other interested parties. In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.
Appendix I: Scope and Methodology
Our work focused on federal-aid highway and transit projects and the provisions included in the past three surface-transportation reauthorizations that are intended to streamline the environmental consulting and permitting processes performed by the three federal resource agencies: Fish and Wildlife Service (FWS), National Marine Fisheries Service (NMFS), and the U.S. Army Corps of Engineers (Corps). This report (1) addresses the extent to which identified streamlining provisions had an impact on the time frames for the environmental consulting and permitting processes; (2) identifies actions taken by the resource agencies to streamline their consulting and permitting reviews and identifies additional streamlining opportunities, if any; and (3) describes the actions taken by the Council on Environmental Quality (CEQ) to accelerate highway and transportation projects.
To identify relevant provisions that were aimed at streamlining the consulting and permitting processes for highway and transit projects, we reviewed the last three surface transportation reauthorization acts and relevant federal statutes, regulations, and guidance. The three reauthorizations we reviewed are as follows: the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU); the Moving Ahead for Progress in the 21st Century Act (MAP-21); and the Fixing America’s Surface Transportation Act (FAST Act).
We identified 18 provisions that are intended to streamline various aspects of the NEPA environmental review process and could potentially affect the permitting and consultation processes of the three federal resource agencies. Provisions were grouped into categories developed in a previous GAO report on project delivery for ease of understanding. In our review we identified relevant statutory provisions as they had been amended by the three surface transportation reauthorization acts. Some of the provisions, as originally enacted, were modified by subsequent legislation.
To evaluate the extent to which the streamlining provisions had an impact on the consulting and permitting processes, we requested official responses from each of the three resource agencies on the impact of the 18 provisions we identified on the consulting and permitting processes. We also conducted interviews with resource agency officials in Washington, D.C. and the respective field, district, and regional offices to determine the use and impact of the streamlining provisions from the surface transportation reauthorization acts.
To quantify the extent to which the streamlining provisions had an impact on the time frames for completing consultations and permit reviews, we requested data on the time frames of consulting and permitting from FWS, NMFS, and Corps data systems for fiscal years 2005 through 2016 for all federally funded highway and transit projects. We requested data from the resource agencies with a variety of information for each record, including the start and end dates for each consultation and permit decision, the type of consultation or permit decision, the project sponsor or entity requesting the consultation or permit decision, the project type, a description of the project, and the field, district, or regional office that received and entered each record. The agencies provided the most recently available data, which we analyzed. FWS was unable to provide us reliable data prior to fiscal year 2009; the Corps was unable to provide us reliable data prior to fiscal year 2011; and NMFS was unable to provide us reliable data prior to calendar year 2012. Agency officials stated that data prior to those years were unreliable because of various factors, such as NMFS's performing a data migration to a new system in which some records did not transfer properly and Corps changes to its database in 2011 that made earlier data incomparable to post-2011 permit records. We performed checks to determine the reliability of the agency data and to identify potential limitations, such as missing data fields, errors, and discrepancies in calculations between records. We determined that the data provided by FWS and NMFS were not sufficiently reliable for examining the impact of the streamlining provisions on the time frames for completing consultation reviews. We also determined that the data provided by the Corps were sufficiently reliable to conduct analysis of permitting time frames, but because the Corps was unable to provide reliable data prior to fiscal year 2011, we were unable to examine the impact of streamlining provisions on the time frames for completing permit reviews. Our discussion in the report of resource agency data focuses on these limitations. We reviewed agency policies and procedures on ensuring accurate and reliable data and compared them with federal standards for internal controls.
To examine the actions used by resource agencies to streamline consulting and permitting reviews, we interviewed officials in seven FWS field offices, seven Corps district offices, two NMFS regional offices, three transit agencies, and seven state departments of transportation (state DOTs) to discuss leading practices and additional opportunities for streamlining the consulting and permitting processes, as well as the use of the respective agency data systems. We reviewed field office documents and policies used to accelerate consulting and permitting. To select the federal resource agency field and district offices for interviews, we used the consultation and permit data collected from the agencies. We selected the offices based on a number of criteria identified through analysis of federal resource agency data between fiscal years 2009 and 2016, including: the most consultations or permit decisions performed; a mix of the average length of time for consultations or permit decisions; a mix of the types of consultations (e.g., formal or programmatic) or permit decisions (e.g., general or individual) performed by office; and a mix of geographic regions.
For the selection of state DOTs, we used a number of selection criteria, including: the most consultations and permit decisions requested by state; a mix of the average consultation or permit decision time by state; a mix of the types of consultations or permit decisions the states requested; and a mix of geographic regions.
To select the transit agencies for interviews, we used a number of selection criteria including: high ridership numbers, substantial federal capital funding between 2005 and 2015, and a mix of geographic regions. We interviewed officials from these offices to identify actions that the offices use to accelerate the consulting and permitting processes, challenges in the processes, and potential actions that could be implemented to further streamline the consulting and permitting processes. The officials we interviewed from three local transit agencies did not offer any perspectives on the use of streamlining practices or provisions related to environmental consulting and permitting, and are therefore not included in this report. These interviews are not generalizable to all resource agency, state DOT, or transit agency offices.
In addition, we met with transportation and environmental advocacy groups to discuss potential additional actions for consulting and permitting. We also reviewed federal reports and recommendations on best practices for streamlining environmental reviews for federal infrastructure projects, including highway and transit. These reports included the Department of Transportation’s Red Book and the Federal Permitting Improvement Steering Council’s annual best practices reports.
To describe actions taken by CEQ, we reviewed guidance and regulations issued by CEQ and interviewed CEQ officials on the actions the Council has taken to help streamline the environmental review process for federal transportation projects. We also interviewed officials at the Department of Transportation and resource agencies to discuss the extent to which CEQ actions helped streamline environmental reviews for transportation projects.
We conducted this performance audit from March 2017 to July 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Provisions from Recent Transportation Reauthorization Acts that May Streamline Consultation and Permit Reviews
Description of the provision and the transportation reauthorization act reference
1. Programmatic approaches: Directs the Department of Transportation (DOT) to allow for programmatic approaches to conducting environmental reviews for an environmental impact statement and, to the extent determined appropriate, other projects. Requires DOT to seek opportunities with states to enter into programmatic agreements to carry out environmental and other project reviews. MAP-21: §§ 1305(a) and 1318(d) and FAST Act: § 1304(b) (codified at 23 U.S.C. § 139(b)(3) and 23 U.S.C. § 109(note))
2. Identifying participating agencies: Requires the lead agency to identify, no later than 45 days after the date of publication of a notice of intent to prepare an environmental impact statement or the initiation of an environmental assessment, any other federal and non-federal agencies that may have an interest in the project, and to invite those agencies to become participating agencies in the environmental review process for the project. SAFETEA-LU: § 6002(a) as amended by FAST Act: § 1304(d)(1) (codified at 23 U.S.C. § 139(d)(2))
3. Concurrent reviews: Requires that each participating and cooperating agency carry out its obligations under other applicable law concurrently and do so in conjunction with the review required under the National Environmental Policy Act (NEPA), unless doing so would impair the ability of the agency to conduct needed analysis or otherwise to carry out those obligations, and that each agency should implement mechanisms to enable the agency to ensure completion of the environmental review process in a timely, coordinated, and environmentally responsible manner. SAFETEA-LU: § 6002(a) as amended by MAP-21: § 1305(c) (codified at 23 U.S.C. § 139(d)(7))
4. Use single NEPA document: Requires, to the maximum extent practicable and consistent with federal law, that the project's lead agency develop a single NEPA document to satisfy the requirements for federal approval or other federal action, including permits. FAST Act: § 1304(d)(2) (codified at 23 U.S.C. § 139(d)(8))
5. Limiting participating agency responsibilities: Requires that participating agencies provide comments, responses, studies, or methodologies on areas within the special expertise or jurisdiction of the agency, and that an agency use the environmental review process to address any environmental issues of concern to the agency. FAST Act: § 1304(d)(2) (codified at 23 U.S.C. § 139(d)(9))
6. Environmental checklist: Requires the development of a checklist by the lead agency, in consultation with participating agencies, as appropriate, to help identify natural, cultural, and historic resources. FAST Act: § 1304(e) (codified at 23 U.S.C. § 139(e)(5))
7. Alternatives analysis: Requires the lead agency to determine the range of alternatives for consideration in any document that the lead agency is responsible for preparing for a project, and requires that those alternatives should be used to the extent possible in all reviews and permit processes required for the project, unless the alternatives must be modified to address significant new information or circumstances or for the lead agency or a participating agency to fulfill the agency's responsibilities under NEPA in a timely manner. SAFETEA-LU: § 6002(a) and FAST Act: § 1304(f) (codified at 23 U.S.C. § 139(f)(4))
8. Coordination and scheduling: Requires a coordination plan for public and agency participation in the environmental review process within 90 days of notice of intent to prepare an EIS or the initiation of an EA, including a schedule for completion of the environmental review process for the project. SAFETEA-LU: § 6002(a) as amended by MAP-21: § 1305(e) and FAST Act: § 1304(g) (codified at 23 U.S.C. § 139(g)(1))
9. Issue resolution process: Establishes procedures to resolve issues between state DOTs and relevant resource agencies, including those issues that could delay or prevent an agency from granting a permit or approval, and describes lead and participating agency responsibilities. SAFETEA-LU: § 6002(a) as amended by MAP-21: § 1306, and FAST Act: § 1304(h) (codified at 23 U.S.C. § 139(h))
10. Financial penalty provisions: Can cause a rescission of funding from the applicable office of the head of an agency, or equivalent office to which the authority for rendering the decision has been delegated by law, if that office fails to make a decision within certain time frames under any federal law relating to a project that requires the preparation of an EIS or EA, including the issuance or denial of a permit, license, or other approval. MAP-21: § 1306 as amended by FAST Act: § 1304(h)(3) (codified at 23 U.S.C. § 139(h)(7))
11. Use of federal highway or transit funds to support agencies participating in the environmental review process: Allows a public entity to use its highway and transit funds to support a federal (including DOT) or state agency or Indian tribe participating in the environmental review process on activities that directly and meaningfully contribute to expediting and improving project planning and delivery. SAFETEA-LU: § 6002(a) as amended by MAP-21: § 1307, and FAST Act: § 1304(i) (codified at 23 U.S.C. § 139(j))
12. 150-Day statute of limitations: Bars claims seeking judicial review of a permit, license, or approval issued by a federal agency for highway projects unless they are filed within 150 days after publication of a notice in the Federal Register announcing the final agency action, or unless a shorter time is specified in the federal law under which the judicial review is allowed. SAFETEA-LU: § 6002(a) as amended by MAP-21: § 1308 (codified at 23 U.S.C. § 139(l))
13. Enhanced technical assistance and accelerated project completion: At the request of a project sponsor or a governor of the state in which the project is located, requires DOT to provide additional technical assistance for a project where EIS review has taken 2 years, and establish a schedule for review completion within 4 years. In providing assistance, DOT shall consult, if appropriate, with resource and participating agencies on all methods available to resolve the outstanding issues and project delays as expeditiously as possible. MAP-21: § 1309 (codified at 23 U.S.C. § 139(m))
14. Early coordination activities in environmental review process: Encourages early cooperation between DOT and other agencies, including states or local planning agencies, in the environmental review process to avoid delay and duplication, and suggests early coordination activities. Early coordination includes establishment of memorandums of agreement with states or local planning agencies. MAP-21: § 1320 (codified at 23 U.S.C. § 139(note))
15. Planning documents used in NEPA review: To the maximum extent practicable and appropriate, authorizes the lead agency for a project and cooperating agencies responsible for environmental permits, approvals, reviews, or studies under federal law to use planning products, such as planning decisions, analysis, or studies, in the environmental review process of the project. MAP-21: § 1310 as amended by FAST Act: § 1305 (codified at 23 U.S.C. § 168(b))
16. Programmatic mitigation plans used in NEPA review: Allows a state DOT or metropolitan planning organization to develop programmatic mitigation plans to address potential environmental impacts of future transportation projects. It also requires that any federal agency responsible for environmental reviews, permits, or approvals for a transportation project give substantial weight to the recommendations in a state or metropolitan programmatic mitigation plan, if one had been developed as part of the transportation planning process, when carrying out responsibilities under NEPA or other environmental law. MAP-21: § 1311 as amended by FAST Act: § 1306 (codified at 23 U.S.C. § 169(f))
17. Categorical exclusion determination authority: Authorizes DOT to assign and a state to assume responsibility for determining if projects can be categorically excluded from NEPA review, and allows states that have assumed that responsibility to also assume DOT's responsibility for environmental review, consultation, or other actions required under federal law applicable to activities classified as categorical exclusions. SAFETEA-LU: § 6004(a), as amended by MAP-21: § 1312, and FAST Act: § 1307 (codified at 23 U.S.C. § 326)
18. Surface transportation project delivery program: Authorizes DOT to assign and a state to assume many federal environmental review responsibilities for highway, public transportation, and railroad projects, to be administered in accordance with a written agreement between DOT and the participating state. SAFETEA-LU: § 6005(a), as amended by MAP-21: § 1313 and FAST Act: § 1308 (codified at 23 U.S.C. § 327)
Appendix III: Comments from the Department of Commerce
Appendix IV: Comments from the Department of Interior
Appendix V: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Brandon Haller (Assistant Director), Lauren Friedman, Tobias Gillett, Rich Johnson, Delwen Jones, Hannah Laufe, Jeff Miller, Cheryl Peterson, Malika Rice, Alison Snyder, Kirsten White, and Elizabeth Wood made significant contributions to this report.
Why GAO Did This Study
Since 2005, the federal government has enacted various statutes aimed at accelerating the environmental review process for highway and transit projects. In addition, the Clean Water Act and the Endangered Species Act may require three federal agencies—the Corps, FWS, and NMFS—to issue permits or perform consultations before a project can proceed.
GAO is required by statute to assess the extent to which statutory provisions have accelerated and improved environmental permitting and consulting processes for highway and transit projects. This report examines, among other things, (1) the impact of streamlining provisions on consulting and permitting time frames and (2) additional actions used by federal resource agencies to streamline their reviews. GAO analyzed permitting and consulting data from the three federal agencies and interviewed officials from the three agencies, 16 agency field offices, and 7 state DOTs for their perspectives on the effect of streamlining provisions and other efforts. GAO selected these offices to include a range of locations and those with a greater number of permits and consultations, among other factors.
What GAO Found
Federally funded highway and transit projects must be analyzed for their potential environmental effects, as required by the National Environmental Policy Act, and may be subject to other environmental protection laws, including the Clean Water Act and the Endangered Species Act. These laws may require the U.S. Army Corps of Engineers (Corps) to issue permit decisions and the U.S. Fish and Wildlife Service (FWS) and the National Marine Fisheries Service (NMFS) to conduct consultations before a project can proceed. These three agencies are referred to as “resource agencies” for this report. The three most recent transportation reauthorization acts include provisions that are intended to streamline various aspects of the environmental review process; 18 of these provisions could potentially affect time frames for the environmental permitting and consulting processes for highway and transit projects.
While officials GAO interviewed at resource agencies and state departments of transportation (state DOT) noted that some actions called for by the 18 statutory provisions have helped streamline the consultation and permitting processes for highway and transit projects, GAO found that a lack of reliable agency data regarding permitting and consulting time frames hinders a quantitative analysis of the provisions' impact. Officials said, for example, that a provision that allows federal liaison positions at resource agencies to focus solely on processing applications for state DOT projects has helped avoid delays in permit and consultation reviews. However, none of the three resource agencies could provide enough reliable data to analyze changes in the durations of consultations and permit reviews over time for any of the provisions. Further, GAO identified limitations in FWS and NMFS data, such as negative or missing values and inconsistent data entry practices. FWS and NMFS have limited controls, such as electronic safeguards and other data-entry procedures, to ensure the accuracy and reliability of their data on the duration of consultations. Left unaddressed, these data quality issues may impair the agencies' ability to accurately determine whether they are meeting their 135-day statutory and regulatory deadlines to complete consultations and provide biological opinions, and could affect their ability to provide accurate data on time frames for Office of Management and Budget efforts to track agencies' performance in conducting environmental reviews. While FWS and NMFS officials stated that the agencies plan to improve their tracking systems, the agencies do not have documented plans or time frames for the improvements, and it is unclear whether the efforts will include internal controls to improve data reliability.
Some federal resource agency and state DOT officials GAO interviewed identified additional actions that have been used to streamline the consultation and permitting processes to avoid delays in agency reviews. For example, 16 of the 23 resource agency and state DOT officials said that field office staff provided training to state DOT staff about the information field offices required for permit or consultation applications. Resource agency and state DOT officials also identified electronic systems for accessing environmental data and for submitting documents as helpful streamlining actions.
What GAO Recommends
GAO is making two recommendations, one to FWS and one to NMFS, to develop plans and time frames for improving their tracking systems and to develop internal controls to improve data reliability.
The Departments of Commerce and Interior concurred with our recommendations. |
Background
Fraud Risk Management
Fraud and “fraud risk” are distinct concepts. Fraud—obtaining something of value through willful misrepresentation—is a determination to be made through the judicial or other adjudicative system, and that determination is beyond management’s professional responsibility. Fraud risk exists when individuals have an opportunity to engage in fraudulent activity, have an incentive or are under pressure to commit fraud, or are able to rationalize committing fraud. Although the occurrence of fraud indicates there is a fraud risk, a fraud risk can exist even if actual fraud has not yet been identified or occurred. When fraud risks can be identified and mitigated, agencies may be able to improve fraud prevention, detection, and response. Managers of federal programs maintain the primary responsibility for enhancing program integrity and managing fraud risks. Those who are effective at managing their fraud risks collect and analyze data, identify fraud trends, and use that information to improve fraud risk management activities. Implementing effective fraud risk management processes is important to help ensure that federal programs fulfill their intended purpose, funds are spent effectively, and assets are safeguarded.
The Fraud Risk Framework provides a comprehensive set of leading practices that serve as a guide for agency managers developing or enhancing efforts to combat fraud in a strategic, risk-based manner. The Fraud Risk Framework is also aligned with Principle 8 (“Assess Fraud Risk”) of the Standards for Internal Control. It is designed to focus on preventive activities, which generally offer the most cost-efficient use of resources since they enable managers to avoid a costly and inefficient “pay-and-chase” model of recovering funds from fraudulent transactions after payments have been made. The leading practices in the Fraud Risk Framework are organized into four components—commit, assess, design and implement, and evaluate and adapt—as depicted in figure 1.
FRDAA Requirements
Legislation and guidance have increasingly focused on the need for program managers to take a strategic approach to managing risks, including fraud. FRDAA was enacted to improve federal agency controls and procedures to assess and mitigate fraud risks, and to improve agencies’ development and use of data analytics for the purpose of identifying, preventing, and responding to fraud. FRDAA requires agencies to establish financial and administrative controls that incorporate the Fraud Risk Framework’s leading practices, including 1. conducting an evaluation of fraud risks and using a risk-based approach to design and implement financial and administrative control activities to mitigate identified fraud risks; 2. collecting and analyzing data from reporting mechanisms on detected fraud to monitor fraud trends, and using that data and information to continuously improve fraud-prevention controls; and 3. using the results of monitoring, evaluation, audits, and investigations to improve fraud prevention, detection, and response.
Further, agencies are required to annually report to Congress on their progress in implementing the act for each of the first 3 fiscal years after its enactment.
FRDAA required OMB, in consultation with the Comptroller General, to establish guidelines for agencies that incorporate leading practices from the Fraud Risk Framework as well as to establish a working group that shares best practices in fraud risk management. In addition, the working group is required to submit a plan to develop a federal interagency data analytics library for fraud risk management. This working group was also required to consult with the Offices of Inspector General and federal and nonfederal experts on fraud risk assessments, financial controls, and other relevant matters as well as to meet not fewer than four times per year. See figure 2 for additional details on FRDAA’s requirements and implementation timeline.
Agencies Have Taken Steps to Manage and Report on Fraud Risks as FRDAA Requires, but Have Identified Challenges
Agencies Indicated They Are Planning or Implementing Activities to Manage Fraud Risks
Agencies’ steps to manage fraud risks at the agency-wide level—and in response to FRDAA—are at varying stages of planning and implementation, according to our survey of agencies subject to the act. In our survey, we asked the 72 agencies subject to FRDAA to characterize (1) the overall status of their efforts to plan for and implement the act as “not started,” “started but not mature,” or “mature” and (2) whether they regularly undertook specific fraud risk management activities prior to and after FRDAA’s enactment. With respect to overall status, most surveyed agencies (85 percent) indicated that they have at least started planning how they will meet FRDAA requirements (started or mature), and about 78 percent indicated that they have also started or are mature in their efforts to implement the requirements. Fewer agencies, however, characterized either their planning or implementation efforts as “not started” (about 15 and 22 percent, respectively). See figure 3 for agency responses on their FRDAA planning and implementing efforts.
While most agencies indicated they have taken planning and implementation steps, agencies varied in the extent to which they indicated undertaking specific fraud risk management activities required by FRDAA at the agency-wide level, according to our survey results. We asked agencies whether they were currently performing key fraud risk management activities at the agency-wide level. The fraud risk management activities identified in the survey were an abbreviated version of the FRDAA requirements for agencies to establish financial and administrative controls, which included (1) conducting an evaluation of fraud risks and using a risk-based approach to design and implement financial and administrative control activities to mitigate identified fraud risks; (2) collecting and analyzing data from reporting mechanisms on detected fraud to monitor fraud trends and using that data and information to continuously improve fraud-prevention controls; and (3) using the results of monitoring, evaluation, audits, and investigations to improve fraud prevention, detection, and response. Most agencies (about 86 percent) indicated they use the results of monitoring, evaluation, audits, and investigations to manage fraud risk. Fewer agencies (about 63 percent) indicated they collect fraud-related data for prevention. Agencies also varied in the frequency with which they perform certain activities. For example, of the agencies that indicated that they collect fraud-related data for prevention, 44 percent indicated they do so regularly, while 18 percent indicated that they do so but not on a regular basis. See figure 4 for additional information on the frequency with which agencies indicated they perform fraud risk management activities related to FRDAA requirements for financial and administrative controls.
The majority of agencies we surveyed indicated that they were engaged in a variety of fraud risk management activities before FRDAA’s enactment, but a larger number indicated action in each of these activities since the law was enacted. For example, 86 percent of agencies indicated they used findings from monitoring, auditing, or evaluation of fraud risk activities after the enactment of FRDAA, compared with 79 percent of agencies that indicated they used such findings before FRDAA. See figure 5 for a comparison of the number of agencies reporting that they undertook fraud risk management activities before and after the enactment of FRDAA.
To identify relationships between survey responses on progress implementing elements of FRDAA and responses on fraud risk management practices, we considered the direction and strength of correlations between those questions. Agencies that indicated that they have started implementing FRDAA (85 percent) also reported higher use of some key fraud risk management activities, according to our analysis of the survey data. For example, agencies that indicated their implementation efforts were “mature” or “started but not mature” indicated at higher rates that they conduct risk-based evaluations of fraud risks and collect fraud-related data for prevention since the enactment of FRDAA. As mentioned, these activities are FRDAA requirements and are leading practices in the Fraud Risk Framework. These agencies also indicated at higher rates that they incorporated fraud risk activities into broader ERM, as directed by OMB Circular A-123. Further, while most (89 percent) agencies indicated having a designated entity for managing fraud risk, consistent with one leading practice identified in the Fraud Risk Framework, fewer (74 percent) have designated an entity specifically for FRDAA implementation. Agencies that indicated they had a designated entity for implementing FRDAA indicated that they were at a mature stage of FRDAA implementation more often than agencies without such an entity.
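For readers interested in the mechanics of this kind of analysis, the sketch below shows one way to measure the direction and strength of association between two ordinal survey questions. It is a minimal illustration only: the ordinal coding, the variable names, and the data are all hypothetical, and the sketch is not a reproduction of the analysis we performed.

from scipy.stats import kendalltau

# Hypothetical ordinal coding of two survey questions, one value per
# agency. The coding scheme is an assumption for illustration:
#   implementation status: 0 = not started, 1 = started but not mature,
#                          2 = mature
#   risk-based evaluation: 0 = no, 1 = yes but not regularly,
#                          2 = yes, regularly
implementation = [2, 1, 0, 2, 1, 1, 0, 2, 1, 2]
evaluation = [2, 1, 0, 2, 0, 1, 0, 2, 1, 1]

# Kendall's tau-b accommodates ordinal data with ties; its sign gives
# the direction of the association and its magnitude the strength.
tau, p_value = kendalltau(implementation, evaluation)
print(f"tau-b = {tau:.2f}, p = {p_value:.3f}")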
All CFO Act Agencies Reported on Their Progress Implementing FRDAA, but Reporting Varied in Completeness and Detail
Each of the 24 CFO Act agencies reported on their progress implementing FRDAA in their fiscal year 2017 annual financial reports to Congress, as FRDAA requires, but the reporting varied in completeness and detail. FRDAA specifies that, beginning in fiscal year 2017 and for the following 2 fiscal years, agencies must include the following 11 elements in their reports:
Agencies must report their progress implementing the financial and administrative controls required to be established by the agency, which include (1) conducting an evaluation of fraud risks and using a risk-based approach to design and implement financial and administrative control activities to mitigate identified fraud risks; (2) collecting and analyzing data from reporting mechanisms on detected fraud to monitor fraud trends and using that data and information to continuously improve fraud-prevention controls; (3) using the results of monitoring, evaluation, audits, and investigations to improve fraud prevention, detection, and response; (4) implementing the fraud risk principle as described in the Standards for Internal Control; and (5) implementing the OMB Circular A-123 section related to leading practices for managing fraud risk.
Agencies must report their progress identifying risks and vulnerabilities to fraud. These include (6) payroll, (7) beneficiary payments, (8) grants, (9) large contracts, and (10) purchase and travel cards.
Agencies must report their progress (11) establishing strategies, procedures, and other steps to curb fraud.
In August 2017, OMB updated its financial-reporting guidance in Circular A-136, Financial Reporting Requirements, with a section on FRDAA reporting requirements, including the reporting elements specified in the act. While the reporting requirements in FRDAA and OMB’s guidance list three categories of information, as noted above, we broke out the unique requirements in each category for our assessment. As a result, our analysis of the completeness of agencies’ annual financial reports is based on whether they contain each of 11 specific reporting elements. See appendix I (table 2) for additional information about these reporting elements.
The 24 CFO Act agencies each included fraud-reduction sections in their annual financial reports as FRDAA requires, but the completeness and detail of their reporting were at times limited because some reports did not address all of the elements specified in the act. Four agencies reported on all of the specified elements, 19 agencies reported on more than half of the specified elements, and 1 agency reported on fewer than half of the specified elements, according to our analysis. For example, each of the 24 CFO Act agencies reported on their progress in establishing financial and administrative fraud controls required by FRDAA and OMB Circular A-123, but 7 agencies did not report on progress in implementing the fraud risk principle in the Standards for Internal Control. In addition, some agencies did not report on their progress in identifying risks and vulnerabilities with respect to payroll, beneficiary payments, and other elements specified in the act. Specifically, 12 of the CFO Act agencies did not report on payroll, 11 did not report on beneficiary payments, 5 did not report on grants, 9 did not report on large contracts, and 7 did not report on purchase and travel cards. See figure 6 for an analysis of the inclusion of required FRDAA reporting elements in agency reports.
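To illustrate how this element-by-element scoring works, the short sketch below classifies a single report by how many of the 11 elements it addresses. The element names are hypothetical shorthand for the requirements summarized above, not terms drawn from FRDAA or OMB guidance.

# The 11 reporting elements, abbreviated with hypothetical labels.
REQUIRED_ELEMENTS = [
    "controls_evaluation", "fraud_data_collection", "monitoring_results",
    "internal_control_principle_8", "omb_a123_practices",
    "payroll", "beneficiary_payments", "grants", "large_contracts",
    "purchase_travel_cards", "antifraud_strategies",
]

def completeness_bucket(elements_addressed):
    # Classify a report by how many of the 11 elements it covers.
    n = len(set(elements_addressed) & set(REQUIRED_ELEMENTS))
    if n == len(REQUIRED_ELEMENTS):
        return "all elements"
    if n > len(REQUIRED_ELEMENTS) / 2:
        return "more than half"
    return "fewer than half"

# Hypothetical example: a report covering 8 of the 11 elements.
print(completeness_bucket(REQUIRED_ELEMENTS[:8]))  # -> more than half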
Variation in reporting on progress in identifying specific risks and vulnerabilities could result from some agencies’ determinations about their applicability to the agency. For example, some agencies that participated in our roundtable discussion noted that grant risks are not applicable to their agency because they do not have grant programs. However, this would not explain some areas of risk that are applicable to all agencies, but were not reported, such as payroll. As discussed later in this report, variation in reporting on progress in identifying specific risks and vulnerabilities may also be partly due to some agencies’ uncertainty about what information must be reported.
The reports also varied in terms of detail provided about agencies’ efforts, including specific actions taken to implement elements of FRDAA. For example, one agency reported that its efforts to comply with the fraud risk principle in the Standards for Internal Control included implementing enterprise risk management (ERM) and establishing a policy for having a common risk assessment tool to ensure consistency across the agency and to determine appropriate mitigation strategies for risks identified in all programs. Conversely, another agency reported that it updated an annual entity-level control assessment to comply with this principle, but the agency did not describe how this update achieved compliance. Without this detail in the report, it is not possible to determine the extent of the agency’s implementation progress, as we describe later in the report.
Further, most (16 of the 24 CFO Act agencies) included details about financial fraud risks but did not address nonfinancial fraud risks. For example, one agency reported it had low fraud risk and, as such, did not implement any new controls in response to FRDAA. As support, the agency provided examples of identifying no or limited financial fraud risks, and concluded that it did not have fraud risks to address. The agency did not discuss nonfinancial fraud. However, a 2016 GAO report identified this agency as having vulnerabilities to nonfinancial fraud that present national security risks. In addition, a 2017 report recommended that two agencies responsible for a program with national security–related responsibilities conduct joint fraud risk assessments to obtain comprehensive information on inherent fraud risks that may affect program integrity; provide reasonable assurance that their controls mitigate those risks; and ensure that fraud-prevention efforts target the areas of highest risk. However, one of these agencies did not mention nonfinancial fraud in its report. Further, neither agency identified this program in their report. As mentioned in the Fraud Risk Framework, nonfinancial fraud, such as fraudulently obtained credentials, can potentially facilitate other crimes related to national security such as international terrorism and drug trafficking. In addition, a leading practice of the Fraud Risk Framework is that managers consider nonfinancial effects of fraud, such as those related to the program’s reputation and compliance with laws, regulations, or standards. As discussed later in this report, these limitations in agency reporting may be partly due to limited guidance provided by OMB to agencies regarding the level of detail and type of information that should be included in the reports.
Agencies Identified Challenges Undertaking Fraud Risk Management Activities
Agencies identified challenges undertaking some fraud risk management activities required by FRDAA, according to our analysis of survey and roundtable responses. Top identified challenges were generally related to staffing and resources, among other things. These challenges may affect agencies’ ability to implement leading practices from the Fraud Risk Framework. Some roundtable participants also noted strategies for mitigating some of these challenges. The factors agencies most frequently indicated as great or moderate challenges in undertaking fraud risk management activities include the following:
Availability of resources. Agencies most frequently noted the availability of resources, such as staffing and funding to conduct fraud risk management activities, as a challenge to managing fraud risk. About 75 percent of agencies indicated in their surveys that this was a great or moderate challenge. Agencies that participated in our roundtable discussion identified similar “bandwidth” concerns related to staffing. For example, one agency noted the ability of staff to manage multiple responsibilities—such as conducting fraud risk management activities in addition to daily program-related activities— as a top challenge, especially within smaller units of the agency. Some agencies at the roundtable discussion told us that having the authority to use program-integrity funding for fraud risk management would help provide necessary resources to undertake fraud risk management activities required by FRDAA. However, one agency noted that this may not be a viable solution for all agencies, since not all agencies may receive additional program-integrity funding to conduct fraud risk management activities.
Limited tools and techniques for data analytics. Most agencies (about 68 percent) indicated that limitations in having and using tools and techniques for data analytics were a great or moderate challenge, according to our survey. Using data analytics to manage fraud risk is a leading practice in the Fraud Risk Framework. While one agency at our roundtable discussion told us that the agency does not have software to assist staff in performing data analytics, other agencies suggested leveraging free or existing resources to gain access to and use data tools. For example, one agency representative described the usefulness of the Department of the Treasury’s Do Not Pay Business Center. This agency representative noted that the Department of the Treasury can proactively analyze agency data it has received and share it with agencies. Another agency suggested that agencies ask their shared service providers to provide data analytics, provide insight, and benchmark against other agencies.
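The sketch below illustrates the general kind of data matching described above: screening a payment file against a do-not-pay-style exclusion list. The file layouts, field names, and function names are hypothetical, and the sketch does not depict the Do Not Pay Business Center's actual interfaces or services.

import csv

def load_exclusion_ids(path):
    # Read the set of identifiers on the exclusion list.
    with open(path, newline="") as f:
        return {row["taxpayer_id"] for row in csv.DictReader(f)}

def flag_payments(payments_path, exclusion_ids):
    # Yield payment rows whose payee appears on the exclusion list.
    with open(payments_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["taxpayer_id"] in exclusion_ids:
                yield row

if __name__ == "__main__":
    excluded = load_exclusion_ids("exclusion_list.csv")
    for hit in flag_payments("payments.csv", excluded):
        print(f"flag payment {hit['payment_id']} to {hit['payee_name']} for review")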
Lack of available expertise. The availability of staff with expertise to conduct fraud risk management activities also presents challenges for agencies. Leading practices in the Fraud Risk Framework include designating an antifraud entity that serves as the repository of knowledge on fraud risks and controls and increasing managers’ and employees’ awareness of potential fraud schemes through training and education. About 56 percent of agencies we surveyed, however, identified availability of staff expertise as a great or moderate challenge. Agencies that identified this as a challenge also more frequently indicated that they experience some other challenges associated with FRDAA implementation, such as understanding FRDAA requirements and implementation time frames; reporting on implementation progress in the annual financial reports; and sufficiency of other information or tools to aid in implementation. During the roundtable discussion, some agencies also described having a staffing gap where data-analytic skills were concerned. In response to this challenge, one agency moved its centralized antifraud unit to a newly created, more-experienced unit within the agency to increase the antifraud unit’s capacity to conduct data-analytics reviews.
Access to data and information. A majority of agencies also identified having access to data to look for fraud or fraud indicators as a challenge. About 55 percent of agencies indicated that access to data is a great or moderate challenge to their ability to implement fraud risk activities. Agencies that participated in our roundtable discussion also told us that access to data is a key challenge associated with implementing FRDAA requirements. For example, one agency stated that the Privacy Act presents a challenge to data matching that may limit agencies’ ability to share data with one another, such as Social Security numbers involved in potentially fraudulent activity that could cut across multiple agencies. This challenge is not new. In our July 2013 report on using data analytics for oversight and law enforcement and in our March 2017 report on using data analytics to address fraud and improper payments, we reported on similar perceived challenges from other agencies and organizations regarding data sharing among agencies.
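One general technique sometimes discussed in this context is comparing keyed hashes of identifiers rather than exchanging the identifiers themselves. The sketch below shows the mechanics under the assumption that two agencies share a secret key distributed out of band; it is illustrative only, the key and identifiers are hypothetical, and the approach does not by itself resolve Privacy Act or other legal constraints on data sharing.

import hashlib
import hmac

SHARED_KEY = b"example-key-distributed-out-of-band"  # hypothetical

def tokenize(ssn: str) -> str:
    # HMAC-SHA256 tokenization: the same input and key always yield the
    # same token, so tokens can be compared without revealing the SSN.
    return hmac.new(SHARED_KEY, ssn.encode(), hashlib.sha256).hexdigest()

agency_a_tokens = {tokenize(s) for s in ["123456789", "987654321"]}
agency_b_tokens = {tokenize(s) for s in ["987654321", "555555555"]}

# Overlapping tokens indicate identifiers present in both data sets.
print(len(agency_a_tokens & agency_b_tokens))  # -> 1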
Some agencies at the roundtable discussion also stated that they did not receive information from their respective Office of Inspector General that would help them manage fraud risks and implement FRDAA. The Fraud Risk Framework highlights the role of the Office of Inspector General in agencies’ fraud risk management activities. According to the framework, the Office of Inspector General itself should not lead or facilitate fraud risk assessments, in order to preserve its independence when reviewing the program’s activities. However, the framework notes that program managers and their Office of Inspector General should collaborate and communicate to help improve understanding of fraud risks and identify emerging fraud risks, in order to proactively enhance fraud-prevention activities. While one agency at the roundtable discussion identified the lack of information from its Office of Inspector General as limiting its ability to address fraud risks, some agencies appear to be reaching out to their respective Offices of Inspector General for this information. We spoke with the Council of the Inspectors General on Integrity and Efficiency, which comprises representatives of Offices of Inspector General in the executive branch. During that meeting, representatives from three agency Offices of Inspector General told us that their agencies reached out to them to discuss fraud, such as how an agency can use databases to look for fraud. At least one representative expected to coordinate with that agency to strengthen internal controls as it continues to implement FRDAA.
OMB Established Guidelines and a Working Group as Required by FRDAA, but Limited Details and Coordination Hindered Agencies’ Implementation of the Act
OMB has taken steps to establish guidelines and a working group for agencies, as required by FRDAA, but limited guidelines and working- group coordination hindered some agencies’ implementation of the act. Specifically, OMB issued guidelines for agencies to implement FRDAA’s requirement to establish controls and report on their progress and has established a FRDAA working group, but agencies indicated the need for additional guidance and involvement in working-group activities. Our analysis of survey responses, roundtable discussion results, and agencies’ annual financial reports indicates that (1) agencies had mixed perspectives on the usefulness of OMB’s guidelines for agencies to establish controls; (2) limited details in OMB’s reporting guidelines contributed to CFO Act agencies’ incomplete and insufficiently detailed annual financial reports; and (3) agencies had challenges implementing FRDAA in part due to their lack of involvement in and lack of communication from the working group. In addition to FRDAA, OMB has issued guidance on other government-wide reform and burden-reduction initiatives that could shape how agencies address FRDAA implementation, such as reforms that may change the structure of agencies and related programs or how agencies collect data used in managing fraud risks. While it is still too early to determine the effect of these broader initiatives on agencies’ efforts to implement FRDAA, we have previously reported that broader reform efforts can be leveraged by OMB and agencies to address the high-risk areas and government-wide challenges that present vulnerabilities to fraud, waste, abuse, and mismanagement.
OMB Updated Existing Guidelines to Meet FRDAA Requirements, but Agencies Have Mixed Perspectives on the Guidelines’ Usefulness
To comply with FRDAA, OMB updated existing guidelines for agencies to establish financial and administrative controls to manage fraud risks, but agencies indicated having challenges with the usefulness of these guidelines, according to our survey and roundtable discussion results. Specifically, OMB incorporated guidelines to meet FRDAA requirements into its July 2016 update of Circular A-123, Management’s Responsibility for Enterprise Risk Management and Internal Control, within 90 days of enactment, as required by the act. This particular update of Circular A-123 introduced requirements for agencies to implement ERM and integrate with existing internal control capabilities to improve mission delivery, reduce costs, and focus corrective actions on key risks. The update to Circular A-123 also included a discussion of the Fraud Risk Framework and aligned internal control processes with the 2014 update to the Standards for Internal Control—such as the reference to the fraud risk principle (Principle 8)—which OMB staff stated provided agencies with a broad context for why fraud risk management is expected of agencies.
According to OMB staff, including the reference to the Fraud Risk Framework in the circular met the FRDAA requirement to issue guidelines for agencies to establish financial and administrative controls to identify and assess fraud risks. The guidelines have a section on “Managing Fraud Risks in Federal Programs” that encourages agencies to develop the same financial and administrative controls that are listed in FRDAA requirements. This section also directs agencies to adhere to the leading practices described in the Fraud Risk Framework as part of their efforts to effectively design, implement, and operate an internal control system that addresses fraud risks. However, our review found that FRDAA is never mentioned in the guidelines; as a result, there is a risk that agencies may not be aware that the guidelines directly apply to implementing FRDAA’s requirement to establish financial and administrative controls. In addition, OMB’s guidelines provide limited information related to steps that agencies should take to implement FRDAA’s requirement to establish financial and administrative controls, according to our review of the guidelines.
Agencies indicated having mixed views on the sufficiency of OMB’s guidelines. For example, 65 percent of the agencies surveyed indicated that OMB’s Circular A-123 guidelines were moderately or very useful. However, 40 percent of the agencies surveyed also identified the sufficiency of OMB’s guidelines as a great or moderate challenge in implementing the act. Among other things, these challenges included agencies’ uncertainty about how ERM and FRDAA requirements differ, given that OMB included the guidelines for managing fraud risk as a subsection of ERM requirements. These challenges contributed to agencies’ lack of clarity, among other things, on the actions they should take to implement FRDAA, as described below.
Challenges using OMB guidelines to implement FRDAA’s requirement to establish controls. Some agencies indicated that using OMB guidelines for FRDAA implementation was a challenge, according to our analysis of survey responses. Specifically, 40 percent of agencies indicated the sufficiency of the guidelines was a great or moderate challenge to their implementation efforts. CFO Act agencies reported this challenge more often than non–CFO Act agencies (61 and 30 percent, respectively).
Selected Agency Officials’ Perspectives on Office of Management and Budget (OMB) Fraud Reduction and Data Analytics Act of 2015 (FRDAA) Guidelines: “What does compliance mean specifically when it comes to FRDAA?” “[H]aving looked at other guidance that’s come out of OMB, particularly like the DATA Act or even ERM [enterprise risk management], there was lots of guidance. . . . In this particular case I think it has not been as robust”
Lack of guidance and unclear requirements were also identified as top challenges in our roundtable discussion on implementation of FRDAA-required controls. For example, some roundtable participants stated that clearer requirements, such as information on what activities would be considered compliant with the act, would be helpful to better implement FRDAA. In particular, two agencies identified grants and contracts as areas where additional guidance on managing fraud risks would be helpful.
In contrast, a theme of the roundtable discussion was that there were trade-offs in having clarity on the objectives and having the flexibility to tailor requirements to different programs. One roundtable participant said that agencies had different definitions of fraud and that it would be difficult to create standardized tools that met every agency’s needs. In order to better understand what steps they should take to implement the controls required by FRDAA, two roundtable participants sought out alternative sources of information to determine whether they were complying with Circular A-123, such as a previously issued GAO report on the Fraud Risk Framework. Other roundtable participants described using non-OMB guidance to implement FRDAA, such as the ERM playbook developed by the CFO Council and Performance Improvement Council, and materials developed by the Association of Certified Fraud Examiners. While relying on other sources of information can be helpful, agencies that do not have knowledge of or access to additional resources such as these may not have sufficient information to effectively implement the act. This point is underscored by the 40 percent of agencies that identified the sufficiency of OMB’s guidance as a great or moderate challenge to their implementation of FRDAA.
Selected Agency Officials’ Perspectives on Office of Management and Budget Fraud Reduction and Data Analytics Act of 2015 (FRDAA) Guidelines: “I would like some clarification on the intent of [FRDAA], like what will it achieve that the other A-123 or ERM [enterprise risk management] is not achieving?”
Uncertainty about the difference between ERM and FRDAA requirements. Many agencies are leveraging existing ERM processes to implement fraud risk activities, according to our survey results, but OMB guidelines were unclear on the relationship between FRDAA and ERM requirements, according to our review of the guidelines and roundtable discussion responses. Under ERM, agencies are required to assess the full spectrum of an organization’s risks, and identify those that are enterprise-level risks. For enterprise risks, agencies are expected to rate those risks in terms of impact and build internal controls to monitor and assess the risk developments at various time points and incorporate risk awareness into the agencies’ culture and operations. Our survey results indicate that more agencies (56 percent) are currently incorporating fraud risk activities into broader ERM compared with before FRDAA enactment in June 2016 (34 percent). Additionally, some roundtable participants stated that they leveraged their existing ERM process and teams to implement FRDAA’s control requirements. While Circular A-123 directs agencies to assess fraud risks as part of a broader assessment of enterprise risk, it does not provide information on how ERM and fraud risk management requirements differ. For example, it does not clarify that FRDAA encompasses a broad set of actions that agencies must take to manage fraud risks, regardless of whether the fraud risk is identified as an enterprise risk.
Additionally, Circular A-123 does not specify how to implement the strategies identified in the Fraud Risk Framework within the context of ERM. According to the circular, managers should adhere to the leading practices identified in the framework and are responsible for determining the extent to which the leading practices are relevant to their program. Managers are also responsible for tailoring the practices to align with the program’s operations. While the Fraud Risk Framework does state that the leading practices can be tailored, it enumerates four components and overarching concepts that are necessary for an effective risk management approach. These four components of the framework—commit, assess, design and implement, and evaluate and adapt—collectively encompass the control activities for managing fraud risks and, as outlined in the framework and the Standards for Internal Control, should be present in some form to be effective. Therefore, even if agency officials identify fraud risks in a particular program that are not determined to be enterprise-level risks, the officials are still responsible for designing and implementing controls to address them and evaluating and adapting improvements to these controls over time, in line with the Fraud Risk Framework requirements. However, OMB staff informed us that if a fraud risk does not rise to the level of an enterprise risk for an agency in the ERM process, the agency may not go through all of the steps outlined in the Fraud Risk Framework or required by FRDAA to assess and respond to that risk. The Fraud Risk Framework acknowledges that agencies may use initiatives like ERM efforts to assess their fraud risks, but using such initiatives does not eliminate the separate and independent fraud risk management requirements of FRDAA.
In response to our draft report, OMB staff stated that other parts of Circular A-123 helped to fulfill their requirement to establish guidelines for agencies to establish financial and administrative controls. According to OMB, if agencies identify fraud risks that are not discussed in ERM, they will still be addressed by the broader risk management requirements in Circular A-123. These other sections of Circular A-123 existed prior to FRDAA and therefore were not developed in response to FRDAA’s requirement that OMB establish guidelines for agencies. However, our review of Circular A-123 found that there are some references to managing fraud risks that are in alignment with the spirit of the financial and administrative controls identified in FRDAA. For example, other sections of Circular A-123 describe requirements for agencies to develop a risk profile and state that agency risk profiles must include an operational objective related to administrative and major program operations, including financial and fraud objectives. Further, agencies should identify the existing management process that will be used to implement and monitor proposed actions to address the risks. However, according to Circular A-123, these sections of the document define management’s responsibilities for ERM, which is focused on enterprise-level risks. Further, these sections of Circular A-123 do not encourage agencies to incorporate the leading practices outlined in the Fraud Risk Framework to manage their fraud risks, as required by FRDAA.
In addition, OMB staff stated that they believe that, along with Circular A-123, the Standards for Internal Control and the Fraud Risk Framework provide all the guidance that agencies need to implement and comply with FRDAA. However, based on the results of our survey and roundtable, we informed OMB that agencies reported experiencing confusion about the similarities and differences between FRDAA and other requirements, including ERM. According to OMB staff, Circular A-123, with its focus on ERM, is the appropriate place for the FRDAA guidelines because fraud is one type of risk an agency might face. However, OMB staff noted that it is the agencies’ responsibility to determine how to implement the act’s requirements in a way that aligns with the agency’s mission, and OMB accordingly does not have immediate plans to update Circular A-123 to provide more-detailed guidelines for agencies to implement the financial and administrative controls required by FRDAA.
The Standards for Internal Control state that management should implement control activities through policies. Documentation of responsibilities through policies and periodic review of control activities contribute to the design, implementation, and operating effectiveness of control activities. In addition, management should externally communicate the necessary quality information to achieve the entity’s objectives. These standards are practices that can assist any entity that is providing guidance to agencies with ensuring that intended objectives are accomplished. To better understand the type and level of detail in guidance that agency managers need to implement management controls, OMB and other similar oversight bodies often seek input and comments from agencies on draft guidance. In this case, OMB staff have not provided evidence that they consulted with agencies on whether the update to Circular A-123 met their needs in implementing FRDAA. While OMB staff stated they held three solicitations for agency comments on a draft update of Circular A-123 prior to FRDAA’s enactment, they did not obtain input from agencies on whether the updates provided the guidance agencies needed to implement the controls in FRDAA’s final enacted requirements.
Without input from agencies, OMB does not have the information it needs to determine what additional guidance agencies need to effectively implement the controls required by the act. In addition, without clarifying that FRDAA’s requirements must be addressed for all fraud risks— including those that agencies may have assessed and determined are not enterprise-level risks—agencies may not follow through on the additional steps of designing, implementing, evaluating, and improving controls for their remaining fraud risks. Lastly, without additional detailed guidelines for implementing FRDAA’s control requirements, agencies will continue to lack clarity on the actions they should take to effectively implement the act.
OMB’s Guidelines on FRDAA Reporting Requirements Lack Information Needed for Agencies to Produce Complete and Detailed Reports
OMB updated existing guidelines to include a section on FRDAA reporting requirements, but did not include enough information to effectively assist agencies in producing complete and detailed reports, according to our analysis of annual financial reports and survey and roundtable responses. FRDAA directs agencies to report to Congress on the progress of FRDAA implementation in their annual financial reports for each of the 3 fiscal years after enactment. Although FRDAA does not require OMB to establish guidelines for agencies to comply with the act’s reporting obligations, OMB generally provides guidance to support agencies’ annual financial-reporting requirements in Circular A-136, Financial Reporting Requirements, and accordingly updated this guidance to include a section on FRDAA reporting requirements first in August 2017 and again in July 2018. There were no significant changes to the FRDAA section of Circular A-136 in the July 2018 update.
Agencies are to include in their annual financial reports to Congress their progress in: (1) implementing the financial and administrative fraud controls as required by FRDAA, the fraud risk principle in the Standards for Internal Control, and the OMB Circular A-123 section related to leading practices for managing fraud risk; (2) identifying risks and vulnerabilities to fraud, including with respect to payroll, beneficiary payments, grants, large contracts, and purchase and travel cards; and (3) establishing strategies, procedures, and other steps to curb fraud. However, as previously discussed, our analysis of the 24 CFO Act agencies’ annual financial reports found that many reports issued in 2017—the first year of reporting—were incomplete and lacked detail. Some agencies did not report on their progress in identifying risks and vulnerabilities with respect to payroll, beneficiary payments, and other elements specified in the act and did not address nonfinancial fraud risks. In addition, according to our survey results, some agencies considered reporting on implementation progress in the annual financial reports a challenge. Specifically, 31 percent of agencies indicated that reporting was a great or moderate challenge (see fig. 7).
Further, some of our roundtable participants indicated that they needed more detailed guidance on what should be reported to comply with FRDAA. In the absence of more-detailed guidance from OMB, some agencies turned to each other for help. For example, some roundtable participants indicated that they looked at other agencies’ annual financial reports to see what they were reporting. While relying on other agencies’ reports can be helpful, agencies may be reviewing incomplete information based on our review of the annual financial reports, and may not have appropriate examples of how FRDAA information should be reported.
OMB’s guidance to agencies on FRDAA reporting did not include information on the level of detail agencies should report. The FRDAA section of Circular A-136 is a near-exact replication of the reporting elements listed in FRDAA and specifies the period in which agencies are to report on their progress implementing FRDAA. According to OMB staff, they included the content of FRDAA verbatim in Circular A-136 because the reporting requirements are outlined in the act. However, the act provides high-level information on what should be included in agency reports, not operational guidance on how to address the reporting requirements, which is typically outlined in executive guidance to agencies. Further, OMB staff informed us that they instructed agencies to provide a status update of fraud-reduction efforts undertaken in the final quarter of fiscal year 2016 through fiscal year 2017, but did not provide agencies with any specific guidance on how detailed that reporting should be in their annual financial reports. The Standards for Internal Control state that management should implement control activities through policies and documentation and externally communicate the necessary quality information to achieve the entity’s objective. Until OMB provides additional guidelines directing agencies to report more-complete and more-detailed information related to their progress on both financial and nonfinancial risks, some agencies may continue to report incomplete information on their full range of fraud risks and activities they are performing to manage these risks.
On the basis of the limitations we identified in agencies’ annual financial reports, Congress and OMB do not have complete and detailed information about agencies’ progress implementing FRDAA’s requirements to establish fraud controls as intended by the act. For example, as previously mentioned, 12 of the 24 CFO Act agencies did not report on payroll fraud risks, which are applicable to all agencies, and 16 did not report on nonfinancial risks such as effects on reputation and compliance with laws, regulations, or standards. The agency reporting requirement was intended to help Congress monitor the progress made by agencies in addressing and reducing fraud risk, including the successes or failures of the guidelines created by OMB as a result of the act. Similar to reporting requirements for improper payments, agencies’ reports on their progress implementing FRDAA serve as important oversight tools that can be used to evaluate agency efforts to make needed changes to their processes and policies. In the absence of additional OMB guidelines that include more-complete and more-detailed information for reporting on both financial and nonfinancial risks, some agencies may continue to produce incomplete information on their full range of fraud risks and fraud risk management activities. However, as noted, OMB did not make changes to the FRDAA section in its July 2018 update of Circular A-136, which might have informed agencies’ 2018 reporting efforts.
On the basis of FRDAA’s requirements, Congress sought 3 years of reporting on FRDAA implementation, and therefore agencies’ obligation to report on their progress expires after fiscal year 2019. Even if OMB makes changes to its guidelines in 2019 to support more-complete and more-detailed reporting, agencies would report only one time after that— in their 2019 annual financial reports, due in November 2019. We have previously reported on the importance of reporting information that helps facilitate proper stewardship of federal resources, congressional oversight, transparency, and public accountability, among other things. Without an extension of reporting requirements, Congress will not have access to useful information through this reporting mechanism to support oversight and accountability of agencies’ progress implementing the fraud risk management practices required by FRDAA.
OMB Established a Working Group, but Agencies Identified Involvement and Information Sharing as Challenges
OMB established a working group of agencies as required by FRDAA, but has not met all of the requirements for the working group, such as those related to member composition and meeting frequency. As a result of these and other working-group limitations, agencies identified a lack of involvement in, and limited information sharing from, the working group as two of the top challenges to implementing the act. As required, OMB established a working group within 180 days of enactment to improve the sharing of financial and administrative controls and other best practices for detecting, preventing, and responding to fraud, including improper payments, and the sharing and development of data-analytics techniques. OMB also submitted to Congress—but not within 270 days of enactment—a plan for the establishment and use of a federal interagency library of data analytics and data sets to facilitate fraud risk management. However, OMB did not initially include the CFO of each agency in earlier working-group meetings, or, according to OMB, meet four times per year in 2017 as required. The working group also did not effectively facilitate the sharing of controls, best practices, and data-analytics techniques, according to our survey results and roundtable discussion. OMB encountered challenges that limited its ability to fulfill some of these requirements, but did not take the necessary actions to implement others.
Plan for data library. In May 2017, OMB submitted a letter to Congress describing the working group’s plan to use a phased approach to establish a federal interagency library of data analytics and data sets, as required by FRDAA. However, OMB did not do so within 270 days of enactment, as required by FRDAA. According to OMB’s letter, the working group is taking a phased approach to developing the plan to establish an interagency data library and has taken some steps, but it identified challenges in the process. When developing the plan, the working group identified two challenges to developing the interagency data library: (1) standardizing how agencies define fraud in their programs, and (2) developing a fraud taxonomy to accurately compile fraud risks and categories. According to the letter, to address these challenges, the working group is creating a fraud-classification system that leverages the existing Association of Certified Fraud Examiners fraud-classification system. OMB’s letter also states that the working group performed an initial inventory of existing tools and materials that will be used to populate the first phase of the library, which is currently located in the OMB MAX Information System. According to the letter, the working group is partnering with agencies to identify a permanent location for the library as well as to develop future enhancements based on the needs of agencies. OMB stated in the letter that it plans to provide Congress additional information once the next phase of the library is implemented.
Working-group composition. FRDAA requires the working group to include the CFO of each agency. OMB, in its role as Chair, did not involve all of the relevant agencies in the working group, whether by inviting them to participate or by otherwise providing them access to and input into the working group, as required by FRDAA, according to agencies we surveyed and our assessment of OMB documents. In addition to the statutory requirement, we have previously reported that early outreach to participants to identify shared interests is a key practice for enhancing interagency collaboration. However, OMB’s initial working-group efforts in particular did not include some CFO Act agencies or most non–CFO Act agencies subject to FRDAA, representing missed opportunities to share practices and collaborate on ways to advance federal efforts to reduce fraud, waste, and abuse. While the May 2017 letter to Congress states that the CFO from every agency was invited to participate in the working group, OMB staff later noted that only the 24 CFO Act agencies and the Small Agency Council representative from the CFO Council were invited to the working-group meetings. OMB staff indicated that they did not independently reach out to non–CFO Act agencies to invite them to participate because they believed the Small Agency Council representative was responsible for communicating this information to its members. Nevertheless, FRDAA requires the working group to include the CFO of each agency subject to the act, as well as other parties determined to be appropriate by OMB.
According to our survey results, about half of the agencies subject to FRDAA were not at all familiar with the working group and about two-thirds did not have an entity responsible for participating in it. Non–CFO Act agencies indicated these responses more often than CFO Act agencies. Specifically, 71 percent of non–CFO Act agencies indicated they were not at all familiar with the working group compared with 21 percent of CFO Act agencies. In addition, 90 percent of non–CFO Act agencies indicated they did not have a designated person or entity participating in the working group, compared with 29 percent of CFO Act agencies (see fig. 8).
Similarly, two roundtable participants stated that they thought the working group was geared towards the CFO Act agencies. Most of the CFO Act agencies that participated in our discussion noted that they had been involved in the FRDAA working group. In contrast, almost all of the non–CFO Act agencies that participated in our discussion stated that they were not aware of the working group.
Selected Non–Chief Financial Officers (CFO) Act Agency Officials' Perspectives on Lack of Communication from and Participation in the Working Group

"There's been nothing that I'm aware at Small Agency Council level that's had meetings or anything to give extra guidance … and I think that would have been very helpful. In most things in small agencies we wait for things to trickle down from the larger agencies if OMB [Office of Management and Budget] doesn't give us guidance, and we just haven't gotten any sort of feedback."
It is also unclear how many and which CFO Act agencies attended the working-group meetings. In particular, OMB and agencies provided conflicting information about which agencies attended the working-group meetings. For example, according to one CFO Act agency roundtable participant, the representative was invited to the first meeting and not invited to the next. The participant further stated that the agency recently started to receive information from OMB. However, the information OMB provided about this agency’s involvement in working-group meetings conflicted with this participant’s description of the agency’s attendance at the first four meetings.
Agencies identified the lack of involvement in the working group as one of the top challenges to implementing FRDAA. Most CFO and non–CFO Act agencies indicated that their lack of involvement was a moderate or great challenge to implementing FRDAA (see fig. 9). Agencies that indicated having these challenges also more frequently reported challenges with sharing best practices and data-analytics techniques about fraud with other agencies, which was the purpose of the working group. The need for this coordination underscores the importance of identifying shared interests and developing collaborative solutions to help achieve outcomes.
OMB and the working group did consult with the Offices of Inspector General on fraud risk matters, as required by FRDAA, by including them in working-group meetings. In OMB's May 2017 letter to Congress, the agency reported that the working group coordinated with the Council of the Inspectors General on Integrity and Efficiency and other interagency working groups to discuss and share best practices in mission-specific areas. In addition, two agencies' Offices of Inspector General are listed as having attended the first four working-group meetings. This coordination between the working group and Inspectors General—who often identify and investigate instances of fraud in agencies—is a positive step for the working group. Inspectors General may be able to provide agencies with information that can assist the agencies in analyzing data for potential fraud, such as fraud indicators. In addition, we have previously reported that if collaborative efforts, like the working group, do not consider the input of all relevant stakeholders, important opportunities for achieving outcomes may be missed.
Frequency of meetings. The working group did not meet the FRDAA requirement to hold at least four meetings per year. OMB staff stated that there have been eight working-group meetings to date—one in 2016, three in 2017, and four in 2018—a schedule that fell short of the FRDAA requirement to meet at least four times in 2017. OMB has since shown improvement toward meeting this particular FRDAA requirement: as of October 2018, the working group had met at least four times in both fiscal year and calendar year 2018.
Vacant appointment positions at OMB and the agencies have slowed efforts to establish the working group, according to OMB staff. FRDAA requires the OMB Controller to serve as the chairperson of the working group, but as of October 2018 the Senate had not confirmed anyone for this position. During the roundtable discussion, one participant shared that there was a period when there was no OMB leadership and the working group was largely silent for months. According to OMB staff, it has also been difficult to establish agency membership of the working group due to the lack of confirmed CFOs at some of the 24 CFO Act agencies. As of September 2018, 7 of the 24 CFO Act agencies did not have a CFO. However, OMB and the working group could have held the required minimum number of meetings regardless of OMB and agency vacancies, as evidenced by the seven meetings that were held in the midst of these vacancies. Further, according to OMB staff, aside from the first meeting led by the former Controller, all working-group meetings have been led by the Deputy Controller and other OMB staff while the Controller position was vacant.
Information sharing about controls, best practices, and data-analytics techniques. It is unclear whether OMB, as chair of the working group, documented working-group meetings or any work products that were developed to facilitate sharing information about financial and administrative controls, best practices for fraud management, and data-analytics techniques. OMB staff stated that they do not have documented minutes or notes from working-group meetings, but in August 2018 stated that they uploaded work products to the FRDAA federal community site on the MAX Information System website. However, apart from two screenshots of the MAX website provided to us in February 2018, which indicated that a fraud taxonomy was among the materials produced by the working group, we were not able to obtain documentation of these work products. We have previously reported that one key practice for enhancing and sustaining agency collaboration is using plans and reports to reinforce accountability for collaborative efforts. Without documented discussions, plans, or reports for these collaborative meetings, OMB is unable to share the lessons learned from the meetings with those who cannot attend, and does not have a record of the plans and actions that the working group has agreed to take. This documentation is also important to maintaining the continuity of the working group's initiatives when leadership changes occur within the agencies and OMB.
With respect to the information that was shared at some of the initial working-group meetings, roundtable participants stated that the topics discussed were related to the interagency data library and the working-group plan required to be submitted to Congress, as OMB described in the May 2017 letter. For example, some participants confirmed that the first few meetings were spent discussing ways to establish a standard definition of fraud, the implementation plan due to Congress, and the difficulties agencies experience in sharing data. Our survey results indicate that most agencies identified the sufficiency of information coming from the working group as a great or moderate challenge in their efforts to implement FRDAA (see fig. 10).
Roundtable participants also identified data access and sharing, and inter- and intra-agency communication and collaboration, as top challenges for implementing FRDAA. We have previously reported that collaborative mechanisms can be used for a range of purposes such as information sharing. Without participation in appropriately recurring working-group meetings and documentation to facilitate information sharing, agencies will continue to miss opportunities to learn from each other’s experiences and share solutions for establishing financial and administrative controls to prevent, detect, and respond to fraud risks in their programs.
FRDAA Implementation during Broader Reforms
OMB has recently issued guidance on other government-wide reform and burden-reduction initiatives that could shape how agencies address FRDAA implementation, such as reforms that may change the structure of agencies and related programs or how agencies collect data used in managing fraud risks. These changes may present challenges and opportunities in establishing the fraud risk management practices outlined in FRDAA. As examples of these recent reforms, in March 2017 the President issued an executive order requiring a proposed plan to reorganize executive branch agencies. In April 2017, OMB provided guidance to federal agencies for developing their reform and workforce-reduction plans, as required by the President's executive order. Executive Order 13781—Comprehensive Plan for Reorganizing the Executive Branch—and other recent administration actions prompted OMB to issue a memorandum (M-17-22) that required agencies to submit an agency reform plan to OMB by September 2017. These reform plans were part of the agencies' fiscal year 2019 budget submission to OMB that included long-term workforce reductions. In addition, OMB issued a memorandum (M-17-26) that required agencies to streamline reporting requirements—an initial effort at removing duplicative, outdated reporting requirements, with the goal of making the federal government more efficient and effective.
In March 2018, OMB released the President’s Management Agenda, which provided updated information on the status of government reorganization efforts and is connected with these reform efforts. The President’s Management Agenda also identified a set of cross-agency priority goals, required under the GPRA [Government Performance and Results Act] Modernization Act of 2010, to target those areas where multiple agencies must collaborate to effect change and report progress in a manner the public can easily track. One of these collaborative efforts is focused on reducing the amount of dollars lost to taxpayers through improper payments, including payments resulting from fraud. In addition to the President’s Management Agenda, OMB was required by the March 2017 executive order to develop a comprehensive government-wide reform plan, including, as appropriate, recommendations for both legislative proposals and administrative actions based on agency reform plans, OMB-coordinated crosscutting proposals, and public input.
In June 2018, OMB released the government-wide reform plan, which consists of government-wide reorganization and reform proposals with the goal of increasing focus on integrated mission, service, and stewardship delivery. While it is too early to tell whether or how all of these reforms will affect agencies’ efforts to implement FRDAA, we have previously reported that OMB and agencies can leverage these broader reform efforts to address the high-risk areas and government-wide challenges that present vulnerabilities to fraud, waste, abuse, and mismanagement, or are in need of transformation. We surveyed the 72 agencies about whether their plans to implement reforms have had an effect on their efforts to implement FRDAA. About 83 percent of the agencies surveyed reported that they did not address aspects of their fraud risk management in their agency reform plans. Further, OMB reported to us that these plans are still evolving, and have not yet been finalized. However, as we have previously reported, OMB and agencies can consider whether (1) the agency has addressed ways to decrease the risk of fraud, waste, and abuse of programs as part of its proposed reforms and (2) the size of the workforce or resources dedicated to fraud risk management activities may be affected by any of the organizational reforms or efforts to reduce burden, and to make decisions with these considerations in mind.
Conclusions
Fraud is one contributor to financial and nonfinancial risks that cost taxpayers dollars, threaten national security, or put consumers at risk. Therefore, agencies must take a more-rigorous preventive approach to managing the risk of fraud in their programs. Compliance with FRDAA provisions can support these efforts. We recognize that effective implementation of the act will take time, and each program and agency may evolve at a different pace. While a small number of agencies reported being mature in their implementation of FRDAA activities, most are in the process of developing key fraud risk activities, and others have yet to start developing them. Wherever agencies fall on this spectrum, it is important that they continue taking actions to enhance their ability to prevent, detect, and respond to fraud risks in their programs and operations.
OMB plays an important role in supporting agencies’ efforts to manage fraud risks by providing clear guidelines and facilitating agencies’ involvement with the working group. OMB has taken steps to assist agencies, such as updating ERM guidelines and chairing working-group meetings, but improvements to these efforts could better facilitate agencies’ abilities to implement the act. Specifically, agencies reported the need for additional guidance and clarity on the actions they should take to effectively establish the required controls and report their progress on implementation of the act’s requirements, uncertainty about the difference between ERM and FRDAA requirements, and the need for more involvement and information from the working group. With enhanced guidelines from OMB and improvements to collaboration, agencies would be better positioned to improve controls and procedures to assess and mitigate fraud risks, as FRDAA intends.
Promoting the oversight and accountability of agency fraud risk activities through reporting is an important aspect of congressional oversight, as agencies enhance their fraud risk management controls. However, the progress reports submitted by agencies as part of their annual financial reports were incomplete and lacked detailed information to effectively inform Congress of agencies’ implementation status. Further, agencies are only required to report their progress in implementing the requirements of FRDAA through fiscal year 2019. However, it is not clear that more-complete information will be reported by then. Until OMB provides additional guidelines directing agencies to report more-complete and more-detailed information related to both financial and nonfinancial risks, agencies may continue to produce incomplete information on their fraud risk management activities. Requiring agencies to report on the progress of their implementation efforts beyond 2019 could better position Congress to ensure oversight and accountability.
Matter for Congressional Consideration
We are making the following matter for congressional consideration.
Congress should consider extending the requirement in FRDAA for agencies to report on their implementation of fraud controls, identification of fraud risks, and strategies for mitigating them, beyond the current 2019 expiration. (Matter for Consideration 1)
Recommendations for Executive Action
We are making the following three recommendations to OMB:
The Director of OMB should enhance the guidelines for agencies to establish the controls required by FRDAA, by clarifying the difference between FRDAA and ERM requirements, and through collaboration with agencies to determine what additional information agencies need to implement the controls. (Recommendation 1)
The Director of OMB should enhance FRDAA reporting guidelines by directing agencies to report complete and detailed information on each of the reporting elements specified by FRDAA, which should include information related to financial and nonfinancial fraud. (Recommendation 2)
The Director of OMB should ensure the working group's composition meets FRDAA requirements by inviting the CFOs of all agencies subject to the act to participate or otherwise providing them access and input into the working group, and should ensure that mechanisms to share controls, best practices, and data-analytics techniques are in place. (Recommendation 3)
Agency Comments and Our Evaluation
We provided a draft of this report to OMB for review and comment. OMB staff provided oral comments that disagreed with our three recommendations, which we summarize below. OMB staff also provided technical comments that we incorporated as appropriate.
OMB disagreed with our first recommendation that it should enhance the guidelines for agencies to establish the controls required by FRDAA by clarifying the difference between FRDAA and ERM requirements, and through collaboration with agencies to determine what additional information agencies need to implement the controls. According to OMB staff, Circular A-123 incorporates all of the guidance that agencies need to implement FRDAA and, outside of the current guidance in Circular A-123, which OMB staff stated incorporates both GAO's Standards for Internal Control and GAO's Fraud Risk Framework, agencies are in the best position to make decisions about how they should implement FRDAA. Further, OMB staff stated that they did not believe that our survey of the 72 agencies and the roundtable with the 14 agencies provided sufficient evidence that a change in their guidance is needed because these responses are based on agencies' opinions.
While Circular A-123 contains a section on Managing Fraud Risks in Federal Programs, we identified important limitations to that section of guidance in our report. In its comments on our report, OMB staff stated that other parts of Circular A-123 provide guidance on FRDAA requirements. These sections of Circular A-123 existed prior to FRDAA and therefore were not developed in response to FRDAA's requirement that OMB establish guidelines for agencies. Our review of Circular A-123 found that there are some references to managing fraud risks that are in alignment with the financial and administrative controls identified in FRDAA, and therefore we incorporated that additional information into our report. However, as we reported, agencies stated that they needed additional guidance on how to effectively establish the controls required by FRDAA, and FRDAA required OMB to establish such guidelines. Specifically, lack of guidance and unclear requirements were identified as top challenges during the roundtable discussion, and the sufficiency of OMB's guidelines was a challenge for 40 percent of the agencies we surveyed. OMB staff stated that they did not believe that our survey and roundtable results are sufficient evidence to warrant a change in their guidance because these responses are based on agencies' opinions. However, because the purpose of OMB's guidance is to assist agencies in implementing the administrative controls required by FRDAA, agencies' experiences and perspectives on the sufficiency of the guidance are an essential part of assessing its effectiveness. Therefore, we reiterate the positions expressed by many agencies that they do not have sufficient guidance on implementing FRDAA requirements related to the establishment of financial and administrative controls. As a result, our recommendation on improving this guidance is still warranted.
OMB also disagreed with our second recommendation that it should enhance FRDAA reporting guidelines by directing agencies to report complete and detailed information on each of the reporting elements specified by FRDAA, which should include information related to financial and nonfinancial fraud. According to OMB staff, Circular A-136 is sufficient guidance because it includes the requirements stated in FRDAA, and incorporating this guidance into Circular A-136 was not a requirement of the act. Although FRDAA did not require OMB to issue reporting guidance, OMB's guidance to agencies on FRDAA reporting is important because these reports can be used to evaluate agency efforts to make changes to their processes and policies. OMB Circular A-136 establishes reporting guidance for executive branch entities required to submit agency financial reports, among other things. Agencies were required to report on their progress implementing FRDAA in these reports. However, FRDAA provides high-level information on what should be included in agency reports, not operational guidance on how to address the reporting requirements, which is typically outlined in executive guidance to agencies. Consequently, the initiative that OMB took to provide guidance on FRDAA in Circular A-136 was an important step in the right direction. However, we found that the 24 CFO Act agencies' annual financial reports for 2017 were incomplete and lacked details, which can be attributed in part to the limited guidance provided by OMB. We found that 31 percent of surveyed agencies indicated that reporting on FRDAA progress was a great or moderate challenge. The agency reporting requirement was intended to help Congress monitor the progress made by agencies in addressing and reducing fraud risks, including the successes and failures of the guidelines created by OMB as a result of the act. Therefore, our recommendation to improve OMB's reporting guidelines is still appropriate.
OMB also disagreed with our third recommendation that it should ensure that the FRDAA working group's composition meets the act's requirements by inviting the CFOs of all agencies subject to the act to participate or otherwise providing them access and input into the working group, and ensuring mechanisms to share controls, best practices, and data-analytics techniques are in place. According to OMB staff, they disagreed because they believe that OMB provided an opportunity for all agencies to attend the working-group meetings and because the working group has held four meetings in 2018. However, evidence submitted by OMB throughout our review and agencies' responses to our survey indicate that not all agencies had the opportunity to participate in the working group. The working group was required to include the CFOs of every agency subject to FRDAA, including those that are not subject to the CFO Act. However, 71 percent of non–CFO Act agencies were not at all familiar with the working group, and 90 percent did not have a designated person or entity that participated in the working group, according to our survey. Moreover, 21 percent of CFO Act agencies, which represent the largest federal agencies, were not at all familiar with the working group, and 29 percent did not have a designated person or entity that participated in it, according to our survey results, as of March 2018. To ensure that we obtained information from the right contacts regarding agency participation, we surveyed the CFO or the CFO's designee of each agency subject to FRDAA. During our audit, OMB indicated that it did not have a list of CFO contacts for all agencies subject to the act, and requested that we share our list of contacts. We have agreed to do so consistent with our protocols, upon public release of the report. Given our findings, our recommendation for OMB to ensure that every agency is given the opportunity to participate is still warranted.
Our survey results also indicated that most agencies identified the sufficiency of information coming from the working group as a great or moderate challenge in their efforts to implement FRDAA. Further, OMB staff stated that they do not have documented minutes or notes from working-group meetings. As we stated in our report, without documented discussions, plans, or reports for these collaborative meetings, OMB is unable to share the lessons learned from the meetings with those who cannot attend, and does not have a record of the plans and actions that the working group has agreed to take. This documentation is also important to maintaining the continuity of the working group’s initiatives when leadership changes occur within the agencies and OMB. As we previously noted, without participation in working-group meetings and documentation to facilitate information sharing, agencies will continue to miss opportunities to learn from each other’s experiences and share solutions for establishing financial and administrative controls to prevent, detect, and respond to fraud risks in their programs. Therefore, we believe that our recommendation on ensuring mechanisms are in place to share controls, best practices, and data-analytics techniques is still warranted. Finally, although OMB did not hold the required number of meetings per year in 2017, it has done so for fiscal year and calendar year 2018, as of November 2018. Therefore, we modified our recommendation to reflect the new actions taken.
We are sending copies of this report to appropriate congressional committees and OMB. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact Rebecca Shea at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
Appendix I: Objectives, Scope, and Methodology
This report reviews agencies' and the Office of Management and Budget's (OMB) efforts to implement the Fraud Reduction and Data Analytics Act of 2015 (FRDAA). Specifically, it examines (1) federal agencies' progress and challenges in implementing fraud risk management practices, including those required by FRDAA, and (2) the extent to which OMB has taken steps that complied with FRDAA requirements and that facilitated agencies' implementation of the act. To address both of these objectives, we developed and implemented a government-wide survey of agencies subject to the act, conducted a roundtable discussion with selected agencies, reviewed the 24 Chief Financial Officer (CFO) Act agencies' annual financial reports, interviewed staff from OMB, the CFO Council, and the Council of the Inspectors General on Integrity and Efficiency, and reviewed relevant OMB circulars and documents.
Survey
Determination of Executive Branch Agencies Subject to FRDAA
To determine which agencies were subject to FRDAA and subsequently surveyed, we first sent information requests to 93 federal executive branch entities to determine whether their organization met the definition of “agency” in 5 U.S.C. § 551(1). FRDAA requires the CFO of each agency to be a member of the FRDAA working group. Therefore we identified each entity’s CFO or equivalent using publicly available websites. We sent an email to the 93 entities’ CFO or equivalent and GAO liaison, if present, to notify the agency that we planned to administer a government-wide survey related to the act and requested that an official from the entity’s Office of the General Counsel confirm whether the entity is an “agency” as defined in 5 U.S.C. § 551(1). If the CFO was not the official who was most appropriate to answer our survey about activities related to the act, we requested that the agency identify who should receive our survey. Of these 93 entities, 72 indicated they met this definition of agency, 20 reported that they did not, and 1 entity, the Central Intelligence Agency, did not respond. See table 1 for a list of the 72 executive branch agencies that identified themselves as being subject to the act.
Survey Questionnaire Development
To improve the response rate of agencies receiving our survey, while mitigating respondent burden and reducing total survey error, we developed the survey using a variety of quality-assurance techniques. Survey error can arise from population coverage, measurement, nonresponse, and processing errors associated with questionnaire surveys. GAO survey specialists determined survey design parameters and developed, tested, revised, and finalized the questionnaire, in consultation with subject-matter experts on the engagement team. The survey design parameters included population coverage, mode of administration, respondent communication methods, and protection from disclosure of identifiable information.
To reduce measurement error, we pretested the questionnaire with selected agency representatives using cognitive interviewing techniques, such as nondirective probing of answers and asking respondents to think aloud when formulating answers. This process allowed us to determine whether questions were understood and answered as intended. Specifically, pretests examined respondent issues related to comprehension of the questions, ability to accurately respond to the questions, perceptions of bias in the questions or scales, and completeness of answer responses. For example, during pretesting we probed respondents on whether our scales were appropriately balanced, and whether individual questions were likely to be applicable to all respondents. We conducted pretests over the phone with CFOs or other FRDAA-designated officials from three types of agencies, for a total of six agencies: two executive-department CFO Act agencies, two CFO Act agencies that are not executive departments, and two non–CFO Act agencies that are not executive departments. As a result of these pretests, we made modifications to question wordings, scale categories, and other response options to improve respondent comprehension, reduce respondent burden, and mitigate risks of inaccurate or biased responses.
An additional survey specialist, who had not been involved in the development of the questionnaire, also reviewed the questionnaire. We then modified the questionnaire based on suggestions made by the reviewer and subject-matter experts. The final version of the questionnaire was copy edited for grammatical and editorial errors.
The final questionnaire included questions designed to capture information about FRDAA implementation government-wide and obtain a high-level status update of agencies' implementation of the act including, but not limited to, the steps agencies had taken since the enactment of the act, fraud risk management activities, challenges they have experienced implementing FRDAA, and their perspectives about OMB's support of these efforts. It was composed of questions with predetermined answer choices (closed-ended questions) and questions without predetermined answer choices requiring written response (open-ended questions). See appendix II for survey questions and frequencies of agencies' responses.
Survey Administration
To administer the survey, we emailed each agency a fillable PDF questionnaire. We fielded the survey from January 18, 2018, through March 27, 2018. To follow up with agencies that did not respond to the initial notice, we emailed or called multiple times to encourage survey participation or provide technical assistance, as appropriate. We received usable questionnaire responses from all 72 agencies, for a response rate of 100 percent. Because this survey was sent to all agencies that were identified as being subject to FRDAA, there is no error as a result of sampling, and results cover the entire population. However, the practical difficulties of conducting any survey may also introduce other types of errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information available to respondents, or in how the data were entered into a database or analyzed can introduce unwanted variability into the survey results. With this survey, we took a number of steps to minimize these nonsampling errors. For example, our staff with subject-matter expertise designed the questionnaire in collaboration with our survey specialists, and all questions were cognitively pretested with knowledgeable respondents. When the survey data were received from agencies and analyzed, a second independent analyst on our staff verified the analysis programs to ensure the accuracy of the code and the appropriateness of the methods used for the computer-generated analysis. Since this was an electronic survey, respondents entered their answers directly into the questionnaire, thereby mitigating the need to have the data keyed into a database, thus avoiding a source of data-entry error.
Roundtable Discussion
To collect information about agencies’ experiences implementing FRDAA, we also facilitated a roundtable discussion with selected agencies subject to FRDAA that had completed the survey. The purpose of the roundtable discussion was to obtain agency officials’ perspectives on the strategies and activities used to establish fraud controls and related fraud risk management activities; the guidance and resources used to facilitate the implementation of FRDAA; their challenges in implementing FRDAA; and potential solutions to improve implementation of the act, including any additional guidance or resources that may be useful to implementing the act.
We randomly selected and invited a diverse group of agencies that are subject to FRDAA. We planned for a group of agencies that were diverse in terms of the following:
1. agency type, such as whether the agency was a CFO Act agency, an executive department or non–executive department, and membership in the Small Agency Council; and
2. FRDAA implementation status, as indicated by their responses to two survey questions: "overall, what is the status of your agency-wide efforts to implement FRDAA" and "as of today, does your agency do the following to manage fraud risk at the agency-wide level."
We used the survey responses to divide agencies into two groups, a more-mature implementation group and a less-mature implementation group; a simplified sketch of this type of grouping and selection approach follows.
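The following is a minimal illustrative sketch of stratified random selection of the kind described above; it is not GAO's actual selection procedure, and the agency records, field names, and per-stratum sample size are hypothetical assumptions.

```python
# Illustrative only: hypothetical agency records keyed by the two
# stratification dimensions described above (agency type and maturity).
import random

agencies = [
    {"name": "Agency A", "cfo_act": True,  "exec_dept": True,  "more_mature": True},
    {"name": "Agency B", "cfo_act": True,  "exec_dept": False, "more_mature": False},
    {"name": "Agency C", "cfo_act": False, "exec_dept": False, "more_mature": True},
    # ... one record per agency subject to FRDAA
]

def stratum(agency):
    """Key each agency by (agency type, implementation maturity)."""
    if agency["cfo_act"]:
        kind = "exec-dept CFO Act" if agency["exec_dept"] else "non-exec CFO Act"
    else:
        kind = "non-CFO Act"
    return (kind, "more mature" if agency["more_mature"] else "less mature")

def select_invitees(agencies, per_stratum=2, seed=1):
    """Randomly draw up to a fixed number of agencies from each stratum,
    so that every agency type and maturity level is represented."""
    random.seed(seed)
    strata = {}
    for agency in agencies:
        strata.setdefault(stratum(agency), []).append(agency)
    invitees = []
    for members in strata.values():
        invitees.extend(random.sample(members, min(per_stratum, len(members))))
    return invitees

print([a["name"] for a in select_invitees(agencies)])
```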
We invited a total of 27 agencies to participate in our roundtable, an initial group of 20 agencies and 7 backup agencies. Fourteen agencies attended our roundtable: six executive-department CFO Act agencies; two CFO Act agencies that are not executive departments; and six Small Agency Council member agencies. Agency representatives included agency officials with responsibility for antifraud activities, including either the agency’s CFO, Chief Risk Officer, or other staff responsible for fraud risk management activities.
The roundtable discussion was held March 26, 2018, and included three sessions: an opening session, a breakout session, and a closing session. In the opening session, all 14 of the roundtable participants were given an overview of our researchable questions and the agenda for the day. Then the agencies were split into two breakout groups based on their responses to our survey questions about the maturity of their implementation of FRDAA. In the two breakout groups, roundtable participants discussed the guidance and resources they used for implementation of the act, their approaches used for implementation of the act, and the strategies and challenges associated with implementation of the act. In each breakout group, roundtable participants identified and voted on their top challenges in implementing FRDAA. After the breakout session, GAO facilitators and subject-matter experts on the engagement team then met to create a new list of the top voted challenges of both groups as well as any crosscutting challenges. Finally, in the closing session, all 14 agencies came back together to recap the breakout discussions, have a broader discussion about experiences of successful implementation and potential solutions to improve implementation (including any additional guidance or resources that may be useful to implementing the act), and again vote on their top challenges to implementing FRDAA. These results are not generalizable to agencies beyond the 14 that participated.
Fiscal Year 2017 Annual Financial Reports
To further assess steps that agencies have taken to implement fraud risk management practices, as required by FRDAA, we also reviewed the fiscal year 2017 annual financial reports for the 24 agencies subject to the CFO Act. FRDAA required agencies to report to Congress on the status of their efforts to implement financial and administrative controls that incorporate leading practices from GAO’s Fraud Risk Framework, identify fraud risks, and establish strategies to mitigate fraud in these reports. We selected these 24 agencies because they were known at the time of our selection to be agencies that were subject to FRDAA, and are estimated to account for over 99 percent of the government-wide improper payments in fiscal year 2015. These agencies also are required to submit their reports directly to GAO. We conducted a content analysis to determine the completeness and quality of the information provided in these reports related to these FRDAA requirements.
Because content analysis relies on the judgment of coders to determine whether qualitative data reflects particular categories, we took several steps to ensure that this judgment remained objective, accurate, and consistent. Prior to beginning the content analysis, we worked with subject-matter and legal experts to develop a codebook and definitions for the different kinds of information that FRDAA requires agencies to report, as well as supplemental coding categories related to leading practices in fraud risk management identified in our framework. In order to test the clarity of these codes, we had four independent analysts pretest the content analysis on two annual financial reports, and found high levels of interrater reliability. Specifically, each of the categories had at least 95 percent agreement between coders. As a result of this pretest, minor changes were made to the category definitions.
We used two independent coders within GAO to ensure consistent judgment of categories. For the content analysis, each of the 24 annual financial reports was coded by two independent analysts, including one subject-matter expert familiar with fraud risk management and another familiar with each of the CFO Act agencies. Agreement among coders exceeded 99 percent across all of the coding categories. On the basis of this high level of agreement between coders, we are confident that our content analysis represents an objective, accurate, and consistent assignment of these coding categories. Because these coding categories would be further reviewed in making our determinations about completeness and detail, we decided to resolve any intercoder disagreements by keeping all coded material for that review.
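As an illustration of the percent-agreement measure described above, the following is a minimal sketch; the coding categories, True/False codings, and function are hypothetical and are not the codebook or code GAO used.

```python
# Illustrative only: each coder records, for every coding category, whether
# a given annual financial report contains that element (True/False).
coder_1 = {"fraud_risks_identified": True, "controls_described": True,
           "mitigation_strategy": False}
coder_2 = {"fraud_risks_identified": True, "controls_described": False,
           "mitigation_strategy": False}

def percent_agreement(a, b):
    """Share of coding categories on which both coders made the same call."""
    shared = a.keys() & b.keys()
    matches = sum(1 for category in shared if a[category] == b[category])
    return 100.0 * matches / len(shared)

print(f"Agreement: {percent_agreement(coder_1, coder_2):.1f}%")  # 66.7%
```

In practice, agreement would be computed across all reports and coding categories; the figures cited above (95 percent in the pretest and more than 99 percent in the full coding) reflect that broader calculation.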
To assess the completeness of agencies' reporting on FRDAA implementation, we broke out the unique requirements in each of the three broad categories outlined in FRDAA's reporting requirements. As a result, our analysis included an assessment of 11 coding categories, which are listed with their definitions in table 2 below. An element was considered present if the corresponding code was applied one or more times in the annual financial reports, and missing if the corresponding code was applied zero times. Each annual financial report was then categorized into one of four categories of completeness, based on these assessments (a minimal sketch of this categorization rule follows the list):
1. Fully complete: agencies with reports that contained information on all 11 elements.
2. Mostly complete: agencies with reports that contained information on 6–10 elements.
3. Partially complete: agencies with reports that contained information on 1–5 elements.
4. Not at all complete: agencies with reports that contained information on 0 elements.
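The four-category rule above can be expressed directly in code. The rule itself (11 elements; 0, 1–5, 6–10, or 11 present) comes from the methodology as described; the function name and the example count are illustrative.

```python
def completeness_category(elements_present: int) -> str:
    """Map a count of reported FRDAA elements (0-11) to a completeness label."""
    if not 0 <= elements_present <= 11:
        raise ValueError("element count must be between 0 and 11")
    if elements_present == 11:
        return "Fully complete"
    if elements_present >= 6:
        return "Mostly complete"
    if elements_present >= 1:
        return "Partially complete"
    return "Not at all complete"

# Hypothetical example: a report that addressed 7 of the 11 elements.
print(completeness_category(7))  # "Mostly complete"
```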
In addition to assessing whether the annual financial report contained these elements, as required by FRDAA, we also reviewed the content of each of these coding categories, as well as additional categories related to leading practices in fraud risk management. In order to demonstrate the range of the quality and level of detail provided for each element, and for the overall reporting on fraud risk management efforts, we reviewed the specific coded excerpts in NVivo for each agency and summarized the level of detail, length, and other observations specific to each category.
To address our second objective, determining the extent to which OMB has taken steps that complied with FRDAA requirements and that facilitated agencies’ implementation of the act, we reviewed relevant documents produced to support the implementation of FRDAA. We also assessed the extent to which the guidelines were consistent with leading practices from the Fraud Risk Framework and the Standards for Internal Control in the Federal Government.
To determine the extent to which OMB has taken steps that complied with FRDAA requirements and facilitated agencies' implementation of the act, we did the following:
1. We interviewed staff from OMB's Office of Federal Financial Management and Office of Personnel and Performance Management regarding their development of guidelines, the working group, and any challenges OMB may have experienced while implementing the act's requirements, to determine the extent to which OMB's efforts to facilitate agency implementation of the act were viewed as helpful by agencies.
2. We reviewed relevant memorandums, circulars, and other OMB documents including Circular A-123, Management's Responsibility for Enterprise Risk Management and Internal Control, and Circular A-136, Financial Reporting Requirements, and compared these with the requirements for OMB outlined in FRDAA.
3. We evaluated agencies' perspectives and experiences using OMB's guidelines and other initiatives to implement the act by assessing our survey responses, annual financial-report analysis, and roundtable discussion for responses related to OMB guidelines and other efforts, and related strengths and challenges.
4. We also interviewed officials from the CFO Council and Council of the Inspectors General on Integrity and Efficiency to get a broader opinion about the effectiveness of OMB and agency efforts to implement FRDAA.
We conducted this performance audit from August 2017 to December 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Results of GAO’s Survey on Agencies’ Implementation of the Fraud Reduction and Data Analytics Act of 2015
To obtain information about the extent to which executive branch agencies have taken steps required by the Fraud Reduction and Data Analytics Act of 2015 (FRDAA), we identified 72 agencies subject to the act and surveyed these agencies about their fraud risk management practices and related challenges. We received responses from all 72 agencies, for a response rate of 100 percent. The questions we asked in our survey and the percentage of agencies’ responses are shown below. Our survey was composed of questions with predetermined answer choices (closed-ended questions) and questions without predetermined answer choices requiring written response (open-ended questions). In this appendix, we include all survey questions and results of responses to the closed-ended questions; we do not provide information on responses to open-ended questions.
The tables below represent the percentage of agencies' responses to the closed-ended questions. The percentages we report are rounded to the nearest whole number. For a more-detailed discussion of our survey methodology, see appendix I.
Survey question 13: What other information, if any, should GAO know about your agency’s efforts to implement FRDAA or manage fraud risks? (open-ended response)
Survey question 14: Do you have any additional explanations for your answers or comments on any of the issues in this questionnaire? (open-ended response)
Survey question 15: Please enter the contact information for the primary person who completed this survey. (open-ended response)
Appendix III: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Latesha Love (Assistant Director); Georgette Hagans (Analyst in Charge); Sarah Cantatore, Joy Kim, Grant Mallie, James Murphy, Eve Nealon, Steven Putansu, Kristen Timko, and Shana Wallace made key contributions to this report. Other contributors include Marcus Corbin, Carrie Davidson, Colin Fallon, Barbara Lewis, and Maria McMullen.

Why GAO Did This Study
Fraud poses a significant risk to the integrity of federal programs and erodes public trust in government. Implementing effective fraud risk management processes can help ensure that federal programs fulfill their intended purpose, spend their funding effectively, and safeguard assets.
FRDAA requires agencies to establish internal controls to manage their fraud risks and to report implementation progress for the first 3 years after enactment. It also directs OMB to (1) develop guidelines for agencies to establish fraud risk management controls and (2) establish a working group to share best practices in fraud risk management and data analytics.
GAO was asked to review agencies' and OMB's efforts to implement FRDAA. This report examines steps (1) agencies and (2) OMB have taken to implement FRDAA. GAO conducted a survey of the 72 agencies subject to the act, held a roundtable discussion with 14 selected agencies, reviewed 24 selected annual financial reports, examined OMB guidelines, and interviewed OMB staff.
What GAO Found
At varying stages, agencies have begun planning for and implementing fraud risk activities (like conducting an evaluation of fraud risks) required by the Fraud Reduction and Data Analytics Act of 2015 (FRDAA), according to GAO's survey of agencies subject to the act. Overall, most of the 72 surveyed agencies (85 percent) indicated that they have started planning how they will meet FRDAA requirements, and about 78 percent indicated that they have also started taking steps to implement the requirements.
To assist agencies in implementing fraud risk management activities, the Office of Management and Budget (OMB) established FRDAA-related guidelines and a working group, as required by the act. However, agencies experienced challenges with OMB's guidelines and the working group, among other things, according to GAO's survey and roundtable discussion results (see figure below).
Implementation guidelines. To meet FRDAA requirements, OMB updated Circular No. A-123 guidelines that govern executive agencies. However, this update included limited information on the methodologies agencies can use to assess, document, and report on internal controls required by FRDAA, according to GAO's review of the guidelines. Surveyed agencies had mixed perspectives on the usefulness of OMB's guidelines for implementing FRDAA controls. Similarly, agencies identified the lack of clear requirements and guidance as top challenges in GAO's roundtable discussion with 14 selected agencies.
Reporting on implementation progress. Although not required by FRDAA, OMB updated annual financial report guidelines to include FRDAA requirements, but GAO found that the guidelines did not contain enough information to aid agencies in producing complete and detailed progress reports in 2017, the first year of reporting. Additional guidelines from OMB could help agencies produce more complete and detailed reports for 2019, the final year of required reporting. Without a longer reporting period, however, Congress may not have useful information for continued oversight of agencies' progress.
Working Group. OMB has taken steps to establish the working group, but GAO found the working group did not fully meet FRDAA requirements. As Chair, OMB did not (1) involve all agencies subject to the act in the working group or (2) hold the required number of meetings in 2017. Most surveyed agencies indicated a lack of involvement with and information from the working group as challenges in implementing FRDAA.
What GAO Recommends
GAO is making three recommendations, including that OMB (1) enhance its guidelines for establishing controls, (2) enhance guidelines for reporting on agencies' progress, and (3) fully implement the working group. OMB did not concur with the need for the recommendations. GAO continues to believe the recommendations are valid, as discussed in the report. Additionally, Congress should consider extending agencies' reporting requirements. |
Background
In 2010, the Patient Protection and Affordable Care Act (PPACA) authorized the establishment of the Patient-Centered Outcomes Research Institute (PCORI) to improve the quality and relevance of comparative effectiveness research (CER). PPACA also established requirements for the Department of Health and Human Services (HHS) to, among other things, disseminate findings from federally funded CER, including findings published by PCORI, and coordinate with relevant federal health programs to build data capacity for research. PPACA established a Trust Fund to fund these CER activities by PCORI and HHS through fiscal year 2019.
PCORI Activities Required by PPACA
PPACA authorized the establishment of PCORI as a federally funded, nonprofit corporation aimed at advancing the quality and relevance of evidence through research to help patients, clinicians, purchasers, and policy-makers to make informed health care decisions. PCORI is required to identify research priorities, establish a research project agenda, fund research consistent with its research agenda, and disseminate research findings, among other responsibilities. In 2015 we reported that PCORI had conducted activities consistent with its legislative requirements. For example, we reported that since its inception in 2010, PCORI established and implemented priorities for funding CER and related activities, developed plans to disseminate funded research and track its utilization, and took steps to make its research more centered on outcomes prioritized by patients. Further, PCORI developed PCORnet as a distributed research network initiative that enables electronic health-related data from multiple sources to be available for research.
HHS CER-related Activities Required by PPACA
PPACA requires HHS to carry out several functions related to CER, which it has implemented through the Agency for Healthcare Research and Quality (AHRQ) and the Office of the Assistant Secretary for Planning and Evaluation (ASPE). Specifically, AHRQ is required to disseminate and support the incorporation of CER funded by PCORI and other federal entities, as well as to foster capacity for conducting CER by supporting training in the methods used to conduct such research. ASPE, in turn, is required to build data capacity for conducting CER. In 2015, we reported that AHRQ had taken some steps to disseminate research findings, but had not taken other actions to help it fully address its dissemination requirements. Furthermore, we reported that ASPE coordinated among various agencies to fund projects intended to build data capacity for research, but that its approach lacked key elements—such as defined objectives, milestones, and time frames—that are necessary to ensure effectiveness. In our 2015 report, we made five recommendations to HHS to direct AHRQ and ASPE to address these issues, as appropriate. HHS concurred with these recommendations and specified actions it would take to address them. Four of the recommendations have since been implemented.
PPACA Funding for CER
PPACA established the Trust Fund through which PCORI and HHS receive funds for CER activities. The law provides that for fiscal years 2010 through 2019, the Trust Fund will receive appropriations from the general fund of the Treasury, transfers from the Medicare trust funds, and fees collected by the Department of the Treasury (Treasury) from private insurance and self-insured health plans. Eighty percent of the amounts in the Trust Fund must be made available to PCORI in fiscal years 2011 through 2019, and Treasury must transfer the remaining 20 percent to the Secretary of HHS in each of those years. Under current law, appropriations and transfers to the Trust Fund will end in fiscal year 2019. The law also provides that no amounts shall be available for expenditure from the Trust Fund after September 30, 2019, and specifies that any amounts remaining in the Trust Fund after that time will be transferred to the general fund of the Treasury. (See fig. 1 for an overview of transfers to the Trust Fund and distribution of funds to PCORI and HHS).
PPACA limits the use of CER in certain ways; for example, the law prohibits PCORI from developing or using a dollars-per-quality-adjusted life-year measure to establish what type of health care is cost effective or recommended, and prohibits the Secretary of HHS from using such measures as a threshold to determine coverage, reimbursement, or incentive programs under Medicare. HHS may use CER findings to help inform Medicare coverage decisions, but PPACA does not allow Medicare coverage to be denied solely on the basis of CER findings.
PCORI Committed Funds Primarily to Research and Data Capacity Efforts; Awards for Dissemination and Implementation of Findings Were Limited as Most Research Was Still Underway
In fiscal years 2010 through 2017, PCORI committed about $1.6 billion (or 79 percent of its total award commitments of $2.0 billion) to awards for conducting CER and $325 million (or 16 percent) to awards for building data capacity for research. In addition, PCORI committed $93 million for engagement and workforce awards to involve stakeholders in the research process and expand the research workforce, and committed $12 million for awards to disseminate and implement its research findings. Awards for the dissemination and implementation of its research findings were limited as of the end of fiscal year 2017, as most of this research was still underway. (See table 1 for PCORI’s award commitments for fiscal years 2010 through 2017.)
By the end of fiscal year 2024, PCORI projects to spend a total of almost $3.3 billion, which reflects its projected Trust Fund revenue through fiscal year 2019 plus interest income. This total amount encompasses the commitments PCORI has made for awards through fiscal year 2017, as well as $514 million in projected additional research award commitments to be made by the end of fiscal year 2019 and $207 million for other award commitments to be made by the end of fiscal year 2021. In addition to awards, the total includes PCORI’s expenditures for program and administrative support services in fiscal years 2010 through 2017, as well as projected expenditures for these services through fiscal year 2024. (See fig. 2 for PCORI’s actual and projected commitments and expenditures and see app. I for an overview of PCORI’s awards.)
The following information provides details on PCORI’s awards related to research, building data capacity, engagement and workforce activities, and the dissemination and implementation of its research findings.
Research Awards
PCORI committed $1.6 billion, or 79 percent of its total award commitments, for research in fiscal years 2010 through 2017. In fiscal years 2018 and 2019, PCORI projects to commit an additional $514 million for research awards. PCORI research awards have increasingly focused on conditions that impose a substantial health or financial burden on patients and the healthcare system. (See table 2 for information on the health conditions that received the highest research award funding.)
Similar to certain types of CER that may take many years to conduct, the entire research award process for PCORI-funded CER may span multiple years from the funding announcement to the dissemination of completed research. Specifically, the process PCORI established can take as many as 6 years, which includes requesting and reviewing proposals, awarding contracts, recruiting participants or obtaining data, conducting and reviewing research, and disseminating findings; it typically involves awards that span multiple years. For example, PCORI estimates that the typical time frame for announcing funding and selecting applications to receive research awards is 8 to 11 months, as PCORI brings scientists, patients, payers, and other stakeholders together to prioritize proposals based on the impact of the condition, potential to improve health, technical merit, patient-centeredness, and engagement. (See fig. 3.)
Most of PCORI's research projects awarded through fiscal year 2017 were still underway. Only 53 of its 543 research projects had been completed as of the end of fiscal year 2017—in part because PCORI's research award process typically takes 2 to 6.5 years to complete, and because almost two-thirds of the funds committed for research projects were awarded in fiscal years 2015 through 2017. While most PCORI-funded research is underway, a larger number of research studies are projected to be completed by the end of each year from 2018 to 2022, with all of the remaining studies to be completed by 2024. (See fig. 4.) PCORI officials told us that the institute attempts to manage its funds to ensure that its research awards are funded and managed through completion, including peer review and the distribution of research findings, in recognition of the time needed to conduct this research as well as the uncertainty regarding the total amount of funding available.
Officials from all but one of the stakeholder organizations we interviewed—public and private payers, health care providers, and patient advocacy organizations that represented potential users of CER—generally supported PCORI's research award priorities. Most of the stakeholders we interviewed stressed the importance of research conducted by unbiased organizations, such as the federally funded research supported by PCORI and HHS. In addition, most stakeholders also told us that PCORI's efforts to engage patients in the research process have changed the way research is conducted for the better, such as prioritizing research outcomes that are most meaningful to patients. However, officials from an organization representing payers (and from an individual health plan) told us that PCORI's priorities did not fully align with their needs, such as their needs for CER on certain high-cost conditions, medications, or treatments.
Building Data Capacity Awards
PCORI committed the second largest portion of award funding—$325 million through fiscal year 2017—for awards to build data capacity for research through the development of PCORnet. PCORI officials told us that the institute supported the development of the PCORnet initiative in order to use existing medical records and claims data, transforming much of that data into a common data model for clinical research until such data are standardized in electronic health records and can easily be used for research. As of December 2017, PCORnet included 36 partner networks agreeing to link their electronic claims and health data. PCORI officials told us that this distributed data network already comprises a nationally representative sample of approximately 128 million individuals whose data can be used in randomized clinical trials, large observational studies, and other research. In fiscal years 2018 and 2019, PCORI projects to commit an additional $70 million for these awards to continue building this data capacity.
PCORnet research is managed through its Coordinating Center, which oversees the translation of certain categories of the partner networks’ data into the common data model and forges agreements with each of the partners to share results of queries using their data with researchers.
This research process generally starts when a researcher requests to query data on a specific population, after which PCORnet may approve the request and invite network partners to participate. Participating network partners then run queries on their data following established parameters and submit the results to a secure portal that the researcher can access in order to analyze the results for research. (See fig. 5.) PCORI officials told us that there were 32 research projects using PCORnet that received funding through PCORI’s research award process as of the end of December 2017, as well as 45 research projects funded by other parties, including federal agencies and private industry.
Further, as part of its building data capacity awards, in fiscal year 2017 PCORI committed $25 million to the People-Centered Research Foundation, a nonprofit foundation formed in March 2017 to support the network partners and other entities conducting research using PCORnet. This funding was provided to support this foundation’s development of a business plan, as well as its governance structure, to ensure the continuity of the PCORnet network partnership efforts after PCORI funding for PCORnet ends. PCORI has indicated it may provide additional funding to the foundation, provided that the foundation and the networks make progress toward self-sustainability.
Officials from most stakeholder organizations we interviewed generally agreed that PCORnet offers value by improving the data available to conduct CER. Officials from two organizations told us that PCORnet has made it possible to use network partners’ aggregated data to make conducting research more efficient than in the past.
Engagement and Workforce Awards
Through fiscal year 2017, PCORI also committed $93 million for engagement and workforce awards. For example, PCORI committed a total of $63 million for engagement awards, intended to involve a variety of stakeholders in the research process and to improve the methodology for carrying out CER. Engagement awards include “Eugene Washington Engagement Awards” that are intended to bring patients, caregivers, clinicians, and other healthcare stakeholders into the research process and to disseminate study results. In addition, “Pipeline to Proposal Awards” are intended to bring together stakeholders with strong interests in a specific health issue to develop research proposals to address their needs. Officials from the two patient advocacy organizations we interviewed told us that PCORI’s engagement awards have helped to support patient involvement in the research process. For example, one official noted that, while it has not been easy to find patients willing to participate, these awards have been important to train and support patients in the research process.
PCORI also committed $30 million to workforce training awards for clinicians and researchers. For example, one of PCORI’s career development programs, conducted in partnership with AHRQ, is designed to train clinician and research scientists to conduct patient-centered outcomes research and to actively engage stakeholders in efforts to improve the quality and safety of care.
Dissemination and Implementation Awards
Dissemination and implementation awards for PCORI-funded research findings have thus far been limited because most of the research was still underway; however, according to PCORI officials, awards for this work will increase substantially as research is completed. Specifically, through fiscal year 2017, PCORI committed a total of $12 million for awards to disseminate and implement PCORI-funded research by helping researchers and other stakeholders to publicize findings and by supporting patients and providers in utilizing findings. PCORI projects to commit an additional $91 million for these awards in fiscal years 2018 through 2021.
Dissemination and implementation awards are intended to encourage PCORI awardees that have completed research and their patient and stakeholder partners to pursue strategic activities to disseminate and implement their findings. For example, PCORI awarded about $0.4 million to increase awareness and promote the use of research findings on using technology to deliver virtual care home visits for those with Parkinson’s disease. According to PCORI, these funds will be used to train neurologists and other health professionals to provide virtual care for patients in their homes. In addition, as part of its efforts to summarize research findings, PCORI also awarded funds to the American Institutes for Research to establish a Translation Center that develops two summaries of each of PCORI’s research findings: a public abstract for general audiences that is also translated into Spanish and a professional abstract for clinicians.
In addition to awards, PCORI has fostered the dissemination and implementation of its research findings in other ways, including through its website, publications, and roundtable briefings. For example, according to PCORI, it posts research findings on its website within 90 days of receiving final peer-reviewed research results so that patients and providers have access to the information to make healthcare decisions. In addition, according to PCORI, it pays journals’ open access fees to allow free public access to selected research and plans to support research awardees to place accepted journal manuscripts in the PubMed Central database. PCORI also facilitates roundtable briefings that bring together clinicians, patients, and others with interests in recent findings in order to build support for immediate use of the findings. PCORI also coordinates its dissemination efforts with AHRQ.
PCORI considers the implementation of its research methods and findings to be an integral part of its dissemination efforts and a culmination of its work, and so it has begun efforts to track implementation, such as the number of its findings published in peer-reviewed journals and the use of its findings in clinical care. For example, PCORI officials told us that there were 891 publications in peer-reviewed journals that resulted from studies fully or partially funded by PCORI through October 2017. According to PCORI, two PCORI-funded studies on prostate cancer, one study on oral versus intravenous antibiotics for certain children, and one study on self-monitoring of blood glucose were included in medical resource software that is used by nearly 90 percent of academic medical centers in the United States.
Most of the stakeholder officials we interviewed noted the importance of disseminating research findings quickly and in ways that are readily available and understandable to both experts and the general public to raise awareness about the findings. While officials representing two payers noted limitations to the usefulness of PCORI’s research findings because they do not take treatment costs into account, most stakeholder officials noted the importance of the PCORI-funded research underway and looked forward to utilizing the research findings once they become available. In particular, officials representing provider and patient advocacy organizations told us that they were interested in ensuring that the most important research findings would be quickly implemented by patients and clinicians.
HHS Obligated Funds Primarily for the Dissemination and Implementation of Research
Between fiscal years 2011 and 2017, HHS’s AHRQ obligated about $260 million (or 58 percent of HHS’s $448 million in total obligations) for the dissemination and implementation of CER findings. According to AHRQ officials, because most PCORI-funded research had not been completed by the end of fiscal year 2017, these efforts were primarily focused on the dissemination and implementation of research funded by other entities, including NIH and the Centers for Disease Control and Prevention (CDC). Additionally, AHRQ obligated $94 million for efforts to train researchers on conducting CER, and ASPE obligated $85 million for efforts to build data capacity. AHRQ and ASPE have obligated a total of $9 million for administrative activities during those years. Table 3 provides an overview of HHS’s obligations in each fiscal year.
AHRQ and ASPE plan to obligate an additional $120 million for dissemination and implementation, training, building data capacity, and administrative activities during fiscal years 2018 through 2020. They expect to have $245 million available to fund ongoing and future CER activities, based on expected transfers from the Trust Fund in fiscal years 2018 and 2019. (See fig. 6.)
The following information provides details on HHS-funded projects related to dissemination and implementation, training on conducting CER, and building data capacity.
Dissemination and Implementation
During fiscal years 2011 through 2017, AHRQ obligated a total of $260 million for CER dissemination and implementation initiatives and plans to obligate an additional $93 million for these initiatives in fiscal years 2018 through 2020. According to officials, AHRQ plans to fund additional dissemination and implementation initiatives in fiscal years 2018 and 2019 but had not finalized those plans as of January 2018. (See app. II for an overview of all of AHRQ’s dissemination and implementation initiatives.) AHRQ’s dissemination and implementation initiatives comprise efforts to synthesize CER findings, translate and communicate research findings to potential users, and implement them:
Synthesis of CER findings: According to AHRQ officials, AHRQ’s Evidence-Based Practice Centers developed 48 systematic reviews of CER findings based on completed research. As of the end of fiscal year 2017, 40 of these reviews had been published, while 8 were still in progress. Officials told us that these systematic reviews have likely not included PCORI-funded research, as most of that research had not been completed by the end of fiscal year 2017.
Translation and communication of CER findings: AHRQ funded initiatives that—according to the agency—are aimed at making CER findings accessible and understandable to health care professionals, patients, and others. For example, AHRQ developed a “Library of Patient-Centered Outcomes Research Resources” website with links to CER databases maintained by other entities, including NIH and PCORI. Another example is AHRQ’s “John M. Eisenberg Center for Clinical Decisions and Communications Science,” which translates research findings into information that can be used by consumers, health care providers, and policymakers.
Implementation of CER findings: AHRQ funded four key initiatives to implement CER findings. According to AHRQ officials, one of the four initiatives includes PCORI-funded research, while the other three have thus far focused on implementing existing CER funded by other entities:
The “Dissemination and Implementation Initiative” was designed to disseminate and implement government-funded CER findings—including PCORI-funded findings—relevant to physicians, healthcare providers, patients, and others. This initiative consists of a multi-step approach for identifying several areas of CER each year that—according to AHRQ officials—have the greatest potential for impact and are feasible to implement. (See figure 7 for an overview of this process.) According to AHRQ officials, as of December 2017, 37 findings had been nominated for consideration under AHRQ’s Dissemination and Implementation Initiative, including 5 findings nominated by PCORI. According to these officials, of the 5 findings PCORI nominated, 1 is under consideration for implementation, 2 were rejected—1 because of insufficient impact and 1 because of challenges in implementation feasibility—and 2 are still under review.
The “Evidence Now” initiative disseminates CER evidence directly to primary care practices and supports them in implementing clinical and organizational evidence in practice through regional cooperatives.
The “Comparative Health System Performance Initiative” established three centers of excellence and a coordinating center to identify, classify, track, and compare health systems. AHRQ’s goal is to understand the factors that affect health systems’ use of CER and to identify best practices in disseminating and using CER.
The “Clinical Decision Support (CDS) Initiative” is designed to use CDS to promote the timely incorporation of CER findings into clinical practice.
Some of AHRQ’s dissemination and implementation initiatives—such as “Evidence Now” and “CDS Initiative”—include an evaluation component, as described in app. II. According to AHRQ officials, as of January 2018 results from these evaluations were not yet available.
Training on Conducting CER
Between fiscal years 2011 and 2017, AHRQ obligated a total of $94 million for awards supporting training in the methods used to conduct CER. AHRQ plans to obligate an additional $14 million for training on conducting CER by fiscal year 2020. AHRQ has funded eight categories of awards for individual researchers or research institutions. For example, AHRQ’s “Infrastructure Development Program in Patient-Centered Outcomes Research” award supports institutions in the development of their capacity to conduct and implement CER, and its “Institutional Mentored Career Development Award Program in Patient-Centered Outcomes Research” supports the development of researchers in academic and applied settings. (See app. III for an overview of these awards.) Starting in fiscal year 2018, AHRQ plans to fund an additional training award category in conjunction with PCORI. AHRQ developed a plan to evaluate its training activities and, according to AHRQ officials, the evaluation is expected to be funded in fiscal year 2018.
Building Data Capacity
Between fiscal years 2012 and 2017, ASPE obligated a total of $85 million for 30 projects designed to build data capacity for conducting CER and plans to obligate an additional $6 million to existing projects and 1 new project through fiscal year 2019. Officials told us that ASPE plans to fund additional projects to build data capacity in fiscal years 2018 and 2019, based on HHS leaders’ priorities, but had not finalized those plans as of January 2018. ASPE manages these projects, which are largely carried out by other HHS agencies through interagency agreements and are intended to develop and maintain a comprehensive, interoperable data network to collect, link, and analyze data on outcomes and effectiveness from multiple sources for CER. (See app. IV for an overview of these activities.)
In response to a recommendation in our 2015 report on HHS’s CER activities, ASPE implemented a monitoring system to track progress toward its milestones and deliverables for these projects. ASPE also contracted to evaluate its projects to build data capacity for CER. The evaluation, completed in December 2017, found that ASPE made progress managing these projects towards the core functionalities outlined in its strategic framework. However, among other things, the evaluation found that additional efforts are needed to explore how to enhance data privacy and security, ensure data quality, and operationalize related standards. According to ASPE officials, the evaluation will inform the development and implementation of future ASPE projects to build data capacity for conducting CER.
Agency Comments
We provided a draft of this report to PCORI and HHS for review and comment. PCORI and HHS provided technical comments, which we incorporated as appropriate.
We are sending copies of this report to the appropriate congressional committees, the Executive Director of PCORI, the Secretary of Health and Human Services, the Director of AHRQ, the Assistant Secretary for ASPE, and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov.
If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in app. V.
Appendix I: Patient-Centered Outcomes Research Institute (PCORI) Award Commitments Made During Fiscal Years 2010 through 2017
Research awards: These awards generally fund research studies in priority areas for conditions that impose a substantial burden on patients and the healthcare system. Information about individual research awards can be found at: https://www.pcori.org/research-results?f%255B0%255D=field_project_type%3A298&f%5B0%5D=field_project_type%3A298#search-results.
Building data capacity awards: These awards fund infrastructure projects to build data capacity through the development of PCORnet and support for clinical and patient-powered data research networks. Information about individual building data capacity awards can be found at: https://www.pcori.org/research-results?f%5B0%5D=field_project_type%3A441#search-results.
Engagement and workforce awards: Engagement awards fund projects to improve the methodology for carrying out research by involving patients, caregivers, clinicians, and other healthcare stakeholders in the research process. Workforce training awards provide accredited continuing education opportunities, in coordination with the Agency for Healthcare Research and Quality, for researchers and clinicians. Information about individual engagement awards and workforce awards can be found at: https://www.pcori.org/research-results?f%5B0%5D=field_project_type%3A299#search-results and https://www.pcori.org/research-results/2017/k12-institutional-mentored-career-development-program.
Dissemination and implementation awards: These awards are intended to help researchers and other stakeholders publicize findings and to support the utilization of findings by patients and providers. Information about individual dissemination and implementation awards can be found at: https://www.pcori.org/research-results?f%5B0%5D=field_project_type%3A308#search-results.
Appendix II: Agency for Healthcare Research and Quality’s (AHRQ) Dissemination and Implementation Initiatives, as of September 30, 2017
Disseminates evidence to primary care practices and supports them in implementing clinical and organizational evidence in practice through regional cooperatives. AHRQ awarded a separate grant to establish an independent, external evaluation to study improvements in the delivery of the ABCs.
Develops three Centers of Excellence on “Comparative Health System Performance in Accelerating PCOR Dissemination.” According to AHRQ, the Centers of Excellence will identify and classify characteristics of health care systems over 5 years. They will also identify ways to assess the quality and cost of such systems, including their use of PCOR, understand the characteristics of high performing systems, and identify what system characteristics are associated with more rapid adoption and diffusion of PCOR-recommended practices throughout a system.
Evaluates and synthesizes research findings to aid decision-making for patients, providers, and payers, among others.
Translates PCOR findings into tools, such as research summaries and decision aids, designed to help patients and consumers, clinicians, and policymakers make informed and evidence-based health care decisions.
Expands an existing initiative or creates a new initiative that supports multi-site, multi-region, multi-stakeholder dissemination and implementation of evidence.
Develops and tests methods for translating and disseminating PCOR findings to hard-to-reach audiences, including patients with low health literacy, disadvantaged populations, isolated clinicians and policy makers, and other decision makers who may not have had the benefit of more traditional translation and dissemination efforts.
Integrates PCOR into clinical practice using various methods shown to improve the uptake of scientific evidence in clinical decision making. Grantees were asked to consider both educational theory and the relevance of “new media” as they designed their programs.
Searches for emerging interventions, prioritizes those most likely to have a large impact in the near future, and disseminates the information to the public. According to AHRQ, the Horizon Scanning System screened more than 22,000 potential intervention leads and tracked over 2,300 intervention topics.
Provides targeted audiences—such as providers and payers—with an accessible tool for obtaining objective, detailed information on evidence-based clinical practice guidelines to further their dissemination, implementation, and use.
Promotes collaboration, reduces redundancy, and improves transparency in patient registries.
Created informational tools to support the dissemination and implementation of PCOR findings, including best practices and new knowledge about the use of electronic health record data for research and quality improvement.
Promotes the timely incorporation of PCOR findings into clinical practice—which encompasses a variety of tools to enhance clinical decision-making.
Collaborated with 176 national organizations to disseminate materials for the Effective Health Care Program.
Multi-media campaign to educate health care consumers about the value of reviewing medical evidence when weighing treatment options.
Educating the Educators: Conducted and disseminated research to develop a process for shared decision making that includes exploring and comparing the benefits, harms, and risks of each option through meaningful dialogue about what matters most to the patient.
Collects patient-generated health data, integrates patient-generated health data with PCOR evidence, and disseminates PCOR findings using mobile health technology.
Identifies ways to reduce health care differences across diverse populations with a particular focus on minority populations in under-resourced healthcare settings.
Established five regional offices, responsible for developing and cultivating dissemination partnerships within each region.
This repository houses study data extracted from primary research publications during the course of conducting systematic reviews. It is designed to increase the transparency of comparative effectiveness reviews, improve the ability to update systematic reviews, improve the quality of abstracted data, and enhance the efficiency and reduce the costs of conducting reviews.
Gathers input from patients on a complex topic related to the implementation of evidence- based health-care decision making.
Increases the relevance of AHRQ systematic reviews for patients, clinicians, and policymakers by examining and addressing challenging topic areas that may affect the credibility and utility of the review for end users and that are areas of inconsistency or variation among AHRQ systematic reviews.
Conducted three projects to improve the development of registries, a major activity of AHRQ’s Effective Health Care Program.
Provides online continuing education materials that inform physicians and other health care providers about PCOR from the Effective Health Care Program.
Worked with health professional student associations to evaluate students’ understanding of the importance and clinical applicability of PCOR and shared decision-making to their practice and evaluated students’ educational needs and preferences related to integrating PCOR findings into their training curricula.
Created a decision-modeling methods center that reviewed the existing research and guidance published on modeling methods with input from a multidisciplinary group of experts.
Provides for maintenance and updating of existing data resources to conduct future CER through a grant competition. The grants fund three to four 1-year pilot projects aimed at enabling a future, larger competition to enhance the data infrastructure and move the resources to self-sustaining models.
Disseminates CER findings published by the Patient-Centered Outcomes Research Institute (PCORI) and other government entities to providers, patients, payers, and others. This initiative consists of a seven-step approach for identifying research findings that have the greatest potential for implementation.
Provided an understanding of how AHRQ could effectively disseminate and promote PCOR findings and tools in the development and maintenance of clinical decision support systems. The project included a market analysis and an assessment of potential stakeholders and audiences, including vendors of health information technology focused on clinical decision support. Information gathered from this project directly informed the concept for the PCOR clinical decision support initiative that was launched in 2016.
Promoted PCOR through public service announcements nationwide.
Created a new page on AHRQ’s website that highlights the agency’s own resources, as well as directs researchers, health professionals, patients, caregivers, and families to additional databases that collect information on CER. These databases provide summaries of findings from a wide range of CER findings and research that is in progress.
PCOR is a form of CER.
Appendix III: Agency for Healthcare Research and Quality’s (AHRQ) Training Awards, as of September 30, 2017
Funds a 5-year, renewable effort to support the development of PCOR capacity among institutions that have basic health services research capacity but need to develop capacity to conduct and implement PCOR. The program would potentially include institutions located in geographic areas that lack capacity, and institutions that serve predominantly minority populations.
Supports the development of researchers in academic and applied settings. The program combines didactic and experiential opportunities, focusing on the generation, adoption, and spread of new scientific evidence. The goal is to improve population-specific health outcomes by developing and disseminating evidence-based information to patients, clinicians, and other decision-makers, responding to their expressed needs, about which interventions are most effective for which patients under specific circumstances.
Provides basic, advanced, and experiential training on the methods to conduct PCOR, particularly prospective observational research, registries, and clinical trials. The program was open to researchers employed in both the public and private sectors, particularly those who serve minorities and economically or medically disadvantaged populations.
Facilitates the transition of postdoctoral candidates from mentored to independent research positions, accelerating research independence for PCOR researchers.
Provides support for intensive, research career development for individual investigators in academic or applied settings, leading to research independence in the field of PCOR and the generation and translation of new scientific evidence and analytic tools.
Provides career development awards for established investigators to further develop their research expertise in PCOR methodologies. This concept seeks to accelerate the development of the research workforce capable of conducting PCOR.
Provides 2-year fellowships for training in PCOR. A focus for these fellowships is recruitment of trainees from diverse disciplines, including social and behavioral sciences, business, and engineering. The expected output of these fellowships is trained PCOR researchers.
Establishes an expert panel, composed of 7 to 10 leaders in the fields of learning healthcare systems, health services research, and PCOR, to assess the current state of health services research and PCOR training and to recommend ways to improve core competencies and curricula to meet the needs of the health system. Develops a report summarizing the panel’s findings concerning current deficiencies and its recommendations regarding the skills and competencies needed to meet those challenges.
PCOR is a form of CER.
Appendix IV: Office of the Assistant Secretary for Planning and Evaluation’s (ASPE) Projects to Build Data Capacity, as of September 30, 2017
Developed technical standards for how health care providers, researchers, and the public health community access and extract data from electronic health records to conduct Patient-Centered Outcomes Research (PCOR).
Identified and developed the functional and technical specifications necessary to enable electronic health record systems to retrieve, display, and fill a structured form or template and store and submit the completed form to an external repository.
Provided researchers with access to the Centers for Medicare & Medicaid Services’ (CMS) Chronic Conditions Warehouse, which contains Medicare and Medicaid beneficiary, claims, and assessment data, and supported infrastructure enhancements to conduct CER.
Longitudinal follow-up of certain cancer patients to assess vital statistics, disease recurrence, disease progression, and additional treatment types. Treatment data submitted each year to the Centers for Disease Control and Prevention and provided to researchers through the National Center for Health Statistics Research Data Center.
Included clinical encounters for all patients and all conditions seen at the community health centers from 2006 to 2013 in the data warehouse of the Community Health Applied Research Network Registry, a research network comprising 18 community health centers. A de-identified analytic file and associated data codebook were developed to support the use of analytic files by researchers outside of the network. Established a process for investigators to access the data warehouse through the development of a data access plan.
Maintained the infrastructure for PCOR and for quality improvement in the safety net.
Developed common data elements and standards for CER. The results were the initial entries into the National Institutes of Health’s National Library of Medicine common data element repository.
Developed a conceptual framework and environmental scan; produced policy documents ranging from patient-initiated data, through research data on care processes, transitions, and coordination, to researcher access to claims data; and developed the ‘HHS Strategic Roadmap for Building Data Capacity for Clinical Comparative Effectiveness Research.’ The overall CER Inventory project was to design and implement a system for categorizing and cataloguing CER activities through a web-based tool. Due to the rapidly evolving technologies supporting web-based search engines and the improved methods for identifying more recent CER, the development of the CER Inventory (as a web-based search engine using a retrospective algorithm) was determined to have been superseded by existing search engine tools.
Designed and conducted an independent evaluation of the ASPE portfolio to systematically assess progress related to the strategic framework functionalities.
Conducted CER analyses on the beta release of the Multi-Payer Claims Database (MPCD) and evaluated beta testers’ experiences requesting and using data from the MPCD for research. Results of the beta test found that the project was successful in achieving the key objectives of building a pilot database.
Planned for development and implementation of the Centers for Medicare & Medicaid Services’ Blue Button—a service that allows patients to access their own health information in electronic form.
Linked data on fact, cause, and manner of death from the National Death Index to several federal population-based health data platforms in order to demonstrate the feasibility of such linkage; to enable PCOR on patterns and correlates of mortality via the resulting linked data; and to facilitate collaboration between federal partners regarding strengthening the infrastructure and methods for linking healthcare data to mortality outcomes and using such linked data for PCOR.
Improve the infrastructure to support timely and complete mortality data collection through more timely delivery of state death records to the National Death Index database and by linking National Death Index database records with nationally collected hospital datasets to obtain a more complete picture of patient care.
Identify the best patient attributes to address the challenge of linking patients’ data across research, clinical, and claims data sets in order to support the PCOR data infrastructure that enables standardization and sharing of patient data across organizations.
Create a coordinated registry network for women’s health technologies that will collect patient reported outcomes and employ structured data capture from electronic health records for data collection and exchange.
Build data infrastructure for conducting PCOR using data from routine clinical settings. The sources of these data may include, but are not limited to, insurance billing claims, electronic health records, and patient registries. This project intends to harmonize several existing common data models, potentially including PCORnet and other networks.
Develop technical tools for collecting and integrating patient-reported outcome assessments into electronic health records or other health information technology products.
Create an interface that enables CMS beneficiaries to connect their MyMedicare.gov data to applications and services they trust, including research platforms related to research studies in which the beneficiary may be interested in participating.
Provide technical assistance to the Trust Fund awardees in informatics and assist ASPE in setting up additional oversight processes and procedures to monitor progress.
Develop a privacy and security data infrastructure blueprint, legal analysis, and ethical framework to address legal, privacy, and security-related policy issues that affect the use of data for various types of PCOR.
Convene clinical topic-specific working groups to discuss the data definitions currently in use and how these definitions can be harmonized to promote common definitions for outcome measures across systems. These common definitions are to be made publicly available to PCOR researchers and analysts.
Develop a natural language processing service that will be accessible and publicly available to researchers on the Public Health Community Platform – a cooperative platform for sharing interoperable technologies to address public health priority areas aimed at improving population health outcomes and health equity (e.g., tobacco use).
Leverage the Sync for Science and Blue Button application programming interface programs to enable Medicare beneficiaries to donate their medical claims data for scientific research studies.
Develop and test the capability to conduct timely and secure distributed regression analysis in distributed data networks. Additionally, explore the feasibility of creating virtual linkage capabilities to utilize data from multiple data sources and data for one specific patient with information at different institutions.
Create the infrastructure for collecting data from patients through a mobile device application, allowing patient-generated data to be linked with a single data partner that participates in the Food and Drug Administration’s Sentinel distributed network. The project will develop and pilot a mobile application to capture data from pregnant women who volunteer to participate.
Develop a policy framework for the use of patient-generated data in research and care delivery that addresses data collection tools, data donation policies, regulatory gaps, combining data with medical record data, and interoperability of data across health information systems and devices.
Create and implement a metadata standard data capture and querying system for data quality and characteristics, data source and institutional characteristics, and “fitness for use.”
Cross-Network Directory Service: Create an interoperable service that allows data partners to participate in multiple data research networks, query across the networks, and share analytic capabilities and knowledge across networks. The project will be piloted across two existing networks: the Food and Drug Administration’s Sentinel and PCORnet.
Generate tools and data standards that could be deployed in other CER studies by leveraging the infrastructure of an existing research study called the ADAPTABLE trial (Aspirin Dosing: A Patient-Centric Trial Assessing Benefits and Long Term Effectiveness). This trial is the first major randomized comparative effectiveness trial to be conducted by PCORnet.
Create a flexible, extensible, and computable mechanism for rolling data into clinically relevant equivalence groups that enable more efficient processing and aggregation of laboratory data and other data from diverse health information technology systems. The primary focus of this work will be on laboratory tests.
Create a single point data capture approach from the electronic health record to electronic data capture systems using the Retrieve Form for Data Capture standard. Stakeholders will be provided with a tool to seamlessly integrate electronic health record and electronic data capture systems.
In addition to the projects listed, ASPE plans to obligate $2.0 million for one new project starting in fiscal year 2018.
Appendix V: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Karin Wallestad, Assistant Director; Michael Zose, Analyst-in-Charge; Kye Briesath; Laurie Pachter; Vikki Porter; and Jennifer Whitworth made key contributions to this report.
Why GAO Did This Study
In 2010, the Patient Protection and Affordable Care Act (PPACA) authorized the establishment of PCORI to carry out CER and improve its quality and relevance. PPACA also established new requirements for HHS to, among other things, disseminate findings from federally funded CER, including findings published by PCORI; and coordinate with relevant federal health programs to build data capacity for this research. To fund CER activities, PPACA established the Trust Fund from which PCORI and HHS are expected to receive an estimated $4.0 billion from fiscal years 2010 through 2019.
PPACA included a provision for GAO to review PCORI's and HHS's use of the Trust Fund. This report examines (1) PCORI's use of the Trust Fund for CER activities, including the dissemination and use of research findings; and (2) HHS's use of the Trust Fund for these activities.
GAO examined PCORI and HHS documents and data related to use of the Trust Fund, such as commitment, obligation, and expenditure data; PCORI's audited financial statements; and descriptions of CER activities. GAO also interviewed PCORI and HHS officials responsible for planning and carrying out CER activities and interviewed officials from stakeholder organizations representing potential users of CER, including public and private payer organizations, provider organizations, and patient organizations. PCORI and HHS provided technical comments, which GAO incorporated as appropriate.
What GAO Found
The Patient-Centered Outcomes Research Institute (PCORI) made about $2 billion in commitments for awards in fiscal years 2010 through 2017. PCORI is a federally funded, nonprofit corporation established to carry out and improve comparative clinical effectiveness research (CER), which evaluates and compares the health outcomes and the clinical effectiveness, risks, and benefits of two or more medical treatments, services, or items. PCORI provides funding through award commitments from the Patient-Centered Outcomes Research Trust Fund (Trust Fund) and may pay these awards over multiple years. Of the $2 billion PCORI committed as of the end of fiscal year 2017, about $1.6 billion (or 79 percent of its commitments) is for research awards, and $325 million (or 16 percent) is for building the capacity to use existing health data for research. Through fiscal year 2017, commitments for dissemination and implementation awards—intended to share CER findings with potential users of this research—were limited because most PCORI-funded research was still underway. PCORI projects to commit an additional $721 million for awards in fiscal years 2018 through 2021. In addition to awards, PCORI spent $310 million on program and administrative support services in fiscal years 2010 through 2017 and projects to spend an additional $206 million for these services through fiscal year 2024.
From fiscal years 2011 through 2017, the Department of Health and Human Services (HHS) obligated about $448 million from the Trust Fund. Of this amount, HHS obligated about $260 million (or 58 percent of all obligations) to the dissemination and implementation of CER findings. As most PCORI-funded CER had not yet been completed due to the time needed to conduct this research, HHS efforts focused instead on the dissemination and implementation of CER funded by other federal entities. Additionally, HHS obligated funds for efforts to train researchers on conducting CER and to build data capacity, as well as for administrative activities. HHS projects to obligate an additional $120 million for these activities in fiscal years 2018 through 2020.
Background
The GRF EXORD generally establishes the GRF as a set, or “menu,” of forces from the military services—each of which possesses unique capabilities—that the Secretary of Defense can deploy rapidly anywhere in the world. According to Joint Staff officials, deployment is for a duration that can range from a few weeks to several months. The GRF EXORD was first issued in 2007 and, according to DOD officials, has been revised several times to modify the number or types of assigned units. The current version, which was issued in 2015, continues to identify two uses for the GRF, described as follows:
One is to enhance DOD’s ability to respond quickly to a range of worldwide contingencies. In this scenario, the GRF would generally be used as a tailorable joint force. For example, in the event of a humanitarian crisis such as an earthquake, GRF units possessing the capabilities needed to meet the crisis can be combined into a joint force and rapidly deployed to the affected area. In such cases, the GRF units selected would act together as a joint force under the GRF-supplied Joint Task Force headquarters or a preexisting one.
The other identified use of the GRF is to augment the capabilities of geographic combatant commands in light of unexpected challenges. In this scenario, GRF units would generally be deployed as individual units or in groups. For example, a combatant command may on occasion require additional intelligence, surveillance, and reconnaissance capability, and accordingly a GRF unit possessing that requisite capability can be taken from the GRF and temporarily allocated to the combatant command for a certain period of time.
Although the GRF EXORD identifies these two intended uses, the document does not prioritize one use over the other.
To meet the range of capabilities delineated in the GRF EXORD, the services nominate and assign units to the GRF on a rotating basis for a certain period of time. Each nominated and assigned unit possesses a specific capability outlined in the GRF EXORD. These specific capabilities correspond to the operational requirements of eight global mission scenarios listed in the GRF EXORD. For example, the GRF includes a Marine Expeditionary Unit and an Airborne Brigade Combat Team because of the unique capabilities of those units. According to DOD officials, once a force is assigned to the GRF, it is on alert status for a period of typically 6 to 9 months, with a potential to be deployed. Accordingly, services rotate units onto and off of the GRF in order to maintain a high state of readiness, which, in turn, allows them to meet the rapid response timeframes required by the GRF EXORD.
To gain access to units assigned to the GRF, according to Joint Staff officials, combatant commanders submit an emergent request for forces to the Joint Staff. Generally as part of the global force management process, when a combatant command identifies an emergent requirement for a force that cannot be met using units already assigned or allocated to the combatant command, the combatant command then submits a request for forces. If the Joint Staff, joint force providers, and military services determine that a GRF-assigned unit is the most appropriate solution for the combatant command’s requirement, the Joint Staff will recommend it as the sourcing solution to the Secretary of Defense. Once approved, the GRF-assigned unit will be allocated to the combatant commander.
DOD Has Generally Used the GRF to Augment Combatant Command Capabilities and Has Not Assessed the Risk to Its Ability to Respond as a Joint Force
According to an official from the Joint Staff office responsible for managing the GRF across DOD, since 2010 DOD has used the GRF 35 times in support of worldwide contingencies—with 32 of those uses involving individual GRF units being deployed in support of or to augment combatant commander needs. However, according to Joint Staff officials overseeing the management of the GRF, DOD has not assessed the extent to which it assumes risk associated with the potential unavailability of GRF units for a short-notice deployment as a joint force in response to a contingency, given the predominant use of the GRF as a resource for combatant commands to obtain individual units. According to an official from the Joint Staff, deployment of select GRF units as part of a joint task force has occurred three times: once to Haiti in support of an earthquake humanitarian response, and twice to Afghanistan in July 2010 and June 2011 in support of Operation Enduring Freedom. According to these officials, GRF capabilities in support of Haiti included command and control, security, and transportation and distribution of humanitarian supplies. GRF units in support of Operation Enduring Freedom provided force protection to coalition forces as well as train, advise, and assist capabilities.
The predominant use—32 of 35 deployments—of individual GRF units to augment a combatant commander’s needs has, in turn, diminished the set of units available for mission scenarios related to the GRF’s use as a tailorable joint force, and accordingly the capabilities available for inclusion under a GRF joint task force. For example, Joint Staff officials stated that DOD deployed a ballistic missile defense unit designated for the GRF to a geographic combatant commander’s area of responsibility to augment that combatant command’s missile defense capabilities. According to Joint Staff officials, the deployment of individual GRF-assigned units is intended to be a temporary solution for a specified period of time. According to these officials, the ballistic missile defense unit’s deployment was extended beyond its original timeframe and it was not replaced on the GRF menu of forces with another such unit because there are not enough of these particular types of units to meet the requirements across the combatant commands. Therefore, during the ballistic missile defense unit’s deployment, the particular capability that unit supplied to the GRF was not available as part of a tailorable joint force to respond quickly to a potential worldwide contingency—the other broad intended use of the GRF. Given that DOD has not defined an acceptable level of risk—relative to the length of time during which units remain committed to augmenting combatant commanders’ needs—DOD lacked reasonable assurance that extending the ballistic missile defense unit’s deployment would not surpass an acceptable level of risk to mission for either of the GRF’s uses.
Two other units with capabilities particularly suited for use as part of a joint force have also been deployed individually to augment combatant command capabilities. One is U.S. Transportation Command’s Joint Enabling Capabilities Command, which provides joint communications, planning, and public affairs support to a joint force or joint task force headquarters. A second is U.S. Transportation Command’s Joint Task Force – Port Opening, which provides capabilities able to deploy within 12 to 36 hours to support the opening of a port, including the capability to rapidly establish and initially operate an aerial or sea port of debarkation, conduct cargo handling and movement operations to a forward distribution node, and facilitate port throughput in support of contingency operations. Like ballistic missile defense units, these two units are limited in number. According to officials from U.S. Transportation Command, because the units have been used primarily to augment geographic combatant command capabilities, they are at times unavailable for use as part of a tailorable joint force that can be used to respond quickly to unforeseen worldwide contingencies. Because DOD has not defined the risk it assumes in its use of GRF units, it cannot determine the likelihood that units used to augment combatant commanders’ needs might be required to constitute a joint force composed of GRF units, nor has DOD defined the significance of the risk it incurs by not having a given capability available to the GRF. Further, although DOD has used the GRF primarily to augment combatant commanders’ needs, risks for both uses should be identified and analyzed appropriately since neither use is prioritized over the other. While DOD did not encounter issues accessing GRF units that it required during any of the three instances in which the GRF was deployed as part of a joint force, Joint Staff officials have nonetheless raised an issue concerning the degree of risk that DOD continues to assume by using GRF capabilities—which may be needed to constitute a GRF joint force—to augment combatant commander needs.
DOD officials stated that using GRF units to augment geographic combatant command requirements leaves them unavailable for use as part of a joint force ready to respond to an unforeseen worldwide contingency. They stated that this is largely because some GRF units are limited in quantity but in high demand worldwide. For example, according to DOD officials, while intelligence, surveillance, and reconnaissance systems are in such high demand that they are consistently used to augment combatant commanders’ requirements, they are also typically used as an essential part of a joint force. As such, there is a likelihood that a GRF joint force might require, but not have access to, these capabilities, thus potentially increasing the risk of not accomplishing a given mission. DOD officials stated that in the event of a crisis requiring the employment of GRF units as part of a joint task force, GRF units currently employed elsewhere could be reassigned. It is uncertain, however, whether such reassignment would enable a GRF joint task force to meet its timeframes for deployment given that GRF units are expected to be ready for deployment on very short notice. Moreover, the potential effect of and risks associated with such an occurrence—specifically, the unavailability of required forces to assemble GRF units as part of a joint force—have not been assessed. The identification and analysis of risks provides the basis for developing appropriate risk responses, such as, in this case, further defining and prioritizing the GRF’s intended uses and missions. Because DOD has not identified or analyzed risks associated with the uses of the GRF, it may lack reasonable assurance that this response will be sufficient to mitigate the risks. Further, without identifying risk, DOD is not well positioned to develop other risk-mitigating strategies, and to know when to activate them.
Standards for Internal Control in the Federal Government establish that management should assess risks related to achieving defined objectives. Specifically, the standards state that management should analyze the identified risks to estimate their significance and define tolerances for levels of risk assumed, thereby providing a basis for responding to the risks. The standards also call for management to design responses such that risks are contained within the defined risk tolerance for the identified objective. DOD has not assessed the risks to readiness for mission scenarios that it might assume for both uses of the GRF because of its general reliance upon the GRF as an augmentation capability available to individual geographic combatant commands for response to unforeseen challenges or opportunities.
Furthermore, we found that there are varying perspectives within DOD concerning the intended uses of the GRF, although the GRF EXORD generally identifies two overarching uses, as previously discussed. Specifically, officials from the Office of the Under Secretary of Defense for Personnel and Readiness and the Joint Staff expressed the view that the GRF is a menu of forces, each unit possessing unique capabilities that can be used either individually to address geographic combatant command-identified capability gaps or collectively as a joint force to react to unforeseen worldwide contingencies. However, officials from U.S. Africa Command and U.S. Central Command view the GRF primarily as a pool from which they can draw forces, and it is these geographic combatant commands that have most often requested those capabilities provided by individual GRF units. Officials from the Army expressed another perspective, based in large part on the requirement for the Army to provide a joint task force headquarters for the GRF. Army officials said that, in their view, the GRF serves primarily as a pool of forces from which a joint task force can be created to meet unforeseen worldwide contingencies.
Although the GRF EXORD generally identifies the two uses, it does not prioritize the use of GRF assets to meet either. Additionally, DOD has not defined the risk to meeting the objectives of either of the two uses and thus does not have the necessary knowledge to determine when to deploy units for one use or the other. As previously stated, DOD has used the GRF to augment combatant commanders’ forces more frequently—32 out of 35 deployments—rather than retaining the units assigned to the GRF to support a rapidly deploying joint force.
Conducting a risk assessment that identifies any risks associated with the use of the GRF could help DOD to design responses, such as further defining and prioritizing the GRF’s intended uses and missions in an effort to mitigate any identified risks. Without conducting a risk assessment and taking steps to address any identified risk to accomplishing either of the GRF’s uses, DOD’s attempt to satisfy one of the two intended uses of the GRF may inadvertently hamper the other intended use.
GRF Units Have Trained Individually to Meet GRF Missions, but They Have Not Trained as Part of an Integrated Joint Force
GRF units train individually to meet GRF missions, but there are no GRF-specific joint training exercises, and the individual GRF units have limited opportunities to train as part of an integrated joint force, according to DOD officials. Specifically, according to service officials, the readiness of the GRF and its assigned units is based on those forces' participation in their respective services' training exercises, which are generally focused on the units' core missions or functions. In addition to service-level training, GRF units can also participate in joint training exercises sponsored by one of the geographic combatant commands. These commands can give authoritative direction to subordinate commands and forces necessary to carry out missions assigned to the command, including over all aspects of joint training. However, if GRF units are service retained or assigned to different combatant commands, they do not all fall under the authority of a single commander who could direct joint training. According to military service officials, there are no GRF-specific joint training exercises, although, according to some combatant command officials, some joint training exercises have included units currently assigned to the GRF. Few, if any, of these exercises, however, provide opportunities to train the GRF's joint task force headquarters in conjunction with GRF-assigned units. For example, according to U.S. Southern Command officials, the Joint Staff's 2017 Joint Task Force Forming Exercises will be held in U.S. Southern Command's area of responsibility and will include the unit currently assigned as the GRF's Joint Task Force headquarters. However, the exercise will not include any other GRF-assigned units. Therefore, the training will not provide an opportunity for the GRF to demonstrate readiness, gain efficiencies, or identify deficiencies associated with deploying elements of the GRF as a tailorable joint task force.
Chairman of the Joint Chiefs of Staff Instruction 3500.01H, Joint Training Policy for the Armed Forces of the United States, notes that U.S. forces may be employed across the range of military operations, and that DOD must support national security requirements with joint military capabilities designed to adapt and succeed in any operational environment. It further states that the department and its mission partners must prepare to operate in a joint, interagency, intergovernmental, and multinational environment. Finally, it notes that the joint training challenge is to be responsive to all emerging and extant mission requirements of the combatant commanders.
The need for interoperability is especially important for units assigned to the GRF not only because the GRF EXORD requires that they be ready for eight global mission scenarios, but because the overall GRF concept suggests they need to be capable of integration into a tailorable joint force. Underscoring this need for interoperability and jointness, the GRF EXORD outlines that combatant commanders should integrate elements of the GRF into Joint Exercise Program events to help sustain the readiness and capabilities of those units to execute various mission capability requirements. It also notes that combatant commanders should conduct a training event with the GRF’s Joint Task Force-capable headquarters at least once every 30 months in order to maintain the headquarters’ readiness to support each geographic combatant command. While these requirements are important to ensure the GRF units receive the proper training and are integrated into combatant command joint exercises, there are no specific GRF joint training exercises that provide opportunities for individual units assigned to the GRF to train as a tailorable joint task force.
Joint Staff and service officials told us that the GRF's assigned forces do not require additional or special training because they will perform the core missions for which they train regardless of whether they are deployed individually or as part of the GRF joint task force. These officials stated, therefore, that existing training is sufficient to develop and determine the readiness of the GRF. However, the importance of exercising the GRF Joint Task Force headquarters together with GRF-assigned units was demonstrated when we observed several interoperability challenges arise at an Army-sponsored joint training event involving GRF-assigned forces, a January 2017 Deployment Readiness Exercise at Fort Bragg, North Carolina. For example, the Army and Air Force faced a challenge in calculating the weight of Army heavy equipment being loaded onto Air Force aircraft in preparation for a simulated airdrop mission. Based on the Army's calculations, the equipment load was well under the specified weight limit for the aircraft, but the Air Force's onboard computers showed the load as being over the limit. While the cause of the difference in the two figures was not identified to us at the time, Army officials suspected that it could be attributed to a double-counting of the weight of the parachute. In another example, inclement weather at Fort Bragg during the exercise caused ice build-up on participating aircraft, revealing that the Air Force's de-icing capability was limited to a few aircraft at a time; this limitation caused delays in loading and preparing the aircraft for take-off. According to Army officials, had the mission required more personnel, equipment, and aircraft, this issue would have created a risk to meeting the GRF's mission timelines.
Despite the challenges encountered during the exercise, Army officials told us that exercises, such as the Deployment Readiness Exercise conducted at Fort Bragg, are important because they give units from different services the opportunity to identify challenges and develop solutions. As a result, these exercises can enhance the GRF’s joint task force capability. Additionally, a senior official from the Office of the Under Secretary of Defense for Personnel and Readiness’ Force Training Directorate told us that the ability to act jointly was very important in military operations and noted the need for joint training.
Two studies conducted on behalf of DOD further underscore the importance of joint exercises for developing GRF force readiness. The first study, released by the Institute for Defense Analyses in 2015, reported that the current joint exercise program did not ensure a proficient and ready GRF. Specifically, the study identified three key issues associated with GRF training. First, realistic interoperability training of individual units assigned to the GRF was not sufficient to ensure overall GRF readiness. Second, while the then-current version of the GRF EXORD assigned joint training responsibilities to the services, according to the study, the service responsible for the Joint Task Force-capable headquarters element lacked the authority to direct the required level of joint training for GRF elements provided by other services. Third, the GRF, in its entirety, had not been exercised or deployed as a joint force since its inception and thus had not demonstrated the ability to rapidly deploy as an operationally coherent joint task force. The report recommended that DOD designate a single commander with authority to establish and enforce joint integrated training at the tactical level, make changes to improve training for the GRF's Joint Task Force headquarters, and implement a joint demonstration campaign for the GRF. According to Joint Staff officials, they are not aware of any actions taken in response to these recommendations. The second study, released by RAND in 2016, also emphasized that realistic exercises were key to ensuring and validating the GRF's readiness. The report added that current exercises rarely included full and realistic force packages and recommended that joint airborne exercises be designed explicitly to identify and assess the implications of possible challenges and to validate planning assumptions about a GRF joint task force.
According to Army officials, a major factor inhibiting joint training exercises focused on GRF-assigned units as a joint task force is that it can be difficult to get other services to agree to participate in service-sponsored events because, as the Institute for Defense Analyses study pointed out, services lack the authority to direct other services to supply forces for joint training exercises, even when those forces are currently on a GRF rotation. Moreover, since the disestablishment of U.S. Joint Forces Command in 2011, which was responsible, among other things, for being the lead agent for joint force training, there is no single commander with the authority to require joint force training. As noted above, although geographic combatant commanders may direct joint training of forces under their command, units designated for the GRF mission may come from forces assigned to different geographic combatant commands or from service-retained forces, according to officials.
According to a senior Office of the Under Secretary of Defense for Personnel and Readiness’ Force Training Directorate official, the challenge to conducting joint GRF training is that there is no entity having authority and responsibility for such training. He noted that because the GRF is department-wide and is not assigned to a single service or geographic combatant command, there is no single advocate for the GRF mission and training with the authority to direct the services and geographic combatant commands with GRF-dedicated units to prepare for the joint requirements inherent in the GRF mission. As a result, there are no joint training exercises specifically designed to exercise GRF units as a joint force. According to Standards for Internal Control in the Federal Government, management should develop an organizational structure with an understanding of the overall responsibilities, and assign these responsibilities to enable the organization to operate in an efficient and effective manner, comply with applicable laws and regulations, and reliably report quality information. To achieve this, management should assign responsibility and delegate authority to key roles throughout the entity.
Without an entity having the responsibility and authority to plan, direct, and conduct joint training exercises focused on GRF-assigned units deploying as a joint task force as appropriate, DOD risks undermining the effectiveness of the rapid deployment of a GRF joint task force in response to unforeseen worldwide contingencies.
Conclusions
DOD has developed the GRF as a rapid response force available to react to unforeseen contingencies or crises. While the GRF has responded to worldwide contingencies, GRF units have been used primarily to augment existing geographic combatant command capabilities. DOD has not assessed the risks it assumes by relying on the GRF for augmenting combatant commanders' forces as opposed to having the GRF-assigned units available for allocation to a joint task force in response to a contingency. Without performing a risk assessment and, as appropriate, designing responses to mitigate any identified unacceptable risks to accomplishing either of the two GRF uses, DOD cannot ensure that the GRF is able to meet its mission. Additionally, without a designated authority to establish and enforce integrated joint training for GRF-assigned units as appropriate, DOD has not developed GRF-specific joint training exercises or fully integrated the GRF into existing joint exercises. Without making improvements in these areas, DOD risks undermining the ability of the GRF to respond to unforeseen worldwide contingencies as an integrated joint force in a timely fashion with all the resources it needs.
Recommendations for Executive Action
We are making the following three recommendations to DOD:
The Secretary of Defense, in conjunction with the Chairman of the Joint Chiefs of Staff, should assess the risks to accomplishing both of the GRF’s uses: that is, its use as an augmentation capability available as needed to individual geographic combatant commands; and its use as a tailorable joint force available for rapid response to a specific threat. (Recommendation 1)
The Secretary of Defense, in conjunction with the Chairman of the Joint Chiefs of Staff, should, as appropriate following the assessment of risk, design responses, such as further defining and prioritizing the GRF’s intended uses and missions, to mitigate any identified risks. (Recommendation 2)
The Secretary of Defense, in conjunction with the Chairman of the Joint Chiefs of Staff, should designate an authority to establish and enforce integrated joint training for GRF-assigned units, as appropriate. (Recommendation 3)
Agency Comments and Our Evaluation
We provided a draft of this report to DOD for review and comment. In its written comments, DOD concurred with our three recommendations and noted planned actions to address them. DOD’s comments are reprinted in their entirety in appendix II.
We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Under Secretary for Personnel and Readiness, the Chairman of the Joint Chiefs of Staff; the Secretaries of the Army, the Navy and the Air Force; and the Commandant of the Marine Corps. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-5431, or [email protected]. Contact points for our Offices of Congressional Relations and of Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.
Appendix I: Objectives, Scope, and Methodology
The objectives of our review were to examine the extent to which (1) the Department of Defense (DOD) has used the Global Response Force (GRF) and assessed any risks associated with its use; and (2) GRF-assigned units are trained to meet GRF missions, both individually and as a joint force.
For our objective of determining the extent to which DOD has used the GRF and assessed any risks associated with its use, we reviewed the Chairman of the Joint Chiefs of Staff GRF Execute Order (EXORD) to identify the GRF's overall uses and the global mission scenarios it is intended to meet, as well as the operational requirements and forces assigned to meet the requirements. We also interviewed the responsible DOD officials to understand how DOD selects, designates, and validates forces on the GRF; the processes for making changes to the GRF EXORD; and how DOD decides when to use GRF forces for either of the two intended uses of the GRF. In addition, we reviewed Standards for Internal Control in the Federal Government to identify relevant internal controls—specifically, that management should assess risks related to achieving defined objectives, analyze the identified risks to estimate their significance, define tolerances for levels of risk assumed, and design responses such that risks are within the defined risk tolerance—and compared them with DOD's risk assessment efforts for the GRF. Finally, we reviewed the Joint Staff's GRF deployment information from 2010 to 2017 to understand the frequency of GRF deployments and to identify specific instances in which the GRF's ability to accomplish its missions was affected—specifically, instances in which GRF capabilities were unavailable for use during a GRF operation.
For our objective of determining the extent to which GRF-assigned units are trained to meet GRF missions, both individually and as a joint force, we reviewed the Chairman of the Joint Chiefs of Staff GRF EXORD and DOD’s Guidance for the Defense Readiness Reporting System to understand how GRF readiness is developed, reported, and evaluated. We also reviewed DOD’s Joint Training Policy for the Armed Forces of the United States to identify existing requirements related to joint training, and documents related to GRF training to determine the extent to which the frequency and types of GRF training meet overall joint training requirements as well as training requirements established in the GRF EXORD. We observed a Deployment Readiness Exercise at Fort Bragg, North Carolina, to learn about the types of GRF training, as well as challenges and potential benefits of training exercises for GRF units. We also interviewed senior officials from the Joint Staff, military service force providers, and geographic combatant commands to better understand training practices for the GRF and its assigned units, as well as varying perspectives regarding the challenges and potential benefits of GRF training exercises for accomplishing GRF missions.
We interviewed senior officials from the Office of the Under Secretary of Defense for Personnel and Readiness; Joint Staff; and Army, Marine Corps, Navy, and Air Force headquarters, and conducted site visits to force providers at Army Forces Command, Marine Forces Command, Navy Fleet Forces Command, Air Force Air Combat Command, and U.S. Transportation Command. We also interviewed officials from U.S. Africa Command, U.S. European Command, and U.S. Pacific Command, and visited U.S. Central Command and U.S. Southern Command. Our interviews focused on understanding the degree to which DOD organizations assess and maintain a consistent understanding of the risks entailed in using GRF forces and gaining an understanding of the challenges encountered in identifying, designating, and employing forces on the GRF, as well as the extent to which the GRF’s ability to accomplish its intended missions has been affected.
We conducted this performance audit from May 2016 to October 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Comments from the Department of Defense
Appendix III: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, individuals who made key contributions to this report include Guy LoFaro, Assistant Director; Adam Anguiano; Alberto Leff; Michael Shaughnessy; Michael Silver; Yong Song; and Cheryl Weissman.
Why GAO Did This Study
DOD must be able to rapidly deploy forces to respond to a range of worldwide contingencies, and in 2007 it established the GRF to enhance that capability. The GRF is a set, or “menu,” of forces from the military services, each of which possesses unique capabilities, and which the Secretary of Defense can deploy rapidly anywhere in the world.
House Report 114-537, accompanying a bill for the National Defense Authorization Act for Fiscal Year 2017, included a provision for GAO to evaluate challenges DOD may be facing regarding the GRF. GAO reviewed the extent to which (1) DOD has used the GRF, and assessed any risks associated with its use of the GRF; and (2) GRF-assigned units are trained to meet GRF missions individually and as a joint force. GAO reviewed GRF deployment information from 2010 to 2017 and the GRF Execute Order, observed a training exercise, and interviewed knowledgeable officials.
What GAO Found
The Department of Defense's (DOD) Global Response Force (GRF) has two distinct uses: one is to enhance DOD's ability to rapidly deploy forces in response to a range of worldwide contingencies with a tailorable joint force; and the other is to provide a set, or “menu,” of units that combatant commands can request to augment their capabilities in light of unexpected challenges when requirements exceed their capabilities. Since 2010, according to officials, DOD has used the GRF 35 times in support of worldwide contingencies, with 32 of those times involving deployment of individual GRF units to augment combatant commander needs, and 3 times involving their use as part of a joint task force. This predominant use of individual GRF units to augment combatant commanders' needs has diminished the set of units available for mission scenarios related to the GRF's use as a tailorable joint force. For example, when DOD deployed a ballistic missile defense unit as a part of the GRF to augment a combatant command's missile defense capabilities, the particular capability it supplied to the GRF was not available for participation in a tailorable joint force to respond quickly to a potential worldwide contingency, if such an event occurred. DOD does not know what risks it assumes to readiness for GRF mission scenarios due to its general reliance upon the GRF as an augmentation capability available to individual geographic combatant commands, because DOD has not assessed those risks. Without conducting a risk assessment and taking steps to address any identified risk to accomplishing the GRF's intended uses, DOD's attempt to satisfy one of the uses (that is, individual GRF-assigned units assisting combatant commands) may hamper the other use (that is, deployment of a joint task force for a contingency).
GRF units train individually to meet GRF missions, but DOD does not conduct any GRF-specific joint training exercises, and the individual GRF units have limited opportunities to train as part of an integrated joint force, according to DOD officials. While the GRF Execute Order calls for integrating elements of the GRF into existing joint training, the military services lack the authority to direct other services to supply forces for joint training exercises, even when those forces are currently on a GRF rotation. Moreover, since the disestablishment in 2011 of U.S. Joint Forces Command—which, among other things, was the lead agent for joint force training—and because units designated for the GRF mission may be assigned to different combatant commands or may be service-retained, no single commander has the authority to require joint force training of GRF units. As a result, no joint training exercises are specifically designed to exercise GRF units as a joint task force. Army officials told GAO that joint exercises are important because they give individual units from different services the opportunity to identify challenges and develop solutions, thereby enhancing the GRF's joint task force capability. Without an entity having the responsibility and authority to plan, direct, and conduct joint training exercises focused on GRF-assigned units deploying as a joint task force as appropriate, DOD risks undermining the effectiveness of the rapid deployment of a GRF joint task force in response to unforeseen worldwide contingencies.
What GAO Recommends
GAO recommends that DOD (1) assess the risks assumed in its reliance upon the GRF as both an augmentation capability and a tailorable joint force; (2) design appropriate responses following the risk assessment; and (3) designate an authority to establish and enforce integrated joint training as appropriate for GRF-assigned units. DOD concurred with GAO's three recommendations.
Background
CBP and Partner Agency Processing of Imported Goods
Imported goods flow into the U.S. market through a process that CBP facilitates and enforces, in collaboration with other federal agencies and with companies, including customs brokers, engaged in international trade. Imported goods enter the United States at more than 300 ports by air, land, or sea. The processing of imported goods includes three stages: pre-arrival, arrival/cargo release, and post-release.
Pre-arrival. Before goods leave their country of origin, importers and shipping companies file paperwork and provide required advance electronic information for CBP to review.
Arrival/cargo release. Importers or brokers file entry documents when goods reach a U.S. port of entry. At the ports, CBP and other agencies with regulatory responsibilities review documents and may examine the goods for import security and trade enforcement purposes. Some goods selected for examination may be deemed nonadmissible because of trade law or other violations. Admissible goods are released from the port and enter into U.S. commerce.
Post-release. After goods are released from a port, importers or brokers file additional entry summary documents, which CBP reviews to ensure compliance with trade laws. CBP verifies importers' cargo classifications and calculation of customs duties, taxes, and fees owed, taking action when needed. CBP and other agencies may determine that entered goods are noncompliant, thus triggering post-release enforcement action.
Figure 1 summarizes agency roles at these three stages of import processing.
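To make this three-stage flow concrete, the short sketch below models it as a simple state machine in Python. The sketch is purely illustrative: the stage names follow the description above, but every field, check, and transition is an invented assumption, not CBP's actual data model or logic.

```python
# Purely illustrative sketch of the three import-processing stages described
# above; stage names follow the report, but all fields, checks, and logic are
# invented for illustration and do not represent CBP's actual systems.
from dataclasses import dataclass
from enum import Enum


class Stage(Enum):
    PRE_ARRIVAL = "pre-arrival"
    CARGO_RELEASE = "arrival/cargo release"
    POST_RELEASE = "post-release"


@dataclass
class Shipment:
    entry_number: str
    advance_data_filed: bool = False  # pre-arrival: advance electronic information
    admissible: bool = False          # arrival: document review/examination outcome
    stage: Stage = Stage.PRE_ARRIVAL

    def advance(self) -> None:
        """Move to the next stage once the current stage's checks have passed."""
        if self.stage is Stage.PRE_ARRIVAL and self.advance_data_filed:
            self.stage = Stage.CARGO_RELEASE
        elif self.stage is Stage.CARGO_RELEASE and self.admissible:
            self.stage = Stage.POST_RELEASE


shipment = Shipment(entry_number="ENTRY-0001", advance_data_filed=True)
shipment.advance()           # pre-arrival data reviewed -> arrival/cargo release
shipment.admissible = True   # port review finds the goods admissible
shipment.advance()           # -> post-release (entry summary review, duties)
print(shipment.stage.value)  # "post-release"
```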
ACE Development and Implementation, 1994–2013
CBP initiated planning and preliminary development of ACE in 1994, following the enactment of the North American Free Trade Agreement Implementation Act. Title VI of the act required the creation of a national customs automation program that would allow electronic processing of commercial imports. According to CBP, its existing electronic system for processing imports—the Automated Commercial System (ACS), which became operational in 1984—used antiquated hardware and software and, because of limited processing capability, was increasingly difficult and expensive to operate. In addition, despite ACS’s availability, CBP continued to rely heavily on paper documents.
The following year, a multi-agency task force launched an effort to develop the International Trade Data System (ITDS)—a government-wide system for reporting data used to clear imports and exports—and efforts to develop ITDS and ACE were subsequently integrated. The 2006 SAFE Port Act mandated the creation of ITDS to provide a "single portal" trade data system, to be implemented no later than the date when ACE is fully implemented. CBP initially planned to deploy ACE incrementally from 1998 through 2005. According to CBP officials, after substantial difficulties, CBP awarded a contract to begin implementing ACE in 2001 and began deploying ACE capabilities in 2003. However, continued slow progress led DHS to halt all new ACE development in 2010. A CBP acquisition decision memorandum issued at that time stated that the scope and complexity of ACE projects had been consistently underestimated during the period leading up to this decision.
DHS authorized CBP to renew work on ACE in 2013, after CBP had completed a revision of ACE's schedule, cost, and performance goals. This "rebaselining" of ACE included adopting the agile approach to system development, which involves segmenting development and deployment into small consecutive stages, with frequent opportunities to test new capabilities and confirm that they meet requirements. CBP's new plan called for completing core ACE capabilities to allow CBP and partner agencies to employ the system in all phases of import and export processing by November 2016, 11 years later than initially planned. A February 2014 Executive Order, as well as provisions in the Trade Facilitation and Trade Enforcement Act of 2015 (TFTEA), subsequently reinforced this commitment to complete the system before the end of 2016.
In rebaselining ACE, CBP consulted with partner agencies and trade community representatives to identify the core trade processing capabilities needed for the system to achieve full operational capacity, according to CBP officials. CBP officials stated that these capabilities are laid out in an internal 2013 CBP document describing, in general terms, key activities, processes, and functions that must be performed to automate import and export processing and improve targeting and security. We use “core ACE capabilities” to refer to activities, processes, and functions that CBP has defined as core.
CBP Has Implemented Core ACE Capabilities but Delayed Completion Several Times
After revising its schedule, cost, and performance goals for ACE in 2013, CBP developed and deployed most of the capabilities that it defined as core ACE. On February 27, 2018, CBP announced that it had deployed the last of the major scheduled core trade processing capabilities. However, CBP delayed completion of these capabilities several times and has deferred deployment of collections—a capability for collecting import duties, taxes, and fees—while it considers alternative approaches to make this capability operational.
Using the agile approach, CBP began deploying new ACE capabilities in November 2013, introducing elements iteratively every few months. For example, the November 2013 deployment included functions related to the pre-arrival and arrival/cargo release phases of import processing, initial steps to support two agencies in pilot testing ACE participation, and a number of efforts to resolve technical problems. By mid-2016, CBP had deployed all core pre-arrival and arrival/cargo release capabilities, but several post-release capabilities remained to be deployed. In June 2016, CBP officials reported that the program would not complete several key events by November 2016 as planned and declared a cost and schedule breach; in November 2016, CBP rebaselined ACE again. CBP subsequently reported that it expected to finish deploying post-release core ACE capabilities by January 2017, but the agency was unable to complete this deployment as planned. In April 2017, CBP officials reported that the program was again in breach, and CBP subsequently moved the target date for completing deployment of remaining core capabilities to July 2017.
Reconciliation, Liquidation, and Drawback
During reconciliation, preliminary data on import transactions provided to CBP at the time of entry (such as the dollar value of imported goods) may be updated. During liquidation, import transactions are finalized and duty, taxes, and fees due to CBP are determined. During drawback, exporters may be able to claim and recover certain duties, taxes, or fees upon the exportation or destruction of imported merchandise under CBP supervision.
February 2018. The February 2018 deployment completed most core capabilities for post-release, including reconciliation, liquidation, and drawback—functions related to the final determination and payment of duties to CBP (see sidebar).
CBP initially intended to implement collections in ACE along with other post-release core capabilities. However, CBP officials told us that after a series of unsuccessful attempts to move collections from ACS to ACE, the agency decided in July 2017 to decouple collections from the other remaining post-release capabilities. Agency officials explained that this would allow deployment of other post-release capabilities by the end of February 2018. CBP officials observed that technical challenges involved in moving the current collections function—which is needed to complete post-release functions such as liquidation—from ACS into ACE primarily accounted for CBP’s inability to finish deploying core ACE capabilities in 2017.
CBP officials stated that the agency will continue to link the newly deployed post-release capabilities to collections in ACS while deciding how to proceed. According to CBP officials, the agency expects to select one of three options for collections by the end of March 2018: (1) add a collections capability to ACE, (2) retain collections in ACS, or (3) develop a separate collections system. CBP officials stated that the agency would revise its estimate of the overall cost of completing and maintaining ACE through the system’s expected life cycle after reaching this decision.
The timeline in figure 2 summarizes CBP’s efforts to develop and deploy core ACE capabilities since 2013.
Partner Agencies That Clear or License Cargo Have Access to ACE, but Extent of Use Varies
All partner agencies that CBP identified as bearing responsibility for clearing or licensing goods for import or export have been granted some access to ACE data. However, as our case studies of five partner agencies illustrate, the extent to which these agencies use the system varies, and agencies are continuing efforts to enhance their use of ACE.
All Agencies with Responsibility for Clearing and Licensing Cargo Have Been Granted Access to ACE Data
Each of the 22 partner agencies with responsibility for clearing or licensing cargo has signed a memorandum of understanding with CBP that allows access to ACE and details the information the agency will receive through the system, according to CBP officials. Table 1 lists the 22 partner agencies CBP identified as having responsibility for clearing or licensing cargo and as having signed a memorandum of understanding with CBP. According to CBP, each memorandum of understanding specifies data that the partner agency may access in accordance with its responsibilities and as allowed by statute. Agencies may obtain these data through ACE in the following ways:
Agencies may specify data elements to be included in the ACE partner government agency message set—that is, the consolidated set of data that importers and exporters submit electronically. In many cases, the message set includes data elements formerly collected through paper forms, according to CBP officials.
Agencies may require submission of supporting documents (e.g., cargo manifests) as image files through the ACE Document Image System.
Agencies may access these data directly through ACE or may establish web linkages between ACE and their own data processing systems that will allow their systems to receive automatic transmissions of ACE data. CBP documents show that among the 22 agencies CBP identified as having responsibility for clearing or licensing cargo,
16 have established web linkages between ACE and their own data processing systems,
14 obtain agency-specific data through the ACE message set, and
17 receive document image files from importers through ACE.
In addition, 15 of the 22 agencies have completed, or are conducting, pilots to initiate or expand their participation in ACE.
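Conceptually, each electronic filing can be pictured as one consolidated record carrying the standard entry data, any agency-specific message-set elements, and references to supporting document images. The sketch below illustrates that idea in Python; every field name and value is an invented assumption, not the actual ACE schema.

```python
# Hypothetical sketch of a consolidated "message set" filing: one electronic
# submission carrying standard entry data, agency-specific data elements, and
# supporting document images. All field names are invented, not the ACE schema.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class EntryFiling:
    entry_number: str
    tariff_code: str   # Harmonized Tariff Schedule code for the goods
    importer_id: str
    # Agency-specific message-set elements, keyed by agency (e.g., "FDA").
    agency_elements: Dict[str, Dict[str, str]] = field(default_factory=dict)
    # Supporting documents submitted as image files (cf. the Document Image System).
    document_images: List[str] = field(default_factory=list)


filing = EntryFiling(
    entry_number="ENTRY-0002",
    tariff_code="6403.99",  # example code: footwear with leather uppers
    importer_id="IMP-12345",
    agency_elements={"FDA": {"product_code": "illustrative-value"}},
    document_images=["manifest.pdf"],
)

# Under its memorandum of understanding, a partner agency would receive only
# the slice of the record specified for it:
fda_view = filing.agency_elements.get("FDA", {})
print(fda_view)  # {'product_code': 'illustrative-value'}
```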
Case Studies Show Variation in Use of ACE among Agencies That Clear or License Cargo
While all of the 22 agencies that CBP identified as having responsibility for clearing or licensing cargo have access to ACE data, our case studies of 5 agencies found considerable variation in the extent to which they use ACE for import processing. As table 2 shows, 4 of these agencies (FDA, NHTSA, CPSC, and APHIS) have established linkages between ACE and their own import data analysis systems, apply ACE data in those systems, and have completed pilots to begin or expand their use of ACE. Agency staff also may access ACE directly to obtain additional information that is not available in their agencies' systems. Nonetheless, we found significant differences in the agencies' use of ACE to obtain agency-specific data from importers: while FDA and NHTSA have largely transitioned to using ACE for this purpose, CPSC and APHIS use it to a more limited extent, and FWS continues to obtain data on imported goods largely without using ACE. All five agencies reported ongoing efforts to resolve difficulties related to using ACE and make greater use of the system.
CBP documents indicate that several of the 27 other ACE partner agencies—those CBP has not identified as having responsibility for clearing or licensing cargo—have not concluded an ACE memorandum of understanding with CBP and do not appear to be accessing ACE; in some cases, according to CBP and Treasury Department officials, this is because ACE does not generate information that serves an agency need.
FDA Uses ACE Data to Review and Target Imports for Health Risks
Food and Drug Administration (FDA)
FDA applies health and safety standards to a variety of imported products, including food, drugs, cosmetics, medical devices, biologics, tobacco, and radiation-emitting electronic products. To carry out these functions, FDA maintains a nationwide network of port-based staff with authority to review and, if necessary, refuse entry to goods that do not comply with pertinent laws and regulations that it enforces. FDA maintains two internal information technology systems to assist these efforts: the Operational and Administrative System for Import Support, for admissibility review of imports, and the Predictive Risk Evaluation for Dynamic Import Targeting system, a risk-based screening tool that performs an initial electronic screening of import entries containing FDA-regulated articles to target those items with potentially higher public health risk for a manual admissibility review.
FDA has integrated its internal systems with ACE and uses ACE data to review imports under its jurisdiction, targeting FDA-regulated imports that pose higher public health risks for manual review to determine the imports’ admissibility, according to FDA officials. FDA has worked with CBP to establish bilateral transmission of import entry data between CBP and FDA since 1997, when the two agencies linked FDA’s earlier import operations system with CBP’s ACS, according to FDA. Consequently, FDA officials described the transition to ACE as an upgrade, substantially expanding the information available to the agency, rather than a new approach to processing imports.
FDA officials stated that they coordinated with the trade community and CBP to complete the transition to using ACE. For example, the officials said that they consulted with the trade community to develop FDA's ACE message set, with the goal of improving the clearance process. According to FDA officials, the data that the agency required through the message set included more information than it had previously required from importers through ACS. FDA officials explained that their intent in adding data elements was to facilitate the automated admissibility review of low-risk FDA-regulated articles and thus focus agency resources on articles associated with a higher public health risk. Additionally, FDA worked with the trade community to develop recommendations for technical enhancements to ACE. Finally, FDA tested the new systems and the viability of the message set in a pilot that it successfully concluded in 2016.
According to agency officials, in November 2016, FDA issued a final rule requiring that the trade community, when electronically submitting an entry in ACE, provide certain information on all incoming cargo that is subject to FDA regulation. In most cases, FDA finds this information sufficient to determine admissibility. However, in about 3 percent of cases, FDA requests additional information directly from importers, using the agency’s Import Trade Auxiliary Communications System. FDA officials stated that the agency is pursuing improvements in its ability to communicate with importers via ACE.
NHTSA Uses ACE Data to Review Motor Vehicle Imports
National Highway Traffic Safety Administration (NHTSA)
NHTSA works to ensure that imported motor vehicles and equipment (e.g., tires) meet U.S. safety standards. According to agency officials, because NHTSA does not have independent authority to hold incoming cargo and does not have any staff at U.S. ports, it relies on U.S. Customs and Border Protection officials to hold and inspect cargo and to take enforcement action if indicated (e.g., seizing goods or denying entry) in consultation with NHTSA. To fulfill its tasks, NHTSA uses its Motor Vehicle Importation Information database to assist in admissibility and targeting decisions.
NHTSA is using ACE data to review and clear imported motor vehicles and equipment for entry into the U.S. market and works with CBP to assess the compliance of certain products offered for importation. NHTSA established an electronic link between its internal system and CBP’s ACS in 1992. At that time, NHTSA and CBP arranged for importers to submit NHTSA’s required paper form electronically through ACS. In 2015, NHTSA began transitioning to ACE by pilot-testing submission of data for a large ACE message set. According to NHTSA officials, the testing process revealed significant technical problems. Prior to the pilot testing, the trade community expressed concern about the number of data elements that NHTSA asked CBP to collect from the trade community. The Office of Management and Budget determined that certain proposed requirements were burdensome for the trade community and asked NHTSA to eliminate some of these requirements. Subsequently, in March 2016, NHTSA completed its transition to ACE with fewer data requirements.
In addition to using ACE data, NHTSA continues to obtain information directly from importers, when necessary, through its Motor Vehicle Importation Information system. For example, according to NHTSA officials, the agency requests information through its system when it identifies reporting errors in ACE or when additional information is needed for certain temporary imports, such as vehicles or equipment imported for research or demonstration purposes. NHTSA officials stated that they are working with CBP to overcome a major challenge to efficient collaboration: NHTSA uses vehicle identification numbers to track imported vehicles, while ACE does not. According to NHTSA officials, NHTSA has developed a database to provide public access to manufacturer identification and vehicle identification number-deciphering information submitted by manufacturers. According to NHTSA, CBP port staff have begun accessing the database but it has not yet been linked to ACE.
CPSC Uses ACE Data to Target Imported Consumer Products
Consumer Product Safety Commission (CPSC)
CPSC protects the public from unreasonable risk of injury or death associated with consumer products, including over two-thirds of all categories of imported goods, such as toys, children's sleepwear, and household electronics. CPSC expanded examination of imported goods in 2008 following passage of the Consumer Product Safety Improvement Act of 2008, which required the agency to develop a risk-assessment methodology for certain imports. CPSC maintains a limited presence at U.S. ports and has independent authority to hold incoming cargo for inspection. The agency employs its Risk Assessment Methodology targeting system to assist in its import oversight responsibilities by generating potential targets for inspection.
CPSC uses only ACE data collected under CBP authority to support its oversight of consumer product imports and is considering expanding the information it receives from ACE. CPSC’s internal Risk Assessment Methodology targeting system focuses on 300 high-risk categories of imports listed by CPSC, using U.S. Harmonized Tariff Schedule codes, and currently receives the standard data that CBP obtains via ACE on all imported goods under the agency’s jurisdiction, according to CPSC officials. After launching an initial pilot version of its system in 2011, CPSC initiated discussion with the trade community in 2014 about expanding its electronic data reporting requirements to add certain data elements to the ACE message set that would assist the agency in determining whether incoming products meet applicable standards. However, CPSC reduced the scope of the proposed expansion of reporting requirements after trade community representatives expressed concerns. In 2016, CPSC concluded an initial, limited pilot test of electronic filing of several additional data elements. According to CPSC officials, the agency plans to study the benefits of adding these elements before it initiates a second pilot and has not reached a final decision about requiring importers to submit any additional information through ACE.
CPSC staff continue to rely primarily on the agency’s internal targeting system to target incoming shipments for review and possible inspection, with contributions from CPSC staff at CBP’s Commercial Targeting and Analysis Center and at ports, according to CPSC officials. These officials stated that the agency’s representative at the Commercial Targeting and Analysis Center employs CBP and CPSC resources to generate about 30 percent of the targeting orders disseminated to CPSC staff at ports. Agency staff at two New York ports told us that ACE can be a useful source of additional information for their local targeting efforts.
APHIS Makes Limited Use of ACE Data to Review Imports for Agricultural Risks
Animal and Plant Health Inspection Service (APHIS)
APHIS collaborates with Customs and Border Protection agricultural specialists to keep agricultural pests and diseases out of the United States. In pursuit of this mission, the agency maintains Plant Protection and Quarantine and Veterinary Services units at some ports of entry and operates its own data analysis system, the Agriculture Risk Management system. APHIS also implements a requirement, mandated under 2008 amendments to the Lacey Act, to file a plant and plant product import declaration on arrival in the United States. Importers may file the declaration in ACE or in APHIS's Lacey Act Web Governance System.
APHIS’s use of ACE data remains limited while the agency works to expand linkages between its data processing systems and ACE. According to APHIS officials, the agency did not establish an electronic link to ACS, ACE’s precursor system, and instead used paper forms in its import review processes. In 2016, the agency pilot-tested electronic submission of APHIS-specific partner agency message set data through ACE and subsequently announced that data could be submitted through ACE for APHIS compliance review. However, trade community participation remains voluntary except for Lacey Act–covered imports. According to APHIS officials, companies that import APHIS-regulated products have been slow to invest the resources required to transition to reporting through ACE and, as a result, use paper forms to submit information about most shipments of such products. However, APHIS officials observed that reporting through ACE occurs for a small but growing share of all imports subject to APHIS regulation.
APHIS has been collaborating with CBP to provide for the effective flow of information between ACE and APHIS’s systems, but these efforts remain incomplete. While staff of APHIS’s Veterinary Services unit may access ACE data directly to complete their import review processes, APHIS intends for its Plant Protection and Quarantine staff to access ACE data through the agency’s Agriculture Risk Management system, according to APHIS officials. However, these officials informed us that the functionality required for accessing ACE data through that system is still under development. They explained that Plant Protection and Quarantine staff will use ACE to receive and reply to inquiries from, and provide assistance to, CBP agricultural specialists regarding incoming cargo requiring inspection and that significant coordination is required to fully integrate the two agencies’ data processing systems.
APHIS officials observed that a CBP requirement for partner agencies to complete extensive background checks of staff before they can receive access to ACE has presented another obstacle to greater use of the system by staff of both Plant Protection and Quarantine and Veterinary Services. In November 2017, APHIS officials informed us that more than 100 agency staff had completed these background checks and thus had access to ACE but that the current number of users remained insufficient to process many APHIS-regulated goods in ACE.
Fish and Wildlife Service Makes Little Use of ACE Data
Fish and Wildlife Service (FWS)
FWS monitors wildlife trade and works to prevent the illegal importation or exportation of species (including parts and products thereof) that are regulated under the Convention on International Trade in Endangered Species of Wild Fauna and Flora and U.S. wildlife laws and regulations, according to U.S. Customs and Border Protection (CBP). Virtually all wildlife imports and exports must be declared to FWS and cleared by FWS wildlife officers, according to CBP. To carry out its responsibilities, FWS maintains staff at 38 U.S. ports and generally requires that all internationally traded wildlife and wildlife products be routed through designated ports. FWS staff are able to place holds on, to inspect, and to deny entry or exit to incoming or outgoing cargo, according to agency officials. FWS staff obtain information about incoming or outgoing cargo from data filed by the trade community in the agency's own data analysis and targeting system, the Law Enforcement Management Information System.
FWS use of ACE data in its import review and regulation activities has been minimal, in part because of technical challenges. According to FWS officials, the agency attempted during the 1990s to integrate its activities with ACS. After concluding that ACS did not meet FWS needs, the agency discontinued these efforts in 2000 and developed its own Electronic Declarations system for the trade community to submit data to the agency’s data analysis and targeting system. Agency officials told us that FWS port staff may access ACE and that some find it a useful source of additional information on incoming cargo. However, FWS has not yet integrated ACE into FWS operations.
FWS officials told us that lack of alignment between the Harmonized Tariff Schedule codes that CBP uses to organize its work and FWS’s regulatory responsibilities constitutes a significant challenge in integrating ACE into FWS operations. For example, the tariff schedule may indicate only that an import is leather footwear, while FWS operations may also require additional information about the leather’s source, such as the type of animal, its nation of origin, and its domestication status. According to FWS and CBP officials, FWS has so far been unable to overcome this difficulty.
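As a hypothetical illustration of the mismatch the officials describe, a tariff code identifies only the product category, while FWS review also depends on source attributes that the code does not carry. All names and fields in the sketch below are invented assumptions:

```python
# Hypothetical illustration of the data mismatch described above: an HTS code
# identifies a product category, but FWS review also needs wildlife-source
# attributes the code does not carry. All field names are invented.
from dataclasses import dataclass
from typing import Optional


@dataclass
class HtsEntry:
    hts_code: str     # e.g., "6403.99": footwear with leather uppers
    description: str


@dataclass
class WildlifeAttributes:
    species: Optional[str] = None           # type of animal the leather came from
    country_of_origin: Optional[str] = None
    wild_or_captive: Optional[str] = None   # domestication status


def fws_review_ready(attrs: WildlifeAttributes) -> bool:
    """FWS can complete its review only if the source attributes are supplied."""
    return None not in (attrs.species, attrs.country_of_origin, attrs.wild_or_captive)


entry = HtsEntry("6403.99", "leather footwear")
fws_data = WildlifeAttributes()    # nothing in the HTS entry fills these fields
print(fws_review_ready(fws_data))  # False -> additional data must be collected
```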
According to FWS officials, the agency pilot-tested participation in ACE during 2016 but suspended the test in January 2017 in light of trade community concerns about expanded reporting requirements, lack of clarity in the requirements, and uncertainty regarding FWS’s authority to collect data electronically. According to FWS officials, the agency subsequently began efforts to reach agreement with trade community representatives and CBP on an approach to data collection through ACE that will meet the needs of both FWS and the trade community. FWS officials stated in November 2017 that these discussions had produced an interim solution and were continuing and that FWS and CBP planned to resume pilot testing in March or April 2018.
ACE Users Report Cost Savings and Enforcement Benefits
CBP and partner agency officials and trade community representatives told us that their use of ACE has reduced costs by increasing the efficiency of trade processing. CBP and partner agency officials also reported that the system has strengthened their ability to enforce trade laws and regulations. CBP has developed metrics for itself and the trade community that estimate savings associated with the increased efficiency of some processes in ACE. According to CBP documents and officials, the agency plans to expand its metrics for capturing ACE benefits—for example, to estimate the value of increased efficiencies for partner agencies and to measure any savings associated with the remaining core ACE capabilities after they are implemented.
ACE Users Report the System Has Improved Efficiency, Reduced Costs, and Enhanced Enforcement
Agencies and Trade Community Report Improved Efficiency and Associated Savings
CBP, partner agencies, and trade community representatives who use ACE to conduct their work told us that the use of ACE had improved the efficiency of import processing and brought associated cost savings.
Fewer paper records. According to CBP officials at the Port of New York, the use of ACE for electronic data submission has significantly reduced reliance on paper forms in processing imports. The officials noted that before ACE was implemented, their reception area was typically filled with couriers delivering large volumes of paper for manual processing. CBP officials told us that electronic data submission through ACE had allowed CBP and partner agencies to automate over 250 paper forms. In addition, one trade community representative we spoke with said that elimination of paper records had been the primary benefit realized through ACE implementation.
CBP has estimated, on the basis of an informal survey of private companies, that eliminating document delivery to CBP offices would save $25 per courier trip.
Faster processing. According to CBP and partner agency officials, ACE's automated review of data submitted by importing companies speeds the agencies' processing and clearing of eligible shipments for release. CBP officials at the Port of New York commented that although reviewing and clearing incoming cargo for release through ACS required approximately 24 hours, performing this process through ACE takes only a few minutes if data are complete and properly formatted and if the cargo does not require inspection. For example, CBP officials stated that the Environmental Protection Agency formerly took an average of about 4 days to clear cargo for release into the U.S. market but now takes only seconds to clear nonproblematic shipments. CBP officials further observed that the reduction in document processing and the elimination of manual data review for nonproblematic imports increase the time available for CBP officials at ports to engage in tasks such as examining cargo that may violate U.S. trade and customs laws. In addition, NHTSA officials stated that ACE had substantially accelerated their review and clearance process. Further, FDA reported that since the agency's cargo review and clearance process had been linked to ACE, the portion of incoming FDA-regulated cargo receiving an automated "may proceed" had increased from 26 to 62 percent, and processing time for these entries averaged less than 2 minutes. According to trade community representatives and CBP officials, ACE has also dramatically reduced the time required to file bond applications, from several days to a few seconds.
Reduced labor and storage costs. CBP officials and trade community representatives reported that efficiency improvements resulting from the use of ACE can lead to substantial labor- and storage-cost savings for the trade community. CBP officials observed that expedited processing can reduce storage and demurrage costs for importers. For example, CBP officials commented that companies in the Newark, N.J., area could be charged $250 to $300 per day to store a container awaiting clearance to enter the U.S. market.
Fewer supply chain disruptions. CBP and trade community representatives reported that ACE had reduced the negative impacts that import processing delays can have on company supply chains. For example, a pharmaceutical company representative stated that ACE had reduced delays in processing incoming cargo that, before ACE was implemented, sometimes lasted for 10 days or longer, resulting in costly supply chain failures. According to this representative, a longer-than-expected delay of an imported material that is a vital ingredient in a time-sensitive clinical trial or a treatment could result in significant material losses.
CBP and Partner Agencies Report ACE Has Improved Enforcement
While ACE is not a targeting system, the data that ACE provides have improved CBP's and partner agencies' ability to identify incoming cargo for inspection, according to CBP and partner agency officials. For example, ACE, in addition to other sources, provides data that CBP uses in its Automated Targeting System and that most of the partner agencies we examined use in their own data analysis and targeting systems to flag relatively high-risk cargo for possible inspection by port officials. (See text box for examples of CBP's and partner agencies' targeting efforts.)
Examples of CBP and Partner Agency Efforts to Target High-Risk Imports
U.S. Customs and Border Protection (CBP) and its partner agencies perform targeting of imports at the national and local levels. For example:
CBP. At the national level, CBP maintains the Automated Targeting System, which compares traveler, cargo, and conveyance information against law enforcement, intelligence, and other enforcement data, using risk-based targeting scenarios and assessments to identify relatively high-risk cargo. CBP also operates the Commercial Targeting and Analysis Center, which facilitates targeting and enforcement information sharing among partner agencies involved in clearing or licensing cargo. In addition, CBP maintains five National Targeting and Analysis Groups, each targeting higher-risk imports related to one of CBP's priority trade issues. For instance, the National Targeting and Analysis Group for Trade Agreements targets shipments for which the country of origin has been misrepresented to avoid import duties. CBP officials at ports of entry also conduct locally focused targeting efforts.
Partner agencies. All five of the partner agencies we selected for our review—the Food and Drug Administration, the National Highway Traffic Safety Administration, the Consumer Product Safety Commission, the Animal and Plant Health Inspection Service, and the Fish and Wildlife Service—work with CBP in the Commercial Targeting and Analysis Center while also employing their own import data analysis and targeting systems. In addition, agencies with personnel at U.S. ports of entry may conduct locally focused targeting efforts.
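The report describes these targeting systems only at a high level. To illustrate the general concept of rule-based risk scoring that the text box describes, the following minimal Python sketch screens a shipment against a few weighted rules; the rule names, weights, threshold, and country codes are invented for illustration and do not reflect CBP's actual targeting scenarios or data.

```python
# Illustrative only: a toy rule-based screen in the spirit of risk-based
# cargo targeting. All rules, weights, and the threshold are assumptions.

SUSPECT_ORIGINS = {"XX", "YY"}                # hypothetical country codes
DUTY_SENSITIVE_GOODS = {"steel", "aluminum"}  # hypothetical commodity list

def risk_score(shipment: dict) -> int:
    """Sum the weights of every rule the shipment trips."""
    score = 0
    if shipment["origin"] in SUSPECT_ORIGINS:
        score += 40   # origin associated with prior violations
    if shipment["commodity"] in DUTY_SENSITIVE_GOODS:
        score += 30   # goods subject to antidumping/countervailing orders
    if shipment["declared_value"] < 0.5 * shipment["typical_value"]:
        score += 30   # possible undervaluation relative to historical norms
    return score

def flag_for_exam(shipment: dict, threshold: int = 60) -> bool:
    """Refer the shipment for possible inspection if it scores high enough."""
    return risk_score(shipment) >= threshold

shipment = {"origin": "XX", "commodity": "steel",
            "declared_value": 90_000, "typical_value": 100_000}
print(flag_for_exam(shipment))  # True (score 70 meets the threshold of 60)
```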
CBP officials indicated that ACE had improved their trade enforcement efforts. For example:
CBP officials stated that ACE’s streamlining of import processing helps to better ensure compliance with trade laws and regulations. CBP port staff stated that reduction in the time required to process paper forms has allowed them to devote more time to higher value- added activities such as inspecting incoming cargo. In addition, CBP officials at the Commercial Targeting and Analysis Center said that it was easier to access and generate reports in ACE than in ACS.
CBP officials observed that ACE’s collection of additional information facilitates trade enforcement. Officials in the agency’s National Targeting and Analysis Groups explained that ACE functions as a valuable system of record that can be employed to refine and focus targeting efforts, as the results of each examination undertaken are recorded in ACE for future reference. Similarly, CBP officers in the New York area said that ACE was a valuable source of additional information—for example, data on particular products or importing companies—that helped them in their local targeting efforts.
In addition, partner agency officials at ports indicated that ACE data were indirectly or directly useful in their enforcement efforts. For example, FDA officials in the New York area told us that, while they do not access ACE directly, FDA’s targeting system, on which they primarily rely, does access ACE data. FDA headquarters officials noted that ACE provides the agency’s targeting system with more data elements than it received through ACS and that this has led to greater processing efficiency. A CPSC port official stated that he found ACE a very useful source of information that helped him to refine his local targeting efforts.
CBP expects the use of ACE to also yield indirect, economy-wide benefits by improving the targeting of shipments that violate U.S. trade policy, according to a CBP official and a CBP analysis. For example, according to a CBP official we interviewed, more-thorough enforcement of U.S. anti-dumping and countervailing duty orders would reduce the entry of products that unfairly compete with U.S. producers. Similarly, a cost-benefit analysis that CBP conducted in 2002 cited reduced predatory or unfair trade practices as a potential benefit of ACE. In addition, the CBP official observed that the use of ACE for targeting shipments could help to prevent injuries to American consumers by reducing the number of unsafe foreign products that enter the U.S. market.
CBP Has Developed, and Plans to Expand, Metrics to Estimate the Value of Process Efficiencies Gained through ACE
CBP Has Developed Some Metrics to Value Efficiency Gains for Itself and the Trade Community
CBP has developed metrics to estimate the value of efficiency gains associated with the use of some of the implemented ACE capabilities for itself and the trade community. CBP's metrics capture reductions in the time required for CBP staff to complete certain import processes now included in ACE and translate these efficiency gains into dollar values. CBP performs similar calculations for the trade community, using survey data from companies on the savings they estimate are realized when import processes are transitioned into ACE. For fiscal year 2017, CBP estimated that efficiencies gained through the implemented core ACE capabilities for which it had developed metrics had a total value of nearly $28 million for itself and about $52 million for the trade community. These metrics estimate potential cost savings associated with efficiency gains resulting from the use of ACE, according to CBP officials; the estimates do not account for CBP's costs for developing and maintaining ACE, which, according to CBP, amounted to about $118 million in fiscal year 2017. In addition, the estimates do not account for costs that the trade community has incurred in adapting to ACE. For example, one representative of a large company estimated that the total cost of developing appropriate software had exceeded $12 million.
CBP’s metrics capture increased efficiency gains in a number of areas. For example, ACE includes a feature that allows members of the trade community to submit corrections to data on incoming shipments after the data have been summarized and presented to, and accepted by, CBP.
Importers formerly requested such “post summary corrections” by submitting a paper form for CBP’s review. To capture the value of this procedural change for CBP, the agency surveys CBP officials to determine their time savings on each post summary correction and multiplies the average per-transaction time saved by the number of summaries submitted and the CBP officials’ average hourly compensation rate. To capture the value of the change for members of the trade community, CBP surveys importers, brokers, and shippers to determine their average savings for each transaction and multiplies the reported savings by the number of summaries submitted. CBP’s metrics also capture reductions in the time that CBP officers devote to completing primary processing for incoming cargo, the time that trucks must spend waiting at border crossings for clearance to enter the United States, and the time that CBP and members of the trade community devote to processing applications for customs bonds, among other things.
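Because the report describes the valuation method in words only, the following minimal Python sketch restates it; all input figures are hypothetical, since the report does not publish CBP's survey results, transaction counts, or compensation rates.

```python
# Hedged sketch of CBP's valuation arithmetic as described above.
# Every number below is a placeholder, not a figure from the report.

def cbp_savings(minutes_saved_per_txn, transactions, hourly_rate):
    """CBP value: average time saved per transaction x volume x pay rate."""
    return (minutes_saved_per_txn / 60) * transactions * hourly_rate

def trade_savings(savings_per_txn, transactions):
    """Trade community value: reported per-transaction savings x volume."""
    return savings_per_txn * transactions

# Hypothetical post summary correction figures:
print(cbp_savings(15, 100_000, 60))  # 15 min x 100,000 corrections x $60/hr
                                     # = 1500000.0
print(trade_savings(25, 100_000))    # $25 x 100,000 corrections = 2500000
```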
CBP’s estimate of the value of efficiencies resulting from the use of ACE has grown over time. For example, for fiscal year 2014, CBP estimated the total value of these efficiencies for CBP and the trade community at about $33 million—about 40 percent of the total value of such efficiencies CBP reported for fiscal year 2017. This increase reflects CBP’s progress in deploying core capabilities and in developing and applying metrics to capture the capabilities’ value to CBP and the trade community. The increase in the estimated value also reflects growing use of ACE by partner agencies and members of the trade community. For example, the number of import entry summaries that partner agencies filed in ACE increased fourfold in the 3-year period from January 2014 through January 2017.
According to CBP officials, CBP and partner agencies are unable to develop metrics to quantify trade enforcement benefits that may have resulted from their use of ACE, in part because of a lack of baseline information and the difficulty of isolating such impacts. For example, an increase in seizures may reflect increased efforts, increased efficiency in those efforts, or an increase in the volume of imports subject to seizure. Similarly, according to a CBP official, a lack of baseline information makes it difficult to assess any broader impacts of improved trade enforcement resulting from the use of ACE, such as prevention of injuries to American consumers through better targeting of harmful foreign products.
CBP Plans to Expand Its Metrics for Savings and Other Benefits
CBP reported that it is working to expand its metrics for estimating cost savings associated with improved trade processing efficiencies and other benefits resulting from the use of ACE.
CBP officials stated that they expect to have collected sufficient data in the near future to begin reporting on the estimated dollar value of efficiencies that partner agencies are realizing through ACE. While CBP measures efficiency improvements and associated savings resulting from CBP and the trade community’s use of ACE, CBP and most partner agencies currently do not collect or report information about efficiency improvements or associated savings that the partner agencies may have realized.
CBP has prepared baseline information that will allow it to measure efficiency improvements and estimate any savings associated with several post-release core ACE capabilities, including reconciliation, liquidation, and drawback, after they are implemented. For example, on the basis of an internal study completed in late 2016, CBP has determined that agency officials take about 1.8 hours, on average, to process a drawback entry summary. Comparing this average time with the average time required after this post-release capability is implemented in ACE will allow CBP to calculate the average time saved per transaction. CBP plans to obtain comparable information from the trade community to allow similar calculations of efficiency improvements for importing companies.
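A sketch of that baseline comparison follows; the 1.8-hour baseline comes from the report, while the post-deployment average and annual volume are placeholders pending CBP's actual measurements.

```python
# The 1.8-hour figure is from CBP's 2016 internal study; the other
# values are assumptions used only to show the calculation.

baseline_hours = 1.8              # average pre-ACE drawback processing time
post_ace_hours = 0.5              # assumed post-deployment average
hours_saved_per_entry = baseline_hours - post_ace_hours   # 1.3 hours

annual_drawback_entries = 50_000  # hypothetical annual volume
print(hours_saved_per_entry * annual_drawback_entries)    # 65000.0 hours/year
```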
CBP officials stated that, while the agency does not currently measure any improvement in revenue collection that may have resulted from the implemented capabilities, CBP plans to undertake efforts to better understand the current revenue collection environment and to explore ways to collect baseline information on revenue collections. The officials said that CBP intends to identify revenue collection metrics that are quantifiable and reportable after it deploys the liquidation and reconciliation capabilities in ACE and completes deployment of collections.
According to CBP documents, CBP’s Office of Trade has outlined a strategy for improving the agency’s ability to measure benefits resulting from the use of ACE. CBP documents indicate that this strategy will include efforts to measure, to the extent that data are available, the impact of any enhancements to the system after implementation of core capabilities is complete, including enhancements identified as critical components in improving import or export operations.
Approach to Managing ACE after Implementation of Core Capabilities Has Not Been Established
CBP does not have a process in place to manage the continued development of ACE after February 2018, when it finished implementing most of the capabilities it identified as core. ACE users in CBP, partner agencies, and the trade community have identified a number of shortcomings in ACE and have suggested enhancements to address them. CBP has identified a small number of enhancements suggested by CBP and the trade community as near-term priorities and identified a number of others to consider for priority status. However, a substantial number of additional suggested enhancements, including submissions from partner agencies, remain unaddressed. Further, a process for prioritizing all suggested enhancements has not been established. Moreover, funding for the continued development of ACE after fiscal year 2018—including funding to address most of the suggested enhancements—has not been identified. CBP and its partner agencies are working to establish a management approach that includes processes for prioritizing and funding enhancements from all sources, but it is unclear when these discussions will conclude or the extent to which they will resolve outstanding issues. Federal guidance calls for establishing the organizational structure necessary to achieve objectives, including compatible means of operating across agency boundaries.
ACE Users Have Identified Shortcomings in ACE and Suggested Enhancements to Address Them
ACE users in CBP, the trade community, and partner agencies have identified a variety of shortcomings in ACE and have suggested enhancements to address them. Examples of reported shortcomings include the following:
CBP officials tasked with validating data in ACE to assess compliance with trade laws and with processing importers’ protests of duty assessments told us that performing those tasks in ACE is labor intensive and cumbersome.
CBP and agency officials noted that ACE has not yet been updated to respond to a number of legal requirements, including several TFTEA provisions and agency regulations necessitating certain enhancements to ACE.
Some partner agency officials cited capabilities that were included in ACS but, despite being needed by the agencies for their import review and enforcement responsibilities, had not been deployed in ACE.
CBP agriculture specialists identified a number of shortcomings in ACE capabilities for processing imported agricultural goods. ACE contains a “workspace” specifically designed for agricultural goods, but it is incomplete.
Trade community officials highlighted the need for a variety of improvements in the arrival/cargo-release and post-release phases of the import process, such as improving the ability of agency officials and the trade community to send messages in ACE and increasing the size of files that the trade community can submit.
A 2016 CBP survey of ACE users, including trade community representatives and partner agency officials, found that while the majority of respondents were satisfied with the ease of using ACE, substantial minorities (29 percent of CBP respondents, 36 percent of partner agency respondents, and 31 percent of trade community respondents) were dissatisfied, citing concerns with navigation and functional limitations.
In response to such shortcomings, ACE users have submitted a large number of suggestions for enhancements to ACE. According to a CBP document, as of July 2017, 671 enhancements had been submitted since the early 2000s and many of these had been addressed; however, a third of those submitted (223) remained to be addressed. Of the unaddressed enhancements, nearly three-quarters were submitted by trade community representatives (see fig. 3). According to CBP officials, funding constraints, as well as the effort required to complete deployment of core ACE capabilities within established time frames, largely precluded efforts to address enhancements over the last year. CBP officials stated that, because ACE is not funded to support enhancements, funding for enhancements suggested by CBP or the trade community must be provided by a CBP unit and funding for enhancements suggested by a partner agency must be provided by that agency.
While postponing action on these suggestions, as of November 2017 CBP had prioritized seven enhancements suggested by CBP staff or the trade community to be implemented in the near term, most of them in response to legal or technical requirements. CBP also had identified 22 additional enhancements suggested by CBP staff or the trade community for consideration as priorities.
Prioritized enhancements. CBP’s seven prioritized ACE enhancements include two that had been scheduled for implementation in fiscal year 2017 and five that were scheduled for implementation as post-core activities begin. According to CBP officials, the agency prioritized three of the seven enhancements in response to provisions in TFTEA; one of these three, pertaining to drawback processes, was necessitated by changes in the act, and the other two were intended to support changes in CBP procedure mandated by the act, according to CBP officials (see table 3). The CBP officials said that a fourth enhancement was required to comply with a new electronic filing rule by the U.S. Court of International Trade and that a fifth was needed to correct technical obsolescence. As table 3 shows, the information that CBP officials provided identified in general terms the enforcement or other benefits that could be realized through addressing these prioritized enhancements. As the table shows, as of September 2017, CBP had identified funding for three of these seven priorities.
Accepted but unprioritized enhancements. CBP officials also provided us with a list of 22 unprioritized enhancements suggested by CBP staff and the trade community that had been presented to CBP’s Product Management Committee for assessment and possible prioritization. Several of these enhancements are aimed at strengthening ACE provisions for processing agricultural imports. For example, one enhancement would improve the interface between ACE and various Department of Agriculture subsystems, reducing the need to manually enter data in multiple systems. Another enhancement would integrate the ACE agricultural workspace and CBP’s Automated Targeting System, strengthening targeting for agricultural imports. The list of unprioritized enhancements also includes initiatives to simplify several import processing steps for the trade community, allowing faster processing and associated cost savings.
Process for Prioritizing Enhancements from All Sources Has Not Been Established
While CBP has a process for prioritizing enhancements suggested by its own staff or by members of the trade community (see text box), no process has been established for prioritizing enhancements suggested by partner agencies or for making priority decisions among all suggested enhancements, including those submitted by partner agencies. Enhancements suggested by partner agencies are provided to the Border Interagency Executive Council (BIEC) for prioritization. The BIEC, which CBP chairs, was created to improve coordination among ITDS partner agencies. The BIEC’s responsibilities extend to reviewing and prioritizing partner agency suggestions for enhancing ACE, according to CBP officials. However, CBP officials told us in September 2017 that the BIEC did not have explicit criteria for prioritizing partner agency suggestions and had not yet agreed on a cost-sharing strategy that would allow multiple agencies to share the cost of enhancements that might benefit those agencies. In the absence of such a process, CBP has been evaluating partner agency–suggested enhancements on a first-come, first-served basis, and partner agencies requesting such enhancements are required to pay for them on a fee-for-service basis, according to CBP officials.
CBP’s Documented Process for Prioritizing ACE Enhancements Suggested by CBP Staff or the Trade Community
CBP policy offices consider six criteria to decide whether to accept or reject enhancements suggested by CBP and the trade community: (1) completion of technical requirements to assess the required level of effort; (2) legal and regulatory provisions; (3) overlap with, or connection to, other enhancements in development or already deployed; (4) availability of funding and contract vehicles; (5) possible burden on trade, especially on existing coding or business processes; and (6) possible burden on CBP. CBP adds accepted enhancements to a list of “unprioritized initiatives.”
CBP’s Product Management Committee considers four criteria in assessing unprioritized initiatives for placement on the agency’s “short list” of priorities: (1) the enhancement aligns with a CBP mission priority, (2) the enhancement meets a legislative or regulatory requirement, (3) the enhancement is associated with a security protocol or gap, and (4) funding for the enhancement is available. According to CBP officials, an affirmative response to one or more of these criteria yields a higher probability that the enhancement will be deemed a priority. To prepare enhancements for development and deployment, CBP estimates the level of effort required, gathers high-level requirements, and conducts impact assessments. Once planning is complete, the CBP policy office sponsoring the priority develops a business case for initiatives on the “short list” of priorities, including budget justification and information on potential benefits/return on investment.
Funding for ACE Development after Fiscal Year 2018 Has Not Been Identified
Although CBP identified funding to complete the implementation of core ACE capabilities as defined by CBP in fiscal year 2018, officials of CBP and its partner agencies stated that they have not identified funding for the continued development of ACE, including most of the enhancements that have been suggested by CBP, the trade community, or partner agencies. Through fiscal year 2017, CBP maintained separate accounts to support ACE operations and maintenance and ACE acquisitions—that is, development and deployment of new ACE capabilities. According to CBP officials, the agency’s ACE acquisition funds were used exclusively to develop and deploy ACE capabilities that the agency defined as core. Neither acquisition funds nor operations and maintenance funds were available for enhancements to the core system, according to the officials.
However, CBP officials told us in November 2017 that, beginning in fiscal year 2018, the agency’s planned annual budgets for ACE would include funds only for operations and maintenance and would no longer include funds to support acquisitions. CBP officials stated that the agency had identified additional funding to complete core ACE capabilities, other than collections, in fiscal year 2018 and to ensure that these capabilities operate in concert with ACS, which the agency uses for collections. However, the agency had not yet identified funding for several enhancements that CBP considered near-term priorities (see table 3) or for the longer list of accepted but unprioritized enhancements suggested by CBP staff or the trade community. CBP officials estimated that supporting post-core development will require about $7 million in additional funds in fiscal year 2019 and slightly more than $14 million annually in additional funds in the succeeding 3 years. Figure 4 summarizes CBP’s anticipated ACE funding requirements for fiscal years 2019 through 2022, as identified by CBP in November 2016 and September 2017.
Approach to Managing ACE after Completion of Core Capabilities Has Not Been Finalized
CBP is working with its partner agencies in the BIEC to reach agreement on an approach to managing ACE’s continued development after completing the implementation of core capabilities, but this approach has not been finalized. According to CBP officials and some partner agency officials, the BIEC is seeking agreement on processes for prioritizing all suggested enhancements and for sharing the costs of maintaining and enhancing the system.
Process for prioritizing enhancements. According to CBP officials, the BIEC is developing a process for prioritizing enhancements, including criteria to be applied and a governance process to guide decision making. CBP officials stated that this process would be applied to all suggested enhancements, regardless of their source.
Process for sharing costs. According to CBP officials, the BIEC agreed in early 2016 to begin working toward consensus among CBP and its partner agencies on an approach to sharing future ACE operations and maintenance and development costs. This consensus is to include an agreement on criteria for classifying suggested enhancements as operations and maintenance or as new capabilities and on funding arrangements for both categories. Additionally, the Office of Management and Budget requested that the Department of Homeland Security and CBP develop a cost-sharing framework, according to CBP.
However, the BIEC has not yet finalized a management approach to address these tasks. According to CBP, in early December 2017 the BIEC produced a document, titled “BIEC Principals Single Window Sustainment Decision Memorandum,” proposing a “sustainment model” for ACE and received partner agency comments on this document later that month. CBP did not provide us with copies of the memorandum or the partner agencies’ comments but stated that the comments covered the following areas: acceptance of a proposed definition of operations and maintenance and a “pay as you go” funding model, evaluation criteria for prioritizing suggested enhancements, and an overall process for making prioritization decisions. According to CBP officials, a draft cost-sharing and prioritization process plan was distributed to the BIEC principals and discussed in detail at a principals meeting on January 30, 2018, and work on refining and finalizing this plan is continuing. CBP officials estimated that this process would be completed by October 31, 2018.
In light of funding constraints and the need for broad interagency agreement to adopt processes such as those reportedly under discussion in the BIEC, it is unclear whether these discussions will conclude within the specified time frame or whether the sustainment model will resolve all outstanding issues in a manner satisfactory to participating agencies. For example, according to FDA and Treasury officials, some partner agencies maintain that certain improvements to ACE suggested by partner agencies should be regarded as part of the core system—traditionally supported by CBP acquisition funds—rather than treated as enhancements that must be supported by the agencies that suggest them. It remains unclear how such enhancements will be categorized or funded, since CBP has indicated that it will no longer allocate funds to ACE acquisition and that operations and maintenance funds have traditionally not been used for such purposes.
The solutions to these unresolved issues will affect both CBP and its partner agencies, according to agency officials. FDA officials observed that CBP will not fund or implement additional capabilities without funding for these efforts, whether through its own budget or from partner agencies. Treasury officials observed that interagency coordination and transfers of funding are cumbersome, costly processes. FDA officials also commented that, rather than try to arrange cost sharing with other agencies that may have funding constraints, partner agencies might develop alternative systems to compensate for capabilities lacking in ACE. FDA officials observed that this could result in multiple agencies’ developing separate systems to meet similar needs.
According to Standards for Internal Control in the Federal Government, management should establish an appropriate organizational structure and communicate effectively to achieve agency objectives. In addition, key practices to enhance and sustain interagency collaboration include articulating a common outcome, establishing mutually reinforcing or joint strategies, and establishing compatible means of operating across agency boundaries. Until CBP, in collaboration with partner agencies, finalizes its management approach to ACE, including processes for prioritizing, and sharing costs for, critical enhancements, U.S. agencies and the trade community will not realize the system’s full potential benefits.
Conclusions
The need for an international trade data system to enhance U.S. agencies' efficiency and effectiveness in processing cargo and enforcing U.S. trade laws has long been clear. Indeed, information available from CBP, partner agencies, and the trade community points to savings and enforcement benefits resulting from the implemented core ACE capabilities, including faster import processing; improved targeting; and other benefits to partner agencies, the trade community, and consumers. However, realization of the full benefits of transitioning to ACE continues to be hampered by a variety of functional shortcomings.
CBP and its partner agencies recognize the need to agree on an approach to maintaining and continuing to develop the system after core ACE is completed. While CBP recently completed deployment of most of the capabilities that it identified as core, CBP and its partner agencies in the BIEC have not yet agreed on processes for prioritizing enhancements—including those that ACE users have suggested to improve the system—and for sharing the costs of operating and enhancing the system. Until CBP, in collaboration with its partner agencies, finalizes an approach to post-core management of ACE that includes such processes, as well as time frames for implementing them, CBP, its partner agencies, and the trade community will not realize the full potential benefits of the substantial investment ACE represents.
Recommendation for Executive Action
We are making the following recommendation to DHS: The Secretary of Homeland Security should ensure that the Commissioner of CBP, in collaboration with partner agencies, finalizes an interagency approach to the post-core management of ACE that includes (1) processes for prioritizing enhancements to ACE and for sharing ACE operations and maintenance and development costs, including the costs of suggested enhancements, among partner agencies that may benefit, and (2) time frames for implementing such processes. (Recommendation 1)
Agency Comments and Our Evaluation
We provided a draft of this report to DHS; the Departments of Agriculture, Health and Human Services, the Interior, the Treasury, and Transportation; and CPSC. DHS provided substantive comments, which are reproduced in appendix III. In addition, DHS; the Departments of Health and Human Services, the Interior, Transportation, and the Treasury; and CPSC provided technical comments, which we incorporated as appropriate. The Department of Agriculture did not provide comments.
In its substantive comments, DHS concurred with our recommendation. DHS also reported that some steps toward developing an interagency approach to post-core management of ACE had been taken after we distributed our draft report for agency comment. DHS estimated that the process would be completed by the end of October 2018. We updated our report accordingly.
We are sending copies of this report to the appropriate congressional committees, the Commissioner of CBP, and the Secretaries of the Departments of Agriculture, Health and Human Services, the Interior, the Treasury, and Transportation. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-8612 or gianopoulosk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
Appendix I: Objectives, Scope, and Methodology
In this report, we examine (1) the status of U.S. Customs and Border Protection’s (CBP) efforts to implement core Automated Commercial Environment (ACE) capabilities since 2013, (2) CBP partner agencies’ access to ACE and use of the system for import processing, (3) available information about any cost savings and trade enforcement benefits that have resulted from using ACE, and (4) the approach that will be used to manage ACE after core capabilities have been completed.
To examine CBP’s efforts to implement ACE since 2013, we obtained information from CBP’s Office of Information Technology and Office of Trade, which have been responsible for developing and administering ACE. CBP documents reviewed include ACE deployment schedules, acquisition decision memos, remediation plans, cost estimates, and a staff post mortem report on the ACE acquisition process. We also interviewed officials from CBP and five partner agencies regarding the ACE acquisition process since 2013: the Department of Health and Human Services’ Food and Drug Administration (FDA), the Department of Transportation’s National Highway Traffic Safety Administration (NHTSA), the Consumer Product Safety Commission (CPSC), the Department of Agriculture’s Animal and Plant Health Inspection Service (APHIS), and the Department of the Interior’s Fish and Wildlife Service (FWS). We selected these five agencies on the basis of their size (to include both large and small agencies), the degree to which they require documentation for clearing or licensing cargo, and recommendations from officials of CBP and the Department of the Treasury regarding agencies that would provide a range of experience in transitioning to ACE. We also reviewed prior GAO reports on ACE acquisition. While ACE is designed to permit management of both exports and imports, we focused on the implementation of ACE capabilities to manage imports, because CBP’s efforts to complete and improve ACE functionality are currently focused primarily on import trade.
To examine other agencies’ progress in accessing and using ACE data, we obtained summary information on ACE usage for CBP’s 49 partner agencies, including information such as whether an agency had a memorandum of understanding with CBP regarding ACE access, whether it accessed trade data through ACE data and how it did so. While we collected information on all 49 partner agencies, we focused our analysis on the 22 partner agencies that CBP identified as requiring documentation for clearing or licensing cargo for import or export. To collect this information, we identified and reviewed Federal Register notices posted by the agencies. We obtained documentation on agency participation in ACE from CBP officials and from the Department of the Treasury. We also discussed the documentation and our descriptions with CBP officials and partner agency officials. To understand how the five selected agencies used ACE, we conducted case studies that included reviewing CBP user guidance documents and documents from the respective agencies on their transitions and interviewing agency officials in Washington, D.C., and at the ports of New York and Newark.
To examine available information about actual and potential cost savings and enforcement benefits from using ACE, we obtained information on efforts by CBP, partner agencies, and companies involved in international trade to identify and measure efficiency gains and potential cost savings. The CBP documents we reviewed included listings and definitions of metrics for determining efficiency gains, CBP's method for using those metrics to calculate potential cost savings, and documentation of CBP's process for determining the reliability of the data and measures. In addition, we reviewed a 2015 report on CBP's ACE metrics by the DHS Office of the Inspector General, which recommended that CBP strengthen its metrics; the Inspector General subsequently closed those recommendations as implemented. On the basis of our review of the available information, we determined that CBP's metrics were sufficiently reliable for the purpose of conveying the estimated value of these efficiency gains. To understand earlier CBP estimates of potential cost savings from ACE, we reviewed a cost-benefit analysis conducted and revised by CBP during 2002-2004. We also reviewed a more recent cost-benefit analysis conducted by FDA. In addition, we interviewed officials at CBP and the five case-study partner agencies regarding information on potential cost savings and other benefits from ACE, including officials in CBP's Office of Enforcement who discussed challenges with developing metrics to measure enforcement benefits. To obtain information on observed and potential benefits and cost savings of ACE to importers, exporters, and related companies, we interviewed representatives of these companies. We also obtained information from CBP regarding its preparations to assess the benefits of enhancements to ACE after core ACE capabilities are completed. We interviewed CBP and agency officials in Washington, D.C., and at the ports of New York, N.Y., and Newark, N.J., concerning benefits and challenges associated with using ACE. We selected these ports because they allowed us to interview CBP officials charged with processing a large volume of diverse imported goods, representing both air and sea cargo. These ports also afforded an opportunity to interview field staff representing four of our five case-study agencies (APHIS, CPSC, FDA, and FWS). We also discussed these issues with CBP officials at the agency's Commercial Targeting and Analysis Center, each of CBP's five National Targeting and Analysis Groups, and six of the agency's 10 Centers of Excellence and Expertise (national-level CBP units responsible for processing imported goods associated with designated industry sectors), which we judgmentally selected. We also discussed these issues with 16 trade community representatives—that is, representatives of companies that buy and sell internationally traded products as well as brokers and shippers that work for and with these companies—some of whom participate in organizations that advise CBP regarding its operations. These 16 representatives included members of the Trade Support Network, a private sector group created to provide input to CBP on its business processes, including ACE; the Commercial Customs Operations Advisory Committee, a private sector group created to advise the Departments of the Treasury and Homeland Security on CBP's commercial operations; and the National Customs Brokers and Freight Forwarders Association.
To analyze the approach that will be used to manage ACE after core capabilities have been completed, we obtained information on CBP processes to identify, evaluate, and operationalize changes to enhance ACE. We also obtained information from CBP about its projected "post-core" budgetary needs. In addition, we reviewed documentation from CBP regarding interagency dialogue on post-core management of ACE and interviewed officials from CBP and other agencies to obtain their views on the challenges to be addressed and progress toward addressing them.
We conducted this performance audit from January 2017 to February 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Partner Agency Participation in ACE
Table 4 provides information about participation in U.S. Customs and Border Protection’s (CBP) Automated Commercial Environment (ACE) by the 22 partner agencies that CBP identified as requiring documentation to clear or license cargo. Table 5 provides information about participation in ACE by the 27 partner agencies that CBP did not identify as requiring such documentation.
Appendix III: Comments from the Department of Homeland Security
Appendix IV: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Celia Thomas (Assistant Director), Michael McAtee (Analyst-in-Charge), Marybeth Acac, Ryan Deloughry, Philip Farah, Reid Lowe, Scott McClinton, Maria Stattel, Bryant Torres, and Alex Welsh made key contributions to this report. Neil Doherty and Justine Lazaro provided technical assistance.
Why GAO Did This Study
CBP began work on ACE in 1994 to update the agency's existing electronic trade processing system. In 2006, Congress broadened this effort by mandating creation of a “single portal” International Trade Data System to, among other things, efficiently regulate the flow of commerce and more effectively enforce laws and regulations relating to international trade. Performance problems halted implementation of ACE from 2010 to 2013. In 2014, the President set a deadline of December 31, 2016, for completing the system.
The Trade Facilitation and Trade Enforcement Act of 2015 included a provision for GAO to report on issues related to ACE implementation. In this report, GAO examines (1) CBP efforts to complete core ACE capabilities since 2013; (2) agencies' access to ACE and use of the system to process imports; (3) any cost savings and trade enforcement benefits from using ACE; and (4) the approach that will be used to manage ACE after core capabilities are completed. GAO reviewed information from 22 agencies as well as importers, exporters, and brokers and interviewed agency and trade community representatives.
What GAO Found
Since renewing efforts to implement the Automated Commercial Environment (ACE) in 2013, U.S. Customs and Border Protection (CBP) has deployed a number of key ACE activities, processes, and functions that it terms core capabilities. After several delays, CBP reported that it had finished implementing these capabilities—other than a capability for revenue collections—in February 2018. CBP expects to decide how to proceed with collections by the end of March 2018, according to agency officials.
The 22 agencies CBP identified as requiring documentation to clear or license cargo are all authorized to access ACE, although GAO found considerable variation in their use of the system for import processing. For example, the Food and Drug Administration has integrated its systems with ACE and uses ACE data to review imports under its jurisdiction and target public health risks. In contrast, the Fish and Wildlife Service has not yet integrated ACE into its operations.
ACE users at CBP and partner agencies and in the trade community told GAO that using ACE has reduced costs by making trade processing more efficient and has strengthened enforcement of trade laws and regulations. CBP has developed metrics for itself and the trade community and estimated savings that could result from the increased efficiency of some processes in ACE. CBP also reported efforts to expand its metrics to capture more ACE benefits—for example, to estimate the value of increased efficiencies for partner agencies.
CBP has not yet established an approach for the management of ACE after February 2018. The agency plans to enhance ACE to address shortcomings ACE users have identified—such as difficulty in transmitting messages and required information—but has not established a process for prioritizing all suggested enhancements. CBP also has not identified funding for continued ACE development, including enhancements, after fiscal year 2018. CBP is leading an interagency effort to develop an ACE management approach that includes processes for prioritizing enhancements and sharing costs, but this approach has not been finalized. Federal guidance calls for establishing the organizational structure necessary to operate effectively and for examining efforts as needed to adopt coordinated approaches. Until processes for prioritizing ACE enhancements and sharing costs are finalized, agencies and the trade community will not realize the system's full potential benefits.
What GAO Recommends
The Secretary of Homeland Security should ensure that the Commissioner of CBP, in collaboration with partner agencies, finalizes an interagency approach to managing ACE that includes processes for prioritizing enhancements and sharing system costs. CBP concurred with GAO's recommendation.
Background
CMS operates the federally facilitated marketplace (FFM) consistent with PPACA and relevant HHS regulations. In plan year 2015, 37 states relied on the FFM. The remaining 14 states, including the District of Columbia, operated their own state-based marketplaces. According to published HHS figures, the FFM accounted for about 76 percent, or approximately 8.8 million, of plan selections made via marketplaces from November 15, 2014, through February 22, 2015. Overall, we found that about 8.04 million applicants selected a plan, effectuated enrollment, and received coverage with an associated subsidy for plan year 2015. We discuss these 8.04 million applicants later in this report. More than half of the 8.8 million plan selections in plan year 2015 were made by applicants who did not have a plan via the FFM in plan year 2014, which was the FFM's first year. Of the 8.8 million total plans, 87 percent qualified for an advance premium tax credit (APTC), with an average APTC of $263 per application per month.
All marketplaces, including the FFM, are required by PPACA to verify applicant information to determine eligibility for enrollment and income- based subsidies, if applicable. Marketplaces, among other things, must check for Medicaid eligibility before determining eligibility for qualified health plans; validate an applicant’s SSN, if one is provided, by comparing with SSA records; verify citizenship, status as a U.S. national, or lawful presence by comparing with SSA or DHS records, respectively; and verify household income and family size by comparing with tax-return data from the IRS, as well as data on Social Security benefits from SSA.
If the information the applicant provided on the application does not match the information contained in the data source, or if a data source is not available to verify the information, the FFM generates an inconsistency. The FFM then sends a notification to the applicant, who generally has 90 days to present satisfactory documentary evidence to resolve the inconsistency, and grants the applicant conditional eligibility if the applicant is otherwise qualified. While waiting for supporting documentation, the FFM attempts to review and resolve the inconsistency, which can include looking for obvious errors on the application. The FFM will generally categorize inconsistencies as expired if the applicant was not able to provide the supporting documentation to resolve the inconsistency within the allotted time frame and the FFM was not able to resolve the inconsistency. Depending on the type of inconsistency and availability of data sources, an applicant with an expired inconsistency may have his or her coverage terminated, or the applicant’s subsidy amount may be recalculated based on the trusted source information or eliminated. In other circumstances, the applicant’s situation may change such that no additional action is required by the FFM to address the inconsistency. These inconsistencies are categorized as overcome by events (OBE) and can include situations where the application changes to a non-financial-assistance application or another inconsistency has expired. Inconsistencies that the FFM cannot resolve, expire, or categorize as OBE remain open.
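To summarize the verification and inconsistency process just described, the following minimal Python sketch models it. The field names, the trusted-source lookup, and the exact transition conditions are placeholders introduced for illustration; they are not the FFM's actual data interfaces or business rules.

```python
# Hedged sketch of the FFM flow described above: compare application data
# with trusted sources, open inconsistencies on mismatches, then resolve
# or expire them, or mark them overcome by events (OBE), over time.

def verify_applicant(application: dict, trusted_records: dict) -> list:
    """Return an open inconsistency for each field that a trusted source
    (e.g., SSA, DHS, IRS) cannot confirm or contradicts."""
    inconsistencies = []
    for field in ("ssn", "citizenship", "income"):   # illustrative fields
        record = trusted_records.get(field)          # None = source unavailable
        if record is None or record != application.get(field):
            inconsistencies.append({"type": field, "status": "open"})
    return inconsistencies

def age_inconsistency(inc, docs_resolve_it, still_relevant, days_elapsed):
    """Apply the lifecycle: resolved by documents, OBE when no longer
    relevant, expired after the roughly 90-day window, or left open."""
    if docs_resolve_it:
        inc["status"] = "resolved"
    elif not still_relevant:       # e.g., application changed or superseded
        inc["status"] = "OBE"
    elif days_elapsed > 90:
        inc["status"] = "expired"  # may trigger loss of coverage or subsidy
    return inc

incs = verify_applicant({"ssn": "123", "citizenship": "US", "income": 50_000},
                        {"ssn": "123", "citizenship": "US", "income": 40_000})
print(incs)  # [{'type': 'income', 'status': 'open'}]
```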
We previously made recommendations to improve the FFM’s enrollment and eligibility-verification process. Specifically, in 2016, we made eight recommendations, including that CMS consider analyzing outcomes of the verification system, take steps to resolve inconsistencies related to SSNs, and conduct a risk assessment of the potential for fraud in marketplace applications. HHS concurred with our recommendations. In 2017, we made 10 recommendations to HHS involving the annual reporting of APTC improper-payments estimates, improving control activities related to eligibility determinations, and calculations of APTC based on incomes and family sizes. HHS concurred with 7 of the recommendations and neither agreed nor disagreed with the remaining 3 recommendations, which related to improving control activities for verifying identities of individuals, preventing duplicate coverage of individuals receiving minimum essential coverage through their employers, and verifying household incomes and family sizes. As of November 2017, HHS has not provided us with documentation to support the implementation of recommendations made in 2016 or 2017. As a result, the 18 recommendations remain open.
Analyses Identified about 1 Percent of Enrollments during Plan Year 2015 as Potentially Improper or Fraudulent, with Challenges Remaining in the Identification and Reenrollment of Reportedly Deceased Individuals
About 1 Percent of Enrollments for Plan Year 2015 Were Identified as Potentially Improper or Fraudulent
Our analysis of plan year 2015 FFM enrollment and eligibility data identified a small percentage—about 1 percent—of enrollments that were potentially improper or fraudulent because they had an unresolved issue related to citizenship, status as a national, or lawful presence, or to SSN, or were reportedly deceased. The presence of an unresolved data-matching inconsistency could indicate that an enrollment is potentially improper or fraudulent because an unresolved inconsistency indicates that the FFM could not verify information provided by the applicant. When a data-matching inconsistency is generated, HHS regulations require that the applicant provide supporting documentation generally within 90 days to resolve the inconsistency. If the applicant does not provide requested documentation within the time frame and the FFM cannot otherwise verify the information provided by the applicant, the inconsistency may be expired, which could lead to termination from coverage or a recalculation or elimination of subsidy amounts based on the trusted data source information, depending on the type of inconsistency. In addition, in our prior undercover work, we were able to obtain and maintain coverage for fictitious applicants by submitting fictitious or no documents to resolve a data-matching inconsistency. Our undercover work has also previously shown that the FFM did not verify the authenticity or accuracy of the documents we submitted to resolve inconsistencies. As part of our current analyses, we did not independently verify the instances where the FFM resolved inconsistencies when applicants provided the requested documentation during this engagement. However, if the FFM did not corroborate information on applicant-provided documentation with the appropriate agency, some applicants with resolved data-matching inconsistencies may have received coverage with an associated subsidy potentially improperly or fraudulently.
Verification of Citizenship, Status as a National, or Lawful Presence
Most of the about 8.04 million applicants who received coverage with an associated subsidy in plan year 2015 provided information that allowed the FFM to verify an applicant's status as a U.S. citizen or national, or lawfully present in the United States. Nevertheless, the FFM did identify some inconsistencies related to citizenship, status as a national, or lawful presence. The FFM flags applicants as having an inconsistency if they attest to being a citizen but their status as a citizen cannot be verified—for example, because their SSN and other information does not match SSA records—or if they attest to an eligible immigration status but their lawful presence cannot be immediately verified. Specifically, based on our analysis of enrollment data provided by CMS, the FFM initially identified approximately 88 percent of about 8.04 million applicants as a U.S. citizen or national, or lawfully present in the United States. The FFM identified the remaining approximately 961,000 applicants (12 percent) as having inconsistencies related to citizenship, status as a national, or lawful presence.
The FFM was able to obtain information from the DHS SAVE program to address some inconsistencies related to citizenship, status as a national, or lawful presence, but issues with applicant-provided information precluded the FFM from querying all of the inconsistencies. The FFM queried DHS SAVE records for about 242,000 of the 961,000 applicants with inconsistencies (25 percent), but we were not able to identify queries for about 719,000 (75 percent). See figure 1 below for a comparison of FFM inconsistencies related to citizenship, status as a national, or lawful presence to DHS SAVE records.
We found that the FFM could not query these 719,000 applicants mostly because of the quality of information submitted by applicants. Specifically, many of the applicants the FFM could not query were missing information such as immigration numbers that the DHS SAVE program requires. For example, we found applicants who provided their name and date of birth but did not provide an immigration number, which prevented the FFM from using the DHS SAVE program to verify citizenship or lawful presence status. Such cases required the FFM to request supporting documentation from the applicant.
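The routing decision this creates, querying the DHS SAVE program when an immigration number is present and otherwise requesting documents from the applicant, can be expressed compactly, as in the Python sketch below; the field names and sample records are hypothetical.

```python
# Hedged sketch: applications without an immigration number cannot be
# checked automatically against DHS SAVE and require documentation instead.

def route_lawful_presence_check(applicant: dict) -> str:
    if applicant.get("immigration_number"):
        return "query DHS SAVE"
    return "request supporting documentation from applicant"

applicants = [
    {"name": "A", "dob": "1980-01-01", "immigration_number": "A12345678"},
    {"name": "B", "dob": "1975-06-15"},  # immigration number not supplied
]
for a in applicants:
    print(a["name"], "->", route_lawful_presence_check(a))
# A -> query DHS SAVE
# B -> request supporting documentation from applicant
```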
After the initial comparison to the DHS SAVE program, the FFM attempts to resolve remaining inconsistencies by first looking for obvious errors and then by using additional documentation requested from the applicant. See figure 2 below for an overview of inconsistencies related to citizenship, status as a national, or lawful presence that remained unresolved (i.e., open), as of December 31, 2015.
As shown in figure 2, CMS addressed some, but not all, inconsistencies. Specifically, about 43,000 inconsistencies related to citizenship, status as a national, or lawful presence (less than 1 percent of total applicants) remained in an open status as of December 31, 2015. An open status indicates that CMS was unable to resolve or obtain documentation to clarify the issues that led to the inconsistency. In some cases, an inconsistency generated late in the year may have remained open but, according to CMS officials, would have carried forward and generated a new inconsistency for plan year 2016. Inconsistencies that remained open because they were not resolved within the required time frame represent potentially improper or fraudulent applicants who retained coverage without providing sufficient supporting documentation to resolve their inconsistency. However, the number of potentially improper or fraudulent applicants may be understated since we only took into consideration those with inconsistencies in an open status and not applicants with expired inconsistencies who may have continued to receive coverage and had subsidies paid to issuers on their behalf before CMS was able to terminate their coverage and subsidies.
To examine steps taken by the FFM when processing inconsistencies related to citizenship, status as a national or lawful presence, we selected a nongeneralizable sample of 15 of the 961,000 applicants that the FFM identified. For 13 out of the 15, the FFM verified the applicant’s information through supporting documentation or DHS SAVE and resolved or expired the inconsistency in accordance with its standard operating procedures, or the FFM categorized the applicant as OBE because of an application update that made the inconsistency no longer relevant. We did note that in 2 of the 13 cases, the FFM did not perform a DHS SAVE program query to corroborate the supporting documentation. However, this was not required at the time the applicants enrolled, which was prior to June 2015 when CMS established that procedure.
In the remaining cases, the FFM did not verify the applicants' information in plan year 2015, but the applicants received coverage beyond the 95-day inconsistency-resolution period. For example, in one case we found that the applicant obtained multiple policies for different periods during the year without ever providing sufficient information to verify his or her status as a U.S. citizen or national, or being lawfully present in the United States. As a result, the applicant was able to obtain coverage for two-thirds of the coverage year. According to CMS, this inconsistency was carried over to plan year 2016, when the inconsistency was expired and the applicant's coverage was terminated.
Verification of Social Security Numbers
Most applicants for plan year 2015 who received coverage with an associated subsidy submitted SSNs and other information that matched SSA records, and the FFM identified SSN inconsistencies for most of the applicants whose information did not match SSA records. As shown in figure 3, our analysis found that over 96 percent of applicants (7.74 million out of about 8.04 million) submitted information that was consistent with SSA records, but about 139,000 (1.7 percent of total applicants) did not. The other 166,000 applicants (2.1 percent) did not provide an SSN on their application.
Of the approximately 139,000 applicants (1.7 percent) whose information did not match SSA records in our analysis, we found that the FFM identified an SSN inconsistency for about 109,000 (1.4 percent of total applicants). The FFM did not designate the remaining applicants whose information did not match SSA records in our analysis (about 31,000 of 139,000 applicants) as having an SSN inconsistency for plan year 2015, indicating that the FFM did not flag the applicant’s information as not matching SSA records. The FFM may not have flagged an applicant’s information for plan year 2015 as not matching SSA records if the applicant’s information matched SSA records at the time of enrollment but the applicant later changed his or her name with SSA.
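A record comparison of the kind described above can be sketched as follows in Python. The matching rule shown (SSN, name, and date of birth must all agree) and the sample records are illustrative assumptions, not the actual criteria used in our analysis; the sample mismatch mirrors the name-change scenario described above.

```python
# Hedged sketch of matching applications to SSA records by SSN, name,
# and date of birth. The strict all-fields rule is a simplification.

def classify_application(app: dict, ssa_by_ssn: dict) -> str:
    if not app.get("ssn"):
        return "no SSN provided"
    ssa = ssa_by_ssn.get(app["ssn"])
    if ssa and ssa["name"] == app["name"] and ssa["dob"] == app["dob"]:
        return "matches SSA records"
    return "does not match SSA records"  # candidate SSN inconsistency

ssa_by_ssn = {"123-45-6789": {"name": "JANE DOE", "dob": "1970-02-03"}}
print(classify_application(
    {"ssn": "123-45-6789", "name": "JANE SMITH", "dob": "1970-02-03"},
    ssa_by_ssn))
# does not match SSA records (e.g., a name changed after enrollment)
```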
The FFM did not address all SSN inconsistencies for plan year 2015. Specifically, about 33,000 of the 109,000 applicants for whom the FFM identified an SSN inconsistency for plan year 2015 (less than 1 percent of total applicants) had an open SSN inconsistency only (see fig. 4).
An open SSN inconsistency may indicate a potentially improper or fraudulent enrollment because it indicates that the FFM did not verify the applicant’s identity information but the applicant retained coverage. Applicants may have had open SSN inconsistencies in plan year 2015 because the FFM did not take steps to actively resolve SSN inconsistencies at that time. In some cases, an inconsistency generated late in the year may have remained open but, according to CMS officials, would have carried forward and generated a new inconsistency for plan year 2016. According to CMS officials, the FFM did not actively take steps to resolve SSN inconsistencies in plan year 2015 primarily because the FFM could not update SSNs in the data system at the time, as discussed in more detail later in this section. We previously reported that open SSN inconsistencies are indicators of potentially fraudulent applications. Specifically, we reported that we had successfully enrolled and received coverage with an associated subsidy in plan year 2015 for eight undercover identities that either did not provide an SSN or had an invalid Social Security identity. Further, HHS regulations state that the FFM must follow its standard inconsistency procedures if it is unable to validate an individual’s SSN through SSA. To address this issue we recommended that CMS design and implement procedures to resolve SSN inconsistencies. In May 2017, CMS established written procedures for verifying SSNs with documents submitted by applicants, as discussed in more detail later in this report.
The remaining applicants with an SSN inconsistency for plan year 2015 had either a resolved SSN inconsistency (14,000 applicants) or an SSN inconsistency that was expired or OBE (62,000 applicants). Although the FFM was not actively resolving SSN inconsistencies in plan year 2015, according to CMS officials, most applicants with an SSN inconsistency also had an inconsistency related to citizenship, status as a national, or lawful presence, and documentation submitted to resolve those inconsistencies may also resolve SSN inconsistencies. For example, according to CMS officials, if an applicant submitted a Social Security card to the FFM, an SSN inconsistency could be resolved based on that documentation. If an inconsistency related to citizenship, status as a national, or lawful presence expired, the FFM automatically expired the SSN inconsistency, according to CMS procedures. According to CMS officials, the FFM closed SSN inconsistencies as OBE if no action needed to be taken on the inconsistency because it was no longer relevant to the application, such as in cases where the applicant corrected his or her SSN on the application.
To examine steps taken to verify SSNs and process SSN inconsistencies, we reviewed a nongeneralizable sample of 15 of the 139,000 applicants who received coverage with an associated subsidy in plan year 2015 and whose information did not match SSA records in our analysis. In 3 of the 15 cases, additional information provided by CMS indicates that the FFM verified that the SSN on the application was correct. Specifically, in two of the cases, our analysis found that the applicant's information did not match SSA records but the FFM verified the applicant's information and did not generate an SSN inconsistency. As previously discussed, the FFM may not have identified an SSN inconsistency if the applicant's information matched SSA records at the time of enrollment but the applicant later changed his or her name with SSA. In both of these cases, we found that the applicant's date of birth matched SSA records but the name did not, indicating that the applicant may have changed his or her name. In the third case, the FFM resolved the SSN inconsistency in plan year 2015 when the applicant submitted a Social Security card showing the same name and SSN as the application.
In 5 of the 15 cases, the applicant had an SSN inconsistency in plan year 2015 that was not resolved. Specifically, in two of the five cases, the SSN inconsistency expired when an inconsistency related to citizenship, status as a national, or lawful presence was expired, in accordance with CMS procedures. In one case, the SSN inconsistency remained open because, as previously noted, the FFM did not take direct action to resolve SSN inconsistencies in plan year 2015, according to CMS officials. In two cases, the SSN inconsistency was OBE. According to CMS officials, an inconsistency status may be changed to OBE when the inconsistency no longer needs to be addressed as a result of changes to the application, such as when an applicant updates information on his or her application or the application changes to a non-financial-assistance application. CMS officials did not specify what circumstances resulted in the status of these two SSN inconsistencies being changed to OBE; however, one of the applicants had a subsequent health-insurance policy that did not provide financial assistance.
We found that in 5 of the 15 cases, the FFM either resolved an SSN inconsistency in plan year 2015 when the applicant submitted a Social Security card or did not generate an SSN inconsistency for plan year 2015 because the applicant had provided a Social Security card in plan year 2014, but information on the applicant-provided Social Security card did not match information in CMS's data system. CMS officials did not indicate that the FFM had verified the name and SSN on the applicant-provided Social Security cards in these five cases with SSA records. The SSN on applicant-provided documentation may not have matched the SSN in CMS's data because, as discussed previously, system limitations existed prior to March 2017. Specifically, even if the FFM received a Social Security card to resolve an inconsistency, the FFM did not reflect this change in CMS's data system because the system did not have the capability to modify or update SSN information at the time, according to CMS officials. For example, if an applicant mistyped his or her SSN, the inconsistency may have been subsequently resolved if the applicant submitted a Social Security card, but CMS's data system would continue to reflect the incorrect SSN that had been originally submitted.
Finally, we found that in 2 of the 15 cases, the FFM either resolved the SSN inconsistency in plan year 2015 or did not generate an SSN inconsistency in 2015 because it had resolved one in plan year 2014, but the information provided by CMS did not support the resolution of the SSN inconsistency. Specifically, in one case in which the FFM resolved an SSN inconsistency for plan year 2015, we could not determine how the SSN inconsistency was resolved because, according to CMS officials, the applicant did not provide documentation of his or her SSN. In another case, the FFM automatically reenrolled an applicant for plan year 2015 without an SSN inconsistency after identifying an SSN inconsistency in plan year 2014 because, according to CMS officials, the applicant submitted a passport to resolve a citizenship inconsistency. While submission of a U.S. passport can be used to verify citizenship, CMS procedures do not permit using a passport to resolve an SSN inconsistency, and the applicant's passport did not contain an SSN. Because the applicant did not provide any other documentation to resolve the SSN inconsistency in plan year 2014 and the FFM did not generate an SSN inconsistency in plan year 2015, even though the applicant's information did not match SSA records, we could not determine whether CMS's data system reflects the correct SSN for this applicant.
According to CMS officials, having an incorrect SSN on the application does not affect eligibility, since having an SSN is not a requirement for eligibility. However, as previously discussed, resolving data-matching inconsistencies without corroborating information with the appropriate agency puts the FFM at risk of approving potentially fraudulent or improper applications for subsidized coverage. We identified approximately $59 million in APTC for plan year 2015 associated with the applications of the 14,000 applicants who provided SSNs and other information that did not match SSA records and had a resolved SSN inconsistency. The $59 million may include APTC associated with applicants whose SSN inconsistencies were resolved without sufficient documentation, applicants who had SSN inconsistencies that were resolved based on applicant-submitted documentation that does not match the SSN in CMS’s data system, and applicants whose SSN inconsistencies were resolved appropriately. We identified $112 million in APTC associated with the applications of applicants who did not have an SSN inconsistency flagged in plan year 2015, although some information did not match SSA records in our analysis. We could not associate APTC subsidies with individual applicants because applications may include more than one person.
Further, inaccurate SSNs in CMS’s system potentially impede the IRS’s ability to reconcile APTC. The IRS is responsible for processing tax returns to determine the final amount of PTC to which taxpayers are entitled and for recovering APTC overpayments. To enable the IRS to reconcile APTC, PPACA requires marketplaces to report certain information on individuals with marketplace coverage, including the name, address, and taxpayer-identification number—an SSN in cases where the individual has one—to the IRS. The IRS compares information provided by the marketplace on the APTC paid to issuers on taxpayers’ behalf to the amount for which taxpayers qualify based on actual household income and family size reported on their tax returns.
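Conceptually, this reconciliation reduces to comparing two amounts for each return. The sketch below illustrates that comparison in simplified form; it omits the statutory repayment caps that limit recovery for some taxpayers, and the function name is illustrative rather than drawn from any IRS system.

```python
# Simplified sketch of APTC reconciliation at tax filing: excess APTC
# paid to issuers on a taxpayer's behalf is owed back, while a shortfall
# is claimed as additional credit. Statutory repayment caps that limit
# recovery for some taxpayers are omitted for brevity.

def reconcile_aptc(aptc_paid, final_ptc):
    """Return (repayment owed, additional credit) for one tax return."""
    if aptc_paid > final_ptc:
        return aptc_paid - final_ptc, 0
    return 0, final_ptc - aptc_paid

# Example: $4,800 advanced against a final PTC of $4,200 yields a
# $600 repayment at filing.
assert reconcile_aptc(4_800, 4_200) == (600, 0)
```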
In March 2017, system functionality upgrades were completed and deployed to enable the FFM to modify or update SSNs, according to CMS officials. In addition, as noted previously, CMS established procedures in May 2017 for verifying SSNs with documents submitted by applicants. These procedures require the FFM to take steps to update and verify SSNs by (1) obtaining documentation of the SSN or processing previously received SSN documents, (2) entering the SSN shown on documentation into CMS’s data system, and (3) trying to verify the newly entered or corrected SSN with SSA records. Further, the procedures direct the FFM to escalate cases for CMS review if the SSN cannot be verified, or documentation submitted to verify the SSN matches the information originally provided by the applicant that could not be verified with SSA records, as this may indicate potential fraud. We did not independently verify that the procedures have been implemented because the changes occurred outside the scope of our review; however, if properly implemented, these changes may help reduce the risk that potentially improper or fraudulent applicants could obtain subsidized coverage by helping to ensure that SSNs are appropriately verified and corrected in CMS’s data system.
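The general shape of this three-step update-and-verify workflow, including the escalation rule, can be sketched as follows. All function and field names are hypothetical; CMS's actual procedures and systems may differ.

```python
def verify_ssn_with_documentation(application, document, ssa_records):
    """Resolve an SSN inconsistency using applicant-submitted documentation.

    Returns "resolved" if the documented SSN verifies against SSA records,
    or "escalate" for CMS review if it cannot be verified or if the
    document merely repeats the SSN that already failed verification,
    a possible fraud indicator.
    """
    # Step 1: obtain documentation of the SSN (passed in as `document`).
    documented_ssn = document["ssn"]

    # Escalation rule: documentation matching the originally provided,
    # unverifiable information is itself a potential fraud indicator.
    if documented_ssn == application["ssn_as_submitted"] and not application["ssa_match"]:
        return "escalate"

    # Step 2: enter the SSN shown on the documentation into the data system.
    application["ssn_on_record"] = documented_ssn

    # Step 3: try to verify the newly entered SSN against SSA records
    # (simplified here to a name comparison; a real check would also
    # compare date of birth).
    ssa_entry = ssa_records.get(documented_ssn)
    if ssa_entry is not None and ssa_entry["name"] == document["name"]:
        return "resolved"
    return "escalate"   # cannot be verified, so refer for CMS review

# Hypothetical usage: the applicant mistyped the SSN at enrollment and
# later submits a Social Security card with the correct number.
ssa = {"123-45-6789": {"name": "JANE DOE"}}
app = {"ssn_as_submitted": "123-45-6780", "ssa_match": False}
card = {"ssn": "123-45-6789", "name": "JANE DOE"}
assert verify_ssn_with_documentation(app, card, ssa) == "resolved"
```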
Relatively Few Reportedly Deceased Individuals Received Coverage with an Associated Subsidy, but Challenges Remain with Identifying Deceased Individuals before Automatic Reenrollment
We found relatively few indicators that reportedly deceased individuals received coverage with an associated subsidy during plan year 2015. Specifically, we identified about 19,000 out of the approximately 7.74 million applicants who provided SSNs and other information that matched SSA records (about 0.24 percent) who received coverage with an associated subsidy on or after the date listed in the full death file as their date of death. HHS regulations state that in the case of termination of coverage due to death, the last day of enrollment in a qualified health plan through the FFM is the date of death. However, the FFM did not always terminate the enrollment of individuals through the exchange as of the date reported in the full death file as their date of death. Specifically, we found that the coverage for about 2,000 of the 19,000 reportedly deceased individuals ended on their reported date of death, but the remaining approximately 17,000 received or maintained coverage with an associated subsidy—APTC or CSR, which the federal government pays to issuers on behalf of enrollees—after their reported date of death (see fig. 5).
Most insurance policies associated with reportedly deceased applicants began when they were alive and continued after their deaths, but in some cases the date of submission of the application for coverage occurred after the individual’s reported date of death. Specifically, through our analysis, we found about 14,000 (82 percent) of the 17,000 policies that continued beyond the applicant’s reported date of death began while the individual was alive (see fig. 6). However, the remaining policies began after the applicant’s reported date of death, including about 1,000 policies (5 percent) for which the applicant reportedly died after the application was submitted but before coverage started and about 2,000 policies (13 percent) in which the applicant died before the application was submitted.
We identified about $23.0 million in APTC—which the federal government pays to issuers on behalf of enrollees—after the date of death of the applicant associated with the 17,000 policies that started or continued after the applicant’s reported date of death, of which about a fifth (about $4.7 million) was associated with the 2,000 policies of applicants who were reported as deceased before their application was submitted. We could not determine the portion of APTC associated with each individual on a policy or the extent to which the total APTC amount would have changed if the policy had been terminated as of the reportedly deceased individual’s date of death. As previously discussed, taxpayers who choose to have APTC must reconcile the amount of APTC paid to issuers on their behalf with PTC they are eligible for on their income-tax returns. Therefore, the final PTC amount may differ from the amount of APTC paid to issuers because changes in circumstances, such as the death of an enrollee, may affect the amount of PTC for which an enrollee is eligible. We did not determine the extent to which APTC paid on behalf of reportedly deceased individuals was reconciled with PTC for which these individuals were ultimately eligible as the reconciliation process was outside the scope of our review. However, we previously found that not all individuals correctly filed their federal income-tax returns, as required, and the federal government is missing opportunities to recover overpayments of APTC as part of the reconciliation process. As a result, APTC overpayments that the federal government improperly provides to issuers on behalf of deceased enrollees may not be fully recovered through the reconciliation process.
In the majority of cases in which the applicant reportedly died before the application was submitted (about 1,700 out of 2,000 policies), we found that the FFM had automatically submitted the application to reenroll the applicant. We reviewed five sample cases in which the date of the application submission occurred after the individual’s reported date of death. For all five cases, the individual had received coverage with an associated subsidy in plan year 2014 and the FFM automatically reenrolled the individual for plan year 2015 after the reported date of death. According to additional information provided by CMS officials, the federal government paid APTC to issuers on behalf of all five of these individuals in plan year 2015 after their reported date of death.
Deceased individuals may receive coverage with an associated subsidy beyond their reported date of death—or the FFM may automatically reenroll deceased individuals after their reported date of death—because the FFM does not always identify applicants as deceased after their initial enrollment in a qualified health plan. The FFM checks applicants' information against SSA's full death file to identify reportedly deceased individuals before enrolling them for coverage and subsidies. However, we previously found that the FFM does not conduct periodic checks during the year to determine whether any individuals have subsequently died. Further, according to CMS officials, the FFM does not recheck the full death file before automatically reenrolling applicants for subsequent plan years or reverify information, but rather only rechecks income, to help encourage individuals to maintain enrollment in coverage from one year to the next and align with the process for individuals with employer-sponsored health insurance. HHS regulations require marketplaces to periodically examine certain available data sources to identify changes—such as the enrollee's death—to determine whether individuals receiving coverage with an associated subsidy remain eligible.
CMS does not always identify deaths of enrollees in time to terminate enrollment through the exchange as of the date of death or to prevent automatic reenrollment, because CMS relies on third parties, such as family members, to report the death of an enrollee to the FFM. The FFM has procedures in place for individuals to report an enrollee’s death in order to remove the enrollee from coverage. We reviewed a nongeneralizable sample of 15 of the 17,000 reportedly deceased individuals who received coverage with an associated subsidy after the date reported in the full death file as their date of death, including the five cases we reviewed in which the FFM automatically reenrolled the individuals after their reported date of death. In 8 of the 15 sample cases we reviewed, a family member or other individual contacted the FFM and reported the enrollee’s death. In two of these cases, the individual reporting the death did not provide sufficient documentation of the death, as required by CMS. In three cases, the FFM received notification and a death certificate to verify the death, but did not terminate the policy as of the date of death. The FFM did not receive the death certificates for two of the three cases until 2016—after the 2015 plan year had ended. In the other case, the deceased individual received coverage and subsidies for 3 months in 2015 after the reported date of death but the FFM did not receive the death certificate to verify the death until almost 2 months after the coverage was terminated. We could not determine the reason the individual’s coverage had been terminated. According to CMS officials, in plan year 2015, the FFM received notification of policy termination and policy end dates from plan issuers but did not always receive information on the reason coverage was terminated.
When the FFM does not receive sufficient notification of a death, the policy may be terminated by the issuer for nonpayment, according to CMS officials. According to HHS regulations, when individuals stop paying their premiums, such as in the case of death, there is a 3-month grace period, after which the individuals’ policies would be terminated for failure to pay premiums retroactively to the last day of the first month of the grace period. For example, as shown in figure 7, if an individual dies on February 15 and the premium for the policy is not paid for months after the individual’s death, the individual would enter a 3-month grace period covering March, April, and May. The issuer would terminate the policy for nonpayment on May 31, with a policy end date set retroactively to March 31—the last day of the first month of the grace period. As a result, in cases in which the policy for a deceased individual is not paid for months occurring after the individual’s date of death, the deceased individual may still receive subsidized coverage for 1 full month after the month of death.
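The retroactive-termination arithmetic in this example reduces to a simple date calculation. The following sketch is illustrative only; it assumes the premium was paid through the month of death, and the function names are hypothetical rather than drawn from any CMS or issuer system.

```python
import calendar
from datetime import date

def first_of_next_month(d: date) -> date:
    """Return the first day of the month after d's month."""
    month_index = d.year * 12 + (d.month - 1) + 1
    return date(month_index // 12, month_index % 12 + 1, 1)

def last_day_of_month(d: date) -> date:
    return date(d.year, d.month, calendar.monthrange(d.year, d.month)[1])

def retroactive_end_date(date_of_death: date) -> date:
    # The 3-month grace period starts the month after the month of death
    # (assuming the premium was paid through the month of death); the
    # policy end date is set back to the last day of its first month.
    grace_start = first_of_next_month(date_of_death)
    return last_day_of_month(grace_start)

# Example from the text: a February 15 death leads to a grace period
# covering March through May, termination on May 31, and a policy end
# date set retroactively to March 31.
assert retroactive_end_date(date(2015, 2, 15)) == date(2015, 3, 31)
```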
However, deceased individuals may receive subsidized coverage beyond the end of the first month after their date of death if the policy is not terminated by the issuer for nonpayment of premium. According to CMS officials, the plan issuer may continue to report a deceased individual as covered if the premium continues to be paid. For example, another individual may be authorized to make payments on the policy, such as a spouse who is also covered by the policy. We identified instances in which policies continued beyond the end of the first month after the date of death reported in the full death file, with some policies continuing until the end of the plan year. In 7 of our 15 sample cases—including one case in which the applicant was automatically reenrolled for plan year 2015 after his reported date of death in October 2014—the policy continued for more than 1 complete month in 2015 after the individual’s reported date of death. In four of the seven sample cases in which coverage continued beyond the end of the first month after the individual’s death, the policy also covered the deceased individual’s spouse. In other instances, an individual may have set up payments covering future months prior to death. For example, in the case in which the applicant had been receiving coverage with an associated subsidy in 2014 and was then automatically reenrolled for plan year 2015 after his reported date of death in October 2014, the individual received subsidized coverage for the entirety of plan year 2015. According to CMS officials, the individual may have set up automated payments to pay the premium.
We recommended in July 2017 that CMS assess and document the feasibility of approaches for periodically verifying individuals’ continued eligibility by working with other government agencies to identify changes in life circumstances that affect APTC eligibility, such as death, that may occur during the plan year and, if appropriate, design and implement these verification processes. The agency agreed with the recommendation and stated that it was exploring approaches to identify enrollees who may be deceased and should therefore be unenrolled from coverage. Effectively addressing this recommendation is necessary to help ensure that the FFM does not provide coverage with associated subsidies to deceased individuals. However, as of September 2017, CMS officials could not confirm whether the approaches CMS was exploring would include rechecking the full death file prior to automatically reenrolling individuals. Without rechecking the full death file prior to automatic reenrollment to identify individuals who died during the plan year, the FFM remains at risk of providing coverage to deceased individuals, potentially for prolonged periods of time following their deaths, and of paying APTC to issuers on their behalf that may not be fully recovered through the reconciliation process.
Conclusions
Effective implementation of PPACA eligibility and enrollment provisions is a complex undertaking. As subsidies for insurance coverage through the FFM cost billions of dollars to the federal government annually, effective controls to ensure that only qualified applicants receive subsidized coverage under the act are especially important. For plan year 2015, the FFM generally verified citizenship, status as a national, or lawful presence and SSN information appropriately, with few indications that individuals received coverage with an associated subsidy fraudulently or improperly. However, in some instances, applicant-submitted documentation used to verify applicant information did not match CMS data. CMS has taken steps since 2015 to improve verification of applicant information, including taking steps to improve verification of SSNs using documentation submitted by applicants and adding capability to modify or update SSNs in its data system. These procedures and system upgrades, if properly implemented, should help improve verification of applicant SSNs that initially did not match SSA records.
Further, while relatively few enrollees reportedly died prior to or during plan year 2015, some individuals received or maintained coverage with an associated subsidy after their reported deaths and some individuals were automatically reenrolled for the 2015 plan year after their reported death. The FFM checks the full death file prior to initial enrollment, but does not recheck the full death file to identify enrollee deaths during the plan year or prior to reenrolling individuals for the following plan year. Without processes to identify the deaths of enrollees in a timely manner, including prior to reenrollment for the following plan year, CMS is at risk of providing additional months of subsidized coverage improperly with related costs to the federal government. In 2017, we recommended that CMS assess the feasibility of approaches for periodically verifying changes, such as death, that affect eligibility for subsidies. Implementing our 2017 recommendation, and taking the additional step of assessing whether to check the full death file prior to automatically reenrolling individuals for the following plan year, could help ensure the FFM is not paying APTC on behalf of deceased individuals, especially for prolonged periods following their deaths.
Recommendation for Executive Action
As part of its efforts to assess and document the feasibility of approaches to identify the deaths of enrollees that may occur during the plan year, the Administrator of CMS should specifically assess and document the feasibility of approaches—including rechecking the full death file—to identify the deaths of individuals prior to automatic reenrollment for subsequent plan years and, if appropriate, design and implement these verification processes. (Recommendation 1)
Agency Comments
We provided a draft of this product to HHS for comment. In its written comments, which are reprinted in appendix II, HHS concurred with our recommendation. HHS also provided technical comments, which we incorporated as appropriate.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of CMS, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
Appendix I: Objectives, Scope, and Methodology
The objective of this review was to examine the extent to which indications of potentially improper or fraudulent enrollments exist in the federally facilitated marketplace’s (FFM) application, enrollment, and eligibility-verification process for the 2015 enrollment period.
To identify indications of potentially improper or fraudulent enrollments in the FFM’s application, enrollment, and eligibility-verification process, we reviewed relevant federal statutes, Department of Health and Human Services (HHS) regulations, and Centers for Medicare & Medicaid Services (CMS) policies for plan year 2015. We also met with CMS officials that oversee enrollment into the FFM.
In addition, we obtained and analyzed eligibility and enrollment data for applicants enrolled from November 15, 2014, through December 31, 2015, and identified about 8.04 million applicants with an associated subsidy who effectuated enrollments in plan year 2015. For the purposes of this report, we define applicants receiving coverage with an associated subsidy as applicants receiving coverage in plan year 2015 with an associated advance premium tax credit (APTC) or Cost Sharing Reduction (CSR). The number of applicants receiving coverage with an associated subsidy and the amount of associated subsidies identified through our analysis may differ from the number of applicants who ultimately received subsidized coverage and the amount of subsidies received. In addition, we obtained and analyzed information on inconsistencies associated with these applicants as of December 31, 2015.
We focused our analyses on three areas based on the eligibility and verification requirements the FFM must use to determine whether individuals are eligible to enroll and maintain coverage. Specifically, we identified and analyzed data for applicants receiving coverage with an associated subsidy (1) with inconsistencies related to citizenship, status as a national, or lawful presence; (2) whose information, including Social Security number (SSN), did not match the Social Security Administration's (SSA) records; and (3) who were reportedly deceased.
Applicants who had inconsistencies related to citizenship, status as a national, or lawful presence. To review applicants with inconsistencies, we used data from the Department of Homeland Security’s (DHS) Systematic Alien Verification for Entitlements (SAVE) system. Specifically, we obtained queries made by the FFM from November 15, 2014, through December 31, 2015, and compared them to approximately 961,000 applicants the FFM identified as having inconsistencies related to citizenship, status as a national, or lawful presence. For the purposes of this report, we considered applicants with open inconsistencies related to citizenship, status as a national, or lawful presence, or SSN, to be potentially improper or fraudulent. However, the number of potentially improper or fraudulent applicants may be understated since we did not take into consideration applicants with expired inconsistencies who may have continued to receive coverage and had subsidies paid to issuers on their behalf before CMS was able to terminate their coverage and subsidies.
Applicants whose information, including SSN, did not match SSA's records. To identify applicants whose personal information—name, date of birth, or SSN—did not match SSA's records, we used the SSA Enumeration Verification System (EVS) from November 16, 2016, through December 29, 2016, and SSA's Affordable Care Act (ACA) batch file from March 2017. Specifically, we processed through SSA EVS and the SSA ACA batch file the approximately 7.9 million of the about 8.04 million total applicants who provided an SSN and analyzed the output codes to determine whether the information matched SSA's records. To determine whether the FFM had also identified an SSN-related inconsistency, we compared the SSA EVS analysis results to the FFM eligibility information. Although having an SSN is not a condition of eligibility, we consider applicants with open SSN inconsistencies to be potentially improper or fraudulent because open SSN inconsistencies indicate that the FFM was not able to verify the applicant's identity information, but the applicant retained coverage.
Applicants who were reportedly deceased. To identify applicants who were reportedly deceased prior to or during plan year 2015, we compared the approximately 7.74 million applicants whose information matched SSA records of the about 8.04 million total applicants in the eligibility and enrollment data to the SSA full death file from June 2016. We matched records using the SSN, name, and date of birth. We limited our review to those applicants already verified through SSA EVS. We considered applicants to be potentially improper or fraudulent if they received or maintained coverage with an associated subsidy after the date reported in SSA’s full death file as their date of death.
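The core of this match can be illustrated with a short sketch. The record layouts and field names below are assumptions for illustration, not the actual match code used in this analysis.

```python
# Illustrative sketch of the match described above: join on SSN, name,
# and date of birth, then flag applicants whose coverage extended past
# the date of death reported in the full death file.
from datetime import date

def match_death_file(applicants, death_file):
    """Yield (applicant, date of death) pairs where coverage continued
    past the reported date of death. `death_file` maps (ssn, name, dob)
    keys to a reported date of death."""
    for a in applicants:
        key = (a["ssn"], a["name"], a["dob"])
        date_of_death = death_file.get(key)
        if date_of_death and a["coverage_end"] > date_of_death:
            yield a, date_of_death

# Hypothetical example: coverage through year end despite a reported
# February 15 death would be flagged.
deaths = {("123-45-6789", "JOHN DOE", date(1950, 1, 2)): date(2015, 2, 15)}
apps = [{"ssn": "123-45-6789", "name": "JOHN DOE", "dob": date(1950, 1, 2),
         "coverage_end": date(2015, 12, 31)}]
assert len(list(match_death_file(apps, deaths))) == 1
```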
To determine the reliability of the data used in our analysis, we performed electronic testing to determine the validity of specific data elements in the FFM and other federal data files that we used to perform our work. We also interviewed officials responsible for their respective databases, and reviewed documentation related to the databases and literature related to the quality of the data. On the basis of our own testing and our discussions with agency officials, we concluded that the data elements used for this report were sufficiently reliable for our purposes. For reporting purposes, we present the results of our data-matching analyses as approximate whole numbers.
To review the results of our matches, we selected a nongeneralizable sample of 45 applicants that contained
15 cases with inconsistencies related to citizenship, status as a national, or lawful presence;
15 cases where the applicant's SSN information did not match SSA records; and
15 cases where the applicant’s information matched the SSA full death file.
For all 45 cases, we requested and reviewed copies of available supporting documentation from CMS. Our review of applicant cases provides illustrative examples, and the results are not projectable to the entire population of applicants to the FFM.
As discussed above, we focused our analyses on three areas. We did not perform analyses using independent data sources to verify other types of information required for applicants to enroll in qualified health plans and qualify for subsidies, which we have discussed in previous GAO reports. Specifically, we did not perform analysis on the following:

Income. Internal Revenue Service (IRS) household income information is necessary in determining subsidy amounts, but can be up to 2 years old. Due to the age of the data, there may be discrepancies between applicants' attested information and what the marketplace obtains through the federal data services hub (data hub). According to HHS regulations and CMS guidance, if electronic data are unavailable or an applicant's attestation of projected annual household income is more than 10 percent below the annual household income as computed using available data sources, the marketplace must follow inconsistency-resolution procedures. These procedures will accept differences of up to 20 percent between an applicant's attested income and what CMS is able to recalculate using supporting documentation; a simplified sketch of these threshold checks appears after this list.
Residency. Individuals must intend to reside in the state in which they are applying for coverage and are not required to have a fixed address in the state. The marketplace can accept self-attestation unless the information provided by the applicant is not reasonably compatible with other information provided by the applicant or in the records of the marketplace. HHS has recently stated that its previous assessments of available sources did not identify any comprehensive data source for verifying residency. However, we previously reported that CMS did not document an evaluation of available external sources to determine the quality, relevance, and reliability of the data, and recommended that it do so.
Incarceration. Individuals must not be incarcerated (unless incarcerated while awaiting disposition of charges). We have previously reported that there are many challenges associated with using incarceration data, including the risk of false positives.
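As referenced under "Income" above, the sketch below gives a minimal illustration of the two income thresholds, assuming simplified inputs. It is not CMS's actual rule engine, and measuring the 20 percent tolerance against attested income is our reading of the guidance described above.

```python
# Minimal sketch of the income thresholds described under "Income":
# an inconsistency arises when attested income falls more than 10
# percent below the income computed from available data sources, and
# documentation is accepted when the recalculated amount comes within
# 20 percent of the attested income. Illustrative only.

def income_inconsistency(attested, computed):
    """True if attested income is more than 10 percent below computed."""
    return attested < 0.90 * computed

def documentation_acceptable(attested, recalculated):
    """True if the recalculated income is within 20 percent of attested."""
    return abs(recalculated - attested) <= 0.20 * attested

# Example: attesting $40,000 against $50,000 of trusted data triggers an
# inconsistency, which documentation supporting income between $32,000
# and $48,000 could then resolve.
assert income_inconsistency(40_000, 50_000)
assert documentation_acceptable(40_000, 44_000)
```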
We conducted this performance audit from November 2015 to December 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Comments from the Department of Health and Human Services
Appendix III: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, the following staff members made key contributions to this report: Philip Reiff, Assistant Director; Colin Fallon; Suellen Foth; Kristen Juskiewicz; Maria McMullen; Madeline Messick; James Murphy; Ariel Vega; Erin McLaughlin Villas; and Elizabeth Wood.
Related GAO Products
State Health-Insurance Marketplaces: Three States Used Varied Data Sources for Eligibility and Had Few Indications of Potentially Improper Enrollments. GAO-17-694. Washington, D.C.: September 7, 2017.
Improper Payments: Improvements Needed in CMS and IRS Controls over Health Insurance Premium Tax Credit. GAO-17-467. Washington, D.C.: July 13, 2017.
Patient Protection and Affordable Care Act: Results of Enrollment Testing for the 2016 Special Enrollment Period. GAO-17-78. Washington, D.C.: November 17, 2016.
Patient Protection and Affordable Care Act: Results of Undercover Enrollment Testing for the Federal Marketplace and a Selected State Marketplace for the 2016 Coverage Year. GAO-16-784. Washington, D.C.: September 12, 2016.
Patient Protection and Affordable Care Act: Most Enrollees Reported Satisfaction with Their Health Plans, Although Some Concerns Exist. GAO-16-761. Washington, D.C.: September 12, 2016.
Patient Protection and Affordable Care Act: Final Results of Undercover Testing of the Federal Marketplace and Selected State Marketplaces for Coverage Year 2015. GAO-16-792. Washington, D.C.: September 9, 2016.
Patient Protection and Affordable Care Act: CMS Should Act to Strengthen Enrollment Controls and Manage Fraud Risk. GAO-16-29. Washington, D.C.: February 23, 2016.
Patient Protection and Affordable Care Act: Preliminary Results of Undercover Testing of the Federal Marketplace and Selected State Marketplaces for Coverage Year 2015. GAO-16-159T. Washington, D.C.: October 23, 2015.
Patient Protection and Affordable Care Act: IRS Needs to Strengthen Oversight of Tax Provisions for Individuals. GAO-15-540. Washington, D.C.: July 29, 2015.
Patient Protection and Affordable Care Act: Observations on 18 Undercover Tests of Enrollment Controls for Health-Care Coverage and Consumer Subsidies Provided under the Act. GAO-15-702T. Washington, D.C.: July 16, 2015.
Patient Protection and Affordable Care Act: Status of CMS Efforts to Establish Federally Facilitated Health Insurance Exchanges. GAO-13-601. Washington, D.C.: June 19, 2013. | Why GAO Did This Study
The Patient Protection and Affordable Care Act (PPACA) offers subsidized health-care coverage for qualifying applicants. States may operate their own health-care marketplaces or rely on the FFM, maintained by CMS. In PY 2015, 37 states relied on the FFM and over 8 million plan selections were made through the FFM. PPACA represents a significant fiscal commitment for the federal government, which pays subsidies to issuers on participants' behalf.
GAO was asked to examine enrollment into the FFM for PY 2015, the most current data available at the time of GAO's review. This report examines the extent to which indications of potentially improper or fraudulent enrollments existed in the FFM's application, enrollment, and eligibility-verification process for the 2015 enrollment period.
GAO reviewed relevant federal statutes, regulations, and policies for PY 2015 and interviewed CMS officials. GAO analyzed eligibility and enrollment data for about 8.04 million applicants in PY 2015 to identify applicants (1) who had a citizenship, status as a national, or lawful presence inconsistency; (2) whose information did not match SSA records; or (3) who were reportedly deceased. GAO also reviewed a nongeneralizable sample of 45 applicants to more fully understand verification processes.
What GAO Found
A small percentage—about 1 percent—of plan year (PY) 2015 enrollments were potentially improper or fraudulent. These applicants had unresolved inconsistencies related to citizenship, status as a national, lawful presence, or Social Security number (SSN), or received coverage while reportedly deceased, according to GAO's analysis of federally facilitated marketplace (FFM) eligibility and enrollment data. To verify applicant information, such as citizenship, status as a national, or lawful presence, and SSNs, the FFM uses data from the Department of Homeland Security (DHS) and Social Security Administration (SSA), among other sources. When an applicant's information does not match the available data sources, the FFM generates an inconsistency, and the FFM should take steps, such as requesting applicant documentation, to resolve it. Having an SSN is not a condition of eligibility; however, unresolved inconsistencies could indicate that an enrollment is potentially improper or fraudulent. The FFM did not actively resolve SSN inconsistencies for PY 2015, but the Centers for Medicare & Medicaid Services (CMS) has since completed system upgrades and established procedures for verifying SSNs with applicant-provided documentation, according to CMS officials.
GAO found that applicants or enrollees may have received or maintained coverage with an associated subsidy after their reported death because the FFM did not always identify individuals as deceased in a timely manner, such as prior to automatic reenrollment. CMS relied on third parties, such as family members, to report the death of an enrollee to the FFM, but did not always receive adequate notification to verify the death. According to CMS officials, CMS is exploring approaches to identify enrollees who may be deceased and should therefore be unenrolled from coverage. The FFM checks applicants' information against death information from SSA before initial enrollment but does not recheck death information prior to reenrollment. According to CMS officials, the FFM does not reverify information, other than income, when automatically reenrolling applicants to help encourage individuals to maintain enrollment in coverage from one year to the next. Without rechecking SSA death information prior to automatic reenrollment, the FFM remains at risk of providing subsidized coverage to deceased individuals with related costs to the federal government.
What GAO Recommends
GAO recommends that CMS assess and document the feasibility of approaches to identify the deaths of individuals prior to automatic reenrollment. HHS concurred with GAO's recommendation. |
Background
GSA spends hundreds of millions of dollars each year on needed repairs to the more than 1,600 federally owned buildings under its custody and control, which are occupied by a wide variety of federal tenant agencies. The agency's repairs and alterations (R&A) program provides this work to ensure that buildings will protect both the government's investment and the health and safety of their occupants, support the transfer of federal agencies from leased space, and be cost-effective.
GSA prioritizes capital and small R&A projects for selection differently. GSA gives priority to repairs to prevent deterioration and damage to buildings, their support systems, and operating equipment. GSA's central office uses criteria based on agency-wide strategic goals to rank and prioritize projects for funding. According to GSA's Fiscal Year 2019 Congressional Justification, the agency prioritizes R&A capital projects relative to a set of six criteria, each of which considers factors such as space consolidation, customer priorities, project urgency, facility conditions, historic status, and code compliance. For small R&A projects, GSA's central office reviews those with estimated costs exceeding $250,000 and develops an "approved" list of projects for its regions using criteria similar to those used to prioritize capital projects. GSA's small R&A projects primarily focus on building repairs and equipment and other replacement issues.
The Federal Buildings Fund (FBF), established by the Public Buildings Act Amendments of 1972 and administered by GSA, is the primary source of funds for all operating and capital costs associated with federal space—including repairs and alterations. GSA collects rent from tenant agencies, deposits it into FBF, and is appropriated obligational authority by Congress to fund real property acquisition, repairs and alterations, operation, maintenance, and disposal. As shown in figure 1, the amount of funding appropriated in obligational authority for R&A projects has steadily decreased since fiscal year 2014—and has been below the amount GSA requested each fiscal year. According to GSA officials, this decline in funding has contributed to the agency’s backlog of deferred maintenance. In fiscal year 2018, GSA requested more than $1.4 billion for R&A activities; $666 million in obligational authority was appropriated from the FBF to perform major and minor repairs and alterations. GSA has requested $909.7 million for R&A activities for fiscal year 2019.
GSA’s Public Buildings Service manages R&A projects through its central office in Washington, D.C., and 11 regional offices. GSA’s central office establishes programming, design, and construction standards and guidance, and provides technical backup, as needed. GSA officials in both the central and regional offices are involved in assessing the needs of federal facilities and guiding R&A project development and execution. Once a project is authorized and funded, GSA’s regional offices oversee the design and construction phases of the project, from the procurement of design through the management of construction until project closeout.
Further details of GSA’s R&A project design and construction delivery process are shown in figure 2.
GSA Collects Information on R&A Projects Electronically and Is Taking Steps to Improve the Collection of Small Project Information
Regional GSA Offices Collect Information on Capital and Small R&A Projects Electronically
In order to track projects, GSA has developed numerous systems that regional officials are required to use to collect information electronically on R&A projects. Each of these systems is used to collect different types of information, such as information on potential projects or funding details. These systems are used throughout the phases of GSA’s project design and construction delivery process, starting at the point that a potential project is first identified, and each system serves various management purposes, as noted in table 1.
While GSA uses all of these systems to collect information on R&A projects, ePM/ePMXpress is the system used to track a project’s progress because it supports and facilitates the tracking of project status and related performance reporting. GSA regional officials initially create records of capital projects in ePM early in the planning process—about 2 years before funding is requested from Congress—and for small projects in ePMXpress soon after they are authorized for initial funding. Once a project is entered into ePM/ePMXpress, GSA project team members (which include the project manager, other regional GSA staff, and may include external contractors) populate and update key types of project information at specific points in the project’s design and construction delivery process.
GSA’s central office first introduced ePM as a pilot project in 2009 and, to establish consistency in the information collected, issued minimum requirement guidelines for the project information to be input in the system in 2011. These guidelines require project team members to enter specific information on both capital and small projects into ePM. GSA introduced ePMXpress in late 2012, and it provides regional officials with a simplified interface to input and track small project information. This simplification is reflected in the types and amounts of information GSA requires project teams to collect in ePM compared to ePMXpress:
For capital projects in ePM, there are 42 modules such as project details, funding, contracts, and schedule data.
For small projects in ePMXpress, there are 7 modules—program information, project details, project team details, schedule, funding, project manager financials, and file manager information.
Within these modules, project team members are required to input specific baseline and actual milestone dates in ePM/ePMXpress for both small and capital projects, including when a project’s design is complete, when construction is authorized to begin, and when construction is substantially complete. Capital projects require 57 milestones, compared with up to 16 milestones for small projects.
See appendix II for additional information on the specific types of information that regional GSA officials are required to collect on their capital and small R&A projects.
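Conceptually, each required milestone pairs a baseline date set early in the project with an actual date recorded as work proceeds. The sketch below models such a record in simplified form; the class and field names are illustrative assumptions, not ePM's actual schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Milestone:
    name: str                      # e.g., "substantial completion"
    baseline: date                 # planned date set at the project's outset
    actual: Optional[date] = None  # recorded as the project progresses

    def slip_days(self) -> Optional[int]:
        """Days the milestone slipped past baseline (None if not reached)."""
        if self.actual is None:
            return None
        return (self.actual - self.baseline).days

# Hypothetical example: construction authorized 12 days late.
m = Milestone("construction authorized", date(2017, 5, 1), date(2017, 5, 13))
assert m.slip_days() == 12
```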
GSA guidelines also encourage project team members to collect and record additional R&A project information in ePM/ePMXpress—beyond what is required for capital and small projects—as a best practice. Officials from GSA’s central office said storing additional information in this system encourages collaboration across both project teams and regions, promotes a project management culture that results in more efficiency, and allows GSA to more efficiently prepare reports for its customers. Officials from three of the four regional offices we contacted provided examples of project team members in their region inputting more information on their R&A projects than required by GSA’s central office.
For example, officials in one region said they have required their project team members to collect additional information on their projects that allows the region to monitor staff workload, forecast the number of future small projects that may be needed, and ensure that officials have sufficient resources available to oversee their region's projects.
GSA Is Continuing Efforts to Improve Its Collection of Small R&A Project Information
According to GSA officials, they have seen improvements in the collection of capital R&A project information since first requiring regional offices to use ePM. Officials from GSA’s central office said that since ePM was first introduced in 2009, they have worked with regional officials to adjust the types of information that project team members must input to improve the completeness, timeliness, and usefulness of project information collected. As a result, GSA officials reported that project team members are now (1) consistently creating capital R&A projects in ePM and (2) regularly updating information on these projects in a complete and timely manner, throughout the agency’s project design and construction delivery process. Officials from GSA’s central office said they verify that the projects have been entered into ePM when regional officials request them for inclusion in GSA’s budget, a process that occurs during a project’s early planning stages. These officials added that once a capital project is funded, project team members are required to actively manage its details in ePM, providing regular updates through various reporting tools. Furthermore, they stated that, as few new capital projects are funded each year, each capital project is highly visible and subject to a degree of scrutiny that leads to the identification and correction of any errors in ePM. In addition, according to GSA officials, missing project information would be captured in regional performance reports. For these reasons, GSA officials said they do not develop reports on the creation of capital projects in ePM or the timeliness of updates made to these projects.
Project team members we interviewed said that having information on capital R&A projects in ePM is useful in a number of ways. For example, project team members from all four regions we interviewed said they find the “earned value” tool in ePM to be useful for project management. This tool uses schedule and budget information to forecast how a capital project is expected to progress and analyzes progress as new information is added. In addition, officials from two regions stated that ePM is a good tool for storing project documents for internal agency use, and officials from one of the regions said ePM offers a useful means to securely transmit capital project documents to both internal and external stakeholders.
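Earned value analysis is a standard project-management calculation; the sketch below shows the general method such a tool builds on, not ePM's specific implementation.

```python
def earned_value_indices(budget_at_completion, pct_work_complete,
                         pct_time_elapsed, actual_cost):
    """Return (schedule performance index, cost performance index).

    SPI < 1 suggests the project is behind schedule; CPI < 1 suggests it
    is over budget. Both compare "earned value", the budgeted cost of
    the work actually performed, to plan and to actual spending.
    """
    earned_value = budget_at_completion * pct_work_complete   # EV
    planned_value = budget_at_completion * pct_time_elapsed   # PV
    return earned_value / planned_value, earned_value / actual_cost

# Example: a $10 million project that is 40 percent complete at the
# halfway point, having spent $5 million, is behind schedule (SPI 0.8)
# and over budget (CPI 0.8).
spi, cpi = earned_value_indices(10_000_000, 0.40, 0.50, 5_000_000)
assert round(spi, 2) == 0.80 and round(cpi, 2) == 0.80
```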
GSA also reported improvements in the completeness and timeliness of updates to small projects’ information in ePMXpress in recent years. GSA conducts monthly checks to assess the number of small projects in ePMXpress with information that is either missing or out of date and issues reports to its regions summarizing the results of these checks. In May 2015, GSA issued an internal memorandum that reiterated its existing requirement that all small R&A projects be created in ePMXpress and updated in a complete and timely manner. In October 2016, GSA’s reports showed that, of all small R&A projects in ePMXpress, on average, 5 percent had schedule data errors and 7 percent had budget data errors. These rates varied across GSA’s regional offices, from 0 to 11 percent for schedule errors and 1 to 16 percent for budget errors. To reduce the rate of budget data errors, in 2017 GSA began using some contract award information available in EASi or FMIS to assess small projects’ performance, instead of relying on information input in ePMXpress. GSA’s central office officials said that they found the information in these systems to be more up to date. After GSA implemented this action, its September 2017 report showed that less than one percent of small R&A projects had errors in their schedule or budget data. Specifically, nine of GSA’s 11 regions had no small R&A projects with schedule errors, and 10 regions had no budget errors.
GSA has reported that the rate at which project team members initially create all of their small R&A projects in ePMXpress has also improved in recent years. Each month, officials from GSA’s central office take steps to verify that funded projects have been created in ePMXpress by manually reconciling information between ePMXpress and IRIS. GSA’s stated goal is to have 100 percent of small projects created in ePMXpress, and its guidelines require project team members to create all small projects in ePMXpress within 30 days of being approved for funding. We found that recent GSA reports on this reconciliation showed that the overall percentage of small projects having been created in the system has improved. At the beginning of fiscal years 2016, 2017, and 2018, nationwide compliance trended from 81 percent to 95 percent to 92 percent, respectively. In addition, the lowest percentage of small projects created in ePMXpress in any one individual region at the start of fiscal year 2016—61 percent—had improved to 88 percent by the outset of fiscal year 2017 and was 85 percent at the beginning of fiscal year 2018. At that time, the percentage of small projects created in ePMXpress ranged, by region, between 85 and 100 percent. GSA officials said they expect to find some small projects to be missing in ePMXpress because, in some cases, not enough time will have elapsed between the date of funding and the date of the reconciliation. GSA officials explained that they are continuing to take steps to emphasize the importance of having complete and timely information on all small R&A projects in ePMXpress to its regional offices. For example, to support the expectation that all small projects are created in ePMXpress, one official from GSA’s central office said monthly meetings are held with regional officials to discuss expectations for the completeness and quality of the project information.
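A simplified model of this monthly reconciliation is sketched below, with the IRIS and ePMXpress interfaces reduced to plain sets of project identifiers; it is an illustration, not GSA's actual reconciliation process.

```python
def reconcile(funded_ids: set, created_ids: set):
    """Return (missing project ids, percent of funded projects created)."""
    missing = funded_ids - created_ids
    pct_created = 100 * (len(funded_ids) - len(missing)) / len(funded_ids)
    return missing, pct_created

# Example: 200 funded projects of which 16 are absent from ePMXpress
# yields 92 percent compliance, comparable to the nationwide figure GSA
# reported at the start of fiscal year 2018.
funded = {f"P{i:04d}" for i in range(200)}
created = funded - {f"P{i:04d}" for i in range(16)}
missing, pct = reconcile(funded, created)
assert len(missing) == 16 and pct == 92.0
```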
Regional officials, including project team members, told us that ePMXpress is not useful to their work on small R&A projects, which has limited the extent to which they use the tool and can affect the completeness and timeliness of small project information. Specifically, officials from one region said that they view ePMXpress solely as a tracking tool for GSA's central office, not as a project management tool. In addition, some regional officials said they do not find ePMXpress to be effective as a project management tool because it does not allow them to collect information on useful project details, such as why schedules or cost estimates change during a project or why certain events happened. Project team members from three regions said that they continue to maintain offline "cuff records"—which allow them to customize their notes on why things happened during a project—because they are easier to access and update. Similarly, officials from all four regions we interviewed noted that the process of manually creating and updating all of their small projects in ePMXpress—of which there are hundreds each year—is time consuming. Furthermore, small R&A projects can often be started and completed in a short period of time, and can be completed before a project team is required to create a record in ePMXpress (within 30 days of a project's approval). For this reason, officials from one region said that it is not useful to use ePMXpress for these projects. Officials in another region also reported that one of the functions that makes ePM useful for managing capital projects—that it can securely transmit documents outside of GSA—is not useful for small projects because they do not require as much interaction with external parties.
GSA has begun considering replacement systems for ePM/ePMXpress that GSA officials suggested could include the automated creation of projects upon project approval. As of March 2018, GSA had developed a statement of work to begin pursuing a replacement for ePM/ePMXpress. According to officials from the Office of GSA’s Chief Information Officer, the overall goals of a replacement include ensuring that it is easier for project team members to use than the current system. However, the capabilities of any such system are not currently known, nor are the ways in which a different system would affect the challenges reported by regional officials.
In the meantime, GSA is continuing to emphasize to its regional offices the importance of using ePMXpress to create and capture information for all small R&A projects, as the agency is using the information to support both ongoing and new efforts. For example, creating and updating project information in a timely manner improves GSA's ability to assess R&A projects' performance at the individual, regional, and national levels, as discussed later in this report. In fiscal year 2018, GSA plans to use project information input in ePM/ePMXpress to support its efforts to improve communication with tenant agencies, and GSA guidelines state it will be important that project team members use ePMXpress throughout all project phases for their small projects and ensure that the required information is up to date. In addition, the overall amount of information that project team members are required to input will increase moving forward because GSA is now requiring staff to create additional small projects in ePMXpress in a shorter period of time. In March 2018, GSA both reduced the time that project teams have to create small projects in ePM/ePMXpress from 30 to 15 days and began requiring that additional, non-R&A small projects be created in the system. GSA has estimated this will result in approximately 1,100 additional projects being created in ePMXpress each year.
GSA Uses Schedule and Budget Measures to Assess the Performance of R&A Projects and Is Taking Steps to Improve Reporting
GSA Uses Schedule- and Budget-Related Performance Measures to Assess R&A Projects
GSA’s central office assesses the performance of capital and small R&A projects across its regional offices by focusing primarily on schedule and budget-related measures. According to internal GSA guidelines on performance measures, measuring projects’ schedule and budget performance allows GSA to continuously improve the project delivery and accountability of its work in order to demonstrate good stewardship of its stakeholders’ limited funding.
GSA assesses the performance of R&A projects using a few key measures. First, GSA uses a “timely award” measure. According to internal GSA guidelines on performance measures, the “timely award” measure reflects the effectiveness of early planning by assessing the timeliness of the obligation of funds for construction contracts following a project’s initial authorization. This measure is based on schedule information that project team members input in ePM/ePMXpress and, as mentioned earlier, budget information from the FMIS and EASi systems to compare planned obligations, projected contract award amounts, and planned contract award dates to actual results. Specifically, GSA officials stated that a project’s performance relative to the timely award measure is determined based on the percentage of awards that are made within set timeframes. This measure varies slightly between capital and small projects; for example, a capital project is viewed as successful if 90 percent of its planned obligation dollars are awarded within 30 days of its planned “baseline” award dates set at a project’s outset, and partially successful if this awarding occurs within 45 days. Conversely, a small project is deemed successful if 85 percent of its planned obligation dollars are awarded within 30 days of its baseline award dates or within 10 percent of its estimated construction costs. If 80 percent of these funds are awarded within 45 days of its baseline award dates or within 20 percent of estimated construction costs, a small project is considered partially successful with respect to this measure.
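The threshold logic of the small-project version of this measure can be expressed as a short calculation. The Python sketch below is an illustration under stated assumptions, not GSA’s implementation: the record layout and field names are hypothetical, and treating the day and cost thresholds as alternatives reflects our reading of the “or” in the criteria above; only the percentage thresholds come from the text.

```python
from dataclasses import dataclass

@dataclass
class SmallProjectAward:
    planned_obligation: float  # planned obligation dollars tied to this award
    days_late: int             # actual award date minus baseline award date, in days
    cost_variance_pct: float   # award amount variance vs. estimated construction cost, in percent

def timely_award_rating(awards: list[SmallProjectAward]) -> str:
    """Rate a small-project portfolio against the timely award measure.

    Successful: at least 85 percent of planned obligation dollars awarded
    within 30 days of baseline dates or within 10 percent of estimated
    construction costs; partially successful: at least 80 percent within
    45 days or within 20 percent of costs.
    """
    total = sum(a.planned_obligation for a in awards)
    within_30 = sum(a.planned_obligation for a in awards
                    if a.days_late <= 30 or abs(a.cost_variance_pct) <= 10)
    within_45 = sum(a.planned_obligation for a in awards
                    if a.days_late <= 45 or abs(a.cost_variance_pct) <= 20)
    if total and within_30 / total >= 0.85:
        return "successful"
    if total and within_45 / total >= 0.80:
        return "partially successful"
    return "not successful"

# Example: one award on time, one 40 days late with a 15 percent cost variance.
print(timely_award_rating([
    SmallProjectAward(1_000_000, days_late=5, cost_variance_pct=2.0),
    SmallProjectAward(500_000, days_late=40, cost_variance_pct=15.0),
]))  # prints "partially successful"
```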
GSA also has two “project delivery” measures. Once construction begins, GSA uses information from ePM/ePMXpress, EASi, and FMIS to assess whether projects are delivered “on-schedule” and “on-budget” by comparing a project’s (1) estimated baseline schedule and budget to its (2) actual schedule and budget. As shown in figure 3, GSA’s project-delivery measures focus on the time between the start of construction and substantial completion, which is the date on which a project is suitable for occupancy. GSA’s project-delivery targets are for 85 percent of R&A projects to be completed within 10 percent of their baseline schedules and for 85 percent to have total costs within 10 percent of their baseline budgets. GSA reported that it uses these measures to understand how capital R&A projects contribute to its agency-wide strategic objective to establish GSA as a more effective provider of real estate services for all agencies. According to GSA officials, tracking the rate at which capital projects—including capital R&A projects—are completed on time and within budget helps regional officials manage project expectations with their customers.
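A minimal sketch of this comparison, assuming a simple hypothetical record layout: a project counts as on-schedule (or on-budget) when its actual duration (or total cost) falls within 10 percent of its baseline, and a portfolio meets the target when at least 85 percent of its projects qualify.

```python
def within_pct(actual: float, baseline: float, tolerance_pct: float = 10.0) -> bool:
    """True if actual is within tolerance_pct percent of baseline."""
    return abs(actual - baseline) <= baseline * tolerance_pct / 100.0

def meets_delivery_target(projects: list[dict], actual_key: str,
                          baseline_key: str, target: float = 0.85) -> bool:
    """Check whether at least `target` share of projects fall within 10 percent of baseline."""
    hits = sum(within_pct(p[actual_key], p[baseline_key]) for p in projects)
    return hits / len(projects) >= target

projects = [
    {"actual_days": 200, "baseline_days": 190, "actual_cost": 1.05e6, "baseline_cost": 1.0e6},
    {"actual_days": 400, "baseline_days": 300, "actual_cost": 0.90e6, "baseline_cost": 1.0e6},
]
print("on-schedule target met:", meets_delivery_target(projects, "actual_days", "baseline_days"))
print("on-budget target met:", meets_delivery_target(projects, "actual_cost", "baseline_cost"))
```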
GSA reported that most of its R&A projects met the agency’s overall timely-award and project-delivery performance targets in fiscal year 2017. For the timely award measure, GSA reported that in fiscal year 2017, 93 percent of capital projects had their planned obligation dollars awarded within 30 days of their baseline award dates and that 87 percent of small projects had awards made within 45 days of their baseline dates. For the project delivery measures, GSA reported that 99 percent of all capital projects were completed on-schedule and 99 percent were on-budget in fiscal year 2017. In that same year, GSA reported that 88 percent of small R&A projects were on-schedule and 86 percent were on-budget. GSA arrived at these results by rolling up information on individual projects’ performance. Officials from GSA’s central office said that capital projects are typically completed on-schedule and on-budget at a higher rate than small projects because capital projects have a more comprehensive planning process and are often reviewed by third parties, a process that tends to result in more accurate baseline estimates. These officials also said that, while GSA has assessed the performance of its capital projects for 14 years and its regional officials have grown familiar with measurement of these projects, the agency began assessing small projects’ performance only in the past 3 years, and regional officials are still growing accustomed to measurement of these lower-cost projects.
GSA officials are able to adjust the baseline schedule milestones and cost estimates against which the agency assesses performance when circumstances requiring additional time or funding arise during a project’s construction phase. According to an internal GSA document detailing requirements related to performance measures and reporting for capital projects, it is more difficult to change baseline milestones for a capital project than to adjust the dates for a small project because once a capital project’s baselines are input in ePM, they can only be altered through an adjudication process involving GSA’s central office. As described by officials in one GSA region, this process focuses on determining whether the reasons provided to support a request are strong enough to justify a baseline change. If such a change is approved by the central office, actual performance will then be compared against adjusted baseline milestone dates or cost estimates. GSA officials stated that, although there is no such adjudication process for small projects, any changes to schedule or budget baselines must be approved by regional management or, in some cases, officials from the central office depending on the context of the change.
The brief nature of some small R&A projects may affect the entry of their information and the interpretation of the reported performance. For example, we found that all eight of the small projects we reviewed had either missing baseline dates or baseline and actual milestone dates that matched exactly in the system. When asked why this may occur, officials from one region explained that small R&A projects with short durations can sometimes be completed before a project team is required to create the project’s record in ePMXpress. This can result in either missing data or baseline and completion dates simply being entered in a single session. Officials from GSA’s central office said that they rely on regional officials to input accurate information throughout the course of a project, as baselines are set and actual milestones are either met or exceeded.
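The data-quality pattern described above lends itself to a simple automated screen. The Python sketch below flags records whose baseline dates are missing or whose baseline and actual dates match exactly; the record layout is an assumption for illustration, not ePMXpress’s actual schema.

```python
def flag_suspect_records(records: list[dict]) -> list[tuple[str, str]]:
    """Flag small-project records whose milestone data suggests after-the-fact entry."""
    flags = []
    for r in records:
        if r.get("baseline_date") is None:
            flags.append((r["project_id"], "missing baseline date"))
        elif r["baseline_date"] == r.get("actual_date"):
            # Consistent with baseline and actual dates entered in a single session.
            flags.append((r["project_id"], "baseline equals actual date"))
    return flags

records = [
    {"project_id": "R1-0001", "baseline_date": "2016-03-01", "actual_date": "2016-03-01"},
    {"project_id": "R1-0002", "baseline_date": None, "actual_date": "2016-05-10"},
    {"project_id": "R1-0003", "baseline_date": "2016-04-01", "actual_date": "2016-04-15"},
]
for project_id, reason in flag_suspect_records(records):
    print(project_id, "->", reason)
```

A screen like this would only surface candidates for follow-up; as the officials noted, matching dates can be legitimate when a short project is completed before its record is due.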
GSA Reports on R&A Projects’ Performance at Regional and National Levels and Is Introducing New Reporting Intended to Create a Consistent Understanding of Performance
GSA’s central office produces regional and national reports and provides them to the agency’s regional offices to facilitate internal discussion on R&A projects’ performance. Specifically, GSA shares the reports containing regional and overall results of its timely award measure, project delivery measures, and the previously discussed reconciliation measure to encourage conversations among senior GSA leadership and regional management. For example, one report compares projects’ actual progress with baseline milestones using the project delivery measures to assess the accuracy of teams’ planning. GSA also shares R&A project delivery measure results with the Office of Management and Budget when compiling its annual performance reports.
Regional officials varied in the extent to which they viewed R&A performance reports as useful, and some regions have developed their own approaches to understanding projects’ performance. For example, officials in all four GSA regions we interviewed said that some reports distributed by the central office are not specific to their information needs. Officials from one of these regions described one report as having little value because it is difficult to understand what message the report is intended to convey. Officials from another region said they do not find a particular report to be useful because—in addition to the timely award measure that GSA emphasizes in working to understand R&A project performance—it also includes less prominent milestones in identifying whether a project is on schedule. These officials said that while their region focuses on significant milestones like a project’s contract award date (“timely award” measure) to assess progress, the report often flags projects as being behind schedule based on less critical interim milestones that can be done concurrently with other tasks, such as submitting a document for legal review. When regional officials have not found the reports shared by GSA’s central office to be useful, some said they rely on varying sources of information to understand performance. For example, officials from one region we interviewed said they use raw data, made available by the central office, to create reports that they feel offer a more complete picture of performance in their region and highlight projects that may be at risk. Similarly, officials from another region said they create custom consolidated reports to discuss projects and obtain an overall impression of the information available, track and assign workloads, and assess any relevant trends emerging across projects.
Officials from GSA’s central office said they are aware that some regions have not found R&A performance reports to be useful. These officials (1) acknowledged that ePM/ePMXpress offers less information and fewer features than some regions have told them they need to manage their projects and (2) said that updating these reports only once or twice per month is not frequent enough for some regions. The officials added that some regions’ opting to rely on other sources of information has contributed to an inconsistent understanding of R&A projects’ performance across the agency. GSA has been conducting outreach to its regional offices to better understand what information regions find useful for understanding their projects’ performance. GSA’s plan for this outreach states that one of its aims is to ensure that regions clearly understand the purpose, outcome, and value of new reports being developed. According to this plan, GSA intends to assess the effectiveness of its outreach by gathering feedback from regional officials and reviewing analytics on usage of the reports developed.
As outreach to regions continues, GSA has begun to introduce what officials describe as “self-service dashboard” reports to present a consolidated view of R&A project information, with the intent of promoting a consistent understanding of performance across the agency. According to GSA’s outreach plan for one of the forthcoming dashboards, GSA intends for these new reports to improve the transparency and timeliness of information on R&A projects, increase accountability, help identify information gaps and redundancies, and expand knowledge sharing across the agency. Even with these dashboards, GSA officials acknowledged that some regional offices may continue to rely on other sources of information but added that the dashboards’ near real-time information and filtering capabilities will allow regional officials to do more with the information that their project teams input on their projects than in the past.
Specifically, GSA recently introduced a Capital Program Information Dashboard, which is an interactive, online presentation of information on all capital projects—including R&A projects—that is updated as often as daily, in some cases, using information from ePM, IRIS, FMIS, and other sources. The overall Capital Program Information Dashboard consists of a series of dashboards that present project information in a number of ways. For example, the National Summary Dashboard comprises three sections:
Program Measures Performance: This section provides a national and regional view of schedule and budget performance for capital projects, using the 85 percent fiscal year 2018 target as a reference line to show how each region is performing.
Program Award Performance: This section provides a national and regional view of capital projects’ performance with regard to GSA’s timely award measure, displaying comparisons of actual contract award dates and original baseline dates.
Program Summary: This section provides a national and regional view of capital projects, both by dollars appropriated and by the number of projects, for categories including active projects, projects declared substantially complete within the current fiscal year, and overall combined totals. This section displays these values at a regional level in chart form and by state in an interactive map.
At the same time that GSA introduced regional and national-focused dashboard reports on capital projects, it also introduced (1) a Project Details Dashboard for capital projects that provides project-level information by region and state and (2) a Project Award Performance Dashboard that provides capital project-level information for planned awards; this dashboard can be filtered by fiscal year, program, vendor, project name, and contract type or number. Both of these dashboards have multiple sections; for example, the Project Award Performance Dashboard includes sections that focus on performance relative to the project delivery and timely award measures, highlight capital projects that may require adjustments to their schedule or budget baselines, and detail reasons for requested changes to baselines.
In April 2018, GSA also launched a draft version of a dashboard for small projects that it expects to give regional officials direct access to up-to-date information on their small R&A projects. Similar to the Capital Program Information Dashboard, the Small Project Dashboard will integrate information from systems including ePMXpress, IRIS, EASi, and FMIS. GSA’s plan for implementation states that this dashboard will present regional officials with a consolidated view of program and project information that includes status updates on timely-award and project-delivery measures. GSA expects that this dashboard, which is to be finalized before the end of fiscal year 2018, will offer “near real-time access” to small project information and reports to facilitate program management and data-driven decision-making. Finally, GSA officials said the agency is also planning to introduce a dashboard that will provide its customer agencies with up-to-date information in 2018. GSA expects this report to remove the delay between the inputting of project information and its accessibility to all parties involved, making the information more transparent both internally and externally.
GSA’s ability to assess and understand the performance of R&A projects will continue to rely on project team members’ entry of information as it finalizes its set of dashboard reports. GSA documentation on the introduction of the Small Project Dashboard states that because ePM/ePMXpress will continue to serve as a key source of schedule information, regional officials’ regular input of R&A project information will be needed to make the dashboards meaningful. This documentation also suggests that regional officials consider entering additional project information, beyond what is required, so it will be available to them in the dashboards. Officials from GSA’s central office acknowledge that their ongoing outreach to the regional offices emphasizes the importance of complete and timely information—as discussed earlier—to the agency’s ability to comprehensively understand R&A projects’ performance.
Agency Comments
We provided a draft of this report to GSA for comment. An official in GSA’s Audit Management Division told us in an email that the agency had no comments on the draft report.
We will send copies of this report to appropriate congressional committees and the Administrator of the General Services Administration. In addition, we will make copies available to others upon request, and the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at 202-512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
Appendix I: Objectives, Scope, and Methodology
Our objectives were to discuss how GSA (1) collects information on repair and alteration (R&A) projects and (2) assesses the performance of R&A projects. The scope of the work focused on R&A projects from two R&A program funding streams: “capital” R&A projects (those with costs greater than $3.095 million) and “small” R&A projects (those with costs less than or equal to $3.095 million and greater than $25,000); we did not include projects related to new building construction or reimbursable work authorization projects, which are performed by GSA but funded by other federal agencies to improve or renovate federal facilities. We collected information on systems supporting GSA’s management of its R&A projects, including its Electronic Project Management (ePM/ePMXpress) system, Pegasys, Financial Management Information System (FMIS), Inventory Reporting Information System (IRIS), and Enterprise Acquisition Solution integrated (EASi) system. Although we discuss some of the accounting systems involved with R&A projects, this review did not involve a financial audit of the R&A program. We also reviewed our prior work and reports from GSA’s Office of Inspector General to obtain background information and identify any existing audit findings on the R&A program that might be relevant for our objectives.
To determine how GSA collects information on individual R&A projects, we reviewed documentation related to the R&A program, both provided to us by GSA and found on the agency’s website. In addition, we reviewed GSA reports of the rates at which regional officials have created and updated information on their small projects in a timely and complete manner in ePMXpress. By reviewing reports generated by GSA’s central office—which are based on (1) its manual reconciliation of information between ePMXpress and IRIS and (2) GSA-identified errors in on-budget and on-schedule data in ePMXpress, EASi, and FMIS—we were able to assess the variance between regions in the extent to which project team members created their small R&A projects in ePMXpress—and subsequently updated this information as projects moved forward—between 2015 and 2017. We reviewed the information in these reports to identify potential trends in regions’ complete and timely entry of R&A project information and interviewed GSA officials about the sources of information used to generate the reports and steps officials take to ensure its accuracy. However, we did not independently verify the accuracy of the data contained in these reports.
We also selected 12 R&A projects using GSA’s central office data from October 2013 through August 2017 to understand how information is input into the systems by regional officials, how it is used by officials from GSA’s central office and selected regional offices, and whether there are any issues affecting the information’s completeness or timeliness. We selected the 2013 to 2017 time frame because it begins after GSA officials said that they started tracking small projects in the system used to collect information on project status and because it represents the most recent data available at the time of our selection. For our project selection, we obtained data from GSA’s central office for all R&A projects that existed but were not closed out as of the beginning of fiscal year 2014 or that had been added since then. We reviewed documentation on the collection of the data, analyzed the data for missing information, and found the data to be sufficiently reliable for the purpose of selecting projects to understand how R&A project information is input by regional officials and how it is used across GSA.
To arrive at these 12 projects, we selected projects from regions that had one or more capital R&A projects categorized as having been substantially completed between October 2013 and August 2017, as most regions undertake few capital projects in a given year. We initially identified seven GSA regions that had substantially completed at least one capital project during this time frame and narrowed this number to four regions—GSA regions 5, 6, 7, and 9—which had varying degrees of performance based on our initial review of GSA reports containing schedule and budget metrics. Specifically, to ensure that we were not selecting four comparable regions, we selected two regions that surpassed GSA performance targets and two regions that did not. In addition, we gave preference to regions in proximity to our field offices’ locations to minimize costs associated with site visits. Within each of the four selected regions, we identified the sole capital R&A project that was substantially completed between October 2013 and August 2017, for a total of four capital projects. We then selected two small projects—those with the highest and lowest “Estimated Cost of Construction at Award” that had been active between October 2013 and August 2017—for a total of eight small projects (see table 2 for a list of selected projects).
We conducted interviews with regional officials from these four regions—visiting two of the four regions that were located near our field offices. During those interviews, we discussed data entry processes and posed questions specific to the region’s selected projects and to the R&A program more broadly. During interviews with both GSA’s central and regional offices, we asked officials to explain how the IRIS, ePM/ePMXpress, EASi, and any other systems are used throughout the planning and execution of R&A projects. Specifically, we reviewed and discussed processes related to project information collection in general with regional officials and specific project detail, budget, and schedule information with the project team members who input information on the selected capital and small projects into these systems; for example, we raised questions about instances in which baseline and actual dates matched for some projects. Information on the projects we selected is not generalizable to all R&A projects, and the views of the regional officials interviewed are not generalizable to all of GSA’s regional offices.
To determine how GSA assesses the performance of its R&A projects, we requested and reviewed documentation from GSA on the extent to which the agency evaluates the performance of its R&A projects and inquired about the project information systems used to produce related performance reports. In addition to the documents provided by GSA, we used publicly available annual reports and budget justifications detailing GSA’s overall goals and mission and the ways in which GSA has stated that the R&A program supports these aims. After an initial review of documents provided by GSA, we identified and requested specific internal guidance and guidelines, information on the criteria used to select individual R&A projects for funding, and reports related to both capital and small projects’ performance. We used information contained in some of these reports to identify the performance metrics GSA has established for assessing R&A projects’ performance and to assess overall regional performance relative to these metrics, as reported by GSA. We did not independently verify the accuracy of the on-schedule and on-budget figures reported by GSA, a methodological consideration that was beyond the scope of this review; our focus was on how GSA assesses the performance of R&A projects—not on the results of its assessments. We also interviewed officials from GSA’s central office and the four regional offices to discuss the agency’s assessment of R&A projects’ performance and the performance reports provided to regional officials. Furthermore, we reviewed information about GSA’s plans to introduce new “dashboard” reports and outreach that officials from GSA’s central office had conducted to understand regional officials’ reporting needs. Finally, we interviewed these central-office officials and officials from the selected regional offices described above to discuss the use and usefulness of the performance reports.
We conducted this performance audit from May 2017 to July 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: General Services Administration’s Capital and Small Project Data-Entry Requirements for ePM and ePMXpress
The tables below show the details and differences in the General Services Administration’s (GSA) data entry requirements for capital projects in ePM and small projects in ePMXpress. Although some of the ePM modules may not be applicable for every project, there are mandatory fields and functions in each that drive GSA’s metrics, measures, and standardized reports. Table 3 identifies the elements required, by GSA’s fiscal year 2018 measures, for capital projects and indicates whether each is used in a key performance indicator.
Table 4 identifies the small-project data entry requirements for ePMXpress, as required for fiscal year 2018 measures, and whether they are used in a key performance indicator.
Table 5 contains a list of standard project milestones that must be maintained by GSA project managers in the ePM and ePMXpress schedule modules, as identified in table 3 for capital projects and table 4 for small projects.
Appendix III: GAO Contact and Staff Acknowledgments
Contact
Acknowledgments
In addition to the contact named above, Nancy Lueke, Assistant Director; Chad Williams, Analyst-in-Charge; Terence Lam; Les Locke; Cynthia Nelson; Josh Ormond; Amy Rosewarne; Kelly Rubin; James Sweetman, Jr.; and Crystal Wesco made key contributions to this report.

Why GAO Did This Study
Each year, GSA spends hundreds of millions of dollars on R&A projects to address the repair, renovation, or modernization needs of the more than 1,600 federally owned buildings under the agency’s custody and control—the average building is 47 years old. In fiscal year 2018, Congress appropriated $666 million in obligational authority from the Federal Buildings Fund for GSA’s R&A program. Collecting information is fundamental to monitoring progress and assessing projects’ performance.
GAO was asked to review issues about GSA's collection of information needed to manage its R&A projects. This report examines how GSA (1) collects information on individual R&A projects and (2) assesses the performance of R&A projects.
GAO reviewed documentation on the systems that GSA uses to support its management of the R&A program, as well as internal GSA reports on regional offices' use of the system that tracks projects' status. GAO also interviewed officials from GSA's central office and four regional offices to understand the types of information collected on R&A projects and how the information is input in GSA's systems. To identify the regional officials to be interviewed, GAO selected a non-generalizable sample of four capital R&A projects and eight small R&A projects, active between October 2013 and August 2017, based on a preliminary analysis of GSA data.
GSA had no comments on the report.
What GAO Found
The General Services Administration (GSA) requires its regional offices to collect information on their repair and alteration (R&A) projects electronically and is working to improve the completeness and timeliness of this collection. Since 2011, GSA has required its regional offices to input and update information on both capital projects (those costing more than $3.095 million as of fiscal year 2018) and small projects (those costing less than $3.095 million). Officials from the four regions GAO interviewed said they find this system to be useful for forecasting how a capital project will progress. Regarding small projects' information, GSA has taken steps to improve regional offices' collection by, for example, conducting monthly checks to ensure that all small projects have been created in the system, assessing the number of projects that have missing information, and introducing a simplified way that GSA's regions can enter information in the system. GSA officials reported that, moving forward, they are continuing to emphasize the importance of collecting complete and timely information, which is needed to assess the performance of all R&A projects.
GSA uses schedule- and budget-focused measures to assess the individual, the regional, and the national performance of capital and small R&A projects and is working to create a consistent understanding of performance. GSA's measures rely on information input by regional officials. For example, during the construction phase, GSA uses two “project delivery” measures, which compare a project's estimated schedule and budget with actual outcomes. GSA produces regional and national reports detailing projects' performance relative to these measures. However, not all regional officials GAO spoke with view these reports as useful because they are not specific to the officials' information needs. As a result, some regions have created their own reports, contributing to an inconsistent understanding of R&A projects' performance across the agency. GSA has conducted outreach to its regions and has begun to introduce new “dashboard” reports that present a consolidated view of R&A projects' information. Moving forward, GSA's ability to assess R&A projects' performance will continue to rely on regional officials' complete and timely input of information for both capital and small projects.
Background
According to State, the OAS is the primary inter-American political forum through which the United States engages with other countries in the Western Hemisphere to promote democracy, human rights, security, and development. While PAHO, IICA, and PAIGH are independent organizations, the Charter of the Organization of American States directs them to take into account the recommendations of the OAS General Assembly and Councils. PAHO, a specialized international health agency for the Americas, works with member countries throughout the region to improve and protect people’s health and serves as the Regional Office for the Americas of the World Health Organization, the United Nations agency on health. IICA, among other things, supports its member states’ efforts to achieve agricultural development and rural well-being through consultation and the administration of agricultural projects through agreements with the OAS and other entities. PAIGH specializes in regional cartography, geography, history, and geophysics and has facilitated the settlement of regional border disputes.
U.S. Assessed Contributions to Inter-American Organizations and the Reform Act
Member states collectively finance these organizations by providing assessed contributions in accordance with the organizations’ regulations. The member states’ assessed contributions are intended to finance the organizations’ regular budgets, which generally cover the organizations’ day-to-day operating expenses, such as facilities and salaries. The budgets are based on each organization’s total approved quota assessment and other projected income. Member states of each organization meet to review and approve the organizations’ budgets. The exact dollar amount each member state is responsible for providing corresponds to its assessed percentage of the total approved quota assessment for any given year.
In October 2013, the United States enacted the Organization of American States Revitalization and Reform Act of 2013 (Reform Act). The Reform Act directed the Secretary of State to, among other things, submit “a multiyear strategy that…identifies a path toward the adoption of necessary reforms that would lead to an assessed fee structure in which no member state would pay more than 50 percent of the OAS’s assessed yearly fees.” According to the Reform Act, it is the sense of Congress that, among other things, it is in the interest of the United States, OAS member states, and a modernized OAS that the OAS move toward an assessed quota structure that (1) assures the financial sustainability of the organization and (2) establishes, by October 2018, that no member state pays more than 50 percent of the organization’s assessed fees.
The United States Contributed Over Half of Total Assessed Contributions to the Four Organizations, but OAS Member States Have Voted to Consider a Reduction of the U.S. Share
In June 2017, we reported that the United States' assessed contributions constituted over 57 percent of total assessed contributions by member states to four inter-American organizations from 2014 through 2016 (see table 1). During this time, the annual U.S. percentages (or quotas) of these organizations' assessed contributions remained about the same. Therefore, the actual amounts assessed to the United States generally remained the same.
All four organizations apply a similar assessed quota structure that uses the relative size of member states’ economies, among other things, to help determine each member state’s assessed contributions. The OAS determines the assessed quota for each member state based on the United Nations’ methodology, as adapted for the OAS, using criteria that include gross national income, debt burden, and per capita income. The other three organizations use OAS’s system for determining member states’ quotas to calculate their member states’ assessed contributions. Thus, any change in the OAS’s assessed quota structure should be reflected at PAHO, IICA, and PAIGH, according to their respective processes regarding the determination of assessed contributions.
The U.S. share of assessed contributions may be reduced in the future. The Reform Act required State to submit a strategy identifying, among other things, a path toward the adoption of necessary reforms to the OAS's assessed quota structure that would lead to a structure in which no member state would pay more than 50 percent of OAS assessed contributions. In response to that requirement, State officials told us that the department submitted to Congress a strategy that included working with OAS member states toward ensuring that the OAS would not assess any single member state a quota of more than 50 percent of all OAS assessed contributions. State officials informed us that they worked with other OAS member states, including Canada and Mexico, to explore assessed quota reform options. For example, State officials consulted with their counterparts from Mexico to review the OAS's assessed quota structure and to consult on alternatives that would adjust all member states' quotas so that no member state's quota exceeds 50 percent of the OAS's assessed contributions. Subsequent to our June 2017 report, at the OAS General Assembly in June 2017, OAS member states voted to draft a proposal to modify the quota structure to potentially reduce the maximum assessed quota to below 50 percent. According to State officials, the modification to the quota structure, if approved, would be gradual and would not be implemented until 2019.
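Although the methodology that member states will propose is not yet known, the basic arithmetic of capping the largest quota can be sketched. The Python below caps the largest member's share at 49 percent (keeping the maximum below 50 percent) and redistributes the excess proportionally among the remaining members; both the redistribution rule and the share figures are illustrative assumptions, not actual OAS figures or the methodology under discussion.

```python
def cap_quota(quotas: dict[str, float], ceiling: float = 49.0) -> dict[str, float]:
    """Cap the largest quota at `ceiling` percent and redistribute the excess
    proportionally among the other members. Quotas are percentages summing to 100."""
    capped = dict(quotas)
    largest = max(capped, key=capped.get)
    excess = capped[largest] - ceiling
    if excess <= 0:
        return capped  # nothing to redistribute
    capped[largest] = ceiling
    others_total = sum(v for k, v in capped.items() if k != largest)
    for k in capped:
        if k != largest:
            capped[k] += excess * capped[k] / others_total
    return capped

# Illustrative shares only.
print(cap_quota({"Member A": 57.0, "Member B": 10.0, "Member C": 8.0, "All others": 25.0}))
```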
U.S. Agencies Provided Voluntary Contributions to OAS, PAHO, and IICA through Assistance Agreements but Could Enhance Their Monitoring of These Agreements
State, HHS, USAID, and USDA fund activities at OAS, PAHO, and IICA in the form of assistance agreements. In our December 2017 report, we reviewed 12 such agreements across the four agencies and found that State and USDA did not include all key monitoring provisions in their agreements as called for by applicable guidance. State has taken corrective action since the grants were awarded. We also found that, across the four agencies, 18 of the 42 monitoring activities required by the 12 assistance agreements we reviewed were not fully documented. State and HHS both initiated corrective action prior to our review of the grants.
U.S. Agencies Provided Voluntary Contributions through Assistance Agreements to OAS, PAHO, and IICA
The United States provided voluntary contributions to OAS, PAHO, and IICA through project-specific assistance agreements, such as grants and cooperative agreements. According to U.S. agency officials, the organizations’ regional knowledge and technical expertise make them effective implementing partners for projects serving U.S. national interests and priorities throughout the hemisphere. From calendar years 2014 through 2016, the United States provided voluntary contributions totaling about $105 million to the OAS, PAHO, and IICA, as shown in table 2. In 2016, for example, the United States contributed $32 million, or approximately 22 percent of the total of $143 million from all member states. According to U.S. officials, levels of U.S. voluntary contributions vary year-to-year due to factors that include the schedule of multiyear agreement disbursements, sudden crises, and member states’ priorities. For example, in 2016, USAID approved an assistance agreement for $2 million to OAS to support international observation of government elections in Haiti.
U.S. Agencies Could Enhance Their Monitoring of Assistance Agreements
In our review of 12 selected assistance agreements from State, HHS, USAID, and USDA (out of a total of 60 active agreements during calendar years 2014 through 2016), we found that none of the agencies had both consistently included all the key monitoring provisions for their agreements and fully documented the monitoring activities required by those provisions. For example, USDA did not have full documentation, such as financial reports, of any of its 10 required monitoring activities, and USAID did not have full documentation of 2 of its 11 required monitoring activities (financial and performance reports). U.S. agencies could have greater assurance that the organizations are using these funds as intended if they enhanced their monitoring of their assistance agreements.
Two of Four U.S. Agencies Did Not Include All Key Monitoring Provisions in the Agreements We Reviewed
Each of the four agencies has established applicable guidance that calls for agencies to conduct monitoring activities as part of their oversight of their assistance agreements. The agencies implement their guidance by including key provisions to carry out required monitoring activities as part of their agreements. Federal standards for internal control call for agencies to include in agreements all key provisions delineating the parties’ responsibilities. For the 12 agreements we reviewed, the number of key monitoring provisions per agreement varied depending on when the agency issued and updated its guidance relative to when the agreements were approved.
Federal standards for internal control call for agencies to document internal controls, transactions, and significant events. Specifically, internal control standards state that agency management should include internal control activities (e.g., monitoring activities) in policies or directives for transactions such as assistance agreements.
For the 12 assistance agreements we reviewed, USDA and State did not include provisions implementing 6 of the 55 total (11 percent) monitoring activities required by applicable guidance (see table 3). For example, State did not include two of the key monitoring provisions (a risk assessment and a monitoring plan) in one of its agreements. State took corrective action in 2015 by issuing a standard operating procedure.
The agencies specify the requirements to fulfill the key monitoring provisions in the individual assistance agreements, such as by requiring financial reports on a quarterly basis or including specific information in performance reports. Grants officers, if they deem it necessary or appropriate, include additional monitoring provisions requiring activities beyond those required by the applicable guidance, such as site visits.
Federal standards for internal control call for agency management to design monitoring activities, such as financial and performance reporting, so that all transactions are completely and accurately recorded. Recording these activities maintains their relevance and value to management in controlling operations and making decisions. Without access to complete monitoring documentation, the agencies risk weakening the effectiveness of these controls.
None of the four U.S. agencies had full documentation of all of the monitoring activities required by their agreements we reviewed (see table 4). The agencies did not have full documentation of monitoring activities for 9 of the 12 agreements we reviewed. For the 42 monitoring activities identified across all of the individual agreements, the four agencies did not have full documentation of 18 of the activities (43 percent). However, State took corrective action in May 2017 to address its gaps in documentation, and according to HHS officials, the Food and Drug Administration addressed its gap in documentation by implementing its agreement monitoring program in fiscal year 2018.
The Strategic Goals of the Four Inter- American Organizations Are Predominantly Aligned with U.S. Agencies’ Strategic Goals
In our December 2017 report, we found that the strategic goals of the four inter-American organizations are predominantly aligned with the high-level strategic goals for the Western Hemisphere documented by State, USAID, HHS, and USDA, as shown in table 5. For example, four of the five goals in State and USAID's Joint Strategy correspond with goals at the OAS, IICA, and PAIGH. According to officials, the agencies all consider U.S. strategic goals when deciding which projects to fund at OAS, PAHO, and IICA. U.S. agencies, on an ongoing basis, evaluate each inter-American organization to ensure U.S. and organization goals are aligned. For example, according to USAID officials, USAID's assistance project design and approval policies and procedures ensure that all USAID-funded activities are linked to applicable U.S. and USAID strategies.
In conclusion, monitoring the implementation of U.S. assistance agreements and fully documenting the results of such monitoring are key management controls to help ensure that U.S. agreement recipients use federal funds appropriately and effectively. The agencies risk weakening the effectiveness of these controls by not including in their assistance agreements all the key monitoring provisions called for by applicable agency guidance. Further, if the agencies do not have full documentation of the agreements’ required monitoring activities, they may not be able to effectively manage federally funded projects that support U.S. strategic goals. In addition, agencies may not have all the information they need to make budgetary and programmatic decisions.
In our December 2017 report, we recommended that (1) USDA ensure inclusion of all monitoring provisions as part of agreements and (2) USAID and USDA ensure full documentation of monitoring activities. The agencies concurred with these recommendations and indicated that they will take actions to address them. For example, USAID said it would issue an agency notice to remind all agreement officers to maintain complete files for each agreement.
Chairman Cook, Ranking Member Sires, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions that you may have at this time.
GAO Contact and Staff Acknowledgments
If you or your staff have any questions about this testimony, please contact Thomas Melito, Director, International Affairs and Trade at (202) 512-9601 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony include Pierre Toureille (Assistant Director), Julia Jebo Grant (Analyst-in-Charge), Leslie Stubbs, Paul Sturm, Alana Miller, Shirley Min, Kira Self, and Rhonda Horried. In addition, David Dayton, Martin de Alteriis, Neil Doherty, Jeff Isaacs, and Alex Welsh provided technical assistance.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Why GAO Did This Study
The United States belongs to several inter-American organizations, including the OAS, PAHO, IICA, and PAIGH, which promote democracy, security, health care, agricultural development, and scientific exchange in the Western Hemisphere. The United States helps finance these organizations' operating expenses through assessed contributions. The United States also provides voluntary contributions through the federal funding of assistance agreements to OAS, PAHO, and IICA.
This testimony is based on GAO's June and December 2017 reports that, among other things, (1) determined the amounts and percentages of U.S. assessed contributions to the four organizations, (2) assessed the extent to which U.S. agencies included and documented key monitoring provisions as part of their assistance agreements, and (3) assessed the extent to which the organizations' strategic goals align with those of U.S. agencies.
GAO analyzed documents and interviewed officials from State, HHS, USAID, USDA, and the four organizations. GAO analyzed the four organizations' audited financial reports and a nongeneralizable sample of 12 assistance agreements awarded by State, USAID, HHS, and USDA active in calendar years 2014 through 2016.
What GAO Found
While the United States' assessed contributions constituted over 57 percent of total assessed contributions by member states to four inter-American organizations from 2014 to 2016, the U.S. share may be reduced in the near future (see table). In response to a statutory requirement, the U.S. Department of State (State) said it submitted to Congress a strategy that included working with the Organization of American States (OAS) member states toward ensuring that the OAS would not assess any single member state a contribution amounting to more than 50 percent of all OAS assessed contributions. At the OAS General Assembly in June 2017, OAS member states voted to draft a proposal to modify its system for determining member states' assessed contributions to potentially reduce the maximum assessed contribution to below 50 percent. The other three organizations use OAS's system for setting assessed contributions. Hence, any change in contributions at OAS should also be reflected at Pan American Health Organization (PAHO), Inter-American Institute for Cooperation on Agriculture (IICA), and the Pan-American Institute of Geography and History (PAIGH).
State, the Department of Health and Human Services (HHS), the U.S. Agency for International Development (USAID), and the U.S. Department of Agriculture (USDA) provide voluntary contributions to OAS, PAHO, and IICA in the form of assistance agreements (e.g., grants and cooperative agreements). In December 2017, GAO reported that its review of 12 such agreements across the four agencies found that State and USDA did not include all key monitoring provisions in their agreements as called for by applicable guidance. State has since taken corrective action. GAO also found that, across the four U.S. agencies, 18 of the 42 monitoring activities required by the 12 assistance agreements GAO reviewed were not fully documented. For example, USDA did not have full documentation, such as financial reports, of any of its 10 required monitoring activities, and USAID did not have full documentation of 2 of its 11 required activities. State and HHS said they initiated corrective action before GAO's review. If an agency does not have full documentation of monitoring activities, it may lack information needed to make appropriate budgetary and programmatic decisions.
GAO found that the strategic goals of the OAS, PAHO, IICA, and PAIGH are predominantly aligned with the strategic goals of State, USAID, HHS, and USDA. According to agency officials, the agencies employ mechanisms to ensure that assistance agreements with these organizations align with U.S. goals.
What GAO Recommends
In its December 2017 report, GAO recommended that (1) USDA ensure inclusion of all monitoring provisions as part of agreements and (2) USAID and USDA ensure full documentation of monitoring activities. USDA and USAID concurred with GAO's recommendations.
Background
This section outlines the legal framework under which agencies and federal labs license patents and the general stages of the patent licensing process.
Legal Framework for Patent Licensing
Prior to 1980, federal agencies generally retained title to any inventions developed through federally funded research—whether extramural, that is, conducted by universities and contractors, or intramural, conducted by federal agencies in their own facilities. By the late 1970s, there was increasing debate in Congress over ways to allow the private and public sectors better access to federally owned inventions by, among other things, creating a uniform policy for those seeking to license inventions developed in federal labs. In the 1980s, Congress began passing a series of key laws that have provided the foundation for federal technology transfer activities, including patenting and licensing inventions that are developed in federal labs and funded by federal dollars. One of the first technology transfer laws, the Stevenson-Wydler Act, established technology transfer as a federal policy and required federal labs to set up Offices of Research and Technology Applications (which, for our purposes, we refer to as technology transfer offices) and devote budget and personnel resources to promoting the transfer of federal technologies to the private sector. In 1980, another key law, the Bayh-Dole Act, allowed not-for-profit corporations, including universities, and small businesses to retain title to their federally funded inventions. In 1984, through amendments made to the Bayh-Dole Act, Commerce became responsible for issuing regulations to implement the act.
The Stevenson-Wydler Act was amended by the Federal Technology Transfer Act of 1986, which (1) established the Federal Laboratory Consortium (FLC); (2) required that technology transfer efforts be considered positively in employee performance evaluations; and (3) empowered federal agencies to permit the directors of government-owned, government-operated labs to enter into cooperative research and development agreements (CRADA) and to negotiate license agreements for inventions created in the labs. The FLC began largely as a forum for the education, training, and networking of federal technology transfer officials to promote the integration into the U.S. economy of technical knowledge developed by federal departments and agencies. Over time, the FLC's role would include serving as a clearinghouse—a central point for collecting and disseminating information—for federal technologies and assisting outside entities in identifying available federal technology. Within Commerce, NIST is the designated host and financial administrator of the FLC.
Additional laws were adopted to help further the development of federally owned inventions for commercial use. Among them was the National Competitiveness Technology Transfer Act of 1989, which directed federal agencies to propose, for inclusion in contracts, provisions to establish technology transfer as a mission of government-owned, contractor-operated labs and permitted those labs, under certain circumstances, to enter into CRADAs. In addition, the Technology Transfer Commercialization Act of 2000 required Commerce to provide Congress with summary reports on agencies' patent licensing and other technology transfer activities. Since 2007, Commerce has delegated to NIST the role of providing to Congress an annual report summarizing technology transfer at federal agencies. NIST's role as the lead in an interagency collaborative effort in federal technology transfer grew further when Commerce delegated to the agency the additional responsibility of coordinating the Interagency Working Group for Technology Transfer. Commerce also has delegated to NIST its authority to promulgate implementing regulations pertaining to patenting and licensing at federal labs. In 2011, Congress passed the Leahy-Smith America Invents Act (AIA), which further affected technology transfer activities by federal labs through comprehensive changes made to the U.S. patent system.
Federal Labs
Federal labs are typically managed under either a government-operated or a contractor-operated model. Commerce regulations prescribe the terms, conditions, and procedures that government-operated labs are to use to license their inventions for commercial use or other practical applications. Government-operated labs are usually owned or leased by the federal government and are predominantly staffed by federal employees. Contractor-operated labs, on the other hand, operate facilities and equipment that are owned by the federal government, but the staff is employed by a private or nonprofit contractor that operates the lab under a contract with the federal government. Contractor-operated labs typically license their technologies under the authority of the Bayh-Dole Act, applicable regulations, and their contracts, which generally give contractor-operated labs more flexibility in licensing their technologies. Contractors that manage and operate labs include universities, private companies, nonprofit organizations, or consortia thereof. As discussed below, whether a lab is government-operated or contractor-operated will affect how that lab licenses inventions because each type operates under a different set of licensing regulations and requirements.
The Federal Licensing Process
The pathway of an invention from lab development to commercial product can end at any point, and products may not always reach, or find success in, the marketplace. Figure 2 shows the seven general areas of the patent licensing process at federal labs.
The patent licensing process begins with researchers identifying inventions—a process that primarily relies on researchers disclosing their inventions to lab officials, mostly through the lab director or directly to an agency's technology transfer office. Various laws and regulations establish a uniform policy for determining who holds the rights to government employees' inventions. Some government-operated labs allow or encourage researchers to publish their research, including research describing inventions, for public dissemination, such as in research journals. Contractor-operated labs are required to disclose inventions to the agency within 2 months after notifying contractor personnel responsible for patent licensing activities. Labs must then decide within 2 years after the disclosure whether to retain title to the invention. The contractor then must file its initial patent application on an invention to which it elects to retain title within 1 year after electing title. If the contractor-operated lab does not disclose the invention or elect to retain title within the times specified in the law and regulations, it will convey title to the invention to the funding agency upon written request.
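Because these windows chain off one another, simple date arithmetic can make the outer bounds concrete. The Python sketch below computes worst-case deadlines from a notification date, assuming each step occurs on its last permissible day; it is an illustration of the timeline described above, not legal guidance, and the month-shifting helper is our own.

```python
from datetime import date
import calendar

def add_months(d: date, months: int) -> date:
    """Return d shifted by a number of calendar months, clamping the day of month."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

def contractor_patent_deadlines(notified: date) -> dict[str, date]:
    """Outer deadlines for a contractor-operated lab: disclose within 2 months of
    notification, elect title within 2 years of disclosure, and file the initial
    patent application within 1 year of election (worst case at each step)."""
    disclose_by = add_months(notified, 2)
    elect_title_by = add_months(disclose_by, 24)
    file_application_by = add_months(elect_title_by, 12)
    return {"disclose_by": disclose_by,
            "elect_title_by": elect_title_by,
            "file_application_by": file_application_by}

print(contractor_patent_deadlines(date(2018, 1, 15)))
```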
Keeping Track of Inventions
Once an invention has been identified and disclosed, federal agencies and labs keep track of the invention. How they do so varies in degree of automation and centralization. For example, systems that keep track of lab inventions can range from spreadsheets to automated software that tracks all patent licensing and other technology transfer activities. Also, such systems can be centralized, with oversight at the agency level, or decentralized, with independent oversight at the lab level—which is generally the case at contractor-operated labs. Some contractor-operated labs manage their federally funded inventions through the Interagency Edison (iEdison) reporting system, which is owned and managed by NIH.
Selecting Inventions to Patent
Before applying for patent protection through USPTO, agency and lab officials review the invention—often using evaluation committees and patent attorneys—to consider a number of factors, including whether the invention is patentable, whether it furthers the lab's mission, and whether patenting it is likely to bring it to commercial use or practical application. The agency must file a patent application within 1 year of the first publication, public use, sale, or offer for sale of the invention or lose U.S. patent rights to that invention. Not all patents will be licensed out to companies, for a variety of reasons, including national security considerations. According to USPTO, the average time from filing to issuance of a patent, or to abandonment of an application, is about 2 years. Patent applications are often rejected, modified, and refiled, and various fees are associated with filing and prosecuting a patent application. According to USPTO, the patent maintenance fees that allow federal labs to keep their patents in force are among the most significant of these fees.
Attracting Potential Licensees
Agencies and labs use a variety of methods to attract potential licensees, including those from industry, universities, and nonprofits. For example, agencies may post their inventory of patented inventions online, publish them in academic journals, or highlight them at public events. Agencies and labs actively engage with the private sector by, for example, attending conferences where companies can network with federal researchers and federal technology transfer officials. In addition, technology transfer offices often work with partnership intermediaries— such as local or state entities and nonprofit organizations—to support their efforts, including reaching out to potential licensees. Labs have other mechanisms to help attract potential licensees to further develop their inventions. For example, CRADAs can help facilitate licensing or the transfer of knowledge from a lab to a licensee, and new inventions that arise under a CRADA are typically made available to the partner via an option to license.
Negotiating the License Agreement
The technology transfer offices and legal counsel are generally responsible for crafting and negotiating the terms of the patent license, sometimes with input from other lab officials. Negotiations are often an iterative process in which both the lab and the licensee request adjustments to the terms of the license. Laws and regulations specify some terms that government-operated labs must include in their licenses. Among others, a typical license includes terms related to (1) financial compensation (if applicable), (2) the degree of exclusivity of the license, (3) the U.S. manufacturing requirement, (4) retained rights for the government, (5) termination of the license, and (6) enforcement of licenses.
Financial terms may include up-front fees; minimum payments; royalties, usually based on sales; and milestone payments, among others. Federal labs typically establish financial terms on a case-by-case basis that are tailored to the specifics of the technology, licensee, and market conditions. License agreements may be nonexclusive, partially exclusive, or fully exclusive, and may be limited to some fields of the invention’s use or to specific geographic areas.
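As an illustration of how such terms can combine into a payment stream, the following sketch totals hypothetical payments over a 3-year license; every rate and amount is invented for illustration and does not reflect any actual federal license.

    def annual_payment(net_sales, royalty_rate, minimum_royalty, milestone=0.0):
        # Royalty owed for one year: the greater of the running royalty
        # and the minimum annual royalty, plus any milestone payment
        # triggered that year.
        return max(royalty_rate * net_sales, minimum_royalty) + milestone

    upfront_fee = 25_000                    # one-time execution fee (hypothetical)
    sales_by_year = [0, 150_000, 600_000]   # licensee's projected net sales
    milestones = {2: 10_000}                # e.g., payment due on first commercial sale

    total = upfront_fee
    for year, sales in enumerate(sales_by_year, start=1):
        total += annual_payment(sales, royalty_rate=0.05,
                                minimum_royalty=5_000,
                                milestone=milestones.get(year, 0.0))
    print(f"Total payments over {len(sales_by_year)} years: ${total:,.0f}")  # $77,500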
Government-operated labs must publicly announce, for at least 15 days, their intent to grant an exclusive license. After this period, any comments and objections received are considered. Negotiations then begin with the proposed licensee; if the licensee has changed, another public announcement of the new licensee may be required. Government-operated labs are required to obtain a commercialization plan from a potential licensee regardless of the degree of exclusivity. Contractor-operated labs, which typically retain title to their inventions under the authority of the Bayh-Dole Act, are not subject to the requirement to obtain a commercialization plan from a prospective licensee before granting a license; however, they are subject to requirements specified in their contracts regarding patent licensing. In addition, they are not subject to the same notification requirements as government-operated labs.
The law also contains some other provisions pertaining to patent licenses originating from federal labs. For example, the law generally gives preference to small businesses that are capable of bringing the invention to practical application. There is a general preference for products that incorporate federal inventions to be manufactured substantially in the United States; however, on a case-by-case basis, agencies may waive this requirement. Applicable law also reserves certain rights for the government to protect the public’s interests in federally funded inventions. For example, the government retains a royalty-free license to use inventions that are contractor owned or that are licensed exclusively. In addition, the Bayh-Dole Act provides the government march-in authority when certain statutory conditions have been met. Under this authority, an agency may grant a license to an invention developed with federal funding even if the invention is exclusively licensed to another party if, for example, it determines that such action is needed to alleviate public health or safety needs which are not reasonably satisfied by the contractor, assignee, or their licensee. A federal lab can also terminate a license when the licensee is not meeting its commitment to achieve practical application of the invention. The lab can also, through the license, grant permission to a licensee to pursue patent infringement cases.
Monitoring Licensee Performance
Federal license agreements generally require licensees to report periodically on their commercialization. For instance, labs generally put specific monitoring requirements in the license agreements, including milestones and reporting requirements. Through the agreements, government-operated labs have the right to terminate or modify licenses if certain requirements are not met. Government-operated labs must submit written notices to the licensees and any sublicensees of their intentions to modify or terminate licenses, and allow 30 days for the licensees or sublicensees to remedy any breach of the licenses or show cause why the licenses should not be modified or terminated. Contractor-operated labs also monitor licensee performance in much the same way; however, they are subject to a different set of regulations.
Measuring Licensing Outcomes
Federal labs are responsible for measuring the outcomes of their activities in all areas of the patent licensing process by developing metrics and evaluation methods. Measuring licensing outcomes helps labs assess the effectiveness of their patent licensing efforts. Soon after the passage of AIA, President Obama issued a memorandum in October 2011 to the heads of executive departments and agencies calling for, among other things, (1) developing strategies to increase the usefulness and accessibility of information about federal technology transfer opportunities; (2) listing all publicly available, federally owned inventions in a public government database; and (3) improving and expanding the collection of metrics for Commerce's annual technology transfer summary report.
Technology Transfer and Agency Mission
Federal law states that it is Congress’s policy and objective to use the patent system to promote the commercialization and public availability of inventions, and that technology transfer, including federal patent licensing, is the responsibility of each laboratory science and engineering professional. No single federal agency is responsible for managing technology transfer activities government-wide. Rather, each federal agency involved in technology transfer designs its own program to meet technology transfer objectives, consistent with its other mission responsibilities.
Federal Labs and Stakeholders Identified Challenges in Patent Licensing, and Agencies Have Taken Some Steps to Address Them, but NIST Has Not Fully Reported Such Challenges
Federal agency and lab officials and external stakeholders have identified challenges across the federal patent licensing process, but NIST has not fully reported such challenges. Specifically, DOD, DOE, NASA, and NIH officials at the agency and lab levels, as well as external stakeholders, cited challenges related to all seven areas of the patent licensing process. In addition, officials and stakeholders cited challenges in one area that cuts across the entire process: prioritizing patent licensing as part of agencies’ missions. In its annual reports to Congress on federal labs’ performance in patent licensing activities, NIST has discussed some challenges identified by agency and lab officials and external stakeholders but has not fully reported on the range of challenges they have experienced.
Federal Labs and Stakeholders Identified Challenges across the Patent Licensing Process, and Agencies Have Taken Some Steps to Address These Challenges
DOD, DOE, NASA, and NIH officials at the agency and lab levels, as well as external stakeholders, identified challenges in all seven areas of the patent licensing process, including identifying inventions, keeping track of inventions, and negotiating license agreements. They also cited challenges in prioritizing patent licensing as part of agencies’ missions. Based on our analysis of relevant literature and on interviews with external stakeholders, many of these challenges are occurring government-wide. DOD, DOE, NASA, and NIH have taken some steps to address the challenges in each area of the patent licensing process.
Challenges in Implementing the Patent Licensing Process
DOD, DOE, NASA, and NIH officials at the agency and lab levels, as well as external stakeholders, identified challenges in all seven areas of the patent licensing process, including failing to identify inventions, keeping track of inventions with inadequate systems, and difficulty negotiating license agreements. For example, several DOD, DOE, NASA, and NIH officials stated that some researchers do not have adequate training in identifying potentially patentable inventions. When a federal researcher does not disclose to lab officials an invention developed in a federal lab, the opportunity to assess the invention's potential for commercial use may be lost. Federal officials cited various reasons why researchers do not disclose inventions. Navy officials, for example, stated that researchers are often intimidated by the overall invention disclosure process and tend to focus on their research rather than consider what could be patentable. Officials at one NASA lab noted that they have come across a few contractor employees who do not see the benefit of filing invention disclosures and that researchers are sometimes too busy to engage in the patenting process.
Our analysis of relevant literature and interviews with stakeholders also showed that researchers not identifying and disclosing inventions is a government-wide challenge. For example, one stakeholder stated that researchers at federal labs generally have limited understanding of the patenting process, including an understanding of what constitutes patentable subject matter and how to conduct a prior art search on the technology to determine whether it is patentable.
DOD, DOE, NASA, and NIH officials stated that they are taking a variety of actions to help address this challenge. For example, some agency and lab officials stated that labs conduct training to educate researchers about the patenting process, inform researchers about requirements to disclose inventions, and incentivize them by acknowledging their efforts through awards and monetary incentives—such as potential royalty distributions— when their inventions reach commercial success.
In addition, DOD, DOE, and NIH officials described their agencies’ systems for keeping track of inventions developed in the labs as inadequate or in need of improvement. How agencies and labs keep track of such inventions can range from spreadsheets to sophisticated databases that manage all technology transfer activities, including keeping track of patented inventions and licenses. Currently, DOD has a decentralized approach to keeping track of inventions, which, according to DOD officials, needs improvement given how large the agency is.
Several stakeholders we interviewed also noted that the challenge of keeping track of inventions exists government-wide. According to some stakeholders, federal labs not only have inadequate systems to keep track of their own inventions but also limited information on the kinds of inventions being developed in federal labs across the government. The result is that agencies risk being unaware of research across the labs, which can limit their ability to leverage other federal research efforts. For example, one stakeholder stated that there can be research conducted independently at three or four labs under different agencies but little interaction among those labs about the research.
DOD, DOE, and NIH officials stated that they have made efforts to improve their current systems for keeping track of inventions. Specifically, DOE officials reported that they have developed a plan to leverage the capabilities of the iEdison reporting system to unify the agency’s data management process. Air Force and NIH officials stated that they have contacted NASA, which has a centralized system for tracking inventions, about leveraging its expertise. NASA officials reported that they have been hosting regular webinars with other agencies to determine whether NASA’s tracking system could help meet other agencies’ needs.
Furthermore, agency and lab officials and stakeholders noted that federal labs face challenges in negotiating license agreements because the licensing process is lengthy and uniquely regulated, which can deter companies from licensing federal inventions. Stakeholders stated that the federal licensing process can take anywhere from about 3 months to more than 2 years. Some stakeholders stated that from their point of view taking a year to negotiate a license agreement is too long. One stakeholder said that such lengthy processes are particularly difficult for start-ups, which often need to finalize license agreements in 3 months.
DOD, DOE, NASA, and NIH officials said they are taking steps to address companies’ concerns about the time it takes to negotiate a license agreement. For instance, NASA, NIH, and Navy officials told us that they have developed model license agreements to help guide companies through the process, and NASA and NIH have special license agreements for start-ups to shorten the licensing process.
For more detail on challenges in the seven areas of the patent licensing process that agency and lab officials and external stakeholders identified, see appendix II.
Challenges in Prioritizing Patent Licensing
DOD, DOE, NASA, and NIH face challenges in prioritizing patent licensing as part of their agency missions. For example, DOD and DOE officials stated that an agency’s mission affects patent licensing activities. DOD officials stated that the agency’s primary mission is protecting the warfighter and that patent licensing is a secondary benefit to the agency. According to DOE officials, the nuclear security labs do not focus on patenting but instead on developing technologies associated with a weapons program.
In addition, several stakeholders we interviewed stated that some agencies and labs do not have a culture that prioritizes patent licensing. In particular, one stakeholder stated that at some federal labs, patent licensing is not reflected in performance evaluation management plans, which can help incentivize lab personnel to engage in patent licensing activities. A few stakeholders stated that at some labs where management does not prioritize patent licensing activities, researchers’ careers can be negatively affected if they engage in patent licensing activities.
Some agency and lab officials stated that they have taken steps to overcome such challenges. For example, officials at one Navy lab stated that the lab has management support and nine patent attorneys to assist in the reviews of researchers’ invention disclosures. Also, officials at one NIH lab stated that the lab has strong management support and a good royalty stream from successful inventions that pay for patenting and other reinvestments, which allows the lab to not draw from its appropriations.
NIST Has Reported Some Challenges Faced by Federal Labs in Areas of Patent Licensing but Has Not Fully Reported on the Range of Such Challenges
In its three most recent fiscal year summary reports to Congress, NIST identified some challenges faced by federal labs in areas of patent licensing and has assisted agencies in addressing challenges in their patent licensing activities. However, NIST does not fully report on the range of challenges that agency and lab officials and stakeholders identify.
NIST collaborates with agencies to gather patent licensing data for its summary reports to Congress. For example, according to agency officials, NIST engages with agencies to inform them about new requirements in technology transfer and helps them identify their successes in conducting technology transfer activities. NIST also provides administrative support to the FLC, which offers training to federal technology transfer specialists through workshops; publishes a desk reference on federal patent licensing, laws, and regulations; and has commissioned studies on efforts to develop federal inventions for commercial use. Further, NIST developed a survey in 2016 on agency technology transfer processes. NIST officials stated that the survey is aimed in part at improving federal labs’ decisions on whether to spend money on applying for patents, whether patents will facilitate the commercialization of technology, and what data are needed to make those determinations. NIST officials stated that the agency continues to analyze the survey data and currently plans to report its findings in fiscal year 2018.
While NIST has identified in its annual summary reports to Congress some challenges that federal labs face in patent licensing and other technology transfer activities, it has not fully reported the range of challenges that agencies and labs face in patent licensing. For example, in its fiscal year 2015 summary report—its most recent report—on federal technology transfer, NIST reported that the federal intramural research budget has been relatively consistent over the years, but not that DOD, DOE, NASA, and NIH face challenges in prioritizing patent licensing as part of their agency missions. The report also mentions that there is no uniform federal system for tracking research published by employees in federal labs, but not that DOE, for example, has faced challenges in keeping track of inventions developed in its labs. In addition, we found that although the report mentions that the Department of Veterans Affairs is facing challenges with its labs disclosing inventions, it does not mention similar challenges at DOD. NIST officials stated that they were generally aware of the challenges identified by agency and lab officials and external stakeholders but had not considered including such challenges to a greater degree in the summary reports to Congress.
We have previously reported on Congress’s goal to make the federal government more results oriented through reporting of agency performance information to aid decision making by agency executives, Congress, and program partners. Specifically, we have reported how the effective implementation of good governance can help address government challenges in five key areas involving agency performance and management: (1) instituting a more coordinated and crosscutting approach to achieving meaningful results, (2) focusing on addressing weaknesses in major management functions, (3) ensuring that agency performance information is useful and used in decision making, (4) sustaining leadership commitment and accountability for achieving results, and (5) engaging Congress in identifying management and performance issues to address. By fully reporting the range of challenges in federal patent licensing—such as those outlined in this report—and including that information in its annual summary reports to Congress, NIST has the opportunity to further ensure that Congress is more aware of challenges that limit agencies’ efforts in patent licensing and ways for potentially addressing those challenges. To identify these challenges, NIST could, for example, leverage its survey, past FLC studies, and agency reports.
Federal Agencies and Labs Have Limited Information on Processes, Goals, and Comparable Licenses to Guide Establishing Financial Terms in Patent Licenses
Federal agencies and labs have limited information on processes, goals, and comparable licenses to guide establishing the financial terms in patent licenses. DOD, DOE, NASA, and NIH labs generally do not document their processes for establishing the financial terms of patent licenses and instead rely on the expertise of technology transfer staff. Furthermore, existing agency and lab guidance does not consistently link the practice of establishing license financial terms to the statutory goal of promoting commercial use of inventions. In addition, although many federal labs rely on comparable licenses to aid them in setting the terms of new licenses, labs have varying levels of access to information about such licenses.
Federal Agencies and Labs Have Limited Documentation of Their Processes for Establishing the Financial Terms of Patent Licenses
DOD, DOE, NASA, and NIH labs have limited documentation of their processes for establishing the financial terms of patent licenses. Such documentation is limited at both the agency level and the lab level.
At the agency level, the four agencies we reviewed had some documentation on patent licensing in general, such as policies, procedures, guides, and handbooks, but had limited information on how to establish financial terms. For example, the Air Force and the Navy had handbooks on technology transfer that include brief passages on financial terms. However, agency officials noted that these handbooks were either outdated or under revision. At DOE, labs collaborated to develop two agency-level documents on patent licensing: one for lab officials on using equity in licenses and a licensing guide for licensees. These documents describe the general structure of various types of financial terms and, in the document on using equity, factors to consider regarding its use in a license, but do not discuss methods for establishing financial terms. NASA and NIH have policies and procedures for patent licensing that mention the types of financial terms that are normally found in licenses but do not cover other aspects, such as methods for establishing financial terms. All four agencies reported that they gave their labs discretion to develop their own processes for establishing financial terms.
At the lab level, DOD, DOE, NASA, and NIH generally had not documented their processes for establishing financial terms in patent licenses. Based on documentation provided by NASA, NIH, and DOD, few labs at these agencies had issued additional documentation on the patent licensing process. DOE labs had documented the patent licensing process in general, and 6 out of 17 DOE labs provided documentation that covered aspects of establishing financial terms. For example, one DOE lab document contained a set of licensing principles that help clarify what financial terms a license usually contains, their purpose, and how to structure the financial terms in patent licenses. In addition, agency and lab officials at NASA and DOE reported using tools, such as financial term calculators, at some of their labs, which aid technology transfer staff in valuing technologies.
Agency and lab officials reported that they generally rely on the expertise of technology transfer staff to establish and vet appropriate financial terms. Accordingly, agencies and labs reported that they have taken some steps to develop, share, and retain expertise among staff in their technology transfer offices. The agencies we reviewed reported that some technology transfer staff participate in training opportunities provided by professional organizations like the Association of University Technology Managers (AUTM) or the Licensing Executives Society (LES), as well as the FLC and the agencies. In addition, some agencies and labs reported that internal working groups and regular meetings are opportunities to share licensing expertise. At DOD, officials stated that on a case-by-case basis, labs may use the expertise of their partnership intermediary to help establish financial terms.
However, according to agency and lab officials and stakeholders, federal labs face challenges in acquiring, developing, and retaining patent licensing expertise in their technology transfer offices. Specifically, some agency officials, lab officials, and stakeholders cited issues such as losing experienced technology transfer staff to retirement or to the private sector, difficulty hiring staff with expertise, in part because of limited funding, and a limited pool of prospective employees with the expertise to value and license inventions. A few stakeholders said that government training in the business aspects of patent licensing is inadequate and not widespread. In addition, some stakeholders had concerns about consistency in licensing practices both within and across labs. For example, some of these stakeholders said that the outcome of license negotiations can depend on the specific licensing professional handling the license. Varying levels of expertise, like undocumented processes, can lead to inconsistency in licensing practices, including in establishing financial terms.
Under the federal standards for internal control, management should design control activities by, for example, clearly documenting them in management directives, administrative policies, or operating manuals, to achieve objectives and respond to risks. Furthermore, documentation can act as a means to retain organizational knowledge and provide some assurance that an approach is operational across the lab or agency.
Agency and lab officials stated that they had not documented their processes for establishing financial terms for various reasons. For example, lab officials stated that establishing financial terms is often complex and varies based on the specific circumstances applicable to each potential license, which may limit what can be documented. Some agency and lab officials stated that labs need flexibility in negotiating terms to make adjustments based on the circumstances and therefore officials do not want to be prescriptive. A few agency and lab officials also noted that there are benefits to having streamlined processes. Furthermore, a few agency and lab officials described negotiating license terms as a craft or art that requires expertise and said that documenting this will not enhance licensing by itself.
However, some agency and lab officials and stakeholders said that it is possible to document some aspects of the process. A few stakeholders we interviewed noted that even if each agreement is unique, it is still possible to develop guidelines or outline a methodology for establishing financial terms. A few agency and lab officials stated that they are investigating opportunities to standardize their processes or would be open to documenting them. For example, one agency official told us that the agency plans to update existing documents with specific information about royalty ranges so labs do not have to constantly “reinvent the wheel.” Some labs also described steps that they take to establish financial terms, such as methods for valuing inventions, without being prescriptive. By documenting processes for establishing the financial terms of licenses while maintaining enough flexibility to tailor the specific terms of each license, the four agencies could have more reasonable assurance of consistency across their labs regardless of the expertise of staff.
Federal Agency and Lab Documentation Does Not Consistently Link Financial Terms to the Goal of Promoting Commercial Use
Agency and lab documentation does not consistently link establishing financial terms in patent licenses to the goal of promoting commercial use of inventions. As noted above, federal law states that it is Congress’s policy and objective to use the patent system to promote the commercialization and public availability of inventions, and that technology transfer, including federal patent licensing, is the responsibility of each laboratory science and engineering professional.
Agency-level documentation at NASA contains a provision that clearly links establishing financial terms to the goal of promoting commercial use of inventions—that is, “terms should be negotiated that provide the licensee incentive to commercialize the invention.” NIH’s documentation mentions financial terms in the context of protecting the public from nonuse, which is one aspect of promoting commercial use, and also mentions the goal of obtaining a fair financial return on investment from the licensed invention. DOD and DOE agency-level documents mention the general goal of promoting the commercial use of inventions without specifically linking it to the financial terms. At the lab level, DOD documents generally do not address the goals for financial terms. Of 17 DOE labs, 4 had a statement in their documentation to link financial terms to the goal of promoting commercial use of inventions.
DOD, DOE, NASA, and NIH officials we interviewed stated that getting the technology into the marketplace is their primary goal in licensing but also mentioned other goals related to financial terms that support their mission. In addition, some agency and lab officials described using revenues from licenses as a means to provide a reward to inventors for their work or to obtain a fair return on investment for research conducted by federal agencies. Furthermore, lab officials we interviewed mentioned the flexibility of revenues from licenses as helpful in funding activities, such as additional research, training, and patent prosecution.
Some agency officials and stakeholders we interviewed expressed concerns about competing goals for establishing financial terms. For example, a few stakeholders stated that licensing professionals may be motivated to negotiate for increased license revenue because it reflects positively on them professionally. Further, some stakeholders expressed concerns about labs taking a short-term view of some licensees, particularly small companies, because they have less ability to pay initially and thus may offer less certain revenues.
Our review of relevant economic literature and interviews with stakeholders suggest that license financial terms set with goals other than promoting commercial use in mind, such as short-term revenue maximization, may undermine that longer-term goal. For example, high up-front license fees typically provide more guaranteed short-term revenue to the licensor than other forms of payment but can also reduce the capital available to develop a product successfully. Labs with other goals in mind when establishing financial terms may be at risk of establishing them in ways that run counter to the goal of promoting commercial use.
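A simple hypothetical comparison makes this trade-off concrete; the figures below are illustrative assumptions, not drawn from any actual license.

    # A start-up with $200,000 in capital must still fund an estimated
    # $180,000 of development to bring the invention to market.
    capital, dev_cost = 200_000, 180_000

    # Structure A: large up-front fee, low running royalty.
    remaining_a = capital - 50_000 - dev_cost   # -30,000: development is underfunded

    # Structure B: small up-front fee; higher royalties are paid later, from sales.
    remaining_b = capital - 5_000 - dev_cost    # 15,000: development is funded

    print(f"Structure A leaves {remaining_a:,}; Structure B leaves {remaining_b:,}")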
NIST plays an important role in providing regulations and guidance to agencies regarding patent licensing. Commerce has delegated to NIST the authority to promulgate implementing regulations pertaining to patenting and licensing at federal labs—that is, regulations that indicate how agencies are to implement statutory provisions, including the goal of, among other things, promoting commercial use of inventions. NIST has developed regulations, but they do not link the financial terms of federal patent licenses and the statutory goal of promoting commercial use of inventions.
As the host of the FLC and a coordinator for the Interagency Working Group for Technology Transfer, NIST also plays a role in supporting the development of interagency guidance on patent licensing that covers, among other topics, establishing financial terms in licenses. However, existing interagency guidance provides limited information regarding the goals for financial terms. For example, the FLC desk reference contains a statement that links royalty rates to the goal of promoting commercial use but does not clarify how the goal applies to other financial terms. Furthermore, the FLC desk reference states that labs are entitled to market-based compensation for their intellectual property. However, licenses are structured differently to accomplish different goals, and a primary focus on obtaining market-based compensation may undermine the goal of promoting commercial use.
As the lead agency on the government-wide effort to find commercial uses or practical applications for federally funded inventions, NIST has been delegated the responsibility to promulgate regulations pertaining to patenting and licensing at federal labs, including implementing the statutory goal of promoting commercial use. NIST officials stated that a change to the regulations could be made as part of an upcoming rule-making process. However, a stakeholder and agency officials noted that, in making any such change, NIST should avoid prescriptive language that mandates specific practices. NIST officials also stated that they could update relevant guidance on this issue through one of their current efforts. By clarifying the link between establishing federal patent license financial terms and the goal of encouraging commercial use, through the upcoming rule-making process and updates to relevant guidance, NIST would have better assurance that financial terms in patent licenses are targeted to that goal.
Federal Agencies and Labs Have Varying Amounts of Information on Comparable Licenses, but Such Information Is Not Shared across Agencies
According to agency and lab officials, comparable license information can be used as a point of reference to guide establishing financial and other terms in new patent licenses. Just as real estate agents look at sales of comparable houses when setting the selling price of a house, patent licensing professionals can look at licenses for comparable inventions when determining what financial terms to include in a new license.
However, federal labs have varying amounts of information on comparable licenses when establishing financial terms. NASA and NIH each have an agency-wide system that enables each lab to access information from other labs at the agency, including the financial terms in previous licenses. NIH agency officials reported that technology transfer offices have access to thousands of previous licenses and refer to such information frequently to help establish the financial terms of new licenses. Labs at DOE and DOD are generally responsible for tracking their own licenses and do not have access to information on comparable licenses from other labs in their agencies. According to DOE officials, under DOE contracts and relevant law, license information at the agency’s contractor-operated labs is considered business sensitive and a contractor-owned record that resides at the labs, which limits DOE’s ability to share it. Officials at DOE and DOD’s military departments reported that they have investigated and continue to investigate systems that would provide greater access to information on financial terms but have encountered some obstacles, such as network security requirements, that they have not yet overcome.
To bolster their access to comparable license information, some federal labs obtain private sector license information. For example, some lab officials we interviewed said that they have occasionally purchased benchmarking guides and access to other private sector license information through organizations such as AUTM and LES. According to some lab officials and stakeholders, private sector license information is useful for understanding acceptable royalty rates in industry and may cover certain technology areas or inventions that are new to the lab. However, access to private sector license information is typically ad hoc and can be limited by its cost, according to agency and lab officials. Some agency and lab officials stated that they would like increased access to private sector information on comparable licenses. For example, according to agency officials at DOE, there is an effort under way to obtain benchmark financial terms from labs and universities with comparable R&D portfolios.
Although lab officials and stakeholders said that private licensing information can be helpful for understanding financial terms acceptable to the market, using private license information may not always be appropriate for government licenses. Private licenses are often structured to maximize revenue for the licensor—not necessarily to promote commercial use or practical application, according to stakeholders. Our review of economic literature and interviews with stakeholders and agency officials suggest that licenses are structured differently to accomplish different goals. For example, a few stakeholders and agency officials noted that federal licenses would typically be less exclusive and have different financial terms than those in the private sector, where there is a greater emphasis on generating revenue from R&D investments. Some stakeholders and agency officials also stated that in general the value of a government license may be different from that of a private license for a similar technology because of the rights the government retains on its licenses. In addition, according to agency and lab officials and stakeholders, government inventions tend to be in an earlier stage of development than those in the private sector, potentially making it more difficult to find licenses for comparable inventions in the private sector.
Some agency and lab officials and a few stakeholders stated that it would be valuable for federal labs to have greater access to information on financial terms in government licenses to help establish a benchmark for financial terms. Our analysis of approximately 21,000 patents assigned to DOD, DOE, NASA, and NIH and issued since 2000 shows that different agencies may patent inventions in similar technology fields. All four agencies we reviewed had patented inventions in 26 of the 35 technology fields covered by the patents, and all had 10 or more patents in 9 of the 35 fields. DOD and DOE, including DOE contractor-operated labs, had more patents in a wider range of fields than the other agencies. On the other hand, HHS's patents are more concentrated in fields such as biotechnology and medical technology. However, even in biotechnology, hundreds of patents were issued to the other three agencies. Although other information would be needed to determine whether the agencies' inventions are truly comparable, their having patents in the same technology fields suggests that some government-wide information on financial terms could be useful to federal labs.
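The kind of cross-tabulation underlying such an analysis can be sketched as follows; the records here are hypothetical stand-ins, and a real analysis would draw on USPTO patent assignment data.

    import pandas as pd

    # Hypothetical patent records (agency assignee and technology field).
    patents = pd.DataFrame({
        "agency": ["DOD", "DOE", "NASA", "NIH", "DOD", "DOE"],
        "field":  ["biotechnology", "biotechnology", "optics",
                   "biotechnology", "optics", "materials"],
    })

    # Count patents per agency within each technology field.
    counts = pd.crosstab(patents["field"], patents["agency"])

    # Fields in which more than one agency holds patents suggest areas
    # where comparable license information could usefully be shared.
    overlap = counts[(counts > 0).sum(axis=1) > 1]
    print(overlap)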
Under internal control standards for the federal government, management should externally communicate the necessary quality information to achieve the entity’s objectives; this includes communicating with and obtaining quality information from external parties using established reporting lines. The four agencies we reviewed communicate and share information through several collaborative efforts to improve federal patent licensing, including the FLC and the Interagency Working Group for Technology Transfer. For example, agency officials said they share experiences, ideas, and best practices related to patent licensing informally through these groups. However, there is no formal sharing of information on financial terms in patent licenses among federal labs, according to NIST officials.
We have previously reported that federal agencies engaged in interagency collaborative efforts should identify and address needs by leveraging their resources to obtain additional benefits that would not be available if they were working separately. NIST plays a leading role in these interagency collaborative efforts on patent licensing, including gathering and sharing information among the labs. As the administrative host for the FLC, NIST has already supported an effort to share information about available technology. NIST is also responsible for gathering information from technology transfer agencies, including gross license income, and submitting summary reports to Congress annually and sharing them with the public. Furthermore, NIST has initiated a survey of practices at federal technology transfer offices and shared some preliminary information with the agencies. By facilitating the formal sharing of comparable license information, NIST could help provide agencies and labs with benchmarks for evaluating which financial terms are best suited to licensing inventions successfully.
NIST officials stated that gathering and sharing comparable license information could be done as part of their existing efforts but that there are obstacles to doing so. Specifically, NIST officials stated that this effort would add to the reporting burdens of agencies, may require additional resources, and would need to take into account data security and proprietary information considerations. Agency officials also stressed that any effort to share license terms would have to ensure that confidential and proprietary information from licensees, including specific financial terms from a particular license, is not divulged.
Conclusions
Federal labs under DOD, DOE, NASA, and NIH face challenges at various stages of the patent licensing process, and agencies have taken some steps to address such challenges. For example, ensuring that researchers identify and disclose inventions is a government-wide challenge, according to interviews with external stakeholders and our analysis of relevant literature. However, such challenges in federal patent licensing are not fully reported by NIST, the lead agency delegated by Commerce to provide annual summary reports to Congress on federal technology transfer activities. By fully reporting the range of these challenges that agencies and labs face, NIST can ensure that Congress has greater awareness of these challenges. To help identify these challenges, NIST could, for example, leverage its survey of practices at federal technology transfer offices, past FLC studies, and agency reports.
In addition, DOE, DOD, NASA, and NIH documentation does not consistently link establishing financial terms in patent licenses to the statutory goal of promoting commercial use. As the lead agency on the government-wide effort to find commercial uses or practical applications for federally funded inventions, NIST has been delegated the responsibility to promulgate regulations pertaining to patenting and licensing at federal labs, including implementing the statutory goal of promoting commercial use. By clarifying the link between establishing patent license financial terms and the goal of encouraging commercial use, through the upcoming rule-making process and updating relevant guidance, NIST would have better assurance that financial terms in patent licenses are targeted to that goal.
Further, federal labs have varying amounts of information on comparable government licenses when establishing financial terms. However, there is no formal sharing of information on financial terms in patent licenses among federal labs, according to NIST officials. NIST plays a leading role in interagency collaborative efforts on patent licensing, including gathering and sharing information among the labs. By facilitating the formal sharing of comparable license information, NIST could help provide agencies and labs with benchmarks for evaluating which financial terms are best suited to successfully licensing inventions.
To establish financial terms, DOD, DOE, NASA, and NIH labs rely on the expertise of their technology transfer staff and take a number of steps to build and share expertise, but had limited documentation of their processes for establishing the financial terms of patent licenses. Agency and lab officials explained that there is a need for flexibility, and thus not every aspect of their processes can be documented in detail. By documenting processes for establishing the financial terms of licenses while maintaining enough flexibility to tailor the specific terms of each license, the four agencies could have more reasonable assurance of consistency across their labs regardless of the expertise of staff.
Recommendations for Executive Action
We are making seven recommendations, three to Commerce and one each to DOD, DOE, NASA, and NIH:
The Secretary of Commerce should instruct NIST to fully report the range of challenges in federal patent licensing, such as those outlined in this report, by, for example, leveraging its survey of practices at federal technology transfer offices, past FLC studies, and agency reports and including that information in its summary reports to Congress. (Recommendation 1)
The Secretary of Commerce should instruct NIST to clarify the link between establishing patent license financial terms and the goal of promoting commercial use, through appropriate means, such as the upcoming rule-making process and updating relevant guidance. (Recommendation 2)
The Secretary of Commerce should instruct NIST to facilitate formal information sharing among the agencies to provide federal labs with information on financial terms in comparable patent licenses, as appropriate. (Recommendation 3)
The Secretary of Defense should ensure that the agency or its labs document processes for establishing license financial terms, while maintaining flexibility to tailor the specific financial terms of each license. (Recommendation 4)
The Secretary of Energy should ensure that the agency or its labs document processes for establishing license financial terms, while maintaining flexibility to tailor the specific financial terms of each license. (Recommendation 5)
The Administrator of NASA should ensure that the agency or its labs document processes for establishing license financial terms, while maintaining flexibility to tailor the specific financial terms of each license. (Recommendation 6)
The Director of NIH should ensure that the agency or its labs document processes for establishing license financial terms, while maintaining flexibility to tailor the specific financial terms of each license. (Recommendation 7)
Agency Comments
We provided a draft of this report to Commerce, DOD, DOE, NASA, and NIH for review and comment. All provided written responses, which are reproduced in appendixes IV-VIII. Commerce and NIH also provided technical comments, which we incorporated as appropriate.
Commerce agreed with all three of our recommendations to the agency. In general, the agency stated that it will work through interagency groups, such as the Interagency Working Group for Technology Transfer and the FLC, to address our recommendations, including by creating a specific section in its annual reports to Congress with more details on challenges agencies and labs face in patent licensing and by examining and implementing solutions to facilitate the sharing of information among agencies. According to Commerce, such solutions could include identifying licensing officers who have expertise and creating a community of practice in which they can share best practices and approaches for establishing license terms.
DOD, DOE, and HHS agreed, and NASA partially agreed, with the recommendation that they or their labs document processes for establishing financial terms in patent licenses. In its written response, DOD said it will direct the military departments and appropriate defense agencies to have their labs establish documentation of their licensing processes as appropriate. In their written comments, DOE, HHS, and NASA noted the complexity and nuances associated with negotiating license agreements, such as understanding the market for the technology and the level of risk involved. Further, DOE and NASA noted challenges that limit their ability to document processes and emphasized the importance of maintaining flexibility in establishing financial terms in patent licenses. We agree that some flexibility in establishing financial terms of patent licenses is important. DOE, HHS, and NASA all identified steps they would take to ensure that at least some processes for establishing financial terms are documented.
We are sending copies of this report to the appropriate congressional committees; the Secretaries of Commerce, Defense, and Energy; the Administrator of NASA; and the Director of NIH. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IX.
Appendix I: Examples of Inventions Developed in Federal Labs
Figure 3 presents examples of inventions developed in federal laboratories under the Department of Defense, Department of Energy, National Aeronautics and Space Administration, and National Institutes of Health.
Appendix II: Selected Descriptions of Challenges Federal Labs Face in Patent Licensing
The following are additional descriptions of challenges in the seven areas of the patent licensing process as well as challenges in prioritizing patent licensing faced by federal laboratories (lab) that were identified by external stakeholders and by agency and lab officials at the Department of Defense (DOD), Department of Energy (DOE), National Aeronautics and Space Administration (NASA), and the National Institutes of Health (NIH)—as well as steps agencies and labs have taken to address those challenges.
Challenges in the Seven Areas of the Patent Licensing Process
Identifying Inventions
DOD, DOE, NASA, and NIH officials reported challenges in identifying inventions that lab researchers developed. When a federal researcher does not disclose to lab officials an invention developed in a federal lab, the opportunity to assess the invention’s potential for commercial use may be lost.
Federal officials cited various reasons why researchers do not disclose inventions. For instance, several DOD, DOE, NASA, and NIH agency and lab officials stated that some researchers do not have adequate training in identifying potentially patentable inventions. Some agency and lab officials pointed to other reasons why invention disclosures may not be filed, such as researchers not having enough incentive to disclose their inventions. Navy officials stated that researchers are often intimidated by the overall invention disclosure process and tend to focus on their research rather than consider what could be patentable. Officials at one NASA lab noted that they have come across a few contractor employees who do not see the benefit of filing invention disclosures, and sometimes researchers are too busy to engage in the patenting process. According to National Institute of Standards and Technology (NIST) officials, some researchers decide not to disclose an invention because they believe filing a patent application, which includes a filing fee, could take away money from the research itself, and most federal researchers are not motivated by the potential for receiving royalty distributions.
Our analysis of relevant literature and interviews with stakeholders also showed that researchers not identifying and disclosing inventions is a government-wide challenge. One stakeholder stated that researchers at federal labs generally have limited understanding of the patenting process, including what constitutes patentable subject matter and how to conduct a prior art search on the technology to determine whether it is patentable.
DOD, DOE, NASA, and NIH agency and lab officials stated that they are taking a variety of actions to help address these challenges. For example, some agency and lab officials stated that labs conduct training to educate researchers about the patenting process, inform researchers about statutory requirements to disclose inventions, and incentivize them by acknowledging their efforts through awards and monetary incentives when their inventions reach commercial success.
Keeping Track of Inventions
DOD, DOE, and NIH officials described their agencies’ systems for keeping track of inventions developed in the labs as inadequate or in need of improvement. How agencies and labs keep track of such inventions can range from spreadsheets to sophisticated databases that manage all technology transfer activities, including keeping track of patented inventions and licenses.
Currently, DOD has a decentralized approach to keeping track of inventions, which, according to DOD officials, needs improvement given the agency's size. Each military department has its own systems to track and store information on inventions developed in the labs. Officials from DOD and the departments described the systems as inadequate for keeping track of the agency's inventions. For example, Navy officials described the department's in-house system for tracking inventions as "plagued by outages" and thus ineffective. According to officials, the Army funds systems that track inventions, but these systems differ from one another, are not connected to headquarters, and have been suspended since 2015.
We have previously reported on federal agencies' challenges in monitoring technology transfer activities, including tracking inventions developed in the federal labs. Several stakeholders we interviewed also noted that keeping track of inventions is a government-wide challenge. According to some stakeholders, federal labs not only have inadequate systems to keep track of their own inventions but also limited information on the kinds of inventions being developed in federal labs across the government. The result is that agencies risk being unaware of research across the labs, which can limit their ability to leverage other federal research efforts. One stakeholder specifically noted that the Interagency Edison (iEdison) reporting system—which allows federal grantees and contractors to report federally funded inventions to the agency that issued the funding award, including inventions developed by some contractor-operated labs—is difficult to navigate and needs improvement. Another stakeholder stated that there can be independent research at three or four labs under different agencies but little interaction among those labs about the research. Information on federal lab inventions can also be accessed publicly through the Federal Laboratory Consortium (FLC) website; however, NIST officials stated that the website's information on inventions relies on agencies to submit accurate information, which may be limited by the agencies' tracking systems.
DOD, DOE, and NIH officials stated that they have made efforts to improve their current systems. For example, since our 2015 report on the agency’s challenges with its data management systems that track federally funded inventions, DOE officials reported that they have developed a plan to leverage the capabilities of the iEdison reporting system to unify the agency’s data management process. While DOD officials stated that the agency has been unsuccessful in purchasing software to track inventions across the agency, Air Force officials said they are developing a pilot program and seeking new software to manage the Air Force’s inventions, and they expect the pilot program to increase the number of invention disclosures. Air Force and NIH officials stated that they have contacted NASA, which has a centralized system for tracking inventions, about leveraging the agency’s expertise. NASA officials reported that they have been hosting regular webinars with other agencies to determine whether NASA’s tracking system could help meet other agencies’ needs.
Selecting Inventions to Patent
DOD, DOE, NASA, and NIH agency and lab officials cited selecting inventions to patent as a challenge because of the expense of patenting fees. According to some agency and lab officials we interviewed, fees paid to the United States Patent and Trademark Office (USPTO) affect their decision on whether to patent an invention. For example, DOE officials stated that budget constraints force them to make decisions about whether they should file a patent or engage in other agency activities. NIH officials stated that the agency maintains fewer patents because of the patent maintenance fees and the agency’s tight budgets.
NASA officials reported that one step the agency is taking to deal with the costs of maintaining its issued patents is to identify technologies with low licensing potential and allow the patents to expire if they fail to attract licensees. NASA has created a searchable database that catalogs thousands of expired NASA patents already in the public domain, making them freely available to industry for unrestricted commercial use.
Attracting Potential Licensees
Federal labs under DOD, DOE, NASA, and NIH face challenges that limit their ability to attract potential licensees, according to agency and lab officials. Even officials at NASA, which NIST officials described as one of the best agencies at promoting its inventions to industry, said the agency is typically not in a position to select among multiple licensees and would like more companies to license its patents.
There are various reasons why federal labs struggle to attract companies interested in licensing their inventions, according to agency and lab officials we interviewed. First, several agency and lab officials noted that the number of entities that want to license inventions is generally not large. Second, some agency and lab officials identified inadequate promotion of federal inventions and licensing opportunities to companies, including start-ups, as a factor. Third, some agency and lab officials noted that their inventions are often in the early stages of development and thus pose more of a risk for companies to license.
Based on our analysis of relevant literature and interviews with stakeholders, difficulty in attracting industry to license inventions developed in federal labs is a government-wide challenge. According to several stakeholders, industry perceives federal labs as not friendly to the private sector when it comes to patent licensing, especially for start-ups. For example, one stakeholder said that it is rare that federal agencies want to license to a start-up, and that more often the labs want a "safer route" by licensing inventions to large companies that already have a steady revenue stream. Another stakeholder said that DOE's contractor-operated labs in particular tend to not issue exclusive licenses to start-ups and prefer to license to large companies because the agency sees those companies as presenting less of a risk. In addition, stakeholders stated that federal inventions are often not yet commercially viable, which can deter companies from licensing federal inventions. One stakeholder, for example, stated that NASA officials may think that NASA technology is more developed than it is and therefore underestimate how long it will take a company to develop it for practical application, the millions of dollars needed to develop it, and whether it can be manufactured for commercial use.
DOD, DOE, NASA, and NIH officials stated that they are taking steps to attract potential licensees by, for example, conducting local outreach to attract companies and working on improving their databases so that companies can learn about federal inventions available for licensing. For instance, NASA officials stated that the agency’s comprehensive database accessible to potential licensees uses a wide variety of search criteria and attracted 6 million unique visitors in 2016.
Negotiating the License Agreement
Agency and lab officials and stakeholders noted that federal labs face challenges in negotiating the license agreement because the process is (1) lengthy and (2) uniquely regulated, which can deter companies from licensing federal inventions.
Stakeholders stated that the federal licensing process can take anywhere from about 3 months to more than 2 years. Some stakeholders stated that, from their point of view, taking a year to negotiate a license agreement is too long. One stakeholder said that such lengthy processes are particularly difficult for start-ups, which often need to finalize license agreements in 3 months. Another stakeholder noted that the federal government in general does not understand how urgent it is for companies to complete the licensing process in a timely manner. Although actions on the part of both the labs and companies can cause delays, if the overall process is time-consuming, prospective licensees will tend to move on to something else instead, according to agency and lab officials and stakeholders.
Based on our analysis of licensing information provided by the agencies, we found that the amount of time from receipt of an application for a license to signature of the license by the lab varies widely. Specifically, based on this measure of the length of the process, approximately 60 percent of 132 licenses effective in fiscal year 2014 took at most 6 months for DOD, DOE, NASA, and NIH labs to process. Officials at one Navy lab stated that issuing an invention license to a company within 6 months is "highly unusual," while officials at one NASA lab stated that the fastest they had issued a license was one week, because the start-up was prepared and ready to go. For more on our analysis of licensing information from DOD, DOE, NASA, and NIH, see appendix III.
Several agency and lab officials also noted that federal regulations associated with patent licensing can deter companies from licensing federal inventions. Such regulations include requirements that are unique to federally funded and federally owned inventions, including that products arising from the invention must be substantially manufactured in the United States and that the government may retain rights to the invention and terminate the license agreement if the licensee does not take steps to commercialize the technology. In particular, NASA officials stated that venture capital firms sometimes oppose the government retaining rights for federal technology used by start-ups that they fund. According to DOD and DOE officials, federal regulations require a level of documentation or explanation that can deter some companies from licensing inventions developed in federal labs. Based on interviews with stakeholders, as well as our analysis of relevant literature, companies' concerns about federal regulations are a government-wide challenge that federal labs face in licensing their inventions.
For example, according to NIST officials, the U.S. manufacturing requirement can influence whether companies consider licensing federal inventions, because manufacturing in the United States can be more expensive than manufacturing in other countries. NIST officials also stated that some prospective licensees initially become concerned when they are told about march-in authority, because it applies to federally funded inventions and contractors. However, once companies are told that it is a legal requirement and that the provision has never been exercised, they generally become more comfortable with it.
DOD, DOE, NASA, and NIH agency officials said they are taking steps to address companies’ concerns about the time it takes to negotiate a license agreement and their unfamiliarity with federal licensing requirements. For instance, NASA, NIH, and Navy officials told us they have developed model license agreements to help guide companies through the process, and NASA and NIH have special license agreements for start-ups to shorten the licensing process. Also, DOE created an agency-wide licensing guide to help prospective licensees navigate federal licensing requirements.
Monitoring Licensee Performance
DOD, DOE, NASA, and NIH agency and lab officials we interviewed identified limited resources and inadequate monitoring systems as factors that make it difficult to monitor licensee performance.
NASA and NIH officials reported that the number of license agreements has increased in their labs and that they do not have enough resources to monitor licenses. DOD officials stated that the agency’s technology transfer offices have traditionally been understaffed and that the agency’s monitoring systems are inadequate for tracking the status of issued licenses. Officials at one DOE lab stated that collecting royalties from licensees can be difficult because the lab does not have enough funds to support that activity. In addition, agencies may rely on the same systems they use to keep track of inventions to monitor licensee performance, and as previously discussed, these systems are in need of improvement.
Some stakeholders we interviewed noted that monitoring licensee performance is a government-wide challenge. They explained that sometimes licensees do not pay fees if they are not contacted, and a few stakeholders stated that federal labs have limited funding and resources to monitor contracts effectively. One stakeholder recalled an agency that did not communicate with a licensee for 2 years after the license agreement was signed. According to another stakeholder, ineffective monitoring of licensee performance may limit federal labs' ability to determine whether a company is developing federal inventions for commercial use per the terms and conditions of the license agreement.
Some agency and lab officials stated that they have taken steps to regularly monitor licensees. In particular, at NASA and NIH—where monitoring of licensee performance is centralized at the agency level—officials have programmed systems to remind staff to check on licensee performance.
Measuring Licensing Outcomes
Federal labs, including those under DOD, DOE, NASA, and NIH, also face challenges in effectively measuring patent licensing outcomes, based on our interviews with stakeholders and analysis of relevant literature. According to one stakeholder, labs need metrics to assess whether a licensee has made progress on developing the invention for commercial use and whether the lab needs to get the license back and give it to another company.
However, some stakeholders we interviewed stated that although the 2011 presidential memorandum on technology transfer called for strategies to establish metrics, federal labs are still struggling to implement metrics for measuring technology transfer outcomes, including patent licensing activities. Stakeholders we interviewed and our analysis of relevant literature indicated that federal labs generally track the numbers of patents, licenses, and revenues instead of using metrics that identify direct economic impacts from patent licensing and other technology transfer activities. In agencies where such metrics do exist, they may be applied inconsistently across labs. For example, officials at one DOE lab stated that DOE metrics are generally not consistent across the agency's labs.
DOD, DOE, NASA, and NIH agency officials stated that they are working to improve their metrics and incorporate metrics beyond tracking numbers of patents, licenses, and revenues. For example, in addition to measuring the numbers of patents and licenses issued, NASA and Air Force officials stated that they are also measuring factors that affect the length of time it takes for their labs to process licenses. Such information, officials said, will help them expedite the licensing process.
Prioritizing Patent Licensing as an Agency Mission
DOD, DOE, NASA, and NIH face challenges in prioritizing patent licensing as part of their agency missions, which can affect the entire patent licensing process.
For example, DOD and DOE agency and lab officials stated that an agency’s mission affects patent licensing activities. DOD officials stated that the agency’s primary mission is protecting the warfighter and that patent licensing is a secondary benefit to the agency. According to DOE officials, the nuclear security labs do not focus on patenting but instead on developing technologies associated with a weapons program.
In addition, several stakeholders we interviewed stated that some agencies and labs do not have a culture that prioritizes patent licensing. In particular, one stakeholder stated that at some federal labs, patent licensing is not reflected in performance evaluation management plans, which can help incentivize lab personnel to engage in patent licensing activities. A few stakeholders stated that at labs where management does not prioritize patent licensing, researchers' careers can be negatively affected if they engage in such activities.
DOD, DOE, NASA, and NIH agency and lab officials cited limited resources to conduct the range of activities related to patent licensing. For example, sometimes there is just one person at a DOD lab overseeing technology transfer activities, according to DOD agency and lab officials. Officials at one NIH lab stated that many labs across the agency do not receive enough royalties to offset their patent licensing costs. In its fiscal year 2015 report—its most recent report—to Congress on federal technology transfer activities, NIST reported that the federal intramural research budget, which includes patent licensing activities, has generally not increased in the past 4 fiscal years. Several agency and lab officials stated that budget constraints affect the extent to which they can engage in patent licensing activities—including patent enforcement, which can cost millions of dollars and presents challenges for federal labs, according to DOE officials.
Some agency and lab officials stated they have taken steps to overcome such challenges. For example, officials at one Navy lab stated that the lab has management support and nine patent attorneys to assist in the reviews of researchers’ invention disclosures. Also, officials at one NIH lab stated that the lab has strong management support and a good royalty stream from successful inventions that pay for patenting and other reinvestments, which allows the lab to not draw from its appropriations.
Appendix III: Patent License Summary for Licenses Effective in Fiscal Year 2014
Tables 1 through 3 and figures 4 through 6 are based on 222 patent licenses that became effective in fiscal year 2014, and associated data, provided by the Department of Defense (specifically the Army, Navy, and Air Force), Department of Energy, National Aeronautics and Space Administration, and National Institutes of Health. They include both data provided by the agencies and information compiled directly from the licenses. The tables and figures are provided for informational purposes and are not generalizable to all patent licenses.
Appendix IV: Comments from the Department of Commerce
Appendix V: Comments from the Department of Defense
Appendix VI: Comments from the Department of Energy
Appendix VII: Comments from the Department of Health and Human Services
Appendix VIII: Comments from the National Aeronautics and Space Administration
Appendix IX: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Robert J. Marek (Assistant Director), James D. Ashley, Kevin S. Bray, Virginia A. Chanley, Ellen L. Fried, Sarah C. Gilliland, Cheryl M. Harris, Robert Letzler, Gregory A. Marchand, Christopher P. Murray, Emmy L. Rhine Paule, Dan C. Royer, Ardith A. Spence, Vasiliki Theodoropoulos, and Reed Van Beveren made key contributions to this report.
Bibliography
Bozeman, Barry. Technology Transfer Research and Evaluation: Implications for Federal Laboratory Practice, Final Report to VNS Group, Inc. and the U.S. National Institute of Standards and Technology, 2013. Accessed March 14, 2018. https://www.nist.gov/tpo/return-investment-roi-initiative.
Bozeman, Barry, Heather Rimes, and Jan Youtie. "The Evolving State-of-the-Art in Technology Transfer Research: Revisiting the Contingent Effectiveness Model." Research Policy, vol. 44, no. 1 (2014): 34-49.
Franza, Richard M. and Kevin P. Grant. “Improving Federal to Private Sector Technology Transfer: A Study Identifies Seven Critical Factors with the Greatest Impact on Whether Transfer Attempt Succeeds or Fails.” Research Technology Management, vol. 49, no. 3 (2006): 36-40.
Greiner, Michael A. and Richard M. Franza. "Barriers and Bridges for Successful Environmental Technology Transfer." Journal of Technology Transfer, vol. 28, no. 2 (2003): 167-177.

Howieson, Susannah V., Stephanie Shipp, Gina Walejko, Pamela Rambow, Vanessa Peña, Sherrica S. Holloman, and Phillip N. Miller. Exemplar Practices for Department of Defense Technology Transfer. Alexandria, Va.: Institute for Defense Analyses, January 2013.
Hughes, Mary E., Susannah V. Howieson, Gina Walejko, Nayanee Gupta, Seth Jonas, Ashley T. Brenner, Dawn Holmes, Edward Shyu, and Stephanie Shipp. Technology Transfer and Commercialization Landscape in the Federal Laboratories. Alexandria, Va.: Institute for Defense Analyses, June 2011.
Jin, D., X. Mo, A. M. Subramanian, K. H. Chai, and C. C. Hang. "Key Management Processes to Technology Transfer Success." 2016 IEEE International Conference on Management of Innovation and Technology (2016): 67-71.
Linton, Jonathan D., Cesar A. Lombana, and A. D. Romig, Jr. “Accelerating Technology Transfer from Federal Laboratories to the Private Sector—the Business Development Wheel.” Engineering Management Journal, vol. 13, no. 3 (2001): 15-20.
Office of Science and Technology Policy and the National Institutes of Health, National Heart, Lung and Blood Institute. Lab-to-Market Interagency Summit: Recommendations from the National Expert Panel. Washington, D.C.: National Expert Panel, White House Conference Center, May 2013.
Stepp, Matthew, Sean Pool, Nick Loris, and Jack Spencer. Turning the Page: Reimagining the National Labs in the 21st Century Innovation Economy. Washington, D.C.: Information Technology and Innovation Foundation, Center for American Progress, and Heritage Foundation, June 2013.
Toregas, Costis, E. Colin Campbell, Sharon S. Dawes, Harold B. Finger, Michael D. Griffin, and Thomas Stackhouse. Technology Transfer: Bringing Innovation to NASA and the Nation. Washington, D.C.: National Academy of Public Administration, November 2004.
U.S. Department of Energy, Commission to Review the Effectiveness of the National Energy Laboratories. Securing America's Future: Realizing the Potential of the Department of Energy's National Laboratories, vol. 1, Executive Report. Washington, D.C.: October 2015. Accessed March 14, 2018. https://www.energy.gov/labcommission/downloads/final-report-commission-review-effectiveness-national-energy-laboratories.
Wang, Mark, Shari Pfleeger, David M. Adamson, Gabrielle Bloom, William Butz, Donna Fossum, Mihal Gross, et al. Technology Transfer of Federally Funded R&D: Perspectives from a Forum. Conference Proceedings. Santa Monica, Calif.: RAND Corporation, 2003. | Why GAO Did This Study
The federal government spends approximately $137 billion annually on research and development—mostly at DOD, DOE, NASA, and NIH—to further agencies' missions, including at federal labs. Multiple laws have directed agencies and labs to encourage commercial use of their inventions, in part by licensing patents, to private sector companies and others that aim to further develop and bring the inventions to market.
GAO was asked to review agency practices for managing inventions developed at federal labs, with a particular focus on patent licensing. This report examines (1) challenges in licensing patents and steps taken to address and report them and (2) information to guide establishing financial terms in patent licenses at DOD, DOE, NASA, and NIH. GAO reviewed relevant literature, laws, and agency documents, including patent licenses from fiscal year 2014, chosen to match the most recent NIST summary report available when the licenses were requested. GAO also interviewed agency officials and knowledgeable stakeholders, including organizations that assist federal labs in licensing patents.
What GAO Found
Federal agency and laboratory (lab) officials identified challenges in licensing patents across the federal government, and agencies have taken some steps to address and report them. Patent licensing is a technology transfer activity that allows, for example, federal inventions to be legally transferred to the private sector for commercial use. Specifically, officials at the Departments of Defense (DOD) and Energy (DOE), National Aeronautics and Space Administration (NASA), and National Institutes of Health (NIH), as well as external stakeholders, noted challenges in having researchers identify potentially patentable inventions. DOD, DOE, and NIH officials also cited having inadequate internal systems to keep track of inventions developed in the labs. In addition, several stakeholders stated that licensing patented inventions can be lengthy and bureaucratic, which may deter companies from licensing. The agencies reported taking steps to address these challenges, such as implementing model license agreements across labs to expedite the process.
The Department of Commerce has delegated to its National Institute of Standards and Technology (NIST) the responsibility for annually reporting on agencies' technology transfer activities, including patent licensing. Although NIST has reported some challenges, it has not fully reported the range of challenges identified by agency and lab officials and stakeholders. NIST officials stated that they were generally aware of the challenges but had not considered including them to a greater degree in their annual reports to Congress. By fully reporting the range of challenges in federal patent licensing, NIST has the opportunity to further ensure that Congress is more aware of challenges that limit agencies' efforts and ways for potentially addressing those challenges.
Federal agencies and labs have limited information to guide officials when establishing the financial terms of patent licenses. For example, while federal labs can use comparable licenses to help establish financial terms, their access to information on comparable licenses from other labs varies, and such information is not formally shared among the agencies. Based on its established interagency role, NIST is best positioned to assist agencies in sharing information on comparable licenses, in accordance with leading practices for interagency collaboration. By doing so, NIST would provide federal agencies and labs with useful information that can help them better establish financial terms and successfully license inventions.
What GAO Recommends
GAO is making seven recommendations, including that Commerce instruct NIST to fully report the range of challenges in federal patent licensing in its annual reports to Congress and facilitate information sharing among agencies. Commerce, DOD, DOE, NASA, and NIH generally agreed with GAO's recommendations and are taking steps to implement them. |
Background
Major Drug-Producing and Drug-Transit Countries in the Western Hemisphere
The majority of illicit drugs consumed in the United States are produced in Mexico and South America and enter the United States across the southwest border or through the Caribbean. Among countries in the Western Hemisphere, Colombia and Peru are major producers of illicit drugs, while Bolivia, Jamaica, and Mexico are both major producers and major transit countries, according to State (see fig. 1).
Mexico is a major source and transit country for heroin, methamphetamine, and marijuana destined for the U.S. market. Jamaica is likewise the largest Caribbean supplier of marijuana for the U.S. market. Colombia is the world’s top producer of cocaine and is the major provider of cocaine available in the United States. While Bolivia and Peru are also major producers of cocaine, cocaine from these countries is generally smuggled into other South American countries for domestic consumption or for shipment to Europe, East Asia, and beyond, according to State.
According to U.S. government estimates, illicit drugs originating in Mexico enter the United States directly through the southwest border, but virtually all cocaine from South America and marijuana from Jamaica are trafficked to the United States through the "Transit Zone"—a 7-million-square-mile area that encompasses Central America, Mexico, the eastern Pacific Ocean, the Gulf of Mexico, and the Caribbean Sea. The Transit Zone has four principal maritime trafficking routes: the Eastern Pacific, Western Caribbean, Central Caribbean, and Eastern Caribbean. The Transit Zone land route is funneled north through Central America into Mexico, where it splits in several directions up to the U.S. southwest border. Although Canada is not within the Transit Zone, various drugs, including fentanyl, transit through it before entering the United States, according to the Department of State.
Illicit Drug-Trafficking Shifts and Related Challenges
In recent years, the production, trafficking, and marketing of various illicit substances consumed in the United States have undergone significant shifts. For example, according to the 2016 National Drug Control Strategy, over the previous 8 years, opioid abuse emerged as the greatest drug threat to the nation. This development was complicated by a spike in the supply and purity of heroin, primarily from Mexico, resulting in a combined epidemic of heroin-opioid overdose deaths. According to the Centers for Disease Control and Prevention, heroin overdose deaths more than tripled between 2010 and 2015, as powerful synthetic opioids, notably illicit fentanyl, were often mixed with heroin without the user’s knowledge. Similarly, in its 2017 International Narcotics Control Strategy Report, State reported various indicators suggesting a significant increase in cocaine production and trafficking from Colombia. For example, according to this report, coca cultivation in Colombia increased by 39 percent in 2014 and by 42 percent in 2015, and the amount of cocaine trafficked out of Colombia has reached record levels. Consistent with these reported trends in cocaine production and trafficking, Centers for Disease Control and Prevention data indicate that, after falling sharply in the middle of the past decade, overdose deaths related to cocaine have been gradually rising in the United States. Finally, while a significant portion of the marijuana consumed in the United States continues to be smuggled from Western Hemisphere countries, including Canada, Jamaica, and Mexico, the domestic production and marketing of marijuana are undergoing important shifts, as several states and the District of Columbia have passed measures that legalize possession of limited amounts of the drug and provide for regulation of its production, processing, and sales. These shifting trends pose challenges for agencies’ counternarcotics efforts in the Western Hemisphere and domestically, as they strive to respond to changing conditions.
Role of ONDCP in U.S. Counternarcotics Efforts in the Western Hemisphere
ONDCP coordinates the National Drug Control Program and develops a 5-year National Drug Control Strategy, which it updates annually, as well as a number of companion strategies that focus on various geographical areas and emerging threats, to articulate the administration’s drug control policy. ONDCP was established by the Anti-Drug Abuse Act of 1988 to, among other things, enhance national drug control planning and coordination and represent the drug policies of the executive branch before Congress. In this role, ONDCP is responsible for (1) developing a national drug control policy, (2) developing and applying specific goals and performance measurements to evaluate the effectiveness of national drug control policy and National Drug Control Program agencies’ programs, (3) overseeing and coordinating the implementation of the national drug control policy, and (4) assessing and certifying the adequacy of the budget for national drug control programs.
ONDCP requires National Drug Control Program agencies to submit an annual drug control budget, categorized into 10 federal drug control program areas. One program area is international efforts, which ONDCP defines as activities focused on regions outside the United States that are intended to reduce illegal drug availability in the United States or abroad. Three additional ONDCP drug control program areas—intelligence, interdiction, and investigations—include domestic as well as international efforts, as interdictions may occur at or outside U.S. borders, and intelligence and investigative efforts may target drug organizations operating outside the United States.
Key Agencies Involved in International Efforts to Combat Illicit Drugs Entering the United States
In addition to ONDCP, eight agencies are involved in the four program areas that support counternarcotics efforts in the Western Hemisphere to stop the production and transshipment of illicit drugs or their precursors destined for the United States. These activities include the following: interdictions at U.S. borders; maritime drug interdictions in international waters and, in concert with partner nations, in international and territorial waters; intelligence gathering to support drug interdictions, investigations, and international activities; investigations of drug organizations based in countries outside the United States; eradication support and efforts; and building foreign partner capacity to conduct counternarcotics activities.
Table 1 shows the eight U.S. government agencies that allocate resources in one or more of the four ONDCP program areas— counternarcotics intelligence, interdiction, international activities, and investigations—that we included in our review. For a detailed description of ONDCP’s program areas, more information on the roles of these agencies, and the countries in which they operate, see appendixes I, II, and III, respectively.
U.S. Agencies Identified Billions of Dollars in Spending Primarily or Partially for Western Hemisphere Counternarcotics Efforts for 2010 through 2015
Some Agencies Track Counternarcotics Spending by Region and Identified $5 Billion in Obligations for Activities in the Western Hemisphere
Of the agencies included in our review, DOD, ICE, INL, and USAID track counternarcotics spending on a regional basis and provided data on funds obligated for counternarcotics activities in the Western Hemisphere. As table 2 shows, these agencies obligated more than $5 billion for counternarcotics activities in the Western Hemisphere during fiscal years 2010 through 2015. (See app. III for the agencies’ regional or country-level counternarcotics obligations, as available).
DOD obligated a total of more than $2.8 billion for counternarcotics activities in the Western Hemisphere for fiscal years 2010 through 2015. According to DOD documents, these activities support U.S. domestic and foreign government efforts to combat drug trafficking and drug-related terrorist activities through detection and monitoring of illicit drug smuggling, information and intelligence sharing, and capacity building. DOD generally tracks its counternarcotics spending by geographic combatant command and various functional areas. A significant portion of DOD's counternarcotics activities in the Western Hemisphere are conducted by U.S. Northern Command and U.S. Southern Command. These resources fund DOD's training and equipment provided to foreign partners conducting counternarcotics activities, surveillance and communications systems, aircraft patrolling the transit zone, and costs associated with operating DOD's Joint Interagency Task Force South. However, the obligations for counternarcotics activities that DOD reported for fiscal years 2010 through 2015 underrepresent its overall obligations for such activities because the reported amounts do not include the salaries and expenses of U.S. Northern Command's and U.S. Southern Command's personnel or their counternarcotics-related intelligence activities. The reported amounts also do not include DOD's agency-wide intelligence gathering and training or aircraft flight hours and ship days in support of counternarcotics activities.
ICE expended a total of about $212 million for salaries and expenses of Homeland Security Investigations’ (HSI) agents and analysts working on drug cases in various countries in the Western Hemisphere during fiscal years 2010 through 2015. ICE made these expenditures for the following three HSI programs:
The Domestic Investigations program covers enforcement efforts to disrupt cross-border criminal activity related to contraband smuggling and the dismantling of the transnational criminal organizations responsible for these activities.
International Operations covers HSI’s international investigations involving transnational criminal organizations and serves as ICE’s liaison to foreign law enforcement counterparts overseas.
The Office of Intelligence provides intelligence services for Domestic Investigations and International Operations to support criminal investigations to disrupt and dismantle criminal organizations involved in the transnational drug trade and associated money-laundering crimes.
INL obligated a total of more than $1.5 billion for counternarcotics activities in the Western Hemisphere in fiscal years 2010 through 2015. During this period, INL funded projects that were designed to improve foreign law enforcement and intelligence-gathering capabilities; enhance the effectiveness of criminal justice sectors to allow foreign governments to increase drug shipment interdictions; investigate, prosecute, and convict narcotics criminals; and break up major drug-trafficking organizations. INL also used U.S. federal law enforcement entities to provide technical assistance to its counterparts overseas. Examples of INL's technical assistance include the following:

In Mexico, INL's efforts focused on enhancing the Mexican government's capacity to interdict illegal narcotics while not impeding the flow of legitimate goods. This included providing detection dogs, equipment, and training to the Mexican Federal Police, Customs, Army, and Navy.
In Colombia, INL’s program focused on aerial eradication of coca plants, land and maritime interdictions, and capacity building for counternarcotics forces.
In Peru, INL programs included support for manual eradication of coca plants, interdiction efforts, and drug demand reduction activities.
In Central America, INL efforts included building interdiction capacities such as funding vetted units sponsored by federal law enforcement partners and providing technical assistance and equipment for air and maritime interdiction.
In the Caribbean, INL efforts focused on building partner nation interdiction capacity, providing support for vetted units, and enhancing information sharing among partner nations.
USAID obligated a total of about $638 million for Western Hemisphere counternarcotics activities in fiscal years 2010 through 2015, supporting alternative development projects in Bolivia, Colombia, Ecuador, and Peru. According to agency officials, the USAID mission in Colombia is working to create licit alternatives to coca production, including holistic support to viable and lucrative agricultural value chains, such as cacao, specialty coffee, and other products that can be sold on domestic and export markets; provision of rural financial services and credits for licit opportunities; efforts to attract private sector investment into rural regions; and, to a lesser degree, helping communities build infrastructure, such as roads, to help licit products reach markets. USAID’s alternative development program in Peru aims to promote licit incomes and improved governance to sustain coca reductions achieved through forced eradication. In partnership with the Peruvian national drug commission, the USAID mission in Peru facilitates the implementation of alternative development programs in the country, including improving the drug commission’s ability to monitor and evaluate these programs. The mission has also partnered with the private sector to improve processes involved in preparing cacao crops for the market.
Agencies That Do Not Track Counternarcotics Spending by Region Reported About $34 Billion for Activities Focused on the Western Hemisphere
While the other agencies in our review—CBP, Coast Guard, DEA, and OCDETF—do not track spending specific to their counternarcotics activities in the Western Hemisphere, they conduct most of their counternarcotics activities in the Western Hemisphere or target threats originating in Western Hemisphere countries, according to agency officials. Thus, while the agencies’ overall counternarcotics obligations overstate spending for such activities in the Western Hemisphere, these obligations approximate the Coast Guard’s, CBP’s, and OCDETF’s spending on activities that were primarily for these purposes in the region. However, DEA was not able to identify spending levels for counternarcotics activities in the Western Hemisphere, and the obligations it provided included spending for some domestic and other international counternarcotics activities. These four agencies had total obligations of nearly $34 billion for their overall counternarcotics activities during fiscal years 2010 through 2015 (see table 3).
The Coast Guard obligated a total of almost $5.3 billion for its drug- interdiction activities for fiscal years 2010 through 2015. As the nation’s principal federal agency for maritime safety, security, and stewardship, the Coast Guard has a drug interdiction objective to reduce the flow of illegal drugs entering the United States by denying smugglers access to maritime routes. The Coast Guard’s counternarcotics obligations in fiscal years 2010 through 2015 covered the agency’s operating expenses, which include costs associated with operating Coast Guard facilities, maintaining capital equipment, improving management effectiveness, and maintaining an active duty military and civilian workforce. These funds also supported reserve training and acquisition, construction, and improvement of capital assets and facilities. The Coast Guard does not maintain data on the portion of the agency’s drug resources that are used for the interdiction of drugs trafficked to or from countries outside the Western Hemisphere. However, according to Coast Guard officials, because the agency’s counternarcotics efforts take place around U.S. maritime borders and in transit zones in the Western Hemisphere, the agency’s drug resources are generally expended in the Western Hemisphere.
CBP obligated a total of more than $13 billion for its counternarcotics activities in fiscal years 2010 through 2015. According to the agency’s budget documents, CBP used its counternarcotics spending to carry out its border security mission at and between all ports of entry and to conduct air and marine operations in source, transit, and arrival zones in the Western Hemisphere. The agency also obligated funds to invest in border security technology and infrastructure to detect and monitor suspicious air, maritime, and land traffic. CBP’s counternarcotics funds also were used for training and information technology to support its activities. CBP officials indicated that, because CBP’s mission is to protect U.S. borders, the agency’s counternarcotics spending should generally be considered resources spent in the Western Hemisphere. However, CBP’s reported obligations also include resources dedicated to border protection measures to interdict shipments of drugs and precursor chemicals from countries outside the Western Hemisphere.
DEA obligated a total of almost $13 billion for its domestic and international enforcement activities in fiscal years 2010 through 2015. DEA is the lead U.S. agency responsible for the development of the overall federal drug enforcement strategy, programs, planning, and evaluation. DEA's budget includes categories for domestic enforcement, international enforcement, and state and local support. While domestic enforcement accounts for the majority of DEA's resources, DEA coordinates its domestic and international enforcement activities (i.e., DEA's foreign offices) to pursue, at the highest level, multinational drug organizations and, at the lowest level, independent drug cells, according to agency documents. With regard to international enforcement, DEA tracks regional spending for salaries and expenses associated with agents and intelligence analysts posted in countries overseas. DEA's international enforcement includes more than $1 billion in obligations for salaries and expenses for personnel posted in Western Hemisphere countries in fiscal years 2010 through 2015.
OCDETF obligated a total of about $2.1 billion for counternarcotics- related efforts in fiscal years 2010 through 2015. According to OCDETF reports, this funding supported investigations targeting the highest priority drug-related transnational crime organizations. OCDETF’s funds were used to reimburse a number of DOJ components—DEA, the FBI, and the OCDETF Fusion Center, a multiagency intelligence center—for their support of OCDETF investigations of high-priority targets. According to a senior OCDETF official, although the agency’s financial system does not contain information that would allow us to ascertain the amounts obligated for investigations of international targets located in the Western Hemisphere, very few OCDETF cases involve drugs coming into the United States from outside the Western Hemisphere. Most OCDETF investigations target drugs coming into the United States from other Western Hemisphere countries.
Agencies Reported Collecting and Disseminating Best Practices and Lessons Learned Related to Counternarcotics Efforts
ONDCP Facilitates Sharing of Counternarcotics Best Practices and Lessons Learned
ONDCP facilitates the sharing of best practices and lessons learned with interagency and foreign partners by including the topic on the agendas of key meetings, according to ONDCP officials. For example, ONDCP officials described the sharing of best practices and lessons learned with stakeholders from Canada, Mexico, and the United States at technical workshops of the North American Drug Dialogue held in March 2017. At these workshops, the Department of State shared with its Mexican partners lessons learned pertaining to Colombia and Peru, including the following:
Eradication of coca alone is not sufficient. A whole-of-government approach that provides security, the incentive of alternative development, the disincentive of eradication, and intelligence-led interdiction efforts that deny harvesters or traffickers the ability to profit from the product is essential.
Results take time. For example, the 90-percent reduction in coca production in San Martin, Peru, took 12 years.
Efforts should be geographically targeted and driven by information and intelligence, given scarce resources. For example, data can be used to allow for planning targeted eradication operations, based on intelligence or other information, and for the planning of complementary interventions, such as rural development or target eradication goals.
According to ONDCP officials, best practices and lessons learned are also described in the National Drug Control Strategy as well as companion strategies such as the Southwest Border and Caribbean Counternarcotics strategies. For example, according to the 2010 National Drug Control Strategy, lessons learned such as the following can be drawn from Colombia’s experience that might be useful elsewhere:
Host-government ownership. For example, although Plan Colombia required extensive U.S. financial support, the Colombian government demonstrated that it was fully committed to the initiative under consecutive administrations.
Government-wide approach. Eradication can be an effective deterrent to illicit cultivation and can provide an incentive to move to licit crops. However, eradication must be accompanied by a government presence in rural areas; alternative development to preclude replanting or dispersal of plots; and a focus on rule of law and human rights, humanitarian needs, and social and economic reform to reduce the incentive to revert to illicit crops.
Security. Security is a precondition for the successful expansion of social services and developmental assistance. Security must be maintained to allow the expansion of legal economic activities and the delivery of civilian services, including justice, education, and health, to a population unaccustomed to a significant government presence.
Flexibility. Programs must adapt to changing circumstances, including adjusting programs that are not working as expected and adding new initiatives, if necessary.
Long-term approach. Major counternarcotics programs designed to address complex and long-standing challenges require a multiyear investment in terms of financial resources and political commitment.
ONDCP has also promoted best practices through other efforts. For example, the 2015 National Drug Control Strategy included an action item to work with the Organization of American States' Inter-American Drug Abuse Control Commission to strengthen counterdrug institutions in the Western Hemisphere. As part of this effort, ONDCP and the Department of State participated in the Demand Reduction and the Alternatives to Incarceration meetings, which focused on promoting best practices and expanding host-nation capacity. Reflecting this effort, Organization of American States officials cited as a best practice the November 2016 training of 300 Colombian and Argentinian judges and chief justices, who learned about the Alternatives to Incarceration model.
Most Agencies Reported Collecting Best Practices and Lessons Learned from Counternarcotics Efforts
Officials at 7 of the 10 agencies included in our review reported having processes for identifying and collecting best practices and lessons learned from counternarcotics efforts in the Western Hemisphere. Officials at each of these seven agencies also reported having mechanisms to share best practices and lessons learned, including through web-enabled systems, and sharing these best practices and lessons learned with other U.S. agencies and foreign partners. In addition, officials at six of the seven agencies reported having a formal review process for determining best practices and lessons learned.
USAID and DOD guidance and officials described comprehensive processes for collecting and sharing information about best practices and lessons learned. For example, according to USAID guidance, its Country Development Cooperation Strategy “should include a summary of lessons learned from the implementation of the previous Country Development Cooperation Strategy or other strategic plans (if applicable) and from previous experiences (e.g., projects and activities).” The guidance states that at least once during the course of implementing the Country Development Cooperation Strategy, USAID missions must collect information by conducting reviews of ongoing efforts and of options for better aligning their programs with changes in the context, agency direction, and lessons learned. In addition, according to USAID officials, other levels of program planning incorporate lessons learned and good programming, such as portfolio reviews and other processes involving the periodic assessment of a particular aspect of a mission or a Washington operating unit’s strategy, projects, or activities. USAID evaluations of its alternative development projects in Colombia include examples of best practices and lessons learned, such as the following:
The success of a project depends on reducing the appeal of coca by improving the social and economic value of legal alternatives.
Robust licit economies fueled by productive associations, local and regional market integration, and improved transportation networks can reduce coca cultivation.
A necessary precondition for successful alternative development is the allocation of resources and personnel to rural areas where coca is cultivated.
Only those strategies that can be accomplished within predetermined time frames and resource parameters and that have a proven track record of reducing coca cultivation should be implemented.
Reinforcing local community institutions and providing youth-focused programming can help insulate vulnerable communities against the allure of drug trafficking and coca cultivation.
DOD reported using a formal process for identifying and collecting best practices and lessons learned through its Joint Lessons Learned Program, which consists of five phases: discovery, validation, resolution, evaluation, and dissemination. According to DOD officials, the collection of best practices and lessons learned relating to counternarcotics in the Western Hemisphere through this program is intended to enhance readiness and effectiveness. DOD officials noted that the effort to collect best practices and lessons learned is routine and helps inform policy and budget proceedings. Annual conferences, such as the Counternarcotics and Global Threats Coordination Conference and the Program Objective Memoranda Conference, also offer an opportunity to identify, collect, and disseminate best practices and lessons learned as they relate to DOD's counterdrug and counter-transnational-organized-crime operations. According to DOD officials, such conferences provide a forum for participants to learn how other relevant DOD components working on counternarcotics efforts are approaching counterdrug, transnational organized crime, and related issues. DOD officials also noted that they intend to use an interagency task force approach to counternarcotics interdiction that the U.S. Southern Command developed in Guatemala as a model for sharing best practices and lessons learned in the region. According to DOD officials, the U.S. Southern Command's support included training in interdiction tactics, techniques, and procedures, and maintenance of provided equipment such as intercept boats, tactical vehicles, communications gear, and night vision devices. DOD officials reported that lessons learned include establishing the interagency legal framework early, clearly defining interagency relationships, developing the task force's intelligence capability, implementing police authority and leadership, identifying measures of success, communicating the task force's purpose and success to the public, and maintaining equipment. DOD officials said that they plan to use the Guatemalan interagency task force as a model with other foreign partners and new counterdrug units in Guatemala and in the region.
State’s report, “Lessons Learned from the Mérida Initiative and Plan Colombia with Regard to Judicial Reform Efforts,” provides specific examples of operational and tactical lessons, as follows:
Political will is critical. According to State, one of the clearest symbols of political will was Mexico’s and Colombia’s dedication of additional resources (to initiatives under the Mérida Initiative and Plan Colombia). In addition, according to State, the governments of El Salvador, Guatemala, and Honduras created a joint regional plan, the Plan of Alliance for Prosperity, underscoring their political will and significant commitment to improve economic opportunities, governance, and public safety. For example, these governments identified $2.6 billion in their 2016 budgets to, among other things, target criminal networks, tackle corruption, and strengthen government institutions.
No lasting security without enhanced access to justice. The governments of Colombia and Mexico have undertaken efforts to expand access to justice in their countries. Since 2008, the government of Mexico has been working to improve the transparency and efficiency of its judicial system by implementing an oral-based accusatorial system.
Partnership across agencies is critical. Plan Colombia represented a whole-of-government approach, with a broad U.S. interagency presence to work across the breadth of the Colombian government. This U.S. interagency presence built linkages at all levels and ensured continuity of vision through leadership transitions in the U.S. and Colombian governments.
U.S. Agencies Use Various Mechanisms to Address Changing Counternarcotics Conditions in the Western Hemisphere
ONDCP Strategies Lay Out Key Efforts to Respond to Emerging Counternarcotics Threats
ONDCP works with agencies to coordinate responses to changing conditions in a variety of ways. ONDCP is responsible for developing (1) the National Drug Control Strategy, which sets forth a comprehensive plan to reduce illicit drug use through programs intended to prevent or treat drug use or reduce the availability of illegal drugs; and (2) several associated companion strategies, which target government efforts to respond to emerging counternarcotics threats for key geographic areas.
The Strategy issued in 2010 laid out the administration’s 5-year blueprint for combatting drug use and included a section on counternarcotics efforts in the Western Hemisphere. The 2010 Strategy described an approach that reflected two core focus areas: (1) disrupting domestic drug trafficking and production and (2) strengthening international partnerships to reduce the availability of foreign-produced drugs in the United States. The Strategy, including the portions associated with counternarcotics efforts in the Western Hemisphere, is updated annually to reflect current priorities and conditions. According to ONDCP officials, an example of a key change since 2010 is the developing focus on the opioid crisis. In 2010, the President’s first National Drug Control Strategy emphasized the need for action to address opioid use disorders and overdose, while ensuring that individuals with pain receive safe, effective treatment. On April 19, 2011, the White House released its national Prescription Drug Abuse Prevention Plan, which outlined its goals for addressing prescription drug abuse and overdose.
The 2016 Strategy continued the previous administration’s focus on the opioid crisis but recognized the growing threats from drug-trafficking organizations involved in manufacturing and distributing cocaine and synthetic drugs, including novel psychoactive substances such as synthetic cannabinoids. To address these efforts, the Strategy described U.S. agencies’ interdiction activities, and DEA led efforts to disrupt synthetic drug production and trafficking. The 2016 Strategy also noted U.S. collaboration with China to limit the export of precursor chemicals associated with the production of psychoactive substances.
ONDCP also develops companion strategies with a geographic focus, such as the National Southwest Border Counternarcotics Strategy, the Northern Border Counternarcotics Strategy, and the Caribbean Border Counternarcotics Strategy. The 2015 Strategy acknowledges the companion strategies and indicates that the efforts they describe will be carried out. These strategies include objectives such as enhancing intelligence, interdicting drugs and drug proceeds, ensuring prosecution, disrupting and dismantling drug-trafficking organizations, and improving cooperation with international partners.
The companion strategies have provided opportunities for more targeted responses to address emerging threats in specific geographic areas, as the following examples show:

National Southwest Border Counternarcotics Strategy. This strategy focused primarily on U.S. government efforts to prevent the trafficking of illicit drugs—heroin, methamphetamine, cocaine, and foreign-produced marijuana—across the U.S.-Mexican border. The strategy also addressed the illegal outbound movement of weapons and bulk currency from the United States, both of which are associated with activities of narcotics traffickers. As an example of the growing threat posed by the trafficking of heroin from Mexico, the quantity seized on the southwest border nearly tripled, from 1,080 kilograms in 2010 to 3,158 kilograms in 2015. To address these threats, ONDCP expanded the focus of the 2011 National Southwest Border Counternarcotics Strategy to provide border communities with enhanced prevention and drug treatment assistance, in the context of maintaining strong and resilient communities. The 2013 strategy stressed the same basic goals and objectives: substantially reduce the flow of illicit drugs, drug proceeds, and associated instruments of violence across the southwest border and maintain strong and resilient communities. This strategy also included indicators related to seizures of drugs at the border. The 2016 strategy differed slightly from the 2013 strategy by elaborating on the threats of various illicit drugs. It also noted that "anything that affects one part of the border affects the entire border" and stated that, for this reason, the National Southwest Border Counternarcotics Strategy must be synchronized with the other companion strategies and the Heroin Availability Reduction Plan.
National Northern Border Counternarcotics Strategy. The 2012 National Northern Border Counternarcotics Strategy, which ONDCP first issued that year, parallels the National Southwest Border Counternarcotics Strategy and focuses on ongoing efforts to reduce transnational organized crime threats on both sides of the border between the United States and Canada, specifically the movement of illicit drugs such as marijuana, ecstasy, methamphetamine, and cocaine, and the proceeds from the sale of those drugs. The 2014 strategy emphasizes enhanced federal collaboration with state, local, and tribal law enforcement agencies. The legislation mandating that ONDCP publish the National Northern Border Counternarcotics Strategy requires that this document be released biennially; as of June 2017, the 2016 version had not been released.
Caribbean Border Counternarcotics Strategy. The Caribbean Border Counternarcotics Strategy, issued in January 2015, is substantially equivalent to the national counternarcotics strategies for the southwest and northern borders, according to ONDCP. The strategy identifies cocaine as the principal drug threat and a source of associated violence in the Caribbean region and notes that the documented cocaine flow via the Caribbean to the United States more than doubled from 2011 to 2013, rising from 38 metric tons to 91 metric tons. According to DEA, over 90 metric tons of cocaine was trafficked from South America using sea routes through the Caribbean corridor, primarily toward the Dominican Republic and Puerto Rico, in 2014.
Interagency Groups, Task Forces, and Committees Coordinate Government Response to Emerging Counternarcotics Threats
Interagency Working Groups
ONDCP facilitates a number of interagency working groups to address emerging threats. According to ONDCP's 2016 National Southwest Border Counternarcotics Strategy, interagency working groups relevant to counternarcotics efforts allow agencies with different authorities and resources to address common concerns, create a common operating picture, identify resource and capability gaps, and leverage resources. ONDCP has created working groups, such as groups focused on heroin and cocaine, to develop actions, goals, and measures to reduce the supply of those drugs in the U.S. market as a part of the overall effort to address treatment and demand, as noted in the following examples:

In November 2015, ONDCP established the National Heroin Coordination Group in coordination with the National Security Council to provide guidance on interagency activities aimed at reducing the supply of heroin and illicit fentanyl in the U.S. market. The working group includes agencies with federal law enforcement responsibilities and their components, select High Intensity Drug Trafficking Areas (HIDTA), the U.S. embassy in Mexico, and other federal agencies and state entities. In June 2016, the group produced the 5-year Heroin Availability Reduction Plan as part of the administration's effort to prevent and treat heroin abuse.
In January 2016, ONDCP created an internal working group on methamphetamine and synthetic drugs to coordinate efforts across drug control agencies. The group's priorities included working in concert with federal partners, working with source and transit countries to reduce the availability of illicit methamphetamine in the United States, and working multilaterally to reduce the global trafficking of illicit methamphetamine and precursor chemicals coming primarily from Mexico.
In September 2016, ONDCP created a National Cocaine Coordination Group to address emerging threats from cocaine brought on by the spike in coca cultivation and production as well as the associated increase in its trafficking and use in the United States. In addition to employing three permanent staff, the interagency group draws from expertise in intelligence, public health, and international demand reduction at DOJ, the FBI, other federal partners, and various parts of ONDCP.
Interagency Task Forces
Agencies use task forces to enhance the interagency coordination needed to respond to emerging threats, according to officials. For example, to address the smuggling of illicit drugs over the southwest border, in May 2014 DHS established three new joint task forces—Joint Task Force–East, Joint Task Force–West, and the Joint Task Force for Investigations—in support of its Southern Border and Approaches Campaign. The task forces coordinate operations to combat transnational criminal organizations and counter illegal drug flows at maritime approaches and in between ports of entry. All three joint task forces incorporate elements of the Coast Guard, CBP, and ICE as well as DHS’s U.S. Citizenship and Immigration Services. Joint Task Force–East is responsible for the southern maritime border and approaches, Joint Task Force–West is responsible for the southern land border and the West Coast, and the Joint Task Force for Investigations focuses on investigations in support of the geographic task forces.
Task forces also enhance coordination and deconfliction by colocating representatives from different entities, which facilitates interaction and enables information sharing, as we previously reported. For example, Joint Interagency Task Force South includes 26 agencies and 20 foreign partners that work together to detect and monitor illicit trafficking in the air and maritime domains, facilitating international and interagency interdiction and apprehension. Information sharing is a critical aspect of Joint Interagency Task Force South's strategic approach in supporting national and foreign partner nation law enforcement and promoting regional stability in the Western Hemisphere. As part of this effort, Joint Interagency Task Force South uses a tool known as the Cooperative Situational Information Integration system to share strategic communications and information with foreign partner nations, according to Joint Interagency Task Force South officials. In addition, U.S. Tactical Analysis Teams, which are posted at U.S. missions overseas, and liaison officers from foreign partner nations provide for a high level of integrated information, according to officials at Joint Interagency Task Force South. Officials indicated that Tactical Analysis Teams and liaison officers provide the information that results in 60 to 70 percent of all task force cases, directly contributing to 50 to 60 percent of all Joint Interagency Task Force South drug seizures. The task force reported that its efforts resulted in 80 percent of total U.S. cocaine seizures (282 of 338 metric tons) in fiscal year 2016.
According to Joint Interagency Task Force South, the advantages of working as a task force include the ability to use the participants’ various legal authorities (see the text box for an example):
DOD brings detection and monitoring authorities.
DOJ and DHS bring anticrime authorities.
The Coast Guard brings its maritime law enforcement authorities.
DEA, the FBI, and HSI bring drug and financial law enforcement authorities.
CBP and HSI bring customs and immigration authorities.
Partner nations bring multiple authorities from their countries.
A typical case that illustrates how the various authorities of component agencies work together in the Joint Interagency Task Force South could start with receipt of actionable law enforcement information from the Drug Enforcement Administration. This information prompts the deployment of a Customs and Border Protection or Coast Guard plane that subsequently detects and monitors a suspect vessel until Joint Interagency Task Force South can deploy a Coast Guard, U.S. Navy, or allied government’s ship with an on-board law enforcement detachment to investigate. When the deployed ship arrives at the vessel’s location, the Coast Guard assumes control of the investigation. If the suspect vessel is not registered in the United States, the Coast Guard commander implements a bilateral agreement with the vessel’s country of registration to confirm the vessel’s nationality and to stop, board, and search the vessel for drugs. If drugs are found, the State Department, Department of Justice, and the vessel’s country of registry coordinate jurisdiction over, and disposition of, the vessel, drugs, and crew.
OCDETF has also established multiagency Strike Forces (i.e., a type of task force) in 12 key cities around the country. According to OCDETF's fiscal year 2017 report to Congress, the Strike Forces aggressively target the highest-level trafficking organizations and function as central points of contact for OCDETF agents and federal prosecutors nationwide, gathering intelligence and disseminating investigative leads throughout neighboring areas. The report states that Strike Force members are colocated in offices separate from their parent agencies and interact with each other on a daily basis using the resources and support of their parent agencies. According to OCDETF's report, Strike Force efforts help further counternarcotics investigations by combining the resources and expertise of all OCDETF participating investigators and prosecutors. The report also states that, in recognition of the nationwide heroin threat, OCDETF adjusted its resources to target heroin investigations and that, as heroin use rose in 2014 and 2015, the percentage of indictments with heroin charges increased over the same time frame. According to OCDETF's report, Strike Force effectiveness is reflected in the caseload of active investigations linked to OCDETF's Consolidated Priority Organization Targets. OCDETF reported that, in fiscal year 2015, 45 percent of Strike Forces' active investigations were linked to OCDETF Consolidated Priority Organization Targets; in contrast, 22 percent of all OCDETF investigations addressing transnational organized crime were linked to these targets.
Interagency Policy Committees
The National Security Council has a number of interagency policy committees that prioritize counternarcotics issues, including changing conditions, in the Western Hemisphere. National Security interagency policy committees are the primary day-to-day forums for interagency coordination of national security policy, according to Presidential Policy Directive 1. National Security Presidential Directive 25 directs U.S. government agencies to attack the vulnerabilities of drug-trafficking organizations, disrupt key business sectors, and weaken the economic basis of the drug trade. For example, the Transborder Security and Western Hemisphere Directorates interagency policy committee on Mexico Security Priorities directed ONDCP to establish the National Heroin Coordination Group. The agencies represented on the interagency policy committees vary, but the core group involved in addressing heroin and fentanyl includes ONDCP, State, DOJ, DOD, DHS, the Department of Health and Human Services, the Office of the Director of National Intelligence, the U.S. Postal Inspection Service (as appropriate), and the Office of Management and Budget.
Interagency policy committees related to addressing heroin include (1) Transborder Security and Western Hemisphere, (2) Fentanyl Surge, and (3) the Heroin Availability Reduction Plan. Among the topics discussed at the committee meetings were the formation of the National Heroin Coordination Group, which created the Heroin Availability Reduction Plan; approval of the plan; and deliberate and tangible actions the interagency policy committees could take under the Heroin Availability Reduction Plan to visibly disrupt the fentanyl supply chain coming into the United States. Various efforts were also set up to address common issues related to illicit opioids among the United States, Mexico, and Canada, which were addressed in forums such as the North American Drug Dialogue and the U.S.-Mexico Security Cooperation Group. Subinteragency policy committees include the U.S.-Mexico Security Group; the North American Drug Dialogue; and the Fentanyl-Asia, Fentanyl-Cyber, Fentanyl Screening, and Fentanyl Sub-Interagency Policy Committees. Among the topics discussed were the fentanyl threat and sources of supply into the United States, tangible actions to disrupt the fentanyl supply chain, Asia's role in the fentanyl supply and actions that could be taken to address it, and an examination of the purchase and sale of fentanyl over the Internet for shipment through the mail services and actions taken to detect such shipments. The interagency policy committees that address cocaine and methamphetamine generally involve the same agencies that are involved in the interagency policy committees addressing heroin.
Agency Comments
We are not making recommendations in this report. We provided a draft of this report to DOD, DHS, DOJ, ONDCP, State, and USAID for review and comment. We received technical comments from DHS, DOJ, ONDCP, and State, which we incorporated as appropriate.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 12 days from the report date. At that time, we will send copies to the appropriate congressional committees; the Secretaries of Defense, Homeland Security, and State; the Attorney General of the United States; and the Director, Office of National Drug Control Policy. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff members have any questions about this report, please contact me at (202) 512-6991 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.
Appendix I: Objectives, Scope, and Methodology
This report examines (1) U.S. agencies’ spending for counternarcotics efforts in the Western Hemisphere in fiscal years 2010 through 2015, (2) agencies’ efforts to gather and share best practices and lessons learned from their counternarcotics efforts both domestically and internationally, and (3) mechanisms that agencies have used to address changing drug threats.
To examine U.S. agencies’ spending for counternarcotics efforts in the Western Hemisphere in fiscal years 2010 through 2015—our first objective—we selected eight U.S. departments and components (collectively, in this report, “agencies”) that implement aspects of the National Drug Control Strategy and conduct counternarcotics activities in the Western Hemisphere: (1) the Department of Defense (DOD); the Department of Homeland Security’s (2) Customs and Border Protection (CBP), (3) Immigration and Customs Enforcement (ICE), and (4) Coast Guard; the Department of Justice’s (DOJ) (5) Drug Enforcement Administration (DEA) and (6) Organized Crime Drug Enforcement Task Forces (OCDETF); the Department of State’s (7) Bureau of International Narcotics and Law Enforcement Affairs (INL); and (8) the U.S. Agency for International Development (USAID). To select these eight agencies, we used the following two criteria: 1. Agencies that have international counternarcotics efforts in one or more of the areas that the Western Hemisphere Drug Policy Commission has been asked to review. The Office of National Drug Control Policy (ONDCP), which coordinates the National Drug Control Program, requires all National Drug Control Program agencies to submit an annual drug budget identifying the amounts the agencies plan to spend on counternarcotics efforts for the upcoming fiscal year. The agencies report spending for such efforts in 10 program areas: Corrections, Intelligence, Interdiction, International, Investigations, Prevention, Prosecution, Research and Development, State and Local and Tribal Law Enforcement Assistance, and Treatment. On the basis of ONDCP’s definitions of these program areas, we determined that four of these areas—Intelligence, Interdiction, International, and Investigations—were relevant to the areas that the Western Hemisphere Drug Policy Commission has been directed to examine. 2. Agencies that allocated a combined total of at least $50 million for their counternarcotics efforts for the Intelligence, Interdiction, International, and Investigations program areas in fiscal year 2015. The following summarizes ONDCP’s definitions of these four program areas: Intelligence. Intelligence efforts encompass several drug control intelligence support, including the collection, analysis, and membership, finances, communications, and activities of drug- areas. Such efforts include providing strategic drug‐related dissemination of drug‐related information regarding structure, trafficking organizations and the identification of drug‐related threats.
Other activities facilitate the sharing among U.S. agencies of domestic and foreign intelligence information on the production and trafficking of drugs in the United States and foreign countries; analysis of the willingness and ability of partner nation governments to carry out drug control programs; federal, state, local, and tribal law enforcement initiatives to gather, analyze, and disseminate information among domestic law enforcement agencies; and all other activities that provide intelligence and other information for use by national policy makers, strategic planners, and local law enforcement.
Interdiction. Interdiction activities are intended to reduce the availability of illegal drugs in the United States or abroad by targeting transportation links. Interdiction efforts encompass the interception of shipments of illegal drugs and their precursors and the disruption of trafficking networks and their proceeds; such efforts may include air and maritime seizures and deterring transport via air, sea, and land routes. Other efforts involve accurate assessment and monitoring of interdiction programs; enhancing the ability of nations that are drug sources to interdict drugs; interdicting the flow of drugs, weapons, and bulk currency along borders; and other air and maritime activities that disrupt illegal drug-trafficking operations.
International. International activities are primarily focused on areas outside the United States and are intended to reduce illegal drug availability in the United States or abroad. Activities may include source-country programs designed to help international partners manage the consequences of drug production, trafficking, and consumption in their own societies, including programs to train and equip security forces; efforts to raise awareness of science-based practices and programs to prevent, treat, and provide recovery from substance abuse; and support for economic development programs to help reduce the production or trafficking of illicit drugs. These efforts may also include assessment and monitoring of international drug production programs and policies; coordination and promotion of compliance with international treaties, including those directed at the eradication of illegal drugs and the production and transportation of illegal drugs; involvement of other nations in international law enforcement programs and policies to reduce the supply of drugs; and all other overseas drug law enforcement efforts to disrupt the flow of illicit drugs into the United States.
Investigations. Investigations activities are designed to develop a prosecutable case against individuals and organizations responsible for the production and distribution of illegal drugs, including identifying profits and assets from drug-related criminal enterprises in order to seize them; identifying the leaders of illegal drug and other criminal organizations; gathering information about drug-related criminal activity; ensuring that legitimate controlled substances are handled, manufactured, and distributed in accordance with federal laws and regulations; and all other drug law investigative efforts to identify, disrupt, and dismantle drug smuggling in the United States.
We requested and obtained data on spending for counternarcotics activities from these eight agencies and the Federal Bureau of Investigation (FBI), which OCDETF reimburses for international counternarcotics investigations. We also reviewed each agency’s annual accounting for its counternarcotics budget. In addition, we interviewed agency officials to understand their counternarcotics budgets as they are reported in the annual ONDCP budget and performance summary reports and to determine the extent to which the agencies could identify the funding they had obligated for counternarcotics activities in the Western Hemisphere. Our methodology for identifying counternarcotics spending varied by agency, since some of the agencies—DOD, ICE, INL, and USAID—track such spending by region, while other agencies—the Coast Guard, CBP, OCDETF, and DEA—do not. Moreover, with the exception of DEA’s and OCDETF’s counternarcotics activities, the agencies’ counternarcotics activities represent only one aspect of their larger missions. On the basis of our review of the data, our review of each agency’s annual accounting of its drug budget, and interviews with agency officials, we determined that the data were sufficiently reliable for our reporting purposes. The following summarizes the Western Hemisphere counternarcotics activities reflected in the funding data we present for each agency. (The data we present for OCDETF include its reimbursements to the FBI.)
DOD. All DOD counternarcotics activities under U.S. Northern Command and U.S. Southern Command.
CBP. All CBP counternarcotics spending. Given that the agency’s jurisdiction is triggered by the illegal movement of criminal goods across national borders, the agency considers all of its efforts to be specific to the Western Hemisphere. However, the agency’s spending also includes interdictions and intelligence gathering to support these interdictions of drugs coming from all locations outside the United States.
ICE. The portion of ICE’s Homeland Security Investigations’ spending for investigation of Western Hemisphere drug organizations.
Coast Guard. All Coast Guard counternarcotics spending. Given that the Coast Guard’s interdictions occur in Western Hemisphere waters, the agency considers all of its counternarcotics efforts to be specific to the Western Hemisphere.
DEA. DEA obligations for Investigations, Intelligence, and International program areas for domestic and international enforcement activities. DEA was also able to provide its obligations for salaries and expenses for investigations and intelligence-gathering activities conducted by agents posted in overseas locations in the Western Hemisphere (see app. III).
OCDETF. OCDETF reimbursements for drug investigations conducted by DEA, the FBI, and ICE as well as OCDETF contributions to the OCDETF fusion center.
FBI. OCDETF reimbursements for investigations of transnational crime organizations with a drug nexus. (App. III details the FBI’s expenditure of OCDETF funds).
INL. International Narcotics Control and Law Enforcement funds for counternarcotics activities for Western Hemisphere countries.
USAID. Economic Support Funds and Development Assistance funds for alternative development activities in Western Hemisphere countries.
To examine how agencies gather and share best practices and lessons learned from their counternarcotics efforts both domestically and internationally—our second objective—we reviewed the National Drug Control Strategy and companion strategies for examples of best practices as well as other agency documents that identify best practices and lessons learned. We also sent the eight selected agencies, the FBI, and ONDCP a standard set of questions. These questions addressed how the agencies collected and identified best practices and lessons learned, whether they had formal definitions of best practices and lessons learned, whether their efforts to identify and collect this information were routine, whether they had review processes to assess the information, and whether they shared these practices with other agencies and with international partners. In addition, we asked the agencies to identify best practices related to counternarcotics efforts in the Western Hemisphere. Further, we conducted interviews with agency officials, seeking clarification of written responses as appropriate and asking whether the agencies had any policies or strategies regarding best practices, and we reviewed the documents that were provided to us in response.
To identify the mechanisms U.S. agencies have used to address changing drug threats—our third objective—we reviewed key U.S. government-wide and agency-specific documents pertaining to U.S. counternarcotics efforts in the Western Hemisphere, including those that encompass counternarcotics efforts as part of broader national security areas. These documents include the National Drug Control Strategies, Southwest Border Counternarcotics Strategies, Northern Border Counternarcotics Strategies, the Caribbean Border Counternarcotics Strategy, the Strategy to Combat Transnational Organized Crime, and the National Interdiction Command and Control Plan. Agency-specific strategic plans included CBP's Vision and Strategy 2020, Homeland Security Investigations' Strategic Plan, ICE's Strategic Plan, DOJ's Strategic Plan, DEA's Strategic Plan, OCDETF's Strategic Plan, the Department of State's Functional Bureau Strategies and the Western Hemisphere Affairs and Latin America and the Caribbean Joint Regional Strategy, and USAID's Country Development Cooperation Strategies for Colombia and Peru. We also interviewed ONDCP and agency officials about the development of these strategies. We interviewed ONDCP officials about, and obtained documentation describing, the roles of the National Heroin Coordination Group and the National Cocaine Coordination Group, and we identified the roles of other working groups through agency interviews and documents. To understand how agencies coordinated efforts and cooperated with foreign partners, we visited the U.S. Southern Command and the Joint Interagency Task Force South in Miami and Key West, Florida, and interviewed officials at both locations. Additionally, in discussions with officials from the other agencies we reviewed, we asked whether the agencies cooperated with foreign partners.
We conducted this performance audit from August 2016 to October 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: U.S. Agencies That Conduct Western Hemisphere Counternarcotics Activities
The Office of National Drug Control Policy coordinates the National Drug Control Program and develops the National Drug Control Strategy, which is implemented by a number of U.S. government agencies. The following summarizes the Western Hemisphere counternarcotics activities of key National Drug Control Program agencies and their components as well as the Federal Bureau of Investigation (FBI).
The Department of Defense (DOD) maintains the lead role in detecting and monitoring aerial and maritime transit of illegal drugs into the United States and plays a key role in collecting, analyzing, and sharing intelligence on illegal drugs with U.S. law enforcement and international security counterparts. DOD supports other interdiction activities with the use of its assets. DOD also provides counternarcotics foreign assistance to train, equip, and improve the counternarcotics capabilities of relevant agencies of foreign governments.
The Department of Homeland Security (DHS) is responsible for U.S. policies related to interdiction of illegal drugs entering the United States from abroad. Key agencies within DHS that participate in counterdrug activities include the following:
Customs and Border Protection (CBP) is the lead agency for border security and is responsible for, among other things, keeping terrorists and their weapons; criminals and their contraband, including drugs; and inadmissible aliens out of the country. CBP is responsible for border security at ports of entry; the 6,000 miles of land borders between ports of entry; and nearly 2,700 miles of coastal waters surrounding the Florida Peninsula and Puerto Rico.
Immigration and Customs Enforcement’s (ICE) primary mission is to promote homeland security and public safety through the enforcement of federal laws governing border control, customs, trade, and immigration. ICE’s office of Homeland Security Investigations investigates immigration crime; human rights violations and human smuggling; smuggling of narcotics, weapons, and other types of contraband; financial crimes; cybercrime; and export enforcement issues.
The Coast Guard is the lead federal agency for maritime drug interdiction in the Transit Zone. The Coast Guard provides resources to the Joint Interagency Task Force South, generally including major cutters, maritime patrol aircraft, and helicopters capable of deploying airborne use of force.
The Department of Justice (DOJ) is responsible for federal law enforcement and for ensuring public safety against foreign and domestic threats, including illegal drug trafficking. The following are DOJ's primary agencies that focus on international drug control activities:
The Drug Enforcement Administration (DEA) is the nation’s federal agency dedicated to drug law enforcement and, accordingly, works to disrupt and dismantle the leadership, command, control, and financial infrastructure of major drug- trafficking organizations. DEA operates around the world to disrupt drug-trafficking operations; dismantle criminal organizations; enforce the drug-related laws of the United States; and bring to justice those organizations and individuals involved in the growing, manufacture, or distribution of illicit drugs destined for the United States.
The Federal Bureau of Investigation (FBI) conducts its counternarcotics activities under the agency’s broader strategy to counter transnational criminal organizations by targeting their command-and-control structures as well as the support networks that facilitate the smuggling of illicit goods, including drugs, into the United States.
The Organized Crime Drug Enforcement Task Forces' (OCDETF) primary goal is to identify, investigate, and prosecute the transnational, national, and regional criminal organizations most responsible for the illegal drug supply in the United States, the diversion of pharmaceutical drugs, and the violence associated with the drug trade. It effectively leverages the resources and expertise of its seven federal agency members.
The Department of State’s Bureau of International Narcotics and Law Enforcement Affairs develops, funds, and manages counternarcotics and law enforcement assistance programs to help reduce the entry of illicit drugs into the United States and minimize the impact of international crime on the United States.
The U.S. Agency for International Development supports the U.S. counternarcotics effort through alternative development programs that help farmers find legal sources of income through licit crops such as cacao and coffee and that provide technical assistance, such as training in modern farming techniques and access to capital for investment in equipment.
Appendix III: Selected Agencies’ Obligations for Counternarcotics Activities in Fiscal Years 2010-2015
The Department of Defense (DOD), the Department of Homeland Security’s Immigration and Customs Enforcement (ICE), the Department of Justice’s Federal Bureau of Investigation (FBI), the Department of State’s Bureau of International Narcotics and Law Enforcement Affairs (INL), and the U.S. Agency for International Development (USAID) provided data showing their obligations for counternarcotics activities in the Western Hemisphere. The Drug Enforcement Administration (DEA) provided data showing a portion of its counternarcotics obligations for salaries and expenses associated with DEA agents posted overseas.
DOD data show obligations for counternarcotics activities by the U.S. Northern Command and the U.S. Southern Command, which have responsibility over the Western Hemisphere. Table 4 contains the commands’ counternarcotics obligations for fiscal years 2010 through 2015.
Table 5 shows the U.S. Northern Command's and U.S. Southern Command's counternarcotics obligations in support of foreign partners in the Western Hemisphere, by country, for fiscal years 2013 through 2015.
Table 6 shows ICE expenditures for counternarcotics investigations and intelligence activities conducted by ICE agents for Western Hemisphere drug cases, by country, during fiscal years 2010 through 2015.
Table 7 shows DEA obligations for salaries, expenses, and administrative costs for DEA personnel located in 30 Western Hemisphere countries during fiscal years 2010 through 2015.
Table 8 shows OCDETF reimbursements to the FBI for expenditures related to its investigations of transnational Central American, South American, Mexican, and Caribbean crime organizations; drug-smuggling and money-laundering organizations; alien-smuggling organizations; and drug-related public corruption cases in the Western Hemisphere, as well as headquarters administration expenses, for fiscal years 2010 through 2015.
Table 9 shows INL obligations for counternarcotics activities in 13 Western Hemisphere countries and for two regional programs in the Western Hemisphere, the Central America Regional Security Initiative and the Caribbean Basin Security Initiative, during fiscal years 2010 through 2015.
Table 10 lists USAID’s obligations for alternative development projects in four countries in the Western Hemisphere during fiscal years 2010 through 2015.
Appendix IV: U.S. Agencies’ Planning for Western Hemisphere Counternarcotics Efforts
National Drug Control Program agencies’ planning for counternarcotics efforts in the Western Hemisphere is represented in a variety of strategic documents, which may be broad or targeted, depending on their mission. For example, the Department of Defense’s (DOD) 2011 Counternarcotics and Global Threats Strategy focuses primarily on the department’s efforts to combat narcotics trafficking and transnational organized crime. DOD officials indicated that they are currently updating the strategy. Similarly, the Coast Guard’s 2014 Western Hemisphere Strategy includes counternarcotics as part of the agency’s broader regional mission. According to Coast Guard officials, the Coast Guard does not plan to update its strategy.
The Department of Homeland Security (DHS) has several strategic documents that relate to its components’ counternarcotics activities, as described below:
Customs and Border Protection’s Vision and Strategy 2020 incorporates counternarcotics efforts as part of its mission to facilitate legitimate trade and safeguard land, air, and maritime borders.
Immigration and Customs Enforcement's Homeland Security Investigations' Strategic Plan Fiscal Years 2012-2016 also includes a specific goal, protecting the homeland against illicit trade, travel, and finance, with an objective targeting drug-trafficking organizations.
The Department of Justice’s (DOJ) Fiscal Years 2014-2018 Strategic Plan includes the Drug Enforcement Administration’s (DEA) goal of disrupting and dismantling major drug-trafficking organizations within a much broader set of law enforcement missions.
DEA’s Fiscal Years 2009-2014 Strategic Plan indicates the agency has focused on international and domestic drug-trafficking and money-laundering organizations identified as having the most significant impacts internationally and domestically, known as “Consolidated Priority Organization Targets” and “Priority Targeted Organizations.” In addition, DEA’s Drug Flow Attack Strategy, developed in 2009, identifies vulnerable chokepoints to disrupt the flow of drugs. DEA officials indicated they are updating the strategy.
DOJ also released a Strategy for Combating the Mexican Cartels in January 2010, which was designed to be consistent with the National Drug Control Strategy and the National Southwest Border Counternarcotics Strategy. The DOJ strategy’s 10 objectives include (1) reduce the flow of narcotics and other contraband entering the United States, (2) strengthen Mexico’s operational capacities and enhance its law enforcement institutions, (3) increase bilateral cooperation between Mexico and the United States on fugitive capture and extradition activities, and (4) increase intelligence and information sharing among law enforcement agencies in the United States and Mexico to achieve focused targeting of the most significant criminal organizations.
DOJ’s Organized Crime Drug Enforcement Task Forces (OCDETF) has a long-term drug enforcement strategy for using its prosecutor- led, multiagency task forces in the field to conduct intelligence-driven, coordinated, multijurisdictional prosecutions and investigations. Specifically, OCDETF member agencies focus on Consolidated Priority Organization Targets—that is, “command and control” organizations representing the most significant drug-trafficking and money-laundering organizations threatening the United States. OCDETF member agencies also pursue organizations identified as regional priorities because they have a significant impact on the illicit drug supply within a specific region.
Officials in the Department of State’s (State) Bureau of International Narcotics and Law Enforcement Affairs (INL) stated that the bureau uses a variety of strategic planning documents in its efforts to address counternarcotics in the Western Hemisphere.
INL’s Functional Bureau Strategy includes the broad objective of reducing illicit drug production and drug demand, along with other activities such as working with the United Nations Office of Drug and Crime.
The Western Hemisphere Affairs and Latin America and the Caribbean Joint Regional Strategy, which focuses on a goal of a secure and democratic future for all citizens in Latin America and the Caribbean, includes interdiction goals for specific drugs such as opium gum (used for producing heroin) and cocaine.
Integrated Country Strategies at posts and INL Country Plans are focused strategies, targeting, for example, the eradication of a specific number of hectares of coca or the seizure of a certain number of metric tons of illicit drugs and precursor chemicals.
The U.S. Agency for International Development (USAID) does not have a specific strategy related to counternarcotics and instead relies on the Office of National Drug Control Policy's National Drug Control Strategy to help guide its alternative development activities in countries confronting illicit drug production and trafficking, according to USAID officials. USAID's targeted efforts are described in its Country Development Cooperation Strategies for Colombia and Peru, where alternative development efforts are currently underway. The Colombia strategy describes the U.S. government's development assistance in support of Colombian efforts to continue its transition out of conflict. According to the Colombia strategy, investments under several of its development objectives would help create conditions for alternative livelihoods and legal behaviors, contributing to broader U.S. and Colombian efforts to address drug trafficking. The Peru strategy includes alternatives to illicit coca cultivation as a development objective in specific regions, supporting the overall goal of strengthening stability and democracy through increased social and economic inclusion and reductions in illicit coca cultivation and the illegal exploitation of natural resources. USAID conducted operations focused on alternative development in Bolivia until May 2013, when the mission closed.
Appendix V: U.S. Agencies’ Cooperation with Foreign Partners to Reduce Drug Trafficking in the Western Hemisphere
Cooperation with foreign partners is a crucial element in addressing changing narcotics conditions in the Western Hemisphere. For example, the Department of State's (State) Bureau of International Narcotics and Law Enforcement Affairs (INL); the U.S. Agency for International Development (USAID); the Department of Homeland Security's (DHS) Coast Guard and Customs and Border Protection (CBP); and the Department of Justice's (DOJ) Drug Enforcement Administration (DEA) and Federal Bureau of Investigation (FBI) work with host nation counterparts on a variety of counternarcotics efforts.
U.S. assistance programs to disrupt the flow of cocaine and other harmful products are designed to build the capacity of judicial, law enforcement, and treatment institutions in partner countries, according to INL's 2017 International Narcotics Control Strategy Report. These programs are carried out through the Central America Regional Security Initiative, the Caribbean Basin Security Initiative, and the Mérida Initiative. Key activities of these programs include drug interdiction cooperation, especially maritime-based efforts in Central America and the Caribbean; law enforcement capacity building; anticorruption initiatives and support; and enhanced prosecution and judicial reform strengthening efforts. For example:

In Mexico, as of September 2016, Mérida Initiative funding had supported standardized training for 238,000 federal, state, and municipal police officers in their role as first responders in the country's new criminal justice system, according to INL's report. The report also stated that as of 2016, Mexico had seized over 230 metric tons of illegal drugs and over $50 million in illegal currency with Mérida-funded equipment and training.
In Central America, State has provided targeted assistance to help enhance the ability of local partners to interdict drug shipments, disrupt trafficking networks, and control domestic production, according to State officials. For example, State officials reported that State had partnered with DEA to support local vetted police units to interdict drug shipments and investigate traffickers. According to the officials, the 20-officer Maritime Interdiction Vetted Unit in Costa Rica interdicted 1,151 kilograms of cocaine in April 2017, and similar units in Guatemala seized 2,532 kilograms of cocaine in June 2017. In addition, according to State officials, INL assisted the Guatemalan counternarcotics police in developing an opium poppy eradication program that resulted in the destruction of 1,000 acres of poppy cultivation in a 2-month period in the spring of 2017. Moreover, State officials reported that a State-provided wiretapping system and associated training allowed Costa Rican prosecutors to convict seven Sinaloa cartel members in May 2017, shutting down an operation that, according to State officials, had been sending 14 metric tons of cocaine per year to the United States.
USAID also relies on international partnerships to implement its alternative development activities. For example, in its Peru Country Development Cooperation Strategy for 2012 through 2016, USAID reported that it plans to continue its mitigation of drug-related security threats in Peru by replicating, in other coca-growing regions, the successes it had in the country's San Martin region, in collaboration with the government of Peru and other U.S. government agencies. Results from the Monzon Valley in Peru also demonstrate how foreign partnerships can affect the illicit drug trade. From 2013 to 2015, USAID focused its alternative development assistance on the coca stronghold of the Monzon Valley, which once supported about 10,000 hectares of coca. The average income there was about $1.89 per day per person, well below the national extreme poverty line of $2.20 per day per person in 2013. Households that remained under assistance during the strategy period saw a 53-percent increase in income. Moreover, the percentage of assisted families in extreme poverty dropped by 25 percentage points, from 55 percent to 30 percent. Coca cultivation dropped by more than 91 percent in all areas where recent coca eradication was followed by sustained alternative development assistance, according to the United Nations Office on Drugs and Crime. The Central Intelligence Agency's Crime and Narcotics Center recorded a less robust, but still impressive, reduction of 64 percent over the same period, according to USAID officials. Furthermore, USAID officials noted that while USAID's resources for alternative development in Peru diminished, the budget for the National Commission for Development and Life without Drugs, Peru's development organization, grew from $15 million in 2011 to $38 million during 2014 and 2015.
In Colombia, USAID reported in its 2014-2018 Country Development Cooperation Strategy that it is trying to address the need for licit economic opportunities by supporting the cocoa, specialty coffee, rubber, and dairy sectors in former coca-growing areas, which would help create the conditions for alternative livelihoods and legal behaviors for small producers in areas vulnerable to coca cultivation and drug production, contributing to broad U.S. government and Colombian efforts to address drug trafficking. This alternative development work increased under Plan Colombia, with USAID and the government of Colombia working together on several large-scale rural development projects. From 2006 to 2017, three programs evolved that incorporate public and private partnerships to facilitate economic growth. The first program reportedly generated 250,000 new jobs by investing in agricultural sectors such as rubber, cacao, and African palm enterprises as well as hotels and tourism. The second program supported the provision of grant subsidies to agricultural value chains, linking small farmer associations with national and international private-sector buyers. In the 2013 selection round, for example, more than 30 selected projects included crops and products such as cacao, rubber, fruits, dairy, and meat. In the third program, USAID carried out sustainable development efforts by encouraging private-sector investment in target areas. For example, USAID focused on developing alliances with key private-sector leaders in the coffee and cacao sectors, raising yields and quality and addressing infrastructure needs, especially in conflict-prone zones. Today, Colombia is the world's largest producer of premium-quality Arabica beans, according to USAID. Likewise, fine cocoa is a successful crop in Colombia, with a growing world demand, according to USAID. The Colombian cocoa industry is relatively small, with 25,000 farmers producing about 42,000 tons, or 0.2 percent of the global market. However, about 85 percent of Colombian cocoa is from "fine" species, giving Colombia a 3-percent share of global fine cocoa exports. USAID also developed a private investment equity fund, providing capital to small- and medium-sized enterprises in Colombia. The fund is now an independent, for-profit enterprise providing small- and medium-sized Colombian enterprises with capital and operational support.
The Coast Guard’s efforts to support foreign partners include its Multilateral Maritime Counter Drug Summits, where U.S. and foreign partners meet to discuss operational and legal issues. The summits are attended by U.S. agencies including, among others, DEA, CBP, the Department of Defense’s Joint Interagency Task Force South, State, and DOJ. Representatives from Western Hemisphere countries, including Belize, Brazil, Canada, Chile, Colombia, Costa Rica, Ecuador, El Salvador, Guatemala, Honduras, Mexico, Nicaragua, Panama, Paraguay, and Peru, among others, also attend the summits. For example, at a summit held in May 2016, Mexico briefed about its judicial system’s transition to an adversarial system, and Honduras briefed about its successes using increased penalties for money-laundering violations, when it is proven that the money is from drug trafficking, according to a Coast Guard document. On the operational side, Panama made presentations on regional operations, and the U.S. Coast Guard presented on capacity building for counterdrug operations, among other efforts. Other issues—such as how to leverage increased maritime awareness regionally resulting from investments by partner nations in radar and the linking of vessel-tracking technologies along their coastlines with the Joint Interagency Task Force South’s Cooperative Situational Information Integration system—are discussed at these meetings.
DHS cooperates with foreign partners in a variety of ways to target emerging counternarcotics threats, as follows:

ICE's Homeland Security Investigations works with foreign partners to (1) coordinate criminal investigations, including those related to counternarcotics; (2) disrupt criminal efforts to smuggle people and material, including drugs, into the United States; and (3) build international partnerships through outreach and training. In ONDCP's fiscal year 2017 Budget and Performance Summary report, ICE established a target for fiscal year 2015 of 29 percent of transnational drug investigations resulting in the disruption or dismantlement of high-threat, transnational drug-trafficking organizations or individuals. According to the report, ICE fell short at 15 percent but indicated there were several reasons, including a methodology that allowed double counting; as a result, the methodology was revised.
CBP also has a network of attachés and advisors who serve in U.S. diplomatic missions and act as liaisons between law enforcement components such as DEA; the FBI; and DOJ's Bureau of Alcohol, Tobacco, Firearms and Explosives. Attachés and advisors also work with foreign partners to build capacity and provide training, technical assistance, and mentoring on border security, according to CBP officials. For example, CBP has trained over 1,000 Panamanian customs and law enforcement officers since 2014. Also, since February 2017, CBP has helped vet, train, and mentor a unit of Peruvian intelligence analysts. Twenty tons of cocaine have been seized since the unit was created, according to CBP officials.
CBP’s National Targeting Center hosts representatives from participating foreign agencies and works with these international liaisons and other U.S. government agencies to detect and disrupt narcotic-smuggling operations, drug-trafficking organizations, and their associates. According to agency officials, in fiscal years 2015 and 2016, the center’s efforts with foreign partners led to results in the Western Hemisphere such as discovery and seizure of over 100 kilograms of cocaine, identification of a previously unknown foreign company suspected of narcotics involvement, and seizure of counterfeit identification documents destined to the United States with links for possible bank fraud and the illicit money laundering.
DOJ works with foreign counterparts to conduct bilateral investigations and support joint counterdrug operations, among other things, as the following examples show:
DEA’s special agents, who work at embassies or consulates overseas, conduct bilateral investigations with their foreign counterparts. These special agents also carry out institution-building activities with their counterparts.
DEA reported that it provides investigative equipment and training, in large part through its Sensitive Investigative Units in selected countries, including Mexico and Colombia. The Sensitive Investigative Units seek to create focused, well-trained, and vetted drug investigative and intelligence units, targeting the most significant drug- trafficking organizations affecting the United States. DEA sees the program’s impact as building international cooperation, facilitating institution building and professional development, and improving judicial processes.
DEA’s International Drug Enforcement Conference is another venue for cooperation with foreign partners. The conference brings senior international drug law enforcement officials together, in regional and bilateral meetings where, according to DEA, topics such as cross- border coordination of operations, intelligence sharing, and joint training activities are addressed. According to INL’s 2017 International Narcotics Control Strategy Report, at a meeting in Peru, in April 2016, geographical regional and multiregional working groups identified collective targets, agreed upon multilateral counterdrug enforcement and interdiction operations, and assessed the progress and evaluated intelligence on existing and emerging targets. The 2015 Caribbean Border Counternarcotics Strategy noted that the DEA-led International Drug Enforcement Conference is a forum for building coalitions between U.S. federal law enforcement and foreign counterparts and that within the Caribbean, law enforcement officials from over 20 nations participate in the annual meetings to discuss regional investigative targeting efforts.
One measure DEA tracks as contributing to ONDCP's National Drug Control Strategy is the number of international, domestic, and diversion priority targets linked to consolidated priority organization targets that it disrupts or dismantles. In ONDCP's fiscal year 2017 Budget and Performance Summary, DEA reported that in fiscal year 2015, it set a goal of disrupting or dismantling 440 targets linked to consolidated priority organization targets and achieved 356. DEA indicated that it did not achieve its goal due to budgetary constraints.
FBI legal attachés carry out capacity-building programs, providing equipment and training to enhance foreign partners' ability to combat criminal activity connected to transnational criminal organizations, according to FBI officials. These officials stated that FBI-trained and -vetted investigative units in Colombia and the Dominican Republic target the most significant criminal organizations affecting the United States.
The FBI conducts multiple trainings with Mexican law enforcement as a means of developing contacts and fostering cooperative relationships with its law enforcement counterparts in Mexico, according to FBI officials. These officials noted that the FBI’s ability to advance investigations with a nexus south of the border is greatly enhanced through these contacts. According to these officials, the FBI also sponsors numerous trainings throughout Latin America to enhance its foreign partners’ ability to deal with the increasing transnational organized crime threat.
Appendix VI: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the individual named above, Juan Gobel (Assistant Director), Julie Hirshen (Analyst-in-Charge), Lynn Cothern, Martin De Alteriis, Neil Doherty, Mark Dowling, Reid Lowe, and Shirley Min made key contributions to this report. Dawn Locke and Diana Maurer provided technical support. | Why GAO Did This Study
Western Hemisphere nations such as Mexico and Colombia are major sources of illicit drugs such as cocaine, heroin, methamphetamine, and marijuana. Precursor chemicals used in the production of illicit fentanyl and other dangerous synthetic drugs often originate in China but typically enter the United States through Canada and Mexico. U.S. agencies implementing the National Drug Control Strategy conduct several activities to disrupt the flow of illicit drugs and dismantle the organizations that control them (see fig.). In December 2016, Congress established the Western Hemisphere Drug Policy Commission to, among other things, evaluate the U.S.-funded counternarcotics programs in the Western Hemisphere.
In this context, GAO was asked to review key issues related to U.S. counternarcotics efforts in the Western Hemisphere. This report examines (1) U.S. agencies' spending for counternarcotic efforts in the Western Hemisphere during fiscal years 2010-2015, the most recent data available; (2) how agencies are gathering and sharing best practices and lessons learned from their counternarcotics efforts domestically and internationally; and (3) mechanisms U.S. agencies have used to address changing drug threats. GAO analyzed agencies' data and documents, interviewed agency officials, and conducted fieldwork at the U.S. Southern Command and Joint Interagency Task Force South in Florida.
GAO is not making any recommendations in this report. Several agencies provided technical comments on a draft of this report which we incorporated as appropriate.
What GAO Found
U.S. agencies implementing the National Drug Control Strategy identified billions of dollars in spending for Western Hemisphere counternarcotics efforts in fiscal years 2010 through 2015. Agencies that track their counternarcotics spending regionally—the Department of Defense (DOD), the Department of Homeland Security's (DHS) Immigration and Customs Enforcement, the Department of State, and the U.S. Agency for International Development—reported spending nearly $5 billion for such activities in the region during this period. Agencies that do not track counternarcotics spending regionally—DHS's Customs and Border Protection and Coast Guard and the Department of Justice's Drug Enforcement Administration and Organized Crime Drug Enforcement Task Forces—reported spending about $34 billion for counternarcotics activities in fiscal years 2010 through 2015. According to officials of these four agencies, most of their counternarcotics activities are in the Western Hemisphere. GAO is not reporting Federal Bureau of Investigation counternarcotics spending separately, since it is included as part of the Organized Crime Drug Enforcement Task Forces' spending.
The Office of National Drug Control Policy (ONDCP), which coordinates the National Drug Control Program, facilitates the sharing of best practices and lessons learned at meetings such as the North American Drug Dialogue workshop, which includes Canada, Mexico, and the United States. In addition, 7 of the 10 agencies GAO reviewed described processes they have in place for identifying and collecting best practices or lessons learned from counternarcotics efforts in the Western Hemisphere. For example, DOD reported using a process, known as the Joint Lessons Learned Program, that consists of five phases: discovery, validation, resolution, evaluation, and dissemination.
U.S. agencies use a variety of mechanisms to address changing narcotics conditions in the Western Hemisphere. ONDCP collaborates with agencies working directly on regional counternarcotics efforts to address emerging threats, as reflected in the annually updated National Drug Control Strategy and the Southwest Border Counternarcotics Strategy. In addition, documentary evidence GAO reviewed showed that a variety of interagency groups, task forces, and committees have been created to coordinate the U.S. government's responses to counternarcotics threats. For example, the National Heroin Coordination Group was established to provide guidance aimed at reducing the growing supply of heroin and illicit fentanyl in the U.S. market. |
Background
ARNG is one of two reserve components of the Department of the Army; it has units located in each of the 54 states, territories, and the District of Columbia. The Secretary of the Army is responsible for creating overarching policy and guidance for all of the components of the Army, including ARNG. The Chief of NGB, among other responsibilities, acts as the official channel of communication between the Department of the Army and the 54 states, territories, and the District of Columbia in which ARNG has personnel assigned and is responsible for ensuring that ARNG personnel are accessible, capable, and trained to protect the homeland and to provide combat resources to the Army.
The Selected Reserve Incentive Program
During fiscal years 2010 through 2016, ARNG disbursed more than $1.8 billion in financial incentives to bolster its recruiting and retention efforts. The ARNG program, called the Selected Reserve Incentive Program, includes cash bonuses and other payments. The ARNG regulation for Selected Reserve Incentive Programs includes over a dozen subcategories of cash bonuses, such as those for newly enlisted soldiers, active duty soldiers who join ARNG, and soldiers who re-enlist or extend with ARNG. In addition to cash bonuses, ARNG makes incentive payments as part of the Student Loan Repayment Program. Under this type of incentive, ARNG disburses incentive payments directly to a third party lender.
The Director of ARNG is responsible for determining the overall policy for the Selected Reserve Incentive Program and issued the regulation that governs incentive procedures and eligibility criteria for soldiers entering into an incentive agreement. On a periodic basis, ARNG updates the policy for a specific fiscal year through a policy or an education and incentive operational message. These updates, which are intended to help ARNG meet its readiness requirements, can provide instructions on the value and frequency of incentives, as well as directing the targeting of incentives to address a particular skill or unit need. The updates can also direct changes to eligibility requirements in order to enable a soldier to receive an incentive.
Each of the 54 states, territories, and the District of Columbia has a state incentive manager in ARNG who provides oversight for authorization, verification, validation, establishment, monitoring, and termination of all incentive payments, including recoupment of incentives. State incentive managers work with recruiting and retention personnel to assist in the use of bonuses. For example, state incentive managers can ensure that the contracts used by recruiting and retention personnel comply with ARNG policy. Additionally, each ARNG unit has personnel who track information on soldier performance, such as attendance, physical fitness, and training. To manage these activities, ARNG uses the Reserve Component Manpower System—an information system that houses manpower readiness data and includes approximately 40 subsystems.
Improper Payments in the California ARNG
A 2008 California National Guard audit revealed that Selected Reserve Incentive Program incentives, including student loan repayments, were being improperly paid to numerous California ARNG soldiers and that some of these cases were results of fraud. Subsequent audits of California ARNG identified 17,485 soldiers who had received a potentially improper bonus or student loan repayment, subject to recoupment, in the period of 2004 through 2010. By the end of 2016, several follow-on reviews had identified improper incentive payments to more than 1,400 soldiers.
These investigations and audits determined that ARNG lacked internal controls over its incentive process. For example, the state incentive manager could authorize and approve an incentive and then forward the payment request to the state’s U.S. Property and Fiscal Office, the office responsible for authorizing payment. To improve the process, in 2010 ARNG established a contract for an Incentive Support Team to, among other things, review soldier incentives. In 2011, ARNG developed a module within the Reserve Component Manpower System called the Guard Incentive Management System. The Guard Incentive Management System was designed to aid in managing the incentive process across all states by providing an online system to track, monitor, and prioritize all incentive cases. The Guard Incentive Management System was also intended to increase oversight through automated notifications and reporting features and to add a budget control mechanism for NGB and the states, among other things. In 2012, ARNG began a phased implementation of the Guard Incentive Management System in each state and territory. ARNG subsequently expanded the Guard Incentive Management System to include the Student Loan Repayment Program.
Process Before NGB Establishes a Debt
NGB goes through a process before it establishes and collects on a debt. State incentive managers are responsible for ensuring that soldiers receiving incentive payments are satisfying contractual requirements. If the state incentive manager determines that the soldier has violated the contract tied to the incentive payment, the state incentive manager sends a certified letter to the soldier that (1) states the reason the payment may potentially be determined to be improper and (2) lists the steps that the soldier can take to adjudicate the issue. ARNG officials informed us that, if the soldier does not respond to the letter within 45 days, a debt is established in Defense Finance and Accounting Service systems. In response, the soldier may provide documents to address the issue or, should documents already exist, may request that NGB make an exception to policy—a determination by NGB that the circumstances of a soldier’s case merit allowing the soldier to retain the incentive payment. Incentive managers in some states told us that they will assist soldiers in requesting an exception to policy and will sometimes request these exceptions on their behalf in the case of events—such as a reorganization of a state’s ARNG units—that could result in a large number of soldiers not meeting the terms of their incentive contracts. If these steps do not resolve the issue, the soldier can seek recourse through the Army Board for the Correction of Military Records. Once these options are exhausted, the debt is established in Defense Finance and Accounting Service systems.
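The debt-establishment sequence described above can be summarized as a short decision flow. The sketch below is illustrative only: the case fields and their names are hypothetical, and the logic reflects only the steps as ARNG officials described them, not the design of any NGB or Defense Finance and Accounting Service system.

```python
from dataclasses import dataclass

@dataclass
class IncentiveCase:
    """Hypothetical fields summarizing a soldier's response to the certified letter."""
    responded_within_45_days: bool
    documents_resolve_issue: bool = False
    exception_to_policy_granted: bool = False
    board_corrects_record: bool = False

def adjudicate(case: IncentiveCase) -> str:
    """Trace the debt-establishment flow described in this report."""
    if not case.responded_within_45_days:
        return "debt established in DFAS systems"
    if case.documents_resolve_issue:
        return "issue resolved; no debt established"
    if case.exception_to_policy_granted:
        return "soldier retains the incentive payment"
    if case.board_corrects_record:
        return "record corrected by the Army Board; no debt established"
    return "options exhausted; debt established in DFAS systems"

print(adjudicate(IncentiveCase(responded_within_45_days=False)))
# debt established in DFAS systems
```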
ARNG Has Implemented Internal Controls to Prevent Improper Payments but Has Not Planned for Future Significant Changes That Could Affect Its Internal Controls
ARNG Has Implemented Internal Controls to Prevent Improper Incentive Payments
ARNG has implemented internal controls, including automated and manual reviews, to prevent improper incentive payments, and it also reviews its incentive programs on a periodic basis. First, ARNG has implemented the Guard Incentive Management System and expanded its use over time to oversee its incentive contracts through automation. In 2012, ARNG began using the Guard Incentive Management System to manage the life cycles of contracts between ARNG and soldiers for incentives and education entitlements, including those for the Selected Reserve Incentive Program. When a soldier signs a contract with a recruiter or retention officer, the Guard Incentive Management System alerts the state incentive manager that an incentive is ready for review. ARNG has also implemented automated rules in the Guard Incentive Management System—known as monitor rules—that continuously monitor a soldier’s eligibility for an incentive. The system does this by comparing the data it receives from multiple personnel systems against the soldier’s contract. If any issues are found, the Guard Incentive Management System will flag the incentive case for review by the state incentive manager and will stop future payments until the issue is resolved. In California, for example, we observed an incentive manager reviewing a case that had been flagged for violating a monitor rule because the soldier was no longer in the unit stipulated in the incentive contract. The incentive manager told us that the soldier was informed of the situation and that corrective action would be required before any additional payments could be made. If a soldier is deemed ineligible or loses eligibility at any time during this process, the state incentive manager will stop payments and review the case to determine whether the contract needs to be terminated.
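The monitor rules described above amount to a set of automated eligibility checks that run continuously against incoming personnel data. The following minimal sketch illustrates the pattern; the rule names, data fields, and suspension mechanism are assumptions for illustration and do not reflect the Guard Incentive Management System's actual design.

```python
def unit_matches(contract, personnel):
    # Flags the situation observed in California: the soldier is no longer
    # in the unit stipulated in the incentive contract.
    return personnel["unit"] == contract["unit"]

def still_serving(contract, personnel):
    return personnel["status"] == "active"

MONITOR_RULES = [unit_matches, still_serving]

def run_monitor_rules(contract, personnel):
    """Return violated rules; any violation flags the case for review by the
    state incentive manager and stops future payments until resolved."""
    violations = [rule.__name__ for rule in MONITOR_RULES
                  if not rule(contract, personnel)]
    if violations:
        contract["payments_suspended"] = True
    return violations

# Example: the personnel feed shows a unit transfer not reflected in the contract.
contract = {"unit": "1-168 IN", "payments_suspended": False}
personnel = {"unit": "2-134 CAV", "status": "active"}
print(run_monitor_rules(contract, personnel))  # ['unit_matches']
```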
State incentive managers are also required to verify certain eligibility criteria and personnel documents manually in the Guard Incentive Management System. State incentive managers use checklists to review a soldier’s incentive contract and unit orders to determine eligibility. The Selected Reserve Incentive Program requires that incentive contracts of more than three years be paid out in installments. State incentive managers or their designees are required to manually review each incentive contract before making an anniversary payment. During our site visits, we observed state incentive managers using the Guard Incentive Management System to review whether a soldier was eligible to receive a payment. For example, in Nebraska we observed an incentive manager using the Guard Incentive Management System to verify a soldier’s contract period, unit assignment, and physical fitness test scores, among other items, to confirm the soldier’s eligibility to receive a payment. In Illinois, we observed an incentive manager using the Guard Incentive Management System to verify a soldier’s identity, unit transfer orders, and an eligible student loan before approving a student loan repayment for further review at the national level. We also observed an incentive manager in Illinois reviewing a soldier’s incentive contract, which was being terminated because the soldier had failed to attend required drills. ARNG personnel in each of the states we visited told us that the Guard Incentive Management System provides a strong barrier against soldiers receiving improper payments. The Guard Incentive Management System also tracks and records each user’s actions on each incentive case to provide an audit trail, which we observed in multiple states.
In addition to reviews conducted at the state level, ARNG conducts another review of incentive payments using the Guard Incentive Management System. Once a contract has been reviewed at the state level, state incentive managers forward it to the ARNG Incentive Support Team for another review. The ARNG Incentive Support Team has provided assistance to all 54 states, territories, and the District of Columbia, by conducting reviews of 100 percent of incentive payments and terminations, among other things. After the ARNG Incentive Support Team’s review, ARNG officials perform a final review of an incentive payment before it is certified. Specifically, ARNG officials review a random sample of 10 percent of contracts from a batch of incentive payments that the Guard Incentive Management System generates. ARNG officials told us that if 25 percent or more of this 10 percent sample is rejected because it contains errors, all of the contracts in the batch are returned to the ARNG Incentive Support Team or their respective states for additional review. If less than 25 percent are rejected, the individual contracts with errors are returned to the ARNG Incentive Support Team or their respective states for additional review. The remainder of the batch passes ARNG review, and the Guard Incentive Management System generates payment files electronically and transfers them to the Defense Finance and Accounting Service, which disburses funds to the soldiers, as shown in figure 1.
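ARNG's final review, as officials described it, is a simple batch-acceptance rule: sample 10 percent of a payment batch and return the entire batch if 25 percent or more of the sample contains errors. A minimal sketch follows; how ARNG rounds the sample size and selects the sample is not described in this report, so those details are assumptions.

```python
import math
import random

def review_batch(batch, has_errors, sample_rate=0.10, reject_threshold=0.25):
    """Apply the final review rule as described: sample 10 percent of a
    payment batch; if 25 percent or more of the sample has errors, return the
    whole batch for additional review; otherwise return only the erroneous
    contracts and pass the rest for disbursement."""
    sample_size = max(1, math.ceil(len(batch) * sample_rate))  # rounding assumed
    sample = random.sample(list(batch), sample_size)
    rejected = [c for c in sample if has_errors(c)]
    if len(rejected) / sample_size >= reject_threshold:
        return {"returned_for_review": list(batch), "passed": []}
    passed = [c for c in batch if c not in rejected]
    return {"returned_for_review": rejected, "passed": passed}

# Illustrative usage with hypothetical contract identifiers.
batch = [f"contract-{i}" for i in range(50)]
outcome = review_batch(batch, has_errors=lambda c: c.endswith("7"))
```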
Second, ARNG conducts periodic reviews of its incentive program. Specifically, National Guard Regulation 600-7, Selected Reserve Incentive Program—issued in August 2014—classifies incentive programs as a high-risk function that should be evaluated every year to mitigate risks, and that management controls must be evaluated at least once every five years. Each of the six states we visited had either conducted an internal review of its incentive program since 2016 or told us that it had plans to conduct one within the next year. For example, ARNG officials in Nevada had evaluated and certified the internal controls of their incentive program in 2017, and ARNG officials in Delaware told us that they plan to request an external evaluation of their incentive program in 2018.
ARNG Has Taken Steps to Address Some Weaknesses Affecting Its Internal Controls but Has Not Planned for Future Significant Changes
ARNG took steps to address some identified weaknesses to its internal controls for managing soldier incentive contracts, but has not developed and implemented a plan for future significant changes that could affect its internal controls. For example, in October 2015, a previous contract to support the ARNG Incentive Support Team expired, and performance of the follow-on contract was delayed for approximately two years—until September 2017—by actions related to two GAO bid protests. From October 2015 to January 2016, ARNG used a 3-month bridge contract with the previous contractor to provide support and enable the ARNG Incentive Support Team to continue to perform 100 percent reviews. However, in January 2016, the ARNG Incentive Support Team stopped conducting 100 percent reviews of incentive contracts. At that time, according to ARNG officials, ARNG increased its review of incentive contracts from 10 percent to 30 percent to help mitigate the loss of the 100 percent review that the ARNG Incentive Support Team had previously provided. On September 30, 2017, the current contract for the ARNG Incentive Support Team was awarded, and according to ARNG officials, the ARNG Incentive Support Team reinstated 100 percent reviews of soldier incentive contracts on December 8, 2017. ARNG also adjusted its review of soldier incentive contracts from 30 percent back to 10 percent.
As another example, in April 2017, ARNG issued the fiscal year 2017 Selected Reserve Incentive Program policy. Among other things, the policy changed the eligibility requirement for receiving an incentive payment based on soldier performance on the Army Physical Fitness Test. Under the previous policy, soldiers who failed two consecutive fitness tests would be ineligible to receive an incentive. The fiscal year 2017 policy changed this requirement to two failures during the lifetime of a soldier’s incentive contract, which could be up to six years. According to ARNG officials, approximately 8,000 incentive contracts are affected by this requirement. NGB requires the vendor managing the Reserve Component Manpower System, which includes the Guard Incentive Management System, to update the system with any policy changes. However, ARNG officials told us that they had not updated the Guard Incentive Management System with the fiscal year 2017 policy. Therefore, the system’s automated reviews are unable to check for this eligibility requirement.
Additionally, according to ARNG officials, ARNG did not publish official guidance regarding this discrepancy. Instead, ARNG informally discussed with state incentive managers that the fiscal year 2018 policy, once issued, would eliminate this requirement. ARNG officials told us that a separate transition of vendors for the Reserve Component Manpower System that began in 2016 had delayed their ability to update the Guard Incentive Management System with the fiscal year 2017 policy. ARNG had not anticipated that the vendor would be unable to update the Guard Incentive Management System as a result of technical challenges following the transition. ARNG officials also told us that they are currently developing the fiscal year 2018 policy and would update the Guard Incentive Management System with this policy when it is ready. On December 6, 2017, we provided our observations to ARNG on the inability of the Guard Incentive Management System to perform automated monitoring on these 8,000 incentive contracts. According to ARNG officials, on December 7, 2017, they submitted a formal change request to the vendor to incorporate this rule in the Guard Incentive Management System, and they expect the rule to be incorporated in February 2018.
ARNG has also taken steps to address unforeseen technical issues that have affected its incentive program. For example, ARNG officials told us that they have implemented several recommendations that were made as part of the Army’s administrative investigation of the transition in vendors managing the Reserve Component Manpower System, of which the Guard Incentive Management System is a component. The investigation determined that ARNG was not positioned to provide sufficient technical oversight of the transition, and in September 2016, the investigation’s report recommended that ARNG, among other things, assign a highly skilled Information Technology subject matter expert to provide oversight of all government and contractor activities related to the Reserve Component Manpower System. ARNG officials also told us they had since assigned this expert and had implemented other recommendations from the investigation, but were not tracking progress on those recommendations. Additionally, ARNG officials told us that, as of October 2017, they were in the process of revising their performance work statement for the current vendor. These revisions may include, among other things, providing other types of technical support and reducing the amount of time that the system would be unavailable to ARNG and others. Finally, ARNG officials told us that they plan to use an existing Information Technology steering committee to provide oversight for the Reserve Component Manpower System; however, these same officials told us that the steering committee had not met from May 2017 through October 2017.
While ARNG has taken steps to remedy some technical issues and weaknesses in its internal controls, it has not demonstrated that it has learned from its past experiences by planning for significant changes to its incentive program that could affect its internal controls, such as its information systems not functioning correctly or data related to incentive contracts not being readily updated or available for an extended period of time. These changes include, for example, the next vendor transition for the Reserve Component Manpower System, which is expected to be recompeted in 2020. Additionally, as ARNG continues deployment of the Integrated Personnel and Pay System – Army in 2018, it is anticipated that aspects of the Reserve Component Manpower System will change. Standards for Internal Control in the Federal Government states that management should identify, analyze, and respond to significant changes that could affect the entity's internal control system. Because conditions affecting the entity and its environment continually change, management can anticipate and plan for significant changes by using a forward-looking process to prepare for change. Planning for significant changes—including those cited earlier—requires time and coordination in advance of the changes occurring. However, ARNG officials have been unable to demonstrate their planning efforts to identify, analyze, and respond to any significant changes to ARNG's internal controls that may arise if, for example, the contract is awarded to a new vendor or as the Reserve Component Manpower System fully interfaces with the Integrated Personnel and Pay System – Army. Without taking action to plan for potentially significant changes to its internal controls for the Reserve Component Manpower System, ARNG is at risk of not being prepared for these changes that could contribute to the potential for making improper payments.
DOD Components Have the Authority to Waive Soldier Incentive Debt, and DOD Has Improved the Availability of Documentation to Adjudicate Waiver Cases
Two DOD Components Have the Authority to Waive ARNG Soldier Incentive Debts for Military Pay and Allowances
The Defense Finance and Accounting Service (DFAS) and the Defense Office of Hearings and Appeals (DOHA) have the authority to waive erroneous incentive debts for ARNG soldiers. DFAS is a DOD component that maintains records of soldiers’ debts and has the authority to waive established debts of $10,000 or less. DOHA, another DOD component, adjudicates waivers for debts of more than $10,000. For established debts, DFAS will notify the soldier that a debt exists and will be collected. In response, the soldier can submit a request to DFAS to waive the debt. DFAS has the statutory authority to waive debts incurred as a result of erroneous payments of up to $10,000 to members of the armed services, including ARNG soldiers. If DFAS denies all or part of the waiver request, it informs the waiver applicant of the right to file an appeal of the denial to DOHA within 30 days. Soldiers can file for a waiver of indebtedness from DFAS for a period of up to 5 years from the date an erroneous payment is discovered. DFAS may not consider waiver applications that it receives after that 5-year period.
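The jurisdictional rules described above, which turn on the debt amount and the timeliness of the filing, determine which component adjudicates a waiver request. The sketch below expresses those rules as stated in this report; the date arithmetic, approximating 5 years as 5 x 365 days, is an assumption.

```python
from datetime import date

def route_waiver_request(debt_amount: float,
                         discovered: date,
                         filed: date) -> str:
    """Route an ARNG incentive-debt waiver request per the rules in this report."""
    # DFAS may not consider applications filed more than 5 years after the
    # erroneous payment was discovered (approximated here as 5 * 365 days).
    if (filed - discovered).days > 5 * 365:
        return "untimely: filed after the 5-year window"
    if debt_amount <= 10_000:
        return "DFAS adjudicates (appealable to DOHA within 30 days if denied)"
    return "DOHA adjudicates"

print(route_waiver_request(8_500, date(2014, 3, 1), date(2016, 6, 1)))
# DFAS adjudicates (appealable to DOHA within 30 days if denied)
```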
DOHA has the authority to review waiver cases forwarded by DFAS and to adjudicate appeals from soldiers whose waiver applications have been denied. According to DOHA officials, they review only cases in which (1) the payment has been identified as erroneous, (2) a collection action has been started, and (3) the soldier has been given rights under the Fair Debt Collection Practices Act. DOHA officials told us that they do not have authority over the establishment or collection of a debt or the authority to conduct a hearing for a soldier contesting the validity of a debt. However, DOHA officials told us that they will verify the correctness of the debt before adjudicating a waiver case and may request information from DFAS—such as documentation—including enlistment contracts, payment vouchers, and leave and earnings statements. Additionally, DOHA officials told us that they do not have the authority to adjudicate debts for payments made under the Student Loan Repayment Program—one type of payment under the Selected Reserve Incentive Program—because of their determination that their authority to waive debts for erroneously paid “pay and allowances” as defined in 32 U.S.C. § 716 and 10 U.S.C. § 2774(a) does not apply to payments to lenders for educational expenses. Those cases are reviewed and adjudicated at the discretion of the Secretary of the Army. If DOHA denies a soldier’s waiver application, the soldier may request that DOHA reconsider its decision, which DOHA officials told us is accomplished by an appeals panel of three DOHA attorneys. The decision of this panel is final and ends the waiver of indebtedness adjudication process, as depicted in figure 2.
DOD Has Improved the Availability of Documentation to Adjudicate Waiver Cases
DOD has improved the availability of the documentation that is used to adjudicate waiver cases for soldiers’ debts. DOHA officials told us that adjudication was sometimes delayed because case files lacked documentation. As part of our review of DOHA waiver case files, we found several examples of ARNG cases involving Selected Reserve Incentive Program debts that had been adjudicated between January 2014 and December 2016, in which DOHA adjudicators had to acquire missing information, including documentation, from external sources before adjudicating the case. For example, in one case from Alabama that was adjudicated in 2014, it was 83 days before DOHA officials received the documentation they needed. DOHA officials told us this information included the soldier’s bonus agreement, leave and earnings statements, and transfer orders. In another case from California that was adjudicated in 2016, it took adjudicators 74 days to obtain additional information. DOHA officials told us this information included payment vouchers.
DOD’s use of the Guard Incentive Management System has facilitated the availability of documentation needed to adjudicate waiver cases. For example, the system stores incentive payment and eligibility documentation, which may help to reduce delays in the adjudication of waivers associated with missing documentation. Before making an incentive payment, state incentive managers are required to inspect case documentation in the Guard Incentive Management System to validate the payment. During our site visits to selected states, we observed state incentive managers using the Guard Incentive Management System to review documents, such as re-enlistment contracts and unit orders. In several cases, we observed state incentive managers identifying errors in documents, and we observed their ability to correct these documents. For example, in Nebraska we observed a case in which a soldier’s military occupational specialty code in the Guard Incentive Management System was not in line with what was in the incentive contract, because the unit had been reorganized. We then observed a state incentive manager confirming the soldier’s transfer orders and uploading this documentation into the Guard Incentive Management System. Our observations are not generalizable across all states or for all contracts, but they suggest that documentation required to adjudicate waiver cases is now more readily available and will continue to be in the future.
DOD also updated its Financial Management Regulation to improve the availability of documentation. DOHA officials told us that DOD had updated the Financial Management Regulation in January 2016. Specifically, Volume 16, Chapter 4, Section 040403 of the DOD Financial Management Regulation instructs applicants to include in their waiver requests (1) copies of all supporting documentation, (2) copies of leave and earnings statements, (3) copies of notifications of personnel actions, and (4) any statements from the applicant in support of the waiver application. DOHA officials stated that this revision should reduce documentation-related delays during their review of future waiver submissions. Additionally, DOHA officials told us that they have taken steps to train DFAS personnel, who are responsible for reviewing waiver applications, in an effort to reduce delays.
Conclusions
ARNG has made progress in improving its internal controls since widespread improper payments were identified in California in 2008. By using the Guard Incentive Management System and requiring multiple levels of review before incentives are paid, ARNG may have reduced the likelihood of future widespread improper payments similar to what occurred in California. However, it is important for ARNG to be forward looking in preserving the integrity of its internal controls. ARNG has faced challenges during the transition between vendors managing the system that resulted in the weakening of internal controls, including those built into the Guard Incentive Management System. To its credit, ARNG has taken mitigating actions to prevent improper payments while attempting to address those issues. These challenges, and the need for mitigating actions, could have been prevented if ARNG had identified and prepared in advance for challenges potentially resulting from the vendor transition. If ARNG does not proactively identify, analyze, and plan to respond to significant changes that could affect the internal controls to its incentive program, there is an increased risk that additional weaknesses to its internal controls could emerge and result in an increased likelihood of improper payments.
Recommendation
We are making one recommendation to ARNG: The Director of the Army National Guard should develop and implement a plan to identify, analyze, and address any significant changes that could affect internal controls for its Guard Incentive Management System. (Recommendation 1)
Agency Comments and Our Evaluation
We provided a draft of this report to ARNG for comment. In its comments, reproduced in appendix I, ARNG concurred with our recommendation and stated that it had initiated a project to improve internal control measures as significant changes are made to the Guard Incentive Management System to align the system with policy. ARNG also stated that the project would look at the time required to adjust incentives to effect change within the organization and achieve its strength goals. ARNG expects the project to be completed in August 2018. We believe this action would meet the intent of our recommendation.
We are sending copies of this report to the Secretary of Defense, the Chief of the National Guard Bureau, and the Director of the Army National Guard. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-9971 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.
Appendix I: Comments from the Army National Guard
Appendix II: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Tina Won Sherman (Assistant Director), David Blanding Jr., Vincent Buquicchio, Wesley Collins, Joanne Landesman, Amie Lesser, Jim Melton, and Paul Seely made significant contributions to this report. | Why GAO Did This Study
ARNG provides trained and equipped units ready to defend life and property in the 54 states, territories, and the District of Columbia. In 2011, the Army Audit Agency reported weaknesses in internal controls over soldier incentive payments in the California ARNG that led to some improper payments. DOD initially took actions to recoup some of these payments, but the National Defense Authorization Act for Fiscal Year 2017 allowed for the waiver or other forgiveness of debt.
The National Defense Authorization Act for Fiscal Year 2017 included a provision for GAO to assess policies and procedures for minimizing and waiving the recoupment of improper payments. This report (1) evaluates the extent to which ARNG has implemented and planned to adjust the internal controls for its Selected Reserve Incentive Program to prevent improper payments and (2) describes which DOD organizations have the authority to waive ARNG incentive debts and steps taken to improve waiver documentation. GAO conducted site visits to six states based on the value of their incentive programs, reviewed documentation used to manage incentive programs, examined incentive debt waiver cases, and interviewed DOD officials.
What GAO Found
In response to the over $22 million in improper payments the California Army National Guard (ARNG) made in cash bonuses and other soldier incentives from 2004 through 2010, ARNG officials implemented some internal controls to prevent future improper incentive payments. These internal controls include automated and manual checks of soldier incentive contracts to verify soldiers' eligibility for incentive payments. For example, ARNG implemented automated rules in its Guard Incentive Management System—an online system that tracks incentive contracts—to monitor a soldier's eligibility for an incentive by comparing the data received from multiple personnel systems against the soldier's contract. If any issues are found, the Guard Incentive Management System will flag the incentive case for review by state ARNG officials and will stop future payments until the issue is resolved.
While these internal controls have improved accountability over soldier incentive payments, ARNG is still in the process of completing further actions. For example, in April 2017, ARNG issued the fiscal year 2017 Selected Reserve Incentive Program policy. However, ARNG did not incorporate changes as a result of this policy into the Guard Incentive Management System to ensure that the automated checks captured these policy changes—including one that affects approximately 8,000 soldier incentive contracts, according to ARNG officials. ARNG officials told GAO that they had not updated the Guard Incentive Management System with this policy because of technical challenges resulting from a transition in vendors for the Reserve Component Manpower System—an information system that houses the Guard Incentive Management System. ARNG officials also told GAO that they plan to update the Guard Incentive Management System to include the 2017 policy in February 2018.
GAO also found that ARNG had not developed and implemented a plan for future significant changes that could affect its internal controls over soldier incentive payments. These changes include, for example, the end of the current vendor contract in 2020 to support the Reserve Component Manpower System and the Army National Guard's migration to the Integrated Personnel and Pay System – Army that is scheduled to occur in 2018. Standards for Internal Control in the Federal Government states that management should identify, analyze, and respond to significant changes that could affect an internal control system. Specifically, because conditions affecting an organization and its environment continually change, management needs to anticipate and plan for significant changes by using a forward-looking process to prepare for those changes. Without taking action to plan for such changes, ARNG puts itself at risk of making improper payments in the future.
The Defense Finance and Accounting Service and the Defense Office of Hearings and Appeals review and adjudicate requests for waivers of incentive debt. DOD has taken two steps to improve the availability of documentation needed to adjudicate waiver cases. First, DOD has clarified the policy in its Financial Management Regulation on the documentation soldiers are required to provide. Second, officials review documentation in the Guard Incentive Management System before validating an incentive payment, which may reduce delays associated with missing documentation when processing waiver requests.
What GAO Recommends
GAO recommends that ARNG develop and implement a plan that identifies, analyzes, and responds to significant changes that could affect internal controls for its Selected Reserve Incentive Program. ARNG concurred with the recommendation and has identified planned actions to address the recommendation. |
Background
This section describes DOE’s M&O contracts, incentives in those contracts, general requirements for DOE’s M&O contractor performance evaluation processes, and contracting and performance challenges involving DOE’s M&O contracts that have been identified by previous reporting.
DOE Uses M&O Contracts
Since the Manhattan Project produced the first atomic bomb during World War II, DOE and its predecessor agencies have depended on the expertise of private firms, universities, and others with the scientific, manufacturing, and engineering expertise needed to carry out research and development work and manage and operate the government-owned, contractor-operated facilities where the bulk of the department’s mission activities are carried out. DOE relies on contracts in general, and M&O contracts in particular, to do this work. According to DOE’s Fiscal Year 2017 Agency Financial Report, the department spends approximately 90 percent of its annual budget on contracts, and in fiscal year 2016 DOE managed contracts valued at more than $24 billion. Of that amount, DOE spent approximately 80 percent on its M&O contracts.
The work is closely related to the agency’s mission and is of a long-term or continuing nature, and there is a need to ensure its continuity and for protection covering the orderly transition of personnel and work in the event of a change in contractors.
The Office of Environmental Management (EM) is responsible for cleaning up facilities and sites that are contaminated from decades of nuclear weapons production and nuclear energy research.
The Office of Fossil Energy (FE) manages the nation’s Strategic Petroleum Reserve, which is an emergency stockpile of oil stored in underground salt caverns in Texas and Louisiana.
NNSA, a separately organized agency within DOE, is responsible for maintaining and enhancing the safety, reliability, and performance of the nation’s nuclear weapons stockpile, promoting international nuclear safety and nonproliferation, and supporting U.S. leadership in science and technology, among other things.
The Office of Nuclear Energy’s (NE) primary mission is to advance nuclear power as a resource capable of making major contributions in meeting the nation’s clean energy supply and energy security needs.
The Office of Science (SC) supports scientific research for energy and the physical sciences both by supporting (1) such research, and (2) the development, construction, and operation of scientific user facilities.
These DOE offices use M&O contracts to carry out their research and development, nuclear weapons production, and other missions. For example, for research and development, DOE is the nation’s single largest funding source for basic physical sciences research, supporting research in energy sciences, advanced scientific computing, physics, and other fields. For weapons production, NNSA uses production sites to maintain, evaluate, repair, and dismantle both the nuclear and non- nuclear components for nuclear weapons; manufacture weapons components; and process tritium, a key isotope used to enhance the power of nuclear weapons. DOE also uses M&O contracts for sites dedicated to other types of missions, including nuclear waste disposal and an emergency stockpile of oil. Figure 1 and appendix II provide additional information on DOE’s M&O contracts.
In August 2016, we identified three key attributes associated with DOE’s M&O contracts. First, M&O contracts have a limited competitive environment—we found that about half of DOE’s fiscal year 2015 M&O contract spending was on contracts awarded noncompetitively or that received a single offer at the time they were competed. In addition, M&O contracts include longer terms than other federal contracts, so they are competed less frequently. Second, DOE M&O contracts have broad scopes of work that cover nearly all aspects of work at a site. In particular, though mission activities of M&O contractors can be highly technical, mission support activities generally accounted for about 25 to 50 percent of contractors’ total costs in fiscal year 2015, and encompassed such things as managing infrastructure, facilities, and grounds; security; and the internal audit function. Third, M&O contracts and DOE management practices contribute to a closer relationship between contractors and the government. For example, M&O contractors are generally more integrated with DOE in how they are paid and in their accounting systems than other types of contractors. With regard to payment, rather than traditional bill payment methods including invoices, payment approval and authorization, and disbursement of funds, M&O contractors can draw funds directly from federal accounts through “letter of credit financing.”
With regard to accounting systems, as we reported in August 2016, DOE requires M&O contractors to follow DOE’s Accounting Handbook and integrate their costs and liabilities in DOE’s accounts each month. DOE officials said that this provides visibility into contractor accounts and allows DOE to monitor the appropriateness of the contractors’ withdrawal of funds in near real time. According to DOE officials, this integration carries over into how the value of contracts are determined—rather than establishing the cost of the contract at the time of contract award, the value of the M&O contract is determined by the amount annually obligated on the contract by DOE, consistent with DOE’s annual congressional appropriations.
Incentives in M&O Contracts
Cost-reimbursement type contracts allow the agency to contract for work when circumstances do not allow the agency to sufficiently define its requirements or estimate its costs to allow for a fixed-price contract. Under a fixed-price contract, a contractor accepts responsibility for completing a specified amount of work for a fixed price. In contrast, under cost-reimbursement contracts, the government reimburses a contractor for allowable costs incurred, to the extent prescribed by the contract. The government may also pay a fee that is either fixed at the outset of the contract or adjustable based on performance criteria set out in the contract.
In September 2009, we reported that cost-reimbursement contracts are considered high risk for the government because of the potential for cost escalation and because the government pays a contractor’s costs of performance regardless of whether the work is completed. As such, cost-reimbursement contracts are suitable only when (1) circumstances do not allow the agency to define its requirements sufficiently to allow for a fixed-price type contract; or (2) uncertainties involved in contract performance do not permit costs to be estimated with sufficient accuracy to use any type of fixed-price contract. One major reason for the inability to accurately estimate costs is the lack of knowledge of the work needed to meet the requirements of the contract, such as with research contracts, which necessarily involve substantial uncertainties. The DOE Acquisition Regulation (DEAR) states that cost-plus-award-fee (cost reimbursement) contracts are generally the appropriate contract type for M&O contracts and that the agency can choose among a number of different contract types for its M&O contracts.
Under the FAR, cost-reimbursement contracts may include specific incentives, such as arrangements intended to improve contractor efforts and discourage inefficiency and waste. Table 1 provides definitions of incentives commonly included in DOE’s M&O contracts.
Generally, according to DOE officials, award fees and incentive fees are intended to motivate M&O contractor performance on an annual basis, as outlined in annual performance evaluation plans. All DOE M&O contracts we analyzed also include “conditional payment of fee” clauses that permit the agency to reduce an otherwise earned fee if it determines that the contractor’s performance did not meet minimum requirements, such as those related to safety, health, or the environment. Under the award term incentive, contractors can earn one additional year of performance under the contract for each year they exceed certain thresholds in their annual performance evaluations. (See apps. III through VIII for additional information on the incentives included in each M&O contract, by DOE office.)
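The interaction of these incentives over one annual evaluation period can be illustrated with a short sketch. All numbers below are hypothetical: the award term threshold, the size of any conditional-fee reduction, and the evaluation scale are set in each contract and performance evaluation plan, not specified here.

```python
def annual_incentive_outcome(earned_fee: float,
                             met_minimum_requirements: bool,
                             evaluation_score: float,
                             award_term_threshold: float = 3.5,   # hypothetical
                             fee_reduction_fraction: float = 0.5  # hypothetical
                             ) -> dict:
    """Combine the award fee, conditional payment of fee, and award term
    incentives for one annual evaluation period, as described in this report."""
    fee_paid = earned_fee
    if not met_minimum_requirements:
        # Conditional payment of fee: reduce an otherwise earned fee for
        # failures such as those related to safety, health, or the environment.
        fee_paid -= earned_fee * fee_reduction_fraction
    # Award term incentive: one additional year of performance for each year
    # the contractor exceeds the threshold in its annual evaluation.
    years_added = 1 if evaluation_score > award_term_threshold else 0
    return {"fee_paid": fee_paid, "award_term_years_added": years_added}

print(annual_incentive_outcome(1_000_000, True, 3.8))
# {'fee_paid': 1000000, 'award_term_years_added': 1}
```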
In addition, other elements of contract administration or oversight, while not formally incentives, can influence contractor performance. For example, option periods—which are established in the contract—enable the government to unilaterally extend the performance period and performance of services. According to DOE officials, other potentially important influences on contractor behavior include public reputation and the ability to compete for follow-on DOE or other government contracts.
Performance Evaluation and Award Fee Requirements
The FAR, DEAR, DOE’s Acquisition Guide, and DOE policies provide requirements and guidance for DOE’s annual performance evaluations of contractor performance. Under the FAR, all contracts providing for award fees must be supported by an award fee plan that establishes procedures for evaluating award fees and an Award Fee Board to conduct award fee evaluations. A Fee Determining Official makes the final determination regarding the amount of award fee the contractor earns during the evaluation period. Additionally, the FAR generally calls for entities that administer contracts providing award fees to use a set of ratings from Excellent to Unsatisfactory, which include performance descriptions and associated available award fee percentages (see Table 2 below). Award fee ratings are associated with a range of percentages of the total available award fee that DOE offices may award to a contractor based on the contractor’s assessed performance.
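For illustration, the adjectival ratings and their fee bands can be represented as a simple lookup. The percentage bands below follow the FAR's standard award-fee rating table (FAR 16.401); readers should rely on Table 2 of this report for the bands as DOE presents them.

```python
# Percentage bands from the FAR's standard award-fee adjectival rating table
# (FAR 16.401), expressed as (low, high) fractions of the total available fee.
FAR_AWARD_FEE_BANDS = {
    "Excellent":      (0.91, 1.00),
    "Very Good":      (0.76, 0.90),
    "Good":           (0.51, 0.75),
    "Satisfactory":   (0.00, 0.50),  # "no greater than 50 percent"
    "Unsatisfactory": (0.00, 0.00),
}

def available_fee_range(rating: str, total_available_fee: float) -> tuple:
    """Dollar range of award fee available for a given adjectival rating."""
    low, high = FAR_AWARD_FEE_BANDS[rating]
    return (round(low * total_available_fee), round(high * total_available_fee))

print(available_fee_range("Very Good", 10_000_000))  # (7600000, 9000000)
```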
DOE offices develop two primary documents to guide and report assessments of contractors’ performance for each fiscal year: a Performance Evaluation and Measurement Plan (PEMP) and a Performance Evaluation Report (PER). The PEMP is to be developed at the beginning of each fiscal year—which is the beginning of the evaluation period—and is to establish expectations for contractor performance and describe how the responsible DOE office will evaluate and measure performance against those expectations. The PEMP provides the blueprint for what performance is expected of contractors, how contractors’ performance will be evaluated, and how the evaluations will be used to determine award fees, award terms, and any other incentives. The PER is to be developed at the end of each evaluation period—which typically is the end of the fiscal year—and is the responsible DOE office’s evaluation of contractor performance, in which DOE documents the performance rating and, in some cases, the fees and other incentives that will be awarded to the contractor. Figure 2 shows the general steps of DOE’s performance evaluation of contractors.
Further, under the FAR and DOE policy, the department is to consider technical, administrative, and cost performance during acquisition planning. The FAR provides that, for M&O contracts, replacement of an incumbent contractor is largely based on an expectation of meaningful improvement in performance or cost; thus, an agency or department should consider three categories of performance—technical, administrative, and cost—when deciding whether to extend or compete a contract at the end of the contract’s term. According to DOE officials, the annual performance evaluation process and the related PER are important sources of information for making these decisions. Thus, the PER should include relevant information on an M&O contractor’s technical, administrative, and cost performance. For DOE, the M&O contract PER is also important because DOE uses information from the PER to update a contractor’s past performance information in the Contractor Performance Assessment Reporting System (CPARS), which DOE and other agencies use to understand a contractor’s performance history and to inform their evaluations of future contract proposals.
Contracting and Performance Challenges Involving DOE’s M&O Contracts
A number of commissions, task forces, and other outside groups have identified challenges involving DOE’s M&O contracts. For example, two independent commissions—the Augustine-Mies Panel and CRENEL— have reported on related contract management challenges. The 2014 Augustine-Mies report focused on NNSA and made numerous recommendations for comprehensive reforms, including addressing dysfunctional government-M&O contractor relationships, improving oversight of M&O contractors, and reforming award fee and performance incentive structures. CRENEL, taking a broader view of all 17 national laboratories across DOE, in 2015 found a similar erosion of trust between DOE and some of its M&O contractors while noting that some laboratories, in particular those under SC, had better, more effective relationships. The CRENEL report recommended reforms to the management and oversight of M&O contractors and performance incentive structures. In addition to challenges, CRENEL also noted that SC’s annual performance evaluation and planning processes were robust and suggested that they be adapted by other DOE offices.
NNSA’s and EM’s contract management remains on our High-Risk List for government operations vulnerable to fraud, waste, abuse, and mismanagement. In addition, since 2005 we have identified a variety of project and program outcomes associated with deficiencies in DOE’s management and oversight of its M&O contractors. We have also identified improvements needed in core processes and functions DOE relies on to oversee its M&O contractors and assess their performance. These reports include the following examples:
Since 2005, during various reviews, we found that cost accounting practices used by NNSA’s M&O contractors have varied, making it difficult for NNSA to compare costs across its sites or accurately identify the total costs across its nuclear security enterprise and to obtain reliable cost data. In January 2017, we reported on the importance of reliable enterprise-wide cost information to effective management and oversight and found that the plan NNSA submitted to Congress in 2016 to improve and integrate its financial management, as required by Congress in 2013, did not provide a useful road map for guiding NNSA’s efforts. We recommended that NNSA develop a plan for producing cost information that fully incorporates leading planning practices. NNSA agreed, and we are monitoring implementation of the recommendation.
In October 2014, we reported on actions taken to address challenges with the Uranium Processing Facility under construction at the NNSA Production Office Sites (specifically at the Y-12 National Security Complex), which is managed by the M&O contractor at that site. A challenge with this facility was that in July 2012 the M&O contractor concluded that required equipment would not fit into the facility as designed and that addressing this issue would cost an additional $540 million. NNSA’s analysis of the factors that contributed to this issue identified several causes, including project oversight deficiencies— specifically, failure to ensure that requests and directives from NNSA to the contractor were implemented.
In May 2015, we reported on NNSA’s use of contractor assurance systems to conduct oversight and evaluate the performance of M&O contractors. Contractor assurance systems are designed and used by M&O contractors to oversee their own performance and to self-identify and correct potential problems. We found that NNSA had not fully established policies or guidance for using information from these systems to conduct oversight of M&O contractors and that NNSA therefore did not have standards for ensuring that contractors are overseen consistently. We recommended that NNSA establish policies and guidance for using information from contractor assurance systems for the oversight of M&O contractors; NNSA concurred with the recommendations and has taken some steps to establish policies and guidance, though it has not yet fully addressed our recommendations.
In March 2017, we reported that DOE needed quality data to manage its risk of fraud and recommended that DOE require contractors to maintain sufficiently detailed transaction-level cost data that are reconcilable with amounts charged to the government. DOE did not concur with the recommendation and has not taken steps to implement it. Because DOE does not require its contractors to maintain sufficiently detailed transaction-level cost data that are reconcilable with amounts charged to DOE, it is not well positioned to employ data analytics as a fraud detection tool. As a result, DOE is missing an opportunity to develop, refine, and improve its data analytics and better meet requirements of the Fraud Reduction and Data Analytics Act.
DOE Offices Use Different Approaches to Evaluate Contractor Performance, and All but NNSA Have Documented Their Approaches
In fiscal years 2006 through 2016, the six DOE offices generally used one of three different approaches to evaluate M&O contractor performance. All but one of these offices have documented their approaches in policies and procedures; NNSA has a broad policy but does not have procedures for implementing it, in particular for collecting and using performance information. In the absence of documented procedures, NNSA may not consistently collect and use performance information in evaluating contractor performance.
DOE Offices Use Three Approaches That Differ in Their Criteria and Methodologies for Ratings and Incentives
According to DOE officials, DOE does not have a department-wide performance evaluation process and offices developed their approaches to performance evaluation based on their varying missions and performance evaluation priorities.
We identified the following three general approaches:
The Science and Energy Lab approach (used by SC, EERE, and NE) uses broad, office-wide performance criteria and a detailed process and web-based tool to collect performance information and determine ratings and incentives.
The NNSA approach also uses broad, office-wide performance criteria, but ratings and incentives are determined through a series of management meetings.
The Site Specific approach (used by FE and EM) uses more detailed performance criteria specific to each contract and makes rating and incentive determinations in ways that vary based on the individual criteria.
These approaches generally differ in their (1) performance criteria, (2) methodologies used to determine contractor ratings, and (3) methodologies used to determine incentives. Appendixes III through VIII provide additional information on each office’s performance evaluation approach.
Performance Criteria
Based on our review of DOE documents, the three approaches all use a combination of what PEMPs describe as subjective and objective performance criteria. The Science and Energy Lab and NNSA approaches use primarily subjective criteria, and the Site Specific approach uses primarily objective criteria. Subjective criteria are generally qualitative statements that describe desired contractor performance, according to DOE officials. For example, a subjective criterion that SC used during fiscal year 2016 was for contractors to “provide effective and efficient strategic planning and stewardship of scientific capabilities and program vision.” In contrast, DOE officials explained that objective criteria generally describe performance that may be measured on a “pass/fail” or quantitative basis. For example, FE used objective criteria such as developing a strategic plan by a specific date or ensuring that all phases of construction for the conversion of a tank were mechanically complete.
Performance criteria under the Science and Energy Lab and NNSA approaches share a similar structure of three tiers of criteria: goals, objectives, and notable outcomes (called key outcomes under NNSA’s approach). The criteria are also mostly subjective and broad enough to be consistent across all the contracts of the responsible DOE office. Based on our review of DOE documents and information, SC and EERE have used the Science and Energy Lab approach since fiscal year 2006 and NE since fiscal year 2007. NNSA used the NNSA approach in fiscal years 2013 through 2016.
Under the Science and Energy Lab and NNSA approaches, goals are general overarching statements of the desired outcomes for each major performance area under the contract and constitute the highest performance criteria used to evaluate contractor performance. Based on documentation describing these approaches, goals are to be composed of at least two objectives, which are statements of desired results for an organization or activity that discuss specific actions the contractor will undertake to accomplish a goal. Each office uses its respective goals and objectives consistently for each of its M&O contracts (EERE and NE each have only one site), and the goals and objectives generally cover the same functional areas across the offices, though some NNSA goals focus specifically on NNSA’s nuclear weapons and national security missions. For complete lists of goals and objectives used by the offices using the Science and Energy Lab and NNSA approaches, see appendixes III, VI, VII, and VIII.
The third-tier performance criterion used to evaluate contractor performance is the notable outcome, which, according to agency documents, is intended to focus the contractor on specific items that officials identified as the most important initiatives or highest-risk issues the contractor must address. According to DOE documents, notable outcomes differ from goals and objectives in that they (1) are usually objective, (2) are specific to each contractor, and (3) change from year to year. However, not all goals and objectives have associated notable outcomes. Figure 3 provides an example of the relationship between a goal and its related objectives and notable outcomes for SC’s Brookhaven National Laboratory contractor for fiscal year 2016.
Our review of agency documents found that the Site Specific approach consists primarily of objective performance criteria that are specific to each contract, as well as a few broader, objective criteria. This is in contrast to the other two DOE approaches to performance evaluation, which primarily rely on broad, subjective criteria and a few objective criteria. Based on our discussions with agency officials, both EM and FE have generally used this Site Specific approach since fiscal year 2006. For both offices, objective performance criteria are defined based on quantifiable metrics (e.g., a contractor’s demonstrated waste processing rate) and milestones (e.g., whether a contractor completed a task on or before a scheduled date). For example, one of FE’s fiscal year 2016 objective performance criteria for the Strategic Petroleum Reserve M&O contract is whether facilities and systems functioned at a level adequate to meet program requirements based on average scores from its Maintenance Performance Appraisal Rating tool. Further, our review of agency documents showed that the Site Specific approach uses subjective performance criteria for aspects of performance that may be difficult to capture objectively—such as determining how effectively a contractor’s measures have prevented harm to workers, the general public, and the environment. (See apps. IV and V for examples of the objective and subjective criteria EM and FE use.)
Prior to fiscal year 2013, NNSA also used the Site Specific approach, and it had specific, objective performance criteria that varied by contract. Based on our review of agency documents, NNSA’s performance criteria were generally divided into four performance areas: (1) mission, (2) operations, (3) business, and (4) multi-site. According to NNSA officials, as a result of “lessons learned” efforts, NNSA updated this approach to its current one to provide more succinct, structured, and consistent reporting by ensuring that all NNSA M&O contractors have identical goals and objectives.
Methodologies Used to Determine Contractor Ratings
Based on our review of DOE documents, rating methodologies vary across the three approaches—the Science and Energy Lab approach uses a detailed, formulaic methodology; the NNSA approach determines ratings at a series of management meetings; and in the Site Specific approach, ratings depend primarily on whether the contractor accomplishes specific tasks.
Based on our review of agency documents, under the Science and Energy Lab approach, stakeholders—including officials from headquarters, field offices, and internal and external customers— generally evaluate contractor performance against the criteria for each objective and notable outcome (“lab customers” evaluate objectives under science and technology goals only). Their evaluations, in the form of narratives and numerical scores, are entered into a web-based information collection tool that aggregates the scores using a series of calculations and weights to generate ratings that are then approved by the Fee Determining Official for the responsible DOE office. For example, for SC, once individual stakeholders enter objectives’ scores into the Laboratory Rating Tool, those scores are then weighted and added together through a predetermined formula to provide an overall rating of contractor performance for each goal. Under this approach, the Laboratory Rating Tool aggregates the objective scores into numerical goal ratings and corresponding letter grades from 4.3 (A+) to 0 (F) for the contractor. Notable outcomes are rated on a “pass/fail” basis, meaning that the contractor either met or did not meet them. Receiving a passing rating for the notable outcome is required for the contractor to earn a B+ or better for the notable outcome’s associated objective. Thus, although notable outcomes are not given their own numerical score or letter grade, they can have a significant effect on a contractor’s objective ratings and, ultimately, goal ratings. (See apps. III, VII, and VIII for examples of the weighting and calculations involved in aggregating ratings for EERE, NE, and SC M&O contractors.)
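To illustrate the mechanics of this roll-up, the following sketch models the aggregation in Python. It is a simplified illustration, not SC’s Laboratory Rating Tool: the stakeholder scores, objective weights, and the exact cap applied when a notable outcome is failed are hypothetical, and only the 4.3 (A+) to 0 (F) scale and the rule that a failed notable outcome precludes a B+ (3.1) or better on the associated objective come from the process described above.

```python
# Simplified sketch of the Science and Energy Lab rating roll-up; not SC's
# actual Laboratory Rating Tool. Scores and weights are hypothetical.

def objective_score(stakeholder_scores, notable_outcome_passed):
    """Average stakeholder scores for one objective on the 0 (F) to 4.3 (A+)
    scale; a failed notable outcome caps the objective below B+ (3.1)."""
    score = sum(stakeholder_scores) / len(stakeholder_scores)
    if not notable_outcome_passed:
        score = min(score, 3.0)  # assumed cap at B; the report says only "below B+"
    return score

def goal_rating(weighted_objectives):
    """Weighted average of (score, weight) pairs for a goal's objectives."""
    total_weight = sum(weight for _, weight in weighted_objectives)
    return sum(score * weight for score, weight in weighted_objectives) / total_weight

# Hypothetical goal with two objectives.
obj1 = objective_score([3.8, 4.0, 3.9], notable_outcome_passed=True)   # 3.9
obj2 = objective_score([3.6, 3.4], notable_outcome_passed=False)       # capped at 3.0
print(round(goal_rating([(obj1, 0.6), (obj2, 0.4)]), 2))               # 3.54
```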
Based on our review of agency information, the methodology for the NNSA approach to determine contractor ratings entails officials holding a series of meetings to review various internally developed periodic reports and other inputs (e.g., contractor self-assessments and inspection reports). The participants in these meetings include field office managers, program managers, and NNSA executive leadership who collaboratively review contractor performance and determine ratings. According to NNSA officials, at these meetings NNSA collaboratively reviews all M&O contracts across the NNSA complex, thereby allowing officials to weigh and compare performance. The Fee Determining Official determines the final performance ratings for each M&O contractor using rating categories from the FAR: Excellent, Very Good, Good, Satisfactory, and Unsatisfactory. NNSA does not use numerical calculations to score and weigh individual objectives or goals. Instead, NNSA officials use professional judgment to determine overall goal ratings.
Based on our review of agency information, under the Site Specific approach, field office officials rate contractor performance against objective performance criteria quantitatively or pass/fail and rate subjective performance criteria using FAR award fee categories. That is, they evaluate performance against objective performance criteria as completed or not completed—for example, whether the contractor packaged 10 waste drums during the fiscal year. For the subjective performance criteria, officials assign ratings using the FAR rating categories in a similar manner to the NNSA approach.
Methodologies Used to Determine Incentives
Based on our review of DOE documents, the three performance evaluation approaches also use different methodologies for determining award and incentive fees, and two offices use similar methods to determine whether the contractor receives award term. Under the Science and Energy Lab approach, once ratings are determined, several additional detailed calculations determine how much of the available award fee is provided to the contractor. Precisely how ratings are weighted to determine fee differs by DOE office, but performance in technical areas is generally more important in determining the amount of fee the contractor earns. For example, SC determines award fees based on the contractor’s final science and technology area rating and adjusts that fee if the final management and operations area rating is 3.0 (grade B) or below. (See app. VIII for additional information on SC’s fee determination, app. III for EERE, and app. VII for NE.)
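As a rough illustration of this logic, a minimal sketch follows. Only the 3.0 (grade B) threshold comes from the documents we reviewed; the proportional mapping from rating to fee share and the size of the downward adjustment are assumptions for illustration.

```python
# Minimal sketch of SC's fee logic as described above. The rating-to-fee
# mapping and the 10 percent adjustment are assumed; only the 3.0 (B)
# threshold for the management and operations (M&O) rating is from the report.

def sc_award_fee(st_rating, mo_rating, available_fee):
    fee_fraction = st_rating / 4.3   # assumed: fee share scales with the S&T rating
    if mo_rating <= 3.0:
        fee_fraction *= 0.9          # assumed adjustment when the M&O rating is B or below
    return available_fee * fee_fraction

# Hypothetical contractor: S&T rating of 3.9 and M&O rating of 2.9, applied
# to a hypothetical $10 million available fee.
print(round(sc_award_fee(st_rating=3.9, mo_rating=2.9, available_fee=10_000_000)))  # 8162791
```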
Based on our review of agency information, under the NNSA approach, officials assign goals specific portions of the available award fee for each contract at the beginning of the fiscal year. At the end of the fiscal year, officials determine ratings and fees at the same time in the collaborative meeting with NNSA leadership. For example, for the Los Alamos National Laboratory contractor in fiscal year 2016, the nuclear weapons goal was 30 percent of fee, and the operations and infrastructure goal was 35 percent. As discussed earlier, the Fee Determining Official makes the final determination on the ratings and also determines how much fee to provide the contractor within the range defined by the FAR rating (Excellent, Very Good, Good, Satisfactory, Unsatisfactory). In fiscal year 2016, NNSA awarded the Los Alamos National Laboratory M&O contractor an “Excellent” rating for the nuclear weapons goal, which is associated with the contractor earning from 91 to 100 percent of the available fee for that goal. To determine the overall award fee for the contract, NNSA adds up the award fees for all of its goals. (See app. VI for an example of a NNSA fee determination letter.)
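The per-goal structure of this determination can be sketched as follows. The 91 to 100 percent band for an Excellent rating and the Los Alamos goal shares come from the documents described above; the other rating bands, the total available fee, and the chosen percentages are hypothetical.

```python
# Illustrative sketch of the NNSA fee roll-up. Only the Excellent band
# (91-100 percent) and the goal shares echo the report; the rest is assumed.

FAR_FEE_BANDS = {
    "Excellent":      (0.91, 1.00),  # from the report
    "Very Good":      (0.76, 0.90),  # assumed
    "Good":           (0.51, 0.75),  # assumed
    "Satisfactory":   (0.01, 0.50),  # assumed
    "Unsatisfactory": (0.00, 0.00),  # assumed
}

def goal_fee(goal_share, total_available_fee, rating, chosen_fraction):
    """Fee for one goal: the Fee Determining Official selects a fraction
    within the band associated with the goal's FAR rating."""
    low, high = FAR_FEE_BANDS[rating]
    assert low <= chosen_fraction <= high, "fraction must fall within the rating's band"
    return total_available_fee * goal_share * chosen_fraction

# Hypothetical contract: two goals with the shares from the Los Alamos example.
fees = [
    goal_fee(0.30, 40_000_000, "Excellent", 0.95),   # nuclear weapons goal
    goal_fee(0.35, 40_000_000, "Very Good", 0.85),   # operations and infrastructure goal
]
print(sum(fees))  # 23300000.0
```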
Our review of DOE documents showed that the Site Specific approach has a different process for determining incentive and award fees, depending on whether the fee is tied to objective or subjective performance criteria. According to agency officials and documents, the Site Specific approach generally provides more money toward incentive fees tied to objective criteria than to award fees tied to subjective criteria—about 60 to 75 percent of available fee money goes to incentive fees. Incentive fees tied to objective performance criteria are awarded based on completion of the specific tasks or quantitative targets defined by the performance criteria. For example, one of the objective performance criteria for EM’s Waste Isolation Pilot Plant (WIPP) M&O contractor in fiscal year 2016 was to develop a maintenance and engineering program, called the Material Condition and Aging Management Program, and complete certain program activities. EM set a maximum incentive fee of $500,000 in the PEMP to be awarded upon completion of the activities.
In regard to award fees that are tied to subjective performance criteria under the Site Specific approach, offices using this approach follow a method similar to the NNSA approach, in that they determine ratings and fees simultaneously. Specific portions of an available award fee are assigned to subjective performance criteria at the beginning of the fiscal year and documented in the PEMP, and officials then determine the percentage of fee to award and corresponding ratings from the FAR award fee categories for each subjective performance criterion. The final decision on the percentage of the available fee awarded for subjective performance criteria is made by the Fee Determining Official, who is generally an on-site official. The overall fee awarded is the sum of the individual objective incentive fees and subjective award fees. (See apps. IV and V for examples of how fee is assigned to specific criteria under the Site Specific Approach.)
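A minimal sketch of this two-part structure follows; the criteria, amounts, and percentages are hypothetical, though the $500,000 incentive fee echoes the WIPP example above.

```python
# Sketch of the Site Specific fee structure: incentive fees tied to objective
# criteria pay out on completion; award fees tied to subjective criteria pay
# out at a percentage drawn from the FAR rating categories. All inputs here
# are hypothetical.

def site_specific_total_fee(objective_criteria, subjective_criteria):
    incentive = sum(fee for fee, completed in objective_criteria if completed)
    award = sum(fee * fraction for fee, fraction in subjective_criteria)
    return incentive + award

objective = [
    (500_000, True),    # completed program activities (echoing the WIPP example)
    (250_000, False),   # a missed milestone (hypothetical)
]
subjective = [
    (1_000_000, 0.80),  # a safety criterion; the 80 percent fraction is assumed
]
print(site_specific_total_fee(objective, subjective))  # 1300000.0
```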
With regard to award term, for the SC and NNSA contracts that had award term as an incentive, the contracts defined the conditions for receiving it, and those conditions generally included meeting certain rating thresholds, based on our review of documents from those offices. For SC, the contractor (1) was to earn at least a 3.5 (A-) science and technology area rating and a 3.1 (B+) management and operations area rating, and (2) have no individual goal ratings below 3.1 (B+) for science and technology area goals and 2.5 (B-) for management and operations area goals. The contracting officer is to prepare and submit a standardized document along with an annual contractor performance evaluation presentation for review through program officials, and the Director of the Office of Science is to make the final award term determination. For NNSA contracts, the contractor generally must (1) earn a rating of “Very Good” or better in four of the six goals and receive no rating of “Satisfactory” or lower in any goal, and (2) meet any additional requirements as specified in the contract.
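Because these conditions are threshold-based, they can be expressed compactly, as in the sketch below. The thresholds are coded directly from the conditions described above; the sketch omits the additional contract-specific requirements NNSA contracts may impose, and the input data are hypothetical.

```python
# Award term eligibility checks coded from the thresholds above; the inputs
# are hypothetical, and contract-specific extras are omitted.

def sc_award_term_eligible(st_area, mo_area, st_goals, mo_goals):
    """SC: area ratings of at least 3.5 (A-) for S&T and 3.1 (B+) for M&O,
    with no goal below 3.1 (S&T goals) or 2.5 (M&O goals)."""
    return (st_area >= 3.5 and mo_area >= 3.1
            and all(g >= 3.1 for g in st_goals)
            and all(g >= 2.5 for g in mo_goals))

RANK = {"Unsatisfactory": 0, "Satisfactory": 1, "Good": 2, "Very Good": 3, "Excellent": 4}

def nnsa_award_term_eligible(goal_ratings):
    """NNSA: Very Good or better in at least four of six goals and no goal
    rated Satisfactory or lower."""
    very_good_or_better = sum(1 for r in goal_ratings if RANK[r] >= RANK["Very Good"])
    no_satisfactory_or_lower = all(RANK[r] > RANK["Satisfactory"] for r in goal_ratings)
    return very_good_or_better >= 4 and no_satisfactory_or_lower

print(sc_award_term_eligible(3.7, 3.2, [3.5, 3.3], [2.6, 3.0]))  # True
print(nnsa_award_term_eligible(["Excellent", "Very Good", "Very Good",
                                "Very Good", "Good", "Good"]))   # True
```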
All DOE Offices but NNSA Have Clearly Documented Approaches, Which May Lead NNSA to Inconsistently Collect and Use Performance Information in Contractor Evaluations
All of DOE’s offices have documented policies outlining their performance evaluation approaches, and all but NNSA have documented how information is to be collected and used to make rating determinations. SC, EERE, NE, FE, and EM have included in their documented policies and performance evaluation plans detailed procedures for collecting information on contractors’ performance that outline, among other things, how officials are to gather input from internal and external stakeholders and how the officials are to use that information in making rating determinations. For example, under SC’s Laboratory Performance Appraisal Process and PEMP Preparation Guidance (SC’s Appraisal Guidance), stakeholders are to use SC’s web-based information collection tool, the Laboratory Rating Tool, to provide scores and narratives on contractor performance. As a result, SC’s contractor performance evaluation approach clearly traces where performance information comes from and how the information is used in determining contractors’ final ratings.
Similarly, EM and FE document how officials are to collect information and use it in PEMPs or other performance evaluation plans. For example, EM’s PEMP for the WIPP M&O contract provides step-by-step procedures for how field office officials are to assess contractor performance against each performance criterion. These procedures guide the flow of information from contractor to field office officials, who are to check and validate the information and provide rating and fee recommendations to the on-site Fee Determining Official. Similarly, field office officials at EM’s Savannah River Site and FE’s Strategic Petroleum Reserve also have detailed procedures for assessing and distributing information regarding performance. Such detailed written procedures can provide better assurance to agencies that officials are consistently gathering and using performance evaluation information and that one can trace the ultimate performance rating in the PER to the underlying performance information.
In contrast to the detailed documented policies of other DOE offices, during the period of our review NNSA’s documented policy did not always match its performance evaluation approach, and the policy did not contain procedures for how officials should collect and use information so that one can trace the performance rating to the underlying performance information. As noted above, NNSA changed from using the Site Specific performance evaluation approach that focused on objective performance criteria to the agency’s current approach in fiscal year 2013. However, NNSA did not update its policy to reflect this change until December 2016. Thus, in fiscal years 2013 through 2016, NNSA was using a policy intended to evaluate site-specific objective performance criteria and incentive fees rather than the broad, office-wide subjective performance criteria that NNSA was using during those 4 fiscal years.
NNSA brought its policy into alignment with its performance evaluation approach in December 2016 by issuing its Corporate Performance Evaluation Process for Management and Operating Contractors policy (NAP-4C). NAP-4C provides a general framework under which NNSA officials provide input into the contractor performance evaluation process; the policy also provides a general schedule for implementing the performance evaluation approach, as well as general references to information collection.
However, NAP-4C does not include detailed procedures for how performance information should be collected and used, and according to NNSA officials, individual NNSA offices and officials determine how they collect and distribute information. This means information may be collected inconsistently across the agency, depending on individual offices’ preferences. For example, NAP-4C states that officials should “leverage information from contractor assurance systems . . . to monitor performance” but does not discuss how and when officials should use this information to ensure performance information is traceable to rating determinations.
In May 2015, we reported on the importance of tracing performance information from contractor assurance systems to performance evaluations. We reported that a senior NNSA official told us NNSA could not track the extent to which information from contractor assurance systems was used in evaluating contractor performance because it could be difficult to identify the sources of information used in performance evaluations. We recommended that NNSA revise policy, guidance, and procedures on performance evaluation to fully address how and under what circumstances those responsible for evaluating M&O contractors’ performance should use information from contractor assurance systems for this purpose. NNSA concurred with our recommendation and issued revised policy for contractor oversight but has not yet developed guidance or procedures for how to use information from contractor assurance systems in its performance evaluation process. We continue to follow up on this recommendation.
In addition to NAP-4C, NNSA’s Fee Determining Official issued implementation guidance for the fiscal year 2016 performance evaluation cycle. This implementation guidance directs relevant NNSA officials to follow a series of templates for interim reports to the contractor and provides the format of the final PER and specific dates for those reports.
The guidance does not include procedures for how officials throughout NNSA are to collect or use information to create the content for those templates. For example, the guidance’s Interim Feedback Report schedule states that the “program/functional offices provide input to field offices.” There is no discussion of how the program/functional office is to provide such input, what types of input are important, or how the input is to be used. Similarly, NNSA’s PEMPs also do not discuss how officials should collect or use performance information.
In the absence of documented, detailed procedures, NNSA may not consistently collect and use performance information from program managers and field office officials for contracts in a given fiscal year and may therefore inconsistently apply NNSA’s evaluation process. For example, we identified two instances in which the NNSA Fee Determining Official made handwritten changes to proposed award fee amounts during fiscal year 2012 without documenting in the PER the basis for the changes, such as by identifying the performance information that would support the handwritten changes to create traceability between the award fee amounts and their supporting performance documentation. These changes awarded (1) Los Alamos National Laboratory’s contractor a year of award term, even though the contractor had not met the established rating threshold for award term, and (2) Lawrence Livermore National Laboratory’s contractor a higher award fee that also qualified the contractor for award term it otherwise would not have received. With these changes, these contractors received award terms and fees in a manner inconsistent with how award terms and fees were assessed for other M&O contractors. According to NNSA officials, this type of action would not happen currently because the agency’s approach is rooted in a policy (NAP-4C) and implementation guide that is supported by a more collaborative decision-making process. However, even under the new policy, because NNSA does not have clearly documented procedures specifying how officials are to collect or use performance information, NNSA leadership cannot have assurance that there is clear traceability between the contractor evaluation and its underlying support.
Federal standards for internal control state that management should design control activities to achieve objectives and respond to risks, such as by clearly documenting internal control in management directives, administrative policies, or operating manuals. NNSA has a documented policy, but this policy does not clearly specify how to collect and use contractor performance information to evaluate contractor performance. NNSA officials stated that in their opinion their policy was still effective and robust without detailed procedures for its implementation. However, without developing and documenting clear procedures for implementing NAP-4C that specify the process for collecting contractor performance information and how officials are to ensure this information can be traced to rating determinations, NNSA leadership does not have reasonable assurance that the agency is consistently evaluating contractor performance and that it is using relevant performance information as intended.
Evaluation Reports Could Better Assess M&O Contractors’ Cost Performance
DOE’s Fiscal Year 2016 M&O Contractor Performance Evaluation Reports Provided Less Information on Contractors’ Cost Performance than on Other Types of Performance
We found that DOE offices’ fiscal year 2016 PERs provided less information on M&O contractors’ cost performance—evaluations of the contractor’s spending, budgeting, strategic sourcing, and costs, including the contractor’s cost-effectiveness—and provided more information regarding technical and administrative areas of performance. Specifically, the PERs were 67 pages long on average and contained about 1 page of cost performance-related information overall. In contrast, information on contractors’ technical and administrative performance included in-depth descriptions of contractors’ scientific discoveries and production progress that spanned numerous pages. Figure 4 provides typical examples of the type of technical, administrative, and cost performance descriptions that we found in our review of fiscal year 2016 M&O contract PERs.
In addition, in our review of the number of performance descriptions in DOE’s 2016 PERs, we found that about 24 percent (179 of 737) of the performance descriptions provided information on cost performance; about 71 percent (524 of 737) provided information on administrative performance (evaluations of a contractor’s performance on mission support activities, such as information technology, human resources, legal activities, environmental safety and health, property management, risk management, and leadership activities); and about 53 percent (390 of 737) provided information on technical performance (evaluations of a contractor’s performance on mission-related activities such as research and development, production, storage, clean-up, and construction).
Cost Performance Information Included in DOE’s Performance Evaluation Reports Is of Limited Use for Acquisition Decision-Making
In addition to providing less information on M&O contractors’ cost performance than on other areas of performance, the cost information contained in DOE offices’ PERs is of limited use for acquisition decision-making. DOE’s Information Quality Guidelines define quality, in part, as information that is useful to DOE and the public. We examined whether the PERs included such useful information that would permit an overall assessment of contractor cost performance. FAR and DOE policy call for such an overall assessment, which therefore is useful to DOE for acquisition decision-making and to the public generally.
Our analysis showed that the information on contractors’ cost performance in the PERs did not permit such an assessment of contractor cost performance for two primary reasons. First, the information consisted of statements that lacked detail, such as “within budget,” and did not address the significance of the performance described. For example, cost performance-related statements such as “over/under budget” and “cost savings/cost overrun” did not commonly provide information on the amount saved or lost, making it difficult to identify the significance of what was reported. Information on cost effectiveness was also rare—cost-effectiveness information was included in about 11 percent of the instances in which cost performance was discussed (48 of 441 instances). Second, cost performance information commonly applied to specific activities under the contract, such as construction activities, rather than to achievement of overall operating efficiencies. When cost performance information is limited to specific activities, it is not possible to assess a contractor’s overall cost performance based on information in the PER.
We identified one reason, and DOE officials identified three additional reasons, why more cost performance information was not provided in DOE’s fiscal year 2016 PERs. We believe all four factors contribute to why the cost performance information that was included was often not useful for acquisition decision-making:
DOE offices’ policies and PEMPs did not specifically require PERs to include cost performance information and did not discuss how to ensure that cost information is useful for acquisition decision-making. Based on our review, DOE offices’ policies did not specifically require that PERs include cost performance information, nor did they discuss information quality. In addition, DOE offices’ PEMPs—which serve as a general blueprint for the type of performance information that offices should include in the corresponding PER—generally did not include specific cost performance criteria or explicitly call for evaluations of contractors’ cost performance. In contrast, DOE offices’ fiscal year 2016 PEMPs commonly included explicit technical and administrative performance criteria such as: “provide S&T results with meaningful impact on the field” (technical) and “provide an efficient and effective worker health and safety program” (administrative). There were three exceptions in which PEMPs included specific cost performance criteria: EM’s WIPP M&O contract, NE’s Idaho National Laboratory, and FE’s Strategic Petroleum Reserve M&O contract.
Although SC does not have explicit cost performance goals or objectives, according to SC officials, cost performance is listed as a factor to consider in SC’s PEMPs’ descriptions of how to evaluate certain performance criteria. However, SC officials told us that PER performance descriptions may not include cost information for these criteria unless there were notable cost overruns or the contractor was doing an exceptionally good job in these areas. SC officials stated this is, in part, to keep PERs shorter and streamlined. However, when PERs are silent on cost performance, there is no formal documented record of M&O contractor cost performance.
M&O contract missions made it difficult for DOE to assess contractor cost performance, resulting in less cost performance information in PERs. According to DOE officials, it is difficult to assess the costs of the scientific and research missions covered by many M&O contracts. For example, according to DOE officials, it is difficult to develop cost estimates for research activities because it is not always certain when scientific breakthroughs will occur or how long they will take. DOE uses cost-reimbursement contracts for its M&O contracts, in part because it is not possible to know with certainty and in advance how much research and development efforts will cost or what level of effort will be required.
While we agree that assessing cost performance for scientific and research activities may be difficult, M&O contractors also carry out a variety of other activities for which costs may be more readily assessed. For example, a sizeable portion of the costs under M&O contracts is for administrative or mission support and other business operations activities, such as personnel, business processes, human resources, procurement, and security. In our previous work, we found that such administrative and support activities accounted for about 25 to 50 percent of M&O contractor costs in fiscal year 2015. Similarly, SC’s fiscal year 2016 annual laboratory plans identify areas, such as infrastructure and information systems, as the major cost drivers for that year. We have found that other agencies assess cost performance for contractors performing such administrative activities. DOE officials we interviewed agreed that measuring cost performance in these areas would be more feasible than measuring it for its scientific and research missions.
The M&O contract type made it difficult to some degree for DOE to assess contractor cost performance. According to DOE officials, certain aspects of how DOE implements cost-reimbursement M&O contracts create challenges to evaluating cost performance. Some officials described these challenges as the result of “the budget-based nature” of M&O contracts. Specifically, according to DOE officials, M&O contract budgets (the amount contractors are allowed to spend) are not set up front in the original contract. Rather, according to DOE officials, M&O contract budgets are commonly determined by the amount DOE obligates to the contract on an annual basis, based mostly on annual congressional appropriations to the relevant DOE programs. Further, these officials noted, because much of DOE’s appropriated funds are available until expended rather than expiring at the end of the fiscal year for which they were appropriated, M&O contractors may be able to carry over those funds to spend in future fiscal years. According to DOE officials, DOE reviews M&O contractor estimates when developing its budget request, including determining how much work is required by its contractors to execute the program scope outlined in the budget request. Agency officials also noted that, with regard to cost reimbursement contracts, the federal government is legally required to reimburse contractors for all allowable costs up to the approved budget amount.
We have previously reported that cost-reimbursement contracts carry a high risk for the federal government, resulting in the potential for cost escalation, as some expenditures may be allowable under the contract but may not be cost effective. We recognize that M&O contracts are unique in many ways. Nevertheless, the manner in which DOE allocates funds to the contract, and the requirement to reimburse contractors for allowable costs do not, by themselves, affect DOE’s ability to assess contractor cost performance.
Some cost performance evaluation conducted outside of the annual performance evaluation process is not included in PERs. DOE officials told us they perform some activities related to contractor cost performance outside the performance evaluation process for M&O contracts, though information on these activities is not always included in PERs. For example, according to DOE officials, some M&O contractors participate in group purchasing efforts, where contractors coordinate purchases to drive up competition and drive down costs. Also, DOE offices generally monitor M&O contractor indirect costs to ensure they do not escalate without reason. In particular, SC’s M&O contractors include a “Cost of Doing Business” section in their annual laboratory plans, in which SC contractors report on indirect costs. According to SC officials, SC also uses its reviews of the Cost of Doing Business sections as opportunities to discuss options to reduce operational costs. SC officials stated that an internal process in which SC’s laboratories compete and are awarded work, in part, also serves to control costs. According to DOE officials, efforts such as group purchasing and indirect cost monitoring and reporting are not commonly included in PERs because the agency considers its existing performance criteria to be sufficiently broad to assess contractor performance.
Though these efforts may be important to address contractor costs and information from the efforts could inform assessments of cost performance, they do not, on their own, represent DOE offices’ evaluations of contractors’ cost performance. In addition, PERs are important records of DOE offices’ evaluations of contractor performance because, according to agency officials, DOE uses the PERs to inform acquisition decisions and help form the basis for a contractor’s performance record.
We and the DOE Inspector General have identified how important it is for DOE to obtain quality cost information and use it to evaluate cost performance. For example, for more than a decade, we have reported that some DOE offices have experienced challenges obtaining quality information that could enable the offices to make better-informed decisions about programs’, and therefore DOE’s, budgetary needs. Furthermore, we reported in July 2012 that NNSA based much of its congressional budget request on contractor-generated budget proposals, which the agency often did not thoroughly evaluate. More recently, according to a 2017 DOE Inspector General report, challenges in evaluating cost performance have contributed to NNSA’s and its M&O contractors’ difficulty in demonstrating the anticipated cost savings for the NNSA Production Office Sites contract. DOE created this contract by consolidating the contracts for the Y-12 National Security Complex and the Pantex Plant into a single contract for the explicit purpose of saving costs.
While collecting quality information on, measuring, and reporting on cost performance for M&O contracts may be challenging, this information is important for two reasons. First, the FAR, DOE policy, and CPARS highlight the importance of information on contractors’ cost performance for acquisition decision-making. As we previously noted, the FAR and DOE policy provide that decisions to extend or compete an M&O contract be based on an expectation of meaningful improvement in performance or cost, including consideration of a contractor’s technical, administrative, and cost performance. In addition, according to DOE officials, they largely copy information from PERs into the federal government database on contractors’ past performance, CPARS, which agencies use to inform their awarding of contracts. CPARS has several performance criteria that agencies are required to complete, one of which is “cost control.” This is challenging to address, according to DOE officials, because PERs do not typically include an explicit evaluation of cost performance even though, also according to DOE officials, PERs are the primary source of information entered into CPARS.
Second, as we reported in 2009, there are inherent risks to the government from cost-reimbursement contracts such as DOE’s M&O contracts, particularly with cost escalation because the government is required to pay the contractor’s allowable costs regardless of whether the contractor completes the work. Because of these risks, we found that these types of contracts involve significantly more government oversight than do fixed-price contracts. This is, in part, because the agency needs to monitor contractor costs to provide a reasonable assurance that efficient methods and effective cost controls are used.
As we previously noted, FAR, DOE guidance and policy, and CPARS highlight the importance of quality information on contractor’s cost performance. In addition, federal standards for internal control state that management should design control activities to achieve objectives and respond to risks, such as by clearly documenting internal control in management directives, administrative policies, or operating manuals.
DOE offices have policies on contractor performance evaluation, but these do not specifically require that PERs include quality cost performance information that can be used to make an overall assessment of cost performance. By updating policies to require inclusion of quality cost performance information in PERs to enable an overall assessment of a contractor’s cost performance, DOE offices could strengthen their oversight of M&O contractor costs. For example, DOE offices could better inform acquisition decisions such as whether to extend or compete a contract, complete CPARS with greater ease, inform incentives for contractor performance, and uncover opportunities for federal cost savings. This is particularly important given that these cost-reimbursement type contracts carry risks of cost escalation.
DOE Generally Awarded M&O Contractors High Ratings and Most Available Performance Incentives, Except in Cases of Significant Safety or Security Incidents
In reviewing DOE’s M&O contractor performance evaluations for fiscal years 2006 through 2016, we found the results of the evaluations to generally include high performance ratings and most available performance incentives, including a median of 94 percent of available award and incentive fees. During this time frame, administrative performance sometimes had lower ratings—though these were balanced out in overall ratings by strong performance elsewhere—and some safety issues and accidents resulted in additional fee reductions outside the performance evaluation process. In fiscal years 2006 through 2016, three contractors received 50 percent or less of available award fee due to two significant incidents—a safety and security issue and a major accident.
M&O Contractors Generally Received High Ratings in Fiscal Years 2006 through 2016 and More than 90 Percent of Available Performance Incentives
For the 239 annual M&O contractor evaluations from the 24 DOE contract rating sites we reviewed, in fiscal years 2006 through 2016, DOE offices provided award and incentive fees equivalent to the FAR rating categories of Excellent or Very Good 94 percent of the time. Contractors at more than half of the 24 contract rating sites (17 of 24) received award and incentive fee percentages consistent with only Excellent or Very Good ratings for all fiscal years from 2006 through 2016. As discussed above, while the precise approaches for determining ratings and fees vary by DOE office, ratings and fees are directly linked in all three approaches: Fee is either determined through a formula based on ratings, or DOE offices determine ratings and fees at the same time. Differences between rating methodologies across offices and changes in performance evaluation approaches over time mean directly comparing ratings requires some caution; however, even acknowledging those differences, there is a clear trend of a high percentage of award and incentive fees awarded and high equivalent performance ratings across sites and years.
From fiscal years 2006 through 2016, DOE also provided its M&O contractors with a median of 94 percent of their available award and incentive fees. See Table 3 for the results by FAR award fee rating category for each contract rating site for this period, and Table 4 for an analysis of average and median percentages of fees awarded by site. The amount of fee available, fee as a share of total contract spending, and the use of other incentives have varied across sites, yet performance results have been generally similar. Appendixes III through VIII provide additional details by DOE office.
[Tables 3 and 4: award and incentive fee results by FAR rating category, and average and median percentages of fees awarded, for each contract rating site by DOE office. Note: Fixed fees are set at the inception of the contract and do not vary for performance.]
Of further note from our analysis of the extent to which contractors earned fees in fiscal years 2006 through 2016:
Contractors for the 24 M&O contract rating sites that included award fees earned approximately $4.3 billion in total fees over this time. About three-quarters ($3.4 billion) of the $4.3 billion in fees were award fees and incentive fees, and the remaining amount was fixed fees.
NNSA’s M&O contracts represent 68 percent of the fees paid and 55 percent of the total M&O contract spending over this period.
As discussed above, DOE offices provided a median of 90 to 95 percent of available annual award fee to 18 of 24 M&O contract rating sites. However, six rating sites, all conducting work for NNSA, had median award fee percentages below 90 percent. Several NNSA sites had fixed fees in addition to award fees. When including those fixed fees, the percentage of total fee awarded rises, with median fee percentages rising above 90 percent for three of the sites.
Contract rating sites rarely received less than 75 percent of available award fee.
In addition to awarding contractors high percentages of available fees, DOE offices generally awarded M&O contractors most of the available award term incentives. Several DOE and contractor officials we interviewed noted that award term is perhaps the most valuable incentive from a contractor perspective because an extra year of work on the contract represents much more revenue for them than fees. SC and NNSA—the two offices with contracts that had award term—awarded 92 percent of award term years available, or 76 out of 83 possible award term years. Specifically, SC included award term in seven contracts and awarded M&O contractors 95 percent of potential award term years, and NNSA included award term in four contracts and awarded contractors 83 percent of potential award term years (see Table 5 below). Three of the unearned award term years are attributable to the contractor at Los Alamos National Laboratory, which also had a fourth award term year that NNSA revoked retroactively. According to NNSA officials, upon not earning an award term for the fourth time, Los Alamos’s contractor—in accordance with the terms of the contract—had all of its award terms revoked, and NNSA decided to recompete the contract.
Administrative Performance Sometimes Had Lower Ratings, with Some Issues Resulting in Fee Reductions Outside of the Performance Evaluation Process
Within the pattern of high overall performance ratings, ratings for administrative performance have generally been lower than ratings for technical performance, and some administrative performance issues—particularly safety issues and accidents—resulted in fee reductions outside the performance evaluation process, as noted in Table 6 below. For example, since fiscal year 2013, when NNSA adopted common performance goals across its contract rating sites, about 83 percent of possible goal ratings (134 of 162) had been rated Very Good or better. Of the 28 goal ratings below Very Good, 22 (79 percent) were in administrative goals. In many cases, incidents that led to lower ratings involved site operations issues, such as in safety and security. Similarly, the contractors at the 10 SC contract rating sites and one NE contract rating site also showed generally higher technical performance ratings with 9 of 11 contract rating sites having higher average technical area scores than administrative area scores (the two other contract rating sites had average technical area scores that were about equal to the average administrative scores).
From our review of DOE documents and discussions with officials, one factor that may be an important influence in the difference between technical and administrative scores at SC and NE rating sites is that the Science and Energy Lab performance evaluation approach does not incentivize administrative performance above a B+. As discussed above, contractors generally receive additional award fee for higher ratings, but under the Science and Energy Lab approach, in the administrative area, all scores of B+ and above lead to the same amount of award fee. Therefore, a contractor whose only difference was an administrative score of B+ versus A+ would receive the same amount of award fee. According to DOE officials, this structure is meant to encourage contractors to reinvest cost savings into technical performance rather than improving administrative systems that already meet expectations.
Relatively low performance in certain areas can be balanced out in overall ratings by strong performance ratings elsewhere. Of nine occasions since fiscal year 2013 that an NNSA contractor received at least one Satisfactory goal rating (below 50 percent), the overall rating for the contractor remained Good or Very Good, and contractors were provided the majority of their fees in all but one case (the contractor for Los Alamos National Laboratory in fiscal year 2014, which we discuss further below). For example, following the break-in of trespassers and related security lapses at Y-12 in 2012, NNSA provided the M&O contractor with Satisfactory ratings in operations in fiscal years 2012 and 2013. However, Very Good and Excellent ratings in other areas meant NNSA provided an overall rating of Good to the contractor in those years, and the contractor received more than 50 percent of available award fees. For SC, in the five occasions since fiscal year 2006 in which a contractor received at least one goal rating of C (2.0) or below, overall area scores remained As and Bs and fees above 75 percent, except for one instance. On that occasion, Princeton Plasma Physics Laboratory in 2016 received multiple goal ratings of C, which led to a technical score of C+ and a fee of 68 percent. This 2016 rating for Princeton Plasma Physics Laboratory is also the only case from fiscal years 2006 through 2016 of a Satisfactory-level goal rating in a technical area goal, as the others were all in the administrative areas of site operations or leadership.
The extent to which a single area of performance affects overall ratings is influenced by the broad scope of activities under an M&O contract, the broad types of performance required under the contract, and the weights used to determine overall ratings and incentives. According to DOE officials, one way the Science and Energy Lab approach addresses these factors is to include all the ratings provided by each stakeholder and for each objective in the PER. In this way, while a C from one stakeholder or objective may be weighted out overall, the grade and the feedback associated with it are still provided to the M&O contractor and clearly visible to readers of the reports.
Another way that DOE offices have addressed individual performance deficiencies that may get balanced out in overall ratings is through additional fee reductions. Most offices have reduced fees outside the performance evaluation process to address specific performance deficiencies—generally administrative concerns, such as safety issues. In particular, all offices except EERE have reduced fees that would have been provided from performance evaluation results, relying on contract clauses that allow for fee reductions. Such clauses allow DOE offices to unilaterally reduce fees for the evaluation period if, for example, the contractor fails to meet performance requirements of the contract relating to environment, safety, and health. For example, NE used such clauses in 7 of the 11 years we reviewed to reduce the fee provided to the Idaho National Laboratory M&O contractor. FE has also frequently used fee reductions to address issues outside its predominantly objective performance criteria. SC, NNSA, and EM have also occasionally used additional fee reductions outside the performance evaluation process. For all offices, fee reductions generally resulted from administrative performance issues—safety issues and accidents—rather than technical performance. These fee reductions ranged from $10,000 to $35 million, and while the fee received by the contractor was lowered, the original ratings were not revised. In most cases, these reductions were for 10 percent or less of award and incentive fees provided and less than $1 million; however, they represented large portions of contractors’ fees in a few cases. See Table 6 below for a list of fee reductions.
From Fiscal Years 2006 through 2016, Three Contractors Received 50 Percent or Less of Available Award and Incentive Fees Due to Significant Safety and Security Incidents
Three times in fiscal years 2006 through 2016, M&O contractors received 50 percent or less of available award and incentive fees due to a safety and security issue at the Lawrence Livermore National Laboratory (LLNL) in fiscal year 2008 and a major accident involving the WIPP in Carlsbad, New Mexico, and the Los Alamos National Laboratory (LANL) in fiscal year 2014.
LLNL, 2008. LLNL’s M&O contractor received 50 percent of the available award and incentive fees—$15,795,584 out of $31,879,519—due to weaknesses in environmental management, security, and management/performance improvement that resulted in Satisfactory ratings in those respective areas and an overall Satisfactory rating in operations. In particular, an April 2008 inspection and force-on-force exercise conducted by DOE’s Office of Health, Safety, and Security found significant weaknesses in protective force and classified matter protection and control programs that led to an Unsatisfactory rating in security. The performance evaluation also reported issues with contractor assurance system progress, staffing, and “unacceptable” losses of key personnel. LLNL’s contractor received overall ratings of Outstanding in mission and Good in institutional management. In addition, the contractor received $21,862,651 in fixed fees, for a total fee award of $37,658,235.
WIPP, 2014. WIPP’s contractor received 6.9 percent—$561,266 out of $8,192,895—of the fees available under its contract in fiscal year 2014 due to two unrelated accidents, a truck fire and a waste drum explosion, that resulted in the suspension of waste disposal at the site—the nation’s only facility for disposal of transuranic waste. The 6.9 percent of fees awarded represented an additional reduction of fees from the amounts the contractor earned for meeting a portion of its objective criteria targets and receiving Satisfactory ratings in all four subjective criteria. WIPP did not resume waste disposal operations until 2017.
LANL, 2014. LANL’s contractor received none of the available award fee, and no DOE fixed fee, in fiscal year 2014 due to its improper oversight and packaging of the waste drum that exploded at WIPP. Of $63,406,380 in available fee, LANL’s contractor received about $6.3 million in fixed fee associated with work completed under contract with other federal agencies that, according to NNSA officials, could not be revoked. Similar to WIPP’s contractor, this represented an additional reduction of fees from the amounts that would have resulted from an overall Satisfactory rating (including an Unsatisfactory for operations and infrastructure; Satisfactory for science, technology and engineering; Satisfactory for leadership; and Very Good for the two mission goals). In addition to losing fee and award term, the waste portion of the LANL contract was withdrawn from the M&O contract and contracted out separately by EM.
In all three cases, in the year following the 50 percent or less in award and incentive fees, performance ratings returned to at least Good levels and contractors received at least three-quarters of available award and incentive fees. With regard to the WIPP accident involved in two of the three cases, efforts to recover from the waste drum incident and return to full operations have cost hundreds of millions of dollars to date and are estimated to cost more than $600 million in total, all of which will be costs to the taxpayer. The combined unearned and reduced fee for both contractors amounted to $64,788,464, or about 10 percent of total estimated costs to the government. In addition to fee reductions, NNSA officials stated that the WIPP accident played a significant role in NNSA’s decision to not exercise the last 7 years of possible award term on the LANL contract and thus recompete the contract in 2018. According to NNSA, those 7 years translate into approximately $17 billion in work and up to $500 million in fee the LANL contractor could have earned. Also, with regard to additional actions EM took after the accidents at WIPP, according to DOE officials, EM modified the contract terms from having a single 5-year option period to five 1-year option periods.
Conclusions
While there are differences in how DOE’s offices approach performance evaluation of M&O contractors, all of the offices use the annual performance evaluations of the contractors and the associated rating and fee determinations to evaluate the extent to which contractors are operating sites as intended and accomplishing mission work, and to justify incentives such as fee and additional contract term. These annual performance evaluations also provide valuable information for contract management and acquisition decisions, such as whether to renew or compete expiring M&O contracts. DOE also recognizes the importance of improving performance evaluation and oversight of contractors.
All of DOE’s offices except NNSA have clearly documented procedures on how to collect and use information to make rating determinations.
NNSA provides a general framework for its performance evaluations in its NAP-4C policy but leaves how to collect, distribute, and document information to the discretion of individual offices and officials. In the past, NNSA officials have made changes to incentives awarded without underlying performance documentation to support the change. Without developing and documenting clear procedures for implementing NAP-4C that specify the process for collecting contractor performance information and how officials are to ensure this information can be traced to rating determinations, NNSA leadership does not have reasonable assurance that it is consistently evaluating contractor performance and that it is using relevant performance information as intended.
The cost performance information included in DOE offices’ fiscal year 2016 PERs is of limited use for acquisition decision-making in that this information does not permit making an overall assessment of M&O contractors’ cost performance. DOE offices have not required specific assessment of cost performance in their performance evaluation policies, nor discussed how to ensure that cost information is useful for acquisition decision-making. However, the PERs are important sources of information for contract management—particularly for acquisition decisions and oversight of spending on cost-reimbursement contracts. DOE officials identified challenges in evaluating M&O contractors’ cost performance and ways this evaluation may occur outside of the annual performance evaluation process. These challenges contribute to why there is less cost performance-related information in PERs than for other types of performance. While collecting, measuring, and reporting quality cost performance information may be challenging, such information is important for fully assessing contractor performance and managing the inherent risks of cost-reimbursement contracts. By updating their policies to require quality cost performance information in PERs to enable an overall assessment of M&O contractor cost performance, the six DOE offices with M&O contracts could strengthen their oversight of costs for contracts worth about $20 billion a year and use this information to improve acquisition decision-making.
Recommendations for Executive Action
We are making seven recommendations to DOE offices: The Administrator for the National Nuclear Security Administration should develop and document clear procedures for implementing NAP-4C, specifying the process for collecting contractor performance information and describing how officials are to ensure this information can be traced to rating determinations. (Recommendation 1)
The Assistant Secretary for the Office of Energy Efficiency and Renewable Energy should update its policy to require that Performance Evaluation Reports include quality information on cost performance to enable an overall assessment of Management and Operating contractor cost performance. (Recommendation 2)
The Assistant Secretary for the Office of Environmental Management should update its policy to require that Performance Evaluation Reports include quality information on cost performance to enable an overall assessment of Management and Operating contractor cost performance. (Recommendation 3)
The Assistant Secretary for the Office of Fossil Energy should update its policy to require that Performance Evaluation Reports include quality information on cost performance to enable an overall assessment of Management and Operating contractor cost performance. (Recommendation 4)
The Administrator for the National Nuclear Security Administration should update its policy to require that Performance Evaluation Reports include quality information on cost performance to enable an overall assessment of Management and Operating contractor cost performance. (Recommendation 5)
The Assistant Secretary for the Office of Nuclear Energy should update its policy to require that Performance Evaluation Reports include quality information on cost performance to enable an overall assessment of Management and Operating contractor cost performance. (Recommendation 6)
The Director of the Office of Science should update its policy to require that Performance Evaluation Reports include quality information on cost performance to enable an overall assessment of Management and Operating contractor cost performance. (Recommendation 7)
Agency Comments and Our Evaluation
We provided a draft of this report to DOE for comment. DOE provided us with written comments, as well as technical comments, which we incorporated as appropriate. In its written comments, reproduced in appendix IX, DOE agreed with four of our seven recommendations and partially agreed with the others.
DOE partially agreed with our recommendations that three DOE offices— EERE, NE, and SC—update their policies to require that PERs include quality information on cost performance to enable an overall assessment of M&O contractor cost performance. In its written comments, DOE said that the three offices have concerns that (1) our report gives the impression that DOE does not review cost performance of their respective national laboratories in an adequate manner, and (2) by focusing on the annual PERs, our report does not capture the cost performance reviews conducted in day-to-day contract oversight, the annual laboratory planning process, and contract extend/compete decisions. In its comments, DOE stated that since EERE, NE, and SC conduct cost performance reviews in normal operations and at the year-end annual evaluation process, adequate information is available to assess whether the contractor cost performance is acceptable to the department.
In the report, we note that DOE conducts some cost performance evaluation activities outside of the annual performance evaluation process, although we did not assess these efforts. While there may be adequate information available, DOE does not commonly document this information or assessments from such activities in the PERs. We continue to believe that the PERs are important sources of information for contract management—particularly for acquisition decisions and oversight of spending on cost-reimbursement contracts—and that action is needed to improve these formal records of contractor performance. By not including quality information on overall cost performance and assessments in PERs, DOE offices are missing a valuable opportunity to better document contractors’ cost performance, improve acquisition decision-making, and strengthen oversight of billions of dollars in contracting. We continue to believe that it is important for EERE, NE, and SC to implement the recommendations and that by doing so, these offices would have better assurance that M&O performance evaluations fully address required elements.
We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix X.
Appendix I: Objectives, Scope, and Methodology
This report reviews the Department of Energy’s (DOE) performance management of its management and operating (M&O) contracts. Specifically, it examines (1) how DOE offices evaluated M&O contractor performance in fiscal years 2006 through 2016 and the extent to which these offices have documented their evaluation approaches; (2) the extent to which DOE’s fiscal year 2016 M&O contractor performance evaluation reports provide information on contractors’ technical, administrative, and cost performance; and (3) the results of DOE’s M&O contractor performance evaluations from fiscal years 2006 through 2016.
For all three objectives, we reviewed performance evaluation documentation—performance evaluation plans, performance evaluation reports (PERs), fee determinations, award term determinations, and option term determinations—for 21 of the 22 DOE M&O contracts in place as of fiscal year 2016, the most recently completed contract year at the time we initiated our review. We also reviewed documentation for Bettis and Knolls Atomic Power Laboratories’ M&O contract but excluded it from our analysis because the contract does not have annual reviews and ratings comparable to the other DOE M&O contracts. The Bettis and Knolls contract does not have an award fee and thus NNSA’s Office of Naval Reactors—the office responsible for overseeing the M&O contract—does not produce annual PERs similar to those of the other offices. In addition, we did not include in our scope the DOE contract for the cleanup of the West Valley Demonstration Project in upstate New York because it was not an M&O contract in fiscal year 2016; according to DOE officials, it switched from being an M&O to a non-M&O contract in fiscal year 2007.
In addition, we interviewed DOE officials to gain a further understanding of the department's performance evaluation processes and results, including officials at DOE headquarters and at several field offices that are responsible for providing day-to-day oversight of the activities of M&O contractors. To provide additional perspective, we interviewed officials at the Department of Defense, the National Aeronautics and Space Administration, and the Department of Homeland Security, which we selected because they also manage government-owned, contractor-operated laboratories and sponsor work at DOE laboratories, sometimes contributing views incorporated into DOE performance evaluations.
To examine how DOE offices have evaluated M&O contractor performance, we reviewed DOE’s and DOE offices’ policies and procedures for performance evaluations, as well as annual performance evaluation and measurement plans and PERs from fiscal years 2006 through 2016. We also compared each office’s policies and procedures for conducting performance evaluations against federal standards for internal control, as well as the Federal Acquisition Regulation (FAR), DOE’s Acquisition Guide, and the Department of Energy Acquisition Regulations. In addition, to examine the extent to which these offices have documented their evaluation approaches, we discussed the evaluation approaches and processes with DOE officials and compared those approaches with documented policies and procedures.
To evaluate the extent to which PERs provided information on each of the performance areas outlined in the FAR—technical, administrative, and cost—we performed a content analysis of 22 DOE fiscal year 2016 PERs for M&O contractors. We developed operationalized definitions of each of the three areas with input from DOE's offices. Broadly, the operationalized definition of technical performance included mission-related activities, the operationalized definition of administrative performance included mission support activities, and the operationalized definition of cost performance included spending-related activities. Mission-related activities included, for example, research and development, production, storage, clean-up, and construction. Mission support activities included, for example, information technology, human resources, legal activities, environmental safety and health, property management, risk assessment, and leadership activities. Cost-related activities included, for example, spending, budgeting, strategic sourcing, and costs, including the contractor's cost-effectiveness. In identifying information related to cost performance, we considered all evaluative statements related to cost, including broad terms such as saving, cost, spending, and budget. Then we categorized performance descriptions under these three performance areas and counted the number of performance descriptions that included information in the M&O contracts' PERs related to each of the areas. A performance description could be categorized as related to one, two, or all three areas. Two analysts independently reviewed each PER and then met to agree on the categorizations. When differences arose, we included a third analyst to arrive at a consensus.
For the vast majority of M&O contracts, we analyzed the performance descriptions at the level of objectives—where most performance descriptions were found—and included notable outcomes described under those objectives. In a few instances, we used other comparable units of analysis, such as goals, for some National Nuclear Security Administration (NNSA) M&O contracts (in which performance information was provided by goals, not objectives) and criteria for Office of Environmental Management (EM) and Office of Fossil Energy (FE) (in which performance information was provided under numerous subjective and objective criteria). Based on our analysis, we reported the total number of performance descriptions for each area, as well as the percentage of performance descriptions that contained information related to each area. Because performance criteria descriptions could contain information related to more than one area, the percentages total more than 100 percent.
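The overlap among areas explains why the reported percentages can exceed 100 percent. The following minimal sketch, using hypothetical descriptions and category assignments rather than data from our analysis, illustrates the tally:

```python
# Minimal sketch of the overlapping-category tally described above.
# The descriptions and their category assignments are hypothetical
# examples, not data from the actual PER content analysis.

descriptions = [
    {"id": 1, "areas": {"technical"}},
    {"id": 2, "areas": {"technical", "administrative"}},
    {"id": 3, "areas": {"administrative", "cost"}},
    {"id": 4, "areas": {"technical", "cost"}},
]

total = len(descriptions)
for area in ("technical", "administrative", "cost"):
    count = sum(1 for d in descriptions if area in d["areas"])
    print(f"{area}: {count} of {total} ({count / total:.0%})")

# Because a description can fall into more than one area, the three
# percentages (here 75%, 50%, and 50%) total more than 100 percent.
```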
To determine the extent of cost performance-related information in DOE's fiscal year 2016 PERs for its M&O contracts, we performed a content analysis. From our analysis, we reported the total number of pages the cost performance-related information represented, compared with the average number of total report pages. To determine these figures, we counted the relevant pages of each PER.
In addition, to evaluate the quality of cost performance-related information, we reviewed DOE Information Quality Guidelines, which apply to information DOE offices make available to the public. We then performed a content analysis of DOE fiscal year 2016 PERs based on the definition of quality in the guidelines, which includes that information generated for DOE and the public be useful. We further analyzed and categorized the types of cost performance-related information. Types of cost information included, for example, within budget, over budget, cost savings, cost overrun, and cost effectiveness. We defined cost effectiveness as good value for money spent.
To examine the results of DOE’s M&O contractor performance evaluations from fiscal years 2006 through 2016, we analyzed performance ratings and incentives awarded in PERs, fee determination letters, and other performance evaluation documents. Throughout the report, we analyzed and provided information by “contract rating sites” rather than individual contractors or physical sites, because the individual contractors and how certain sites align with the contracts have changed over time. We analyzed 24 distinct contract rating sites covered by 21 M&O contracts in place as of fiscal year 2016. There are three more contract rating sites than the number of contracts in 2016: two additional contract rating sites because two individual contracts were consolidated into one contract during the period we covered—we analyzed the two individual contracts from prior to 2014 separately from the current consolidated contract—and one additional contract rating site because two DOE offices separately evaluated the performance of a single contractor that performed activities for each of those offices.
To summarize the results of DOE’s annual contract performance evaluations, we analyzed overall annual percentages of available award and incentive fees provided at each contract rating site and presented the corresponding FAR rating categories. We reviewed performance evaluation ratings from 239 performance evaluations at the 24 contract ratings sites. We also did not include ratings from the EM portion of the Savannah River Site contract for fiscal years 2006 through 2008 because, according to EM officials and award fee documents, it had multi-year award fee targets that did not align with individual fiscal years.
We conducted this performance audit from October 2016 through February 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Additional Information on the Department of Energy’s Management and Operating Contracts
Table 7 provides additional information on the Department of Energy’s 22 management and operating contracts, contractors, contract award and end year, and total spending through these contracts. Table 8 presents the spending data adjusted for inflation.
Appendix III: Additional Information on the Office of Energy Efficiency and Renewable Energy’s Performance Evaluations
The Office of Energy Efficiency and Renewable Energy (EERE) focuses on aiding the development and implementation of renewable energy technologies and improving energy efficiency across various sectors. EERE administers its management and operating (M&O) contract at the National Renewable Energy Laboratory (NREL), in Golden, Colorado. As we describe in our report, EERE follows a Science and Energy Lab approach to evaluate its M&O contractor’s performance that uses broad, office-wide performance criteria, which are mostly subjective. Table 9 provides the full list of the goals and objectives EERE used to evaluate its M&O contractor performance in fiscal year 2016. For the most part, these performance criteria remained unchanged from fiscal year 2006 through fiscal year 2016.
As we describe in our report, EERE uses detailed methodologies to determine ratings and incentives. To illustrate the detailed formulas and calculations involved, Figure 5 provides an example of how ratings and fees are calculated.
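In simplified terms, an award-fee determination maps a composite rating score to a share of the fee available for the rating period. The sketch below is a hypothetical illustration of such a proportional mapping; the dollar amount and score are assumptions, and EERE's actual methodology, illustrated in figure 5, is more detailed.

```python
# Hypothetical, simplified illustration of mapping a performance rating
# score to an earned award fee. The fee amount, the score, and the
# simple proportional mapping are assumptions, not EERE's actual
# formulas or data.

available_award_fee = 2_000_000  # dollars at stake for the rating period
rating_score = 0.92              # composite evaluation score (0 to 1)

earned_fee = available_award_fee * rating_score
print(f"Earned award fee: ${earned_fee:,.0f}")  # Earned award fee: $1,840,000
```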
Table 11 shows the rating scores the contractors earned for Mission and Operations goals.
Figure 6 shows the annual total fee (both award fee and fixed fee) EERE provided to its M&O contractors for fiscal years 2006 through 2016.
Table 12 provides the percentage of available award and incentive fees provided to the M&O contractors for fiscal years 2006 through 2016 by contract rating site.
Appendix IV: Additional Information on the Office of Environmental Management’s Performance Evaluations
The Department of Energy’s Office of Environmental Management (EM) is responsible for decontaminating and decommissioning facilities and sites that are contaminated from decades of nuclear weapons production and nuclear energy research. EM has two management & operating (M&O) contract sites: the Savannah River Site (SRS) in Aiken, South Carolina; and the Water Isolation Pilot Plant (WIPP) in Carlsbad, New Mexico.
As we describe in our report, EM follows a Site Specific approach to evaluate its M&O contractors that uses detailed performance criteria specific to each contract. Under this approach, most performance criteria we reviewed are objective criteria, and a few are broader, subjective criteria. Tables 13 through 16 provide examples of some of the specific criteria EM used at each site. We provide examples rather than a full list because each site has numerous individual metrics, which are often quite technical. Specifically, Tables 13 and 14 provide examples of EM’s objective performance criteria, which are defined based on quantifiable metrics (e.g., a contractor’s demonstrated waste processing rate) and milestones (e.g., whether a contractor completed a task on or before a scheduled date). Table 13 includes 3 of the 6 objective performance criteria EM used to evaluate the SRS contractor’s performance during fiscal year 2016. Table 14 contains examples of 3 of the 9 objective criteria EM used to evaluate the WIPP contractor in fiscal year 2016.
Tables 15 and 16 provide examples of EM’s subjective criteria, which are used for aspects of performance that may be difficult to capture objectively. Table 15 provides examples of 3 of the 12 subjective criteria for evaluating the SRS M&O contractor’s performance during fiscal year 2016, while Table 16 contains the fiscal year 2016 subjective criteria for evaluating the WIPP M&O contractor’s performance.
The following tables and figure provide details on the incentives available to and earned by EM’s M&O contractors from fiscal year 2006 through fiscal year 2016. Table 17 shows the performance incentives that EM included in its M&O contracts. We use the term “contract rating sites” rather than individual contractors or physical sites, because the individual contractors and how certain sites align with the contracts may have changed over time.
Figure 7 shows the annual total fee (both award fee and fixed fee) provided to EM M&O contractors for fiscal years 2006 through 2016. Because EM and National Nuclear Security Administration activities at the Savannah River Site are rated separately, only the EM portion of fees is represented below.
Table 18 provides the percentage of available award and incentive fees EM’s M&O contractors earned for fiscal years 2006 through 2016.
Appendix V: Additional Information on the Office of Fossil Energy’s Performance Evaluations
The Department of Energy’s Office of Fossil Energy (FE) manages the nation’s Strategic Petroleum Reserve (SPR), which consists of salt caverns storing crude oil in Texas and Louisiana. As we describe in our report, FE follows a Site Specific approach to evaluate its M&O contractors that uses detailed performance criteria specific to each contract. Under this approach, most performance criteria we reviewed are objective criteria, and a few are broader, subjective criteria. Table 19 provides examples of FE’s objective performance criteria, which are defined based on quantifiable metrics (e.g., the contractor’s demonstrated oil drawdown rate) and performance targets (e.g., whether a contractor completed a task on or before a scheduled date). Table 19 includes 4 of the 11 objective performance criteria FE used to evaluate the Strategic Petroleum Reserve Office (SPRO) contractor’s performance during fiscal year 2016. We provide examples rather than a full list because there were numerous individual metrics, which are often quite technical.
Table 20 contains the full list of FE's subjective performance criteria—which FE uses for aspects of performance that may be difficult to capture objectively—for evaluating the SPRO M&O contractor's performance during fiscal year 2016.
Table 21 shows the performance incentives that FE included in its M&O contract.
Figure 8 shows the annual total fee (both award fee and fixed fee) FE provided to its M&O contractors for fiscal years 2006 through 2016.
Table 22 provides the percentage of available award and incentive fees provided to M&O contractors for fiscal years 2006 through 2016.
Appendix VI: Additional Information on the National Nuclear Security Administration’s Performance Evaluations
The National Nuclear Security Administration (NNSA), a separately organized agency within DOE, is responsible for maintaining and enhancing the safety, reliability, and performance of the nation’s nuclear weapons stockpile, promoting international nuclear safety and nonproliferation, and supporting U.S. leadership in science and technology. NNSA administers management and operating (M&O) contracts at eight national laboratories, plants, and sites:
Bettis and Knolls Atomic Power Laboratory in West Mifflin, Pennsylvania, and Niskayuna and West Milton, New York
Kansas City National Security Campus in Kansas City, Missouri
Lawrence Livermore National Laboratory in Livermore, California
Los Alamos National Laboratory in Los Alamos, New Mexico
Nevada National Security Site near Las Vegas, Nevada
NNSA Production Office Sites
Pantex Plant in Amarillo, Texas
Y-12 National Security Complex in Oak Ridge, Tennessee
Sandia National Laboratories in Albuquerque, New Mexico
Savannah River Site in Aiken, South Carolina

As we describe in our report, NNSA follows an approach to evaluate its M&O contractors that uses broad, office-wide performance criteria that are mostly subjective. Table 23 provides the full list of the goals and objectives NNSA used to evaluate its M&O contractors' performance in fiscal year 2016. While there have been some language amendments, overall, goals and objectives have remained the same from fiscal year 2013 through fiscal year 2016.
As we describe in our report, under the NNSA approach, goals are assigned specific portions of the available award fee for each contract at the beginning of the fiscal year—and at the end of the fiscal year, officials determine ratings and fees at the same time in a collaborative meeting with NNSA leadership. Figure 9 provides an example of award fee amounts assigned to individual goals.
Table 24 shows the performance incentives that NNSA included in its M&O contracts. We use the term “contract rating sites” rather than individual contractors or physical sites, because the individual contractors and how certain sites align with the contracts have changed over time.
Specifically, NNSA consolidated its Y-12 National Security Complex and Pantex Plant contracts into the NNSA Production Office Sites contract in fiscal year 2014, and NNSA and the Office of Environmental Management separately evaluated their respective activities carried out by the Savannah River Site contractor.
Table 25 provides annual performance ratings by goal for fiscal years 2013 through 2016 for each NNSA contract rating site.
Figure 10 shows the annual total fee (both award fee and fixed fee) provided to NNSA M&O contractors for fiscal years 2006 through 2016 by contract rating site.
Table 26 provides the percentage of available award and incentive fees provided to M&O contractors for fiscal years 2006 through 2016 by contract rating site.
Appendix VII: Additional Information on the Office of Nuclear Energy’s Performance Evaluations
The Office of Nuclear Energy’s (NE) primary mission is to advance nuclear power as a resource capable of making major contributions in meeting U.S. energy supply, environmental, and energy security needs. NE administers its management and operating (M&O) contract at the Idaho National Laboratory (INL), in Idaho Falls, Idaho. As we describe in our report, NE follows a Science and Energy Lab approach to evaluate its M&O contractor that uses broad, office-wide performance criteria that are mostly subjective. Table 27 provides the full list of the goals and objectives NE used to evaluate its M&O contractor performance in fiscal year 2016. For the most part, these performance criteria have remained unchanged from fiscal year 2007 through fiscal year 2016.
As discussed above, NE uses detailed methodologies to determine ratings and incentives. To illustrate the detailed formulas and calculations involved, Figure 11 provides an excerpt from a fee determination letter as an example of how ratings and fees are calculated.
Table 28 shows the performance incentives that NE included in its M&O contract.
Table 29 shows the rating scores the contractor earned for Mission and Operations goals.
Figure 12 shows the annual total fee (both award fee and fixed fee) provided to NE’s M&O contractor for fiscal years 2006 through 2016.
Table 30 provides the percentage of available award and incentive fees provided to the M&O contractor for fiscal years 2006 through 2016.
Appendix VIII: Additional Information on the Office of Science’s Performance Evaluations
The Office of Science (SC) supports scientific research for energy and the physical sciences both by directly supporting such research, for example, through grants to and cooperative agreements with universities, and by supporting the development, construction, and operation of scientific user facilities. SC administers management and operating (M&O) contracts at 10 national laboratory sites:
Ames Laboratory in Ames, Iowa
Argonne National Laboratory in Argonne, Illinois
Brookhaven National Laboratory in Upton, New York
Fermi National Accelerator Laboratory in Batavia, Illinois
Lawrence Berkeley National Laboratory in Berkeley, California
Oak Ridge National Laboratory in Oak Ridge, Tennessee
Pacific Northwest National Laboratory in Richland, Washington
Princeton Plasma Physics Laboratory in Princeton, New Jersey
SLAC National Accelerator Laboratory in Menlo Park, California
Thomas Jefferson National Accelerator Facility in Newport News, Virginia

As we describe in our report, SC follows a Science and Energy Lab approach to evaluate its M&O contractors that uses broad, office-wide performance criteria that are mostly subjective. Table 31 provides the full list of the goals and objectives SC used to evaluate its M&O contractors' performance in fiscal year 2016. Generally, these performance criteria remained mostly unchanged from fiscal year 2006 through fiscal year 2016.
As discussed above, SC uses detailed methodologies to determine ratings and incentives. To illustrate the detailed formulas and calculations involved, Figure 13 provides excerpts from a performance evaluation report as an example of how ratings and fees are calculated.
The following tables and figure provide details on the incentives available to and earned by SC’s M&O contractors from fiscal year 2006 through 2016. Table 32 shows the performance incentives that SC included in its M&O contracts. We use the term “contract rating sites” rather than individual contractors or physical sites, because the individual contractors and how certain sites align with the contracts may have changed over time.
Table 33 shows the rating scores the contractors earned for the Science and Technology goals and Maintenance and Operations goals, by contract rating site.
Figure 14 shows the annual total fee (both award fee and fixed fee) SC M&O contractors earned for fiscal years 2006 through 2016 by contract rating site.
Table 34 provides the percentage of available award and incentive fees SC’s M&O contractors earned for fiscal years 2006 through 2016 by contract rating site.
Under the award term incentive, some SC M&O contractors are able to earn one additional year of performance under the contract for each year they exceed certain thresholds in their annual performance evaluations. Table 36 shows award term results for fiscal years 2006 through 2016 by contract rating site.
Appendix IX: Comments from the Department of Energy
Appendix X: GAO Contact and Staff Acknowledgments
GAO Contact:
Staff Acknowledgments:
In addition to the contact named above, Quindi Franco (Assistant Director), Ryan Gottschall (Analyst in Charge), Danny Baez, and Diantha Garms made key contributions to this report. Also contributing to this report were John Delicath, Brenna Derritt, Cindy Gilbert, Timothy Guinane, Rich Johnson, Danny Royer, Kiki Theodoropoulos, and Tatiana Winger. | Why GAO Did This Study
In fiscal years 2006 through 2016, the federal government spent almost $193 billion on DOE's M&O contracts—a form of contract that traces its origins to the Manhattan Project. Six DOE offices use M&O contracts to manage and operate federally owned sites that perform work to fulfill DOE's diverse missions, such as conducting scientific research and maintaining nuclear weapons.
GAO was asked to review DOE's performance management of its M&O contracts. This report examines, among other things, (1) how DOE offices evaluated M&O contractor performance in fiscal years 2006 through 2016; (2) the extent to which DOE's fiscal year 2016 M&O contractor PERs provide information on contractors' technical, administrative, and cost performance; and (3) the results of DOE's M&O contractor performance evaluations for fiscal years 2006 through 2016.
GAO reviewed performance evaluation documents for 21 of the 22 DOE M&O contracts; analyzed DOE policies, procedures, and guidelines, and federal regulations; analyzed technical, administrative, and cost aspects of M&O contracts' 2016 PERs; and interviewed DOE officials.
What GAO Found
In fiscal years 2006 through 2016, six offices within the Department of Energy (DOE) generally used one of three different approaches to evaluate management and operating (M&O) contractor performance. Although these approaches varied in the performance criteria and methodologies used for determining contractor ratings and incentives, all the offices annually set expectations for contractors and assessed performance.
In analyzing DOE's fiscal year 2016 Performance Evaluation Reports (PER), GAO found that these reports provided less information on M&O contractors' cost performance than on contractors' technical and administrative performance. The cost information provided in the PERs often was not detailed, did not indicate the significance of the performance being described, and applied only to specific activities. Further, the information is of limited use for acquisition decision-making, such as deciding whether to extend the length of a contract, because it does not permit an overall assessment of cost performance. A key reason PERs did not include more cost performance information is that the DOE offices' policies do not require specific assessments of cost performance or discuss how to ensure cost information is useful for future acquisition decision-making. By updating policies to require inclusion of quality cost performance information in PERs, DOE offices could better assess M&O contractors' costs, improve acquisition decision-making, and ensure performance evaluations fully address required elements.
Based on GAO's review of DOE M&O contractor performance evaluations from fiscal years 2006 through 2016, DOE generally provided high performance ratings and more than 90 percent of available performance incentives (see figure). Ratings for some areas of contractor performance, as well as ratings for contractor performance at specific DOE sites, varied from this trend. For example, three times during this period contractors received 50 percent or less of available award and incentive fees due to a major accident and safety and security issues.
What GAO Recommends
GAO is making seven recommendations to DOE, including to each of the six DOE offices to update their policies requiring that PERs include quality information to enable an overall assessment of M&O contractor cost performance. In commenting on a draft of this report, DOE generally agreed with these recommendations. |
USPS has a wide range of domestic competitive products that are a growing sector of its business. The volume of USPS’s competitive products increased from approximately 750 million pieces in fiscal year 2008 to 4.9 billion pieces in fiscal year 2017. Revenue from these products increased from about 10 percent of all USPS mail revenues in fiscal year 2008 to about 28 percent in fiscal year 2017 (see fig. 1). USPS forecasts that continued growth in e-commerce will increase the volume of its competitive products, especially for the “last-mile” delivery service to consumers—which involves delivery from retail locations and fulfillment centers (i.e., where online orders are processed, packaged, and shipped out to USPS for delivery) to customers. USPS reported that in fiscal year 2017, revenue from competitive products exceeded USPS’s expectations by $500 million due to the growth in e-commerce and successful marketing and sales campaigns. USPS expects increased competition, though, in the first- and last-mile delivery services—collection and delivery of packages—from other delivery providers.
To remain competitive in the competitive product delivery market, USPS officials have stated that information gained from scanning is leveraged to provide customers with real-time visibility for the location of a competitive product in USPS’s delivery process as well as accurate estimates of the delivery time of USPS’s competitive products. Further, USPS’s latest strategic plan states that this information is one factor used to reduce its own costs through optimizing its network, including processing facilities, post offices, and numerous other facilities across the United States, and streamlining its operations.
USPS delivers competitive products across the nation, which it divides into seven postal areas composed of 67 postal districts (see fig. 2). Managers at each level—postal area, postal district, and post office—are responsible for overseeing and reporting on the performance of the level below them. For example, each district manager is accountable to the area vice president. Postmasters, who manage individual post offices, are accountable to district managers and also monitor the performance of employees at their post office.
To track the movement of competitive products, USPS leverages automation (i.e., scanning by postal-processing equipment) and passive and active scan technology (i.e., scanning devices used by postal employees) to capture barcode information. In addition, when competitive products are not able to go through all the automated scans, USPS employees are to manually scan barcodes that have been placed on each item. These barcodes link the item with information in USPS's databases, such as the delivery address, the type of USPS product, and when the item was accepted by USPS. According to USPS procedures, competitive products could be scanned up to 13 times to generate the visibility necessary for USPS, mailers, and customers to track their packages as they move through USPS's network (see fig. 3). For example, the first scan of the product—the "Acceptance" scan—is made when the item is dropped off at the post office or by a carrier if the product is picked up at a mailbox or customer address. The last scan—the "Acceptable Delivery Event" scan—generally means the item was successfully delivered to the addressee or that a delivery attempt was made (e.g., the product requires a signature but the recipient was not at home, so another attempt will need to be made or the recipient will need to pick up the product). The interim scans reflect the product's progress through the postal network, including through mail-processing plants and equipment. The scan data are transmitted to USPS's data systems throughout the day. Scan information from these systems is available to USPS managers as well as mailers and customers who wish to track the progress of their items.
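Conceptually, each barcode accumulates an ordered series of timestamped scan events, and a package's record can be checked against the set of required scans. The following minimal sketch illustrates such a check; the event names, data layout, and sample history are hypothetical illustrations, not USPS's actual event codes or systems.

```python
# Minimal sketch of checking a package's scan history for required
# events. The event names and the sample history are hypothetical,
# not USPS's actual event codes or data schema.
from datetime import datetime

REQUIRED_EVENTS = ["acceptance", "arrival_at_unit", "acceptable_delivery_event"]

scan_history = [
    ("acceptance", datetime(2018, 5, 1, 10, 15)),
    ("arrival_at_unit", datetime(2018, 5, 3, 6, 40)),
    # No delivery scan recorded yet.
]

seen = {event for event, _ in scan_history}
missing = [e for e in REQUIRED_EVENTS if e not in seen]
if missing:
    print(f"Missing required scans: {', '.join(missing)}")
# Prints: Missing required scans: acceptable_delivery_event
```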
USPS’s employees use devices to scan competitive products in postal facilities and on delivery routes (see scans 1, 2, and 11–13 in fig. 3). Carriers usually use a handheld Mobile Delivery Device (MDD) to scan a package’s barcode. MDDs contain Global Positioning System (GPS) technology and transmit package scanning data and carrier location data using a cellular network. USPS employees working inside post offices or other facilities use similar scanning devices without GPS technology, such as the handheld Intelligent Mail Device (IMD) to perform the manual scans (see fig. 4).
USPS’s Competitive Products Are Almost Always Scanned Correctly, Although Some Missed and Inaccurate Scans Occur
USPS reports we reviewed indicate that competitive products are almost always scanned and scanned correctly. USPS has an overall organizational goal of accurately scanning 100 percent of all mail pieces—both competitive and other products—that have a barcode. This includes scanning each competitive product at several points from acceptance, as described earlier. However, individual management employee-performance goals for scanning are set slightly lower than 100 percent, as USPS officials stated that they recognize that some scanning issues, such as for missing or damaged barcodes, may occur across post offices. According to USPS data we reviewed for the first three quarters of fiscal year 2018, all but one of USPS's 67 districts met USPS's scanning goals for all five required scans for competitive products. Additionally, in one district we visited, a USPS internal report showed that every group of post offices in the district met its scanning goal for the arrival-at-unit scan for the week, the preceding 4 weeks, and the year-to-date periods, and all but one group of post offices met their scanning goals for the acceptable delivery scan for the same measurement period.
In addition, representatives for mailers we interviewed that use USPS’s competitive products stated that they were generally satisfied with USPS’s scanning performance. Representatives of all the major mailers we spoke with that rely on USPS’s delivery network said they believed that USPS is generally scanning competitive products accurately, although issues still occur. Representatives of mailers told us that they receive scanning data from USPS for their items throughout the day, with some mailers receiving the data every 15 minutes, a rate that allows them to track their items through USPS. Some mailers use this information to calculate the expected time of delivery and monitor USPS’s progress against their own estimates of delivery time to measure USPS’s performance. Representatives for major mailers we spoke with said they also get complaints from customers if items are late, lost, or inaccurately scanned, so the customers provide another source of information on any scanning issues. Four of the five representatives for major mailers we interviewed that sent items via USPS competitive products told us that they have seen improvement in USPS’s scanning performance in recent years. Additionally, all of the representatives for mailers we spoke with stated that USPS has increased the amount of scanning and the information provided from the scans in recent years.
Although USPS has a high scanning rate, some missed and inaccurate scans for competitive products do occur, errors that could potentially affect millions of competitive products. For example, several USPS OIG reports between 2016 and 2018 found that instances of missed or inaccurate scans still occurred nationwide and that, in nine USPS districts the OIG analyzed, these instances were due in part to post office personnel not always following proper scanning procedures and to post office supervisors not adequately monitoring how scanning procedures were implemented. For example, the USPS OIG analyzed approximately 2 billion delivery scans over a 6-month period in 2017 and found that 1.9 million delivery scans (about 0.1 percent) occurred at the post office instead of at the delivery address and were considered improper scans. Furthermore, examples of USPS's internal reports we reviewed containing scanning performance results showed that a small percentage of competitive mail items had not been scanned. For example, one USPS internal report for a district we visited showed that for one week, USPS employees in the district missed about 0.73 percent of the expected delivery scans for competitive products. Due to USPS's large volume of competitive products, a small percentage of products not scanned can represent large numbers of items. For example, about 155,000 competitive products were missing a delivery scan in one district's 2018 year-to-date report we reviewed.
Additionally, the representatives of mailers we interviewed reported occasional scanning issues with USPS's competitive products. Most of the mailers' representatives stated that when they see competitive items missing scanning data, it is generally an isolated situation and USPS usually fixes the issue. According to these representatives, USPS provides them with points of contact who work with them to resolve scanning issues immediately and on a regular basis. However, one major mailer's representative we spoke with stated that even though USPS's employees are generally good at scanning packages, inaccurate delivery scanning is an issue. The representative stated that about 8 to 10 percent of the company's products sent through USPS were scanned by carriers as delivered, but not at the customer's delivery address—contrary to USPS's standard operating procedures for scanning. The representative stated that, although this percentage has decreased in recent years, the mailer would like to see that number decrease further because delivery to the destination address assures them that the item was left as close as possible to the customer.
USPS is taking some steps to address missed or inaccurate scans. For example, USPS officials stated that the current electronic scanning device carried by almost all carriers on their routes does not prevent scanning a mail item as delivered to an address that is not the delivery address associated with the item's barcode information. They also stated that USPS is updating scanning devices to alert carriers who scan items as delivered while not physically at the correct delivery address. According to USPS officials, as of May 2018, 80 percent of hand-held electronic scanning devices used by USPS carriers had this functionality, which is being fine-tuned. This capability, though, still does not preclude all scanning errors, as it only affects the final delivery scan. USPS officials also stated that employees may still encounter scanning issues, such as damaged barcodes, which could lead to missed scans.
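In essence, such an alert compares the GPS fix recorded at the moment of the scan with the coordinates on file for the delivery address. The minimal sketch below illustrates that kind of geofence check; the 50-meter threshold and the coordinates are illustrative assumptions, not USPS's actual parameters.

```python
# Minimal sketch of a geofence check on a delivery scan: flag the scan
# if the device's GPS fix is farther from the delivery address than an
# assumed threshold. The threshold and coordinates are illustrative.
import math

def distance_meters(lat1, lon1, lat2, lon2):
    """Haversine distance between two points, in meters."""
    r = 6371000  # approximate Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

THRESHOLD_M = 50  # assumed geofence radius around the delivery address

def delivery_scan_ok(scan_lat, scan_lon, addr_lat, addr_lon):
    return distance_meters(scan_lat, scan_lon, addr_lat, addr_lon) <= THRESHOLD_M

# Example: a scan made back at the post office, about 1 km from the
# delivery address, would be flagged as improper.
print(delivery_scan_ok(38.8977, -77.0365, 38.9072, -77.0369))  # False
```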
USPS Has Designed Policies and Procedures to Support Accurate Scanning of Competitive Products, but Limitations Could Contribute to Scanning Errors
USPS’s Scanning Policies and Procedures Are Not Based on Standards for Operational Internal Controls
USPS has not based its operational policies and procedures, such as those that support the accurate scanning of competitive products, on any standards for internal controls. USPS officials told us that they have not used any specific criteria for designing, implementing, and operating an internal control system for its operational policies and procedures, such as those that help ensure competitive products are accurately scanned. According to USPS officials, USPS does not follow the COSO Framework to design, implement, or evaluate its operational internal controls, as they believe that the COSO Framework standards are traditionally related to internal controls over financial reporting. In addition, USPS officials stated that USPS is not required to follow Standards for Internal Control in the Federal Government and therefore does not follow these standards either. Instead, USPS officials stated that USPS has designed its operational policies and internal controls over the years based on its unique responsibilities, management experience, and sound business practices. However, officials could not identify any specific standards or framework they had followed.
We have reported that standards for the design, implementation, and operation of an entity's internal-control system provide an overall framework for establishing and maintaining an effective internal-control system—which is a key factor in achieving an entity's mission. Further, internal controls help managers achieve desired results through effective stewardship of public resources. USPS has options to choose from in selecting standards for internal controls. Two widely used standards are the COSO Framework and Standards for Internal Control in the Federal Government, which was adapted for federal entities from the COSO Framework. Both standards are designed to help an entity design, implement, and maintain an effective internal-control system. Such a system should encompass all aspects of an entity's objectives, including operations, reporting, and compliance objectives, and can help an entity adapt to shifting environments, evolving demands, changing risks, and new priorities. Non-federal entities can adopt either of these standards in their efforts to design, implement, and operate an effective internal control system.
As stated above, we found the COSO Framework to be a reasonable and relevant set of internal control standards to evaluate USPS's operational internal-control activities. Moreover, we and the USPS OIG have applied both the COSO Framework and Standards for Internal Control in the Federal Government in evaluating USPS's operational internal controls in recent reports. Without standards for an effective internal-control system for its operational policies and procedures for scanning competitive products, USPS may miss opportunities to improve how it achieves its mission to deliver those important products.
USPS Has Standard Operating Procedures for Scanning Competitive Products, but They May Not Guarantee Accurate Scanning
USPS management has designed standard operating procedures to provide assurance that competitive products are scanned accurately. We found some of these procedures to be consistent with the COSO Framework, which states that an organization should deploy control activities through policies that establish what is expected and procedures that put policies into action. USPS has developed a scanning policy for its products, stating that "properly scanning all barcodes will result in World Class Visibility and be instrumental in retaining and growing our shipping business and providing valuable data to drive improved operational performance and reduce costs." USPS also has procedures that establish the responsibilities of employees for accurately scanning barcodes for competitive products at various points in the mail flow. Although USPS officials stated that employees should rely on prompts from their scanning devices to ensure scans are done correctly, USPS communicates these procedures in three main ways: documents, such as the City Carrier Handbook and Rural Carrier Handbook, that outline scanning procedures and explain carriers' duties, including scanning; job aids, such as posters showing proper scanning procedures (see fig. 5); and standard work steps or guidance that lists procedural steps either for competitive products or for scanning mail in general (see fig. 6).
Following these procedures is important to fulfill USPS’s scanning goals. As stated above, the USPS OIG found instances of missed or inaccurate scans for competitive items in recent reports. Further, the USPS OIG also recently found that USPS employees at all 15 postal facilities it visited in the Los Angeles District did not follow correct scanning procedures for USPS’s competitive Parcel Return Service product, leading to inconsistent counts for these products. Such errors can put USPS at risk of not collecting revenue for these products. The USPS OIG has made several recommendations in its recent reports to USPS management to reinforce the importance of these procedures to employees. USPS officials agreed with some of these recommendations and stated that they are taking action to address them.
While reinforcing these procedures can be helpful, we found that USPS’s scanning procedures may not provide the necessary assurance for accurate scanning because they are not consistent. For example:
The USPS’s City Carrier Handbook states that mail with a barcode should be scanned at the delivery point (or address).
However, a standard operating procedures document for city carriers at a post office we visited stated that carriers must scan each delivery confirmation mail piece but did not specify that this scan had to be at the delivery point or address. Locally developed procedures may not be uncommon, as one district manager told us that USPS headquarters allows managers a certain amount of flexibility to adapt the standard operating procedures for each post office.
The USPS document, SCANNING at a Glance: Delivering 100% Visibility, states that all mail items that require delivery scanning should be scanned at the delivery address, but this document also provides additional scanning procedures not contained in the City Carrier Handbook and other standard operating procedures documents we examined. In particular, the document contained procedures for scanning mail being held for customers on vacation, for scanning mail not delivered to businesses that were closed, and for scanning mail that was refused by the addressee.
This inconsistency in USPS’s scanning procedures has likely occurred because many of the documents have been updated at different times and have not always reflected new operations. For city carriers, the online version of the USPS’s City Carrier Handbook was last updated in April 2001. USPS officials stated that the most recent update regarding scanning was issued in November 2015 via a separate Postal Bulletin. Further, a separate standard operating procedure document for city carriers at a post office we visited was dated June 2006. For rural carriers, the most recently updated scanning procedures we found was dated 2013. As a result, some of these documents are not updated with the latest information on new scanning procedures. In a related example, the USPS OIG recently found that employees at three of the six USPS facilities the USPS OIG visited did not have an adequate understanding of the procedures for processing election and political mail due, in part, to guidance that was not updated, even though the procedures were centrally documented on an internal USPS website.
USPS officials recognized this issue and stated that these handbooks are not updated regularly because the content of the handbooks is subject to labor negotiations. Therefore, new procedures are presented to USPS employees outside of the handbooks. However, given that these efforts rely on employees to orally communicate information, having consistent documented procedures is even more important. In addition to stating that the organization should deploy control activities through policies and procedures, the COSO Framework states that senior management should communicate objectives clearly throughout the organization so that other management and personnel understand their individual roles in the organization. By not having consistent procedures, USPS risks not clearly communicating to its employees how they should carry out scanning procedures, thereby contributing to scanning errors. As discussed below, USPS officials told us that management updates its procedures typically through regular meetings with employees, which are documented in handouts or slides. USPS officials stated that management stresses the importance of scanning and that employees should follow the prompts on their electronic devices when scanning competitive products. However, employees can still scan competitive products as delivered even if they are not, as device prompts can be misread, misinterpreted, or ignored. Furthermore, even with current prompts, scanning errors can and do occur.
Consistent procedures, clearly communicated to employees, have become increasingly important as USPS hires new employees to handle, in part, anticipated growth in the volume of competitive packages. For example, GAO analysis of USPS data showed that USPS’s carrier workforce increased by 6.4 percent between fiscal years 2015 and 2017. The USPS OIG has found that these new employees require training and guidance to properly perform their roles and to reduce turnover.
USPS Holds Regular Training and Meetings to Support Accurate Scanning
In addition to deploying policies and procedures to achieve an organization’s objectives, the COSO Framework states that an organization should internally communicate objectives and responsibilities that are necessary to support the functioning of internal controls. This process can be accomplished through training and meetings. Specifically, the COSO Framework states that training should enable individuals to develop competencies appropriate for assigned roles and responsibilities, among other things, and that active forms of communication such as face-to-face meetings are often more effective than passive forms such as broadcast e-mails and intranet postings.
To communicate how its procedures should be correctly implemented, USPS has developed both initial and ongoing training for employees. USPS officials stated that new employees are formally trained in scanning procedures when they start their employment. For example, carriers are trained in how to use USPS's electronic scanning devices, when to scan competitive items, and the correct codes to use for different delivery situations (e.g., signature required, vacation holds, and where a package was left at a delivery address). Any new procedures can be introduced through presentations given by managers during meetings, as described below. Required regular meetings may be tracked by USPS management to ensure they are completed. Some district officials we spoke with stated that they certify that their employees have received required training and send that certification to area and USPS headquarters officials. Additional training also helps USPS reinforce correct scanning procedures. When scanning procedures are not being followed or scanning goals are not met at a post office, USPS officials stated that reminders designed to reinforce the correct scanning procedures are presented to employees through presentations, posters, job aids, and additional documents such as carriers' handbooks. For example, the representative of the major mailer we spoke with that had 8 to 10 percent of competitive products not scanned at the final delivery address stated that training was needed for both new and experienced carriers to reinforce that they should scan items at the delivery address.
To further ensure the accurate scanning of competitive products, USPS reported that it holds internal and external meetings. Specifically, these meetings are designed to:
Reinforce procedures: Post office managers can use stand-up talks—weekly meetings between management and employees at the post office—to discuss scanning issues with employees and opportunities to address those issues. For example, the postmaster at one post office we visited stated that this post office reinforces the standard work procedures designed to improve the scanning performance of employees during these meetings. Carriers and clerks can ask questions and learn why they are asked to do something or how to do a specific task, allowing for additional training and reinforcement of procedures. For example, we reviewed a handout developed by USPS headquarters to provide managers with talking points for service talks. This handout provided information on carriers delivering and scanning accurately and instructions on scanning at point of delivery on rural routes.
Introduce new procedures: USPS officials told us that post office managers use stand-up talks to introduce new procedures and processes with carriers and clerks. For example, postmasters stated that they used these meetings to introduce and train carriers on new scanning features at the post offices. USPS district and area management develop and disseminate memos and handouts to assist managers conducting these meetings. We reviewed handouts USPS provided to managers for service talks. These handouts provided information on the rollout of some of the most recent scanning procedure changes.
Continuously improve operations: District managers we interviewed stated that post offices with low scanning performance scores are placed on a district’s list of underperforming post offices. USPS district managers we interviewed told us that they meet with these post offices to determine how each post office plans to improve its scanning performance. District management also conducts audits of underperforming post offices and post offices that are in need of improvement. Our review of one district office’s service review checklist identified the key areas of audit for underperforming post offices.
Reassess procedures: Representatives of mailers we interviewed told us that they meet with USPS representatives to discuss ways USPS can share scanning information for competitive products.
USPS Generates Reports for Tracking Scanning Performance, but Reports May Not Be Used Consistently by Managers to Resolve Scanning Issues
Given that inaccurate scans can and do occur, it is important that postal managers explore and investigate any instances of missed or inaccurate scans. To do so, USPS managers—including area vice presidents, district managers, and postmasters—use a variety of reports as tools to ensure that the required scans are made at the appropriate place and time, and take action to monitor the status of competitive products, track lost items, and identify scanning issues. USPS headquarters designs reports used by managers to review performance at the local level across the country. Managers at each level are responsible for overseeing and reporting on the performances of the level below them. For example, the postmaster monitors performance of employees at the post office and is accountable to the district manager. In turn, each district manager is held accountable by the area vice president.
To monitor performance of scanning of competitive products, these managers have access to several USPS data systems to generate reports. They can use the reports to monitor scanning performance of carriers and clerks at each post office and to identify the causes of scanning issues, such as missing or incorrect scans. Managers can also use these reports to track the status of competitive products or to investigate customer complaints of lost items. Some examples of reports available to managers include the following:
Report 1: USPS officials told us that each post office receives this report from its District Office. The report identifies competitive products that do not have all the required scans, such as scans when the item arrives at the post office or when a delivery attempt was made. For example, one district official sends postmasters weekly reports on competitive products that do not have all the required scans. The officials told us that these reports help managers investigate the causes of incorrect scans identified in the report and determine how to prevent future occurrences.
Report 2: USPS officials told us that this report is generated by district managers to proactively identify scanning irregularities, such as scans that may be out of sequence or multiple competitive products that are scanned at the same time but are for different addresses. District management can query postmasters about these scans and ask them to investigate the reason for the irregularities and determine if the scan was appropriate.
Report 3: USPS officials told us that this report is generated by postmasters to monitor scanning status and performance for each competitive product that has received an arrival scan but lacks a delivery scan. While a missing delivery scan may indicate a problem, it could also simply reflect that the final scan had not been made by the end of the day or that the scan had not yet been uploaded into the USPS data systems when the report was generated.
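To illustrate the kind of logic that underlies a report like Report 3, the following minimal sketch (our own illustration in Python, not USPS's actual reporting system) flags items that have an arrival scan but no delivery scan; the column and event names are hypothetical assumptions:

```python
# Minimal sketch, our own illustration, of Report 3-style logic:
# flag tracking IDs that have an arrival scan but no delivery scan.
# Column and event names are hypothetical assumptions.
import pandas as pd

scans = pd.DataFrame({
    "tracking_id": ["A1", "A1", "B2", "C3"],
    "event":       ["ARRIVAL", "DELIVERED", "ARRIVAL", "ARRIVAL"],
})

# Collect the set of scan events recorded for each item.
events = scans.groupby("tracking_id")["event"].agg(set)

# Keep items that arrived at the post office but show no delivery scan.
flagged = events[events.apply(lambda e: "ARRIVAL" in e and "DELIVERED" not in e)]
print(flagged.index.tolist())  # ['B2', 'C3'], arrived but not yet delivered
```

In practice, such a filter would also need to account for the timing caveats noted above, such as scans that had not yet been uploaded when the report was generated.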
While these reports are helpful, their full potential to help USPS managers may be limited because USPS lacks detailed and up-to-date standard operating procedures for how managers should use these reports or conduct other activities to efficiently investigate and resolve scanning issues. USPS's Scanning Performance: Delivery Standard Operating Procedures for managers is a list of bullet points outlining managers' responsibilities to meet scanning performance target goals, not a list of detailed procedures for managers to follow, such as how to use Report 1 to identify items that do not have all the required scans. In addition, USPS officials told us that this list has not been updated since approximately October 2005. The COSO Framework states that organizations should internally communicate information, including objectives and responsibilities for internal control, necessary to support the functioning of internal control. Further, it states that a process should be in place to communicate required information to enable all personnel to understand and carry out their internal-control responsibilities.
Absent such communication, managers may take different actions to address problems or may have difficulty knowing where to find the appropriate information to locate a missing item to resolve a customer’s complaint quickly. For example, one post office manager told us that he will look at the scanning history in the USPS data systems to determine if the item received an acceptable delivery event scan or what the status of the item is on the route, while another post office manager told us he will use GPS data to see where the scans were made to determine if the item was delivered to the right address. If managers do not know where to find the appropriate information, they may spend more time investigating and be less efficient in resolving issues.
Further, not having detailed standard operating procedures means managers may not be aware of all the reports available to them. For example, some post office managers told us that they use Report 3, while other post office managers told us that this report was not available to them. Without using Report 3, some managers told us that they look in several sources to find the same information needed to resolve an issue, such as locating a lost package. Some managers told us that USPS management discontinued the report because it was being misused; specifically, some managers were manually entering scanning or service-performance information retroactively to improve their performance scores. However, they told us that USPS management recently made Report 3 available to managers again but changed its features to reduce any misuse.
Additionally, USPS may miss opportunities to prevent scanning issues from recurring by not clearly communicating how managers should use the various reports to address specific scanning issues. For example, the USPS OIG recently determined that instances of missed and inaccurate scans for competitive products were a result of USPS management not adequately monitoring the implementation of scanning procedures. Without detailed procedures to guide managers in finding and using specific information in available reports and other tools, managers will not have consistent information with which to investigate and resolve customer complaints quickly or accurately. In addition, new managers may not know where to go for the most appropriate information and how to use this information to address some issues.
Conclusions
As competitive products have become essential to USPS’s economic viability, it is increasingly important for USPS to accurately track them to remain competitive in this market. While USPS may be scanning most mail accurately, there continue to be instances where mail is not scanned accurately or is missing scans. Given the volume and growth in these competitive products, even a small percentage of inaccurately scanned products could be a large number of such products. Since USPS’s procedures were developed absent standards for internal control, the adoption of a set of internal control standards could enhance USPS’s efforts to continuously improve the design, implementation, and evaluation of its operational internal controls for scanning of competitive products. Further, since USPS’s standard operating procedures for scanning are located in numerous documents and are not always consistent—and given USPS’s reliance on stand-up talks and meetings to keep employees current—USPS employees may not always have accurate scanning procedures easily accessible to them. Having consistent standard operating procedures is increasingly important to ensure that employees are making accurate scans. Additionally, standard procedures that guide managers to investigate and resolve scanning issues would help managers more efficiently address these issues and ideally prevent these issues from happening again.
Recommendations for Executive Action
To improve USPS’s competitive products scanning, we recommend that the Postmaster General take the following three actions.
The Postmaster General should identify and adopt a set of internal control standards that can be used as the basis for operational internal-control activities, such as those for scanning competitive products. (Recommendation 1)
The Postmaster General should improve the communication of standard operating procedures for scanning competitive products by, for example, updating or consolidating USPS documents, job aids, and standard work steps. (Recommendation 2)
The Postmaster General should create standard operating procedures for managers on how to address inaccurate scans and use available reports to investigate and resolve scanning issues. (Recommendation 3)
Agency Comments
We provided a draft of this product to USPS for its review and comment. USPS’s comments are reproduced in appendix I.
USPS stated that it cannot agree with our recommendation to identify and adopt a set of internal control standards for USPS's operational internal control activities at this time. According to USPS, although it has adopted an internal control framework for its financial internal control activities, it does not know the benefits and costs of adopting internal control standards for its operational internal control activities. As a result, USPS agreed to conduct a cost study to determine whether to commit resources to identifying and adopting a set of internal control standards for its operational internal control activities. We are encouraged that USPS is planning to conduct such a study and anticipate that performing this study will result in the implementation of an appropriate set of internal control standards. USPS agreed with the two recommendations regarding scanning procedures and committed to completing corrective actions by November 2018.
In its general comments, USPS noted that our reference to the USPS OIG’s report, Processing Readiness for Election and Political Mail for the 2018 Midterm Elections did not appear germane to the scanning of competitive mail. We recognize that this report was focused on a different type of mail, but as USPS noted in its letter, we use the OIG report as a related example of how USPS has taken efforts to improve the communication of its scanning procedures to employees. Therefore, we determined that our use of the report is appropriate. We have added information from the OIG report to characterize the OIG’s recommendations and USPS’s actions to address those recommendations.
USPS also provided technical comments, which we incorporated as appropriate.
We will send copies of this report to the appropriate congressional committees, the Postmaster General, the Chairman of the Postal Regulatory Commission, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff making key contributions to this report are listed in appendix II.
Appendix I: Comments from the U.S. Postal Service
Appendix II: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the individual named above, Kyle Browning (Assistant Director); Greg Hanna (Analyst-in-Charge); Michael Hansen; Thanh Lu; John Mingus; Faye Morrison; Malika Rice; Amy Rosewarne; Crystal Wesco; Elizabeth Wood; and Matthew Zaun made key contributions to this report. | Why GAO Did This Study
USPS's competitive products have become increasingly important, comprising about 28 percent of USPS's total revenue. USPS scans these packages at various points throughout the postal network. When scans are inaccurate or missing, questions are raised about the veracity of USPS's data on scanning performance and can lead to customer complaints.
GAO was asked to review USPS's scanning policies and procedures. In this report, GAO (1) describes USPS's scanning performance and (2) examines how USPS ensures accurate scanning. GAO reviewed USPS's policies and procedures and assessed them against internal control standards; interviewed officials from USPS and five high-volume mailers; and conducted site visits to six post offices in two USPS districts that represented a range of volume, number of routes, and performance.
What GAO Found
Mail products over which the United States Postal Service (USPS) does not exercise market dominance, such as many of its packages, are called competitive products. These items are scanned throughout the mail delivery system to track their progress (see figure). USPS data show that these products are almost always scanned. For example, USPS data showed that for the first three quarters of fiscal year 2018, all but one of USPS's 67 districts met their scanning goals. Additionally, mailers that account for a high volume of USPS's competitive products told GAO that they believed USPS was generally scanning products correctly. However, a small percentage of missed or inaccurate scans occur. For example, a report from one USPS district showed that for one week, 0.73 percent of the products delivered were missing a scan and that for the fiscal year to date almost 155,000 competitive products were missing a delivery scan.
USPS has designed and implemented procedures and activities to help ensure accurate scanning, but some limitations could contribute to scanning errors. For example, USPS has not based its operational procedures for scanning on any internal control standards. USPS officials said the procedures were based on USPS's unique responsibilities, management experience, and sound business practices, but the officials could not identify specific standards or a framework that they followed as the basis for the procedures. USPS officials said they did not believe any internal control standards applied to these procedures. By not basing procedures on standards, USPS may miss opportunities to improve how it achieves its mission to scan and measure the performance of competitive products. Additionally, USPS's scanning procedure documents, such as those outlining specific delivery scanning steps, are not always consistent, and USPS relies on more informal methods, such as meetings with employees, to communicate changes. Thus, employees may not have accurate procedures available to them. Finally, USPS lacks procedures to help managers identify and correct inaccurate scans, respond to customer complaints, or otherwise resolve scanning irregularities. For example, USPS's guidance for managers is limited to a list of bullet points that does not detail the steps managers should follow to resolve scanning irregularities. In addition, this list has not been updated since 2005. Without consistent or detailed procedures, USPS's employees and managers may not scan items accurately or find information needed to resolve scanning issues—a situation that could hinder USPS's ability to reduce inaccurate or missing scans for these important mail products.
What GAO Recommends
GAO recommends that USPS: (1) identify and adopt internal control standards for its operational activities, such as the scanning of competitive products; (2) improve the communication of procedures for scanning competitive products; and (3) create procedures for supervisors on how to address inaccurate scans and resolve scanning issues. USPS agreed to explore addressing the first recommendation and agreed with the other two recommendations.
Federal Requirements for Implementing Telework and Reducing Space
Implementing Telework: The Telework Enhancement Act of 2010 (the Act) establishes telework implementation requirements for agencies, including, for example, that each agency designate a Telework Managing Officer and that each agency incorporate telework into its continuity of operations plans. The Act does not mention telework specifically in the context of space planning. The Act also requires the Office of Personnel Management (OPM) to assess whether agencies have met agency-established telework outcome goals such as real estate savings. It also requires OPM to submit reports that include executive agencies' goals for increasing telework participation to the extent practicable, assist each agency with developing qualitative and quantitative teleworking measures and goals, and track telework eligibility and participation rates across the government.
According to OPM telework information from fiscal year 2012 through fiscal year 2015, the percentage of federal workers eligible to telework remained stable at about 45 percent, on average. However, during the same period, the percentage of eligible employees who participated in telework increased from 29 percent to 46 percent. Figure 1 shows the frequency of telework across the federal government from fiscal years 2012 through 2015 by type of telework.
Reducing Space: OMB issued the National Strategy for the Efficient Use of Real Property and the Reduce the Footprint policy in 2015, which require all CFO Act agencies to improve the efficiency of real property use, control costs, and reduce holdings. These OMB initiatives also required agencies to: develop Five-Year Real Property Efficiency Plans annually; develop office space standards that specify maximum square footage; identify reduction targets for office space in square feet; and freeze the footprint (i.e., not increase square footage of office space).
OMB’s National Strategy noted that employee telework has changed the dynamic of the federal real property portfolio and resulted in a need for less space. OMB’s Reduce the Footprint guidance memo states that agencies’ 5-year plans should include an explanation of actions the agency is taking to increase space efficiency, including cost-effective alternatives to acquisition of additional office space, such as consolidation, colocation, teleworking, and “hoteling.” Federal statute also requires that agencies consider whether space needs can be met using alternative workplace arrangements when deciding whether to acquire new space.
Mobility as a Space- Planning Tool
GSA defines mobility as an overarching term describing the ability of employees, enabled by information technology (IT) and workplace policies to perform work both within and outside the agency worksite. Under this definition, mobility includes telework, desk-sharing, site work, and travel. Agencies can strategically use telework—one form of mobility—combined with desk-sharing and hoteling to reduce space needs and increase efficiency. This allows agencies to plan for fewer workstations than the number of employees. Other space efficiency strategies such as smaller workstations (e.g., reduced space standards), reconfigured office space (e.g., open-office plans instead of private offices), and mobile technology (e.g., laptops, Wi-Fi throughout the office, and smart phones) can be combined with telework and used as planning tools to reduce office space, use space more efficiently, and potentially cut costs.
GSA, in a 2010 publication, described a continuum of three scenarios for the ways agencies may use mobility, including telework. These scenarios range from limited mobility not leveraged for space planning to extensive mobility leveraged to reduce office space and use it more efficiently:
(1) No space changes: Some employees telework at least 2 scheduled days per week but retain assigned workstations, with no changes to the existing space configuration.
(2) No space reduction but different space allocation: Most employees telework at least 2 scheduled days per week and keep assigned workstations. Workstations are smaller and more densely organized, and space freed up by smaller workstations can be used for collaborative work spaces.
(3) Space reduction and different space allocation: Nearly all employees telework at least 3 scheduled days per week and participate in hoteling (i.e., unassigned workstations), and workstations are also smaller and more densely organized. In this scenario, according to GSA, an agency can redesign its office and potentially reduce space by up to 30 percent.
The key factors distinguishing these scenarios, illustrated in figure 2 below, include: the level of employee participation in telework; changes to physical office spaces (e.g., smaller and more densely organized workstations and more emphasis on collaborative workspaces); and the extent to which employees have assigned workstations or participate in desk-sharing.
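To make the arithmetic behind scenario 3 concrete, the following worked example is a minimal sketch using illustrative values of our own, not figures from GSA guidance. When hoteling lets an agency plan for fewer seats than employees, the seat count and the seat-driven portion of the footprint shrink proportionally:

\[
W = \lceil r \cdot N \rceil, \qquad \text{office USF} \approx W \cdot s_{\text{seat}} + A_{\text{shared}},
\]

where \(N\) is the number of employees, \(r\) is the planned seats-per-employee ratio, \(s_{\text{seat}}\) is the usable square feet per workstation, and \(A_{\text{shared}}\) is conference and collaboration space. With the assumed values \(N = 100\) and \(r = 0.7\), the office needs only 70 workstations, so the seat-driven space falls by 30 percent, consistent with the up to 30 percent reduction GSA describes for this scenario.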
GSA Provides Space- Planning Guidance and Assistance to Agencies
Under federal statute, GSA has a role in promulgating rules and developing guidance promoting the efficient use of real property. For example, the GSA Administrator may provide guidance, assistance, and oversight to client agencies regarding the establishment and operation of alternative workplace arrangements, which include leveraging telework to reduce space needs. GSA also directly assists client agencies with identifying and prioritizing opportunities to improve and implement real-property efficiency measures.
Agencies Reported Various Ways of Considering and Using Telework as a Space Planning Tool
In reviewing planning documents, policies, and survey data, we found that the 23 civilian CFO Act agencies reported using telework to reduce or use space more efficiently. Specifically, our analysis of (1) agency-wide space-planning policies and procedures and (2) Real Property Efficiency Plans found that all of the agencies discussed telework in the context of space planning and achieving greater space efficiencies. Agencies also provided examples in survey responses of how they have used telework to increase operational effectiveness while optimizing their use of space.
Agencies Use Telework in Space Planning, and Most Include It in Their Agency- Wide Policies
Fifteen agencies' space-planning policies and procedures included provisions for using telework and other mobility strategies, such as hoteling and desk-sharing, as a strategic space-planning tool. Three of the agencies merely mentioned these strategies in the context of space planning, and five agencies did not mention them at all (see table 1).
For the fifteen agencies with space planning policies that incorporated telework, the documents expressly directed agency planners to include telework, hoteling, or desk-sharing in space planning; provided instructions and guidance for using these strategies in space planning; or issued space allocation standards for their implementation. Several agency-wide space plans identify space reduction goals based on using telework strategically. For example, the Department of Transportation (DOT) documents identify a goal of at least a 10 percent workspace reduction in new acquisitions as a result of compressed work schedules and telework, and they specifically state that employees who telework six or more days per pay period should not have a permanent workspace.
Agency-wide space planning documents from three civilian CFO Act agencies mentioned telework, hoteling or desk-sharing as strategies in the context of space planning. For example, DOJ’s agency-wide policy notes that telework and hoteling could increase the efficient use of space and directs each sub-agency to maintain its own space design guidelines within agency-wide policy office space standards. The other two agencies, the Department of State and the Nuclear Regulatory Commission, either had a short provisional agency-wide space planning document that laid out space standards or mentioned telework, hoteling and desk-sharing as a tool for creating sustainable space. Five agencies’ space planning documents made no reference to telework; however, one agency, OPM, developed a maximum office space utilization rate and criteria for determining which positions require a private office.
Real Property Efficiency Plans Provide Information on Agencies’ Efforts to Use Telework
As noted above, OMB’s Reduce the Footprint policy (2015) requires agencies to establish Real Property Efficiency Plans. In our analysis of the agencies’ fiscal year 2016 or 2017 plans, we found that 19 of the 23 civilian CFO Act agencies discussed telework in the context of space planning. A few agencies’ Real Property Efficiency Plans explicitly stated that the agency reduced space as a result of telework. For example, GSA used telework to reduce space in its Heartland, Rocky Mountain, and National Capital Region offices.
Some (5 of 23) Real Property Efficiency Plans discussed the telework pilot programs agencies have initiated. For example, the Department of Education’s plans reported using a pilot program to acclimate employees to teleworking and desk-sharing; as a result, the agency intends to incorporate hoteling or space-sharing opportunities into proposed space designs in future projects. The Social Security Administration (SSA) plans also reported initiating a pilot program to experiment with smaller “hoteling” workspaces. Similarly, the Nuclear Regulatory Commission’s plan reported conducting a pilot program at its headquarters offices to identify challenges, better understand telework, and evaluate the potential of shared workspaces.
Agencies Reported Using Telework to Reduce Space and Achieve Space Efficiencies
In response to our survey, about three-quarters of agencies reported space-planning policies that use telework to reduce office space, lower real estate costs, or reduce the size of individual workstations. Agencies reported accomplishing this by using desk-sharing and hoteling for employees who have relinquished permanent workspaces. For example, several agencies discussed strategies to reduce space in their responses to our survey.
The Department of Labor reported using telework to close some small offices resulting in overall space reductions of about 16,000 square feet.
OPM reported that it both reduced space and created space efficiencies by transitioning staff in its Eastern Management Development Center to full-time telework and terminating the lease, resulting in a space reduction of about 32,000 square feet. OPM’s Human Resource Solutions Program also achieved a 47 percent space reduction when it instituted desk-sharing and freed the vacated space for use by another program office.
The Department of the Treasury reported that its sub-agency, the Internal Revenue Service, has aggressively used telework to help reduce its real property portfolio, while other Treasury units have leveraged telework to achieve significant space reductions. At the end of fiscal year 2016, Treasury reported agency-wide reductions of about 484,000 square feet at a cost savings of about $10 million.
Two agencies––Department of Homeland Security, and the National Science Foundation––reported using telework to increase the efficiency of existing office space in sub-agencies by increasing staff without increasing the size of offices, for example:
The Department of Homeland Security reported that one of its sub- agencies used telework in the planning and design of a new office, resulting in both a space reduction and more efficient use of the space. The new office is 57,573 square feet smaller than the prior office while personnel assigned to the office increased from 315 to 394.
The National Science Foundation reported that it used teleworking, among other workspace strategies such as new space standards and virtual technologies, to increase staff numbers without increasing its real estate footprint.
Three of the Four Agencies We Reviewed in Depth Leveraged Telework to Reduce or Use Office Space More Efficiently
Among the agencies we reviewed in detail—GSA, OJP, CDC, and the Fiscal Service—the use of telework in office space planning varied from emerging consideration to extensive implementation. GSA and OJP have used telework extensively to both reduce space and increase space efficiency in their office spaces. CDC has leveraged telework to reduce space or use space more efficiently in more limited cases while the Fiscal Service has begun to consider telework in future space planning. Appendix II provides additional details on office spaces where these agencies reduced space or used space more efficiently, including the role of telework, if any.
GSA Used Telework Extensively as a Space- Planning Tool
GSA has leveraged telework to reduce space by implementing unassigned workstations in nearly all of its regional and headquarters offices, along with other forms of “employee mobility,” complementary IT, and smaller space standards. GSA adopted telework as early as 1999 and by fiscal year 2015, more than 90 percent of all eligible GSA employees teleworked, and nearly half of all employees teleworked 3 or more days per pay period, according to OPM data. GSA’s space policy cites desk-sharing (e.g., hoteling, “hot-desking,” or other arrangements) as one strategy to help meet its space standard of 136 useable square feet (USF) per person. GSA employees may telework full-time, but may be required to give up dedicated workstations if they are on site 2 or fewer days per week.
GSA has gradually transitioned to unassigned workstations at headquarters and in its regional offices, allowing the agency to implement desk-sharing and calculate space needs at less than one desk per employee. The agency also assigned laptops and mobile or soft phones to employees to further maximize mobility. The three GSA sites we visited used space-planning strategies to achieve, or nearly achieve, GSA’s space standard of 136 USF per person. For example, at its Philadelphia Regional Office, GSA leveraged existing telework levels to meet reduced space standards and move to a smaller leased space by accommodating about 600 employees in fewer than 500 workstations. According to GSA, this allowed the office to achieve a utilization rate of 139 USF per person and realize a reported annual rent cost savings of about $2 million. Similarly, at its New York Regional Office, GSA also leveraged existing telework levels to meet reduced space standards and move to a smaller leased space. This step allowed the office to achieve a utilization rate of 119 USF per person and realize a reported total rent cost savings of nearly $11 million.
GSA also leveraged telework as part of its headquarters consolidation. GSA reports that it was able to move approximately 1,000 additional employees to the headquarters building by implementing a hoteling system and planning for less than one workstation per employee. This allowed GSA to achieve a utilization rate of 138 USF per person at its headquarters and realize a reported annual rent-cost savings of approximately $24 million.
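As a rough check on how these utilization rates relate to the figures above, and assuming the rate is computed as total usable square feet divided by the personnel housed (which the per-person figures reported here imply), the Philadelphia numbers work out as follows:

\[
\text{utilization rate} = \frac{\text{total USF}}{\text{personnel housed}}
\quad\Rightarrow\quad
\text{total USF} \approx 139 \times 600 \approx 83{,}400 \ \text{USF}.
\]

Because desk-sharing lets roughly 600 employees work from fewer than 500 workstations, the denominator is people rather than desks; planning for less than one desk per employee is what drives the rate down.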
OJP Leveraged Telework to Consolidate Its Offices
Similar to GSA, OJP used telework, along with complementary tools, to reduce space and use space more efficiently at its consolidated office. More than 90 percent of eligible employees teleworked in fiscal year 2015, and nearly half of all employees teleworked 3 or more days per pay period, according to OPM data. More recently, OJP reported that around 70 percent of its employees teleworked in August 2017, with just less than 40 percent doing so three or more days per pay period. At the departmental level, DOJ’s plans to improve space utilization include reduced space requirements, and, in some cases, alternative workplace strategies. DOJ’s space utilization policy mentions telework with hoteling as one way to increase efficient use of space, and DOJ’s telework policy mentions the potential of telework to create cost savings by decreasing space needs. While most OJP employees are eligible to telework, a few federal staff occupying administrative positions are not eligible.
OJP took the opportunity to examine and improve its space use as three of its leases approached expiration in 2013. At the time, it leased space in two adjacent buildings under three separate leases. OJP worked with GSA to analyze space-planning options, contracting a study of the office that recommended ways to improve space utilization. This study included a survey of all employees, a complete physical space survey, and interviews with leadership. The results of this study indicated not only OJP employees' openness to more mobility but also their concerns, such as loss of privacy and social connectedness. For example, more than 80 percent of survey respondents said they could work off-site more often with proper tools and support, and almost half said they would give up dedicated space to work remotely more often. Furthermore, interviews with the leadership of several OJP units indicated a willingness to support increased mobility but also a need to maintain privacy and improve mobile IT. According to OJP, it alleviated these concerns by encouraging participation in the planning process, highlighting opportunities for positive changes, and maintaining open communication (e.g., communicating changes and expected benefits).
Based on the analysis of space-planning options, OJP retained one of the three previous leases and leveraged telework to accommodate all employees in less space overall in one building. OJP achieved this objective by targeting 25 percent employee mobility and implementing hoteling. Concurrently, OJP officials explained that they introduced smaller workstations and used tenant-improvement allowance funds to reconfigure space for more flexible use. Physical reconfigurations included changing hard-walled office spaces with dedicated workstations to a primarily open office with few walls or dedicated workstations.
According to OJP, it complemented these changes with investments in mobile IT for individual employees, improved IT capabilities in conference rooms and other collaborative spaces, and an emphasis on training employees to work well in a mobile office environment. For example, OJP officials said they installed Wi-Fi throughout the space, issued laptops and smart phones to employees, upgraded video-conferencing capabilities in conference rooms and collaborative spaces, and expanded tools for informal employee communication. OJP officials said that through the consolidation they achieved a utilization rate of 190 USF per person—a decrease of 30 USF per person from the prior rate of 220. Although this rate remained higher than DOJ's overall target, the consolidated office housed the same total number of employees—about 1,000—in about 50,000 fewer USF. OJP reports that the consolidation resulted in an estimated $3 million annual lease-cost savings. OJP also estimates additional benefits, such as reduced transit subsidies, lower carbon emissions, and improved continuity of operations.
CDC Used Telework to Achieve Space Efficiencies in Limited Cases
Relative to GSA and OJP, CDC has made more limited use of telework in office space planning. CDC officials told us that they have leveraged telework as a space-planning tool in many locations, but they have documented doing so in only one case. HHS expects each sub-agency to comply with its 170 USF per-person utilization-rate policy, and this policy suggests that planned space reductions should take telework into account. As a component of HHS, CDC has its own space policy, which states that telework, desk-sharing, and hoteling can help CDC's units meet HHS's utilization rate policy of 170 USF per person. CDC's telework policy requires employees who telework frequently to agree to participate in desk-sharing, but according to CDC officials, how and to what extent this portion of the broader policy is implemented is up to the discretion of management. CDC officials cited two limitations to further implementing space-sharing techniques: (1) the large number of employees who may be unable to telework on certain days based upon their job responsibilities and (2) the voluntary nature of telework. Some CDC employees cannot work off-site at least some of the time due to confidential data or lab-based work. Approximately 60 percent of eligible CDC employees teleworked in fiscal year 2015 and about one-quarter of all employees teleworked 3 or more days per pay period, according to OPM data. CDC officials told us that telework participation ranges from 49 to 86 percent across CDC units.
CDC’s National Center for Chronic Disease Prevention and Health Promotion’s office in Chamblee, GA, provides the most clearly documented case of CDC’s leveraging telework for space efficiency. According to CDC officials, this unit accommodated more than 300 additional employees within its existing space by implementing hoteling for employees who telework 4 or more days per pay period, and it continues to use hoteling as part of its space management strategy. In contrast, CDC’s National Center for Health Statistics’ (NCHS) office in Hyattsville, MD, reduced space without leveraging telework by reconfiguring the space with smaller, soft-walled workstations.
CDC officials told us that NCHS reduced its office space from seven floors to three and three-quarters floors, resulting in a reported space reduction of more than 40 percent and allowing it to achieve a 170 USF per person utilization rate. CDC officials reported that this space reduction resulted in annual rent cost savings of approximately $1 million. Hoteling was not feasible at this location because work on confidential data limits the ability of employees to work off-site, and NCHS employees also prefer dedicated workstations. CDC officials said that there is limited documentation of any additional cases of CDC's leveraging telework as a space-planning tool because, prior to our review, there had been no formal request to connect telework and space utilization data.
The Fiscal Service Has Reduced Space without Leveraging Telework
In contrast to GSA, OJP, and CDC, telework as a space-planning tool is an emerging consideration at the Fiscal Service. At the departmental level, Treasury has space standards that aim for efficient and effective offices that use increased telework and shared workstations to minimize the number of dedicated workstations. Similarly, objectives of the Fiscal Service telework policy include cost savings from reduced office space needs. Treasury’s space standards specify a planned maximum utilization rate of 200 USF per person for facilities with general office space, and the Fiscal Service reported that its average office space-utilization rate was 183 USF per person at the time of our review. Treasury policy also recommends hoteling for employees who are out of the office 80 or more hours per month, but the Fiscal Service told us that it would like to conduct additional desk-sharing pilots to assess their impact before negotiating broader desk-sharing with the union. At the Fiscal Service, approximately 80 percent of eligible employees teleworked in fiscal year 2015, and about one-quarter of all employees teleworked 3 or more days per pay period, according to OPM data. The Fiscal Service reported that approximately 80 percent of Fiscal Service employees telework at its Washington, D.C., Maryland, and West Virginia locations.
At the time of our review, Fiscal Service officials said the agency had reduced space without leveraging telework or implementing hoteling, instead relying on smaller space standards (i.e., fewer square feet per workstation) to lease smaller offices. For example, the Fiscal Service reduced space by giving up several floors at its Hyattsville, MD, office starting in 2012. According to Fiscal Service officials, they accomplished these reductions by consolidating data centers to other locations, conducting targeted buyouts of employees, and, most recently, implementing a new space standard of 183 USF per person through smaller workstations. The officials said that the most recent space reduction at this location resulted in savings in annual rent costs not attributable to telework. Looking forward, the Fiscal Service reported that it has started taking preliminary steps to promote efficient space utilization through telework. These steps have included: creating an Executive Space Management Council that discussed incorporating telework and desk-sharing into space management guidelines; implementing a voluntary, informal desk-sharing pilot in one program area for employees who already telework 50 percent or more of the time; and seeking information from GSA, including discussing and visiting GSA offices that have implemented hoteling as part of their space planning model.
In addition, the Fiscal Service officials told us that the agency plans to negotiate the impact and implementation of desk-sharing and hoteling for telework employees with its union, but the Fiscal Service has not yet begun this effort.
Agencies Face Several Planning Challenges in Using Telework to Reduce Space
Our analysis of the survey responses from the 23 civilian CFO Act agencies identified three major planning challenges agencies face with using telework to reduce space: human capital issues such as negotiating workspace changes with collective bargaining units and managing organizational change; the suitability of telework to mission work requirements; and difficulty measuring cost savings that might result from space reductions attributable to telework.
This measurement difficulty applies both to gross savings and to savings net of costs, such as for renovations or IT investments. See table 3 for examples of space-planning challenges related to telework reported by the 23 agencies.
To address these challenges, nearly two-thirds of the agencies we surveyed reported they would like guidance on using telework programs or other alternatives to meet the federal goals of reducing space or using space more efficiently.
Challenges to Telework as an Office-Space-Planning Tool Include Human Capital and Mission Suitability Issues
Human Capital
Our review of agency survey responses, Real Property Efficiency Plans, and other agency space-planning documents, found that human capital challenges to using telework in space planning generally fell into two categories: (1) requirements to negotiate space allocation changes with collective-bargaining units; and (2) managing department workforces in adapting to new workspace designs and altered workspace allocations.
Collective-bargaining challenges: Of the 23 civilian CFO Act agencies, 7 noted that changes to telework policy or workspace arrangements required negotiation with collective bargaining units. For example:
The Small Business Administration (SBA) reported its greatest challenge to incorporating telework in office space planning has been with negotiating and securing agreement from all parties, including management and its union, on establishing space standards.
HUD reported that it could not implement hoteling or desk-sharing as its collective-bargaining agreements require that each employee retain an assigned workstation regardless of an employee’s type of telework agreement.
SSA reported that changing floor plans required negotiation with its three collective-bargaining units, which could extend the time needed for construction and relocation.
DOT reported that collective-bargaining agreements posed a challenge to incorporating workforce mobility options, including telework.
In addition, the collective-bargaining agreements we reviewed from the four agencies we reviewed in detail––GSA, OJP, CDC and Fiscal Service––required negotiations or the opportunity to negotiate changes to matters relating to workspace arrangements and in some cases, to telework policy.
Managing change: Using telework as a strategic space-planning tool, particularly in conjunction with complementary space-saving efforts such as desk-sharing, hoteling, or open-space designs, generally involves a cultural change. Nine of the 23 agencies we surveyed reported challenges associated with managing change. For example, three agencies reported employees’ discomfort or apprehension about desk-sharing and hoteling. In 2013, we reported that organizations may also encounter concerns from agency leaders, managers, employees, or employee organizations when introducing physical space changes associated with increased workforce mobility (telework). More recently, we reported that management concerns remain the most frequently reported barrier to expanding telework.
Two private sector experts we met with underscored the importance of management "buy-in," saying it was imperative that senior executives fully support the initiative to facilitate the necessary cultural change for agencies to use telework in space planning. One noted that management needed to make the business case to employees so that each layer of the organization could understand the importance of the initiative and its potential benefits. Another suggested a change management plan tailored to the work performed within a unit. This individual said that key components of such a plan might include studying existing work practices and program requirements, surveying employee preferences, and including employees in the planning process. Further, a GSA document circulated in response to the Telework Enhancement Act of 2010 mentions obtaining supervisory "buy-in" or support as key to facilitating change.
Mission Suitability
According to survey responses, agencies comprise sub-agencies with individual mission requirements that may or may not be suitable for telework. This makes it difficult for agencies to implement overarching telework and space planning policies that apply department-wide. Sub-agencies and units within sub-agencies must individually determine if telework is appropriate given their particular mission requirements. For example, the Department of Veterans Affairs reported that although it developed an agency-wide telework policy, each sub-agency and supervisor has the flexibility to implement telework based on operational needs. Moreover, because the agency's core mission involves direct services to veterans, about 83 percent of agency staff positions are not suitable for telework. The Telework Enhancement Act of 2010 outlines two broad exceptions to telework participation for employees: (1) directly handling secure materials determined to be inappropriate for telework by the agency head and (2) on-site activity that cannot be handled remotely or at an alternative worksite. In cases where telework does not support an agency's mission or where a particular mission may require increases or decreases in personnel, telework as a strategic space-planning tool may not work.
Agencies Face Challenges in Measuring the Effect of Telework on Reducing Office Space
About half (12 of 23) of the agencies we surveyed reported that office space reductions resulting from using telework in space planning led to real estate cost savings, while the other half reported either that cost savings did not result or that they did not know. GSA officials told us that calculating cost savings attributable to a particular aspect of space planning is complicated, as several factors contribute to savings. In particular, in survey responses, the Departments of Education, Energy, and Agriculture reiterated this point.
OMB’s National Strategy for the Efficient Use of Real Property and its Reduce the Footprint policy encourage agencies to increase and maximize efficiencies in office space by implementing cost-effective strategies such as telework. For example, the National Strategy outlines a framework that aims to measure real property costs and utilization to improve the efficient use of federal real property. The Reduce the Footprint policy requires agencies to measure cost savings that result from reducing space through disposals. However, neither document offers guidance or methodologies on how to measure the costs or savings that may result from using telework. We previously reported that GSA works with client agencies to develop tools to measure office space utilization and, in 2013, was developing an Excel-based tool to help agencies quantify the benefits and costs of using telework to achieve greater office space efficiencies. This tool—the Workplace Investment and Feasibility Tool—is aimed at helping agencies quantify the benefits and costs of increased telework participation and implementing other alternative-work arrangements. When completed, the tool will enable users to quickly develop rough estimates of cost and space impacts resulting from workplace changes, particularly relating to desk-sharing, workspace reconfiguration, and consolidation. Key features include the ability to compare up to three scenarios, which in turn may be used to inform a more detailed design program.
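To illustrate the kind of rough scenario comparison such a tool might support, the sketch below is our own minimal example in Python, not GSA's actual tool; all names and figures are hypothetical assumptions, and the calculation simply compares an assigned-desk baseline against a hoteling scenario.

```python
# Minimal sketch, under stated assumptions, of a telework/space scenario
# comparison in the spirit of (but not reproducing) GSA's Workplace
# Investment and Feasibility Tool. All names and figures are hypothetical.
from dataclasses import dataclass
import math

@dataclass
class Scenario:
    name: str
    employees: int
    seats_per_employee: float  # below 1.0 when desk-sharing/hoteling is used
    usf_per_seat: float        # workstation footprint, including circulation
    shared_usf: float          # conference rooms, collaboration areas, etc.
    rent_per_usf: float        # assumed annual rent, dollars per USF

    def seats(self) -> int:
        return math.ceil(self.employees * self.seats_per_employee)

    def total_usf(self) -> float:
        return self.seats() * self.usf_per_seat + self.shared_usf

    def annual_rent(self) -> float:
        return self.total_usf() * self.rent_per_usf

baseline = Scenario("Assigned desks", 600, 1.0, 120, 15000, 50)
hoteling = Scenario("Hoteling", 600, 0.8, 100, 20000, 50)

for s in (baseline, hoteling):
    print(f"{s.name}: {s.seats()} seats, "
          f"{s.total_usf():,.0f} USF, ${s.annual_rent():,.0f}/yr")

print(f"Estimated annual savings: "
      f"${baseline.annual_rent() - hoteling.annual_rent():,.0f}")
```

A real comparison would also net out one-time transition costs, such as renovations and IT investments, which agencies cited as part of the measurement challenge discussed above.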
As of January 2018, GSA had not yet completed the tool. GSA officials said mission needs, resource constraints, and developmental adjustments have contributed to delays in the time frame for completing the tool. They added that during this time, GSA has diverted resources to space calculation tools for individual agencies. For example, GSA worked with DHS on its Space Calculation Tool as a way to help determine workplace space requirements in a manner consistent with DHS space policies. In January 2018, GSA officials told us that they plan to make the Workplace Investment and Feasibility Tool available to GSA staff in March 2018 along with training on how to use it. However, GSA officials have not decided whether to make the tool available to other federal agencies to use as a space-planning tool. Instead, the officials plan to assess GSA’s use of the tool and then determine in late 2018 if and how it should be released to other agencies for independent use. Given the absence of a government-wide tool, in our review, we found that some agencies had used their own resources to purchase similar tools for their space-planning needs from the private sector. Without such a government-wide resource, agencies may not be able to determine how best to reduce space or use it more efficiently.
GSA Guidance Does Not Specifically Address How to Use Telework in Space Planning
In responses to our survey, nearly two-thirds of agencies reported that they would find it helpful to have additional information, assistance, or resources to assist them in using telework as a space-planning tool. As noted above, a key element of GSA’s mission is to provide guidance and services that enable agencies to improve space utilization, reduce costs, and better achieve their missions. Moreover, under federal statute, GSA may provide guidance to executive agencies on the implementation of alternative workplace arrangements, which includes telework. Federal standards for internal control also call for agencies to communicate necessary quality information such as guidance with external parties.
In reviewing GSA's websites, we found that GSA last developed formal guidance on alternative workplace arrangements in 2006 and maintains several separate informational websites on implementing telework and optimizing space utilization. Our review of this guidance and these websites found that they do not provide specific guidance for using telework as a strategic space-planning tool. For example, the 2006 guidance is generally limited to defining the factors agency heads must contemplate when considering alternative workplace arrangements, along with the equipment and technical services agencies may provide for alternative worksites. However, this guidance does not address in detail the impact of such arrangements on agency office space and the resulting planning issues. Similarly, our review of GSA's teleworking and space-planning websites found that although they separately offered documented case studies along with information such as tips for implementing telework and managing a mobile workforce, GSA did not provide documents consolidating information on using telework as a strategic space-planning tool. For example, information on GSA's Total Workplace Program website—intended to assist agencies in using workforce mobility (including telework) to increase space efficiencies—is generally limited. Although this website includes high-level information that describes the potential benefits of using telework with office space planning and design, it lacks a practical outline of the process agencies might use to achieve them. Because the information in the 2006 guidance and the telework and space-planning websites is neither specific nor detailed, it is of limited assistance for agencies that would like to use telework as a strategic space-planning tool to meet the goals of a more efficient use of space.
Conclusions
While using telework to reduce space is not a new challenge, it has become more pressing with OMB’s requirement for federal agencies to explore alternatives to acquiring more office space. Most civilian CFO Act agencies reported having a telework program in place and some reported success with using it in space planning to reduce space or accommodate more employees without increasing space. However, many of the agencies continue to face challenges and do not believe that they have adequate information, assistance, or resources to assist them in using telework as a space-planning tool or assess its costs and benefits. Until agencies have access to detailed guidance and tools to help utilize various space-planning options, they may not be able to effectively identify opportunities to use telework toward the goal of reducing their real property footprint.
Recommendations for Executive Action
We are making the following two recommendations to GSA: The Administrator of General Services should ensure that the appropriate GSA offices develop guidance including, but not limited to, how agencies can use telework as a strategic space-planning tool for reducing and optimizing office space efficiency and that the offices make the guidance readily available. (Recommendation 1)
The Administrator of General Services should ensure the appropriate GSA offices complete the Workplace Investment and Feasibility Tool and make it available to federal agencies for use in assessing the benefits and costs of telework to achieve office space efficiencies. (Recommendation 2)
Agency Comments
We provided a draft of this report to GSA, the Department of Justice, the Department of Health and Human Services, and the Department of the Treasury. In its written comments, reproduced in Appendix III, GSA concurred with our recommendations and stated that it is developing a plan to address them. We received technical comments from the Department of Justice and the Department of Health and Human Services, which we incorporated where appropriate. The Department of the Treasury did not have comments on our draft report.
We are sending copies of this report to the appropriate congressional committees; the Administrator of GSA; the Secretaries of the Department of Health and Human Services and the Department of the Treasury; and the Attorney General of the Department of Justice. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
Appendix I: Objectives, Scope, and Methodology
This report addresses: (1) how the 23 civilian Chief Financial Officer (CFO) Act agencies reported using telework in office space planning; (2) the specific ways selected agencies and GSA used telework in their office space planning; and (3) any challenges the 23 civilian CFO Act agencies faced in using telework in office space planning.
To determine how the 23 civilian CFO Act agencies reported using telework in office space planning, we surveyed the agencies. The survey asked questions about agency-wide efforts to use telework in the human-capital and space-planning areas; whether agencies had achieved any cost savings as a result; and the challenges agencies faced in using telework in space planning. It also asked agencies to identify additional information, resources, or guidance that might be helpful. In addition to the survey questions, we asked each agency real property officer to provide copies of agency-wide space-planning documents and the Real Property Efficiency Plans agencies prepared for fiscal years 2016 and 2017 pursuant to the requirements of the Office of Management and Budget's (OMB) Reduce the Footprint policy. We developed survey questions based on our review of the relevant literature, white papers from federal agencies and private sector entities, and past GAO reports. We pre-tested the survey instrument with three federal agencies to ascertain: (1) the clarity of survey questions; (2) the precision of language; and (3) the availability of information queried. As a result of the pre-tests, we made changes to the content and format of the survey where appropriate.
We received survey responses from each of the 23 civilian CFO Act agencies in addition to requested space-planning documents and Real Property Efficiency Plans, and thus achieved a 100 percent response rate. We analyzed survey results by calculating the frequency of responses to dichotomous questions (i.e., questions requiring a “yes” or “no” answer). We also conducted a content analysis on the open-ended, narrative-based questions by identifying common themes and tabulating results. We also conducted a content analysis to determine the extent to which agencies referenced telework in their agency-wide space-planning documents and in their Real Property Efficiency Plans. To accomplish these analyses, we developed separate coding schemes for each of the two types of documents. These were based on information obtained in our literature review, interviews with subject matter experts, and our professional judgment. We then identified relevant sections and common themes, and coded and tabulated the results. To validate the coding results, we used a second, independent coder.
To determine the specific ways agencies include telework in their office space planning, we selected a non-generalizable sample of three CFO Act agency sub-agencies as illustrative case studies. To select sub-agencies, we analyzed data from the Office of Personnel Management's (OPM) Public Use 2014-2015 Telework Data call. First, we applied two selection criteria: (1) agency-wide progress toward a stated goal of using telework to reduce real estate costs, and (2) agency-reported data indicating that more than 25 percent of sub-agency employees teleworked 3 or more days per pay period. Next, we considered variations in sub-agency size and percentage of employees eligible to telework. Finally, we excluded candidates that had recently been selected for related GAO work, as well as sub-agencies related to agency administration, such as Offices of Inspector General or Secretary-level offices. We assessed the reliability of OPM's data through interviews with knowledgeable officials and by reviewing prior assessments of the same data, and we found the data reliable for our purposes.
As a result of this process, we selected (1) the Department of the Treasury's Bureau of the Fiscal Service (Fiscal Service); (2) the Department of Health and Human Services' (HHS) Centers for Disease Control and Prevention (CDC); and (3) the Department of Justice's Office of Justice Programs (OJP). For each of the selected agencies, we interviewed agency officials and reviewed their telework and space-planning documents. We also visited four sub-agency office locations to determine if and how telework played a role in any space reductions or efficiencies, along with any associated cost savings. In addition to these three sub-agencies, we used the General Services Administration (GSA) as a comparative example because it is responsible for providing space-planning guidance to client agencies and has experience using telework in space planning. We interviewed GSA officials, reviewed relevant documents, and visited three GSA office locations with recent space reductions or efficiencies. In total, we conducted seven site visits: two Fiscal Service locations; one location each for CDC and OJP; and three GSA locations (National Headquarters, Region 2 Office, and Region 3 Office) in Washington, D.C.; New York City; and Philadelphia, respectively.
To identify any challenges the 23 civilian CFO Act agencies faced in using telework in office space planning, we analyzed results from survey questions addressing challenges; interviewed sub-agency and GSA officials, as detailed above; and interviewed two private-sector subject matter experts as well as representatives from four private-sector entities that had reported using telework to reduce office space and use it more efficiently. Statements made by knowledgeable federal officials, outside experts, and private-sector entities are not generalizable to the universe of civilian CFO Act agencies. We also analyzed the section of each of the 23 civilian CFO Act agencies' Real Property Efficiency Plans devoted to challenges agencies face in reducing space. We selected the two subject matter experts, representatives from Global Workplace Analytics and Fentress Facility Planning and Analytics, based on: (1) their experience working with federal agencies to incorporate telework programs into the space-planning process; (2) information compiled in our literature review; (3) prior GAO reports; (4) internal GAO recommendations; and (5) industry recommendations. We selected the four private-sector entities (AT&T, Deloitte, Adobe, and Capital One) based on our literature review, recommendations from industry experts, and reports of having achieved space efficiencies, including space reduction, cost savings, or cost avoidance.
To identify what guidance or information GSA makes available through its website on using telework as a space-planning tool, we reviewed the contents of multiple GSA websites, including Telework, Total Workplace, Alternative Work, and GSA Telework Resources. We followed links and reviewed webpage contents for information on how agencies might use telework as a strategic tool to reduce space or use space more efficiently. We compared GSA's guidance and website information to relevant statutory requirements and federal internal control standards related to external communication.
We conducted this performance audit from January 2017 to March 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Selected Agency Case Study Profiles
Appendix III: Comments from the General Services Administration
Appendix IV: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, David J. Wise (Director); Amelia Bates Shachoy (Assistant Director); Lindsay Bach (Analyst-in-Charge); Geoff Hamilton; Malika Rice; Kelly Rubin; Shelia Thorpe; Elise Vaughan Winfrey; and Amelia Michelle Weathers made key contributions to this report. | Why GAO Did This Study
Federal agencies are exploring ways to use telework as a tool to reduce the federal footprint and use space more efficiently. GAO was asked to examine the effects of telework on agencies' space-planning efforts. In this report, GAO reviewed: (1) how the 23 civilian CFO Act agencies reported using telework in office space planning; (2) the specific ways selected agencies and GSA used telework in their office space planning; and (3) any challenges the civilian CFO Act agencies faced in using telework in office space planning.
GAO surveyed all 23 civilian CFO Act agencies and analyzed each agency's space-planning documents and Real Property Efficiency Plans. GAO reviewed four agencies in greater detail based on analysis of telework data and other factors. For those four agencies, GAO conducted site visits, interviewed officials, and analyzed agency documents. GAO also identified challenges agencies faced in using telework in space planning, based on survey results, agency documents, and interviews.
What GAO Found
The 23 civilian Chief Financial Officer (CFO) Act agencies reported various ways of considering and using telework as a space-planning tool, for example, by implementing desk-sharing for employees who telework in order to relinquish leased space, or by increasing the number of staff working in an existing space without increasing its size. All 23 agencies discussed telework in the context of space planning and achieving greater space efficiencies in either their space-planning documents or Real Property Efficiency Plans. The agencies that used telework as a space-planning tool generally reported implementing smaller or unassigned workstations.
GAO reviewed four agencies in greater detail: the General Services Administration (GSA); the Office of Justice Programs at the Department of Justice; the Centers for Disease Control and Prevention at the Department of Health and Human Services; and the Bureau of the Fiscal Service at the Department of the Treasury. Three of the four leveraged telework to reduce or use office space more efficiently. For example, GSA and the Office of Justice Programs used telework to accommodate more employees in a smaller office space, as illustrated in figure 1 below. The Centers for Disease Control and Prevention used telework to accommodate more employees in the same amount of space. The Bureau of the Fiscal Service reduced space without telework by reducing the size of individual workstations.
The 23 civilian CFO Act agencies reported several challenges in using telework to reduce space, including human capital issues, mission suitability, and measuring cost savings attributable to telework. About two-thirds of the agencies said they would find it helpful to have additional information, assistance, or resources in using telework as a space-planning tool. GSA provides guidance to improve space utilization. However, GAO found that GSA last developed relevant formal guidance in 2006. This information, and that on GSA's telework and space-planning websites, was neither specific nor detailed and was therefore of limited assistance to agencies that would like to use telework as a space-planning tool. Additionally, GSA's space-planning tool, the Workplace Investment and Feasibility Tool, which is intended to help agencies quantify the benefits and costs of telework, remains under development after more than 4 years, and GSA officials have not decided whether to make the tool available to other federal agencies. As a result, agencies reported that they lack adequate guidance to determine how best to reduce space or use it more efficiently, and how to assess the benefits and costs of using telework in space planning.
What GAO Recommends
GSA concurred with recommendations that GSA should: (1) develop guidance on how agencies can use telework as a strategic space-planning tool and make this guidance readily available, and (2) complete the Workplace Investment and Feasibility Tool and make it available to federal agencies for use in assessing the benefits and costs of telework.
FMS Program Size and Benefits
The FMS program provides support to over 150 foreign partners, with sales totaling $416 billion between fiscal years 2007 and 2017. Annual sales were over $30 billion in each of these years except two, and grew 80 percent over the period to $42 billion in fiscal year 2017 (see fig. 1). The types of equipment and services sold to foreign partners ranged from fighter jets and integrated air and missile defense systems to combat helmets and training on the use of equipment. According to DSCA officials, fluctuations in annual sales are driven by changes in individual foreign partners’ needs for equipment and other goods and services from year to year. For example, the fiscal year 2012 annual sales of $69 billion were substantially driven by one sale to Saudi Arabia that was valued at $29 billion.
According to DOD and State officials, FMS provides multiple benefits to foreign governments and the U.S. government. Foreign governments that choose to use FMS rather than direct commercial sales receive greater assurances of a reliable product, benefit from DOD's economies of scale, improve interoperability with the U.S. military, and build a stronger relationship with the U.S. government. DSCA anticipates that strong annual sales will continue, even though using FMS is generally not the quickest or least expensive option for foreign governments. From the U.S. perspective, FMS expands the market for U.S. businesses and contributes to foreign policy and national security objectives.
Process of Administrative and CAS Fee Collections and Expenditures
The administrative and CAS fee rates have varied over time, as seen in figure 2. The administrative fee was first implemented in 1970 and was originally set at 2 percent. Since 1970, the administrative fee rate has been changed four times, staying within the range of 2.0 to 3.8 percent. Since November 2012, the rate has been set at 3.5 percent. The CAS fee was first implemented in 1981 and was originally set at 1.5 percent. In 2002, a supplementary CAS fee was created for cases managed outside the United States (and set at an additional 0.2 percent), and in 2014 the base CAS fee rate for all cases was decreased to 1.2 percent.
Administrative and CAS fee collections are held in the FMS trust fund, which is composed of separate accounts for each country and several distinct accounts for fees. Each country's individual account, referred to as a country account, holds funds that country has paid for FMS purchases of equipment and services until the funds are expended. The fee accounts, including the administrative and the CAS accounts, do not separate funds by country and instead commingle funds paid for fees by all purchasers. These accounts hold their deposits without accruing interest. According to DOD officials, once fees are deposited into one of the fee accounts, they are considered U.S. government funds and do not expire. Expenses related to administrative and CAS services are paid, respectively, from the related fee account.
The timing and calculation of collections differs between the administrative and CAS fees, as shown in the example case of a $10 million equipment sale in figure 3. In particular, for the administrative fee, half of the amount owed is collected with the first payment made on most cases. The remaining administrative fees owed are timed with deliveries on the case. For the CAS fee, nothing is collected upfront. Instead, whenever the contractor providing goods or services on the case bills for work on the contract, a corresponding payment of the CAS fee is moved from the country account to the CAS account.
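To make the timing and arithmetic concrete, the sketch below works through the $10 million example at the current rates. This is an illustrative sketch only; the contractor-billing schedule is a hypothetical assumption, chosen to show the two collection patterns.

```python
# Worked example of fee collections on a hypothetical $10 million case,
# using the 3.5 percent administrative and 1.2 percent base CAS rates.
CASE_VALUE = 10_000_000
ADMIN_RATE = 0.035
CAS_RATE = 0.012

admin_fee_total = CASE_VALUE * ADMIN_RATE  # $350,000 over the life of the case
cas_fee_total = CASE_VALUE * CAS_RATE      # $120,000 over the life of the case

# Half of the administrative fee is collected with the first payment on most
# cases; the remainder is timed with deliveries.
admin_upfront = admin_fee_total / 2        # $175,000 collected upfront

# Nothing is collected upfront for CAS; collections track contractor billing.
# Assume, hypothetically, the contractor bills 40/30/30 percent over 3 years.
billing_shares = [0.40, 0.30, 0.30]
cas_collections = [cas_fee_total * share for share in billing_shares]

print(f"Administrative fee: ${admin_fee_total:,.0f} total, "
      f"${admin_upfront:,.0f} upfront")
print("CAS fee collections by year:",
      ", ".join(f"${c:,.0f}" for c in cas_collections))
```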
According to DSCA data, the average length of a standard FMS case closed in fiscal year 2017 was 9 years. The administrative and CAS accounts need to maintain sufficient balances to pay for related operational expenses over that time period. DOD does not track administrative or CAS costs by case. Instead, collected funds are commingled, and expenditures from the administrative and CAS accounts are made to DOD implementing agencies to pay for their overall FMS work. We have previously found that DOD does not have sufficient information on program costs to determine the amount needed to support the FMS program.
Related Roles and Responsibilities
While State reviews and approves FMS purchases, DSCA is responsible for administering the FMS program for DOD, including managing the administrative and CAS accounts and coordinating with other DOD components. In this role, DSCA sets policies for the FMS process, including for how implementing agencies can use administrative and CAS account funds; monitors the administrative and CAS account balances; and sets the administrative and CAS fee rates. DFAS provides DSCA’s accounting services for FMS and in this role is responsible for accounting, billing, disbursing, and collecting funds for the FMS program. DFAS’ accounting duties also include reconciliation and correction of errors related to collection of fees from foreign customers and disbursement of funds out of the administrative and CAS accounts, as governed by an agreement with DSCA.
Congress and DSCA both have roles in defining what expenses are covered by the administrative fee. Congress defines in the act what administrative expenses DSCA can charge to FMS purchasers. Congress amended the act in 1989 to exclude salaries of the Armed Forces of the United States and estimated costs of unfunded civilian retirement and other benefits from the expenses that shall be recovered by the administrative fee. Since that change, the Armed Forces salaries and the estimated costs of unfunded civilian retirement and other benefits are paid instead from other appropriated funds.
Within the parameters specified in the act, DSCA is responsible for defining whether administrative expenses should be paid from funds charged to the foreign partner, either from funds collected into the administrative account or from case-specific funds held in the related country account, or from other DOD annual appropriations. DSCA does this by outlining the expected funding source for specific types of administrative tasks carried out for FMS cases. For example, DSCA has determined that functions that are a normal part of all FMS cases—such as identifying defense requirements to help write an offer letter—should be paid from the administrative account. Conversely, functions that are requested to provide supplementary support on a case—such as conducting a site survey—should be paid with case fees from the partner’s country account.
The FMS Administrative Account Balance Has Grown Steadily Due in Part to Insufficient Management Controls and Should Be Adequate to Pay for Additional Expenditures through 2024
The administrative account balance grew steadily over the last decade due in part to the insufficient controls DSCA has in place to manage the account balance. Although DSCA has set a minimum desired level for the account and a process for regular monitoring, it has not completed timely comprehensive reviews of the administrative fee rate. In addition, DSCA has not adopted the best practice of establishing a method to calculate an upper bound of a target range for the account balance. As a result, DSCA’s monitoring and rate review practices are limited in their ability to prevent excessive growth in the account balance. Our analysis indicates that even if the administrative fee rate were reduced to as low as 2.9 percent and administrative expenditures were to increase 15 percent above expected growth, the administrative account balance would likely remain sufficient to pay for projected expenditures while maintaining a reserve balance through at least fiscal year 2024.
The Administrative Account Grew about 950 Percent between Fiscal Years 2007 and 2017
The administrative account balance grew each year from the beginning of fiscal year 2007 through the end of fiscal year 2017—from $391 million to $4.1 billion, or 953 percent (see fig. 4). According to DSCA officials, the account balance has grown in part due to the fact that 50 percent of the administrative fee is usually paid when the first payment is made on a case while funds need to be available to pay for administrative work on the case as long as it remains open. Thus, as sales have grown on average over recent years, the amount of these upfront collections made on cases and the amount of expenditures that would be needed to work on these cases have also grown. However, administrative account collections and expenditures grew at slower rates than the overall account balance growth. Specifically, administrative account collections and expenditures grew 86 percent and 149 percent, respectively.
Administrative account collections exceeded expenditures in each fiscal year between 2007 and 2017, contributing to the growing account balance. As shown in figure 5, collections were at least 1.5 times expenditures in 6 of these years, and the difference between collections and expenditures was $324 million in fiscal year 2017. At the end of each fiscal year, the value of collections that exceeds expenditures remains in the administrative account and is carried over to the next fiscal year's beginning balance, which compounds the growth from year to year. Administrative fees are transferred from the foreign partner's country account to the administrative account when agreements for new sales are signed and when deliveries are made on cases. Fluctuations in collections from year to year are due to the variations in the timing of these events and the value of the related cases. Despite these year-to-year fluctuations, expenditures from the administrative account to pay implementing agencies to work on FMS cases have generally increased more steadily over time.
Annual growth in the administrative account balance has slowed in recent years; however, the overall balance has continued to grow. DSCA reduced the administrative fee rate in November 2012 from 3.8 to 3.5 percent following a review prompted by concerns that the balance appeared excessive as it neared $2 billion. Growth in the account balance from fiscal years 2007 to 2012 averaged $412 million a year compared with $273 million a year in fiscal years 2013 to 2017. Therefore, the rate reduction may have helped to decrease the annual growth in the account balance, yet the account balance itself has continued to grow.
DSCA’s Management Controls for the Administrative Account Provide Some Assurance of Maintaining Sufficient Funds but Do Not Guard Against an Excessive Balance
DSCA has established a minimum desired level for the administrative account and has processes for regularly monitoring the account’s balance. DSCA also has a process for reviewing the fee rate, called a comprehensive review, although it has not completed its most recent comprehensive reviews as frequently as required by DSCA policy. In addition, DSCA has not set an upper bound of a target range for the account balance. As a result, DSCA cannot provide adequate assurance that the account maintains an appropriate balance that is both sufficient but not excessive.
DSCA Has Established a Desired Minimum Level for the Administrative Account
Best practices in managing federal user fees suggest that federal agencies use a risk-based strategy to establish a target range for fee account balances so that there are reserves sufficient to cover varying or unpredictable revenues or expenses. This risk-based strategy should match the level of risk identified for the program, based on past experience and realistic risks.
DSCA has set a minimum desired level for the administrative account, which it calls the safety level. It considers the safety level the minimum balance required to allow sufficient time to respond to volatility in the FMS business environment and to complete ongoing FMS cases. Prior to fiscal year 2013, the safety level was determined based on the assumption that FMS business might cease and 2 years of administrative expenses would be needed to wind down operations. An estimate of such shut-down expenses was difficult to calculate, according to DSCA officials. DSCA and the DOD Comptroller determined that the initial assumption for calculating the safety level was not valid because FMS would not likely cease operations given its integral role in U.S. government and DOD strategies. They therefore decided to change the calculation, and in so doing to increase the safety level to further mitigate risk and provide more flexibility. Specifically, starting in fiscal year 2013, the safety level has instead been defined as 18 months of funding, a period of time considered sufficient to respond to volatility in the FMS business environment and to complete ongoing FMS cases. According to DSCA officials, maintaining the safety level helps to ensure that there are sufficient funds in the account to pay for expenses throughout the life cycle of individual cases.
Since fiscal year 2007, the administrative account balance has been above this safety level every year, with the balance $2.7 billion above the safety level (of $1.4 billion) at the close of fiscal year 2017. Since the safety level calculation was modified for fiscal year 2013, the account balance has been between 2.4 and 3.2 times the safety level, and was 3 times the safety level at the close of fiscal year 2017 (see fig. 6).
DSCA policy describes certain processes for account monitoring to occur on a monthly, quarterly, and annual basis:
Monthly reviews: On a monthly basis, DSCA officials are to review a report from DFAS on the status of the administrative account. These reviews focus on whether an expected amount of expenditures was made from the account, whether collections into the account are commensurate with past and current sales, whether the account balance is trending up or down, and whether the balance is near the safety level. According to DSCA officials, the results of these reviews are provided to DSCA leadership through monthly oral briefings from October through August, and the same information is reviewed and briefed weekly during September as the end of the fiscal year approaches.
Quarterly reviews: On a quarterly basis, DSCA officials supplement their monthly briefings to DSCA leadership with other information on the FMS business environment, according to DSCA officials. Such information could, for example, focus on changes in bilateral relationships with key FMS customers, regional conflicts, changes in the global economy, or the status of annual sales.
Annual assessments: DSCA has completed annual assessments of the administrative account since 2006, according to DSCA officials. These assessments involve a review of the previous year’s sales, administrative fee collections, expenditures from the administrative account, and the administrative account balance. The health of the account is determined by comparing the current and projected account balances with the account’s safety level, which is also recalculated for the new fiscal year as part of the annual assessment process. To assess the health of the account over the next year, DSCA officials use DSCA’s sales forecast and budgeted expenditures. These assessments are based on the current fee rate and do not include testing of any alternative fee rates. These assessments result in a report that is shared with DSCA leadership and the implementing agencies to keep them informed of the account’s health at a more detailed level.
DSCA Has Not Completed Timely Comprehensive Reviews of the Administrative Fee Rate
DSCA policy requires that a comprehensive review of the administrative fee rate be completed at least every 5 years. In addition, DSCA policy encourages more frequent comprehensive reviews in the case of certain events, such as a period of sales consistently below the forecasted level, which may put the account balance at risk of dropping below the safety level. However, DSCA has completed its three most recent comprehensive reviews of the administrative fee rate more than 6 years apart, which is less frequently than required by DSCA policy. Specifically:
Fiscal year 2005: DSCA decided to conduct a comprehensive review of the administrative fee rate because the account balance ($260 million) was approaching the account’s safety level ($250 million). For this review, DSCA conducted an internal study that concluded that, with no changes to the fee rate, the administrative account would have a negative balance in fiscal year 2009. To perform this study, DSCA officials projected what would happen to the administrative account balance given different administrative fee rates, while estimating annual sales between $12.5 billion and $14.5 billion for future years. As a result of this study, DSCA decided to increase the fee rate from 2.5 to 3.8 percent. According to independent analysis undertaken by the Naval Postgraduate School (NPS) in 2011 for the next rate review, this decision addressed short-term concerns about a possible negative account balance but did not account for the projected long-term growth of the balance at the new fee rate.
Fiscal years 2011 to 2012: DSCA enlisted NPS to perform a comprehensive review of the administrative fee rate in fiscal year 2011. NPS built a model to assess how various administrative fee rates would affect the administrative account balance through fiscal year 2015, using multiple methodologies to project future annual sales based on historical sales data. The model was also used to estimate what the administrative account balance would have been if various fee rates had been in effect since fiscal year 1999. Based on this analysis, NPS recommended that the fee rate be lowered to within the range of 3.0 to 3.4 percent, stating that 3.0 percent would be ideal for minimizing large variations in the account balance from year to year while mitigating the risk of falling below the safety level or accruing an excessive balance. However, following a 2012 internal DSCA review of this report, DSCA leadership decided to decrease the fee rate from 3.8 percent to 3.5 percent. According to DSCA officials, this decision was made due to uncertainty regarding future annual sales and because DSCA officials had learned to avoid making significant rate changes that can make foreign partners’ budgeting more difficult.
Fiscal year 2018: According to DSCA officials, after performing some preparatory work during the prior fiscal year, DSCA began another comprehensive review of the administrative fee rate in fiscal year 2018. According to DSCA officials, this review was to be conducted internally and involve modeling various scenarios for the administrative account, making projections based on DSCA’s fiscal year 2018 sales forecast, recent sales data, expenditure trends, and historical collection rates on ongoing cases. In addition to using historical sales data to project future sales, DSCA planned to model alternate scenarios to account for the possibility of certain high or low sales years. In April 2018, DSCA announced that, as a result of this review, the administrative fee will be reduced to 3.2 percent as of June 1, 2018.
DSCA established the policy of a 5-year period between comprehensive rate reviews because, according to DSCA officials, foreign partners prefer stability in the administrative fee rate to facilitate their budgeting. In addition, 5 years between rate reviews would allow DSCA to identify sales and expenditure patterns that could determine whether a rate change would be needed. According to DSCA officials, the most recent rate review was originally scheduled to be completed on time but was delayed due to competing priorities and limited resources. However, without timely comprehensive reviews, there is greater likelihood that large changes would be needed in the administrative fee rate to correct for large variations in the administrative account balance, thus hindering DSCA’s ability to provide stability in the administrative fee rate.
DSCA Has Not Set an Upper Bound of a Target Range for the Administrative Account
DSCA has not established a method to calculate an upper bound of a target range for the administrative account balance as suggested by best practices. Setting an upper bound could help DSCA determine when the balance is excessive and an out-of-cycle comprehensive review of the fee rate might be warranted. An upper bound could be based on a certain number of months or years in expenditures and would thereby change over time to reflect the size and needs of the FMS program. DSCA could thus use the upper bound of a target range as another management tool to help more closely monitor the account during its periodic reviews. Given the lack of data on actual FMS costs per case and uncertainty about future annual sales, such a management tool could usefully inform future DSCA decisions based on its comprehensive rate reviews.
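As an illustration of how such a tool might work, the sketch below expresses a target range in months of expenditures and flags when a rate review may be warranted. The 18-month lower bound reflects DSCA's current safety-level definition; the 36-month upper bound and the dollar figures are hypothetical values chosen for illustration only.

```python
# Minimal sketch of a target-range check for a fee account balance.
# The 36-month upper bound and example figures are hypothetical.

def target_range(annual_expenditures: float,
                 lower_months: float = 18.0,
                 upper_months: float = 36.0) -> tuple[float, float]:
    """Return (lower, upper) bounds scaled to current expenditure levels."""
    monthly = annual_expenditures / 12
    return lower_months * monthly, upper_months * monthly

def assess(balance: float, annual_expenditures: float) -> str:
    lower, upper = target_range(annual_expenditures)
    if balance < lower:
        return "below safety level: consider raising the fee rate"
    if balance > upper:
        return "above upper bound: consider an out-of-cycle rate review"
    return "within target range"

# Hypothetical example roughly in line with fiscal year 2017 figures:
# a $4.1 billion balance against about $0.9 billion in annual expenditures.
print(assess(4.1e9, 0.9e9))  # prints the "above upper bound" message
```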
GAO Analysis Indicates the Administrative Account Is Likely to Have Sufficient Funds for at Least 7 Years and Could Pay for Additional Expenditures
We developed a model to understand potential changes in the administrative account balance for fiscal years 2018 through 2024 given a range of annual sales, administrative fee rates, and annual administrative expenditures. We found that, if no changes were made to the fee rate or expected expenditure levels, the administrative account balance would likely be above the projected safety level by at least $1.6 billion in fiscal year 2024. If DSCA were to reduce the administrative fee rate as low as 2.9 percent and annual expenditures were to increase as much as 15 percent, the administrative account balance would also likely be above the projected safety level in fiscal year 2024 by at least $25 million.
The Projection Model
We used cautious assumptions to model eight scenarios to assess the likelihood of the administrative account balance remaining above a projected safety level in fiscal years 2018 through 2024. The projected safety level reflects DSCA’s definition of the minimum balance required for the administrative account to allow sufficient time to respond to volatility in the FMS business environment and to complete ongoing FMS cases. We consider our assumptions cautious because they are more likely to lead us to underestimate the administrative account balance and to inflate the risk of it dropping below the projected safety level (see text box).
Cautious Assumptions Used in GAO Modeling of the Administrative Account Balance in Future Years

Sales: We assumed a minimum of $15 billion and a maximum of $47 billion in sales each year, using a uniform distribution that assumes an equal likelihood of any sales value within that range each year. In reality, annual sales have increased overall since fiscal year 2000 and have remained above $20 billion since fiscal year 2006 and above $33 billion since fiscal year 2014. Higher annual sales lead to larger administrative fee collections. This sales range likely leads to underestimating collections in some years.

Expenditures: We assumed expenditure levels that reflect both fluctuations in sales and overall steady annual growth in expenditures, even when our annual sales values do not increase on average. Therefore, we likely overestimate expenditures in some years.

Safety level: We assumed steady annual growth in the safety level, even though we would expect the safety level to be lower when collections and expenditures are lower. Since our safety level projections do not take this into account, we likely overestimate the safety level and therefore inflate the risk of dropping below it.
We developed our baseline scenario, in which we maintain the current 3.5 percent administrative fee rate and assume typical growth in expenditures based on current trends. In additional scenarios, we adjusted the baseline projections with two key levers affecting the administrative account balance: (1) the fee rate and (2) the amount of expenditures out of the account. Given that the administrative account balance was $2.7 billion above the safety level as of the end of fiscal year 2017, we made adjustments to these levers in ways that could lead to a decline in the account balance, by decreasing the fee rate, increasing expenditures, or through a combination of the two. Below, we describe the results of the baseline scenario and of the scenarios in which we adjust either or both levers to the maximum extent we considered. See appendix II for a full description of our modeling methodology and results from four additional scenarios.
For each scenario, we estimated the expected range of the administrative account balance and then assessed the likelihood of the account balance remaining above the projected safety level. We treat 10 percent as an acceptable risk threshold and therefore consider any outcome favorable if it involves a 90 percent or greater likelihood of the balance remaining above the projected safety level.
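For readers interested in the mechanics, the sketch below shows the general shape of such a simulation. It is a heavily simplified approximation, not the GAO model itself: the starting balance, expenditure base, and growth rate are rough figures drawn from or assumed around this report, and collections are treated as arriving in the year of sale rather than split between upfront and delivery payments as they are in practice. The full methodology is in appendix II.

```python
# Simplified Monte Carlo sketch of the account-balance projection.
# Figures are approximations; the actual GAO model is more detailed.
import random

START_BALANCE = 4.1e9       # administrative account balance, end of FY2017
FEE_RATE = 0.035            # baseline scenario fee rate
BASE_EXPENDITURES = 0.9e9   # assumed starting annual expenditure level
ANNUAL_GROWTH = 0.03        # assumed steady growth in expenditures/safety
SAFETY_LEVEL = 1.4e9        # FY2017 safety level (18 months of funding)

def path_stays_above_safety(years: int = 7) -> bool:
    """Simulate one path; True if the balance never drops below safety."""
    balance, expenditures, safety = START_BALANCE, BASE_EXPENDITURES, SAFETY_LEVEL
    for _ in range(years):
        sales = random.uniform(15e9, 47e9)   # uniform annual sales assumption
        collections = sales * FEE_RATE       # simplification: collected at sale
        expenditures *= 1 + ANNUAL_GROWTH
        safety *= 1 + ANNUAL_GROWTH
        balance += collections - expenditures
        if balance < safety:
            return False
    return True

TRIALS = 10_000
share = sum(path_stays_above_safety() for _ in range(TRIALS)) / TRIALS
print(f"Share of simulated paths above the safety level: {share:.1%}")
```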
Model Outcomes
As shown in figure 7, our projections indicate that the administrative account balance will remain sufficient to maintain operations through fiscal year 2024 in all scenarios. Specifically:

In the baseline scenario, if no changes were made to the fee rate or to annual expenditures, the estimated administrative account balance would be between $2.5 billion and $5.7 billion in fiscal year 2024, with a 90 percent likelihood that the balance would be above the projected safety level by at least $1.6 billion.
If DSCA were to reduce the fee rate to 2.9 percent, we estimate the administrative account balance would be between $2.1 billion and $4.7 billion, with a 90 percent likelihood that the balance would be above the projected safety level in fiscal year 2024 by at least $1.0 billion.
If annual expenditures from the administrative account were to increase 15 percent above expected levels, we estimate the administrative account balance would be between $1.5 billion and $4.6 billion, with a 90 percent likelihood the balance would be above the projected safety level in fiscal year 2024 by at least $622 million.
If this increase in annual expenditures were coupled with a reduction in the administrative fee rate to 2.9 percent, we estimate the account balance would be between $1.1 billion and $3.6 billion in fiscal year 2024, with a 90 percent likelihood the balance would be above the projected safety level in fiscal year 2024 by at least $25 million.
The range of the estimated balance in each scenario gets larger from year to year due to increasing uncertainty for longer-term projections.
Our modeling shows that, even with a substantially reduced administrative fee rate, the estimated administrative account balance would likely well exceed the account's projected safety level through at least fiscal year 2024. Even if DSCA reduced the fee rate 0.3 percentage points below the rate it plans to adopt as of June 2018, we project the estimated balance of the administrative account would be over $1 billion above the account's safety level in fiscal year 2024.
GAO Modeling Indicates the Administrative Account Balance Could Likely Be Used to Pay for Additional Expenses, Such As Those Excluded by Statute
In addition, our modeling demonstrates that administrative funds are sufficient to cover a higher amount of expenditures for the work the U.S. government performs for the benefit of its foreign partners, and could be used in place of the other appropriated funds used to support some of the associated expenses today. As enacted in 1976, the provision of the act that authorized the collection of administrative fees required that sales contracts include appropriate fees for administrative services to recover the full estimated costs of the administration of sales made under the act. Subsequently, Congress amended the act to exclude some expenses from the administrative fee. In particular, according to a House report and DOD testimony, to avoid raising the administrative fee at a time when annual sales were low and the account was insolvent, Congress, at DOD’s request, amended the act in 1989 to exclude from the administrative fee certain expenses associated with military personnel who work on the FMS program as well as the estimated costs of unfunded civilian retirement and other benefits.
Since then, these expenses, with one exception for fiscal year 2000, have been funded with other appropriated funds rather than with foreign partners' administrative fees. For fiscal year 2000, Congress required DOD to recover expenses attributable to salaries of members of the Armed Forces and the unfunded estimated costs of civilian retirement and other benefits by including them in the administrative fee, resulting in $52 million in additional FMS administrative expenses, or 13.5 percent of total FMS administrative expenses, for that year. Applying the same percentage, these costs would approximate $119 million in fiscal year 2017; however, DOD does not track the costs of military pay or unfunded civilian retirement and other benefits for FMS, so the current value of these costs is unknown. Our modeling shows that, even if DSCA were to decrease the administrative fee rate 0.3 percentage points below the rate it plans to adopt effective June 2018 and annual expenditures increased as much as 15 percent above expected levels, the account balance would likely remain sufficient through at least fiscal year 2024. By then, DSCA would have had an opportunity to reassess the fee rate through another comprehensive rate review. The circumstances of the administrative account balance have changed substantially since the 1980s. Revisiting the provisions in the act authorizing and defining the collection of administrative expenses could allow other appropriated funds currently used to pay for some of these expenses to be used for other authorized purposes. Officials within DSCA and DOD's Comptroller Office have stated they are receptive to revisiting these provisions.
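The $119 million approximation above follows from simple proportional scaling, sketched below. The implied expense totals are our own back-calculations from the stated percentage, not figures reported by DOD.

```python
# Back-of-the-envelope reconstruction of the $119 million approximation.
# In fiscal year 2000, $52 million in excluded costs equaled 13.5 percent
# of total FMS administrative expenses; the same share is applied to 2017.
EXCLUDED_SHARE = 0.135

fy2000_total_implied = 52e6 / EXCLUDED_SHARE   # about $385 million in FY2000
fy2017_total_implied = 119e6 / EXCLUDED_SHARE  # about $881 million in FY2017

print(f"Implied FY2000 total expenses: ${fy2000_total_implied / 1e6:,.0f} million")
print(f"Implied FY2017 total expenses: ${fy2017_total_implied / 1e6:,.0f} million")
```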
The FMS CAS Account Balance Has Grown Substantially; Management Controls over the Balance Remain Insufficient
The CAS account balance grew substantially between fiscal years 2007 and 2015 because CAS collections exceeded expenditures in each year and insufficient controls were in place to manage the balance. The account balances for fiscal years 2016 and 2017 overstate available CAS funds due to a systems issue and limited related oversight. Since fiscal year 2014, DSCA has created some controls to help better manage this account; however, DSCA does not plan to conduct timely comprehensive reviews of the CAS fee rate, has inconsistently implemented internal guidance related to calculating the minimum desired level for the account, and has not established a method to calculate an upper bound of a target range for the account, thus allowing the account to continue to grow.
The CAS Account Balance Grew Substantially between Fiscal Years 2007 and 2015; the Fiscal Years 2016 and 2017 Account Balances Overstate Available Funds
The CAS account balance grew every fiscal year, from $69 million at the beginning of 2007 to $981 million at the end of 2015, or 1,329 percent over the period (see fig. 8). As annual sales grew during this period, CAS collections and expenditures also grew, but at slower rates than the account balance growth—at 133 percent and 187 percent, respectively.
CAS account collections exceeded expenditures each fiscal year from 2007 through 2015, contributing to the growing account balance. As shown in figure 9, collections were at least double expenditures in five of these years, with a $49 million difference between collections and expenditures in fiscal year 2015. DSCA reduced the CAS fee rate from 1.5 to 1.2 percent in 2014 due to concerns over growth in the CAS account balance, according to DSCA officials. After the rate reduction, the account balance continued to grow but at a slower rate. The account balance increased 5 percent during fiscal year 2015 compared with an average of 38 percent from fiscal years 2006 through 2014. The balance would continue to grow if this trend continues.
The CAS account balance data that DFAS provided to DSCA overstated the amount of CAS funds available by about $187 million for fiscal year 2016 and continued to be overstated for fiscal year 2017 due to a systems issue and limited related oversight. According to Defense Contract Management Agency (DCMA) officials, in October 2015, DCMA, the largest recipient of CAS funds, began using a new accounting system called the Defense Agencies Initiative. According to DCMA officials and internal data, DCMA submitted bills for about $187 million of CAS work for fiscal year 2016. To process its requests for this CAS funding in its new system, DCMA used an incorrect accounting code, according to DFAS officials. As a result, DCMA was paid for some of its fiscal year 2016 CAS bills, totaling about $89 million, from a different account, according to DFAS officials. Consequently, this amount paid to DCMA was not reflected in the CAS account expenditures or balance for fiscal year 2016. Further, DCMA and DFAS data differ regarding what additional amounts have been reimbursed to DCMA for its remaining fiscal year 2016 and its fiscal year 2017 CAS funding and suggest that DFAS underreported CAS expenditures to DSCA for both years.
Although DSCA has financial management responsibility for the FMS trust fund, DSCA has played a minimal role in correcting DCMA's incorrect billings or low reimbursement levels. After DSCA officials noticed low fiscal year 2016 CAS disbursements in December 2016, they asked DFAS and DCMA officials to look into the cause and to resolve the issue. However, as of January 2018, DSCA had not provided any specific directions to DFAS or DCMA on a process or timeline for fixing it. In November 2017, DCMA began submitting vouchers totaling approximately $89 million for DFAS to process so that the amounts would be correctly paid out of the CAS account. According to DFAS officials, DFAS processed corrections related to these vouchers by January 2018 so that the approximately $89 million would be taken from the CAS account and returned to the other account. DFAS officials believe that these transactions resolved DCMA's billing issues because they have not received any additional vouchers from DCMA or direction from DSCA. However, according to DCMA officials, they continue to have difficulty getting reimbursed for CAS work dating back to fiscal year 2016, and discrepancies remain between related DCMA and DFAS data.
Federal standards for internal control state that management should use quality information that is current, complete, accurate, and provided on a timely basis to achieve the agency’s objectives and make informed decisions. However, as a result of DCMA’s difficulties in getting reimbursed from the CAS account, the CAS account balance remains overstated as of January 2018, hampering DSCA’s ability to perform oversight of the account.
DSCA’s Management Controls over the CAS Account Balance Have Been Strengthened in Recent Years but Remain Insufficient
Since 2014, DSCA has put in place various management controls for the CAS account. Nevertheless, these remain insufficient due to inconsistent implementation of internal guidance and lack of a key control.
DSCA Established Some Controls for Managing the CAS Account Balance
From June to August 2013, DSCA conducted its first comprehensive review of the CAS fee rate since the early 2000s, according to DSCA officials. This comprehensive fee rate review was called for in DSCA's strategic plan and was also prompted by substantial growth of the CAS account, according to DSCA officials. To conduct this review, DSCA officials worked with an internal support contractor to develop a model to project future CAS account balances based on historical data on CAS expenditures and collections, historical data and future projections for annual sales, and future budget estimates made by CAS implementing agencies. In this model, DSCA varied future annual sales projections and the CAS fee rate within the range of 1.0 to 1.5 percent to determine if the CAS account could maintain a healthy balance over the next 10 years under different conditions. As a result, in November 2014, DSCA issued a policy memo that specified a reduction in the CAS fee base rate from 1.5 to 1.2 percent for all cases starting after December 1, 2014. The decision to reduce the rate to 1.2 percent was supported by modeling outcomes showing that the CAS account balance would remain above a safety level set for the account even if annual sales were as low as $12 billion in each of the following 10 years.
The November 2014 policy memo that resulted from the 2013 comprehensive fee rate review specified three new controls for managing the CAS account:
Periodic comprehensive fee rate reviews: DSCA determined that it would conduct comprehensive rate reviews of the CAS account every 5 years.
A safety level for the CAS account: DSCA established a safety level, or minimum desired balance, for the CAS account at 3 years of average annual expenses. According to DSCA officials, the basis for the calculation of the safety level was rooted in a Federal Acquisition Regulation requirement to complete contract closeout within 3 years of final delivery for some types of contracts. As a result, even if no new sales were made, the CAS account would have sufficient funds to pay for contract management on existing cases. The CAS account balance was 1.7 times the safety level, or $371 million above it, in fiscal year 2014 and 1.8 times the safety level, or $420 million above it, in fiscal year 2015.
Annual reviews of the health of the CAS account: For each year since fiscal year 2014, DSCA has conducted an annual assessment of the health of the CAS account. To perform this assessment, a DSCA official reviews information such as the CAS account balance from the end of the prior fiscal year against the account’s safety level, prior year account expenditures and collections, and information that may be relevant to the account moving forward, such as budget requests submitted by implementing agencies. This annual assessment culminates in a report that is provided to and signed off by DSCA’s Director of Business Operations.
These practices were formalized by incorporating them into DSCA’s Manager’s Internal Control Program (MICP). In addition to these practices, MICP documentation for the CAS account also lays out a fourth management control: monthly reviews, which are meant to ensure that the account stays above its safety level throughout the year and that any large variances in expected expenditures or collections are reported to DFAS so that errors can be identified and corrected as needed. According to DSCA’s MICP Handbook, all MICP documentation should be reviewed at least annually to ensure it is kept up to date.
DSCA Does Not Plan to Conduct a Timely Comprehensive Review of the CAS Account Fee Rate
As mentioned above, DSCA’s internal guidance indicates DSCA should conduct comprehensive reviews of the CAS fee rate every 5 years, which would make the next review due in the summer of 2018. However, DSCA officials do not expect to begin their next comprehensive rate review until fiscal year 2019. DSCA officials stated that they intend to complete the review by the beginning of fiscal year 2020, to complete it within 5 years of when the last CAS rate reduction took effect. However, this plan extends the time between reviews by a year and a half because of the amount of time it took for DSCA to decide on and implement the rate reduction after the last review. More frequent comprehensive reviews would provide timely, in-depth information to decision makers to ensure that the CAS fee rate is set appropriately. In addition, more frequent fee rate changes would allow for smaller corrections when needed, limiting the impact that large fee rate changes would have on customers’ ability to budget.
DSCA Inconsistently Implemented Guidance Concerning Safety Level Calculations
The guidance in the MICP procedures specifying how to calculate the safety level has not been consistently implemented and has not been updated to align with current practices. Federal internal control standards indicate that management should document the organization’s internal control responsibilities in its policies at the appropriate level of detail to allow management to monitor the control activity effectively. These standards also state that if there is a significant change in an entity’s process, management should review this process in a timely manner after the change to determine that the control activities are designed and implemented appropriately.
Figure 10 outlines the guidance in the MICP procedures with regard to the safety level and how this guidance was implemented from fiscal years 2014 through 2017. In particular, the MICP procedures indicate that the safety level should be calculated based on a 3-year average of disbursements. The procedures also allow DSCA officials to determine whether to update the safety level in each year without providing specific criteria for making this determination. As a result, no change to the safety level was made in fiscal year 2015 or 2016 despite increases in CAS expenditures. However, for the years when the safety level was calculated, the calculation was performed differently than what is prescribed in the MICP guidance. For example, for fiscal year 2017, the DSCA official in charge of managing the CAS account stated the method was modified to be based on the amount of obligation authority (or total CAS budget) instead of the amount of disbursements. This approach was taken because of the incomplete fiscal year 2016 disbursement data. However, the method used was not consistent with the guidance. Accordingly, for future years it is not clear how the safety level should be calculated.
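To show how the two approaches can diverge, the sketch below implements the calculation as written in the MICP guidance alongside the modified fiscal year 2017 approach. The dollar figures, and the three-year multiplier applied to obligation authority, are hypothetical assumptions for illustration; the report notes the actual method used was inconsistent with the guidance.

```python
# Minimal sketch contrasting two safety-level calculations for the CAS
# account. All dollar figures (in millions) are hypothetical placeholders.

def safety_level_per_guidance(annual_disbursements: list[float]) -> float:
    """3 years of expenses, using a 3-year average of disbursements."""
    last_three = annual_disbursements[-3:]
    return 3 * (sum(last_three) / len(last_three))

def safety_level_fy2017_variant(obligation_authority: float) -> float:
    """Modified approach: based on total CAS budget (obligation authority).
    The 3-year multiplier is assumed here for comparability."""
    return 3 * obligation_authority

disbursements = [180.0, 195.0, 210.0]        # hypothetical, in millions
print(safety_level_per_guidance(disbursements))  # 585.0
print(safety_level_fy2017_variant(205.0))        # 615.0
```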
As previously stated, best practices in managing federal user fees indicate that it is advisable for federal agencies to use a risk-based strategy to establish a target range for fee accounts. Although DSCA has followed this best practice and set a safety level, or minimum desired balance for the CAS account, DSCA has not established a method to calculate an upper bound of a target range for the CAS account balance, which would help officials identify when the account balance becomes excessive. DSCA’s MICP procedures indicate that, as part of the annual assessment process, DSCA officials should review account activity to determine if an out-of-cycle comprehensive review of the CAS fee rate is needed, specifying that this should be done either because the CAS account balance should be higher to cover expenses or lower because too many fees are being collected. However, in the absence of an upper bound for the account, it is up to the judgment of DSCA officials to determine when the account is excessive. DSCA officials told us that they were reluctant to set an upper bound for the account due to uncertainty regarding future sales and future CAS expenditures. Nevertheless, as with the safety level, an upper bound could be based on a certain number of months or years in expenditures and could be flexible and adjusted over time. Without establishing a target range for the account balance, DSCA officials lack a key tool to help determine the appropriate CAS fee rate.
Conclusions
From fiscal years 2007 to 2017, the balance of the Foreign Military Sales administrative account grew dramatically to $4.1 billion. DSCA has set a minimum desired level for the account balance and designed various account monitoring practices to ensure the minimum level is not reached. However, DSCA has not performed comprehensive reviews of the administrative fee rate at least every 5 years, consistent with DSCA policy, and has not set an upper bound that would provide a target range for the account. These conditions limit DSCA's ability to appropriately target the fee rate and to protect against excessive growth in the account balance. Our analysis demonstrates that the administrative account is likely to stay above its safety level even if the rate were reduced to as low as 2.9 percent and expenditures from the account were raised by 15 percent, signifying there should be even more room for the account to absorb increased expenditures now that DSCA has announced that the rate will be reduced to 3.2 percent as of June 1, 2018. Thus, this account should now have sufficient funds to pay for additional expenses that are currently paid from appropriated funds, such as those excluded by statute. Thereby, more of the costs for the work performed for the benefit of our foreign partners could be paid through the administrative fee, rather than having some of those expenses paid through other appropriated funds.
The CAS account has also experienced significant growth since fiscal year 2007, although the current account balance is unknown because of an accounting error and difficulty using a new accounting system. Specifically, in fiscal year 2016, a different account was charged about $89 million in DCMA’s CAS billings and DCMA has had continuing difficulty getting reimbursed for its CAS bills for fiscal years 2016 and 2017. DSCA did not become aware of this issue for over a year after it began, and DSCA has played a minimal role in coordinating DCMA and DFAS to fix it. Since 2014, DSCA has strengthened some management controls over the CAS account, but they could be further enhanced if DSCA conducted more timely comprehensive reviews, provided more clarity on the expected calculation of the account’s minimum level, and set an upper bound of a target range for the account. In particular, such an upper bound could allow DSCA officials to identify when the CAS balance is excessive, as directed by DSCA’s internal guidance. Adopting such controls would enhance DSCA leadership’s ability to monitor the account’s balance and make timely decisions to ensure the rate is set to cover DOD costs but not overcharge foreign partners.
Matter for Congressional Consideration
Congress should consider redefining what can be considered an allowable expense to be paid from the administrative account. (Matter for Consideration 1)
Recommendations for Executive Action
We are making the following six recommendations to DSCA:

The Director of DSCA should take steps to ensure that comprehensive reviews of the administrative fee rate are completed at least every 5 years. (Recommendation 1)
The Director of DSCA should define a method for calculating an upper bound of a target range for the administrative account that could be used to guide the agency’s reviews of administrative account balances and decision making in setting the fee rate. (Recommendation 2)
The Director of DSCA should direct DCMA and DFAS to work together to ensure timely correction of the fiscal years 2016 and 2017 DCMA CAS reimbursement issues. (Recommendation 3)
The Director of DSCA should take steps to ensure that comprehensive reviews of the CAS fee rate are completed at least every 5 years. (Recommendation 4)
The Director of DSCA should clarify internal guidance to ensure consistency in the calculation of the CAS account’s minimum (safety) level. (Recommendation 5)
The Director of DSCA should define a method for calculating an upper bound of a target range for the CAS account that could be used to guide the agency’s reviews of CAS account balances and decision making in setting the fee rate. (Recommendation 6)
Agency Comments and Our Evaluation
We provided a draft of this report for review and comment to DOD and State. DSCA provided written comments on behalf of DOD, which we reproduce in appendix III. In its comments, DSCA concurred with five of our recommendations and partially concurred with one.
In commenting on our first recommendation for DSCA to take steps to ensure that it completes timely comprehensive reviews of the administrative fee rate, DSCA asserted that its last two reviews were conducted in time to meet its 5-year requirement. However, as we outline in this report, these reviews were conducted about 6 to 7 years apart. They included a fiscal year 2005 review that led to an August 2006 rate change, a review that began in fiscal year 2011 that led to a November 2012 rate change, and a fiscal year 2018 review that led to a June 2018 rate change. By instead following its own policy of completing the reviews every 5 years, DSCA would be better able to keep the administrative fee rate up to date with program changes.
In partially concurring with our fourth recommendation for DSCA to take steps to ensure that it completes timely comprehensive reviews of the CAS fee rate, DSCA asserted that it plans to begin its next review later than 5 years after the last one to provide more time for DCMA's billing issues to be resolved and to inform the review with 5 years of data since the December 2014 rate reduction. Implementing this recommendation, including for its next review, would allow DSCA to meet its own guidance. In addition, the process of performing a comprehensive review of the fee rate could provide further impetus for addressing DCMA's billing issues, which have led to inaccuracies in the account balance and expenditure information since fiscal year 2016. Finally, if DSCA were to delay data collection until more than 5 years after the last rate reduction, the reviews would start more than 6-1/2 years apart. Given how long the review process has taken in the past, an earlier start will help ensure completion within 5 years.
In commenting on our fifth recommendation, DSCA noted that it updated its internal guidance for calculating the CAS safety level in March 2018. We plan to verify full implementation of this recommendation as part of our routine follow up process.
DOD also provided technical comments, which we incorporated as appropriate.
State did not provide any written or technical comments.
We are sending copies of this report to the appropriate congressional committees, and the Secretaries of Defense and State. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact Thomas Melito at (202) 512-9601 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.
Appendix I: Objectives, Scope, and Methodology
The Defense Security Cooperation Agency (DSCA) manages fees collected on transfers of defense articles and services to foreign countries that occur through the Foreign Military Sales (FMS) program. These fees are collected into separate accounts in the FMS trust fund. This report examines (1) the balance maintained in the administrative account in fiscal years 2007 to 2017, the controls used to manage it, and the extent to which the Department of Defense (DOD) has the ability to pay for FMS administrative expenses under different scenarios; and (2) the balance maintained in the contract administration services (CAS) account in fiscal years 2007 to 2017 and the controls used to manage it.
To determine which fees to include in our review, we reviewed the International Security Assistance and Arms Export Control Act of 1976 (the act), which is the authorizing legislation for FMS, and DOD documents and data. We also interviewed DOD officials. We determined that there are three primary fees charged on FMS cases: (1) the administrative fee, (2) the CAS fee, and (3) the transportation fee. These three fees represented 99 percent of the amount of funding held in FMS trust fund overhead accounts as of the beginning of fiscal year 2016. We will review the transportation account in a separate report because of the different ways in which the collections and expenditures from the account operate.
To assess the balance of the administrative account, we analyzed administrative account collections, expenditures, and balance data for fiscal years 2007 to 2017 maintained in the Defense Integrated Financial System by the Defense Finance and Accounting Service (DFAS), the DOD component that acts as the accounting service for the FMS program. According to DFAS, the Defense Integrated Financial System was implemented in 1980, and is used for FMS case management, financial reporting, and customer billing. We chose to review this number of fiscal years of data based on data availability. To understand the structure and functioning of the administrative account and to determine the reliability of these data, we reviewed relevant DOD documents, including explanations of changes to the administrative fee rate over time, and we interviewed DFAS and DSCA officials in various policy, financial, or technical roles. We asked knowledgeable agency officials a set of standard questions on this system, data entry procedures and checks, and other relevant aspects of data reliability. We reviewed their responses, examined the data ourselves, and conducted basic logic checks. Where questions arose, we followed up with agency officials for explanation and clarification. We did not conduct any independent testing of these data to determine whether these were the amounts that should have been paid into and out of the account during that period, such as through correct payments having been made based on accurate billings. We determined the administrative account data to be sufficiently reliable for assessing the account balance and related trends over the period, and for projecting future trends in the account balances, under a variety of assumptions, using statistical modeling.
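As an illustration, a basic logic check of the kind described above could resemble the following sketch, which verifies that each year's ending balance equals the prior balance plus collections minus expenditures; the record fields, amounts, and tolerance are hypothetical, not the procedure actually applied to the DFAS data.

```python
def check_balance_continuity(records, tolerance=0.005):
    """Flag years where the ending balance does not equal the prior
    balance plus collections minus expenditures (amounts in billions)."""
    issues = []
    for prev, cur in zip(records, records[1:]):
        expected = prev["balance"] + cur["collections"] - cur["expenditures"]
        if abs(expected - cur["balance"]) > tolerance:
            issues.append((cur["year"], expected, cur["balance"]))
    return issues

# Hypothetical example records:
records = [
    {"year": 2016, "collections": 1.1, "expenditures": 0.9, "balance": 3.9},
    {"year": 2017, "collections": 1.2, "expenditures": 1.0, "balance": 4.1},
]
print(check_balance_continuity(records))  # [] -- the years are consistent
```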
To assess the controls DSCA uses to manage the administrative account balance, we reviewed relevant documents and interviewed DOD officials. To determine what controls DSCA should be using to manage the account, we reviewed DOD’s Financial Management Regulations, DSCA’s Security Assistance Management Manual, DSCA’s Managers’ Internal Control Program procedures, and other internal DSCA guidance. We also reviewed reports resulting from DSCA’s implementation of its account monitoring and comprehensive rate review processes, including annual administrative account assessments from fiscal years 2012 to 2016, quarterly administrative account assessments from fiscal years 2015 and 2016, and reports resulting from the 2005 and 2011-2012 comprehensive fee rate reviews. We chose to review the annual and quarterly assessments for different periods of time to review manageable numbers of the most recent assessments conducted. We also interviewed DSCA policy officials regarding their implementation of these processes.
To assess the extent to which DOD has the ability to pay for FMS administrative expenses from the administrative account under different conditions, we modeled eight scenarios to determine the projected account balance in fiscal years 2018 to 2024 across a range of potential annual sales values in each year while varying the administrative fee rate and expenditures from the account. Appendix II provides a complete description of our modeling methodology and the results of our analysis.
In addition to the modeling, we also performed legal research to determine the extent to which Congress and DOD have a role in defining what can be paid from the administrative account. In particular, we reviewed sections 2761 and 2792 of the act regarding DOD’s authority to charge fees. We also reviewed DOD documentation and legislative history to determine the conditions that led to the 1989 amendments to the act that excluded certain costs associated with military personnel who work on the FMS program as well as unfunded civilian retirement and other benefits from administrative expenses. Additionally, we reviewed DSCA’s definitions of which FMS administrative services should be paid from different funding sources, as specified in DSCA’s Security Assistance Management Manual. We also interviewed DOD officials about the agency’s role in defining administrative expenses.
Similar to the administrative account, to assess the balance of the CAS account, we initially attempted to analyze CAS account collections, expenditures, and balance data for fiscal years 2007 to 2017 maintained by DFAS in the Defense Integrated Financial System. We chose to review this number of fiscal years of data based on data availability. To understand the structure and functioning of the CAS account and to determine the reliability of these data, we reviewed relevant documents from DOD, including those explaining changes to the CAS account fee rate over time, and interviewed DFAS and DSCA officials in various policy, financial, or technical roles. We asked knowledgeable agency officials a set of standard questions on this system, data entry procedures and checks, and other relevant aspects of data reliability. We reviewed their responses, examined the data, and conducted logic checks. Where questions arose, we asked agency officials to explain and clarify. We performed additional cross-checks that compared CAS expenditures data provided by DFAS with disbursement data from the implementing agencies that used the CAS funds in fiscal years 2012 to 2017. We found some discrepancies in these data that we were subsequently able to reconcile with agency officials for fiscal years 2007 through 2015 for the purposes of reporting overall annual expenditures from the account. We did not conduct any independent testing of these data to determine whether these were the amounts that should have been paid into and out of the account during that period, such as through correct payments having been made based on accurate billings. We determined the CAS account data for fiscal years 2007 to 2015 to be sufficiently reliable for assessing the account balance and related trends over the period. We did not determine the CAS account data to be sufficiently reliable for these purposes for fiscal years 2016 and 2017 due to a large share of CAS billings for those fiscal years that either had been disbursed from the incorrect account or were delayed, and were therefore not reflected in the CAS expenditures and balance data. Accordingly, the CAS data for fiscal years 2016 and 2017 were excluded from our analysis.
To assess the controls DSCA uses to manage the CAS account balance, we reviewed relevant statutes, DOD financial management regulations, DOD guidance, and DOD documentation of such controls, and interviewed DSCA officials. To determine what controls DSCA should be following to manage the account, we reviewed DSCA’s Managers’ Internal Control Program procedures and a related DSCA policy memo, and interviewed DSCA policy officials. We also reviewed reports resulting from DSCA’s implementation of its account monitoring and comprehensive rate review processes, including all of DSCA’s annual CAS account assessments completed to date (covering fiscal years 2014 to 2016) and reports showing the process used and results of the fiscal year 2013 comprehensive review of the CAS fee rate. We also interviewed DSCA officials regarding their implementation of these processes.
We were unable to perform modeling to assess the extent to which DOD has the ability to pay for CAS expenses from the CAS account under different conditions due to the limited data available at the time of our review and data reliability concerns for fiscal years 2016 and 2017.
We conducted this performance audit from February 2017 to May 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Methodology Used to Model Possible Changes to the Administrative Account Balance and Model Results
Methodology
To determine whether the administrative account balance would be sufficient to maintain Foreign Military Sales (FMS) operations if there were a reduction in the administrative fee rate or an increase in annual expenditures, we used a Monte Carlo simulation methodology to project the account balance across a range of annual sales values for fiscal years 2018 through 2024. This technique approximates the likelihood of certain outcomes by performing multiple trial runs, called simulations, using random variables within a specified range. The simulations capture the volatility of sales in the projection of the future balance of the administrative account. We chose to report projections through fiscal year 2024 for two main reasons. First, there is increasing uncertainty for longer-term projections. Second, by then, DSCA should have had an opportunity to reassess the fee rate through another comprehensive rate review, given that the current review is to be completed in fiscal year 2018 and DSCA policy requires such reviews every 5 years.
To construct our baseline model, we used the 3.5 percent administrative fee rate, which was current during the period of our review. We also used historical annual sales and appropriations data provided by the Defense Security Cooperation Agency (DSCA) and annual administrative account collections, expenditures, and balance data provided by the Defense Finance and Accounting Service (DFAS). To assess the reliability of the data provided by both DSCA and DFAS, we interviewed officials from both agencies, performed manual error checks on the data, and reviewed relevant documents from DOD and other sources, including DSCA’s annual assessments of the administrative account and congressional appropriations laws dating back to fiscal year 2000. In addition, for collections data, we cross-checked the data provided by DFAS with reports the agency provided to DSCA on administrative fees owed on cases implemented since fiscal year 2000 as well as checked for any anomalies in the data. Through this process, we found errors in the way a key variable in these reports was pulled for data on cases prior to March 2013. We did not find such errors in the data for fiscal years 2014 to 2017, which led us to only using data on the status of cases in fiscal years 2014 to 2017 in our modeling. We did not conduct independent testing or an audit of DSCA or DFAS data. We found these specific data to be sufficiently reliable for use in our modeling.
We conducted 10,000 simulations for each year using the following parameters:
Sales: We used annual sales data from fiscal years 2000 to 2017 and the Monte Carlo methodology to build an annual sales distribution for fiscal years 2018 to 2024. We chose to review this number of fiscal years of data based on availability of reliable data. For that distribution, we assumed a uniform distribution with a minimum possible sales value of $15 billion and a maximum of $47 billion, under which annual sales values have an equal probability of falling anywhere within that range. A uniform distribution was selected because, as compared to other potential distributions (e.g., normal, triangular), it more accurately reflected the current reality of annual sales, including the increasing trend seen since fiscal year 2000 and the jump in sales seen in fiscal year 2006. Although annual sales have grown steadily over time, with values of at least $27.8 billion since fiscal year 2008, DSCA officials explained that the FMS market could shrink at any time based on global geopolitical and economic factors. As a result, we took a cautious approach to determining the minimum level of our sales projections by allowing for the possibility of annual sales dropping to $15 billion in each year. We set our maximum possible sales value at $47 billion to reflect the second highest sales value between fiscal years 2000 and 2017. Sales in fiscal year 2012 were $69 billion, due in large part to one large purchase made by Saudi Arabia. We excluded this as a possible maximum value in future years because DSCA officials explained that this high value of sales was considered an exception. We also did not take into account any time trend effects, such as inflation, technological advances, or new product development, that could increase the value of future annual sales. The uniform distribution used in the model produces average sales of $30.8 billion, with a standard deviation of $9.2 billion, while the average sales from fiscal years 2006 through 2017 were $36.4 billion, with a standard deviation of $12.1 billion.
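To illustrate this step, the following minimal sketch (in Python) draws annual sales for 10,000 simulations and 7 projection years from the uniform distribution described above; the variable names and fixed seed are illustrative, not part of GAO's actual model.

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # fixed seed, for reproducibility only
N_SIMS, N_YEARS = 10_000, 7           # fiscal years 2018 through 2024

# Annual sales in billions of dollars, drawn uniformly between $15 billion
# and $47 billion for every simulation and projection year.
sales = rng.uniform(low=15.0, high=47.0, size=(N_SIMS, N_YEARS))

# A uniform distribution on [15, 47] has mean 31 and standard deviation
# (47 - 15) / sqrt(12), or about 9.24 -- consistent with the average of
# $30.8 billion and standard deviation of $9.2 billion reported above.
print(round(sales.mean(), 1), round(sales.std(), 1))
```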
Collections: First, to calculate collections on ongoing cases for fiscal years 2018 to 2024, we used administrative account collections data from fiscal years 2010 to 2017, a schedule of the average percentage of administrative fee collections for each year in the life of an FMS case, and administrative fee rates from fiscal years 2010 to 2017. To develop an average collection schedule for cases, we used a DFAS report that shows the percentage of the administrative fee that should have been collected in each year on each case implemented in fiscal years 2008 to 2017. To address the inaccuracies in the data in this report prior to March 2013, we developed a schedule of the average rate of collections in each of the first 9 years of case implementation by summing the pertinent amounts of the administrative fee that should have been paid on cases and dividing by the total amounts of the administrative fee owed on cases implemented in fiscal years 2008 to 2017, as of fiscal years 2014 to 2017. We excluded from the collections schedule the large sale made to Saudi Arabia in fiscal year 2012 because that case had a reduced first-year collection rate that skewed the first-year average. This 9-year collection schedule accounts for about 91 percent of total expected collections on cases.
We then calculated expected collections for new cases in a given year by multiplying the dollar value of sales in that year by the average collection rate for the first year of a case and the applicable fee rate. Finally, we added new and ongoing collections to arrive at total collections projected for each year.
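A sketch of this collections calculation, under stated assumptions, follows; the 9-year schedule values are hypothetical placeholders (the actual schedule is derived from DFAS data), and cases implemented before fiscal year 2018 are omitted for brevity.

```python
import numpy as np

FEE_RATE = 0.035  # baseline administrative fee rate of 3.5 percent

# Hypothetical share of a case's total administrative fee collected in each
# of the first 9 years after implementation; the values sum to about 0.91,
# matching the roughly 91 percent of expected collections described above.
SCHEDULE = np.array([0.40, 0.15, 0.10, 0.08, 0.06, 0.05, 0.04, 0.02, 0.01])

def collections_for_year(sales_path, year_idx, fee_rate=FEE_RATE):
    """Projected collections in projection year `year_idx`: first-year
    collections on that year's new cases plus ongoing collections on
    cases implemented in earlier projection years."""
    total = 0.0
    for implemented in range(year_idx + 1):
        age = year_idx - implemented  # years since the cases were implemented
        if age < len(SCHEDULE):
            total += sales_path[implemented] * SCHEDULE[age] * fee_rate
    return total
```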
Expenditures: We used administrative account expenditure and collection data from fiscal years 2006 to 2017 to develop a regression model to project administrative account expenditures in fiscal years 2018 to 2024. We used available data from fiscal years 2006 to 2017 to produce an estimate of the relationship between collections and expenditures, employing a simple linear regression model in which the dependent variable was expenditures and the explanatory variables were collections, a linear time trend, and a constant. We chose to review this number of fiscal years of data based on availability of reliable data. We then used the coefficients from the regression model to estimate future expenditures against simulated collections and a time trend. As designed, to provide a cautious estimate of future expenditures, this model reflected an overall increasing trend in expenditures even when annual sales simulated in future years did not increase on average.
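A minimal sketch of such a regression appears below; the historical series are synthetic placeholders standing in for the DFAS data, so the fitted coefficients are illustrative only.

```python
import numpy as np

# Placeholder history in billions of dollars (the report uses DFAS data
# for fiscal years 2006 to 2017).
years = np.arange(2006, 2018)
collections_hist = np.linspace(0.7, 1.3, len(years))  # hypothetical values
expenditures_hist = 0.1 + 0.8 * collections_hist + 0.01 * (years - 2006)

# Design matrix: a constant, collections, and a linear time trend.
X = np.column_stack([np.ones(len(years)), collections_hist, years - years[0]])
beta, *_ = np.linalg.lstsq(X, expenditures_hist, rcond=None)

def project_expenditures(collections, t):
    """Projected expenditures from simulated collections and a time index."""
    return beta[0] + beta[1] * collections + beta[2] * t
```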
Safety level: The administrative account safety level is established each year by DSCA as the minimum balance required to continue operations and respond to potential volatility in the FMS market. DSCA calculates the account's annual safety level as 18 months of operational funding, as determined by the congressional obligation limit, which has been set annually in the foreign operations appropriation since 1992. To project the administrative account safety level for fiscal years 2018 to 2024, we used the congressional obligation limit for the administrative account from fiscal years 2000 to 2017, as reported by DSCA, to develop a simple regression model in which the dependent variable was the obligation limit and the explanatory variables were a linear time trend and a constant. We chose to review this number of fiscal years of data based on availability of reliable data. Then, based on DOD guidance, we divided the projected obligation limit by 12 and multiplied it by 18 to calculate the projected safety level. This regression model projects steady growth in the obligation limit and therefore steady growth in the safety level every year. The same projected safety level applies to all simulations for each year so that we can apply a consistent threshold against which to compare the account balance, although some simulations involved lower future sales, which could lead to lower future expenditures and hence lower safety levels.
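The safety-level step reduces to fitting a time trend to the obligation limit and scaling the projection by 18/12 (that is, by 1.5); a sketch with a hypothetical obligation-limit series follows.

```python
import numpy as np

# Hypothetical obligation limits in billions of dollars for fiscal years
# 2000 to 2017 (the report uses the limits reported by DSCA).
years = np.arange(2000, 2018)
obligation_limit = 0.45 + 0.02 * (years - 2000)

X = np.column_stack([np.ones(len(years)), years - 2000])
beta, *_ = np.linalg.lstsq(X, obligation_limit, rcond=None)

def safety_level(fiscal_year):
    """18 months of funding: the projected annual limit divided by 12
    and multiplied by 18, which is equivalent to multiplying by 1.5."""
    limit = beta[0] + beta[1] * (fiscal_year - 2000)
    return limit / 12.0 * 18.0
```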
Finally, using these parameters, we calculated the administrative account balance for each year by adding the net income projected for that year (that year’s projected collections minus that year’s projected expenditures) to the previous year’s account balance. All of our estimated projections are in nominal dollars.
Building upon the baseline projection, we conducted 10,000 simulations for each year for seven additional scenarios: three in which the administrative fee rate is reduced from the current 3.5 percent to as low as 2.9 percent, three in which annual expenditures are increased as high as 15 percent above expected levels, and one in which both changes occur (see table 1). We modeled decreases of the fee rate to as low as 2.9 percent to look at the effect of a wide range of possibilities lower than the current rate. We modeled increases in annual expenditures of up to 15 percent above typical growth because this amount is a little higher than 1.5 times the average annual growth in expenditures between fiscal years 2007 and 2017 (9.3 percent). As such, our model accounted for the potential of large sustained expenditure growth. Finally, we modeled the effects of adjusting both levers to the maximum extent through a scenario with a 2.9 percent fee rate and a 15 percent increase above expected annual expenditures. Using the account balance and safety level projections for each scenario, we assessed the likelihood of the balance dropping below the safety level in each year through fiscal year 2024.
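Putting these pieces together, the sketch below rolls the balance forward (prior balance plus projected net income) for one scenario and estimates the share of the 10,000 simulations in which the balance stays above the safety level in every year through fiscal year 2024. It reuses the illustrative functions and the `sales` array from the earlier sketches; scaling all collections by the ratio of the new and old fee rates is a simplification, since in practice a reduced rate would apply only to cases implemented after the change.

```python
import numpy as np

def run_scenario(sales, fee_scale=1.0, expense_scale=1.0,
                 start_balance=4.1, start_year=2018):
    """Share of simulations whose balance stays above the safety level in
    every projection year; start_balance reflects the reported fiscal
    year 2017 balance of $4.1 billion."""
    n_sims, n_years = sales.shape
    balance = np.full(n_sims, start_balance)
    above = np.ones(n_sims, dtype=bool)
    for t in range(n_years):
        coll = np.array([collections_for_year(sales[i], t)
                         for i in range(n_sims)]) * fee_scale
        expend = project_expenditures(coll, t) * expense_scale
        balance += coll - expend  # add the year's net income to prior balance
        above &= balance > safety_level(start_year + t)
    return above.mean()

# Example: fee rate reduced from 3.5 to 2.9 percent and annual
# expenditures increased 15 percent above expected levels.
print(run_scenario(sales, fee_scale=2.9 / 3.5, expense_scale=1.15))
```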
Summary of Results
Baseline Scenario
In the baseline scenario, we projected what would happen to the administrative account balance if the fee rate were to remain 3.5 percent and expenditures were to remain stable based on historical data. There is a 100 percent likelihood of the account balance remaining above the safety level in each year in this scenario. There is a 90 percent likelihood that the account balance would remain above the projected safety level in fiscal year 2024 by at least $1.6 billion (see fig. 11).
We used the model to determine what would happen to the account balance if the administrative fee rate were decreased to 3.3, 3.1, and 2.9 percent. We projected a 100 percent likelihood that the account balance would remain above the projected safety level in fiscal year 2024 in each of these scenarios. There is a 90 percent likelihood that the account balance would remain above the projected safety level in fiscal year 2024 by at least $1.4 billion if the fee rate is decreased to 3.3 percent, by at least $1.2 billion if decreased to 3.1 percent, and by at least $1.0 billion if decreased to 2.9 percent (see fig. 12).
We used the model to determine what would happen to the account balance if annual expenditures were to increase 5, 10, and 15 percent above levels expected in the baseline scenario. There is more than a 99 percent likelihood that the account balance would remain above the projected safety level in fiscal year 2024 in each of these scenarios. There is a 90 percent likelihood that the account balance would remain above the projected safety level by at least $1.3 billion if annual expenditures increased 5 percent, by at least $974 million if annual expenditures increased 10 percent, and by at least $622 million if annual expenditures increased 15 percent (see fig. 13).
We used the model to determine what would happen to the account balance if both the fee rate were decreased to 2.9 percent and annual expenditures were to increase 15 percent above expected levels. There is at least a 91 percent likelihood that the account balance would remain above the projected safety level in fiscal year 2024 in this scenario. There is a 90 percent likelihood the account balance would remain above the projected safety level in fiscal year 2024 by at least $25 million (see fig. 14).
Appendix III: Comments from the Department of Defense
Appendix IV: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Hynek Kalkus (Assistant Director), Heather Latta (Analyst-in-charge), Lynn Cothern, Elisabeth Helmer, Jessica Mausner, and Moon Parks made key contributions to this report. Martin De Alteriis, Jeff Isaacs, Christopher Keblitis, Grace Lui, Susan Murphy, Laurel Plume, Heather Rasmussen, Chanetta Reed, and Aldo Salerno provided technical assistance.
Why GAO Did This Study
The FMS program is one of the primary ways the U.S. government supports its foreign partners, by providing them with defense equipment and services. The program charges FMS customers overhead fees to cover the U.S. government's operating costs. They include the administrative fee for costs such as civilian employee salaries and facilities, and the CAS fee for the cost of contract quality assurance, management, and audits. In 1989, Congress excluded from administrative expenses certain costs associated with military personnel who work on the FMS program as well as unfunded civilian retirement and other benefits. As of May 2018, the administrative fee rate is 3.5 percent, and the CAS fee rate is 1.2 percent.
House Report 114-537 and Senate Report 114-255 included provisions that GAO review DSCA's collection and management of these fees. This report examines, for fiscal years 2007 to 2017, the balance of and controls over (1) the administrative account and (2) the CAS account. GAO analyzed Department of Defense (DOD) data and documents, modeled projections for the administrative account, and interviewed DOD officials.
What GAO Found
The Foreign Military Sales (FMS) administrative account balance grew by over 950 percent from fiscal years 2007 to 2017—from $391 million to $4.1 billion—due in part to insufficient management controls, including the lack of timely rate reviews. The Defense Security Cooperation Agency (DSCA) has some controls to manage the account balance. For example, DSCA has established a method for calculating a minimum desired balance to ensure it has sufficient funds to complete FMS cases despite uncertain future sales. At the end of fiscal year 2017, the account balance was $2.7 billion above this minimum. DSCA, however, has completed rate reviews less frequently than directed by its policy. Moreover, DSCA has not adopted the best practice of setting an upper bound for the account that would, along with the minimum level, provide a target range for the account balance. By not performing timely rate reviews or setting an upper bound, DSCA has limited its ability to prevent excessive balance growth. GAO modeling indicates that, even with a planned fee rate reduction to 3.2 percent, the account balance would likely remain above its minimum level through fiscal year 2024, including if annual expenditures increased by 15 percent more than expected. As such, the account has the potential to pay for additional expenses. These could include expenses first excluded by statute in 1989 at a time when the account balance was negative and which have since been paid from other appropriated funds. DOD told GAO it is willing to revisit these exclusions.
The FMS contract administration services (CAS) account grew from fiscal years 2007 to 2015 from $69 million to $981 million, due in part to insufficient management controls, including not setting an upper bound. The balances for fiscal years 2016 and 2017 overstated the amount of funds available due to a systems issue and limited related oversight. Since 2014, DSCA has implemented some controls for the CAS account, such as regular reviews of the account balance, but weaknesses remain. In particular, DSCA does not plan to follow its internal guidance to conduct the next CAS fee rate review within 5 years. DSCA also has inconsistently calculated the desired minimum level for the account. Finally, DSCA has not set an upper bound for the account to help officials follow internal guidance that directs them to determine when the balance is excessive and a fee rate reduction should be considered. As a result, DSCA is limited in its ability to make timely, appropriate decisions on the fee rate.
What GAO Recommends
Congress should consider redefining what it considers an allowable expense to be charged from the administrative account. GAO is making six recommendations to help DSCA improve its controls over both accounts, including completing more timely reviews and establishing a desired range for balance levels. DOD generally concurred.
Background
The LDA requires lobbyists to register with the Secretary of the Senate and the Clerk of the House and to file quarterly reports disclosing their respective lobbying activities. Lobbyists are required to file their registrations and reports electronically with the Secretary of the Senate and the Clerk of the House through a single entry point. Registrations and reports must be publicly available in downloadable, searchable databases from the Secretary of the Senate and the Clerk of the House. No specific statutory requirements exist for lobbyists to generate or maintain documentation in support of the information disclosed in the reports they file. However, guidance issued by the Secretary of the Senate and the Clerk of the House recommends that lobbyists retain copies of their filings and documentation supporting reported income and expenses for at least 6 years after they file their reports.
The LDA requires that the Secretary of the Senate and the Clerk of the House guide and assist lobbyists with the registration and reporting requirements and develop common standards, rules, and procedures for LDA compliance. The Secretary of the Senate and the Clerk of the House review the guidance semiannually. It was last revised January 31, 2017, to (among other issues) update the registration threshold to reflect changes in the Consumer Price Index, and clarify the identification of clients and covered officials and issues related to rounding income and expenses. The guidance provides definitions of LDA terms, elaborates on registration and reporting requirements, includes specific examples of different scenarios, and provides explanations of why certain scenarios prompt or do not prompt disclosure under the LDA. The offices of the Secretary of the Senate and the Clerk of the House told us they continue to consider information we report on lobbying disclosure compliance when they periodically update the guidance. In addition, they told us they e-mail registered lobbyists quarterly on common compliance issues and reminders to file reports by the due dates.
The LDA defines a lobbyist as an individual who is employed or retained by a client for compensation, who has made more than one lobbying contact (written or oral communication to covered officials, such as a high ranking agency official or a Member of Congress made on behalf of a client), and whose lobbying activities represent at least 20 percent of the time that he or she spends on behalf of the client during the quarter. Lobbying firms are persons or entities that have one or more employees who lobby on behalf of a client other than that person or entity. Figure 1 provides an overview of the registration and filing process.
Lobbying firms are required to register with the Secretary of the Senate and the Clerk of the House for each client if the firms receive or expect to receive more than $3,000 in income from that client for lobbying activities. Lobbyists are also required to submit an LD-2 quarterly report for each registration filed. The LD-2s contain information that includes: the name of the lobbyist reporting on quarterly lobbying activities; the name of the client for whom the lobbyist lobbied; a list of individuals who acted as lobbyists on behalf of the client during the reporting period; whether any lobbyists served in covered positions in the executive or legislative branch, such as high ranking agency officials or congressional staff positions, in the previous 20 years; codes describing general issue areas, such as agriculture and education; a description of the specific lobbying issues; houses of Congress and federal agencies lobbied during the reporting period; and reported income (or expenses for organizations with in-house lobbyists) related to lobbying activities during the quarter (rounded to the nearest $10,000).
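As an illustration only, these fields could be represented with a simple record type like the sketch below; the field names are ours and do not reflect an official LD-2 schema.

```python
from dataclasses import dataclass, field

@dataclass
class LD2Report:
    registrant: str                     # lobbyist or firm filing the report
    client: str
    lobbyists: list = field(default_factory=list)          # individuals who lobbied
    covered_positions: dict = field(default_factory=dict)  # lobbyist -> prior position
    issue_codes: list = field(default_factory=list)        # e.g., agriculture, education
    specific_issues: str = ""
    entities_lobbied: list = field(default_factory=list)   # chambers and agencies
    income_or_expenses: int = 0         # rounded to the nearest $10,000
```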
The LDA also requires lobbyists to report certain political contributions semiannually in the LD-203 report. These reports must be filed 30 days after the end of a semiannual period by each lobbying firm registered to lobby and by each individual listed as a lobbyist on a firm’s lobbying report. The lobbyists or lobbying firms must: list the name of each federal candidate or officeholder, leadership political action committee, or political party committee to which he or she contributed at least $200 in the aggregate during the semiannual period; report contributions made to presidential library foundations and presidential inaugural committees; report funds contributed to pay the cost of an event to honor or recognize an official who was previously in a covered position, funds paid to an entity named for or controlled by a covered official, and contributions to a person or entity in recognition of an official, or to pay the costs of a meeting or other event held by or in the name of a covered official; and certify that they have read and are familiar with the gift and travel rules of the Senate and House and that they have not provided, requested, or directed a gift or travel to a member, officer, or employee of Congress that would violate those rules.
The Secretary of the Senate and the Clerk of the House, along with USAO, are responsible for ensuring LDA compliance. The Secretary of the Senate and the Clerk of the House notify lobbyists or lobbying firms in writing when they are not complying with LDA reporting requirements. Subsequently, they refer to USAO those lobbyists who fail to provide an appropriate response. USAO researches these referrals and sends additional noncompliance notices to the lobbyists or lobbying firms, requesting that they file reports or terminate their registration. If USAO does not receive a response after 60 days, it decides whether to pursue a civil or criminal case against each noncompliant lobbyist. A civil case could lead to penalties of up to $200,000 for each violation, while a criminal case—usually pursued if a lobbyist’s noncompliance is found to be knowing and corrupt—could lead to a maximum of 5 years in prison.
Lobbyists Filed Disclosure Reports as Required for Most New Lobbying Registrations
Generally, under the LDA, within 45 days of being employed or retained to make a lobbying contact on behalf of a client, the lobbyist must register by filing an LD-1 form with the Secretary of the Senate and the Clerk of the House. Thereafter, the lobbyist must file quarterly disclosure (LD-2) reports detailing the lobbying activities. Of the 3,433 new registrations we identified for the third and fourth quarters of 2016 and the first and second quarters of 2017, we matched 2,995 of them (87.2 percent) to corresponding LD-2 reports filed within the same quarter as the registration. These results are consistent with the findings we have reported in prior reviews. We used the House lobbyists’ disclosure database as the source of the reports. We also used an electronic matching algorithm that allows for misspellings and other minor inconsistencies between the registrations and reports. Figure 2 shows lobbyists filed disclosure reports as required for most new lobbying registrations from 2010 through 2017.
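GAO does not publish the matching algorithm itself; as a stand-in, the hedged sketch below uses a character-level similarity ratio to tolerate misspellings and minor inconsistencies, with a purely illustrative threshold.

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Similarity between 0 and 1 after basic normalization."""
    return SequenceMatcher(None, a.strip().lower(), b.strip().lower()).ratio()

def is_probable_match(a: str, b: str, threshold: float = 0.85) -> bool:
    # The threshold is hypothetical; a real matcher would be tuned on the data.
    return name_similarity(a, b) >= threshold

# Minor spelling and punctuation differences still score highly:
print(name_similarity("Smith & Assocs., LLC", "Smith & Associates LLC"))
```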
For Most LD-2 Reports, Lobbyists Provided Documentation for Key Elements, Including Documentation for Their Income and Expenses
For selected elements of lobbyists’ LD-2 reports that can be generalized to the population of lobbying reports, our findings have generally been consistent from year to year. Most lobbyists reporting $5,000 or more in income or expenses provided written documentation, to varying degrees, for the reporting elements in their disclosure reports. Figure 3 shows that for most LD-2 reports, lobbyists provided documentation for income and expenses for sampled reports from 2010 through 2017. However, in recent years our findings have shown some variation in the estimated percentage of reports with documentation for income and expenses supporting lobbying activities. Specifically, our estimate for 2017 (99 percent) represents a statistically significant increase from 2016.
Figure 4 shows that for some LD-2 reports, lobbyists did not round their income or expenses as the guidance requires. In 2017, we estimate 25 percent of reports did not round reported income or expenses according to the guidance. We have found that rounding difficulties have been a recurring issue on LD-2 reports from 2010 through 2017. As we previously reported, several lobbyists who listed expenses told us that based on their reading of the LD-2 form they believed they were required to report the exact amount. While this is not consistent with the LDA and the guidance, this may be a source of some of the confusion regarding rounding errors. In 2016, the guidance was updated to include an additional example about rounding expenses to the nearest $10,000. In 2017, 11 percent of lobbyists reported $10,000 or more in income or expenses.
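As a worked illustration of the rounding rule, the sketch below rounds a reported amount to the nearest $10,000; how the guidance treats exact midpoints is our assumption (half up), and the example figures are hypothetical.

```python
import math

def round_to_nearest_10k(amount: float) -> int:
    # Half-up rounding at midpoints (an assumption); note that Python's
    # built-in round() uses banker's rounding, which would instead send
    # exact midpoints to the nearest even multiple of $10,000.
    return int(math.floor(amount / 10_000 + 0.5) * 10_000)

print(round_to_nearest_10k(24_999))  # 20000 -- not the exact $24,999
print(round_to_nearest_10k(26_500))  # 30000
```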
The LDA requires lobbyists to disclose lobbying contacts made with federal agencies on behalf of the client for the reporting period. This year, of the 98 LD-2 reports in our sample, 51 reports disclosed lobbying activities at federal agencies. Of those, lobbyists provided documentation for all lobbying activities at executive branch agencies for 34 LD-2 reports. Figures 5 through 8 show that lobbyists for most LD-2 reports provided documentation for selected elements of their LD-2 reports from 2010 through 2017.
For Most Lobbying Disclosure Reports (LD-2), Lobbyists Filed Political Contribution Reports (LD-203) for All Listed Lobbyists
Lobbyists for an estimated 93 percent of LD-2 reports filed year-end 2016 LD-203 political contribution reports for all lobbyists listed on the report, as required. Figure 9 shows that lobbyists for most lobbying firms filed contribution reports as required in our sample from 2010 through 2017. All individual lobbyists and lobbying firms reporting lobbying activity are required to file LD-203 reports semiannually, even if they have no contributions to report, because they must certify compliance with the gift and travel rules.
For Some LD-2 Reports, Lobbyists May Have Failed to Disclose Their Previously Held Covered Positions
The LDA requires a lobbyist to disclose previously held covered positions in the executive or legislative branch, such as high ranking agency officials and congressional staff, when first registering as a lobbyist for a new client. This can be done either on a new LD-1 or on the quarterly LD-2 filing when the individual is added as a new lobbyist. This year, we estimate that 15 percent of all LD-2 reports may not have properly disclosed previously held covered positions as required. As in our other reports, some lobbyists were still unclear about the need to disclose certain covered positions, such as paid congressional internships or certain executive agency positions. Figure 10 shows the extent to which lobbyists may not have properly disclosed one or more covered positions as required from 2010 through 2017.
Some Lobbyists Amended Their Disclosure Reports after We Contacted Them
Lobbyists amended 15 of the 98 LD-2 disclosure reports in our original sample to change previously reported information after we contacted them. Of the 15 reports, 7 were amended after we notified the lobbyists of our review, but before we met with them. An additional 8 of the 15 reports were amended after we met with the lobbyists to review their documentation. We consistently find a notable number of amended LD-2 reports in our sample each year following notification of our review. This suggests that sometimes our contact spurs lobbyists to more closely scrutinize their reports than they would have without our review. Table 1 lists reasons lobbying firms in our sample amended their LD-1 or LD-2 reports.
Most LD-203 Contribution Reports Disclosed Political Contributions Listed in the Federal Election Commission Database
As part of our review, we compared contributions listed on lobbyists’ and lobbying firms’ LD-203 reports against those political contributions reported in the Federal Election Commission (FEC) database to identify whether political contributions were omitted on LD-203 reports in our sample. The sample of LD-203 reports we reviewed contained 80 reports with contributions and 80 reports without contributions. We estimate that overall for 2017, lobbyists failed to disclose one or more reportable contributions on 12 percent of reports. Additionally, 10 LD-203 reports were amended in response to our review. For this element in prior reports, we reported an estimated minimum percentage of reports based on a one-sided 95 percent confidence interval rather than the estimated proportion as shown here. Estimates in the table have a maximum margin of error of 11 percentage points. The year-to-year differences are not statistically significant.
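Conceptually, this cross-check is a set comparison; a hedged sketch follows, with illustrative record fields (in practice, matching FEC and LD-203 records also requires normalizing names, dates, and amounts).

```python
# Each contribution is represented here as a (recipient, amount) pair;
# the records below are hypothetical.
fec_contributions = {("Candidate A", 500), ("Committee B", 1_000)}
ld203_contributions = {("Candidate A", 500)}

omitted = fec_contributions - ld203_contributions
if omitted:
    print("Possible undisclosed contributions:", sorted(omitted))
```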
Table 2 illustrates that from 2010 through 2017 most lobbyists disclosed FEC reportable contributions on their LD-203 reports as required.
Most Lobbying Firms Found it Easy to Comply with Disclosure Requirements and Understood Lobbying Terms
As part of our review, 88 different lobbying firms were included in our 2017 sample of LD-2 disclosure reports. Consistent with prior reviews, most lobbying firms reported that they found it “very easy” or “somewhat easy” to comply with reporting requirements. Of the 88 different lobbying firms in our sample, 34 reported that the disclosure requirements were “very easy,” 40 reported them “somewhat easy,” and 13 reported them “somewhat difficult” or “very difficult” (see figure 11).
Most lobbying firms we surveyed rated the definitions of terms used in LD-2 reporting as “very easy” or “somewhat easy” to understand with regard to meeting their reporting requirements. This is consistent with prior reviews. Figures 12 through 16 show what lobbyists reported as their ease of understanding the terms associated with LD-2 reporting requirements from 2012 through 2017.
U.S. Attorney’s Office for the District of Columbia Actions to Enforce the LDA
U.S. Attorney’s Office’s Resources and Authorities to Enforce LDA Compliance
U.S. Attorney’s Office (USAO) officials stated that they continue to have sufficient personnel resources and authority under the LDA to enforce reporting requirements. This includes imposing civil or criminal penalties for noncompliance. Noncompliance refers to a lobbyist’s or lobbying firm’s failure to comply with the LDA. However, USAO noted that the number of assigned personnel has decreased due to attrition.
USAO officials stated that lobbyists resolve their noncompliance issues by filing LD-2 reports, LD-203 reports, or LD-2 amendments, or by terminating their registration, depending on the issue. Resolving referrals can take anywhere from a few days to years, depending on the circumstances. During this time, USAO creates summary reports from its database to track the overall number of referrals that are pending or become compliant as a result of the lobbyist receiving an e-mail, phone call, or noncompliance letter. Referrals remain in the pending category until they are resolved. The pending category is divided into the following areas: “initial research for referral,” “responded but not compliant,” “no response/waiting for a response,” “bad address,” and “unable to locate.” USAO attempts to review and update all pending cases every 6 months.
USAO focuses its enforcement efforts primarily on the “responded but not compliant” and the “no response/waiting for a response” groups. Officials told us that if USAO is unable to contact a noncompliant firm or its lobbyist after several attempts, it confers with both the Secretary of the Senate and the Clerk of the House to determine whether further action is needed.
In the cases where the lobbying firm is repeatedly referred for not filing disclosure reports but does not appear to be actively lobbying, USAO suspends enforcement actions. USAO officials reported they will continue to monitor these firms and will resume enforcement actions if required.
Status of LD-2 Enforcement Efforts
USAO received 3,213 referrals from both the Secretary of the Senate and the Clerk of the House for failure to comply with LD-2 reporting requirements cumulatively for filing years 2009 through 2015. Table 4 shows the number and status of the referrals received and the number of enforcement actions taken by USAO to bring lobbying firms into compliance. Enforcement actions include USAO attempts to bring lobbyists into compliance through letters, e-mails, and calls. About 45 percent (1,450 of 3,213) of the total referrals received are now compliant because lobbying firms either filed their reports or terminated their registrations. In addition, some of the referrals were found to be compliant when USAO received the referral. Therefore, no action was taken. This may occur when lobbying firms respond to the contact letters from the Secretary of the Senate and the Clerk of the House after USAO received the referrals. About 55 percent (1,752 of 3,213) of referrals are pending further action because USAO could not locate the lobbying firm, did not receive a response from the firm after an enforcement action, or plans to conduct additional research to determine if it can locate the lobbying firm. The remaining 11 referrals did not require action or were suspended because the lobbyist or client was no longer in business or the lobbyist was deceased.
Status of LD-203 Referrals
LD-203 referrals consist of two types: (1) LD-203(R) referrals represent lobbying firms that have failed to file LD-203 reports for their lobbying firm, and (2) LD-203 referrals represent the lobbyists at the lobbying firm who have failed to file their individual LD-203 reports as required. USAO received 2,255 LD-203(R) referrals (cumulatively from 2009 through 2015) and 3,716 LD-203 referrals (cumulatively from 2009 through 2014) from the Secretary of the Senate and the Clerk of the House for lobbying firms and lobbyists for noncompliance with reporting requirements. LD-203 referrals are more complicated than LD-2 referrals because both the lobbying firm and the individual lobbyists within the firm are each required to file an LD-203. Lobbyists employed by a lobbying firm typically use the firm’s contact information and not the lobbyists’ personal contact information. This makes it difficult to locate a lobbyist who is not in compliance and may have left the firm.
USAO officials reported that, while many firms have assisted USAO by providing contact information for lobbyists, they are not required to do so. According to officials, USAO has difficulty pursuing LD-203 referrals for lobbyists who have departed a firm without leaving forwarding contact information with the firm. While USAO utilizes web searches and online databases, including social media, to find these missing lobbyists, it is not always successful. Table 5 shows the status of LD-203 (R) referrals received and the number of enforcement actions taken by USAO to bring lobbying firms into compliance. A little more than 44 percent (998 of 2,255) of the lobbying firms referred by the Secretary of the Senate and Clerk of the House for noncompliance from calendar years 2009 through 2015 are now considered compliant because firms either filed their reports or terminated their registrations. About 56 percent (1,251 of 2,255) of the referrals are pending further action.
Table 6 shows that USAO received 3,716 LD-203 referrals from the Secretary of the Senate and Clerk of the House for lobbyists who failed to comply with LD-203 reporting requirements for calendar years 2009 through 2014. It also shows the status of the referrals received and the number of enforcement actions taken by USAO to bring lobbyists into compliance. In addition, table 6 shows that about 47 percent (1,741 of 3,716) of the lobbyists had come into compliance by filing their reports or are no longer registered as a lobbyist. About 53 percent (1,966 of 3,716) of the referrals are pending further action because USAO could not locate the lobbyist, did not receive a response from the lobbyist, or plans to conduct additional research to determine if it can locate the lobbyist.
Table 7 shows that USAO received LD-203 referrals from the Secretary of the Senate and the Clerk of the House for 4,991 lobbyists who failed to comply with LD-203 reporting requirements for any filing year from 2009 through 2014. It also shows the status of compliance for individual lobbyists listed on referrals to USAO. About 51 percent (2,526 of 4,991) of the lobbyists had come into compliance by filing their reports or are no longer registered as lobbyists. About 49 percent (2,465 of 4,991) of the referrals are pending action because USAO could not locate the lobbyists, did not receive a response from the lobbyists, or plans to conduct additional research to determine if it can locate the lobbyists.
USAO officials said that many of the pending LD-203 referrals represent lobbyists who no longer lobby for the lobbying firms affiliated with the referrals, even though these lobbying firms may be listed on the lobbyist’s LD-203 report.
Status of Enforcement Settlement Actions
According to USAO officials, lobbyists and lobbying firms who repeatedly fail to file reports are labeled chronic offenders and referred to one of the assigned attorneys for follow-up. USAO also receives complaints regarding lobbyists who are allegedly lobbying but never filed an LD-203. USAO officials added that USAO monitors and investigates chronic offenders to ultimately determine the appropriate enforcement actions, which may include settlement or other civil actions.
Regarding the four active cases involving chronic offenders that USAO officials reported to us in 2016, the officials noted that the agency is investigating one case, negotiating a resolution that will include a civil penalty in another case, and closing the two other investigations without further action. In addition, USAO is reviewing its records to identify additional chronic offenders for further action due to noncompliance.
Agency Comments
We provided a draft of this report to the Department of Justice for review and comment. The Department of Justice provided technical comments, which we incorporated as appropriate.
We are sending copies of this report to the Attorney General, Secretary of the Senate, Clerk of the House of Representatives, and interested congressional committees and members. In addition, this report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-2717 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.
Appendix I: Objectives, Scope, and Methodology
Our objectives were to determine the extent to which lobbyists are able to demonstrate compliance with the Lobbying Disclosure Act of 1995, as amended (LDA) by providing documentation to support information contained on registrations and reports filed under the LDA; to identify challenges and potential improvements to compliance, if any; and to describe the resources and authorities available to the U.S. Attorney’s Office for the District of Columbia (USAO), its role in enforcing LDA compliance, and the efforts it has made to improve LDA enforcement.
We used information in the lobbying disclosure database maintained by the Clerk of the House of Representatives (Clerk of the House). To assess whether these disclosure data were sufficiently reliable for the purposes of this report, we reviewed relevant documentation and consulted with knowledgeable officials. Although registrations and reports are filed through a single web portal, each chamber subsequently receives copies of the data and follows different data-cleaning, processing, and editing procedures before storing the data in either individual files (in the House) or databases (in the Senate). Currently, there is no means of reconciling discrepancies between the two databases caused by the differences in data processing. For example, Senate staff told us during previous reviews they set aside a greater proportion of registration and report submissions than the House for manual review before entering the information into the database. As a result, the Senate database would be slightly less current than the House database on any given day pending review and clearance.
House staff told us during previous reviews that they rely heavily on automated processing. In addition, while they manually review reports that do not perfectly match information on file for a given lobbyist or client, staff members approve and upload such reports as originally filed by each lobbyist, even if the reports contain errors or discrepancies (such as a variant on how a name is spelled). Nevertheless, we do not have reasons to believe that the content of the Senate and House systems would vary substantially. Based on interviews with knowledgeable officials and a review of documentation, we determined that House disclosure data were sufficiently reliable for identifying a sample of quarterly disclosure reports (LD-2) and for assessing whether newly filed lobbyists also filed required reports. We used the House database for sampling LD-2 reports from the third and fourth quarters of 2016 and the first and second quarters of 2017, as well as for sampling year-end 2016 and midyear 2017 political contributions reports (LD-203). We also used the database for matching quarterly registrations with filed reports. We did not evaluate the Offices of the Secretary of the Senate or the Clerk of the House, both of which have key roles in the lobbying disclosure process. However, we did consult with officials from each office. They provided us with general background information at our request.
To assess the extent to which lobbyists could provide evidence of their compliance with reporting requirements, we examined a stratified random sample of 98 LD-2 reports from the third and fourth quarters of 2016 and the first and second quarters of 2017. The sample size of 98 LD-2 reports for this year’s review represents an increase from the sample size selected for the 2015 and 2016 reviews, and is a return to the sample size selected in reviews prior to 2015. We increased the sample size because, in 2016, we observed a change in the estimate of the percentage of reports that had documentation of income and expenses (83 percent down from 92 percent in 2015). At that time, we were unable to state that this was a statistically significant change because, in part, the reduced sample size of 80 did not give us enough power to detect and report on the change of that size. We excluded reports with no lobbying activity or with income or expenses of less than $5,000 from our sampling frame. We drew our sample from 45,818 activity reports filed for the third and fourth quarters of 2016 and the first and second quarters of 2017 available in the public House database, as of our final download date for each quarter.
Our sample of LD-2 reports was not designed to detect differences over time. However, we conducted tests of significance for changes from 2010 to 2017 for the generalizable elements of our review. We found that results were generally consistent from year to year, with few statistically significant changes after using a Bonferroni adjustment to account for multiple comparisons. For this year's review, the estimated change from 2016 to 2017 in the percentage of LD-2 reports that provided written documentation for income and expenses is notable. In recent years, our findings have shown some variation in the estimated percentage of reports with documentation. Specifically, our estimate for 2017 (99 percent) represents a statistically significant increase from 2016. These changes are identified in the report. The general inability to detect significant differences from year to year in our results may be related to sampling error alone or to the nature of our sample, which was relatively small and designed only for cross-sectional analysis.
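To illustrate the mechanics of such a test, the sketch below computes a conventional two-proportion z statistic and a Bonferroni-adjusted significance threshold. It is a minimal sketch with hypothetical inputs: GAO's actual tests account for the stratified, weighted sample design, and the comparison count k shown here is a placeholder.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Z statistic for the difference between two independent
    proportions, using the pooled-variance approximation."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical inputs: a 99 percent estimate from 98 reports versus
# an 83 percent estimate from 80 reports.
z = two_proportion_z(0.99, 98, 0.83, 80)

# Bonferroni adjustment: with k comparisons, test each at alpha / k.
alpha, k = 0.05, 8  # k is a placeholder comparison count
print(f"z = {z:.2f}, per-test significance level = {alpha / k:.4f}")
```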
Our sample is based on a stratified random selection and is only one of a large number of samples that we might have drawn. Because each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval. This interval would contain the actual population value for 95 percent of the samples that we could have drawn. The percentage estimates for LD-2 reports have 95 percent confidence intervals of plus or minus 12 percentage points or fewer around the estimate itself.
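The interval arithmetic can be shown with the usual normal approximation. The sketch below assumes simple random sampling and a hypothetical 85 percent estimate; it ignores the stratification, weighting, and finite-population corrections behind the intervals reported here, so it will not reproduce the exact plus-or-minus 12-point bound.

```python
import math

def proportion_ci(p_hat, n, z=1.96):
    """Two-sided 95 percent confidence interval for an estimated
    proportion (normal approximation, simple random sampling)."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return max(0.0, p_hat - z * se), min(1.0, p_hat + z * se)

low, high = proportion_ci(0.85, 98)  # hypothetical estimate and sample size
print(f"95% CI: [{low:.0%}, {high:.0%}]")
```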
We contacted all the lobbyists and lobbying firms in our sample and, using a structured web-based survey, asked them to confirm key elements of the LD-2 report and whether they could provide written documentation for key elements in their reports, including the amount of income reported for lobbying activities; the amount of expenses reported on lobbying activities; the names of the lobbyists listed in the report; the houses of Congress and federal agencies that they lobbied; and the issue codes listed to describe their lobbying activity.
After reviewing the survey results for completeness, we interviewed lobbyists and lobbying firms to review the documentation they reported as having on their online survey for selected elements of their respective LD-2 report.
Prior to each interview, we conducted a search to determine whether lobbyists properly disclosed their covered position as required by the LDA. We reviewed the lobbyists' previous work histories by searching lobbying firms' websites, LinkedIn, Leadership Directories, Legistorm, and Google. Prior to 2008, lobbyists were only required to disclose covered official positions held within 2 years of registering as a lobbyist for the client. The Honest Leadership and Open Government Act of 2007 amended that time frame to require disclosure of covered official positions held within 20 years of the date the lobbyist first lobbied on behalf of the client. Lobbyists are required to disclose previously held covered official positions either on the client registration (LD-1) or on an LD-2 report. Consequently, those who held covered official positions may have disclosed the information on the LD-1 or an LD-2 report filed prior to the report we examined as part of our random sample. Therefore, where we found evidence that a lobbyist previously held a covered official position, and that information was not disclosed on the LD-2 report under review, we conducted an additional review of the publicly available Secretary of the Senate or Clerk of the House database to determine whether the lobbyist properly disclosed the covered official position on a prior report or LD-1. Finally, if a lobbyist appeared to hold a covered position that was not disclosed, we asked for an explanation at the interview with the lobbying firm to ensure that our research was accurate.
In previous reports, we reported the lower bound of a 90 percent confidence interval to provide a minimum estimate of omitted covered positions and omitted contributions with a 95 percent confidence level. We did so to account for the possibility that our searches may have failed to identify all possible omitted covered positions and contributions. As we have developed our methodology over time, we are more confident in the comprehensiveness of our searches for these items. Accordingly, this report presents the estimated percentages for omitted contributions and omitted covered positions, rather than the minimum estimates. As a result, percentage estimates for these items will differ slightly from the minimum percentage estimates presented in prior reports.
In addition to examining the content of the LD-2 reports, we confirmed whether the most recent LD-203 reports had been filed for each firm and lobbyist listed on the LD-2 reports in our random sample. Although this review represents a random selection of lobbyists and firms, it is not a direct probability sample of firms filing LD-2 reports or lobbyists listed on LD-2 reports. As such, we did not estimate the likelihood that LD-203 reports were appropriately filed for the population of firms or lobbyists listed on LD-2 reports.
To determine if the LDA’s requirement for lobbyists to file a report in the quarter of registration was met for the third and fourth quarters of 2016 and the first and second quarters of 2017, we used data filed with the Clerk of the House to match newly filed registrations with corresponding disclosure reports. Using an electronic matching algorithm that includes strict and loose text matching procedures, we identified matching disclosure reports for 2,995, or 87.2 percent, of the 3,433 newly filed registrations. We began by standardizing client and lobbyist names in both the report and registration files (including removing punctuation and standardizing words and abbreviations, such as “company” and “CO”). We then matched reports and registrations using the House identification number (which is linked to a unique lobbyist-client pair), as well as the names of the lobbyist and client.
For reports we could not match by identification number and standardized name, we also attempted to match reports and registrations by client and lobbyist name, allowing for variations in the names to accommodate minor misspellings or typos. For these cases, we used professional judgment to determine whether cases with typos were sufficiently similar to consider as matches. We could not readily identify matches in the report database for the remaining registrations using electronic means.
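The general shape of such a two-pass procedure, a strict pass on identification number plus standardized names followed by a loose fuzzy-match pass, is sketched below. This is not GAO's actual algorithm: the field names (house_id, client), the abbreviation table, and the similarity threshold are all hypothetical.

```python
import re
from difflib import SequenceMatcher

# Hypothetical abbreviation table; the actual standardization rules
# were more extensive.
ABBREVIATIONS = {"company": "co", "corporation": "corp",
                 "incorporated": "inc"}

def standardize(name):
    """Lowercase, strip punctuation, and normalize abbreviations."""
    words = re.sub(r"[^\w\s]", " ", name.lower()).split()
    return " ".join(ABBREVIATIONS.get(w, w) for w in words)

def match_registrations(registrations, reports, threshold=0.9):
    """Strict pass on ID plus standardized client name, then a loose
    fuzzy pass to absorb minor misspellings and typos."""
    index = {(r["house_id"], standardize(r["client"])): r for r in reports}
    matched, unmatched = [], []
    for reg in registrations:
        key = (reg["house_id"], standardize(reg["client"]))
        if key in index:  # strict match
            matched.append((reg, index[key]))
            continue
        best, score = None, 0.0
        for rep in reports:  # loose match
            s = SequenceMatcher(None, standardize(reg["client"]),
                                standardize(rep["client"])).ratio()
            if s > score:
                best, score = rep, s
        if best is not None and score >= threshold:
            matched.append((reg, best))  # candidate for manual judgment
        else:
            unmatched.append(reg)
    return matched, unmatched
```

As described above, cases matched only on the loose pass would still call for professional judgment before being counted as matches.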
To assess the accuracy of the LD-203 reports, we analyzed stratified random samples of LD-203 reports from the 30,594 total LD-203 reports. The first sample contains 80 of the 9,474 reports with political contributions, and the second contains 80 of the 20,335 reports listing no contributions. Each sample contains 40 reports from the year-end 2016 filing period and 40 reports from the midyear 2017 filing period. These samples allow us to generalize estimates in this report to either the population of LD-203 reports with contributions or the population of reports without contributions, with 95 percent confidence intervals of plus or minus 11 percentage points or fewer. Although our sample of LD-203 reports was not designed to detect differences over time, we conducted tests of significance for changes from 2010 to 2017 and found no statistically significant differences after adjusting for multiple comparisons.
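Before turning to the results of those tests, the stratified design described above can be sketched as follows; the field names and seed are hypothetical, and each cell is assumed to contain at least 40 reports.

```python
import random

def sample_ld203(reports, per_cell=40, seed=1):
    """Draw per_cell reports from each stratum-by-period cell:
    (with/without contributions) x (year-end 2016, midyear 2017)."""
    rng = random.Random(seed)
    sample = []
    for has_contributions in (True, False):
        for period in ("year-end 2016", "midyear 2017"):
            cell = [r for r in reports
                    if r["has_contributions"] == has_contributions
                    and r["period"] == period]
            # Assumes len(cell) >= per_cell, per the design above.
            sample.extend(rng.sample(cell, per_cell))
    return sample
```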
While the results provide some confidence that apparent fluctuations in our results across years are likely attributable to sampling error, the inability to detect significant differences may also be related to the nature of our sample, which was relatively small and designed only for cross-sectional analysis. We analyzed the contents of the LD-203 reports and compared them to contribution data found in the publicly available Federal Election Commission's (FEC) political contribution database. We consulted with staff at the FEC responsible for administering the database and determined that the data are sufficiently reliable for the purposes of our reporting objectives.
We compared the FEC-reportable contributions on the LD-203 reports with information in the FEC database. The verification process required text and pattern matching procedures, so we used professional judgment when assessing whether an individual listed in the FEC database was the same individual who filed an LD-203 report. For contributions reported in the FEC database but not on the LD-203 report, we asked the lobbyists or organizations to explain why the contribution was not listed on the LD-203 report or to provide documentation of those contributions. As with covered positions on LD-2 disclosure reports, we cannot be certain that our review identified all cases of FEC-reportable contributions that were inappropriately omitted from a lobbyist's LD-203 report. We did not estimate the percentage of other, non-FEC political contributions that were omitted because they tend to constitute a small minority of all listed contributions and cannot be verified against an external source.
To identify challenges to compliance, we used a structured web-based survey to obtain the views of the 88 different lobbying firms included in our sample. The number of firms (88) is smaller than our original sample of 98 reports because some lobbying firms had more than one LD-2 report included in our sample. We calculated responses based on the number of different lobbying firms that we contacted rather than the number of interviews. Prior to our calculations, we removed duplicate lobbying firms, keeping the record with the most recent response date; for cases with the same response date, the decision rule was to keep the case with the smallest assigned case identification number. To obtain firms' views, we asked them to rate the ease of complying with the LD-2 disclosure requirements using a scale of "very easy," "somewhat easy," "somewhat difficult," or "very difficult." Using the same scale, we also asked them to rate the ease of understanding the terms associated with LD-2 reporting requirements.
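The deduplication decision rule described above is simple enough to state in code. The sketch below is illustrative only, with hypothetical field names (firm, response_date, case_id).

```python
def deduplicate_firms(responses):
    """Keep one record per firm: the most recent response date wins,
    and ties are broken by the smallest case identification number."""
    best = {}
    for r in responses:
        kept = best.get(r["firm"])
        if (kept is None
                or r["response_date"] > kept["response_date"]
                or (r["response_date"] == kept["response_date"]
                    and r["case_id"] < kept["case_id"])):
            best[r["firm"]] = r
    return list(best.values())
```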
To describe the resources and authorities available to the U.S. Attorney's Office for the District of Columbia (USAO) and its efforts to improve its LDA enforcement, we interviewed USAO officials. We obtained information on the capabilities of the system that officials established to track and report compliance trends and referrals, and on other practices established to focus resources on LDA enforcement. USAO provided us with reports from the tracking system on the number and status of referrals and chronically noncompliant lobbyists and lobbying firms.
The mandate does not require us to identify lobbyists who failed to register and report in accordance with the LDA requirements, or to determine, for those lobbyists who did register and report, whether all lobbying activity or contributions were disclosed. Such work was therefore outside the scope of our audit.
We conducted this performance audit from April 2017 to March 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: List of Lobbyists and Clients for Sampled Lobbying Disclosure Reports
The random sample of lobbying disclosure reports we selected was based on a unique combination of House ID, lobbyist, and client names (see table 8).
Appendix III: List of Sampled Lobbying Contribution Reports with and without Contributions Listed
See table 9 for a list of the lobbyists and lobbying firms from our random sample of lobbying contribution reports with contributions. See table 10 for a list of the lobbyists and lobbying firms from our random sample of lobbying contribution reports without contributions.
Appendix IV: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Clifton G. Douglas Jr. (Assistant Director), Shirley Jones (Assistant General Counsel), and Ulyana Panchishin (Analyst-in-Charge) supervised the development of this report. James Ashley, Ann Czapiewski, Krista Loose, Kathleen Jones, Amanda Miller, Sharon Miller, Stewart W. Small, and Kayla L. Robinson made key contributions to this report.
Assisting with lobbyist file reviews were Justine Augeri, Matthew Bond, James A. Howard, Jesse Jordan, Sherrice Kerns, Dalton Matthew Lauderback, Alexandria Palmer, Alan Rozzi, Shane Spencer, Jessica Walker, Ralanda Winborn, and Kate Wulff.
Related GAO Products
Lobbying Disclosure: Observations on Lobbyists' Compliance with New Disclosure Requirements. GAO-08-1099. Washington, D.C.: September 30, 2008.

2008 Lobbying Disclosure: Observations on Lobbyists' Compliance with Disclosure Requirements. GAO-09-487. Washington, D.C.: April 1, 2009.

2009 Lobbying Disclosure: Observations on Lobbyists' Compliance with Disclosure Requirements. GAO-10-499. Washington, D.C.: April 1, 2010.

2010 Lobbying Disclosure: Observations on Lobbyists' Compliance with Disclosure Requirements. GAO-11-452. Washington, D.C.: April 1, 2011.

2011 Lobbying Disclosure: Observations on Lobbyists' Compliance with Disclosure Requirements. GAO-12-492. Washington, D.C.: March 30, 2012.

2012 Lobbying Disclosure: Observations on Lobbyists' Compliance with Disclosure Requirements. GAO-13-437. Washington, D.C.: April 1, 2013.

2013 Lobbying Disclosure: Observations on Lobbyists' Compliance with Disclosure Requirements. GAO-14-485. Washington, D.C.: May 28, 2014.

2014 Lobbying Disclosure: Observations on Lobbyists' Compliance with Disclosure Requirements. GAO-15-310. Washington, D.C.: March 26, 2015.

2015 Lobbying Disclosure: Observations on Lobbyists' Compliance with Disclosure Requirements. GAO-16-320. Washington, D.C.: March 24, 2016.

2016 Lobbying Disclosure: Observations on Lobbyists' Compliance with Disclosure Requirements. GAO-17-385. Washington, D.C.: March 31, 2017.
The LDA, as amended, requires lobbyists to file quarterly disclosure reports and semiannual reports on certain political contributions. The law also includes a provision for GAO to annually audit lobbyists' compliance with the LDA. GAO's objectives were to (1) determine the extent to which lobbyists can demonstrate compliance with disclosure requirements, (2) identify challenges to compliance that lobbyists report, and (3) describe the resources and authorities available to USAO in its role in enforcing LDA compliance, and the efforts USAO has made to improve enforcement. This is GAO's 11th report under the provision.
GAO reviewed a stratified random sample of 98 quarterly disclosure LD-2 reports filed for the third and fourth quarters of calendar year 2016 and the first and second quarters of calendar year 2017. GAO also reviewed two random samples totaling 160 LD-203 reports from year-end 2016 and midyear 2017. This methodology allowed GAO to generalize to the population of 45,818 disclosure reports with $5,000 or more in lobbying activity, and 30,594 reports of federal political campaign contributions. GAO also met with officials from USAO to obtain status updates on its efforts to focus resources on lobbyists who fail to comply.
GAO is not making any recommendations in this report. GAO provided a draft of this report to the Department of Justice for review and comment. The Department of Justice provided technical comments, which GAO incorporated as appropriate.
What GAO Found
For the 2017 reporting period, most lobbyists provided documentation for key elements of their disclosure reports to demonstrate compliance with the Lobbying Disclosure Act of 1995, as amended (LDA). For lobbying disclosure (LD-2) reports and political contributions (LD-203) reports filed during the third and fourth quarter of 2016 and the first and second quarter of 2017, GAO estimates that
87 percent of lobbyists filed reports as required for the quarter in which they first registered; the figure below describes the filing process and enforcement;
99 percent of all lobbyists who filed (up from 83 percent in 2016) could provide documentation for income and expenses; and
93 percent filed year-end 2016 LD-203 reports as required.
These findings are generally consistent with prior reports GAO issued for the 2010 through 2016 reporting periods. However, in recent years GAO's findings showed some variation in the estimated percentage of reports with supporting documentation. For example, the 2017 estimate of lobbyists who could document income and expenses is notable because it represents a statistically significant increase from 2016.
As in GAO's other reports, some lobbyists were still unclear about the need to disclose certain previously held covered positions, such as paid congressional internships or certain executive agency positions. GAO estimates that 15 percent of all LD-2 reports may not have properly disclosed previously held covered positions. On the other hand, over the past several years of reporting on lobbying disclosure, GAO found that most lobbyists in the sample rated the terms associated with LD-2 reporting as “very easy” or “somewhat easy” to understand.
The U.S. Attorney's Office for the District of Columbia (USAO) stated it has sufficient resources and authority to enforce compliance with the LDA. USAO continued its efforts to bring lobbyists into compliance by reminding them to file reports or by applying civil penalties. |
Background
Requirements for Innovation Center Models Implemented under Section 1115A
Section 1115A establishes certain requirements for the Innovation Center that relate to the selection of models, use of resources, and evaluation of models. These requirements include:

consulting with representatives of relevant federal agencies, as well as clinical and analytical experts in medicine or health care management, when carrying out its duties as described in the law;

ensuring models address deficits in care that have led to poor clinical outcomes or potentially avoidable spending;

making no less than $25 million of the Innovation Center's dedicated funding available for model design, implementation, and evaluation each fiscal year starting in 2011;

evaluating each model to analyze its effects on spending and quality of care, and making these evaluations public; and

modifying or terminating a model any time after testing and evaluation has begun unless it determines that the model either improves quality of care without increasing spending levels, reduces spending without reducing quality, or both.
Under section 1115A, certain requirements applicable to previous CMS demonstrations are inapplicable to models tested under the Innovation Center. For example, while prior demonstrations generally required congressional approval in order to be expanded, section 1115A allows CMS to expand Innovation Center models—including on a nationwide basis—through the rulemaking process if the following conditions are met: (1) the agency determines that the expansion is expected to reduce spending without reducing the quality of care, or improve quality without increasing spending; (2) CMS’s Office of the Actuary certifies that the expansion will reduce or not increase net program spending; and (3) the agency determines that the expansion would not deny or limit coverage or benefits for beneficiaries. In addition, certain requirements previously cited by the Medicare Payment Advisory Commission as administrative barriers to the timely completion of demonstrations are inapplicable. Specifically, section 1115A provides the following:
HHS cannot require that an Innovation Center model initially be budget neutral—that is, designed so that estimated federal expenditures under the model are expected to be no more than they would have been without the model—prior to approving a model for testing.
Certain CMS actions in testing and expanding Innovation Center models cannot be subject to administrative or judicial review.
The Paperwork Reduction Act—which generally requires agencies to submit all proposed information collection efforts to the Office of Management and Budget (OMB) for approval and provide a 60-day period for public comment when they want to collect data on 10 or more individuals—does not apply to Innovation Center models.
Innovation Center Staffing and Organization
The Innovation Center uses a combination of staff and contractors to test models. Since the center became operational in November 2010, the number of staff increased steadily through the end of fiscal year 2016. (See fig. 1.) As of September 30, 2017, there were 617 staff—a slight decrease in the number of staff from the end of the prior fiscal year. Officials indicated that, in the future, changes in the model portfolio may require additional staff to manage and support model development and implementation. However, officials do not anticipate needing to increase staffing levels at the same pace as they did between fiscal years 2011 and 2016. Additionally, the Innovation Center uses third-party contactors to perform functions related to the implementation of models and to perform evaluations of the changes in the quality of care furnished and program spending under a model.
The Innovation Center has organized its 617 staff members primarily into eight groups and the Office of the Director. Four of the eight groups are responsible for coordinating the development and implementation of models. Staff in these four groups primarily lead efforts in developing model designs and obtaining approval for their models from CMS and HHS. Once a model is approved, staff coordinate the remaining implementation steps, including soliciting and selecting participants and overseeing the model during the testing and evaluation period. The other four groups perform key functions that support model development and implementation, such as reviewing ideas submitted for consideration as possible models, overseeing the evaluations of models, providing feedback to model participants about their performance, disseminating lessons learned across models, and monitoring budget resources. The Office of the Director, in general, has oversight responsibilities for the models led by these groups. Table 1 provides information on the staffing groups within the Innovation Center.
Innovation Center Process for Model Development and Implementation
The Innovation Center has developed internal agency guidance that outlines a general process used by the four model groups for developing and implementing models. (See fig. 2.) Appendix I provides additional information about the general process for implementing models.
Innovation Center Categories for Models
The Innovation Center has organized its models into seven categories based on delivery and payment approaches tested and program beneficiaries covered. The seven categories are as follows:
Accountable Care. This category includes models built around accountable care organizations (ACOs)—groups of coordinated health care providers who are held responsible for the care of a group of patients. The models are designed to encourage ACOs to invest in infrastructure and care processes for improving coordination, efficiency, and quality of care for Medicare beneficiaries.
Episode-based payment initiatives. This category includes models in which providers are held accountable for the Medicare spending and quality of care received by beneficiaries during an “episode of care,” which begins with a health care event (e.g., hospitalization) and continues for a limited time after.
Initiatives Focused on Medicare-Medicaid Beneficiaries. This category includes models focused on better serving individuals eligible for both Medicaid and Medicare in a cost-effective manner.
Initiatives Focused on Medicaid and CHIP Populations. This category includes models administered by participating states to lower spending and improve quality of care for Medicaid and CHIP beneficiaries.
Initiatives to Accelerate the Development and Testing of New Payment and Service Delivery Models. This category includes models where the Innovation Center works with participants to test state-based and locally developed models, covering Medicare beneficiaries, Medicaid beneficiaries, or both.
Initiatives to Speed the Adoption of Best Practices. This category includes models in which the Innovation Center collaborates with health care providers, federal agencies, and other stakeholders to test ways of disseminating evidence-based best practices that improve Medicare spending and quality of care for beneficiaries.
Primary Care Transformation. This category includes models that use advanced primary care practices—also called “medical homes”— to emphasize prevention, health information technology, care coordination, and shared decision-making among patients and their providers.
For certain categories, the Innovation Center assigns primary responsibility for developing and implementing models to a single model group; for some other categories, the responsibility is shared across different groups. For example, the center assigned responsibility for models in the ACO and the Primary Care Transformation categories to the Seamless Care Model Group, whereas the responsibility for models in the Initiatives to Accelerate the Development and Testing of New Payment and Service Delivery Models categories were assigned across all four model groups. Appendix II provides a summary of the number of models organized under each category and a description of each model.
The Innovation Center Implemented 37 Models That Test Varying Delivery and Payment Approaches, and Obligated over $5.6 Billion
The Innovation Center Implemented 37 Models and Announced an Additional 2; Models Varied by Delivery and Payment Approaches Tested, Beneficiaries Covered, and Other Characteristics
As of March 1, 2018, the Innovation Center had implemented 37 models under section 1115A of the Social Security Act. (See fig. 3.) Of those 37 models, the testing period has concluded for 10 of them. In addition, the Innovation Center has announced two models to begin testing in 2018.
Innovation Center models varied based on several characteristics, including delivery and payment approaches tested and program(s) covered. Delivery and payment approaches varied across all implemented and announced models—even models organized by the Innovation Center under the same model category. For example, the six models that tested an episode-based payment approach varied in terms of how episodes were defined, including the clinical and surgical episodes to which models applied. In addition, some models included multiple approaches for achieving changes in health care delivery or payment. Models also differed in terms of the programs covered, with 22 models covering Medicare only, 9 models covering Medicare and Medicaid, 1 model covering Medicaid and CHIP, and 7 models covering all three programs. Other characteristics by which models varied include the nature of model participation for providers (voluntary or mandatory) and the source of innovation (i.e., federal, state, or local initiatives). See table 2 for a breakdown of models across selected characteristics. Appendix II provides a full description of all models implemented and announced by the Innovation Center.
In September 2017, the Innovation Center provided some insight into its future plans when it issued an informal “request for information” that identified guiding principles under which models will be designed going forward, described focus areas for new models, and requested feedback from stakeholders. One of the guiding principles focused on voluntary models—a principle consistent with a final rule published in December 2017 canceling four mandatory participation models in development and making participation in a fifth mandatory model voluntary for some geographic areas. Other guiding principles included promoting competition based on quality, outcomes, and costs; empowering beneficiaries, their families, and caregivers to take ownership of their health; and using data-driven insights to ensure cost-effective care that also leads to improvements in beneficiary outcomes. In addition, the Innovation Center indicated the following focus areas for new model development: additional advanced alternative payment models; consumer-directed care and market-based innovation models; physician specialty models; prescription drug models; Medicare Advantage innovation models; state-based and local innovation, including Medicaid- focused models; mental and behavioral health models; and program integrity.
The Innovation Center Obligated over 55 Percent of Its Initial Multiyear Appropriation through Fiscal Year 2016
According to Innovation Center documentation, through September 30, 2016, the center obligated over $5.6 billion of the $10 billion appropriated for fiscal years 2011 through 2019 under section 1115A of the Social Security Act. The obligated amounts for individual models during this period ranged from $8.4 million to over $967 million, and varied based on model scope and design. For example, a model where the Innovation Center used its waiver authority to provide additional flexibility to participants (rather than additional funding) required only $8.4 million in obligations for the evaluation of the model and implementation activities. In contrast, a model where the Innovation Center awarded funding to a broad set of partners, including providers, local government, and public-private partnerships, to test their own care delivery and payment models required more than $870 million in obligations for payments to awardees and over $95 million for contractor evaluations and other activities that supported model development and implementation.
Innovation Center spending falls into three categories: model programs, innovation support, and administration.
Model programs include obligations that directly support individual models and delivery system reform initiatives.
Innovation support includes center-wide operational expenses that are not directly attributable to a single model.
Administration includes permanent federal full-time equivalent payroll expenses, administrative contracts, administrative interagency agreements, and general administrative expenses.
As the Innovation Center implemented additional models each year, total annual obligations increased steadily from approximately $95 million in fiscal year 2011 to more than $1.3 billion in fiscal year 2015, but decreased slightly in fiscal year 2016. (See fig. 4.) Most of these total obligations were for model programs, which followed a similar pattern, increasing from $51 million in 2011 to about $1.1 billion in fiscal year 2015, with a slight decrease in fiscal year 2016. According to officials, the 2016 decrease in obligations for model programs was due in part to some of the earlier, expensive models ending and to newer models being less costly than the older models. Officials noted, for example, that a number of newer models incorporated basic program infrastructure used in previously implemented models, which allowed for reduced model costs. Officials also indicated that the decrease in obligations may be due to newer models using payment approaches that are funded by the Medicare Trust Fund, rather than funded by the Innovation Center's dedicated appropriation. The center's obligations for both innovation support and administration increased from around $20 million for each category in fiscal year 2011 to about $163 million for innovation support and $119 million for administration in fiscal year 2016. Officials told us that as obligations for model programs grew, so did obligations for innovation support and administration, which includes indirect costs and contractor assistance.
Evaluations Inform the Development of Models and Decisions to Certify Certain Models for Expansion
The Innovation Center Has Used the Results from Evaluations to Inform the Development of Additional Models and to Make Changes to Implemented Models
The Innovation Center has used the results from model evaluations to generate ideas for new models. For some of the early implemented models, evaluation results showed reduced spending and maintained or improved quality of care, but also identified model design limitations that could affect those results. According to officials, in some of these instances, the Innovation Center has developed new models that build upon the approaches of earlier models, but include adjustments intended to address identified limitations (see text box).
Evaluations of Implemented Models

The evaluation of each model is performed by a third-party contractor, who generally determines the effect of a model on quality of care and program spending by comparing data for model participants to those of a comparison group of providers and beneficiaries with characteristics similar to model participants. For purposes of the evaluation, the Innovation Center has the authority to require the collection and submission of necessary data by model participants. Accordingly, the third-party contractor collects both quantitative and qualitative data. The quantitative data are used to assess program spending and quality of care and the qualitative data are used to provide the context needed to understand the quantitative results.
Example of a Model That Tests the Same General Delivery and Payment Approach of a Previously Implemented Model While Addressing Limitations

Bundled Payments for Care Improvement (BPCI) Model 2 tested an episode-based delivery and payment approach in which the Innovation Center set a benchmark, or target, price for all Medicare services a beneficiary might receive during a clinical episode—defined by BPCI Model 2 as the initial hospital stay and all services received up to 90 days after discharge. If the total spending for Medicare services during an episode was lower than the target price, participating hospitals would receive payments in addition to the normal fee-for-service payments. If the total spending for Medicare services during an episode was higher than the target price, participating hospitals would have to reimburse Medicare. Participants could select up to 48 different clinical episodes under the model. The evaluation of BPCI Model 2 found that orthopedic surgery episodes—of which approximately 90 percent were hip and knee joint replacement surgeries—may have resulted in reduced program spending and improved quality of care. However, the evaluation also identified limitations affecting those results. For example, the target prices for hip and knee replacement surgeries did not account for potential differences in Medicare spending between elective surgeries and surgeries required after a fracture. As a result of this limitation, hospitals could attempt to control spending by limiting the number of episodes associated with higher cost beneficiaries (i.e., those requiring surgery due to a fracture). In part to address the design issue identified under BPCI Model 2, Innovation Center officials told us they developed the Comprehensive Care for Joint Replacement (CJR) model. Implemented in April 2016, the CJR model tests the same general delivery and payment approach used in BPCI Model 2, but focuses specifically on hip and knee joint replacement surgical episodes and adjusts the target price to account for the higher spending related to hip and knee joint replacement surgeries following a fracture. As of March 1, 2018, no evaluations of the CJR model have been publicly released.
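The basic reconciliation arithmetic behind this design can be illustrated as follows. The sketch is deliberately simplified: the dollar figures are hypothetical, and actual BPCI reconciliation also involved discount percentages, risk tracks, and payment caps not shown here.

```python
def reconcile_episode(episode_spending, target_price):
    """Spending below the target yields a payment to the hospital in
    addition to normal fee-for-service payments; spending above it
    yields a repayment to Medicare."""
    difference = target_price - episode_spending
    if difference > 0:
        return {"payment_to_hospital": difference, "repayment_to_medicare": 0}
    return {"payment_to_hospital": 0, "repayment_to_medicare": -difference}

# Hypothetical episode: a $25,000 target price, $23,500 in total spending.
print(reconcile_episode(23_500, 25_000))
```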
The Innovation Center has also used the results from evaluations as one way to improve the operational and participant support for new models. According to officials, evaluations have helped them identify lessons learned regarding support systems, such as which types of systems work well with which types of models, and then the center incorporated those lessons when designing the systems for new models. For example, officials noted that the experience with the learning system from the Bundled Payments for Care Improvement (BPCI) models informed the learning system for the Comprehensive Care for Joint Replacement (CJR) model. The lessons learned helped the Innovation Center better identify where participants would need additional support and the learning activities—such as webinars and implementation guides—to provide the needed support during the early stages of model implementation. Innovation Center officials told us that these lessons from evaluations helped ensure that each successive model built upon the collective experience of models implemented by the center.
The Innovation Center also has used evaluation results to make periodic changes to models during the testing period. According to officials, these changes include adjustments to the delivery and payment approaches tested, such as refining the target population, broadening the geographic focus, and refinements of spending calculations. Innovation Center officials noted that, in general, such changes were limited to minimize their effects on the evaluation of program spending and quality of care. Officials also identified changes to operational and participant support systems, which have included changes to the timing of participant data reporting, revisions to how data are collected from participants, and changes to the way learning materials are delivered to participants. According to officials, these types of changes are generally intended to help improve the experience of participants.
According to Innovation Center officials, evaluation results may also be used in making a decision to terminate a model prior to the end of its planned testing period. However, officials stated that the Innovation Center has not terminated any models prior to the conclusion of their testing periods, either based on the results of an evaluation or for other reasons.
Evaluations Informed Innovation Center Decisions to Recommend Two Models be Certified for Expansion
The Innovation Center used evaluation results in recommending two models be certified for expansion. According to Innovation Center officials, the evaluation of each model adequately demonstrated that the delivery and payment approach tested reduced Medicare spending while maintaining or improving quality of care. Based on these results, the Innovation Center formally requested that CMS’s Office of the Actuary analyze the financial impact of a potential expansion of each model. The two models were:
Pioneer ACO. Pioneer ACO tested an ACO delivery and payment approach that gave providers an opportunity to be paid a relatively greater share of savings generated, compared to participants in other ACO models, in exchange for accepting financial responsibility for any losses. In year 3 of the model, ACOs that met certain levels of savings in the first two years could elect to receive a portion of their Medicare fee-for-service payments in the form of predetermined, per beneficiary per month payments.
YMCA of the USA Diabetes Prevention Program (Diabetes Prevention Program). The Diabetes Prevention Program applied a lifestyle change program recognized by the Centers for Disease Control and Prevention to reduce the risk of Type 2 diabetes for at-risk Medicare beneficiaries. The Diabetes Prevention Program was a part of the Health Care Innovation Awards Round One model.
When assessing the Pioneer ACO and Diabetes Prevention Program models for expansion, the officials from the Office of the Actuary considered the model evaluation results that were available and information from other sources. For example, the assessment of Pioneer ACO used historical shared savings calculations and beneficiary attribution data from ACOs in the Medicare Shared Savings Program and Pioneer ACO; Medicare claims and enrollment data; and published studies. According to CMS officials, a model evaluation and a certification for expansion differ in that a model evaluation assesses the historical impact of a delivery and payment approach for model participants only, while a certification for expansion assesses the future impact on program spending across all beneficiaries, payers, and providers who would be affected by the expanded model.
Based on its assessments, the Office of the Actuary certified both models for expansion and steps have been taken to expand them. In certifying Pioneer ACO, the Office of the Actuary concluded that because ACOs, in general, have been shown to produce savings relative to Medicare fee-for-service, an expansion of Pioneer ACO would generate further savings to the Medicare program. According to officials, CMS expanded Pioneer ACO by incorporating elements of the model—through rulemaking—as one of the options that providers may choose under the Medicare Shared Savings Program. For the Diabetes Prevention Program, the Office of the Actuary concluded that certain changes considered as part of the expansion would, in the near term, improve upon the original savings achieved as part of the Health Care Innovation Awards as well as savings achieved in similar diabetes prevention programs. The Innovation Center has expanded—through rulemaking—the Diabetes Prevention Program under a new, nationwide model to be implemented in April 2018.
In addition, officials from the Innovation Center and the Office of the Actuary discussed potentially assessing whether Partnership for Patients should be certified for expansion. Partnership for Patients is a model that leveraged federal, state, local, and private programs to spread proven practices for reducing preventable hospital-acquired conditions and readmissions across acute care hospitals. According to officials, the Innovation Center shared the results for Partnership for Patients—which showed improved quality of care in the form of reduced preventable hospital-acquired conditions and readmissions—with the officials from the Office of the Actuary. After discussing these issues, Innovation Center officials decided not to request a formal analysis for certification of expansion.
The Innovation Center Established Performance Goals and Related Performance Measures and Reported Meeting Its Targets for Some Goals
To assess its own performance, the Innovation Center established three center-wide performance goals and related measures.
Goal 1: Reduce the growth of healthcare costs while promoting better health and health care quality through delivery system reform. This goal has three performance measures that focus on ACOs. As shown in table 3, the Innovation Center has reported mixed results in achieving the targets set. According to agency-reported data, the Innovation Center met the targets for 2 of its 3 Goal 1 performance measures for 2015. For the remaining measure—the percentage of ACOs that shared in savings—the center did not meet its target during either of the two years for which data were available. According to officials, when results fall short of targets, they examine the causes and make appropriate adjustments to the program. Officials stated that the missed target was driven by the high growth in the number of ACOs that were new—and therefore would not yet be expected to achieve a level of savings in which they could share—and not by ACO performance deficits. As a result, officials decided that no adjustments were required to the Medicare Shared Savings Program or other ACO models to help improve performance. However, as shown in table 3, the Innovation Center set a target for 2016 that was lower than the 2015 target. For 2017, the Innovation Center lowered the expectation for growth compared to previous years, setting a target that was 1 percent higher than the 2016 target. Moving forward, CMS believes that as more ACOs gain experience, more will share in savings. Additionally, the agency expects that with additional performance years, the targets for the measure will become more refined.
Goal 2: Identify, test, and improve payment and service delivery models. This goal has one performance measure, which identifies the number of models that currently indicate (1) cost savings while maintaining or improving quality or (2) improving quality while maintaining or reducing cost. As of September 30, 2016, the Innovation Center reported that four section 1115A model tests have met these criteria (see table 4).
Goal 3: Accelerate the spread of successful practices and models. For this goal, the first performance measure focuses on the number of states developing and implementing a health system transformation and payment reform plan. The second measure focuses on increasing the percentage of active model participants who are involved in Innovation Center or related learning activities. As shown in table 5, the Innovation Center reported meeting its target for the first measure for both fiscal years 2015 and 2016, but not meeting its target for the second measure. For the second measure, the Innovation Center noted in its report to Congress that although the results for fiscal year 2016 showed a slight decrease in overall participation in Innovation Center or related learning activities, the majority of models performed higher than their individual targets. Several models underperformed, however, bringing down the overall percentage rate.
In addition to the Goal 3 performance measures, the Innovation Center identifies two related contextual indicators—which according to officials are measures that provide supporting information to help understand trends or other information related to the goal. The first contextual indicator provides a snapshot of Medicare beneficiary participation at a given point in time for all models operational for more than 6 months. In fiscal year 2016, CMS reported that over 3.6 million Medicare fee-for-service beneficiaries participated in models, representing approximately 9 percent of Medicare fee-for-service beneficiaries. The second contextual indicator provides information to help understand the level of interest and participation among providers in the Innovation Center's model portfolio. In fiscal year 2016, the center estimated that 103,291 providers participated in Innovation Center payment and service delivery models.
In addition to the three goals established by the Innovation Center, CMS has established an agency-wide goal related to the center's performance. In 2015, CMS announced goals to help drive Medicare, and the health care system at large, toward rewarding the quality of care instead of the quantity of care provided to beneficiaries. One of these goals was to shift Medicare health care payments from volume to value using alternative payment models established under the Innovation Center. This agency-wide goal has one performance measure, which is to increase the percentage of Medicare fee-for-service payments tied to alternative payment models, such as ACOs or bundled payment arrangements. As shown in table 6, CMS reported meeting its target for 2015 and 2016.
Looking forward, officials told us that the Innovation Center has developed a methodology to estimate a forecasted return on investment for the model portfolio, and is in the early stages of refining the methodology and applying it broadly across the portfolio in 2018. As part of the development efforts, the Innovation Center expects to utilize standard investment measures used in the public and private sectors.
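The center's methodology is not described here, but the standard investment measures mentioned are straightforward in form. The sketch below shows one generic measure with hypothetical figures; it is not the Innovation Center's formula.

```python
def simple_roi(estimated_savings, obligations):
    """Generic return on investment: (benefits - costs) / costs."""
    return (estimated_savings - obligations) / obligations

# Hypothetical portfolio figures, in dollars.
print(f"{simple_roi(1_250_000_000, 1_000_000_000):.0%}")  # prints 25%
```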
Agency Comments
We provided a draft of this report to HHS for comment. The Department provided technical comments, which we incorporated as appropriate.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
Appendix I: Center for Medicare and Medicaid Innovation’s General Process for Implementing Models
An agency may issue a request for information for planning purposes.
Appendix II: Models Implemented or Announced by the Center for Medicare and Medicaid Innovation under Section 1115A
As of March 1, 2018, the Center for Medicare and Medicaid Innovation (Innovation Center) organized its models into seven categories based on delivery and payment approaches tested and program beneficiaries covered. Table 8 provides the number of models implemented and announced, organized under each category.
The Innovation Center organized seven of its models under the Accountable Care category. (See table 9.)
The Innovation Center organized seven of its models under the Episode- Based Payment Initiatives category. (See table 10.)
The Innovation Center organized three of its models under the Initiatives Focused on Medicare-Medicaid Enrollees category. (See table 11.)
The Innovation Center organized one of its models under the category, Initiatives Focused on the Medicaid and Children’s Health Insurance Program Population. (See table 12.)
The Innovation Center organized 14 of its models under the category, Initiatives to Accelerate the Development and Testing of New Payment and Service Delivery Models. (See table 13.)
The Innovation Center organized three of its models under the category, Initiatives to Speed the Adoption of Best Practices. (See table 14.)
The Innovation Center organized four of its models under the category, Primary Care Transformation. (See table 15.)
Appendix III: Models Required by Different Provisions of the Patient Protection and Affordable Care Act
In addition to models required by section 1115A of the Social Security Act, as added by the section 3021 of Patient Protection and Affordable Care Act, the Center for Medicare and Medicaid Innovation implemented six models under different provisions of the Patient Protection and Affordable Care Act. (See table 16.)
Appendix IV: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Greg Giusto (Assistant Director), Aaron Holling (Analyst-in-Charge), Ashley Dixon, and Rachel Rhodes made key contributions to this report. Also contributing to the report were Sam Amrhein, Muriel Brown, and Emily Wilson. | Why GAO Did This Study
The Patient Protection and Affordable Care Act created the Innovation Center within CMS to test new approaches to health care delivery and payment—known as models—for use in Medicare, Medicaid, or CHIP. The Innovation Center became operational in November 2010. In 2012, GAO reported on the early implementation of the Innovation Center. GAO found that, during the first 16 months of operations, the Innovation Center focused on implementing 17 new models and developed preliminary plans for evaluating the effects of each model and for assessing the center's overall performance.
GAO was asked to update its previous work. In this report, GAO: (1) describes the status of payment and delivery models implemented and the resources used; (2) describes the center's use of model evaluations; and (3) examines the center's assessment of its own performance. GAO reviewed available documentation, such as model fact sheets and frequently asked questions, and evaluation reports for models that have been implemented. GAO reviewed obligation data and performance information for the time period for which complete data or information were available. GAO also interviewed officials from the Innovation Center and CMS's Office of the Actuary.
The Department of Health and Human Services provided technical comments on a draft of this report, which GAO incorporated as appropriate.
What GAO Found
As of March 1, 2018, the Center for Medicare and Medicaid Innovation (Innovation Center) had implemented 37 models that test new approaches for delivering and paying for health care with the goal of reducing spending and improving quality of care. These models varied based on several characteristics, including the program covered—Medicare, Medicaid, the Children's Health Insurance Program (CHIP), or some combination of the three—and the nature of provider participation—voluntary or mandatory. Going forward, the Innovation Center indicated that the center plans to continue focusing on the use of voluntary participation models and to develop models in new areas, including prescription drugs, Medicare Advantage, mental and behavioral health, and program integrity. Through fiscal year 2016, the Innovation Center obligated $5.6 billion of its $10 billion appropriation for fiscal years 2011 through 2019.
The Innovation Center has used evaluations of models (1) to inform the development of additional models, (2) to make changes to models as they are implemented, and (3) to recommend models for expansion. For example, Innovation Center officials noted that, for some instances where evaluations have shown reduced spending with maintained or improved quality of care, the center has developed new models that build upon the approaches of earlier models, but with adjustments intended to address reported limitations. In addition, the Innovation Center used evaluations to recommend two models to the Centers for Medicare & Medicaid Services (CMS) Office of the Actuary for certification for expansion. According to CMS officials, a model evaluation and a certification for expansion differ in that a model evaluation assesses the impact of a delivery and payment approach for model participants only, while a certification for expansion assesses the future impact on program spending more broadly across all beneficiaries, payers, and providers who would be affected by the expanded model. As a result, the Office of the Actuary used the results of the evaluation and other information, such as Medicare claims data and published studies, to certify the expansion of both models.
To assess the center's overall performance, the Innovation Center established performance goals and related measures and reported meeting its targets for some goals in 2015, the latest year for which data were available (see table below).
Innovation Center officials told GAO that the center also recently developed a methodology to estimate a forecasted return on investment for its model portfolio. The center is in the early stages of refining the methodology and applying it broadly across its models. |
Background
DHS Budget Structure and Reporting
After the 2002 consolidation of 22 agencies into a single department, DHS had, until recently, different appropriation structures and budget management practices based on agencies’ enacted appropriations prior to DHS consolidation. DHS reported that, with over 70 different appropriations and over 100 formal program/project activity (PPA) accounts, it operated for over a decade with significant budget disparities and inconsistencies across its components. The lack of uniformity hindered visibility, inhibited comparisons between programs, and complicated spending decisions.
To address some of these inconsistencies, DHS proposed a new, common appropriations structure to Congress in 2014, according to officials. The House Appropriations Committee then included language regarding a common appropriations structure for the President’s budget request in its report that accompanied a proposed House fiscal year 2015 DHS Appropriations Bill. The language in the report directed the DHS Office of the Chief Financial Officer to work with the DHS components, the Office of Management and Budget, and the Committee to establish a common appropriations structure for the President’s budget request. While the specific appropriations bill that the report accompanied did not become law, Congress subsequently enacted a common appropriations structure for the department. DHS’s fiscal year 2016 President’s budget request was the first to use this common appropriations structure.
Under the common appropriations structure, DHS uses these four enacted accounts to capture the following costs:
Research and development – includes funds to support the search for new or refined knowledge and ideas as well as improved products or processes to yield future benefits;
Procurement, construction, and improvements – provides funds for planning, operational development, engineering, and purchase of one or more assets prior to deployment;
Operations and support – provides funds necessary for operations, mission support, and associated management and administration activities, including salaries. Operational costs can include funding for fuel and other consumables as well as personnel. Maintenance costs can include routine or critical maintenance, spare parts, and additional personnel; and
Federal assistance – provides monetary or non-monetary support to any entity through various types of loans, grants, and other means.
Within each component’s budget request, the four appropriations accounts are subdivided into mission-oriented PPAs that correspond to the components’ varied operations. One PPA can include costs for multiple programs, and funding for a single program may cross multiple PPAs. For example, in the fiscal year 2017 congressional budget justification—the formal budget submission from DHS that comprises its portion of the President’s annual budget submission—Customs and Border Protection’s Integrated Fixed Towers and UH-60 Helicopter programs both requested O&S funds through the Securing America’s Borders PPA. Other examples of PPAs include Transportation Screening Operations and Securing and Expediting Trade and Travel. Figure 1 shows the relationship between DHS appropriations accounts and PPAs.
DHS components use the mission-oriented PPAs to develop component budget requests within the President’s budget request. The DHS budget request includes components’ requested funds within the four appropriations accounts, including O&S, and their PPAs.
DHS also uses the PPAs in its monthly execution reports to Congress to communicate its obligations and expenditures, along with other information. The monthly execution report mirrors the format of the congressional budget justification by providing execution data organized by appropriation account and mission-oriented PPA. This monthly snapshot includes personnel costs as part of the reported O&S costs.
The Future Years Homeland Security Program is a database that contains 5-year funding plans for DHS’s major acquisition programs and is used to prepare a report to Congress that supplements information in the annual budget request. In addition to the information presented in the budget submission and monthly execution report, this document organizes funding projections by major acquisition program. The 5-year plans in the Future Years Homeland Security Program are intended to allow the department to achieve its goals more efficiently than an incremental approach based on 1-year plans and to articulate how the department will achieve its strategic goals within fiscal constraints.
DHS Acquisition Process and Life-Cycle Cost Estimates
The establishment of DHS in 2002 consolidated 22 agencies from multiple cabinet-level departments and independent agencies into a single organization. To help manage its portfolio of acquisition programs, DHS established policies and processes for acquisition management, test and evaluation, and resource allocation. The department uses these policies and processes to acquire and deliver systems that are intended to close critical capability gaps and enable DHS to execute its mission. Figure 2 outlines the acquisition life-cycle for major acquisition programs at DHS.
Programs initially identify costs—including those for O&S—in their department-approved LCCE during the analysis phase. When a program becomes operational while still going through its acquisition milestones, it may use O&S funds during the obtain or deploy phases. For example, the Coast Guard has several operational National Security Cutters, but is also obtaining additional cutters; therefore, it would use O&S funds for support of deployed cutters and procurement funds for acquisition of additional cutters.
GAO’s Cost Estimating and Assessment Guide notes four characteristics of a high quality cost estimate: comprehensive, well documented, accurate, and credible. Specifically, a comprehensive cost estimate should include all costs of the program for the O&S phase, while reflecting the current schedule, and should document all ground rules and assumptions. Furthermore, an accurate cost estimate should provide for results that are based on historical data, if available, while containing few, if any, minor errors.
DHS acquisition policy has generally required components to update LCCEs at Acquisition Decision Events, up until the deployment phase, since 2008. However, since issuance of the department’s October 2011 acquisition policy revision, LCCE revisions must also be DHS-approved. Prior to the 2008 policy, GAO found that nearly two-thirds of programs did not have life-cycle cost estimates. Each of our 11 selected programs has an approved cost estimate. Table 1 lists the programs selected for our study.
In accordance with DHS policy and GAO’s Cost Estimating and Assessment Guide, the O&S costs in the LCCE should inform the O&S portion of the program’s budget request and the funds provided to the program. Accordingly, as programs use these funds in the obtain or deploy phases, they should update the LCCE with spending data to reflect actual costs. Figure 3 illustrates this feedback cycle.
To help facilitate this feedback process, DHS issued a memorandum in January 2016 reminding components that an annual updated LCCE is required for each major acquisition program that has not reached full operational capacity. According to this memorandum, components must submit this cost estimate by April 1st of each year and should include the incurred costs to date through the prior fiscal year as well as how these costs track to prior LCCEs.
Mission-Oriented Budget Management Provides Operational Flexibility, but Limits Visibility of O&S Costs in Reports to Congress
According to agency officials, DHS’s mission-oriented budget management provides operational flexibility in using O&S funding. However, DHS’s budget justifications and reports aggregate programs’ O&S data, limiting oversight of major acquisition programs’ O&S costs. While some program-oriented O&S data are available at the component level, this information does not appear in DHS’s budget reports to Congress. This disparity is due, in part, to the format of the budget reports.
Consolidated O&S Funds Provide Operational Flexibility
Officials across DHS identified operational flexibility as the primary benefit of the department’s mission-oriented budget management. The mission-oriented PPAs allow components, within limits, to move funds between major acquisition programs. For example, officials from Customs and Border Protection’s Air and Marine Operations Division stated they consolidate O&S funds within a single account, which makes them more responsive to mission changes. If a new mission requires a specific aircraft capability, flexible O&S funds will support using that asset, as opposed to another. This flexibility is also apparent in other kinds of changes or trade-offs components can make in deploying their systems. We could not identify the frequency with which programs or operators made these operational trade-offs because of limitations in the data we obtained. A few examples include the following:

Asset Trade-Off: Customs and Border Protection officials told us they manage aircraft usage to meet mission needs while remaining within the overall O&S budget for the Integrated Operations PPA. At times, they use a less expensive and less capable asset that can still complete the mission as a cost-saving technique. For example, officials told us the UH-60 Helicopter’s cost per flight hour is nearly three times that of a smaller helicopter. The smaller aircraft does not have the same capabilities as the UH-60, but operators can save money and sufficiently complete the mission with this aircraft, according to officials.
Inventory Trade-Off: Coast Guard officials responsible for maintaining aircraft, such as the Medium Range Surveillance Aircraft, noted that recent budget constraints affected their ability to buy sufficient spare parts. To address this shortfall, they sometimes pulled working parts from aircraft that were grounded and awaiting maintenance to install on aircraft already undergoing maintenance.
Contract or Upgrade Trade-Off: Immigration and Customs Enforcement’s TECS Modernization program requested $3 million in O&S funding for fiscal year 2017, but the program did not receive this funding. To mitigate this unexpected shortfall, officials described how they adapted their contracting strategy to stretch funding through the fiscal year until they could receive full funding in fiscal year 2018. Officials stated that additional proposed funding cuts in fiscal year 2018 would leave the program unable to meet its minimum operating costs. According to officials, the program has several mitigation plans that will reduce cost through a new contract and reductions in data housing center costs.
Because O&S funding represents the money available to end users to carry out their missions, we attempted to use program-level data to identify O&S funding shortfalls for our selected programs. Potentially, this information could also identify how frequently system users are making these trade-offs. However, the components’ use of consolidated funds for certain programs makes O&S costs difficult to see, particularly at Customs and Border Protection’s Air and Marine Operations Division. This component relies on aggregated O&S accounting and could not provide program-level O&S cost information for the Multi-Role Enforcement Aircraft and UH-60 Helicopter programs. As a result, we could not obtain usable information. Customs and Border Protection officials informed us, however, that they are replacing their internal maintenance cost tracking system, which could help improve expenditure tracking in the future.
Reports to Congress Do Not Consistently Identify Program-Specific O&S Information
DHS first used its common appropriations structure—which DHS proposed and Congress enacted—to address appropriations and budget management inconsistencies in its fiscal year 2016 budget submission. The common appropriation structure streamlined its appropriations, but the resulting reports that the department provides to Congress obscure O&S costs for individual programs. Additionally, while DHS has program-level expenditure data for most of the programs we reviewed, it also relies on fragmented financial management systems that further limit reporting.
Budget Request and Expenditure Reporting Lacks Consistent Program-Level O&S Information
The PPAs DHS uses to communicate its annual budget requests and projections, as well as monthly obligations and expenditures, are mission- oriented. As a result, the budget reports DHS provides to Congress do not always present a clear accounting of individual programs’ O&S costs.
Congressional Budget Justification – Requests for total program O&S funds are not always visible in the DHS congressional budget justification. This document’s mission-oriented reporting within the O&S section continues to combine program-level data within PPAs, as it did for previous budgets. Beginning in fiscal year 2018, DHS added O&S information to the individual program funding request summaries that appear in the procurement, construction, and improvements section of the budget justification, which describes acquisition funding requests. According to officials, this line shows requested funding for O&S for the coming fiscal year and two prior years.
Our review of the fiscal year 2018 congressional budget justification found this information for 5 of our 11 selected programs. However, these program-level details did not appear in the O&S section of the same document, except for one program: Customs and Border Protection’s TECS Modernization program, which recently transitioned to its deployment phase. Of the remaining programs we reviewed that did not have clear O&S information in this document, two were Customs and Border Protection programs: the Multi-Role Enforcement Aircraft and UH-60 Helicopter. Three were Coast Guard programs: the Long Range Surveillance Aircraft program, the Medium Range Surveillance Aircraft program, and the National Security Cutter. Both of these components consolidate their O&S funds, meaning they can direct available funds based on program needs. As stated above, this practice also makes it difficult to provide program-level O&S cost information; as a result, the O&S information DHS added to its procurement section is blank for these programs. This new information also does not cover programs that have completed their procurement phase, because DHS requests only O&S funds for programs in the deployment phase. Therefore, programs in deployment still lack clear program-level O&S data in the congressional budget justification. For example, the Secure Flight program completed procurement, does not have an entry in the procurement section, and therefore lacks clear O&S information.
Monthly Execution Reports – DHS provides monthly execution reports to Congress that include O&S expenditure, obligation, and other budget data, organized by PPA. These reports consist of summary information at the PPA level, again obscuring individual programs’ O&S costs. For example, the Customs and Border Protection PPA cited above would include multiple programs in the same way.
Visibility of DHS’s O&S costs by program is further limited in congressional budget submissions, as personnel costs are not fully captured. For nearly all of our selected programs, we could not identify funding for personnel who operate and maintain program assets within the congressional budget justification or monthly execution report. Program officials stated that, in certain cases, personnel costs are funded in mission-oriented PPAs not clearly associated with the program. According to officials, Customs and Border Protection’s Integrated Fixed Towers program is an example of this scenario. In other cases, the personnel funding associated with a program appears within the same PPA but may fund operations for more than one program. As a result, the full O&S cost of a program—inclusive of operating and supporting personnel—is not clear in the budget request and execution report.
Federal standards for internal control state that managers should communicate quality information to external bodies. DHS is not clearly communicating to Congress the full O&S costs of its programs—inclusive of operating and supporting personnel—in congressional budget justifications and execution reporting. By comparison, agencies such as the National Aeronautics and Space Administration (NASA) and the Department of Defense directly request individual programs’ O&S costs, at least until projects launch or begin operations in NASA’s case. Further, our best practices on capital decision making state that good budgeting requires that the full costs of a project be considered when making decisions to provide resources. Providing data on full program costs permits Congress to better understand the long-term costs of a program and the budgetary and programmatic effect of its decisions. While the recent change DHS made to its congressional budget justification to include program-level O&S cost information in the procurement section is an improvement, Congress still lacks complete information regarding DHS O&S costs as such data are absent from monthly execution reporting.
In the course of our review, DHS initiated a pilot program to use unique identifier codes to track O&S expenditures for individual major acquisition programs. As of January 2018, headquarters officials told us the department was testing the identifier with three components that have relatively simple acquisition portfolios: the Domestic Nuclear Detection Office, Immigration and Customs Enforcement, and the National Protection and Programs Directorate. Following the pilot, officials plan to assess whether and how to implement this identifier within other components’ financial management systems. DHS officials stated that they intend to use this information to inform O&S cost estimating for future acquisitions. As of January 2018, DHS did not plan to include the information in any of the budget information provided to Congress. According to DHS officials, they would need to work with Congress to identify how existing reporting requirements should change, as they did during the development of the common appropriations structure in 2015.
Previous Future Years Homeland Security Program Report Contained Program-Level Information
Prior to fiscal year 2018, the Future Years Homeland Security Program report, which accompanies DHS’s annual budget request, provided supplemental data on planned funding for major acquisition programs. For most components, the report included prior year funds and 5 years of estimated procurement funding for O&S as well as government personnel costs for each program. DHS removed this reporting in its fiscal years 2018-2022 Future Years Homeland Security Program report. Officials explained they removed program O&S funding to focus on planned procurement funding. However, in January 2018, DHS officials stated that they plan to re-introduce O&S funding for major acquisition programs in the Future Years Homeland Security Program report for fiscal years 2019-2023. DHS officials based this decision on multiple internal discussions about the best way to present a more comprehensive view of programs’ total costs and feedback from key stakeholders, such as the Office of Management and Budget.
DHS’s stated intention to reflect program-level O&S costs in the upcoming Future Years Homeland Security Program report, to be submitted with the fiscal year 2019 President’s budget request, indicates that officials recognize the value of such reporting. This change also aligns with federal standards for internal control and communicating quality information. Re-introducing O&S program cost information would improve the quality of information DHS provides to Congress in its Future Years Homeland Security Program report. Until DHS takes concrete action to reverse the exclusion of O&S funding at a major acquisition program level in its Future Years Homeland Security Program reports, Congress will lack important information necessary for oversight.
Program-Level O&S Data Exist at Component Level but Are Not Used for Budget Reporting
Programs can generally track detailed O&S obligations and expenditures within their financial systems; however, department officials told us they do not request this information. Each component uses a different financial system to track its O&S costs and report expenses and, in some cases, must manually transfer data between systems. As a result, headquarters officials told us they do not have direct access to components’ systems and request summary information organized by PPA to develop budget requests and monthly execution reports, in accordance with DHS’s mission-oriented budget management.
DHS financial management systems are an area we have designated as high risk since 2003. In September 2013, we found that without sound internal controls over its financial reporting, DHS is hindered in its ability to efficiently manage its operations and resources on a daily basis and provide useful, reliable, and timely financial information for decision making. At that time, we recommended DHS take steps to integrate financial management systems and unify the components’ financial management. In September 2017, we found that despite efforts to address long-standing financial management system deficiencies, several factors delayed the Transportation Security Administration and Coast Guard’s efforts to replace their financial management systems. Specifically, insufficient resources, an aggressive schedule, complex requirements, increased costs, and project management and communication concerns resulted in cost and schedule growth. DHS is taking steps to mitigate these risks and is revising its acquisition strategy to replace these systems, based in part on the issues we identified.
Life-Cycle Cost Estimates for All but One Selected Program Were Comprehensive but Many Did Not Provide Evidence of Accuracy, and All Were Updated as Required
The O&S portions of our selected programs’ most recently approved life-cycle cost estimates (LCCEs) were nearly all comprehensive but lacked elements of accuracy despite annual and other updates. Program-level LCCEs are one of the sources DHS components should rely on for budget development. Specifically, 10 of the 11 selected programs reviewed either substantially or fully met our best practices criteria for comprehensiveness, while only 5 substantially or fully met criteria for accuracy. These programs have met DHS’s acquisition policy that major acquisition programs generally revise their LCCEs at major acquisition decision events and generally met DHS’s 2016 requirement for annual updates.
Ten of 11 O&S Cost Estimates Were Comprehensive
As of December 2017, 10 out of 11 selected programs’ most recent DHS-approved LCCEs either substantially or fully met GAO’s four criteria for a comprehensive cost estimate. Figure 4 depicts the results of our analysis and the criteria for this characteristic.
GAO best practices in cost estimating note it is important that the O&S portion of a program’s LCCE be comprehensive. That is, it should provide an exhaustive and structured accounting of all resources and associated cost elements—hardware, software, personnel, and so on—required to deploy and sustain a program. Of the 11 programs, 5 fully met and 5 substantially met the comprehensive characteristic.
Within those programs that substantially met the characteristic, we found two reasons programs did not fully address criteria. First, 2 of those programs partially met the criterion that requires the estimate to completely define the program, reflect the current schedule, and be technically reasonable. Second, despite substantially meeting the characteristic, Customs and Border Protection’s TECS Modernization program did not have a single, authoritative technical baseline document that contained all the details to satisfy this specific criterion. Instead, multiple technical baselines or baseline documents were present.
The one program in our review that minimally met criteria for comprehensiveness is the Next Generation Networks Priority Services program. It is “acquisition-only,” meaning that its LCCE includes the costs to acquire new capabilities for its parent program—Priority Telecommunication Services. When it has acquired these capabilities, the parent program becomes responsible for O&S costs. This unique acquisition relationship is a reason we selected this program, namely to see how the component would factor O&S costs into its estimate. We also previously reported on variance in the program’s cost estimate, due to changes in how the component included O&S costs. Our analysis found that the Next Generation Networks Priority Services program’s LCCE contained minimal information on O&S costs. In the program’s recently updated LCCE, which we did not assess, the National Protection and Programs Directorate refined the Priority Telecommunication Services’ O&S costs to identify only those attributable to the Next Generation Networks Priority Services acquisition.
Over Half of Selected Programs Did Not Provide Evidence to Demonstrate Their Cost Estimates Were Accurate
In contrast to the comprehensiveness of programs’ O&S estimates, only 5 of the 11 selected programs we reviewed either fully or substantially met GAO’s five criteria for accuracy. Accuracy is critical to ensuring a reliable and well-founded LCCE to support operations. This is important because these estimates serve as the basis to request program funding and provide insight into the overall affordability of the acquisition program. Figure 5 depicts the results of our analysis and the selected criteria for this characteristic.
Of these, 2 programs fully met and 3 programs substantially met the criteria we assessed. Of the programs that substantially met these criteria, we found a common criterion programs struggled to address: they did not document, explain, and review variances experienced between planned and actual costs. DHS acknowledges the importance of including this information and, in its 2016 memorandum, required its components to annually provide a detailed description of any differences between updated and past cost estimates.
Of the 5 programs that partially met criteria for accuracy, we found several reasons for these results, including our lack of access to the cost models used to develop the programs’ LCCEs and to explanations of any variances. For example, the Coast Guard was unable to share cost models for the programs we assessed, due to information sensitivities. Without access to the cost models, we could not determine whether the estimates had been properly adjusted for inflation and could not determine whether the estimates contained few, if any, errors—one of GAO’s criteria for accuracy. Similar to the results of our comprehensiveness analysis, the Next Generation Networks Priority Services program did not meet our selected accuracy criteria because it did not include O&S costs in its LCCE.
Selected Programs Are Generally Following DHS Requirements to Update LCCEs
While we could not determine that selected programs’ LCCEs were accurate based on the information reviewed, we found that the department is regularly updating LCCEs, a GAO best practice that promotes accuracy. All of the programs met DHS requirements to update their LCCE at each acquisition decision event, as applicable, a policy that also aligns with our cost estimating best practices. Updating LCCEs is an important step to maintain the utility of an estimate throughout a program’s life-cycle and is critical to budget development. Outdated O&S estimates hamper a program’s ability to analyze changes in costs over time. For example, they may not reflect fluctuation in the price of fuel, which could lead to a program requesting insufficient funds for annual operations. DHS relies on the programs’ LCCEs to develop initial budget requests, which it subsequently updates with actual expenditures as the program matures.
As of November 2017, 10 of our 11 selected programs also met DHS’s requirement for programs not yet in the deployment phase to update their LCCEs annually. These new requirements to update LCCEs are making this acquisition document more relevant throughout the life of a program to inform budget requests. The Coast Guard’s Long Range Surveillance Aircraft program is the only program we selected that did not meet this requirement for fiscal year 2017. Coast Guard officials explained that the program is in the process of revising its LCCE, which is why it did not have a submission within fiscal year 2017.
While components are following DHS policy, programs may vary in their approach to updating O&S reporting elements as newer versions of the LCCE document are developed and approved. For about half of our programs, we observed changes to O&S cost elements in the LCCE, which can reflect program changes. This situation is consistent with our cost estimating best practices, which note that cost elements should be updated as changes occur and the program becomes better defined. For example, the Coast Guard’s Medium Range Surveillance Aircraft program’s original LCCE was completed in 2009, when the Coast Guard planned to procure a single aircraft type. Since then, the Coast Guard revised its LCCE in 2012 and 2016 to account for changes to the program, namely the addition of a second aircraft type. The Medium Range Surveillance Aircraft program’s 2016 LCCE now includes an entirely new set of O&S cost elements for both aircraft. Conversely, a program that has very stable cost elements may not need to make such changes. Officials from Customs and Border Protection TECS Modernization program explained they did not alter its cost elements between its original 2014 LCCE and its 2016 revision because O&S costs are stable and well-known as the program enters its deployment phase.
Conclusions
Operations and support (O&S) costs represent the bulk of the taxpayer’s investment in major acquisition programs and are necessary to meet end users’ needs for spares, maintenance, and operations. To support this mission, DHS manages its budget to maximize components’ flexibility to use O&S funds across major acquisition programs. This aspect of the department’s budget management did not change with enactment of the common appropriation structure.
We do not take issue with DHS’s mission-oriented budget management approach; however, with this reliance on broader mission-oriented O&S program/project activity accounts (PPAs) in reporting, program-specific O&S information is difficult to discern. DHS’s addition of program-level O&S information to the procurement, construction, and improvements section of the congressional budget justification is a positive step, but still does not address this shortfall for all programs. The identifier pilot program DHS has underway could add details on O&S costs for major acquisition programs in addition to those already contained in programs’ life-cycle cost estimates. Such an action will require additional reporting from the components, which may be challenging due to the department’s fragmented financial management systems, as we have observed and made recommendations on in prior reports. DHS could work with Congress to identify ways to strengthen its congressional budget justifications and monthly execution reports by including information on O&S costs.
DHS’s recent proposal to shift back to reporting program-level O&S funding in the Future Years Homeland Security Program report demonstrates that the department sees value in providing such information to Congress and that such information is available to some extent. Until DHS takes concrete action to reverse the exclusion of O&S funding at a major acquisition program level in its Future Years Homeland Security Program reports, Congress will lack important information necessary for oversight.
Recommendations for Executive Action
We are making the following three recommendations to DHS:

The Secretary of Homeland Security should work with Congress to add information to its annual congressional budget justification to show O&S funding requests for major acquisition programs within current program/project activity accounts. (Recommendation 1)
The Secretary of Homeland Security should work with Congress to include O&S data in monthly execution reports at a major acquisition program level within current program/project activity accounts. (Recommendation 2)
The DHS Chief Financial Officer should reverse the exclusion of O&S funding at a major acquisition program level in its Future Years Homeland Security Program report for all components. (Recommendation 3)
Agency Comments and Our Evaluation
We provided a draft of this report for review and comment to DHS.
DHS provided written comments, which are reproduced in appendix III. In its comments, DHS concurred with all three of our recommendations and identified actions it plans to take to address them. DHS also provided technical comments, which we incorporated as appropriate.
We are sending copies of this report to the appropriate congressional committees and the Secretary of Homeland Security. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
Appendix I: Objectives, Scope and Methodology
The House Homeland Security Subcommittee on Oversight and Management Efficiency asked us to evaluate operations and support (O&S) activities for the Department of Homeland Security’s (DHS) major acquisition programs. This report assesses, for selected major acquisition programs, the extent that (1) DHS budget management and reporting affects operations and oversight; and (2) life-cycle cost estimates (LCCEs) are comprehensive and accurate, as well as regularly updated.
Selection of Case Studies
To conduct our work, we reviewed the DHS Major Acquisition Oversight List as of April 2017 and selected 11 major acquisition programs from five components to serve as case studies for our review. We selected a non-generalizable sample of programs, and their corresponding components, based on their stage in the acquisition cycle, including programs in the deployment phase. We also ensured we had a mix of different DHS components reflecting the broad spectrum of DHS operations. Our case studies included four information technology programs and seven other programs.
Analysis of Budget Information
To determine how O&S funds are organized within the budget request, we reviewed the O&S and procurement, construction, and improvements appropriations accounts, by program/project activity (PPA) account, within the fiscal year 2017 and 2018 congressional budget justifications for the 11 programs in our review. We identified the selected programs within these accounts where possible. To determine whether the PPAs we identified in the O&S budget request were all-inclusive of O&S costs, we developed and disseminated a data collection instrument to program offices, which collected information on selected programs’ O&S budget requests, budget authority, obligations, and expenditures, including personnel expenditures, from fiscal years 2015 to 2017. We compared this information to our analysis of the congressional budget justification and conducted follow-up meetings with each of the component budget offices to understand differences in the data sources and learn whether program obligations and expenditures were included in other common component PPAs.
To determine the inclusion of personnel costs in the program O&S expenditures, we reviewed the congressional budget justification and our data collection instrument for personnel expenditures. We held follow-up meetings with program offices to discuss the extent to which DHS used O&S PPA funds for personnel costs, to identify those PPAs, and to determine whether personnel costs were shared with other programs.
To determine if monthly execution reports contained program-level O&S cost information, we reviewed the December 2016 monthly execution report, as well as DHS guidance to programs on preparing that report, to determine whether individual program obligations and expenditures could be identified within the report. We determined that O&S cost information is reported by mission-oriented PPA in this report and were unable to identify O&S obligations or expenditures by program. We held follow-up discussions with DHS officials to discuss how this information is collected and reported to Congress.
To determine whether O&S costs were included in the Future Years Homeland Security Program database, we reviewed the fiscal years 2017-2021 and 2018-2022 reports from the Future Years Homeland Security Program database for identification of program costs. We found that program costs were identified. However, while we were able to determine that O&S costs were included in the fiscal years 2017-2021 report, DHS excluded these costs from the fiscal years 2018-2022 report.
We discussed with DHS and components the financial management systems used by the five components to track obligations and expenditures, and the financial management system used by the Department to develop the monthly execution reports and Future Years Homeland Security Program database.
Analysis of Operational Effects
To assess the extent to which DHS budget management and reporting has affected operations, we reviewed program budget information including the congressional budget justification, a data collection instrument, a monthly execution report to Congress, and the fiscal years 2017-2021 and fiscal years 2018-2022 reports from the Future Years Homeland Security Program database. In addition, we conducted interviews with program personnel to discuss the effect of any budget shortfall or surplus on their programs.
Analysis of Life-Cycle Cost Estimates
To assess how DHS incorporated or revised life-cycle cost estimates to include comprehensive and accurate O&S costs, we analyzed the O&S portion of DHS-approved LCCEs for the case study programs, as well as prior versions where applicable, to identify changes in reporting elements over time. We conducted an abridged analysis of programs’ approved LCCE against criteria from GAO’s Cost Estimating and Assessment Guide, with focus on comprehensiveness and portions of accuracy.
Typically in analyzing a cost estimate against GAO best practices, we examine four characteristics, each defined by multiple criteria: comprehensive, well documented, accurate, and credible.
For this review, we assessed our case study programs’ LCCEs against the comprehensive and accurate characteristics, in part, because we limited our analysis to the O&S portion of programs’ LCCEs and did not review entire LCCEs. Further, if the cost estimate is not comprehensive (that is, “complete”), then it cannot fully meet the well documented, accurate, or credible best practice characteristics. For instance, if the cost estimate is missing some cost elements, then the documentation will be incomplete, the estimate will be inaccurate, and the result will not be credible due to the potential underestimating of costs and the lack of a full risk and uncertainty analysis.
In addition, we excluded one of the supporting criteria for the accuracy characteristic, which assesses whether the cost estimate results are unbiased, not overly conservative or optimistic, and based on an assessment of most likely costs. Because we did not assess program risk, an element of the characteristics we excluded that also considers potential bias, we did not analyze programs against this criterion.
We interviewed officials at DHS headquarters; component program and budget offices; Coast Guard Surface Forces Logistics Center in Baltimore, Maryland; Coast Guard Aviation Logistics Center in Elizabeth City, North Carolina; Transportation Security Administration at Reagan National Airport in Washington, D.C.; Customs and Border Protection Southwest Border Regional Headquarters in Albuquerque, New Mexico; Customs and Border Protection Tucson Air Branch in Tucson, Arizona; and the Border Patrol’s Nogales Station in Nogales, Arizona. We chose these locations, in part, as we could often discuss multiple programs during a single site visit. For example, we discussed both of our Coast Guard aircraft programs at the Aviation Logistics Center.
We conducted this performance audit from November 2016 to April 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Results of Life-Cycle Cost Estimate Analysis
Using the GAO Cost Estimating and Assessment Guide, GAO cost experts assessed selected DHS major acquisition programs against 2 of the 4 characteristics of a quality cost estimate. See appendix I for a more detailed description of our methodology and why we did not assess O&S cost estimates against all 4 characteristics.
We determined the overall assessment rating by assigning each individual rating a value:
Not Met = 1,
Minimally Met = 2,
Partially Met = 3,
Substantially Met = 4, and
Met = 5.
Next, we averaged the individual assessment ratings to determine the overall rating for each of the two characteristics; a minimal sketch of this calculation appears after the list below. The resulting average becomes the Overall Assessment as follows:
Not Met = 1.0 to 1.4,
Minimally Met = 1.5 to 2.4,
Partially Met = 2.5 to 3.4,
Substantially Met = 3.5 to 4.4, and
Met = 4.5 to 5.0.
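The scoring and banding described above is simple arithmetic, and the following minimal Python sketch illustrates it. The function name and the sample ratings are illustrative assumptions, not part of GAO's documented methodology; the rating values and band boundaries mirror the lists above.

```python
# Map each individual criterion rating to its numeric value.
RATING_VALUES = {
    "Not Met": 1,
    "Minimally Met": 2,
    "Partially Met": 3,
    "Substantially Met": 4,
    "Met": 5,
}

# Overall assessment bands (low, high, label), mirroring the list above.
OVERALL_BANDS = [
    (1.0, 1.4, "Not Met"),
    (1.5, 2.4, "Minimally Met"),
    (2.5, 3.4, "Partially Met"),
    (3.5, 4.4, "Substantially Met"),
    (4.5, 5.0, "Met"),
]

def overall_assessment(criterion_ratings):
    """Average the individual criterion ratings and map the average
    (rounded to one decimal) to an overall assessment band."""
    values = [RATING_VALUES[rating] for rating in criterion_ratings]
    average = sum(values) / len(values)
    rounded = round(average, 1)
    for low, high, label in OVERALL_BANDS:
        if low <= rounded <= high:
            return average, label
    raise ValueError("average outside the expected 1.0-5.0 range")

# Hypothetical ratings for one characteristic's four criteria.
ratings = ["Met", "Substantially Met", "Met", "Partially Met"]
print(overall_assessment(ratings))  # (4.25, 'Substantially Met')
```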
Table 3 provides our results of selected Customs and Border Protection acquisition programs’ individual and overall assessments for the comprehensive and accuracy characteristics.
Table 4 provides our results of the selected Immigration and Customs Enforcement acquisition program’s individual and overall assessment for the comprehensive and accuracy characteristics.
Table 5 provides our results of the selected National Protection and Programs Directorate program’s individual and overall assessment for the comprehensive and accuracy characteristics.
Table 6 provides our results of the selected Transportation Security Administration acquisition programs’ individual and overall assessment for the comprehensive and accuracy characteristics.
Table 7 provides our results of the selected U.S. Coast Guard acquisition programs’ individual and overall assessment for the comprehensive and accuracy characteristics.
Appendix IV: GAO Contact and Staff Acknowledgments
GAO Contact
Marie A. Mak, (202) 512-4841, or [email protected].
Staff Acknowledgments
In addition to the contact named above, J. Kristopher Keener, Assistant Director; Burns Chamberlain Eckert; Peter Anderson; Jessica Berkholtz; George Bustamante; Erin Butkowski; Jeff Cherwonik; Juana Collymore; Matthew T. Crosby; Jennifer Echard; Jason Lee; and Robin Wilson made key contributions to this report. | Why GAO Did This Study
O&S costs—the costs used to operate and sustain a program—can account for up to 70 percent of a program's total cost. End users rely on O&S funds for maintenance, spares, and personnel. DHS programs initially identify O&S costs in their life-cycle cost estimate at the outset of the acquisition. This estimate informs the program's budget and affects the amount the department designates for the program's use. In 2015, GAO found that DHS's budget requests did not reflect all estimated costs—including O&S—for certain programs, which limits visibility for decision makers.
GAO was asked to review O&S activities for major acquisition programs at DHS. This report examines the extent that (1) DHS's budget management and reporting affects operations and oversight, and (2) cost estimates are comprehensive and accurate, as well as regularly updated.
GAO selected a non-generalizable sample of 11 major acquisition programs, based on asset type and acquisition status, as case studies from selected DHS components. GAO analyzed selected programs' O&S cost estimates, funding, and spending. In addition, GAO interviewed DHS officials at the headquarters, component, and program office level.
What GAO Found
While the Department of Homeland Security's (DHS) budget management provides flexibility to conduct operations, such as shifting funds to programs within the same mission area to cover unforeseen needs, budget reporting does not provide Congress with insight into specific programs' operations and support (O&S) costs. The O&S budget information that DHS reports to Congress is oriented by mission—for example, Integrated Operations—instead of by program—for example, the Multi-Role Enforcement Aircraft Program. The figure depicts the mission-oriented nature of the budget.
While some program-oriented O&S data are available at the component level, this information does not appear in DHS's budget reports to Congress. This disparity is due in part to the manner in which the department reports budget information. However, these limitations are not insurmountable. Standards for internal controls state that managers should communicate quality information, in this case full program costs. Providing additional data on O&S costs in budget reports would preserve DHS's flexibility in its use of funds while providing Congress a better understanding of the budgetary and programmatic effect of its funding decisions.
GAO reviewed the O&S portion of the most recently approved cost estimates for selected programs and found that 10 of the 11 estimates provided a complete accounting of all resources and associated cost elements. Further, all the programs had appropriately updated their cost estimates as required, a GAO best practice in cost estimating. Due to the sensitive nature of some programs' cost models, GAO could not verify all aspects of accuracy for all estimates reviewed.
What GAO Recommends
GAO is making three recommendations, including that DHS work with Congress to add program-level O&S funding details to the budget information it provides Congress. DHS concurred. |
Background
Public Debt and Revenue
Territorial governments issue debt securities and receive loans for a variety of purposes, including to finance long-term investments, such as infrastructure projects, and to fund government operating costs. For the purposes of this report, total public debt outstanding refers to the sum of bonds and other debt held by and payable to the public, as reported in the territories’ single audit reports. Bonds payable are marketable bonded debt securities issued by the territories’ primary governments or their component units and held by investors outside those governments. The primary government is generally comprised of governmental activities (generally financed with taxes and intergovernmental aid) and business- type activities (generally financed with charges for goods and services). Component units are legally separate entities for which a government is financially accountable. For the purposes of this report, any reference to total government activity and balances includes both the primary government and component units. Other debt payable may include shorter term marketable notes and bills issued by territorial governments and held by investors outside those governments, non-marketable intragovernmental notes, notes held by local banks, federal loans, intragovernmental loans, and loans issued by local banks. Pension liabilities and other post-employment benefits (OPEB) are not included in our definition of total public debt.
Marketable debt securities, primarily bonds with long-term maturities, are the main vehicle by which the territories access capital markets. Municipal bonds issued by all five territories have traditionally been attractive to investors because they are triple tax exempt; interest from the bonds is generally not subject to federal, state, and local income taxes regardless of an investor’s state of residence. There are several different types of marketable debt securities:
General obligation bonds are bonds issued by territorial governments that are payable from the general funds of the issuer, although the precise source and priority of payment for general obligation bonds may vary considerably from issuer to issuer depending on applicable law. Most general obligation bonds are said to entail the full faith and credit (and in many cases the taxing power) of the issuer, depending on applicable law. In USVI, unlike in the other four territories in which general obligation bonds are backed by the full faith and credit of the government, debt issued by the primary government is backed by either (1) both a general obligation of the government and revenue from USVI’s gross receipts tax, or (2) revenue from the federal excise tax on rum rebated to the territory.
Limited obligation bonds are bonds payable from specific taxes that are limited by law in rate or amount, while revenue bonds are payable from specific sources of revenue.
Marketable notes differ from bonds in that they are short-term obligations of an issuer to repay a specified principal amount on a certain date, together with interest at a stated rate, usually payable from a defined source of anticipated revenues. Notes usually mature in 1 year or less, although notes of longer maturities are also issued.
Bonds and notes may be issued by both the territories’ primary governments and by their component units. Examples of the territories’ component units are USVI’s Water and Power Authority, Guam’s Airport Authority, CNMI’s Ports Authority, and Puerto Rico’s Electric Power Authority.
Unlike the states, territories are prohibited from authorizing their component units to seek debt restructuring under Chapter 9 of the federal bankruptcy code, which can be used to extend the timeline for debt repayment, refinance debt, or reduce the principal or interest on existing debt.
U.S. law restricts the territories’ authority to impose certain territorial taxes. Three territories—Guam, CNMI, and USVI—are required by U.S. law to have a mirror tax code. In general this means that these territories must use the U.S. Internal Revenue Code (IRC) as their territorial income tax law. In contrast, American Samoa and Puerto Rico, which are not bound by a mirror tax code, have established and promulgated their own income tax regulations. Although Guam and CNMI are mirror-code jurisdictions, they are authorized under the Tax Reform Act of 1986 to delink from the IRC if certain conditions are met.
Revenues are amounts that result from governments’ exercise of their sovereign power to tax or otherwise compel payment. Revenues also include income generated by the territories’ component units. While our analysis primarily focuses on trends in general revenues, we also include total revenue—general revenues and program revenues combined—in our analysis. In addition to general revenue levels, another measure of fiscal health is the net position for primary government activities, which represents the difference between the primary government’s assets (including the deferred outflows of resources) and the primary government’s liabilities (including the deferred inflows of resources). In other words, the net position for primary government activities reflects what the primary government would have left after satisfying its liabilities. A negative net position means that the primary government has more liabilities than assets. A decline in net position may indicate a deteriorating financial position. While our analysis primarily focuses on trends in the net position for the primary government, we also include certain information on trends in the total net position—primary government net position and component unit net position combined— for the government.
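Restating the definition above as an equation:

\[ \text{Net position} = \left(\text{assets} + \text{deferred outflows of resources}\right) - \left(\text{liabilities} + \text{deferred inflows of resources}\right) \]

A negative result means liabilities (including deferred inflows) exceed assets (including deferred outflows).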
Fiscal risks refer to responsibilities, programs, and activities that may legally commit or create the expectation for future government spending. Fiscal risks may be explicit in that the government is legally required to fund the commitment, or implicit in that an exposure arises not from a legal commitment, but from current policy, past practices, or other factors that may create the expectation for future spending. Civilian pension benefits are typically an example of an explicit fiscal risk because the government has a legal commitment to pay pension benefits earned by current government employees who will receive benefits in the future and to pay retirees who currently receive benefits.
Puerto Rico
Puerto Rico’s Total Public Debt Increased by 73 Percent and Grew from 47 to 66 Percent of GDP between Fiscal Years 2005 and 2014
Total Public Debt Outstanding
Puerto Rico’s total public debt outstanding increased continuously between fiscal years 2005 and 2014. (See figure 2.) Total public debt grew from $39.2 billion in fiscal year 2005 to $67.8 billion at the end of fiscal year 2014—an average rate of 6.3 percent per year. Bonded debt outstanding—including mainly general obligation and revenue bonds—represented the majority of total public debt outstanding for all years. Bonded debt outstanding averaged 86 percent of total public debt between fiscal years 2005 and 2014, increasing from a total of $35 billion in fiscal year 2005 to $58.5 billion in fiscal year 2014. Puerto Rico’s Consolidated Audited Financial Report for fiscal year 2015 was not available as of June 2017. However, in the March 13, 2017, fiscal plan released by the Government of Puerto Rico, total public debt outstanding was listed as $74.3 billion as of February 2017.
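As an arithmetic check, and assuming the reported average annual rate is a compound growth rate over the nine years from fiscal year 2005 to fiscal year 2014, the 6.3 percent figure can be reproduced as:

\[ r = \left(\frac{67.8}{39.2}\right)^{1/9} - 1 \approx 0.063 \]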
As of fiscal year 2014, the primary government’s bonded debt outstanding was composed mainly of revenue bonds. These accounted for $24.3 billion of the $37.9 billion in total bonded debt. In contrast, between fiscal years 2005 and 2008, general obligation bonds represented the majority of the primary government’s bonded debt. In fiscal year 2009, the amount of revenue bonds outstanding tripled. The risks of general obligation bonds and revenue bonds are different. A revenue bond is secured by a specific revenue stream, identified in the bond contract, whereas a general obligation bond is secured by the full taxing power of the government and relies on the full faith and credit of the issuing government. Puerto Rico also issued notes between fiscal years 2005 and 2014.
Puerto Rico’s primary government and the three largest component units—the Puerto Rico Electric Power Authority (PREPA), the Puerto Rico Aqueduct and Sewage Authority (PRASA), and the Puerto Rico Highways and Transportation Authority (PRHTA)—owed the majority of Puerto Rico’s public debt outstanding in fiscal year 2014. (See table 1.) These component units mostly issued debt backed by their own resources, including the revenue generated from their operations. Other component units also held public debt in fiscal year 2014, including the Government Development Bank, State Insurance Fund Corporation and the Puerto Rico Trade and Export Company, among others. The primary government’s share of total public debt outstanding grew relative to debt owed by all of the component units from 44 percent in fiscal year 2005 to 59 percent in fiscal year 2014.
Puerto Rico’s total public debt outstanding as a percentage of Gross Domestic Product (GDP) grew from 47 percent in fiscal year 2005 to 66 percent in fiscal year 2014, and its ratio of total public debt outstanding to Gross National Product (GNP) grew from 71 percent of GNP in fiscal year 2005 to 99 percent in fiscal year 2014. (See figure 3.) GDP measures the value of goods and services produced inside a country, or for the purpose of this report, a territory. In contrast, GNP measures the value of goods and services produced by its residents. GNP includes production from residents abroad and excludes production by foreign companies in a country. In Puerto Rico, GDP has consistently been greater than GNP, which means that production by foreign companies in Puerto Rico is larger than production by Puerto Rican residents in the territory and abroad. For this reason, according to the U.S. Department of the Treasury, GNP is generally a more representative measure of Puerto Rico’s economic activity than GDP.
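In standard national accounting terms (a general identity, not a formula taken from the territories’ reports), the two measures are related as follows:

\[ \text{GNP} = \text{GDP} + \text{net factor income from abroad} \]

where net factor income from abroad equals income earned abroad by residents minus income earned domestically by nonresidents, including foreign companies. For Puerto Rico this term is negative, which is why its GNP is consistently below its GDP.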
A July 2014 report by the Federal Reserve Bank of New York stated that debt to GNP ratios above just 60 percent can inhibit economic growth because they generally lead to higher financing costs and limit access to other sources of financing. Puerto Rico’s share of total public debt outstanding to GNP has remained above 90 percent since 2010.
Puerto Rico’s total public debt outstanding per capita has almost doubled since fiscal year 2005, rising from $10,000 per person in fiscal year 2005 to $19,000 per person in fiscal year 2014. (See figure 4.)
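The per capita figure is total public debt outstanding divided by population; for example, using the fiscal year 2014 debt total and Puerto Rico’s population of roughly 3.5 million at that time:

\[ \frac{\$67.8 \text{ billion}}{\approx 3.5 \text{ million residents}} \approx \$19{,}000 \text{ per resident} \]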
Despite Some Growth in General Revenue, Puerto Rico’s Net Position Declined between Fiscal Years 2005 and 2014
Puerto Rico’s general revenue fluctuated between fiscal years 2005 and 2014, with lows around $11.6 billion between fiscal years 2008 and 2010 and again in 2013. Puerto Rico’s general revenue in fiscal year 2014 was $13.8 billion, of which 75 percent or $10.3 billion was tax revenue. Most of the tax revenue for the same year was reported as income taxes (52 percent of the total or $5.4 billion) and excise taxes (33 percent of the total or $3.4 billion). Revenue in fiscal year 2014 increased by over $2 billion from the prior year. The majority of this growth was due to increases in income and excise taxes. Puerto Rico’s total revenue (i.e., general revenue and program revenue combined) also fluctuated but grew by an average of about 3 percent per year, from $25.5 billion in fiscal year 2005 to $32.5 billion in fiscal year 2014. (See figure 5.)
Despite the growth in revenue in fiscal year 2014, Puerto Rico’s net position for the primary government as of fiscal year end 2014 was a negative $49.7 billion, declining from a negative $46.4 billion as of fiscal year end 2013. Moreover, despite the fluctuations in revenue between fiscal years 2005 and 2014, Puerto Rico’s net position for the primary government declined year over year from a negative $15.2 billion as of fiscal year end 2005 to a negative $49.7 billion as of fiscal year end 2014. Puerto Rico’s declining net position for the primary government reflects its deteriorating financial position. Further, the effect of Puerto Rico implementing Governmental Accounting Standards Board (GASB) Statement No. 68, Accounting and Financial Reporting for Pensions—an Amendment of GASB Statement No. 27, is not yet known. GASB Statement No. 68 was in effect for fiscal years beginning after June 15, 2014, and established standards for measuring and recognizing liabilities, deferred outflows of resources, and deferred inflows of resources related to pensions. For each of the other territories that implemented GASB Statement No. 68, implementing the statement resulted in the territory recognizing previously unrecognized net pension liabilities and, therefore, a decline in ending net position in the year of recognition.
Puerto Rico’s total net position for the primary government and component units combined also declined year over year between fiscal years 2005 and 2014, from a positive $2.5 billion as of fiscal year end 2005 to a negative $43.6 billion as of fiscal year end 2014.
Experts Identified Several Factors That Have Contributed to Puerto Rico’s High Debt Levels
Puerto Rico officials, representatives from ratings agencies that we spoke to, and publicly available reports that we reviewed cited various major factors as contributors to Puerto Rico’s high debt levels. The factors cited include the following:
Public debt financing government operations: Ratings agency officials told us that Puerto Rico has long used public debt as a means to finance general government operations and indicated that debt has been used for this purpose in Puerto Rico since at least 2000. According to these officials, the sustained use of debt to finance general government operations is unusual when compared to states and was considered a “red flag” in the case of Puerto Rico. As Puerto Rico’s debt grew, the government found it increasingly difficult to meet other responsibilities, including paying tax refunds, settling accounts payable, and fulfilling pension obligations.
Triple tax exempt status: Debt in Puerto Rico was attractive to investors for its triple tax exempt status. Over time, Puerto Rico’s primary government accumulated debt from investors without addressing its persistent deficits. According to the February 28, 2017, version of the Puerto Rico government’s fiscal plan, Puerto Rico’s capacity to issue debt at favorable rates postponed the implementation of fiscal reforms and controls necessary to balance Puerto Rico’s budget.
Financial data limitations: A lack of comprehensive, timely, and accurate financial data from Puerto Rico may have limited the ability of some investors to anticipate or fully understand the economic crisis in the territory. For example, according to the Government of Puerto Rico’s February 28, 2017, version of the fiscal plan, audited financial statements for Puerto Rico were issued on time only three times from 2005 to 2014. Audited financial statements for fiscal years 2015 and 2016 remain pending. In addition, forecasts routinely overestimated revenue.
Recession and outmigration: Recession and outmigration have reduced tax revenue. A recession in Puerto Rico began in 2006 and continued through the period we reviewed. Outmigration also accelerated in most years after 2005 as Puerto Ricans migrated to the U.S. mainland and elsewhere. According to U.S. Census Bureau estimates, Puerto Rico lost 14 percent of its population, more than 550,000 individuals, between July 2009 and July 2016.
Section 936 tax credit phase-out: The phase-out of the section 936 tax credit is often cited by Puerto Rico officials for its negative effect on Puerto Rico’s economy, although other experts said the effect was not as significant. In addition, in 2006, we reported that the expiration of the benefit did not ultimately lead to a reduction in income and value added. A substantial share of production in Puerto Rico is carried out by U.S. multinational corporations, in part because of federal corporate income tax benefits once available to firms located in Puerto Rico. Prior to 1994, certain U.S. corporations could claim the possessions tax credit under section 936 of the Internal Revenue Code (IRC). In general, the credit equaled the full amount of federal tax liability related to an eligible corporation’s income from its operations in a possession—including Puerto Rico—effectively making such income tax-free. In 1993, caps were placed on the amount of possessions credits that corporations could earn. In 1996, the credit was repealed, although corporations that were existing credit claimants were eligible to claim credits through 2005.
Outcomes of Restructuring Process Will Determine Outlook for Repayment of Debt
Puerto Rico had missed up to $1.5 billion in debt service payments as of September 2016. Puerto Rico’s government is working with the Financial Oversight and Management Board (Board) to implement plans for long-term financial reform and to adjust debts accrued by both the primary government and public corporations. The Board has the power to approve or certify fiscal plans, budgets, voluntary agreements with bondholders, debt restructuring plans, and critical projects within Puerto Rico. As the first step in a process to adjust debts in Puerto Rico, the Board certified the current Governor’s fiscal plan in March 2017, which outlines strategies for financial reform. The fiscal plan includes estimates for how much can be allocated for debt payments each year, which average 23 percent of total debt payments due for the years 2018 through 2026. (See figure 6.)
On May 3, 2017, the Board filed an initial petition for restructuring Puerto Rico’s debt and pension liabilities. Puerto Rico’s ultimate liability for its outstanding debt will be determined based on the outcome of this process in federal court.
American Samoa
American Samoa’s Total Public Debt More Than Doubled and It Grew from 5 to 11 Percent of GDP between Fiscal Years 2005 and 2015
Total Public Debt Outstanding
American Samoa’s total public debt outstanding grew from $27 million in fiscal year 2005 to $69.5 million in fiscal year 2015. Until fiscal year 2015, the portion of American Samoa’s total public debt outstanding that was bonded debt outstanding was limited. (See figure 7.) In fiscal year 2007, the territory paid off a general obligation bond that was issued in fiscal year 2000 to refinance prior debt. Between fiscal years 2008 and 2014, American Samoa had no outstanding bonded public debt. In fiscal year 2015, American Samoa’s primary government issued a general obligation bond for about $55 million, and in January 2016 a second bond was issued for $23 million. Most of American Samoa’s bonded debt outstanding is scheduled to mature by 2035.
Between fiscal years 2005 and 2015, American Samoa’s loan balance was significantly greater than bonded debt outstanding for all years except fiscal year 2015. American Samoa’s loan balance consists of both loans from the U.S. government and intragovernmental loans, or loans between the territory’s primary government and component units. Between fiscal years 2005 and 2015, this included 1993 and 1994 Federal Emergency Management Agency community disaster loans totaling $10.2 million and a 1999 Department of the Interior loan in the amount of $18.6 million. In 2006 and 2007, the primary government also entered into two loan agreements with the government retirement fund, in the amounts of $10 million and $20 million, in part to finance infrastructure projects.
American Samoa’s total public debt outstanding remained small relative to its economy between fiscal years 2005 and 2015. During this period, American Samoa’s total public debt outstanding as a percentage of GDP was 5.3 percent in fiscal year 2005, reached a low of 4.4 percent in fiscal year 2014, and grew to 10.9 percent in fiscal year 2015. During this same period, bonded debt outstanding as a share of GDP was 1.3 percent in fiscal year 2005, declined to 0.44 percent in fiscal year 2007, and remained at 0 percent between fiscal years 2008 and 2014. The new bond issuance in fiscal year 2015 increased the share to 8.6 percent. (See figure 8.)
Total public debt per capita grew from $414 per person in fiscal year 2005 to about $1,213 per person in fiscal year 2015. (See figure 9.)
American Samoa’s General Revenue Grew and Net Position Was Positive and Generally Improving between Fiscal Years 2005 and 2015
American Samoa’s general revenue fluctuated but trended upward between fiscal years 2005 and 2015. American Samoa’s general revenue of $116.5 million in fiscal year 2015 represented a 20 percent increase over its revenue of $97.4 million in fiscal year 2005. Approximately 55 percent of the general revenue earned by American Samoa during this period consisted of tax revenue, and all of the tax revenue was from income and excise taxes. American Samoa’s total revenue (i.e., general revenue and program revenue combined) also fluctuated but trended upward between fiscal years 2005 and 2015. Its total revenue of $436.4 million in fiscal year 2015 represented a 55 percent increase over its total revenue of $281.8 million in fiscal year 2005. According to territory officials, growth in revenue during this period can be attributed in part to revenue generated by stimulus funding the territory received as part of the American Recovery and Reinvestment Act of 2009. (See figure 10.)
Along with the growth in revenue, American Samoa’s net position for the primary government was consistently positive and generally improving between fiscal years 2005 and 2014. American Samoa’s net position for the primary government generally improved year over year from a positive $217.7 million as of fiscal year end 2005 to a positive $291.9 million as of fiscal year end 2014; it then declined to a positive $245.1 million as of fiscal year end 2015. American Samoa’s net position for the primary government as of fiscal year end 2014 is shown prior to restatement. In fiscal year 2015, American Samoa implemented GASB Statement No. 68 and adjusted its beginning net position by $60.1 million, resulting in a restated net position as of fiscal year end 2014 of a positive $240.8 million. The implementation of GASB Statement No. 68 resulted in the territory recognizing previously unrecognized net pension liabilities and, therefore, a decline in ending net position in the year of recognition.
American Samoa’s total net position for the primary government and component units combined was also consistently positive and generally improving between fiscal years 2005 and 2015. It increased from $317.9 million as of fiscal year end 2005 to $450.2 million as of fiscal year end 2015.
The territory has previously faced financial management challenges, including failures to meet revenue projections and deficiencies in forecasting expenditures. Territory officials said, however, that they are taking a number of steps to improve forecasting. In early 2015, officials convened a task force in Hawaii to develop a plan to improve the management of American Samoa’s finances. As part of the effort to improve forecasting, the plan requires the treasury and budget departments to meet on a monthly basis to reconcile actual revenues and expenditures and brief the Governor. If revenues are below projections, the Governor may instruct all government departments to reduce spending by an additional 5 to 10 percent. In addition, officials told us that the territory is planning to procure a contractor in fiscal year 2017 to help further improve its revenue and spending forecasts.
American Samoa’s Bonded Debt Was Issued Primarily to Fund Infrastructure Projects
According to territory officials, American Samoa has never issued debt to fund government operating costs and does not intend to do so. Territory officials confirmed that the fiscal year 2015 and 2016 general obligation bonds were issued primarily to fund various infrastructure projects, including relocating airport fuel tanks, constructing an inter-island ferry, and establishing a territorial charter bank.
American Samoa Faces Economic Vulnerabilities That May Affect Its Ability to Repay Public Debt
While American Samoa’s level of public debt is relatively low compared to other territories, we found that it faces significant economic vulnerabilities that may hamper its ability to repay that debt. According to territory officials and our prior work, American Samoa’s economy relies heavily on the tuna processing and canning industry. In December 2016, we reported that canneries employed about 14 percent of American Samoa’s workforce in 2014. Moreover, we found that the canneries provided a number of indirect benefits to other industries and the economy in American Samoa. For example, other businesses exist because of the canneries, such as the company that manufactures the cans. Maintenance for the canneries and for the vessels that supply the canneries also has brought business and jobs to the island. Cannery workers spend money at local establishments, such as restaurants and retail stores. Additionally, exported cannery products and delivery of materials to the canneries reduced the shipping cost of bringing other goods to American Samoa. We also reported that the tuna canning industry faces a number of challenges. In addition, territory officials expressed concerns about federal policies that may hamper American Samoa’s tuna industry, such as scheduled minimum wage increases that raise labor costs for tuna canning in American Samoa relative to other locations, decreased access to fishing grounds in the Pacific due to environmental regulations, and potential erosion of the territory’s preferential trade status. In October 2016, one of the two companies with canning operations in American Samoa announced that it would indefinitely suspend its operations in the territory, and the other temporarily suspended operations twice during the same year. Changes in American Samoa’s tuna industry have been important determinants of changes in its GDP, and additional disruptions in the industry would reduce revenue and hamper GDP growth, which, if severe enough, could impede the repayment of existing debt.
In part because of such challenges, Moody’s Investors Service assigned a noninvestment grade rating to the territory’s bonds in early 2016. According to the rating agency, this rating reflected concerns associated with the territory’s small and volatile economy, low income levels, weak financial position, and financial management challenges. Territory officials told us that the Puerto Rico debt crisis has affected their access to favorable rates in capital markets, and they said that they currently do not have plans to issue any more bonded debt.
Commonwealth of the Northern Mariana Islands (CNMI)
CNMI’s Total Public Debt Declined by $100 million, Decreasing to 16 Percent of GDP between Fiscal Years 2005 and 2015
Total Public Debt Outstanding
CNMI’s total public debt outstanding declined from $251.7 million in fiscal year 2005 to $144.7 million in fiscal year 2015. (See figure 11.) During this time, CNMI’s primary government issued one general obligation bond in the amount of about $100.5 million in fiscal year 2007. This general obligation bond refinanced two prior bonds that were issued in fiscal years 2000 and 2003. Most of CNMI’s bonded debt outstanding is scheduled to mature in 2030 or later.
Between fiscal years 2005 and 2015, CNMI’s total public debt outstanding as a share of GDP grew from 23 percent in fiscal year 2005 to 26 percent in fiscal year 2007, and then declined to 16 percent in fiscal year 2015. Bonded debt outstanding as a share of GDP was 14 percent in both fiscal years 2005 and 2015, but reached 19 percent in fiscal year 2011. (See figure 12.)
CNMI’s total public debt outstanding per capita declined from about $4,199 per person in fiscal year 2007 to about $2,776 per person in fiscal year 2015. (See figure 13.)
CNMI’s General Revenue Fluctuated and Net Position Was Generally Declining between Fiscal Years 2005 and 2015
CNMI’s general revenue fluctuated between fiscal years 2005 and 2015. General revenue declined by about 39 percent between fiscal years 2005 and 2011, largely due to the decline in the territory’s garment industry. (See figure 14.) General revenue has steadily increased since fiscal year 2011, primarily as a result of growth in the tourism sector. Data from the Marianas Visitors Authority show that the downward trend in Japanese visitors from 2013 to 2016 was offset by growth in visitors from China and South Korea. The tourism industry has also been boosted by the introduction of a new casino. In August 2014, the CNMI government entered into a casino license agreement to construct a development project that will include a hotel with a minimum of 2,004 guest rooms and areas for gaming, food, retail, and entertainment, among other things. CNMI’s total revenue (i.e., general revenue and program revenue combined) also fluctuated between fiscal years 2005 and 2015. Total revenue reached a high of $635.7 million in fiscal year 2014 and then declined to $573.8 million in fiscal year 2015, which represented only a 1 percent increase over the fiscal year 2005 revenue of $567.9 million.
While general revenue fluctuated, dipping and then rebounding between fiscal years 2005 and 2015, CNMI’s net position for the primary government was negative and generally trended downward. Specifically, CNMI’s net position for the primary government declined from a negative $38.1 million as of fiscal year end 2005 to a negative $215.4 million as of fiscal year end 2015. CNMI’s net position for the primary government has been negative by over $200 million for each fiscal year since 2010, although it showed slight improvements between fiscal years 2011 and 2013 and in fiscal year 2015.
CNMI’s total net position for the primary government and component units combined fluctuated but generally remained stagnant, increasing slightly from $281.6 million as of fiscal year end 2005 to $284.8 million as of fiscal year end 2015.
Between Fiscal Years 2005 and 2015, CNMI Issued Public Debt to Refinance Prior Debt and to Fund Infrastructure Projects
CNMI’s Constitution prohibits public indebtedness for operating expenses of the CNMI government or its political subdivisions. In addition, the territory’s legislature must approve any bond issuances, and the value of any bonds issued cannot exceed 10 percent of the assessed value of real property within CNMI. In fiscal year 2007, the primary government of CNMI issued one general obligation bond to refinance two bonds originally issued in 2000 and 2003. Both the 2000 and 2003 bonds were issued to finance various infrastructure improvement projects. The 2003 issuance was also used for a one-time payment to settle land claims for the appropriation of private lands for public use.
Component units in CNMI also issue debt. In 2007, the Commonwealth Ports Authority, which is responsible for operating, maintaining, and improving all airports and seaports in CNMI, issued a bond for about $7.2 million. The proceeds of the bond were used in part to pay for improvements to seaport facilities at Saipan Harbor.
Despite Economic Growth, CNMI Faces Labor Shortages and Fiscal Risks That May Affect Its Ability to Repay Public Debt
While CNMI’s economic outlook has improved, with GDP increasing 3 years in a row since 2013, we found that the territory faces growing labor shortages that may affect its ability to repay public debt in the future. In May 2017, we reported that CNMI’s economy relies heavily on a foreign workforce and that foreign workers comprised a majority of the territory’s workforce in 2015. The Consolidated Natural Resources Act of 2008, among other things, established federal control of CNMI immigration beginning in 2009. The act established a transition period with special provisions for foreign visitors, investors, and workers. Specifically, it required the U.S. Department of Homeland Security (DHS) to establish a temporary work permit program for foreign workers and to reduce annually the number of permits issued, reaching zero by the end of the transition period—now set to occur on December 31, 2019. We analyzed the economic effect of removing all permitted foreign workers from CNMI’s economy using the most recent GDP information available, from calendar year 2015. Depending on the assumptions made, CNMI’s 2015 GDP would hypothetically have declined by 26 to 62 percent with no permitted workers. Planned reductions in permitted workers could worsen the effect on GDP going forward and hamper the territory’s ability to repay existing debt.
CNMI also has significant pension liabilities, but the exact amount of the net pension liability is not included in the territory’s most recent single audit report because the government has not complied with accounting standards that require it to do so. In 2013, a U.S. district court approved a settlement agreement with the territory’s government pension plan, which had filed for bankruptcy in 2012. As part of the settlement, CNMI agreed to make minimum annual payments to the fund to allow members to receive 75 percent of their full benefits. In addition to the settlement plan, CNMI appropriated $25 million of casino license fees to fund the restoration of the 25 percent reduction of the retirees’ and beneficiaries’ pensions, among other purposes. CNMI made one payment of $27 million and another payment of $19.4 million to the fund in fiscal year 2015. Territory officials told us they are planning to market a $45 million general obligation bond in 2017 to provide additional financing for the pension fund. They added, however, that they currently have no plans to issue debt for other purposes, such as infrastructure projects, because of uncertainty in the labor market.
In 2012, Moody’s Investors Service confirmed CNMI’s general obligation bond rating as noninvestment grade; the rating had been downgraded in 2009. According to the rating agency, the 2012 rating reflected losses in the territory’s garment industry, consistent operating deficits, and increasing unfunded pension liabilities.
Guam
Guam’s Total Public Debt More Than Doubled and It Grew from 24 to 44 Percent of GDP between Fiscal Years 2005 and 2015
Total Public Debt Outstanding
Guam’s total public debt outstanding increased from almost $1 billion in fiscal year 2005 to $2.5 billion in fiscal year 2015, with the majority of the increase occurring between fiscal years 2008 and 2015, when total outstanding public debt grew 13 percent on average per year. (See figure 15.) In fiscal year 2015, 54 percent of Guam’s total public debt outstanding was issued by component units. Territory officials told us that component unit debt is backed solely by the revenue those component units generate and that this revenue cannot be used to service debt issued by the primary government.
The majority of Guam’s total public debt is in the form of bonds. Bonded debt outstanding comprised between 93 and 97 percent of total public debt outstanding from fiscal years 2005 through 2015, and most of Guam’s bonded debt outstanding will mature in 2027 or later. The remainder of Guam’s public debt outstanding between fiscal years 2005 and 2015 consisted primarily of notes and loans, including loans from the federal government.
Between fiscal years 2005 and 2015, Guam’s total public debt outstanding as a share of GDP increased from 24 percent to 44 percent, with bonded debt outstanding growing similarly from 22 percent of GDP to 42 percent. (See figure 16.)
Both total public debt and bonded public debt outstanding per capita more than doubled between fiscal years 2005 and 2015. Total public debt outstanding per capita rose from about $6,270 per person to $15,323 per person, while bonded public debt outstanding increased from $5,810 per person to $14,759 per person. (See figure 17.)
Guam’s General Revenue Grew and Net Position Fluctuated Significantly between Fiscal Years 2005 and 2015
Guam’s general revenue grew by 6 percent per year, on average, between fiscal years 2005 and 2015, from $573.2 million to $862.7 million. General revenue declined sharply in fiscal year 2006, recovered in fiscal year 2007, and then increased steadily through fiscal year 2015. According to territory officials, this increase in revenue can largely be attributed to economic development, with significant growth in tourism and new construction. A 2015 report to Guam’s bondholders noted that there was an increase in visitors to the island each month between 2014 and 2015. The report attributed this increase to several factors, such as the expanded number of airline routes to Guam, the favorable exchange rate for Asian visitors, and the relative improvement of the overall global economy. Guam’s total revenue, or general revenue and program revenue combined, also grew, by 5 percent per year on average between fiscal years 2005 and 2015, from $1.4 billion to $2.2 billion. (See figure 18.)
To project revenues, Guam officials use a model in which statistical weights, derived from historical collections data for prior fiscal years, are calculated and assigned to each revenue source.
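Guam's actual weights are not described in the documents we reviewed, so the sketch below assumes one plausible interpretation: each revenue source is projected forward at its average historical growth rate, computed from hypothetical collections data.

```python
# A hypothetical sketch of a per-source weighted revenue projection.
# The revenue sources, figures, and weighting scheme are assumptions;
# they are not Guam's actual model.

history = {  # hypothetical collections by source, prior 4 fiscal years ($ millions)
    "income_tax":     [300, 310, 325, 340],
    "gross_receipts": [180, 185, 190, 198],
    "other":          [90, 88, 92, 95],
}

def project_next_year(series):
    """Project a source forward using its average year-over-year growth rate."""
    growth_rates = [later / earlier for earlier, later in zip(series, series[1:])]
    avg_growth = sum(growth_rates) / len(growth_rates)
    return series[-1] * avg_growth

projection = {source: project_next_year(values) for source, values in history.items()}
print({source: round(amount, 1) for source, amount in projection.items()})
print("total projected revenue:", round(sum(projection.values()), 1))
```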
While revenue generally grew, Guam’s net position for the primary government fluctuated significantly between fiscal years 2005 and 2015. Net position turned negative at fiscal year end 2006 and generally trended downward in the years that followed: it declined from a positive $79.8 million as of fiscal year end 2005 to a negative $194.2 million as of fiscal year end 2012. Net position improved significantly and was positive in fiscal years 2013 and 2014, but then declined from a positive $174.4 million as of fiscal year end 2014 to a 10-year low of a negative $670.9 million as of fiscal year end 2015. Guam’s net position for the primary government as of fiscal year end 2014 is shown prior to restatement. In fiscal year 2015, Guam implemented GASB Statement No. 68 and adjusted its beginning net position by $815.6 million, resulting in a restated net position as of fiscal year end 2014 of a negative $641.2 million. The implementation of GASB Statement No. 68 resulted in the territory recognizing previously unrecognized net pension liabilities and, therefore, a decline in ending net position in the year of recognition.
Guam’s total net position for the primary government and component units combined also fluctuated significantly. Specifically, Guam’s total net position increased from a positive $788.8 million as of fiscal year end 2012 to a 10-year high of positive $1.2 billion as of fiscal year end 2014. It declined to a 10-year low of positive $47.3 million as of fiscal year end 2015 due to the implementation of GASB Statement No. 68.
Guam Has Used Public Debt to Meet Federal Requirements and Court Orders
According to territory officials, Guam’s bonded debt outstanding has primarily been used to comply with federal requirements and court orders. Guam has issued debt in several cases when compelled to meet federal and territorial requirements. For example, since Guam adheres to the mirror tax code, the territory is required to fund the Earned Income Tax Credit (EITC) and is not reimbursed for this by the federal government. In June 2004, the territory agreed to pay $60 million over 9 years in settlement of unpaid EITC refunds from 1996, and in September 2006, the territory reached a new settlement replacing the 2004 agreement in which it agreed to pay up to $90 million.
Moreover, in 2006, the Superior Court of Guam held that a territorial statutory provision required the retirement fund for government employees to pay past due annual lump-sum cost-of-living allowance (COLA) payments plus interest to eligible retirees and survivors. This resulted in an award of $123.5 million plus interest to those individuals. In response, Guam issued a general obligation bond in 2007 in the amount of $151.9 million to finance past due tax refunds and outstanding COLA settlement payments, as well as to refinance prior debt and help fund infrastructure projects. In 2009, it issued another general obligation bond in the amount of $271 million for similar purposes. According to a Guam government report, the largest increase in the territory’s indebtedness occurred between fiscal year 2008 and fiscal year 2009 and was due in part to issuing bonds to pay for past due tax refunds and unpaid COLA expenses. In Guam’s 2017 draft debt management policy, the Governor cited the administration’s commitment to ensuring that tax refunds will be paid on time and no later than 6 months after filing.
In addition, in February 2004, the U.S. Environmental Protection Agency (EPA) and the Department of Justice filed a consent decree in the U.S. District Court of Guam. The consent decree set forth the terms agreed to by the federal government and Guam to settle a lawsuit alleging that Guam had violated the Clean Water Act, including deadlines for opening a new landfill and adopting a dump closure plan. In response to a 2009 District Court order that Guam comply with the terms of the consent decree, the territory chose to issue a $202.4 million limited obligation bond to fund closing the Ordot dump and constructing a new landfill to meet the terms of the settlement agreement.
Guam also issued revenue bonds between fiscal years 2005 and 2015 to finance infrastructure projects. For example, in 2011 a revenue bond backed by hotel occupancy taxes was issued in the amount of $90.6 million in part to fund the construction of a museum on the island and other projects to benefit Guam’s tourism industry. In addition, in 2013 Guam’s Airport Authority issued $247 million in bonds that were used, in part, to fund airport enhancements.
Under its Organic Act, Guam has the authority to issue bonds, but its public indebtedness may not exceed 10 percent of the aggregate tax valuation of property in the territory; tax valuation is currently set at 90 percent of the appraised value of property. The limit applies to both general obligation and limited obligation debt. Because the 10 percent ceiling itself is fixed, Guam has increased its borrowing capacity by amending the statutory definition of assessed value, which determines the tax valuation of property. In September 2007, to increase borrowing capacity to address a $524 million deficit, Guam amended its statutory definition of assessed value from 35 percent of appraised property values to 70 percent. In May 2009, the definition was amended again, to 90 percent of appraised property values, so that Guam could issue bonds to comply with the requirement to close the Ordot dump and open a new landfill. In fiscal year 2012, the government increased borrowing capacity a third time by amending the definition of assessed value to 100 percent of appraised value in order to fund past due tax refunds. In fiscal year 2016, the statutory definition of assessed value was reduced back to 90 percent of appraised value.
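The interaction between the fixed 10 percent limit and the adjustable assessed-value fraction can be illustrated with a short sketch; the aggregate appraised value used below is hypothetical.

```python
# The mechanics of Guam's debt ceiling: 10 percent of the aggregate tax
# valuation of property, where tax valuation is a statutory fraction of
# appraised value. The appraised value below is hypothetical.

appraised_value = 20e9  # hypothetical aggregate appraised property value, dollars

for assessed_fraction in (0.35, 0.70, 0.90, 1.00):  # statutory definitions over time
    tax_valuation = assessed_fraction * appraised_value
    ceiling = 0.10 * tax_valuation
    print(f"assessed at {assessed_fraction:.0%}: ceiling = ${ceiling / 1e9:.2f} billion")

# Raising the assessed-value fraction from 35 to 70 percent doubles borrowing
# capacity even though the 10 percent limit itself never changes.
```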
Despite Guam’s Recent and Expected Economic Growth, Growing Pension Fund Liabilities May Present a Risk
Despite economic growth, we found that Guam faces large fiscal risks related to unfunded pension liabilities and other post-employment benefits (OPEB) that, if unaddressed, may hamper its ability to repay existing debt and increase its need to issue debt. A number of factors may contribute to continued economic growth in Guam. Specifically, according to a government report, visitor arrivals to Guam are projected to continue increasing and higher room rates and occupancy are leading to continued hotel development. Moreover, the Marine Corps has plans to consolidate bases in Okinawa, Japan, and relocate 4,100 Marines to Guam. The Department of Defense (DOD) expects this relocation to Guam to occur between fiscal years 2022 and 2026. Officials from Guam predict that the military buildup will result in significant additional investment in Guam’s economy. In July 2016, DOD agreed to give Guam approximately $55.6 million in grants to fund civilian water and wastewater projects linked to the military buildup; additional investments in the power infrastructure will also be funded by DOD. A 2014 study conducted by the Department of the Navy on the effect of the military buildup on Guam’s economy concluded that it would increase civilian labor force demand, increase civilian labor force income, and increase tax revenues.
While it maintained Guam’s debt at investment grade as of 2017, the rating agency Standard & Poor’s expressed concern about Guam’s extremely high debt burden and vulnerability to economic changes in its tourism and military industries. In addition, Guam has large pension and OPEB liabilities that may stress current debt service payment arrangements if anticipated savings from changes to the government pension system are not realized. In fiscal year 2015, pension liabilities were $1.2 billion and OPEB liabilities were $2 billion, or 22 and 37 percent of GDP, respectively. Territory officials told us that they have taken a variety of steps to address their unfunded pension and OPEB liabilities. In 1995, the government closed the defined benefit plan to new members, with all new employees participating in a defined contribution plan, which resulted in a decrease in accrued liabilities. To address insufficient savings by members in the defined contribution plan, the legislature created two new retirement plans in 2016. The government estimates that the new retirement plans could add an additional $173 million to the pension fund. Territory officials said the government is meeting its actuarial contributions on an annual basis and is on track to pay off the existing unfunded pension liability in approximately 15 years.
United States Virgin Islands (USVI)
USVI’s Total Public Debt Nearly Doubled and Grew from 32 to 72 Percent of GDP between Fiscal Years 2005 and 2015
Total Public Debt Outstanding
Between fiscal years 2005 and 2015, USVI’s total public debt outstanding grew by 84 percent, from $1.4 billion to $2.6 billion. (See figure 19.) The sharpest increase was between fiscal years 2008 and 2010, when total public debt outstanding increased by about $800 million. Almost all of USVI’s public debt was in the form of bonds. Bonds issued by USVI’s primary government are backed by either 1) both a general obligation of the government and a gross receipts tax, or 2) an excise tax on rum produced in USVI. Bonds issued by component units are backed by their revenues. Approximately half of USVI’s bonded debt is backed by revenues generated from the excise tax placed on rum imports to the U.S. mainland. Both the primary government and component units issued notes and took out loans during this period. Most of USVI’s bonded debt outstanding is scheduled to mature in 2027 or later.
USVI’s total public debt outstanding as a percentage of GDP more than doubled between fiscal years 2005 and 2015, growing from 34 percent to 72 percent. The steepest increases were between fiscal years 2008 and 2010, when total public debt outstanding as a percentage of GDP increased by 19 percentage points, and between fiscal years 2011 and 2014, when it increased by 16 percentage points. Bonded debt outstanding was 63 percent of GDP in fiscal year 2015. (See figure 20.)
Total public debt outstanding per capita also increased during this period, rising from about $13,063 per person in fiscal year 2005 to about $25,739 per person in fiscal year 2015. (See figure 21.)
USVI’s General Revenue Remained Stagnant and Net Position Was Generally Declining between Fiscal Years 2005 and 2015
USVI’s general revenue showed almost no growth in the 10-year period between fiscal years 2005 and 2015. USVI’s general revenue declined from fiscal year 2008 to fiscal year 2009 due to the 2008 recession and operating losses at the Hovensa oil refinery, and it rebounded in fiscal year 2010 as the economy recovered. General revenue decreased again from fiscal year 2010 to fiscal year 2011 and then increased between fiscal years 2011 and 2014. Despite the increase, the fiscal year 2015 general revenue of $919.4 million was only about $43 million greater than that collected 10 years prior. In contrast, USVI’s total revenue (i.e., general revenue and program revenue combined) grew by 2 percent per year, on average, between fiscal years 2005 and 2015, from $1.6 billion to $1.9 billion. (See figure 22.)
USVI has a statutory requirement that a team, composed of senior executives and legislative officials, meet at least twice a year to establish an official economic forecast of the territorial economy, including estimates of the following year’s revenue. Territory officials acknowledged that in recent years actual revenues have been less than estimated, citing both adverse economic conditions and litigation that had blocked the collection of property taxes for several years. These officials said that a new estimation methodology has been devised that uses a weighted average of the prior 5 years of actual revenue.
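Officials described the methodology only in general terms, so the sketch below assumes one plausible weighting scheme, with more recent years weighted more heavily; the weights and revenue figures are illustrative assumptions, not USVI's actual parameters.

```python
# A sketch of a weighted average of the prior 5 years of actual revenue.
# The weights and revenue figures are assumptions for illustration;
# officials did not specify the weighting scheme they adopted.

actuals = [850, 860, 840, 880, 900]  # prior 5 fiscal years, oldest first ($ millions)
weights = [1, 2, 3, 4, 5]            # assumed: more recent years weighted more heavily

estimate = sum(w * r for w, r in zip(weights, actuals)) / sum(weights)
print(f"next-year revenue estimate: ${estimate:.1f} million")  # $874.0 million
```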
USVI’s net position for the primary government declined year over year from a negative $215.0 million as of fiscal year end 2008 to a negative $1.5 billion as of fiscal year end 2014, and it continued to decline to a negative $3.7 billion as of fiscal year end 2015. USVI’s net position for the primary government as of fiscal year end 2014 is shown prior to the restatement implementing GASB Statement No. 68. In fiscal year 2015, USVI implemented GASB Statement No. 68 and adjusted its beginning net position by $2.0 billion, resulting in a restated net position as of fiscal year end 2014 of a negative $3.5 billion. The implementation of GASB Statement No. 68 resulted in the territory recognizing previously unrecognized net pension liabilities and, therefore, a decline in ending net position in the year of recognition. USVI’s declining net position for its primary government reflects its deteriorating financial position.
USVI’s total net position for the primary government and component units combined increased between fiscal year end 2005 and 2007; it then declined year over year from positive $490.9 million as of fiscal year end 2008 to negative $3.6 billion as of fiscal year end 2015.
Since 2010, USVI’s Public Debt Has Been Used Primarily for General Government Operations
More than a third of USVI’s bonded debt outstanding as of fiscal year 2015 was issued to fund government operating costs. Before 2010, bonded debt issued on behalf of the primary government was used to refinance earlier bond issues; to fund infrastructure projects such as improvements to schools, public safety facilities, and transportation infrastructure; or to assist privately owned industrial enterprises, specifically construction at the Cruzan and Diageo rum distilleries and payment of a portion of the costs of sewage and solid waste disposal at the Hovensa oil refinery.
In the period following the 2008 recession, revenues declined while demands for spending continued. In response, USVI issued debt to finance regular government operating expenses. Between July 2010 and December 2014, USVI issued almost $850 million in bonds for this purpose, with maturities ranging between 1 and 20 years.
According to territory officials, several factors contributed to USVI’s increasing reliance on debt to fund government operations, including the recession of 2008, the 2012 closure of the Hovensa oil refinery, a decline in USVI’s share of worldwide rum sales, and a decline in visits from cruise ship passengers. According to a senior government official, the closure of the Hovensa refinery was particularly detrimental to the territory’s economy and resulted in the loss of 2,000 jobs on St. Croix and a significant decrease in revenue. As of April 2017, USVI’s unemployment rate was 10.3 percent.
USVI officials cited several federal requirements that contributed to USVI’s need to issue debt. Because USVI operates under the mirror tax code, officials noted that USVI is required to pay the EITC to its residents but is not reimbursed for this by the federal government. In contrast, state governments do not pay the EITC because it is a federal benefit administered through the federal tax code. Officials also cited EPA directives for improving landfills and water projects, as well as federal banking regulations that treat branches of U.S. banks in USVI as non-U.S. banks, thereby discouraging large banks from having branches in USVI, as reasons that USVI has issued debt.
Economic Uncertainty and Large Fiscal Risks May Significantly Limit USVI’s Ability to Repay Public Debt
USVI officials expressed confidence in the territory’s ability to repay public debt, but we found that large fiscal risks and exclusion from capital markets may hamper its ability to do so. USVI’s bonds are backed by the gross receipts tax on some individuals and entities doing business in USVI and by excise tax revenues collected by the federal government and remitted to USVI as required by statute. Officials said that revenues from the gross receipts tax and excise tax rebates, from which debt service payments are made, are monitored on a month-by-month basis. Officials also cited as a protection against default the “lockbox” provisions that USVI has long had in its bond contracts and that were written into its statutes in 2016. Under these provisions, gross receipts tax and excise rebate revenue go directly to an escrow account in a New York bank, and the escrow agent makes debt service payments twice a year from the account; a year’s worth of payments is held in reserve at all times.
USVI officials expressed confidence that these provisions make it difficult for USVI to default on its debt payments. However, in a recent statement, Moody’s rating service said that these security provisions have not been tested in a stress scenario where the government faces a lack of funds to provide basic services. This observation was part of a statement issued by Moody’s in late January 2017 in which it announced it had downgraded USVI’s matching fund bonds (those backed by excise tax rebates) to noninvestment grade.
Other rating agencies expressed similar concerns. For example, Standard & Poor’s cited 1) the government’s fiscal distress, as evidenced by its significant structural imbalance and continued reliance on deficit financing to fund operations; 2) revenue backed bond issues that have exhibited either declining or flat growth absent tax rate increases and are levied on a limited and concentrated base; 3) adequate, but substantially reduced, debt service coverage; and 4) a limited economy, concentrated in rum production, tourism, and government.
In late January 2017, USVI cancelled a new bond issuance it was attempting to market to provide additional financing for general government operations. The bond issuance was authorized by the USVI legislature in 2016, but according to a senior bank official involved in underwriting USVI bonds, delays in bringing the issuance to market, and the legislature’s delay in enacting so-called “sin taxes” on items such as beer, cigarettes, and liquor, reduced the chances of successfully marketing the bond issue to investors. By the time USVI made an effort to market the bonds in late 2016 and January 2017, the Puerto Rico debt crisis had increased investors’ concerns about USVI’s debt as well. The rating downgrades of existing USVI debt, while not the decisive factor according to the bank official, did reinforce existing skepticism on the part of potential investors. Ultimately, the early 2017 bond issuance was not adequately subscribed and the offer failed. USVI effectively lost market access to new debt even at high interest rates.
In September 2016, the administration released its 5-year financial plan. The two major features of this plan were a reduction in government expenditures, by limiting hiring and reducing non-personnel costs, and a proposal to increase revenue through taxes on beer, rum, wine, brandy, sugar-laden carbonated beverages, and cigarettes, among other revenue-generating measures. The legislature passed the tax increase bill, with some modification of the Governor’s proposal, in early March 2017, and the Governor signed it into law on March 22, 2017.
In the 5-year financial plan, the administration said that adopting austerity and tax measures would eliminate future deficits, which otherwise would amount to more than $130 million for each fiscal year between 2017 and 2021. A senior USVI official expressed a belief that the level of consumption of cigarettes, for example, would remain at pretax levels despite the higher cost. However, due to the price elasticity of demand, an increase in the price of cigarettes could decrease cigarette consumption and therefore revenues. If the tax increases do not produce the anticipated level of revenue, and if USVI is not able to regain access to capital markets, even more stress will be placed on the debt service arrangements currently in effect.
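The revenue risk described above follows from the standard price-elasticity relationship. The sketch below assumes an elasticity of -0.4, a commonly cited estimate for cigarettes rather than a USVI-specific figure, and a hypothetical 25 percent price increase.

```python
# How price elasticity of demand can erode anticipated tax revenue.
# Elasticity of -0.4 is an assumption (a common estimate for cigarettes),
# not a USVI-specific figure; the price increase is also hypothetical.

elasticity = -0.4       # percent change in quantity per percent change in price
price_increase = 0.25   # hypothetical 25 percent price rise from the new tax

quantity_change = elasticity * price_increase                # -0.10, i.e., -10 percent
spending_change = (1 + price_increase) * (1 + quantity_change) - 1

print(f"consumption change: {quantity_change:.0%}")    # -10%
print(f"total spending change: {spending_change:.1%}")  # +12.5%, well below the +25%
# that an assumption of unchanged consumption would imply.
```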
Moreover, the recent measures do not address the fiscal risk presented by unfunded pension liabilities and OPEB for government employees. USVI reported an unfunded pension liability of over $3 billion, which was 83 percent of GDP in fiscal year 2015. According to an independent consulting firm’s August 2016 report conducted for the USVI Government Employees Retirement System, the retirement fund will become insolvent in 2023 without adding financial resources and adjusting benefit levels.
Territory officials cited several reasons for the large unfunded pension and OPEB liabilities. These include recent legislation that made more retirees eligible for pensions and a decline in the active USVI government workforce that narrowed the ratio of active workers to retirees from 6-to-1 in fiscal year 1982 to almost 1-to-1 in fiscal year 2015. In addition, officials told us that the most significant cause of the current condition of the retirement system is the primary government making contributions to the system below the amounts required by law.
Some measures have been taken to address the retirement fund’s impending insolvency, and other steps have been recommended. According to territory officials, USVI law changed in 2005, resulting in increased required pension contributions from all newly hired employees except for judges and legislators. In 2013, a Pension Reform Task Force (Task Force) recommended legislation that would 1) increase government and employee contributions towards pension benefits, 2) raise contribution rates for senators and judges, 3) reduce retiree current benefits by 10 percent, 4) increase the early retirement age from 50 to 55 and the regular retirement age from 60 to 65, 5) limit cost of living increases, and 6) change the formula used to calculate benefits.
In October 2015, the legislature enacted and the Governor signed legislation that raised retirement ages for some employees, changed the basis for determining pension levels to career earnings, and allowed the retirement system to invest funds in lower-rated securities. This legislation did not, however, address most of the Task Force recommendations. Territory officials told us that the administration will put forward additional pension reform proposals in the near future; however, it remains unclear what those reforms will entail and when they would take effect.
Moreover, territory officials told us that since 2011 the government has paid less than half of actual post-employment benefit costs, leaving an unpaid current obligation of $357 million as of fiscal year 2015. The unfunded liability for post-employment benefits, projecting anticipated future costs, was most recently calculated in October 2013; at that time it was just over $1 billion. USVI’s pension and OPEB obligations are already contributing to the territory’s debt burden, and will likely continue to do so at an increasing rate. If unaddressed, they may place additional stress on the debt service arrangements currently in effect and hamper the territory’s ability to repay debt.
Agency Comments, Third-Party Views, and Our Evaluation
We provided a draft of this report for review to the U.S. Departments of the Interior and the Treasury. We also provided, to the governments of Puerto Rico, American Samoa, the Commonwealth of the Northern Mariana Islands (CNMI), Guam, and the United States Virgin Islands (USVI), the portions of the draft that were relevant to them. We received written comments from each of the five territories’ governments, which are reprinted in appendixes II, III, IV, V, and VI, respectively. We also received technical comments from American Samoa, Guam, USVI, and Treasury, which we incorporated as appropriate. We did not receive any comments from the Department of the Interior. In the letter from the Governor of Guam, the territory raised some issues, which we subsequently discussed in depth with territory officials. Following these discussions, we modified the draft to provide additional context by broadening our coverage of revenue for Guam and, as applicable, for the other territories. We provide additional information about changes that we made or did not make at the end of appendix V.
We will provide copies of this report to the Governor of each territory and the U.S. Secretaries of the Interior and Treasury. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staffs have questions about this report, please contact Susan J. Irving at (202) 512-6806 or David Gootnick at (202) 512-3149. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII.
Appendix I: Objectives, Scope, and Methodology
Our objectives were, for each U.S. territory—Puerto Rico, American Samoa, the Commonwealth of the Northern Mariana Islands (CNMI), Guam, and the United States Virgin Islands (USVI)—to describe (1) trends in public debt and its composition between fiscal years 2005 and 2015, (2) trends in revenue and its composition between fiscal years 2005 and 2015, (3) the major reported drivers of the territory’s public debt, and (4) what is known about the ability of the territory to repay public debt.
For the purposes of this report, total debt held by the public (public debt) refers to the sum of bonds payable and other debt payable as described in the audited financial statements included within the territories’ single audit reporting packages, hereinafter referred to as the single audit reports. Bonds payable are marketable bonded debt securities issued by territorial governments or their component units and held by investors outside those governments. Other debt payable may include marketable notes issued by territorial governments and held by investors outside those governments; non-marketable intragovernmental notes and notes held by local banks; and federal loans, intragovernmental loans, and loans issued by local banks. Pension liabilities and other post-employment benefits (OPEB) are not included in the definition of total public debt but are considered and discussed in the sections of the report that describe the territories’ ability to repay their public debt.
To describe trends in public debt and its composition for each territory, we reviewed the territories’ single audit reports. These single audits are conducted each year by independent accounting firms in accordance with government auditing standards. We obtained single audit reports for American Samoa, CNMI, Guam, and USVI for fiscal years 2005 through 2015. We also obtained and analyzed consolidated audited financial statements for Puerto Rico for fiscal years 2005 through 2014 from the Commonwealth of Puerto Rico’s Treasury Department website. For each territory, we reviewed the independent auditor’s report corresponding to each single audit and noted the type of opinion that was expressed on the financial statements and accompanying note disclosures. With the exception of Puerto Rico, each of the territories received modified opinions from auditors on one or more of the single audit reports included in our analysis. We reviewed each of these opinions and determined that, despite the modified opinions, the data we obtained from the single audit reports were reliable for the purpose of describing trends in debt and revenue and their composition for the fiscal years included in our analysis.
For each territory, we extracted information on public debt—specifically bonds, loans, and notes for both the primary government and component units—for each fiscal year and recorded the data on spreadsheets, which were then independently verified by other analysts. For American Samoa, CNMI, Guam, and USVI, we calculated debt per capita and debt as a percentage of nominal Gross Domestic Product (GDP) using nominal GDP and population data from the U.S. Department of Commerce’s Bureau of Economic Analysis. For Puerto Rico, we obtained data on Gross National Product (GNP) and nominal GDP from the Puerto Rico Planning Board, within the Office of the Governor, and data on population from the U.S. Census Bureau.
To identify trends in revenue and its composition for each territory, we obtained and recorded information from the single audit reports on general revenues. All tax revenues, including tax revenues that are dedicated to particular purposes, are reported in general revenues. Tax revenues represent the largest component of general revenues and include both derived tax revenues (resulting from assessments imposed on exchange transactions, such as income taxes and sales taxes) and imposed nonexchange revenues (resulting from assessments imposed on nonexchange transactions, such as property taxes and fines). General revenues also include other forms of revenue, such as unrestricted aid from other governments and investment earnings. Our analysis primarily focused on trends in general revenues because the territories’ public debt is either explicitly or implicitly backed by general revenues. We also included total revenue—general revenues and program revenues combined—in our analysis because it reflects revenue generated by the territories’ component units and could be used to service debt payments.
In addition to general revenue levels, another measure of fiscal health is the net position for primary government activities, which represents the difference between the primary government’s assets (including the deferred outflows of resources) and the primary government’s liabilities (including the deferred inflows of resources). In other words, the net position for primary government activities reflects what the primary government would have left after satisfying its liabilities. A negative net position means that the primary government has more liabilities than assets. A decline in net position may be indicative of a deteriorating financial position. While our analysis primarily focuses on trends in the net position for the primary government, we also include certain information on trends in the total net position for the primary government and component units combined.
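Stated as a formula (a restatement of the definition above, not notation taken from GASB guidance):

```latex
\text{Net position} = (\text{Assets} + \text{Deferred outflows of resources}) - (\text{Liabilities} + \text{Deferred inflows of resources})
```

A negative result means that liabilities and deferred inflows exceed assets and deferred outflows.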
To determine the major reported drivers of public debt and what is known about the territories’ ability to repay this debt, we interviewed officials from the territories’ governments, including officials from the Governors’ offices, departments of finance or treasury, and the agency responsible for issuing and marketing bonded debt. We also spoke to officials in territorial public audit offices. In addition, we interviewed representatives of the three rating agencies that provide credit ratings for the territories’ securities: Fitch, Moody’s, and Standard and Poor’s.
In addition, to determine what is known about the territories’ ability to repay public debt we analyzed common factors—identified through prior work, documents, and interviews with the three rating agencies—that indicate territories’ potential vulnerability to debt crises. These factors included 1) the extent to which territories consistently issued debt to fund general government operations, 2) the extent to which territories’ economies were vulnerable to shocks due to a heavy dependence on a single or limited industry, and 3) the extent to which territories faced large fiscal risks such as pension liabilities.
We also interviewed officials from the Department of the Interior’s Office of Insular Affairs, which provides grant aid and technical assistance and support to the territories, and the Pacific and Virgin Islands Training Initiatives, which provides training and technical assistance on fiscal management to the Pacific territories and USVI, and directs the preparation of an annual report on the fiscal condition of these territories. In addition, we spoke with subject matter experts on territorial debt, officials from an investment bank involved in underwriting the territories’ bonds, and officials from the three rating agencies that rate the marketability of the territories’ bonds.
We obtained and reviewed information on territorial bond issuances from fiscal years 2005 through 2015 from the Electronic Municipal Market Access (EMMA) database of the Municipal Securities Rulemaking Board, the primary regulator of the municipal securities market, including memoranda of offering for individual bond issuances.
We conducted this performance audit from September 2016 to October 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained for the purpose of addressing our audit objectives provides a reasonable basis for our findings and conclusions.
Appendix II: Comments from the Government of Puerto Rico
Appendix III: Comments from the Government of American Samoa
Appendix IV: Comments from the Government of the Commonwealth of the Northern Mariana Islands
Appendix V: Comments from the Government of Guam
The following are GAO’s comments on Guam’s letter that supplement the comments in the text.

1. Our responses to Guam’s technical comments are not corrections. After reviewing Guam’s comments, we expanded the information provided on revenue and net position for all 5 territories. For example, for Guam, we included on pages 41 and 42 of this report additional information on revenue where we combine primary government revenue and component unit revenue.

2. Since our objective was to provide the most comprehensive metric of total public debt, it would have been incorrect for us to exclude public enterprise and revenue bond debt from our measure.

3. We do not compare the relative public debt burdens of the territories in this report. Further, pension liabilities are not included in our definition of public debt. Our definition of total public debt does include component unit debt, which Guam excludes from the calculations presented in its response.

4. On pages 41 and 42 of this report we include both a measure of primary government revenue and a measure of primary government revenue and component unit revenue combined; an "apples-to-apples" comparison can be made to our total public debt figure, which includes component unit debt.

5. Our calculation of total public debt outstanding for Guam is the total of bonds payable and notes payable, both the current and noncurrent portions, and other debt as defined on page 8 of this report. Guam’s calculation of total public debt outstanding as shown in the table is all noncurrent liabilities except the net pension liability and results in a higher amount for fiscal year 2015 than our calculation. For bonds payable, our calculation includes both the current and noncurrent portions of bonds payable. Guam’s calculation of bonds payable as shown in the table only includes the noncurrent portion and results in a lower amount for fiscal year 2015 than our calculation. As a result of these differences, our calculation of bonded debt outstanding as a percentage of total public debt outstanding for fiscal year 2015 is higher than Guam’s calculation.

6. As noted on page 42 of this report, while revenue generally grew, Guam’s net position for the primary government fluctuated significantly between fiscal years 2005 and 2015. Since fiscal year end 2006, Guam’s net position for the primary government has been negative and trending downward. Guam’s total net position for the primary government and component units combined also fluctuated significantly. On pages 41 and 42 of this report, we explicitly note the increase in revenue; however, in the long term, significant financial risks may outweigh any given year’s revenue increase.

7. Based on our methodology, which includes component unit debt, Guam’s total public debt outstanding was $2.5 billion for fiscal year 2015.

8. We used total public debt outstanding, not solely tax-supported debt, to calculate the debt-to-GDP ratio for all 5 territories. As reported on page 39 of this report, Guam’s debt to GDP ratio is 44 percent for total public debt and 42 percent for bonded debt for fiscal year 2015. We do not rank the U.S. territories in this report.

9. The per capita amounts presented in the report are based on debt amounts from Guam’s fiscal year 2015 audit as reported in the single audit report. However, the debt amounts and population figure shown in the table differ from those used in our calculations. Total public debt and bonded public debt outstanding used in our per capita calculations are calculated as discussed in comment 5 above, which differ by about $7 million from the amounts cited in the table. In addition, the population figure used in our per capita calculations is on a fiscal year basis, which results in 161,500 for fiscal year 2015. (See the illustrative calculation following these comments.)

10. Pension liabilities are not included in our definition of public debt. The debt per capita numbers that we present in this report are based on total public debt. For Guam, that figure for fiscal year 2015 was $2.5 billion.

11. We disagree with Guam’s comment that the presentation in this report is negative. The final section in the discussion of Guam notes both the elements that may contribute to continued economic growth in Guam and the vulnerabilities and risks to the future: a high total debt burden and vulnerability to economic changes in its tourism and military industries. In addition, we note that Guam has large pension and other post-employment benefit liabilities that may stress current debt service payment arrangements if anticipated savings from changes to the government pension system are not realized.
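To illustrate the per capita calculation described in comment 9, using the fiscal year population cited above and the rounded $2.5 billion debt figure (an approximation; the per capita amounts in this report are based on the unrounded debt amounts from Guam’s single audit report):

\[
\text{Total public debt per capita} \approx \frac{\$2{,}500{,}000{,}000}{161{,}500} \approx \$15{,}500
\]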
Appendix VI: Comments from the Government of the United States Virgin Islands
Appendix VII: GAO Contacts and Staff Acknowledgments
GAO Contacts
Staff Acknowledgments
In addition to the contacts named above, Tara Carter, Assistant Director; Emil Friberg, Assistant Director; Divya Bali, Analyst-in-Charge; and Steven Berke, Karen Cassidy, and Eddie Uyekawa made significant contributions to this report. Dawn Simpson, Director; Nicole Burkart, Assistant Director; and J. Mark Yoder provided accounting expertise. Also contributing to this report were Pedro Almoguera, Jeffrey Arkin, Ann Czapiewski, John Hussey, Heather Krause, Donna Miller, Amy Radovich, Justin Snover, and A.J. Stephens.

Why GAO Did This Study
The United States has five territories: Puerto Rico, American Samoa, CNMI, Guam, and USVI. The territories, like U.S. states in some cases, borrow through financial markets. Puerto Rico in particular has amassed large amounts of debt, and defaulted on billions of dollars of debt payments. In response to the fiscal crisis in Puerto Rico, Congress enacted and the President signed the Puerto Rico Oversight, Management, and Economic Stability Act (PROMESA) in June 2016, which established an Oversight Board with broad powers of budgetary and financial control over Puerto Rico and requires GAO to study fiscal issues in all five U.S. territories.
In this report, for each territory for fiscal years 2005-2015, GAO examined (1) trends in public debt and its composition, (2) trends in revenue and its composition, (3) the major reported drivers of the territory's public debt, and (4) what is known about the ability of each territory to repay public debt.
GAO analyzed the territories' single audit reports; interviewed officials from the territories' governments, ratings agencies, and subject matter experts; and reviewed documents and prior GAO work.
What GAO Found
Puerto Rico: Between fiscal years 2005 and 2014, the latest figures available, Puerto Rico's total public debt outstanding (public debt) grew from $39.2 billion to $67.8 billion, reaching 66 percent of Gross Domestic Product (GDP). Despite some revenue growth, Puerto Rico's net position was negative and declining during the period, reflecting its deteriorating financial position. Experts pointed to several factors as contributing to Puerto Rico's high debt levels, and in September 2016 Puerto Rico missed up to $1.5 billion in debt payments. The outcome of the ongoing debt restructuring process will determine future debt repayment.
American Samoa: American Samoa's public debt more than doubled in fiscal year 2015 to $69.5 million, but remained small relative to its economy, with a debt to GDP ratio of 10.9 percent. American Samoa's debt was primarily used to fund infrastructure projects. Between fiscal years 2005 and 2015, revenues grew and the government's net position was positive and generally improving. GAO previously reported that American Samoa relies heavily on the tuna processing and canning industry. Disruptions in this industry could affect its ability to repay debt.
Commonwealth of the Northern Mariana Islands (CNMI): CNMI's public debt declined from $251.7 million to $144.7 million between fiscal years 2005 and 2015, decreasing CNMI's debt to GDP ratio to 16 percent. Most of CNMI's debt was used to refinance prior debt and fund infrastructure projects. Despite revenue growth since fiscal year 2011, CNMI's net position was negative and generally declining during the period. GAO previously reported that labor shortages may affect GDP. This could impede CNMI's ability to repay debt in the future.
Guam: Between fiscal years 2005 and 2015, Guam's public debt more than doubled from almost $1 billion to $2.5 billion, with a debt to GDP ratio of 44 percent for fiscal year 2015. Most of Guam's debt was used to comply with federal requirements and court orders. Revenue grew during this period, and net position fluctuated significantly, with a negative balance in fiscal year 2015. Despite recent and expected economic growth, GAO found that large unfunded pension and other post-employment benefit (OPEB) liabilities may present a risk.
U.S. Virgin Islands (USVI): Between fiscal years 2005 and 2015, USVI's public debt nearly doubled, reaching $2.6 billion and a debt to GDP ratio of 72 percent. Since 2010, most of USVI's debt was used to fund general government operations. Revenue remained stagnant and net position was negative and declining during the period, reflecting a deteriorating financial position. While USVI holds a year's worth of debt service payments in reserve, GAO found that economic uncertainty and looming government pension fund insolvency by 2023 may hamper repayment. In early 2017, USVI was unable to access capital markets to issue new debt at favorable rates. Although the government adopted a financial plan intended to reduce expenditures and increase revenue, the plan does not address USVI's significant unfunded pension and OPEB liabilities and it is unclear whether the plan will produce the intended level of savings.
What GAO Recommends
GAO is not making recommendations in this report. |
Background
This section provides an overview of patenting in the United States, patent infringement litigation, and administrative proceedings for patent validity challenges. It also includes a brief history of court decisions that clarified eligibility requirements for the Patent Trial and Appeal Board’s CBM program. See “Related GAO Products” at the end of this report for a list of our prior work related to patents and intellectual property.
Patenting in the United States
In the United States, patents may be granted by USPTO for any new and useful process or machine, or any new and useful improvement on an existing process or machine, but there are some exceptions. Laws of nature, physical phenomena, and abstract ideas are not patentable. The U.S. Supreme Court and the U.S. Court of Appeals for the Federal Circuit have refined the boundaries of these exceptions over time, allowing some subject matter that was previously not patentable to become so. For example, U.S. Supreme Court decisions in the 1970s found mathematical formulas used by computers (i.e., software) were like laws of nature and therefore not patentable subject matter. However, a 1981 Supreme Court decision overturned USPTO’s denial of a patent application for a mathematical formula and a programmed digital computer because, as a process, the claimed invention was patentable subject matter. Similarly, business methods were widely considered unpatentable subject matter until 1998, when the U.S. Court of Appeals for the Federal Circuit ruled in the State Street Bank decision that they were patentable. In 2014, however, the Supreme Court effectively limited the patentability of some business methods by ruling in Alice Corp. Pty. Ltd. v. CLS Bank Int’l that using a generic computer to implement an abstract idea is not patentable.
Traditionally, economic theory has held that intellectual property rights, such as those conferred by patents, can help encourage innovation and stimulate economic growth. Exclusive rights provided by patents, for example, can help patent owners recoup investments in technology and earn greater profits than if their patented technologies could be freely imitated. Moreover, to the extent that intellectual property rights encourage specialization, innovators may be more productive than they would be in the absence of patent laws. Because of complex trade-offs, however, some economists hold a more nuanced view of the potential for patents to promote innovation and increase productivity. By increasing the cost of using technologies, for example, patents may discourage not only diffusion of these technologies but also cumulative innovation that uses such technologies to develop new technologies. In addition, attempts to quantify the effect of patents on economic growth often fail to account for the creation of useful knowledge outside the patent system. Furthermore, to the extent that innovation occurs in the absence of patent laws, the need for patents can vary across industries or over time. Some researchers have suggested that some patents are currently limiting innovation, especially in areas such as software and computer technologies that overlap with business methods.
USPTO receives hundreds of thousands of applications each year from inventors seeking patents to protect their work. According to USPTO data, applications for patents have increased in recent years, and the share of patents granted for business methods has significantly increased over the past 2 decades (see fig. 1). In calendar year 2014, patents related to business methods accounted for more than 28 percent of all issued patents.
A patent’s claims define the legal boundaries of the invention, often in complex technical language. A patent application can be written to define an invention broadly or narrowly. Patent applicants often prefer broader claims, which make it harder for competitors to avoid infringement by making only small changes to the patented invention, as we reported in June 2016.
Before issuing a patent, USPTO patent examiners determine whether claimed inventions in the application meet requirements for patentable subject matter, novelty, non-obviousness, and clarity—the four patentability grounds that are established by statute. Patent examiners assess whether the claimed invention consists of patentable subject matter and also ensure that the claims are described clearly enough to enable a person skilled in the art to make the claimed invention. In addition, examiners determine whether a patent application’s claimed invention is novel and non-obvious by comparing the application’s content to “prior art”—existing patents and patent applications both in the United States and abroad, as well as non-patent literature such as scientific articles.
In February 2015, USPTO launched an Enhanced Patent Quality Initiative, which included several proposals designed to improve the quality of patent examination and issued patents. However, we found in June 2016 that USPTO faced challenges in issuing patents in accordance with standards. For example, we found that a majority of examiners (67 percent) said they have somewhat or much less time than needed to complete an examination, given a typical workload, and many examiners felt a time pressure that reduced their ability to conduct thorough searches. Examiners also said that it was difficult to issue patents that met the statutory requirements because of the limited availability of and access to non-patent prior art such as offers for sale and public use. Examiners said another limitation is that they are sometimes responsible for examinations in subject areas in which they do not have adequate technical knowledge. We made seven recommendations to USPTO aimed at improving patent quality, clarity, and prior art search. USPTO agreed with the recommendations and is working to address them.
Patent Infringement Litigation
Patent owners can bring infringement lawsuits against anyone who uses, makes, sells, offers to sell, or imports the patented invention without authorization. Only a small percentage of patents in force are ever litigated, but some scholars believe that low-quality patents can make such litigation not only more complex and expensive but also more frequent. During an infringement case, the accused infringer may seek to have the lawsuit dismissed by showing the patent is invalid. When the courts rule on validity, they generally invalidate almost half of the patents, according to academic research.
Exactly what a patent covers and whether another product infringes the patent’s claims are rarely easy questions to resolve in litigation, and defending a patent infringement lawsuit in district court can take years and cost millions of dollars, not including damages if infringement is found. Whatever the outcome, costly litigation can leave defendants with fewer resources for innovation. Consequently, patent infringement defendants often find it in their best interest to settle lawsuits quickly, as we reported in August 2013.
Administrative Proceedings for Challenging Patent Validity before the Patent Trial and Appeal Board
The AIA in 2011 created the Patent Trial and Appeal Board and provided that any references in federal law to USPTO’s then-existing Board of Patent Appeals and Interferences be deemed to refer to the new board. By statute, the Patent Trial and Appeal Board consists of the USPTO Director, Deputy Director, Commissioner for Patents, Commissioner for Trademarks, and administrative patent judges. In practice, to issue decisions in the matters that come before it, the board involves more than 300 people serving in many positions, according to the board. The board is led by the Chief Judge and Deputy Chief Judge, who, along with other members of senior management, meet regularly to discuss operational and procedural matters of importance to the board’s overall mission, according to the board.
The AIA created three new administrative proceedings for the board to administer, each with different statutory rules (see table 1). Two proceedings were made permanent:
Post-grant review provides a 9-month opportunity following the issuance of a patent during which a third party can file a petition to challenge a patent’s validity on any of the four statutory grounds: subject matter eligibility, novelty, non-obviousness, and clarity.
Inter partes review is available to third parties for the life of the patent, but on a limited set of grounds (non-novelty or obviousness), and on a limited set of acceptable prior art (previously issued patents and printed publications).
The third proceeding—the CBM program—was included in the act as a temporary proceeding that can be used to challenge a patent at any point in its life, as allowable under the inter partes review program. However, under the CBM program, only a party (e.g., a company or an individual) that is sued or charged in an infringement suit can petition. Such petitioners can challenge a patent’s validity on any of the four statutory grounds without the limits on prior art in inter partes review. Additionally, rules barring parties from raising certain arguments again in later legal actions (called estoppel provisions) are less restrictive under the CBM program than for the other two board proceedings. However, the body of patents that qualify for review under the CBM program is limited to those that claim a non-technological method involved in the practice, administration, or management of a financial service or product. A patent is “technological” if it claims a technological feature that solves a technical problem using a technical solution. Many software and business method patents issued in the wake of State Street Bank describe implementing an abstract idea on a generic computer. Since the Supreme Court’s 2014 decision in Alice, which closely aligns with the CBM program’s “non-technological” designation, these types of ideas are no longer thought to be patentable.
Inter partes review is the most-used of the proceedings created by the AIA and the one stakeholders we interviewed were most familiar with when they discussed the Patent Trial and Appeal Board. The other proceedings have been used less frequently, likely because of the short window for filing a challenge, in the case of post-grant review, and because of additional restrictions on what patents may be challenged, in the case of CBM.
Under statute and regulation, the full review process at the Patent Trial and Appeal Board for any of the three proceedings generally takes up to 18 months and comprises two phases: (1) the petition phase, which lasts up to 6 months, and (2) the trial phase, which generally lasts up to 12 months. During the petition phase, the petitioner—typically a party accused of patent infringement, in the CBM program—files a petition challenging the validity of one or more of the patent’s claims and pays fees for each challenged claim. In some cases, a petitioner will file more than one petition challenging a patent. This might occur when a petitioner is constrained by the maximum number of pages allowed in a petition. Multiple petitions can also be filed against a single patent if the patent owner has sued more than one party for infringement, and each files a separate petition challenging the patent’s validity. Petitioners might also file a petition under more than one proceeding, either concurrently or sequentially.
When a petition is received and the fees paid, administrative personnel of the board, under direction of the Chief Judge, assign three technically trained administrative patent judges to the case. According to agency documents, these three-judge panels are put together taking into account many factors, including technical experience, experience at the board, potential conflicts of interest, and availability. The patent owner may then, within 3 months of the petition date, file a preliminary response to the petitioner’s arguments. Within 3 months of submission of any preliminary response, or the last date on which such response may be filed, the panel of judges determines whether to allow the petition to move to the trial phase for review. This determination is called the “institution decision.” According to statute and regulations, in the case of the CBM program and post-grant review, a panel of judges may not institute a review unless the information presented in the petition, if not rebutted, would demonstrate that it is “more likely than not” that at least one of the claims challenged in the petition is unpatentable, or in the case of inter partes review, if the petitioner has a “reasonable likelihood” of prevailing.
The first step in the trial phase is discovery (a step that exists in all federal civil litigation), during which the parties produce documents or testimony relevant to the challenged claims. Each party has 3 months to file discovery documents for the panel of judges’ review. If a petitioner and patent owner do not settle a case or it does not otherwise terminate, the case will proceed to the oral hearing. The hearing is an opportunity for the parties to make their strongest arguments and to answer judges’ questions, according to a board official, and after the hearing, the panel of judges will deliberate over the course of a few weeks or months and then issue its final written decision. The final written decision must be issued within 1 year of the institution decision, with limited exceptions. A case may also terminate before reaching a final written decision; the patent owner may, for example, cancel one or more claims in the patent in an attempt to avoid institution of the trial.
Figure 2 shows the progression of a case from the petitioner’s filing to the panel of judges issuing a final written decision.
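As a rough illustration of the statutory time frames described above, a case that runs its full course with no settlement, early termination, or good-cause extension would proceed approximately as follows (the month markers are hypothetical):

Month 0: The petitioner files the petition and pays the required fees.

By month 3: The patent owner may file a preliminary response.

By month 6: The three-judge panel issues its institution decision.

By month 18: The panel issues its final written decision, within 1 year of the institution decision.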
Under its Standard Operating Procedures, every Patent Trial and Appeal Board decision is, by default, a routine opinion until it is designated as “representative,” “informative,” or “precedential.”
Representative decisions typically provide a representative sample of outcomes on a particular matter; they are not binding authority.
Informative decisions provide norms on recurring issues, guidance on issues of first impression, and guidance on the board’s rules and practices; they are not binding authority.
Precedential decisions are binding authority and emphasize decisions that resolve conflicts or address novel questions.
Nominations for these designations can be made by a Patent Trial and Appeal Board judge, the Chief Judge, the Director of USPTO, the Deputy Director of USPTO, the Commissioner for Patents, or the Commissioner for Trademarks. Also, a member of the public may nominate a decision for a precedential designation within 60 days of its issuance. The Chief Judge can designate a nominated decision as representative or informative, but under Standard Operating Procedures, a precedential designation requires a majority agreement among all voting members of the board, including administrative patent judges and statutory members, as well as concurrence by the Director of the USPTO.
Court Decisions on Eligibility for Review under the CBM Program
Petitioners and patent owners may appeal the final written decisions of the Patent Trial and Appeal Board to the U.S. Court of Appeals for the Federal Circuit, just as unsatisfied plaintiffs or defendants may appeal a federal district court decision, and decisions may ultimately be appealed to the U.S. Supreme Court. The following decisions have significantly influenced the eligibility rules for CBM review, for different reasons:

In Cuozzo Speed Technologies, LLC v. Lee (June 2016), the U.S. Supreme Court affirmed the board’s use of the “broadest reasonable construction” standard—meaning the ordinary meaning that someone skilled in the art would reach—to define the language of the claims during post-grant review as a reasonable exercise of the board’s rulemaking authority. Defining claim language using the broadest reasonable interpretation meant that the number of business method patents that could be determined to be financial in nature was larger than it would otherwise be, so more patents are potentially eligible for review under the CBM program.
In Unwired Planet, LLC v. Google Inc. (November 2016), the U.S. Court of Appeals for the Federal Circuit ruled that the USPTO’s policy of assessing whether a claim’s activities were “incidental” or “complementary” to a financial activity was too broad a standard to apply when determining whether a patent claim was eligible for a CBM review. The court stated that, to be CBM-eligible, a patent must claim a method used in the practice, administration, or management of a financial product or service. Applying this narrower standard effectively reduced the number of patents accepted for review under the CBM program.
In Secure Axcess, LLC v. PNC Bank Nat’l Assoc. (February 2017), the U.S. Court of Appeals for the Federal Circuit clarified that a patent must specifically have a claim that contains an element of financial activity in order to qualify for review under the CBM program. Like the Unwired Planet decision, the narrower standard expressed by the court has led to fewer patents being eligible for review under the CBM program.
More Than 350 Patents Have Been Challenged under the CBM Program, and About One-Third of These Patents Were Ruled Unpatentable
From September 2012 through September 2017, parties accused of patent infringement filed 524 petitions challenging the validity of 359 distinct patents under the CBM program, resulting in rulings against about one-third of these patents. The average monthly number of CBM petitions fluctuated during this period, but use of the program has declined since about 2015. Some stakeholders have expressed concern about multiple petitions being filed against the same patent, but our analysis of petition data showed that the vast majority of patents challenged under the CBM program were challenged once or twice. Overall, through September 2017, the Patent Trial and Appeal Board completed reviews of 329 of the 359 patents challenged under the program, and the board ruled at least some challenged patent claims unpatentable in about one-third of these patents.
Petitioners Have Challenged the Validity of 359 Patents under the CBM Program, but Use of the Program Has Declined Overall
Parties accused of patent infringement filed 524 petitions for patent review under the CBM program from September 2012 through September 2017, with the number of petitions per month fluctuating but tapering off over time (see fig. 3). During this 5-year period, an average of more than 9 petitions per month was filed under the CBM program, but this average rate has declined since 2015 to fewer than 5 per month in the last fiscal year, with no petitions filed in August or September 2017. As a point of comparison, the number of petitions for inter partes review has generally increased over the 5-year period.
Stakeholders we interviewed suggested several possible reasons for the decline in CBM petitions. Specifically, some stakeholders told us that recent Federal Circuit and Supreme Court decisions that have changed what is patentable subject matter and the eligibility criteria for CBM review may have reduced the set of business method patents eligible for CBM review. Some stakeholders also suggested CBM petitioners successfully targeted the lowest-quality business method patents in the early years of the program, and now that those patents have been challenged, there are fewer patents that do not meet patentability requirements. Another possibility, according to stakeholders, is that owners of business method patents are wary of asserting their intellectual property and risking its invalidation, especially in light of the Alice decision, which effectively limited the patentability of some business methods. As a result, according to these stakeholders, fewer such patents end up in litigation and subsequently before the Patent Trial and Appeal Board. Some stakeholders also told us the CBM program has reduced patent infringement lawsuits, including some filed by non-practicing entities. In addition, a few stakeholders told us some patent owners may be waiting until after the CBM program sunsets to assert their patents.
Patents Are Infrequently Challenged More Than Once or Twice
Some stakeholders we interviewed were concerned about multiple petitions being filed against the same patents; however, our analysis showed that the vast majority of the 359 distinct patents challenged under the CBM program were challenged only once or twice under that program. Stakeholders have suggested that petitioners are, in some cases, using the CBM program and the inter partes review program as tools to increase costs borne by patent owners, and in the case of the CBM program, as a tool to delay district court proceedings. Some stakeholders have stated that the use of the AIA trials in this manner amounts to harassment, and at least one stakeholder has written letters to USPTO requesting the Director to intervene.
However, our analysis of petition data showed that among the 359 patents challenged under the CBM program, 73.3 percent were challenged once and 18.4 percent were challenged twice during the 5-year period we reviewed. Another 30 patents, or 8.4 percent, were challenged more than twice under the CBM program during this period (see fig. 4). Of these 30 patents, in many cases multiple parties challenged a single patent; in others, a single petitioner or set of petitioners challenged a patent multiple times.
In addition, of the 359 patents challenged under the CBM program during the 5-year period we reviewed, 92 were also challenged at least once in inter partes review. In some instances, petitioners filed concurrent petitions for CBM and inter partes review if, for example, they were unsure if the claims were eligible for a CBM review. In other instances, petitioners first sought CBM review and, when that was unsuccessful, filed an inter partes review. In these cases, petitioners may initially be seeking CBM review because of the additional grounds available for challenging the patents, and then turning to the inter partes review program if the CBM challenge proves unsuccessful. In other instances, petitioners first had success under the inter partes review program and then filed another petition under the CBM or inter partes review programs, according to our analysis of petition data.
When including patent challenges under both the CBM and inter partes review programs, 52.1 percent of the 359 patents challenged under the CBM program were challenged once and 29.3 percent were challenged twice (see fig. 4). More than half of the patents challenged under both programs (50 of 92 patents) did not have any challenged patent claims instituted for trial under the CBM program, meaning that those patents, in many cases, did not meet the CBM program’s eligibility requirements and may have been more appropriately challenged with an inter partes review.
There are several other reasons why petitioners may file more than one petition against a single patent, according to stakeholders we interviewed. First, the board limits the number of pages that a petitioner may use to submit prior art and arguments for invalidity. Some petitioners might file more than one petition so they have room to present all of their art and arguments at once. Data we analyzed on CBM petitions show that many follow-on petitions are filed on or near the same day as the first petition, supporting this argument. Second, in some cases the patent owner may not identify all the asserted patent claims in the district court right away or may change the set of asserted claims later in the proceedings, necessitating an additional CBM or inter partes review petition to cover the new claims. Third, in order to get the expensive district court proceedings stayed—that is, halted pending the board’s decision on the patent’s validity—a petitioner may file a CBM petition on patentability or clarity grounds soon after the district court trial commences, because these arguments require limited time to formulate. Later, once the petitioner takes the time to investigate the prior art, the petitioner might file a second petition challenging the patent for non-novelty or obviousness. In our analysis of petition data, we found some examples that were consistent with this approach. Fourth, if a patent owner charges multiple entities with patent infringement, each of the alleged infringers has an individual right to file a petition challenging the patent’s validity. The defendants in the infringement suits who become petitioners at the board may collaborate with one another and join their cases, but they may also choose to file petitions individually. In our analysis of petition data, we found examples of both. Petitioners might choose to join their cases in order to share the cost of counsel, while others may choose not to join their cases, perhaps because they use substantially different art and arguments in their petitions.
Our analysis of the petition data found some examples of multiple petitions against a single patent that may raise questions about the legitimacy of the follow-on petitions. In some instances, after a first petition was denied institution, the same petitioner filed a follow-on petition challenging the patent’s validity on the same statutory grounds as in the first petition. This type of multiple petitioning may occur when, for instance, a procedural termination resulted from a technical error in the first petition. Board officials said it may also occur because a petitioner is using the first denial of institution to alter the arguments and guide the second petition, a strategy that the board has labeled “road-mapping.” In other instances, a single petitioner filed a second, follow-on petition challenging the patent on different statutory grounds after the first petition was denied institution. These follow-on petitions may be legitimate attempts to correct simple errors in the first petitions, or they may reflect practices that might raise questions about whether the program is being used as intended.
Patent Trial and Appeal Board officials are aware of concerns over multiple petitions and recently concluded a study about the prevalence of such practices in relation to all three types of proceedings created by the AIA. The board found that almost two-thirds (63.4 percent) of follow-on petitions were filed on or near the same day as the first petition. Nearly three in four (72.4 percent) follow-on petitions were filed before the institution decision on the first petition. These findings suggest that most petitioners are not waiting to use the board’s decision of non-institution as a guide for developing a second petition. Moreover, the board officials we interviewed told us they are empowered to deny a petition if they determine the petition presents the same or substantially the same prior art or arguments previously presented in another petition. Board officials told us they had denied several recent petitions on this basis. In addition, in a recent precedential opinion, the board clarified the characteristics it looks for to determine whether it should deny an inter partes review when a petitioner submits a follow-on petition. These characteristics include whether the petitioner previously filed a petition against the same patent claims; whether the petitioner provides adequate explanation for the time elapsed between filing two or more petitions against the same patent claims; and whether the petitioner knew, or should have known, about the prior art presented in the second petition at the time of the first petition.
Claims Have Been Ruled Unpatentable in More Than One-Third of Patents Challenged under the CBM Program
The Patent Trial and Appeal Board has ruled unpatentable some or all of the patent claims instituted for trial in about one-third of challenged patents and about one-third of petitions under the CBM program. Data on petition outcomes, however, are open to different interpretations depending on how they are presented. For example, board judges ruled some or all of the patent claims considered at trial unpatentable in 96.7 percent of petitions (175 of 181) under the CBM program for which they issued a final written decision from September 2012 through September 2017. On the basis of this statistic, the board could seem to invalidate the majority of the patents it reviews, as noted by some stakeholders. However, this outcome is predictable given the criteria for institution of a CBM trial—a judge panel will institute a petition to the trial phase if it is “more likely than not” that at least one of the claims challenged in a petition is unpatentable—which tips outcomes for instituted petitions toward rulings of unpatentability. In addition, board judges did not issue final written decisions for all petitions that enter the trial phase because the parties often reach a settlement before the final written decision. When taking into account all of the CBM petitions that had an outcome as of September 30, 2017, board judges ruled some or all of the claims considered at trial unpatentable in 35.6 percent of the cases (175 of 492).
The results are similar when considered by patent rather than by petition. Specifically, for patents challenged between September 2012 and September 2017 and for which a final written decision was issued in at least one petition, 95.2 percent of patents (120 of 126) had some or all of the patent claims that were instituted for trial ruled unpatentable. However, because not all challenged patent claims are instituted for trial and because final written decisions are not issued for all petitions that enter the trial phase, it is also accurate to say the board judges ruled some or all of the patent claims unpatentable for 36.5 percent of challenged patents (120 of the 329) that had an outcome as of September 30, 2017 (see fig. 5).
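The sensitivity of these statistics to the choice of denominator can be shown directly from the figures above (illustrative recomputations of the percentages already cited):

\[
\frac{175}{181} \approx 96.7\% \quad \text{versus} \quad \frac{175}{492} \approx 35.6\% \quad \text{(petition basis)}
\]

\[
\frac{120}{126} \approx 95.2\% \quad \text{versus} \quad \frac{120}{329} \approx 36.5\% \quad \text{(patent basis)}
\]

In each case the numerator, rulings of unpatentability, is unchanged; only the population of cases counted in the denominator differs.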
Changes in petition outcomes over time also challenge the idea that the board invalidates most patents it reviews. In particular, the percentage of CBM petitions instituted for trial has decreased over time (see fig. 6). In 2012, about 80.0 percent of CBM petitions had some or all challenged claims instituted. In comparison, in 2016 about 53.5 percent of CBM petitions had some or all claims instituted. Preliminary data for 2017 suggest that this trend might continue: through September 2017, about 38.5 percent of CBM petitions had some or all claims instituted. Similar to the decline in the number of petitions filed, this trend might have a few explanations, according to stakeholders. Specifically, board panels might be less likely to institute a petition for trial based on conclusions of the U.S. Court of Appeals for the Federal Circuit in Unwired Planet and Secure Axcess. Another possibility is that the patents in earlier cases represented the easiest targets for validity challenges, and thus the more recent challenges are based on shakier legal grounds and less likely to meet the CBM program’s institution threshold.
In addition to declining institution rates, there has been an increase in the percentage of CBM petitions that settle before reaching an outcome. Specifically, the percentage of cases where the parties settled their dispute either before or after the institution decision increased from about 6.7 percent in 2012 to about 28.9 percent in 2016. When a case before the board is settled, it generally concludes any concurrent district court infringement case. The patent owner’s intellectual property remains in place, and the patent owner is free to assert the patent against other alleged infringers later.
The Board Met Timeliness Requirements and Has Taken Steps to Analyze Decisions and Improve Proceedings but Does Not Have Guidance to Ensure Decision Consistency
The Patent Trial and Appeal Board has completed all trials under AIA-authorized proceedings within statutorily directed time frames, according to board data, and the board has taken steps to review issues that could affect the consistency of its trial proceedings and decisions and to engage with stakeholders to improve its proceedings. To ensure timeliness of trial proceedings, the board provided a checklist of information and time frames to petitioners and patent owners, among other things. According to board documents and interviews with officials, the board has also taken steps to review and assess its trial proceedings and decisions, but it does not have guidance for reviewing trial decisions, or the processes that lead to the decisions, for consistency. The board has also taken several steps to engage with stakeholders regarding various aspects of trial proceedings.
Patent Trial and Appeal Board Data Indicate Trials Have Been Completed within Statutorily Directed Time Frames
According to data on Patent Trial and Appeal Board proceedings, as of September 30, 2017, all trials under AIA-authorized proceedings, including the CBM program, have been completed within statutorily directed time frames. The board maintains a database of trial proceedings that includes the date of each petition, decision to institute a trial, and final written decision. Board officials we interviewed told us the timeliness of decisions to institute a trial and of final written decisions has not been a concern in the 5 years that the board has operated. According to board officials, as of November 2017, two AIA trials—one under the inter partes review program and one under the CBM program—have been extended for good cause past the typical 1-year time limit between the institution decision and the final written decision, as allowed by statute.
Board officials told us they have taken several steps to ensure that trials are completed within required time frames. According to board documentation, between 2012 and 2017, for example, the board hired more than 150 additional administrative patent judges, in part to preside over AIA trials. In addition, the board has taken several proactive administrative steps to help ensure that stakeholders are aware of requirements for information filing and dates. For example, when a petition is filed, the board’s administrative staff creates a checklist of information required and due dates, and communicates these dates and requirements to petitioners and patent owners throughout the trial.
Some stakeholders have expressed concern that AIA trial time frames are too short and deprive patent owners and petitioners of due process rights. One patent attorney that we spoke with, for example, noted that the short time frames limit discovery. As directed by the AIA, a final determination for a review generally must be issued not later than 1 year after the date a review has been instituted, and the director may extend that period by up to 6 months for good cause. Board officials we interviewed stated that they do not believe parties are having trouble completing discovery activities in the time allotted in view of the limited discovery allowed at the board. Board officials further stated that they have not found compelling reasons to extend trial proceedings on the basis of the need for additional discovery. As reflected in USPTO’s strategic plan, timeliness of the board’s trial process is a key program goal, and board officials said trials would be extended only in unusual circumstances. In addition, board officials stated that the board adheres to the 12-month timeline for final written decisions because this timeline gives the district courts a definitive and predictable endpoint for the trials.
The Board Has Taken Several Steps to Review Issues That Affect Trial Proceedings, but It Does Not Have Guidance to Ensure the Consistency of Its Decisions
The Patent Trial and Appeal Board has decision review processes that help ensure trial decisions are revisited as appropriate, but the board cannot ensure the consistency of these decisions because it does not have guidance for reviewing them or the processes that lead to them. For trials still in progress, board officials told us that there are several ways that management gets involved in reviews. According to officials, a review of an ongoing trial is triggered if and when a paneled judge raises any issue deserving of management attention. Such issues are brought to the attention of the Chief Judge or other members of the board’s management team and are acted upon at their discretion. According to board officials, the usual response is a management meeting with the three-judge panel, with the goal of ensuring the judges are aware of any precedent or ongoing trials dealing with similar issues. The officials said these review meetings are also meant to ensure that board management is aware of any decisions that may be relevant to the stakeholder community or the public. According to board officials, issues that may prompt action include those that are not routine in nature, that involve novel questions of law, or that may result in decisions that could contradict previous board decisions. Board officials called these review meetings the first step for keeping track of key issues. Officials told us these reviews raise a fair number of issues, but the process relies on self-reporting by the judges, and the effectiveness of these reviews is not measured.
Board officials also told us that a separate internal review process has evolved over time, whereby a small group of board judges, in consultation with board management, seeks to ensure decision quality and consistency by reading a large number of draft AIA trial decisions and giving feedback or suggestions to authoring judges prior to issuance. The board is currently drafting a formal charter that will outline the group’s function, reviewer selection, and membership term. According to board officials, these reviews are meant to help ensure consistency with applicable board rules, other board decisions, and Federal Circuit and Supreme Court case law. In addition, such reviews may result in coaching and training to increase an individual judge’s quality of performance.
Regarding completed trials, board officials told us they review any board AIA trial decisions that are appealed to the U.S. Court of Appeals for the Federal Circuit and that the appeals court reverses or remands. Specifically, the board monitors Federal Circuit decisions and board management then reviews any reversals or remands for opportunities to improve processes and stay abreast of emerging issues. According to board officials, for any reversal or remand, board management and members of the three-judge panel that decided the case meet to discuss what steps could have been taken to avoid the Federal Circuit reversal or remand, and what else can be learned from the Federal Circuit decision. In some instances, according to officials, the board will host a session where all board judges are invited to review and discuss the trial court decision and the decision of the Federal Circuit. In addition, board officials told us they track data on Federal Circuit affirmances, remands, and reversals. The board has recently updated its Standard Operating Procedure to provide guidance on how it handles cases remanded by the Federal Circuit. This procedure creates internal norms to promote timeliness and consistency of the board’s response to remands. The procedure includes a goal for the board to issue decisions on remands within 6 months of receipt and calls on the Chief Judge and the Deputy Chief Judge to discuss each remanded case with the presiding three-judge panel before the panel expends substantial effort on the case. The Chief Judge may also elect to expand the panel assigned to the remanded case, when deemed prudent.
Furthermore, officials told us that all board decisions—including final written decisions, decisions to institute a trial, and any substantive orders—are reviewed by board judges on the date of issuance. Specifically, a rotating group of judges, on a voluntary basis, reads and analyzes each day’s decisions and, according to board officials, sends a summary list of the number of decisions made that day along with a brief decision summary for any cases where key issues of interest were raised. Board officials said that most decisions are straightforward and generally not summarized in detail. For decisions highlighted in the summary report, according to officials, a lead judge, in most cases, will then review the decision more closely. Example summary lists provided to us by the board show brief summaries of a trial involving interpretations of prior art admissibility and a trial dealing with an interpretation of a challenge based on clarity.
Finally, board officials told us that the board has begun to increase the number of trial decisions considered for precedential and informative designations as part of its efforts to ensure the consistency of trial decisions. Board officials also told us that increasing the number of these designations had not been a priority while the AIA trial procedures and processes were being operationalized and as the board was hiring more than 150 administrative patent judges over the past 5 years. However, officials said that they are now taking steps to simplify the vetting and voting process, and the board expects more precedential and informative designations going forward.
Taken together, the board’s review processes help ensure that board trial decisions are reviewed in some manner. However, because the board does not have documented procedures for how to review decisions for consistency, the board cannot fully ensure the consistency of the decisions or the processes that lead to them. USPTO’s 2014-2018 strategic plan includes the goal to “optimize patent quality and timeliness,” which includes an objective to “maintain ability to provide high-quality decisions.” As part of this objective, the plan states that it is “critical for the [board] to ensure consistency in its decisions through review of decisions in [AIA trial] proceedings.”
Under federal standards for internal control, management should design control activities to achieve objectives and respond to risks. Such control activities include clearly documenting internal control in a manner that allows the documentation to be readily available for examination. The documentation may appear in management directives, administrative policies, or operating manuals. However, the board has not yet clearly documented how judges are to review trial decisions, or the processes that lead to the decisions, to ensure consistency. Without developing guidance, such as documented procedures, outlining the steps USPTO will take to review the Patent Trial and Appeal Board decisions and the processes that lead to decisions, USPTO cannot ensure that it is fully meeting the objective of ensuring consistency of its decisions.
The Patent Trial and Appeal Board Has Taken Several Steps to Engage Stakeholders and Address Stakeholder Concerns
The Patent Trial and Appeal Board has taken several steps to engage stakeholders regarding trial proceedings and decisions and address related concerns. USPTO’s strategic plan states that the board should expand outreach to stakeholders by providing opportunities for interaction and updates on board operations and other important issues. The board has done so through several types of public outreach efforts, including participating in roundtables, webinars, and judicial conferences, among other activities. The board has made several changes to policies and procedures based on stakeholder feedback gathered through these mechanisms.
For example, after the Patent Trial and Appeal Board had been operational for about 18 months, it conducted a series of eight roundtables in April and May of 2014 at locations around the country to publicly share information concerning trial proceedings, to obtain public feedback on these proceedings, and to launch the process of revisiting its trial rules and trial practice guide. At these roundtables, the board provided the public with statistics summarizing the administrative trial proceedings, as well as lessons learned for filing effective petitions, engaging in successful discovery and amendment practice, and effectively presenting a case at oral hearing, among other things. The board also asked for and received feedback from the public on the AIA administrative trial proceeding rules and trial practice guide, as well as on experiences in general with the AIA administrative trial proceedings. Subsequent to the 2014 roundtables, the USPTO sought public input on all aspects of AIA trial proceedings through a June 27, 2014 Federal Register notice, which included 17 specific questions regarding certain trial rules, such as claim construction, the claim amendment process, and good cause trial extensions. USPTO took a two-step approach in responding to the 37 comments received in response to this Federal Register notice. First, USPTO implemented several immediate changes to board proceedings, including changes to page limits for some documents. According to the annual report of USPTO’s Patent Public Advisory Committee, these changes were favorably received by the stakeholder community. Second, in April 2016, the board implemented more substantive changes, including allowing testimonial evidence to be submitted with a patent owner’s preliminary response to a petition and changing from a page limit to a word count for major briefings, among other things.
In addition to roundtables, the board has engaged with stakeholders through several other mechanisms, including webinars and judicial conferences. For example, in February 2015, the board announced its inaugural “Boardside Chat” lunchtime webinar series, which has been held bi-monthly ever since. These webinars are designed to update the public on current board activities and statistics, and to allow a means for the board to regularly receive public feedback about AIA trial proceedings and any issues of concern. Topics discussed at these events include key trial decisions, proposed changes to trial rules, and best practices for prior art presentations in AIA trials, among other things. Since 2015, the board has hosted an annual judicial conference, where the board engages with stakeholders and educates them about AIA trial proceedings, answers questions, and receives feedback. Board judges present trial statistics, information about the internal functioning of the board, practice tips, and engage in discussions on topics of current interest to stakeholders. Topics have included motions to amend and the prevalence of multiple petitions. More recently, the board has conducted other outreach sessions, including: an August 2017 roundtable meeting with stakeholders from the American Intellectual Property Law Association to address a broad range of topics affecting practitioners before the board, including how patent claims are interpreted, claim amendments, and conditions under which multiple petitions from a single petitioner would be denied; a webinar on August 31, 2017, addressing common evidentiary issues that occur during AIA trial proceedings; and a webinar on September 12, 2017, with the Chief Judge to commemorate the 5th anniversary of the board, where discussion topics included the origins and mission of the board, recent board developments, and operational procedures.
According to USPTO’s Patent Public Advisory Committee, this type of outreach provides a valuable two-way conduit for constructive flow of information to and from the board. In addition to these various outreach efforts, stakeholders are encouraged to provide feedback to the board, on any topic related to trial proceedings, by e-mail or telephone.
Board officials we interviewed told us that they review information obtained from stakeholders during roundtable meetings and other outreach events and implement changes to policies and procedures where applicable. The officials told us that stakeholder feedback has been used to inform updates to the board's trial rules guidance, to modify rules of practice, and to update the board's Standard Operating Procedures. In addition, board officials told us that in response to stakeholder concerns, they conducted two extensive studies covering motions to amend and the filing of multiple petitions against a single patent. Furthermore, board officials told us that they have held training sessions for judges regarding specific areas of interest to stakeholders. Lastly, board officials told us that the board's website, including the frequently-asked-questions pages, is updated with information relevant to stakeholders, including stakeholder concerns. For example, written stakeholder comments submitted in response to a proposed rulemaking are posted on the USPTO website for public viewing.
Stakeholders Agree the CBM Program Has Reduced Litigation, and Many See Value in Maintaining Aspects of the Program
Stakeholders we interviewed generally agreed that the CBM program has reduced litigation, and many said there is value in maintaining some aspects of the program. Stakeholders generally agreed that the CBM program has contributed to a decrease in litigation involving business methods patents and that the program has had positive effects on innovation and investment. Most stakeholders also said there is value in maintaining, among other things, the ability to challenge patents on all four statutory grounds before the Patent Trial and Appeal Board.
Stakeholders Generally Agreed the CBM Program Has Contributed to a Decrease in Litigation Involving Business Method Patents
Stakeholders we interviewed generally agreed the CBM program has reduced litigation involving business method patents because the CBM program allows these patents to be more easily challenged than in district courts. Stakeholders told us that fewer business method patent lawsuits are filed and that existing lawsuits are often dropped after patents have been through the CBM program. However, stakeholders also noted that the Supreme Court’s 2014 decision in Alice may have also reduced the number of business method patent lawsuits. Patents that would be found invalid under Alice are often very similar to the patents that are eligible for challenge under the CBM program, and in some cases, according to stakeholders, it is cheaper and more efficient to challenge a patent’s validity in district court using Alice than it is to use the CBM program.
Stakeholders described the following additional effects of the CBM program:
Business method patent assertion is riskier. The CBM program makes it riskier to assert business method patents because, compared with district court, the program offers a cheaper and more efficient way for alleged infringers to challenge a patent's validity. District court litigation can take several years and cost several million dollars, while CBM trials are limited to 18 months and generally cost much less. In addition, technically trained board judges have greater expertise in patent law than an average district court judge and jury, and are often better able to understand complex patentability issues. Because of this, some alleged infringers are more willing to present complex arguments—such as questions about whether the patent meets standards for clarity—to the board than to a jury. As a result, the CBM program has deterred owners of financial business method patents from asserting their patents for fear those patents will be ruled unpatentable. According to stakeholders, the existence of CBM challenges has put downward pressure on settlement amounts. Patent owners may want to avoid the risk of their patent being invalidated and will demand lower settlement amounts rather than face CBM and district court proceedings. Petitioners, too, told us they use this knowledge to negotiate lower settlement fees. In addition, because challenges under the CBM program may suspend the parallel district court proceedings, it is more difficult for patent owners to expect quick settlements from alleged infringers looking to avoid the rapidly increasing court costs associated with lengthy trials. The parties can still reach settlements after the alleged infringer files a challenge under the CBM program, but the patent owners have less leverage in negotiations. On the other hand, for patent owners willing to go through a CBM challenge, their patents will emerge stronger, having survived the additional review, according to stakeholders we interviewed.
Business method patent owners have adjusted assertion strategies to avoid the CBM program. Patent owners are focused on asserting business method patents that are higher quality and less vulnerable to challenge under the CBM program or based on the Supreme Court’s decision in Alice; in other words, those patents that describe a technological invention that is not abstract and implemented on a generic computer. In addition, a few stakeholders told us that they have abandoned some claims in certain patents to avoid the possibility of their patents being challenged under the CBM program. Stakeholders also told us that patent owners seem to be asserting more patents, and more claims, than before the CBM program was implemented, as a strategy either to ratchet up defense costs for accused infringers and secure a settlement or to at least have success with some of the infringement charges. In addition, some stakeholders said that because the board charges fees for each petition challenging a patent, asserting more patents is a strategy to increase expected costs of defending against infringement and, thus, to increase the likelihood of a settlement. However, our analysis of RPX litigation data from 2007 to 2017 did not support these assertions. Patent litigation data did not show an increase in the monthly average number of patents asserted per case among cases involving one or more business method patents.
The CBM program has decreased the value of business method patents. The CBM program has decreased the value of business method patents generally, even beyond those focused on financial services. Several stakeholders told us that the board’s broad initial interpretation of the CBM program’s eligibility requirements contributed to an increased risk to a wider swath of business method and software patents than was intended by Congress. Stakeholders told us that any patent tangentially related to financial business methods has been devalued because it could potentially be challenged under the CBM program. In addition, stakeholders said they believed that the threat of such challenges has decreased the value of all business method patents, including those that might ultimately survive a CBM challenge. Some stakeholders pointed to a decrease in licensing of business method patents and others suggested that patents have lost value on the secondary patent market. Available data that we reviewed, though limited, support the claims that patent values on the secondary market have fallen. A few stakeholders, however, told us that to the extent these patents have lost value, the devaluation is related to problems with patent quality.
Stakeholders Generally Agreed the CBM Program Has Had Positive Effects on Innovation and Investment
Stakeholders generally agreed the effects of the CBM program on innovation and investment have been minimal or mostly positive. More specifically, stakeholders told us that the CBM program is good for overall innovation and investment in financial technologies in that the program eliminates overly broad (non-specific), low-quality patents. Stakeholders told us they believe the existence and assertion of overly broad patents is bad for innovation, in part because defending against alleged infringement is expensive and time-consuming, even under the CBM program. Assertion of overly broad, unclear, or otherwise low-quality patents acts much like a tax on investment, according to stakeholders. Stakeholders also told us that removing such patents from the marketplace promotes innovation because it prevents these patents from blocking new innovation. According to stakeholders, innovation is represented by the quality of the patents issued rather than the quantity. A large number of patents in a technology space, according to stakeholders, can make it difficult to innovate within that crowded space.
A few stakeholders had differing views, stating that the CBM program has affected some companies’ ability to protect a business model with a business method patent, although one stakeholder acknowledged that the Supreme Court’s decision in Alice has also had an effect. These types of comments were generally from stakeholders with company-specific interests, including individual patent owners and companies that have had patents invalidated under the CBM program. Other stakeholders, however, including those in the financial services industry, told us that innovation in their field is robust. For example, these companies are developing mobile-payment and blockchain technologies, and the companies have not seen any negative effects from the CBM program on their ability to innovate, patent, and invest in these financial services technologies.
Stakeholders generally agreed that the CBM program and the other post-grant programs have had a positive effect on patent quality, as patent applicants are increasingly aware of what it takes to ensure a patent will survive a post-grant challenge. Several stakeholders highlighted extra steps they have taken before and during the patent application and examination stages to ensure their patents will stand up to any eventual challenges. For example, one patent owner told us how his company proactively worked to get its patent examined by a foreign patent office, in an effort to understand any quality issues with the patent, before submitting a patent application to USPTO. Another stakeholder told us about an extended back-and-forth with the USPTO examiner. This stakeholder told us that the additional effort taken during the examination process resulted in a patent that is much clearer and that will be more likely to stand up to additional scrutiny.
Most Stakeholders Said There Is Value in Maintaining Aspects of the CBM Program
Most stakeholders told us there was value in maintaining aspects of the CBM program, including the ability to challenge patents on all four statutory grounds at the Patent Trial and Appeal Board, and many told us that it would be useful to expand this capability to a broader set of patents beyond business methods. However, there was no strong consensus among stakeholders for how the AIA trials should be designed in the future.
Stakeholders generally agreed that the ability to challenge a patent’s validity on subject matter eligibility grounds remains important, although there was not broad agreement among stakeholders regarding how far that ability should extend beyond business method patents.
Stakeholders we interviewed pointed to inconsistencies in how federal courts interpret subject matter eligibility requirements and said that challenges on subject matter eligibility grounds should remain an option at the Patent Trial and Appeal Board because the board has greater expertise in these matters than the courts. Some stakeholders said subject matter eligibility challenges were important for a wider scope of patents than just business methods because concerns about subject matter eligibility that apply to business method patents extend to software-related patents in general. In addition, a few stakeholders suggested that subject matter eligibility challenges should be available for patents in all areas of technology. The continued prevalence of challenges in district courts based on the Supreme Court's decision in Alice, for business method patents and for a wider array of patents, highlights the importance of retaining the ability to challenge patent validity at the board on subject matter eligibility grounds.
Similarly, stakeholders told us that patent clarity problems exist beyond business method patents. Stakeholders said that the federal courts and jurors do not necessarily have the expertise to interpret patent clarity requirements and that the technically trained Patent Trial and Appeal Board judges were better suited to make patentability determinations, including on clarity grounds. One stakeholder, for example, told us that petitioners can delve much deeper into the invalidity argument on patent clarity grounds in a CBM trial than they can as defendants in district court, mostly because the board judges have the requisite technical expertise. In addition, many stakeholders told us that challenging patents on clarity grounds was also important for a much broader array of patents than business method patents, and some suggested that these challenges should remain an option for all patents challenged at the board. In June 2016, we reported that more than 40 percent of patent examiners experience pressure to avoid rejecting a patent application because of problems with clarity, and we recommended additional steps USPTO could take to improve patent clarity. This suggests there is a potentially large number of patents, including and beyond business method patents, that could benefit from a second look by the board on clarity grounds; inter partes review does not allow patents to be challenged on these grounds.
Stakeholders discussed several other topics related to the future of the CBM program:
Post-grant review is not an effective substitute for the CBM program for challenging patents on subject matter eligibility and patent clarity grounds. Stakeholders told us that the 9-month window, after a patent is issued, to file challenges using post-grant review is too short to make it an effective substitute for the CBM program. Post-grant review was established as a permanent mechanism at the board for challenging all patents on all statutory grounds. However, only 78 petitions have been filed for post-grant review through September 30, 2017. According to stakeholders, few companies have the resources to continuously monitor patent issuance in real time. In addition, even if companies do discover patents that are relevant to their business, companies, in general, are not willing or able to spend resources challenging patents that may never be used as the basis for an infringement lawsuit. As a result, the public essentially does not have the ability to challenge most patents on subject matter eligibility and clarity grounds, according to stakeholders.
CBM challenges should not be limited to a specific technology.
Although the CBM program was designed to address a problem caused by a narrow set of patents, some stakeholders told us they are troubled by CBM’s focus on patents for financial services and products. Stakeholders said that singling out such services and products is unfair and that the need to determine eligibility for review created uncertainty for patent owners. In addition, some stakeholders told us that the singling out of a particular subset of patents may raise questions about compliance with an international treaty.
Concerns remain about business method and software-related patents. Some stakeholders told us the patents that the CBM program was designed to address have largely been addressed by improved examination at USPTO, reducing the need for the program. In addition, some stakeholders told us that the CBM program, which was designed to be temporary, had largely succeeded in addressing the problems with business method patents. However, other stakeholders told us that patents of questionable validity, including business method and software patents, continue to be issued by the patent office. Given these continuing concerns over software-related patents, several stakeholders suggested that one viable option for the future of the CBM program is to expand its eligibility beyond financial services patents to cover all software-related patents. In addition, in contrast to the inter partes review program, the CBM program allows any form of prior art to be used to challenge a patent on novelty or obviousness grounds. This broader allowance for prior art is important because many software and business method patents were preceded by prior art not found in existing patents or printed publications.
Conclusions
In 2016, we reported on a number of patent quality challenges at USPTO and made several recommendations to help improve the quality and clarity of issued patents. In that report, we estimated that almost 70 percent of patent examiners did not have enough time to complete a thorough examination of patent applications given a typical examiner’s workload. Given these time constraints and other patent quality challenges, the Patent Trial and Appeal Board has provided a means to challenge low-quality patents after they have been issued. Stakeholders generally agreed that the CBM program has reduced lawsuits in the federal courts involving business method patents, and many stakeholders were in favor of maintaining aspects of the program.
The board has a track record of issuing timely decisions that have largely been upheld by the U.S. Court of Appeals for the Federal Circuit. However, the board does not have guidance, such as documented procedures, for reviewing trial decisions and the processes that lead to the decisions. Without developing guidance, such as documented procedures, that outlines the steps USPTO will take to review the Patent Trial and Appeal Board's decisions and the processes that lead to decisions, USPTO cannot fully ensure that it is meeting the objective of ensuring consistency of its decisions.
Recommendation for Executive Action
We are making the following recommendation to USPTO:
The Director of USPTO should develop guidance, such as documented procedures, for judges reviewing the Patent Trial and Appeal Board’s decisions and the processes that lead to the decisions. (Recommendation 1)
Agency Comments
We provided a draft of this report to the Department of Commerce for review and comment. In its comments, reproduced in appendix II, the department agreed with the recommendation and stated that it has begun taking steps to address it, including drafting a formal, written charter that documents procedures for reviewing board decisions. The department further stated that it intends to address the recommendation within one year. In addition, it provided technical comments, which we incorporated as appropriate.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 8 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Commerce, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.
Appendix I: Objectives, Scope, and Methodology
Our objectives were to (1) describe the extent to which the Patent Trial and Appeal Board's Transitional Program for Covered Business Method Patents (CBM program) has been used to challenge patents, and the results of those challenges; (2) examine the extent to which USPTO ensures timeliness of trial decisions, reviews decisions for consistency, and engages with stakeholders to improve its administrative proceedings for the program; and (3) discuss stakeholder views on the effects of the CBM program and whether it should be extended past its scheduled September 2020 sunset date.
To describe the extent to which the CBM program has been used to challenge patents, and the results of those challenges, we obtained data on board proceedings from two companies—RPX Corporation and Unified Patents—that included information on all of the board’s proceedings from September 2012 through September 2017. RPX and Unified Patents collect, compile, and analyze data from the U.S. Patent and Trademark Office’s publicly available data system. Both companies manually review these data to verify variables and to manually code additional information from other publicly available board documents. We conducted data quality testing, interviewed relevant officials, and reviewed relevant documentation for the data. We found these data to be sufficiently reliable for the purposes of our reporting objectives.
For petitions filed at the board, data from RPX and Unified Patents include information on the patent in dispute, including its U.S. patent number, petition-filing dates, and trial institution and final written decision dates. RPX data include the patent claims challenged and the statutory grounds on which they were challenged. In addition, RPX data include which patent claims were instituted for trial on which statutory grounds, and which patent claims were ruled unpatentable on which statutory grounds. RPX and Unified Patents provided the names of the petitioners and patent owners, as well as whether the patent owner is an operating company or one of several classifications of non-practicing entities. RPX also provided the names of the parties' attorneys. We categorized which program each petition was filed under (CBM, inter partes review, or post-grant review) to enable comparisons across programs.
We used the data from Unified Patents on Patent Trial and Appeal Board proceedings to supplement the RPX data for outcomes of each petition. Specifically, we compared the Unified Patents outcome variable—which describes the final outcome of the proceeding—and the RPX outcome variable to create a new variable that reflects the full available information about each petition's outcome. There were some cases—fewer than 3 percent—where the two variable values were inconsistent with one another. In these cases, we reviewed trial documentation to determine the correct value for the outcome variable. The Unified Patents outcome variable sometimes had more information than the RPX variable. For example, cases that were terminated because of settlement were identified as settlements in the Unified Patents data, but not in the RPX data. We retained the additional detail for our analysis.
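A minimal sketch of this merging step appears below. The petition identifiers, column names, outcome labels, and precedence rule are hypothetical stand-ins (the report does not describe the vendors' actual schemas); the logic simply keeps a matching label, prefers a more detailed label when it refines a coarser one, and flags genuine disagreements for manual review of trial documents.

```python
import pandas as pd

# Hypothetical identifiers, column names, and outcome labels; the vendors'
# actual schemas are not described in this report.
rpx = pd.DataFrame({
    "petition": ["CBM-001", "CBM-002", "CBM-003"],
    "rpx_outcome": ["terminated", "final written decision", "instituted"],
})
unified = pd.DataFrame({
    "petition": ["CBM-001", "CBM-002", "CBM-003"],
    "up_outcome": ["settled", "final written decision", "not instituted"],
})

# Pairs in which the first label refines, rather than contradicts, the second.
REFINES = {("settled", "terminated")}

def combine(row):
    rpx_val, up_val = row["rpx_outcome"], row["up_outcome"]
    if rpx_val == up_val:
        return rpx_val
    if (up_val, rpx_val) in REFINES:
        return up_val    # keep the more detailed Unified Patents label
    return "REVIEW"      # inconsistent; resolve by reading trial documents

merged = rpx.merge(unified, on="petition")
merged["outcome"] = merged.apply(combine, axis=1)
print(merged[["petition", "outcome"]])
```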
To determine trial outcomes at the patent level, we analyzed the petition in which the patent proceeded the furthest in the CBM process. For example, if a patent was challenged under the CBM program multiple times—for example, three times—and two petitions were not instituted to the trial phase and one was instituted and then settled before the board judges issued a final written decision, we used the petition that proceeded the furthest for our patent-level analysis of outcomes. In this way, we were able to report what happened to patents under the CBM program, while not double-counting those patents that were challenged more than once.
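The patent-level reduction can be sketched the same way, again with invented data: each petition is scored by how far it proceeded, and only the furthest-proceeding petition for each patent is retained, so that multiply-challenged patents are counted once.

```python
import pandas as pd

# Invented example data; the stage ranking mirrors the CBM process:
# not instituted < settled after institution < final written decision.
STAGE = {"not instituted": 0, "settled after institution": 1,
         "final written decision": 2}

petitions = pd.DataFrame({
    "patent": ["7,000,001", "7,000,001", "7,000,001", "7,000,002"],
    "outcome": ["not instituted", "not instituted",
                "settled after institution", "final written decision"],
})
petitions["stage"] = petitions["outcome"].map(STAGE)

# Keep one row per patent: the petition that proceeded the furthest.
furthest = petitions.loc[petitions.groupby("patent")["stage"].idxmax()]
print(furthest[["patent", "outcome"]])
```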
To examine the extent to which USPTO ensures trial timeliness, reviews past decisions for consistency, and engages with stakeholders to improve its administrative proceedings for the program, we reviewed the America Invents Act (AIA); USPTO's strategic plan; and the Patent Trial and Appeal Board's policy and guidance documents, including the Trial Practice Guide. We also interviewed board officials on several occasions. We compared USPTO's efforts to review decisions for consistency against USPTO's current strategic plan as well as Standards for Internal Control in the Federal Government (commonly referred to as the "Green Book"). In addition, we reviewed publicly available information documenting the steps the board takes to engage with stakeholders, including documentation of webinars, judicial conferences, and roundtable discussions.
To obtain stakeholder views on the effects of the CBM program and whether it should be extended, we conducted semi-structured interviews with 38 stakeholders knowledgeable about the CBM program. To identify these stakeholders, we first identified the following sets of stakeholder groups: petitioners and patent owners who have been involved with CBM trials; attorneys who have represented clients with board proceedings; industry trade groups; academic and legal commentators; public interest groups; and venture capitalists. We identified petitioners, patent owners, and attorneys who had been involved in board proceedings using data from RPX Corporation and Unified Patents. We ranked petitioners, patent owners, and attorneys based on how many CBM cases they had been involved with, and how many inter partes review cases they had been involved with in front of the board. We then requested, via email, interviews with several stakeholders from each stakeholder group, and began our semi-structured interviews as stakeholders accepted our invitation. During our initial set of semi-structured interviews, we identified additional stakeholders through an iterative process known as a "snowball selection method," whereby during each interview we solicited names of additional stakeholders it would be useful to interview. As we obtained the names of additional stakeholders, we requested additional interviews, conducted interviews, and solicited additional stakeholders, until we (a) had interviewed four or more stakeholders from each identified stakeholder group and (b) found that stakeholder responses generally described the same broad themes and relevant points that previous stakeholders had described about the topics we were discussing. Taken together, the stakeholders we recruited and interviewed did not form a random, statistically representative sample of all relevant stakeholders. As such, we cannot generalize the results of the interviews. However, these stakeholder groups and the stakeholders we interviewed provide a broad spectrum of informed opinions on the CBM program.
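The recruitment loop just described can be sketched as follows. All names, the referral behavior, and the seeding are fabricated stand-ins, and the loop models only stopping condition (a); condition (b), thematic saturation, was a qualitative judgment that code cannot capture.

```python
import random
from collections import deque

random.seed(0)
GROUPS = ["petitioner", "patent owner", "attorney", "trade group",
          "academic/legal commentator", "public interest", "venture capital"]
MIN_PER_GROUP = 4  # stopping condition (a); saturation is not modeled here

def interview(name, group):
    """Stub interview that returns referred stakeholders (the 'snowball')."""
    return [(f"{name}-ref{i}", random.choice(GROUPS))
            for i in range(random.randrange(3))]

# Seed the queue, standing in for the ranked RPX/Unified Patents lists.
queue = deque((f"{g}-seed{i}", g) for g in GROUPS for i in range(2))
seen = set(queue)
counts = {g: 0 for g in GROUPS}

while queue and not all(n >= MIN_PER_GROUP for n in counts.values()):
    name, group = queue.popleft()
    counts[group] += 1                    # conduct the interview
    for referral in interview(name, group):
        if referral not in seen:          # solicit and enqueue new names
            seen.add(referral)
            queue.append(referral)

print(counts)
```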
Of the 38 stakeholders interviewed, 14 had previously filed CBM petitions against more than one patent owner, and many of those had also petitioned an inter partes review. In addition, we interviewed 6 patent owners that had been involved in multiple CBM trials. We also interviewed attorneys from 5 law firms that have represented multiple petitioners and patent owners in CBM cases. In addition, we interviewed officials from 4 trade groups, 4 venture capital firms, and 5 academics and legal commentators, all of whom had interest and expertise in the CBM program.
During our semi-structured interviews, we asked stakeholders the following three broad questions:
How much and in what way has the existence of the CBM program affected patent assertion strategies since 2012?
How much has the CBM program influenced investment decisions and innovation for technologies related to financial-services business methods?
Should the CBM program be allowed to expire in September 2020 or should it be renewed?
For each question, we used a consistent set of follow-up prompts to ensure that we fully covered all aspects of each topic with the stakeholders, that we received complete answers, and that we were able to accurately record the responses. While we asked every stakeholder each of the three questions, we did so keeping in mind the particular background and experience of each stakeholder because experience and expertise differed across our wide range of stakeholders. As such, during each interview, we focused on the topics where the stakeholder had the most experience, expertise, or knowledge.
To systematically analyze the information we collected during our semi-structured interviews, we used qualitative analysis software to group the responses into categories and themes. All information was individually coded by two analysts. We classified individual responses according to these broad themes, which generally corresponded to our main questions:
The effect of the CBM program on patent assertion and litigation.
The effect of the CBM program on innovation and investment in business methods.
The future of the CBM program.
Within each broad theme, we labeled and organized sub-themes. We established the sub-themes by identifying natural clusters of stakeholder responses.
We analyzed the categorized themes and sub-themes to draw inferences about the effectiveness of the CBM program by taking the following steps: We first examined the amount and nature of agreement and disagreement between responses within each theme and sub-theme. We then assessed the strength of the arguments supporting each categorized response, considering factors including the number of stakeholders who discussed a topic, the strength of the rationale for each viewpoint, and other supporting evidence provided. We also considered the way in which stakeholders' interests could influence their perspectives.
In this report, we present the themes with the strongest and most consistent support based on rationale including the prevalence of each argument, the presence of credible evidence in support of statements, and the amount of consistency and corroboration of themes across stakeholders. Because stakeholders do not make up a defined population that we could sample from, and because the stakeholders we interviewed had a wide range of experience and expertise, we did not tally up similar responses and do not present stakeholder responses based solely on how many stakeholders agreed or disagreed with a given statement.
We conducted this performance audit from November 2016 to March 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient and appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Comments from the Department of Commerce
Appendix III: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, the following individuals made contributions to this report: Rob Marek (Assistant Director), Kevin Bray, Mark Braza, Richard Burkard, Stephanie Gaines, Michael Krafve, Cynthia Norris, Ardith Spence, Sara Sullivan, and Sarah Williamson.
Related GAO Products
Intellectual Property: Patent Office Should Define Quality, Reassess Incentives, and Improve Clarity. GAO-16-490. Washington, D.C.: June 30, 2016.
Intellectual Property: Patent Office Should Strengthen Search Capabilities and Better Monitor Examiners’ Work. GAO-16-479. Washington, D.C.: June 30, 2016.
Intellectual Property: Assessing Factors That Affect Patent Infringement Litigation Could Help Improve Patent Quality. GAO-13-465. Washington, D.C.: August 22, 2013.
U.S. Patent and Trademark Office: Performance Management Processes. GAO-10-946R. Washington, D.C.: September 24, 2010.
Intellectual Property: Enhanced Planning by U.S. Personnel Overseas Could Strengthen Efforts. GAO-09-863. Washington, D.C.: September 30, 2009.
Check 21 Act: Most Consumers Have Accepted and Banks Are Progressing Toward Full Adoption of Check Truncation. GAO-09-8. Washington, D.C.: October 28, 2008.
U.S. Patent and Trademark Office: Hiring Efforts Are Not Sufficient to Reduce the Patent Application Backlog. GAO-08-527T. Washington, D.C.: February 27, 2008.
U.S. Patent And Trademark Office: Hiring Efforts Are Not Sufficient to Reduce the Patent Application Backlog. GAO-07-1102. Washington, D.C.: September 4, 2007.
Intellectual Property: Improvements Needed to Better Manage Patent Office Automation and Address Workforce Challenges. GAO-05-1008T. Washington, D.C.: September 8, 2005.
Intellectual Property: Key Processes for Managing Patent Automation Strategy Need Strengthening. GAO-05-336. Washington, D.C.: June 17, 2005.
Intellectual Property: USPTO Has Made Progress in Hiring Examiners, but Challenges to Retention Remain. GAO-05-720. Washington, D.C.: June 17, 2005. | Why GAO Did This Study
Patents can promote innovation by giving inventors exclusive rights to their inventions, and patent owners can bring infringement lawsuits against anyone who uses, makes, sells, offers to sell, or imports a patented invention without authorization. As GAO previously reported, such lawsuits can take years and cost several million dollars. USPTO's CBM program provides a trial proceeding to challenge a patent's validity at USPTO's board for, according to stakeholders, a fraction of the time and money that would be spent in the federal courts. The CBM program began in September 2012 and is slated to sunset in September 2020.
GAO was asked to examine the CBM program. This report (1) describes the extent to which the program has been used to challenge patents, and the results of those challenges; (2) examines the extent to which USPTO ensures timeliness of trial decisions, reviews decisions for consistency, and engages with stakeholders to improve proceedings for the program; and (3) discusses stakeholder views on the effects of the program and whether it should be extended past its sunset date. GAO analyzed CBM trial data from September 2012 through September 2017, reviewed USPTO documents, and interviewed 38 stakeholders, such as legal and academic commentators, selected for their knowledge of or direct involvement in such trials.
What GAO Found
From September 2012 through September 2017, entities facing patent infringement lawsuits filed 524 petitions challenging the validity of 359 patents under the U.S. Patent and Trademark Office's (USPTO) covered business method (CBM) program, resulting in decisions against about one-third of these patents. The CBM program provides entities facing infringement lawsuits an opportunity to challenge the validity of a business method patent by demonstrating that it did not meet requirements for patentability. Business method patents focus on ways of doing business in areas such as banking or e-commerce. The rate of filing petitions over this period has fluctuated but has generally declined since 2015, and none were filed in August or September 2017.
USPTO has taken several steps to ensure the timeliness of trial decisions, review past decisions, and engage with stakeholders to improve proceedings under the program:
Timeliness: USPTO regularly informs relevant parties about paperwork requirements and due dates throughout trials. According to program data, as of September 2017, all 181 completed trials were completed within statutorily required time frames.
Decision review: USPTO has taken several steps to review its decisions and has monitored the rate at which the Court of Appeals for the Federal Circuit affirms or reverses them. However, USPTO does not have guidance, such as documented procedures, for reviewing trial decisions, or the processes leading to decisions, for consistency. Without guidance, such as documented procedures, USPTO cannot fully ensure that it is meeting its objective of ensuring consistency of decisions.
Stakeholder engagement: USPTO judges have engaged with stakeholders by participating in public roundtables and webinars, and attending judicial conferences, among other things.
Stakeholders GAO interviewed generally agreed that the CBM program has reduced lawsuits involving business method patents in the federal courts. While many stakeholders favored maintaining aspects of the program, there was not strong consensus among stakeholders for how future trials should be designed.
What GAO Recommends
GAO recommends that USPTO develop guidance, such as documented procedures, for reviewing trial decisions for consistency. USPTO agreed with GAO's recommendation. |
Background
DOD Goals, Roles, and Responsibilities for the Privatized Housing Program
DOD’s policy is to ensure that eligible personnel and their families have access to affordable, quality housing facilities and services consistent with grade and dependent status, and that the housing should generally reflect contemporary community living standards. It is also DOD’s policy to rely on the local private sector as the primary source of housing for servicemembers who are normally eligible to draw a housing allowance, whether unaccompanied or accompanied by family. About a third of eligible servicemembers generally live on an installation, with the rest living in the surrounding local communities.
The Assistant Secretary of Defense for Energy, Installations, and Environment (ASD (EI&E)) is the program manager for all DOD housing, whether DOD-owned or privatized. In this capacity, the ASD (EI&E) provides guidance and general procedures related to military housing privatization. One responsibility of ASD (EI&E) is to provide required reports to Congress on privatized military housing projects. However, it is the responsibility of the military departments, rather than ASD (EI&E), to execute and manage privatized housing projects, including conducting financial management and monitoring their portfolio of projects. Each military department has issued guidance that outlines its responsibilities for privatized housing, such as key offices responsible for overseeing privatized housing projects. For each privatized military housing project, developers maintain day-to-day operational decision making and manage each project.
Military Housing Privatization Authorities and Project Structures
The military housing privatization initiative provided DOD with various authorities to obtain private-sector financing and management to repair, renovate, construct, and operate military housing. These authorities included the ability to make direct loans to and invest limited amounts of funds in projects for the construction and renovation of housing units for servicemembers and their families. The projects were generally financed through both private-sector financing and funds provided by the military departments. Specifically, projects obtained private-sector financing by obtaining bank loans and by issuing bonds, which are held by the public. In addition, the military departments provided additional financing. The Army and the Navy generally structured their privatized housing projects as limited liability companies in which the military departments formed partnerships with the developers and invested funds into the partnership. The Air Force generally provided direct loans to the developers. Because privatized housing projects involve budgetary commitments of the federal government, each project was scored at inception by the Office of Management and Budget to determine the amount of funds that needed to be budgeted for that particular project.
The number of projects can change over time. For example, a project may be sold, and new projects can be created. As of October 2017, there were 82 privatized military housing projects, each of which can consist of one or multiple installations. The Army has 35 projects, the Navy and Marine Corps together have 15, and the Air Force has 32. Most of these are family housing projects, but the Army and Navy have created a small number of privatized housing projects for servicemembers without families (that is, unaccompanied housing).
The military departments have flexibility in how they structure their privatized housing projects, but project structures share certain similarities. For a typical project, a military department leased land to a developer for a 50-year term and conveyed existing homes located on the leased land to the developer for the duration of the lease. The developer then became responsible for leasing renovated and newly constructed homes, giving preference to servicemembers and their families.
Each privatized housing project is a separate and distinct entity governed by a series of legal agreements that are specific to that project. These agreements include, among others, an operating agreement, a property management agreement, and an agreement that describes the management of funds in the project, including the order in which funds are allocated within the project. However, while each project is distinct, there are some common elements in how projects invest and utilize funds. Every project takes in revenue, which consists mostly of rent payments. Projects then pay for operating expenses, including administrative costs, day-to-day maintenance, and utilities, among other things. After that, projects generally allocate funds for taxes and insurance, followed by debt payments. Figure 1 shows a typical funding structure for a privatized housing project.
In the typical privatized housing project depicted in figure 1, once debt payments are made, funds are allocated to accounts that fund scheduled maintenance. These accounts exist to fund repair and replacement of items such as roofs, heating and cooling systems, and infrastructure. After that, funds are allocated to a series of management incentive fees, such as the property management fee. Finally, the project divides the remaining funds according to a fixed percentage between accounts that fund major renovations and rebuilds on the one hand and the developer on the other. The percentages may vary, but the majority of funds go toward the accounts funding major renovations and rebuilds.
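The following minimal sketch illustrates the waterfall just described. The tier names follow figure 1, but the dollar amounts and the 75/25 residual split are invented for illustration; actual allocations are set by each project's business agreements, and the report notes only that the majority of residual funds go to the renovation and rebuild accounts.

```python
# Illustrative waterfall; amounts and the residual split are assumptions.
def run_waterfall(revenue, tiers, renovation_share):
    remaining, paid = revenue, {}
    for name, amount in tiers:           # each tier is funded before the next
        paid[name] = min(amount, remaining)
        remaining -= paid[name]
    # Residual cash is split by fixed percentages between the account for
    # major renovations and rebuilds and the developer.
    paid["renovation/rebuild account"] = remaining * renovation_share
    paid["developer"] = remaining * (1 - renovation_share)
    return paid

tiers = [
    ("operating expenses", 450_000),         # admin, maintenance, utilities
    ("taxes and insurance", 60_000),
    ("debt service", 300_000),
    ("scheduled maintenance accounts", 80_000),  # roofs, HVAC, infrastructure
    ("management incentive fees", 40_000),
]
for item, amount in run_waterfall(1_000_000, tiers, 0.75).items():
    print(f"{item:32s} ${amount:>9,.0f}")
```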
Housing Allowance and Occupancy of Privatized Housing
DOD’s Defense Travel Management Office annually calculates rent and utility rates for locations across the United States based on estimates of local market conditions, which are then adjusted for an individual’s pay grade and dependency status. These calculations, which can fluctuate from year to year, are then used to determine individual servicemembers’ monthly basic allowance for housing payments. DOD does not require servicemembers, other than certain key personnel and junior unaccompanied personnel, to live on an installation and thus in military privatized housing. Because only about a third of eligible servicemembers generally live on an installation, the basic allowance for housing payment is designed to enable servicemembers to live off-base comparably to their civilian counterparts. Servicemembers pay their rent—whether living on the installation or off—with their basic allowance for housing payments. Therefore, DOD’s privatized housing competes with available housing options in the local market.
Active-duty servicemembers are given priority for privatized military housing. However, projects can advertise and lease to tenants other than active-duty servicemembers, including civilians in some cases, generally once occupancy dips below a specific level. For example, the Air Force has approved leasing to other tenants when any given project’s occupancy rate falls below 98 percent.
DOD Regularly Assesses Projects’ Financial Conditions but Has Not Consistently Assessed Future Sustainment Needs or Issued Required Reports to Congress
DOD regularly assesses the financial condition of its privatized housing projects through recurring internal reporting by the military departments on each of their projects; however, key data on current financial conditions are not mutually comparable. Moreover, the military departments vary in the extent to which they use measures of future sustainment needs and funding to assess project sustainability. In addition, DOD has not consistently issued required reports to Congress on the financial condition of privatized housing projects in a timely manner.
The Military Departments Regularly Assess the Financial Condition of Their Privatized Housing Projects
The military departments regularly assess the current financial condition of their privatized housing projects through internal, recurring monthly or quarterly financial reporting. DOD policy requires the military departments to manage their housing, including privatized housing, through financial management and reporting. DOD’s housing manual states that because housing privatization projects create a long-term governmental interest in privatized housing, it is essential that projects be monitored attentively, and that the military departments monitor their portfolios of projects. Specifically, each military department produces—based on information provided by each project—or receives from each project quarterly or monthly reports detailing the financial condition of each individual privatized housing project. Each military department also produces periodic reports on the condition of its portfolio as a whole. These reports include financial measures such as revenue and operating expenses, as well as a measure of the ability to make required debt payments, referred to as debt coverage ratio or debt service coverage ratio.
In their assessments, each military department emphasizes somewhat different measures of current financial condition, although each uses debt coverage ratio as a key measure of the current financial condition of privatized military housing projects. Specifically, in its portfolio-wide reports, the Army uses three key performance metrics to measure financial condition—a measure of revenue, net operating income, and the debt coverage ratio. The Air Force also rates projects’ financial condition based on three metrics, but the metrics differ from those used by the Army. The Air Force’s metrics are operating expenses compared with budgets, net operating income compared with the original project plan, and debt coverage ratio. In its portfolio-wide reports, the Navy provides debt coverage ratio as its measure of current financial condition. Regardless of the different metrics used, the military departments rated almost all of the privatized housing projects as having acceptable current financial conditions. Specifically:
Army: For the quarter ending June 30, 2017, all 34 Army projects generated enough cash to continue operations and make required debt payments, according to the Army’s portfolio-wide reporting. However, the Army rated 8 family housing and 4 unaccompanied housing projects as below or well below expectations, in terms of current finances. For example, the Army rated the project at Fort Bragg, North Carolina, as being well below expectations, due to occupancy challenges resulting from off-post competition and higher- than-expected expenses.
Navy and Marine Corps: For the 6 months ending June 30, 2017, all 16 Navy and Marine Corps projects were generating enough cash to continue operations and make required debt payments, according to the Navy’s portfolio-wide reporting. However, 5 of the 16 projects were on a watch list, due to financial challenges. For example, the Marine Corps’ project comprising Camp Lejeune, North Carolina; Marine Corps Air Station Cherry Point, North Carolina; and Stewart Air National Guard Base, New York was experiencing low occupancy rates due to local market competition, and as such was included on the watch list.
Air Force: For the quarter ending June 30, 2017, the Air Force rated 27 of its 32 projects' current finances as acceptable or exceptional. However, the Air Force rated 2 of its 32 projects as unacceptable, and 3 as marginal, for current finances, according to Air Force portfolio-wide reporting. For example, the Air Force rated the Nellis Air Force Base project in Nevada as having an unacceptable current financial condition as of June 2017. In March 2017, the Office of Management and Budget approved the budgetary scoring of a financial restructuring of the project. In the restructuring, the Air Force reduced the interest rate on the government's direct loan to the project and extended the loan's maturity date, redistributed residual project cash flows, and reduced certain returns due to the developer. In another example, the Air Force rated the Air Combat Command II project, which comprises Holloman Air Force Base, New Mexico, and Davis-Monthan Air Force Base, Arizona, as having a marginal current financial condition as of June 2017. Specifically, basic allowance for housing rates for the project were only 85 percent of original expectations, and the project was unable to compensate for that shortfall by controlling expenses. The Office of Management and Budget has approved the budgetary scoring of a financial restructure of the project, including a reduction in the interest rate on the government's loan to the project and a reduction in certain returns and fees previously owed to the developer.
Data on Current Financial Condition of Privatized Housing Projects Reported to the Office of the Secretary of Defense and Congress Are Not Comparable
Based on our analysis, data on the current financial condition of privatized housing projects that have been reported by the military departments to ASD (EI&E) and Congress have not been comparable because (1) there are inconsistencies in the calculation of the reported debt coverage ratios, and (2) the data requested have not followed consistent time periods. Debt coverage ratios are a key measure used by the military departments to report on the current financial condition of privatized housing projects, and the measures are also the main financial measure for privatized housing projects that DOD has previously reported to Congress. However, we found the following inconsistencies in the debt coverage ratio data reported to ASD (EI&E):
Adjustments made to income for the purposes of calculating debt coverage ratios affect the ratios' consistency: The expenses that are or are not included in a project's calculation of the debt coverage ratio are dictated by each project's business agreements. ASD (EI&E) defines debt coverage ratio as the project's net operating income—income remaining after all project expenses are paid, but before debt service and depreciation—divided by its required debt payments. However, we found that in practice, projects make various adjustments to net operating income for the purposes of calculating debt coverage ratios. These adjustments may include adding or subtracting from net operating income any of the following: sustainment fund deposits; various types of management fees, including performance incentive fees and asset management fees; certain utility costs; and taxes. Military department officials stated that the debt coverage ratios calculated using these adjustments, while different for different projects, are accurate and appropriate. However, while the calculation methods may be sufficient for any given project, the differences in calculation methods reduce the comparability of the data, as shown in the sketch following this list.
Different project accounting methods affect the comparability of debt coverage ratios: Some projects conduct financial accounting based on the amount of cash received or paid during the period (referred to as cash basis accounting), while other projects do so based on when revenue is earned and when expenses are incurred, regardless of when cash is received or paid (called accrual basis accounting). These accounting differences can significantly affect the debt coverage ratio. For example, a cash basis project may have cash on hand to pay its debt obligations, but not enough to cover future expenses that would have been recognized under an accrual project. The specific accounting method used reflects each project’s particular business agreements, but the differences in accounting methods reduce the comparability of the debt coverage ratios across the projects.
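To make the comparability problem concrete, the minimal sketch below computes debt coverage ratios for two hypothetical projects whose underlying finances are identical; only the adjustment conventions in their business agreements differ. All figures are invented, and a similar divergence can arise from cash versus accrual accounting.

```python
# Invented figures; adjustment conventions vary by project agreement.
noi = 1_150_000           # net operating income, before debt service
debt_service = 1_000_000  # required debt payments for the period

# Project A's agreements subtract sustainment deposits and management fees
# from net operating income before computing the ratio; Project B's do not.
sustainment_deposit = 120_000
management_fees = 60_000

dcr_a = (noi - sustainment_deposit - management_fees) / debt_service
dcr_b = noi / debt_service

print(f"Project A debt coverage ratio: {dcr_a:.2f}")  # 0.97, below 1.0
print(f"Project B debt coverage ratio: {dcr_b:.2f}")  # 1.15, above 1.0
```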
Moreover, as the program manager for all DOD housing, ASD (EI&E) requested debt coverage ratio data across varying time frames for required reports to Congress on privatized housing projects. Specifically, ASD (EI&E) has alternated between requesting annual average debt coverage ratio data and requesting data as of the end of the reporting period, thus reducing the comparability of the data over time. In instructions for its fiscal year 2014 data collection, the office requested the average debt coverage ratio over the full fiscal year; in instructions for its fiscal year 2015 data collection, it requested data as of the end of the reporting period; and in its fiscal year 2016 data collection, the office again requested data for the average over the full fiscal year.
Furthermore, the instructions provided by ASD (EI&E) to the military departments for fiscal year 2015 did not specify the time period of the data to be reported. Therefore, each military department provided a different time period of data in response, further reducing the comparability of the data. Specifically, one of the military departments provided quarterly data, another military department provided data for the full year, and the other military department provided one-month data, according to military department officials.
Using data from different time periods not only reduces their comparability, but also can produce a different outlook on a project’s financial condition. For example, we found that debt coverage ratios for a single fiscal quarter can be significantly different from the ratio for the same project for the full fiscal year. In some cases, a quarterly ratio showed insufficient funds to continue operations and make required debt payments, while the full-year ratio showed sufficient funds for that purpose. Conversely, another project’s ratios showed the single quarter as having sufficient funds, but the full year as having insufficient funds. ASD (EI&E) officials stated that data for previous reports were collected by different staff in that office and that the current officials were not sure why the time period for fiscal year 2015 data collection was different from that of the other two fiscal years.
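A small invented example of this time-period effect: the same project can fall below a 1.0 debt coverage ratio in a weak quarter while remaining above 1.0 for the full fiscal year.

```python
# Invented quarterly figures for one project; debt service is level while
# net operating income varies seasonally.
quarterly_noi = [220_000, 360_000, 250_000, 320_000]
quarterly_debt = [275_000] * 4

q1_dcr = quarterly_noi[0] / quarterly_debt[0]
annual_dcr = sum(quarterly_noi) / sum(quarterly_debt)

print(f"Q1 DCR:     {q1_dcr:.2f}")      # 0.80: the quarter looks distressed
print(f"Annual DCR: {annual_dcr:.2f}")  # 1.05: the full year covers debt
```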
Standards for Internal Control in the Federal Government states that management should use quality information and externally communicate the necessary quality information to achieve the entity’s objectives. Information, among other things, should be complete and understandable. This involves processing data into information and then evaluating the processed information so that it is quality information. The standards also state that management should obtain relevant data from reliable sources, which provide data that are reasonably free from error and faithfully represent what they purport to represent.
However, in prior reports to Congress, ASD (EI&E) did not clarify the differences in how debt coverage ratios were calculated, resulting in information that lacked full context. Moreover, the information provided by the military departments to ASD (EI&E) and to Congress to conduct their oversight activities has not been consistent and comparable because ASD (EI&E) has not revised its guidance on privatized housing to ensure that data reported to Congress, such as data on debt coverage ratios, are consistent in terms of time periods. Officials in ASD (EI&E) acknowledged that the differences in debt coverage ratio calculation methods and project account methods can affect the comparability of the data. They also noted that in the future they plan to continue the annual time period for data collection and reporting, though they did not identify any additional steps they plan to take to ensure consistent and comparable data. Without contextual information on how the military departments calculate debt coverage ratios—a key measure of the current financial condition of privatized housing projects—and on the effect these differences have on comparing the data across projects, data reported to Congress may not be fully useful in supporting congressional oversight of privatized military housing. Additionally, by revising guidance to ensure that data reported to Congress are comparable (that is, across the same time frames), ASD (EI&E) will provide additional assurance that DOD and Congress will have quality information on which to base decisions regarding privatized housing projects.
The Military Departments Have Varying Methods of Assessing a Project’s Sustainability
During the course of our review, we found that the military departments take different approaches in assessing a project’s sustainability (that is, future sustainment needs and funding). Army officials stated that the Army validates project sustainment plans, and is developing, but has not yet implemented, a model to independently assess project sustainability. The Navy validates sustainment plans generated by the developers managing its projects. In addition to reviewing developers’ sustainment plans, the Air Force conducts an independent analysis of each project’s sustainment needs by conducting site tours of each project location and by using its own financial model to forecast sustainment needs, according to Air Force officials. The Air Force then compares its analysis with that of the developer. In most cases, according to Air Force officials, this comparison has shown that the Air Force’s estimates of sustainment needs were greater than the developer’s original estimates, which would require additional sustainment funding beyond what the developer estimated.
Moreover, the military departments do not all use measures of future sustainment for their internal portfolio-wide reports on privatized housing projects. Specifically:
Army: The Army does not include a measure of future sustainability among the key financial performance metrics it emphasizes in its portfolio-wide oversight reports. The Army tracks the balance of funds for long-term major renovations and rebuilds as compared with expectations, but it does not include a measure of expected future sustainment needs versus funding in its portfolio-wide reports. According to the Army’s portfolio-wide report for the quarter ending June 2017, seven Army projects had fallen below expectations in current funding levels for long-term major renovations and rebuilds.
Navy: In its portfolio-wide reports, the Navy includes a measure of sustainability. Specifically, the reports show modeled surpluses or shortfalls in sustainment funding through the term of each project. As of June 30, 2017, the Navy reported five projects expecting shortfalls in sustainment funding, four of which the Navy anticipated would require project plan modifications to address the shortfalls.
Air Force: In its portfolio-wide reporting, the Air Force has adopted measures of long-term financial condition, including measures of future sustainment funding. Specifically, the Air Force gives each project a “long-term outlook” rating. This rating includes measures of projected sustainment funding levels relative to projected needs, among other measures. As of June 30, 2017, the Air Force rated 6 of its 32 projects as having “unacceptable” long-term outlooks, and another 6 as having “marginal” long-term outlooks. For example, the Air Force considered the Air Combat Command II project, which comprises Holloman Air Force Base, New Mexico, and Davis-Monthan Air Force Base, Arizona, to have severely underfunded planned maintenance funds and a projected inability to meet any future needs for major renovations and rebuilds, due to lower-than-expected basic allowance for housing levels.
DOD guidance states that because privatization creates a long-term governmental interest in privatized housing, it is essential that projects be attentively monitored. DOD has recognized that a lack of sustainment funding can decrease the desirability of housing over time, thus reducing occupancy and further jeopardizing financial stability. However, DOD has not required the military departments to incorporate measures of future sustainment into their assessments of privatized housing projects. Measures of current financial condition, such as the ability to make debt payments, do not necessarily indicate the ability of a project to fund its sustainment accounts sufficiently to maintain housing quality in the future. A project may generate enough revenue to cover operating expenses and make required debt payments, but the level of projected funding available for planned renovations over the course of the project may still be insufficient, as shown by Navy and Air Force portfolio-wide oversight reports. The Navy and Air Force include measures of future sustainment needs and funding in their portfolio-wide oversight. While Army officials stated that the Army regularly reviews sustainment funding levels, the Army does not include forecasts of future sustainment needs and funding in its portfolio-wide assessment reports because they are not required by ASD (EI&E). Without a requirement to include sustainment measures in their oversight of privatized housing projects, military department officials may choose to review such measures or not. If ASD (EI&E) does not require the military departments to include measures of future sustainment in their assessments of privatized housing projects, the military departments may not consistently incorporate such measures into their portfolio-wide assessments, and therefore the military departments and ASD (EI&E) may not have sufficient oversight of the projects’ future sustainability. ASD (EI&E) officials agreed that such a requirement would help ensure that the military departments are consistent in their oversight of future sustainment.
DOD Has Not Met the Requirement for Financial Oversight Reports to Congress in a Timely Manner and Has Not Included Sustainability Information on Each Privatized Housing Project
DOD has not consistently provided required reports to Congress in a timely manner, and as a result Congress does not have up-to-date information on the financial condition of privatized housing. Section 2884(c) of Title 10 of the United States Code requires the Secretary of Defense to report semiannually an evaluation of the status of oversight and accountability measures for military housing privatization projects, including, among other things, information about financial health and performance and the backlog of maintenance and repair. DOD provided a report covering fiscal year 2013 to Congress in November 2014, and then did not provide another report, covering fiscal year 2014, until October 2017. ASD (EI&E) officials stated that they have not provided the reports in a timely manner in recent years due to staff turnover and limited resources, as well as efforts to ensure the quality of the data included in the reports. An ASD (EI&E) official stated that DOD is planning to resume timely reporting, with a consolidated report covering fiscal years 2015 and 2016 to be submitted to Congress in the second quarter of fiscal year 2018, and a report covering fiscal year 2017 to be submitted in late fiscal year 2018.
Furthermore, in prior reports submitted to Congress, ASD (EI&E) has not reported information on the future sustainment of each privatized housing project. The statute does not require the reporting of information on future sustainability for each project. However, ASD (EI&E) has noted that long-term sustainability has become a priority as projects have completed their initial development periods, and therefore information on future sustainment has become more critical to understanding the projects’ financial health.
Standards for Internal Control in the Federal Government states that management should use quality information and externally communicate the necessary quality information to achieve the entity’s objectives. In the past, DOD has not consistently reported on the financial condition of privatized housing projects to Congress, and in cases where data were reported, the department focused its reports on measures of current financial health, such as debt coverage ratios, which do not provide information about the future sustainment of the projects. An ASD (EI&E) official stated that the office will streamline the report’s narrative while adding detail to figures as a means to expedite future report submissions, but the official did not provide additional details of how future reports will be completed in a more timely fashion. ASD (EI&E) officials also stated that in the past they were focused on the initial implementation phases of the privatized housing projects and are now shifting their focus to sustainment, but they have not provided sustainment information on each project to Congress. ASD (EI&E) officials agreed that it would be beneficial to include information on sustainment in their reports to Congress. If DOD does not take steps to comply with statutory time frames for reporting on the financial condition of privatized housing projects moving forward, decision makers in Congress will not have up-to-date information about the financial condition of projects as they provide oversight of a program that represents a long-term commitment for the department. Furthermore, reporting financial information on the future sustainability of projects will help provide Congress a complete picture of the financial condition of each project.
DOD Has Not Fully Assessed the Effects of the Basic Allowance for Housing Reductions but Has Identified Other Privatized Housing Challenges and Options to Address Them
DOD has completed some analysis of the projected effects of recent reductions in the basic allowance for housing on its privatized housing portfolios, but it has not fully assessed the significance of the effects on the future sustainment of each of its privatized housing projects. Moreover, DOD has not identified a course of action to address possible shortfalls resulting from the reductions in the basic allowance for housing. The military departments have also identified a variety of other challenges that could affect the financial condition of their privatized housing projects, including reductions in assigned personnel and the higher-than-expected cost of utility infrastructure. The military departments have identified options to address potential financial challenges to their privatized housing projects, including actions to increase revenue, actions to reduce expenses, and extraordinary measures to improve project financial conditions.
Reductions in the Basic Allowance for Housing Could Decrease Privatized Housing Projects’ Revenue and Future Sustainment Funding
According to the military departments, reductions in the basic allowance for housing relative to market rent and utility calculations by the Defense Travel Management Office—a 4 percent reduction as of 2018—will decrease funding for future sustainment and could affect the privatized housing projects’ ability to continue operations and make required debt payments. Specifically, housing developers stated that declines in revenue have already been felt by certain projects, and that any reduction in their ability to sustain the privatized housing projects over the term of their 50-year leases will result in the degradation of the housing, leaving the homes less marketable. Unlike challenges that may affect one or a few projects, the reductions in the basic allowance for housing affect all projects, since basic allowance for housing is a basis for revenue for all of the projects.
DOD established that the amount charged to servicemembers for renting housing on base would equal their basic allowance for housing rate. Thus, the privatized housing projects were developed with the assumption that they would receive full basic allowance for housing payments as rent, according to officials from each military department. However, at DOD’s request, Congress included provisions in the Carl Levin and Howard P. “Buck” McKeon National Defense Authorization Act for Fiscal Year 2015 and National Defense Authorization Act for Fiscal Year 2016 that authorized the department to reduce the housing allowance to servicemembers below the Defense Travel Management Office’s typical basic allowance for housing calculations, starting with a 1 percent reduction in 2015 and reaching a 5 percent total reduction by 2019. As of 2018, the department has reduced basic allowance for housing payments by 4 percent. Because of this reduction, the revenue that projects receive from rent payments has decreased at certain projects. However, according to officials representing the military departments, the reductions in the basic allowance for housing will not be the sole reason that any project is struggling. A project may be struggling due to other challenges the military departments identified, examples of which we describe in this report, such as aging utility infrastructure. Even so, officials representing each military department stated that the reductions will have a compounding effect on projects that are facing other challenges.
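As a rough illustration of the revenue mechanics described above, the sketch below applies the statutory reduction schedule to a hypothetical project. The monthly rate and unit count are assumptions for illustration only, and the one-percentage-point-per-year ramp between the 1 percent (2015) and 5 percent (2019) endpoints is likewise an assumption consistent with the 4 percent figure reported for 2018.

```python
# Hypothetical illustration of how basic allowance for housing (BAH)
# reductions lower project rental revenue, since rent charged to
# servicemembers is set at the BAH rate.

full_bah_monthly = 1_500   # hypothetical unreduced monthly BAH rate
occupied_units = 1_000     # hypothetical number of occupied units

# Assumed one-point-per-year schedule from 1% (2015) to 5% (2019).
for year, reduction in [(2015, 0.01), (2016, 0.02), (2017, 0.03),
                        (2018, 0.04), (2019, 0.05)]:
    monthly_loss_per_unit = full_bah_monthly * reduction
    annual_revenue_loss = monthly_loss_per_unit * 12 * occupied_units
    print(f"{year}: {reduction:.0%} reduction -> "
          f"${annual_revenue_loss:,.0f} less annual rent revenue")
```

Under these assumptions, the 4 percent reduction in effect as of 2018 would cost such a project $720,000 in rent revenue per year, funds that would otherwise flow toward operations, debt service, and sustainment accounts.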
DOD Has Not Fully Assessed the Effects of the Reductions in the Basic Allowance for Housing That Began in 2015
An August 2015 memorandum issued by ASD (EI&E) directed the military departments to complete a thorough review of their privatized housing portfolios. Additionally, the military departments were to provide a report outlining any effects of changes in the basic allowance for housing on their portfolios. However, the military departments have not fully assessed the effects of the basic allowance for housing reductions. Instead, in response to this memorandum, the military departments completed some analysis on the effects of the reductions in the basic allowance for housing and provided reports outlining the projected effects of the reductions on their privatized housing portfolios. Each military department reported that the reductions in the basic allowance for housing would decrease project revenue, and each provided estimates across multiple scenarios. Specifically:
The Army’s September 2015 report projected an average decrease in long-term sustainment accounts of $104 million per project through 2039 based on a 5 percent reduction in basic allowance for housing rates. Out of the 35 projects in the Army’s privatized housing portfolio, the report looked at the 15 projects projected to lose 5 percent or more of their assigned personnel and estimated the funds available to support each project from 2015 until the end of 2039.
The Navy’s October 2015 report projected a decrease in long-term sustainment accounts across the portfolio of privatized housing projects of $2 billion based on a 5 percent reduction in basic allowance for housing rates. The report also summarized any projected effects in the first year of reductions on the debt coverage ratio and specified the calendar years when sustainment shortfalls could begin to occur per project.
The Air Force’s November 2015 report projected a decrease of $48 million per year across the portfolio based on a 5 percent reduction in basic allowance for housing rates. The report indicated that project ratings could begin to be affected in the same year as the reductions in the basic allowance for housing were implemented, and that funding for long-term sustainment would be diminished.
However, DOD does not have the information needed to fully assess the effects of the reductions that began in 2015, because it did not direct the military departments to specify in their reports the significance of the effects of the reductions on each individual project. The August 2015 ASD (EI&E) memorandum directed the military departments to provide reports with a “thorough review,” but it did not specify the inclusion of information that would detail the extent of the effects on the sustainment of each individual project. As a result, the reports did not fully assess specific effects on each project to enable the identification of and response to specific risks. For example, generally, the reports did not include certain information for the full term of all projects, as detailed below:

Two of the reports did not include information on when deficits related to reductions in the basic allowance for housing will occur per project.

Two of the reports did not include information on the decrease in the sustainment accounts due to reductions in the basic allowance for housing versus the amount that the project requires for planned sustainment per project.

None of the reports included information on the likely effects of particular sustainment funding deficits (for example, how many units will forgo needed renovations or rebuilds).
In addition, the military departments did not identify specific actions in the reports to respond to particular, identified shortfalls for individual projects resulting from reductions in the basic allowance for housing. In its August 2015 memorandum, ASD (EI&E) noted that individual projects may have different solutions to address the effect of the reductions in the basic allowance for housing. The military departments did not outline solutions for each individual project but, as requested by ASD (EI&E), proposed recommendations in their reports to mitigate the overall effects of the reductions in the basic allowance for housing by charging servicemembers the out-of-pocket rate. The out-of-pocket rate reflects a servicemember cost-sharing adjustment that would require the servicemember to pay the amount by which his or her allowance was reduced.
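A small worked example may help clarify the out-of-pocket rate; the dollar figures below are hypothetical.

```python
# Hypothetical example of the out-of-pocket rate: the servicemember
# pays the amount by which his or her BAH was reduced, restoring the
# project's rent to the full (unreduced) BAH level.

unreduced_bah = 1_500   # hypothetical monthly rate before reductions
reduction_pct = 0.04    # the 4 percent reduction in effect as of 2018

reduced_bah = unreduced_bah * (1 - reduction_pct)  # allowance received
out_of_pocket = unreduced_bah - reduced_bah        # paid by servicemember

print(f"Allowance received: ${reduced_bah:,.0f}")    # $1,440
print(f"Out-of-pocket cost: ${out_of_pocket:,.0f}")  # $60 per month
```

In this example, the project's rent revenue per unit returns to the full $1,500, but the servicemember bears a $60 monthly cost, which helps explain the occupancy concerns discussed below.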
However, neither DOD nor the military departments have taken action to address the reports’ recommendations, nor have they determined any other courses of action for individual projects in response to the reductions in basic allowance for housing. While the Army has a policy that would allow individual projects to propose charging servicemembers the out-of-pocket amount, subject to Army approval, the policy states that the Army strongly prefers that projects not charge servicemembers. According to Army officials, none of the projects had done so as of August 2017. Further, according to privatized housing developers representing Army projects, they have not proposed charging the out-of-pocket rate because doing so could result in a reduction in occupancy at that project, as servicemembers would begin to look for other housing. Unlike the Army, the Navy and Air Force do not have a policy that would allow developers to charge the out-of-pocket amount. According to ASD (EI&E), Navy and Air Force officials stated that their lack of policy is based in large part on the fact that servicemembers from all three military departments reside at nearly every installation, and that without having written assurance that the other military departments will also charge the out-of-pocket rate, the Air Force and Navy cannot agree to do so.
Standards for Internal Control in the Federal Government states that management should analyze the identified risks to estimate their significance, which provides a basis for responding to the risks, and design responses to the analyzed risks so that risks are within the defined risk tolerance for the defined objective. In its August 2015 memorandum, ASD (EI&E) noted that the reductions in the basic allowance for housing could create shortfalls that in turn could lower the quality of homes in privatized housing communities. However, DOD has not fully assessed the significance of this risk by considering the magnitude of impact, the likelihood of occurrence, and the nature of the risk because, generally, the reports do not include certain information for the full term of all projects, as detailed above. Specifically, DOD has not fully assessed the significance of the risk of the reductions in the basic allowance for housing by considering how the reductions will affect the quality of its housing. If DOD does not fully assess the effects of the reductions in the basic allowance for housing, DOD and Congress will not be fully informed before making decisions that could affect all of the projects. Furthermore, if DOD does not respond to the risk of reduced sustainment funds by designing specific actions, DOD and the military departments may not be well positioned to reduce any risks and meet their objective of providing quality housing for servicemembers.
The Military Departments Have Identified Various Challenges to Sustaining Their Privatized Housing Projects
The military departments have identified various challenges that could affect the financial condition and future sustainment of their privatized housing projects. Examples of these challenges include the following:
Reductions in assigned personnel at installations have reduced occupancy rates: Information from military department officials shows that the loss of personnel assigned to an installation has reduced occupancy at some projects. Reductions in assigned personnel can occur at an installation because of large-scale troop reductions or the inactivation of units. The decrease in occupancy at some projects has led to revenue and cash flow challenges. For example, Army officials noted that the occupancy rate dropped from about 95 percent to about 70 percent at the Fort Knox project in Kentucky in 2014 when a unit was inactivated. This drop in occupancy resulted in challenges for the privatized housing project because the number and type of housing units originally built were determined on the basis of the unit’s remaining at the installation.
Aging utility infrastructure has increased sustainment costs, resulting in reduced cash flows for some projects: According to DOD and officials representing the military departments, the costs of maintaining utility infrastructure have reduced cash flows for some projects. In some privatized housing agreements, the military departments transferred responsibility for utility infrastructure to the projects. According to DOD and military department officials, this oversight and maintenance have been more costly than project owners had expected. Air Force officials stated that aging utility infrastructure is not something the projects are equipped to handle because there is not enough revenue in their project structures to cover the costs of maintaining the infrastructure. Air Force officials said that they noticed the challenges related to transferring utility infrastructure in the earlier projects and decided to stop transferring infrastructure to developers in later projects. Moreover, according to Air Force officials, some project owners are now asking the military departments to take back the infrastructure. For example, the Air Force agreed to take back some of the gas and electric infrastructure at the Air Force Academy project in Colorado as part of a financial restructuring.
Perceived disconnects between basic allowance for housing calculations and market rates: Military department officials and privatized housing developers perceive the Defense Travel Management Office’s basic allowance for housing calculations as challenging because they believe that the calculations are unpredictable and do not always reflect the realities of local markets. Officials in each military department stated that the data used for the calculations sometimes do not accurately reflect the local market surrounding the project. For example, officials from the Navy’s Midwest project noted that the 2014 calculation for Millington, Tennessee (an area covered by the Midwest project) was higher than the calculation for the project’s Chicago area, even though they felt that the Chicago area should have had the higher costs of the two. Additionally, according to Army officials, basic allowance for housing rates fluctuate at certain projects from year to year and do not reflect the local market. For example, the average basic allowance for housing rate for Fort Huachuca in Arizona dropped 11 percent from 2014 to 2015, increased 4.6 percent in 2016, and dropped 9 percent in 2017. Army officials stated that these fluctuations did not match rental costs in the local market.
Actual costs of utilities in some locations are not covered by the basic allowance for housing utility rates: Officials representing two military departments stated that the Defense Travel Management Office’s basic allowance for housing calculations do not accurately reflect the actual costs of utilities. According to Army officials, the utility component of the Defense Travel Management Office’s calculations does not cover the actual cost of utilities for project homes at some locations. This difference can result when the surveys for utility costs are drawn from homes in the local community that are not comparable to those on base. For example, in Fairbanks, Alaska—where the Army’s Fort Wainwright/Greely project is located—off-base homes get the majority of their heat from wood stoves, which contribute no cost element to the surveys used by the Defense Travel Management Office. Because these costs are underreported and not otherwise adjusted for, according to Army officials, the basic allowance for housing calculations fail to account for the funds necessary to cover the costs of traditional, metered utilities.
Unexpected project expenses can reduce cash flows for some projects: Officials representing two military departments stated that unexpected expenses can be a challenge for some projects. These expenses can occur because of unexpected events, such as weather events, environmental damage, or unexpected litigation. For example, the Navy’s Mid-Atlantic project has experienced unexpected expenses related to water intrusion and mold issues and the ensuing litigation, causing fewer funds to flow to the project’s sustainment accounts. There are also expenses for snow removal, hurricanes, and flooding. Navy officials stated that they did not anticipate a lot of sustainment work in the first 5 to 10 years of the projects, but needs have arisen due to these unexpected events. Additionally, according to information from the Navy’s New Orleans project in Louisiana, hurricane and tropical storm damage may drain $1.5 million to $2 million from the project’s sustainment accounts every 3 to 4 years.
Determining the amount DOD must budget for a project may affect future expansions or changes to existing projects: Military department officials also noted potential challenges with the way that the Office of Management and Budget will be scoring future projects. Scoring seeks to determine the cost that should be recognized and recorded as an obligation of DOD for budgeting purposes at the time a contract is signed. When the privatized housing initiative began, developers sought private borrowing, knowing that only the government funding would be scored because a 1997 Office of Management and Budget memorandum established that private funds for the projects would not be scored as government participation or activity. However, according to a 2005 Office of Management and Budget memorandum, as of September 30, 2010, new privatized housing projects and expansions to existing projects using the limited liability or corporation approach are subject to traditional scoring rules. These rules require projects proposing the use of a purely private entity to be scored as a private activity, and projects proposing the use of a co-owned limited liability corporation to be scored as government activity. Some military department and developer officials have expressed concern with the uncertainties surrounding future scoring. Specifically, military department officials and developers are concerned that the reversion to traditional scoring will affect any plans for obtaining mid-term loans and any potential expansions or other changes to existing projects. Office of Management and Budget officials stated that any future federal government contributions to privatized housing projects in the form of direct loans or loan guarantees will be fully scored at the value of the loan or loan guarantee.
Privatized Housing Projects Have Various Options to Mitigate Financial Challenges
Military department and developer officials have identified various options to address financial challenges such as those previously discussed in this report. These include actions to increase revenues, actions to reduce expenses, and extraordinary measures to improve project financial conditions. As the project manager, the developer may act unilaterally in some cases, and other actions may require approval from the military department, coordination with ASD (EI&E), or notification to the Office of Management and Budget. Although these actions may improve a project’s financial condition, there are limitations, such as the potential to reduce tenant satisfaction and therefore occupancy levels, or costs to the government. The extent to which any of these options will be sufficient to address a particular project’s financial challenges depends on the degree of the financial challenge and the effectiveness of the option. For example, a project may seek to raise revenue by advertising to tenants to increase occupancy, but the response may be insufficient. Likewise, a project may engage in a financial restructuring to return the project to a healthy financial footing, but ongoing low occupancy or unexpectedly high expenses may continue to challenge the project financially.
Actions to Increase Revenue
Developers and military departments cited several options for increasing project revenues, including the following examples:

Renting to tenants other than active-duty servicemembers: The military departments have the option to increase project revenues by allowing projects to rent to tenants other than active-duty servicemembers. The Navy and Air Force have policies that determine the priority ordering of types of tenants to whom a project can rent. An Army official stated that the Army does not have a department policy, but allows projects to rent to tenants other than active-duty servicemembers based on project agreements. For example, a project may offer to rent to tenant groups in the following order: active-duty personnel, reserve-duty personnel, DOD civilian employees, military retirees, and general public tenants. As of June 2017, 33 of 35 Army privatized housing projects were renting to tenants other than active-duty servicemembers; 14 of 16 Navy and Marine Corps projects were renting to tenants other than active-duty servicemembers; and 28 of 32 Air Force projects were renting to tenants other than active-duty servicemembers. While renting to tenants other than active-duty servicemembers can increase revenue, the usefulness of this action is limited when a project is already operating at a high rate of occupancy or when additional demand is limited.
Other steps to increase occupancy: Developers can take other actions to increase project occupancy, including increased advertising, promotions, or rent concessions. While these actions can increase occupancy, advertising adds costs to project operations, and rent concessions lower the per-unit revenue earned for the project. Figure 2 shows an advertisement by a privatized housing project seeking tenants outside of Naval Station Norfolk in Virginia.
Charging fees for services: Developers stated that they have considered charging fees for services that had previously been provided free of charge—such as community center rentals and pet fees—as another means of increasing project revenue. However, a developer’s ability to charge fees varies based on project agreements and military department policies. Developers also need to consider potentially negative effects on tenant satisfaction.
Actions to Reduce Expenses
Developers and military departments cited several options for reducing project expenses, including the following examples:

Reducing or eliminating services: Projects can reduce or eliminate project services as a means of reducing operating expenses. Officials have taken these steps at certain Army, Navy, and Air Force projects. For example, Navy officials told us that the developer cut portions of the landscaping program at the Navy’s Midwest project in Illinois, Indiana, and Tennessee and eliminated one 24-hour service desk at the Navy’s Hampton Roads Unaccompanied Housing project in Virginia in order to reduce expenses. While these actions reduce operating expenses, providing reduced or fewer services may make a project less marketable or desirable to tenants and can lead to declines in tenant satisfaction and occupancy.
Deferring routine maintenance: In response to financial distress, projects can curtail routine maintenance to realize savings. For example, when Nellis Air Force Base in Nevada was facing cash flow challenges, officials told us that the project curtailed its preventive maintenance program that includes the inspection and repair of heating, ventilation, and air conditioning systems; water heaters; plumbing and plumbing fixtures; roofs; and carpeting. These expense-saving measures help operating costs in the near term, but deferring maintenance can reduce the quality of the housing, reduce tenant satisfaction, and increase expenses over time by reducing the effective life of the items not being maintained.
Delaying sustainment: Another option to reduce project expenses is to delay certain sustainment actions. At the Army’s Fort Knox project in Kentucky, officials stated that the sustainment plan initially included the demolition and rebuild of each unit or full renovation of historic units over the 50-year project lease; however, they no longer project that there will be funds to complete those improvements. Instead of full rebuilds, officials stated that they expect to conduct piecemeal renovations. Over time, deferred sustainment can lead to reduced housing quality, in turn reducing occupancy levels and tenant satisfaction, and thereby reducing project revenues.
Extraordinary Measures
Developers and the military departments can also take various extraordinary measures to improve the financial condition of a project. Extraordinary measures are options that can alter project agreements or project financial arrangements with the military department. These options may require approval from the military department, coordination with ASD (EI&E), or notification to the Office of Management and Budget. Examples of such actions include the following:

Retaining and renting excess units: Projects can earn additional revenue by retaining and renting units that were originally slated for demolition. Some project plans included the transfer of existing housing units, deemed in excess of project needs, to the developer with the intention of demolishing them. For retaining and renting excess units to be an option, a project must have some excess units slated for demolition and sufficient demand for their rental.
Reducing project scope: Projects may reduce the scope of planned work to reduce potential expenditures or improve the project’s financial state. Reductions in scope may be in the form of the number of units to be built, renovated, or demolished. For example, following the inactivation of a brigade combat team at Fort Knox in Kentucky, the project made plans to eliminate 280 units due to changes in servicemember housing needs from when the project originally started construction.
Deferring fees: Developers can defer project fees due to them, such as fees for construction or management services, so that more funds are available for other project needs. Developers agreed to defer fees for several Navy and Air Force projects as a means to ensure adequate funding for the completion of project construction. Projects can defer fees to meet shortfalls in project funding, but the deferral can place additional financial strain on a project, as funds later must be used to repay the deferred fees.
Making additional investment contributions: Developers can make additional financial investments in the project to cover underfunded project expenses. For example, Air Force officials stated that developers have made additional financial investments at the Robins Air Force Base I project in Georgia to ensure that the project had sufficient funds to make debt payments. According to officials, the Air Force agreed to the additional investment contributions on the basis that they be repaid from any future excess cash flows.
Returning assets: In some instances, project assets can cost the developer more than anticipated due to the expenses necessary to maintain the asset. To alleviate the resulting financial challenges, projects can transfer ownership of the assets back to the military departments. For example, the Air Force took back five historic units from the Robins II project in Georgia that, according to officials, were not financially viable within the project and that the Air Force wanted for purposes other than housing. When assets are returned to a military department, the military department may have to begin budgeting for their costs through its annual budgeting process.
Transferring assets: The military department can transfer assets to a project that developers can sell to fund projects. For example, the Navy transferred land and units to the Navy’s Midwest project with the intention that the developer would sell the land and units to supplement project funding. Asset sales can be unreliable funding sources if assets sell for less than the project expected.
Financial restructurings: Military departments can seek to financially restructure projects to improve their financial condition. This process requires the military departments to renegotiate project agreements with the developer. For example, the Air Force recently completed financial restructurings of the Nellis Air Force Base project in Nevada and the Air Combat Command Group II project, which comprises Davis-Monthan Air Force Base in Arizona and Holloman Air Force Base in New Mexico. Air Force and developer officials stated that the Nellis Air Force Base project began to have problems making debt payments because of declines in basic allowance for housing payments associated with falling local rental market prices. For Nellis, the Air Force and the developer negotiated a financial restructuring whereby the Air Force reduced the interest rate on the government’s loan to the project and extended the loan’s maturity date. The Air Force also gave the developer an additional portion of project profits. In exchange, the developer agreed to forgive an outstanding balance of payments it was owed.
An ASD (EI&E) official stated that financial restructuring agreements may require notification to the Office of Management and Budget, which scores changes to privatized military housing projects. Restructurings can provide relief to projects that are facing imminent default or longer-term sustainment funding shortfalls, but they can also add financial costs to the military department. The ability to financially restructure also may be limited by the willingness of the developer to give concessions during negotiations and the ability to obtain the approvals necessary to complete the restructure.
DOD Has Not Defined When Project Changes Require Advance Notice or Defined Risk Tolerance Levels for Not Achieving Housing Goals
DOD has not clearly defined in its policy the circumstances in which ASD (EI&E), as the DOD-wide housing program manager, should receive advance notice of changes to address financial challenges in privatized military housing projects. In addition, DOD has not defined its risk tolerance levels for achieving its goal of providing quality housing to servicemembers that reflects community living standards—in particular, its tolerance for declining levels of funding for future sustainment that can pose a risk to this goal.
The Military Departments Have Varied Understandings of When Privatized Military Housing Project Changes Require Notification
The military departments have varied understandings of what changes to privatized housing projects require notification to ASD (EI&E)—DOD’s program manager for privatized housing. Military department officials provided somewhat differing explanations when asked about the types of project changes that require notification to ASD (EI&E). Specifically:
Army officials stated that the Army provides notice any time there is a planned use of or change to a project involving privatized military housing authorities related to government loans and loan guarantees, the leasing of housing units, or government investments in privatized housing projects, as well as any action that requires congressional notification. The Army also notifies the office if a project’s number of units is expanded relative to its approved plan.
Navy officials stated that they provide notice any time there is an action that requires congressional notification, any time there are project changes with a potential effect on military housing privatization authorities, any time new projects or project phases are considered, and any changes to a project’s previously approved scope, as well as any time ASD (EI&E) requests notification.
Air Force officials stated that notification is required when the military department makes a material change to a project that has a financial or scope effect relative to the details that were originally approved.
Officials added that any project changes that require approval from the Office of Management and Budget would require ASD (EI&E) concurrence.
Under current DOD housing policy, ASD (EI&E) is required to notify the Office of Management and Budget of any significant changes to privatized housing projects that may require scoring consideration. However, DOD policy does not establish the circumstances in which the military departments should notify ASD (EI&E) of significant project changes, and it does not define which project changes qualify as significant. DOD guidance requires ASD (EI&E) to provide guidance and general procedures relating to housing privatization. An ASD (EI&E) official also told us that the military departments are providing notification of project changes based on limited guidance, and that ASD (EI&E) is conducting oversight on a case-by-case basis. Moreover, Office of Management and Budget officials stated that they will analyze project changes to determine whether an action would constitute a project expansion significant enough to require scoring.
Standards for Internal Control in the Federal Government states that management should develop policies that address the entity’s objective to achieve an effective internal control system. In addition, management should obtain and internally communicate the necessary quality information to achieve the entity’s objectives, while communicating quality information down and across reporting lines to enable personnel to perform key roles. Moreover, the standards state that the oversight body receives quality information that flows up from the reporting lines from management and personnel that is necessary for effective oversight of internal control. However, DOD’s guidance does not clearly define the types of project changes for which ASD (EI&E) requires prior notification from the military departments, which could result in ASD (EI&E) not being notified of project changes. ASD (EI&E) has draft guidance on oversight and management of privatized military housing, which would define the circumstances under which military departments should notify ASD (EI&E) of project changes, but officials stated that they have not established a time frame for issuing this policy. An ASD (EI&E) official stated that the policy is being coordinated with the military departments, and this has resulted in delays to its issuance. Without issuing guidance to clearly define and communicate to the military departments the conditions that require notification, the military departments will not be able to consistently fulfill their responsibilities and ASD (EI&E) will not be able to completely fulfill its oversight function.
DOD Has Not Defined Its Tolerance for Risk to Privatized Housing Goal
Office of Management and Budget guidance on the preparation, submission, and execution of the federal budget suggests that public-private partnerships such as privatized military housing projects contain some elements of risk to the government. For example, the projects are frequently constructed on government land and they include government financing in the form of direct investments or direct loans. However, the military departments have not defined their risk tolerance levels for privatized housing relative to the program’s objective of providing quality housing that reflects community living standards. Specifically, the Army and Navy have not identified the level of risk they are willing to accept in their ability to fund future sustainment. Army officials stated that the Army is not responsible for taking any actions to restore a project’s financial condition. Navy officials stated that they do not use a risk model, and that one is not required by DOD. The Air Force has not formally defined its risk tolerance levels for future sustainment, but it has identified the circumstances in which projected sustainment funding deficits will cause it to take extraordinary measures—specifically, to seek a financial restructuring of the project. For example, if future planned maintenance is funded at less than 85 percent of estimated needs within the next 5-year period, the Air Force may seek a financial restructuring, according to Air Force officials. Likewise, according to Air Force officials, if planned major renovations and rebuilding are funded at below 30 percent of estimated needs, the Air Force will seek a financial restructuring.
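The Air Force thresholds described by officials lend themselves to a simple decision rule. The sketch below encodes that rule as we understand it from officials' statements; the example project percentages are hypothetical, and the Air Force's actual analysis may consider additional factors.

```python
# Decision rule based on Air Force officials' stated thresholds:
# a financial restructuring may be sought if planned maintenance is
# funded below 85% of estimated needs within the next 5-year period,
# or if major renovations and rebuilds are funded below 30% of
# estimated needs. Example inputs are hypothetical.

def restructuring_indicated(planned_maint_funded: float,
                            major_renovation_funded: float) -> bool:
    """Each argument is the funded share of estimated needs, 0.0-1.0."""
    return planned_maint_funded < 0.85 or major_renovation_funded < 0.30

# Hypothetical project: maintenance 90% funded, renovations 25% funded.
print(restructuring_indicated(0.90, 0.25))  # True: renovations below 30%
```

In contrast, as the surrounding discussion notes, the Army and Navy have not articulated comparable thresholds, so no analogous rule can be written down for their portfolios.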
Standards for Internal Control in the Federal Government states that agencies need to define risk tolerance relative to their program objectives. Risk tolerance is the acceptable level of variation in performance relative to the achievement of objectives. However, DOD has not required the military departments to define their risk tolerances regarding the future sustainability of the projects. ASD (EI&E) officials told us that they are considering establishing parameters for risk tolerance for the military departments, but have not yet done so. Officials also noted that DOD had been focused on the initial development periods of the privatized housing projects, whereas it is now shifting focus to sustainment as the projects have moved from the initial development stage. Given this focus on sustainment, if the military departments do not define their risk tolerances regarding the future sustainability of their privatized housing projects, they will lack a consistent basis on which to determine when the risks to achieving their objectives require responses, and the nature of those responses.
Conclusions
DOD’s ability to maintain quality housing is critical, because housing can affect retention, readiness, and servicemembers’ quality of life. Since Congress provided the department with authorities to do so, DOD has worked with private developers to improve the quality of housing available on military installations. The military departments regularly review the financial condition of their privatized housing projects, but they calculate a basic measure of current financial health—the debt coverage ratio—differently among their projects, which limits the ability of ASD (EI&E), and in turn Congress, to compare project financial health based on this measure without additional information to give the data full context. DOD has also previously reported such information for differing time periods in different reports to Congress, further limiting the data’s usefulness, and has not issued revised guidance on privatized housing to help ensure consistent reporting. The military departments also vary in the extent and manner in which they oversee measures of future sustainment of their privatized housing projects. DOD has not reported measures of future sustainment to Congress, or issued a report on the financial condition of privatized housing projects, since the report covering fiscal year 2014. Without consistent and up-to-date information on the financial condition of projects, DOD and Congress will not be able to conduct informed and effective oversight of the projects.
The military departments have identified the reductions in basic allowance for housing as one of the various challenges affecting the financial condition of privatized housing projects. At the request of ASD (EI&E), the military departments have provided analysis on the effects of the reductions on their portfolios, but they have not been required to fully assess the significance of the effects of the reductions on the future sustainment of each of their projects or to identify specific actions to respond to the reductions, as detailed by federal internal control standards related to risk assessment. Without completely assessing the risks of the reductions in the basic allowance for housing on each project and developing appropriate courses of action, DOD and the military departments will not know when to take action to address deficits in the funding of long-term sustainment accounts that could lead to diminishment in the quality of military housing. Additionally, DOD and Congress will not be fully informed of the risks and possible effects before making decisions that affect all of the privatized housing projects—such as approving any further reductions in the basic allowance for housing.
The military departments have various options for attempting to improve the financial condition of their privatized housing projects, but some of these options require prior notice to ASD (EI&E). The absence of clearly defined requirements as to when this office should be notified of project changes to address financial challenges has led to varied understandings among the military departments about when notification should occur. Without a clear identification of when ASD (EI&E) should be notified of project changes, the military departments will not have consistent and clear guidance as to when this office needs to be informed prior to an action being taken by a military department regarding its privatized housing projects, and thus, the oversight office may not be fully informed on the projects it intends to oversee. In addition, DOD has not required the military departments to define their tolerances for risk to the goal of providing quality housing to servicemembers in line with community standards, including its ability to fund future sustainment needs. Without doing so, DOD will not have key information needed to determine when the risks to achieving their objectives require responses, or to determine the nature of the responses.
Recommendations for Executive Action
We are making a total of eight recommendations to the Secretary of Defense and the Assistant Secretary of Defense for Energy, Installations, and Environment. The Secretary of Defense should ensure that:

The Assistant Secretary of Defense for Energy, Installations, and Environment provides additional contextual information in future reports to Congress on privatized military housing to identify any differences in the calculation of debt coverage ratios and the effect of these differences on their comparability. (Recommendation 1)
The Assistant Secretary of Defense for Energy, Installations, and Environment revises its existing guidance on privatized housing to ensure that financial data on privatized military housing projects reported to Congress, such as debt coverage ratios, are consistent and comparable in terms of the time periods of the data collected. (Recommendation 2)
The Assistant Secretary of Defense for Energy, Installations, and Environment revises its guidance on privatized military housing to include a requirement that the military departments incorporate measures of future sustainment into their assessments of privatized housing projects. (Recommendation 3)
The Assistant Secretary of Defense for Energy, Installations, and Environment takes steps to resume issuing required reports to Congress on the financial condition of privatized housing in a timely manner. (Recommendation 4)
The Assistant Secretary of Defense for Energy, Installations, and Environment reports financial information on future sustainment of each privatized housing project in its reports to Congress. (Recommendation 5)
The Assistant Secretary of Defense for Energy, Installations, and Environment provides guidance directing the military departments to assess the significance of the specific risks to individual privatized housing projects resulting from the reductions in the basic allowance for housing and identify courses of action to respond to any risks based on their significance. (Recommendation 6)
The Assistant Secretary of Defense for Energy, Installations, and Environment finalizes guidance in a timely manner that clearly defines the circumstances in which the military departments should provide notification of project changes and which types of project changes require prior notification or prior approval. (Recommendation 7)
The Assistant Secretary of Defense for Energy, Installations, and Environment revises its guidance on privatized military housing to require the military departments to define their risk tolerances regarding the future sustainability of their privatized housing projects. (Recommendation 8)
Agency Comments
We provided a draft of this report for review and comment to DOD and the Office of Management and Budget. We initially made our recommendations to the Assistant Secretary of Defense for Energy, Installations, and Environment. We have updated our recommendations to also include the Secretary of Defense. In written comments, DOD concurred with each of our recommendations and identified actions it plans to take to implement them. DOD’s comments are reprinted in their entirety in appendix III. DOD and the Office of Management and Budget also provided technical comments, which we incorporated as appropriate.
We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, and the Office of Management and Budget. In addition, the report is available at no charge on our website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact Brian Lepore at (202) 512-4523 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
Appendix I: Objectives, Scope, and Methodology
Senate Report 114-255 accompanying a bill for the National Defense Authorization Act for Fiscal Year 2017 included a provision for us to assess the solvency of each privatized military housing project in the United States and the effect of recent changes in basic allowance for housing on long-term project sustainability. This report examines the extent to which the Department of Defense (DOD) has (1) assessed and reported the financial condition of each privatized housing project; (2) assessed the effects of recent reductions in the basic allowance for housing on privatized housing, and identified any other challenges and options to address challenges; and (3) defined notification requirements for project changes and risk tolerances relative to privatized housing goals.
For all objectives, we scoped our review to include all privatized housing projects in each military department. We excluded privatized temporary lodging because its financial structure is substantially different than all other privatized housing projects. We reviewed relevant policies and collected information by interviewing officials from the Office of the Secretary of Defense (the Office of the Assistant Secretary of Defense for Energy, Installations, and Environment); the Army (Office of the Assistant Secretary of the Army for Installations, Energy, and Environment, and the Office of the Assistant Chief of Staff for Installation Management); the Navy (Office of the Deputy Assistant Secretary of the Navy for Installations and Facilities, the Commander, Navy Installations Command, and the Naval Facilities Engineering Command); the Marine Corps (Marine Corps Installations Command); and the Air Force (Office of the Deputy Assistant Secretary of the Air Force for Installations, and the Air Force Civil Engineering Center).
Additionally, we met with the five leading developers of privatized housing projects: Balfour Beatty, Corvias, Lend Lease, Lincoln Military Housing, and Hunt Companies. We also visited a non-generalizable sample of five privatized housing projects to interview on-site military department officials and tour the housing. For this sample, we selected one or two projects from each of the military departments, emphasizing projects that had identified financial difficulties or were located in close proximity to military department oversight offices. We made site visits to the following areas and installations:

Norfolk, Virginia, where we met with officials of the Naval Facilities Engineering Command and visited the Homeport Hampton Roads and Mid-Atlantic Military Family Communities privatized housing projects;

San Antonio, Texas, where we met with officials at the Air Force Civil Engineer Center;

Las Vegas, Nevada, where we met with officials and visited the privatized housing project at Nellis Air Force Base;

Fort Knox, Kentucky, where we met with officials and visited the privatized housing project at the Army’s Fort Knox; and

Fort Meade, Maryland, where we met with officials and visited the privatized housing project at Fort Meade.
To determine the extent to which DOD has assessed and reported the financial condition of each privatized housing project, we reviewed DOD guidance on the oversight and management of privatized military housing. We also reviewed documentation used by each military department to oversee the financial condition of each of their privatized housing projects, and each of their portfolios as a whole through portfolio-wide oversight reports, monthly and quarterly reports on each privatized housing project, and audited project financial statements from fiscal years 2013 to 2016. We reviewed DOD’s fiscal year 2013 and 2014 annual reports to Congress on privatized housing, as well as data for privatized housing projects from fiscal years 2013 through 2016. We also met with officials involved in the oversight and management of privatized housing in the Office of the Assistant Secretary of Defense for Energy, Installations, and Environment (ASD (EI&E)), and each of the military departments to discuss their oversight and management of the financial condition of privatized housing projects. Additionally, we requested data for each privatized housing project, including audited financial statements, and examined the differences among and within the military departments in determining the solvency of their projects. For each military department, we assessed the number of projects doing financially well and those not doing financially well through correspondence with knowledgeable officials at each military department and found those department-level numbers sufficiently reliable to report the number of projects in each financial category. We compared DOD’s and the military departments’ actions to assess and report on the financial condition of their privatized housing projects with DOD’s housing policy and with standards for quality information in Standards for Internal Control in the Federal Government to determine whether DOD has fully assessed and reported the financial condition of each project.
To determine the extent to which DOD has assessed the effects of recent reductions in the basic allowance for housing on privatized housing and identified any other challenges and options to address those challenges, we reviewed DOD guidance on applying reductions in basic allowance for housing to privatized military housing and other DOD documentation on the reductions in basic allowance for housing payments. Specifically, we reviewed the military departments' reports on the projected effects of the reductions in the basic allowance for housing on their portfolios and quarterly project oversight reports from fiscal years 2016 and 2017. Additionally, we interviewed officials at the Defense Travel Management Office for information on the basic allowance for housing calculations and military department officials for their perspectives on the reductions in basic allowance for housing. We compared the military department reports on the projected effects of the reductions in basic allowance for housing with standards for risk assessment in Standards for Internal Control in the Federal Government to determine whether DOD has fully assessed the effects of the reductions. To identify the challenges DOD and the military departments faced and the options to address them, we interviewed ASD (EI&E) officials, officials from each military department involved with privatized housing, and officials at select installations involved in privatized housing. We also met with officials of five leading privatized housing developers for their perspectives on challenges to their privatized housing and options to address them. Additionally, we reviewed quarterly project oversight reports to identify challenges associated with privatized housing. We reported examples of challenges that were identified by at least two of the three military departments. Additionally, we assessed the number of projects renting to tenants other than active-duty servicemembers by obtaining information from each military department and found those department-level numbers sufficiently reliable to report the number of projects that were renting to these tenants. We reviewed quarterly project oversight reports to identify the options for addressing challenges, and DOD's policy guidance on privatized housing responsibilities to determine the level of authority needed for the options.
To determine the extent to which DOD has defined notification requirements for project changes and risk tolerances relative to privatized housing goals, we reviewed DOD guidance on oversight and management of privatized military housing, interviewed DOD and developer officials responsible for privatized housing, and reviewed DOD documentation. Specifically, we reviewed DOD housing policies and guidance, reviewed military department guidance on overseeing privatized housing, and interviewed military department officials familiar with notification processes for changes to privatized housing projects and approaches to managing risks to privatized housing projects. We also interviewed officials in the Office of Management and Budget familiar with privatized military housing. We compared DOD’s policy guidance on privatized housing responsibilities with standards related to internal communication in Standards for Internal Control in the Federal Government to determine the level of notification needed. We also compared the extent to which DOD has defined risk tolerance for privatized housing with federal internal control standards related to risk assessment.
We conducted this performance audit from December 2016 to March 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Complete Listing of the Department of Defense’s Privatized Military Housing Projects as of October 2017
The following is a complete listing of the Department of Defense’s 82 privatized military housing projects, as of October 2017. The projects can consist of one or multiple installations.
Appendix III: Comments from the Department of Defense
Appendix IV: GAO Contact and Staff Acknowledgments
GAO Contact
Brian J. Lepore, (202) 512-4523 or [email protected].
Staff Acknowledgments
In addition to the contact named above, Kristy Williams (Assistant Director), Tracy Barnes, Ronnie Bergman, Timothy Carr, Kelly Friedman, Simon Hirschfeld, Terence Lam, Amie Lesser, Jeffrey Love, Richard Powelson, Nancy Santucci, Mike Silver, and Cheryl Weissman made key contributions to this report.
Related GAO Products
Defense Infrastructure: Army Has a Process to Manage Litigation Costs for the Military Housing Privatization Initiative. GAO-14-327. Washington, D.C.: April 3, 2014.
Military Housing: Information on the Privatization of Unaccompanied Personnel Housing. GAO-14-313. Washington, D.C.: March 18, 2014.
Military Housing: Enhancements Needed to Housing Allowance Process and Information Sharing among Services. GAO-11-462. Washington, D.C.: May 16, 2011.
Military Housing Privatization: DOD Faces New Challenges Due to Significant Growth at Some Installations and Recent Turmoil in the Financial Markets. GAO-09-352. Washington, D.C.: May 15, 2009.
Military Housing: Management Issues Require Attention as the Privatization Program Matures. GAO-06-438. Washington, D.C.: April 28, 2006.
Military Housing: Further Improvement Needed in Requirements Determination and Program Review. GAO-04-556. Washington, D.C.: May 19, 2004.
Military Housing: Better Reporting Needed on the Status of the Privatization Program and the Costs of Its Consultants. GAO-04-111. Washington, D.C.: October 9, 2003.
Military Housing: Opportunities That Should Be Explored to Improve Housing and Reduce Costs for Unmarried Junior Servicemembers. GAO-03-602. Washington, D.C.: June 10, 2003.
Military Housing: Management Improvements Needed as the Pace of Privatization Quickens. GAO-02-624. Washington, D.C.: June 21, 2002.
Military Housing: DOD Needs to Address Long-Standing Requirements Determination Problems. GAO-01-889. Washington, D.C.: August 3, 2001.
Military Housing: Continued Concerns in Implementing the Privatization Initiative. GAO/NSIAD-00-71. Washington, D.C.: March 30, 2000.
Military Housing: Privatization Off to a Slow Start and Continued Management Attention Needed. GAO/NSIAD-98-178. Washington, D.C.: July 17, 1998.

Why GAO Did This Study
In 1996 Congress provided DOD with authorities enabling it to obtain private-sector financing and management to repair, renovate, construct, and operate military housing. DOD has since privatized 99 percent of its domestic housing.
The Senate Report accompanying a bill for the National Defense Authorization Act for Fiscal Year 2017 included a provision that GAO review privatized military housing projects and the effect of recent changes in the basic allowance for housing on long-term project sustainability. This report examines the extent to which DOD has (1) assessed and reported the financial condition of each privatized housing project; (2) assessed the effects of recent reductions in the basic allowance for housing on privatized housing; and (3) defined notification requirements for project changes and risk tolerances relative to privatized housing goals. GAO reviewed policies, project oversight reports, and financial statements, and interviewed DOD officials and privatized housing developers.
What GAO Found
The Department of Defense (DOD) has regularly assessed the financial condition of its privatized housing projects; however, it has not used consistent measures or consistently assessed future sustainment (that is, the ability to maintain the housing in good condition), or issued required reports to Congress in a timely manner. Specifically:
Some data used to report on privatized housing across the military services are not comparable. For example, there are inconsistencies among the projects in the measurements of current financial condition (for example, the ability to pay debts and maintain quality housing). These differences have not been identified in reports to Congress.
The military departments vary in the extent to which they use measures of future sustainment, and information regarding the sustainment of each of the privatized housing projects has not been included in the reports to Congress.
DOD's reporting to Congress has not been timely. DOD is statutorily required to report to Congress the financial condition of privatized housing projects on a semiannual basis, but it has not reported on any fiscal year since 2014.
By taking steps to improve the consistency of the information provided and meet the reporting requirement, DOD would provide decision makers in Congress with useful, timely information about the financial condition of the privatized housing projects as they provide required oversight.
DOD has not fully assessed how reductions in servicemembers' basic allowance for housing payments, which are calculated relative to market rates for rent and utilities, affect the financial condition of its privatized housing projects. In August 2015, DOD required the military departments to review their privatized housing portfolios and outline any effects of the reductions. Each military department reported that the reductions would decrease cash flows to their long-term sustainment accounts. However, the reports did not specify the significance of the reductions on each project's future sustainment or identify specific actions to respond to shortfalls at individual projects. If DOD fully assesses the effects of the basic allowance for housing reductions on privatized housing and identifies actions to respond to any risks, DOD and Congress will be better informed to make decisions affecting the projects.
DOD has not defined when project changes require prior notice to the Assistant Secretary of Defense for Energy, Installations, and Environment or its tolerance for risk relative to its goal of providing servicemembers with quality housing, including the risk from reduced sustainment funding. Specifically, the military departments had different understandings of when project changes, such as financial restructurings, required prior notice. Additionally, DOD has not required the military departments to define their risk tolerances—the acceptable level of variation in performance relative to the objectives—regarding the future sustainability of the projects. By clearly defining the conditions that require advance notification and developing risk tolerance levels, DOD would have consistent information that would improve its oversight of privatized housing and inform its response to any future sustainment challenges.
What GAO Recommends
GAO is making eight recommendations, including that DOD improve the consistency and timeliness of the information reported on the financial condition of its privatized housing projects, fully assess the effects of the reductions in basic allowance for housing on the projects, clarify when project changes require notice, and define tolerances for project risks. DOD concurred with each of our recommendations and identified actions it plans to take to implement them. |
Background
Improper Payment Requirements
IPIA defines an improper payment as any payment that should not have been made or that was made in an incorrect amount (including overpayments and underpayments) under statutory, contractual, administrative, or other legally applicable requirements. It includes duplicate payments, any payment made to an ineligible recipient, any payment for an ineligible good or service, any payment for a good or service not received (except for such payments where authorized by law), and any payment that does not account for credit for applicable discounts. OMB M-15-02 also provides that when an agency’s review is unable to determine whether a payment was proper as a result of insufficient or lack of documentation, this payment must also be considered an improper payment.
IPIA also defines the scope of payments subject to improper payment requirements. Specifically, a payment is any transfer or commitment for future transfer of federal funds—such as cash, securities, loans, loan guarantees, and insurance subsidies—to any nonfederal person or entity that is made by a federal agency, a federal contractor, a federal grantee, or a governmental or other organization administering a federal program or activity.
Executive branch agencies are required to take various steps regarding improper payments under IPIA and as directed by OMB M-15-02. The steps include the following:

1. reviewing all programs and activities and identifying those that may be susceptible to significant improper payments (commonly referred to as a risk assessment),

2. developing improper payment estimates for those programs and activities that agency risk assessments, OMB, or statute identifies as being susceptible to significant improper payments,

3. analyzing the root causes of improper payments and developing corrective actions to reduce them, and

4. reporting on the results of addressing the foregoing requirements.
Figure 1 illustrates these steps, as well as the major components of conducting an improper payment risk assessment.
IPIA requires that agencies conduct improper payment risk assessments for all federal programs and activities at least once every 3 years and identify any program or activity that may be susceptible to significant improper payments. OMB guidance provides that programs that have been determined to be susceptible to significant improper payments and that are already reporting an estimate—or in the process of establishing an estimate—do not have to conduct additional improper payment risk assessments. IPIA defines “significant” improper payments as improper payments in the preceding fiscal year that may have exceeded either (1) 1.5 percent of program outlays and $10 million or (2) $100 million (regardless of the improper payment rate). OMB M-15-02 provides guidance for implementing the IPIA requirements and covers agencies’ responsibilities for improper payment risk assessments, estimation, and reporting.
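Because the significance thresholds combine a rate test with two dollar tests, the logic can be made concrete with a short sketch. The following Python fragment is illustrative only; the function name and structure are ours and are not drawn from OMB guidance or any agency system.

```python
def is_significant(improper_payments, outlays):
    """IPIA significance test: improper payments in the preceding fiscal
    year may have exceeded either (1) 1.5 percent of program outlays and
    $10 million or (2) $100 million, regardless of the rate."""
    rate = improper_payments / outlays
    return ((rate > 0.015 and improper_payments > 10_000_000)
            or improper_payments > 100_000_000)
```

Note that a sufficiently large program can cross the $100 million threshold even at a very low improper payment rate, a point that is relevant to the Interest on the Public Debt discussion later in this report.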
OMB M-15-02 also lists steps that agencies should take when conducting improper payment risk assessments. Agencies must institute a systematic method of reviewing all programs and activities to identify those that may be susceptible to significant improper payments, as defined by IPIA. According to OMB M-15-02, this systematic method could be a quantitative evaluation based on a statistical sample or a qualitative method (e.g., a risk-assessment questionnaire). Prior to fiscal year 2018, at a minimum, agencies were required to take into account nine risk factors—seven specified in IPIA and two in OMB guidance—that are likely to contribute to improper payments, regardless of which method was used by the agency (see table 1).
In June 2018, OMB revised its guidance for improper payments in OMB Circular A-123, Appendix C, Requirements for Payment Integrity Improvement (OMB M-18-20). In the revised guidance, OMB no longer directs agencies to consider the two additional risk factors that were included in OMB M-15-02 in their risk assessments. Rather, OMB directs agencies to take into account those risk factors that are likely to contribute to a susceptibility of significant improper payments. The revised guidance also states that beginning in fiscal year 2020, agencies should use quantitative evaluations for programs or activities with outlays exceeding $5 billion. As specified in OMB M-18-20, the end goal of the systematic method of reviewing all programs, whether qualitative or quantitative, is to determine whether a program is susceptible to significant improper payments. Accordingly, OMB M-18-20 states that if a qualitative method is used, it must be designed to accurately determine whether the program is susceptible to significant improper payments.
Characteristics of Programs Reviewed
When conducting improper payment risk assessments, each federal agency, unless otherwise specified by OMB Circular A-11, after consultation with OMB, is generally authorized to determine the grouping of programs that most clearly identifies and reports improper payments for the agency. The five programs we reviewed serve a variety of purposes and are administered by various agencies across the federal government.
Head Start
HHS’s Head Start program was established in 1965 to deliver comprehensive educational, social, health, nutritional, and psychological services to low-income families and their children. These services include preschool education, family support, health screenings, and dental care. Head Start was originally aimed at 3- to 5-year-olds. The Head Start program makes grants directly to approximately 1,600 local organizations, including community action agencies, school systems, tribal governments and associations, and for-profit and nonprofit organizations.
The Head Start program has several primary eligibility criteria to enroll in the program—including that the child’s family earns income below the federal poverty level; the child’s family is eligible for or, in the absence of child care, would potentially be eligible for public assistance; the child is in foster care; or the child is homeless. Head Start services are to be provided free of charge to eligible families.
Prior to fiscal year 2013, HHS reported improper payment estimates for the Head Start program. However, as of fiscal year 2013, HHS, in consultation with its Office of Inspector General (OIG) and with approval from OMB, no longer reports annual improper payment estimates related to the program. According to HHS, Head Start’s fiscal year 2017 outlays were approximately $9.4 billion.
Interest on the Public Debt
Public debt is defined as Treasury-issued securities, primarily consisting of marketable Treasury securities (i.e., bills, notes, and bonds), and a smaller amount of nonmarketable securities, such as savings bonds and special securities issued to state and local governments. A portion is debt held by the public and a portion is debt held by federal government accounts.
Debt held by the public represents federal debt held by investors outside of the federal government, including individuals, corporations, state or local governments, the Federal Reserve, and foreign governments. Types of securities held by the public include Treasury bills, notes, and bonds and State and Local Government Series securities. Debt held by the public primarily represents the amount the U.S. government has borrowed from the public to finance cumulative cash deficits. As of September 30, 2017, total debt held by the public was $14.7 trillion.
Debt held by federal government accounts (intragovernmental holdings) represents balances of federal government accounts of certain federal agencies that are either authorized or required to invest excess receipts in Treasury securities. As of September 30, 2017, total debt held by federal government accounts was $5.6 trillion.
Interest calculations on the public debt differ depending on the types of securities, their associated terms, and average interest rates. According to Treasury, total interest paid on public debt for fiscal year 2017 was approximately $294.8 billion.
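As a simplified illustration of why the calculations differ by security type, the sketch below contrasts a semiannual coupon payment on a marketable Treasury note or bond with the discount-based interest on a Treasury bill. These are textbook simplifications offered for illustration, not Treasury's actual computation methods.

```python
def note_coupon_payment(face_value, annual_coupon_rate):
    # Marketable Treasury notes and bonds pay a fixed coupon twice a
    # year, so each payment is half the annual rate times face value.
    return face_value * annual_coupon_rate / 2

def bill_interest(face_value, purchase_price):
    # Treasury bills pay no coupon; they are sold at a discount, and the
    # interest is the difference between face value and purchase price.
    return face_value - purchase_price

# A $10,000 note with a 2.5 percent coupon pays $125 every six months;
# a $10,000 bill bought for $9,900 yields $100 in interest at maturity.
print(note_coupon_payment(10_000, 0.025), bill_interest(10_000, 9_900))
```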
Home Affordable Modification Program
In February 2009, as part of a broader plan to stabilize the housing market and economy, Treasury established the Making Home Affordable Program to help struggling families avoid possible foreclosure. As part of this plan, Treasury announced a national modification program for first-lien mortgages, the Home Affordable Modification Program (HAMP). The program offered eligible homeowners who are at risk of foreclosure reduced monthly mortgage payments that are more affordable and sustainable over the long term. Homeowners who chose to participate in the program had to show (1) documented financial hardship and (2) an ability to make their monthly mortgage payments after a modification.
HAMP works by encouraging participating mortgage servicers to modify mortgages so struggling homeowners can have lower monthly payments and avoid foreclosure. It has specific eligibility requirements for homeowners and includes strict guidelines for servicers.
In December 2016, entrance into the Making Home Affordable program expired. However, payments for previously approved participants in HAMP will continue until approximately September 2023. According to Treasury, HAMP’s fiscal year 2017 outlays were approximately $4.1 billion.
Law Enforcement
For improper payment risk assessment purposes, DOJ has five mission-aligned program groups. The Law Enforcement group is the largest in terms of annual outlays and consists of the following five components:

1. the Bureau of Alcohol, Tobacco, Firearms, and Explosives;
2. the Drug Enforcement Administration;
3. the Federal Bureau of Investigation;
4. Offices, Boards, and Divisions; and
5. the United States Marshals Service.
According to DOJ, Law Enforcement’s fiscal year 2017 outlays were approximately $11.8 billion.
Agriculture Risk Coverage and Price Loss Coverage
The Agriculture Risk Coverage (ARC) and Price Loss Coverage (PLC) programs were authorized by the 2014 Farm Bill to provide farmers with protection against adverse changes in market conditions. Although ARC and PLC are considered two separate programs, they are grouped as one program for the purposes of conducting improper payment risk assessments. The programs are managed by the Commodity Credit Corporation, whose activities are primarily administered by USDA’s Farm Service Agency.
Within the ARC program, farmers have the choice of an individual-based option, known as ARC-Individual, or a county-based option, known as ARC-County. Both options provide revenue loss coverage to farmers when the legislative guarantee for a crop exceeds the actual year revenue.
PLC program payments are issued to farmers when a crop’s “reference price,” as specified in the 2014 Farm Bill, is in excess of an average price, which is determined at the national level each year for the covered commodities.
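A minimal sketch of the PLC trigger just described, assuming a per-unit payment rate equal to the amount by which the reference price exceeds the national average price; the actual payment computation involves additional factors, such as payment yields and base acres, that are omitted here.

```python
def plc_payment_rate(reference_price, national_average_price):
    # A PLC payment is triggered only when the statutory reference price
    # exceeds the average price determined at the national level for the
    # covered commodity; otherwise no payment is issued.
    return max(0.0, reference_price - national_average_price)
```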
ARC/PLC statutes and regulations establish a series of eligibility criteria that farmers must meet in order to enroll in the programs. Among other things, to be eligible farmers must produce a certain quantity of at least 1 of the 21 covered commodities, actively engage in the farming process, meet income eligibility limits, and meet certain land conservation requirements. According to USDA, ARC/PLC’s fiscal year 2017 outlays were approximately $9.6 billion.
Four of the Five Risk Assessments Lacked Documentation to Support Their Risk Determinations, and Many of HHS’s Programs Were Not Assessed
HHS's Improper Payment Risk Assessment for Head Start Lacked Documentation to Support Its Low Risk Determination, and Many Other Programs Were Not Assessed

HHS's Improper Payment Risk Assessment for Head Start Lacked Documentation to Support Its Low Risk Determination
In its fiscal year 2016 qualitative risk assessment, HHS assessed its Head Start program as at low risk of susceptibility to significant improper payments. However, HHS did not have sufficient documentation on how it developed its risk assessments, so we could not determine if the risk assessment process was designed to provide a reasonable basis for making risk determinations.
Although HHS did take into account the nine risk factors, among other factors, HHS did not document or effectively demonstrate how each specific risk factor affected Head Start’s susceptibility to significant improper payments. HHS’s improper payment risk assessment template included the nine risk factors, among other factors, and described how the divisions should consider each risk factor. However, HHS did not document how the descriptors or individual risk factors relate to the program’s susceptibility to significant improper payments. Further, although HHS used a risk assessment template to assess each of the risk factors, which included space for the divisions to provide additional information regarding the risk determinations, the division responsible for the Head Start program did not always provide sufficient documentation or support for us to determine how it arrived at its risk determinations for each risk factor. For example, see the following:
Eligibility determination: HHS considered the eligibility of initial Head Start payments that HHS made to the initial grantees—local organizations—as low risk. However, HHS did not consider the Head Start eligibility decisions that these organizations made at the subrecipient level—calling into question the reliability of HHS’s risk assessment. In the Head Start program, local organizations, not HHS, make the eligibility determinations for individuals to be enrolled in the program. In addition, local organizations, not HHS, are responsible for maintaining the documentation to substantiate the eligibility of enrollees. HHS did not consider the impact of these determinations in its improper payment risk assessment. Our analysis of improper payment estimates from paymentaccuracy.gov for fiscal years 2016 and 2017 indicates that the inability to authenticate eligibility is one of the largest root causes of improper payments.
Audit findings: HHS assigned a low-risk rating for findings from oversight agencies. However, in the risk assessment, it identified nine audit reports that the OIG issued pertaining to Head Start agencies with findings on unallowable costs, enrollment, and misuse of grant funds. According to agency officials, these OIG reports contained findings related to costs and misuse of grant funds that are specific to particular grantees and may not be indicative of widespread programmatic issues. However, HHS did not document the rationale for this assessment.
Program management report: HHS assigned a low-risk rating for findings related to program management reports. According to HHS’s Report to Congress on Head Start Monitoring for Fiscal Year 2015, “allowable and allocable costs” was the most commonly cited noncompliance issue in its fiscal reviews of grantees. Specifically, 8.8 percent of grantees included in a fiscal review were found to be noncompliant with regard to allowable and allocable costs. However, HHS did not document whether it considered the impact of noncompliance by grantees in its Head Start risk assessment.
According to HHS officials, divisions were required to maintain supporting documentation for their risk assessments, although submission of the related documents along with the risk assessment was not mandatory.
HHS officials stated that this policy was orally communicated to the divisions; however, it was not formally documented. Lack of a written policy for the divisions to maintain such information may have contributed to HHS’s inability to provide sufficient supporting documentation for its low risk determinations.
HHS’s qualitative risk assessment for Head Start also did not document or effectively demonstrate how the total score for all risk factors led to a determination that the program was not susceptible to significant improper payments. Our analysis of HHS’s risk assessment showed that for several of its risk factors, HHS did not score those factors as low risk. For example, HHS assigned a high-risk rating for three of the nine risk factors: (1) permanency of the program, (2) volume of payments made through the program, and (3) complexity per transaction. HHS’s risk assessment did not document or support how it determined Head Start to overall be at low risk for susceptibility to significant improper payments given the high-risk ratings for certain risk factors. Without supporting documentation, HHS cannot demonstrate, and we cannot determine, if HHS’s low risk determination for Head Start was reasonable.
Additionally, based on HHS’s risk assessment scoring template, a program could be considered “high risk” for all nine risk factors, but because of the assigned weight given to each of the nine risk factors, HHS’s final risk calculation would still not determine the program to be at high risk of susceptibility to significant improper payments. According to HHS officials, the agency has procedures to review the improper payment risk assessments that the individual divisions perform; however, these review procedures are not formally documented. HHS officials stated that while no risk assessment has identified all nine risk factors as high risk, if all nine risk factors were identified as high risk by a division, the agency would require supporting documentation from the division for review and could overrule the outcome calculated based on the risk assessment scoring template if necessary. Without documented procedures for this review process, HHS lacks assurance that this process, if applicable, would consistently take place.
According to HHS, the fiscal year 2016 improper payment qualitative risk assessment template used for Head Start was designed to calculate an overall risk rating of low, medium, or high based on program management responses to each individual risk factor. However, HHS did not have documentation to demonstrate how it determined the weighting of the risk factors or how the numerical risk level ranges from the risk assessment template related to a program’s susceptibility to significant improper payments. Additionally, HHS did not have documentation demonstrating the basis for its determination that specific risk factors do or do not lead to susceptibility to significant improper payments. HHS officials stated that OMB does not have specific guidance on establishing weights for each risk factor or assigning numerical risk level ranges to determine overall susceptibility to significant improper payments. HHS officials also stated that HHS developed its own numerical risk level ranges based on experience and data from previous risk assessments. When asked for documentation to support its weighting of the various risk factors, HHS officials stated that they did not document this analysis. Without documenting the basis for the assigned weights, HHS cannot demonstrate, and we cannot determine, that its process for determining Head Start’s susceptibility to significant improper payments was reasonable.
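The following sketch illustrates how such a template can behave. The weights and cutoff below are entirely hypothetical, since HHS did not document its actual figures; they are chosen so that the maximum attainable weighted score falls below the cutoff for a "high" rating.

```python
# Hypothetical weights and cutoff for illustration only; HHS did not
# document its actual weighting or numerical risk-level ranges.
RISK_SCORES = {"low": 1, "medium": 2, "high": 3}
WEIGHTS = [0.20, 0.15, 0.15, 0.10, 0.10, 0.05, 0.05, 0.05, 0.05]  # sum: 0.90
HIGH_CUTOFF = 2.8  # weighted score required for an overall "high" rating

def overall_rating(factor_ratings):
    score = sum(w * RISK_SCORES[r] for w, r in zip(WEIGHTS, factor_ratings))
    return "high" if score >= HIGH_CUTOFF else "not high"

# Even with all nine factors rated high, the weighted score tops out at
# 0.90 * 3 = 2.70, below the 2.8 cutoff, so the template never returns
# a "high" overall determination:
print(overall_rating(["high"] * 9))  # -> not high
```

Any weighting scheme whose maximum attainable score falls below the "high" cutoff makes that determination arithmetically impossible regardless of the responses, which is the condition described above.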
Federal internal control standards state that management should develop control activities to achieve objectives and respond to risks and implement control activities through policies. As part of these standards, management should clearly document internal controls and other significant events in a manner that allows the documentation to be readily available for examination. Additionally, management should periodically review policies, procedures, and related control activities for continued relevance and effectiveness. Further, federal internal control standards state that management should use quality information to achieve the entity’s objectives. As such, to reasonably determine if a program is susceptible to significant improper payments, agencies’ risk assessments would have a logical connection with, or bearing upon, the statutory definition of significant improper payments. Until HHS revises its risk assessment process to help ensure that it results in a reliable assessment, it will be uncertain whether Head Start may be susceptible to significant improper payments and therefore require an estimate of its improper payments.
HHS Did Not Conduct Risk Assessments for Many of Its Programs and Activities
During our agency and program selection process, we found that HHS did not assess many of its programs and activities at least once during the 3-year period from fiscal years 2015 through 2017, as required by IPIA. Although HHS conducted improper payment risk assessments for a total of 71 programs and activities during the 3-year period, based on our analysis of HHS-provided outlay data, HHS did not conduct the required risk assessment for at least 140 programs. For example, HHS did not assess its Block Grants for Prevention and Treatment of Substance Abuse program that had outlays of approximately $1.8 billion in fiscal year 2016. According to HHS officials, HHS has limited resources, so it took a risk-based approach when selecting programs to include in its improper payment risk assessment process. Further, HHS officials stated that HHS was transitioning in fiscal year 2015 to a new risk assessment process. As such, HHS's procedures directed its divisions to select one program per division for fiscal year 2015 and two programs per division for fiscal years 2016 and 2017.
Federal internal control standards state that management should design control activities to achieve objectives and respond to risks and implement control activities through policies. Without properly designed control activities to help ensure that all programs and activities are assessed for susceptibility to significant improper payments at least once every 3 years, as required by IPIA, there is an increased risk that HHS may not identify all risk-susceptible programs and activities, resulting in incomplete improper payment estimates.
Treasury’s Improper Payment Risk Assessments for Interest on the Public Debt and HAMP Lacked Documentation to Support Its Low Risk Determinations
In its fiscal year 2017 qualitative risk assessments, based on fiscal year 2016 outlay data, Treasury assessed its Interest on the Public Debt and HAMP as at low risk of susceptibility to significant improper payments. However, Treasury did not have sufficient documentation for how it developed its risk assessments, so we could not determine if the risk assessment process was designed to provide a reasonable basis for making risk determinations.
Although Treasury did take into account the nine risk factors, among other factors, it did not document or effectively demonstrate how each specific risk factor affected the programs' susceptibility to significant improper payments. Treasury's risk assessment templates for these programs had 62 questions, each of which required a "Yes," "No," or "Not applicable" response. Treasury did not document how each of the 62 questions related to each program's susceptibility to significant improper payments. Further, the template did not require the bureaus responsible for the Interest on the Public Debt and HAMP risk assessments to provide documentation or support other than a check mark in response to these questions. Without descriptions of how to answer the questions or documentation to support the responses, we could not verify the reasonableness of the Interest on the Public Debt or HAMP improper payment risk assessments.
For example, the Interest on the Public Debt program's risk assessment questionnaire was completed for 11 different payment types under the program. For the TreasuryDirect payment type, Treasury answered "No" to the question, "Are there risks due to a high volume of payments for TreasuryDirect?" Treasury did not provide documentation or other support for how the agency determined that there was no risk for this question. Further, since the template lacked descriptors, it is unclear whether the responses related to the number of transactions or the dollar amount of transactions. In fiscal year 2016, TreasuryDirect payments totaled almost $300 billion, representing about 7.8 percent of all federal government outlays. In contrast, Treasury answered "Yes" to this same question for the HAMP program, for which payments were about 1 percent (about $4 billion) of the total payments made by TreasuryDirect.
Similarly, in the HAMP risk assessment questionnaire, Treasury answered “No” to the question, “Are payment or payment eligibility decisions made outside the agency?” However, under HAMP, financial institutions, not Treasury, determine whether borrowers are eligible for loan modification through the program. Treasury did not document why a “No” response was appropriate.
Treasury's risk assessments for Interest on the Public Debt and HAMP also did not document or effectively demonstrate how the total scores for all risk factors led to the determinations that the programs were not susceptible to significant improper payments. For example, in its risk assessment, Treasury's responses indicated several improper payment risks for Interest on the Public Debt, including (1) complexity of administering the payment type, (2) unmitigated risks from relying on contractors to perform critical agency operations, and (3) payments being made to incorrect payees or ineligible recipients.
Further, based on total payments for the Interest on the Public Debt, Treasury would have to be over 99.97 percent accurate in its payments in order for the activity to not reach the $100 million threshold for significant improper payments. Treasury’s risk assessment did not document or support how it determined Interest on the Public Debt to be at low risk for susceptibility to significant improper payments considering these risks for improper payments.
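The accuracy figure cited above follows directly from the $100 million significance threshold and the approximately $294.8 billion in interest paid in fiscal year 2017, as this quick check shows:

```python
total_interest = 294.8e9  # fiscal year 2017 interest on the public debt
threshold = 100e6         # the $100 million significance threshold
print(f"{1 - threshold / total_interest:.4%}")  # -> 99.9661%
```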
Similarly, Treasury’s responses in its risk assessment questionnaire indicated several improper payment risks for HAMP, including (1) an emphasis on expediting payments, (2) risks resulting from recent changes in agency operations and personnel, (3) complicated criteria for manually computing payments, and (4) a high volume of payments. Treasury’s risk assessment did not document or support how it determined HAMP to be at overall low risk for significant improper payments considering these risks for improper payments. Without supporting documentation, Treasury cannot demonstrate, and we cannot determine, if Treasury’s low risk determinations for Interest on the Public Debt and HAMP were reasonable.
Additionally, based on our analysis of Treasury’s risk assessment template, a bureau could identify areas of risk related to each of the nine risk factors for a program, but because of the assigned weights given to each of the nine risk factors, Treasury’s final risk calculation would still not determine the program to be at high risk of susceptibility to significant improper payments.
According to Treasury officials, Treasury provides general instructions on how to complete the risk assessment templates, but the bureaus are responsible for assessing the risks. In addition, according to Treasury, the fiscal year 2017 improper payment risk assessment template used for Interest on the Public Debt and HAMP was designed to calculate an overall risk rating of low, medium, or high based on bureau responses to each individual question. However, Treasury did not have documentation to demonstrate how it determined the weighting of the risk factors or the numerical risk level ranges from the template related to the programs’ susceptibility to significant improper payments. Additionally, Treasury did not have documentation demonstrating the basis for its determination that specific risk factors do or do not lead to susceptibility to significant improper payments. According to Treasury officials, Treasury considered the severity of the impact on the program’s improper payments when developing its weights for each question. However, Treasury officials stated that they did not have documentary support for this analysis. Without documenting the basis for the assigned weights, Treasury cannot demonstrate, and we cannot determine, that its process for determining its programs’ susceptibility to significant improper payments was reasonable.
Federal internal control standards state that management should develop control activities to achieve objectives and respond to risks and implement control activities through policies. As part of these standards, management should clearly document internal controls and other significant events in a manner that allows the documentation to be readily available for examination. Additionally, management should periodically review policies, procedures, and related control activities for continued relevance and effectiveness. Further, federal internal control standards state that management should use quality information to achieve the entity’s objectives. As such, to reasonably determine if a program is susceptible to significant improper payments, agencies’ risk assessments would have a logical connection with, or bearing upon, the statutory definition of significant improper payments. Until Treasury revises its risk assessment process to help ensure that it results in reliable assessments, it will not be certain whether Interest on the Public Debt or HAMP may be susceptible to significant improper payments and therefore require an estimate of improper payments.
DOJ’s Improper Payment Risk Assessment for Law Enforcement Lacked Documentation to Support Its Low Risk Determination
In its fiscal year 2017 risk assessment, DOJ assessed its Law Enforcement program as at low risk of susceptibility to significant improper payments. However, DOJ did not have sufficient documentation for how it developed its risk assessments, so we could not determine if the risk assessment process was designed to provide a reasonable basis for making risk determinations.
Although DOJ conducted a quantitative evaluation as part of its improper payment risk assessment for its Law Enforcement program, the evaluation did not reliably indicate the program’s susceptibility to significant improper payments. Specifically, our analysis of Law Enforcement’s improper payment risk assessment found that the quantitative evaluation’s baseline was largely based on the prior fiscal year’s improper payment amount identified through recovery activities, which may not reliably represent the estimated improper payment amount that the agency incurred. For example, improper payment recovery activities do not include underpayments.
DOJ’s qualitative analysis on improper payments also did not document or effectively demonstrate whether the program may be susceptible to significant improper payments. Although DOJ’s risk assessment template did take into account the nine risk factors, among other factors, and descriptors of how the components should consider each risk factor, DOJ did not document or effectively demonstrate how each specific risk factor affected the program’s susceptibility to significant improper payments. Further, although DOJ used a risk assessment template to assess each of the risk factors, which included a voluntary comments section for each risk factor so that components can explain answers or justify the risk ratings, the components frequently left the comment sections blank. As such, DOJ did not always provide sufficient documentation or support for us to determine how the components arrived at their risk determinations for each risk factor.
DOJ’s risk assessment for Law Enforcement also did not document or effectively demonstrate how the total score for all risk factors led to the determination that the program was not susceptible to significant improper payments. For example, in its risk assessment, DOJ’s Offices, Boards, and Divisions’ responses indicated risks for contract payments related to (1) changes in funding, authorities, practices, or procedures; (2) results of monitoring activities; (3) results of recapture audit activities; (4) volume and dollar amount of payments; (5) inherent risks; and (6) capability of personnel. DOJ’s risk assessment did not document or support how it determined Law Enforcement to be at low risk for susceptibility to significant improper payments given the identified risks for certain risk factors. Without supporting documentation, DOJ cannot demonstrate, and we cannot determine, if DOJ’s low risk determination for Law Enforcement was reasonable. Additionally, based on our analysis of DOJ’s risk assessment template, a component could identify areas of risk related to each of the nine risk factors, but because of the assigned weight given to each of the nine risk factors, DOJ’s final risk calculation would still not determine the program to be at high risk of susceptibility to significant improper payments.
According to DOJ, the fiscal year 2017 improper payment qualitative risk assessment template used for Law Enforcement was designed to calculate an overall risk rating of low, medium, or high based on component responses to each individual risk factor. However, DOJ did not have documentation to demonstrate how it determined the weighting of the risk factors or the numerical risk level ranges from the template related to the program’s susceptibility to significant improper payments. Additionally, DOJ did not have documentation demonstrating the basis for its determination that specific risk factors do or do not lead to susceptibility to significant improper payments. Further, DOJ’s qualitative risk assessment template indicated that the overall risk determination does not relate to the program’s susceptibility to significant improper payments. For example, the template stated that “a risk rating of high risk for the purposes of this assessment does not mean that the payment type is susceptible to significant improper payments but may indicate that additional focus and testing should be placed on that payment type to better estimate the improper payment rate for the payment type.” DOJ officials stated that DOJ held internal discussions and considered the severity of the impact on the program’s improper payments when developing its weights for each risk factor. When asked for supporting documentation, DOJ officials stated that OMB guidance does not direct agencies to demonstrate how the weights for each risk factor or overall risk ratings relate to the definition of significant improper payments.
However, without documenting the basis for the assigned weights, DOJ cannot demonstrate, and we cannot determine, that its process for determining Law Enforcement’s susceptibility to significant improper payments was reasonable.
Federal internal control standards state that management should develop control activities to achieve objectives and respond to risks and implement control activities through policies. As part of these standards, management should clearly document internal controls and other significant events in a manner that allows the documentation to be readily available for examination. Additionally, management should periodically review policies, procedures, and related control activities for continued relevance and effectiveness. Further, although OMB does not direct agencies to demonstrate how the weights for each risk factor or overall ratings relate to the definition of significant improper payments, federal internal control standards state that management should use quality information to achieve the entity’s objectives. As such, to reasonably determine if a program is susceptible to significant improper payments, agencies’ risk assessments would have a logical connection with, or bearing upon, the statutory definition of significant improper payments. Until DOJ revises its risk assessment process to help ensure that it results in a reliable assessment, it will be uncertain whether Law Enforcement may be susceptible to significant improper payments and therefore require an estimate of improper payments.
USDA’s Improper Payment Risk Assessment for ARC/PLC Provided a Reasonable Basis for Its Risk Determination
USDA’s fiscal year 2017 improper payment risk assessment for ARC/PLC consisted of a qualitative analysis and a quantitative evaluation. Both assessments determined that the program was not susceptible to significant improper payments. We found that the quantitative evaluation, based on statistical sampling, provided a reasonable basis for USDA’s determination that the program was at low risk for susceptibility to significant improper payments. Specifically, based on its statistical sample, USDA estimated that ARC/PLC’s improper payment rate was 0.73 percent of program outlays with an estimated improper payment amount of $38.6 million. As such, the analysis clearly demonstrated that ARC/PLC did not meet the statutory definition of significant improper payments under IPIA—estimated improper payments that may have exceeded either (1) 1.5 percent of program outlays and $10 million or (2) $100 million (regardless of the improper payment rate).
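Applying the hypothetical significance check sketched earlier in this report to USDA's reported figures shows why the low risk determination follows directly from the statutory thresholds:

```python
rate, amount = 0.0073, 38.6e6  # USDA's reported rate and dollar estimate
significant = (rate > 0.015 and amount > 10e6) or amount > 100e6
print(significant)  # -> False; the estimate fails both statutory tests
```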
Conclusions
Properly executed improper payment risk assessments are a cornerstone of government-wide efforts to estimate and reduce such payments. Although the qualitative risk assessments we reviewed for HHS, Treasury, and DOJ considered the nine risk factors required by IPIA or directed by OMB, none of them demonstrated how the factors affected a program’s susceptibility to significant improper payments. Additionally, despite the agencies identifying multiple factors as areas of risk in individual program risk assessments, each of the agencies’ overall determinations for the risk assessments we reviewed was “low risk,” and none of the agencies had documentation with which to explain the basis for their assessments.
Revising their processes for conducting improper payment risk assessments, including preparing sufficient documentation to support the assessments, would better position HHS, Treasury, and DOJ to demonstrate the reliability of the assessments. Without properly designed risk assessments, the departments will continue to be uncertain whether improper payment estimates should be prepared for most programs we reviewed, potentially affecting the completeness of their improper payment estimates and hampering efforts to reduce improper payments.
Recommendations for Executive Action
We are making the following four recommendations—two to HHS, one to Treasury, and one to DOJ: The Secretary of Health and Human Services should revise HHS’s process for conducting improper payment risk assessments for Head Start to help ensure that it results in a reliable assessment of whether the program is susceptible to significant improper payments. This should include preparing sufficient documentation to support its risk assessments. (Recommendation 1)
The Secretary of Health and Human Services should revise HHS’s procedures for conducting improper payment risk assessments to help ensure that all programs and activities are assessed for susceptibility to significant improper payments at least once every 3 years, as required by IPIA. (Recommendation 2)
The Secretary of the Treasury should revise Treasury’s processes for conducting improper payment risk assessments for Interest on the Public Debt and HAMP to help ensure that the processes result in reliable assessments of whether the programs are susceptible to significant improper payments. This should include preparing sufficient documentation to support its risk assessments. (Recommendation 3)
The Attorney General should revise DOJ’s process for conducting improper payment risk assessments for Law Enforcement to help ensure that it results in a reliable assessment of whether the program is susceptible to significant improper payments. This should include preparing sufficient documentation to support DOJ’s risk assessments. (Recommendation 4)
Agency Comments and Our Evaluation
We provided a draft of this report for comment to OMB, HHS, DOJ, Treasury, and USDA. DOJ and HHS provided written comments, which are reproduced in appendixes II and III, respectively. OMB, HHS, and Treasury provided technical comments, which we incorporated as appropriate. Treasury’s Acting Director of its Risk and Control Group notified us by email that Treasury concurred with the report and recommendation. A USDA management analyst notified us by email that USDA had no comments on the report.
In its written comments, HHS stated that it concurs with both recommendations and is committed to reducing improper payments in all of its programs. HHS also described actions it plans to take to address these recommendations, including (1) issuing a written policy directing divisions to maintain supporting documentation for risk assessments, (2) documenting the agency review procedures for risk assessments that the divisions perform and the rationale for assigning weights to the risk factors, and (3) developing an automated program identification process for monitoring and inclusion in risk assessments to help ensure that all programs and activities are reviewed. The actions described by HHS, if implemented effectively, would address our recommendations.
In its written comments, DOJ stated that it disagreed with our conclusions and recommendation. DOJ explained that its risk assessment methodology includes a qualitative evaluation and a quantitative analysis, and that it considers the nine risk factors likely to contribute to improper payments. Additionally, DOJ provided an overview of its risk assessment tool and guidance and stated that its methodology includes all steps required by OMB. We acknowledged in the draft report that DOJ did take into account the nine risk factors, among others, as directed by OMB and provided an overview of DOJ’s risk assessment template and process.
DOJ stated that it believes that some of our interpretations exceed the risk assessment requirements, and believes that its methodology complies with requirements and adequately demonstrates whether a program may be susceptible to significant improper payments. DOJ stated that the risk factor ratings summarized in its risk assessment provided a clear link of how the individual risk factor ratings support the overall assessed risk of significant improper payments. Further, DOJ stated that the risk assessment tool provides sufficient documentation for the formulas and logic for the risk rating conversions and weight-based summarization of risk factor scoring.
We disagree that our interpretations exceed the risk assessment requirements, and we continue to believe that DOJ’s risk assessment did not adequately demonstrate whether a program is or is not susceptible to significant improper payments. We believe that while agencies are not specifically directed to demonstrate how the weights for each risk factor or overall ratings relate to the definition of significant improper payments, management should use quality information to achieve the entity’s objectives as stated in federal internal control standards. As such, to reasonably determine if a program is susceptible to significant improper payments, agencies should have documentation to support how their risk assessments provided a logical connection with, or bearing upon, the statutory definition of significant improper payments. DOJ did not provide sufficient support for how it determined the weighting of the risk factors or the numerical risk level ranges. Because DOJ did not have sufficient documentation for how it developed its risk assessment template, we could not determine if the risk assessment was designed to provide a reasonable basis for the risk determinations.
DOJ stated that the report does not accurately portray DOJ’s risk assessment process. Specifically, DOJ stated that we incorrectly reported that DOJ’s quantitative evaluation did not include improper payments related to lack of documentation. Based on the information DOJ provided, we removed the lack of documentation example from our report.
DOJ also stated that it was misleading to report that although DOJ’s risk assessment template included a voluntary comments section for each risk factor for components to explain answers or justify risk ratings, the comment sections were frequently left blank. DOJ stated that its components only need to provide a comment when they believe it is necessary to qualify their responses and that obvious answers do not need to be explained. However, as previously noted, DOJ did not provide sufficient documentation or support for us to determine how the components arrived at their risk determinations for each risk factor. Without such documentation, DOJ cannot demonstrate, and we cannot determine, whether DOJ’s assessment for each risk factor was reasonable.
Further, DOJ stated that the Offices, Boards, and Divisions example was inaccurate and misleading. DOJ stated that the summary table in its risk assessment questionnaire documented that the risks identified were determined to be low risk and therefore supported the conclusions reached. DOJ also stated that its approach acknowledges that risks exist in every disbursement process and allows process owners to assess the level of risk that exists and determine whether a program may be susceptible to significant improper payments. We disagree that the Offices, Boards, and Divisions example is inaccurate or misleading. Although we recognize that DOJ’s summary table, or scoring template as referred to in the report, documented that the risks identified were determined to be low risk, we do not believe that it provided support for that determination. Specifically, the summary table was populated based on component responses and predetermined weights to calculate an overall risk rating of low, medium, or high; however, DOJ did not provide documentation to demonstrate how it determined the weights of the risk factors or the numerical risk level ranges involved in that calculation. Without documenting the basis for the assigned weights, DOJ cannot demonstrate, and we cannot determine, that its process for determining Law Enforcement’s susceptibility to significant improper payments was reasonable.
We continue to believe that our recommendation to DOJ is valid to help ensure that DOJ’s risk assessment reliably results in determining whether Law Enforcement may be susceptible to significant improper payments.
We are sending copies of this report to the appropriate congressional committees, the Director of the Office of Management and Budget, the Secretary of Agriculture, the Secretary of Health and Human Services, the Secretary of the Treasury, the Acting Attorney General, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-2623 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
Appendix I: Objective, Scope, and Methodology
This report examines the extent to which certain federal agencies’ improper payment risk assessments for selected programs provided a reasonable basis for determining susceptibility to significant improper payments.
To address our objective, we reviewed improper payment risk assessment requirements in the Improper Payments Information Act of 2002, as amended (IPIA), and the related guidance in Office of Management and Budget (OMB) Circular A-123, Appendix C, Requirements for Effective Estimation and Remediation of Improper Payments (OMB M-15-02). We analyzed this statute and guidance to identify key criteria that agencies must meet when conducting improper payment risk assessments. IPIA identifies seven risk factors and OMB guidance includes two additional risk factors that agencies must consider, at a minimum, in their improper payment risk assessments to determine susceptibility to significant improper payments. IPIA also directs agencies to conduct risk assessments for all programs and activities at least once every 3 years. We also reviewed relevant internal control standards to determine the relevant processes and procedures needed to help ensure that agencies conduct effective improper payment risk assessments to determine the susceptibility to significant improper payments.
For this objective, we selected a nongeneralizable sample of four agencies and five programs to review. To select the agencies, we considered data for the 24 agencies subject to the Chief Financial Officers Act of 1990 (CFO Act). Specifically, we considered the timing of the agencies’ improper payment risk assessments, findings reported by the agencies’ inspectors general (IG), the number of programs and activities for which the agencies reported improper payment estimates for fiscal year 2017, the types of programs and activities that the agencies administered, and agency gross outlays in fiscal year 2017. To ensure we were including agencies that had most recently conducted improper payment risk assessments, we limited our selection to agencies that conducted improper payment risk assessments for any programs or activities in fiscal year 2017. To avoid duplicate efforts, we also eliminated agencies that reported IG findings related to risk assessments. We then selected a mix of agencies with and without improper payment estimates for fiscal year 2017, and ultimately selected four agencies based primarily on their fiscal year 2017 outlays for programs determined to be not susceptible to improper payments. Specifically, we selected one agency that did not report any improper payment estimates, one agency that reported a few improper payment estimates (for three or fewer programs or activities), and one agency that reported several improper payment estimates (for five or more programs or activities). We also selected one agency that administered eligibility-based programs in fiscal year 2017 because of the unique application and approval processes generally associated with eligibility determinations and their increased risk of improper payments.
We then selected up to two programs or activities at each agency, for a total of five programs. To facilitate our program selection, we requested a listing of all programs and activities at the selected agencies that underwent a risk assessment in fiscal years 2015 through 2017 (the most recent 3-year period at the time of our review) along with the gross outlay amounts associated with these programs and activities. Through our selection process, we noted that the Department of Health and Human Services (HHS) did not assess at least 140 of its programs and activities in the 3-year period from fiscal years 2015 through 2017, and therefore our program selection for HHS was limited to approximately 70 programs or activities.
To select programs, we considered outlay data, the timing of the most recent improper payment risk assessment conducted for each program or activity, and whether eligibility determinations were required for payments under each program or activity. Our selection was primarily based on the size of program and activity gross outlays reported for fiscal year 2017. We focused on outlays because the overall impact of any issues identified with an agency’s risk assessment process may be greater for programs and activities with higher gross outlays, as a higher volume of payments or higher payment amounts could potentially involve higher improper payments. Based on these data, we selected five programs for review. Our findings are limited to the five selected programs and cannot be generalized to all programs and activities at the 24 CFO Act agencies. The agencies and relevant programs selected for review are shown in table 3.
We interviewed officials at the selected agencies on their processes for conducting improper payment risk assessments and reviewed documented policies and procedures. We obtained the most recent improper payment risk assessments that the agencies conducted on the selected programs during the latest 3-year period at the time of our review (fiscal years 2015 through 2017). We then analyzed those risk assessments against relevant IPIA requirements, OMB guidance, and internal control standards to determine whether the agencies had evaluated the appropriate risk factors for improper payments, appropriately considered those risk factors in their risk assessments, and provided a reasonable basis for the risk determination. For any agencies that did not adhere to the improper payment risk assessment requirements, lacked supporting documentation for their risk assessments, or did not provide a reasonable basis for the risk determinations, we interviewed appropriate agency officials to determine the reasons they did not. We also interviewed OMB staff regarding their roles in developing risk assessment guidance.
We conducted this performance audit from December 2017 to January 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Comments from the Department of Justice
Appendix III: Comments from the Department of Health and Human Services
Appendix IV: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Matthew Valenta (Assistant Director), Stephanie Adams (Auditor in Charge), Marcia Carlsen, Pat Frey, Gina Hoover, Diana Lee, Zhen Li, and Charles Varga made key contributions to this report. | Why GAO Did This Study
Improper payments are a long-standing problem in the federal government, estimated at almost $141 billion for fiscal year 2017. Agencies are required to perform risk assessments to identify programs that may be susceptible to significant improper payments.
GAO was asked to review federal agencies' improper payment risk assessments. This report examines the extent to which certain agencies' improper payment risk assessments for selected programs provided a reasonable basis for determining their susceptibility to significant improper payments. GAO analyzed the most recent risk assessments, from 2015 through 2017, for the following five programs: USDA's Agriculture Risk Coverage and Price Loss Coverage programs; HHS's Head Start; DOJ's Law Enforcement; and Treasury's Interest on the Public Debt and Home Affordable Modification Program. GAO selected these programs based primarily on how recently they underwent a risk assessment and on the size of their gross outlays, which totaled about $330 billion in fiscal year 2017 for the five selected programs.
What GAO Found
The Improper Payments Information Act of 2002, as amended (IPIA), defines “significant” improper payments as improper payments in the preceding fiscal year that may have exceeded either (1) 1.5 percent of program outlays and $10 million or (2) $100 million (regardless of the improper payment rate). GAO found that the Departments of Health and Human Services (HHS), the Treasury (Treasury), Justice (DOJ), and Agriculture (USDA) assessed the five programs GAO selected for review as at low risk for susceptibility to significant improper payments; however, HHS, Treasury, and DOJ lacked sufficient documentation to assess the extent to which their risk assessments provided a reasonable basis for their risk determinations. On the other hand, USDA's quantitative risk assessment of its program's susceptibility to significant improper payments provided a reasonable basis for its low-risk determination.
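The dual threshold reduces to a simple test. The following minimal Python sketch (our own illustration; the function name and the use of millions of dollars as units are assumptions, not part of IPIA or any agency system) applies the two statutory conditions:

```python
def is_significant(improper_payments: float, program_outlays: float) -> bool:
    """Apply the IPIA thresholds for 'significant' improper payments.

    Improper payments are significant if they may have exceeded either
    (1) 1.5 percent of program outlays and $10 million, or
    (2) $100 million, regardless of the improper payment rate.
    Dollar amounts here are expressed in millions.
    """
    exceeds_rate_and_floor = (
        improper_payments > 0.015 * program_outlays and improper_payments > 10.0
    )
    exceeds_absolute_cap = improper_payments > 100.0
    return exceeds_rate_and_floor or exceeds_absolute_cap

# $50 million improper out of $2,000 million in outlays is 2.5 percent of
# outlays and above $10 million, so threshold (1) is met.
print(is_significant(50.0, 2000.0))    # True
print(is_significant(12.0, 2000.0))    # False: only 0.6 percent of outlays
print(is_significant(120.0, 50000.0))  # True: over $100 million at any rate
```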
Although HHS, Treasury, and DOJ considered, among other factors, the nine risk factors from IPIA and Office of Management and Budget guidance, they did not document or effectively demonstrate how these factors affected their programs' susceptibility to significant improper payments. These programs' risk assessments did not contain sufficient documentation to determine how the agencies arrived at their risk determinations for each risk factor, or how the total scores for all risk factors led to low-risk determinations. For example, HHS determined that its Head Start program was at high risk for several risk factors—including complexity per transaction and volume of payments—but did not document how these high-risk ratings informed its overall determination that Head Start was not susceptible to significant improper payments.
Further, the agencies did not have documentation to demonstrate how they determined the weighting of each risk factor or the risk level ranges from the risk assessment templates as they relate to the programs' susceptibility to significant improper payments. For example, based on GAO's analysis of Treasury's risk assessment template, the agency could identify areas of risk related to each of the nine risk factors. But because of the weights assigned to each risk factor, Treasury's final risk calculation would still not determine the program to be at high risk of susceptibility to significant improper payments. Without documenting the basis for the assigned weights, Treasury cannot demonstrate, and GAO cannot determine, that its process for determining its programs' susceptibility to significant improper payments was reasonable. Until HHS, Treasury, and DOJ revise their risk assessment processes to help ensure that they result in reliable assessments, they cannot be certain whether their programs are susceptible to significant improper payments and therefore whether they are required to estimate the amount of improper payments.
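To make concrete how a template of this kind combines factor ratings, predetermined weights, and numerical risk level ranges, consider the following Python sketch. The weights, cutoffs, and factor names are entirely hypothetical, precisely because the agencies did not document how theirs were derived:

```python
# Hypothetical weights for a few of the nine IPIA/OMB risk factors; the
# remaining factors would be listed the same way.
WEIGHTS = {
    "complexity_per_transaction": 2.0,
    "volume_of_payments": 2.0,
    "recent_program_changes": 1.0,
}

# Hypothetical numerical risk level ranges for the weighted-average score.
RISK_LEVEL_RANGES = [(0.0, 1.75, "low"), (1.75, 2.75, "medium"), (2.75, 3.0, "high")]

def overall_risk(ratings):
    """Combine per-factor ratings (1 = low, 2 = medium, 3 = high) into an
    overall rating using the predetermined weights and score ranges."""
    total_weight = sum(WEIGHTS[factor] for factor in ratings)
    score = sum(WEIGHTS[factor] * r for factor, r in ratings.items()) / total_weight
    for low, high, label in RISK_LEVEL_RANGES:
        if low <= score <= high:
            return label, round(score, 2)
    return "high", round(score, 2)

# Two heavily weighted factors rated high still average only 2.6, below the
# hypothetical 2.75 cutoff, so the template reports "medium" overall.
print(overall_risk({"complexity_per_transaction": 3,
                    "volume_of_payments": 3,
                    "recent_program_changes": 1}))  # ('medium', 2.6)
```

As the example shows, the choice of weights and cutoffs alone can keep a program out of the high-risk range, which is why undocumented weights and ranges prevent GAO from judging whether a low-risk determination was reasonable.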
GAO also found that HHS did not assess many of its programs and activities at least once during the 3-year period from fiscal years 2015 through 2017, as required by IPIA. Based on the analysis of HHS information, GAO identified at least 140 programs or activities that were not assessed during the 3-year period. When not all eligible programs are reviewed as required, there is an increased risk that the agency may not identify all risk-susceptible programs and activities, resulting in incomplete improper payment estimates.
What GAO Recommends
GAO recommends that Treasury, DOJ, and HHS revise their improper payment risk assessment processes, and that HHS revise its procedures to help ensure that all programs are assessed at least once every 3 years. In their responses, Treasury and HHS agreed with the recommendations, and DOJ disagreed with GAO's recommendation. GAO continues to believe that the recommendation is valid, as discussed in the report. |
Background
This section provides information on oil and gas leasing and development on federally managed lands, lease revenues, lease suspensions, and BLM’s LR2000 database.
Oil and Gas Leasing and Development on Federally Managed Lands
BLM is responsible for managing approximately 700 million acres of subsurface mineral estate throughout the country, including the acreage it leases to operators for oil and gas development. At the end of fiscal year 2016, about 41,000 oil and gas leases accounted for approximately 28.2 million acres in 32 states, according to BLM data (see app. II for additional details).
The Federal Land Policy and Management Act of 1976, as amended, requires the Secretary of the Interior to develop land use plans for public lands. These plans identify federal lands and mineral resources that will be available for oil and gas leasing and development and other activities. The act requires the plans to be revised as appropriate, and BLM generally evaluates plans for potential revisions at least every 5 years.
As part of developing or revising land use plans, BLM is required under the National Environmental Policy Act of 1969, as amended, to evaluate likely environmental effects of any decisions in the plan, such as selecting areas for oil and gas development. Generally, Interior prepares an environmental impact statement—a detailed statement of the likely environmental effects of the proposed action—in preparing land use plans. BLM officials said the agency uses the land use plans and environmental impact statements to (1) help develop “reasonably foreseeable development scenarios” to estimate outcomes, such as the number of wells and likely surface disturbance that may occur under the land use plan; (2) identify lands open and closed to leasing; (3) identify resource-protection measures, such as lease stipulations and environmental best management practices; and (4) establish monitoring protocols. With a completed land use plan and its associated environmental impact statement, BLM can offer for lease the mineral rights identified in the plan.
The parcels of land that BLM offers for potential leasing and development are nominated by industry and the public or identified by BLM. BLM offers leases through a competitive bidding process and requires a uniform national minimum bid of $2 per acre, due as a one-time payment when a bidder is awarded the lease. If BLM receives any bids on an offered lease, the lease is awarded to the bidder with the highest bid. Since 1992, BLM has offered leases with a 10-year primary term—the initial period of time prescribed in a lease to begin oil and gas development. Operators generally begin oil and gas exploration on leased lands by analyzing available geologic and seismic information and other testing to determine if economically viable oil and gas reservoirs exist. If the findings are positive, the operators may begin efforts to prepare for development, such as completing the environmental studies required to apply for permits to begin lease development activities. For example, operators holding leases for oil and gas development must submit a drilling permit application to BLM and obtain approval before preparing the land and drilling new oil or gas wells. After receiving a permit application, BLM generally communicates with operators until they provide all of the required documents, including necessary environmental information or studies. The Energy Policy Act of 2005 requires BLM to approve or defer permit applications within 30 days of submission by the operator. After such applications are approved, operators may begin development activities, including building roads to the well site, constructing platforms, drilling wells, and constructing additional pipeline transportation necessary to transport the oil and gas to market.
BLM has the authority to inspect federal oil and gas sites, including well pads and production facilities, under the Federal Oil and Gas Royalty Management Act of 1982, as amended. According to the agency’s handbook for its inspection and enforcement program, BLM must ensure that oil and gas operations on federal lands are prudently conducted in a manner that ensures protection of the surface and subsurface environment.
Lease Revenues
For issued leases, the operator pays a fixed amount of rent each year until the lease begins producing or expires. Under the Mineral Leasing Act of 1920, as amended, once a federal lease begins producing, the operator pays royalties on the oil and gas it produces in lieu of paying rent. The act sets the royalty rate for competitive leases at not less than 12.5 percent of the amount or value of production. A producing lease remains in effect so long as the operator continues to produce oil and gas in paying quantities.
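As a worked illustration of the rent-versus-royalty rule, the sketch below computes one year of federal revenue from a single lease. The dollar figures are hypothetical; actual rents and royalty rates are set by statute, regulation, and the lease terms:

```python
def annual_lease_revenue(producing: bool, production_value: float,
                         annual_rent: float, royalty_rate: float = 0.125) -> float:
    """Return one year of federal revenue from a lease.

    A producing lease pays royalties (at not less than 12.5 percent of the
    amount or value of production) in lieu of rent; a nonproducing lease
    pays its fixed annual rent.
    """
    return royalty_rate * production_value if producing else annual_rent

# A lease producing oil and gas valued at $4 million in a year owes
# $500,000 in royalties at the 12.5 percent minimum rate.
print(annual_lease_revenue(True, 4_000_000, annual_rent=1_920))   # 500000.0
print(annual_lease_revenue(False, 0, annual_rent=1_920))          # 1920 (rent only)
```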
The Office of Natural Resources Revenue, within Interior, is responsible for managing and collecting revenues from operators that produce or extract resources from federal leases. In fiscal year 2016, approximately 164 million barrels of oil and 3.25 trillion cubic feet of gas were produced on federal lands, according to agency data. According to Office of Natural Resources Revenue data, in fiscal year 2016, the federal government collected approximately $1.6 billion in gross revenue from the production of these resources on federal land. The majority of this revenue—nearly $1.5 billion, or 91 percent—came from royalties. The remaining revenue came from bids made on new leases—more than $120 million—and rent for existing leases—more than $20 million.
Lease Suspensions
According to agency guidance, specifically the Suspensions of Operations and/or Production Manual, BLM generally uses two types of suspensions for oil and gas leases: (1) suspension of operations or (2) suspension of operations and production. A suspension of operations halts the operations associated with a particular lease, such as drilling or developing a well pad and roads. A suspension of operations and production—the most common type of suspension, according to BLM officials—is broader because it halts both operations and any production of oil and gas. BLM’s guidance also states that a suspension of operations may be granted in cases in which the operator is prevented from operating or producing on the lease for reasons beyond the operator’s control, and a suspension of operations and production may be granted only in the interest of the conservation of natural resources. During either type of suspension, the time remaining in the primary term of the lease is reserved until the suspension is terminated, so that the operator is not penalized for the time the lease is in suspension.
According to BLM officials, lease suspensions typically are initiated by the operator but may also be initiated by BLM. BLM guidance states that before an operator can request a suspension, the operator must first demonstrate being hampered in performing some operation or activity on the lease. The operator must submit thorough documentation of the reason for requesting a suspension and should include evidence that activity has been attempted on the lease, such as filing an application for a drilling permit, and that the activity has been prevented by actions beyond the operator’s control. For BLM’s part, according to BLM’s guidance, requests filed less than 30 days prior to the expiration of the lease are considered late and should normally be denied. If a request is filed in a timely manner, BLM is to assess the request and, if the reasons for the request are acceptable and justify a suspension, BLM should approve the request, according to BLM guidance. The state director at each BLM state office is responsible for reviewing and approving requests for lease suspensions; however, BLM’s guidance encourages the delegation of this responsibility to the field manager at the field office with jurisdiction over the lease. According to BLM officials, BLM state offices generally delegate responsibility for monitoring lease suspensions to their field offices.
BLM’s LR2000 Database and Other Interior Databases
According to BLM officials, LR2000 is a national database that provides internal and external users with access to, among other things, land and mineral use authorizations for oil, gas, and other mineral development; land titles; and other data extracted from case files that support BLM land, mineral, and resources programs. LR2000 contains information on approximately 6 million land and mineral case files. BLM designed the database for use by the oil and gas industry, mining industry, land and mineral title companies, utilities, state and local governments, interest groups, and members of the public that need access to BLM land and mineral case files.
The agency has conducted a series of reviews of LR2000 over the last 5 years in an attempt to improve the accuracy of the data in the system, according to BLM officials we interviewed. In particular, the officials informed us that they created a tool, known as Data Flux, to improve the accuracy of the data, and that the tool has helped identify numerous data errors. BLM officials told us that each spring a report is generated using Data Flux that highlights the errors found in LR2000, and BLM state offices are responsible for taking action to address the identified errors for their respective states. These officials also told us that BLM plans to either significantly update or replace LR2000 but has not set a definitive date for doing so.
Interior and BLM manage several other databases that contain information about the development and production of oil and gas on federal lands. In prior work, we found weaknesses in how Interior tracks and uses some information in its data systems. Specifically, in July 2010, we reported that BLM’s publicly available data related to protests, or challenges, to lease sales were incomplete or inconsistent, and we recommended that Interior determine and implement an agency-wide approach for collecting protest information that is complete, consistent, and available to the public. BLM agreed with the recommendation and issued guidance to standardize data collection. In addition, we found in July 2016 that Interior could improve the data it collects to help track progress toward its goal of reducing methane emissions from oil and gas operations. We made four recommendations to improve BLM’s reporting of emissions data. The agency generally concurred with all of the recommendations and has implemented two of them. Further, in April 2017, we found that BLM field offices had not effectively used data collected during environmental inspections, which could have enhanced BLM’s ability to assess and mitigate environmental impacts. We recommended that BLM develop guidance and consistently track inspections data, among other things. BLM generally concurred with these recommendations.
BLM Uses a Multistep Process in Determining Whether to Suspend Leases
BLM uses a multistep process to determine whether to suspend oil and gas leases, and this process, according to BLM guidance and officials, typically begins with an operator submitting a suspension request to the appropriate BLM field office. Once the request is received, the cognizant BLM field official—usually a petroleum engineer at the field office—reviews it for completeness and whether the reasons cited meet the suspension criteria established in federal regulations and BLM’s Suspensions of Operations and/or Production Manual. These criteria require that lease suspensions be approved only in the interest of the conservation of natural resources or for circumstances beyond the operator’s control. Officials we interviewed stated that field officials generally have broad discretion in how to apply suspension criteria when considering a request. See figure 1, below, for examples of circumstances for which suspensions can be issued.
According to BLM officials, if the field office recommends approving the operator’s request for suspension, the field office is to forward the request to the appropriate BLM state office for final review, as shown in figure 2 below. In cases in which the state office agrees with the field office’s recommendation, the state office is to issue a decision letter to the operator noting the changes to the terms and conditions of the lease. A copy of the letter is also to be sent to the Office of Natural Resources Revenue, if necessary, requesting deferment of rent and royalty payments while the lease is suspended.
Conversely, if the field office recommends that the suspension request be denied, the field office is to inform the operator in writing, BLM officials said. According to agency guidance, the operator can appeal the field office’s recommendation to the state office director within 20 days after receiving the notification. The state director then has 10 days to render a decision. If the state director denies the request for suspension, the operator can challenge the decision at the Interior Board of Land Appeals. After the board’s decision, the operator may make additional appeals in federal court. In cases in which a decision is overturned, the state office is to issue a decision letter to the operator that highlights changes in the lease’s terms and conditions. The state office is to record the new terms and conditions in LR2000, notify the Office of Natural Resources Revenue of any rental or royalty payments that are to be deferred, and update the official lease file in the state office. Other affected parties (i.e., any party who is adversely affected by a decision) can also appeal a suspension decision, according to BLM officials.
Agency officials stated that BLM field offices are primarily responsible for monitoring the status of lease suspensions they issue to ensure that the conditions for granting the suspension still exist. If the conditions have changed, the field office is to recommend that the lease suspension be terminated and notify the operator. The state office is to terminate the suspension and send a letter to the operator with the updated lease terms and conditions, which should extend the original lease expiration date to reflect the length of the suspension. BLM guidance states that the state office also is to send a copy of the suspension termination letter to the Office of Natural Resources Revenue to alert that office that any rental and royalty payments on hold for the lease should resume. The state office is then responsible for updating LR2000 and the official lease file regarding any new lease terms and conditions, according to BLM officials.
A Small Portion of BLM’s Oil and Gas Leases Were Recorded as Suspended, but Reasons for Suspensions Were Difficult to Determine
A small portion of BLM’s oil and gas leases were suspended as of the end of fiscal year 2016, according to the agency’s LR2000 data, but the reasons for the suspensions were difficult to determine. These data indicated that as of September 2016, about 2,750 of BLM’s approximately 41,000 oil and gas leases were suspended in various locations for various lengths of time. LR2000 did not always contain the reasons for suspensions, which required us to take additional steps to identify the reasons.
As of September 2016, about 2,750 Oil and Gas Leases Were Recorded as Suspended and Varied in Their Location and Length of Suspension
According to LR2000 data, approximately 2,750 oil and gas leases were suspended at the end of fiscal year 2016. Our analysis of these data showed that the lease suspensions spanned 16 states and accounted for about 3.4 million acres of federally managed land. The data also showed that most of the suspensions were in five Mountain West states: Colorado, Montana, New Mexico, Utah, and Wyoming. These five states accounted for more than 2,350 of the approximately 2,750 recorded lease suspensions and encompassed more than 2.9 million acres of federally managed land (see app. II for additional details). Figure 3, below, provides information on the location of oil and gas leases and recorded suspensions across the United States.
Our analysis of LR2000 data showed the following for the approximately 2,750 recorded lease suspensions:
about 630 had been in place for less than 3 years;
about 1,150 had been in place for 3 years to less than 10 years;
about 190 had been in place for 10 years to less than 20 years;
about 130 had been in place for 20 years to less than 30 years; and
about 650 had been in place for 30 years or more.
See figure 4 and appendix III for additional details.
Reasons for Suspensions Were Not Always Recorded in BLM’s Database and Required Reviews of Official Lease Files to Identify
BLM’s database, LR2000, did not always contain information on the reasons for oil and gas lease suspensions. BLM officials said that while LR2000 does not have a field to specifically capture the reason for a suspension, and inclusion of this information is not mandatory, the general remarks field could be used for this purpose.
Because we found this remarks field was rarely used to capture the reason for suspensions, we reviewed the official lease files for a sample of 48 leases in Montana and Wyoming that were suspended as of September 30, 2016, and we interviewed field office staff for clarification. The reasons for suspensions in this sample generally fell into four broad categories: environmental reviews, delays in reviewing applications for permits to drill, logistical conflicts, and other reasons.
Our review of the official lease files for our sample found the following reasons cited for suspensions:
Sixteen leases were suspended for large-scale environmental concerns, such as wilderness or wildlife protection areas or environmental reviews that affected large parcels of land. These 16 suspensions had been in effect for approximately 6 years to 38 years. One of these leases was suspended because of a court order that also resulted in suspension of 422 other leases; the leases suspended as a result of this court order accounted for most of the suspensions that had been in place for more than 30 years.
Fourteen leases were suspended because BLM required additional time to complete its review of the operator’s drilling permit application. These 14 suspensions had been in effect for approximately 1 year to 13 years. Seven of these 14 suspensions were issued because BLM needed additional time to review the environmental assessments submitted with the drilling permit applications.
Eight leases were suspended because they faced logistical conflicts with other surface development, such as mining activities occurring on the lease or adjacent lands. These suspensions had been in effect for approximately 4 years to 25 years.
Five leases were suspended for other, short-term reasons, such as weather-related issues or economic conditions, but were recorded in LR2000 as suspended for approximately 22 years to 74 years.
We were unable to determine the reasons why the 5 remaining leases were suspended. These leases were recorded as suspended for approximately 28 to 82 years. The agency was unable to provide lease files for 1 of the leases. Field officials said that some of these suspensions may have been issued at the state level, and the officials had no additional information on them.
According to Standards for Internal Control in the Federal Government, management should use quality information to achieve the entity’s objectives; quality information may be defined as appropriate, current, complete, accurate, accessible, and provided on a timely basis. BLM does not have quality information on the reasons for suspensions, in part because such reasons are not routinely included in LR2000, and there is no specific data field for them. To obtain this information, BLM officials would have to review the official lease files, as we did, and most of the files were available only in hard copy in BLM state offices. Therefore, the information is not readily accessible across the agency. Field officials we interviewed from one field office said that additional information on reasons for suspension in the database would be helpful in monitoring lease suspensions and in communicating with others, such as management or the public, about suspensions. BLM headquarters officials said they are planning to update or replace LR2000. By including a data field in the update or replacement for LR2000 to record the reasons for suspensions, BLM could better ensure that federal lands are not being inappropriately kept from development—potentially foregoing revenue—or from valuable uses of public lands.
BLM Relies on an Informal Monitoring Approach That May Not Provide for Consistent and Effective Oversight of Lease Suspensions
BLM uses an informal approach to monitor lease suspensions and does not have procedures in place for monitoring suspensions, which may not ensure consistent and effective oversight. We also found that BLM’s state offices do not always maintain current information on lease suspensions in the official lease files or LR2000, and BLM headquarters and state officials told us they generally do not oversee the monitoring of lease suspensions.
Monitoring Varies Among Field Offices, and BLM Does Not Have Procedures for How to Conduct It
Field offices vary in how they monitor lease suspensions, and BLM does not have official agency procedures in place for monitoring, relying instead on an informal approach. We found that the field offices we reviewed differed in the frequency of their monitoring activities for lease suspensions. According to officials we interviewed from these offices:
8 field offices monitor with varying frequency, depending on the nature of the suspension, and
3 field offices monitor rarely or not at all.
Officials who monitored with varying frequency said that the frequency depends in part on the nature of the suspension. For instance, they said suspensions that involve seasonal protection of wildlife habitat, which can last for several months, typically require relatively little monitoring because the time frames for these suspensions are more clearly defined. In contrast, suspensions involving environmental reviews often require more frequent monitoring because the time frames associated with these suspensions are less definitive and can range from several months to several years. Several of these officials said that their offices have established prompts to alert staff when to conduct monitoring activities. For example, an official from 1 field office told us the office’s staff use handwritten notes to track their lease suspensions. An official from another field office informed us that their office uses an electronic calendar feature to alert staff when to monitor, and several other field office officials reported that they rely on various spreadsheets and emails to remind them when to monitor. Officials from 1 field office also stated that their office uses an estimated end date for every suspension—that is, the date the suspension is expected to terminate—to prompt them to review the current conditions to ensure that the suspension is still warranted.
Officials who monitored with varying frequency also said that the frequency depends on the availability of staff for monitoring. These officials said they generally rely on petroleum engineers in their respective offices to monitor lease suspensions because these individuals are normally the most familiar with leases. However, some officials added that staffing limitations, particularly a shortage in petroleum engineers, have hindered their ability to monitor lease suspensions in a timely manner. Several of the field officials we interviewed noted that, in recent years, they have had to rely on other staff or petroleum engineers who were on loan from other field offices because their offices did not have a petroleum engineer on staff. According to two field officials we interviewed, while assistance from other field offices is needed and appreciated, there is invariably a lack of consistency in the knowledge that engineers from other offices have about the lease sites involved. Field officials also said that there have been instances in which petroleum engineers left the agency for the private sector, resulting in a loss of institutional knowledge about certain leases, possibly contributing to lapses in follow-up on particular leases. Officials from offices that rarely or never conduct monitoring also cited problems with staff availability. We reported on human capital challenges at BLM, specifically in hiring and retaining petroleum engineers, in March 2010. We also noted BLM’s human capital constraints in our High-Risk Series update report in February 2011, and we reported on human capital issues at BLM in January 2014 and September 2016. In several of these reports, we recommended that BLM take a number of actions, including using existing authorities and incentives to improve staff retention. BLM generally agreed with these recommendations and has taken action on some, but not all, of these recommendations.
Nonetheless, the extent of variability we found, including 3 field offices that monitor rarely or not at all, indicates that allowing individual field offices to determine when to monitor suspensions may not ensure that monitoring takes place. Under Standards for Internal Control in the Federal Government, management should design control activities, such as procedures, to ensure the objectives of the program are achieved. The Office of Management and Budget has also acknowledged the importance of internal guidance documents to channel the discretion of employees, increase efficiency, and enhance the fair treatment of similarly situated parties. Some field officials we interviewed said that procedures to help guide them on monitoring could be beneficial and provide a level of consistency. By developing procedures for monitoring lease suspensions, including when to conduct monitoring efforts, BLM could promote more consistent monitoring to better ensure that lease suspensions in effect are warranted.
BLM State Offices and Headquarters Generally Do Not Oversee Field Office Monitoring of Lease Suspensions and Do Not Always Have Current or Complete Information on Suspensions
Officials from BLM’s state offices told us that they do not oversee field office monitoring of suspensions, and we found that they did not always have current or complete information on suspensions. We found that more than three-quarters of the official lease files in BLM state offices we reviewed contained outdated documentation regarding the status of lease suspensions. Specifically, files for 37 of the 48 lease suspensions we reviewed did not contain updated information on whether the lease suspension had been monitored or reviewed since the suspension was initially issued. For example, we reviewed a lease file for a suspension issued in 1949 for economic reasons, but the file only contained information on the suspension issuance and not whether monitoring occurred to assess the economic conditions associated with the lease. Additionally, we discovered that some official lease files were not complete and did not have certain required information, such as letters issuing the suspension. For example, three of the lease files we reviewed were missing required information. We could not verify the reasons these leases were suspended, their current status, or any information concerning monitoring efforts associated with them. Field officials we interviewed did not have any information on these suspensions and said that they may have been initiated by the state office more than 30 years ago. However, BLM state officials were unable to confirm or deny this. For another lease, there was no lease file. Officials in BLM headquarters and state offices said that there is no requirement for them to oversee the field offices’ monitoring activities. However, they said that performing such oversight could help to ensure effective and consistent monitoring of lease suspensions.
We also identified some instances in LR2000 where data on suspensions were not up to date. Specifically, 7 of the 48 leases we reviewed were recorded in LR2000 as suspended, but information we received from agency officials indicated that the suspensions were no longer warranted. We later confirmed with state and field officials that none of the 7 suspensions remained in effect. Five of these 7 leases were recorded as being in suspension for 22 years or more for what appeared to be short-term reasons, such as weather-related issues or economic conditions. One Wyoming suspension, for instance, was granted in 1990 because of low oil prices at the time, which made repairing wells uneconomical. While this lease was still recorded as suspended in LR2000 as of September 2016, a termination letter in the lease file indicated that the suspension was terminated in 1991. In another example, a lease was listed in the official lease file as suspended for 3 years because of delays in processing a drilling permit application. When we followed up with field officials about the lease, they informed us that the suspension should have been terminated years ago; however, we found no termination letter in the official lease file maintained by the state office. Field officials speculated that the letter may not have been sent because the case manager had retired and no one in the field office knew to follow up on the lease. Because field officials informed us that these leases were no longer suspended, we confirmed with officials from Interior’s Office of Natural Resources Revenue that payments were being appropriately collected for these 7 leases.
Moreover, these data are not available in a standardized report that could be used to help oversee monitoring, such as a report showing the average length or frequency of suspensions. BLM produces standardized reports from LR2000 for other aspects of oil and gas leases, such as when leases have been issued or are set to expire. BLM officials said that a standardized report for lease suspensions could assist headquarters and state officials in conducting oversight of field offices’ monitoring efforts.
Standards for Internal Control in the Federal Government state that management should design control activities, such as conducting top-level reviews of actual performance, to ensure the objectives of the program are being achieved. By requiring that management, particularly cognizant headquarters and state office officials, conduct top-level reviews of field offices’ monitoring efforts, as well as top-level reviews of official lease files and databases, BLM could better ensure that lease suspensions in effect continue to be warranted and that information on suspensions is current and complete. Additionally, federal standards for internal control state that management should design control activities, such as developing mechanisms that enforce management’s directives, to achieve the entity’s objectives and address related risks. By developing mechanisms, such as summary reports on lease suspensions, as BLM updates or replaces LR2000, BLM could assist cognizant officials in headquarters and state offices with their oversight of monitoring.
Conclusions
The ability of federal agencies to manage their programs effectively depends in part on the information systems the agencies use and the quality of the data within these systems. Over the past several years, BLM has worked to improve the quality of the data in LR2000, including data related to oil and gas lease suspensions. These efforts have helped to improve the accuracy of certain data, but they do not address some constraints of LR2000. In particular, LR2000 does not contain a data field for recording the reasons for suspensions. BLM officials told us that they will upgrade or replace LR2000 in the near future. By including a data field in the update or replacement for LR2000 to record the reasons for suspensions, BLM could better ensure that federal lands are not being inappropriately kept from development—potentially foregoing revenue— or from other valuable uses of public lands.
BLM’s ability to effectively manage the program also depends on the establishment of effective internal controls. To date, BLM has not developed procedures for monitoring lease suspensions. By developing procedures for monitoring lease suspensions, including when to conduct monitoring efforts, BLM could promote more consistent monitoring to better ensure that lease suspensions in effect are warranted. Additionally, BLM does not conduct top-level reviews to oversee field offices’ monitoring efforts, and we found instances in which BLM’s information on suspensions was outdated or incomplete. By requiring that management, particularly cognizant headquarters and state office officials, conduct reviews of field offices’ monitoring efforts, as well as official lease files and databases, BLM could better ensure that information on suspensions is current and complete. Finally, BLM does not have mechanisms to provide officials with some key information relevant for oversight, such as when suspensions were last reviewed or the average length and frequency of suspensions. By developing mechanisms, such as summary reports on lease suspensions, as BLM updates or replaces LR2000, BLM could assist cognizant officials in headquarters and state offices with their oversight of monitoring.
Recommendations for Executive Action
We are making the following four recommendations to BLM: As BLM updates or replaces its database, the Director of BLM should include a data field to record the reasons for suspensions. (Recommendation 1)
The Director of BLM should develop official agency procedures for monitoring oil and gas lease suspensions, including when to conduct monitoring activities. (Recommendation 2)
The Director of BLM should require cognizant officials in headquarters and state offices to conduct top-level reviews of field offices’ monitoring of oil and gas lease suspensions, as well as of official lease files and databases to ensure they are current and complete. (Recommendation 3)
As BLM updates or replaces LR2000, the Director of BLM should ensure the development of mechanisms, such as standardized summary reports on lease suspensions, to assist cognizant officials in headquarters and state offices with oversight of field offices’ monitoring efforts. (Recommendation 4)
Agency Comments
We provided a draft of this report to Interior for review and comment. In its comments, reproduced in appendix IV, Interior generally agreed with our findings and recommendations. Interior also outlined plans for addressing the recommendations.
Regarding our first recommendation, Interior stated that it agrees that any future database used to track information on oil and gas lease suspensions should include a data field to more explicitly record the reasons for suspensions.
Interior also stated that it will develop standardized procedures for monitoring oil and gas lease suspensions, consistent with our second recommendation. These procedures will be instituted agency-wide, according to Interior, and agency policy and handbooks will be updated as needed to implement the procedures.
With respect to our third recommendation, Interior stated that it will provide updated guidance and online training to assist the state and field offices in managing, monitoring, and reviewing lease suspensions. These actions are positive steps and may address our recommendation depending on their implementation.
Finally, consistent with our fourth recommendation, Interior stated that any future update to or replacement of LR2000 database will include the capability to create standardized reports for oil and gas lease suspensions.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of the Interior, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.
Appendix I: Objectives, Scope, and Methodology
This report examines (1) the process the Bureau of Land Management (BLM) uses to determine whether to suspend oil and gas leases; (2) the extent of oil and gas lease suspensions and the reasons for the suspensions of selected leases; and (3) the approach that BLM uses to monitor the status of lease suspensions and the extent to which this approach allows for oversight of such lease suspensions.
To examine the process BLM uses to determine whether to suspend oil and gas leases, we reviewed applicable laws, agency documents, and the criteria BLM uses when considering lease suspensions. Specifically, we reviewed BLM’s statutory requirements for granting a lease suspension. We also reviewed BLM’s guidance for reviewing suspension requests—Suspensions of Operations and/or Production Manual––which outlines the process and criteria BLM uses to approve or deny a lease suspension request as well as the process for appealing a suspension decision. We interviewed BLM officials at headquarters, as well as state and field offices responsible for leases in our review, about how they apply these criteria when assessing suspension requests. We also interviewed representatives from the Interior Board of Land Appeals about suspension decisions that are appealed to the board, how these appeals are handled, board decisions that are subsequently appealed, and the process involved with those appeals.
To examine the extent of oil and gas lease suspensions and the reasons for the suspensions of selected leases, we analyzed data on lease suspensions from BLM’s Legacy Rehost 2000 System (LR2000) database as of September 30, 2016. We took a number of steps to assess the reliability of suspension data and related fields in LR2000. Specifically, we performed electronic tests to check the extent to which data were complete and within expected ranges. Testing included comparison of data extractions prepared by BLM officials for us against data we downloaded directly from LR2000. We also interviewed BLM officials responsible for managing the system about how data are collected and entered into the system as well as the steps the officials take to help ensure that the data are accurate and complete. We also clarified discrepancies regarding lease suspension data with these officials when necessary. We determined the data were sufficiently reliable to give a high-level summary on suspensions, including information on the number and location of leases, the number in suspension, and suspension length.
LR2000 contains information on activities related to an oil and gas lease’s status, among other things. For each lease, we identified the latest record, if any, for actions in fiscal year 2016 and earlier that indicate suspension initiation or termination. We determined that a lease was in suspension if the most recent action related to a suspension indicated that the suspension was initiated. We determined the length of suspension based on the date of that initiation record. We then determined distributions of the numbers of leases recorded as still in suspension in each state as of the end of fiscal year 2016.
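The status logic described above can be sketched with pandas as follows; the column names and action codes are illustrative rather than LR2000's actual schema:

```python
import pandas as pd

# Illustrative extract of suspension-related actions for two leases.
actions = pd.DataFrame({
    "lease_id":    ["WYW0001", "WYW0001", "MTM0002", "MTM0002"],
    "action":      ["SUSP_INITIATED", "SUSP_TERMINATED",
                    "SUSP_INITIATED", "SUSP_INITIATED"],
    "action_date": pd.to_datetime(["1990-05-01", "1991-06-01",
                                   "1979-03-15", "1985-08-20"]),
})

CUTOFF = pd.Timestamp("2016-09-30")  # end of fiscal year 2016

# Keep actions dated on or before the cutoff, take the latest per lease, and
# treat a lease as suspended if that latest action is an initiation; the
# suspension length runs from the initiation date.
latest = (actions[actions["action_date"] <= CUTOFF]
          .sort_values("action_date")
          .groupby("lease_id")
          .tail(1))
suspended = latest[latest["action"] == "SUSP_INITIATED"].copy()
suspended["years_suspended"] = (CUTOFF - suspended["action_date"]).dt.days / 365.25

# WYW0001 drops out (its latest action is a termination); MTM0002 counts as
# suspended for roughly 31 years, measured from its 1985 initiation.
print(suspended[["lease_id", "action_date", "years_suspended"]])
```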
We also reviewed the official lease files, maintained by BLM state offices, for a nongeneralizable sample of leases recorded as suspended as of September 30, 2016, in Montana and Wyoming to assess their status and the reasons behind the suspensions. We chose these two states because they were among the states with the largest numbers of suspended leases. Montana’s official lease files were electronically maintained and easily accessible, while Wyoming, which had leases recorded as suspended for the longest period of time as of September 30, 2016, maintained hard copy official lease files. Montana and Wyoming collectively represent about 50 percent of all oil and gas leases recorded as suspended. We used the following approaches to select a sample of 48 suspended leases in these states and limited the extent to which we selected multiple leases that were suspended at the same time for the same reason.
For Montana leases, we found that only 12 suspension initiation dates were recorded for the leases in suspension as of the end of fiscal year 2016. We therefore randomly selected for review a single lease from those suspended on each of these dates. For Wyoming, the suspension initiation dates were much more dispersed, so we identified groups of 15 or more leases based on a combination of suspension date, similarity of lease numbers, and the field office of jurisdiction. From these groups, we selected 19 suspended leases; each was the lease with the largest acreage from its field office within its group. A number of leases did not fit into these groups because there were fewer than 15 suspensions on a given date with similar lease numbers, so we selected a single lease file with the largest acreage from each year that was at least 20 years old. This allowed us to review suspensions that have been in effect for a relatively long period of time. This approach resulted in our selection of an additional 17 suspended leases in Wyoming. While our review of suspended lease files is not generalizable to other BLM lease suspensions, our findings provide examples of types of reasons that are cited for lease suspensions.
To verify the status of each selected lease, we compared information in LR2000 and the official lease file to information in the Office of Natural Resources Revenue’s database. The Office of Natural Resources Revenue, within the Department of the Interior, is responsible for collecting rental and royalty payments associated with oil and gas leases. We also interviewed the BLM state and field office officials responsible for the specific lease files we reviewed to obtain additional information about the status of certain lease suspensions and the reasons these suspensions remained in effect. We compared how BLM maintains and verifies its lease suspension information with Standards for Internal Control in the Federal Government for information and communication.
To examine the approach BLM uses to monitor the status of lease suspensions and the extent to which the approach provides for oversight, we reviewed agency data, guidance and requirements, and official lease documents. In particular, we reviewed monitoring information in LR2000, BLM’s Suspensions of Operations and/or Production Manual, and monitoring information in the official lease files for our sample of 48 leases recorded as being in suspension as of September 30, 2016. We also interviewed officials from BLM headquarters, as well as BLM’s state offices in Montana and Wyoming and the field offices responsible for the 48 selected leases in our review—a total of 12 field offices, 2 from Montana and 10 from Wyoming. We interviewed officials from 11 field offices about the approaches they used to monitor lease suspensions, including the frequency of monitoring and the staff involved. We also interviewed officials at headquarters and state offices to examine the extent to which these approaches provided for oversight of lease suspensions. We compared BLM’s actions and documentation with agency guidance, federal regulations, and federal standards for internal control for control activities.
We conducted this performance audit from September 2016 to June 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Bureau of Land Management Oil and Gas Leases, Suspensions, and Acreage, as of September 30, 2016
Number of leases: 242
Appendix III: Bureau of Land Management Oil and Gas Leases Recorded as in Suspension
Appendix IV: Comments from the Department of the Interior
Appendix V: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Dan Haas (Assistant Director), Karla Springer (Assistant Director), John C. Johnson (Analyst-in-Charge), Richard Burkard, Cindy Gilbert, John W. Hocker, Cynthia Norris, Daniel Purdy, Stuart Ryba, Sara Sullivan, Kiki Theodoropoulos, Barbara Timmerman, Jack Wang, and Khristi Wilkins made key contributions to this report.

Why GAO Did This Study
Oil and gas leases on federal lands generate billions of dollars in rents and royalty payments for the federal government each year, but these revenues can be reduced if leases are suspended (i.e., placed on hold). Questions have been raised about whether some suspensions, particularly those in effect for more than 10 years, may hinder oil and gas production or adversely affect the use of federal lands for other purposes, such as recreation.
GAO was asked to review oil and gas lease suspensions on federal lands managed by BLM. This report examines, among other things, (1) the extent of and reasons for such suspensions and (2) the approach BLM uses to monitor the status of lease suspensions. GAO analyzed all data on suspensions in a BLM database and the official lease files for a nongeneralizable sample of 48 leases recorded as suspended in Montana and Wyoming, which GAO selected based in part on the large number of suspensions in these states. GAO also reviewed BLM documents and interviewed BLM officials.
What GAO Found
According to Bureau of Land Management (BLM) data as of the end of fiscal year 2016, a small portion of oil and gas leases were suspended for various lengths of time (as shown below), but the reasons for the suspensions were difficult to determine. During a suspension, the government generally does not collect revenues from the lease. Determining the reasons for suspensions is difficult, in part because BLM does not require the inclusion of this information in its database. To obtain this information, BLM officials would have to review the official lease files, many of which are in hard copy. Under Standards for Internal Control in the Federal Government, management should use quality information to achieve the entity's objectives. BLM field officials GAO interviewed said that additional, more detailed information in the database on reasons for suspensions would be helpful in tracking lease suspensions. By including a data field in the database to record the reasons for suspensions, BLM could better ensure that federal lands are not being inappropriately kept from development—potentially forgoing revenue—or from other valuable uses of public lands.
The approach BLM uses to monitor lease suspensions does not ensure consistent and effective oversight because BLM does not have procedures in place for monitoring. BLM state offices generally delegate responsibility for monitoring lease suspensions to their field offices. Officials from 12 selected field offices in two states with relatively large numbers of lease suspensions reported various frequencies in their monitoring of suspensions, ranging from every few months to rarely or not at all. In the absence of BLM monitoring procedures, field officials have discretion in how and when to monitor. By developing procedures for monitoring lease suspensions, including when to conduct monitoring efforts, BLM could better ensure that lease suspensions in effect are warranted.
What GAO Recommends
To better ensure that federal lands are not being inappropriately kept from development, GAO is making four recommendations, including that BLM record the reasons for lease suspensions in its database and develop procedures for monitoring suspensions. Interior concurred with GAO's recommendations. |
Background
The rapid increase in the number of unaccompanied children (UAC) apprehended by the Department of Homeland Security (DHS) in 2014 led to the U.S. Agency for International Development's (USAID) assistance for reception and reintegration efforts in Central America's Northern Triangle. USAID's efforts, carried out by its implementing partner, the International Organization for Migration (IOM), have focused on children and family units, as they are considered the most vulnerable migrant populations. According to DHS, the number of UAC from any country who were apprehended at the U.S.-Mexico border rose from nearly 28,000 in fiscal year 2012 to more than 42,000 in fiscal year 2013, and to more than 73,000 in fiscal year 2014. Prior to fiscal year 2012, the majority of UAC apprehended at the border were Mexican nationals. However, nearly three-fourths of UAC apprehended in fiscal year 2014 were nationals of El Salvador, Guatemala, and Honduras. In fiscal year 2014, approximately 122,000 nationals (both children and adults) from the Northern Triangle countries were removed from the United States and returned to their home countries, according to DHS. That number decreased to approximately 75,000 in fiscal year 2017. For the number of nationals from El Salvador, Guatemala, and Honduras removed by DHS's U.S. Immigration and Customs Enforcement (ICE) from fiscal years 2014 through 2017, see figure 1.
In addition to migrants returned from the United States, the Northern Triangle countries also receive migrants returned from Mexico. In 2016 and 2017, the number of returnees from Mexico to these three countries was greater than the number returning from the United States, according to information from the countries' migration directorates. In 2017, however, the number of returning migrants from both the United States and Mexico decreased in all three countries, as figure 2 shows.
We have previously reported that the causes of migration from El Salvador, Guatemala, and Honduras to the United States are multiple and include the lack of economic and job opportunities, gang-related violence and other insecurity issues, high poverty rates and poor living conditions, the desire for family reunification, and perceptions of U.S. immigration policy.
A number of U.S. agencies provide assistance to these countries to address some of these socioeconomic issues, such as violence and poverty. For example, USAID, the Department of State (State), and DHS have programs providing assistance in areas such as economic development, rule of law, citizen security, law enforcement, education, and community development, funded through the U.S. Strategy for Central America, including the Central America Regional Security Initiative. To support efforts to prevent migration, such as targeting human smuggling organizations and developing public information campaigns, the U.S. embassies in El Salvador, Guatemala, and Honduras coordinate through interagency working groups. For more information on these coordination activities, see appendix II.
USAID Provides Funding for Assistance to Reintegrate Migrants Returning to El Salvador, Guatemala, and Honduras
USAID has provided funding for short- and long-term assistance to migrants returning to El Salvador, Guatemala, and Honduras, including assisting returning migrants upon arrival at points of entry and reintegrating them into their home countries. USAID provided approximately $27 million to IOM through three program contribution agreements to conduct these efforts. These efforts are in various stages of development in all three countries. Host governments face challenges in their efforts to reintegrate migrants, including limited resources and a lack of employment opportunities.
USAID Provides Funding for Short- and Long-Term Assistance for Returning Migrants
USAID has provided funding for short- and long-term assistance to migrants returning to El Salvador, Guatemala, and Honduras, whether they are returning from the United States or Mexico. Short-term efforts assist returning migrants arriving at reception centers in their home countries. These efforts involve processing migrants upon arrival at the points of entry and generally providing post-arrival assistance, such as food, transportation, hygiene and school kits, and clothes within the first two days after returning (see fig. 3). Long-term efforts focus on reintegrating migrants into their home countries. Reintegration seeks to restore migrants into society and to reestablish economic, psychological, and social ties.
USAID has assisted migrants returning to their home countries since 2014 through three program contribution agreements, implemented by IOM.

1. Reception/In-Processing and Repatriation Assistance to Returning Families and Unaccompanied Children in the Northern Triangle of Central America Agreement (also known as Post-Arrival and Reception Assistance or PARA), July 2014–April 2016. This agreement between USAID and IOM—established in response to a rapid increase of UAC from El Salvador, Guatemala, and Honduras arriving at the U.S. border in 2014—was intended to, among other things, achieve the overall objective of contributing to the "dignified, holistic, and sustainable" return of children and families in the Northern Triangle. According to the program description, IOM viewed infrastructure improvements as a key component of the program. For example, IOM included the renovation of reception centers and shelters among the activities that might be carried out to meet one of the program goals, which related to supporting the countries' capacities to process and assist returnees at points of entry and migrant shelters. Other goals included providing capacity building to key government agencies, non-governmental organizations, and other partners offering assistance to returning migrants, and improving migration data collection and information sharing among governments, donors, humanitarian agencies, and civil society.

2. Northern Triangle Migration Information Initiative Agreement (NTMI), September 2015–March 2018. This second agreement between USAID and IOM focused on improving the quality, reliability, and uniformity of migration information. According to the program description, the program would address the need for improved migration information to contribute to the development of more strategic public policies among institutional counterparts involved in the reception, assistance, and reintegration of returning migrants. The program's goal was to strengthen the governments' capacity to manage, collect, and analyze migration information to support humanitarian action and protect vulnerable populations in the Northern Triangle countries. This effort also involved taking steps to develop and strengthen data systems to register returning migrants' information.

3. Return and Reintegration in the Northern Triangle Agreement, June 2016–June 2019. This third agreement between USAID and IOM was intended to continue to promote and ensure more humane and dignified assistance to, and sustainable reintegration of, migrants upon return to communities of origin by strengthening the capacities of key stakeholders to assist, care for, and protect returning UAC and migrant families in the Northern Triangle countries. According to the agreement, the program would address areas such as expanding the range of government-supported opportunities for returning migrants while providing high-quality services during the reintegration process at the local level.
USAID Provided Approximately $27 Million for Assistance to Returning Migrants in Fiscal Years 2014 through 2016
USAID provided approximately $27 million for assistance to IOM through the three program contribution agreements. Once a program contribution agreement is signed and the funds are disbursed to IOM, USAID considers the funds expended for its purposes. As of April 2018, IOM had expended all the funds for the first two agreements, $7.6 million and $2.5 million, respectively, and had expended $7.1 million, or 42 percent, of the $16.8 million for the third.
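As a quick check, the 42 percent figure follows directly from the amounts cited above:

\[
\frac{\$7.1\ \text{million}}{\$16.8\ \text{million}} \approx 0.42 = 42\ \text{percent}
\]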
For all three agreements, from fiscal year 2014 through April 2018, IOM expended about $9.1 million in El Salvador, about $5.4 million in Honduras, and about $2.7 million in Guatemala, according to IOM. (See figure 4.)
Asociación de Retornados Guatemaltecos (ARG)

The civil society organization Asociación de Retornados Guatemaltecos (ARG) begins its work with returning migrants from the United States at the Guatemalan Air Force Base Reception Center. Members of ARG are returned migrants themselves who started the association in 2013 because they understood the experiences of returning migrants and wanted to help people in similar situations by providing a support network. According to an ARG volunteer and our observations, at the reception center, an ARG volunteer greets every returning migrant as they come through the door. After migration authorities process the returning migrants and provide them a snack, an ARG volunteer helps them make a domestic or international telephone call to their family members. Once the migrants have received any belongings and exchanged money, ARG volunteers offer them clothing, help with various tasks—such as receiving money through wire transfers or registering them for a new identity card—and, if necessary, purchase bus tickets for them to return to their communities of origin. ARG volunteers stay until all the returning migrants are served, and, if the migrants are fearful of returning to their communities, accompany them to the Casa del Migrante, a shelter that provides protection assistance. The volunteers told us that they maintain a database to track the returned migrants, later call the returned migrants to make sure they arrived safely in their communities, and offer them assistance in getting certified in skills they may have acquired abroad, such as construction work or speaking English. ARG also connects returned migrants with vocational or training opportunities and potential scholarships.

Instituto Salvadoreño del Migrante

The Inter-American Foundation (IAF) awarded the Instituto Salvadoreño del Migrante $49,740 to expand a network of migrant returnees to facilitate reintegration and provide information on locally available resources to returnees, such as credit access, government training programs, market information, and contracting opportunities. The grantee also developed a working group to discuss with government officials and the private sector the health issues returnees face. Even though the grant has ended, the Instituto Salvadoreño del Migrante's efforts continue with funds from other donors, according to IAF.
Efforts to Assist Returning Migrants Are in Various Stages of Development in the Three Countries
Efforts to assist reception, migrant-related data collection, and reintegration are in various stages of development in all three countries. IOM, with U.S. assistance, has renovated seven reception centers and shelters in El Salvador, Guatemala, and Honduras and improved the collection of migration data to understand the characteristics of the population returning to their countries to inform decisions about allocating resources needed for reintegration. However, in all three countries the use of migration information varies and reintegration efforts are just beginning.
Reception Centers and Post-Arrival Assistance
El Salvador has one reception center for returning migrants; Guatemala has three reception centers and two shelters; and Honduras has three reception centers. See figure 5 for the locations of these reception centers and shelters as well as points of entry.
We observed that at the reception centers in the three countries, returning migrants go through a similar reception process. The process may differ slightly depending on the country and whether the returning person is an adult, part of a family unit, or UAC. See figure 6.
IOM has assisted in the renovation of the countries’ reception centers and shelters and provided post-arrival assistance to returning migrants. Country-specific information on these facilities follows.
El Salvador has one IOM-supported reception center, called the Dirección de Atención al Migrante (DAMI), or Directorate of Assistance to Migrants, informally known as La Chacra. IOM completed its efforts to renovate the center in February 2016, increasing its capacity to receive up to 200 returning migrants at a time. The center serves adults, UAC, and family units returned by chartered bus from Mexico or on chartered flights from the United States. Post-arrival assistance is provided at the center. See figures 7 and 8.
Guatemala has three reception centers and two shelters for returning migrants. IOM renovated the two shelters in 2015 and one of the reception centers in 2017. IOM also provided information technology equipment for one reception center and plans to renovate another reception center in 2018. See figure 9.
The three reception centers include:
Sala de Recepción de Niñas, Niños y Adolescentes Migrantes no Acompañados y Unidades Familiares (Reception Center for Unaccompanied Migrant Children and Family Units), La Aurora International Airport, Guatemala City. This center, which opened in May 2017, serves UAC and family units returning by commercial flights from Mexico or the United States. The center provides post-arrival assistance and has areas for immigration processing, psychological and social assistance, and breast-feeding. It also has a medical clinic and a play area for children. See figure 10.
Centro de Recepción de Retornados de la Fuerza Aérea Guatemalteca (Reception Center for Returnees at Guatemalan Air Force Base), Guatemala City. This reception center serves adults, UAC, and families returning by chartered flights from the United States, and provides post-arrival assistance to them. See figure 11. Adults traveling without children are processed separately from families. In July 2015, IOM opened a small remodeled area of the center that receives returning migrant families and provides post-arrival assistance.
Centro de Recepción de Retornados en Tecún Umán (Reception Center for Returnees at Tecún Umán), Tecún Umán. This reception center, on the border with Mexico, serves adults, UAC, and family units returning by chartered bus from Mexico. IOM has supported the center mainly by providing IT equipment in October 2016 to process returning migrants. The children go through immigration processing at Tecún Umán and are then moved to Casa Nuestras Raíces Quetzaltenango by bus, accompanied by a government social worker to ensure the protection of UAC until a parent or guardian picks them up.
The two shelters include:
Casa Nuestras Raíces Guatemala (Our Roots Shelter, Guatemala), Guatemala City. This shelter serves UAC returning by chartered flights from Mexico and commercial or chartered flights from the United States who have been processed at either La Aurora or Fuerza Aérea Guatemalteca. IOM renovated this shelter in August 2015 and supports post-arrival assistance for returning migrants and their relatives who come to take them home. See figure 12.
Casa Nuestras Raíces Quetzaltenango (Our Roots Shelter, Quetzaltenango), Quetzaltenango. This shelter serves UAC returning by chartered bus from Mexico. UAC are processed first at Tecún Umán and then transported to Quetzaltenango. Similar to the shelter in Guatemala City, IOM renovated this shelter in August 2015 and provides post-arrival assistance.
Honduras has three reception centers. IOM renovated two of the reception centers and upgraded the third. See figure 13.
Centro de Atención al Migrante Retornado SPS (SPS Assistance Center for Returned Migrants), San Pedro Sula. This reception center serves adults returning by chartered flights from the United States. IOM completed renovating and equipping this center in February 2016. It provides post-arrival assistance to returning migrants.
Centro de Atención para Niñez y Familias Migrantes Belén (Belén Assistance Center for Children and Families), San Pedro Sula. This center serves UAC and family units returning by chartered bus from Mexico or commercial flights from Mexico or the United States. IOM completed renovating and equipping the center in February 2016. Post-arrival, psychological, and medical assistance is also provided at Belén.
Centro de Atención al Migrante Retornado Omoa (Omoa Assistance Center for Returned Migrants), Omoa. This center serves adults who are returned by chartered bus from Mexico. IOM provided hygiene, sanitation, and water upgrades to the center and, according to IOM, plans to make electrical improvements and construct a sports field, sidewalks, and a parking area; some of these efforts began in September 2018.
Migration Data Collection
IOM began assisting the countries in September 2015 with the collection and use of migration data, with funding from USAID through its NTMI agreement. Since September 2015, all three host governments have collected and digitized migration data. The governments use the data to understand the characteristics of the population returning to their countries so they can make decisions about allocating resources needed for reintegration, according to IOM.
To facilitate the collection of relevant information, IOM helped each government in the three countries develop its own form to gather the information needed by the various ministries involved in reception and reintegration efforts. According to IOM, this uniform questionnaire has promoted data sharing among institutions, reduced interviewing times, and helped ensure that returning migrants are not required to provide the same information multiple times. In addition to counting the number of returned migrants and recording where they are returning from, each country now collects detailed information about each migrant. For example, the Honduran government collects information on an individual’s reason for migrating, labor skills, place of birth, and education level.
Through the NTMI agreement, IOM also provided government agencies in all three countries with information technology equipment, software, and training to collect and analyze relevant information about returning migrants. For example, IOM developed the Honduran government’s data repository and official website for the agency responsible for the registration and publication of data on returning migrants. In Guatemala, IOM is helping the migration directorate implement a system to use fingerprints to identify returning migrants who had migrated previously and returned, providing information on recidivism. IOM has also trained personnel involved with migrant programs in all three countries on how to use and analyze this information.
Reintegration Efforts
El Salvador, Guatemala, and Honduras are at different stages in establishing reintegration efforts, and each government has different priorities, according to IOM. While some reintegration efforts began earlier, IOM's main reintegration efforts began under the third contribution agreement with USAID in 2016, focusing on expanding the range of government-supported opportunities for returning migrants while providing high-quality services during the reintegration process at the local level. Reintegration efforts in all three countries seek to support returnees with resources in their home communities, including psychological and social services, vocational and employment training, employment opportunities, and upgrades to public spaces. Civil society organizations support some of these reintegration efforts. USAID, through its agreements with IOM, assists these reintegration efforts in a context in which the three host countries face challenges, such as limited resources and limited employment opportunities, that affect implementation.
Reintegration Efforts in El Salvador

El Salvador is furthest along in establishing reintegration efforts, at both the national and municipal levels. These efforts focus on the entire spectrum of returnees—children, adolescents, and adults—by providing education, psychological, and social assistance to children and families, and reintegration information to adults. At the national level, IOM has been working since November 2015 with the government of El Salvador's Assistance Centers for Returned Migrant Children and Adolescents and its information centers that support reintegration services for adults, called Ventanillas de Atención al Migrante, or Migrant Assistance Windows (commonly known as Ventanillas).
The Assistance Centers for Returned Migrant Children and Adolescents are located in four municipalities, all of which have high numbers of returning migrants, including children and adolescents. These centers provide returning migrant children and families with social services and case management to facilitate their economic and social reintegration. These services include psychological and social assistance and crisis intervention; legal assistance, including safety and protection; health services, including nutrition and immunizations; educational support to ensure children and adolescents are incorporated into the formal education system; and referral services.
The Ventanillas are information centers supporting reintegration in the five municipalities with the highest numbers of returning migrants. Each center has one person who is responsible for providing returned migrants with assistance such as help with employment, school enrollment, training opportunities, and lines of credit. IOM equipped the centers with office furniture and such items as storage cabinets, water coolers, air conditioners, and telephones.
At the municipal level, IOM is also assisting other government initiatives in four communities that have high numbers of returned migrants and that the government has prioritized under its Plan El Salvador Seguro (Safe El Salvador Plan). Specifically, IOM is working with municipal governments and community organizations to improve public spaces with small-scale infrastructure projects; raise awareness and knowledge of migration and reintegration at the community level among local governments, communities, and community leaders; and provide psychological and social assistance.
The infrastructure projects are meant to create safe, public spaces to build social cohesion within communities. For example, in two areas in Zacatecoluca that we visited, IOM supported an effort to rebuild a sports complex, which included basketball and soccer fields, and a playground and community center. In Usulután, IOM supported the renovation of the municipal gym (see fig. 14). In January 2018, IOM also began providing technical assistance to the Zacatecoluca municipal government to help it obtain feedback from the community on services needed and to work with local service providers to facilitate assistance to beneficiaries, among other things.
Guatemala also has government reintegration efforts at both the national and municipal levels. The current reintegration activity underway is the municipal-level Centro de Formación Quédate (Stay Here Vocational Training Center), supported by IOM. Implemented by the Secretariat for Social Welfare, this technical and vocational center provides certified vocational courses and alternative education opportunities for youth, including returned UAC and host community adolescents. While the Secretariat for Social Welfare began operations at the center in 2015, IOM's support started in July 2018. In addition, Guatemala's President and First Lady launched a national strategy in March 2017 that aims to prevent migration and to care for returning Guatemalan migrants and their families. The strategy's goal is to consolidate all government agencies' activities and create a comprehensive system for returning migrants, including children.
Honduras, with support from IOM, has focused at the national level on improving and maintaining its reception centers, and at the municipal level on opening reintegration assistance centers. In addition, the Honduran First Lady has concentrated on UAC and their needs, such as prioritizing secure reunification. Honduras' efforts to link returned migrants, specifically families and UAC, with government services in the municipalities are focused on reintegration assistance centers. There are nine centers, with plans to open seven more by the end of 2018. The Belén Assistance Center, discussed earlier in this report, refers returning migrants to the reintegration assistance centers, according to a center official. The reintegration assistance centers then obtain information from the returning migrants about the assistance they are seeking and send it to one of 12 government agencies, such as the Ministries of Development and Social Inclusion, Education, or Health, and the Women's National Institute.
In addition to assisting government-sponsored reintegration efforts, IOM supports civil society organizations in Honduras that provide reintegration services. In Honduras, we visited three civil society organizations whose programs work directly with returned UAC.
Casa Alianza. Casa Alianza provides reintegration support including psychological and social assistance, child protection services, and children’s rights advocacy for returnees as well as internally displaced persons. The organization worked in the Belén Assistance Center from 2014 to 2017 with returning UAC, according to Casa Alianza officials.
Mennonite Committee for Social Action. This organization’s Support for Returned Migrants Program began in 2014 and has various components including: (1) vocational training, (2) psychological assistance, (3) complementary workshops on life skills, and (4) humanitarian assistance. The program focuses on youth between ages 15 and 25 returning to the San Pedro Sula area.
Collaboration and Effort Association. This program in Tegucigalpa focuses on providing returned children a safe place to live, teaching them responsibility and cooperation, and supporting their education. Many of the adolescents are returned UAC, and all beneficiaries must themselves help run the association’s programs.
Host Government Challenges Affect Reintegration Efforts

USAID, through its agreements with IOM, is providing assistance to host countries where various challenges affect reintegration efforts. Some of these challenges, such as limited employment opportunities and resource constraints, are long-standing in nature.
Limited resources: With limited resources dedicated to reintegration efforts, the centers can connect few returning migrants with the appropriate government services. For example, at a Ventanilla we visited in El Salvador, just one official—who has no vehicle—is responsible for providing services to all returning migrants in an area roughly one-fifth the country's overall size and containing roughly one-fifth of its returning migrants. Similarly, at the Honduran reintegration assistance center we visited, there was only one staff member and no psychologist. As of July 2018, the Honduran government had opened 9 of the 16 planned reintegration assistance centers; it plans to open the remaining ones by the end of 2018.
Few training and employment opportunities: There are limited training and employment opportunities for returning migrants. One of the primary reasons cited for migration is the lack of employment opportunities in the countries. Additionally, the employment opportunities that are available may not fit the migrants’ skills. For example, only migrants with sufficient English skills can be placed in call centers. At the same time, the training programs being offered at a particular time may not interest the migrant. Further, the few opportunities available may not be offered in the locations where migrants can readily access them. Finally, an official from a multilateral organization working in the region raised the concern that many of the training opportunities offer similar skills, such as training to be a barber, beautician, or mechanic, and the market can support only so many people in these professions.
Need for individualized services: Each returning migrant has a different set of needs, skills, and interests, but providing customized assistance takes time and resources. Staff at reintegration assistance centers we visited told us that they try to match migrants with the services or opportunities they need. For example, a returning migrant may be a single mother with good English skills and be referred to services and opportunities based on that profile. Additionally, according to U.S. and Honduran government officials, large-scale reintegration efforts encounter the challenge of reintegrating migrants with different and individualized profiles.
Voluntary nature of seeking and finding assistance: Receiving reintegration assistance and services depends in part on the initiative and desire of the returning migrant. Returning migrants must seek assistance to receive reintegration services, and so must be aware of and connect with the reintegration assistance centers. In El Salvador, only about 7 percent of returning migrants requested help from the reintegration assistance centers in 2017; of those who requested assistance, however, 91 percent received it, according to El Salvador’s Ministry of Foreign Affairs. In both El Salvador and Honduras, the reintegration assistance offered by the government is publicized at the reception centers where migrants are processed upon their return. However, in El Salvador, a government official told us that migrants may not have the patience to wait to receive information after traveling and going through the reception process.
Termination of TPS May Increase the Need for Reception and Reintegration Services in El Salvador and Honduras

With the Secretary of Homeland Security's decisions to terminate Temporary Protected Status (TPS) in the United States for nationals of El Salvador and Honduras, as of September 9, 2019, and January 5, 2020, respectively, both countries face the possibility of a significant influx of returnees—as many as 262,500 Salvadorans and 86,000 Hondurans, along with their U.S. citizen children. Reintegration efforts may also be complicated by the different backgrounds and needs of returning migrants who benefited from TPS. According to State officials, returning migrants who had TPS are likely to be older, with more skills and education, than those who left the country more recently. Successful strategies to reintegrate former TPS beneficiaries will be different from those currently in place. TPS beneficiaries may also have children who are U.S. citizens with different needs than UAC. During our country visits in March 2018, State officials indicated that official planning for the return of former TPS beneficiaries was either just beginning, as in El Salvador, or had not begun, as in Honduras, because an official decision on the termination of TPS for Hondurans had not yet occurred. U.S. officials, though, were meeting with their counterparts to discuss the challenges of reintegrating TPS beneficiaries. In both El Salvador and Honduras, U.S. officials have encouraged the government to address the challenges of reintegrating former TPS beneficiaries. For example, in February 2018, USAID's mission in El Salvador convened a one-day conference on current efforts to prevent migration and to plan for the return of migrants with TPS. At the same time, U.S. government officials also stated that some or most TPS beneficiaries might choose to stay in the United States without lawful status, attempt to adjust their status, or move to a third country rather than return to their home countries.
Leadership turnover and guidance: Elections in the three countries, and the subsequent turnover of government officials, also affect implementation, according to IOM. Furthermore, in Guatemala, leadership turnover in key agencies has affected what the government can achieve in terms of reintegration of returning migrants, according to IOM officials. Both the Secretariat of Social Welfare and the Directorate of Migration have had various leaders over the past few years. The government of Guatemala has not yet determined which institution is responsible for reintegration activities, and a national plan has not yet been developed, which complicates reintegration efforts, according to IOM.
USAID Assessed Reception and Data Collection Efforts, Which Were Improved, but Effectiveness of Reintegration Efforts Remains to Be Determined
USAID assessed the effectiveness of its reception and migrant-related data collection efforts through site visits, meetings with IOM, and report reviews. This assistance has improved the capacity of the governments of El Salvador, Guatemala, and Honduras to provide reception services to returning migrants and to collect and utilize migration information. USAID has not yet assessed the effectiveness of reintegration efforts conducted to date, but plans to sign an agreement by the end of December 2018 for a new reintegration program which will include a monitoring and evaluation component.
USAID Assessed the Effectiveness of its Reception and Data Collection Efforts through Program Monitoring and Report Reviews
Beginning in October 2014, after signing the first agreement with IOM, USAID monitored program implementation and assessed the effectiveness of IOM's efforts to assist returning migrants and improve migration information through site visits, regular meetings with IOM, and reviews of IOM reports. USAID and IOM officials noted that USAID's periodic site visits to IOM projects and frequent communications between the two parties helped USAID track progress and results and make needed adjustments in a timely manner. In a memorandum approving the third program, USAID's mission in Honduras stated that IOM "responded quickly and satisfactorily to any concerns." IOM, in consultation with USAID, adapted activities as needed for each country, such as by rebidding a contract to renovate a reception center in Guatemala City in response to corruption allegations. During our site visit in March 2018, we observed USAID officials' familiarity with specific details related to IOM's activities and the close working relationship between USAID and IOM staff.
In addition, USAID regularly reviewed the activity and progress reports provided by IOM, which included weekly, monthly, and quarterly reports. According to USAID officials, these activity and progress reports served as the basis for conversations with IOM about program progress and assessment. The reports included information such as an overview of achievements, activity updates by country, and challenges and actions taken. For example, the reports detailed information such as the number of returning migrants provided with post-arrival assistance, including food or hygiene kits, as well as progress on larger projects such as constructing small-scale, community-based infrastructure or renovating reception centers. IOM also explained challenges encountered and plans for overcoming them, such as building strong relationships with new key government personnel when there was turnover in Guatemala and Honduras. IOM also provided information to USAID through periodic, two-page information sheets that summarized its activities in a certain geographical area, such as a municipality in El Salvador, or with a certain program, such as NTMI in Honduras.
As part of the agreements with USAID, IOM agreed to conduct mid-term and final evaluations of the three programs. IOM produced written mid-term and final evaluations for the first program (PARA) based on reviews of documents, field visits, and interviews with government counterparts and USAID, among others. The final evaluation highlighted the program's achievements, challenges, effective practices, lessons learned, and recommendations. For example, it noted IOM's strong working relationship with USAID and host government agencies, as well as the need to conduct high-quality assessments in each country during program design. Instead of a written mid-term evaluation for the second program (NTMI), IOM held an internal workshop, which a USAID representative attended. According to IOM officials, IOM plans to present USAID with a mid-term evaluation for the Return and Reintegration program and a final evaluation for the NTMI program, although both have been delayed due to staffing issues.
USAID also assessed IOM’s programs during internal USAID meetings. For example, according to USAID officials, when USAID considered IOM’s requests for no-cost extensions for the PARA and NTMI agreements, USAID assessed the progress and challenges of the activities implemented as part of the agreements and whether they were fulfilling their goals. USAID also discussed the effectiveness of IOM’s programs at a strategic level during portfolio reviews and program performance reports, according to USAID officials. USAID officials told us that because the first program with IOM was productive and had good results, USAID also funded the second and third programs through program contribution agreements. In the memorandum approving the third program, USAID’s mission in Honduras stated that “IOM has been a very effective partner in the first Program Contribution” and noted that IOM collaborated with USAID, the host governments, and other donors to design the follow-on program focused on reintegration efforts. The memo also stated that IOM has “sound management systems and controls, and has long been an effective partner” of the U.S. government.
U.S. Assistance Has Helped Improve Reception Centers and Data Collection
Reception Center Improvements
With U.S. assistance, IOM improved the capacity of the Northern Triangle governments to provide reception services to returning migrants and to collect migration information. IOM renovated the region's seven reception centers and shelters currently in use and provided post-arrival assistance such as hygiene kits and medical services. The final evaluation for IOM's first program indicated that IOM designed the renovations in consultation with the host government agencies to meet their needs and to provide a welcoming space for returning migrants. During our site visit in March 2018, we visited five reception centers and one shelter in the three countries, including the Belén Assistance Center in Honduras, which we had visited in March 2015, prior to its renovation. The Belén Assistance Center renovations were extensive, covering the dining areas, kitchen, bathrooms, dormitories, play spaces, clinics, and counseling areas, as well as a conference room used for facilitating meetings and workshops among government entities and partners. We observed the improved facilities as well as the processing of returning migrants (see fig. 15).
Likewise, IOM extensively renovated the Casa Nuestras Raíces Shelters in Guatemala City and Quetzaltenango, Guatemala, including the kitchen, bathrooms, dormitories, play spaces, clinics, and counseling areas.
In addition to improving infrastructure, IOM provided the governments with post-arrival assistance such as hygiene kits, clothing, meals, buses, and medical, psychological, and social support for returning migrants. For example, IOM reported that, from 2014 through 2017 in all three countries, it supplied returning migrants with a total of nearly 60,000 hygiene kits, nearly 34,000 items of clothing, and more than 75,000 meals.
In fiscal year 2017, IOM provided post-arrival assistance to over 29,000 returning migrants, according to IOM. Additionally, IOM provided the host governments with 12 buses to transport returning migrants from the airport to the reception center and from the reception center to the bus station to return to their communities. U.S. and host government officials in the three countries noted that, with USAID and IOM’s assistance, the reception of returnees has improved. For example, IOM expanded and renovated the DAMI Reception Center in San Salvador, adding separate areas for the various ministries involved so that returning migrants can receive specialized services such as a medical examination, psychological and social assistance, and the beginning of job placement assistance. The center also provides integrated child protection and social services. During our site visits to the reception centers and shelter in Guatemala City and San Pedro Sula in March 2018, we observed staff distributing food to returning migrants upon their arrival.
Through technical assistance and other support, IOM also helped build the capacity of host government institutions as it relates to the reception process and their ability to provide better reception services. For example, IOM worked with government agencies to develop protocols and procedures for receiving returned migrants and trained reception staff on issues such as human rights. At the reception centers in all three countries, multiple government agencies are now working together to assist returning migrants, according to IOM.
Migration-Related Data Improvements
With IOM's support, the governments of the Northern Triangle have improved their capacity to collect data about returning migrants. According to USAID, the technical assistance and support provided by IOM through the NTMI agreement strengthened the governments' capacity to collect, manage, analyze, and share migration information. Prior to these USAID-assisted efforts, data on returning migrants were limited in all three countries, and the information produced was not readily available for use by other government agencies, according to USAID. Since 2015, with IOM equipment and training, all three countries have moved toward uniform, more detailed data collection systems. In Honduras, for instance, technical assistance from IOM enabled the creation of a single data repository, which provides migration data for all agencies to use.
IOM has trained staff of the countries' migration directorates to use the registration systems for returning migrants and has trained personnel of other government agencies on how to analyze and use the data produced by the migration directorates. Each government now knows the number of migrants returning to the country—information that was not available previously. (See fig. 2 earlier in this report.) In addition, the governments now have such information as the causes of migration reported by returnees, the locations from which the migrants are returning, and the locations to which they are returning.
For example, in El Salvador, approximately 27 percent of children and adolescent migrants returning in 2017 said they left because of violence, approximately 27 percent left to reunify with families, and approximately 43 percent left for economic reasons, according to IOM’s analysis of information from El Salvador’s Directorate of Migration. Additionally, according to USAID officials, IOM trained the staff at El Salvador’s General Directorate of Statistics and Census and the agency is now conducting its own surveys of migrants.
According to USAID and IOM officials, the Northern Triangle governments are using the expanded information about returning migrants to make informed decisions, design public policies, and develop programs to provide reintegration assistance. Prior to USAID and IOM entering into the NTMI agreement, no official statistics were available that allowed for evidence-based decisions or public policy design. Now, during the registration process in Honduras, for instance, returning migrants are asked what trade they would like to learn, which can inform host government planning. With information about the reasons migrants left the country, governments can also refer migrants to existing programs or create programs to address those issues, such as developing training and employment opportunities. According to IOM and USAID officials, examples of how governments use this information include the following.
In El Salvador, multiple government institutions use returning migrant information to design specific programs for this population and redirect programming if necessary. The Ministry of Labor, for instance, uses this information to design entrepreneurship programs. Relevant migration information is also shared with committees of the Alliance for Prosperity Plan.
In Honduras, returning migrant information is used by government institutions for planning, budgeting, and monitoring reception, assistance, and reintegration activities. For instance, the First Lady of Honduras’ Task Force for Child Migrants bases its strategy for the reception centers on returning migrant data.
Detailed information on returning migrants in these countries has also been useful for U.S. government officials and has informed USAID’s strategy and programming. According to a USAID official in Guatemala, the new information has been integral to USAID’s ability to evaluate migration issues in a more informed manner. For example, USAID officials in Guatemala told us that much of their programming is based in the Western Highlands because they now have data showing most migrants come from this area of the country. In addition, USAID’s mission in El Salvador convened a conference in February 2018 to discuss the termination of Temporary Protected Status for Salvadorans and used information gathered by El Salvador’s Directorate of Migration about reasons for migration and returnees’ profiles to discuss possible reintegration strategies for this population.
USAID Has Not Yet Assessed the Effectiveness of its Reintegration Efforts
USAID has not assessed the effectiveness of reintegration efforts conducted to date. Reintegration is a long-term process, and many of the reintegration assistance programs are just beginning. Specifically, El Salvador began opening five information centers supporting reintegration in November 2015, Honduras opened nine reintegration assistance centers in 2017 and early 2018, and Guatemala's one center began assisting returned adolescents in July 2018. Given the number of returning migrants and the nascent reintegration services, relatively few migrants have benefited from services offered by these centers. For example, in El Salvador, only about 1,700 of nearly 26,500 returning migrants (about 6 percent) were connected with government reintegration services through the centers in 2017.
In addition, determining the effectiveness of reintegration efforts is challenging because of the difficulties of tracking migrants once they return to their communities and of accounting for the various external factors that influence an individual’s decision to migrate again. USAID, IOM, and host government officials cited the challenges of tracking and following up with returned migrants once they leave the reception centers. Although the countries are beginning to offer reintegration assistance, through the information and municipal assistance centers in El Salvador and Honduras, there are currently no systems in place to track migrants when they return to their communities. U.S. government officials also noted there are multiple external factors that may influence an individual’s decision to migrate again, some of which cannot be addressed through reintegration assistance programs. For example, the desire to reunify with family may affect an individual’s decision, as well as the country’s economic conditions and levels of violence and insecurity.
Although USAID has not yet assessed the effectiveness of reintegration efforts, it plans to monitor and evaluate efforts. As part of the third program, IOM plans to evaluate each country’s reintegration assistance projects. In addition, by the end of December 2018, USAID expects to sign a 3-year agreement with a Public International Organization (PIO) for a new program which will, among other things, continue assisting the host governments’ efforts to reintegrate returning migrants. According to the USAID memorandum describing the new program, it will be underpinned by a monitoring and evaluation plan, and is expected to result in, among other things, a strengthened focus on monitoring and evaluation systems to track reintegration at the community level. Additionally, according to the memorandum, the new program will use a cost-type agreement which is structured such that the PIO will be reimbursed or advanced funds for costs of goods and services to achieve the agreement purpose.
Agency Comments and Our Evaluation
We are not making any recommendations in this report. We provided a draft of this report to DHS, IAF, State, and USAID. All the agencies provided technical comments, which we incorporated as appropriate. USAID and IAF provided written comments which we have reprinted in appendices III and IV.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report’s date. At that time, we will send copies to the appropriate congressional committees and the Administrator of the U.S. Agency for International Development, the President of the Inter-American Foundation, and the Secretaries of Homeland Security and State. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.
If you or your staff have any questions about this report, please contact me at (202) 512-7141 or [email protected].
Appendix I: Objectives, Scope, and Methodology
This report examines (1) the U.S. Agency for International Development’s (USAID) efforts to assist the reception and reintegration of migrants from El Salvador, Guatemala, and Honduras into their home countries since fiscal year 2014; and (2) what is known about the effectiveness of these efforts. In addition, we reviewed how U.S. agencies have coordinated efforts to assist the reintegration of returning migrants.
To examine USAID's efforts to assist the reception and reintegration of returning migrants from fiscal year 2014 through fiscal year 2017 in El Salvador, Guatemala, and Honduras, we reviewed USAID's three program contribution agreements with the International Organization for Migration (IOM). We also reviewed grant agreements for Inter-American Foundation (IAF) projects in El Salvador and Guatemala. In addition, we obtained data from USAID, the Department of State (State), and IAF on agency funding to El Salvador, Guatemala, and Honduras from fiscal years 2014 through 2017. We assessed the reliability of USAID expenditure data by reviewing data from USAID's Phoenix system for the three contribution agreements. We determined these data to be sufficiently reliable for reporting the amount of funding U.S. agencies expended on reintegration programs. We also reviewed IOM expenditure data from fiscal year 2014 through April 2018. We determined these data were sufficiently reliable to illustrate the general scale of IOM's expenditures. Additionally, we reviewed IOM program reporting documents detailing the status of the projects, including weekly, biweekly, and monthly progress reports and project presentations related to renovations, information management, and reintegration efforts.
During our March 2018 site visit, we interviewed USAID, State, IAF, and IOM officials in all three countries regarding the status of the projects being implemented under the contribution agreements or grants, and we met with host government officials to discuss these projects. We interviewed representatives from nongovernmental organizations in the three countries to learn how their work supports reintegration. We conducted five site visits to reception centers, one in El Salvador, two in Guatemala, and two in Honduras, where we observed the reception process, and we visited one shelter in Guatemala City, Guatemala. We selected the locations to visit based on the location of the majority of reception centers and shelters in the countries. In Honduras, we met with unaccompanied children (UAC) at three centers operated by different nongovernmental organizations with IOM support, where we discussed their reasons for making the journey to the United States and how the programs were assisting their reintegration. Spanish-speaking GAO staff primarily conducted the interviews, and GAO contracted with State for interpreters to help facilitate the interviews when necessary. We also interviewed USAID, State, and IAF officials in the United States who are responsible for these programs.
To determine the number of migrants returned to El Salvador, Guatemala, and Honduras, we reviewed and tabulated IOM data from calendar years 2015 to 2017. We did not review 2014 data because IOM's effort had not yet begun. To determine the number of people removed from the United States, we reviewed and tabulated Department of Homeland Security (DHS) data from fiscal years 2014 through 2017. We assessed the reliability of IOM migration data on the number of returnees, and of DHS data on people removed, by reviewing documents and interviewing knowledgeable agency officials and host government officials about how the data were produced, selected, and checked for accuracy. We determined the IOM data to be sufficiently reliable to provide background information on the number of migrants returning to the three countries. We determined the DHS data were sufficiently reliable for reporting on the number of removals of migrants from the United States to El Salvador, Guatemala, and Honduras from fiscal years 2014 through 2017. The data on the number of Temporary Protected Status (TPS) beneficiaries are from DHS reporting in the Federal Register, which we determined to be sufficiently reliable for reporting the approximate number of TPS beneficiaries.
To examine how USAID assessed the effectiveness of its assistance for reintegration efforts in El Salvador, Guatemala, and Honduras, from fiscal years 2014 through 2017, we reviewed IOM’s contribution agreements, USAID’s evaluation policies for the agreements, country strategy documents for each country, and regional planning documents. We also interviewed USAID officials. To gather migration related information and requirements, we reviewed the U.S. Strategy for Central America, the associated quarterly reporting cables, and State’s Justification Memoranda for releasing foreign assistance to Central America. During our March 2018 site visit, we also interviewed USAID and IOM officials at overseas locations regarding their evaluation requirements and policy and how they monitored and evaluated the projects. We reviewed IOM’s reported progress towards achieving its goals by reviewing its mid-term and final evaluation reports for the first contribution agreement, and other reporting documentation containing progress updates for the other two contribution agreements.
During our site visit to El Salvador, we visited renovation projects that IOM supported, including two playgrounds, a municipal gymnasium, and a community center in Zacatecoluca and Usulután. In addition, we visited several reintegration initiatives, including an Assistance Center for Returned Migrant Children and Adolescents and one municipal information center supporting reintegration, both in El Salvador, and one municipal reintegration assistance center in Honduras. We selected reception and reintegration initiatives to visit based on proximity to San Salvador and San Pedro Sula. We also met with U.S. embassy officials, including the U.S. Ambassador to Guatemala and acting chiefs of mission in El Salvador and Honduras, to obtain their views on U.S. assistance for returning migrants and to understand what efforts were underway to address the impact of the termination of Temporary Protected Status for El Salvadoran and Honduran beneficiaries. We also interviewed IOM officials in El Salvador on the host nation's ability to reintegrate Temporary Protected Status beneficiaries, and we reviewed DHS and State documents regarding El Salvador and Honduras on this topic.
To examine interagency coordination, we obtained information on how USAID, State, DHS, and IAF headquarters offices with responsibility for overseeing assistance for reception and reintegration activities and country team operations in El Salvador, Guatemala, and Honduras have been coordinating with each other and with host country partners. During our March 2018 site visit, we interviewed USAID and IOM representatives at overseas locations to discuss their coordination efforts. We also interviewed USAID, State, and DHS officials in the United States who are responsible for these programs to obtain their views on interagency coordination. In addition, we obtained related information from IAF officials on coordination by email.
We conducted this performance audit from November 2017 to November 2018, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: U.S. Agencies Coordinate on Reception and Reintegration Efforts for Migrants Returning to El Salvador, Guatemala, and Honduras
Interagency Coordination on Reception and Reintegration Efforts Takes Place in All Three Countries
Interagency coordination on reception and reintegration efforts takes place at the U.S. embassies in El Salvador, Guatemala, and Honduras among the U.S. Agency for International Development (USAID), the Department of State (State), the Department of Homeland Security (DHS), and others. These efforts occur on a formal basis as part of interagency working groups focused on migration at the U.S. embassies in El Salvador and Honduras and on an ad hoc basis in Guatemala, where no formal migration working group exists. Additionally, the Inter-American Foundation (IAF) coordinates its reintegration efforts with USAID's missions in El Salvador and Guatemala, where it funds such projects.
The migration working group at the U.S. embassy in El Salvador, according to group officials, coordinates the efforts of the various U.S. agencies working on migration issues, in support of the U.S. embassy’s overall goal of curbing illegal migration to the United States. Members of the working group come from USAID; State, including various sections such as Political, Consular, and Public Affairs; DHS components, including U.S. Customs and Border Protection and U.S. Immigration and Customs Enforcement; and others as appropriate. According to these officials, the working group’s purpose is to have all the agencies at the U.S. embassy support and work together on migration-related issues, share information, and avoid duplication of effort. These officials told us the working group also responds to issues raised by State headquarters. For example, State officials in Washington asked the working group to assess the potential impact of former beneficiaries of Temporary Protected Status in the United States returning to El Salvador.
The migration working group at the U.S. embassy in Honduras initially focused on addressing the rapid increase of unaccompanied children (UAC) from El Salvador, Guatemala, and Honduras arriving at the U.S. border in 2014, according to group officials. Members of the working group include individuals from USAID, State, DHS, and others as appropriate. In September 2017, the working group, according to these officials, shifted its focus to reintegration, as well as issues related to internally displaced persons. Officials told us that the working group has spun off other working groups, including one to address the issue of beneficiaries with Temporary Protected Status returning to Honduras.
When we visited in March 2018, the U.S. embassy in Guatemala had no formal interagency migration working group, but it had several other working groups, including a law enforcement working group that meets once a week. According to working group officials, the Ambassador meets with the group if any sensitive issues regarding migration arise. In addition, the embassy has an economic and political working group focused on ports and trade that regularly discusses what is occurring at the ports of entry. Within these working groups, migration is discussed as needed, according to embassy officials we spoke with who participate in them. Members of the working groups include individuals from USAID, State, DHS, and others as appropriate.
IAF also coordinates its reintegration efforts with all three U.S. embassies, to ensure that (1) its projects are aligned with U.S. foreign policy objectives and (2) its grantees are appropriate. State provides feedback on IAF proposed grants, and the relevant U.S. embassies provide their approval. According to IAF officials, for each fiscal year since 2016, IAF has presented a detailed proposal to USAID's Latin American and Caribbean Bureau outlining its programming and funding objectives and its monitoring and evaluation plan for the Northern Triangle countries. The proposals are intended to facilitate USAID's transfer of funds to IAF, ensuring that community-led projects are included in the efforts it supports to advance the U.S. Strategy for Central America.
USAID Coordinates with Foreign Partners Mainly through the International Organization for Migration
USAID coordinates its assistance for reception and reintegration efforts with foreign partners, including host governments and international organizations, through the International Organization for Migration (IOM), which is the primary implementing partner for these efforts. USAID officials told us, however, that they engage with both the host government and other national and multilateral organizations when they identify a constructive opportunity.
Specifically, USAID’s three program contribution agreements with IOM addressed the benefits of partnerships and coordination with counterparts in government, civil society, multilateral organizations, and the private sector. Additionally, IOM noted it would engage with various stakeholders to coordinate responses and avoid duplication. For example, according to IOM, in 2014, it had already met with various private sector counterparts, such as Americares, and the civil society organizations Glasswing International and World Vision, to identify potential activities to build upon USAID-funded assistance before the initiation of the first program contribution agreement.
IOM also coordinated with various civil society, multilateral, and private sector organizations in the three countries in its implementation of the program contribution agreements. For example, in Guatemala, IOM officials stated that their coordination with the United Nations Population Fund enabled IOM to provide computer hardware, while the United Nations provided computer software to the Ministry of Foreign Relations to register UAC, thus avoiding duplication. IOM also coordinated with civil society organizations such as:
Fundación Cristosal, in El Salvador, which is working to implement a new registration system of victims of internal displacement.
Fundación Avina, in Guatemala, which assists returnees with social and labor reintegration.
Scalabrini Missionary Sisters, in Honduras, which operates the reception center at San Pedro Sula and provides returnees with bus tickets back to their communities of origin, if needed, as well as phone calls to reach their family members upon arrival.
During our site visit to Honduras in March 2018, we attended a roundtable meeting with representatives from the International Committee of the Red Cross, the Norwegian Refugee Council, and the United Nations High Commissioner for Refugees, where these representatives discussed coordination and efforts to avoid duplication at reception centers. For example, officials at the meeting stated that during the post-election protests in Honduras in late 2017 and early 2018, returning children and families could not access the Centro de Atención para Niñez y Familias Migrantes Belén (Belén Assistance Center for Children and Families) to be processed by IOM, so they were processed by the Honduran Red Cross at the Centro de Atención al Migrante Omoa (Omoa Assistance Center for Migrants). The organizations worked together and consistently communicated to ensure that there were no gaps in coverage for the returning UAC and families, according to officials at the meeting.
USAID officials told us that IOM programs helped strengthen the relationship between the U.S. government and the host country governments. The host government agency must formally request IOM’s assistance before IOM will provide support, and IOM officials said this letter of request is important to ensure institutional support for and cooperation with IOM’s programs. Additionally, IOM, USAID, and the host government agencies worked together to improve reception and reintegration services for returning migrants. For example, in Honduras in March 2018, USAID, IOM, the Ministry of Foreign Affairs, and the National Center for Social Sector Information met to discuss what additional information they would like to obtain about returning migrants and how to analyze the data.
The program contribution agreements also called for the establishment of coordination committees to facilitate coordination and consultation among their members. According to the agreements, the committees were to share information as needed to provide assistance, evaluate the effectiveness of the assistance, and otherwise share relevant information. The committee meetings, according to IOM officials, were held regionally among representatives of IOM and the USAID missions under the first program contribution agreement, Repatriation Assistance to Returning Families and Unaccompanied Children in the Northern Triangle of Central America, when the efforts were beginning and there was a sense of urgency due to the rapid influx of UAC at the U.S. border from El Salvador, Guatemala, and Honduras. When the third program contribution agreement, Return and Reintegration in the Northern Triangle, began in 2016, the meetings between IOM and USAID were held bilaterally in each country.
The coordination committee played an important role during the beginning of the first program contribution agreement because, according to USAID officials, it facilitated interaction with the host governments, helped with coordination, and established working relationships between USAID and IOM. Once the program and relationships were established by the time of the third contribution agreement, coordination had evolved, according to USAID officials. IOM officials said that although committee meetings occur on an ad hoc basis under the third program contribution agreement, coordination is stronger. For example, USAID and IOM coordinate closely on strategic decisions, such as IOM’s decision to rebid the contract to renovate and expand the reception center at the Guatemalan Air Force Base, after allegations of corruption arose surrounding the initial contractor.
Finally, USAID interacts in various ways with IOM, outside of the formal terms of the contribution agreements. According to IOM and USAID officials,
USAID and IOM engage in regular discussions about the programs’ progress and implementation challenges, to help IOM make decisions and redefine plans of action if necessary.
USAID is involved in IOM’s strategic decisions, and IOM regularly consults USAID for feedback and recommendations regarding programming.
USAID and IOM participated in forums such as conferences and a workshop where lessons learned and best practices were discussed.
Appendix III: Comments from the U.S. Agency for International Development
Appendix IV: Comments from the Inter-American Foundation
Appendix V: GAO Contact and Staff Acknowledgments
GAO Contact:
Staff Acknowledgments:
In addition to the contact named above, Judith Williams (Assistant Director), Joe Carney (Assistant Director), Julie Hirshen (Analyst-in-Charge), Kathryn Bassion, Neil Doherty, Daniela Rudstein, Aldo Salerno, Michael Silver, and K. Nicole Willems made key contributions to this report.
Why GAO Did This Study
In 2014, instability driven by insecurity, lack of economic opportunity, and weak governance led to a rapid increase of unaccompanied alien children (UAC) from El Salvador, Guatemala, and Honduras arriving at the U.S. border. In fiscal year 2017, the Department of Homeland Security (DHS) reported apprehending more than 200,000 nationals of these countries and removing nearly 75,000 of them, including UAC, from the United States and returning them to their home countries. Current estimates also indicate nearly 350,000 individuals may need to be reintegrated into El Salvador and Honduras over the next few years when their Temporary Protected Status in the United States expires.
GAO was asked to review U.S. efforts to support the reintegration of Central American migrants. This report describes (1) USAID efforts to assist reception and reintegration of migrants from El Salvador, Guatemala, and Honduras into their home countries since fiscal year 2014; and (2) what is known about the effectiveness of these efforts. GAO reviewed agency program documents and funding data; interviewed officials from U.S. government agencies, IOM, and host governments and beneficiaries; and conducted site visits in these countries.
GAO is not making any recommendations in this report. USAID and IAF provided formal comments, which are reproduced in this report, and all agencies provided technical comments, which were incorporated as appropriate.
What GAO Found
Since fiscal year 2014, the U.S. Agency for International Development (USAID) has provided approximately $27 million to the International Organization for Migration (IOM)—an intergovernmental organization focusing on migration—for assistance to migrants returning to El Salvador, Guatemala, and Honduras. Assistance to migrants includes short-term reception services, such as food and transportation, renovating reception centers, and collecting data on returning migrants that are used to support their reintegration. Assistance also includes long-term reintegration efforts, such as counseling services and employment assistance to make it easier for migrants to readjust to and stay in their home countries. These various efforts are in different stages of development.
While reception services for migrants have improved, USAID has not yet assessed the effectiveness of reintegration efforts. USAID monitored and assessed reception services through site visits, meetings, and reports from IOM. IOM's early efforts improved the three host governments' capacity to provide reception services to returning migrants. For example, since fiscal year 2014, IOM renovated the seven reception centers and shelters being used in El Salvador, Guatemala, and Honduras. Further, with IOM's assistance, the host governments have improved their capacity to collect data about returning migrants. According to USAID and IOM, host governments are using these data to design policies and develop programs to provide reintegration assistance. While USAID has not yet assessed the effectiveness of reintegration efforts, many of these programs are just beginning. USAID expects to sign a new agreement by the end of December 2018 that would involve, among other things, monitoring and evaluating reintegration efforts in the three countries.
Background
Since January 2017, the Navy has suffered four significant mishaps at sea that have resulted in serious damage to Navy ships and the loss of 17 sailors (see figure 1). Three of the four at sea mishaps that have occurred—two collisions and one grounding—have involved ships homeported overseas in Yokosuka, Japan. Appendix II provides a summary of major mishaps for Navy ships at sea in fiscal years 2009 through 2017.
The Navy currently has 277 ships, a 17 percent reduction from the 333 ships it had in 1998. Over the past two decades, as the number of Navy ships has decreased, the number of ships deployed overseas has remained roughly constant at about 100 ships; consequently, each ship is being deployed more to maintain the same level of presence. We reported in September 2016 that the Navy, along with the other military services, had been reporting persistently low readiness levels. The Navy attributes these, in part, to the increased deployment lengths needed to meet the continuing high demand for its aircraft carriers, cruisers, destroyers, and amphibious ships. For example, the deployment lengths for carrier strike groups had increased from an average of 6.4 months during the period of 2008 through 2011 to a less sustainable 9 months for three carrier strike groups that were deployed in 2015. In 2016, the Navy extended the deployments of the Harry S Truman and Theodore Roosevelt Carrier Strike Groups to 8 and 8.5 months, respectively. In addition, the Navy has had to shorten, eliminate, or defer training and maintenance periods to support these high deployment rates. These decisions have resulted in declining ship conditions across the fleet and have increased the amount of time required for the shipyards to complete maintenance on these ships. Lengthened maintenance periods, in turn, compress the time that ships are available for training and operations.
Ships Homeported Overseas Provide Increased Forward Presence but Train Less, Defer More Maintenance, Degrade Faster, and Cost More to Operate
As we previously reported, to help meet the operational demands using its existing inventory of ships, the Navy has assigned more of its surface combatants and amphibious ships to overseas homeports. Since 2006, the Navy has doubled the percentage of the fleet assigned to overseas homeports. In 2006, 20 ships were homeported overseas (7 percent of the fleet); today, 40 ships are homeported overseas (14 percent of the fleet) in Japan, Spain, Bahrain, and Italy; and an additional destroyer will be homeported in Yokosuka, Japan in 2018 (see figure 2).
According to the Navy, homeporting ships overseas is an efficient method for providing forward presence and rapid crisis response. Our prior work confirms that having ships homeported overseas provides additional presence, but it comes at a cost. For example, we found in May 2015 that homeporting ships overseas results in higher operations and support costs than homeporting ships in the United States. In addition, the operational schedules the Navy uses for overseas-homeported ships limit dedicated training and maintenance periods, resulting in difficulty keeping crews fully trained and ships maintained. In fact, the primary reason that Navy ships homeported overseas provide more deployed time than ships homeported in the United States is that the Navy reduces their training and maintenance periods in order to maximize their operational availability. Ships homeported overseas do not operate within the traditional fleet response plan cycles that apply to U.S.-based ships. Since the ships are in permanent deployment status during their time homeported overseas, they do not have designated ramp-up and ramp-down maintenance and training periods built into their operational schedules (see figure 3). Navy officials told us that because the Navy expects these ships to be operationally available for the maximum amount of time, their intermediate and depot-level maintenance are executed through more frequent, shorter maintenance periods or deferred until after they return to a U.S. homeport—generally after 7 to 10 years overseas.
In May 2015, we also found that high operational tempo for ships homeported overseas limits the time for crew training when compared with training time for ships homeported in the United States. Navy officials told us that U.S.-based crews are completely qualified and certified prior to deploying from their U.S. homeports, with few exceptions. In contrast, the high operational tempo of ships homeported overseas had resulted in what Navy personnel called a “train on the margins” approach, a shorthand way to say there was no dedicated training time set aside for the ships so crews trained while underway or in the limited time between underway periods. We found that, at the time of our 2015 review, there were no dedicated training periods built into the operational schedules of the cruisers, destroyers, and amphibious ships homeported in Yokosuka and Sasebo, Japan. As a result, these crews did not have all of their needed training and certifications. We recommended that the Navy develop and implement a sustainable operational schedule for all ships homeported overseas. DOD concurred with this recommendation and reported in 2015 that it had developed revised operational schedules for all ships homeported overseas. However, when we contacted DOD to obtain updated information for this testimony, U.S. Pacific Fleet officials stated that the revised operational schedules for the cruisers and destroyers homeported in Japan were still under review and had not been employed. As of June 2017, 37 percent of the warfare certifications for cruiser and destroyer crews homeported in Japan had expired, and over two-thirds of the expired certifications—including mobility-seamanship and air warfare—had been expired for 5 months or more. This represents more than a fivefold increase in the percentage of expired warfare certifications for these ships since our May 2015 report. The Navy’s Surface Force Readiness Manual states that the high operational tempo and frequent tasking of ships homeported overseas requires that these ships always be prepared to execute complex operations and notes that this demand for continuous readiness also means that ships homeported overseas should maintain maximum training, material condition, and manning readiness.
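As a rough illustration of the certification arithmetic above, the sketch below flags expired warfare certifications as of a snapshot date and computes the share that has been expired for roughly 5 months or more. The ship names, certification mix, and dates are hypothetical examples, not Navy data.

```python
# Sketch: flag expired warfare certifications and measure how long they have
# been expired. All entries below are hypothetical examples.
from datetime import date

# (ship, certification) -> expiration date
certifications = {
    ("USS Example", "mobility-seamanship"): date(2017, 1, 15),
    ("USS Example", "air warfare"): date(2016, 12, 1),
    ("USS Sample", "surface warfare"): date(2017, 9, 30),
}

as_of = date(2017, 6, 30)  # snapshot date, as in "as of June 2017"
expired = {k: d for k, d in certifications.items() if d < as_of}
long_expired = {k: d for k, d in expired.items()
                if (as_of - d).days >= 150}  # roughly 5 months

print(f"{len(expired) / len(certifications):.0%} of certifications expired")
if expired:
    print(f"{len(long_expired) / len(expired):.0%} of expired ones lapsed 5+ months ago")
```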
With respect to the material condition of the ships, we found in May 2015 that casualty reports—incidents of degraded or out-of-service equipment—nearly doubled over the 2009 through 2014 time frame, and the condition of overseas-homeported ships decreased even faster than that of U.S.-based ships (see figure 4). The Navy uses casualty reports to provide information on the material condition of ships in order to determine current readiness. For example, casualty report data provide information on equipment or systems that are degraded or out of service, the lack of which will affect a ship’s ability to support required mission areas. In 2015, Navy officials acknowledged an increasing number of casualty reports on Navy ships and a worsening trend in material ship condition. They stated that equipment casualties require unscheduled maintenance and have a negative effect on fleet operations, because there is an associated capability or capacity loss.
In our May 2015 report, we recommended that the Navy develop a comprehensive assessment of the long-term costs and risks to its fleet associated with the Navy’s increasing reliance on overseas homeporting to meet presence requirements; make any necessary adjustments to its overseas presence based on this assessment; and reassess these risks when making future overseas homeporting decisions. DOD concurred with this recommendation, but, as of August 2017, it has not conducted an assessment, even though it has continued to increase the number of ships homeported overseas.
Size and Composition of Ship Crews May Contribute to Sailor Overwork and Create Readiness and Safety Risks
In the early 2000s, the Navy made several changes to its process for determining the size and composition of ship crews that may contribute to sailor overwork and create readiness and safety risks. These changes were intended to drive down crew sizes in order to save on personnel costs. However, as we reported in May 2017, these changes were not substantiated with analysis and may be creating readiness and safety risks. With fewer sailors operating and maintaining surface ships, the material condition of the ships declined, and we found that this decline ultimately contributed to an increase in operating and support costs that outweighed any savings on personnel (see figure 5). The Navy eventually reassessed and reversed some of the changes it had made during this period—known as “optimal manning”—but it continued to use a workweek standard that does not reflect the actual time sailors spend working and does not account for in-port workload—both of which may be leading to sailors being overworked. Additionally, we found that heavy workload does not end after ships return to port. Crews typically operate with fewer sailors while in port, so those crew members remaining must cover the workload of multiple sailors, causing additional strain and potential overwork.
In 2014, the Navy conducted a study of the standard workweek and identified significant issues that could negatively affect a crew’s capabilities to accomplish tasks and maintain the material readiness of ships, as well as crew safety issues that might result if crews slept less to accommodate workload that was not accounted for. The Navy study found that sailors were on duty 108 hours a week, exceeding their weekly on-duty allocation of 81 hours. This on-duty time included 90 hours of productive work—20 hours per week more than the 70 hours that are allotted in the standard workweek. This, in turn, reduced the time available for rest and resulted in sailors spending less time sleeping than was allotted, a situation that the study noted could encourage a poor safety culture. Moving forward, the Navy will likely face manning challenges, especially given its current difficulty in filling authorized positions, as it seeks to increase the size of its fleet by as much as 30 percent over its current size. Navy officials stated that even with manpower requirements that accurately capture all workload, the Navy will be challenged to fund these positions and fill them with adequately trained sailors at current personnel levels. Figure 6 shows the Navy’s projected end strength and fleet size.
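The study's findings reduce to simple arithmetic against the standard workweek. The sketch below works through that arithmetic using only the figures cited above (a 168-hour week; 81 allotted on-duty hours, 70 of them productive work; 108 observed on-duty hours, 90 of them productive); the variable names are our own.

```python
# Worked arithmetic from the figures in the Navy's 2014 workweek study cited
# above. Hours worked beyond the on-duty allotment come out of rest and sleep.
HOURS_PER_WEEK = 24 * 7  # 168

allotted_on_duty = 81      # standard workweek on-duty allocation
allotted_productive = 70   # productive-work portion of that allocation

observed_on_duty = 108     # what the 2014 study measured
observed_productive = 90

extra_on_duty = observed_on_duty - allotted_on_duty            # 27 hours/week
extra_productive = observed_productive - allotted_productive   # 20 hours/week

time_off_allotted = HOURS_PER_WEEK - allotted_on_duty  # 87 hours/week
time_off_observed = HOURS_PER_WEEK - observed_on_duty  # 60 hours/week

print(f"On duty {extra_on_duty} hours/week over the allocation; "
      f"productive work {extra_productive} hours/week over; "
      f"time for rest down from {time_off_allotted} to {time_off_observed} hours.")
```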
In our May 2017 report, we found that the Navy’s guidance does not require that the factors it uses to calculate manpower requirements be reassessed periodically or when conditions change, to ensure that these factors remain valid and that crews are appropriately sized. We made several recommendations to address this issue, including that the Navy should (1) reassess the standard workweek, (2) require examination of in- port workload, (3) develop criteria to reassess the factors used in its manpower requirements process, and (4) update its ship manpower requirements. DOD concurred with our recommendations, stating that it is committed to ensuring that the Navy’s manpower requirements are current and analytically based and will meet the needs of the existing and future surface fleet. As of August 2017, DOD had not yet taken any actions to implement these recommendations. We believe that, until the Navy makes the needed changes, its ships may not have the right number and skill mix of sailors to maintain readiness and prevent overworking its sailors.
The Navy’s Inability to Complete Ship Maintenance on Time Hampers Its Efforts to Rebuild Readiness
To address its persistently low readiness levels, the Navy began implementing a revised operational schedule in November 2014, which it referred to as the optimized fleet response plan. This plan seeks to maximize the employability of the existing fleet while preserving adequate time for maintenance and training, providing continuity in ship leadership and carrier strike group assignments, and restoring operational and personnel tempos to acceptable levels. The Navy’s implementation of the optimized fleet response plan—and readiness recovery more broadly—is premised on adherence to deployment, training, and maintenance schedules.
However, in May 2016, we found that the Navy was having difficulty in implementing its new schedule as intended. Both the public and private shipyards were having difficulty completing maintenance on time, owing primarily to the poor condition of the ships after more than a decade of heavy use, deferred maintenance, and the Navy’s inability to accurately predict how much maintenance they would need. We reported that in 2011 through 2014 only 28 percent of scheduled maintenance for surface combatants was completed on time and just 11 percent was completed on time for aircraft carriers. We updated these data for the purposes of this testimony to include maintenance availabilities completed through the end of fiscal year 2016 and found continued difficulty completing maintenance on time for key portions of the Navy fleet (see figure 7):
Aircraft Carriers (CVNs): In fiscal years 2011 through 2016, maintenance overruns on 18 of 21 (86 percent) aircraft carriers resulted in a total of 1,103 lost operational days—days that ships were not available for operations—the equivalent of losing the use of 0.5 aircraft carriers each year.
Surface Combatants (DDGs and CGs): In fiscal years 2011 through 2016, maintenance overruns on 107 of 169 (63 percent) surface combatants resulted in a total of 6,603 lost operational days—the equivalent of losing the use of 3.0 surface combatants each year.
Submarines (SSNs, SSBNs, and SSGNs): In fiscal years 2011 through 2016, maintenance overruns on 39 of 47 (83 percent) submarines resulted in a total of 6,220 lost operational days—the equivalent of losing the use of 2.8 submarines each year.
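The "equivalent ships lost" conversions in the bullets above follow from dividing total lost operational days by the number of days in the six-fiscal-year window. A minimal sketch of that arithmetic, assuming 365 days per year:

```python
# Reproduce the "equivalent ships lost each year" conversion used above:
# total lost operational days spread over fiscal years 2011-2016 (6 years),
# assuming 365 days per year.
YEARS = 6
DAYS_PER_YEAR = 365

lost_days = {
    "aircraft carriers": 1103,
    "surface combatants": 6603,
    "submarines": 6220,
}

for ship_class, days in lost_days.items():
    equivalent = days / (YEARS * DAYS_PER_YEAR)
    print(f"{ship_class}: {equivalent:.1f} ships' use lost each year")
```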
Navy officials are aware of the challenges faced by both the public and private shipyards and have taken steps to address the risks these pose to maintenance schedules, including hiring additional shipyard workers and improving their maintenance planning processes. However, Navy officials have told us that it will take time for these changes to bring about a positive effect. For example, as of May 2016, data on the public shipyards’ workforce showed that 32 percent of all employees had fewer than 5 years of experience. According to Navy officials, this workforce inexperience negatively affects the productivity of the shipyards, and it will take several years for them to attain full productivity.
Navy Readiness Rebuilding is Part of a Broader DOD Effort
In September 2016, we found that although DOD has stated that readiness rebuilding is a priority, implementation and oversight of department-wide readiness rebuilding efforts did not fully include key elements of sound planning, and the lack of these elements puts the overall rebuilding efforts at risk. The Navy states that its overall goal for readiness recovery is to reach a predictable and sustainable level of global presence and surge capacity from year to year. The Navy identified carrier strike groups and amphibious ready groups as key force elements in its plan for readiness recovery and had set 2020 for reaching a predictable and sustainable level of global presence and surge capacity by implementing the optimized fleet response plan. However, we found in 2016 that the Navy faced significant challenges, such as delays in completing maintenance and emerging demands, in achieving its readiness recovery goals for carrier strike groups and amphibious ready groups, and projections show that the Navy will not meet its time frames for achieving readiness recovery.
As a result, we recommended that DOD and the services establish comprehensive readiness goals, strategies for implementing them, and associated metrics that can be used to evaluate whether readiness recovery efforts are achieving intended outcomes. DOD generally concurred with our recommendations and, in November 2016, issued limited guidance to the military services on rebuilding readiness; it has also started to design a framework to guide the military services in achieving readiness recovery but has not yet implemented our recommendations. The Navy has since extended its time frame for readiness recovery to at least 2021, but it still has not developed specific benchmarks or interim goals for tracking and reporting on readiness recovery. Navy officials cited several challenges to rebuilding readiness, chief among them the continued high demand for its forces, the unpredictability of funding, and the current difficulty with beginning and completing ship maintenance on time.
In January 2017, the President directed the Secretary of Defense to conduct a readiness review and identify actions that can be implemented in fiscal year 2017 to improve readiness. DOD and Navy officials told us that, as part of this readiness review, the Navy prioritized immediate readiness gaps and shortfalls. These officials added that this review would guide the Navy’s investment decisions in future budget cycles, with the intention to rebuild readiness and prepare the force for future conflicts. However, high demand for naval presence will continue to put pressure on a fleet that is already stretched thin across the globe. Looking to the future, the Navy has plans to grow its fleet by as much as 30 percent, but it has not yet shown the ability to adequately man, maintain, and operate the current fleet. These readiness problems need to be addressed and will require the Navy to implement our recommendations—particularly in the areas of assessing the risks associated with overseas basing, reassessing sailor workload and the factors used to size ship crews, and applying sound planning and sustained management attention to its readiness rebuilding efforts. In addition, continued congressional oversight will be needed to ensure that the Navy demonstrates progress in addressing its maintenance, training, and other challenges.
Chairmen Wilson and Wittman, Ranking Members Bordallo and Courtney, and Members of the Subcommittees, this concludes my prepared statement. I would be pleased to respond to any questions you may have at this time.
GAO Contact and Staff Acknowledgements
If you or your staff have questions about this testimony, please contact John Pendleton, Director, Defense Capabilities and Management at (202) 512-3489 or [email protected].
Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Suzanne Wren, Assistant Director; Steven Banovac, Chris Cronin, Kerri Eisenbach, Joanne Landesman, Amie Lesser, Tobin McMurdie, Shari Nikoo, Cody Raysinger, Michael Silver, Grant Sutton, and Chris Watson.
Appendix I: Implementation Status of Prior GAO Recommendations Cited in this Testimony
Over the past three years, we issued several reports related to Navy readiness cited in this statement. Table 1 summarizes the status of recommendations made in these reports, which contained a total of 11 recommendations. The Department of Defense generally concurred with all of these recommendations but has implemented only one of them to date. For each of the reports, the specific recommendations and their implementation status are summarized in tables 2 through 4.
Appendix II: Summary of Major Mishaps for Navy Ships at Sea for Fiscal Years 2009 Through 2017, as of August 2017
The Navy defines a class A mishap as one that results in $2 million or more in damages to government or other property or that results in a fatality or permanent total disability. We analyzed data compiled by the Naval Safety Center for fiscal years 2009 through 2017 to provide a summary of major Navy mishaps at sea (see table 5).
Appendix III: Related GAO Products
Report numbers with a C or RC suffix are Classified. Classified reports are available to personnel with the proper clearances and need to know, upon request.
Department of Defense: Actions Needed to Address Five Key Mission Challenges. GAO-17-369. Washington, D.C.: June 13, 2017.
Military Readiness: Coastal Riverine Force Challenges. GAO-17-462C. Washington, D.C.: June 13, 2017. (SECRET)
Navy Force Structure: Actions Needed to Ensure Proper Size and Composition of Ship Crews. GAO-17-413. Washington, D.C.: May 18, 2017.
Military Readiness: DOD’s Readiness Rebuilding Efforts May Be at Risk without a Comprehensive Plan. GAO-16-841. Washington, D.C.: September 7, 2016.
Navy and Marine Corps: Services Face Challenges to Rebuilding Readiness. GAO-16-481RC. Washington, D.C.: May 25, 2016. (SECRET//NOFORN)
Military Readiness: Progress and Challenges in Implementing the Navy’s Optimized Fleet Response Plan. GAO-16-466R. Washington, D.C.: May 2, 2016.
Navy Force Structure: Sustainable Plan and Comprehensive Assessment Needed to Mitigate Long-Term Risks to Ships Assigned to Overseas Homeports. GAO-15-329. Washington, D.C.: May 29, 2015.
Military Readiness: Navy Needs to Assess Risks to Its Strategy to Improve Ship Readiness. GAO-12-887. Washington, D.C.: September 21, 2012.
Force Structure: Improved Cost Information and Analysis Needed to Guide Overseas Military Posture Decisions. GAO-12-711. Washington, D.C.: June 6, 2012.
Military Readiness: Navy Needs to Reassess Its Metrics and Assumptions for Ship Crewing Requirements and Training. GAO-10-592. Washington, D.C.: June 9, 2010.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study
Since January 2017, the Navy has suffered four significant mishaps at sea that resulted in serious damage to its ships and the loss of 17 sailors. Three of these incidents involved ships homeported in Japan. In response to these incidents, the Chief of Naval Operations ordered an operational pause for all fleets worldwide, and the Vice Chief of Naval Operations directed a comprehensive review of surface fleet operations, stating that these tragic incidents are not limited occurrences but part of a disturbing trend in mishaps involving U.S. ships.
This statement provides information on the effects of homeporting ships overseas, reducing crew size on ships, and not completing maintenance on time on the readiness of the Navy and summarizes GAO recommendations to address the Navy's maintenance, training, and other challenges.
In preparing this statement, GAO relied on previously published work since 2015 related to the readiness of ships homeported overseas, sailor training and workload issues, maintenance challenges, and other issues; GAO updated this information, as appropriate, based on Navy data.
What GAO Found
GAO's prior work shows that the Navy has increased deployment lengths, shortened training periods, and reduced or deferred maintenance to meet high operational demands, which has resulted in declining ship conditions and a worsening trend in overall readiness. The Navy has stated that high demand for presence has put pressure on a fleet that is stretched thin across the globe. Some of the concerns that GAO has highlighted include:
Degraded readiness of ships homeported overseas: Since 2006, the Navy has doubled the number of ships based overseas. Overseas basing provides additional forward presence and rapid crisis response, but GAO found in May 2015 that there were no dedicated training periods built into the operational schedules of the cruisers and destroyers based in Japan. As a result, the crews of these ships did not have all of their needed training and certifications. Based on updated data, GAO found that, as of June 2017, 37 percent of the warfare certifications for cruiser and destroyer crews based in Japan—including certifications for seamanship—had expired. This represents more than a fivefold increase in the percentage of expired warfare certifications for these ships since GAO's May 2015 report. The Navy has made plans to revise operational schedules to provide dedicated training time for overseas-based ships, but these schedules have not yet been implemented.
Crew size reductions contribute to sailor overwork and safety risks: GAO found in May 2017 that reductions to crew sizes the Navy made in the early 2000s were not analytically supported and may now be creating safety risks. The Navy has reversed some of those changes but continues to use a workweek standard that does not reflect the actual time sailors spend working and does not account for in-port workload—both of which have contributed to some sailors working over 100 hours a week.
Inability to complete maintenance on time: Navy recovery from persistently low readiness levels is premised on adherence to maintenance schedules. However, in May 2016, GAO found that the Navy was having difficulty completing maintenance on time. Based on updated data, GAO found that, in fiscal years 2011 through 2016, maintenance overruns on 107 of 169 surface ships (63 percent) resulted in 6,603 lost operational days (i.e., the ships were not available for training and operations).
Looking to the future, the Navy wants to grow its fleet by as much as 30 percent but continues to face challenges with manning, training, and maintaining its existing fleet. These readiness problems need to be addressed and will require the Navy to implement GAO's recommendations—particularly in the areas of assessing the risks associated with overseas basing, reassessing sailor workload and the factors used to size ship crews, and applying sound planning and sustained management attention to its readiness rebuilding efforts. In addition, continued congressional oversight will be needed to ensure that the Navy demonstrates progress in addressing its maintenance, training, and other challenges.
What GAO Recommends
GAO made 11 recommendations in prior work cited in this statement. The Department of Defense generally concurred with all of them but has implemented only 1. Continued attention is needed to ensure that these recommendations are addressed, such as the Navy assessing the risks associated with overseas basing and reassessing sailor workload and factors used in its manpower requirements process.
Background
This section provides information on BLM’s mission and organizational structure, the process for overseeing the development of federal and Indian oil and gas resources, and key aspects of the Inspection and Enforcement program.
BLM Mission and Organizational Structure
BLM’s mission is to maintain the health, diversity, and productivity of public lands for present and future generations. As part of this mission, BLM manages federal lands for multiple uses, including recreation; grazing; timber; minerals; watershed; wildlife and fish; natural scenic, scientific, and historical preservation; and the sustained yield of renewable resources. BLM manages these responsibilities through its headquarters office in Washington, D.C.; state offices; district offices; and field offices. Each level’s general responsibilities include the following:
BLM’s headquarters office develops guidance and regulations.
State and field offices manage and implement the bureau’s programs. In addition to implementing programs, BLM state offices oversee field office operations. Field offices lead BLM’s oversight of oil and gas development. They are located primarily in the Mountain West, where much of oil and gas development on federal and Indian lands takes place.
Within field offices, BLM supervisory and staff PET inspectors and tribal PET inspectors (who are contracted by BLM to inspect some wells on Indian lands in accordance with BLM policies and procedures) have primary responsibility for implementing the Inspection and Enforcement Program with assistance from state office program coordinators, according to the Inspection and Enforcement Program Manager. Among other things, state office program coordinators help field offices plan and prioritize their inspection workloads in accordance with BLM policy and comply with BLM guidance and federal regulations when conducting and documenting inspections, according to BLM officials.
Process for Development of Oil and Gas Resources on Federal and Indian Lands Including Key Aspects of the Inspection and Enforcement Program
Development of oil and gas resources on federal and Indian lands is a multi-stage process. First, Interior holds auctions through which entities may secure the right to federal and Indian leases that allow them to drill for oil and gas after meeting certain conditions. Once an operator plans to drill a well on leased land, it must first secure a permit from Interior. After drilling a well, an operator installs production equipment, such as pump jacks, storage tanks, and metering equipment. This production phase continues until the well becomes inactive, and the operator may decide to plug the well, usually because the well is either depleted or no longer economically viable. After plugging the well, the operator is required to remove all production equipment and reshape and revegetate the land around the well.
To ensure compliance with applicable laws, regulations, and other requirements, BLM's Inspection and Enforcement program verifies that the operator complies with all requirements at a well or lease site during the drilling, production, and plugging phases. Three BLM onshore orders, issued pursuant to regulation, specify requirements that operators are to follow on federal and Indian leases. Inspectors use these orders to verify compliance during inspections. Onshore Oil and Gas Order Number 3 specified minimum standards for site security, ensuring that oil and gas produced from federal and Indian leases are properly handled to prevent theft and loss and to enable accurate measurement. Onshore Oil and Gas Order Number 4 specified requirements for measurement of oil produced under the terms of federal and Indian leases or received by federal and Indian lessees as shares of oil produced on state or private lands. Onshore Oil and Gas Order Number 5 specified requirements for measurement of gas produced under the terms of federal and Indian leases or received by federal and Indian lessees as shares of gas produced on state or private lands.
Figure 1 shows key inspection activities that occur during the drilling, production, and plugging stages of a well’s life cycle.
Our Analysis of BLM Data Shows the Distribution of BLM’s Inspection and Enforcement Program’s Workload and Workforce Was out of Balance in Fiscal Years 2012 through 2016
In fiscal years 2012 through 2016, the distribution of the oil and gas Inspection and Enforcement program’s workload and the workforce among the 33 BLM field offices with ongoing oil and gas development activities showed an imbalance, based on our analysis of BLM data. BLM took both short- and long-term actions in fiscal years 2012 through 2016 to address this imbalance, such as temporarily re-assigning inspectors from some medium activity offices to some of the highest activity offices. Based on our review of BLM documentation and interviews with agency officials, two key factors affected the distribution of the program’s workload: (1) energy market changes (e.g., price fluctuations) and (2) BLM actions to plan and prioritize inspection workload (e.g., changing risk classification for production inspections and decreasing the number of work months for plugging inspections).
The Distribution of the BLM Inspection and Enforcement Program’s Workload and Workforce Was Out of Balance Based on Our Analysis of BLM’s Data
From fiscal years 2012 through 2016, the distribution of the workload and workforce of BLM’s oil and gas Inspection and Enforcement Program was out of balance across the 33 BLM field offices with ongoing oil and gas development activities, based on our analysis of BLM data. The majority of the workload, about 58 percent, was located at the 6 highest-activity field offices, which had 44 percent of the workforce. In contrast, the majority of the workforce, 56 percent, was located in the remaining 27 medium and lowest activity offices, which had about 42 percent of the workload. Figure 2 shows the distribution of workload and workforce across the 33 field offices. In addition, figure 3 shows a map of our categorization of BLM’s 33 field offices by their workload and workforce activity level.
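The imbalance can be restated as the gap between each group's share of the workload and its share of the workforce. The sketch below recomputes that gap from the rounded percentages cited above; it does not use BLM's underlying workload or staffing data.

```python
# Recompute the gap between workload share and workforce share for the two
# groups of field offices, using the rounded percentages cited in the text.
groups = {
    "6 highest-activity offices": {"workload": 0.58, "workforce": 0.44},
    "27 medium- and lowest-activity offices": {"workload": 0.42, "workforce": 0.56},
}

for name, share in groups.items():
    gap = share["workload"] - share["workforce"]
    print(f"{name}: workload share minus workforce share = {gap:+.0%}")
```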
From fiscal years 2012 through 2016, based on our review of BLM documentation and interviews with agency officials, BLM took both short and long-term actions to address this imbalance, such as temporarily re- assigning inspectors from some medium activity field offices to some highest activity offices. A specific example of how BLM addressed this workload and workforce imbalance on a short term basis for this period concerns two of the highest activity offices (Hobbs and Dickinson). These offices had fewer PET inspectors on board and fewer PET inspection work months than three medium-activity offices (Pinedale, Rawlins, and Vernal). To address this imbalance, BLM sent short-term “strike teams” of PET inspectors to Hobbs and Dickinson on multiple occasions to help complete inspections. For example, officials from the Hobbs field office told us that in fiscal years 2012 and 2013, PET inspectors from the Farmington field office helped complete drilling and plugging inspections at Hobbs. In addition, officials from the Dickinson field office said that during fiscal years 2012, 2013, and 2014, more than 20 PET inspectors from five different states helped them inspect drilling, production, and plugging operations.
BLM officials said there were pros and cons to the strike team approach. They said strike teams generally allow a field office to complete high- priority inspections and can provide additional training to inspectors at that office. However, agency officials said that, at times, the inspection documentation from strike team PET inspectors may not fully align with the policies and practices of the office they are assisting, which can create uncertainty about what inspection activities were completed and what the inspection found. We previously reported that strike teams increase costs and are not a sustainable solution.
To address the workload and workforce imbalance on a long term basis, BLM allocated additional funding in fiscal years 2015 and 2016 to hire PET inspectors. The Inspection and Enforcement program manager said that these hires were targeted to address workforce needs at certain field offices. According to agency documentation, BLM allocated additional funding to hire about 20 inspectors in fiscal year 2015 and 40 inspectors in fiscal year 2016. Approximately 75 percent of these inspector positions were in three state offices: Montana (which includes the Dickinson, North Dakota field office), New Mexico (which includes the Tulsa, Oklahoma field office), and Wyoming. All six of BLM’s highest activity field offices are located in these three states.
With this additional funding in fiscal years 2015 and 2016, multiple officials from BLM field offices reported that they were generally able to hire inspectors and, as a result, the number of onboard inspectors increased. For example, the number of onboard PET inspectors in the Dickinson field office increased from 8 in fiscal year 2015 to 17 in fiscal year 2017. In the Buffalo field office, the number of onboard PET inspectors increased from 16 in fiscal year 2015 to 23 in fiscal year 2017. These officials generally cited two key reasons for being able to hire inspectors. First, BLM increased the compensation for PET inspectors through the use of special salary rates, incentive payments, and student loan repayments. We have previously reported that BLM faces challenges hiring PET inspectors because BLM competes with industry for employees, and industry offers higher salaries. Second, and as described below, industry reduced development activity (i.e., wells drilled) in fiscal years 2015 and 2016 as commodity prices decreased. Multiple BLM field office officials also told us that it is easier to hire PET inspectors when oil and gas prices are low because industry is not hiring and applicants look to BLM for job security.
Two Key Factors Affected the Distribution of the Oil and Gas Inspection and Enforcement Program’s Workload
Two key factors—based on our review of BLM documentation and interviews with agency officials—affected the distribution of the program’s workload: (1) energy market changes (e.g., price fluctuations and increased development of shale plays) and (2) BLM actions to plan and prioritize inspection workload (e.g., changing risk classification for production inspections and decreasing the number of work months for plugging inspections). As we describe below, these factors affected several aspects of the program’s workload (i.e., wells drilled, production inspection cases, planned plugging work months, and enforcement actions).
Consistently Lower Gas Prices, Volatile Oil Prices, and Increased Development of Shale Plays Led to a Decrease in Wells Drilled, but Not Uniformly Across BLM Field Offices
The number of wells drilled on federal and Indian lands from fiscal years 2012 through 2016 declined, according to BLM data. The decline was primarily the result of consistently lower gas prices and oil prices that dropped significantly in fiscal years 2015 and 2016 combined with technological advancements that increased the development of resources located in shale and other tight rock formations—which are generally not found on federal and Indian lands. Multiple BLM officials told us that commodity prices are a key factor that impacts the number of wells drilled on federal and Indian lands. These officials told us that, in general, when commodity prices are higher, industry will drill more wells, whereas when prices are lower, fewer wells are drilled. In addition, we previously reported that the highs and lows in prices and the number of oil and gas wells drilled largely overlapped, strongly suggesting that development activities reacted quickly and proportionally to changes in the prices of oil and gas. Table 1 shows the number of wells drilled on federal and Indian lands and average monthly prices for natural gas and crude oil for the period. While there may have been some year-to-year variability between the number of wells drilled and commodity prices (see the fiscal year 2013 to 2015 prices for natural gas in table 1), operators drilled fewer wells in fiscal years 2015 and 2016, which were years of both consistently low gas prices and significant decreases in oil prices.
With regard to natural gas prices, a Purdue University study from March 2017 found that (1) the period of consistently lower natural gas prices (i.e., the Henry Hub average monthly price per million British thermal units was generally from $2 to $4) began around 2009, which corresponds with increased development of natural gas from shale resources, and (2) the price increase in fiscal year 2014 was related to an extreme winter cold spell. With regard to oil prices, a World Bank report from January 2018 identified multiple factors contributing to the significant price decrease that occurred in fiscal years 2015 and 2016. These factors included increased oil production from U.S. shale plays—sedimentary rock formations containing significant amounts of oil and natural gas—contributing to oversupply, as well as lower production costs that allowed shale oil wells to be profitable at lower prices.
From 2009 to 2016, there was also an increase in the development of oil and gas plays located in shale and other tight rock formations, brought about by advances in production technologies such as horizontal drilling and hydraulic fracturing. According to Energy Information Administration data, shale plays represented more than 90 percent of the growth in oil and gas development from 2011 to 2016. As stated above, most shale plays are not located on federal and Indian lands. However, the few BLM field offices located in shale plays where operators focus on oil development saw a smaller decrease in the number of wells drilled compared to field offices located outside of shale plays. For example, the Dickinson field office—located in the Bakken shale play—experienced a 15 percent decrease in the number of wells drilled, from about 400 in fiscal year 2012 to about 330 in fiscal year 2016. Similarly, the Hobbs field office—located in the Permian shale play—experienced a 27 percent decrease, from about 160 in fiscal year 2012 to about 120 in fiscal year 2016. According to BLM data, almost all producing wells in the Dickinson and Hobbs field offices are oil wells. In contrast, two field offices located outside of shale plays experienced a more significant decrease. The number of wells drilled in the Bakersfield field office (located in California) declined 90 percent, from 285 wells drilled in fiscal year 2012 to 30 wells drilled in fiscal year 2016. According to BLM data, almost all of the Bakersfield field office’s producing wells are oil wells. The number of wells drilled in the Vernal field office (located in Utah) declined 95 percent, from 725 wells drilled in fiscal year 2012 to 35 wells drilled in fiscal year 2016. According to BLM data, about 40 percent of the Vernal field office’s producing wells are oil wells, and the remaining 60 percent are natural gas wells.
BLM Changes to Risk Classification Led to Fluctuations in the Number of High Priority Production Inspection Cases
On multiple occasions from fiscal year 2012 through fiscal year 2016, based on our review of agency documentation, BLM changed its methodology to identify and classify risk, which led to fluctuations in the number of high-priority production inspection cases in a given fiscal year. In our review, we focused on high-priority production cases because, according to agency documents, inspecting such cases is one of the program’s top three work priorities. Based on our review of agency documentation, BLM’s risk-based strategy went through several iterations from fiscal year 2011 through fiscal year 2016. The strategy used multiple weighted factors to develop a composite risk score to identify high- and low-priority cases. In fiscal year 2011, BLM based the composite risk score on seven weighted factors: four factors based on BLM data and three factors based on data from Interior’s Office of Natural Resources Revenue (ONRR). However, BLM officials stated that they had challenges importing ONRR data in a format compatible with the bureau’s information technology system and have since stopped using the data. From fiscal year 2013 through fiscal year 2016, BLM based the composite risk score on the following four BLM-identified risk factors: (1) average monthly production, (2) number of missing oil and gas operations reports, (3) number of incidents of noncompliance, and (4) number of years since last inspection. In fiscal year 2011, BLM determined that a composite risk score of 4 would be considered high risk, meaning that cases with a score of 4 or more required an inspection. For fiscal year 2013, BLM raised the threshold so that only cases with a composite risk score of 5 or more were considered high risk and required an inspection, a change intended to reduce the number of required inspections because, according to agency documentation, the workload in the preceding years was too high for some field offices. For fiscal years 2014, 2015, and 2016, BLM lowered the threshold back to 4.
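To illustrate the mechanics of this classification approach, the following sketch (in Python) shows how a composite risk score built from the four BLM-identified factors could determine whether a case is high priority. This is an illustrative sketch only: the report does not specify how BLM scores or weights each factor, so the per-factor scores and equal weights below are assumptions; only the four factor names and the thresholds (4 in most years, 5 in fiscal year 2013) come from the report.

    # Illustrative sketch only: per-factor scoring and the equal weights are
    # assumptions, not BLM's actual methodology. The four factors and the
    # thresholds (4 in most years; 5 in fiscal year 2013) come from the report.
    def composite_risk_score(avg_monthly_production, missing_reports,
                             incidents_of_noncompliance,
                             years_since_last_inspection,
                             weights=(1.0, 1.0, 1.0, 1.0)):
        """Combine four weighted factor scores into a single composite score."""
        factors = (avg_monthly_production, missing_reports,
                   incidents_of_noncompliance, years_since_last_inspection)
        return sum(w * f for w, f in zip(weights, factors))

    def is_high_priority(score, threshold=4):
        """Cases scoring at or above the threshold require an inspection."""
        return score >= threshold

    # A case scoring 4 is high priority under the usual threshold of 4 but
    # not under the fiscal year 2013 threshold of 5.
    score = composite_risk_score(2, 1, 1, 0)
    print(score, is_high_priority(score, threshold=4),
          is_high_priority(score, threshold=5))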
BLM averaged about 2,150 high-priority production cases in fiscal years 2012 through 2016, and the number in any given fiscal year ranged from about 1,700 to about 2,500. In addition, over 60 percent of such cases were located in the 6 highest-activity field offices we identified. Because such cases are concentrated in six field offices, seemingly minor fluctuations in the overall number of high-priority production cases can have a greater impact on an individual field office’s workload. For example, in fiscal year 2013, BLM identified about 2,500 high-priority production cases. The Farmington field office in that year had about 170 such cases (or about 7 percent of the total) and estimated that PET inspectors needed about 12 work months to complete these inspections. In fiscal year 2015, BLM identified about 1,700 high-priority production cases. The Farmington field office had about 90 such cases (or about 5 percent of the total) and estimated that PET inspectors needed about 6 work months to complete these inspections. In general, BLM officials told us that a single PET inspector is assigned about 6 inspection work months in a fiscal year once other demands on an inspector’s time (i.e., sick leave, vacation, training, and the completion of other assigned non-inspection duties such as administering various safety programs) are considered. Therefore, in fiscal year 2013 the Farmington field office would have had to dedicate 2 PET inspectors (or about 10 percent of its total PET workforce) to complete only high-priority production inspections, and in fiscal year 2015 the field office would have needed 1 PET inspector (or about 5 percent of its total PET workforce) to complete such inspections.
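The staffing arithmetic in the Farmington example can be restated as a simple calculation: divide the estimated inspection work months by the roughly 6 work months available per inspector per year, rounding up to whole inspectors. A minimal sketch, using only figures from the report:

    import math

    def pet_inspectors_needed(inspection_work_months,
                              work_months_per_inspector=6):
        # Round up: even a partially occupied inspector must still be assigned.
        return math.ceil(inspection_work_months / work_months_per_inspector)

    print(pet_inspectors_needed(12))  # fiscal year 2013: 2 inspectors
    print(pet_inspectors_needed(6))   # fiscal year 2015: 1 inspector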
Because BLM’s risk-based strategy has gone through multiple iterations since fiscal year 2012, several BLM officials said that it was difficult to identify the specific reasons for year-to-year changes in the number of their high-priority production cases. Officials, however, said that their ability to complete more high-priority production inspections increases during times of reduced industry drilling activity. Specifically, if industry is drilling fewer new wells, BLM can apply additional resources toward inspecting currently producing wells because PET inspectors who would normally conduct drilling inspections can instead be deployed to high-priority production inspections. For example, as described above, the number of wells drilled decreased during the time frame covered in our review, with the Vernal and Bakersfield field offices experiencing substantial decreases in the number of wells drilled from fiscal year 2012 to fiscal year 2016. Officials in both offices told us that when drilling activity was low, BLM redirected resources originally planned for drilling inspections to complete high-priority production inspections.
BLM Reduced the Total Number of Estimated Well Plugging Inspection Work Months as Commodity Prices Stayed Low or Decreased
According to agency data, BLM reduced the estimated number of plugging inspection work months from about 200 in fiscal year 2012 to about 155 in fiscal year 2016, or about 23 percent. Multiple agency officials told us that, due to low or falling commodity prices, operators plugged fewer wells from fiscal year 2012 through fiscal year 2016. As discussed above, natural gas prices were consistently low during fiscal years 2012 through 2016, while oil prices decreased significantly in fiscal years 2015 and 2016. According to multiple BLM officials, operators generally plug fewer wells during times of low or falling commodity prices because operators prefer to (1) maintain the income generated from even marginally producing wells or (2) limit the expenditures required to plug wells. In May 2018, we reported that low oil and gas prices placed financial stress on operators, increasing bankruptcies and the risk that operators would not permanently plug wells, and that BLM’s actual costs and potential liabilities for reclaiming oil and gas wells likely increased for fiscal years 2010 through 2017. In addition, we reported that BLM faced challenges identifying and managing shut-in wells. For example, BLM does not have time limits for how long operators can have a well in shut-in status, which may limit the agency’s ability to ensure that operators permanently plug such wells before they become orphaned.
However, since BLM estimates the number of plugging inspection work months at the start of each fiscal year, there can be instances where actual industry activity differs from what was estimated. For example, BLM officials at four field offices told us that during the time frame of our review, operators in their region plugged more wells than estimated. According to agency officials, these operators plugged more wells than BLM estimated because the operators were either looking to reduce their financial liability—sometimes in anticipation of selling assets—or looking for work to keep crews busy. In these instances, agency officials told us that, in general, BLM re-allocated inspection work months from low-priority production inspections to these plugging inspections. According to agency officials and documentation, plugging inspections are a higher priority than production inspections for multiple reasons. First, a plugging inspection is time sensitive because it is the final stage in a well’s lifecycle. In contrast, a production inspection is an ongoing operation that can be conducted at almost any time. Second, properly plugging a well is essential for long-term environmental protection. For example, wells that are not properly plugged can leak methane and contaminate surface and groundwater. As such, multiple BLM officials told us that plugging inspections are their field office’s highest priority work task and that they will re-allocate resources, if necessary, to complete such inspections.
Higher Oil Prices in Some Years and Generally Lower Gas Prices Led to an Imbalance in the Program’s Enforcement Workload at Three Field Offices
Based on our analysis of BLM data, two key market changes created an imbalance in the program’s enforcement workload: (1) increased drilling activity at two field offices located in shale formations during times of higher oil prices, and (2) bankruptcies of coalbed methane operators in one field office as gas prices decreased. Combined, the Buffalo, Carlsbad, and Dickinson field offices issued about 45 percent of all enforcement actions, 75 percent of all monetary assessments, and about 85 percent of all civil penalties (see table 2). For purposes of this review, we focused on the number and amount of monetary assessments and civil penalties because, according to agency officials and BLM documentation, these two enforcement actions are the key tools used by BLM to address instances of serious or continued operator noncompliance.
Almost all of the monetary assessments that the Carlsbad and Dickinson field offices issued were for drilling violations—either drilling without approval or failure to install a blowout preventer or other well control equipment—and occurred in fiscal years 2012 through 2014, based on our review of BLM enforcement data. Federal regulations generally provide for higher monetary assessment amounts for drilling violations compared to other types of violations. Specifically, drilling violations are subject to assessments of $500 per day (up to $5,000), whereas the assessment for failure to comply with a previously issued written notice for a minor violation is $250. As such, even though the Carlsbad and Dickinson field offices issued 24 percent of the number of monetary assessments, they issued about 60 percent (about $710,000) of the total amount assessed by all BLM field offices from fiscal years 2012 through 2016. In contrast, even though the Buffalo field office issued more than half of the monetary assessments, these actions accounted for 18 percent (about $220,000) of the total amount assessed because almost all of these assessments were minor violations for failure to comply (see table 3).
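These regulatory amounts help explain why two offices with relatively few assessments accounted for most of the dollars assessed. A minimal sketch of the arithmetic, using only the per-day rate, cap, and flat amount described above (treating a drilling assessment as strictly $500 per day of violation is our simplification):

    def drilling_violation_assessment(days_in_violation):
        # Drilling violations: $500 per day, capped at $5,000.
        return min(500 * days_in_violation, 5000)

    FAILURE_TO_COMPLY_ASSESSMENT = 250  # flat amount for a minor violation

    # One 10-day drilling violation yields the same assessed amount as
    # twenty failure-to-comply assessments.
    print(drilling_violation_assessment(10))   # 5000
    print(20 * FAILURE_TO_COMPLY_ASSESSMENT)   # 5000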
From fiscal years 2012 through 2016, the Carlsbad and Dickinson field offices were responsible for about 30 percent of all wells drilled on federal and Indian lands, according to BLM data. These offices are located, respectively, in the Permian and Bakken shale plays, where almost all wells are oil wells. During fiscal years 2012 through 2014, for each of these field offices, operators drilled about 435 wells each year, and the price of oil ranged from $87 to $107 per barrel. In contrast, during fiscal years 2015 and 2016, operators drilled about 275 wells each year while the price of oil ranged from $45 to $86 per barrel. According to agency officials, during fiscal years 2012 through 2014 operators attempted to drill wells as quickly as possible in the Carlsbad and Dickinson field offices to increase production during a time of higher oil prices.
BLM field office officials told us that when oil prices are higher, some operators have less financial incentive to follow federal requirements. In the Dickinson field office, for example, almost all monetary assessments were related to drilling without approval. Officials from that field office told us that, in general, these violations were related to operators who applied to BLM for a drilling permit, but the bureau did not approve the permit before the operator started drilling. In these instances, operators decided that the benefit of increased production at higher prices outweighed the cost of a monetary assessment, according to agency officials. BLM officials told us that for both types of drilling violations—drilling without approval and failure to install well control equipment—BLM issues monetary assessments immediately upon discovery due to the potential serious harmful impacts to resource development and environmental health and suspends drilling operations until the operator corrects the violation and pays the assessment. The officials said operators almost always pay these assessments in a timely manner because they want to complete drilling operations and start production.
In contrast to the monetary assessments issued during times of high oil prices, the Buffalo field office issued hundreds of civil penalties totaling millions of dollars during times of lower natural gas prices as some coalbed methane operators declared bankruptcy and did not complete required reclamation activities. Specifically, the Buffalo field office issued over 75 percent of the number of civil penalties and almost the entire amount penalized during fiscal years 2012 through 2016 (see table 4).
As we reported in May 2018, low natural gas prices placed financial stress on operators of thousands of wells producing coalbed methane (natural gas extracted from coal beds). In that May 2018 report, we also found that coalbed methane was economical to produce when natural gas prices were higher, and thousands of coalbed methane wells were drilled on federal lands. However, coalbed methane production has declined because the production of shale gas has kept natural gas prices low. Officials from the Buffalo field office told us that (1) low natural gas prices contributed to an increasing number of bankruptcies among coalbed methane operators, and (2) in general, these bankrupt operators stopped production activities, shut in the wells instead of permanently plugging them, and stopped communicating with BLM.
For these cases, Buffalo field office documentation outlines a 20-step process to identify a responsible party—that is, the operator or the person(s) to whom BLM issued the lease (the lessee)—to either permanently plug these wells or bring them back into production. Officials said that they repeated this 20-step process for each operator or lessee, as needed. Since one lease can have multiple lessees, the repetition of this process resulted in a very large number of enforcement actions, according to Buffalo field office officials. Under this process, BLM initially issued thousands of written notices requiring the responsible party to either “plug or produce.” When the responsible party did not take the specified corrective action outlined in the written notices, the field office then issued hundreds of monetary assessments for failure to comply with the written notice and again instructed the operators to “plug or produce.” When the responsible party failed to comply with the monetary assessments, Buffalo issued hundreds of civil penalties.
Buffalo field office officials told us that they do not know whether the government has collected any of the issued penalties because the responsible parties did not pay the penalties to BLM in a timely manner. As such, BLM turned these outstanding penalties over to the Treasury Department for collection, a process that can take up to 2 years, according to agency documentation. Since market conditions have remained unfavorable for coalbed methane production, BLM has taken actions to permanently plug some wells. For example, according to agency officials and documents, the agency has (1) worked with some non-bankrupt lessees, including at least one major oil and gas corporation, to plug wells, (2) re-directed funding from other BLM programs to pay to plug wells, and (3) contributed funding to the state of Wyoming’s well plugging program. We recently reported on BLM’s actual costs and potential liabilities for reclaiming oil and gas wells and have ongoing work reviewing BLM’s bonding requirements, which are the primary mechanism to ensure that operators complete required reclamation activities.
BLM Has Not Completed All Required Internal Control Reviews of Its Field Offices and Does Not Employ a Risk-Informed Oversight Strategy
BLM state offices did not complete internal control reviews at 27 of 33 field offices—including 5 of the 6 highest activity offices we identified. According to the July 2012 oversight policy, state offices are to periodically conduct internal control reviews of their field offices to, among other things, (1) review whether inspections and enforcement actions are accurate, complete, and conducted in accordance with policy, (2) review staffing and training needs, and (3) identify areas where program guidance can be improved. The July 2012 oversight policy also says that BLM state offices are responsible for overall programmatic oversight of field office operations. For those field offices with Inspection and Enforcement program functions, this means that state offices are responsible for ensuring that the field offices are able to meet the goals stated in the program’s handbook, which include production accountability (i.e., the accurate measuring and reporting of production volumes), environmental safety, and public safety. BLM state offices completed internal control reviews at 6 of the 33 field offices from 2013 through 2017 and scheduled reviews for 5 others from 2018 through 2020, as shown in table 5.
Officials from BLM state offices who completed internal control reviews said the benefits of these reviews included obtaining data to justify additional training or resources and providing a formal opportunity to examine key program management practices and correct identified deficiencies. For example, in September 2017, the Colorado state office completed an internal control review of a field office. Prior to this review, officials from that state office told us that they thought the field office might be understaffed based on a variety of factors, including longer than expected inspection times. BLM data showed that in fiscal year 2016 this field office estimated about 60 hours to complete a production inspection, while the other 5 Colorado field offices’ average estimate was about 14 hours. The September 2017 Colorado state review identified unofficial management policies at this field office that resulted in the underutilization of PET inspectors and inflated inspection times, creating a perception of understaffing. For example, one of the field office’s unofficial policies required that PET inspectors drive at least 1,000 miles a month in order to keep their government vehicle, which resulted in some inspectors taking longer routes and driving to locations beyond those required for the job. This policy contributed to artificially inflating inspection times. According to the Colorado state review, the accurate tracking of inspection times is vital for workload planning and staffing purposes. In response to these findings, the field office manager terminated the unofficial policy, and officials from the Colorado state office said they will check on the implementation of their recommendations by reviewing the inspection data. Officials from that state office are also no longer considering hiring additional PET inspectors. To ensure that the field office sustains these corrective actions, Colorado state officials told us that they perform periodic reviews of production inspection records and continue to hold progress report meetings with the field office’s management team.
Although BLM state offices completed internal control reviews at 6 of 33 field offices, the state offices did not complete reviews at 27 field offices, including 5 of the 6 highest-activity field offices we identified. Officials from BLM state offices identified some key human capital and workload reasons that hindered their ability to complete reviews, including:

long-term vacancies in multiple state offices’ inspection and enforcement coordinator positions, which BLM filled on a temporary basis with other agency employees;

competing priorities from upper management (e.g., preparing for lease sales); and

hiring and training new PET inspectors.
For example, according to one state office inspection and enforcement coordinator, the coordinator position was filled on a temporary basis by four different BLM employees from about November 2013 to November 2015 as the agency tried to find a permanent hire. This official said that as a result of the personnel changes, the state office did not conduct field office internal control reviews as initially scheduled. In addition, another state office inspection and enforcement coordinator said that she spends a lot of her time providing instruction and on-the-job training to newly hired PET inspectors in multiple field offices that do not have a supervisory PET inspector, which limits her ability to perform field office internal control reviews.
We also identified two shortcomings with BLM’s control activities that may have limited the agency’s ability to complete internal control reviews as required by the July 2012 oversight policy. First, BLM headquarters did not design appropriate types of control activities to help management fulfill its responsibilities. Specifically, the Inspection and Enforcement program manager said that BLM headquarters did not consistently track and monitor the extent to which state offices completed field office internal control reviews. This official said that headquarters tends to rely on state offices to track and monitor such reviews and that headquarters focused on higher priority work tasks, such as developing and implementing new regulations that were issued in January 2017. Within the first 3 years following the issuance of the July 2012 policy, the agency completed one internal control review each during fiscal years 2013 and 2015, although at least 12 reviews were to be completed. BLM headquarters officials we spoke with were not aware that so few reviews had been completed in fiscal years 2013 and 2015.
Federal standards for internal control state that management should design control activities to achieve objectives and respond to risks, such as by comparing actual performance to planned or expected results and analyzing significant differences. Because it did not consistently monitor and track state office performance, BLM headquarters (1) did not know that state offices were not conducting field office internal control reviews in accordance with the July 2012 oversight policy and (2) could not analyze the reasons why actual performance did not meet expected results. Identifying the reasons it did not complete internal control reviews (e.g., human capital and workforce challenges), developing and implementing a plan to address those challenges, and monitoring state offices’ progress toward completing required reviews will better position BLM to ensure that its state offices complete all required internal control reviews as called for by its July 2012 oversight policy.
Second, the July 2012 oversight policy identifies specific areas (e.g., the accuracy and completeness of inspections and staffing and training needs) that the reviews should assess, but according to a BLM headquarters official, the agency did not provide state offices with implementation guidance or procedures. This official said that BLM did not provide guidance or procedures so that state offices would have flexibility in how they conducted such reviews. However, multiple BLM state officials told us that such guidance or procedures would provide a helpful framework for conducting these reviews. One state office inspection and enforcement coordinator told us that since she had no prior training or experience designing and implementing internal control reviews, guidance or procedures would be especially beneficial.
Because they did not have documented implementation guidance or procedures to follow, the two state offices that completed internal control reviews developed their own procedures, which varied in design, methodology, and resources, based on our review of the six completed internal control reviews as well as interviews with BLM state officials. Specifically:
One state office (1) developed its own review procedures based, in part, on existing program documentation, (2) assigned a single individual to conduct reviews because the state did not have the resources available to provide additional staff support, and (3) reviewed inspection and enforcement data contained in BLM’s corporate oil and gas database as well as hard copy files, and interviewed field office PET inspectors.
Another state office (1) developed its review procedures based on those employed during a 2011 review of the entire Inspection and Enforcement program at the suggestion of the Deputy State Director; (2) assigned review teams consisting of multiple BLM officials with different areas of expertise; and (3) reviewed database and hard copy records, interviewed both field office PET inspectors and field office management, and observed field office PET inspectors as they conducted inspection activities.
Federal standards for internal control state that management should design control activities to achieve objectives and respond to risks, such as by clearly documenting internal control responsibilities in management directives, administrative policies, or operating manuals. BLM has a documented policy, but this policy does not clearly specify what procedures state office officials are to follow to conduct internal control reviews. Without developing and documenting procedures for implementing internal control reviews under the July 2012 oversight policy, BLM does not have assurance that state offices will review all specific areas identified in the July 2012 oversight policy in a consistent manner.
In addition, although BLM did not have documented procedures for conducting periodic internal control reviews, the July 2012 oversight policy specified a schedule for conducting such reviews (see fig. 4). The schedule states the following:
For state offices with four or fewer oil and gas field offices, the state office is to complete an internal control review of each field office at least once every 3 years. The state offices in this category are Alaska, California, Eastern States, Montana, Nevada, and Utah.
For state offices with five or more oil and gas field offices, the state office is to complete an internal control review of each field office at least once every 6 years. The state offices in this category are Colorado, New Mexico, and Wyoming.
According to the Inspection and Enforcement program manager, this schedule was based on discussions with state office inspection and enforcement coordinators to balance officials’ availability to conduct internal control reviews and other responsibilities. The program manager said that BLM did not identify or consider risk when developing the schedule because the agency’s primary focus was to balance the new requirement to conduct field office internal control reviews with the state office coordinators’ existing workload. However, the review schedule in the July 2012 oversight policy generally requires more frequent internal control reviews of low-activity offices and less frequent reviews of high activity offices. In particular, five of the six highest activity field offices we identified in our review are in states in which there are more than five field offices. According to the policy, these highest activity offices would therefore receive an internal control review at least once every 6 years. In contrast, five of the six lowest activity field offices are in states in which the policy requires that reviews be conducted at least once every 3 years. Such a review schedule may not ensure that BLM has properly established and implemented internal control reviews at the highest activity field offices—whose workforce must complete a majority of the program’s workload—which may inherently pose a greater risk to the program’s goals of production accountability, environmental protection, and personnel safety. For example, if the six highest activity field offices have an inadequate number of PET inspectors, then there is an increased risk to BLM’s production accountability goal. Specifically, if these offices do not have the human resources needed to fully inspect high-priority production cases, BLM has less assurance that operators are properly measuring and reporting production volumes, which increases the risks to the accurate collection of royalty payments. Furthermore, those field offices that experienced greater levels of drilling workload may present a higher risk to BLM’s environmental protection goal. Specifically, if the six highest activity offices do not conduct accurate and complete drilling inspections, BLM has less assurance that operators are properly conducting drilling operations, which increases the risks of environmental problems, such as contamination of fresh water aquifers.
Federal internal control standards call for entities to identify, analyze, and respond to risks related to achieving the defined objectives, such as by estimating the significance of identified risks to assess their effect on achieving defined objectives. Management estimates the significance of a risk by considering the magnitude of impact, which refers to the likely magnitude of deficiency that could result from the risk and is affected by factors such as the size of a risk’s impact. Without employing a risk-informed approach to scheduling and conducting internal control reviews that takes into account the risks to the Inspection and Enforcement program, such as those inherent in field offices’ workload and workforce, BLM will not have reasonable assurance that it has adequate controls in place to address the effect of the field offices that pose the greatest risk to the program. BLM officials said that assessing risk, including field offices’ workload activity levels, could provide a useful metric to inform how BLM conducts and prioritizes field office internal control reviews.
Conclusions
On federal and Indian lands, BLM’s Inspection and Enforcement program is intended to ensure that operators developing oil and gas resources do so in a manner that protects public safety, environmental health, and royalty income. This is a complex undertaking that occurs within the oil and gas market and requires BLM’s PET inspectors to conduct technically challenging drilling, production, and plugging inspections. In this context, BLM’s July 2012 oversight policy calls for its state offices to conduct periodic internal control reviews of field offices. While BLM state offices completed internal control reviews at 6 field offices, they did not complete reviews at 27 field offices, including 5 of the 6 highest activity field offices we identified. In addition, because it did not consistently monitor and track state office performance, BLM headquarters (1) did not know that state offices were not conducting field office internal control reviews in accordance with the July 2012 oversight policy and (2) could not analyze the reasons why actual performance did not meet expected results. Identifying the reasons it did not complete internal control reviews (e.g., human capital and workload), developing and implementing a plan to address those challenges, and monitoring state offices’ progress toward completing required reviews will better position BLM to ensure that its state offices are completing all required internal control reviews as called for by its July 2012 oversight policy.
Additionally, although BLM’s July 2012 oversight policy does identify the specific areas that internal control reviews should assess, BLM did not provide state offices with implementation guidance or procedures. Because they did not have documented implementation guidance or procedures to follow, the two state offices that completed internal control reviews developed their own procedures, which varied in design, methodology, and resources. Without developing and documenting procedures for implementing internal control reviews under the July 2012 oversight policy, BLM does not have assurance that state offices will review all specific areas identified in the July 2012 oversight policy in a consistent manner.
Furthermore, and inconsistent with federal internal control standards, BLM’s July 2012 oversight policy established a review schedule without identifying or considering risk. Without employing a risk-informed approach to scheduling and conducting internal control reviews that takes into account the risks to the Inspection and Enforcement program, such as those inherent in field offices’ workload and workforce, BLM will not have reasonable assurance that it has adequate controls in place to address the effect of the field offices that pose the greatest risk to the program.
Recommendations for Executive Action
We are making the following three recommendations to BLM:

The Director of BLM should identify the reasons internal control reviews were not completed (e.g., human capital and workforce), develop and implement a plan to address those reasons, and monitor state offices’ progress toward completing required reviews. (Recommendation 1)
The Director of BLM should develop and document procedures for implementing internal control reviews under the July 2012 oversight policy. (Recommendation 2)
The Director of BLM should implement a risk-informed approach to scheduling and conducting internal control reviews that takes into account the risks to BLM’s mission, such as those inherent in field offices’ workload and workforce. (Recommendation 3)
Agency Comments
We provided a draft of this product to the Department of the Interior for comment. In its comments, reproduced in appendix I, Interior concurred with our three recommendations and outlined planned actions to implement them. For example, BLM plans to issue updated guidance and procedures for conducting internal control reviews to help ensure that such reviews are completed in a timely manner using a consistent risk-based approach.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of the Interior, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.
Appendix I: Comments from the Department of the Interior
Appendix II: GAO Contact and Staff Acknowledgments
Contact:
Staff Acknowledgments:
In addition to the contact named above, Christine Kehr (Assistant Director), Patrick Bernard (Analyst-in-Charge), Tara Congdon, William Gerard, Cindy Gilbert, Jessica Lewis, Dan Royer, Kiki Theodoropoulos, Karen Villafana, and Jack Wang made key contributions to this report. | Why GAO Did This Study
BLM has primary responsibility for managing oil and gas development on federal and Indian lands. To help ensure operator compliance with laws and regulations, BLM administers the Inspection and Enforcement program. Under the program, BLM inspects operators' drilling, production, and plugging activities and can issue various enforcement actions, such as monetary assessments, for violations. GAO was asked to examine key aspects of the Inspection and Enforcement program.
This report (1) describes the distribution of BLM's oil and gas Inspection and Enforcement program's workload and workforce among agency field offices for the most recent 5 years for which such data were available (fiscal years 2012 through 2016) and (2) examines the extent to which BLM conducted internal control reviews in accordance with its July 2012 oversight policy for fiscal years 2013 through 2018, the most recent period for which such data were available. GAO examined BLM policies, data, and documents; interviewed BLM headquarters, state and field office officials; visited six BLM field offices selected based on their level of resource development activity; and toured oil and gas drilling, production, and plugging sites at three of these six field offices.
What GAO Found
Based on GAO's analysis of Bureau of Land Management (BLM) data, the distribution of BLM's oil and gas Inspection and Enforcement program's workload and workforce showed an imbalance among BLM's 33 field offices in fiscal years 2012 through 2016. GAO analyzed BLM data on the overall percentage of the workload and workforce distributed at each field office (i.e., activity level) and grouped similar activity level field offices together into highest, medium and lowest activity categories. GAO found that the program distributed the majority of its workload to 6 highest activity offices and distributed the majority of the workforce to 21 medium activity offices (see fig.). Based on GAO's review of BLM documentation and interviews with agency officials, BLM took both short- and long-term actions in fiscal years 2012 through 2016 to address this imbalance, such as temporarily re-assigning inspectors from some medium activity offices to some of the highest activity offices.
BLM has not completed all required internal control reviews of its field offices. BLM's July 2012 oversight policy instructs its state offices to periodically conduct internal control reviews of field offices, which are to, among other things, identify staffing needs. BLM state offices completed internal control reviews at 6 of 33 field offices from 2013 through 2017, and 5 more are scheduled from 2018 through 2020. Officials from BLM state offices told GAO that some human capital and workload challenges hindered their ability to complete reviews, including long-term vacancies in some state offices positions. However, a senior BLM official said that headquarters did not consistently track and monitor the extent to which state offices completed field office internal control reviews, and headquarters officials said they were not aware that so few reviews had been completed. Under federal standards for internal control, management should design control activities to achieve objectives and respond to risks, such as by comparing actual performance to expected results and analyzing significant differences. Identifying the reasons it did not complete internal control reviews, developing and implementing a plan to address those challenges, and monitoring state offices' progress toward completing required reviews will better position BLM to ensure that its state offices are completing all required internal control reviews as called for by its July 2012 oversight policy.
What GAO Recommends
GAO is making three recommendations to BLM, including taking actions to increase monitoring of state offices' progress toward completing internal control reviews. BLM concurred with all three recommendations. |
Background
HBCU Capital Financing Program
The Capital Financing Program provides loans to eligible HBCUs for the repair, renovation, construction, or acquisition of capital projects or to refinance existing capital debt. Several offices at Education are involved in administering the program, including the Office of Postsecondary Education and the budget office, with one official responsible for overall program management. Education contracts with a designated bonding authority to manage the program’s operations. The authorizing legislation also establishes the HBCU Capital Financing Advisory Board (Advisory Board) to provide advice to Education and its designated bonding authority on implementing the program. (See table 1.)
The loan process for an HBCU to participate in the Capital Financing Program consists of multiple steps. HBCUs must first complete a preliminary application with the designated bonding authority that includes information such as enrollment, financial data—including a description of existing debt—and proposed capital projects. The designated bonding authority reviews this information to assess the ability of an HBCU to take on debt and determine whether the college should formally complete an application. The application includes more detailed financial information, such as audited financial statements, as well as capital improvement plans and assessments. To be approved for the loan, an HBCU must satisfy certain credit criteria and have qualified projects. Upon reviewing the college’s application, designated bonding authority representatives may visit the HBCU and will recommend to Education whether the college should receive a Capital Financing Program loan. If Education agrees and approves the loan, it goes through a closing process during which certain terms and conditions of the loan may be negotiated. (See table 2.)
The Capital Financing Program’s statute caps total outstanding loans at $1.1 billion, but since fiscal year 2012, Congress has annually passed appropriation bills allowing Education to lend above that amount. As of November 2017, Education has lent over $2 billion in total with $1.8 billion outstanding.
Loan Modifications for Selected HBCUs Following 2005 Gulf Coast Hurricanes
In 2005, Hurricanes Katrina and Rita struck New Orleans and surrounding areas, resulting in significant damage to four HBCUs in the Gulf Coast region: Dillard University, Southern University at New Orleans, Xavier University of Louisiana, and Tougaloo College. The Emergency Supplemental Appropriations Act for Defense, the Global War on Terror, and Hurricane Recovery, 2006 (Emergency Act) was enacted in June 2006, in part to assist these colleges in their recovery efforts. The Emergency Act amended certain provisions of the Capital Financing Program for these colleges. For example, the Emergency Act included provisions such as a lower interest rate and lower fees for cost of issuance (both set at one percent or less), elimination of the escrow requirement, and deferment of both principal and interest payments for a 3-year period. Despite these more generous loan provisions, these four HBCUs experienced challenges repaying these loans due to difficulties they faced rebuilding their enrollment and finances to the levels before the hurricanes. In 2011, federal law authorized Education to further modify the terms and conditions of the Capital Financing Program loans made to these four HBCUs under the Emergency Act. To assist these four colleges, Education used this authority to modify Emergency Act loan terms in the following ways:
Payment forbearance: The HBCUs were granted a 5-year forbearance on their loan payments starting in 2013. During the forbearance period, the colleges were not responsible for making payments toward the principal, interest, or associated fees, but interest and fees continued to accrue during that time. At the end of the forbearance period, the colleges would be responsible for the outstanding principal, accrued interest, and fees.
Expense-based repayment: After the forbearance period, colleges would pay the lesser of an amount based on a percentage of each college’s operating expenses or the reamortized payment schedule.
Debt adjustment: Any unpaid loan amounts at the original loan maturity date—June 1, 2037—would be forgiven. The HBCUs would not be held responsible for any unpaid balances as of that date.
In February 2018, before the end of the forbearance period, Congress passed the Bipartisan Budget Act of 2018 which authorized the Secretary of Education to forgive any outstanding balance owed by these HBCUs. In March 2018, Education forgave these colleges’ loans, eliminating over $300 million of outstanding debt.
Strengthening HBCU Program
Education also administers the Strengthening HBCU Program, which provides grants to eligible HBCUs. These grants can be used for a number of purposes, including physical infrastructure, financial management, academic resources, and endowment-building. The program is non-competitive, and Education awards funds on a 5-year cycle through formula-based grants. In 2017, Education awarded 98 new grants totaling about $245 million.
Bond Financing
Municipal bonds are debt securities issued by states, cities, counties and other governmental entities to fund day-to-day obligations and to finance capital projects. Municipal borrowers can also issue bonds on behalf of private entities such as private colleges, or those colleges can issue their own debt that would not be tax exempt. To issue a bond, entities are typically rated by a credit rating agency. This rating indicates the credit quality of the bonds and likelihood of default. The entity may hire municipal advisors and is required to have an underwriter to prepare and sell the bonds to investors. Entities are provided the funding up front to finance the project and then pay the principal, interest, and any fees to investors until the bond matures, often up to 30 years.
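To illustrate the repayment mechanics described above, the following sketch applies the standard level-payment (annuity) formula to estimate the annual debt service on a bond. The principal, interest rate, and term below are hypothetical figures chosen for illustration; actual bond structures, rates, and fees vary.

    def annual_debt_service(principal, annual_rate, years):
        # Standard annuity formula: payment = P * r / (1 - (1 + r) ** -n).
        if annual_rate == 0:
            return principal / years
        return principal * annual_rate / (1 - (1 + annual_rate) ** -years)

    # Hypothetical example: a $30 million bond at 5 percent over 30 years
    # requires annual payments of roughly $1.95 million, before any fees.
    print(round(annual_debt_service(30_000_000, 0.05, 30)))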
HBCUs, Stakeholders, and Planning Documents Identified Extensive and Diverse Capital Project Needs
HBCUs, Stakeholders, and Planning Documents Cited Substantial Need for Repairs and Building Replacement
Almost all the HBCUs responding to our survey (70 of 79) reported that, on average, 46 percent of their building space needed to be repaired or replaced. For example, of the 35 public HBCUs that responded to our survey question on building condition, 8 reported that more than three-quarters of their building space was in need of repair or replacement. Like all institutions of higher education, HBCUs are facing increasing capital project needs due to aging campus facilities, according to higher education organization officials and facilities experts. HBCUs’ planning documents we reviewed also support our survey findings on capital project needs. For example, consultants hired by one public HBCU found that a quarter of its buildings were in poor condition with the potential for demolition, according to the college’s master plan. Severe weather was also cited as a challenge by officials at another public HBCU we visited, where nearly all building space had been damaged, requiring the college to shut down portions of its functional buildings, construct new buildings, and build flood walls. According to officials from this college, however, damages remain unaddressed in part due to a lack of funding (see fig. 1).
HBCUs, Planning Documents, and GAO Site Visits Identified Deferred Maintenance, Modernization Efforts, and Historical Buildings as Key Reasons for Needs
Through our survey, site visits, and review of master plans, we identified three main reasons for capital project needs: a backlog of deferred maintenance, HBCUs’ efforts to modernize campuses to be more competitive, and historical building requirements. A majority of HBCUs responding to a survey question on planned capital projects over the next 5 to 10 years reported plans to prioritize repairing or replacing academic buildings or residence halls (see fig. 2).
Half of the HBCUs (24 of 48) that responded to our survey question on their current deferred maintenance backlog—repairs that were not performed when they should have been—reported a backlog of $19 million or more. In addition, 30 HBCUs reported in our survey that their deferred maintenance backlog had increased in the last 3 years (2015 through 2017), and 7 HBCUs reported their backlog decreased. Public HBCUs, on average, reported deferred maintenance backlogs of $67 million, and private HBCUs of $17 million. To better understand deferred maintenance, colleges hire consultants to conduct facilities condition assessments. For example, consultants conducted a facility condition assessment to understand a public HBCU’s deferred maintenance backlog, among other things, and found the backlog was $9.7 million for various repair or replacement projects, ranging from repairing HVAC systems to needing a new roof for an administrative building. A higher education association reported that deferred maintenance can erode safe physical conditions, financial health, and the morale of an institution.
Officials from most HBCUs we interviewed (11 of 15) said they attempt to prioritize their deferred maintenance but that financial emergencies or funding constraints prevent them from doing so. For example, officials at an HBCU we visited said that the main pipes that feed into three residence halls and their student center burst, and this unplanned capital project cost the college nearly $1 million. This HBCU had to borrow funding from its operating budget, which took away from funds that could have been used to address planned deferred maintenance projects.
Modernization Efforts
Officials from all 15 HBCUs we interviewed said that student interests in updated residence halls or academic programs require modern building spaces in order for a college to remain competitive. Officials from several HBCUs we interviewed (7 of 15) said residence halls on their campuses are outdated or in need of repairs (see fig. 3). For example, officials at one HBCU we visited said some of their residence halls were built in the 1960s and 1970s and the concrete block construction only allowed minimal changes. Officials at some HBCUs (3 of 15) said students’ interest in living on-campus increased their need for housing. Officials at one HBCU said student enrollment impacts their capital project planning and that they have plans to repair residence halls and to build new housing facilities as enrollment increases, but have not yet identified funding. One HBCU’s master plan cited anticipated growth in its student population between 2014 and 2024 will continue to impact capital project needs, including a need for additional buildings for academics and student services.
Officials from several HBCUs we interviewed (5 of 15) also reported building new facilities to remain competitive in certain academic fields. For example, officials from one HBCU reported investments in building new facilities and repairing existing buildings to better accommodate Science, Technology, Engineering and Mathematics (STEM) majors (see fig. 4).
More than half of HBCUs responding to our survey (42 of 79) reported having buildings designated as historic, making up, on average, 11 percent of their building space. Many of those HBCUs indicated that historic building needs are significant or often take priority. According to officials from two HBCUs we visited and another we interviewed, historic buildings require maintenance that can be expensive, especially for buildings designated as historic by the National Register of Historic Places. Further, the Department of the Interior reported in 2018 that HBCUs have historic building rehabilitation needs and that these colleges lack the resources to repair them. For instance, a 2016 master plan for a public HBCU shows that a historic building constructed in 1916, which serves as a residence hall and has only been updated once, in 1971, needs over $6 million in repairs to better accommodate students. An official at another HBCU we visited also said that the prohibitive cost of repairing the campus’ historic building has made it non-functional. This historic building had previously been used as a residence hall (see fig. 5).
HBCUs Use A Few Funding Sources for Capital Project Needs and Fewer than Half Use Education’s Capital Financing Program
HBCUs Rely on a Few Funding Sources to Address Capital Project Needs
HBCUs primarily rely on a few sources of funding to address capital project needs, such as state grants and appropriations for public HBCUs and private giving and tuition and fees for private HBCUs, according to HBCUs responding to our survey and our interviews. Officials from almost half of the HBCUs we interviewed (7 of 15) said relying on a few funding sources can affect a college’s ability to fund capital projects. Education officials and several stakeholders also said this reliance can put HBCUs at a financial disadvantage when seeking additional external funding, such as from the bond market. Diversity of revenues is a key metric when determining a college’s credit rating, which uses a college’s financial profile to assess its ability to pay its financial obligations. Colleges with lower credit ratings, for example, may face challenges accessing the bond market, or may pay more to issue a bond, according to several stakeholders. Using IPEDS data from the 2015-16 school year, we found that HBCUs may face challenges with revenue diversity because a large proportion of their revenue is from government funding (federal, state, and local) and tuition and fees. A college’s wealth, such as the size of its endowment, can also affect its credit rating, according to officials from two credit rating agencies. Officials from a higher education association and a foundation noted that many HBCUs have small endowments and as a result may face challenges accessing financing. Our analysis of IPEDS data shows that HBCUs’ median endowments are about half the size of similar non-HBCUs’ (see table 3).
Not all HBCUs face these challenges, however. According to a representative of one higher education facilities association, some more affluent private HBCUs have more diversified revenue streams and have successfully raised funds from private giving and public-private partnerships to address their capital project needs. Nevertheless, many HBCUs face continued challenges securing external funding.
Funding Sources Used by Public HBCUs
Public HBCUs generally rely on state funding—such as annual appropriations for repairs or one-time grants for new construction—to address their capital project needs; however, those funds are often insufficient to meet their needs, according to some stakeholders and HBCU officials. A majority of public HBCUs (28 of 41) reported using state grants and appropriations to address capital project funding, according to survey responses (see fig.6). Officials from most public HBCUs we interviewed (5 of 6), however, said state appropriations are often limited to academic or administrative buildings, and colleges are responsible for financing and maintaining other projects and building spaces, such as residence halls or student centers. Furthermore, officials from all public HBCUs we interviewed (6 of 6) reported that state funds are often not sufficient to adequately address both routine repairs and their deferred maintenance backlog. Declines in state funding for higher education in recent years have also introduced financial uncertainty, particularly for HBCUs, according to officials from half of the public HBCUs and many stakeholders we spoke with. For example, officials at one public HBCU we visited said that as a result of cuts in the state’s capital budget, the college does not have enough funding to address emergency or deferred maintenance needs and they are running a deficit. Officials from one credit rating agency said that because public HBCUs rely more on state funding than their public non-HBCU counterparts, they are potentially more vulnerable than other colleges.
Over half of public HBCUs in our survey (22 of 38) reported that they used state-issued bonds to address their capital project funding for the last 5 years. Officials from most public HBCUs we interviewed (4 of 6) said the state or university system often issues general obligation bonds on behalf of the state and disburses funding to colleges to finance large-scale capital projects. For example, one state issued a $2 billion bond for the 16 colleges in its university system and provided one of its public HBCUs with $30 million for a new college of business. Similar to state appropriations, officials from some public HBCUs noted that state-issued bonds are also typically restricted to academic or administrative buildings rather than residence halls or student centers. Officials from 12 public HBCUs also reported in our survey issuing bonds themselves to finance capital projects. Officials from most public HBCUs we interviewed (4 of 6) said colleges issue bonds, with their state system's permission, to finance capital projects when state funding is limited or if the projects are for non-academic buildings. For example, one public HBCU issued a $90 million bond to fund a new student center.
Funding Sources Used by Private HBCUs
More than half of private HBCUs reported using alumni and private giving or revenue from tuition and fees to address their capital needs (see fig. 7). However, private HBCUs may face challenges using these sources to address their capital needs due to competing priorities for these revenue streams and difficulty raising additional funds from these sources, according to HBCUs and stakeholders we interviewed.
Officials from most private HBCUs we interviewed (7 of 9) said they use some funding from alumni and private gifts for small capital projects, but that donors do not usually contribute to larger projects or help address deferred maintenance or repairs. While a majority of private HBCUs responding to our survey (21 of 37) reported using alumni and private giving to address their capital project needs, this funding source only accounted for 10 percent of their overall capital project funding. Several stakeholders we interviewed (4 of 10) said that some private HBCUs do not have robust fundraising offices and may face challenges raising additional funding from alumni or other private sources.
A majority of surveyed private HBCUs (20 of 37) reported using tuition and fees to address their capital project needs over the last 5 years. Education officials and officials from 5 of 9 private HBCUs said relying on tuition and fees to address capital project needs—in addition to other expenses such as operations and academics—can strain a college's finances. Many officials from private HBCUs we interviewed (6 of 9) told us that because they are so tuition-dependent, drops in enrollment make it difficult to maintain their facilities or repay capital debt. Officials from one higher education association noted that some HBCUs face constraints raising additional tuition revenue needed to cover capital projects and other expenses because they are generally smaller colleges: more than half of private HBCUs have fewer than 1,000 students. Private HBCUs also have lower tuition compared to similar private non-HBCUs, according to our analysis of IPEDS data. Additionally, two stakeholders told us HBCUs may face challenges raising tuition and fee revenue, in part, because the student population at HBCUs tends to be lower income and to rely more heavily on federal student aid. Based on our analysis of IPEDS data, for example, a higher proportion of students at private HBCUs received Pell Grants in the 2015-16 school year compared to similar private non-HBCUs—77 percent and 43 percent, respectively.
Strengthening HBCU Program
A majority of HBCUs responding to our survey (49 of 77) reported using federal grants to finance capital projects, and most indicated using Education's Strengthening HBCU Program. We analyzed the program's 2016 annual reports, the most recent data available at the time of our review, and found that more than three-quarters of HBCUs that received grants in 2016 (79 of 98) used the funds to address capital project needs. Our analysis found that HBCUs in the Strengthening HBCU Program used an average of 22 percent of their funding from this source for capital projects in 2016. According to our analysis of the annual reports, 15 of the 98 HBCUs in the program reported that the grant helped decrease the number of instructional facilities with deferred maintenance backlogs. Officials we interviewed from one HBCU said they used grants from the Strengthening HBCU Program to address some of their deferred maintenance backlog and to renovate classrooms to better meet students' academic needs. For example, they said the grant funded capital projects that support the college's physics and chemistry programs (see fig. 8). In another instance, a private HBCU reported using the program's funds to support technological updates and modernize classrooms. Such updates could help with student recruitment and, ultimately, help increase student enrollment.
Fewer than Half of Eligible HBCUs Used Loans from Education’s Capital Financing Program
Fewer than half of HBCUs, or 46 of the 99 HBCUs that are eligible, have used the HBCU Capital Financing Program to fund capital projects, according to Education data. HBCUs have borrowed over $2 billion, with private HBCUs representing about two-thirds of the loan volume (see fig. 9). After 2007, Education saw an increase in the number of loans made and the amount borrowed by HBCUs due in part to the program’s expansion to help colleges affected by Hurricanes Katrina and Rita in 2005 and Education’s efforts to improve outreach.
Education tracks how Capital Financing Program funds are used, which can fall into three broad categories: refinancing, deferred maintenance and repair, and building replacement. According to our analysis of Education data, since 1996, rather than use these loans for new capital projects, participants have used the program most frequently to refinance outstanding debt (see fig. 10). For instance, one public HBCU used a portion of a $36.6 million Capital Financing Program loan to refinance outstanding debt, which saved the college about $9 million. In addition to refinancing, program participants used the remaining funds to address deferred maintenance and repair or to replace buildings. For example, the most frequent type of project funded through the program was building or renovating residence halls, according to Education data. A private HBCU responded in our survey that it used the program to refinance outstanding debt for student housing and to help construct a new student center.
HBCUs responding to our survey and HBCU officials we interviewed reported using the Capital Financing Program because of its low interest rate. Survey respondents most frequently cited the program’s low interest rate as a reason for participating (33 of 37), as did officials from HBCUs we interviewed that use the program (10 of 11). According to Education and designated bonding authority officials, the program provides HBCUs with rates they might not receive in the private market. For example, program loans used for refinancing from 2012 through 2016 had a median true interest cost—the interest rate plus fees charged to the college—of 3.15 percent. While officials from three state university systems noted their HBCUs can issue bonds with other colleges in their system to receive a more competitive interest rate, this option is not available to all HBCUs. According to officials at the designated bonding authority, HBCUs may lack high credit ratings, and the Capital Financing Program allows these colleges to access lending at rates comparable to highly rated colleges.
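To make the true interest cost concept concrete, the sketch below (in Python, using hypothetical figures of our own choosing rather than actual program data) solves for the rate that equates the present value of a loan's level annual payments to the net proceeds a college receives after fees.

```python
# Illustrative sketch with hypothetical figures (not actual program data):
# true interest cost (TIC) is the discount rate at which the present value
# of all debt-service payments equals the net proceeds (par amount - fees).

def annual_payment(principal: float, rate: float, years: int) -> float:
    """Level annual payment on a fully amortizing loan."""
    return principal * rate / (1 - (1 + rate) ** -years)

def true_interest_cost(par: float, fees: float, coupon: float, years: int) -> float:
    """Solve for the TIC by bisection; PV of payments is decreasing in rate."""
    payment = annual_payment(par, coupon, years)
    net_proceeds = par - fees

    def pv(rate: float) -> float:
        return sum(payment / (1 + rate) ** t for t in range(1, years + 1))

    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if pv(mid) > net_proceeds:   # PV too high means the rate is too low
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A hypothetical $30 million, 30-year loan at a 3.00 percent coupon with
# $500,000 in fees: the fees push the true cost above the stated rate.
print(f"TIC: {true_interest_cost(30_000_000, 500_000, 0.03, 30):.2%}")
```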
Survey respondents also frequently cited the opportunity to refinance existing, more expensive capital debt and lack of access to other funding options as reasons for participating in the Capital Financing Program. Specifically, over two-thirds of survey respondents (24 of 35) cited the opportunity to refinance existing debt. According to officials from Education and the designated bonding authority, HBCUs can see substantial savings using the program. Data provided by the designated bonding authority showed that HBCUs that refinanced debt in the program from 2012 through 2016 saved a median of 14 percent of the overall loan cost. One survey respondent, for example, reported that as a result of the savings generated by refinancing existing bonds the college was able to purchase a residence hall. Almost half of the participating HBCUs that responded to the survey question on why they used the program (15 of 32) reported that they did not have access to other funding. Officials from one organization representing almost three-quarters of the private HBCUs told us this program is particularly important for small private HBCUs that have limited resources and for private HBCUs that do not have access to state funding and may not have the capacity to issue bonds. Officials from most public HBCUs we interviewed (4 of 6) also noted that because states do not typically fund buildings such as residence halls or student centers, the Capital Financing Program can help address that funding gap.
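The refinancing savings cited above follow from simple amortization arithmetic. The short sketch below illustrates the idea with hypothetical figures of our own choosing, comparing total payments on legacy debt against a lower-rate program loan.

```python
# Illustrative sketch with hypothetical figures (ours, not program data):
# total-payment savings from refinancing legacy debt at a lower rate.

outstanding, years_left = 20_000_000, 20

def total_payments(rate: float) -> float:
    # Level annual payment on a fully amortizing loan, times the term.
    payment = outstanding * rate / (1 - (1 + rate) ** -years_left)
    return payment * years_left

old_cost = total_payments(0.06)    # assumed legacy bond rate
new_cost = total_payments(0.0315)  # assumed program-loan rate

print(f"Savings: {100 * (old_cost - new_cost) / old_cost:.1f}% of overall loan cost")
```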
Education Has Taken Some Steps to Help HBCUs Participate in the Capital Financing Program, but Further Action Is Needed
Education Conducts Outreach, but Some HBCUs Reported Being Unaware of the Capital Financing Program
Education and its designated bonding authority have taken some steps to increase awareness of the Capital Financing Program, but some HBCUs and university system officials reported in our survey and interviews that they were unaware of the 26-year-old program. Officials from Education and its designated bonding authority said they attend a range of conferences and events in the HBCU and higher education communities to increase awareness of the program, such as conferences with higher education business officers and an annual national HBCU conference. A senior Education official said, when possible, Education visits individual public and private non-participating HBCUs that may be good candidates for the program based on their credit. In addition, a senior designated bonding authority official said designated bonding authority staff visits every HBCU that applies or expresses interest in the program. However, about a quarter of non-participating HBCUs that responded to our survey said they were unaware of the program. Officials we interviewed at one state university system also reported they had not heard of the program.
HBCUs and state university systems may be unaware of the Capital Financing Program because Education does not target its outreach in two key ways.
Lack of outreach and communication with state university systems: Stakeholders we interviewed and a senior Education official said Education does not reach out to, or communicate program information directly with, state university systems—which oversee groups of public universities (both HBCUs and non-HBCUs) supported by an individual state—even though public colleges accounted for half of all HBCUs in 2016. A senior Education official told us Education staff does not reach out to state university systems because program loans are made directly to individual HBCUs. Nonetheless, according to officials at three state university systems, these systems generally play a role in coordinating colleges' capital budget requests, and their awareness of the Capital Financing Program could help Education in its efforts to increase participation among public HBCUs. For example, officials at one state university system told us they are always interested in learning about low-cost ways to help their colleges with capital projects, and they would be interested to learn more about how the Capital Financing Program could help their public HBCUs. In addition, one surveyed public HBCU that was unaware of the program suggested Education work with state university system offices, as they are the ones responsible for facilitating and approving colleges' capital funds. Officials at the state university system for this HBCU also said they were unaware of the program.
Lack of formal outreach plan to address HBCU leadership changes: When possible, Education officials said they reach out to HBCUs as new presidents or chief financial officers come on board. However, Education officials said they do not track this particular type of outreach. In 2016, about three-quarters of HBCUs experienced a change in at least one key leadership position, according to our analysis of Education reports, and several stakeholders we talked to cited the frequency of leadership change as a challenge. Given the frequency of changes in key leadership positions at HBCUs, consistent outreach to this group is particularly important.
This lack of program awareness among individual HBCUs and state university systems can hinder participation. Since our 2006 report on the Capital Financing Program, participation has increased from 14 to 46 HBCUs, but the total remains at fewer than half of all HBCUs. While the program is only available for capital financing of projects that meet specific criteria, it serves as a potentially important resource for HBCUs that continue to face challenges diversifying their funding sources to meet capital project needs. The Consolidated Appropriations Act, enacted in March 2018, requires Education to create and execute an outreach plan to work with states and the Capital Financing Advisory Board to improve outreach to states and help additional public HBCUs participate in the program. Taking steps, such as reaching out directly to officials in facilities departments at state university systems, could help to address several of the issues we have identified in this report related to communication with state university systems.
Federal internal control standards state that management should communicate information needed to achieve an agency's objectives to key external stakeholders. As Education develops its outreach plan, it is important that the agency also ensure that officials at individual HBCUs who engage in capital planning—presidents, chief financial officers, and facilities managers—are aware of the program. Indeed, over half of non-participating HBCUs (23 of 34) responded in our survey that improved communication from Education was "moderately" or "extremely" important to increase program participation. In addition to working with the Capital Financing Advisory Board—which includes representatives of public and private HBCU organizations—to reach out to state university systems, Education could also further leverage the resources of its designated bonding authority. While the designated bonding authority reaches out to some prospective program participants, it could help Education further ensure that program information reaches all HBCUs. Without these efforts as part of the agency's outreach plan, HBCUs eligible for the Capital Financing Program—the institutions that the program is designed to serve—may remain unaware of the program and miss opportunities to access low-cost capital financing.
Some Program Features Contribute to Low Participation by Public HBCUs
Some public HBCUs report being prohibited from participating in the Capital Financing Program by state law or policy because of certain program features, and Education has taken limited steps to coordinate with states to address those issues. According to our analysis of survey responses and interviews, about one-third of non-participating public HBCUs across four states (13 of 37) report being unable to use the program due to at least one federal requirement placed on the college that conflicts with state law, policy, or practice. These features include requirements for pooling escrow funds, collateral, and lending directly to HBCUs (see table 4).
Education has taken steps to address public HBCUs' concerns with the escrow requirement, but not the other state-level provisions that create challenges. In 2006, GAO recommended that Education consider alternatives to the escrow pool requirement, and Education submitted a legislative proposal to Congress, most recently in 2017, to require fees instead. However, Education has not systematically coordinated with states to address other laws or policies that create challenges or to identify potential solutions to help more public HBCUs participate in the program. For example, based on one college's interpretation of state law, officials from Education and the designated bonding authority told us HBCUs in that state could not participate because of the state's requirement that such loans be issued to a third party. However, state university system officials in this state told us this requirement may not prohibit participation. They said a clearer explanation of the benefits and obligations of the program from Education would be helpful to determine whether the state's HBCUs could participate. Officials at an HBCU in another state with restrictions suggested that Education work with the states to help states develop regulations that do not hinder access to the program. Officials from the university system in that state said they would be open to working with Education to find a way to allow their HBCUs to participate.
Some state university systems and colleges have successfully developed solutions that could also be helpful for states whose laws or policies create similar challenges. For example, officials we spoke with from one state university system said a state statute was recently changed after an HBCU’s application to the program had to be withdrawn because of a state law prohibiting using tuition revenue as collateral. Those changes were enacted in early 2018, and state university system officials said they are moving forward on HBCU participation in the program.
Our prior work highlights the importance of coordinating among key stakeholders to achieve results. Education's strategic plan prioritizes supporting educational institutions and increasing college access, and coordinating with external stakeholders such as state university systems to achieve those goals. While Education is aware that many public HBCUs face state-level restrictions on participating in the Capital Financing Program, a senior Education official said the Capital Financing Program does not provide support to states whose laws or policies create such challenges. Education officials said they work with colleges on a case-by-case basis, and only work directly with state university systems when invited to by the interested HBCU. However, officials from one university system noted that it would be helpful for Education to keep both the college and the system informed of the program given the system office's level of involvement in capital financing decisions. Officials we interviewed from three of the four public HBCUs in states with laws or policies that create these challenges said they are interested in participating in the Capital Financing Program. One HBCU official said given the low interest rate, his HBCU would refinance all its existing capital debt into the program if given the opportunity. As Education develops an outreach plan, it will be important for the plan to include coordination with key stakeholders such as state university systems to address state-level challenges to participation and share potential solutions, and to leverage the designated bonding authority and Advisory Board in that effort.
Education Has Taken Steps to Help Some HBCUs Experiencing Financial Hardship, but Additional Analysis Could Better Inform Policymakers
The number of loan defaults in the Capital Financing Program and the number of HBCUs having difficulty making timely loan payments have increased recently, but Education has not fully assessed the potential use of loan modifications to assist such HBCUs. For example, two HBCUs defaulted on their Capital Financing Program loans in the last 2 years, and 29 percent of loan payments were delinquent in 2017. HBCU officials we interviewed reported that financial challenges stemming from two events—the 2008 economic recession and a recent change to federal student financial aid—have decreased enrollment at some HBCUs and affected HBCUs’ ability to repay their loans on time. For example, officials from two private HBCUs told us that they experienced declining enrollment as a result of the 2008 recession. In addition, changes made in 2011 to the Parent PLUS loan program—a program used by parents to help pay for their student’s tuition—resulted in increased denials of these loan applications, according to Education and officials from several HBCUs. As a result, some students could no longer afford to attend college, and the loss of tuition revenue created additional financial hardship for the colleges, according to officials from several HBCUs and an HBCU organization official. Education issued new regulations in 2014 that revised the Parent PLUS loan criteria, enabling more families to qualify for these loans. However, HBCUs had already lost significant amounts of tuition revenue as a result of the 2011 changes, according to Education officials.
HBCUs and stakeholders have called for loan modifications to potentially assist colleges in financial distress and help them avoid defaulting on their Capital Financing Program loans. According to key stakeholders and officials from eight HBCUs, there is a need for the program to have ways to assist HBCUs facing financial difficulties. For example, officials from four HBCUs we interviewed and four additional HBCUs we surveyed suggested additional program flexibility, such as forgiving, reducing, or temporarily suspending loan payments, could be helpful for some colleges. Stakeholders also suggested that loan deferment—allowing colleges to postpone payments without penalty—or other flexible payment options could help some colleges facing financial hardship. The Consolidated Appropriations Act, enacted in March 2018, appropriated $10 million for Education to defer participating HBCUs’ Capital Financing Program loans to assist colleges experiencing financial difficulties. Under this provision, loans can be deferred for up to 6 years for participating HBCUs demonstrating financial need and meeting certain conditions. These funds are available for Education to authorize loan deferments until the end of fiscal year 2018. Little is known, however, about how loan modifications would affect participating HBCUs or the program.
According to a senior Education official, the agency assessed the potential for loan deferment in 2010 and estimated that it would cost the federal government about $150 million annually. However, neither the program office nor Education's budget office, which is responsible for estimating the costs of policy changes, was able to provide any information on how Education arrived at this estimate. Furthermore, Education has not assessed whether several other types of loan modifications identified by stakeholders, or those used for HBCUs impacted by Hurricanes Katrina and Rita, could be beneficial to other participating HBCUs that are having trouble making timely loan payments.
Federal internal control standards state that agency management should plan for significant external events, analyze their effects on achieving program goals, and appropriately respond to those events. While Education and its designated bonding authority review each applicant's credit and ability to take on a Capital Financing Program loan, this review reflects an HBCU's current financial health at the time of its application. Given that Capital Financing Program loans can have terms of up to 30 years, major external changes such as an economic recession are possible over the life of the loan. Such events may affect an HBCU's ability to make timely loan payments and may increase the potential for an HBCU to default on its Capital Financing Program loan. According to Education's fiscal year 2019 budget request, the HBCU portfolio is experiencing greater financial stress as evidenced by an increase in loan delinquencies, and the federal government is at risk of incurring additional costs to manage the program. Analyzing the effects of deferring loans and other types of loan modifications on program participation and on program costs could help Education determine how best to assist participating HBCUs experiencing financial difficulties while minimizing the federal government's costs. However, a senior Education official said the agency does not plan to analyze (1) whether loan modifications could be helpful to program participants; or (2) the effect offering these modifications could have on the cost of the program. According to Education officials, modifications to the terms of Capital Financing Program loans cannot occur without statutory change. Nonetheless, Education is responsible for providing advice to Congress about what additional steps might be taken to improve the operation and implementation of the program. Conducting analyses on the effect of loan modifications, including recently authorized deferments, to help colleges avoid default and successfully participate in the program, and on the potential costs absorbed by Education of delayed or reduced payments, would enable Education to fulfill this responsibility.
Conclusions
HBCUs play a vital role in providing higher education opportunities for African-Americans. However, HBCUs continue to face challenges in securing financing to undertake needed capital projects. As a result, these colleges may be unable to make the campus improvements necessary to attract and retain students, potentially jeopardizing their long-term sustainability. Education’s Capital Financing Program is intended to be a key funding source for HBCUs’ capital needs, yet fewer than half of these colleges participate in the program.
As Education develops its statutorily mandated outreach plan, it will be important for the plan to address the outreach issues we have identified. Increasing outreach to individual HBCUs will encourage more college participation in the Capital Financing Program. Similarly, coordination with state university systems to address state-level provisions that create challenges and share potential solutions can increase public HBCU participation in the program. Education can leverage the resources not only of the Advisory Board, but also of the designated bonding authority, in these outreach efforts. If Education does not include these activities in its outreach plan, many of the HBCUs the program is intended to serve may continue to be unaware of the program or unable to participate in it.
Some HBCUs have faced declining enrollment as a result of changing economic conditions and recent changes in federal student aid policy. At the same time, the number of defaults and delinquencies has increased in the Capital Financing Program, potentially increasing the federal government’s responsibility for these losses. In addition, stakeholders have called for additional loan modifications for colleges in financial distress. The Consolidated Appropriations Act, enacted in March 2018, authorized Education to offer loan deferments to financially struggling HBCUs. As Education begins offering these loan deferments, it is important that Education analyze the effects of these deferments and other prior loan modifications, such as those given to certain HBCUs affected by Hurricanes Katrina and Rita, to ensure that they are having the intended effect. Analyzing the potential benefits of loan modifications to all participating HBCUs against the potential risks to the program, such as increased program costs, could further help policymakers enhance the overall effectiveness of the Capital Financing Program. This will be especially important as Education implements its required outreach plan, which may increase program participation.
Recommendations for Executive Action
We are making the following two recommendations to Education:
As Education develops the required HBCU Capital Financing Program outreach plan, the Executive Director of the program should include in the plan (1) ways to increase outreach to individual HBCUs so that HBCU officials are informed of the program; (2) steps to coordinate directly with state university systems to specifically address state-level challenges to participation and share potential solutions to increase public HBCU participation; and (3) ways to further leverage the designated bonding authority in its efforts. (Recommendation 1)
The Executive Director of the HBCU Capital Financing Program should lead an agency effort to analyze various Capital Financing Program loan modifications, including the effects of the loan deferments authorized in the 2018 Consolidated Appropriations Act as well as other potential modifications, to assess the potential benefits to HBCUs participating in the program, the potential cost of these options to the government, and their effect on the program’s overall financial stability. (Recommendation 2)
Agency Comments and Our Evaluation
We provided a draft of this report to Education for review and comment. Education’s comments are reproduced in appendix V.
In response to our recommendation on actions that Education should include in its required outreach plan, Education identified steps it plans to take to address each of the three components we recommended. First, to increase outreach to individual HBCUs, Education stated it will send letters to presidents and chancellors of eligible HBCUs that are not yet participating, in addition to existing activities. Second, Education stated that it plans to use methods similar to those currently used to reach out to public HBCUs, depending on resources, to coordinate directly with state university systems. Third, Education noted it plans to explore ways to leverage the designated bonding authority in these efforts. Education also stated that an HBCU's ability to use the program depends on its financial strength, and government resources alone will not ensure financial strength among struggling institutions. We agree; however, it is important to make HBCUs aware of the resources available to them, particularly a federal program that was created to help address HBCUs' capital financing challenges.
With regard to our second recommendation on analyzing the potential benefits and costs of offering loan modifications, Education partially agreed with the recommendation. Education commented that it disagreed with the recommendation to the extent that it suggests a modification of loan terms. Our recommendation does not endorse providing loan modifications to colleges. Rather, our recommendation is focused on analyzing the costs and benefits of modifications authorized by law, as well as other potential modifications. Education noted it will continue to analyze loan modifications and develop cost estimates. As we note in the report, however, Education was not able to provide evidence of analysis it conducted on potential loan modifications. We continue to believe that analysis of costs and benefits is needed to determine whether additional loan modifications are necessary or beneficial for the program.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Education, appropriate congressional committees, and other interested parties. In addition, the report will be made available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (617) 788-0534 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.
Appendix I: Objectives, Scope, and Methodology
We examined (1) Historically Black Colleges and Universities' (HBCUs) capital project needs; (2) funding sources HBCUs use to address their capital project needs; and (3) the extent to which the Department of Education (Education) helps HBCUs access and successfully participate in the HBCU Capital Financing Program (Capital Financing Program). In addition to the methodologies discussed below, we reviewed relevant federal laws, regulations, and guidance on the Capital Financing Program and Strengthening HBCU Program. To determine the extent to which Education helps HBCUs access and successfully participate in the Capital Financing Program, we reviewed documentation on program performance and administration and Education documentation from selected HBCUs affected by Hurricanes Katrina and Rita that received loan modifications in 2013. We assessed Education's communication to states and HBCUs against federal internal control standards on communicating quality information to key stakeholders. We reviewed Education's coordination efforts against best practices for coordinating with relevant stakeholders and reviewed Education's strategic plan, which prioritizes coordinating with external stakeholders to achieve its goals of supporting educational institutions and increasing college access. We also assessed Education's actions to help HBCUs experiencing financial challenges successfully participate in the program against federal internal control standards, which state that agency management should communicate key information needed to achieve its objectives; plan for significant changes, including economic changes; analyze the effects of such changes; and respond appropriately.
Survey of Historically Black Colleges and Universities and Review of Capital Plans
To address all three objectives, we conducted a web-based survey of accredited HBCUs in the United States (including the U.S. Virgin Islands) in June through August 2017. To identify the list of HBCUs, we ran a query using Education's Integrated Postsecondary Education Data System (IPEDS) for colleges that were designated as an HBCU in IPEDS and participated in Title IV, and were therefore accredited. IPEDS uses Section 322(a) of the Higher Education Act of 1965, as amended, to define an HBCU as "any historically Black college or university that was established prior to 1964, whose principal mission was, and is, the education of Black Americans, and that is accredited by a nationally recognized accrediting agency or association determined by the Secretary of Education to be a reliable authority as to the quality of training offered or is, according to such an agency or association, making reasonable progress toward accreditation." Additionally, any branch campus of a southern institution of higher education that prior to September 30, 1986, received a Strengthening HBCUs Grant and was formally recognized by the National Center for Education Statistics as a Historically Black College or University is also considered an eligible institution. All 101 colleges identified as HBCUs in IPEDS were also identified as participating in Title IV.
We addressed our survey to senior leadership—presidents and chief financial officers—at HBCUs because capital planning and financing generally fall under their purview. We obtained a list of contact information for presidents and chief financial officers from Education for some participating HBCUs. In cases where contact information was not available, current, or correct, we identified appropriate contact information by reviewing HBCUs’ websites or by following up with the president’s office. Our survey included questions on capital project needs (i.e., repair or replacement) and plans, funding sources HBCUs use to address those needs, and HBCU experiences with Education’s Capital Financing Program and Strengthening HBCU Program. We also asked HBCU officials to provide a copy of their master plans to supplement their survey responses, and we reviewed those plans.
To enhance data quality and to minimize nonsampling errors, we employed recognized survey design practices in the development of the survey and in the collection, processing, and analysis of the survey data. To develop our survey questions, we interviewed Education officials, HBCU administrators, higher education facilities experts, and HBCU organization officers. Additionally, we pretested the survey with five HBCUs, over the phone, to standardize survey language and to reduce variability in responses that should be qualitatively the same. In some cases, we used the results of our pretests to change the wording of questions or added clarifying examples based on feedback. We chose the five pretest HBCUs to include representation across the major subgroups of responding HBCUs: private non-profit (private) and public HBCUs, 2-year and 4-year colleges, and participants and non-participants of the Capital Financing Program. We also reviewed examples of master plans and facility assessment guides from higher education associations to help frame our survey questions. For example, we reviewed public and private HBCU capital plans to understand the type of information they collect, methodologies for assessing their capital project needs, and how they prioritize their needs. Furthermore, we consulted higher education facilities associations' definitions on key terms and facility indicators. Facilities experts from a higher education association indicated that master plans can change over time depending on an HBCU's emerging capital project needs and funding availability.
To increase the survey response rate, we implemented an outreach plan to engage key HBCU officials. When we completed the final survey questions and format, we sent an email announcement of the survey in June 2017 to key HBCU officials—presidents, chief financial officers, Strengthening HBCU Program coordinators, and facilities managers. They were notified that the survey was available online and were given unique usernames and passwords. To reduce nonresponse, we followed up by email and by phone with HBCUs that had not responded to the survey to encourage them to complete it. We received responses from 79 of 101 HBCUs—38 of 51 private and 41 of 50 public HBCUs, achieving a 78 percent response rate. As this was not designed as a sample survey, we make no claims about the generalizability of the results. However, the 79 responding HBCUs represent a substantial portion of the HBCU population. We received master plans from 20 HBCUs.
We reviewed the data for missing or ambiguous responses and followed up with HBCUs when necessary to clarify their responses. In some cases, we updated responses after following up with the survey respondent. For example, as a part of our reliability check, we followed up with HBCUs whose answers were extreme outliers on reporting dollar values for their deferred maintenance. In three cases, separate from deferred maintenance, HBCUs corrected their answers, and we updated the survey results accordingly. To analyze the survey, we calculated descriptive statistics and reviewed open-ended responses to identify themes. We also reviewed select HBCUs’ master plans to supplement survey responses.
Education Data
HBCU Capital Financing Participation Data
We analyzed Capital Financing Program loan data from Education and the designated bonding authority to better understand participation in the program. Specifically, we reviewed data from 1996 to 2017, which included participating HBCUs with sector information (public and private); loans each HBCU received; original loan amount; and status of each loan (paid off or in progress). We used the data to determine the total number of participating HBCUs by sector and total value of loans provided.
Additionally, we gathered information from Education's Capital Financing Program website to understand how HBCUs used their loans from 1996 to 2016. The website includes information on the purpose of each loan. Based on the wording of the purpose, we developed the following categories: refinance, deferred maintenance, repair and renovation, alteration, and new construction. For the purpose of reporting, we combined deferred maintenance, repair, renovation, and alteration into a deferred maintenance and repair category. For instances where HBCUs listed a similar or related purpose, we used professional judgment to categorize it. The categorization was conducted by one analyst and then independently confirmed by a second analyst. Based on our review of Education's data, review of loan contracts, and interviews with relevant Education and designated bonding authority officials, we found the HBCU Capital Financing participation data to be sufficiently reliable for the purpose of describing participation and use of the program.
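As a rough illustration of this kind of coding exercise, the sketch below applies keyword rules to loan-purpose text as a first pass. The keywords, category labels, and fallback value are our assumptions for illustration only; the actual categorization relied on analysts' professional judgment with independent confirmation.

```python
# Illustrative sketch only: keyword-based first-pass coding of loan-purpose
# text. Keywords and category names are assumptions for illustration; the
# actual categorization was done and independently confirmed by analysts.

CATEGORIES = {
    "refinance": ("refinanc", "refund"),
    "deferred maintenance and repair": (
        "deferred maintenance", "repair", "renovat", "alteration",
    ),
    "new construction": ("construct", "new building"),
}

def categorize(purpose: str) -> str:
    text = purpose.lower()
    for category, keywords in CATEGORIES.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "uncategorized"  # flagged for analyst review

print(categorize("Refinance outstanding bonds"))    # -> refinance
print(categorize("Renovation of residence halls"))  # -> deferred maintenance and repair
```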
Integrated Postsecondary Education Data System (IPEDS)
To provide context on challenges HBCUs face financing capital projects identified through interviews with officials from Education, HBCUs, HBCU organizations, other stakeholders, and through our survey, we analyzed data from IPEDS from the 2015-16 school year, the most recent data available at the time of our review. We assessed the reliability of the data by reviewing related documentation and interviewing officials responsible for maintaining data in the system, and found the data to be reliable for our purposes. We examined HBCUs' institutional, student, and financial characteristics and compared those characteristics with a matched set of similar non-HBCUs. These characteristics include information on the colleges' charges for tuition and fees; the percentage of students who receive financial aid overall, and Pell Grants specifically; information on key revenue streams such as tuition and fees, private grants and contracts, and government funding; and data on the college's endowment. Colleges report financial information to IPEDS, such as revenue, using different accounting standards: public colleges generally use standards issued by the Governmental Accounting Standards Board, and private colleges use standards issued by the Financial Accounting Standards Board. Due to variation in how colleges report some revenue data under these two different accounting standards, we excluded from our analysis one public HBCU that used standards issued by the Financial Accounting Standards Board, and analyzed the remaining 100 HBCUs.
We matched HBCUs and non-HBCUs on the following characteristics and categories:

- Size (enrollment): Under 1,000; 1,000-4,999; 5,000-9,999; 10,000-19,999; 20,000 and above
- Sector: Public 4-year; public 2-year; private 4-year; private 2-year
- Highest degree offered: Any degree prior to a 4-year Bachelor's degree; a 4-year Bachelor's degree; any degree following a 4-year Bachelor's degree
- Location (HBCU state or Census division): States with HBCUs or Census divisions (Pacific, Mountain, West North Central, East North Central, Middle Atlantic, New England, South Atlantic, East South Central, and West South Central)
Using a multi-stage approach to create matched sets of HBCUs and non-HBCUs, we first identified non-HBCUs that matched the HBCU using the institution's size, sector, and highest degree offered. We then constrained the set of non-HBCUs to those within the same state as respective HBCUs. Each matched set may contain multiple HBCUs and/or multiple non-HBCUs. If none of the non-HBCUs identified using institution size, sector, and highest degree offered lay within the same state as the HBCUs, we used Census-based divisions to create the matched set of HBCUs and non-HBCUs.
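A minimal sketch of this multi-stage matching logic appears below; the dictionary field names are our own illustrative labels, not IPEDS variable names.

```python
# Minimal sketch of the multi-stage matching described above. Field names
# (size_band, sector, highest_degree, state, census_division) are our own
# illustrative labels, not IPEDS variable names.

def matched_set(hbcu: dict, non_hbcus: list) -> list:
    # Stage 1: exact match on size band, sector, and highest degree offered.
    key = (hbcu["size_band"], hbcu["sector"], hbcu["highest_degree"])
    candidates = [
        c for c in non_hbcus
        if (c["size_band"], c["sector"], c["highest_degree"]) == key
    ]
    # Stage 2: prefer non-HBCUs in the same state...
    in_state = [c for c in candidates if c["state"] == hbcu["state"]]
    if in_state:
        return in_state
    # ...otherwise fall back to the same Census division.
    return [c for c in candidates
            if c["census_division"] == hbcu["census_division"]]
```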
Table 5 summarizes the number of institutions within each matched set. Seventy-three of the 100 HBCUs were matched using state, while 27 were matched using Census-based divisions.
We conducted this matched analysis because an unmatched analysis of the 100 HBCUs and all 3,529 non-HBCUs is potentially vulnerable to spurious differences in outcomes between HBCUs and non-HBCUs that arise from an imbalance of key factors underlying these two types of institutions. For example, public 2-year institutions make up a smaller proportion of HBCUs compared to non-HBCUs (10 and 28.6 percent, respectively), while public 4-year institutions make up a larger proportion of HBCUs compared to non-HBCUs (39 and 19.6 percent, respectively). This imbalance could lead to differences in outcomes arising from characteristics inherent in the type of institution, not a comparison of HBCUs to non-HBCUs. Matching HBCUs to non-HBCUs would lead to a similar underlying distribution of key factors, which improves the comparability of HBCUs and non-HBCUs.
We used the matched sets to compare HBCUs to non-HBCUs on student financial aid and financial outcomes. For each of these variables and across the matched sets, we estimated descriptive statistics (mean, median, range) for HBCUs and non-HBCUs. However, in order to compare HBCUs to non-HBCUs, we accounted for similarities within each matched set. The varying number of HBCUs and non-HBCUs within each matched set required an analysis which is, in principle, an extension of a paired t-test. In this analysis, differences and correlations within each matched set are accounted for when estimating the overall difference between HBCUs and non-HBCUs. More specifically, we performed a linear mixed effects model with the basic form:

y_ij ~ β·HBCU_ij + b_ik·Cluster_ij + σ_ij, for the jth institution in the ith cluster

b_ik ~ N(0, η), for the kth institution in the ith cluster

where y is the outcome variable of interest; β is the parameter of interest, the fixed-effect coefficient that quantifies the overall difference between HBCUs and non-HBCUs; σ is the residual error that is not accounted for by HBCU status or clusters; and b is the random-effect coefficient that accounts for correlations within clusters and estimates the separate and distinct effects of the k institutions nested within each cluster set, and is assumed to have a multivariate normal distribution with a variance of η.
The p-value estimated was used to assess whether there was a statistically significant difference between HBCUs and non-HBCUs for the outcome variables of interest.
We stratified the matched sample by public and private education sector and used the model above to obtain estimates specific for public and private colleges. This education sector specific analysis was not further stratified by 2- and 4-year college types due to small sample sizes.
In order to further explore differences within public and private colleges, we expanded the model above as follows:

y_ij ~ β·HBCU_ij + γ·Sector_ij + ε·(HBCU_ij × Sector_ij) + b_ik·Cluster_ij + σ_ij, for the jth institution in the ith cluster

where the parameters described above remain the same; γ is the difference between public and private colleges, after adjusting for being an HBCU; and ε is the difference within difference, assessing whether the HBCU–non-HBCU difference within public colleges is different from the HBCU–non-HBCU difference within private colleges.
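For readers who want to see how such models might be fit in practice, the sketch below uses Python's statsmodels package with synthetic stand-in data; the column names and data are our assumptions for illustration, not GAO's actual analysis files.

```python
# Illustrative sketch with synthetic stand-in data (not GAO's analysis files):
# fitting random-intercept mixed models of the form described above using
# statsmodels, where each matched set is a cluster.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "cluster": rng.integers(0, 60, n),           # matched-set identifier
    "hbcu": rng.integers(0, 2, n),               # 1 = HBCU, 0 = non-HBCU
    "sector": rng.choice(["public", "private"], n),
})
df["outcome"] = 1.5 * df["hbcu"] + rng.normal(0, 1, n)  # synthetic outcome

# Base model: the hbcu coefficient is the overall HBCU/non-HBCU difference.
base = smf.mixedlm("outcome ~ hbcu", data=df, groups=df["cluster"]).fit()
print(base.summary())

# Expanded model: hbcu * sector adds the interaction term, whose coefficient
# is the difference-in-differences across public and private colleges.
expanded = smf.mixedlm("outcome ~ hbcu * sector", data=df,
                       groups=df["cluster"]).fit()
print(expanded.params)
```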
Wilcoxon Test for Clustered Data

The linear mixed effects model above assumes that data are normally distributed (i.e., follow a bell-shaped curve). To assess whether this assumption holds, we also performed a Wilcoxon test extended for clustered data. The Wilcoxon test ranks values and is free of distributional assumptions; however, the standard version assumes that all data are independent (i.e., not correlated), which is why we used the clustered extension. Overall consistency between the tests of significance from the linear mixed effects model and the Wilcoxon tests indicates that the model assumptions hold.
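For reference only, the standard (unclustered) rank-sum form of this test is available in scipy, as sketched below with synthetic data. To our knowledge the clustered extension described above is not part of scipy, so this baseline ignores within-cluster correlation and is shown purely as a point of comparison.

```python
# Reference sketch only: scipy's standard Wilcoxon rank-sum test. This is the
# unclustered version; the clustered extension described above is not part of
# scipy, so this baseline ignores within-cluster correlation.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
hbcu_vals = rng.normal(1.0, 1.0, 50)       # synthetic outcome values
non_hbcu_vals = rng.normal(0.0, 1.0, 50)

stat, p_value = ranksums(hbcu_vals, non_hbcu_vals)
print(f"rank-sum statistic = {stat:.2f}, p = {p_value:.4f}")
```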
Strengthening HBCU Annual Reports
To describe the extent to which HBCUs used the Strengthening HBCU Program to finance capital projects, we analyzed annual reports submitted by participating HBCUs for the 2016 grant year. Participating HBCUs submit annual performance reports which include information on how the funds were used and the amount spent on each activity, among other information. The reports also include information on whether the HBCUs experienced leadership turnover in that reporting year. Because colleges submit a report for each type of Strengthening HBCU funding they receive or to carry over funding from the previous year, each college could have submitted up to three reports in 2016. In total, we reviewed 236 reports for 98 grant recipients. We also used these reports to identify leadership turnover at HBCUs.
Interviews of HBCU Stakeholders
To address all three objectives, we conducted over 40 interviews with HBCU stakeholders and colleges to learn about HBCU capital project needs (i.e., repair, renovation, and new construction of buildings); challenges HBCUs face accessing and securing funding, particularly through Education’s Capital Financing Program; and steps Education has taken, if any, to help HBCUs better access and successfully participate in their programs. We conducted the following interviews:
Education: We interviewed senior officials at Education to learn more about HBCUs’ access to and successful participation in the Capital Financing Program and participation in the Strengthening HBCU Program.
Designated Bonding Authority: We interviewed officials at the designated bonding authority, with whom Education contracts to help administer the Capital Financing Program, to learn more about HBCUs’ access to and successful participation in the Capital Financing Program.
HBCU officials: We interviewed senior officials such as presidents, chief financial officers, and facilities managers from 15 HBCUs to learn more about the state of their capital project needs and challenges they face accessing and securing funding, particularly through the Capital Financing Program and Strengthening HBCU Program. We selected HBCUs that included different sectors (public and private), varying enrollments and state locations, and a mix of participation in the Capital Financing Program.
State university system officials: We interviewed officials from four state university systems in states where public HBCUs did not participate and that were identified by Education as having state-level challenges accessing the program (North Carolina, Florida, Georgia, and Mississippi).
HBCU organizations: We interviewed officials at the United Negro College Fund, which represents private HBCUs; and the Thurgood Marshall College Fund, which represents public and publicly supported HBCUs. Both organizations are members of Education's Capital Financing Program Advisory Board. We consulted with officials from both organizations on different mechanisms that could help borrowers successfully participate in the Capital Financing Program.
Higher education facilities experts: We interviewed higher education facilities experts at the National Association of College and University Business Officers, APPA: Leadership in Educational Facilities, and Sightlines—a higher education facilities consultant—to learn about industry best practices in identifying and addressing capital project needs and what differences, if any, exist for capital funding between HBCUs and non-HBCUs.
Financial experts: We interviewed officials at Moody’s, Standard & Poor’s (S&P), the Municipal Securities Rulemaking Board (MSRB), and a financial consulting group to learn more about the municipal bond market, how colleges are rated, and how access and successful participation in the market differs between HBCUs and non-HBCUs.
Other stakeholders: We interviewed other stakeholders, such as the Association of Public and Land Grant Universities (APLU), which represents HBCU public land-grant universities; the Kresge Foundation, which has provided HBCUs with funding for capital projects; and researchers at the University of Pennsylvania's Center for Minority Serving Institutions and the authors of a study on HBCU participation in the bond market, "What's in a (school) name? Racial discrimination in higher education bond markets."
Site Visits
We visited nine HBCUs across three states—Alabama, Louisiana, and North Carolina—to interview senior HBCU officials to learn about their capital project needs, to tour their facilities, and to learn more about the benefits and challenges the HBCUs faced in accessing funding and participating in Education's two key programs. We selected our nine site visit HBCUs to obtain a mix of sector (public and private), enrollment size, participation in Education's programs, and the existence of state-level laws or policies that have created challenges to participating in the Capital Financing Program. We also chose to visit Louisiana to learn more about the loans HBCUs received after Hurricanes Katrina and Rita and the colleges' recovery efforts. During our site visits, we met with senior leadership—presidents, chief financial officers, facilities managers, Strengthening HBCU grant coordinators—because they generally make decisions on capital project planning. While we did not inspect or evaluate the state of these colleges' buildings, HBCU officials explained their capital project needs in detail. In particular, we toured campuses to better understand their capital project needs and the extent to which Education's two key programs have helped address those needs.
Appendix II: Additional Survey Results on Capital Project Needs and Funding for Historically Black Colleges and Universities
We received responses from 79 of 101 Historically Black Colleges and Universities (HBCUs): 38 of 51 private non-profit (private) and 41 of 50 public HBCUs. By survey design, not all respondents reported information for each question. As a result, the denominator (number of survey respondents for a particular question) may change. This appendix presents selected survey responses from HBCUs and calculations made by GAO based on selected responses as a snapshot of capital project needs for HBCUs.
Capital Project Needs: Condition of Building Space for Responding HBCUs
Survey respondents reported information on their institution’s real property portfolio, historical building space, and the condition of their building space.
Capital Project Needs: Deferred Maintenance Backlog for Responding HBCUs
Survey respondents provided information on their deferred maintenance backlogs—repair put off to a later date.
The Federal Accounting Standards Advisory Board defines deferred maintenance as maintenance that was not performed when it should have been or was scheduled to be and which was put off or delayed for a future period. Activities include preventive maintenance; replacement of parts, systems, or components; and other activities needed to preserve or maintain the asset. Maintenance and repairs exclude activities directed towards expanding the capacity of an asset or otherwise upgrading it to serve needs different from, or significantly greater than, its current use.
Capital Projects: Top 5 Capital Projects for Next 5 to 10 Years for Responding HBCUs
Survey respondents provided information on their documented top 5 capital project needs over the next 10 years. Survey respondents provided information on the type of capital project (e.g., repairs, renovations and alterations, new buildings or facilities) and purpose of the project (e.g., academic, administrative, athletics, etc.).
Capital Project Funding: Funding Sources to Address Capital Project Needs for Responding HBCUs
Survey respondents provided information on funding sources they use to address their capital project needs and the percentage of funding from that source.
HBCU Capital Financing Program: Perspectives on Participation by Responding HBCUs
Survey respondents provided information on their participation in the HBCU Capital Financing Program. We asked these respondents questions about the type of projects the program funds, reasons for pursuing this funding, and challenges they face in participating in the program.
Strengthening HBCU Program: Capital Projects for Responding HBCUs
Survey respondents provided information on their participation in the Strengthening HBCU Program. We asked about why they participate and how the program supports capital project needs.
Appendix III: Select Institutional, Student, and Financial Data on Historically Black Colleges and Universities (HBCUs)
Using a multi-stage matching technique, we created a matched set of non-HBCUs for comparison purposes. Using data from the Department of Education’s Integrated Postsecondary Education Data System (IPEDS) for the 2015-16 school year, the most recent data available, we matched accredited HBCUs and non-HBCUs on four key characteristics: sector (i.e., public or private non-profit (private)), highest degree offered, size (enrollment), and location. For each of the 100 HBCUs, we established respective matched sets that included a total of 382 non-HBCUs. For more information about our methodology, see appendix I.
Appendix IV: Location of Historically Black Colleges and Universities (HBCUs) and Their Sector (Public and Private Non-profit)
Appendix V: Comments from the Department of Education
Appendix VI: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, individuals making key contributions to this report were Nyree Ryder Tee, Assistant Director; Rachel Beers, Analyst-in-Charge; Grace Cho; Kris Nguyen; and Manuel Antonio Valverde. In addition, key support was provided by Michael Armes, Susan Aschoff, Allison Bawden, Deborah Bland, Marcia Carlsen, Gina Hoover, DuEwa Kamara, John Karikari, Risto Laboski, Eunice LaLanne, Won Lee, Sheila McCoy, Jean McSween, Jeffrey G. Miller, John Mingus, Mimi Nguyen, Anna Maria Ortiz, Christopher Ross, Benjamin Sinoff, and Karen Tremba. | Why GAO Did This Study
HBCUs play a prominent role in our nation's higher education system. For example, about one-third of African-Americans receiving a doctorate in science, technology, engineering, or mathematics received undergraduate degrees from HBCUs. To help HBCUs facing challenges accessing funding for capital projects, in 1992, federal law created the HBCU Capital Financing Program, administered by Education, to provide HBCUs with access to low-cost loans. GAO was asked to review the program.
This report examines HBCUs' capital project needs and their funding sources, and Education's efforts to help HBCUs access and participate in the HBCU Capital Financing Program. GAO surveyed all 101 accredited HBCUs and 79 responded, representing a substantial, but nongeneralizable, portion of HBCUs. GAO analyzed the most recent program participation data (1996-2017) and finance data (2015-16 school year); reviewed available HBCU master plans; visited nine HBCUs of different sizes and sectors (public and private); and interviewed Education officials and other stakeholders.
What GAO Found
Historically Black Colleges and Universities (HBCUs), stakeholders, and planning documents identified extensive and diverse capital project needs at HBCUs, and GAO found HBCUs rely on a few funding sources—such as state appropriations and tuition and fees—to address those needs. HBCUs responding to GAO's survey reported that 46 percent of their building space, on average, needs repair or replacement. Based on a review of master plans—which assess the condition of HBCU facilities—and visits to nine HBCUs, GAO identified significant capital project needs in the areas of deferred maintenance, facilities modernization, and preservation of historic buildings. The Department of Education's (Education) HBCU Capital Financing Program has provided access to needed funding for some HBCUs and has helped modernize their facilities to improve student recruitment. However, according to Education data, fewer than half of HBCUs have used the program, even though it was specifically designed to help them address capital project needs (see figure).
Education has undertaken several efforts to help HBCUs access and participate in the HBCU Capital Financing Program. For example, Education conducts outreach by attending conferences. However, some HBCUs in GAO's survey and interviews were unaware of the program. Moreover, public HBCUs in four states reported facing participation challenges due to state laws or policies that conflict with program requirements. For example, participants are required to provide collateral, but public HBCUs in two states reported they cannot use state property for that purpose. In March 2018, a federal law was enacted requiring Education to develop an outreach plan to improve program participation. An outreach plan that includes direct outreach to individual HBCUs and states could help address these issues and increase participation. Without direct outreach, HBCUs may continue to face participation challenges. In addition, two HBCUs recently defaulted on their program loans, and 29 percent of loan payments were delinquent in 2017. Education modified a few loans in 2013 and was recently authorized to offer loan deferment, but it has no plans to analyze the potential benefits to HBCUs, and the costs to the program, of offering such modifications in the future. Until Education conducts such analyses, policymakers will lack key information on potential options to assist HBCUs.
What GAO Recommends
GAO recommends Education (1) include direct outreach to individual HBCUs and steps to address participation challenges for some public HBCUs in its outreach plan, and (2) analyze the potential benefits and costs of offering loan modifications in the program. Education outlined plans to address the first recommendation, and partially agreed with the second. GAO continues to believe both recommendations are warranted. |
Background
DOD Public Water Systems
DOD has two types of public water systems that provide drinking water to people who live and work on military installations. The first type provides drinking water that has been treated by DOD. The second type provides water treated by a private company or a local utility, which we refer to as "non-DOD-treated" drinking water. Drinking water systems vary by size and other factors, but they typically include a supply source, a treatment facility, and a distribution system. A water system's supply source may be a reservoir, aquifer, well, or a combination of these sources. The treatment process for surface water generally uses sedimentation, filtration, and other processes to remove impurities and harmful agents, and disinfection processes such as chlorination to eliminate biological contaminants. Distribution systems consist of water towers, piping grids, pumps, and other components that deliver treated water from treatment systems to consumers.
Drinking Water Regulations and Administrative Orders
EPA regulates drinking water contaminants under the Safe Drinking Water Act by issuing legally enforceable standards, known as National Primary Drinking Water Regulations, which generally limit the levels of these contaminants in public water systems. EPA has issued such regulations for approximately 90 drinking water contaminants. In accordance with the Safe Drinking Water Act, EPA may authorize a state to have primary enforcement responsibility for drinking water regulations, as long as the state has, among other things, drinking water regulations that are no less stringent than the National Primary Drinking Water Regulations.
The Safe Drinking Water Act also authorizes EPA to take emergency actions necessary to protect public health when informed that a contaminant is present in or is likely to enter a public water system or an underground source of drinking water that may present an imminent and substantial endangerment. For example, EPA may issue administrative orders, which generally include actions to be taken, such as remediating contaminated sources of drinking water or requiring the provision of alternative water supplies. State regulators may also issue orders to public water systems to address contaminated drinking water.
Public water systems, including the DOD public water systems that provide drinking water to about 3 million people living and working on military installations, are required to comply with EPA and state drinking water regulations. EPA divides violations of drinking water regulations into two types: (1) health-based violations and (2) other types of violations that include violations of monitoring, reporting, and public notification requirements. Under the Safe Drinking Water Act, EPA also is required to identify unregulated contaminants that present the greatest health concern, establish a program to monitor drinking water for unregulated contaminants, and decide whether or not to regulate at least five such contaminants every 5 years. EPA has not regulated any new contaminants using this process since 1996.
DOD’s environmental compliance policy states that ASD (EI&E) is responsible for providing guidance, oversight, advocacy, and representation for environmental compliance programs—to include overseeing the military departments’ compliance with health-based drinking water regulations at DOD public water systems. The policy directs the military departments to annually report to ASD (EI&E) the total population receiving water from both “regulated” and “other” DOD public water systems—referred to in this report as DOD public water systems that provide DOD- and non-DOD-treated drinking water, respectively— that did and did not attain all Safe Drinking Water Act health-based drinking water standards. The policy also requires the military departments to report information regarding each instance health-based drinking water standards were not attained during the reporting period, to include the name and location of the military installation; the nature of the issue (e.g., the contaminant type); the DOD population affected; the duration of the issue; the corrective actions taken or planned (e.g., flushing the system, resampling the water, or implementing system upgrades); and the estimated date for achieving the standard.
EPA Health Advisories
In addition to issuing drinking water regulations, EPA may also publish drinking water health advisories. In contrast to drinking water regulations, health advisories are nonenforceable. Drinking water health advisories provide technical guidance on health effects, analytical methodologies, and treatment technologies. These advisories identify the amounts of contaminants that can be present in drinking water—"health advisory levels"—at or below which adverse health effects are not anticipated to occur over specific exposure durations, such as 1 day, 10 days, several years, or a lifetime. EPA issues provisional health advisories to provide information in response to an urgent or rapidly developing situation. DOD's list of emerging contaminants includes 11 contaminants, including PFOS, PFOA, and perchlorate, for which EPA has issued a drinking water health advisory. Specifically,
PFOS. PFOS is part of a larger group of fluorinated organic chemicals that have been incorporated into an array of consumer products (i.e., to make some more resistant to stains, grease, and water) and also in firefighting foam used by DOD and civilian airports. According to EPA, the major manufacturer of PFOS in the United States voluntarily agreed to phase out production of the chemical in 2002. According to EPA’s health advisory, exposure to PFOS may remain possible due to legacy uses, existing and legacy use in imported goods, and the chemical’s “extremely high persistence” in the environment. According to the EPA, exposure to PFOS may result in adverse health effects, such as fetal developmental effects during pregnancy or to breastfed infants, cancer, liver damage, immune effects, thyroid effects, and other effects. See table 1 for details of the EPA provisional health advisory that was issued in 2009 and the lifetime health advisory that was issued in 2016, which superseded the provisional health advisory.
PFOA. PFOA is a fluorinated organic chemical that has been used in generally the same products as PFOS, including firefighting foam used by DOD and civilian airports. According to EPA, PFOA was voluntarily phased out by eight major companies in the manufacturing of their products at the end of 2015. According to the EPA, adverse health effects from exposure to PFOA are similar to those for PFOS. See table 1 for details of the EPA provisional health advisory that was issued in 2009 and the lifetime health advisory that was issued in 2016, which superseded the provisional health advisory.
Perchlorate. Perchlorate is commonly used in solid propellants, fireworks, matches, signal flares, and some fertilizers, and has been used by DOD for rocket fuel and ammunition. EPA published an interim health advisory for perchlorate in 2008; the interim health advisory level was set at 15 parts per billion. According to the health advisory, perchlorate can disrupt the functions of the thyroid gland.
DOD-Identified Emerging Contaminants
In 2009, DOD issued a policy on the identification, assessment, and risk management of emerging contaminants that have the potential to impact DOD. According to that policy, chemicals and materials used or planned for use by DOD that meet the definition of an emerging contaminant should be identified as early as possible. The policy further states that DOD is to assess and, when appropriate, take action to reduce risks posed by its emerging contaminants to people; the environment; and DOD missions, programs, and resources. Where necessary, DOD is to perform sampling, conduct site-specific risk assessments, and take response actions for emerging contaminants released from DOD facilities, in accordance with relevant statutes.
According to the DOD policy on emerging contaminants, ASD (EI&E) is to develop and maintain a list of emerging contaminants with potential or probable high risk to the department’s personnel and functions. As of April 2017, DOD’s list of emerging contaminants comprised 49 chemicals or substances. According to our analysis of EPA documents, DOD’s list includes 21 contaminants that can be found in drinking water. Of these 21 contaminants, 10 contaminants have been regulated by EPA under the Safe Drinking Water Act, and 11 contaminants are currently unregulated but have an EPA-issued drinking water health advisory. The other 28 DOD-identified emerging contaminants do not have EPA drinking water regulations or health advisories. Appendix II provides more information on the drinking water regulatory status of DOD-identified emerging contaminants.
DOD Has Not Internally Reported All Data on Compliance with Drinking Water Regulations or Used Available Data to Evaluate Differences between Its Drinking Water Systems
For the years we reviewed—fiscal years 2013 through 2015—the military departments annually reported information internally to ASD (EI&E) on compliance with EPA and state health-based drinking water regulations, and that information indicates that drinking water quality at DOD public water systems was similar to that of other systems in the United States. However, not all violations of health-based regulations were reported to ASD (EI&E) during this time frame, as required by DOD policy. The military departments reported that a total of 77 military installations had at least one violation at some point from fiscal year 2013 through fiscal year 2015, but we found that at least 16 additional installations had violations that were reported to EPA but were not internally reported to ASD (EI&E). DOD also has not used available compliance data to identify why DOD public water systems that provide DOD-treated drinking water appear to have more violations of health-based regulations than DOD systems that provide non-DOD-treated drinking water.
Military Departments Have Internally Reported Data on Compliance with Health-Based Drinking Water Regulations, but Have Not Reported All Violations
For the years we reviewed—fiscal years 2013 through 2015—the military departments annually reported information to ASD (EI&E) on compliance with and violations of EPA and state health-based drinking water regulations at the DOD public water systems that provide drinking water to military installations. The military departments’ data for fiscal years 2013 through 2015 indicate that about 92 percent of people who received drinking water from DOD public water systems were served by a system that complied with EPA and state health-based regulations. This is similar to the percentage of people in the United States—also about 92 percent, according to EPA—who received drinking water during that time frame from a community public water system with no health-based violations. The data for that time period also indicate that about 8 percent of people were provided drinking water from a DOD public water system that had at least one violation of a health-based regulation. Health-based violations can be for any length of time during a fiscal year—for example, a violation lasting 1 day is counted the same as a violation lasting for 1 month. Across the 3 fiscal years, the military departments reported that a total of 77 military installations had at least one violation at some point during that time period: 35 in fiscal year 2013, 25 in fiscal year 2014, and 17 in fiscal year 2015. The most common types of contaminants for which the military departments reported violations were coliform and two disinfection byproducts—trihalomethanes and haloacetic acids—which, according to EPA, are among the most common types of contaminants for which health-based drinking water violations occur across the United States.
However, we found that the military departments have not always reported all violations to ASD (EI&E), as required by DOD policy. Based on our review of data in EPA's Safe Drinking Water Information System for fiscal years 2013 through 2015, we found that the military departments did not report violations to ASD (EI&E) for at least 16 installations—9 Air Force installations, 5 Navy installations, and 2 Army installations. According to EPA's database, the total population served by DOD public water systems at these installations is approximately 180,000 people, and most of the violations that went unreported involved coliform and disinfection byproduct contaminants. However, the actual number of people affected by these violations and the contaminants involved—along with other information such as the duration of the contamination and the corrective actions planned or taken—were not included in the military departments' annual reports to ASD (EI&E). These violations were recorded in EPA's system, which indicates that the installations reported the violations to the appropriate state regulatory agencies, which then reported them to EPA's database. However, the violations were not reported to ASD (EI&E), as required by DOD policy.
According to military department officials, violations of health-based drinking water regulations went unreported to ASD (EI&E) due to a lack of clarity in DOD's reporting requirements and misunderstandings of the requirements on the part of installations and the military departments. We found that violations went unreported either by the military installations where they occurred or by the installations' chains of command. Navy officials cited turnover of installation personnel as the reason some violations went unreported, as well as misinterpretations by installation personnel of DOD's reporting requirements. Air Force officials told us that most of their violations went unreported because the Air Force did not interpret them as health-based violations, although DOD policy requires these types of violations to be reported. Army officials told us that, based on their interpretation of DOD's policy, the policy did not require them to report violations at installations where formal, written notification was not received from the state regulatory agency. However, ASD (EI&E) officials stated that all violations of health-based regulations should be reported, whether or not the state provides formal, written notification of the violation. Navy officials also told us that they have not reported violations at some of the Navy's smaller systems that purchase drinking water from non-DOD public water systems, due in part to misinterpretation of DOD's internal reporting requirements. However, Navy officials told us that ASD (EI&E) had instructed them to begin reporting these types of violations in fiscal year 2016, and the Navy is working with ASD (EI&E) and the other military departments to determine whether these types of systems should regularly report health-based violations.
Currently, ASD (EI&E) does not have complete data in accordance with DOD's policy, limiting its ability to conduct oversight and analyze how many people at military installations receive drinking water with health-based violations, what contaminants were involved, the duration of the contamination, or what corrective actions the military departments have planned or taken to address the violation. Standards for Internal Control in the Federal Government states that quality information is needed to achieve an organization's objectives. Those standards also indicate that actions such as improved communication to and additional training for personnel are helpful for an organization to meet its objectives. According to DOD officials, a committee composed of ASD (EI&E) and military department officials began a review in 2016 of DOD's internal reporting requirements for drinking water compliance data. While such a committee could be in a position to make recommendations on clarifying the annual reporting requirements, no documentation on the committee's efforts was yet available at the time of our review, as the committee's work was still in progress. In addition, at present, there are no firm dates for when its work will be completed or when any potential changes would be implemented. Absent actions by ASD (EI&E) to identify and implement any necessary changes to clarify annual reporting requirements in its environmental compliance policy, and absent actions by the military departments to increase understanding at their installations and commands about the requirements, adherence to DOD's environmental compliance policy will remain limited and DOD will lack complete data to conduct oversight of regulatory compliance at its public water systems.
DOD Has Not Used Available Data to Assess Why DOD-Treated Water Appears to Have More Health-Based Violations Than Non-DOD-Treated Drinking Water
DOD has not used available data to assess why DOD public water systems providing DOD-treated drinking water appear to have more violations of health-based drinking water regulations than systems providing non-DOD-treated drinking water. Although we found that not all violations were reported by the military departments to ASD (EI&E), the data that were reported during fiscal years 2013 through 2015 indicated that about 99 percent of the people who received non-DOD-treated drinking water were served by systems with no violations, while about 89 percent of the people who received DOD-treated drinking water were served by systems with no violations.
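The percentages in this comparison are population weighted: each public water system counts in proportion to the people it serves, and a system with any health-based violation during the fiscal year, however brief, counts as a system with a violation. The sketch below shows that calculation with invented system records; it is illustrative only and uses none of DOD's actual data.

```python
# Sketch of a population-weighted compliance rate: the share of people
# served by public water systems with no health-based violations during
# a fiscal year. A system with any violation, however brief, counts as
# a system with a violation. Records below are invented for illustration.

def pct_served_without_violations(systems):
    total = sum(s["population"] for s in systems)
    compliant = sum(s["population"] for s in systems if s["violations"] == 0)
    return 100.0 * compliant / total

dod_treated_systems = [
    {"population": 40_000, "violations": 0},
    {"population": 10_000, "violations": 2},
    {"population": 30_000, "violations": 0},
]
print(f"{pct_served_without_violations(dod_treated_systems):.0f}%")  # 88%
```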
When we asked ASD (EI&E) and military department officials why these differences may exist, they were unable to provide an explanation because they had not used the reported water quality data to identify the reasons why DOD public water systems providing DOD-treated water appear to have more violations than systems providing non-DOD-treated water. Although some officials offered ideas on the reasons for differences in compliance—including the relative expertise of utilities and private companies, versus DOD, in providing drinking water—DOD officials acknowledged that the agency has not evaluated the data to identify specific reasons for why the differences may exist. All public water systems, including DOD public water systems, are required to comply with applicable EPA and state drinking water regulations. According to Standards for Internal Control in the Federal Government, management should establish and operate activities to monitor the internal control system and evaluate the results. Such monitoring should assess the quality of performance over time and promptly resolve any findings. Without reviewing the data reported by the military departments to identify why there appear to be differences in violations between DOD’s two types of public water systems and without identifying and implementing any actions to address any differences, ASD (EI&E) and the military departments may not be able to improve overall compliance with health-based drinking water regulations.
DOD Has Initiated Actions to Address Concerns with Its Firefighting Foam as Well as Elevated Levels of PFOS, PFOA, and Perchlorate in Drinking Water
DOD is taking steps to address health and environmental concerns with its use of firefighting foam that contains PFCs—including PFOS and PFOA—to include restricting the use of foam at its installations and funding research into the development of a PFC-free foam that can meet DOD performance requirements. DOD also has responded to EPA and state orders and initiated additional actions to address elevated levels of PFOS, PFOA, and perchlorate.
DOD Is Taking Steps to Address Health and Environmental Concerns with Firefighting Foam That Contains PFCs
DOD is taking steps to address PFOS- and PFOA-related health and environmental concerns with its use of firefighting foam that contains PFCs. Firefighting foam is used by DOD to put fires out quickly while also ensuring that they do not reignite. This is critical if, for example, there is a fire from a fighter jet on the deck of an aircraft carrier. DOD has outlined performance requirements in its military specification for firefighting foam, which was authored by the Navy’s Naval Sea Systems Command but is approved for use in all of DOD. For example, the military specification states how long it should take for firefighting foam to extinguish a fire—based on the size of the fire and the amount of foam used—and how long the foam should prevent the extinguished fire from reigniting. DOD’s military specification also requires that firefighting foam purchased and used by the department must contain PFCs.
DOD’s steps to address concerns with the use of firefighting foam include restricting the use of existing foams that contain PFCs; testing its current foams to identify the amount of PFCs they contain; and funding research into the future development of PFC-free foam that can meet DOD’s performance and compatibility requirements (see table 2). Some of these steps, such as limiting the use of firefighting foam containing PFCs, are in place. Others, such as determining the specific amount of PFCs in existing firefighting foams or researching potential PFC-free firefighting foams, are in progress with targets, in some cases, but no firm completion dates.
Navy officials stated that they are planning to revise the military specification after they have completed their testing—to be completed in late 2017 or 2018—on the amounts of PFOS, PFOA, and other PFCs found in the firefighting foam currently used by DOD. That revision, according to Navy officials, is intended to set limits for the amount of PFCs that are allowed in firefighting foam. According to DOD, at present there is no PFC-free firefighting foam that meets DOD’s performance and compatibility requirements. As a result, the Navy has no plans to remove the requirement for firefighting foam to contain PFCs at this time. However, if a PFC-free foam is developed in the future that can meet DOD performance and compatibility requirements, Navy officials said that any necessary revisions to the military specification would be made at that time—a process that could take months to complete.
DOD Has Responded to Orders from EPA and a State Regulator and Has Initiated Additional Actions to Address Elevated Levels of PFOS and PFOA in Drinking Water at or near Military Installations
DOD has taken steps to respond to four administrative orders directing the department to address PFOS and PFOA levels that exceeded EPA's health advisory levels for drinking water. One order was issued by the Ohio Environmental Protection Agency at Wright-Patterson Air Force Base in Ohio, and three orders were issued by EPA, directed at the former Pease Air Force Base in New Hampshire; Horsham Air Guard Station in Pennsylvania; and the former Naval Air Warfare Center Warminster in Pennsylvania. Under Section 1431 of the Safe Drinking Water Act, EPA may issue orders necessary to protect human health where a contaminant in a public water system presents an imminent and substantial endangerment. EPA may do so if appropriate state and local authorities have not acted to protect human health. These orders may require, among other things, carrying out cleanup studies, providing alternate water supplies, notifying the public of the emergency, and halting disposal of the contaminants threatening human health. The Ohio Environmental Protection Agency has similar authority.
According to information provided by officials from the Ohio Environmental Protection Agency, EPA, and DOD, DOD has taken steps to respond to the administrative orders. Table 3 provides further details on each order and examples of actions by DOD to address the orders.
In addition to actions specific to these four installations, DOD has initiated other actions to test for, investigate, and mitigate elevated levels of PFOS and PFOA at or near installations across the military departments. Following the release of EPA’s lifetime health advisory for PFOS and PFOA in May 2016, each of the military departments issued guidance directing installations to, among other things, test for PFOS and PFOA in their drinking water and take steps to address drinking water that contained amounts of PFOS and PFOA above the EPA’s lifetime health advisory level. The military departments also directed their installations to identify locations with a known or suspected prior release of PFOS and PFOA and to address any releases that pose a risk to human health— which can include people living outside DOD installations.
As a result of these efforts, DOD has initiated actions to address PFOS and PFOA in drinking water both on military installations and outside military installations. As of March 2017, DOD data indicated that the department was taking steps to address levels of PFOS and PFOA above the EPA’s lifetime health advisory level in drinking water on 11 military installations in the United States, 2 of which we visited during the course of this review (see fig. 1).
According to DOD data, these installations took various corrective actions to mitigate the presence of PFOS and PFOA in the drinking water, including shutting down drinking water wells, providing alternative drinking water, and installing treatment systems. For example, at Eielson Air Force Base in Alaska, the Air Force reported shutting down three of the installation's six drinking water wells and installing a treatment system to remove PFOS and PFOA from the drinking water. At Marine Corps Base Camp Pendleton in California, the Navy reported that a well contaminated with PFOS and PFOA was taken out of service and that the affected reservoir was drained and replaced with water from another source; follow-on testing showed that PFOS and PFOA levels had returned to below the EPA's lifetime health advisory level. At Fort Leavenworth in Kansas, the Army reported that the private company that operates the installation's drinking water system had shut down two wells contaminated with PFOS and PFOA and plans to install a treatment system before returning those wells to service.
Additionally, according to DOD data, as of December 2016 the military departments had identified 391 active and closed installations with known or suspected releases of PFOS and PFOA, and had reported spending almost $200 million on environmental investigations and mitigation actions at or near 263 (or about 67 percent) of those installations. In particular, DOD had initiated mitigation actions, which include installing treatment systems or supplying bottled water, to address PFOS and PFOA in drinking water for people living outside 19 installations—5 of which we visited during the course of this review (see fig. 2).
The following cost data provided by DOD were current as of December 2016, and are supplemented by additional information we obtained during our installation visits.
The Air Force identified 203 installations with known or suspected releases of PFOS and PFOA, spent about $120 million on environmental investigations at those installations, and spent about $33 million on mitigation actions at or near 14 of the 203 installations. For example, the Air Force reported spending over $5 million on environmental investigations and mitigation actions at Peterson Air Force Base in Colorado. During our visit to that installation, officials showed us the sites they are investigating—to include the current (see fig. 3 below) and former fire training areas—to determine the extent to which their prior use of firefighting foam may have contributed to the discovery of PFOS and PFOA in the drinking water of three nearby communities. Additionally, the Air Force has awarded a contract for, among other things, installing treatment systems in those communities. In another example, the Air Force reported spending about $800,000 on environmental investigations at Joint Base Langley-Eustis in Virginia, but nothing yet on mitigation actions. During our visit to this installation, officials told us that they had not taken any mitigation actions because they do not use the installation’s groundwater as a drinking water source; the utility that serves the installation, as well as the nearby city of Newport News, obtains its drinking water primarily from a surface water source, which officials said was approximately 20 miles from the installation.
The Navy identified 127 installations with known or suspected releases of PFOS and PFOA, spent about $20.5 million on environmental investigations at 47 of those installations, and spent about $24 million on mitigation actions at or near 5 of those installations. For example, the Navy reported spending about $15 million on environmental investigations and mitigation actions at the former Naval Air Station Joint Reserve Base Willow Grove in Pennsylvania. During our visit to this installation, officials told us that the Navy is investigating the extent to which PFOS and PFOA on the installation may have contaminated a nearby town’s drinking water. The Navy has agreed to fund installation of treatment systems and connections of private well owners to the town’s drinking water system, among other things. In another example, the Navy reported spending nearly $3 million on environmental investigations and mitigation actions at Naval Auxiliary Landing Field Fentress in Virginia. During our visit to this installation, officials told us that the Navy is providing bottled water to the approximately 20 to 30 personnel who work there and plans to install a treatment system to treat for PFOS and PFOA.
The Army identified 61 installations with known or suspected releases of PFOS and PFOA, spent about $1.6 million on environmental investigations at 13 of those installations, and has not yet begun any mitigation actions at or near the identified installations. For example, the Army reported spending about $26,000 on environmental investigations at Fort Carson in Colorado, but nothing yet on mitigation actions. During our visit to this installation, officials told us that they had found PFOS and PFOA in groundwater near their previous fire training area but that the installation does not use that groundwater as a drinking water source, and state officials told us that it is unlikely that PFOS and PFOA from Fort Carson had affected any nearby drinking water sources.
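The department-level figures in the preceding paragraphs are consistent with the DOD-wide totals cited above. A quick arithmetic check (dollar amounts in millions, as reported in the text; small gaps reflect rounding):

```python
# Arithmetic check that the department-level figures in the preceding
# paragraphs sum to the DOD-wide totals cited above (data as of
# December 2016; dollar amounts in millions).

identified = {"Air Force": 203, "Navy": 127, "Army": 61}
with_spending = {"Air Force": 203, "Navy": 47, "Army": 13}
spending = {
    "Air Force": 120 + 33,  # investigations + mitigation actions
    "Navy": 20.5 + 24,
    "Army": 1.6,            # investigations only; no mitigation actions yet
}

total = sum(identified.values())        # 391 installations
covered = sum(with_spending.values())   # 263 installations
print(total, covered, f"{covered / total:.0%}")  # 391 263 67%
print(sum(spending.values()))           # 199.1, i.e., almost $200 million
```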
According to DOD, it may take several years for the department to determine how much it will cost to clean up PFOS and PFOA contamination at or near its military installations. In January 2017, we reported that DOD had not notified Congress that the costs for environmental cleanup at closed installations will significantly increase due to the high cost of remediating emerging contaminants—including PFOS and PFOA. We also reported that DOD officials had not determined the total costs for cleaning up emerging contaminants at closed installations. We recommended that DOD include in future annual reports to Congress best estimates of the environmental cleanup costs for emerging contaminants as additional information becomes available, and DOD concurred with the recommendation and stated its commitment to do so.
DOD Previously Directed Installations to Test for Perchlorate in Drinking Water
DOD previously directed installations to test for perchlorate in drinking water. Following the EPA’s issuance of an interim drinking water health advisory for perchlorate in 2008, DOD issued policy in April 2009—which superseded similar policy that was issued in January 2006—directing DOD-owned drinking water systems that were testing for inorganic substances to also test for perchlorate. Installations that found perchlorate in their drinking water were to consult with their leadership on appropriate actions to take and to continue testing on a quarterly basis until they determined that perchlorate levels were likely to remain below EPA’s health advisory level, or any applicable federal or state regulation. Citing congressional and regulatory agency concerns related to perchlorate, DOD developed a database for storing the results of perchlorate testing. According to ASD (EI&E), the database was last updated in 2009 and is no longer being used by the department.
ASD (EI&E) officials stated that they are no longer regularly testing drinking water for perchlorate unless there is a state requirement to do so; previous testing indicated that DOD was not a primary source of perchlorate in drinking water and that known releases of perchlorate did not currently pose a threat to drinking water. According to EPA, the agency expects to issue a final drinking water regulation for perchlorate by the end of 2019. ASD (EI&E) officials told us that, once EPA has issued a final regulation, DOD is committed to complying with it.
Conclusions
During the period we reviewed, DOD data indicate that DOD public water systems complied with EPA and state health-based drinking water regulations at a level comparable with other systems in the United States. However, we found that the military departments did not report all violations of these regulations to ASD (EI&E) during that period, which illustrates that DOD’s internal reporting requirements for drinking water data are either not clear in DOD regulations or are not clearly understood by those implementing them. Unless ASD (EI&E) and the military departments act to make any necessary clarifications to and increase understanding of DOD’s annual reporting requirements, ASD (EI&E) may not have complete data to effectively oversee the military departments’ compliance with drinking water regulations. Further, the data indicated that systems providing DOD-treated drinking water had more reported health-based violations than DOD systems providing non-DOD-treated drinking water. However, DOD has not used these data to identify the reasons that these differences may exist. Without using available data to identify why differences in violations appear to exist between DOD’s two types of public water systems, DOD will likely be hampered in its ability to identify what actions, if any, could be taken to address any differences and improve overall compliance with health-based drinking water regulations.
Recommendations for Executive Action
We are making a total of five recommendations to DOD.
The Assistant Secretary of Defense for Energy, Installations, and Environment, in consultation with the Secretaries of the military departments, should identify and implement any necessary changes to DOD’s environmental compliance policy to clarify DOD’s reporting requirements for violations of health-based drinking water regulations. (Recommendation 1)
The Secretary of the Army should identify and implement actions to increase understanding at Army installations and commands about DOD’s reporting requirements for violations of health-based drinking water regulations. These actions may include improved communication to or additional training for personnel. (Recommendation 2)
The Secretary of the Navy should identify and implement actions to increase understanding at Navy installations and commands about DOD’s reporting requirements for violations of health-based drinking water regulations. These actions may include improved communication to or additional training for personnel. (Recommendation 3)
The Secretary of the Air Force should identify and implement actions to increase understanding at Air Force installations and commands about DOD’s reporting requirements for violations of health-based drinking water regulations. These actions may include improved communication to or additional training for personnel. (Recommendation 4)
The Assistant Secretary of Defense for Energy, Installations, and Environment, in consultation with the Secretaries of the military departments, should (a) review reported compliance data to identify the reasons for any differences in the number of violations of health-based drinking water regulations between DOD’s two types of public water systems and (b) identify and implement any actions needed to address the causes of any differences in the number of violations between DOD’s two types of public water systems. (Recommendation 5)
Agency Comments and Our Evaluation
We provided a draft of this report to DOD and EPA for review and comment. In its written comments, reproduced in appendix III, DOD concurred with our recommendations. DOD and EPA also provided technical comments, which we incorporated as appropriate. Based on technical comments from DOD, we revised the title of the report to more clearly specify the actions DOD should take to address the findings in our report.
We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Assistant Secretary of Defense for Energy, Installations, and Environment; the Secretaries of the Army, the Navy, and the Air Force; and the Administrator of EPA. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact us at J. Alfredo Gómez, (202) 512-3841 or [email protected], or Brian J. Lepore, (202) 512-4523 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
Appendix I: Objectives, Scope, and Methodology
Senate Report 114-255 accompanying a bill for the national defense authorization for fiscal year 2017 included a provision for us to review the Department of Defense’s (DOD) efforts to manage contaminants in drinking water. This report examines the extent to which DOD has (1) internally reported data on compliance with health-based drinking water regulations at military installations and used those data to assess compliance at its two types of public water systems and (2) taken actions to address concerns with its firefighting foam containing perfluorinated chemicals (PFCs) and to address elevated levels of perfluorooctanesulfonic acid (PFOS), perfluorooctanoic acid (PFOA), and perchlorate in drinking water at or near military installations.
For objective one, we reviewed DOD's policy on environmental compliance in the United States, which directs the military departments to annually report data to the Assistant Secretary of Defense for Energy, Installations, and Environment (ASD (EI&E)) on compliance with and violations of Environmental Protection Agency (EPA) and state health-based drinking water regulations at military installations. We analyzed data reported by the military departments to ASD (EI&E) on compliance with and violations of health-based drinking water regulations at DOD public water systems located at military installations in the United States for fiscal years 2013 through 2015, the most recent data available at the time of our review. We analyzed the data to identify (1) the number of people served by DOD public water systems that complied with applicable EPA and state health-based drinking water regulations during the fiscal year and (2) the number of people served by DOD public water systems that violated at least one of these regulations sometime during the fiscal year. We performed this analysis for both types of DOD public water systems—those that provide DOD-treated drinking water, and those that provide non-DOD-treated drinking water. We also used the data to identify the military installations where the reported violations occurred; the nature of the violation (including the contaminant involved); and the number of people affected. Next, we collected data from EPA's Safe Drinking Water Information System for all public water systems in the United States. We used DOD-provided public water system identification numbers to identify in the EPA system any violations for health-based drinking water regulations at those DOD systems for fiscal years 2013 through 2015. We then compared the violations found in EPA's data to the data reported by the military departments to ASD (EI&E) to determine the extent to which the military departments were reporting all violations of health-based drinking water regulations to ASD (EI&E).
We also analyzed DOD's data to identify any differences in violations between DOD- and non-DOD-treated drinking water. We evaluated the military departments' reported data and DOD's use of these data to determine compliance with DOD's reporting requirements in the department's environmental compliance instruction and Standards for Internal Control in the Federal Government. According to these standards, quality information is needed to achieve an organization's objectives, management is to monitor performance over time and promptly resolve any findings, and actions such as improved communication to and additional training for personnel are helpful for an organization to meet its objectives. We also discussed our analysis with ASD (EI&E) and military department officials, and discussed possible reasons for why any violations went unreported to ASD (EI&E) and why there may be differences in violations between DOD- and non-DOD-treated drinking water. We assessed the reliability of the DOD and EPA data on violations of health-based drinking water regulations by reviewing relevant documentation, testing the data for obvious errors, and interviewing knowledgeable officials. As we have previously found, EPA's data system may not contain all public water violations as states have under-reported the violations. During this review, we found that some public water system identification numbers for DOD installations could not be matched with EPA's system and, therefore, were excluded from our analysis. As a result, some DOD installation violations may be missing from the data, and we may not have comprehensive violations data for health-based drinking water regulations at DOD installations. Nonetheless, we determined that DOD and EPA data were sufficiently reliable for the purpose of identifying whether any drinking water violations were recorded in EPA's system but not internally reported within DOD, and to indicate possible differences in drinking water violations, as reported by the military departments, between DOD's two types of public water systems.
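In essence, this comparison joins two violation datasets on a shared public water system identifier and flags records present in EPA's Safe Drinking Water Information System but absent from the military departments' internal reports. The sketch below shows the core of such a comparison with invented identifiers and simplified records; the real datasets carry many more fields, and, as noted above, systems whose identification numbers could not be matched would simply drop out of such a comparison.

```python
# Sketch of the comparison described above: join two violation datasets
# on a shared public water system ID and flag violations recorded in
# EPA's Safe Drinking Water Information System but absent from the
# military departments' internal reports. IDs and records are invented.

epa_violations = {("PWS001", 2014), ("PWS002", 2013), ("PWS003", 2015)}
dod_reported = {("PWS001", 2014), ("PWS003", 2015)}

for pws_id, fiscal_year in sorted(epa_violations - dod_reported):
    print(f"{pws_id}: FY{fiscal_year} violation not reported internally")
# PWS002: FY2013 violation not reported internally
```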
For objective two, we reviewed policies issued by the military departments on the use of firefighting foam that contains PFCs. We also reviewed DOD documents related to research into PFC-free firefighting foams that can meet the department’s performance and compatibility requirements, as well as DOD’s military specification document that outlines those requirements. We met with officials from ASD (EI&E) and the military departments to discuss their policies on the use of firefighting foam and actions taken to address concerns with the use of firefighting foam containing PFCs, including the future use of firefighting foam. Additionally, we met with Navy officials responsible for testing existing firefighting foam products and setting the military specifications for firefighting foam use in DOD.
Additionally, we obtained and reviewed four regulatory administrative orders—three from EPA and one from the Ohio Environmental Protection Agency—directing DOD to address elevated levels of PFOS and PFOA contamination in drinking water at or near four active and closed military installations, and reviewed documentation related to DOD’s efforts to address these administrative orders. We also met with officials from Ohio and the EPA regions that issued the orders—EPA Regions 1 and 3—as well as DOD officials who responded to the orders, to discuss DOD’s response to the orders. We reviewed drinking water guidance issued by ASD (EI&E) and the military departments on testing installation drinking water for PFOS and PFOA and responding to known or suspected releases of PFOS and PFOA. We analyzed DOD-provided data on the installations where DOD-conducted testing showed the presence of PFOS and PFOA in drinking water above the EPA’s health advisory level for those contaminants (as of March 2017) and on the costs and actions taken to investigate and mitigate PFOS and PFOA at or near military installations (as of December 2016). We assessed the reliability of the data by examining the data for obvious errors and inconsistencies, comparing the data, where applicable, with other information collected, and by interviewing knowledgeable officials; we found the data to be sufficiently reliable for our purposes of describing what DOD has reported on its actions and costs for responding to PFOS and PFOA.
Additionally, we reviewed DOD policy and our prior work on testing for and responding to perchlorate at military installations. We met with ASD (EI&E) and military department officials to discuss DOD actions to address PFOS, PFOA, and perchlorate. To obtain additional information on DOD actions to address emerging contaminants in drinking water, we conducted site visits to a nongeneralizable sample of seven current and former military installations—at least two installations per military department—that were selected because they were investigating or responding to unregulated DOD-identified emerging contaminants in drinking water; these installations are listed below. We also met with EPA and state regulatory officials to better understand how DOD was responding to administrative orders and addressing PFOS, PFOA, and perchlorate at or near DOD installations. Specifically, we met with officials from selected EPA regions and state regulatory offices that had issued an administrative order for PFOS and PFOA or whose region or state included the installations we visited; those EPA regions and states are listed below. We also compared DOD’s list of emerging contaminants with EPA documentation to determine how many DOD-identified emerging contaminants (1) have been regulated by EPA under the Safe Drinking Water Act or (2) are currently unregulated but have an EPA-issued drinking water health advisory.
We visited or contacted the following offices and locations during our review. Unless otherwise specified, these organizations are located in or near Washington, D.C.
Office of the Secretary of Defense
Office of the Assistant Secretary of Defense for Energy, Installations, and Environment
Office of the Deputy Assistant Secretary of Defense for Environment, Safety, and Occupational Health
Department of the Army
Office of the Assistant Chief of Staff of the Army for Installation Management
U.S. Army Installations Management Command, Fort Sam Houston, Texas
U.S. Army Environmental Command, Fort Sam Houston, Texas
Fort Carson, Colorado
Fort Jackson, South Carolina
Department of the Navy
Office of the Assistant Secretary of the Navy for Energy, Installations, and Environment
Office of the Chief of Naval Operations, Energy and Environmental Readiness Division
Commander, Navy Installations Command
Marine Corps Installations Command
Naval Facilities Engineering Command
Naval Sea Systems Command
Former Naval Air Station Joint Reserve Base Willow Grove, Pennsylvania
Naval Auxiliary Landing Field Fentress, Virginia
Department of the Air Force
Office of the Assistant Secretary of the Air Force for Installations, Environment, and Energy
Air Force Civil Engineer Center, Joint Base San Antonio, Texas
Former Pease Air Force Base, New Hampshire
Joint Base Langley-Eustis, Virginia
Peterson Air Force Base, Colorado
Wright-Patterson Air Force Base, Ohio
Environmental Protection Agency
Office of Research and Development
Office of Land and Emergency Management
Office of Enforcement and Compliance Assurance
EPA Region 1, Boston, Massachusetts
EPA Region 3, Philadelphia, Pennsylvania
EPA Region 4, Atlanta, Georgia
EPA Region 5, Chicago, Illinois
EPA Region 8, Denver, Colorado
EPA Region 9, San Francisco, California
State regulatory agencies
Colorado Department of Public Health and Environment
Ohio Environmental Protection Agency
Pennsylvania Department of Environmental Protection
South Carolina Department of Health and Environmental Control
We conducted this performance audit from June 2016 to October 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Drinking Water Regulatory Status for Department of Defense-Identified Emerging Contaminants
The Department of Defense’s (DOD) list of emerging contaminants includes 21 contaminants that can be found in drinking water: 10 that have been regulated by the Environmental Protection Agency (EPA) under the Safe Drinking Water Act and 11 that are currently unregulated but have an EPA-issued drinking water health advisory. Table 4 shows the regulatory status for each of the 21 contaminants.
Appendix III: Comments from the Department of Defense
Appendix IV: GAO Contacts and Staff Acknowledgments
GAO Contacts
Staff Acknowledgments
In addition to the contacts named above, Maria Storts (Assistant Director), Diane B. Raynes (Assistant Director), Kazue Chinen, Michele Fejfar, Jennifer Gould, Karen Howard, Richard P. Johnson, Mae Jones, Daniel Kuhn, Summer Lingard-Smith, Daniel Longo, Felicia Lopez, Geoffrey Peck, Ophelia Robinson, Jerry Sandau, and Sara Sullivan made key contributions to this report. | Why GAO Did This Study
According to DOD, about 3 million people in the United States receive drinking water from DOD public water systems, which are to comply with EPA and state health-based regulations. EPA and DOD have detected elevated levels of two unregulated, DOD-identified emerging contaminants found in firefighting foam—PFOS and PFOA—in drinking water at or near installations. Perchlorate, an unregulated chemical used by DOD in rocket fuel, can also be found in drinking water.
The Senate Report accompanying a bill for national defense authorization for fiscal year 2017 included a provision for GAO to review DOD management of drinking water contaminants. This report examines the extent to which DOD has (1) internally reported data on compliance with health-based drinking water regulations at military installations and used those data to assess compliance at its two types of public water systems, and (2) taken actions to address concerns with its firefighting foam and elevated levels of PFOS, PFOA, and perchlorate in drinking water at or near military installations. GAO reviewed DOD guidance and EPA drinking water regulations, advisories, and orders; analyzed DOD and EPA drinking water data; and visited seven installations from among those addressing emerging contaminants in drinking water.
What GAO Found
The Department of Defense (DOD) has not internally reported all data on compliance with health-based drinking water regulations or used available data to assess compliance. DOD data for fiscal years 2013-2015 indicate that DOD public water systems complied with Environmental Protection Agency (EPA) and state health-based drinking water regulations at levels comparable with other systems in the United States. However, the military departments did not report all violations within DOD: while violations at 77 installations were reported, GAO found that at least 16 additional installations had violations that went unreported. Until DOD takes steps to increase the clarity and understanding of its internal reporting requirements, it may not have the data it needs to fully oversee compliance. DOD also has not used its data to determine why its two types of systems—one that provides DOD-treated water and another that provides non-DOD-treated water—have different compliance rates. Specifically, DOD's data indicate that about 99 percent of the people who received non-DOD-treated drinking water were served by systems with no violations, while about 89 percent of the people who received DOD-treated drinking water were served by systems with no violations. Absent further analysis of its data, DOD may not be able to improve overall compliance.
DOD has initiated actions to address concerns with its firefighting foam and with elevated levels in drinking water of perfluorooctane sulfonate (PFOS), perfluorooctanoic acid (PFOA), and perchlorate, which are DOD-identified emerging contaminants. PFOS and PFOA can be found in DOD's firefighting foam. DOD has restricted its use of this foam and is funding efforts to develop a new foam that meets DOD performance requirements. Additionally, at 11 military installations (see fig.), DOD has shut down wells, provided alternate water sources, or installed water treatment systems to respond to elevated levels of PFOS and PFOA, at times in response to EPA and state orders.
What GAO Recommends
GAO is making five recommendations to improve DOD's reporting and use of data on compliance with health-based drinking water regulations. DOD concurred with the recommendations. |
Background
U.S. Law Requires State to Convene an ARB after Certain Types of Incidents
Federal law generally requires the Secretary of State to convene an ARB not later than 60 days after the occurrence of an incident that resulted in serious injury, loss of life, or significant destruction of property at, or related to, a U.S. mission abroad unless the Secretary determines the incident clearly involves only causes unrelated to security. This time period can be extended for an additional 60-day period if the Secretary determines that the additional period is necessary for the convening of the board. Whenever the Secretary convenes an ARB, the Secretary shall promptly inform the Chairman of the Committee on Foreign Relations in the Senate and the Speaker of the House of Representatives. Federal law specifies that an ARB will consist of five members appointed by the Secretary of State and one appointed by the Director of National Intelligence. It also states that the ARB shall submit its findings to the Secretary of State. According to State’s FAM, the ARB is a mechanism to foster more effective security of U.S. missions and personnel abroad by ensuring a thorough and independent review of security-related incidents. Through its investigations and recommendations, the ARB seeks to determine accountability and promote and encourage improved security programs and practices.
M/PRI Is Responsible for Conducting the ARB Incident Vetting Process
M/PRI—the central management analysis organization of State’s Under Secretary of State for Management—is responsible for initiating and shepherding the incident vetting process to identify incidents that may warrant an ARB, according to the FAM. The FAM states that M/PRI will begin the ARB incident vetting process once M/PRI becomes aware of an incident abroad that could involve loss of life, injury, or destruction of property. This process includes consultation with the Office of the Legal Adviser (Legal), DS, and other offices as appropriate to evaluate whether the ARB statute criteria apply. If the ARB statute criteria are deemed applicable or if the applicability is questionable, M/PRI is responsible for calling a meeting of State’s ARB Permanent Coordinating Committee. See figure 1 for members of the Permanent Coordinating Committee and other State offices and bureaus involved in responding to the incidents in Cuba. If M/PRI decides the ARB statute criteria are not applicable, M/PRI will notify committee members in writing, providing a summary of the incident and an explanation as to why the criteria do not apply. If any member disagrees, M/PRI will call a Permanent Coordinating Committee meeting. According to the FAM, the committee will review the available facts and recommend to the Secretary of State whether or not to convene an ARB as quickly as possible after an incident occurs. The Secretary of State makes the final decision on whether to convene an ARB.
The U.S. Embassy in Havana Is Supported by Several State Entities
WHA, DS, and MED, among other State entities, support the U.S. Embassy in Havana by providing advice and guidance on policy, security, and other issues.
WHA. Reporting to the Under Secretary of State for Political Affairs, WHA oversees the U.S. Embassy in Havana and is responsible for managing and promoting U.S. interests in the region. Embassy officials, including senior leadership, report to WHA and its Office of the Coordinator for Cuban Affairs through diplomatic cables, email, and phone calls.
DS. Reporting to the Under Secretary of State for Management, DS oversees security at diplomatic posts and is responsible for providing a safe and secure environment for the conduct of U.S. foreign policy. Embassy Regional Security Officers are required to report security incidents through different systems, including diplomatic cables, SPOT Reports, or the Security Incident Management Analysis System, depending on the type of incident. Regional Security Officers are also in regular contact with DS via phone and email, according to State officials.
MED. Reporting to the Under Secretary of State for Management, MED ensures that U.S. government employees and their families who are assigned to diplomatic posts have access to healthcare and advises State management about health issues around the world. The U.S. Embassy in Havana has a medical unit, including U.S. direct-hire and locally hired staff. MED approves requests to medically evacuate U.S. personnel and family members from diplomatic posts.
Other State entities. Other State entities provide support to the U.S. embassy in Havana on specific issues. For example, CMS, within State’s Executive Secretariat, gathers, assesses, and disseminates information to State senior management about events that threaten the security of U.S. missions and their personnel. The Office of Foreign Missions, which reports to the Under Secretary of State for Management, seeks fair treatment for U.S. personnel abroad while ensuring that foreign diplomats based in the United States receive the same treatment that their respective governments provide to U.S. personnel abroad in return.
State’s ARB Policy Does Not Ensure that the Office Responsible Is Made Aware of Incidents That May Meet ARB Criteria, Such as Those That Occurred in Cuba
Although M/PRI is responsible for initiating and leading State’s ARB incident vetting process, State’s ARB policy does not define how M/PRI should become aware of incidents that may involve injury, loss of life, or destruction of property. Regarding Cuba, the U.S. embassy and several State entities responded to incidents that were later associated with various injuries in early 2017. As of June 2018, State officials remained uncertain of the cause or perpetrator of the incidents and injuries. M/PRI officials said they did not know about the incidents in Cuba until August 2017, when the media began to report on the incidents.
State’s ARB Policy Does Not Define How M/PRI Should Become Aware of Incidents That May Involve Injury
Although M/PRI is responsible for initiating and leading the ARB incident vetting process, State’s policies do not define responsibilities for internal communication to M/PRI of incidents that may involve injury, loss of life, or destruction of property. According to the FAM, M/PRI and the Permanent Coordinating Committee are responsible for evaluating whether incidents meet the ARB statute criteria. However, M/PRI can initiate the process only after it is made aware of potentially qualifying incidents, and the FAM does not outline how M/PRI should be notified of these types of incidents or which, if any, State entities are responsible for notifying M/PRI. In contrast, the FAM outlines other specific reporting responsibilities for Regional Security Officers. According to State officials and our analysis, State’s FAM and Foreign Affairs Handbooks do not establish a policy, procedure, or process for internal communication of such incidents to M/PRI. In 2006, the Under Secretary of State for Management issued a cable requiring U.S. diplomatic posts to report potential ARB incidents directly to M/PRI. However, the cable did not identify who at post was responsible for reporting, and instructed posts to report to an individual who is no longer in M/PRI. Moreover, State officials we met with were unaware of the cable.
M/PRI officials said that information about potentially qualifying incidents is not directed to them through State’s established reporting mechanisms, such as diplomatic cables. State’s cable system does not have a caption, channel, or tag that would direct information to M/PRI about incidents that may involve injury, loss of life, or damage to property. State’s Office of the Inspector General previously found deficiencies in State’s internal communication of incidents that may meet ARB criteria. Despite the 2006 cable on potential ARB incident reporting, in 2013, State’s Inspector General found that State had no systematic process ensuring immediate notification of security-related incidents to M/PRI, and that DS did not routinely provide security reports to M/PRI. The Inspector General made an informal recommendation that DS should include M/PRI as an addressee on all security-related incident reports. In 2015, the Inspector General noted that DS, in response to the recommendation, said that such a blanket inclusion of M/PRI on all security-related incident reports would result in M/PRI being inundated with a large number of irrelevant reports.
Because State has no policy that ensures M/PRI becomes aware of incidents that may involve injury, loss of life, or destruction of property, M/PRI officials said they typically become aware of potentially qualifying incidents—such as explosions at diplomatic facilities—when such incidents are discussed internally and widely publicized. M/PRI officials also told us they occasionally became aware of potentially qualifying incidents through informal communication, such as during senior staff meetings with the Under Secretary of State for Management. If M/PRI officials are not aware of incidents, they cannot initiate State’s ARB incident vetting process. This situation puts State at risk of not meeting statutory time frames for convening an ARB and could result in State being unable to improve security programs and practices at other U.S. diplomatic posts, which could affect the response to similar incidents elsewhere.
Standards for Internal Control in the Federal Government call for internal communication to achieve the entity’s objectives and note that management should document responsibilities through policy. The FAM requires internal controls, which includes as an objective that programs are efficiently and effectively carried out in accordance with applicable law and management policy. The FAM also states that the Under Secretary of State for Management is responsible for, among other things, developing and executing management policies; the organization, operations, and assignment of functions within State; and directing and administering worldwide information resources.
The U.S. Embassy in Havana and Several State Entities Responded to Unexplained Incidents in Cuba Associated with Serious Injury to U.S. Personnel
In January 2017, U.S. embassy and State officials began responding to incidents in Cuba that were later associated with various injuries. In June 2018, the Secretary of State noted that the precise nature of the injuries and the cause had not yet been established. According to congressional testimony by State officials, in late 2016, U.S. personnel in Havana first reported incidents, typically involving sounds and resulting in various medical symptoms, to the embassy’s Regional Security Officer and Chief of Mission. Embassy officials reported the incidents to DS and the National Security Council as a new type of harassment in early January 2017, according to State documents. The embassy’s Medical Officer first evaluated a U.S. official related to the incidents on December 30, 2016, and others in January 2017. Starting in late March 2017, the embassy held several meetings with U.S. personnel to share the limited information it had about the incidents, according to State officials. In April 2017, the embassy held Emergency Action Committee meetings regarding the incidents.
CMS communicated with State senior management about the incidents beginning in April 2017. To ensure that State senior management were aware of how the embassy was responding, CMS distributed among various State entities, including M/PRI, one of the embassy’s April 2017 diplomatic cables reporting on an Emergency Action Committee meeting. According to CMS officials, the cable that CMS distributed was unclear about what incidents had occurred and did not include detailed information about the incidents or associated injuries. According to M/PRI officials, M/PRI was on CMS’s distribution list because M/PRI was responsible for monitoring the implementation of a previous ARB recommendation that called for State to review embassy risk management decisions. According to a former M/PRI official, M/PRI did not review these CMS communications for other purposes, including to identify incidents that may meet ARB statute criteria. In addition, in April and May 2017, CMS included multiple cables on the situation in Cuba in its daily Safety Overseas Summary for State senior management.
In response to the incidents, U.S. embassy and WHA officials met with Cuban officials to emphasize to the Cuban government its responsibilities to ensure the safety of foreign diplomats in Cuba, according to testimony by State officials. In mid-February 2017, U.S. officials met with Cuban officials in Havana and Washington, D.C., about the incidents, citing the Vienna Convention requirements to provide for the safety and security of diplomats, according to State officials. Following additional incidents reported in March and April 2017, U.S. officials met again with Cuban officials in Havana and Washington, D.C. In May 2017, State expelled two Cuban diplomats from the United States to underscore the Cuban government’s responsibility to protect U.S. personnel in Cuba, according to testimony by State officials. In September 2017, State ordered the departure from Cuba of non-emergency U.S. embassy personnel and, in October, expelled 15 Cuban diplomats from Washington, D.C., to underscore to Cuba its obligations to protect U.S. personnel, according to testimony by State officials.
According to State officials, by May 2017, the embassy, WHA, DS, and MED were aware of 16 U.S. personnel and family members in Havana who had been injured, although unable to determine the cause. In January 2018, State’s Medical Director testified to Congress that by May 1, 2017, State had determined that several of those individuals had serious injuries. Between February and May 2017, a specialist at the University of Miami evaluated 80 members of the embassy community. MED arranged for the medical evacuations of about 40 U.S. personnel from Cuba to Miami, Florida, for evaluations with the specialist, and the specialist subsequently conducted additional evaluations at the embassy in Havana. According to State testimony to Congress, the specialist identified 16 individuals who had symptoms and medically verifiable clinical findings similar to mild traumatic brain injury. In June 2018, the Secretary of State noted that the precise nature of the injuries and the cause had not yet been established.
M/PRI Became Aware of the Incidents in Cuba after Media Reports
M/PRI officials said they became aware of the incidents in Cuba after media reports in August 2017. According to M/PRI officials, a State official—who previously worked in M/PRI—contacted M/PRI in early August after seeing media reports to inquire whether they were aware of the incidents in Cuba. Although several State entities were aware of the incidents, WHA, DS, and MED did not report them to M/PRI because, according to officials, it was unclear whether the incidents met the criteria for convening an ARB. However, our analysis shows that State’s policies do not instruct State entities to evaluate whether incidents meet the ARB criteria before reporting such incidents to M/PRI. Instead, State’s FAM requires M/PRI to lead the process for evaluating incidents that may involve injury, loss of life, or destruction of property. According to the FAM, M/PRI will call a Permanent Coordinating Committee meeting if the ARB statute criteria apply or if the applicability is questionable. The committee will, as quickly as possible after an incident occurs, review the available facts and recommend to the Secretary whether to convene an ARB. M/PRI initiated State’s incident vetting process in August 2017, as shown in figure 2 below.
As a result of the incidents in Cuba, M/PRI officials told us they realized that they may not be aware of all incidents that may involve injury to U.S. diplomats. In an initial attempt to address this concern, M/PRI officials said they requested that CMS add M/PRI officials to the distribution list for the Safety Overseas Summary to try to increase M/PRI’s awareness of potential incidents. CMS told us that it added M/PRI officials to the distribution list in October 2017.
According to M/PRI officials and a timeline provided by M/PRI, once these officials became aware of the incidents in August 2017, the office began the ARB incident vetting process, as described in the FAM. In August 2017, these officials initially consulted with DS and MED about the incidents. In further discussion with Legal, the officials determined that they did not have sufficient information to determine whether the incidents met the ARB statute criteria. Given the uncertainties surrounding the incidents, in mid-September 2017, they decided to call a meeting of the Permanent Coordinating Committee, which included representatives from M/PRI, WHA, DS, MED, Legal, the Bureau of Intelligence and Research, the Bureau of Counterterrorism, and the Intelligence Community. The committee initially met on September 28, 2017, to review the available facts against the ARB statute criteria, and concluded that it needed additional time to determine whether the ARB statute criteria had been met. On November 28, 2017, the committee met again and recommended to the Secretary of State that an ARB be convened. The Secretary of State concurred with the recommendation on December 11, 2017, and convened the ARB on January 12, 2018. The ARB officially began its work in early February 2018.
Conclusions
An ARB is intended to result in improved security programs and practices at U.S. missions abroad. While State has directed M/PRI to initiate the incident vetting process—including convening the Permanent Coordinating Committee to evaluate the facts—State’s policies do not define responsibilities for internal communication to M/PRI of incidents that may involve injury, loss of life, or destruction of property at U.S. missions abroad. Although M/PRI officials may receive information through informal channels, this approach does not ensure that M/PRI will be made aware of relevant incidents. With regard to the incidents in Cuba, M/PRI could not begin the incident vetting process for determining whether the ARB statute criteria had been met until it became aware of them in August 2017. When M/PRI is not aware of incidents that may meet the ARB statute criteria, it cannot initiate the incident vetting process for convening ARBs. Until State establishes policies that ensure the appropriate office is promptly aware of potentially relevant incidents—for example, policies that identify formal internal communication procedures and document responsibilities for such communication—State is at risk of failing to comply with the ARB statute. Improving its security programs at U.S. diplomatic posts is all the more imperative given recent reports of similar incidents, such as in Guangzhou, China.
Recommendation for Executive Action
To ensure that State’s process allows it to initiate its ARB incident vetting process in a timely manner, the Secretary of State should revise State’s policies to define responsibilities for internal communication to M/PRI of incidents that may involve injury, loss of life, or destruction of property at, or related to, U.S. missions abroad. (Recommendation 1)
Agency Comments and Our Evaluation
We provided a draft of this report to State. In its written comments, State concurred with our recommendation. State said it will improve its processes for ensuring effective internal communication. We have reprinted State’s comments in their entirety in appendix I. State also provided technical comments, which we incorporated as appropriate.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and the Secretary of State. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you and your staff have any questions about this report, please contact me at (202) 512-5130 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.
Appendix I: Comments from the Department of State
Appendix II: GAO Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Judith McCloskey (Assistant Director), Ashley Alley, Debbie Chung, Thomas Costa, Marcia Crosse, Neil Doherty, Justin Fisher, Christopher Hayes, Brandon Hunt, Joseph Kirschbaum, and George Ogilvie made key contributions to this report. | Why GAO Did This Study
U.S. diplomats and their families in Havana, Cuba, were affected by incidents that were associated with injuries, including hearing loss and brain damage. State has reported that over 20 U.S. diplomats and family members in Havana have suffered from medical conditions believed to be connected to the incidents, which began in late 2016 and have continued into 2017. By law, State is generally required to convene an ARB within 60 days of incidents that result in serious injury at, or related to, a U.S. mission abroad, but the Secretary of State can determine that a 60-day extension is necessary. According to State's policy, M/PRI is responsible for initiating and leading State's ARB incident vetting process.
This report is part of a broader request to review State's response to the incidents in Cuba. In this report, GAO examines the extent to which State's ARB policy ensures that M/PRI is made aware of incidents that may meet the ARB statute criteria. GAO analyzed relevant federal laws, State policies, and other State documents. GAO also interviewed cognizant State officials.
What GAO Found
The Department of State's (State) Accountability Review Board (ARB) policy does not ensure that the responsible office—State's Office of Management Policy, Rightsizing, and Innovation (M/PRI)—is made aware of incidents that may meet the ARB statute criteria, such as those that occurred in Cuba and were associated with injuries to U.S. personnel. According to State policy, as soon as M/PRI becomes aware of potentially qualifying incidents, M/PRI will start the process for considering whether the incident warrants an ARB. M/PRI relies on informal communication to identify potentially qualifying incidents to begin the vetting process because State does not have a policy, procedure, or process for internal communication of such incidents to M/PRI, according to State officials and GAO analysis. As illustrated in the figure below, other State entities began responding to the incidents in early 2017, but M/PRI was not made aware of the incidents until mid-August 2017, when a former M/PRI official contacted the office after seeing media reports. If M/PRI is not aware of incidents, it cannot initiate State's ARB incident vetting process. This situation puts State at risk of not meeting statutory time frames for convening an ARB and could result in State being less able to improve security programs and practices at other U.S. diplomatic posts. Standards for Internal Control in the Federal Government call for internal communication to achieve the entity's objectives and note that management should document responsibilities through policy.
What GAO Recommends
GAO recommends that State revise its policies to define responsibilities for internal communication to M/PRI of relevant incidents. State concurred with GAO’s recommendation. |
Background
The federal government is the largest real property owner in the United States with a vast inventory costing billions of dollars annually to operate and maintain. Federally owned buildings include courthouses, offices, warehouses, schools, hospitals, housing, data centers, and laboratories, among other things. GSA acts as the federal government’s landlord, and is responsible for designing, constructing, and managing federal buildings for other federal agencies and the judiciary to occupy. There are currently approximately 1,600 federally owned buildings under GSA’s custody and control.
According to the Office of Management and Budget (OMB), agencies, including GSA, should have accurate information on acquisition and “lifecycle” costs of current and proposed assets, including costs for designing and constructing the building, O&M, and disposal. For example, when planning and designing new federal buildings, GSA must analyze building energy and water systems (e.g., for air conditioning and heating) to identify those with the lowest acquisition and operating costs. In addition, once the building is constructed, GSA building managers and O&M contractors are responsible for maintaining the building, which includes tasks related to recurring maintenance and repair (e.g., on heating and cooling systems), maintaining the property’s roads and grounds, cleaning and janitorial services, and paying for utilities.
In 1994, GSA instituted the Design Excellence Program, a process for designing, constructing, renovating, altering, and repairing federal courthouses and office buildings. This program was developed in response to criticisms that federal buildings lacked architectural distinction. It stresses creativity in the design of buildings with the intent of constructing spaces that meet the tenant’s functional needs while also becoming public landmarks. More specifically, the program aims to meet several guidelines—called the Guiding Principles for Federal Architecture— including designing spaces that: reflect the dignity, enterprise, vigor, and stability of the U.S. government; avoid uniformity; and are built in locations in which federal buildings can be incorporated into the existing public streets and landscape.
According to GSA officials, the Design Excellence Program also streamlines how GSA selects and manages the private-sector architects and engineering firms it hires for new projects. The process consists of four primary stages: planning for the prospective tenant’s needs and general project details (e.g., request for proposal announcement); selecting and working with an architectural and engineering firm to design the building; selecting a contractor to construct the building; and occupancy by the tenants.
The process is overseen by a GSA project team, consisting of a project manager, contracting officer, officials from GSA’s Office of the Chief Architect, and additional subject matter experts, who work with the federal tenant that plans to occupy the space.
A large number of the federal courthouses and office buildings constructed and controlled by GSA in the last 20 years have been completed under the Design Excellence Program. Under the program, GSA has constructed 78 facilities: 62 courthouses and 16 federal office buildings, including a data center and laboratories. These buildings account for more than 36 million square feet of space, are located in 33 states and the District of Columbia, and many have won architecture and design awards. Figure 1 shows examples of federal courthouses and office buildings constructed under the Design Excellence Program.
GSA Made Design Choices That Decreased and Increased O&M Costs
Some GSA Design Choices Have Decreased O&M Costs
According to interviews with GSA officials and building tenants, GSA has made choices in some Design Excellence buildings intended to reduce long-term O&M costs. For example: Increased natural light. All 10 of the Design Excellence buildings we visited were designed to include interior natural light, which some building managers reported reduced energy costs. According to GSA officials, natural light is not only aesthetically pleasing; it also improves lighting quality for building tenants and reduces lighting costs. For example, the First Street Federal Courthouse (Los Angeles, California) has a light well as part of its atrium and a serrated glass façade that maximizes natural light. Building officials said that 22 of the 24 courtrooms in the building receive natural light from multiple sources, reducing energy usage and requiring less frequent replacement of lighting. In addition, building officials at the Albert Armendariz, Sr., U.S. Courthouse (El Paso, Texas) reported extensive natural light from a three story window wall and the front atrium; both features provide ample light for building tenants. (See fig. 2).
Durable and easily maintained materials and finishes. In most of the 10 Design Excellence buildings we visited, GSA officials and building tenants reported selecting materials and finishes that (1) are highly durable and easy and inexpensive to clean; (2) are expected to last a long time; and (3) required little maintenance. For example, the lobby walls and floors of the Ronald Reagan Federal Building and Courthouse (Santa Ana, California) are made out of travertine, a very durable stone, which has lasted more than 15 years without the need for repairs or replacement. In addition, officials at a few buildings noted that the decision to install carpet tiles in lieu of large patches of carpet has made it very easy and relatively inexpensive to maintain and repair office spaces and courtrooms.
Low-maintenance landscaping. Several of the 10 Design Excellence buildings we visited incorporated native flora into the landscape design, which can reduce energy and water costs. For example, officials planted native, drought resistant plants around the First Street Federal Courthouse (Los Angeles, California). Building officials at the Las Cruces U.S. Courthouse (Las Cruces, New Mexico), which is located in a desert environment, also reported most of the native landscape around the courthouse does not require watering.
Some GSA Design Choices Have Increased O&M Costs
According to our survey respondents—building managers at all 78 Design Excellence buildings included in our review—certain GSA design choices, such as multistory atriums and custom windows, have resulted in increased O&M costs compared to an average GSA building without those features. Almost all Design Excellence building managers (76 out of 78) reported that certain design choices resulted in increased O&M costs that would not have occurred had that design choice not been selected. For example, 67 out of 78 building managers for Design Excellence buildings stated that the effect of including multistory open spaces, like atriums, increased O&M costs due to the challenges associated with heating and cooling, making needed repairs, and cleaning these spaces. (See table 1). Building managers and tenants we spoke with confirmed our survey results, and provided examples of design choices that resulted in unexpected O&M cost increases. For example, officials noted increased O&M costs associated with separate structures and multistory atriums that were difficult to access for cleaning and repairs.
Separate Structures. Managers from only 21 of 78 Design Excellence buildings reported having an attached, but separate structure (e.g., pavilions, rotundas, restaurants, and other additional spaces connected to the building), but managers at 19 of those buildings stated that the effect of such design features increased O&M costs. For example, one federal building we visited had a rotunda with a domed roof that, according to building managers, has multiple gutter leaks that are not currently accessible due to the design of the space. As a result, maintenance staff continuously patch the ceiling without addressing the cause of the leaks (see fig. 3).
Atriums and Lobbies. Managers from 67 of 78 Design Excellence buildings reported their buildings’ multistory atriums and lobbies increased O&M costs. Several GSA managers we interviewed identified additional costs to maintain a multistory atrium or lobby, including costs for renting expensive scaffolding or mechanical lifts. For example, one Design Excellence building we visited has water leaks in the lobby ceiling, which can only be reached by extensive and expensive scaffolding (see fig. 4).
Large, Custom Windows. Managers from 65 of 78 Design Excellence buildings reported that the effect of design choices related to their buildings’ windows increased O&M costs. In addition, several Design Excellence buildings we visited had custom or uniquely shaped windows, which occasionally increased the costs to replace, repair, or maintain them. For example, GSA officials at one courthouse reported repairing one two-story, custom-made window pane, which cost $80,000 to fabricate and $50,000 to install. The courthouse had eight of these windows, and a GSA official stated that the windows are an attractive feature of the building that introduced natural light, but a different window choice would have been cheaper to maintain (see fig. 5).
Mission Spaces. Managers from 48 Design Excellence buildings reported that the effect of design choices related to mission spaces (i.e., spaces in which federal employees conduct work) increased O&M costs. Specifically, managers from 32 buildings stated that design choices made in mission spaces increased repair costs, and managers from 30 buildings reported increased cleaning costs. GSA officials at several buildings we visited discussed challenges accessing and maintaining mechanical systems incorporated into tenant mission spaces. For example, one Design Excellence building includes a heating, ventilation, and air-conditioning (HVAC) system that is hidden under a raised floor within mission spaces. Because building managers cannot easily access the system, there are maintenance delays and challenges identifying and making necessary repairs, which ultimately result in higher O&M costs. Building officials reported they considered replacing the HVAC system, but doing so would cost approximately $55 million. (See fig. 6).
Other Design Choices. According to Design Excellence building managers that responded to our survey and at locations we visited, the effect of several other design choices including energy efficient elements (e.g., solar panels and green roofs), courtyards, floors, and circulation (e.g., hallways, stairways, and elevators) increased O&M costs. For example, according to these officials, (1) the design of green roofs led to water leaks; (2) the design of courtyards led to problems maintaining unique landscaping; (3) flooring choices, specifically selected materials, led to premature scuffing and cracking; and (4) the design of hallways and stairways made them difficult to maintain.
GSA Does Not Fully Consider O&M and Functionality Effects When Making Design Choices
With the Design Excellence Program, GSA aims to create buildings that are cost-effective and function well for tenants. However, GSA makes design choices for Design Excellence buildings during the planning and design stages of new projects without fully considering the effect of these choices on O&M costs and functionality.
GSA Does Not Fully Consider How Design Choices Affect O&M Costs
GSA does not estimate most O&M costs during planning and design. Specifically, according to GSA officials we interviewed and planning documents we reviewed, when planning and designing new buildings, officials estimate the costs of major energy systems, such as boilers and chillers. However, based on our review of GSA and industry data, these systems only account for about one-third of O&M costs in Design Excellence buildings. GSA officials stated that they do not estimate the remaining two-thirds of O&M costs—which include maintenance, cleaning, and landscaping—until late in the building’s construction. However, GSA officials also said that it would be costly to make significant design changes at that point in the process. In addition, the O&M estimates for maintenance, cleaning, and landscaping are for the purpose of selecting a contractor to provide these services, not as a means for addressing or reducing future O&M costs, according to officials.
GSA building and regional managers who are responsible for addressing the O&M consequences of design choices told us that they were not always integrated or asked to participate in planning and designing new Design Excellence buildings. Specifically, GSA building and regional managers at several of the buildings we visited stated that they were never, or seldom, consulted on O&M costs and issues during the design process, nor did they have an opportunity to review design documents. A few GSA building managers we spoke with stated that on rare occasions when they were consulted their input was rarely incorporated, or was requested too late in the construction stage to allow for necessary changes. According to these officials, if given the chance, they could have highlighted issues with certain design choices that would significantly increase O&M costs and could have offered potential solutions to reduce those costs. Officials responsible for overseeing the Design Excellence Program told us that other officials with an understanding of issues surrounding O&M are involved in the process for designing new buildings through, for example, subject matter reviews of the design concepts. Officials agreed, however, that more could be done to formally involve the perspective of facilities staff, such as building managers, who are responsible for the day-to-day management of O&M.
We found that GSA’s lack of consideration of how design choices may affect the O&M costs of Design Excellence buildings could be attributed to existing procedures that do not emphasize the need to consider such costs during the planning and design stage. Specifically, GSA’s procedures for planning, designing, and constructing new Design Excellence buildings focus on design creativity, construction challenges, budget, and schedule and do not direct GSA to estimate O&M costs during planning and design. While these procedures promote several factors to consider in a building’s design—including aesthetics, functionality, and constructability—and generally require firms to submit documentation on budget and schedule, they do not call for information on expected O&M costs. In addition, these procedures do not include seeking input on design decisions from facilities personnel who will have responsibility for the ongoing O&M once the building is occupied.
Federal standards for internal control state that federal agencies should use complete and relevant information when making decisions and design control activities, including procedures, to achieve objectives. These federal standards also state that federal agencies should ensure the communication of information internally, for example through procedures that allow management to receive quality information from personnel, to help achieve the entity’s objectives. In addition, guidance from GSA and the Office of Management and Budget directs officials to consider and strive for the lowest possible costs, including O&M costs, when designing buildings.
Information on how specific design choices could affect ongoing O&M costs would allow GSA to better understand the impact of those choices. Such information is critical as O&M accounts for a significant proportion of resources dedicated to federal buildings over the long-term. According to GSA and industry associations, O&M costs are significantly higher over time than all other costs, including for construction, and typically account for between 60 and 80 percent of building lifecycle costs. To illustrate this point, we analyzed GSA construction and O&M data for Design Excellence buildings. As figure 7 shows, we estimate that over an average building’s age (60 years) the total construction and O&M costs for GSA’s 78 existing Design Excellence buildings could be about $18 billion—$8.1 billion for construction (45 percent) and $9.9 billion for O&M (55 percent). Because GSA’s procedures do not direct officials to estimate about two-thirds of O&M costs or fully integrate officials with an understanding of the O&M consequences of design decisions, officials may not have been aware of how design choices would affect approximately $6.6 billion (two-thirds of $9.9 billion) in O&M costs. In addition, without procedures that clearly emphasize the need to more fully consider O&M costs in Design Excellence buildings during the planning and design stage, GSA and other stakeholders may not have a complete picture of all relevant information necessary to make informed decisions on how to best design future federal buildings.
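To make the lifecycle arithmetic above concrete, the following minimal sketch (in Python) recomputes the split from the report's figures. The $8.1 billion construction and $9.9 billion O&M totals are taken from the report; the assumption of a constant annual O&M rate over a 60-year building life is illustrative only, not a feature of GAO's underlying data.

# Minimal sketch of the lifecycle-cost arithmetic described above.
# Construction and 60-year O&M totals are report figures; the constant
# annual O&M rate is an illustrative assumption.
BUILDING_LIFE_YEARS = 60
CONSTRUCTION_TOTAL = 8.1e9                            # dollars, all 78 buildings
OM_TOTAL_60_YEARS = 9.9e9                             # dollars over 60 years
annual_om = OM_TOTAL_60_YEARS / BUILDING_LIFE_YEARS   # assumed constant rate

lifecycle_total = CONSTRUCTION_TOTAL + OM_TOTAL_60_YEARS
om_share = OM_TOTAL_60_YEARS / lifecycle_total              # about 0.55
construction_share = CONSTRUCTION_TOTAL / lifecycle_total   # about 0.45

# Maintenance, cleaning, and landscaping (the costs GSA's procedures do
# not direct officials to estimate) are roughly two-thirds of O&M.
unestimated_om = OM_TOTAL_60_YEARS * 2 / 3                  # about $6.6 billion

print(f"Lifecycle total: ${lifecycle_total / 1e9:.1f} billion")
print(f"O&M share: {om_share:.0%}; construction share: {construction_share:.0%}")
print(f"O&M not estimated during design: ${unestimated_om / 1e9:.1f} billion")

Running this sketch reproduces the report's figures: an $18.0 billion lifecycle total split 55 percent O&M and 45 percent construction, with about $6.6 billion of O&M costs falling outside GSA's design-stage estimates.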
GSA recognizes that the focus of Design Excellence projects has been on design and construction, not O&M costs, and, in September 2017, initiated a process called “Operational Excellence” to address this gap. This process includes exploring ways to more fully consider O&M costs during planning and design, such as developing a cost tool that would estimate future O&M costs. In addition, GSA is considering ways to update existing procedures for designing and constructing new buildings to include a more comprehensive evaluation of potential O&M costs, for example, by more fully integrating knowledgeable personnel at key stages. However, according to GSA officials, they are still in the early stages of determining what needs to be done, in part because of a small staff, which includes one full-time employee and one part-time employee. As of March 2018, GSA has not established a schedule for updating its procedures to require considering O&M during design.
Design Excellence Buildings Generally Function Well, but Some Costly Design Choices Did Not Improve Functionality
Most design choices made for Design Excellence buildings, including the shape and size of courtrooms and the lighting in hallways, have had a positive effect on overall building functionality (i.e., helped the tenant agency achieve its mission), according to officials we surveyed and interviewed. For example, GSA building managers we surveyed reported the functionality of at least one design choice in most buildings (72 of 78 buildings) as good or very good. Specifically, they reported that in most buildings, the overall functionality of design choices was good in many of the areas we asked them about. In addition, building managers reported that the functionality of the following design choices was also good or very good: selected material color (53 buildings) and lighting (58 buildings); shape and size of the space (61 buildings); pedestrian circulation (61 buildings); and temperature control in the areas critical for a building’s operation, such as courtrooms or office space (46 buildings).
GSA and tenant agency officials whom we interviewed were also positive about how the design choices affected the functionality of their buildings, especially the use of windows and atriums to allow natural light. Tenants also reported they enjoyed other features of the new buildings, including commissioned artwork and the design of the interior and exterior. Tenants’ satisfaction with the function of Design Excellence buildings may, in part, reflect the condition of their previous office space. For example, one tenant noted that moving from temporary trailers into a state-of-the-art courthouse was a substantial functional improvement.
However, we found that increased spending on certain design choices did not always provide improved functionality for the building tenant. For example, GSA building managers reported that in many buildings (67 of 78) atriums and lobbies (i.e., vertical penetrations) have increased O&M costs due to higher repair, cleaning, and energy costs. At the same time, building managers reported that in 51 of those 67 buildings, choices made in the design of multistory atriums and lobbies, e.g., material color and lighting, did not have a positive effect on building functionality (see table 2). Similarly, the decision to install solar panels and green roofs (e.g., energy efficient elements), increased O&M costs in several areas, particularly repair costs, but in over half of the buildings with these features, building managers did not report an improvement in functionality. For example, in two courthouses we visited solar panels installed with the intention of saving on energy costs are not supplying as much power as expected and, therefore, have not yet provided the expected energy benefits.
Tenants we interviewed also noted that in some cases, design choices have not functioned well and are costly to maintain and operate. According to a tenant at one Design Excellence office building, while the decision to construct a multistory atrium has added aesthetic value for federal employees, it has also resulted in challenges balancing air pressure between the atrium and the adjacent office spaces. These differences in air pressure have resulted in uncomfortable working conditions, such as fluctuating temperatures, which have hampered productivity. Another tenant told us about design choices such as long hallways and elevators that do not stop at all floors, making it difficult for tenant employees to move efficiently through the building. Some of these design choices, such as elevators with mechanical systems at the bottom of the elevator shaft, have proven costly to maintain as they age more quickly. Other tenants noted that the selection of heating and cooling systems, which automatically adjust building temperatures based on time of day, for example, have not functioned as planned, resulting in variable temperatures and employee discomfort.
In addition, GSA has sometimes made design choices in buildings that do not apply to one of the primary functional goals of the Design Excellence Program—to serve as a landmark that positively represents the federal government to the public. Specifically, GSA does not consider that some buildings, due to their purpose or location, are unlikely to function as landmarks because they have limited interaction with or limited visibility by the public. In this regard, we found that most Design Excellence buildings (66 of 78) are visible and accessible to the general public, i.e., “public-facing”. Many of these buildings have succeeded in becoming public landmarks and several have won awards for their design. Specifically,
62 serve as courthouses, which are visible from public streets and people may enter to observe judicial proceedings or conduct personal business. See figure 8 for an example of a Design Excellence courthouse with publicly visible exteriors and interiors.
Four serve as office buildings for various federal agencies that are publicly accessible.
In contrast, we found that 12 Design Excellence office buildings restrict the public from accessing interior spaces. Specifically,
Seven, such as the U.S. Secret Service Headquarters and FBI field office buildings, can be seen from public sidewalks or roads even though the buildings are not open to the public. As a result, these buildings’ exteriors could be public landmarks that represent the federal government, but the interior design features are not publicly accessible. For example, the Ronald H. Brown U.S. Mission to the United Nations Building in New York City has an impressive and publicly visible exterior façade but restricts public access to a multistory rotunda and art space (see fig. 9).
Five have obstructed views from public roads and sidewalks in addition to restricting public access to the interior. Neither the exterior nor interior design choices, which can be expensive to operate and maintain, in these buildings can be seen or appreciated by the public. For example, according to the tenant agency and GSA officials, the visually impressive interior atrium and courtyard at the Ariel Rios Federal Building have proven logistically challenging and expensive to maintain and are not accessible to the public. In addition, the façade of the National Oceanic and Atmospheric Administration Satellite Operations Facility, which, according to GSA officials, is expensive to maintain and repair, is not accessible by the public. (See fig. 10).
According to GSA officials, when they carry out their planning and design for Design Excellence buildings, they do not differentiate between buildings that will be public-facing and those that will not. This approach may be in part due to the fact that GSA’s procedures for planning and designing new Design Excellence buildings do not call for consideration of how design choices may have different functional benefits, including whether the interior and exterior of planned buildings would be accessible to the public. Federal standards for internal control state that federal agencies should use complete and relevant information when making decisions and designing control activities, including procedures to achieve objectives. By taking a “one size fits all” approach and not considering the functionality of design choices, such as how a building’s location and intended use will affect the public’s ability to see the exterior and interior, GSA may be selecting design choices that increase O&M costs without improving functionality.
GSA Does Not Systematically Collect and Share Information on Common O&M Cost Experiences That Could Affect Design Choices
According to GSA officials, GSA currently does not systematically collect and share information on how design choices made for previous Design Excellence projects have affected O&M costs with the project teams— consisting of a project manager, contracting officer, and other GSA officials—that are responsible for overseeing the planning and design of new buildings. GSA has evaluated what is and is not working effectively in some existing Design Excellence buildings and has on occasion shared these evaluations with project teams. For example, GSA has evaluated the performance of 6 out of 78 Design Excellence buildings. These evaluations included identifying design decisions that led to higher O&M costs and, on one occasion, developed a formal presentation to share these lessons with the team working on a new Design Excellence project.
According to officials, GSA requires agency personnel with subject matter expertise to review building design concepts provided by private-sector architects and engineers. GSA also fosters information sharing through procedures that encourage project teams to exchange ideas, lessons learned, and concerns. However, these processes either (1) are not done in a consistent or systematic way, or (2) require information sharing among a small group of officials, i.e., a project team, which might not have visibility over the extensive design choices made in all existing buildings. While all of these information-sharing initiatives offer benefits, GSA’s procedures do not include a systematic collection and sharing of information with the project teams responsible for managing new Design Excellence projects on how design choices affected O&M costs in existing Design Excellence buildings. According to GSA officials, they are considering formalizing this sort of information collection and sharing as part of the Operational Excellence process, but as previously noted, GSA is in the early stages of setting up this initiative and has not established a schedule for completing its actions or updating its procedures.
As discussed, some design choices in existing Design Excellence buildings have decreased or increased O&M costs. Since GSA does not systematically share how these types of design choices affected O&M costs with teams responsible for planning and designing new buildings, similar issues could occur in future buildings. For example, we previously mentioned that building managers indicated that using durable materials, low maintenance landscaping, and energy-efficient lighting can reduce long-term O&M costs.
Building managers also reported common issues caused by design choices that led to increased costs including: Inefficiently located mechanical systems. Building managers reported the location of mechanical systems in Design Excellence buildings often led to increased cost. Specifically, building managers reported the location of these systems increased repair costs (41 out of 77 buildings) and energy costs (32 out of 77 buildings). In the Design Excellence buildings we visited, building managers and tenants reported issues with the location of mechanical systems (4 buildings). For example, officials indicated that air-conditioning systems were placed in inefficient locations that required more energy usage because water had to be pumped unnecessarily far distances (see fig. 11).
Difficult-to-access lights. Building managers reported that design choices for the location of interior lights increased maintenance costs in the majority of Design Excellence buildings (55). In particular, managers reported that the location of lights in atriums and lobbies (38 buildings) and courtrooms and other mission spaces (33 buildings) increased costs. In addition, GSA officials at locations we visited said that lights above tall staircases, ceiling lights in atriums and auditoriums, and lights directly above permanent structures led to additional costs, including the need to use scaffolding or rent large equipment to maintain these lights. (See fig. 12). One way that a majority of GSA building managers (61) we surveyed are attempting to mitigate high maintenance cost for lighting issues is to install energy efficient equipment, such as light-emitting diode (LED) lights.
Difficult-to-maintain materials and finishes. In 68 Design Excellence buildings, building managers reported that materials or finishes were chosen that are easily worn. Similarly, in buildings we visited (4 buildings), GSA officials reported that decisions on the materials used or configuration of exterior surfaces (e.g., the roof or façade) of a Design Excellence building led to repair and maintenance problems, particularly water leaks. (See fig. 13).
Hard to clean surfaces. Cleaning surfaces, especially in atriums, can be a challenge for maintaining Design Excellence buildings. For example, building managers we surveyed reported that the decision to install certain types of window treatments increased cleaning costs (49 buildings). In three buildings we visited, building managers and tenants also said Design Excellence buildings required special equipment or scaffolding to clean windows or surfaces, which led to increased cleaning costs. (See fig. 14).
According to federal standards for internal control, agencies should use and communicate complete and relevant information when designing control activities, including procedures to achieve objectives. Without a formalized process for systematically collecting and sharing how design choices affected O&M costs in existing buildings, designs for future Design Excellence buildings may not benefit from the successful strategies used by others to reduce O&M costs or may continue to repeat problematic choices that may result in increased O&M costs.
Conclusions
Through the Design Excellence Program, GSA has achieved excellence in architecture and the design of federal buildings. Buildings constructed under the Design Excellence Program have created unique and aesthetically pleasing workspaces, have met the functional needs of tenant agencies, and have become public landmarks. However, because GSA does not have program procedures that call for consideration of how certain design features may affect O&M, it may not be fully aware of the costs of including these features in its building design and plans. Specifically, GSA does not estimate the full O&M costs of design choices, gather perspectives on those costs from building and regional managers, or consider the extent to which design choices will improve the functionality of the building for tenants and the public. For example, GSA’s one-size-fits-all approach in designing these buildings does not consider whether non-public buildings need the same costly architectural elements as buildings intended to serve as public landmarks. Further, GSA is missing opportunities to improve future building designs by not systematically gathering and sharing information on the common design choices that had both positive and negative effects on O&M costs. Without a clear picture of the ongoing costs of these choices, GSA and other stakeholders are missing critical information to better inform the design and construction of new buildings. While GSA has just begun an Operational Excellence initiative to help identify future O&M costs, it is not clear what actions GSA will take to improve consideration of O&M costs during planning and design or when it will take those actions.
Recommendations for Executive Action
We are making the following four recommendations to GSA:
The Administrator of the General Services Administration should update existing procedures to require GSA officials to estimate the full operations and maintenance costs of design choices in the planning and design process for new Design Excellence buildings. (Recommendation 1)
The Administrator of the General Services Administration should update existing procedures to require GSA officials to obtain information from personnel responsible for addressing the operations and maintenance consequences of design choices at key decision points during the planning and design of new Design Excellence buildings. (Recommendation 2)
The Administrator of the General Services Administration should update existing procedures to require GSA officials to further consider and document, during the planning and design of new Design Excellence buildings, how design choices may affect building functionality, such as whether a building is publicly visible and accessible. (Recommendation 3)
The Administrator of the General Services Administration should update existing procedures to require GSA officials to systematically collect and share information with project teams responsible for overseeing the planning and design of new buildings on the positive and negative effects of common design choices on operations and maintenance costs in existing Design Excellence buildings. (Recommendation 4)
Agency Comments
We provided a draft of this report to GSA, the Administrative Office of the U.S. Courts, the Department of Homeland Security, the Department of Justice, and the Department of Commerce for comment. In written comments, reproduced in appendix IV, GSA stated that it agreed with our recommendations and provided several technical comments. GSA clarified its policies for selecting and analyzing the lifecycle costs of building systems. In addition, GSA stated that table 2 in our report did not capture the full functional benefits of and reasons for making certain design choices. As we noted in the report, this table does not preclude that a specific design choice may be functional or have functional benefits. We also included several of the examples GSA highlighted in its comments, such as the functional need for a separate structure, which may serve key security functions. GSA also stated that our conclusions did not indicate that most Design Excellence buildings functioned well. We added language to the conclusions to clarify this point.
The Administrative Office of the U.S. Courts, the Department of Homeland Security, the Department of Justice, and the Department of Commerce did not provide comments.
We are sending copies of this report to the appropriate congressional committees, the Administrator of the General Services Administration, the Director of the Administrative Office of the U.S. Courts, the Attorney General, and the Secretaries of Homeland Security and Commerce. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.
Appendix I: Objectives, Scope, and Methodology
This report assesses the extent to which: (1) the General Services Administration (GSA) made design choices that affect operations and maintenance (O&M) costs; (2) GSA considers O&M costs and functionality when planning and designing buildings; and (3) GSA systematically collects and shares information on O&M costs related to design choices in existing buildings.
To address all of our objectives, we reviewed applicable federal regulations; GSA procedures, policies, and standards for designing, constructing, and operating federal facilities, including specific policies and procedures for Design Excellence buildings; our prior work; and reports by other federal agencies and related professional organizations on topics including the standard costs of operating and maintaining office buildings. Our review examined 78 federal buildings and courthouses that GSA constructed under the Design Excellence Program (referred to as "Design Excellence buildings") since the program started in 1994. At our request, GSA provided a list of all buildings under the agency's custody and control that were constructed under the Design Excellence Program. Based on input from GSA officials indicating that large campuses were unlikely to have reliable O&M data, we excluded nine buildings that are part of the White Oak Campus in Silver Spring, Maryland. We reviewed relevant GSA documents pertaining to the remaining 78 Design Excellence buildings, including the most recent Asset Business Plans detailing investment needs for maintenance and repairs, strategies for efficient operations, building use, and tenant satisfaction. We analyzed GSA-provided historical data on construction and O&M costs from 2000 to 2016 for the buildings in our review and projected future O&M costs. To calculate our projection, we made several assumptions, including (1) that annual O&M costs would continue at the same level as 2016 O&M costs ($174 million), and (2) that Design Excellence buildings will reach the average age of all current GSA buildings (60 years). We assessed the reliability of these data through electronic testing and a review of documentation on the data. We determined that the data provided were sufficiently reliable for the purpose of illustrating the extent to which O&M costs make up total building costs.
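To illustrate the projection methodology described above, the following is a minimal sketch, in Python, of the calculation under the two stated assumptions. The current average age of the portfolio used below is a hypothetical placeholder rather than a figure from our analysis.

```python
# Illustrative sketch of the O&M cost projection described above.
# Stated assumptions: annual O&M costs continue at the 2016 level
# ($174 million across the 78 buildings), and the buildings remain in
# service until they reach the 60-year average age of GSA's current
# inventory. The current average age below is a hypothetical placeholder.

ANNUAL_OM_COST = 174_000_000   # 2016 O&M costs for the 78 buildings
TARGET_AVG_AGE = 60            # average age of all current GSA buildings
current_avg_age = 12           # hypothetical average age of the portfolio

remaining_years = TARGET_AVG_AGE - current_avg_age
projected_remaining_om = ANNUAL_OM_COST * remaining_years

print(f"Projected remaining O&M costs: ${projected_remaining_om:,.0f} "
      f"over {remaining_years} years")
```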
We also conducted a web-based survey of GSA building managers responsible for overseeing O&M for the 78 Design Excellence buildings included in our review. The survey addressed the extent to which certain design choices affect O&M costs and building functionality. We developed the survey based on our objectives, prior GAO work, and site visits to 10 Design Excellence buildings. We pretested the survey with GSA officials at three Design Excellence buildings, which were selected based on building age, location, total square feet, fiscal year 2016 O&M costs, and the building’s primary use (e.g., office or courthouse). As part of our pretesting, we asked GSA building managers to explain their understanding of survey questions and made edits based on their comments. We conducted the survey from November 2017 to March 2018 and our response rate was 100 percent (78 out of 78). See appendix III for a copy of the survey and summarized responses.
We visited 10 Design Excellence buildings in three GSA regions to view design choices and O&M activities. As part of these site visits, we conducted interviews with tenant agencies located in these buildings, GSA building managers responsible for managing these buildings, and officials from GSA regional offices with oversight responsibilities for these buildings. To select our site visit locations and ensure geographic and agency diversity, we considered several factors, including building operating costs, size, location, and the tenant agency. Based on these criteria, we selected the buildings listed in table 3. The interviews and tours we conducted during our site visits do not allow us to generalize the findings to all Design Excellence buildings. Information gathered from our site visits did, however, allow us to show how O&M costs were considered in specific Design Excellence buildings and the effects of design choices.
We also interviewed GSA officials located in GSA Headquarters within the Office of Design and Construction, including the Chief Architect, and the Office of Facilities Management. We also interviewed GSA regional officials within the Office of Facilities Management in four of GSA's 11 regional offices: Greater Southwest Region, National Capital Region, Pacific Rim Region, and Southeast Sunbelt Region. We selected regional offices based on the location of our site visits and included one additional regional office based on its having the highest total O&M operating costs of the eight remaining regional offices. We discussed several topics with GSA officials, including how O&M costs were considered during planning and design and how information on the O&M costs of design choices is shared.
To determine the extent to which GSA considers O&M costs and functionality when planning and designing buildings, we analyzed Federal Real Property Profile (FRPP) data. Our analysis of U.S. government-owned office buildings that are less than 40 years old, occupied, and needed for a tenant's mission identified five potentially relevant variables to explain variation in O&M costs: building type (i.e., whether a building was constructed under the Design Excellence Program), size, age, and condition of the building, as well as the median hourly wage of O&M services in the building's location. After controlling for these variables, we found that size and median hourly wage, but not building type, had a statistically significant relationship to O&M costs. We assessed the reliability of these data through electronic testing as well as a review of documentation for each federal data source. We determined that the data provided were sufficiently reliable for the purpose of describing our attempts to identify factors that influence O&M costs in federal buildings. We also requested and received additional information from the building managers of Design Excellence federal office buildings. Specifically, we asked for information on the extent to which these federal office buildings are public-facing, have restrictions on public entry, and are visible from public sidewalks or roads, as well as the daily volume of public visitors.
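The analysis described above is, in essence, a multiple regression of O&M costs on the five variables. The following is a minimal sketch of how such a model might be specified; the data file and column names are hypothetical stand-ins, not the actual FRPP extract or code used in our analysis.

```python
# Minimal sketch of the kind of regression described above: O&M cost
# modeled as a function of building type, size, age, condition, and
# local wages. The CSV file and column names are hypothetical stand-ins.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("frpp_office_buildings.csv")  # hypothetical FRPP extract

# is_design_excellence: 1 if built under the program, 0 otherwise
model = smf.ols(
    "annual_om_cost ~ is_design_excellence + sq_feet + age_years"
    " + condition_index + median_hourly_wage",
    data=df,
).fit()

# p-values in the summary indicate which variables are statistically
# significant after controlling for the others
print(model.summary())
```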
We compared GSA's efforts to consider O&M costs in the planning and design of Design Excellence buildings to pertinent Standards for Internal Control in the Federal Government on using complete and relevant information when making decisions and designing control activities, including procedures, to achieve objectives, as well as on communicating information internally. In addition, we compared GSA's efforts to consider these costs in the planning and design of Design Excellence buildings to guidance from GSA and the Office of Management and Budget that directs agency officials to consider and strive for the lowest possible costs, including O&M costs, when designing buildings. We also compared GSA's efforts to consider functionality when planning and designing these buildings to pertinent Standards for Internal Control in the Federal Government on using complete and relevant information when making decisions and designing control activities, including procedures, to achieve objectives.
To assess the extent to which GSA systematically collects and shares information on O&M costs related to design choices in existing Design Excellence buildings, we reviewed Post Occupancy Evaluations commissioned by GSA on six Design Excellence buildings. These evaluations contain information such as how GSA buildings are performing and the extent to which they comply with GSA's federal standards for public buildings. These evaluations can include reviews of operations and maintenance documentation; interviews and surveys with building occupants; interviews with relevant GSA staff and architectural and engineering design team staff; and an on-site evaluation. We also compared GSA's process for collecting and sharing how design choices affected O&M costs in existing buildings to pertinent Standards for Internal Control in the Federal Government on using and communicating complete and relevant information when designing control activities, including procedures, to achieve objectives.
We conducted this performance audit from May 2017 to May 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Buildings Constructed under the General Services Administration’s (GSA) Design Excellence Program
GSA created the Design Excellence Program in 1994. Under this program, GSA has constructed 78 buildings in 33 states and the District of Columbia, buildings that range in size from about 35,000 to over 3 million gross square feet (see table 4).
Appendix III: Survey of General Services Administration (GSA) Building Managers and Summarized Results
This appendix provides a copy of the survey completed by managers for all 78 buildings constructed under GSA's Design Excellence Program included in our review. The appendix also includes the responses received for each of the close-ended questions (1a, 1b, 1c, 1e, 2a, 3a, and 4a); it does not include information on open-ended responses (1d, 1f, 2b, 3b, 3c, 4b, and 5). The purpose of this survey was to gather responses on how design choices affected operations and maintenance (O&M) costs and building function. See appendix I for additional information on our survey methodology.
Appendix V: GAO Contact and Staff Acknowledgments
GAO Contact
Lori Rectanus, (202) 512-2834 or [email protected].
Staff Acknowledgments
In addition to the contact named above, Keith Cunningham (Assistant Director); Matthew Cook (Analyst in Charge); Eli Albagli; Sarah Arnett; Colin Ashwood; Melissa Bodeau; Lacey Coppage; Caitlin Cusati; Terrence Lam; Joshua Ormond; Dae Park; Minette Richardson; Kelly Rubin; Ardith Spence; and Dave Wise made key contributions to this report.

Why GAO Did This Study
Since 1994, GSA has spent more than $8 billion to construct 78 new federal buildings through its Design Excellence program. Some design choices can affect a building's O&M costs and functionality.
GAO was asked to review GSA's ability to manage O&M costs under the Design Excellence program. This report assesses the extent to which: (1) GSA's design choices affect O&M costs; (2) GSA considers O&M costs and functionality when planning and designing buildings; and (3) GSA systematically collects and shares information on O&M costs.
GAO conducted a web-based survey of building managers for the 78 Design Excellence buildings. GAO also visited 10 Design Excellence buildings in three GSA regions selected based on several factors, including geographic and agency diversity. GAO reviewed GSA documents, and interviewed GSA officials and building tenants. Information obtained through site visits and interviews is not generalizable.
What GAO Found
The goals of the General Services Administration's (GSA) Design Excellence Program are to creatively design federal buildings that meet federal agencies' functional needs and become public landmarks. Some design choices for Design Excellence buildings have decreased ongoing operations and maintenance (O&M) costs, but others have increased those costs. GSA's building managers and tenants told GAO that design choices that have reduced O&M costs include the use of durable materials and low-maintenance landscaping. Other design choices have increased O&M costs. For example, according to GAO's survey of 78 building managers of Design Excellence buildings, multistory atriums often led to additional O&M costs, including the need to erect expensive scaffolding for maintenance.
While GSA aims to create Design Excellence buildings that are cost-effective and functional, it makes design choices without fully considering their effect on O&M costs and functionality. For example, GSA officials do not estimate the majority of O&M costs associated with their design choices, such as building maintenance, until the design is almost finalized. This outcome is partly because GSA procedures do not direct GSA officials to develop such estimates during the design and planning of Design Excellence buildings and because building and regional managers responsible for addressing the O&M consequences are also not involved in the design and planning process. As a result, important cost information that could help building project teams make the most cost-effective design choices is not available to them. In addition, while building managers GAO surveyed reported that GSA's design choices generally support a building's functionality, they also reported that some design choices increased O&M costs without improving functionality. For example, they identified design choices related to material color and lighting that increased O&M costs but did not enhance the functionality of the building for the tenants.
Although GSA has developed some information on how design choices can affect O&M costs, it does not consistently collect and share such information. For example, GSA has evaluated the performance of only six Design Excellence buildings, and does not systematically collect information on how design choices have affected O&M costs in all existing buildings. Without a process to collect and share such information, future buildings may not benefit from these lessons, and problematic choices may be repeated.
What GAO Recommends
GAO is making four recommendations to update existing GSA procedures for planning and designing new buildings to: (1) estimate full O&M costs; (2) obtain information from personnel responsible for addressing the O&M consequences of design decisions; (3) further consider how design choices may affect building functionality; and (4) systematically collect and share lessons from existing buildings. GSA agreed with these recommendations. |
Law Enforcement Agencies Reported Various Uses and Benefits from the Transfer of the DOD Excess Controlled Property
Federal law enforcement agencies and state coordinators in our survey, as well as officials we interviewed from federal, state, and local law enforcement agencies, reported various uses of DOD excess controlled property for law enforcement activities. The reported uses included enhancing counterdrug, counterterrorism, and border-security activities. Also, law enforcement agencies reported using DOD's excess controlled property for other law enforcement activities, such as search and rescue, natural disaster response, surveillance, reaching barricaded suspects, police training, and the serving of warrants.
Federal, state, and local agencies cited a number of ways in which they had benefited from the LESO program, with several reporting that the transfers of controlled property allowed them to save money. For example, a local law enforcement official in Texas reported that 96 percent of the department budget goes to salaries and that the LESO program helped the department acquire items that it would otherwise not be able to afford, saving the department an estimated $2 million to $3 million. Additionally, agencies provided examples of how property they received through the LESO program has been used. For example, Bureau of Indian Affairs officials reported that they have used vehicles to support their Office of Justice Services' drug unit during marijuana eradication and border operations by providing transport to agents over inhospitable terrain in mountainous and desert environments. In another example, Texas law enforcement officials reported that the San Marcos and Hays County police departments used their issued Mine Resistant Ambush Protected (MRAP) vehicles to rescue more than 600 stranded people from floodwaters in October 2015. Moreover, the Los Angeles County Sheriff's Department reported that it used a robot to remove a rifle from an attempted murder suspect who had barricaded himself.
DLA Has Taken Some Actions to Address Weaknesses in Its Excess Controlled Property Program, but Deficiencies Exist in Key Processes
DLA Actions to Address Weaknesses in LESO Program
DLA has taken some steps to address previously identified weaknesses in its processes for transferring and monitoring its excess controlled property through revisions to its policy and procedures on the management, oversight, and accountability of the LESO program. Such revisions were made, in part, because of recommendations made by the DOD and DLA Offices of Inspector General. The DOD and DLA Offices of Inspector General conducted four audits of the LESO program between 2003 and 2013 that identified more than a dozen recommendations, such as developing and implementing written standard operating procedures for the approval and disapproval of law enforcement agency property requests and issuance, transfer, turn-in, and disposal of LESO property. In our July 2017 report, we found the department had taken the following actions to enhance its transfer process through revisions to policy and procedures:

transitioned full management responsibility of the LESO Program to DLA Disposition Services in 2009;

developed LESO Program Standard Operating Procedures in 2012 and updated them in 2013;

transitioned to a new data system in 2013 after identifying that the old system was not capable of post-issue tracking;

revised the DLA instruction that provides policy, responsibility, and procedures for DLA's management responsibilities of the LESO program in 2014 and 2016; and

revised LESO program processes in 2016 to incorporate recommendations made by the Federal Interagency Law Enforcement Equipment Working Group, such as defining executive order controlled property or prohibiting schools K-12 from participating in the program.
In addition, DLA is in the process of developing additional training on LESO program policies and procedures, and is establishing memorandums of understanding with federal law enforcement agencies on the general terms and conditions of participating in the program, including the restrictions on the transfer and sale of controlled property.
DLA Has Deficiencies in Its Processes for Verifying and Approving Applications and Transferring Property and Has Not Conducted a Risk Assessment
We found weaknesses in three areas: (1) verifying and approving applications, (2) transferring property, and (3) the assessment of risk. First, our independent testing of the LESO program's internal controls identified deficiencies in the processes for verification and approval of federal law enforcement agency applications. Specifically, our investigators, posing as authorized federal law enforcement officials of a fictitious agency, applied for and were granted access to the LESO program in early 2017. In late 2016, we emailed our completed application to the LESO program office. Our application contained fictitious information, including agency name, number of employees, point of contact, and physical location. In early 2017, after revising our application at the direction of LESO officials, we were notified that our fictitious law enforcement agency was approved to participate in the LESO program. LESO officials also emailed us to request confirmation of our agency's authorizing statute; in response, our investigators submitted fictitious authorizing provisions presented as provisions in the U.S. Code. At no point during the application process did LESO officials verbally contact officials at the agency we created, either the main point of contact listed on the application or the designated point of contact at the headquarters level, to verify the legitimacy of our application or to discuss establishing a memorandum of understanding with our agency.
DLA’s internal controls for verifying and approving federal agency applications and enrollment in the LESO program were not adequate to prevent the approval of a fraudulent application to obtain excess controlled property. Specifically, LESO’s reliance on electronic communications without actual verification does not allow it to properly vet for potentially fraudulent activity. For example, DLA did not require supervisory approval for all federal agency applications, or require confirmation of the application with designated points of contact at the headquarters of participating federal agencies. Additionally, at the time we submitted our application, DLA officials did not visit the location of the applying federal law enforcement agency to help verify the legitimacy of the application. After our briefing of DLA officials in March 2017 on the results of our investigative work, DLA officials stated they took immediate action, and in April 2017 visited 13 participating federal law enforcement agencies. However, at this time DLA has not reviewed and revised the policy or procedures for verifying and approving federal agency applications and enrollment in the LESO program.
Second, our independent testing also identified deficiencies in the transfer of controlled property, such as DLA personnel not routinely requesting and verifying identification of individuals picking up controlled property or verifying the quantity of approved items prior to transfer. Our investigators, after being approved to participate in the LESO program, obtained access to the department's online systems to view and request controlled property. We subsequently submitted requests to obtain controlled property, including non-lethal items and items that could be potentially lethal if modified with commercially available parts. In less than a week after submitting the requests, our fictitious agency was approved for the transfer of over 100 controlled property items with a total estimated value of about $1.2 million. The estimated values of individual items ranged from $277 to over $600,000 and included items such as night-vision goggles, reflex (also known as reflector) sights, infrared illuminators, simulated pipe bombs, and simulated rifles. Our investigator scheduled appointments and obtained the controlled property items, such as those shown in the photos below.
Using fictitious identification and law enforcement credentials, along with the LESO-approved documentation, our investigator was able to pass security checks and enter the DLA Disposition Services warehouse sites. Personnel at two of the three sites did not request or check for valid identification of our investigator picking up the property. According to DLA guidance, direct pickup of allocated property may be made by an individual with valid identification and the appropriate DOD authorization form that is signed by the authorized individual listed in the letter.
DLA has not taken steps to reasonably ensure that onsite officials routinely request and verify valid identification of the individual(s) authorized to pick up allocated property from the LESO program, as required by the guidance. DLA officials acknowledged they could take additional steps to ensure compliance with the requirements in the handbook. Furthermore, although we were approved to receive over 100 items and the transfer documentation reflects this amount, we were provided more items than we were approved to receive. The discrepancy involved one type of item: infrared illuminators. We requested 48 infrared illuminators, but onsite officials at one Disposition Services site provided us with 51 infrared illuminators in 52 pouches, of which one pouch was empty. Additionally, we found that one DLA Disposition Services site had a checklist as part of its transfer documentation for its personnel to complete. The checklist required manual completion of several items, including quantity, date, and who fulfilled the order. The other two DLA Disposition Services sites, including the site that transferred the wrong quantity, did not include this checklist with the transfer documentation we received. DLA guidance requires that accountability records be maintained in auditable condition to allow property to be traced from receipt to final disposition. We concluded that without guidance that specifically requires DLA Disposition Services' on-site officials to verify the type and quantity of approved items against the actual items being transferred prior to removal from the sites, DLA will lack reasonable assurance that the approved items transferred are appropriately reflected in its inventory records.
Third, while DLA has taken some steps, mostly in early 2017, to address identified deficiencies in the LESO program, DLA lacks a comprehensive framework for instituting fraud prevention and mitigation measures. During the course of our review, DLA revised the LESO program applications by requiring applicants to sign an attestation that the agency they represent is a legitimate law enforcement agency. Further, DLA officials stated that they are more carefully reviewing the legitimacy of some information on the application, such as email addresses, and are physically visiting federal agencies that enter into memorandums of understanding with the LESO program.
However, as previously discussed, we identified internal control weaknesses in the policy and procedures for verifying and approving federal agency applications and enrollment, as well as weaknesses throughout the process from approval to the actual transfer of the items to the agencies, which indicates that DLA has not examined potential risks for all stages of the process. According to GAO's Fraud Risk Framework, effective fraud risk managers collect and analyze data on identified fraud schemes, use these lessons learned to improve fraud risk management activities, and plan and conduct fraud risk assessments that are tailored to their programs. The framework states there is no universally accepted approach for conducting fraud risk assessments, since circumstances among programs vary. However, per leading practices, assessing fraud risks generally involves five actions: (1) identifying inherent fraud risks affecting the program, (2) assessing the likelihood and effect of those fraud risks, (3) determining fraud risk tolerance, (4) examining the suitability of existing fraud controls and prioritizing residual fraud risks, and (5) documenting the program's fraud risk profile.
DLA has begun to examine some fraud risks associated with the LESO program. However, DLA officials acknowledged during our March 2017 meeting that they have not conducted a fraud risk assessment of the LESO program, including the application process, and, as such, DLA has not designed or implemented a strategy with specific control activities to mitigate risks to the program. We concluded that conducting such an assessment could lead to program-wide improvements, including strengthening the controls to verify the legitimacy of applicants.
Overall, we concluded in our July 2017 report that DLA's internal controls did not provide reasonable assurance in preventing fraud. Therefore, we made four recommendations for DLA to:

review and revise policy or procedures for verifying and approving federal agency applications and enrollment;

ensure that DLA Disposition Services on-site officials transferring controlled property verify that persons picking up items have valid identification and are authorized to pick up allocated property from the LESO program;

issue guidance that requires DLA Disposition Services on-site officials to verify the type and quantity of approved items against the actual items being transferred prior to removal from the sites; and

conduct a fraud risk assessment to design and implement a strategy with specific internal control activities to mitigate assessed fraud risks.
DOD concurred with all of our recommendations and highlighted actions to address each one.
Chairman Wilson, Ranking Member Bordallo, and Members of the Subcommittee, this concludes our prepared statement. My colleague, Mr. McElrath, and I would be pleased to respond to any questions that you may have at this time.
Contacts and Acknowledgments
For questions about this statement, please contact Zina D. Merritt at (202) 512-5257 or [email protected] or Wayne A. McElrath at (202) 512-2905 or [email protected]. In addition, individuals making significant contributions to this statement include: Marilyn Wasleski, Assistant Director; Laura Czohara, Martin de Alteriis, Barbara Lewis, Felicia Lopez, Maria McMullen, George Ogilvie, Richard Powelson, and Samuel Woo.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Why GAO Did This Study
This testimony summarizes the information contained in GAO's July 2017 report, entitled DOD Excess Property: Enhanced Controls Needed for Access to Excess Controlled Property (GAO-17-532).
For more information, contact Zina D. Merritt at (202) 512-5257 or [email protected], or Wayne A. McElrath at (202) 512-2905 or [email protected].
What GAO Found
The Defense Logistics Agency (DLA) has taken some actions and is planning additional actions to address identified weaknesses in its excess controlled property program. However, internal control deficiencies exist for, among other things, ensuring that only eligible applicants are approved to participate in the Law Enforcement Support Office (LESO) program and receive transfers of excess controlled property. DLA is establishing memorandums of understanding with participating federal agencies intended to, among other things, establish general terms and conditions for participation; is revising its program application to require additional information from prospective participants; and plans to provide additional online training for participating agencies, expected to begin in late 2017. However, GAO created a fictitious federal agency to conduct independent testing of the LESO program's internal controls and DLA's transfer of controlled property to law enforcement agencies.
Through the testing, GAO gained access to the LESO program and obtained over 100 controlled items with an estimated value of $1.2 million, including night-vision goggles, simulated rifles, and simulated pipe bombs, which could be potentially lethal if modified with commercially available parts. GAO's testing identified that DLA has deficiencies in the processes for verification and approval of federal law enforcement agency applications and in the transfer of controlled property, such as DLA personnel not routinely requesting and verifying identification of individuals picking up controlled property or verifying the quantity of approved items prior to transfer. Further, GAO found that DLA has not conducted a fraud risk assessment of the LESO program, including the application process. Without strengthening DLA and LESO program internal controls over the approval and transfer of controlled property to law enforcement agencies, such as reviewing and revising policy or procedures for verifying and approving federal agency applications and enrollment, DLA lacks reasonable assurance that it has the ability to prevent, detect, and respond to potential fraud and minimize associated security risks.
Examples of Controlled Property Items Obtained
DLA maintains a public Internet site to address statutory requirements to provide information on all property transfers to law enforcement agencies. DLA's public Internet site shows all transferred property, and, as of April 2017, in response to GAO's findings, has included a definition of controlled property to distinguish for the general public what items are considered controlled. |
Background
After the terrorist attacks of September 11, 2001, Congress passed and the President signed the Aviation and Transportation Security Act into law on November 19, 2001, with the primary goal of strengthening the security of the nation’s civil aviation system. The act established TSA as the agency with responsibility for securing all modes of transportation, including civil aviation. As part of this responsibility, TSA performs or oversees security operations at the nation’s nearly 440 commercial airports, including managing passenger and checked baggage screening operations.
TSOs inspect individuals and property to deter and prevent passengers from bringing prohibited items on board an aircraft or into the airport sterile area—in general, an area of an airport to which access is controlled through the screening of persons and property. While working at an airport checkpoint as shown in figure 1, TSOs perform a variety of tasks, which include:
Travel document verification: a TSO checks each passenger's identification against the boarding pass and confirms that the identification matches the individual presenting it.
Divestiture: a TSO assists passengers by informing them what items need to be placed on the x-ray conveyor belt.
X-ray interpretation: TSOs screen passengers’ carry-on baggage and personal property by interpreting x-ray images to identify any prohibited items.
Advanced imaging technology operations: passengers are screened via advanced imaging technology (AIT), often referred to as body scanners, which identifies areas where they may be concealing prohibited items.
Walk-through metal detector operation: a TSO operates the walk- through metal detector.
Physical searches: Passengers can opt to be screened through a physical search, or TSOs may perform a physical search to resolve an alarm triggered by the AIT system or the walk-through metal detector, among other reasons.
Explosive trace detection and manual searches of property: TSOs use an explosives trace detection system by swabbing carry-on baggage and testing the sample for explosive residue or vapors. This test is usually performed in conjunction with a manual search of the carry-on baggage.
Exit lane monitoring: a TSO watches the lane through which passengers exit the sterile area to ensure that no one enters the sterile area through that passage.
Within TSA, two offices work together to manage TSOs and ensure their training is current and relevant. The Office of Security Operations (OSO) is responsible for allocating TSO staff to airports, scheduling TSO work hours and training availability, and developing SOPs that govern how TSOs screen passengers and baggage. The Office of Training and Development (OTD) is responsible for developing initial and ongoing training curricula for TSOs based in part on SOPs. Within OTD, a dedicated team is located at the Academy to manage updates to TSO Basic Training.
In accordance with the Aviation and Transportation Security Act, screeners must complete a minimum of 40 hours of classroom instruction, 60 hours of on-the-job training, and successfully complete an on-the-job training examination. Until 2016, new TSOs completed these training requirements at or near their home airports through the New Hire Training Program (NHTP). Since TSA centralized the TSO Basic Training program in January 2016, TSOs fulfill these training requirements through classroom training at the Academy as well as training at their home airports prior to the Academy and on-the-job training after completion of TSO Basic Training. During the 2 weeks spent at the Academy, TSOs receive 80 hours of training on standard operating procedures, threat detection, and the use of screening equipment. Prior to attending TSO Basic Training, new TSOs complete computer-based prerequisite training and may shadow experienced TSOs at a checkpoint. TSO Basic Training allows for participants to be trained at a dedicated facility with more hands-on training than was possible for NHTP (see Appendix I for a comparison of the two programs).
As shown in table 1, of the $53 million obligated from January 2016 through March 2018, TSA obligated $18.2 million for procurement and development of the modular buildings on the FLETC campus used for TSA training, as well as associated hardware and set-up obligations such as audio/video equipment and fully operational simulated checkpoints. TSA obligated an additional $12 million in fiscal year 2016 and $13.7 million in fiscal year 2017 for the delivery of TSO Basic Training, including associated student travel and related equipment. TSA officials told us that due to the continuing resolutions that funded the government between October 2017 and March 2018, TSA was not able to fully fund the interagency contract between TSA and FLETC to support the TSO Basic Training course at the beginning of fiscal year 2018. For this reason, TSA does not yet have fiscal year 2018 training obligations available for reporting through its accounting system. However, based on the average cost per student in fiscal year 2017 of about $2,300 to attend TSO Basic Training, TSA estimates total training obligations of approximately $9.1 million in the first half of fiscal year 2018.
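As a rough illustration, the figures reported above imply that roughly 3,950 students were expected to attend in the first half of fiscal year 2018 ($9.1 million divided by about $2,300 per student). A minimal sketch of the arithmetic follows; the implied student count is not a figure reported by TSA.

```python
# Back-of-the-envelope check of the fiscal year 2018 estimate using the
# figures reported above.
cost_per_student = 2_300           # fiscal year 2017 average cost per student
estimated_obligations = 9_100_000  # estimate for first half of fiscal year 2018

implied_students = estimated_obligations / cost_per_student
print(f"Implied number of students: {implied_students:,.0f}")  # about 3,957
```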
TSA Established the TSO Basic Training Program at the Academy to Obtain Benefits from Centralized Training
According to the business case for TSO Basic Training and TSA officials, implementation of the TSO Basic Training program at the Academy was anticipated to provide a number of potential benefits. The anticipated benefits generally fall into two distinct categories: (1) efficiencies and improvements obtained through the centralized delivery of training, and (2) enhanced professionalism and "esprit de corps" obtained through bringing newly hired screeners together for centralized training. Collectively, these benefits were also envisioned by TSA headquarters officials to have a positive impact on screening effectiveness and public perception of the TSA workforce.
Based on several analyses of training delivery options that TSA has conducted since 2008, TSA determined that a centralized training academy would have a number of potential benefits relative to the decentralized training previously administered at field airports through NHTP. Among the potential efficiencies and improvements cited by TSA are the following:

Increased consistency and standardization. According to TSA documents and OTD headquarters officials, centralized training provides a standardized curriculum that serves as a foundation for the skills, knowledge, and equipment used across an array of different airport environments. The TSA business case and other supporting analyses note that such an approach offers greater consistency of training delivery and a better mechanism for developing, delivering, and evaluating course content.
Equipment availability and expanded course content. TSO Basic Training includes a full suite of dedicated checkpoint equipment and x-ray image simulators for students to practice learned skills, eliminating the challenge of finding available equipment and training times in a busy airport environment (see figure 2). Officials told us that being more familiar with the screening equipment increases TSOs' readiness for on-the-job training when they return to their home airports. Initial test results also indicate that participants trained at the Academy achieve higher pass rates on end-of-course assessments of x-ray image interpretation skills than those who received their initial training at their home airports. Specifically, according to TSA data, of the 5,877 test-takers who received training at TSO Basic Training in 2016, 91.5 percent passed the Image Interpretation Test on their first attempt. In contrast, 83.2 percent of the 1,458 test-takers who received training at local airports in 2016 passed the test on their first attempt. (A sketch of how this difference in pass rates might be checked for statistical significance follows this list of benefits.) In addition, the Academy curriculum incorporates new learning opportunities, including a live demonstration of improvised explosive devices and an active shooter drill, both of which would be difficult to reproduce within the airport environment, according to TSA officials.
Dedicated faculty and instructor development. TSO Basic Training offers a dedicated faculty and support staff focused exclusively on training TSOs. According to TSA officials, before TSO Basic Training, training at individual airports was often conducted by TSOs for whom instruction was a collateral duty, whereas instructors at the Academy have full-time training responsibilities and enhanced opportunities to learn from each other, increase their professional training skills, and provide feedback on the delivery of course curriculum.
Centralized facility and shared logistics. By locating the TSA Academy at FLETC, TSA is able to take advantage of the services and logistical support that FLETC provides. Specifically, FLETC services and logistics include accommodations, meals, and transportation, thereby reducing the administrative demands on TSA personnel and allowing students a focused and efficient training experience. Additional efficiencies cited by TSA officials include lower overall costs for office space, janitorial services, and other operational costs because such costs are shared by the 96 agencies that use FLETC. According to TSA officials, conducting training at FLETC can also help TSA accommodate hiring surges and better augment future training, if needed. For example, TSA officials reported that the facility has surge capacity from its current capacity of 240 students up to 300 new students if sufficient instructors are available.
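On the pass-rate comparison noted above (91.5 percent of 5,877 Academy-trained test-takers versus 83.2 percent of 1,458 airport-trained test-takers), the following is a minimal sketch of a two-proportion z-test one could run on those figures. Neither GAO nor TSA reported such a test, and the pass counts below are approximations reconstructed from the reported percentages.

```python
# Illustrative two-proportion z-test on the reported pass rates; the
# counts are reconstructed from the reported percentages and totals,
# so results are approximate.
from statsmodels.stats.proportion import proportions_ztest

passes = [round(0.915 * 5877), round(0.832 * 1458)]  # approx. 5,377 and 1,213
totals = [5877, 1458]

z_stat, p_value = proportions_ztest(passes, totals)
print(f"z = {z_stat:.2f}, p = {p_value:.4g}")  # a small p suggests a real difference
```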
According to TSA documents and training officials, another key benefit of centralized training is the opportunity to enhance professionalism and help foster camaraderie and esprit de corps. TSA anticipates that centralized, standardized training will not only provide trainees with an increased focus on the TSA mission and operational environment, but can serve to instill a common culture and sense of belonging among the broader community of TSOs nationwide. In its business case, TSA notes that centralized training of new recruits is a common model employed by the armed forces and other federal law enforcement agencies within DHS, such as U.S. Customs and Border Protection and the U.S. Coast Guard. According to the business case, by bringing together newly hired TSOs from around the country, TSA also hopes to inspire in its trainees a singular identity and unity of purpose, which previous analyses generally found lacking as part of the decentralized training approach.
The business case also associates such increases in professionalism and esprit de corps with greater employee satisfaction and the potential for reduced attrition. Analysis conducted by TSA in 2017 provides some initial support for positive trends in these areas. For example, results of a 2017 TSA employee engagement survey indicated that respondents who attended TSO Basic Training reported higher scores in categories including Organizational Commitment, Job Satisfaction, and Overall Morale than respondents who did not attend. TSA also reported a 19 percent reduction in the attrition rate during the first 180 days of employment for those attending TSO Basic Training at the Academy in 2016 versus those who received their initial training at field airports through the New Hire Training Program.
Factors Considered in Updating the TSO Basic Training Curriculum Include Evolving Security Threats and Input from Course Participants
OTD Uses Information from OSO to Update the TSO Basic Training Curriculum to Address Evolving Security Threats
OTD updates and modifies the TSO Basic Training curriculum based, in part, on regular communications from OSO, the office responsible for developing SOPs for screening operations and managing TSO performance. Officials from both offices told us that OSO provides information to OTD on changes to SOPs as soon as changes are made so they can update the TSO Basic Training curriculum. For instance, in 2017, when OSO began planning major changes to the SOPs, the office gave OTD information about the planned SOP revisions, as well as the airports where the new SOPs would be piloted. In response, OTD modified its curriculum and was able to provide revised training for new TSOs based at airports that were piloting the program, while providing TSOs at all other airports the prior version of training. OTD officials noted that in some cases TSA must quickly update SOPs to reflect imminent threats. According to officials, a plan is in place to make changes to TSO Basic Training curriculum in response to emerging or imminent threats, although such threats have not been experienced since the establishment of TSO Basic Training in 2016.
In addition to changes in SOPs, OSO officials indicated they may also change the timing of when TSOs employed by TSA attend TSO Basic Training. Specifically, officials told us that OSO plans to implement a new model for TSO Basic Training in which TSOs will attend TSO Basic Training 2 to 6 months after they are hired rather than as soon as is practical. According to TSA, the agency is pursuing this change to, among other things, implement a transparent career path for TSOs employed by TSA and to encourage and reward skill development. During the 2 to 6 months prior to attending TSO Basic Training, TSOs will perform checkpoint tasks that require training that can be delivered at the airport as soon as they are hired, such as checking passengers' travel documents and helping passengers move through the checkpoint. Once TSOs are able to perform these initial tasks, they will attend TSO Basic Training at the Academy. Officials told us they are preparing for the change by modifying the TSO Basic Training curriculum to eliminate subjects that will be covered at the airports and to emphasize skills that more experienced TSOs will need, such as performing physical searches of passengers. TSA plans to implement the revised model beginning in August 2018.
Finally, OTD receives information on TSO performance and uses that information to inform TSO Basic Training curriculum. For example, two TSA offices—OSO and the Office of Inspections—perform regular effectiveness testing of airport checkpoints through covert testing and share the results with OTD. After each covert testing event, each office conducts interviews with TSOs to determine the factors that contributed to their effectiveness at identifying prohibited items. Officials told us that OSO and OTD hold regular meetings to discuss the analyses of covert testing failures and ways in which training curriculum can be modified to address the reasons for the failures, which are then incorporated into the TSO Basic Training curriculum. Office of Inspections officials noted that they participated in the development of the TSO Basic Training curriculum and provide regular reports to OTD on covert testing results.
When Making Updates to TSO Basic Training Curriculum, OTD Considers Feedback from Instructors, Course Participants, and Contractors
OTD gathers input from TSO Basic Training participants, instructors, and contractors on ways to update the curriculum. For instance, TSO Basic Training instructors told us they submit “white paper proposals” to TSO Basic Training course managers detailing their suggested changes to the course. They can also provide feedback and suggestions during “train the trainer” sessions, in which all instructors participate when TSO Basic Training is updated. Instructors told us that all sessions include an opportunity for instructors to provide feedback after reviewing the new curriculum. Officials told us that they take instructors’ feedback into account when implementing new curriculum. For instance, officials told us that at the suggestion of instructors, they added time for discussion at the end of each checkpoint lab to help capture and share lessons learned.
OTD also collects feedback from TSOs who have participated in the course, both at the end of their two weeks at the Academy and several months after their completion of the course. At the end of TSO Basic Training, OTD collects feedback from participants through a survey with both multiple-choice and open-ended questions. The survey includes questions on the course curriculum and instructor performance. Officials told us that they regularly review the results of the survey and consider whether it is appropriate to address the feedback by modifying TSO Basic Training. For instance, the most frequently provided feedback on altering the curriculum was to increase the time spent in hands-on training using screening equipment in the Academy's simulated checkpoints. In response, OTD officials told us they added nearly 5 hours of hands-on training to the 80-hour program in addition to the 6 hours that had previously been a part of the curriculum.
In addition to collecting feedback from TSO Basic Training participants and instructors, TSA officials told us that TSA regularly uses a contractor to support the design and development of training courses and to assess existing courses, including TSO Basic Training. In 2016, the contractor conducted an evaluation of the instructional integrity of the TSO Basic Training curriculum. The resulting report made a number of recommendations to improve the curriculum and structure of TSO Basic Training, many of which OTD has implemented. For instance, the contractor recommended that TSO Basic Training include more opportunities for review of the material to reinforce TSOs’ understanding. In response, OTD implemented a review session at the end of the first week of training so TSOs have an opportunity to clarify information presented over the first week.
TSA Has Made Progress in Implementing a Training Evaluation Model but Has Not Established Specific Goals and Performance Measures to Assess TSO Basic Training
TSA has implemented three of the four levels of the Kirkpatrick Model, a training evaluation model that, in part, helps TSA collect feedback from course participants and evaluate the impact on individual development. However, the agency has not developed goals for the program or related performance metrics to demonstrate progress toward goals.
TSA Has Made Progress in Implementing the Kirkpatrick Model to Evaluate Its TSO Basic Training Program
To evaluate the TSO Basic Training program, TSA uses the Kirkpatrick Model, which is a commonly accepted training evaluation model endorsed by the Office of Personnel Management and used throughout the federal government. The Kirkpatrick Model consists of a four-level approach for soliciting feedback from training course participants and evaluating the impact the training had on individual development, among other things. To date, TSA has implemented the first three levels of the model by administering (1) course surveys to participants at the end of the training program; (2) an end-of-course written exam and an x-ray image interpretation test to assess achievement of learning objectives; and (3) course surveys to participants and their supervisors several months after completing training to collect information regarding how the training affected behavior or performance on the job. OTD officials told us that they have not yet implemented Level 4 of the model because they do not believe they have enough data. Table 2 provides a description of what each level within the Kirkpatrick model is to accomplish and TSA’s progress in implementing the levels.
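Since table 2 is not reproduced here, the following minimal sketch lays out the four levels using the standard Kirkpatrick labels alongside the implementation status described above.

```python
# The four Kirkpatrick levels (standard labels) mapped to TSA's reported
# implementation for TSO Basic Training, as described in the text above.
kirkpatrick_levels = {
    1: ("Reaction", "End-of-course participant surveys", "implemented"),
    2: ("Learning", "Written exam and x-ray image interpretation test", "implemented"),
    3: ("Behavior", "Follow-up surveys of participants and their supervisors", "implemented"),
    4: ("Results", "Impact of training on organizational results", "not yet implemented"),
}

for level, (label, instrument, status) in kirkpatrick_levels.items():
    print(f"Level {level} ({label}): {instrument} -- {status}")
```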
TSA Has Not Yet Developed Goals and Performance Measures for TSO Basic Training
While TSA reported potential benefits of TSO Basic Training in its business case and implemented the Kirkpatrick Model to assess training, it has not yet identified specific goals that the TSO Basic Training program is expected to achieve, nor has it developed performance measures to evaluate progress toward goals. The business case and the Kirkpatrick Model are positive steps and document certain benefits of TSO Basic Training, but without a set of specific training goals and associated performance measures for the program, TSA is not able to fully evaluate the program's effectiveness and ensure accountability for results. Such goals are important to help ensure alignment with course objectives and the end-of-course examinations administered as part of Level 2 of the Kirkpatrick Model. In addition, without the development of specific goals, it is not possible to determine what types of performance measures should be used to help show progress toward such goals. For example, in its business case, TSA identified improved employee morale as one of the anticipated benefits of TSO Basic Training. However, there are no goals or metrics specifically related to this benefit. If TSA believes improved morale should be something for which TSO Basic Training aims, goals and measures could help it demonstrate the extent to which this benefit is being realized by the training program.
Leading management practices for training evaluation identify the importance of agencies developing and regularly using performance measures to ensure accountability and assess progress toward achieving results that are aligned with the agency's mission and goals. In addition, these practices highlight the importance of agencies having clear goals about what the training or development program is expected to achieve as a precursor to developing such measures. When designed effectively, performance measures help decision makers (1) determine the contributions that training makes to improving results, (2) identify potential gaps in performance, and (3) determine where to focus resources to improve results. In particular, incorporating valid measures of effectiveness into training programs can enable an organization to better ensure that desired changes occur in trainees' skills, knowledge, and abilities.
According to OTD officials, the TSO Basic Training program was established on an accelerated schedule in late 2015 as one of multiple efforts to improve training delivery and help enhance screener effectiveness. Officials stated that the program is still relatively new and they plan to collect several additional years of data on system-wide screening performance before conducting efforts to further evaluate the impact of the training. They reported that the lack of performance measures is also due to the inherent difficulty of tying specific training initiatives to broader organizational results. Officials told us that once TSOs return to their home airports after TSO Basic Training, they are exposed to additional on-the-job training and differing airport cultures, which make it difficult to isolate the effects of TSO Basic Training. However, senior training officials agreed that establishing applicable goals and performance measures for the TSO Basic Training program would be helpful to support ongoing efforts and better measure program progress.
We recognize that developing metrics to assess the performance of training programs on broad organizational results can be challenging. However, there are additional opportunities to develop program goals and performance measures as part of the training evaluation efforts at the Academy to help ensure that participants can demonstrate proficiency in performing core technical skills before returning to their home airports. We believe that developing goals for a training program does not need to wait for years of data. Goals reflect the desired results, connected to an agency's mission, that a program plans to achieve. In the more than 2 years since TSO Basic Training began, TSA has not stated what results the program is to achieve.
TSOs provide a crucial function to help ensure passenger safety, and it is important to have goals aligned with this mission, as well as associated measures of the effectiveness of the training they receive at TSO Basic Training, to determine the extent to which they are able to fulfill their important role. As noted by leading management practices for training evaluation, agencies need credible information to demonstrate that a training program is contributing to a goal, and they can develop such data through a mix of quantitative and qualitative indicators. We found that options for assessing the effectiveness of TSO Basic Training could include measuring TSO performance by leveraging data from end-of-course examinations, such as the x-ray image interpretation test, and introducing similar additional tests or mechanisms to further evaluate trainees' knowledge and skills in effective screening procedures. Additional options could include measuring employee morale as indicated by TSOs on their Kirkpatrick Level 1 surveys at the completion of the training program, and comparing these results against applicable program goals for employee morale that TSA could establish for TSO Basic Training. By identifying annual goals and measures for TSO Basic Training, TSA will also be better positioned to move forward with Level 4 of the Kirkpatrick Model to evaluate the impact of training on broader organizational results. Given that over $50 million has been obligated to set up and operate the TSO Basic Training program to date, it is important that TSA incorporate annual goals and measures into the training program to be better informed when making training decisions and to help hold itself accountable for training results on a regular basis.
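As an illustration of how such measures might work in practice, the brief sketch below computes two of the candidate measures just described, an x-ray image interpretation pass rate and an average morale rating from Level 1 surveys, and compares each against an example goal. Everything in it is hypothetical: the thresholds, scores, and function names are assumptions for illustration, not TSA data or policy.

```python
# Hypothetical sketch: compare candidate TSO Basic Training measures
# against example goals. All thresholds and data are illustrative.

def pass_rate(scores, passing_score=70):
    """Share of trainees meeting a passing score on an end-of-course test."""
    return sum(s >= passing_score for s in scores) / len(scores)

def average_rating(ratings):
    """Mean of Level 1 survey morale ratings (e.g., on a 1-to-5 scale)."""
    return sum(ratings) / len(ratings)

# Made-up data for one training cohort.
xray_scores = [88, 74, 91, 69, 83, 95, 77]
morale_ratings = [4, 5, 3, 4, 4, 5, 4]

# Example goals an agency could set for the program.
GOALS = {"xray_pass_rate": 0.90, "avg_morale": 4.0}

results = {
    "xray_pass_rate": pass_rate(xray_scores),
    "avg_morale": average_rating(morale_ratings),
}

for measure, goal in GOALS.items():
    met = "met" if results[measure] >= goal else "not met"
    print(f"{measure}: {results[measure]:.2f} vs goal {goal:.2f} -> {met}")
```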
Conclusions
TSOs perform a critical role in securing our nation’s commercial aviation system and often represent the most visible face of TSA to the public. For this reason, new hire training is an integral function to ensure that TSOs are obtaining the foundational skills and knowledge to help prepare them to perform their jobs effectively. In 2016, TSA initiated a major change to its training approach for new hires to help ensure a consistent and standardized training experience and promote enhanced camaraderie and esprit de corps. Although TSA has implemented a framework to assess participant reactions to the training and their knowledge of course content, it has not yet established goals for the TSO Basic Training program or measures to gauge effectiveness of the training TSOs receive to determine the extent to which they can fulfill their crucial role in ensuring passenger safety. By taking these steps, TSA will be better positioned to determine if the program is improving trainees’ skills, knowledge, and abilities and whether additional skill development, or other training modifications, may be needed.
Recommendation for Executive Action
We are making one recommendation to the Administrator of TSA. Specifically, the Administrator of TSA should establish specific goals for the TSO Basic Training program and develop performance measures that can be used to assess if the program is achieving desired outcomes and help ensure accountability for training results on a regular basis. (Recommendation 1)
Agency Comments and Our Evaluation
We provided a draft of this report to DHS for review and comment. DHS provided written comments, which are reprinted in appendix II, and technical comments, which we incorporated as appropriate. DHS agreed with our recommendation that TSA establish specific goals for the TSO Basic Training program and develop performance measures that can be used to assess if the program is achieving desired outcomes. In addition, in its written comments DHS outlined steps to address this recommendation.
With regard to performance goals, TSA plans to establish broad goals that include successful screening and improved morale, among others. The stated goals are an appropriate response to our recommendation that TSA develop goals specifically for TSO Basic Training. These actions, if implemented effectively, should address the intent of our recommendation.
With regard to developing performance measures that can be used to assess program outcomes, TSA intends to leverage existing mechanisms through its Kirkpatrick Model evaluations to measure program success. As we noted in the report, implementing the first three levels of the Kirkpatrick Model is a positive step that documents certain benefits of TSO Basic Training, but it does not address specific goals or performance measures. Kirkpatrick Model Level 2 evaluations include proficiency exams administered prior to TSOs' departure from the Academy. Data from these evaluations, in conjunction with specific goals, may provide quantifiable metrics that could inform further refinement of the TSO Basic Training curriculum. However, the surveys being used by TSA for Level 3 of the Kirkpatrick Model do not include metrics that would allow TSA to measure the program's effectiveness and ensure accountability toward results. Specifically, the surveys do not demonstrate whether TSO Basic Training is reaching goals related to successful screening or improved morale because survey results are influenced by factors outside of the training program. We will continue to monitor TSA's efforts in this area.
We are sending copies of this report to the appropriate congressional committees, the Secretary of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (206) 287-4804 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.
Appendix I: Comparison of New Hire Training Program and Transportation Security Officer (TSO) Basic Training
In 2016, the Transportation Security Administration (TSA) established the TSO Basic Training program at the TSA Academy, located at the Federal Law Enforcement Training Centers in Glynco, Georgia. TSO Basic Training allows new TSOs to be trained at a dedicated facility with simulated checkpoints. Previously, TSOs’ initial training was delivered through the New Hire Training Program at or near their home airports, at which they were able to practice using checkpoint equipment only when the equipment was not being used, such as after hours. For further comparison of the two programs, see Table 3.
Appendix II: Comments from the Department of Homeland Security
Appendix III: GAO Contact and Staff Acknowledgments
GAO Contact
Acknowledgments
In addition to the contact named above, Dawn Locke, Assistant Director; Miriam Hill, Analyst in Charge; and Ryan Lambert made key contributions to this report. Also contributing to the report were Elizabeth Dretsch, Eric Hauswirth, Susan Hsu, Heidi Nielson, and Adam Vogt.

Why GAO Did This Study
TSA is responsible for ensuring that all airline passengers and their property are screened for items that could pose a threat to airplanes and passengers at 440 airports across the United States. Since 2016, TSO Basic Training—initial training for newly hired TSOs, including both TSA-employed and private screeners—has consisted of an intensive two-week course at the TSA Academy located at FLETC. TSA has obligated about $53 million for the program from its inception through March 2018. In 2015 and 2017, the Department of Homeland Security Inspector General raised questions about the effectiveness of checkpoint screening, which prompted concerns about training.
GAO was asked to review TSA's training of new TSOs. This report (1) describes the reasons why TSA established the TSO Basic Training program; (2) discusses factors OTD considers when updating TSO Basic Training curriculum; and (3) assesses the extent to which TSA evaluates its TSO Basic Training program. GAO reviewed documents on the development and modification of TSO Basic Training curriculum; visited FLETC; interviewed TSA officials; and compared TSA's program evaluation to leading practices.
What GAO Found
The Transportation Security Administration (TSA) established the Transportation Security Officer (TSO) Basic Training program at the TSA Academy at the Federal Law Enforcement Training Centers (FLETC) in Glynco, Georgia, to obtain benefits from centralized training. Prior to the Basic Training program, TSO training was conducted at individual airports, often by TSOs for whom instruction was a collateral duty. According to a business case developed by TSA for Congress in 2017 and TSA officials, TSA expected implementation of the TSO Basic Training program to provide efficiencies in the delivery of new-hire training for TSOs and to enhance the professionalism and morale of newly hired screeners. For example, GAO observed that TSO Basic Training facilities have airport checkpoint equipment and x-ray image simulators for students to practice skills, eliminating the challenge of finding available equipment and training times in a busy airport environment. According to program officials, centralized training also provides trainees with an increased focus on the TSA mission and instills a common culture among TSOs.
TSA's Office of Training and Development (OTD) updates and modifies the TSO Basic Training curriculum in response to evolving security threats and evaluations of effectiveness, among other factors. For example, OTD holds regular meetings with TSA's Office of Security Operations—the office responsible for managing TSO performance—to discuss issues such as imminent threats. The offices also discuss analyses of TSO effectiveness identified through covert tests, in which role players attempt to pass threat objects—such as knives, guns, or simulated improvised explosive devices—through the screening checkpoints. The two offices identify ways to address issues identified in covert testing, which are then incorporated into TSO Basic Training. OTD also gathers input from TSO Basic Training instructors and from participants to adjust training curriculum.
TSA has implemented a training evaluation model but has not yet established specific program goals and performance measures to assess TSO Basic Training. The Kirkpatrick Model used by TSA is a commonly accepted training evaluation model endorsed by the Office of Personnel Management and used throughout the federal government. While TSA reported expected benefits of TSO Basic Training in its business case and implemented the Kirkpatrick Model to begin assessing training, it has not yet identified specific goals that the program is expected to achieve, nor has it developed applicable performance measures to evaluate progress toward goals, as called for by leading management practices for training evaluation. TSA officials told GAO that TSO Basic Training is a relatively new program and that they planned to collect more data on TSO screening performance before further evaluating the potential impacts of the training program. However, TSO Basic Training serves as the foundation for TSOs to learn core skills and procedures, and it is important to establish goals and measures to better assess the effectiveness of the training they receive. This will help TSA determine the extent to which TSOs are able to fulfill their important role in ensuring passenger safety while also showing results for the funds spent on such training each year.
What GAO Recommends
GAO recommends that TSA establish specific goals and performance measures for the TSO Basic Training program. TSA concurred with the recommendation. |
Background
While IT investments have the potential to improve lives and organizations, federally funded IT projects can—and, too often, have—become risky, costly, and unproductive mistakes. We have previously reported that the federal government has spent billions of dollars on failed or troubled IT investments, such as the following:

The Office of Personnel Management's (OPM) Retirement Systems Modernization program, which was canceled in February 2011, after spending approximately $231 million on the agency's third attempt to automate the processing of federal employee retirement claims.

The United States Coast Guard's effort, initiated in 2010, to replace its aging electronic health records system, which was discontinued in October 2015 after spending nearly $67 million. As a result, the Coast Guard currently has a manual, paper-based health records management process.

The tri-agency National Polar-orbiting Operational Environmental Satellite System, which was halted in February 2010 by the White House's Office of Science and Technology Policy after the program spent 16 years and almost $5 billion.

The Department of Veterans Affairs' (VA) Scheduling Replacement Project, which was terminated in September 2009 after spending an estimated $127 million over 9 years.

The Farm Service Agency's Modernize and Innovate the Delivery of Agricultural Systems program, which was halted in July 2014 after spending $423 million to modernize IT systems over 10 years.

The Department of Health and Human Services' (HHS) Healthcare.gov website and its supporting systems, which were to facilitate the establishment of a health insurance marketplace by January 2014, but which encountered significant cost increases, schedule slips, and delayed functionality.
These failed or troubled projects often suffered from a lack of disciplined and effective management, such as project planning, requirements definition, and program oversight and governance. In many instances, agencies had not consistently applied best practices that are critical to successfully acquiring IT investments.
To help address these ongoing challenges, in February 2015, we added improving the management of IT acquisitions and operations to our list of high-risk areas for the federal government. This area highlighted several critical IT initiatives in need of additional congressional oversight, including (1) reviews of troubled projects; (2) efforts to increase the use of incremental development; (3) efforts to provide transparency relative to the cost, schedule, and risk levels for major IT investments; (4) reviews of agencies’ operational investments; (5) data center consolidation; and (6) efforts to streamline agencies’ portfolios of IT investments. We noted that implementation of these initiatives had been inconsistent and more work remained to demonstrate progress in achieving IT acquisitions and operations outcomes.
In our February 2015 high-risk report, we also identified actions that OMB and federal agencies needed to take to make progress in this area. These included implementing FITARA and at least 80 percent of our recommendations related to the management of IT acquisitions and operations within 4 years. Specifically, between fiscal years 2010 and 2015, we made 803 recommendations to OMB and federal agencies to address shortcomings in IT acquisitions and operations, including many to improve the implementation of the previously mentioned six critical IT initiatives and other government-wide, cross-cutting efforts.
In February 2017, we issued an update to our high-risk series and reported that, while progress had been made in improving the management of IT acquisitions and operations, significant work still remained to be completed. For example, as of May 2017, OMB and federal agencies had fully implemented 380 (or about 47 percent) of the 803 recommendations. Nevertheless, in fiscal year 2016, we made 202 new recommendations, thus further reinforcing the need for OMB and agencies to address the shortcomings in IT acquisitions and operations. Also, beyond addressing our prior recommendations, our 2017 high-risk update noted the importance of OMB and federal agencies continuing to expeditiously implement the requirements of FITARA.
Agencies Are to Follow Federal Requirements for Acquisitions
The Federal Acquisition Regulation (FAR) is the primary regulation for use by federal executive agencies in their acquisition of supplies and services with appropriated funds. The FAR requires agencies to perform planning for all acquisitions. Acquisition planning begins when an agency need is identified and includes developing requirements and creating written acquisition plans. A detailed acquisition plan must address all of the technical, business, management, and other significant considerations that will control the acquisition. It should include, among other things, a statement of need, cost, a plan of action, and milestones. The FAR is less specific on the requirements for an acquisition strategy, but it states that acquisition planning should include developing the overall strategy for managing the acquisition.
Once a contract is awarded, the awarding agency must enter certain information into the Federal Procurement Data System-Next Generation, the federal government's database that captures information on contract awards and obligations, including the vendor and the amount obligated, and that serves as the primary source for other contracting data systems, such as USAspending.gov. Further, agencies must select a product and service code that represents the predominant product or service being purchased. Product and service codes are used to describe and identify products, services, and research and development spending within the system.
In an effort to eliminate redundancies and increase efficiencies in federal acquisition, in September 2015, the Category Management Leadership Council and OMB developed a government-wide category structure to support category management implementation across the federal government. The Council and OMB reviewed the product and service codes and grouped them into 19 individual spend categories, including IT. See appendix II for a list of the 79 IT-related product and service codes.
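To illustrate how this category structure can be applied in an analysis, the sketch below groups contract obligations by whether their product and service code falls in the IT spend category. The records and dollar amounts are made up, and the small set of codes shown is only an example stand-in for the full list of 79 IT-related codes.

```python
# Hypothetical sketch: tally contract obligations by spend category using
# product and service codes (PSCs). The codes and records shown here are
# illustrative, not the Council's full IT category.
IT_PSC_CODES = {"D302", "D307", "D399", "7030"}

contracts = [
    {"id": "C-001", "psc": "D302", "obligated": 1_200_000},
    {"id": "C-002", "psc": "R425", "obligated": 450_000},   # non-IT code
    {"id": "C-003", "psc": "7030", "obligated": 80_000},
]

it_total = sum(c["obligated"] for c in contracts if c["psc"] in IT_PSC_CODES)
other_total = sum(c["obligated"] for c in contracts if c["psc"] not in IT_PSC_CODES)

print(f"IT-coded obligations:     ${it_total:,}")
print(f"Non-IT-coded obligations: ${other_total:,}")
```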
Federal Law Establishes Agency IT Management Responsibilities
Over the last three decades, Congress has enacted several laws to help federal agencies improve the management of IT investments. For example, the Clinger-Cohen Act of 1996 requires agency heads to appoint CIOs and specifies many of their responsibilities with regard to IT management. Among other things, CIOs are responsible for implementing and enforcing applicable government-wide and agency IT management principles, standards, and guidelines; assuming responsibility and accountability for IT investments; and monitoring the performance of IT programs and advising the agency head whether to continue, modify, or terminate such programs. The Clinger-Cohen Act, as amended, also defines IT as: any equipment or interconnected system or subsystem of equipment, used in the automatic acquisition, storage, analysis, evaluation, manipulation, management, movement, control, display, switching, interchange, transmission, or reception of data or information by the agency or a contractor under a contract with the agency.
As previously mentioned, recognizing the severity of issues related to the government-wide management of IT, Congress enacted FITARA in December 2014. The law includes provisions related to seven areas at covered agencies:
Agency CIO authority enhancements. CIOs at agencies are required to (1) approve the IT budget requests of their respective agencies, (2) certify that OMB's incremental development guidance is being adequately implemented for IT investments, (3) approve the appointment of other agency employees with the title of CIO, and (4) review and approve contracts for IT. With regard to the review of IT contracts, FITARA requires that agency CIOs review and approve IT contracts prior to award, unless a contract is associated with a non-major investment. When the contract is associated with a non-major investment, the CIO may delegate the review and approval duties to an official who reports directly to the CIO. Alternatively, the law states that an agency may use its governance processes to approve any IT contract, as long as the agency CIO is a full participant in those processes.
Federal data center consolidation initiative. Agencies are required to provide OMB with a data center inventory, a strategy for consolidating and optimizing the data centers (to include planned cost savings), and quarterly updates on progress made. The law also requires OMB to develop a goal for how much is to be saved through this initiative, and provide annual reports on cost savings achieved.
Enhanced transparency and improved risk management. OMB and agencies are to make detailed information on federal IT investments publicly available, and agency CIOs are to categorize their investments by level of risk. In addition, in the case of major IT investments rated as high risk for 4 consecutive quarters, the law requires that the agency CIO and the investment’s program manager conduct a review aimed at identifying and addressing the causes of the risk.
Portfolio review. Agencies are to annually review IT investment portfolios in order to, among other things, increase efficiency and effectiveness and identify potential waste and duplication. In establishing the process associated with such portfolio reviews, the law requires OMB to develop standardized performance metrics, to include cost savings, and to submit quarterly reports to Congress on cost savings.
Expansion of training and use of IT acquisition cadres. Agencies are to update their acquisition human capital plans to address supporting the timely and effective acquisition of IT. In doing so, the law calls for agencies to consider, among other things, establishing IT acquisition cadres or developing agreements with other agencies that have such cadres.
Government-wide software purchasing program. The General Services Administration (GSA) is to develop a strategic sourcing initiative to enhance government-wide acquisition and management of software. In doing so, the law requires that, to the maximum extent practicable, GSA should allow for the purchase of a software license agreement that is available for use by all executive branch agencies as a single user.
Maximizing the benefit of the federal strategic sourcing initiative. FITARA requires that OMB issue regulations for federal agencies that do not use the federal strategic sourcing initiative to purchase services and supplies that are offered by this initiative. The regulations are to include a requirement for agencies to analyze the comparative value between what is to be purchased and what the strategic sourcing initiative offers.
OMB Established Guidance for Agencies to Implement FITARA
In June 2015, OMB released guidance describing how agencies are to implement FITARA. The guidance emphasizes the need for CIOs to have full accountability for IT acquisition and management decisions, and gives agencies considerable flexibility in making those decisions. Among other things, the guidance is intended to:

assist agencies in aligning their IT resources with agency missions, goals, and requirements;

establish government-wide IT management controls that will meet the law's requirements, while providing agencies with flexibility to adapt to agency processes and mission requirements;

clarify the CIO's role and strengthen the relationship between department CIOs and bureau or component CIOs; and

strengthen CIO accountability for IT cost, schedule, performance, and security.
With regard to CIOs’ review and approval of IT contracts, OMB’s guidance expands upon FITARA in a number of ways. Specifically, according to the guidance:
CIOs may review and approve IT acquisition strategies and plans, rather than individual IT contracts;
CIOs can designate other agency officials to act as their representatives, but the CIOs must retain accountability;
Chief Acquisition Officers (CAO) are responsible for ensuring that all IT contract actions are consistent with CIO-approved acquisition strategies and plans; and
CAOs are to indicate to the CIOs when planned acquisition strategies and acquisition plans include IT.
Agencies Identified $14.7 Billion in IT Obligations, but Did Not Identify an Additional $4.5 Billion
OMB’s FITARA implementation guidance requires agencies’ CAOs to indicate to CIOs when planned acquisition strategies and acquisition plans include IT. Given the Category Management Leadership Council and OMB’s categorization of IT product and service codes, CAOs should be identifying the obligations that have IT-related codes.
The 22 selected agencies identified 78,249 IT-related contracts, to which they obligated approximately $14.7 billion in fiscal year 2016. Of that amount, approximately $14 billion was categorized as IT-related, consistent with the Category Management Leadership Council and OMB’s product and service codes, and approximately $626 million was categorized under other, non-IT codes.
The $626 million in obligations with non-IT codes could contain embedded IT or be associated with IT programs. For example, the agencies reported IT-related acquisitions categorized under such non-IT codes as IT/telecommunications training, data analysis, and research and development. Three agencies accounted for most of these non-IT obligations: the Department of Veterans Affairs (VA) accounted for $220 million, the Environmental Protection Agency (EPA) accounted for $156 million, and the Department of Labor (Labor) accounted for $105 million.
However, in addition to the obligations that agencies reported to us, we identified 31,493 additional contracts at 21 agencies with IT-related product and service codes. The associated agencies obligated approximately $4.5 billion to these contracts, raising the total amount obligated to IT contracts in fiscal year 2016 to at least approximately $19.2 billion. Figure 1 reflects the obligations agencies reported to us relative to the obligations we identified.
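The analysis underlying these figures can be thought of as a set comparison: contracts that carry IT-related product and service codes but do not appear on an agency's self-identified IT contract list represent potentially unidentified IT obligations. The minimal sketch below shows the basic idea; the contract identifiers and dollar amounts are hypothetical, not the actual data behind our findings.

```python
# Hypothetical sketch of the comparison: contracts flagged by IT-related
# product and service codes but absent from an agency's self-reported
# IT contract list represent potentially unidentified IT obligations.

agency_reported_ids = {"C-001", "C-003"}   # contracts the agency called IT

psc_flagged = {                            # contracts with IT-related PSCs
    "C-001": 1_200_000,
    "C-003": 80_000,
    "C-007": 950_000,                      # not on the agency's list
    "C-009": 310_000,                      # not on the agency's list
}

unidentified = {cid: amt for cid, amt in psc_flagged.items()
                if cid not in agency_reported_ids}

print(f"Unidentified IT contracts: {sorted(unidentified)}")
print(f"Unidentified obligations:  ${sum(unidentified.values()):,}")
```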
The percentage of additional IT contract obligations that we identified varied among the selected agencies. For instance, the Department of State (State) did not identify 1 percent of its IT contract obligations. Conversely, eight agencies—the Departments of the Interior (Interior), Transportation (Transportation), and the Treasury (Treasury), as well as the National Science Foundation (NSF), the U.S. Agency for International Development (USAID), HHS, GSA, and OPM—did not identify over 40 percent of their IT contract obligations. Figure 2 reflects the contract obligations that the selected agencies reported to us (both with IT-related codes and those with non-IT codes) relative to the obligations we identified. For additional information about the IT obligations identified by these agencies, see appendix III.
Agencies offered various reasons for why they had not identified the approximately $4.5 billion in IT obligations. For example, officials from OPM and NSF stated that their agencies only identified new IT contracts and did not include contract modifications in their identified IT obligations, making their submissions much smaller. NSF also noted that it only identified IT contracts over $150,000. In addition, GSA and Transportation officials stated that at least one of the Category Management Leadership Council’s IT product and service codes should not be considered IT. For instance, an official in GSA’s Vendor Management Office stated that contracts using a product and service code for miscellaneous maintenance, repair, and rebuilding should not be categorized as IT. Likewise, Transportation officials provided examples of contracts that the agency did not consider being IT-related, even though they were categorized under IT product and service codes for program review or development services. In addition, Transportation and USAID officials stated that they did not use the complete list of IT product and service codes in their identification efforts. A Treasury official in the Office of the CIO stated that the department focused on codes that were the most important.
We agree that the Council’s IT product and service codes could include contracts that are not IT. Further, as previously discussed, IT is included in product and service codes that the Council did not identify as IT. Nonetheless, the Council has provided a valuable service in developing specific categories from which agencies can select in identifying IT. To the extent that agencies have concerns about specific categories, they could raise them to the Council.
In addition, the majority of the selected agencies that did not identify the $4.5 billion in IT obligations also did not follow OMB’s guidance to have the CAO identify all IT acquisitions for CIO review and approval. As those tasked with monitoring their respective agencies’ acquisition activities, the offices of the CAOs are in a unique position to identify prospective IT acquisitions to the CIOs. Of the 21 selected agencies that did not identify the approximately $4.5 billion in IT obligations, 8 involved the acquisition offices in the identification of their IT acquisitions. For example, OPM’s process followed OMB’s guidance by directly involving its senior procurement executive in the identification of the acquisitions.
Conversely, the other 14 agencies did not follow OMB’s guidance to have a process in which the acquisition offices identified, or helped to identify, IT acquisitions for CIO review. Among these agencies, for example, EPA officials indicated that program office officials are responsible for identifying IT requirements and obtaining the appropriate approvals. EPA’s process does not require acquisition office participation. Instead, the program office officials work with IT officials to determine if the contract is IT-related and subject to the IT acquisition approval policy.
In addition, 7 agencies reported that they rely on the requesting program offices to self-identify whether their acquisitions are IT-related. Table 1 summarizes the officials responsible for the identification of IT acquisitions at the selected agencies.
We have previously reported on the importance of developing and issuing policies or supporting guidance in order to successfully implement processes and achieve related objectives. In recognition of the importance of establishing guidance to assist agency officials in identifying IT, 14 of the 22 selected agencies issued such guidance.
However, 7 agencies did not. Specifically, the Departments of Agriculture (USDA), Energy (Energy), Justice (Justice), Labor, and Transportation; the National Aeronautics and Space Administration (NASA); and the Social Security Administration (SSA) did not establish guidance regarding the identification of IT-related acquisitions. For instance, officials in Justice’s Office of the CIO stated that the agency does not follow a prescribed process to determine which acquisitions are IT-related and does not use guidance or checklists to aid with the identification. One other agency, Interior, had established draft guidance to assist officials when identifying IT; however, the agency did not identify a schedule for finalizing the draft guidance.
Until agencies involve the acquisition office in their IT identification processes and establish and effectively implement supporting guidance, they will likely not be able to ensure that all IT acquisitions are identified. As a result, agencies risk not having appropriate oversight of IT worth billions of dollars.
Most Agency CIOs Are Not Reviewing and Approving IT Acquisitions in Accordance with OMB’s Requirements
FITARA and OMB’s associated implementation guidance require major civilian agency CIOs to review and approve acquisitions of IT either directly, or through the agency’s governance processes. In particular, OMB’s guidance states that agencies shall not approve any acquisition plan or strategy that includes IT without the agency CIO’s review and approval.
OMB’s guidance also allows the CIO to delegate these responsibilities to other agency officials to act as the CIO’s representative; however, staff in OMB’s Office of the Federal CIO noted that these assignments need to be approved by OMB. Alternatively, FITARA and OMB’s guidance allow agencies to use IT governance processes to conduct these reviews and approvals as long as the CIO is a full participant in the process.
Most of the processes at the 22 selected agencies do not fully satisfy OMB’s requirements that the CIO review and approve IT acquisition plans or strategies (or that the CIO participate in a governance process that reviews and approves IT acquisition plans and strategies). Specifically, 8 agencies’ processes fully satisfy OMB’s requirements, while 14 of the agencies’ processes do not fully satisfy the requirements. Of these, 8 agencies partially satisfy the requirements and 6 do not satisfy the requirements. For example,
NSF fully satisfies OMB’s requirement by requiring that the CIO review and approve each IT acquisition plan. Similarly, SBA requires the CIO to review and approve each IT acquisition plan over the FAR’s simplified acquisition threshold.
HUD partially satisfies OMB’s requirements in that its process only requires the office of the CIO to review a subset of IT acquisitions (those over $500,000). In addition, the HUD CIO has delegated the approval authority to the Deputy CIO and others within the Office of the CIO, but this delegation has not been approved by OMB.
VA does not yet have a process in place that satisfies OMB’s requirements, but officials in VA’s Office of Information and Technology stated that they are currently developing processes and procedures necessary to implement FITARA accountability and responsibilities for IT acquisitions. While the agency did not submit a documented time frame for its plans, VA officials stated that they would like to implement the new process by the second quarter of fiscal year 2018.
Table 2 summarizes the extent to which the selected agencies’ processes satisfy OMB’s requirements for the CIO to review and approve IT acquisition plans. Appendix IV provides additional details about the agencies’ processes that are used to review and approve IT acquisitions.
Of the 96 randomly selected IT contracts at 10 agencies, the acquisitions associated with only 11 contracts had been reviewed and approved as required by OMB. The acquisitions associated with the other 85 contracts, with a total possible value of approximately $23.8 billion, did not receive the appropriate level of review. Further, despite having CIO review and approval processes in place that fully or partially satisfied OMB's requirements, four agencies (the Department of Commerce (Commerce), HHS, Justice, and SSA) did not consistently ensure that the CIO or a designee reviewed and approved the acquisition plan or strategy.
Table 3 summarizes the number and total possible value of IT contracts that we reviewed for consistency with OMB’s requirements. Appendix V provides more details on the selected IT acquisitions and the CIO approval of them.
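Conceptually, this contract-level review amounts to classifying each sampled acquisition by the kind of approval evidence found. The sketch below tallies outcomes in that way; the category labels mirror the factors discussed below, and the counts and names are illustrative rather than the actual sample data.

```python
# Hypothetical sketch: classify sampled IT acquisitions by the approval
# evidence found. Categories mirror the factors discussed below; the
# data is illustrative, not the actual GAO sample.
from collections import Counter

sample = [
    "approved_per_omb", "non_compliant_process", "improper_delegation",
    "other_documentation", "approved_per_omb", "undocumented_approval",
    "non_compliant_process", "other_documentation",
]

tally = Counter(sample)
compliant = tally["approved_per_omb"]
print(f"Reviewed and approved as required: {compliant} of {len(sample)}")
for category, count in sorted(tally.items()):
    if category != "approved_per_omb":
        print(f"  {category}: {count}")
```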
Four key factors contributed to the acquisitions associated with the 85 contracts not being reviewed and approved by the CIOs in accordance with OMB’s requirements:
Non-compliant processes. As previously mentioned, the processes at 7 of the 10 agencies did not fully satisfy OMB's requirements that the CIO review and approve IT acquisition plans and strategies. Four agencies reported that they were following their own agency processes, which we determined do not fully align with OMB's requirements. For example, NASA officials responsible for information regarding one of the selected contracts stated that the CIO only provides technical guidance and concurrence on the acquisition plan and does not approve the acquisition plan. This is not consistent with OMB's requirement that the CIO or designee review and approve IT acquisition plans.
In addition, for 16 contracts, the respective agencies stated that there were no acquisition plans associated with the particular acquisitions. For example, a director in USDA’s Forest Service’s acquisition office issued waivers for 2 acquisitions, making them exempt from needing acquisition plans. Thus, the CIO did not review and approve acquisition plans for those contracts. As noted earlier, OMB’s guidance states that if there is not an acquisition plan or strategy, the contract action itself should be reviewed and approved. However, in all 16 cases, the associated agencies’ CIOs did not undertake such reviews.
Improper delegation. We identified 16 instances where agencies allowed CIOs to delegate their review to levels lower than agency policy or OMB allows. For example, Treasury’s CIO delegated contract approval to the component CIOs—one of whom further delegated this approval based on monetary thresholds to a variety of other officials. According to the component’s policy, one of the selected acquisitions, worth over $22 million, should have been approved by the component’s Deputy CIOs, Associate CIOs, or Deputy Associate CIOs. However, this particular acquisition was approved by an IT Project Manager. Further, two agencies allowed their CIOs to delegate IT acquisition approvals to other officials, without having these assignments approved by OMB. For example, three of NASA’s selected acquisitions were reviewed and approved by the component CIOs; however, NASA had not had these assignments approved by OMB.
Approval of other documentation. In 26 instances, CIOs or designees reviewed and approved acquisition documentation other than the required acquisition plan or strategy. For example, CIOs or designees reviewed and approved documents such as a requisition, a procurement request, or a business case analysis. While the CIOs or designees reviewed and approved some form of acquisition documentation prior to the award of these acquisitions, these forms of documentation did not have all the elements typically associated with an acquisition plan. As a result, the CIO (or designee) may not have been adequately equipped to make an informed decision about the acquisition.
Undocumented approvals. We identified 2 instances where the agency reported that the CIO or designee approved the IT acquisition, but did not document the approval. For example, regarding one contract, Commerce officials stated that one of the agency’s selected acquisitions was reviewed and approved by its component CIO for the Bureau of Economic Analysis. However, the agency could not provide evidence to show the CIO’s approval beyond an e-mail after the contract was signed stating that the CIO was aware of and had approved that particular acquisition.
Until agencies fully satisfy FITARA and OMB’s requirements by ensuring that CIOs, or their appropriate designees, review and approve IT acquisitions, CIOs risk continuing to have limited visibility and input into their agencies’ planned IT expenditures and not being able to use the increased authority that FITARA’s contract approval provision is intended to provide. In addition, agencies are missing an opportunity to strengthen CIOs’ authority and to provide needed direction and oversight of their IT acquisitions. As a result, agencies may award IT contracts that are duplicative, wasteful, or poorly conceived.
Conclusions
Given the history of failures and amount of money at stake, it is imperative that agencies properly oversee IT acquisitions. While the 22 selected agencies reported $14.7 billion in IT obligations, 21 agencies did not identify $4.5 billion as IT. Further, because the selected agencies did not always identify their IT acquisitions, it is likely that agencies have additional unidentified IT spending. Among other reasons, this shortfall existed because many agencies did not ensure that their acquisition offices were involved in the identification process, or provide clear guidance for ensuring that IT was properly identified. Without proper identification of IT acquisitions, agencies and CIOs cannot effectively provide oversight of them.
In addition, many of the selected agencies covered by FITARA did not ensure the appropriate CIO review and approval of IT acquisitions that were identified. The CIOs’ review and approval presents an opportunity for CIOs to increase visibility into agency IT and recognize opportunities for improvement. However, the review and approval processes at 14 of the selected agencies were not in full compliance with OMB requirements, and only 11 of 96 randomly selected IT acquisitions were appropriately reviewed and approved by the CIO. As a result, agencies awarded IT contracts with a total possible value of $23.8 billion without the required CIO review and approval. Consequently, CIOs had limited visibility and insight into their agencies’ IT, thereby increasing the risk of entering into contracts that were duplicative, wasteful, or poorly conceived.
Recommendations for Executive Action
We are making a total of 39 recommendations to federal agencies.
We are making the following 3 recommendations to USDA:
The Secretary of Agriculture should ensure that the office of the CAO is involved in the process to identify IT acquisitions. (Recommendation 1)
The Secretary of Agriculture should direct the CAO and CIO to issue specific guidance to ensure IT-related acquisitions are properly identified. (Recommendation 2)
The Secretary of Agriculture should ensure that IT acquisition plans or strategies are reviewed and approved according to OMB’s guidance. (Recommendation 3)
We are making the following 2 recommendations to Commerce:
The Secretary of Commerce should ensure that the office of the CAO is involved in the process to identify IT acquisitions. (Recommendation 4)
The Secretary of Commerce should ensure that IT acquisition plans or strategies are reviewed and approved according to OMB’s guidance. (Recommendation 5)
We are making the following 2 recommendations to Education:
The Secretary of Education should ensure that the office of the CAO is involved in the process to identify IT acquisitions. (Recommendation 6)
The Secretary of Education should ensure that IT acquisition plans or strategies are reviewed and approved according to OMB’s guidance. (Recommendation 7)
We are making the following 2 recommendations to Energy:
The Secretary of Energy should direct the CAO and CIO to issue specific guidance to ensure IT-related acquisitions are properly identified. (Recommendation 8)
The Secretary of Energy should ensure that IT acquisition plans or strategies are reviewed and approved according to OMB’s guidance. (Recommendation 9)
We are making the following recommendation to HHS:
The Secretary of HHS should ensure that IT acquisition plans or strategies are reviewed and approved according to OMB’s guidance. (Recommendation 10)
We are making the following 2 recommendations to the Department of Housing and Urban Development:
The Secretary of Housing and Urban Development should ensure that the office of the CAO is involved in the process to identify IT acquisitions. (Recommendation 11)
The Secretary of Housing and Urban Development should ensure that IT acquisition plans or strategies are reviewed and approved according to OMB’s guidance. (Recommendation 12)
We are making the following 3 recommendations to Interior:
The Secretary of the Interior should ensure that the office of the CAO is involved in the process to identify IT acquisitions. (Recommendation 13)
The Secretary of Interior should direct the CAO and CIO to finalize and issue guidance on identifying IT acquisitions in order to ensure the CIO review and approval of those acquisitions. (Recommendation 14)
The Secretary of the Interior should ensure that IT acquisition plans or strategies are reviewed and approved according to OMB’s guidance. (Recommendation 15)
We are making the following 2 recommendations to Justice:
The Attorney General should direct the senior procurement executive and CIO to issue specific guidance to ensure IT-related acquisitions are properly identified. (Recommendation 16)
The Attorney General should ensure that IT acquisition plans or strategies are reviewed and approved according to OMB’s guidance. (Recommendation 17)
We are making the following 3 recommendations to Labor:
The Secretary of Labor should ensure that the office of the CAO is involved in the process to identify IT acquisitions. (Recommendation 18)
The Secretary of Labor should direct the CAO and CIO to issue specific guidance to ensure IT-related acquisitions are properly identified. (Recommendation 19)
The Secretary of Labor should ensure that IT acquisition plans or strategies are reviewed and approved according to OMB’s guidance. (Recommendation 20)
We are making the following 2 recommendations to State:
The Secretary of State should ensure that the office of the CAO is involved in the process to identify IT acquisitions. (Recommendation 21)
The Secretary of State should ensure that IT acquisition plans or strategies are reviewed and approved according to OMB’s guidance. (Recommendation 22)
We are making the following recommendation to Treasury:
The Secretary of the Treasury should ensure that IT acquisition plans or strategies are reviewed and approved according to OMB’s guidance. (Recommendation 23)
We are making the following 3 recommendations to Transportation:
The Secretary of Transportation should ensure that the office of the CAO is involved in the process to identify IT acquisitions. (Recommendation 24)
The Secretary of Transportation should direct the CAO and CIO to issue specific guidance to ensure IT-related acquisitions are properly identified. (Recommendation 25)
The Secretary of Transportation should ensure that IT acquisition plans or strategies are reviewed and approved according to OMB’s guidance. (Recommendation 26)
We are making the following 2 recommendations to VA:
The Secretary of VA should ensure that the office of the CAO is involved in the process to identify IT acquisitions. (Recommendation 27)
The Secretary of VA should ensure that IT acquisition plans or strategies are reviewed and approved according to OMB’s guidance. (Recommendation 28)
We are making the following recommendation to EPA:
The Administrator of EPA should ensure that the office of the CAO is involved in the process to identify IT acquisitions. (Recommendation 29)
We are making the following 3 recommendations to NASA:
The Administrator of NASA should ensure that the office of the CAO is involved in the process to identify IT acquisitions. (Recommendation 30)
The Administrator of NASA should direct the CAO and CIO to issue specific guidance to ensure IT-related acquisitions are properly identified. (Recommendation 31)
The Administrator of NASA should ensure that IT acquisition plans or strategies are reviewed and approved according to OMB’s guidance. (Recommendation 32)
We are making the following recommendation to NRC:
The Chairman of NRC should ensure that the office of the senior procurement executive is involved in the process to identify IT acquisitions. (Recommendation 33)
We are making the following recommendation to OPM:
The Director of OPM should ensure that IT acquisition plans or strategies are reviewed and approved according to OMB’s guidance. (Recommendation 34)
We are making the following recommendation to SBA:
The Administrator of SBA should ensure that the office of the senior procurement executive is involved in the process to identify IT acquisitions. (Recommendation 35)
We are making the following 3 recommendations to SSA:
The Commissioner of SSA should ensure that the office of the senior procurement executive is involved in the process to identify IT acquisitions. (Recommendation 36)
The Commissioner of SSA should direct the senior procurement executive and CIO to issue specific guidance to ensure IT-related acquisitions are properly identified. (Recommendation 37)
The Commissioner of SSA should ensure that IT acquisition plans or strategies are reviewed and approved according to OMB’s guidance. (Recommendation 38)
We are making the following recommendation to USAID:
The Administrator of USAID should ensure that IT acquisition plans or strategies are reviewed and approved according to OMB’s guidance. (Recommendation 39)
Agency Comments and Our Evaluation
We provided a draft of this report to OMB and the other 22 agencies included in our review. Among the comments received, 16 agencies (Energy, GSA, HHS, HUD, Interior, Justice, Labor, NASA, OPM, SBA, SSA, State, Transportation, USAID, USDA, and VA) agreed with our recommendations; 2 agencies (EPA and OMB) did not agree or disagree with our recommendations; 1 agency (Education) partially agreed with our recommendations; 1 agency (NRC) disagreed with our recommendations; and 2 agencies (Treasury and NSF) had no comments on the recommendations. One other agency (Commerce) did not provide comments on the report.
The agencies’ comments that we received, and our evaluations of them, are summarized as follows: In comments provided via e-mail on December 8, 2017, an OMB GAO liaison did not agree or disagree with our findings. The official stated that improved coordination and collaboration between CIOs, CAOs, and senior procurement executives is critical, but represents a significant cultural shift for most agencies. The official added that OMB’s Office of Federal Procurement Policy and Office of the Federal CIO are working closely with agency CAOs and CIOs through the CIO Council and CAO Council to discuss practices that agencies have found helpful in achieving this cultural change.
In comments provided via e-mail on November 18, 2017, a Senior Advisor from USDA’s Office of the CIO stated that the department concurred with the findings in our report and had no additional comments.
In written comments, Education concurred with one of our recommendations, which called for the department to ensure that the office of the CAO is involved in the process to identify IT acquisitions. However, Education did not concur with a second recommendation to ensure that IT acquisition plans or strategies are reviewed and approved according to OMB’s guidance. The department stated that the CIO reviews and approves IT acquisition strategies and plans as part of his review and approval of IT investments. Specifically, the department stated that its Departmental Directive OCIO: 3-108, “Information Technology Investment Management” establishes a process for Office of the CIO review of IT acquisitions. Further, the department stated that its Statement of Work Review Process adds increased rigor to the CIO’s review and approval by requiring all acquisitions with IT elements to be submitted for Office of the CIO review. Finally, the department stated that the Federal Student Aid Investment Review Board charter documents the agency CIO as a voting member. The department added the CIO is required to vote on Federal Student Aid IT investments greater than $10 million. For Federal Student Aid investments less than $10 million, the CIO is provided the same level of insight as any other Investment Review Board member, but has delegated the required vote to the Federal Student Aid CIO.
The IT Investment Management Directive, together with the department’s associated Lifecycle Management Framework (referenced in the directive), indicates that the office of the CIO is to review IT acquisition plans. However, the department’s Statement of Work Review Process does not require the review and approval of acquisition plans. Instead, the process states that the office of the CIO may review IT acquisition plans or strategies as one of several possible documents, including statements of work or cost estimates.
We also reviewed the Federal Student Aid Investment Review Board charter and updated our report to reflect the department CIO’s involvement on the Federal Student Aid’s Investment Review Board. Based on this collective information, we updated our assessment of Education’s IT acquisition policy to reflect that the department had partially met OMB’s requirements. Nevertheless, the CIO’s review of the department’s acquisition plans and strategies should be required, rather than optional. Thus, we believe that our recommendation to ensure that IT acquisition plans or strategies are reviewed and approved according to OMB’s guidance is still warranted. Education’s comments are reprinted in appendix VI.
In written comments, Energy concurred with our two recommendations directed to the department and stated that it has activities underway to revise the department’s acquisition policy.
Energy added that it planned to address the recommendations by December 31, 2017. Energy’s comments are reprinted in appendix VII.
In comments provided via e-mail on December 7, 2017, a Management Analyst in HHS’s Office of the CIO stated that the department agreed with the recommendation and had no comments on the report.
In written comments, HUD stated that it concurred with our two recommendations to the department. HUD’s comments are reprinted in appendix VIII.
In written comments, Interior stated that it concurred with our three recommendations to the department. Interior’s comments are reprinted in appendix IX.
In comments provided via e-mail on November 27, 2017, a Program Analyst from Justice’s Internal Review and Evaluation Office stated that the department concurred with our two recommendations. The department also provided technical comments, which we have incorporated in the report, as appropriate.
In written comments, Labor concurred with our three recommendations that we directed to the department. These recommendations called for the department to (1) ensure that the office of the CAO is involved in the process to identify IT acquisitions, (2) issue specific guidance to ensure IT-related acquisitions are properly identified, and (3) ensure that IT acquisition plans or strategies are reviewed and approved according to OMB’s guidance. Labor detailed actions recently taken to implement each of the recommendations and submitted documentation to support its assertions. For example, the department submitted its Acquisition Plan Preparation Guide and related acquisition plan templates to show that it had issued guidance on identifying IT and required the CIO review and approval of IT acquisition plans. Implementation of these steps should help ensure appropriate oversight of IT acquisitions. Labor’s comments are reprinted in appendix X.
In written comments, State agreed with both of our recommendations. In particular, regarding our recommendation to ensure that the office of the CAO is involved in the process to identify IT acquisitions, the department stated that senior State officials, including the CAO and CIO, will develop a plan to ensure that the CAO monitors acquisition activities and ensures acquisition decisions are consistent with all applicable laws, such as FITARA.
Further, regarding the recommendation to ensure that the department’s IT acquisition plans or strategies are reviewed and approved according to OMB’s guidance, State referenced its capital planning and investment control guide that describes how a group under the direction of the CIO reviews acquisition strategies during the IT portfolio selection process. However, while the guide states that the CIO is to approve the finalized IT portfolio, the guide does not state that the CIO is to review the individual acquisition strategy documents. As a result, our recommendation is still warranted. State’s comments are reprinted in appendix XI.
In comments provided via e-mail on December 7, 2017, an Audit Liaison from Treasury's Office of the CIO stated that the department had no comments on the report. The department did not say whether it agreed or disagreed with the recommendation, but noted that it planned corrective actions to work with Treasury stakeholders (including the Chief Procurement Executive, bureau CIOs, and acquisition officials) and with OMB officials to develop acquisition plans and strategies according to OMB's FITARA guidance for IT acquisitions.
In comments provided via e-mail on November 27, 2017, the Director of Audit Relations and Program Improvement within the Department of Transportation stated that the department concurred with the findings and recommendations.
In written comments, VA concurred with our two recommendations to the department and stated that it is taking steps to address the recommendations. Specifically, regarding the recommendation to ensure that the office of the CAO is involved in the process to identify IT acquisitions, the department stated that it had addressed this concern by implementing an updated version of the Acquisition and Management of VA IT Resources directive in November 2017. In its discussion of this directive, the department stated that the CIO, in conjunction with the CAO, collaborates on all IT actions to ensure FITARA compliance.
While the directive clarifies the scope of VA’s IT resources subject to the oversight authority of the CIO, the directive does not indicate that the office of the CAO is also involved in this process. It will be important for VA to consider this recommendation as it continues to implement FITARA requirements.
Further, regarding the recommendation to ensure that IT acquisition plans or strategies are reviewed and approved according to OMB’s guidance, the department stated that its Office of Strategic Sourcing is currently developing processes and procedures necessary to implement FITARA accountability and responsibilities for IT acquisitions. The department also stated that the new acquisition review process is scheduled to be implemented in the second quarter of fiscal year 2018. VA’s comments are reprinted in appendix XII.
In written comments, EPA stated it did not take exception to the report's findings, conclusions, and recommendations. Regarding the recommendation to ensure that the office of the CAO is involved in the process to identify IT acquisitions, the agency stated that it is updating the policy that implements the CIO's interim guidance for complying with FITARA requirements. The agency added that future policy revisions are to require the involvement of the CAO or a designee, which would address this recommendation. EPA's comments are reprinted in appendix XIII.
In comments provided via e-mail on November 17, 2017, a program analyst in GSA’s GAO/Office of Inspector General Audit Management Division stated that the agency concurred with the report and had no additional comments.
In written comments, NASA concurred with the three recommendations to the agency and stated that it believes it has already addressed them. Specifically, regarding the recommendation to ensure that the Office of the CAO is involved in the process to identify IT acquisitions, NASA asserted that its CAO is already adequately involved. However, NASA did not provide evidence that it fulfills this requirement. For instance, none of the processes mentioned in NASA’s comments support the assertion that the acquisition office is involved in the identification of individual acquisitions as IT. Further, the discussion of a form used to identify IT acquisitions (NASA Form 1707) confirmed our original conclusion that the officials identifying IT acquisitions are not in the acquisition office.
In addition, NASA concurred with our recommendation to issue specific guidance to ensure IT-related acquisitions are properly identified, and stated that the agency currently has several policies that provide such guidance. However, the policies named by the agency (NASA Policy Directive 1000.5B, NASA Interim Directive 1000.110, NASA FAR Supplement 1804.7301, and NASA FAR Supplement 1807.71) do not contain guidance on how the identifying officials should determine whether an acquisition is IT-related. For example, our review of NASA Form 1707 (required by NASA FAR Supplement 1804.7301) showed that, while this form has instructions on how to fill out its IT section, it does not contain guidance on how to properly identify an acquisition as IT-related. In addition, NASA did not provide an official policy on the role of the Center Functional Review Team in the identification process.
Further, NASA concurred with our recommendation to ensure that its IT acquisition plans or strategies are reviewed and approved according to OMB guidance and stated that, on September 27, 2017, the CIO had issued a memo delegating the authority to review and approve all IT acquisitions to the Center CIOs. However, as previously mentioned, these delegations of authority need to be approved by OMB, and NASA’s delegation of IT acquisition authority had not been approved by OMB, as required. In addition, NASA has not demonstrated that the CIO’s review and approval is occurring, as none of the 9 acquisitions we randomly selected were reviewed and approved by the CIO. NASA also stated that the CIO and Assistant Administrator for Procurement review acquisition plans as part of their participation in Acquisition Strategy Meetings. However, as we mention in the report, not all IT contracts have acquisition strategy meetings. NASA’s comments are reprinted in appendix XIV.
In written comments, NRC did not concur with our recommendations and stated that our draft report did not accurately reflect the agency’s process for reviewing and approving IT acquisitions. With regard to our recommendation to ensure that the office of the senior procurement officer is involved in the process to identify IT acquisitions, the agency provided technical comments which stated that acquisition office officials review acquisitions to ensure that IT is properly identified. However, the agency did not provide supporting documentary evidence to support this assertion. Lacking evidence from the agency that would enable us to verify the implementation of the process described in its comments, we maintain that our recommendation is warranted.
In addition, our draft of this report included a recommendation for NRC to ensure that IT acquisition plans or strategies are reviewed and approved according to OMB’s guidance. NRC disagreed with this recommendation and stated in its technical comments that the agency does not require the development of acquisition plans for acquisitions under $1 million. Thus, the NRC CIO does not review acquisition plans under that threshold. The agency also stated that it has a process for approving contract actions under the $1 million threshold.
According to OMB guidance, in the absence of acquisition plans or strategies, CIOs may approve the corresponding contract actions.
Since NRC has a process for approving contract actions under the $1 million threshold, we revised the report to reflect that NRC has processes in place for the review and approval of acquisition plans in a manner consistent with OMB guidance and removed the associated recommendation. NRC’s comments are reprinted in appendix XV and its technical comments have been incorporated in the report, as appropriate.
In comments provided via e-mail on November 21, 2017, an NSF liaison stated that the agency had no comments.
In written comments, OPM concurred with our recommendation and stated that the agency will review and update its policies and processes as needed, so that they are aligned with OMB’s guidance. OPM’s comments are reprinted in appendix XVI.
In written comments, SBA agreed with our recommendation to ensure that the office of the CAO is involved in the process to identify IT acquisitions. SBA noted that it is not required to have a CAO, but agreed with having its acquisition workforce involved in IT acquisitions. Based on the agency’s comments, we modified the associated recommendation to refer to the agency’s senior procurement executive rather than the CAO. SBA stated that it has already begun to implement the recommendation for fiscal year 2018. SBA’s comments are reprinted in appendix XVII.
In written comments, SSA agreed with the three recommendations that we had directed to the agency, stated that it had taken steps to address the recommendations, and submitted supporting documentation. In particular, SSA agreed with the recommendation to ensure that the office of the CAO is involved in the process to identify IT acquisitions and, in response, provided documentation intended to detail the involvement of its Chief Financial Officer (who is the agency's senior procurement executive) in identifying and approving IT acquisitions. Implementation of these steps should help ensure appropriate oversight of IT acquisitions.
Regarding our recommendation to issue specific guidance to ensure IT-related acquisitions are properly identified, SSA agreed with the recommendation and stated that, according to its IT Acquisition Approval Policy, the Chief Financial Officer notifies the CIO of IT acquisitions by submitting acquisition plans to the CIO for approval. However, while SSA’s policy does support this method of CIO notification, it does not provide guidance to assist in identifying IT.
Further, SSA agreed with our recommendation to ensure that IT acquisition plans or strategies are reviewed and approved according to OMB’s guidance and provided its September 2017 policy for acquisition plan approval. After reviewing this policy and SSA’s 2017 capital planning and investment control process, we updated our report to show that SSA’s processes satisfy OMB’s requirements. While SSA has made progress in implementing OMB’s FITARA requirements, the agency needs to demonstrate that the CIO’s review and approval are occurring, as 3 of the 10 acquisitions we randomly selected were not reviewed and approved as required by OMB’s guidance. It will be important for SSA to consider this recommendation as it continues to implement FITARA requirements. SSA’s comments are reprinted in appendix XVIII. The agency also provided technical comments, which we have incorporated in the report as appropriate.
In written comments, USAID agreed with our recommendation and stated that the CIO and CAO are working together to (1) ensure all IT-related acquisition plans and strategies are reviewed and approved by the CIO and (2) further communicate this requirement to the acquisition planning stakeholders. USAID's comments are reprinted in appendix XIX. The agency also provided technical comments, which we have incorporated in the report as appropriate.
We are sending copies of this report to the appropriate congressional committees, the Secretaries of the Departments of Agriculture, Commerce, Education, Energy, Health and Human Services, Housing and Urban Development, Labor, State, the Interior, the Treasury, Transportation, and Veterans Affairs; the U.S. Attorney General of the Department of Justice; the Administrators of the Environmental Protection Agency, General Services Administration, National Aeronautics and Space Administration, Small Business Administration, and the U.S. Agency for International Development; the Commissioner of the Social Security Administration; the Directors of the National Science Foundation and the Office of Personnel Management; and the Chairman of the Nuclear Regulatory Commission. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-9286 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix XX.
Appendix I: Objectives, Scope, and Methodology
Our objectives were to determine the extent to which (1) federal agencies identify information technology (IT) contracts and how much is invested in them, and (2) federal agency Chief Information Officers (CIO) are reviewing and approving IT acquisitions.
For both objectives, our review included the Office of Management and Budget (OMB) and 22 of the 24 agencies covered by the Chief Financial Officers Act. We did not include the Department of Defense because it is excluded from the relevant provision in the Federal Information Technology Acquisition Reform Act (FITARA) requiring CIO approval of IT contracts. Further, we did not include the Department of Homeland Security because we recently issued a report that reviewed the department's implementation of FITARA, including the CIO's approval of IT contracts. For specific information on the CIOs' review of individual IT contracts, we focused on the 10 FITARA-covered agencies that obligated the most money to IT contracts in fiscal year 2016 (excluding the Departments of Defense and Homeland Security).
To determine the extent to which federal agencies identify IT contracts and how much is invested in them, we requested that each of the 22 selected agencies submit a list of their IT contract obligations for fiscal year 2016. We also requested the associated contract identification number, obligation amount, and product and service code.
In order to determine if the agencies gave us a full accounting of their IT obligations, we used the Category Management Leadership Council's categorizations of federal government spending by product and service codes. In particular, we used the Council's list of 79 IT-related codes, which is listed in appendix II, to identify fiscal year 2016 IT-related contract obligations on USAspending.gov. For each funding agency, we downloaded all contracts associated with the IT-related codes, such as purchase orders, blanket purchase agreements, and government-wide acquisition contracts. By comparing the resulting list of IT-related contracts from USAspending.gov to those provided by the agencies, we were able to determine which IT-related contract obligations the agencies had not identified. In doing so, we gave an agency credit for identifying the entire IT contract if it identified any portion of the contract (e.g., a contract modification). Consequently, the total of obligations that agencies did not identify is likely higher than the totals we were able to report.
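To illustrate the comparison logic described above, the following is a minimal Python sketch. The column names, contract identifiers, and dollar figures are hypothetical; the report does not specify the actual tooling or data layout GAO used.

```python
# Minimal sketch of crediting an agency for an entire contract when it
# identified any portion of it (e.g., a single modification).
import pandas as pd

# Contracts on USAspending.gov carrying one of the 79 IT-related
# product and service codes, one row per obligation action (hypothetical).
usaspending = pd.DataFrame({
    "contract_id": ["C1", "C1", "C2", "C3"],
    "psc": ["D302", "D302", "7030", "D399"],
    "obligation": [100_000, 25_000, 50_000, 75_000],
})

# Contracts the agency itself reported as IT-related (hypothetical).
agency_reported = pd.DataFrame({"contract_id": ["C1"]})

# Credit the entire contract if the agency identified any portion of it.
identified = usaspending["contract_id"].isin(agency_reported["contract_id"])
unidentified = usaspending.loc[~identified]

print(unidentified.groupby("contract_id")["obligation"].sum().to_dict())
# {'C2': 50000, 'C3': 75000} -- C1 is fully credited, so the reported
# unidentified total understates what agencies actually missed.
```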
To assess the reliability of the USAspending.gov data, we reviewed publicly available documentation related to the database, such as the USAspending.gov data dictionary. We also reviewed the results of our previous reports on USAspending.gov that had identified deficiencies in the accuracy and reliability of the reported data. For both the USAspending.gov and agency-supplied contract data, we tested the datasets to look for duplicate records and missing data in key fields. We also interviewed agency officials to corroborate the data. We found the contract data from USAspending.gov, while sometimes incomplete, were sufficient for our purpose of identifying IT contracts and demonstrating the amount of obligations toward IT contracts. In addition, we found the contract data provided by the agencies to be sufficiently reliable for the purposes of our reporting objectives. We used these data as evidence to support our findings, conclusions, and recommendations.
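The duplicate-record and missing-field tests mentioned above can be sketched in a few lines; the field names and records here are assumptions for illustration only.

```python
# Illustrative reliability tests: duplicate rows and missing key fields.
import pandas as pd

contracts = pd.DataFrame({
    "contract_id": ["C1", "C2", "C2", "C4"],
    "psc": ["D302", "7030", "7030", None],
    "obligation": [100_000, 50_000, 50_000, 75_000],
})

# Test 1: fully duplicated records.
duplicates = contracts[contracts.duplicated(keep=False)]
print(f"{len(duplicates)} duplicated rows")  # 2 (the two C2 rows)

# Test 2: missing data in key fields.
key_fields = ["contract_id", "psc", "obligation"]
missing = contracts[contracts[key_fields].isna().any(axis=1)]
print(f"{len(missing)} rows with missing key fields")  # 1 (C4 lacks a code)
```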
We also compared the product and service codes in the lists of IT contracts provided by the agencies to the list of IT product and service codes developed by the Category Management Leadership Council. From this comparison, we determined which agency-submitted obligations were associated with IT-related product and service codes and which obligations were associated with non-IT codes.
To determine the cause for any discrepancies between the agency-provided list of obligations and those found on USAspending.gov, we asked each agency to describe and provide evidence of the Chief Acquisition Officer's (CAO) involvement in the process for identifying IT acquisitions for CIO review. We also collected both testimonial evidence and documentation that described the identification process for potential IT acquisitions. We analyzed these data from each agency to determine the involvement of the CAO and officials within the CAO's acquisition office. We also determined the involvement of officials positioned outside of the acquisition office, such as officials from the office requesting the IT acquisition or from the Office of the CIO. As a result, we were able to establish which officials were responsible for identifying acquisitions for IT review at each agency. We also reviewed the submitted evidence to determine whether the agencies provided guidance that clearly described or defined IT to the identifying officials.
To determine the extent to which federal agency CIOs are reviewing and approving IT acquisitions, we first compiled a composite list of IT-related contracts from fiscal year 2016 for each of the 10 selected agencies by combining: contracts associated with IT-related product and service codes from USAspending.gov; contracts associated with IT vendors from USAspending.gov; contracts linked with major IT investments as listed on OMB's IT Dashboard; and contracts provided by agencies in response to our earlier request for a list of IT contracts.
We then randomly selected 10 IT contracts from each of the 10 agencies on which to perform additional analysis (100 total contracts). For each of the 100 selected contracts, we asked the associated agency to confirm that the contract was, in fact, IT-related and requested evidence of CIO or CIO designee review and approval of the contract’s associated acquisition. We compared the resulting documentation to FITARA and OMB guidance to determine the extent to which the IT acquisitions had been reviewed and approved. In order to receive full credit, agencies had to provide evidence that the CIO had reviewed and approved the acquisition plans or strategies for those IT acquisitions associated with major IT investments. For IT acquisitions associated with non-major IT investments, agencies had to provide evidence that the CIO, or a designee that reports directly to the CIO, reviewed and approved the acquisition plan or strategy. If agencies could not associate the IT acquisition with a particular IT investment, we looked for evidence that the CIO reviewed and approved the acquisition plan or strategy, since FITARA does not state that the review and approval of these IT acquisitions can be delegated.
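A rough sketch of the compilation and sampling steps follows, assuming each source has already been reduced to a set of contract identifiers; the identifiers and the fixed random seed are illustrative only.

```python
# Sketch: union four sources into a composite list, then sample up to 10.
import random

psc_contracts = {"C1", "C2", "C3"}   # IT product/service codes (USAspending.gov)
vendor_contracts = {"C3", "C4"}      # IT vendors (USAspending.gov)
dashboard_contracts = {"C5"}         # linked to major IT investments (IT Dashboard)
agency_reported = {"C1", "C6"}       # supplied by the agency

composite = sorted(psc_contracts | vendor_contracts
                   | dashboard_contracts | agency_reported)

random.seed(42)  # fixed only so this sketch is reproducible
sample = random.sample(composite, k=min(10, len(composite)))
print(sample)
```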
To determine whether agencies had processes in place to ensure the review and approval of IT acquisitions, we reviewed agency documentation on IT acquisition processes and procedures and compared it to the requirements in FITARA and OMB guidance. We also interviewed agency officials to clarify their respective processes and policies. In order to receive full credit, agencies had to provide evidence that they had a process in place that required the agency CIO to review and approve IT acquisition plans or strategies, with the exception of those associated with non-major IT investments. Agencies received partial or no credit if their processes had one or more of the following shortfalls: approval was not documented; review and approval of IT acquisitions was delegated without OMB approval of the delegation; the process did not provide the CIOs or their delegates oversight of all IT acquisitions; the process involved the review of other documentation instead of the required acquisition plans or strategies; or the process did not provide department CIO oversight over IT acquisitions at the component level.
We conducted this performance audit from July 2016 to January 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: IT-Related Product and Service Codes
In September 2015, the Category Management Leadership Council and the Office of Management and Budget (OMB) identified a total of 79 information technology (IT)-related product and service codes, of which 43 are for IT services and 36 are for IT products. Table 4 provides details on the IT-related services and product codes.
Appendix III: Estimated Total Fiscal Year 2016 IT Obligations by Agency
The 22 selected agencies identified approximately $14.7 billion in obligations for information technology (IT)-related contracts in fiscal year 2016. Of that amount, approximately $14 billion was categorized as IT-related per the Category Management Leadership Council's product and service codes, and approximately $626 million was categorized under other, non-IT codes. In addition to the obligations that agencies reported to us, we identified an additional $4.5 billion in obligations for contracts with IT-related product and service codes, raising the total amount obligated to IT contracts in fiscal year 2016 to at least $19.2 billion. Table 5 provides details on each selected agency's obligations for IT-related contracts in fiscal year 2016.
Appendix IV: Agency Acquisition Processes Used to Review and Approve IT Acquisitions
The Federal Information Technology Acquisition Reform Act (FITARA) and the Office of Management and Budget's (OMB) associated implementation guidance require major civilian agency chief information officers (CIO) to review and approve acquisitions of information technology (IT) either directly or as full participants in the agency's governance processes. In particular, OMB's guidance states that agencies shall not approve an acquisition plan or strategy that includes IT without the agency CIO's review and approval. OMB's guidance also allows the CIO to delegate these responsibilities to other agency officials to act as the CIO's representative; however, staff in OMB's Office of the Federal CIO noted that these assignments need to be approved by OMB. Alternatively, FITARA and OMB's guidance allow agencies to use IT governance processes to conduct these reviews and approvals, as long as the CIO is a full participant in the process. Table 6 provides details on the selected agencies' acquisition processes and the degree to which the processes comply with OMB's requirements.
Appendix V: Details on Selected IT Acquisitions
Of 96 randomly selected information technology (IT) contracts at 10 agencies, 9 acquisitions associated with these contracts had been reviewed and approved as required by the Office of Management and Budget (OMB). The acquisitions associated with the remaining 87 contracts did not receive the appropriate levels of Chief Information Officer (CIO) review and approval in accordance with OMB requirements. Table 7 provides details on the selected IT acquisitions and the CIO review and approval of them.
Appendix VI: Comments from the Department of Education
Appendix VII: Comments from the Department of Energy
Appendix VIII: Comments from the Department of Housing and Urban Development
Appendix IX: Comments from the Department of the Interior
Appendix X: Comments from the Department of Labor
Appendix XI: Comments from the Department of State
Appendix XII: Comments from the Department of Veterans Affairs
Appendix XIII: Comments from the Environmental Protection Agency
Appendix XIV: Comments from the National Aeronautics and Space Administration
Appendix XV: Comments from the Nuclear Regulatory Commission
Appendix XVI: Comments from the Office of Personnel Management
Appendix XVII: Comments from the Small Business Administration
Appendix XVIII: Comments from the Social Security Administration
Appendix XIX: Comments from the U.S. Agency for International Development
Appendix XX: GAO Contact and Staff Acknowledgments
GAO Contact
David A. Powner, (202) 512-9286 or [email protected].
Staff Acknowledgments
In addition to the contact named above, Kevin Walsh (Assistant Director), Jessica Waselkow (Analyst in Charge), Chris Businsky, Rebecca Eyler, Angel Ip, Tarunkant Mithani, David Plocher, Meredith Raymond, and Adam Vodraska made key contributions to this report.

Why GAO Did This Study
The federal government invested more than $90 billion in IT in fiscal year 2016. However, prior IT expenditures have produced failed projects. Recognizing the severity of these issues, in December 2014 Congress enacted IT acquisition reform legislation (referred to as the Federal Information Technology Acquisition Reform Act, or FITARA). Among other things, OMB's FITARA implementation guidance requires covered agencies' chief acquisition officers to identify IT contracts for the CIOs to review and approve.
GAO's objectives were to determine the extent to which (1) federal agencies identify IT contracts and how much is invested in them, and (2) federal agency CIOs are reviewing and approving IT acquisitions. To do so, GAO reviewed data on IT contracts from fiscal year 2016 at 22 agencies and compared agency actions to law and OMB guidance.
What GAO Found
Most of the 22 selected agencies did not identify all of their information technology (IT) contracts. The selected agencies identified 78,249 IT-related contracts, to which they obligated $14.7 billion in fiscal year 2016. However, GAO identified 31,493 additional contracts with $4.5 billion obligated, raising the total amount obligated to IT contracts in fiscal year 2016 to at least $19.2 billion (see figure). The percentage of additional IT contract obligations GAO identified varied among the selected agencies. For example, the Department of State did not identify 1 percent of its IT contract obligations. Conversely, 8 agencies did not identify over 40 percent of their IT-related contract obligations.
Many of the selected agencies that did not identify these IT acquisitions did not follow the Office of Management and Budget's (OMB) guidance. Specifically, 14 of the 22 agencies did not involve the acquisition office in their process to identify IT acquisitions for Chief Information Officer (CIO) review, as required by OMB. In addition, 7 agencies did not establish guidance to aid officials in recognizing IT. Until agencies involve the acquisition office in their IT identification processes and establish supporting guidance, they cannot ensure that they will identify all IT acquisitions. Without proper identification of IT acquisitions, agencies and CIOs cannot effectively provide oversight of these acquisitions.
In addition to not identifying all IT contracts, 14 of the 22 selected agencies did not fully satisfy OMB's requirement that the CIO review and approve IT acquisition plans or strategies. Further, only 11 of 96 randomly selected IT contracts at 10 agencies that GAO evaluated were CIO-reviewed and approved as required by OMB's guidance. The 85 IT contracts not reviewed had a total possible value of approximately $23.8 billion. Until agencies ensure that CIOs review and approve IT acquisitions, CIOs will continue to have limited visibility and input into their agencies' planned IT expenditures and will not be able to use the increased authority that FITARA's contract approval provision is intended to provide. Further, agencies will likely miss an opportunity to strengthen CIOs' authority and the oversight of IT acquisitions. As a result, agencies may award IT contracts that are duplicative, wasteful, or poorly conceived.
What GAO Recommends
GAO is making 39 recommendations, including that agencies ensure that acquisition offices are involved in identifying IT, issue related guidance, and ensure IT acquisitions are reviewed according to OMB guidance. OMB and 20 agencies generally agreed with or did not comment on the recommendations. One agency agreed with one recommendation, but disagreed with another. GAO believes this recommendation is warranted. One agency disagreed with two recommendations. GAO subsequently removed one of these, but believes the other recommendation is warranted, as discussed in the report.
SSA Lacks a Formal and Systematic Approach for Identifying CAL Conditions
SSA has in recent years relied on advocates for individuals with certain diseases and disorders to bring potential CAL conditions to its attention. However, SSA has not clearly communicated this or provided guidance on how to make suggestions through its CAL webpage, which communicates information to the public. We noted that, without more explicit instructions, advocates may not present information that is relevant to SSA's decision-making or that most strongly makes the case for these conditions to be included on the CAL list. One representative from an advocacy organization, for example, described meeting with agency officials and being surprised by SSA's focus on cancer grades—an indicator of how quickly cancer is likely to grow and spread—as she was not accustomed to discussing the condition she represents in these terms. Federal internal control standards state that agencies should use quality information to achieve their objectives. We concluded that absent clear guidance to advocates on how to make suggestions through its CAL webpage, SSA is missing an opportunity to gather quality information to inform its selection of CAL conditions.
In addition, we found that relying on advocates to bring conditions to SSA’s attention also introduces potential bias toward certain conditions and the possibility of missing others. Some conditions that are potentially deserving of CAL consideration may not have advocacy organizations affiliated with them, and some advocates may be unaware of CAL. As a result, some conditions may have a better chance of being considered than other, equally deserving ones that are not proposed, and individuals with those conditions may have to wait longer to receive approval for disability benefits. Federal internal control standards state that agencies should collect complete and unbiased information and consider the reliability of their information sources. According to some external researchers who work with SSA, an approach leveraging SSA’s administrative data may help address the bias that is introduced by only using advocates. SSA has contracted with the National Institutes of Health and the National Academies of Sciences, Engineering, and Medicine for research using SSA administrative data, which has led to the identification of potential CAL conditions. However, we noted that to date, the research SSA has contracted has not been sufficiently targeted to generate more than a small number of additions to the CAL list. In our August 2017 report, we recommended that SSA develop a formal and systematic approach to gathering information to identify potential conditions for the CAL list, including sharing information through SSA’s website on how to propose conditions for the list and using research that is directly applicable to identifying CAL conditions. SSA agreed with this recommendation and has begun to make revisions to its website.
We also found that SSA has not consistently communicated with advocates who have suggested conditions to add to the CAL list about the status of their recommendations, leading to uncertainty for some. SSA officials told us that they provide a written or oral response to advocacy organizations that have suggested a condition for inclusion on the CAL list to inform them whether the condition is approved. However, some of the advocates we spoke to had not received such a response from SSA and found it challenging to connect with SSA officials to obtain information about the status of their suggestions. For example, one representative from an advocacy organization told us that she was unable to reach SSA officials to obtain any information on the status of her suggestion despite repeated attempts. In the absence of a response from SSA, she had resubmitted her condition and supporting documents to SSA every six months for three years since her initial submission in 2014. Federal internal control standards state that agencies should communicate quality information externally so that external parties can help the agency achieve its objectives. We concluded that without two-way communication between SSA and advocates, advocates are unclear on the status of their proposed CAL conditions and SSA may be missing an opportunity to improve the quality of the information it obtains from advocates. In our August 2017 report, we recommended that SSA develop formal procedures for consistently notifying those who propose conditions for the CAL list of the status of their proposals. SSA agreed with this recommendation.
Our review also found that SSA has not developed or communicated clear, consistent criteria for deciding which potential conditions will be included on the CAL list. Officials told us that they have informally considered criteria such as allowance rates—the percentage of claimants asserting a certain condition who are approved for benefits—when identifying potential CAL conditions. However, we reviewed 31 assessments of potential CAL conditions prepared by SSA medical consultants and found that they did not cite consistent criteria. There was no standard format used for these reports, and SSA does not have a template, checklist, or guidance—other than the medical listings—that its staff consult when preparing them. Further, SSA officials have cited different reasons for not designating conditions as CAL in communications with those who proposed conditions, which led to confusion regarding CAL condition criteria for staff from some advocacy organizations we interviewed. Federal internal control standards state that agencies should define objectives in specific and measurable terms so that they are understood at all levels of the agency and performance toward achieving these objectives can be assessed. To help achieve these objectives, the standards state that agencies should also communicate key information to their internal and external stakeholders. We concluded that absent clear criteria for designating CAL conditions, advocates and other stakeholders may be confused as to why some conditions are not included on the CAL list and SSA may miss conditions that could qualify for CAL. In our August 2017 report, we recommended that SSA develop and communicate internally and externally criteria for selecting conditions for the CAL list. SSA agreed with this recommendation.
SSA’s Procedures Do Not Ensure All Claims are Accurately Identified for Expedited CAL Processing
To identify disability claims for expedited CAL processing, SSA primarily relies on software that searches for key words in claims. However, because text provided by claimants may be ambiguous, incomplete, inaccurate, or misspelled, the software is hindered in its ability to flag all claimants with CAL conditions and may also flag claimants for CAL processing that should not be flagged. For example, officials we interviewed at 5 of the 6 selected DDS offices said that they have seen claims inaccurately flagged for CAL when the claim text included words like “family history of ” though the CAL condition was not asserted by the claimant. In addition, in our claim file review, we found a claimant asserting a leiomyosarcoma, a soft tissue cancerous tumor that may be found in organs including the liver, lungs, and uterus, who misspelled the term as “leiomysarcoma” on the disability claim, which resulted in the software not flagging the claim as CAL, although liver and lung cancers are CAL conditions.
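The leiomyosarcoma example suggests why exact keyword matching is fragile. The sketch below contrasts exact substring matching with one common mitigation, approximate string matching. It is not SSA's actual algorithm, which the report does not describe in detail, and the 0.85 similarity cutoff is an arbitrary illustration.

```python
# Exact keyword matching misses misspellings; approximate matching can
# recover them. Illustrative only -- not SSA's selection software.
import difflib

CAL_KEYWORDS = ["leiomyosarcoma", "pancreatic cancer"]
claim_text = "diagnosed with leiomysarcoma of the liver"  # misspelled

# Exact substring matching: the misspelling defeats the flag.
exact_hits = [kw for kw in CAL_KEYWORDS if kw in claim_text]
print("exact:", exact_hits)  # []

# Approximate matching: compare each claim word to the keyword list.
fuzzy_hits = {}
for word in claim_text.split():
    matches = difflib.get_close_matches(word, CAL_KEYWORDS, n=1, cutoff=0.85)
    if matches:
        fuzzy_hits[word] = matches[0]
print("fuzzy:", fuzzy_hits)  # {'leiomysarcoma': 'leiomyosarcoma'}
```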
SSA officials told us that they have not established a feedback loop to capture observations from DDS officials on weaknesses in the software. However, DDS officials we spoke with have observed weaknesses in the software that, if shared, could assist SSA in improving its accuracy in identifying CAL claims. For example, an official at one DDS office noted that the software appears to identify CAL conditions using words from the claim text out of order or without regard to specific phrases. Specifically, the official stated that some claims with “pancreatitis” or “pancreatic pain” have been incorrectly flagged for the CAL condition “pancreatic cancer.” According to federal internal control standards, quality information about the agency’s operational processes should flow up the reporting lines from personnel to management to help management achieve the agency’s objectives. We concluded that absent a mechanism to gather feedback from DDS offices nationwide, the agency may be missing an opportunity to obtain important information that could help improve the software. In our August 2017 report, we recommended that SSA take steps to obtain information that can help refine the selection software for CAL claims, for example by using management data, research, or DDS office feedback. SSA agreed with this recommendation.
We also found that DDS offices play an important role in helping to ensure that claims are accurately flagged for CAL by manually correcting flagging errors made by the software, but SSA's guidance on how to make such corrections does not address when they should occur. For example, instructions on the mechanical process for removing the flag based on the DDS examiner's review of the medical evidence in the claimant's file do not indicate how quickly this should be done after CAL status is clarified. Based on our discussions with officials in the 6 selected DDS offices, we found that some examiners did not understand the importance of making timely changes to a CAL flag designation to ensure faster claim processing and accurate tracking of CAL claims. For example, examiners at one DDS office said that they do not always add or remove a CAL flag when they determine a claim is erroneously designated because it adds another step to claim processing and the step seems unnecessary. Ensuring claims are correctly flagged for or not flagged for CAL is important because the CAL flag reduces DDS processing time by about 10 weeks on average compared to the processing time for all claims, according to SSA data. According to federal internal control standards, agencies should record transactions in an accurate and timely fashion, and communicate quality information throughout the agency. We concluded that without clear guidance on when to make manual changes, DDS examiners may continue to take actions that are not timely and may hinder expedited processing and accurate tracking of CAL claims. In our August 2017 report, we recommended that SSA clarify written policies and procedures regarding when manual addition and removal of CAL flags should occur on individual claims. SSA agreed with this recommendation.
In addition, our analysis of SSA’s data shows that DDS offices varied in their use of manual actions to add the CAL flag to claims that were not initially flagged for CAL by the software. Specifically, we found that over half of DDS offices nationwide that processed disability claims in fiscal year 2016 had one or zero claims with a manually added CAL designation in that year. In comparison, 5 DDS offices together accounted for over 50 percent of all claims with a manual addition. Such variance could result in some claimants who assert a CAL condition not receiving expedited processing because their claims were not flagged for CAL by the selection software or DDS examiners. We found that because SSA had not undertaken a study of its manual action procedures on such claims, it was unclear why this variance existed among DDS offices. Federal internal control standards state that agencies should establish and operate monitoring activities to monitor operations and evaluate results. In our August 2017 report, we recommended that SSA assess the reasons why the uses of manual actions vary across DDS offices. SSA agreed with this recommendation.
SSA Takes Some Steps to Ensure Accurate and Consistent CAL Decisions But Does Not Regularly Update Condition Descriptions or Leverage Data
In our August 2017 report, we found that SSA has taken some steps to ensure the accuracy and consistency of decisions on CAL claims, including developing detailed descriptions of CAL conditions, known as impairment summaries, but has not regularly updated the summaries. These summaries suggest specific medical evidence for the DDS examiner to obtain to verify the claimant’s asserted CAL condition and help examiners make decisions about whether to allow or deny a claim. However, we found that because SSA has not regularly updated the impairment summaries, nearly one-third are 5 or more years old. Several advocates (4 of 6) and medical experts (2 of 3) we interviewed suggested that the impairment summaries should be updated every 1 to 3 years because medical research and advancements may have implications for disability determinations. In addition, federal internal control standards state that as changes in the agency’s environment occur, management should make necessary changes to the information requirements to address the modified risks. We concluded that given the pace of medical research for certain CAL conditions, in the absence of a systematic and regular mechanism to update CAL impairment summaries, SSA potentially faces the risk of making inaccurate and inconsistent disability determinations based on outdated information. In our August 2017 report, we recommended that SSA develop a schedule and a plan for updates to the CAL impairment summaries to ensure that information is medically up to date. SSA agreed with this recommendation.
We also found that SSA does not leverage data it collects to identify potential challenges to accurate and consistent decision-making on CAL claims. SSA and DDS officials review some data to monitor CAL claims processing, such as the total number of CAL claims and claims flagged for CAL by the selection software, but these efforts do not address the accuracy and consistency of decisions on CAL claims. In contrast, our analysis of SSA’s data on outcomes for claims with asserted CAL conditions suggested that a review of data on allowance and denial rates for these claims may help identify conditions that are challenging to accurately and consistently adjudicate. For example, while the vast majority of claims asserting CAL conditions are allowed—about 92 percent were approved in fiscal year 2016—data we reviewed showed that there was a lower percentage of claims allowed for certain asserted CAL conditions. Specifically, SSA denied more than 30 percent of claims asserting 37 CAL conditions, and 17 of these conditions had denial rates that were greater than 50 percent. Advocates we spoke to who represent some of these conditions explained why challenges adjudicating these claims may exist. For example, officials from one of these advocacy groups told us that the CAL condition they represent is frequently confused with a much more common and non-life threatening condition that is less likely to be allowed. According to federal internal control standards, management should obtain relevant data based on identified information requirements, process these data into quality information that can be used to make informed decisions, and evaluate the agency’s performance in achieving key objectives and addressing risks. We concluded that without regular analyses of available data, SSA is missing an opportunity to ensure the accuracy and consistency of CAL decision-making. In our August 2017 report, we recommended that SSA develop a plan to regularly review and use available data to assess the accuracy and consistency of CAL decision-making. SSA agreed with this recommendation.
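The denial-rate analysis described above amounts to a simple grouped computation. The sketch below uses illustrative data, not SSA's, and applies the 30 percent threshold mentioned in the report.

```python
# Flag asserted CAL conditions with denial rates above 30 percent.
import pandas as pd

claims = pd.DataFrame({
    "condition": ["A", "A", "A", "B", "B", "C", "C", "C", "C"],
    "allowed":   [1,   1,   1,   1,   0,   0,   0,   1,   0],
})

rates = (claims.groupby("condition")["allowed"]
               .agg(total="count", denial_rate=lambda s: 1 - s.mean()))

print(rates[rates["denial_rate"] > 0.30])
#            total  denial_rate
# condition
# B              2         0.50
# C              4         0.75
```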
Chairman Johnson, Ranking Member Larson, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions you may have at this time.
GAO Contact and Staff Acknowledgments
For questions about this statement, please contact Kathryn A. Larin at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Rachel Frisk, Assistant Director; Kristen Jones, Analyst-in-Charge; and Michelle Loutoo Wilson.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Why GAO Did This Study
SSA in October 2008 implemented CAL to fast-track individuals with certain conditions through the disability determination process by prioritizing their disability benefit claims. Since then, SSA has expanded its list of CAL conditions from 50 to 225.
This testimony summarizes the information contained in GAO's August 2017 report entitled SSA's Compassionate Allowance Initiative: Improvements Needed to Make Expedited Processing of Disability Claims More Consistent and Accurate, GAO-17-625. It examines the extent to which SSA has procedures for (1) identifying conditions for the CAL list; (2) identifying claims for CAL processing; and (3) ensuring the accuracy and consistency of CAL decisions.
For its August 2017 report, GAO reviewed relevant federal laws, regulations, and guidance; analyzed SSA data on disability decisions for CAL claims from fiscal years 2009 through 2016 and on CAL claims with manual actions in fiscal year 2016; reviewed a nongeneralizable sample of 74 claim files with fiscal year 2016 initial determinations; and interviewed medical experts, representatives from patient advocacy groups, and SSA officials in headquarters and six DDS offices selected for geographic dispersion and varied CAL caseloads.
What GAO Found
The Social Security Administration (SSA) does not have a formal or systematic approach for designating certain medical conditions for the Compassionate Allowance initiative (CAL). CAL was established in 2008 to fast-track through the disability determination process claimants who are likely to be approved because they have certain eligible medical conditions. SSA has in recent years relied on advocates for individuals with certain diseases and disorders to bring conditions to its attention for potential inclusion in CAL. However, by relying on advocates, SSA may overlook disabling conditions that have no advocates, potentially resulting in individuals with these conditions not receiving expedited processing. Further, SSA does not have clear, consistent criteria for designating conditions for potential CAL inclusion, which is inconsistent with federal internal control standards. As a result, external stakeholders lack key information about how to recommend conditions for inclusion on the CAL list.
To identify disability claims for expedited CAL processing, SSA primarily relies on software that searches for key words in claims. However, if claimants include incorrect or misspelled information in their claims the software is hindered in its ability to flag all claimants with CAL conditions or may flag claimants for CAL processing that should not be flagged. SSA has guidance for disability determination services (DDS) staff on how to manually correct errors made by the software, but the guidance does not address when such corrections should occur. Without clear guidance on when to make manual changes, DDS examiners may not take timely actions and may hinder expedited processing for appropriate claims.
SSA has taken some steps to ensure the accuracy and consistency of decisions on CAL claims, including developing detailed descriptions of CAL conditions, known as impairment summaries. These summaries help examiners make decisions about whether to allow or deny a claim. However, nearly one-third of the summaries are 5 or more years old. Experts and advocates that GAO spoke to suggested that summaries should be updated every 1 to 3 years to reduce the risk of SSA making disability determinations using medically outdated information. In addition, GAO found that SSA does not leverage data it collects to identify potential challenges to accurate and consistent decision-making on CAL claims. Without regular analyses of available data, SSA is missing an opportunity to ensure the accuracy and consistency of CAL decision-making.
What GAO Recommends
In its August 2017 report, GAO made eight recommendations, including that SSA develop a process to systematically gather information on potential CAL conditions, communicate criteria for designating CAL conditions, clarify guidance for manual corrections on CAL claims, update CAL impairment summaries, and use available data to ensure accurate, consistent decision-making. SSA agreed with all of GAO's recommendations.
Background
Enacted in November 2001, the Aviation and Transportation Security Act (ATSA) established TSA as the primary federal agency responsible for implementing and overseeing the security of the nation's civil aviation system. In accordance with ATSA, TSA is to ensure that all passengers and property transported by commercial passenger aircraft to, from, or within the United States are adequately screened. Among other things, TSA is responsible for ensuring that for all flights and flight segments originating in the United States, such screening takes place before boarding and is carried out by a federal government employee except as otherwise permitted in statute. Pursuant to TSA-established policies and procedures in effect at about 440 airports at which TSA performs, or oversees the performance of, screening operations (i.e., TSA-regulated airports), all passengers, their accessible property, and their checked baggage are to be screened prior to entering the sterile area of the airport or boarding the aircraft. Among other things, these procedures generally provide that passengers pass through security checkpoints where their person, identification documents, and accessible property are screened by Transportation Security Officers (TSOs).
Overview of Selected Aviation Security Countermeasures
In this report, we examine six countermeasures specific to aviation security—passenger prescreening (Secure Flight), checkpoint screening, checked baggage screening, explosives detection canines, BDA, and FAMS. An overview of these countermeasures is provided below and figure 1 depicts an illustrative example of the process by which an aviation passenger may encounter these selected countermeasures.
Passenger Prescreening (Secure Flight): TSA uses its Secure Flight prescreening program to match passenger information against federal government watch lists and other information to assign each passenger to one of three risk categories—high risk, low risk, or unknown risk—that either corresponds to the level of screening they will experience at the checkpoint or may deny them an opportunity to board the aircraft. The program requires U.S.- and foreign-flagged commercial aircraft operators traveling to, from, within, or overflying the United States, as well as U.S. commercial aircraft operators with international point-to-point flights, to collect certain information from passengers—such as full name, gender, and date of birth—and transmit that information electronically to TSA. The Secure Flight program then identifies passengers’ risk levels by matching them against federal government watch lists—for example, the No Fly List, comprised of individuals who should be precluded from boarding an aircraft, and the Selectee List, comprised of individuals who should receive enhanced screening at the passenger security checkpoint. Passengers identified as matching the No Fly List, for example, are precluded from obtaining a boarding pass and proceeding through the screening checkpoint. For passengers matching the Selectee List, air carriers must mark their boarding passes accordingly so TSA can identify them for enhanced screening.
In 2010, TSA began using risk-based criteria to create additional lists for Secure Flight screening, which are composed of high-risk passengers who may not be in the Terrorist Screening Database but whom TSA has determined should be subject to enhanced screening procedures. TSA also began conducting watch list matching against an Expanded Selectee List in order to designate more passengers who are known or suspected terrorists as selectees for enhanced screening. In addition, as part of TSA Pre✓™—a 2011 initiative to preapprove passengers for expedited screening—TSA uses Secure Flight to screen passengers against several lists of preapproved low-risk travelers. Passengers determined to be eligible for TSA Pre✓™ are identified as such on their boarding passes.
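The flow from Secure Flight list matches to screening outcomes can be summarized in a deliberately simplified sketch. The exact name and date-of-birth matching below is naive by design; the operational system's matching logic is more sophisticated and is not described here, and all names and lists are fictitious.

```python
# Simplified illustration of list-based risk categorization.
NO_FLY = {("DOE", "JOHN", "1970-01-01")}
SELECTEE = {("ROE", "JANE", "1985-05-05")}  # incl. Expanded Selectee, notionally
PRECHECK_APPROVED = {("SMITH", "ALEX", "1990-09-09")}

def screening_outcome(last: str, first: str, dob: str) -> str:
    key = (last.upper(), first.upper(), dob)
    if key in NO_FLY:
        return "deny boarding pass"
    if key in SELECTEE:
        return "enhanced screening (selectee)"
    if key in PRECHECK_APPROVED:
        return "expedited screening (TSA Pre-Check)"
    return "standard screening"  # unknown risk

print(screening_outcome("Roe", "Jane", "1985-05-05"))
# enhanced screening (selectee)
```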
Checkpoint Screening: TSA screens individuals and property at airport screening checkpoints to deter and prevent the carriage of any unauthorized or prohibited items on board an aircraft or into the airport sterile area. In general, passengers undergo one of three types of checkpoint screening, based on the Secure Flight determinations shown on boarding passes—standard screening, enhanced screening for selectees, and expedited screening for low-risk passengers. Standard screening typically includes passing through a walk-through metal detector or advanced imaging technology (AIT) machine, which identifies objects or anomalies on the outside of the body. Passengers may also be subject to a pat-down if they are screened by the AIT or walk-through metal detector and the equipment alarms. Standard screening also typically includes X-ray screening for the passenger's accessible property. During X-ray examination of the property, TSOs review the X-ray images, and if potential prohibited items are detected, the property will be manually inspected and screened with an explosives trace detection (ETD) machine to identify any traces of explosives material. Enhanced screening generally includes, in addition to the procedures applied during a typical standard screening experience, a pat-down and an explosives trace detection or physical search of the interior of the passenger's accessible property, electronics, and footwear. Expedited screening typically includes walk-through metal detector screening and X-ray screening of the passenger's accessible property, but unlike in standard screening, travelers do not have to, among other things, remove their belts, shoes, or light outerwear.
Checked Baggage Screening: TSA inspects passengers’ checked baggage to deter, detect, and prevent the transport of any unauthorized explosive, incendiary, or weapon onboard an aircraft. Checked baggage screening is accomplished through the use of explosives detection systems (EDS)—which use X-rays with computed tomography technology to automatically measure the physical characteristics of objects in baggage and trigger an alarm when objects that exhibit the physical characteristics of explosives are detected—and ETD machines, which use chemical analysis to manually detect traces of explosive materials’ vapors and residue. At airports with EDS, EDS machines are generally employed for primary screening of checked baggage while ETD machines are used for secondary screening to help resolve questions raised by EDS screening. At airports without EDS machines, ETDs are used as the primary method for screening checked baggage.
Explosives Detection Canines: TSA’s National Explosives Detection Canine Team Program trains, deploys, and certifies explosives detection canine teams in order to deter and detect the introduction of explosive devices into U.S. transportation systems. Each canine team consists of a handler paired with a canine trained in explosives detection. The canine handlers are generally either a state or local law enforcement officer (LEO) or a TSA employee. Two types of LEO teams and two types of TSA-based teams were trained to operate in the aviation environment during fiscal year 2015. First, TSA explosives detection canine teams patrol terminals, curbside areas, and other airport environments while TSA passenger screening canine teams primarily search for explosives odor on passengers in airport terminals. Second, LEO aviation teams patrol airport terminals, curbside areas, and sterile areas while LEO multimodal teams operate in the airport environment and screen air cargo but also operate in mass transit and maritime environments.
Behavior Detection and Analysis: TSA's BDA program employs behavior detection officers (BDO) at passenger screening checkpoints to identify potential threats by observing individuals for certain behavioral indicators—behaviors indicative of stress, fear, or deception. These behavioral indicators include, for example, the way an individual swallows or the degree to which an individual's eyes are open. According to TSA, these verbal and nonverbal cues and behaviors may indicate mal-intent, such as the intent to carry out a terrorist attack, and provide a means for TSA to identify passengers who may pose a risk to aviation security and refer them for additional screening. During this referral screening, if passengers exhibit additional such behaviors, or if other events occur, such as the discovery of a suspected fraudulent document, BDOs are to refer these passengers to a LEO for further investigation. In fiscal year 2015, the program deployed BDOs primarily in teams of two at passenger screening checkpoints. However, TSA officials reported that in the summer of 2016, the agency began taking steps to integrate BDOs into the TSO workforce by assigning BDOs to the travel document checker position and other positions at passenger screening checkpoints where they are able to observe and interact with passengers in the performance of their screening duties.
U.S. Federal Air Marshal Service: FAMS deploys federal air marshals on passenger flights to detect, deter, and defeat hostile acts targeting U.S. air carriers, airports, passengers, and crews. In accordance with ATSA, as amended, TSA is authorized to deploy federal air marshals on every passenger flight of a U.S. air carrier and is required to deploy federal air marshals on every such flight determined by the Secretary of Homeland Security to present high security risks, with nonstop, long-distance flights, such as those targeted on September 11, 2001, considered a priority. One of FAMS's top priorities is to deploy air marshals on flights that have a known or suspected terrorist on board. When FAMS assigns air marshals to cover such flights, it refers to these flights as special mission coverage assignments.
TSA’s System of Aviation Security Countermeasures
TSA uses a risk management strategy—referred to as "layers of security"—whereby TSA simultaneously deploys a mix of screening and other security countermeasures to deter and detect threats. TSA deploys countermeasures in varying combinations at each airport based on available resources, specific security concerns, and the airport's risk category, among other things. Since the terrorist attacks of September 11, 2001, TSA has added countermeasures and refined security procedures in response to specific attacks or threats—such as the liquid explosives plot in 2006. Figure 2 depicts examples of this progression, illustrating the addition or enhancement of certain TSA countermeasures over the years.
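The risk-reduction logic of layered security can be illustrated with a simple probability model. The following is a conceptual sketch only, assuming (unrealistically) that each layer detects a given threat independently of the others; under that assumption, the chance that at least one of n layers detects the threat is:

```latex
P(\text{detect}) = 1 - \prod_{i=1}^{n} (1 - p_i)
```

where p_i is the probability that layer i alone detects the threat. Under this illustrative assumption, three layers that each detect a threat only 50 percent of the time would jointly detect it about 87.5 percent of the time (1 - 0.5^3 = 0.875). Real countermeasures are neither independent nor uniformly effective, and this formula is not TSA's model; it simply illustrates why deploying countermeasures in combination can offset the weaknesses of any single layer.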
TSA Has Effectiveness Data on Some Countermeasures That Show Mixed Results, But Does Not Measure Deterrence
Data on the Effectiveness of Selected Countermeasures in Detecting and Disrupting Threats to Aviation Security Vary in Extent and Reliability
TSA collected fiscal year 2015 data on the effectiveness of four of the six countermeasures we selected—passenger prescreening, checkpoint screening, checked baggage screening, and explosives detection canines—in detecting or disrupting threats to passenger aviation security. TSA assesses this effectiveness differently for each of these four countermeasures. For example, TSA assessed the effectiveness of its passenger prescreening countermeasure in detecting passengers that may pose a threat to aviation security by measuring the percentage of airline passenger records vetted through its Secure Flight system and the number of high-risk passengers identified. In contrast, TSA assessed the effectiveness of its canine program in detecting and disrupting potential security threats by measuring canine-handler team performance during their annual certification tests as well as covert scenario-based tests called short notice assessments (SNA).
Some of the effectiveness data TSA has for fiscal year 2015 are of limited reliability, and TSA is taking steps to improve this information. For instance, we reported in September 2016 that checkpoint and checked baggage screening effectiveness data from TSA's Aviation Screening Assessment Program (ASAP) Advantage covert tests conducted in fiscal year 2015 were not reliable. Specifically, TSA found that TSOs performed more poorly in ASAP tests conducted by an independent contractor than in the same tests conducted by local TSA personnel at the same airports. This raised questions about the validity of ASAP tests conducted by local TSA personnel and indicated that TSA's fiscal year 2015 ASAP pass rates likely showed a higher level of TSO performance in screening for prohibited items than was actually the case. In response to this issue, and to provide ongoing quality assurance for field-based covert testing results, in April 2016, TSA began deploying headquarters-based covert testing teams in both the checkpoint and checked baggage screening environments. TSA officials stated that comparing the results of field- and headquarters-based tests provides TSA with a useful indication of whether the field-based covert testing results are valid.
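One simple way to formalize such a comparison (offered here as an illustrative sketch, not TSA's actual method) is a two-proportion z-test on field-based versus headquarters-based pass rates. The counts in the example are hypothetical.

```python
import math

def two_proportion_z(passes_a: int, n_a: int, passes_b: int, n_b: int) -> float:
    """Return the z-statistic comparing two covert-test pass rates."""
    p_a, p_b = passes_a / n_a, passes_b / n_b
    pooled = (passes_a + passes_b) / (n_a + n_b)   # pooled pass rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical example: local testers record 90 passes in 100 tests, while
# headquarters-based testers record 70 passes in 100 tests at the same airports.
z = two_proportion_z(90, 100, 70, 100)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a real difference at the 95% level
```

A large, consistent gap of this kind is what led TSA to question the validity of locally administered tests.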
In another example, we determined that fiscal year 2015 SNA data were not reliable for the purpose of reporting explosives detection canine teams’ covert testing pass rates. Specifically, in the course of our review we found that these data included duplicate entries and errors, and TSA officials stated that the results of an unknown number of SNAs may not have been recorded. Further, we found that TSA’s data collection process for SNA results that were recorded lacked procedures to ensure that manually entered data were accurate and complete. To address these data limitations, canine program officials stated that a new process was implemented in October 2016 to incorporate SNA results directly into the Canine Website System—a central electronic management database for various canine program data. According to these officials, this new process will better ensure that SNA data are complete, accurate, and reliable for use by program officials and TSA leadership in evaluating the effectiveness of the program. Appendix II presents specific fiscal year 2015 effectiveness data for the four selected countermeasures for which TSA had effectiveness information.
During fiscal year 2015, TSA did not collect data on the effectiveness of two of the six countermeasures we selected—FAMS and the BDA program—in detecting and disrupting threats to aviation security. For FAMS, TSA officials explained that it is very difficult to empirically measure the effectiveness of federal air marshals, and that the program has no efforts underway to collect such data. We discuss this issue later in this report.
For the BDA Program, we reported in November 2013 that TSA had not demonstrated that BDOs could consistently identify the behavioral indicators and, further, that decades of peer-reviewed, published research on the complexities associated with detecting deception through human observation also called into question the scientific basis for TSA’s behavior detection activities. As a result, we recommended that TSA limit future funding for the agency’s behavior detection activities until TSA can provide scientifically validated evidence that demonstrates that behavioral indicators can be used to identify passengers who may pose a threat to aviation security. DHS did not concur with the recommendation but has since reduced funding for the BDA Program and taken steps to begin to assess program effectiveness. For example, in 2014 TSA revised its list of behavioral indicators and contracted for a literature review to identify additional sources of evidence supporting these indicators. However, in July 2017, we reported that in our review of all 178 sources TSA cited in support of its revised list, we found that 98 percent (175 of 178) did not provide valid evidence applicable to the specific indicators TSA identified them as supporting. Based on our findings, we continue to believe that TSA should limit future funding for the agency’s behavior detection activities until TSA can provide valid evidence that demonstrates that behavioral indicators can be used to identify passengers who may pose a threat to aviation security, as we recommended in our November 2013 report.
Table 1 identifies whether TSA has information on the effectiveness of the six selected countermeasures in detecting and disrupting threats to aviation security during fiscal year 2015, the data limitations we identified, and steps TSA officials have taken to improve this effectiveness information.
TSA Effectiveness Data on Selected Countermeasures Indicate Mixed Results
Some of TSA’s fiscal year 2015 data indicate countermeasure effectiveness while other data highlight vulnerabilities in the agency’s ability to detect and disrupt threats to aviation security. For example, for the passenger prescreening countermeasure, TSA officials reported that in fiscal year 2015, TSA’s Secure Flight program vetted 100 percent of the more than 816 million records of passengers who flew into, out of, over, or within the United States, and on U.S.-flagged aircraft operating internationally point-to-point. In addition, for the checkpoint and checked baggage countermeasures, TSA uses Annual Proficiency Reviews (APR) to evaluate TSOs’ skill in performing various checkpoint and checked baggage screening functions, such as pat downs of passengers, bag searches, and use of explosives detection equipment. In 2015, the average rate at which TSOs passed all APR component tests on the first try was nearly 95 percent.
On the other hand, some fiscal year 2015 effectiveness data indicate vulnerabilities. For example, results from covert testing conducted by TSA’s OOI during fiscal year 2015 indicate vulnerabilities in the checkpoint and checked baggage screening systems. Specific details about OOI’s test results are omitted because the information is classified.
TSA Does Not Measure Deterrence for Any of Its Aviation Security Countermeasures
While TSA has methods to measure its effectiveness in detecting and disrupting threats, the agency has no such methods to measure progress toward its goal of deterring attacks on the U.S. aviation system. TSA officials have cited the deterrent effect of various countermeasures—including FAMS, canine teams, BDOs, and AIT machines—but the agency does not have information on the deterrent effect of any of these countermeasures. For example, TSA officials explained that canine teams that patrol airports—searching unattended bags and unattended vehicles, among other activities—provide a deterrent presence at airports, but officials noted that they do not have any data on these canines' deterrent effect. Most notably, with regard to FAMS, TSA officials explained that one of the primary security contributions and a key aspect of FAMS's mission is to deter attacks. However, FAMS officials explained that they do not have information on FAMS's deterrent effect because it is difficult to model, measure, and quantify. TSA officials in multiple offices explained that this difficulty applies not just to FAMS, but also to other TSA countermeasures with an intended deterrent effect.
OMB and GAO have acknowledged the difficulty in measuring the effect of deterrence programs, but have identified options to overcome these challenges. OMB guidance recognizes that programs with a deterrence or prevention focus can be difficult to measure and suggests that proxy measures that are closely tied to the outcome can be used to determine how well a deterrence process is functioning. We have similarly acknowledged such methodological challenges and identified alternate evaluation methods that could be helpful to agencies, such as using simulations. TSA could, for example, develop theoretical game scenarios and have testers simulate would-be attackers’ decisions when attempting to carry out an attack on the aviation system. Officials with CREATE—a DHS-funded research center—told us that they have conducted some conceptual research on the value of deterrence and believe it would be possible to assess TSA’s deterrent effect by, for example, allowing covert testers to choose their method of attack. Such an assessment could provide TSA with insights regarding which countermeasures a would-be attacker might choose to avoid in various scenarios.
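To make the simulation idea concrete, the sketch below is a purely conceptual illustration of the attacker-choice approach CREATE described; the attack paths and detection probabilities are hypothetical assumptions, not TSA data or CREATE's actual model.

```python
import random

# Hypothetical detection probabilities for alternative attack paths;
# these names and values are illustrative assumptions, not TSA data.
ATTACK_PATHS = {
    "checkpoint with canine team present": 0.90,
    "checkpoint without canine team":      0.75,
    "checked baggage":                     0.80,
}

def simulate_attacker_choices(trials: int = 10_000, noise: float = 0.05) -> dict:
    """Count how often a rational attacker, with imperfect knowledge of
    detection rates, would choose each attack path."""
    choices = dict.fromkeys(ATTACK_PATHS, 0)
    for _ in range(trials):
        # The attacker perceives each path's detection rate with some error.
        perceived = {path: p + random.gauss(0, noise)
                     for path, p in ATTACK_PATHS.items()}
        # A rational attacker picks the path perceived as least likely to be
        # detected; paths that are rarely chosen indicate deterrence.
        choices[min(perceived, key=perceived.get)] += 1
    return choices

print(simulate_attacker_choices())
```

In such a model, the share of simulated attackers who avoid a given countermeasure serves as a rough proxy for its deterrent effect, the kind of proxy measure OMB guidance suggests for hard-to-measure prevention programs.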
In a March 2016 report prepared for TSA, CREATE analyzed a prospective risk-based security initiative TSA had begun developing and highlighted the need for further research into deterrence, including the need to model the economic value of deterrence. CREATE officials explained that they highlighted this issue because, in a resource-constrained environment, optimizing TSA's deterrent effect may be a more cost-effective solution to aviation security threats than focusing solely on detection and interdiction. A senior official with CPER stated that the office believes there is value in pursuing further research regarding deterrence and noted that the office had included a request for funding to study deterrence in its fiscal year 2017 expenditure plan, but the request was on hold due to limited funding.
In accordance with GPRA, as updated by the GPRA Modernization Act, agencies are to establish performance measures to assess progress toward goals. Measuring performance allows organizations to track the progress they are making toward their goals and gives managers critical information on which to base decisions for improving their progress. For example, they can use performance information when developing strategies, allocating resources, identifying problems, and taking corrective action.
TSA officials told us that developing a means to assess TSA's deterrent effect would be difficult and require a multi-year effort, but that having such a means would be helpful. For example, TSA's prior Chief Risk Officer told us that TSA's countermeasures deter nefarious actors from attempting an attack on an aircraft, but that better understanding this concept will be critical to TSA in its transition to a more holistic, system-wide approach to aviation security. Additionally, a senior ORCA official explained that a better understanding of the deterrent effect of TSA countermeasures could help TSA optimize use of its resources. For example, this official noted that there may be a point at which adding additional federal air marshals has diminishing returns in terms of deterrence, and that better understanding FAMS's deterrent effect could help TSA identify that point. This official further stated that developing a method to assess deterrence for this purpose would be challenging but feasible.
In the absence of any systematic or methodological approach to assessing TSA's deterrent value, TSA officials have relied on theories of causality and limited evidence available from U.S. intelligence sources. For example, FAMS officials cited the fact that there has not been a hijacking on a U.S. carrier since 2002 as evidence of FAMS's deterrent effect, but had no specific evidence to support FAMS's contribution to this outcome. In another example, ORCA officials noted that a 2014 article in an online magazine published by al-Qaeda encouraging would-be attackers to avoid airports with a certain countermeasure provided evidence of its deterrent value. These observations may provide limited insight into TSA's deterrent effect, but developing a method to systematically assess the deterrent effect of TSA's security efforts would better position TSA to improve progress toward its goal—deterring attacks on the U.S. aviation system.
TSA Can Compare the Effectiveness of Certain Combinations of Aviation Security Countermeasures, but Does Not Systematically Analyze Cost and Effectiveness Tradeoffs Across All Countermeasures
TSA Has a Tool to Assess the Security Effectiveness of Alternate Combinations of Some Countermeasures
In 2014, TSA’s ORCA began using a Risk and Trade Space Portfolio Analysis Tool (RTSPA) to analyze the security effectiveness of alternate combinations of some aviation security countermeasures for the purpose of informing TSA acquisition and deployment decisions. RTSPA provides a means for TSA to model its security effectiveness in different scenarios. For example, the tool could be used to compare the security effectiveness of a theoretical airport screening checkpoint with canines to that of a checkpoint modeled without canines.
According to ORCA officials, they developed RTSPA to assess security effectiveness tradeoffs among countermeasures that they believed would most benefit from the detailed quantitative analyses that the tool provides, rather than across TSA's entire system of aviation security countermeasures. Specifically, TSA officials explained that RTSPA is designed to analyze tradeoffs among checkpoint screening countermeasures—including canine teams and BDOs—and checked baggage screening, but was not developed to analyze tradeoffs among other countermeasures TSA deploys. For example, ORCA officials told us that the tool was not developed to analyze crew vetting or FAMS because understanding the security tradeoffs of these countermeasures, while important, does not require the use of a resource-intensive tool like RTSPA. In addition, RTSPA does not account for the full system of aviation security countermeasures, including countermeasures such as hardened cockpit doors and Federal Flight Deck Officers—flight crew members authorized and trained to use firearms. ORCA officials further explained that in 2014, when initially developing the tool, they also developed comparable countermeasure cost data to allow for cost-effectiveness comparisons among countermeasures. However, ORCA officials report that they subsequently stopped analyzing cost tradeoffs because they believed other TSA offices could conduct such analysis.
In the last two years, TSA officials have used the results of RTSPA analyses to inform some resource tradeoff decisions. For example, ORCA officials told us that in 2015, TSA leadership used the results of an RTSPA analysis when considering options for improving overall security effectiveness at airports that did not have AIT machines. Specifically, TSA used RTSPA to consider the level of risk and potential risk mitigation value of alternative security measures at these airports. TSA officials report that this RTSPA analysis contributed to TSA's decision to deploy 146 additional AIT machines to such airports. In another example, ORCA officials noted that in early 2017, they used RTSPA to analyze options for resolving checked baggage alarms, taking into consideration the relative risks of military-grade explosive materials and homemade explosive devices.
TSA officials stated that their use of RTSPA has been limited to date because it is still a relatively new tool. However, ORCA officials told us that they expect use of the tool’s analysis to grow as the agency increasingly seeks to use analytic tools to inform acquisition and deployment decisions. As such, ORCA officials plan to update RTSPA and expand its analytical capabilities.
TSA Has Not Systematically Analyzed Potential Cost and Effectiveness Tradeoffs Across the Entire System of Aviation Security Countermeasures
TSA does not have any efforts underway to systematically evaluate the potential cost and effectiveness tradeoffs across the full aviation security system. Although TSA's use of RTSPA to identify effectiveness tradeoffs among selected countermeasures provides some such information, the tool's analyses are limited and the tool is not designed to offer a system-wide view of effectiveness. When we asked TSA's prior Chief of Staff about any such efforts, he stated that TSA had not systematically evaluated cost and effectiveness tradeoffs because TSA's aviation security system is constantly evolving to meet emerging threats, and assessing a system in flux is challenging. However, he told us that such an analysis would be helpful.
DHS policy and TSA's strategic plan call for the systematic evaluation of the costs and effectiveness of TSA's chosen mix of aviation security countermeasures. Specifically, DHS's 2010 Policy for Integrated Risk Management calls on components, including TSA, to evaluate the performance of risk management strategies they decide to implement. In TSA's case, its chosen mix of aviation security countermeasures represents its current risk management strategy. The policy further establishes that components should develop and analyze alternative strategies to manage risks by considering the projected costs, benefits, and ramifications of each alternative. In addition, TSA's current Strategic Plan establishes the goal of increasing efficiency and operational effectiveness through disciplined processes and dynamic resource management. One of the stated outcomes associated with this goal is the ability to effectively optimize resource allocation to strike a balance of costs, benefits, and risk. Further, it was the stated objective of ORCA's predecessor—the Office of Security Capabilities (OSC)—to develop and implement a comprehensive tradeoff analysis across the security system to inform investment decisions. OSC's strategic plan further states that such an analysis would include a full set of strategic choices TSA should consider when determining how to respond to a threat or making an investment decision, helping to determine which alternatives provide the greatest risk mitigation value for each dollar spent.
A senior ORCA official explained that while there is a need for system-wide tradeoff analyses, RTSPA alone may not be the right tool for the job. This official explained that TSA may not require detailed quantitative analyses from a resource-intensive tool such as RTSPA to understand the effectiveness tradeoffs among all aviation security countermeasures, and that a portfolio of tools of varying precision and depth could be used to obtain a system-wide view. This official noted that developing TSA's capability for system-wide tradeoff analysis would be challenging and require a multi-year effort. However, RTSPA could serve as a useful starting place for a more comprehensive system-wide analysis. For example, TSA could build upon ORCA's past efforts to analyze the comparative cost effectiveness of countermeasures and its experience isolating the security effectiveness contributions of individual countermeasures.
Without a systematic analysis of the cost and effectiveness tradeoffs across aviation security countermeasures, TSA is limited in its ability to achieve its stated goal of optimizing resource allocation and striking a balance of costs, effectiveness, and risk across the system. In an environment of constrained resources and continuing threats to aviation security, producing such analysis could assist TSA leadership in targeting its limited resources to achieve the greatest system-wide risk mitigation value for each dollar spent.
Conclusions
Since the terrorist attacks of September 11, 2001, TSA has spent billions of dollars on a range of aviation security programs with the goal of detecting, disrupting, and deterring threats. However, TSA does not have a complete understanding of the contributions these programs are making to this goal. Specifically, TSA has some information on how well it can detect and disrupt threats and is taking steps to improve this information, but does not have information on its ability to deter attacks—a key component of TSA's goal. For example, in fiscal year 2015, TSA spent approximately $800 million on FAMS—a program with a focus on deterring attacks on aircraft—yet the agency has no information on its effectiveness in doing so. While we and OMB have acknowledged the difficulty in measuring deterrence, we have also suggested options to overcome these challenges. Further, in accordance with GPRA, as updated by the GPRA Modernization Act, agencies are to assess the effectiveness of their programs, and leading practices established in GAO's prior work stress the importance of agencies tracking progress toward goals. Developing a method to assess the deterrent effect of aviation security countermeasures would better position TSA to improve progress toward a key goal—deterring attacks on the U.S. aviation system.
Since September 11, 2001, TSA has added countermeasures and refined security procedures in response to specific attacks or threats, but has not systematically evaluated its chosen combination of aviation security countermeasures as called for in DHS policy and TSA's strategic plan. Specifically, TSA does not have any efforts underway to evaluate the potential cost and effectiveness tradeoffs across the full aviation security system because, according to a senior TSA official, the aviation security system is constantly evolving in response to emerging threats, and assessing a system in flux is challenging. However, it is using a model—known as RTSPA—that could serve as a useful starting place for a more comprehensive system-wide analysis. Developing and implementing a means to systematically evaluate the potential cost and effectiveness tradeoffs across aviation security countermeasures would better position TSA to achieve its stated goal of optimizing resource allocation and striking a balance of costs, effectiveness, and risk. In an environment of constrained resources and continuing threats to aviation security, producing such an analysis could assist TSA leadership in targeting its limited resources to achieve the greatest system-wide risk mitigation value for each dollar spent.
We recognize that developing these analytical methods will be a difficult undertaking that may take years to achieve. Nonetheless, as TSA improves the reliability and extent of its countermeasure effectiveness data, the agency will also improve its ability to perform system-wide cost and effectiveness tradeoff analyses. In this high-threat environment, it is essential that TSA determine how to allocate its finite resources to best position the agency to detect, disrupt, and deter threats to aviation security.
Recommendations for Executive Action
We are making the following two recommendations to TSA:
1. The Administrator of TSA should explore and pursue methods to assess the deterrent effect of TSA's passenger aviation security countermeasures; such an effort should identify FAMS—a countermeasure with a focus on deterring threats—as a top priority to address. (Recommendation 1)
2. The Administrator of TSA should systematically evaluate the potential cost and effectiveness tradeoffs across countermeasures, as TSA improves the reliability and extent of its information on the effectiveness of aviation security countermeasures. (Recommendation 2)
Agency Comments and Our Evaluation
We provided a draft of this report to DHS for review and comment. The department's letter is included in appendix III. In its comments, DHS concurred with both of our recommendations. DHS also provided technical comments, which we incorporated as appropriate.
With regard to our first recommendation that TSA explore and pursue methods to assess the deterrent effect of its passenger aviation security countermeasures, DHS concurred, noting that this may require proxy or output measures and assumptions about potential adversary choices. DHS also concurred with our second recommendation that TSA systematically evaluate the potential cost and effectiveness tradeoffs across countermeasures. In its comments, DHS stated that TSA will continue efforts to improve both its analysis of information related to security effectiveness and its cost information, leading to better informed cost-benefit decisions for individual countermeasures. To address the intent of our recommendation, TSA will need to evaluate the costs and effectiveness of individual aviation security countermeasures and then use this information to systematically evaluate the potential cost and effectiveness tradeoffs across countermeasures. We will continue to monitor TSA’s efforts in addressing these recommendations.
We are sending copies of this report to the appropriate congressional committees, the Secretary of the Department of Homeland Security, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-7141 or [email protected]. GAO staff who made key contributions to this report are listed in appendix IV.
Appendix I: Transportation Security Administration (TSA) Information on the Direct Costs of Selected Countermeasures
As part of this review, we analyzed TSA's fiscal year 2015 cost data for six selected aviation security countermeasures—passenger prescreening (Secure Flight), checkpoint screening, checked baggage screening, explosives detection canines, the Behavior Detection and Analysis (BDA) program, and the U.S. Federal Air Marshal Service (FAMS). We selected these six passenger aviation security countermeasures because they involved direct interaction with passengers, their belongings, or their personal information and are largely operated and funded by TSA. We determined that TSA can generally identify the fiscal year 2015 direct costs to TSA of the six passenger aviation security countermeasures that we reviewed, as shown in Table 2. TSA generally does not budget or track costs by countermeasure, but is able to identify most direct costs from its financial management system. For those passenger aviation security countermeasures that align with TSA's budget categories, such as FAMS and passenger prescreening, TSA can run a single report to obtain the direct cost information. However, for those countermeasures that do not align with TSA's budget categories, such as checkpoint screening and checked baggage screening, TSA must run multiple reports and use its staffing model to estimate the direct costs.
Appendix II: Fiscal Year 2015 Effectiveness Data for Selected Passenger Aviation Security Countermeasures
The Transportation Security Administration (TSA) collected fiscal year 2015 data on the effectiveness of four of the six countermeasures we selected—passenger prescreening, checkpoint screening, checked baggage screening, and explosives detection canines. These data show mixed results with some data indicating TSA countermeasure effectiveness and other data highlighting vulnerabilities. Below, we describe what TSA knows about the fiscal year 2015 effectiveness of these four countermeasures in detecting or disrupting threats to passenger aviation security.
Overview of Passenger Prescreening
TSA uses its Secure Flight prescreening program to match passenger information against federal government watch lists and other information to assign each passenger to one of three risk categories—high risk, low risk, or unknown risk—that either corresponds to the level of screening they will experience at the checkpoint or may deny them an opportunity to board the aircraft. Since TSA began implementing Secure Flight in 2009, the passenger prescreening program has changed from a program that identifies passengers as high risk solely by matching them against federal government watch lists—for example, the No Fly List, comprised of individuals who should be precluded from boarding an aircraft, and the Selectee List, comprised of individuals who should receive enhanced screening at the passenger security checkpoint—to one that uses additional lists and risk-based criteria to assign passengers to a risk category. Specifically, Secure Flight now identifies passengers as high risk if they are matched to watch lists of known or suspected terrorists or other lists developed using certain high-risk criteria and as low risk if they are deemed eligible for expedited screening through TSA Pre✓™—a 2011 initiative to preapprove passengers for expedited screening—or through the application of low-risk rules. Secure Flight identifies passengers as unknown risk if they do not fall within the other two risk categories.
To separate passengers into these risk categories, TSA utilizes lists in addition to the No Fly and Selectee Lists, and TSA has adapted the Secure Flight system to perform risk assessments, a system functionality that is distinct from both watch list matching and matching against lists of known travelers. At airport checkpoints, those passengers identified as high risk receive enhanced screening, passengers identified as low risk are eligible for expedited screening, and passengers identified as unknown risk generally receive standard screening. Passengers matched to the No Fly List or the Centers for Disease Control and Prevention’s Do Not Board List—a list which includes individuals who pose a significant health risk to other travelers and are not allowed to fly—are considered highest risk, and thus are not to receive boarding passes, and should not be allowed entry into the sterile area. Figure 3 illustrates this passenger prescreening process.
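The categorization logic described above can be summarized in a short sketch. This is a simplified, hypothetical rendering of the decision flow; the flag names are invented for illustration, and the real Secure Flight system matches passenger data against watch lists and risk-based rules rather than reading boolean flags.

```python
def screening_outcome(passenger: dict) -> str:
    """Map a simplified Secure Flight vetting result to a checkpoint outcome.

    `passenger` is a hypothetical record of boolean list-match flags; the
    actual system matches names, dates of birth, and other data elements.
    """
    if passenger.get("no_fly") or passenger.get("do_not_board"):
        # Highest risk: no boarding pass, no sterile-area access.
        return "denied boarding"
    if passenger.get("selectee") or passenger.get("high_risk_rule"):
        return "enhanced screening"      # high risk
    if passenger.get("tsa_precheck") or passenger.get("low_risk_rule"):
        return "expedited screening"     # low risk
    return "standard screening"          # unknown risk

print(screening_outcome({"tsa_precheck": True}))  # -> expedited screening
```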
The Effectiveness of Passenger Prescreening in Fiscal Year 2015
TSA officials reported that the percentage of passengers vetted and the number of high-risk passengers identified by Secure Flight demonstrate the effectiveness of this passenger prescreening program. Specifically, TSA data indicate that in fiscal year 2015, Secure Flight vetted 100 percent of the over 816 million records submitted for passengers who flew into, out of, over, or within the United States, and on U.S.-flagged aircraft operating internationally point-to-point. Of these, TSA identified 15,383 (0.002 percent of passenger records vetted) as confirmed matches to watch lists. Specifically, in fiscal year 2015, TSA identified 9,639 passengers as expanded selectees, 5,019 passengers on the Selectee List, and 725 passengers on the No Fly List.
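These figures are internally consistent, as a quick arithmetic check (using the approximate 816 million total) shows:

```latex
9{,}639 + 5{,}019 + 725 = 15{,}383,
\qquad
\frac{15{,}383}{816{,}000{,}000} \approx 1.9 \times 10^{-5} \approx 0.002\%
```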
In September 2014, we reported that TSA collects and regularly reviews data on the number of passengers identified by the Secure Flight system as potential matches to the No Fly, Selectee, and Expanded Selectee Lists. However, we found that TSA did not measure the extent to which Secure Flight was missing passengers who were actual matches to these lists—false negatives. We recommended that TSA establish such measures. In response, in August 2016, TSA contracted with a third party to conduct an independent assessment of the effectiveness of the Secure Flight automated vetting system including whether Secure Flight identifies the matches it should (i.e., how well the system minimizes false negatives). TSA officials expect this assessment to be complete at the end of calendar year 2017.
Overview of Checkpoint Screening
TSA ensures that all individuals and accessible property are screened as part of its checkpoint screening process to deter and prevent the carriage of any unauthorized explosive, incendiary, weapon, or other prohibited items on board an aircraft or into the airport sterile area—in general, an area of an airport that provides passengers access to boarding aircraft and to which access is controlled through the screening of persons and property. Ordinarily, screening of accessible property at the screening checkpoint begins when an individual places accessible property on the X-ray conveyor belt or hands accessible property to a Transportation Security Officer (TSO). As shown in figure 4, TSOs then review images of the property running through the X-ray machine and look for signs of prohibited items. If a TSO identifies a potential prohibited item, the accessible property will be manually inspected and screened with an explosives trace detection (ETD) machine to identify any traces of explosives material. The passengers themselves are typically screened via a walk-through metal detector or an advanced imaging technology (AIT) machine—often referred to as a full-body scanner—and passengers generally have the option to request screening by a pat down if they do not wish to be screened by these technologies. Passengers will also be subject to a pat down if they are screened by a walk-through metal detector or the AIT and the equipment alarms (in order to resolve the alarm).
TSOs use several screening technologies in order to screen passengers and carry-on bags for prohibited items. For more information on the specific screening technologies deployed at the checkpoint in fiscal year 2015, see Table 3.
The Effectiveness of Checkpoint Screening in Fiscal Year 2015
In fiscal year 2015, TSA collected data on the effectiveness of checkpoint screening by testing TSOs, screening technology (e.g., the AIT and X-ray), and the checkpoint screening system as a whole (i.e., the combination of TSOs and technology).
Checkpoint Screening TSOs
TSA collected fiscal year 2015 data on the effectiveness of its TSO workforce in detecting or disrupting threats to aviation security at the checkpoint in three ways: (1) annual proficiency review (APR) of TSOs, (2) threat-image projection (TIP) testing, and (3) Aviation Screening Assessment Program (ASAP) Advantage covert tests.
Annual Proficiency Reviews. APRs evaluate TSOs’ skill in performing the various checkpoint and checked baggage screening functions and all TSOs must successfully complete the required APR component tests related to their job function on an annual basis as a condition of employment with TSA in their capacity as a screener. Components of the APR focused on checkpoint screening specifically included tests that evaluate TSOs’ ability to identify prohibited items on an X-ray machine and tests that evaluate whether TSOs can perform various practical skills such as pat downs, bag searches, and use of explosive trace detection technology.
In calendar year 2015, TSA conducted roughly 150,000 APR component tests focused on checkpoint screening. Table 4 provides descriptions of these component tests.
Threat Image Projection (TIP) Testing. TSA’s TIP testing system displays fictional threat items, such as guns or explosives, onto X-ray images of actual passengers’ carry-on bags to test TSOs’ ability to identify prohibited items in a live operational environment. TSOs operating the X-ray machine at the checkpoint are monitored to see if they positively identify the threat image and call for the bag to be searched. TSA officials report that they use TIP images on a daily basis to monitor TSOs’ ability to identify prohibited items, aid in keeping them focused and attentive, and keep their skills sharp in identifying items they do not routinely see. TSA requires airport personnel to conduct TIP tests and upload monthly results data into TSA’s national database.
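Conceptually, TIP performance reduces to an identification (hit) rate per airport or per TSO. The sketch below, with hypothetical log entries and field names, shows how monthly results might be aggregated:

```python
from collections import defaultdict

# Hypothetical TIP log: one (airport, identified) record per projected image.
tip_log = [
    ("ABC", True), ("ABC", False), ("ABC", True),
    ("XYZ", True), ("XYZ", True),
]

def tip_hit_rates(log: list) -> dict:
    """Compute the share of projected threat images identified at each airport."""
    hits, totals = defaultdict(int), defaultdict(int)
    for airport, identified in log:
        totals[airport] += 1
        hits[airport] += int(identified)
    return {airport: hits[airport] / totals[airport] for airport in totals}

print(tip_hit_rates(tip_log))  # {'ABC': 0.666..., 'XYZ': 1.0}
```

A national database of such rates also makes reporting gaps visible: an airport absent from the log for a month has submitted no TIP data at all, which is the completeness problem discussed next.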
In September 2016, we reported that TSA's TIP data from fiscal year 2009 through 2014 were incomplete, as TSA could not provide TIP scores for every airport during this period. Specifically, during fiscal year 2013, nearly 14 percent of airports failed to report any TIP data. TSA officials also acknowledged that, in addition to the airports that did not report any TIP data for a year or more at a time, other airports may have reported only partial TIP results data during this same time frame. We recommended that TSA officials at individual airports submit complete TIP results to the TSA national database as required and, further, that TSA analyze national TIP data for trends that could inform training needs and improve future training and TSO performance assessments. TSA concurred with our recommendations and is taking steps to address them. Specifically, a new TIP Operations Directive was implemented in October 2016 to disseminate procedures for performance data collection and submission to improve TIP data. According to agency officials, the number of non-compliant airports decreased during fiscal year 2016. However, since these improvements occurred during fiscal years 2016 and 2017, fiscal year 2015 TIP data remained incomplete and unreliable for the purposes of assessing TSOs' effectiveness at identifying TIP images. Therefore, we do not present fiscal year 2015 TIP test results in this report.
Aviation Screening Assessment Program (ASAP) Advantage Testing. To measure TSO performance nationwide in fiscal year 2015, TSA used standardized ASAP covert tests conducted by local TSA testers at each airport. ASAP tests focused on checkpoint screening were designed to assess the operational effectiveness of TSOs in identifying and preventing prohibited items, such as knives, guns, or simulated improvised explosive devices, from being taken through the checkpoint by testers. In fiscal year 2015, TSA conducted 5,213 ASAP covert tests on checkpoint screening at 170 airports.
TSA hired a contractor in fiscal year 2015 to independently conduct ASAP standard scenario tests at 40 airports to assess the validity of TSA testing results at those airports. When comparing the contractor’s results to the local TSA testers’ results, TSA found moderate to significant differences in the two sets of test results for most of the 40 airports. According to TSA officials, TSOs generally performed more poorly in the ASAP tests conducted by the independent contractor personnel when compared to the ASAP testing conducted by the local TSA personnel—indicating that pass rates for tests conducted by local TSA personnel were likely showing a higher level of TSO performance than was actually the case. TSA officials reported that the differences in test results have led them to question the extent to which the ASAP tests accurately measure TSO performance. As a result, we do not present the fiscal year 2015 ASAP test results in this report.
To address this validity issue, in April 2016, TSA officials reported that they began using both headquarters-based covert testing teams composed of headquarters-based TSA employees and field-based covert testing teams composed of local testers in both the checkpoint and checked baggage screening environments at all airports. Both headquarters-based and field-based teams conduct the same scenario-based covert tests that were previously conducted as part of ASAP testing. TSA officials stated that comparing the results of these separate tests has provided TSA with a way to gauge the validity of its test results.
Checkpoint Screening Technology
TSA officials reported that the effectiveness of checkpoint screening technology in fiscal year 2015 is best described by each type of machine’s detection standard—the specified rate of detection each technology is required to achieve in identifying explosives or prohibited items. Specific details about TSA’s detection standards are omitted because the information is classified. Prior to acquiring and deploying a potential new screening technology, TSA conducts testing to evaluate whether potential technologies can effectively achieve the detection standards required by TSA, among other things.
Once technology is deployed in the airport environment, TSA policy requires at least daily calibration testing of each individual piece of technology deployed at the checkpoint—AIT machines, walk-through metal detectors, ETDs, and X-ray machines, among others—to ensure the technology is functioning properly and able to achieve the required detection standards. For example, each day when the screening checkpoint opens, TSOs must ensure that AIT machines successfully complete an image quality verification, a calibration test, and an operational test process before they are cleared for screening operations. TSA policy requires that TSOs record the results of these tests in logbooks and, further, that any screening equipment that does not pass daily testing be immediately taken out of service.
TSA’s Checkpoint Screening System as a Whole
In fiscal year 2015, TSA collected data on the effectiveness of its checkpoint screening system as a whole—including both screening technology and TSO performance—through Red Team covert testing conducted by TSA's Office of Inspection (OOI). In fiscal year 2015, TSA conducted numerous Red Team covert tests on checkpoint screening at a random sample of U.S. airports. During passenger checkpoint testing, each team of inspectors carries threat items, such as simulated explosive devices, through the passenger checkpoint. If the TSO identifies the threat item during screening, the inspector identifies himself or herself to the TSO and the test is considered a pass. If the TSO does not identify the threat item, the inspector proceeds to the sterile area of the airport and the test is considered a failure. According to TSA, these tests are designed to approximate techniques that terrorists may use in order to identify vulnerabilities in the people, processes, and technologies that comprise the aviation security system. In addition to OOI's Red Team testing, in fiscal year 2015 the Department of Homeland Security (DHS) Office of Inspector General (OIG) also conducted covert tests of certain TSA checkpoint operations at eight U.S. airports that use AIT machines to screen passengers. According to the DHS OIG, the objective of the tests was to determine the effectiveness of TSA's AIT, automated target recognition software (which displays a box around anomalies on a generic outline of a body), and checkpoint screener performance in identifying and resolving anomalies and potential security threats at airport checkpoints. The results of both the OOI Red Team and the DHS OIG's covert tests are omitted because the information is classified.
Overview of Checked Baggage Screening
TSA inspects passengers' checked baggage to deter, detect, and prevent the transport of any unauthorized explosive, incendiary, or weapon onboard an aircraft. Checked baggage screening is accomplished through the use of explosives detection systems (EDS)—which use X-rays with computed tomography technology to automatically measure the physical characteristics of objects in baggage and trigger an alarm when objects that exhibit the physical characteristics of explosives are detected—and explosives trace detection (ETD) machines, in which TSOs swab baggage and use chemical analysis to manually detect traces of explosive materials' vapors and residue.
Generally, a checked baggage screening system at airports with EDS includes a three-level screening process. First, EDS machines perform automated screening. If the EDS machine determines that a checked bag requires additional screening, it sends an alarm to a TSO who performs a secondary inspection known as On-Screen Resolution by reviewing an image of the contents of the bag on a computer monitor. If the TSO cannot resolve the alarm using on-screen resolution tools and determines a physical bag search is necessary, the bag goes to the Checked Baggage Resolution Area where a TSO performs a manual inspection of the bag assisted by an ETD machine.
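A minimal sketch of this three-level flow, using invented stub functions in place of the actual equipment and TSO judgments:

```python
def eds_alarm(bag: dict) -> bool:
    """Level 1: automated EDS scan; True means the machine alarmed."""
    return bag.get("alarms_eds", False)

def resolved_on_screen(bag: dict) -> bool:
    """Level 2: TSO On-Screen Resolution; True means the alarm was cleared."""
    return bag.get("clears_on_screen", False)

def manual_search_with_etd(bag: dict) -> str:
    """Level 3: physical search in the Checked Baggage Resolution Area,
    assisted by an ETD machine."""
    return "not cleared" if bag.get("etd_alarm", False) else "cleared after search"

def screen_checked_bag(bag: dict) -> str:
    """Simplified three-level checked baggage screening at an EDS airport."""
    if not eds_alarm(bag):
        return "cleared"
    if resolved_on_screen(bag):
        return "cleared"
    return manual_search_with_etd(bag)

print(screen_checked_bag({"alarms_eds": True, "clears_on_screen": True}))  # cleared
```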
At the end of fiscal year 2015, TSA had 1,717 EDS machines deployed at 263 airports. At airports without EDS, which are typically smaller airports, ETD machines are the primary method for manually screening checked baggage. At the end of fiscal year 2015, TSA had 2,291 ETD machines deployed at all 437 commercial (i.e., TSA-regulated) airports for primary or secondary screening of checked baggage.
TSA officials estimate that 25 percent of total TSO time is spent on checked baggage screening; in fiscal year 2015, this equated to the full-time equivalent of approximately 11,000 of TSA's roughly 45,000 TSOs conducting checked baggage screening.
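That estimate follows from simple arithmetic on the workforce figures, with the product rounded down in the report's text:

```latex
0.25 \times 45{,}000 = 11{,}250 \approx 11{,}000 \ \text{full-time equivalent TSOs}
```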
The Effectiveness of Checked Baggage Screening in Fiscal Year 2015
In fiscal year 2015, TSA collected data on the effectiveness of its checked baggage screening by testing screening personnel (i.e., TSOs), screening technology (EDS and ETD machines), and the checked baggage screening system as a whole (i.e., the combination of TSOs and technology).
Checked Baggage Screening TSOs
In fiscal year 2015, TSA collected data on the effectiveness of its TSO workforce in detecting or disrupting threats to aviation security in the checked baggage environment through its APR evaluations and ASAP Advantage covert tests.
Annual Proficiency Reviews (APR). As discussed above, APRs evaluate TSOs’ skill in performing the various checkpoint and checked baggage screening functions. Components of the APR focused on checked baggage screening include tests that evaluate TSOs’ ability to resolve EDS machine alarms using the appropriate tools and practical skills such as bag searches and the use of ETD technology.
In calendar year 2015, TSA conducted nearly 35,000 APR component tests specific to the checked baggage screening environment. Table 5 provides descriptions of these component tests.
Aviation Screening Assessment Program (ASAP) Advantage. In fiscal year 2015, TSA used standardized ASAP covert tests conducted by local TSA testers at each airport to measure TSO performance in both the checkpoint and checked baggage environments. Tests focused on checked baggage screening were designed to assess the operational effectiveness of TSOs in identifying and preventing a threat object concealed in a checked bag from being cleared for loading onto a passenger aircraft. In fiscal year 2015, TSA conducted 1,859 ASAP covert tests on checked baggage screening at 225 airports.
TSA began deploying headquarters-based covert testing teams in fiscal year 2016 to provide a means to validate the results of covert tests conducted by local TSA testers for both checkpoint and checked baggage screening. However, unlike in the checkpoint environment, the contractor did not perform ASAP covert testing on checked baggage screening during fiscal year 2015. When we compared fiscal year 2016 headquarters-based and field-based pass rates for covert testing of checked baggage screening, we found discrepancies that indicate covert tests conducted by local field-based TSA testers on checked baggage may not be reliable in accurately portraying TSO performance. Additionally, TSA officials stated that they cannot be certain these data are reliable. As a result, we do not present ASAP Advantage data in this report.
Checked Baggage Screening Technology and TSA’s Checked Baggage Screening System as a Whole
As with checkpoint screening technology discussed above, TSA officials reported that in fiscal year 2015, technology deployed at airports for checked baggage screening was calibrated and tested daily to ensure that it was operating as intended. According to TSA officials, these daily tests help to ensure that its screening technologies are meeting the detection standards they were designed to achieve. TSA officials reported that any equipment found not to meet required detection standards was immediately taken out of service. As described above, OOI also conducted Red Team covert testing on checked baggage screening at airports with EDS machines in fiscal year 2015. Specific details about TSA’s detection standards and the results of OOI’s covert tests are omitted because the information is classified.
Explosives Detection Canines
Through its National Explosives Detection Canine Team Program, TSA trains, deploys, and certifies explosives detection canine teams in order to deter and detect the introduction of explosive devices into U.S. transportation systems. Each canine team consists of a handler—generally either a state or local law enforcement officer (LEO) or TSA employee—paired with a canine trained in explosives detection.
As of September 2015, TSA had 692 canine teams deployed to 88 airports across the United States. These teams were composed of four types of canine teams trained to operate in the airport environment: TSA explosives detection canine (EDC) and Passenger Screening Canine (PSC) teams as well as LEO aviation and multimodal teams. Table 6 shows the number of canine teams by type deployed in the airport environment as of September 2015 and describes their roles and responsibilities.
The Effectiveness of Explosives Detection Canines in Fiscal Year 2015
In fiscal year 2015, TSA collected data on the effectiveness of its canine teams in detecting or disrupting threats to aviation security through its annual certification evaluation process and short notice assessments (SNA)—covert tests conducted to assess canine teams’ operational effectiveness in detecting and responding to possible explosives.
Annual Certification Evaluations. TSA’s annual evaluations assess whether canine teams meet the explosives detection certification standards established by the program. Following initial training, new canine teams must demonstrate certain critical skills in order to be certified to work in their home operating environment. After initial certification, all TSA canine teams are evaluated on an annual basis to maintain certification. Canine teams that fail their annual evaluation are decertified and limited to training and operating as a visible deterrent until they successfully complete the annual evaluation and are recertified to conduct screening.
To achieve EDC certification, canine teams must demonstrate their ability to detect hidden explosive training aids across a specified number of areas, a certain percent of the time. After passing this conventional evaluation, PSC teams undergo further testing in different locations within the sterile area of an airport. To achieve PSC certification, canine teams must successfully identify an explosives-carrying target/decoy in a specified number of search areas.
In fiscal year 2015, TSA conducted 673 EDC annual certification evaluations and 116 PSC evaluations. The fiscal year 2015 first-time pass rates for EDC and PSC canine teams have been designated as sensitive security information and thus cannot be included in a public report.
Short Notice Assessments. TSA conducts covert testing of canine teams to measure their effectiveness in detecting and responding to explosives odor during normal operations. These covert tests, known as SNAs, are conducted using one of four scenarios chosen to match a canine team's primary area of operations—an unattended bag, an unattended vehicle, cargo screening, or passenger screening. Field Canine Coordinators—TSA officials who administer SNAs—are responsible for debriefing participants after the assessment, determining if corrective actions are necessary, and officially documenting outcomes.
We assessed the reliability of SNA results in fiscal year 2015 and determined that the data were not reliable for the purpose of reporting overall pass rates. Specifically, we found duplicate entries and errors in the data. In addition, we found that fiscal year 2015 data on pass rates may be incomplete since the results of some SNAs may not have been subsequently recorded in TSA’s system. Further, TSA’s process of manually recording SNA results in fiscal year 2015 lacked procedures to ensure that data entered into TSA’s system were accurate and complete.
To address these data limitations, canine program officials stated that a new process was implemented in October 2016 to incorporate SNA results directly into the Canine Website System—a central electronic management database for various canine program data. According to these officials, this new process will better ensure that SNA data are complete, accurate, and reliable for use by program officials and TSA leadership in evaluating the effectiveness of the program.
Appendix III: Comments from the Department of Homeland Security
Appendix IV: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Maria Strudwick (Assistant Director), Chuck Bausell, Claudia Becker, Bryan Bourgault, Bruce Crise, Dominick Dale, Brianna Dieter, Michele Fejfar, Eric Hauswirth, Susan Hsu, James Kernen, and Tom Lombardi made key contributions to this report.
Why GAO Did This Study
Since the attacks of September 11, 2001, TSA has spent billions of dollars on aviation security programs. However, recent attacks involving aircraft and airports in other countries underscore the continued threat to aviation and the need for an effective aviation security program.
GAO was asked to review TSA's passenger aviation security countermeasures. This report examines the extent to which TSA has (1) information on the effectiveness of selected passenger aviation security countermeasures and (2) systematically analyzed the cost and effectiveness tradeoffs among countermeasures.
GAO reviewed TSA documentation on the effectiveness of six passenger aviation security countermeasures in fiscal year 2015—the most recent year for which data were available. GAO selected these countermeasures because they involve direct interaction with passengers, their belongings, or their personal information, and are largely operated and funded by TSA. GAO also reviewed TSA documents and interviewed TSA officials regarding efforts to systematically analyze cost and effectiveness tradeoffs across countermeasures.
What GAO Found
The Transportation Security Administration (TSA) has data on the effectiveness of some, but not all, of its passenger aviation security countermeasures. Specifically, TSA has data on passenger prescreening, checkpoint and checked baggage screening, and explosives detection canines. Further, TSA is taking steps to improve the quality of this information. However, it does not have effectiveness data for its Behavior Detection and Analysis (BDA) program and the U.S. Federal Air Marshal Service (FAMS). For BDA—a program to identify potential threats by observing passengers for behaviors indicative of stress, fear, or deception—in July 2017, GAO reported that (1) TSA does not have valid evidence supporting most of its behavioral indicators, and (2) TSA should continue to limit future funding for its behavior detection activities until it can provide such evidence. For FAMS—a program that deploys armed law enforcement officers on certain flights at a cost of about $800 million in fiscal year 2015—officials reported that one of the primary security contributions is to deter attacks. However, TSA does not have information on its effectiveness in doing so, nor does it have data on the deterrent effect resulting from any of its other aviation security countermeasures. While officials stated that deterrence is difficult to measure, the Government Performance and Results Act of 1993, as updated, provides that agencies are to assess the effectiveness of their programs. Further, the Office of Management and Budget and GAO have suggested approaches for measuring deterrence. Developing such methods for TSA countermeasures, especially for an effort such as FAMS in which the primary goal is deterrence, would enable TSA to determine whether its substantial investment is yielding results.
TSA has a tool to compare the security effectiveness of some aviation security countermeasures, but has no efforts underway to systematically evaluate potential cost and effectiveness tradeoffs across all countermeasures. In 2014, the agency developed a tool to analyze the security effectiveness of alternate combinations of some countermeasures for the purpose of informing acquisition and deployment decisions, but does not have a tool to assess such tradeoffs across the entire system of countermeasures. TSA officials explained that the aviation security system is constantly evolving, and assessing a system in flux is challenging. However, DHS policy and TSA's strategic plan call for the systematic evaluation of costs and effectiveness of TSA's chosen mix of aviation security countermeasures. Without such an analysis, TSA is not well positioned to strike an appropriate balance of costs, effectiveness, and risk.
This is a public version of a classified report that GAO issued in August 2017. Information that TSA deemed classified or sensitive security information, such as the results of TSA's covert testing and details about TSA's screening procedures, has been omitted.
What GAO Recommends
GAO recommends that TSA (1) explore and pursue methods to assess the deterrent effect of TSA's passenger aviation security countermeasures, with FAMS as a top priority, and (2) systematically evaluate the potential cost and effectiveness tradeoffs across aviation security countermeasures. DHS concurred with these recommendations.
Background
Scientific research on and projections of the changes taking place in the Arctic vary, but there is a general consensus that the Arctic is warming and that its sea ice is diminishing. For example, scientists at the National Snow and Ice Data Center reported that for 2018 the minimum amount of sea ice coverage in the Arctic—typically occurring in September each year—was the sixth lowest in the satellite record and 656,000 square miles below the mean for the 1981 through 2010 time frame. Further, the scientists found that the 12 lowest recordings of September ice coverage on satellite record have all occurred in the past 12 years. Figure 1 shows the sea ice coverage (i.e., extent) in the Arctic for September 2018 compared with the median ice edge for 1981 through 2010.
While much of the Arctic Ocean remains ice-covered for the majority of the year, most scientific estimates predict there will be a continued decrease in sea ice coverage in the Arctic Ocean in the summer sometime in the next 20 to 40 years. According to the Navy’s Arctic Roadmap for 2014 to 2030, while there may be less sea ice there in the future, the ice that remains will continue to be a challenge to those operating in the area.
Most commercial ship activity in the Arctic is regional—shipping into or out of the Arctic, mainly in support of commercial activity in the region—not trans-Arctic. However, according to the official Navy estimate from 2013, the decreasing coverage of sea ice will result in more open water, allowing increased maritime activity along three trans-Arctic routes from 2012 through 2030: the Northern Sea Route, the Northwest Passage, and the Trans-Polar Route (see fig. 2). This development could, for example, shorten shipping routes between countries in Asia and North America by thousands of miles, saving several days of travel.
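To put the scale of such savings in rough terms, the short calculation below estimates the transit time saved by a shorter route. The distances and ship speed are illustrative assumptions, not figures from the Navy's estimate.

```python
# Illustrative estimate of transit-time savings from a shorter route.
# All values below are assumptions for illustration, not Navy figures.
conventional_route_nm = 10_000  # assumed distance via a conventional route
trans_arctic_route_nm = 8_000   # assumed distance via a trans-Arctic route
average_speed_knots = 14        # assumed average commercial ship speed

distance_saved_nm = conventional_route_nm - trans_arctic_route_nm
days_saved = distance_saved_nm / average_speed_knots / 24

print(f"Distance saved: {distance_saved_nm:,} nautical miles")
print(f"Transit time saved: {days_saved:.1f} days")  # roughly 6 days
```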
Increased economic activity in the Arctic could potentially increase the need for military capabilities there to safeguard U.S. interests. For example, estimates of significant oil, gas, and mineral deposits in the Arctic have increased the interest in exploration opportunities in the region. These resources include an estimated 13 percent of the world’s undiscovered oil; 30 percent of the world’s undiscovered gas; and approximately $1 trillion of minerals including gold, zinc, nickel, and platinum. According to information provided by the Department of State, the vast majority of these resources are within the undisputed continental shelf of the respective coastal states. Officials from the Department of State stated that disputed claims related to the small remaining portions of the Arctic seabed may be addressed within the international framework established by the United Nations Convention on the Law of the Sea.
However, as we reported in 2015, even with the changing climate and growing interest in the region, several enduring characteristics will continue to provide challenges to surface navigation in the Arctic for the foreseeable future. These include large amounts of winter ice and increased movement of ice from spring to fall. Increased movement of sea ice makes its location less predictable, a situation that increases the risk that ships can become trapped or damaged by ice impacts. In addition, the lack of infrastructure in the Arctic region affects the reliability of shipping through the area. Economic factors such as risk costs, as well as changes in the shipping market resulting from the Panama Canal expansion, may also affect the amount of shipping along these routes. As figure 3 shows, even as the seasonal ice decreases over time, the Navy has projected that the Arctic will remain impassable for most commercial ships for most of the year from 2012 through 2030. These factors combined are likely to affect the pace at which commercial activity will increase.
We have previously examined emerging issues and challenges for the United States in the Arctic. See figure 4 for a timeline of our prior reports related to Arctic issues. We also include a list of our prior work related to the Arctic at the end of this report.
The Navy’s Report Aligns with Current Assessments of Arctic Threat Levels and Capabilities Required to Execute DOD’s Arctic Strategy
The Navy’s June 2018 report aligns with DOD’s assessments that the Arctic threat level remains low and that DOD has the capabilities required to execute its 2016 DOD Arctic Strategy. Specifically, the June 2018 report and the information it provides for each of the reporting elements discusses how the department can execute the 2016 DOD Arctic Strategy.
The strategy contains two overarching objectives: to (1) ensure security, support safety, and promote defense cooperation and (2) prepare to respond to a wide range of challenges and contingencies to maintain stability in the region. These objectives reflect DOD’s assessment that there is a low level of military threat in the Arctic, as well as the stated commitment of the Arctic nations to work within a common framework of diplomatic engagement. In the strategy, DOD identifies the types of investments that will need to be made over time as activity in the region increases; however, DOD also discusses the importance of assessing the needs in the Arctic and of balancing potential Arctic-specific capabilities investments against other national security priorities and fiscal realities. The Arctic threat assessment briefings we received from officials at the U.S. Northern Command and the Office of Naval Intelligence also reflected the low risk for conflict in the Arctic referenced in the Navy’s June 2018 report. Below, we summarize the Navy’s response to each reporting element, and our evaluation of whether the response aligns with current assessments of Arctic threat levels and capabilities required to execute DOD’s 2016 Arctic Strategy.
Report Provides Information on Current Naval Capabilities in the Arctic That Align with DOD’s Strategy
Reporting Element One: The Navy was required to report on the current naval capabilities of the Department of Defense in the Arctic region, with a particular emphasis on surface capabilities.
The June 2018 report provides information on this required element, with the Navy stating that it relies on the submarine force as well as on aviation assets and surface operations when necessary to operate in the Arctic. These capabilities in the Arctic region are consistent with those identified in The United States Navy Arctic Roadmap for 2014 to 2030 to execute the 2016 DOD Arctic Strategy, and as corroborated in our discussions with U.S. Northern Command and Navy officials.
In addition, the Navy discusses the significant limitations of its surface ships for Arctic operations in the June 2018 report. The limitations identified are consistent with information contained in the U.S. Navy Cold Weather Handbook for Surface Ships and with information we discussed with Naval Sea Systems Command officials who oversee modifications to the fleet and the acquisition of new ships. For example, Navy officials told us that top-side icing has detrimental effects on ships. As sea spray accumulates on a ship's deck and freezes, the ship can lose some of the capability of its external sensors and radars, and its stability in the water decreases as its center of gravity rises. Navy and Coast Guard officials told us that while the Coast Guard regularly operates in the Arctic given its ice-breaking and maritime safety missions, among others, Navy surface ships have not been designed to maneuver and operate in icy waters. Although some of the Navy's T-class ships have some capability to operate in light or broken first-year ice due to the inherent strength of their hulls, traditional surface combatant ships (e.g., Cruisers, Destroyers, or Frigates) are not designed to operate in icy waters.
Report Provides Information on the Gaps between Current Naval Capabilities and the Ability to Execute DOD’s Strategy
Reporting Element Two: The Navy was required to report on any gaps that exist between the current naval capabilities and the ability of the department to fully execute its updated strategy for the Arctic region.
The June 2018 report provides information on this required element, with the Navy stating that the department can execute the 2016 DOD Arctic Strategy with current naval capabilities. The June 2018 report is similarly aligned with Navy assessments of Arctic capabilities and gaps contained in its plan, The United States Navy Arctic Roadmap for 2014 to 2030, which the Office of the Chief of Naval Operations issued in February 2014. This plan provides guidance to prepare the Navy to respond effectively to future Arctic Region contingencies, delineates the Navy's leadership role, and articulates the Navy's support to achieve national priorities in the region. At the time of our review, DOD was in the process of drafting another report—on DOD Arctic capability and resource gaps—as required by section 1054 of the National Defense Authorization Act for Fiscal Year 2018. In addition, according to Navy officials, the Navy was also drafting its Arctic Strategic Outlook, which is a follow-up to The United States Navy Arctic Roadmap for 2014 to 2030. According to DOD and Navy officials, both forthcoming reports will focus on contextualizing Arctic needs within the framework of the 2018 National Defense Strategy. Because these efforts were not complete at the time of our review, we were unable to determine whether the Navy's June 2018 report aligns with these assessments.
Report Provides Information on Any Gaps in Naval Capabilities Requiring the Ice-Hardening of Existing Vessels or the Construction of New Vessels to Achieve DOD's Strategy
Reporting Element Three: The Navy was required to report on any gaps in the current naval capabilities that require ice-hardening of existing vessels or the construction of new vessels to preserve freedom of navigation in the Arctic region whenever and wherever necessary.
The June 2018 report provides information on this required element, with the Navy stating that there are currently no validated capability gaps that require the Navy to ice-harden existing vessels or construct new ice-capable vessels to preserve freedom of navigation in the Arctic. Furthermore, the Navy stated that its current assets are sufficient to execute the 2016 DOD Arctic Strategy. As noted above, freedom of navigation operations are undertaken to, among other things, promote maritime stability and to challenge excessive sovereignty claims. In addition, DOD officials stated that the United States already has options other than Navy surface ships for demonstrating the United States' freedom to operate in the Arctic, including using Coast Guard vessels, Navy submarines, or military aircraft.
Report Provides Information on Navy’s Analysis and Recommendation for Ice- Hardening Vessels to Achieve DOD’s Strategy
Reporting Elements Four and Five: The Navy was required to provide an analysis and recommendation of which Navy vessels could be ice-hardened to effectively preserve freedom of navigation in the Arctic region when and where necessary, in all seasons and weather conditions, and an analysis of any cost increases or schedule adjustments that may result from ice-hardening existing or new Navy vessels.
The June 2018 report provides some information on these required elements, with the Navy stating that it is not pursuing ice-hardening or the winterization of surface ships. According to the Navy, because there is no specific capability requirement for the Navy to ice-harden ships, the report does not list or name potential ice-hardening candidates among existing vessels or provide cost or schedule estimates for ice-hardening vessels. Officials with the Naval Sea Systems Command, which develops cost and schedule estimates for ship modifications and new construction, told us that they had not conducted life-cycle cost studies for ice-hardening existing ships because there is no capability requirement for an ice-hardened ship and, therefore, no ship design on which to base such a study or estimate.
Furthermore, the June 2018 report states that the Navy is leveraging cooperative research with international partner-nations such as Canada, Denmark, Finland, and Norway, to better understand how other Arctic nations are meeting additional requirements for Arctic operations. Navy officials from the Naval Sea Systems Command stated that ships built to operate in ice and extreme cold environments have unique features, including stronger, thicker construction of all portions of the hull that would come into contact with ice; different hull form design; redesigned propellers constructed of higher than traditional strength material; increased strength ship parts, such as rudders and seawater intakes and discharges designed to resist the formation or accumulation of ice; and more powerful heating and ventilation to accommodate sustained operations in extreme cold environments, among other things. They also noted that research completed to date has advanced the Navy’s knowledge in several of these areas including hull form and propeller design.
Navy officials estimated that a new ship design might require 20 years to reach initial operational capability. They noted the process might take only 10 years if the Navy can leverage an ongoing program, such as the DDG-51 Class program. Navy officials cautioned that the combination of features that enable ice-capable ships to sustain operating in extreme cold environments could compromise other performance areas such as speed, range, and ship motion. Officials told us that this would add to the Navy’s already strained efforts to maintain existing global naval presence requirements.
Although the June 2018 report did not discuss any cost and schedule adjustments that might arise from ice-hardening or new ship construction, we have previously reported that the Navy has faced challenges meeting its shipbuilding cost, schedule, and performance goals over the past decade. Specifically, we found that the 11 lead ships most recently delivered to the Navy cost $8 billion more to construct than initially budgeted. Navy officials stated that the Navy contractor construction yards currently lack expertise in the design for construction of winterized, ice-capable surface combatant and amphibious warfare ships. Accordingly, ice-hardening and winterization design practices could introduce cost and schedule risk, challenging the execution of a new construction shipbuilding program for an ice-capable ship. If the Navy executes this potential program without the requisite knowledge at key points, it could be at risk of the cost and schedule growth that we have seen in recent Navy shipbuilding programs. The Navy has faced these challenges in part because the department has proceeded with construction prior to completing technology development and ship design. We have found that successful shipbuilding programs are based on sound business cases, starting with the lead ship, and on the attainment of critical levels of knowledge at key points in the process prior to making significant investments.
The Navy Does Not Have a Capability Requirement to Ice-Harden Existing Vessels or Construct New Ones and Is Evaluating Arctic-Related Capabilities Using the Established DOD Process
Navy officials said that the Navy does not currently have a specific capability requirement for ice-hardening existing vessels or for the construction of new ones, and stated that the Navy or Joint Force is unlikely to produce such a requirement in the near term. Navy officials told us that the Navy will continue to use DOD’s established process, the Joint Capabilities Integration and Development System (JCIDS), which governs the department’s requirements process, to assess Arctic-related capability requirements in the near and long term (see fig. 5). All DOD components use the JCIDS process or variations of the process within their organizations to identify, assess, validate, and prioritize joint military requirements.
Before starting the JCIDS process, the military services, combatant commanders, and other DOD components conduct capabilities-based assessments or other studies to identify capability requirements, associated capability gaps, and the related risks. In October 2017, the Joint Requirements Oversight Council (JROC) validated U.S. Northern Command's initial capabilities document identifying three gaps in the ability to exercise/deploy, position, and conduct deterrence/decisive operations in ice-diminished Arctic waters. At the time of our review, the JROC had reviewed and validated the U.S. Northern Command's Arctic initial capabilities document and designated it for further study by the Navy. The validation of an initial capabilities document by the JROC is an early part of the JCIDS process, and informs updates to capability requirement documents related to specific materiel and nonmateriel capability solutions to be pursued.
A Navy official stated that the capability gaps identified in the U.S. Northern Command’s validated initial capabilities document will now compete for resources with other issues designated for study across the Navy. According to a Navy official, whenever the Navy initiates a study, this triggers the analysis of alternatives phase of the JCIDS process. Under this process, each alternative would need to be specifically evaluated for its costs and benefits. DOD officials noted that there are several analytical steps in the JCIDS process during which potential solutions for any identified gaps are analyzed. They told us that potential solutions might also include alternatives other than ice-hardening or new ship construction, such as adding capabilities to Coast Guard ships or partnering with allies to achieve common strategic goals in the Arctic.
Even as the seasonal ice decreases over time, according to Navy officials, the Arctic will remain impassable for most commercial ships for most of the year. For these reasons, projections of increased Arctic sea activity remain uncertain. DOD, U.S. Northern Command, Navy, and Coast Guard officials told us that even as Arctic maritime activity is expected to increase, several enduring characteristics will continue to provide challenges to surface navigation in the Arctic for the foreseeable future. These challenges include large amounts of winter ice and increased movement of ice from spring to fall. As mentioned earlier, the increased movement of sea ice makes its location less predictable, a situation that is likely to increase the risk that ships can become trapped or damaged by ice impacts. Coast Guard officials noted that a challenging environment like the Arctic may result in a higher likelihood of incidents occurring. Further, search and rescue operations in response to such incidents are riskier to execute there than in non-polar environments. In addition, the lack of infrastructure and logistical support in the Arctic affects maritime activities through that region.
Agency Comments
We are not making any recommendations in this report. We provided a draft of this report to DOD, the Department of Homeland Security, and the Department of State for comment. All three agencies provided technical comments, which we incorporated into this report as appropriate.
We are sending copies of this report to the appropriate congressional committees. We are also sending copies to the Secretary of Defense, Secretary of State, and the Secretary of Homeland Security. In addition, this report will be available at no charge on our website at http://www.gao.gov.
If you or your staff have questions about this report, please contact me at (202) 512-3489 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.
Appendix I: Organizations We Interviewed
Appendix II: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Suzanne Wren (Assistant Director), Delia Zee (Analyst-in-Charge), John Beauchamp, Mae Jones, Amie Lesser, Ned Malone, and Shahrzad Nikoo made key contributions to this report.
Related GAO Products
Coast Guard Acquisitions: Polar Icebreaker Program Needs to Address Risks before Committing Resources. GAO-18-600. Washington, D.C.: September 4, 2018.
Navy Shipbuilding: Past Performance Provides Valuable Lessons for Future Investments. GAO-18-238SP. Washington, D.C.: June 6, 2018.
Coast Guard Acquisitions: Status of Coast Guard’s Heavy Polar Icebreaker Acquisition. GAO-18-385R. Washington, D.C.: April 13, 2018.
Coast Guard: Status of Polar Icebreaking Fleet Capability and Recapitalization Plan. GAO-17-698R. Washington, D.C.: September 25, 2017.
High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO-17-317. Washington, D.C.: February 15, 2017.
Arctic Planning: DOD Expects to Play a Supporting Role to Other Federal Agencies and Has Efforts Under Way to Address Capability Needs and Update Plans. GAO-15-566. Washington, D.C.: June 19, 2015.
Climate Change Adaptation: DOD Can Improve Infrastructure Planning and Processes to Better Account for Potential Impacts. GAO-14-446. Washington, D.C.: May 30, 2014.
Arctic Issues: Better Direction and Management of Voluntary Recommendations Could Enhance U.S. Arctic Council Participation. GAO-14-435. Washington, D.C.: May 16, 2014.
Maritime Infrastructure: Key Issues Related to Commercial Activity in the U.S. Arctic over the Next Decade. GAO-14-299. Washington, D.C.: March 19, 2014.
Managing for Results: Implementation Approaches Used to Enhance Collaboration in Interagency Groups. GAO-14-220. Washington, D.C.: February 14, 2014.
Managing for Results: Key Considerations for Implementing Interagency Collaborative Mechanisms. GAO-12-1022. Washington, D.C.: September 27, 2012.
Arctic Capabilities: DOD Addressed Many Specified Reporting Elements in Its 2011 Arctic Report but Should Take Steps to Meet Near- and Long-term Needs. GAO-12-180. Washington, D.C.: January 13, 2012.
Coast Guard: Efforts to Identify Arctic Requirements Are Ongoing, but More Communication about Agency Planning Efforts Would Be Beneficial. GAO-10-870. Washington, D.C.: September 15, 2010.
Alaska Native Villages: Limited Progress Has Been Made on Relocating Villages Threatened by Flooding and Erosion. GAO-09-551. Washington, D.C.: June 3, 2009. | Why GAO Did This Study
The Navy is responsible for providing ready forces for current operations and contingency response in the Arctic Ocean. According to data from the National Snow and Ice Data Center, the coverage of sea ice in the Arctic has diminished significantly since 1981. This could potentially increase maritime activities there, leading to a need for a greater U.S. military and homeland security presence in the region.
Public Law 115-91 required the Navy to report to Congress on the Navy's capabilities in the Arctic, including any capability gaps and requirements for ice-hardened vessels. It also included a provision for GAO to review the Navy's report. This report (1) assesses the extent to which the Navy's report aligns with current assessments of Arctic threat levels and capabilities required to execute DOD's 2016 Arctic Strategy and (2) describes any current requirements for ice-hardened vessels and DOD's approach for evaluating the capabilities needed as Arctic requirements evolve.
GAO reviewed the Navy's report along with DOD's assessments of Arctic threats and naval capabilities. GAO also reviewed the 2016 DOD Arctic Strategy—the most current strategy—as well as DOD and Department of State information on the freedom of navigation program, and DOD's processes for developing capabilities and assessing Arctic capability gaps.
GAO is not making any recommendations in this report. DOD provided written technical comments, which were incorporated as appropriate.
What GAO Found
The Navy's June 2018 report aligns with Department of Defense (DOD) assessments that the Arctic is at low risk for conflict and that DOD has the capabilities to execute the 2016 DOD Arctic Strategy. The June 2018 report also aligns with assessments of Arctic capabilities and gaps in the Navy's 2014 roadmap for implementing the strategy. The June 2018 report states that the Navy can execute the strategy with subsurface, aviation, and surface assets. The report notes the significant limitations for operating surface ships in the Arctic, but states that the Navy has the capabilities required for executing the strategy, and so has no plan to design ice-hardened surface ships. In addition, DOD officials stated that the United States has options other than Navy surface ships for demonstrating the U.S. right to operate in the Arctic, including using Coast Guard vessels, Navy submarines, or military aircraft.
Navy officials said that the Navy does not have a specific requirement for ice-hardening existing vessels or constructing new ones. The Navy plans to continue to use DOD's established process, the Joint Capabilities Integration and Development System to reassess Arctic-related requirements as conditions evolve (see fig.). In October 2017, the Joint Requirements Oversight Council validated U.S. Northern Command's initial capabilities document identifying three gaps in the ability to exercise/deploy, position, and conduct deterrence/decisive operations in ice-diminished Arctic waters. At the time of GAO's review, the Joint Staff had validated the capability gaps, which will now compete for resources with other issues designated for further study. Officials said additional study may identify alternative solutions such as adding capabilities to Coast Guard ships or partnering with allies to achieve common strategic goals in the Arctic. |
Background
According to the President’s budget, the federal government plans to invest more than $96 billion for IT in fiscal year 2018—the largest amount ever budgeted. Despite such large IT expenditures, we have previously reported that investments in federal IT too often result in failed projects that incur cost overruns and schedule slippages, while contributing little to the desired mission-related outcomes. For example:
The tri-agency National Polar-orbiting Operational Environmental Satellite System was disbanded in February 2010 by the White House’s Office of Science and Technology Policy after the program spent 16 years and almost $5 billion.
The Department of Homeland Security’s (DHS) Secure Border Initiative Network program was ended in January 2011, after the department obligated more than $1 billion for the program.
The Department of Veterans Affairs’ Financial and Logistics Integrated Technology Enterprise program was intended to be delivered by 2014 at a total estimated cost of $609 million, but was terminated in October 2011.
The Department of Defense’s Expeditionary Combat Support System was canceled in December 2012 after spending more than a billion dollars and failing to deploy within 5 years of initially obligating funds.
The United States Coast Guard (Coast Guard) decided to terminate its Integrated Health Information System project in 2015. As reported by the agency in August 2017, the Coast Guard spent approximately $60 million over 7 years on this project, which resulted in no equipment or software that could be used for future efforts.
Our past work has found that these and other failed IT projects often suffered from a lack of disciplined and effective management, such as project planning, requirements definition, and program oversight and governance. In many instances, agencies had not consistently applied best practices that are critical to successfully acquiring IT.
Such projects have also failed due to a lack of oversight and governance. Executive-level governance and oversight across the government has often been ineffective, specifically from CIOs. For example, we have reported that some CIOs’ roles were limited because they did not have the authority to review and approve the entire agency IT portfolio.
In addition to failures when acquiring IT, security deficiencies can threaten systems once they become operational. As we previously reported, in order to counter security threats, 23 civilian Chief Financial Officers Act agencies spent a combined total of approximately $4 billion on IT security-related activities in fiscal year 2016. Even so, our cybersecurity work at federal agencies continues to highlight information security deficiencies. The following examples describe the types of risks we have found at federal agencies.
In November 2017, we reported that the Department of Education’s Office of Federal Student Aid did not consistently analyze privacy risks for its electronic information systems, and policies and procedures for protecting information systems were not always up to date.
In August 2017, we reported that, since the 2015 data breaches, the Office of Personnel Management (OPM) had taken actions to prevent, mitigate, and respond to data breaches involving sensitive personal and background investigation information. However, we noted that the agency had not fully implemented recommendations made to OPM by DHS’s United States Computer Emergency Readiness Team to help the agency improve its overall security posture and improve its ability to protect its systems and information from security breaches.
In July 2017, we reported that IT security at the Internal Revenue Service had weaknesses that limited its effectiveness in protecting the confidentiality, integrity, and availability of financial and sensitive taxpayer data. An underlying reason for these weaknesses was that the Internal Revenue Service had not effectively implemented elements of its information security program.
In May 2016, we reported that the National Aeronautics and Space Administration, the Nuclear Regulatory Commission, OPM, and the Department of Veteran Affairs did not always control access to selected high-impact systems, patch known software vulnerabilities, and plan for contingencies. An underlying reason for these weaknesses was that the agencies had not fully implemented key elements of their information security programs.
In August 2016, we reported that the IT security of the Food and Drug Administration had significant weaknesses that jeopardized the confidentiality, integrity, and availability of its information systems and industry and public health data.
FITARA Increases CIO Authorities and Responsibilities
Congress and the President have enacted various key pieces of reform legislation to address IT management issues. These include the federal IT acquisition reform legislation commonly referred to as the Federal Information Technology Acquisition Reform Act (FITARA). This legislation was intended to improve covered agencies’ acquisitions of IT and enable Congress to monitor agencies’ progress and hold them accountable for reducing duplication and achieving cost savings. The law includes specific requirements related to seven areas:
Agency CIO authority enhancements. CIOs at covered agencies have the authority to, among other things, (1) approve the IT budget requests of their respective agencies and (2) review and approve IT contracts.
Federal data center consolidation initiative (FDCCI). Agencies covered by FITARA are required, among other things, to provide a strategy for consolidating and optimizing their data centers and issue quarterly updates on the progress made.
Enhanced transparency and improved risk management. The Office of Management and Budget (OMB) and covered agencies are to make detailed information on federal IT investments publicly available, and agency CIOs are to categorize their investments by level of risk.
Portfolio review. Covered agencies are to annually review IT investment portfolios in order to, among other things, increase efficiency and effectiveness and identify potential waste and duplication.
Expansion of training and use of IT acquisition cadres. Covered agencies are to update their acquisition human capital plans to support timely and effective IT acquisitions. In doing so, the law calls for agencies to consider, among other things, establishing IT acquisition cadres (i.e., multi-functional groups of professionals to acquire and manage complex programs), or developing agreements with other agencies that have such cadres.
Government-wide software purchasing program. The General Services Administration is to develop a strategic sourcing initiative to enhance government-wide acquisition and management of software. In doing so, the law requires that, to the maximum extent practicable, the General Services Administration should allow for the purchase of a software license agreement that is available for use by all executive branch agencies as a single user.
Maximizing the benefit of the Federal Strategic Sourcing Initiative. Federal agencies are required to compare their purchases of services and supplies to what is offered under the Federal Strategic Sourcing Initiative.
In June 2015, OMB released guidance describing how agencies are to implement FITARA. This guidance is intended to, among other things: assist agencies in aligning their IT resources with statutory requirements; establish government-wide IT management controls to meet the law's requirements, while providing agencies with flexibility to adapt to unique agency processes and requirements; strengthen the relationship between agency CIOs and bureau CIOs; and strengthen CIO accountability for IT costs, schedules, performance, and security.
The guidance identifies a number of actions that agencies are to take to establish a basic set of roles and responsibilities (referred to as the common baseline) for CIOs and other senior agency officials; and thus, to implement the authorities described in the law. For example, agencies are to conduct a self-assessment and submit a plan describing the changes they intend to make to ensure that common baseline responsibilities are implemented.
In addition, in August 2016, OMB released guidance intended to, among other things, define a framework for achieving the data center consolidation and optimization requirements of FITARA. The guidance directs agencies to develop a data center consolidation and optimization strategic plan that defines the agency’s data center strategy for fiscal years 2016, 2017, and 2018. This strategy is to include, among other things, a statement from the agency CIO indicating whether the agency has complied with all data center reporting requirements in FITARA. Further, the guidance indicates that OMB is to maintain a public dashboard to display consolidation-related costs savings and optimization performance information for the agencies.
Congress Has Undertaken Efforts to Continue Selected FITARA Provisions and Modernize Federal IT
Congress has recognized the importance of agencies’ continued implementation of FITARA provisions, and has taken legislative action to extend selected provisions beyond their original dates of expiration. Specifically, Congress and the President enacted laws to: remove the expiration date for enhanced transparency and improved risk management provisions, which were set to expire in 2019; remove the expiration date for portfolio review, which was set to expire in 2019; and extend the expiration date for FDCCI from 2018 to 2020.
In addition, Congress and the President enacted a law to authorize the availability of funding mechanisms to help further agencies’ efforts to modernize IT. The law, known as the Modernizing Government Technology (MGT) Act, authorizes agencies to establish working capital funds for use in transitioning from legacy IT systems, as well as for addressing evolving threats to information security. The law also creates the Technology Modernization Fund, within the Department of the Treasury, from which agencies can “borrow” money to retire and replace legacy systems, as well as acquire or develop systems.
Further, in February 2018, OMB issued guidance for agencies to implement the MGT Act. The guidance was intended to provide agencies additional information regarding the Technology Modernization Fund, and the administration and funding of the related IT working capital funds. Specifically, the guidance allowed agencies to begin submitting initial project proposals for modernization on February 27, 2018. In addition, in accordance with the MGT Act, the guidance provides details regarding a Technology Modernization Board, which is to consist of (1) the Federal CIO; (2) a senior official from the General Services Administration; (3) a member of DHS’s National Protection and Program Directorate; and (4) four federal employees with technical expertise in IT development, financial management, cybersecurity and privacy, and acquisition, appointed by the Director of OMB.
FISMA Establishes Responsibilities for Agencies to Address Federal Cybersecurity
Congress and the President enacted the Federal Information Security Modernization Act of 2014 (FISMA) to improve federal cybersecurity and clarify government-wide responsibilities. The act addresses the increasing sophistication of cybersecurity attacks, promotes the use of automated security tools with the ability to continuously monitor and diagnose the security posture of federal agencies, and provides for improved oversight of federal agencies’ information security programs.
Specifically, the act clarifies and assigns additional responsibilities to entities such as OMB, DHS, and the federal agencies. Table 1 describes a selection of OMB, DHS, and agency responsibilities.
The Current Administration Has Undertaken Efforts to Improve, Modernize, and Strengthen the Security of Federal IT
Beyond the implementation of FITARA, FISMA, and related actions, the current administration has also initiated other efforts intended to improve federal IT. Specifically, in March 2017, the administration established the Office of American Innovation, which has a mission to, among other things, make recommendations to the President on policies and plans aimed at improving federal government operations and services. In doing so, the office is to consult with both OMB and the Office of Science and Technology Policy on policies and plans intended to improve government operations and services, improve the quality of life for Americans, and spur job creation.
In May 2017, the Administration also established the American Technology Council, which has a goal of helping to transform and modernize federal agency IT and how the federal government uses and delivers digital services. The President is the chairman of this council, and the Federal CIO and the United States Digital Service Administrator are among the members.
In addition, on May 11, 2017, the President signed Executive Order 13800, Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure. This executive order outlined actions to enhance cybersecurity across federal agencies and critical infrastructure to improve the nation’s cyber posture and capabilities against cyber security threats. Among other things, the order tasked the Director of the American Technology Council to coordinate a report to the President from the Secretary of DHS, the Director of OMB, and the Administrator of the General Services Administration, in consultation with the Secretary of Commerce, regarding the modernization of federal IT. As a result, the Report to the President on Federal IT Modernization was issued on December 13, 2017, and outlined the current and envisioned state of federal IT. The report focused on modernization efforts to improve the security posture of federal IT and recognized that agencies have attempted to modernize systems but have been stymied by a variety of factors, including resource prioritization, ability to procure services quickly, and technical issues. The report provided multiple recommendations intended to address these issues through the modernization and consolidation of networks and the use of shared services to enable future network architectures.
Further, in March 2018, the Administration issued the President’s Management Agenda, which lays out a long-term vision for modernizing the federal government. The agenda identifies three related drivers of transformation—IT modernization; data, accountability, and transparency; and the workforce of the future—that are intended to push change across the federal government.
The Administration also established 14 related Cross-Agency Priority goals, many of which have elements that involve IT. In particular, the Cross-Agency Priority goal on IT modernization states that modern IT must function as the backbone of how government serves the public in the digital age and provides three priorities that are to guide the Administration’s efforts to modernize federal IT: (1) enhancing mission effectiveness by improving the quality and efficiency of critical services, including the increased utilization of cloud-based solutions; (2) reducing cybersecurity risks to the federal mission by leveraging current commercial capabilities and implementing cutting edge cybersecurity capabilities; and (3) building a modern IT workforce by recruiting, reskilling, and retaining professionals able to help drive modernization with up-to-date technology.
Most recently, on May 15, 2018, the President signed Executive Order 13833, Enhancing the Effectiveness of Agency Chief Information Officers. Among other things, this executive order is intended to better position agencies to modernize their IT systems, execute IT programs more efficiently, and reduce cybersecurity risks. The order pertains to 22 of the 24 Chief Financial Officer Act agencies: the Department of Defense and the Nuclear Regulatory Commission are exempt.
For the covered agencies, the executive order strengthens the role of agency CIOs by, among other things, requiring them to report directly to their agency head; to serve as their agency head's primary IT strategic advisor; and to have a significant role in all management, governance, and oversight processes related to IT. In addition, one of the cybersecurity requirements directs agencies to ensure that the CIO works closely with an integrated team of senior executives, including those with expertise in IT, security, and privacy, to implement appropriate risk management measures.
Agencies Have Not Fully Addressed the IT Acquisitions and Operations High-Risk Area
In the February 2017 update to our high-risk series, we reported that agencies still needed to complete significant work related to the management of IT acquisitions and operations. We stressed that OMB and federal agencies should continue to expeditiously implement FITARA and OMB's related guidance, which include enhancing CIO authority, consolidating data centers, and acquiring and managing software licenses.
Our update to this high-risk area also stressed that OMB and agencies needed to continue to implement our prior recommendations in order to improve their ability to effectively and efficiently invest in IT. Specifically, from fiscal years 2010 through 2015, we made 803 recommendations to OMB and federal agencies to address shortcomings in IT acquisitions and operations. In addition, in fiscal year 2016, we made 202 new recommendations, thus, further reinforcing the need for OMB and agencies to address the shortcomings in IT acquisitions and operations.
As stated in the update, OMB and agencies should demonstrate government-wide progress in the management of IT investments by, among other things, implementing at least 80 percent of our recommendations related to managing IT acquisitions and operations within 4 years. As of May 2018, OMB and agencies had fully implemented 489 (or about 61 percent) of the 803 recommendations. Figure 1 summarizes the progress that OMB and agencies have made in addressing our recommendations as compared to the 80 percent target.
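As a rough illustration of the remaining distance to that target, the short calculation below uses only the figures reported above (803 recommendations made, 489 implemented, and an 80 percent goal).

```python
import math

# Progress toward the 80 percent implementation target, using the
# figures reported above.
total_recommendations = 803  # made from fiscal years 2010 through 2015
implemented = 489            # fully implemented as of May 2018
target_rate = 0.80

current_rate = implemented / total_recommendations
still_needed = math.ceil(target_rate * total_recommendations) - implemented

print(f"Current implementation rate: {current_rate:.1%}")          # ~60.9%
print(f"Additional recommendations to reach 80%: {still_needed}")  # 154
```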
Overall, federal agencies would be better positioned to realize billions in cost savings and additional management improvements if they address these recommendations, including those aimed at implementing CIO responsibilities, reviewing IT acquisitions, improving data center consolidation, and managing software licenses.
Agencies Need to Address Shortcomings and Challenges in Implementing CIO Responsibilities
In all, the various laws, such as FITARA, and related guidance assign 35 IT management responsibilities to CIOs in six key areas. These areas are: leadership and accountability, budgeting, information security, investment management, workforce, and strategic planning.
In a draft report on CIO responsibilities that we have provided to the agencies for comment and plan to issue in June 2018, our preliminary results suggest that none of the 24 agencies we reviewed had policies that fully addressed the role of their CIO, as called for by federal laws and guidance. In this regard, a majority of the agencies fully or substantially addressed the role of their CIOs for the area of leadership and accountability. In addition, a majority of the agencies substantially or partially addressed the role of their CIOs for two areas: information security and IT budgeting. However, most agencies partially or minimally addressed the role of their CIOs for two areas: investment management and strategic planning. These preliminary results are shown in figure 2.
Despite these shortfalls, most agency officials stated that their CIOs are implementing the responsibilities even if the agencies do not have policies requiring implementation.
Nevertheless, the CIOs of the 24 selected agencies acknowledged in responses to a survey that we administered for our draft report that they were not always very effective in implementing the six IT management areas. Specifically, our preliminary results show that at least 10 of the CIOs indicated that they were less than very effective for each of the six areas of responsibility. We believe that until agencies fully address the role of CIOs in their policies, agencies will be limited in addressing longstanding IT management challenges.
Figure 3 depicts the extent to which the CIOs reported their effectiveness in implementing the six areas of responsibility.
Beyond the actions of the agencies, however, our preliminary results indicate that shortcomings in agencies’ policies also are partially attributable to two weaknesses in OMB’s FITARA implementation guidance. First, the guidance does not comprehensively address all CIO responsibilities, such as those related to assessing the extent to which personnel meet IT management knowledge and skill requirements, and ensuring that personnel are held accountable for complying with the information security program. Correspondingly, the majority of the agencies’ policies did not fully address nearly all of the responsibilities that were not included in OMB’s guidance.
Second, OMB’s guidance does not ensure that CIOs have a significant role in (1) IT planning, programming, and budgeting decisions and (2) execution decisions and the management, governance, and oversight processes related to IT, as required by federal law and guidance. In the absence of comprehensive guidance, CIOs will not be positioned to effectively acquire, maintain, and secure their IT systems.
Based on our preliminary results, 24 agency CIOs also identified a number of factors that enabled and challenged their ability to effectively manage IT. As shown in figure 4, five factors were identified by at least half of the 24 CIOs as major enablers and three factors were identified by at least half of the CIOs as major challenges. Specifically, most agency CIOs cited five factors as being enablers to effectively carry out their responsibilities: (1) NIST guidance, (2) the CIO’s position in the agency hierarchy, (3) OMB guidance, (4) coordination with the Chief Acquisition Officer (CAO), and (5) legal authority. Further, three factors were cited by CIOs as major factors that have challenged their ability to effectively carry out responsibilities: (1) processes for hiring, recruiting, and retaining IT personnel; (2) financial resources; and (3) the availability of personnel/staff resources.
As our draft report states, although OMB has issued guidance aimed at addressing the three factors that were identified by at least half of the CIOs as major challenges, the guidance does not fully address those challenges. Further, regarding the financial resources challenge, OMB recently required agencies to provide data on CIO authority over IT spending; however, its guidance does not provide a complete definition of the authority. We believe that in the absence of such guidance, agencies have created varying definitions of CIO authority. Further, until OMB updates its guidance to include a complete definition of the authority that CIOs are to have over IT spending, it will be difficult for OMB to identify any deficiencies in this area and to help agencies make any needed improvements.
In order to address challenges in implementing CIO responsibilities, we intend to include in our draft report recommendations to OMB and each of the selected 24 federal agencies to improve the effectiveness of CIOs’ implementation of their responsibilities for each of the six IT management areas.
Agencies Need to Ensure That IT Acquisitions Are Reviewed and Approved by CIOs
FITARA includes a provision to enhance covered agency CIOs’ authority through, among other things, requiring agency heads to ensure that CIOs review and approve IT contracts. OMB’s FITARA implementation guidance expanded upon this aspect of the legislation in a number of ways. Specifically, according to the guidance:
CIOs may review and approve IT acquisition strategies and plans, rather than individual IT contracts;
CIOs can designate other agency officials to act as their representatives, but the CIOs must retain accountability;
CAOs are responsible for ensuring that all IT contract actions are consistent with CIO-approved acquisition strategies and plans; and
CAOs are to indicate to the CIOs when planned acquisition strategies and acquisition plans include IT.
In January 2018, we reported that most of the CIOs at 22 selected agencies were not adequately involved in reviewing billions of dollars of IT acquisitions. For instance, most of the 22 agencies did not identify all of their IT contracts. In this regard, the agencies identified 78,249 IT-related contracts, to which they obligated $14.7 billion in fiscal year 2016. However, we identified 31,493 additional contracts with $4.5 billion obligated, raising the total amount obligated by these agencies to IT contracts in fiscal year 2016 to at least $19.2 billion. Figure 5 reflects the obligations that the 22 selected agencies reported to us relative to the obligations we identified.
The percentage of additional IT contract obligations we identified varied among the selected agencies. For example, the Department of State did not identify 1 percent of its IT contract obligations. Conversely, 8 agencies did not identify over 40 percent of their IT contract obligations.
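The government-wide share implied by these figures can be reconstructed with the short calculation below, which simply restates the obligations reported above.

```python
# Share of fiscal year 2016 IT contract obligations that agencies did not
# identify, using the government-wide figures reported above (in billions).
identified_by_agencies = 14.7  # obligations the agencies self-identified
identified_by_gao = 4.5        # additional obligations GAO identified

total_obligations = identified_by_agencies + identified_by_gao  # 19.2
unidentified_share = identified_by_gao / total_obligations

print(f"Total IT obligations: ${total_obligations:.1f} billion")
print(f"Share not identified by agencies: {unidentified_share:.0%}")  # ~23%
```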
Many of the selected agencies that did not identify these IT contract obligations did not follow OMB guidance. Specifically, 14 of the 22 agencies did not involve the acquisition office in their process to identify IT acquisitions for CIO review, as required by OMB. In addition, 7 agencies did not establish guidance to aid officials in recognizing IT. We concluded that until these agencies involve the acquisitions office in their IT acquisition identification processes and establish supporting guidance, they cannot ensure that they will identify all IT acquisitions. Without proper identification of IT acquisitions, these agencies and CIOs cannot effectively provide oversight of these acquisitions.
In addition to not identifying all IT contracts, 14 of the 22 selected agencies did not fully satisfy OMB’s requirement that the CIO review and approve IT acquisition plans or strategies. Further, only 11 of 96 randomly selected IT contracts at 10 agencies that we evaluated were CIO-reviewed and approved as required by OMB’s guidance. The 85 IT contracts not reviewed had a total possible value of approximately $23.8 billion.
We believe that until agencies ensure that CIOs are able to review and approve all IT acquisitions, CIOs will continue to have limited visibility and input into their agencies’ planned IT expenditures and will not be able to use the increased authority that FITARA’s contract approval provision is intended to provide. Further, agencies will likely miss an opportunity to strengthen CIOs’ authority and the oversight of IT acquisitions. As a result, agencies may award IT contracts that are duplicative, wasteful, or poorly conceived.
As a result of these findings, we made 39 recommendations in our January 2018 report. The recommendations included that agencies ensure that their acquisition offices are involved in identifying IT acquisitions and issuing related guidance, and that IT acquisitions are reviewed in accordance with OMB guidance. OMB and the majority of the agencies generally agreed with or did not comment on the recommendations.
Agencies Have Made Progress in Consolidating Data Centers, but Need to Take Action to Achieve Planned Cost Savings
In our February 2017 high-risk update, we stated that OMB and agencies needed to demonstrate additional progress on achieving data center consolidation savings in order to improve the management of IT acquisitions and operations. Further, data center consolidation efforts are key to implementing FITARA. Specifically, OMB established the Federal Data Center Consolidation Initiative (FDCCI) in February 2010 to improve the efficiency, performance, and environmental footprint of federal data center activities. The enactment of FITARA in 2014 codified and expanded the initiative.
In a series of reports that we issued from July 2011 through August 2017, we noted that, while data center consolidation could potentially save the federal government billions of dollars, weaknesses existed in several areas, including agencies’ data center consolidation plans, data center optimization, and OMB’s tracking and reporting on related cost savings. In these reports, we made a total of 160 recommendations to OMB and 24 agencies to improve the execution and oversight of the initiative. Most agencies and OMB agreed with our recommendations or had no comments. As of May 2018, 80 of these 160 recommendations remained unimplemented.
Further, we recently reported in May 2018 that the 24 agencies participating in OMB's Data Center Optimization Initiative (DCOI) had communicated mixed progress toward achieving OMB's goals for closing data centers by September 2018. Over half of the agencies reported that they had either already met, or planned to meet, all of their OMB-assigned goals by the deadline. This would result in the closure of 7,221 of the 12,062 centers that agencies reported in August 2017. However, 4 agencies reported that they did not plan to meet all of their assigned goals, and 2 agencies were working with OMB to establish revised targets. With regard to agencies' progress in achieving cost savings, the 24 agencies reported $3.9 billion in cost savings through 2018.
The 24 agencies also reported limited progress against OMB’s five data center optimization targets for server utilization and automated monitoring, energy metering, power usage effectiveness, facility utilization, and virtualization. As of August 2017, 1 agency reported that it had met four targets, 1 agency reported that it had met three targets, 6 agencies reported having met either one or two targets, and 14 agencies reported meeting none of the targets.
Further, as of August 2017, most agencies were not planning to meet OMB’s fiscal year 2018 optimization targets. Specifically, 4 agencies reported plans to meet all of their applicable targets by the end of fiscal year 2018; 14 agencies reported plans to meet some of the targets; and 4 reported that they did not plan to meet any targets. Figure 6 summarizes agency-reported plans to meet or exceed the OMB’s data center optimization targets, as of August 2017.
In 2016 and 2017, we made 81 recommendations to OMB and the 24 DCOI agencies to help improve the reporting of data center-related cost savings and to achieve optimization targets. As of May 2018, 71 of these 81 recommendations have not been fully addressed.
Agencies Need to Better Manage Software Licenses to Achieve Savings
In our 2015 high-risk report's discussion of IT acquisitions and operations, we identified the management of software licenses as an area of concern, in part because of the potential for cost savings. Federal agencies engage in thousands of software licensing agreements annually. The objective of software license management is to manage, control, and protect an organization's software assets. Effective management of these licenses can help an agency avoid purchasing too many licenses, which results in unused software, as well as too few licenses, which results in noncompliance with license terms and the imposition of additional fees.
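To illustrate the reconciliation that effective license management involves, the following minimal sketch compares purchased entitlements against deployed installations to flag over- and under-licensing; the product names and counts are hypothetical, not drawn from any agency's inventory:

```python
# Hypothetical license reconciliation; product names and counts are invented.
entitlements = {"ProductA": 500, "ProductB": 120, "ProductC": 300}   # licenses purchased
installations = {"ProductA": 430, "ProductB": 150, "ProductC": 300}  # copies deployed

for product in sorted(set(entitlements) | set(installations)):
    owned = entitlements.get(product, 0)
    deployed = installations.get(product, 0)
    if deployed > owned:
        # Too few licenses: noncompliance risk and possible additional fees.
        print(f"{product}: under-licensed by {deployed - owned}")
    elif deployed < owned:
        # Too many licenses: unused software and wasted spending.
        print(f"{product}: {owned - deployed} unused licenses")
    else:
        print(f"{product}: entitlements match deployments")
```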
As part of its PortfolioStat initiative, OMB has developed policy that addresses software licenses. This policy requires agencies to conduct an annual, agency-wide IT portfolio review to, among other things, reduce commodity IT spending. Such areas of spending could include software licenses.
In May 2014, we reported on federal agencies’ management of software licenses and determined that better management was needed to achieve significant savings government-wide. Of the 24 selected agencies we reviewed, only 2 had comprehensive policies that included the establishment of clear roles and central oversight authority for managing enterprise software license agreements, among other things. Of the remaining 22 agencies, 18 had policies that were not comprehensive, and 4 had not developed any policies.
Further, we found that only 2 of the 24 selected agencies had established comprehensive software license inventories, a leading practice that would help them to adequately manage their software licenses. The inadequate implementation of this and other leading practices in software license management was partially due to weaknesses in agencies’ policies. As a result, we concluded that agencies’ oversight of software license spending was limited or lacking, thus potentially leading to missed savings. However, the potential savings could be significant considering that, in fiscal year 2012, 1 major federal agency reported saving approximately $181 million by consolidating its enterprise license agreements, even when its oversight process was ad hoc.
Accordingly, we recommended that OMB issue a directive to help guide agencies in managing software licenses. We also made 135 recommendations to the 24 agencies to improve their policies and practices for managing licenses. Among other things, we recommended that the agencies regularly track and maintain a comprehensive inventory of software licenses and analyze the inventory to identify opportunities to reduce costs and better inform investment decision making.
Most agencies generally agreed with the recommendations or had no comments. As of May 2018, 78 of the 135 recommendations had not been implemented. Table 2 reflects the extent to which the 24 agencies implemented the recommendations in these two areas.
Agencies Need to Address Shortcomings in Information Security Area
Since information security was added to the high-risk list in 1997, we have consistently identified shortcomings in the federal government’s approach to cybersecurity. We have previously testified that, even though agencies have acted to improve the protections over federal and critical infrastructure information and information systems, the federal government needs to take the following actions to strengthen U.S. cybersecurity:
Effectively implement risk-based entity-wide information security programs consistently over time. Among other things, agencies need to (1) implement sustainable processes for securely configuring operating systems, applications, workstations, servers, and network devices; (2) patch vulnerable systems and replace unsupported software; (3) develop comprehensive security test and evaluation procedures and conduct examinations on a regular and recurring basis; and (4) strengthen oversight of contractors providing IT services.

Improve its cyber incident detection, response, and mitigation capabilities. DHS needs to expand the capabilities and support wider adoption of its government-wide intrusion detection and prevention system. In addition, the federal government needs to improve cyber incident response practices, update guidance on reporting data breaches, and develop consistent responses to breaches of personally identifiable information.
Expand its cyber workforce planning and training efforts. The federal government needs to (1) enhance efforts for recruiting and retaining a qualified cybersecurity workforce and (2) improve cybersecurity workforce planning activities.
Expand efforts to strengthen cybersecurity of the nation’s critical infrastructures. The federal government needs to develop metrics to (1) assess the effectiveness of efforts promoting the National Institute of Standards and Technology’s (NIST) Framework for Improving Critical Infrastructure Cybersecurity and (2) measure and report on the effectiveness of cyber risk mitigation activities and the cybersecurity posture of critical infrastructure sectors.
Better oversee protection of personally identifiable information. The federal government needs to (1) protect the security and privacy of electronic health information, (2) ensure privacy when face recognition systems are used, and (3) protect the privacy of users’ data on state-based health insurance marketplaces.
As we have previously noted, in order to take the preceding actions and strengthen the federal government's cybersecurity posture, agencies should implement the information security programs required by FISMA. In this regard, FISMA provides a framework for ensuring the effectiveness of information security controls for federal information resources. The law requires each agency to develop, document, and implement an agency-wide information security program. Such a program includes risk assessments; the development and implementation of policies and procedures to cost-effectively reduce risks; plans for providing adequate information security for networks, facilities, and systems; security awareness and specialized training; the testing and evaluation of the effectiveness of controls; the planning, implementation, evaluation, and documentation of remedial actions to address information security deficiencies; procedures for detecting, reporting, and responding to security incidents; and plans and procedures to ensure continuity of operations.
Since 2010, we have made 2,733 recommendations to agencies aimed at improving the security of federal systems and information. These recommendations have identified actions for agencies to take to strengthen technical security controls over their computer networks and systems. They also have included recommendations for agencies to fully implement aspects of their information security programs, as mandated by FISMA. Nevertheless, many agencies continue to be challenged in safeguarding their information systems and information, in part because many of these recommendations have not been implemented. As of May 2018, 793 of the information security-related recommendations we have made had not been implemented.
Agencies’ Inspectors General Are to Identify Information Security Program Weaknesses
In order to determine the effectiveness of the agencies’ information security programs and practices, FISMA requires that federal agencies’ inspectors general conduct annual independent evaluations. The agencies are to report the results of these evaluations to OMB, and OMB is to summarize the results in annual reports to Congress.
In these evaluations, the inspectors general frame the scope of their analysis, identify key findings, and detail recommendations to address the findings. The evaluations also are to capture maturity model ratings for their respective agencies. Toward this end, in fiscal year 2017, the inspector general community, in partnership with OMB and DHS, finalized a 3-year effort to create a maturity model for FISMA metrics that align to the five function areas in the NIST Framework for Improving Critical Infrastructure Cybersecurity (Cybersecurity Framework): identify, protect, detect, respond, and recover. This alignment is intended to help promote consistent and comparable metrics and criteria and provides agencies with a meaningful independent assessment of their information security programs.
This maturity model is designed to summarize the status of agencies’ information security programs on a five-level capability maturity scale. The five maturity levels are defined as follows:
Level 1 Ad-hoc: Policies, procedures, and strategy are not formalized; activities are performed in an ad-hoc, reactive manner.
Level 2 Defined: Policies, procedures, and strategy are formalized and documented but not consistently implemented.
Level 3 Consistently Implemented: Policies, procedures, and strategy are consistently implemented, but quantitative and qualitative effectiveness measures are lacking.
Level 4 Managed and Measurable: Quantitative and qualitative measures on the effectiveness of policies, procedures, and strategy are collected across the organization and used to assess them and make necessary changes.
Level 5 Optimized: Policies, procedures, and strategy are fully institutionalized, repeatable, self-generating, consistently implemented and regularly updated based on a changing threat and technology landscape and business/mission needs.
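To make the scale concrete, the following sketch encodes the five levels and computes a median rating from a set of assessments, much as OMB summarizes the inspectors general's results below; the ratings shown are hypothetical:

```python
from statistics import median

# The five maturity levels described above, keyed by level number.
MATURITY_LEVELS = {
    1: "Ad-hoc",
    2: "Defined",
    3: "Consistently Implemented",
    4: "Managed and Measurable",
    5: "Optimized",
}

# Hypothetical inspector general ratings for one framework function area.
ratings = [2, 3, 3, 1, 4, 3, 2]

m = int(median(ratings))
print(f"Median rating: level {m} ({MATURITY_LEVELS[m]})")  # level 3
```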
In March 2018, OMB issued its annual FISMA report to Congress, which showed the combined results of the inspectors general’s fiscal year 2017 evaluations. Based on data from 76 agency inspector general and independent auditor assessments, OMB determined that the government-wide median maturity model ratings across the five NIST Cybersecurity Framework areas did not exceed a level 3 (consistently implemented). Table 3 shows the inspectors general’s median ratings for each of the NIST Cybersecurity Framework areas.
OMB Requires Agencies to Meet Targets for Cybersecurity Metrics
In its efforts toward strengthening the federal government’s cybersecurity, OMB also requires agencies to submit related cybersecurity metrics as part of its Cross-Agency Priority goals. In particular, OMB developed the IT modernization goal so that federal agencies will be able to build and maintain more modern, secure, and resilient IT. A key part of this goal is to reduce cybersecurity risks to the federal mission through three strategies: manage asset security, protect networks and data, and limit personnel access. The key targets supporting each of these strategies correspond to areas within the FISMA metrics. Table 4 outlines the strategies and their associated targets.
In conclusion, FITARA and FISMA present opportunities for the federal government to address the high-risk areas on improving the management of IT acquisitions and operations, and ensuring the security of federal IT, thereby saving billions of dollars. Most agencies have taken steps to execute key IT management and cybersecurity initiatives, including implementing CIO responsibilities, requiring CIO review of IT acquisitions, realizing data center consolidation cost savings, managing software assets, and complying with FISMA requirements. The agencies have also continued to address the recommendations that we have made over the past several years. However, further efforts by OMB and federal agencies to implement our previous recommendations would better position them to improve the management and security of federal IT. To help ensure that these efforts succeed, we will continue to monitor agencies’ efforts toward implementing these recommendations.
Chairmen Meadows and Hurd, Ranking Members Connolly and Kelly, and Members of the Subcommittees, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time.
GAO Contacts and Staff Acknowledgments
If you or your staff have any questions about this testimony, please contact David A. Powner, Director, Information Technology, at (202) 512-9286 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Kevin Walsh (Assistant Director), Chris Businsky, Rebecca Eyler, Meredith Raymond, and Jessica Waselkow (Analyst in Charge).
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Why GAO Did This Study
The federal government plans to invest almost $96 billion in IT in fiscal year 2018. Historically, IT investments have too often failed or contributed little to mission-related outcomes. Further, increasingly sophisticated threats and frequent cyber incidents underscore the need for effective information security. As a result, GAO added two areas to its high-risk list: IT security in 1997 and the management of IT acquisitions and operations in 2015.
This statement summarizes agencies' progress in improving IT management and ensuring the security of federal IT. It is primarily based on GAO's prior reports issued between February 1997 and May 2018 (and an ongoing review) on (1) CIO responsibilities, (2) agency CIOs' involvement in approving IT contracts, (3) data center consolidation efforts, (4) the management of software licenses, and (5) compliance with cybersecurity requirements.
What GAO Found
The Office of Management and Budget (OMB) and federal agencies have taken steps to improve the management of information technology (IT) acquisitions and operations and ensure the security of federal IT through a series of initiatives. As of May 2018, agencies had fully implemented about 61 percent of the approximately 800 IT management-related recommendations that GAO made from fiscal years 2010 through 2015. Likewise, since 2010, agencies had implemented about 66 percent of the approximately 2,700 security-related recommendations as of May 2018. Even with this progress, significant actions remain to be completed.
Chief Information Officer (CIO) responsibilities. Laws such as the Federal Information Technology Acquisition Reform Act (FITARA) and related guidance assigned 35 key IT management responsibilities to CIOs to help address longstanding challenges. However, in a draft report on CIO responsibilities, GAO's preliminary results suggest that none of the 24 selected agencies have policies that fully address the role of their CIO, as called for by federal laws and guidance. GAO intends to recommend that OMB and each of the 24 selected agencies take actions to improve the effectiveness of CIOs' implementation of their responsibilities.
IT contract approval. According to FITARA, covered agencies' CIOs are required to review and approve IT contracts. Nevertheless, in January 2018, GAO reported that most of the CIOs at 22 selected agencies were not adequately involved in reviewing billions of dollars of IT acquisitions. Consequently, GAO made 39 recommendations to improve CIO oversight over IT acquisitions.

Consolidating data centers. OMB launched an initiative in 2010 to reduce data centers, which was codified and expanded in FITARA. According to agencies, data center consolidation and optimization efforts have resulted in approximately $3.9 billion of cost savings through 2018. Even so, additional work remains. GAO has made 160 recommendations to OMB and agencies to improve the reporting of related cost savings and to achieve optimization targets; however, as of May 2018, 80 of the recommendations have not been fully addressed.

Managing software licenses. Effective management of software licenses can help avoid purchasing too many licenses that result in unused software. In May 2014, GAO reported that better management of licenses was needed to achieve savings, and made 135 recommendations to improve such management. Four years later, 78 of the recommendations remained open.

Improving the security of federal IT systems. While the government has acted to protect federal information systems, agencies need to improve security programs, cyber capabilities, and the protection of personally identifiable information. Over the last several years, GAO has made about 2,700 recommendations to agencies aimed at improving the security of federal systems and information. These recommendations identified actions for agencies to take to strengthen their information security programs and technical controls over their computer networks and systems. As of May 2018, about 800 of the information security-related recommendations had not been implemented.
What GAO Recommends
From fiscal years 2010 through 2015, GAO made about 800 recommendations to OMB and federal agencies to address shortcomings in IT acquisitions and operations. Since 2010, GAO also made about 2,700 recommendations to federal agencies to improve the security of federal systems. These recommendations include those to improve the implementation of CIO responsibilities, the oversight of the data center consolidation initiative, software license management efforts, and the strength of security programs and technical controls. Most agencies agreed with these recommendations, and GAO will continue to monitor their implementation. |
Background
This section provides information on water infrastructure in Indian country, federal programs that provide drinking water and wastewater infrastructure assistance to Indian tribes, and our prior work on interagency collaboration.
Water Infrastructure in Indian Country
The 573 federally recognized Indian tribes in the United States vary greatly in terms of their culture, language, population size, land base, location, and economic status. Many are located in remote and often environmentally challenging areas. According to the U.S. Census Bureau’s American Community Survey, in 2016, about 26 percent of American Indians and Alaska Natives were living below the poverty line, compared with 14 percent for the nation as a whole.
According to EPA databases, tribes operate about 950 public drinking water systems and about 340 public wastewater systems. Drinking water systems often include groundwater wells, water treatment plants, and pipelines to deliver water to homes. A regulated, centralized wastewater system may include sewer lines, tanks, and wastewater treatment plants or lagoons, but small, rural communities are more likely to have decentralized wastewater systems, such as individual septic systems. Once centralized water or wastewater systems are constructed in Indian country, ownership is typically transferred to the tribe. A tribally owned utility, tribal government, or a separate entity operates and maintains the system on behalf of the tribe. Some tribal utilities charge user fees to help offset operations and maintenance costs, but other tribal utilities do not charge these fees because of users’ low income levels or for cultural reasons, according to IHS and tribal officials.
According to EPA, thousands of Indian homes are not currently served by a regulated, centralized drinking water or wastewater system, due in part to the logistical and other challenges associated with Indian water systems that must serve widely dispersed populations in remote locations. Instead, as we reported in September 2017, homes that are not served by water systems may have private wells and septic systems, or they may be entirely unserved. According to EPA and IHS documents, some tribal members may haul drinking water from a regulated drinking water source. However, containers used to haul and store the water can introduce bacteria and other contaminants. Also, because the regulated water source in some communities may be many miles away, residents may haul drinking water from nearby unregulated water sources, such as streams or livestock wells. For homes without access to a wastewater disposal system, residents may use a privy, use honeybuckets, or discharge waste directly to the ground.
According to researchers with the Centers for Disease Control and Prevention, restricted access to clean water for hand washing and hygiene, along with manually disposing of waste, exposes people— especially infants and the elderly—to higher rates of illness and hospitalization. We reported in January 2017 that such health concerns underscore the importance of quality health care—including preventative care, such as providing safe sanitation facilities—for American Indian and Alaska Native people. Further, according to IHS, American Indian and Alaska Native families living in homes with satisfactory environmental conditions, which include safe water and sewer systems, require appreciably fewer medical services and place fewer demands on primary health care delivery systems.
Federal Drinking Water and Wastewater Infrastructure Programs to Assist Indian Tribes
Seven federal agencies administer a number of programs that provide assistance to tribes for drinking water and wastewater infrastructure projects. Each agency has its own programs and processes for providing this assistance, with some similarities. Tribes can apply to one or more federal programs for financial assistance. In some cases, federal agencies coordinate to jointly fund the same project if the project is too large for one agency to fund. In other cases, agencies may work together by separately funding different parts of a large project or different phases of a multi-year project. Of these agencies, IHS, EPA, and USDA administer drinking water and wastewater infrastructure programs that are specific to Indian tribes.
IHS’s mission is to raise the physical, mental, social, and spiritual health of American Indians and Alaska Natives to the highest level. To fulfill this mission, IHS provides primary health care and disease prevention services. IHS’s Office of Environmental Health and Engineering’s Sanitation Facilities Construction program, established in 1959, contributes to IHS’s disease prevention efforts. This program provides technical and financial assistance to Indian tribes for the cooperative development and construction of drinking water and wastewater systems and support facilities. According to the Indian Health Care Amendments of 1988, it is the policy of the United States that all Indian communities and Indian homes, new and existing, be provided with safe and adequate water supply systems and sanitary wastewater disposal systems as soon as possible. IHS’s 12 regional offices, called Areas, are responsible for working with tribes when administering the Sanitation Facilities Construction program.
The Indian Health Care Amendments of 1988 require that IHS report annually to Congress on the sanitation deficiency levels for Indian tribes and communities, including, among other things, the amount of funds necessary to raise all Indian tribes and communities to zero sanitation deficiency. The act identifies five deficiency levels, and IHS uses a deficiency level of 0 to represent the absence of a deficiency in its data systems (see table 1).
To develop its annual report to Congress and identify sanitation deficiencies in Indian communities and homes, IHS maintains two data systems: (1) the Sanitation Deficiency System (SDS), which contains proposed drinking water and wastewater infrastructure projects to address identified sanitation deficiencies; and (2) the Home Inventory Tracking System (HITS), which contains home-specific information that complements the SDS’s project-specific information. According to IHS program documentation, the project descriptions in the SDS are to include information about the sanitation deficiency level that each project will address, the project’s estimated cost, and the number of Indian homes that the project will serve. According to IHS documents, HITS is to include information about each Indian home that may have a sanitation deficiency that is eligible to receive Sanitation Facilities Construction program assistance, including the home’s geographic location and deficiency level. Eligible homes can be located on or off reservations, but according to IHS officials, the agency typically does not collect information about Indian homes located in large urban areas. According to IHS program documentation, IHS uses information in HITS to track the status of and plan for the provision of sanitation facilities for Indian homes.
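As a rough sketch of the kinds of records the two systems hold, the following example defines simplified project and home records; the field names are hypothetical, since this report does not publish the actual SDS and HITS schemas:

```python
from dataclasses import dataclass

@dataclass
class SDSProject:
    """A proposed sanitation project tracked in the SDS (fields simplified)."""
    project_id: str
    deficiency_level: int  # level 1-5 the project would address
    estimated_cost: float  # dollars
    homes_served: int      # number of Indian homes the project would serve

@dataclass
class HITSHome:
    """An eligible Indian home tracked in HITS (fields simplified)."""
    home_id: str
    latitude: float
    longitude: float
    deficiency_level: int  # 0-5, where 0 represents the absence of a deficiency

# Hypothetical example records.
project = SDSProject("AK-0001", deficiency_level=5, estimated_cost=2_500_000, homes_served=90)
home = HITSHome("AK-0001-017", latitude=60.2, longitude=-162.0, deficiency_level=4)
```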
To address tribes’ identified sanitation facility needs, IHS is authorized to construct essential sanitation facilities, including domestic and community water supplies and facilities, as well as wastewater disposal facilities for Indian homes, communities, and lands. Under the Sanitation Facilities Construction program, IHS administers two primary drinking water and wastewater infrastructure activities: one to address sanitation deficiencies in existing homes and communities based on needs identified in the SDS, and one to provide water infrastructure for newly constructed or recently renovated Indian homes—these needs are not included in the SDS. According to IHS policy, the agency selects projects to fund that address deficiencies in existing homes based on ranked project lists contained in the SDS, by area.
According to IHS policy, the agency can manage sanitation projects on behalf of a tribe (direct service), or a tribe or tribal entity can elect to manage projects. According to this policy, to implement a project under direct service, a tribe formally requests IHS assistance, and IHS engineers typically develop projects to include in the SDS. When IHS selects the project to fund, the tribe decides whether it will complete the project design and manage the construction contract or have IHS engineers do so.
EPA provides annual grants to states to help finance drinking water and wastewater infrastructure through its Drinking Water and Clean Water State Revolving Fund programs, respectively. EPA sets aside a certain percentage of the appropriations it receives for these programs to make grants directly to Indian tribes for drinking water and wastewater infrastructure. Nine EPA regions administer the Drinking Water Infrastructure Grants Tribal Set-Aside and the Clean Water Indian Set-Aside programs, while states administer the State Revolving Funds. Under the drinking water set-aside program, EPA funds projects for community water systems that serve tribal populations, as well as for non-profit, non-community water systems owned by a tribal government that serve a tribal population. Under the clean water set-aside program, EPA provides funding for the planning, design, and construction of wastewater treatment plant facilities that serve federally recognized Indian tribes, Alaska Native villages, and certain tribes in Oklahoma. According to EPA officials, tribes are among those eligible to receive loans from the states' State Revolving Fund programs. In addition, EPA administers the separate Alaska Native Villages and Rural Community Water Grant program that awards grants to the State of Alaska to, among other things, improve sanitation in rural and Alaska Native villages.
USDA’s Rural Utilities Service allocates a portion of its appropriation for rural water and wastewater disposal programs to make drinking water and wastewater infrastructure grants to Indian tribes; this is referred to as the Native American program. USDA administers the Native American program at the national level and works with tribes at the state office and local level to conduct outreach and assist with the application process, among other things. The Native American program provides grants for water and wastewater facilities and services to rural and low-income tribal communities “whose residents face significant health risks … due to the fact that a significant proportion of the community’s residents do not have access to, or are not served by, adequate affordable water supply systems or waste disposal facilities.” In addition, USDA administers the Rural Alaska Village Grant program, which provides grants to the State of Alaska for development and construction of water and wastewater systems that address dire sanitation conditions in rural or Alaska Native villages with 10,000 or fewer people. Tribes are also eligible to receive loans and grants for infrastructure investments from the agency’s Water and Waste Disposal Program, which is administered by USDA’s state offices. Tribes that are located close to the U.S.-Mexico border and that meet the definition of a colonia are eligible for assistance from USDA’s Colonias program, a water infrastructure grant program to serve state-designated, low-income, unincorporated areas along the border. Finally, USDA administers a grant program to provide technical assistance and training, and the agency makes pre-planning grants available to tribes, organizations that serve tribes, and other recipients through multiple programs to assist with the development of application components, such as preliminary engineering or environmental reports.
Additional Agencies
Four additional agencies may provide drinking water or wastewater assistance to Indian tribes through other programs not specific to drinking water or wastewater or as authorized by statute:
HUD. HUD administers the Indian Community Development Block Grant program, a set-aside from the agency’s Community Development Block Grant program that is specific to Indian tribes. Indian Community Development Block grants can be used for construction of public facilities, provision of public services, housing, and certain economic development projects, among other things. HUD also awards Indian Housing Block Grants to tribes for affordable housing activities, which may include the development and rehabilitation of utilities, necessary infrastructure, and utility services.
Reclamation. As authorized by statute, Reclamation provides assistance for drinking water infrastructure in the 17 western states, including rural water supply projects for tribes. Some of the statutes that direct Reclamation to construct rural water supply projects for tribes are enacted Indian water rights settlements. In addition, until September 2016, Reclamation’s rural water supply program was authorized to conduct appraisal investigations and feasibility studies for proposed rural water supply projects, including those that serve Indian tribes, but the program was not authorized to construct rural water supply projects.
Corps. As authorized by statute, the Corps may provide designated communities, counties, and states with design and construction assistance for drinking water and wastewater infrastructure. For example, Congress has authorized and made appropriations for the Corps to provide assistance to Indian tribes for water-related environmental infrastructure projects—including wastewater treatment facilities and water supply, storage, treatment, and distribution facilities—through the Corps’ Section 219 Environmental Infrastructure Program.
EDA. EDA’s Public Works Program provides grants to economically distressed areas to, among other things, help rehabilitate, expand, and improve their public works facilities, including drinking water and wastewater facilities. The Economic Adjustment Assistance Program provides grants for, among other things, development of public facilities, including drinking water and wastewater facilities. EDA’s Planning Program provides grants to various entities, including tribes, to pay the costs of economic development planning, which can include planning for water infrastructure.
Prior GAO Work on Interagency Collaboration
As part of our body of work on interagency collaboration, our September 2012 report discussed a variety of mechanisms to implement interagency collaborative efforts and identified key features that all efforts benefit from. Mechanisms to implement interagency collaborative efforts include establishing interagency task forces or signing memorandums of understanding. Key features, many of which are related to practices to enhance and sustain collaboration identified in our previous work, fall into the following categories: outcomes and accountability, bridging organizational cultures, leadership, clarity of roles and responsibilities, participants, resources, and written guidance and agreements.
Federal Agencies Estimated About $3 Billion in Existing Tribal Drinking Water and Wastewater Infrastructure Needs in Fiscal Year 2016, but the Needs Are Underestimated
IHS and EPA estimated costs for tribal water infrastructure needs, with IHS identifying at least $3.2 billion in estimated costs for infrastructure projects to address existing drinking water and wastewater infrastructure needs for fiscal year 2016 and EPA estimating the costs of future tribal drinking water infrastructure needs at an additional $2.4 billion over the following 20 years. However, IHS’s estimate of existing needs is likely too low because IHS has not identified all eligible Indian homes that may have existing sanitation deficiencies—drinking water or wastewater infrastructure needs—and some data in the system that IHS uses to track home-specific infrastructure needs are not accurate.
IHS and EPA Have Estimated Several Billion Dollars in Existing and Future Tribal Water Infrastructure Needs
In fiscal year 2016, IHS identified about $3.2 billion in estimated costs for projects to address existing tribal drinking water and wastewater infrastructure needs. This estimate represented more than 2,000 projects in the SDS to address 373 tribes’ existing drinking water and wastewater infrastructure needs. To develop these projects, IHS policy directs area staff to invite all federally recognized tribes to identify existing drinking water and wastewater infrastructure needs each year. IHS staff then work with interested tribes to develop projects, including cost estimates, to include in the SDS. In fiscal year 2016, projects to address deficiency levels 4 and 5—homes or communities that lack a safe drinking water supply or wastewater disposal system, or both—accounted for $1.6 billion, or about half, of the total estimated costs of tribal infrastructure needs in the SDS. More than 80 percent of the deficiency level 4 and 5 project costs were located in the IHS Alaska and Navajo areas. In addition, in fiscal year 2016, IHS determined that more than 60 percent of the total existing drinking water and wastewater infrastructure needs in the SDS were infeasible, mostly due to the significant costs associated with infeasible deficiency level 5 projects.
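The shares above fall out of a simple aggregation over the SDS project list; a minimal sketch with invented project records (deficiency level, estimated cost) follows:

```python
from collections import defaultdict

# Hypothetical SDS project records: (deficiency level, estimated cost in $ millions).
projects = [(2, 1.2), (3, 4.0), (4, 9.5), (5, 30.0), (3, 2.3), (5, 12.0)]

costs_by_level = defaultdict(float)
for level, cost in projects:
    costs_by_level[level] += cost

total = sum(costs_by_level.values())
high_need = costs_by_level[4] + costs_by_level[5]
print(f"Levels 4 and 5: ${high_need:.1f}M of ${total:.1f}M ({high_need / total:.0%})")
```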
EPA collects and reports data on the drinking water infrastructure needs of the nation’s public water systems, including the future needs of tribally owned or operated drinking water systems. Specifically, EPA is required to assess the capital improvement needs of all eligible public water systems every 4 years, and it conducted its Drinking Water Infrastructure Needs Survey and Assessment to obtain this information from 1995 through 2015. EPA last reported in 2013 on the estimated costs of capital improvement projects needed to repair, replace, and upgrade existing tribal and other public drinking water systems over the following 20 years. In its 2013 report, EPA estimated the costs of future tribal drinking water needs of public systems at approximately $2.4 billion. EPA does not, and is not required to, collect information about future tribal wastewater infrastructure needs.
Other agencies that provide tribes with assistance for drinking water or wastewater infrastructure projects do not, and are not required to, systematically identify tribal drinking water or wastewater infrastructure needs. For example, USDA officials explained that tribes identify needs through the applications they submit to the agency’s programs. These officials stated that they also identify tribal needs through outreach to tribes and coordination with other agencies, such as IHS. In addition, HUD officials said that they do not collect information specifically about tribal water infrastructure needs because they rely on the tribes to propose or identify projects to meet any needs based on the tribes’ priorities.
IHS Underestimates Existing Tribal Water Infrastructure Needs Because IHS Has Not Identified All Eligible Homes with Infrastructure Needs and Relevant Data Are Not Accurate
IHS area staff work with tribes each year to (1) identify Indian homes eligible for and in need of IHS drinking water or wastewater infrastructure assistance to include in IHS’s home-specific tracking system, HITS; and (2) develop projects aimed at correcting any identified sanitation deficiencies in these homes to include in the SDS. Through this process, IHS has entered information about hundreds of thousands of eligible Indian homes in HITS and developed thousands of projects in the SDS. According to agency documents, HITS is to include information about each Indian home that is eligible to be served by the Sanitation Facilities Construction program and that may have an existing sanitation deficiency. However, based on our review of IHS documentation and interviews with IHS officials, HITS does not contain all eligible Indian homes that may have existing sanitation deficiencies, and some data in the system are not accurate.
HITS Does Not Contain All Eligible Indian Homes That May Have Existing Water Infrastructure Needs
According to IHS officials, as of February 2018, HITS contained information for about 406,000 Indian homes. However, according to IHS area officials, the system does not contain information about all Indian homes eligible to be served by the Sanitation Facilities Construction program. For example, Oklahoma City Area officials we interviewed estimated that, based on Census data and their professional experience, more than 100,000 Indian homes in their area may be eligible for IHS program assistance but are not included in the system, and an unknown number of those homes likely have existing drinking water and wastewater infrastructure needs. These officials, as well as tribal officials administering the Sanitation Facilities Construction program for their tribes in Oklahoma, said that the system does not contain all eligible Indian homes, in part because it is difficult to identify where tribal members are living since most of the communities in the state are a mixture of tribal and non-tribal residents and are not located on reservations. In addition, Portland Area officials stated that they believe the system is missing an unknown number of eligible Indian homes in their area because it is challenging to identify eligible homes that are in scattered locations away from tribal community facilities. In contrast, Navajo Area officials said they believe the system is more than 95 percent complete for their area, in part because the area aligns with the Navajo Nation’s lands.
IHS headquarters officials stated that they do not expect HITS to capture all eligible Indian homes, in large part because some tribes have chosen to not provide such information to IHS for cultural or other reasons. These officials said they are focused on working collaboratively with tribes to identify homes that have existing deficiencies rather than all homes eligible for services but added that IHS areas are expected to identify such homes during the normal course of their work. IHS area officials and tribal officials we interviewed stated that identifying eligible Indian homes not located on reservations is resource intensive, and they do not have sufficient resources to locate these homes. IHS Oklahoma City Area officials said it would be helpful to find efficient ways to identify additional eligible homes that may have sanitation deficiencies. For example, these officials said they have started using EPA data to target communities with water systems that do not meet EPA’s water quality standards and identify eligible homes within those communities, but they have made limited progress with their existing resources.
Standards for Internal Control in the Federal Government calls for management to use quality information to achieve the entity’s objectives; such information is appropriate, current, complete, accurate, accessible, and provided on a timely basis. We recognize that it would be resource intensive for IHS to locate every eligible Indian home to include in HITS, but because the system may not contain roughly 20 percent of eligible Indian homes, opportunities exist for IHS to identify in a targeted, efficient way additional homes with existing deficiencies to include in HITS. By implementing a targeted, resource-efficient method to identify additional eligible Indian homes that may have existing sanitation deficiencies to include in HITS, IHS could have better assurance that it has more complete information to help improve its estimate of the number of eligible Indian homes that may need sanitation facilities assistance.
Deficiency Levels Are Not Accurate For Every Home in HITS
Deficiency levels for thousands of homes may not be accurately captured in HITS. IHS headquarters officials stated that, as of February 2018, of the roughly 406,000 total tribal homes in HITS, about 229,400 homes had a deficiency level of 0. Therefore, the remaining approximately 176,600 tribal homes had deficiency levels 1 through 5. HITS automatically assigns a deficiency level 0 to each home when IHS enters it into the system, and homes remain at a deficiency level 0 until IHS develops projects in the SDS to serve those homes. HITS does not provide IHS with the option of recording a home’s deficiency level as unassessed, so a deficiency level 0 could indicate that there is no deficiency or that the home has not yet been assessed to determine a deficiency.
IHS area officials we interviewed stated that they were aware of homes with sanitation deficiencies that were not accurately reflected in HITS. For example, Phoenix Area staff said they knew of homes with a deficiency level 4 or 5 that had a deficiency level 0 in HITS because these homes were not yet included within the scope of an SDS project. Also, California Area officials estimated that they had not assessed deficiency levels for about 20,000 eligible homes in their area, and Oklahoma City Area officials said they had not assessed more than 100,000 homes in their area—these homes’ deficiency levels all appeared as deficiency level 0 in HITS, but their actual deficiencies were unknown.
According to IHS officials, there are multiple ways to assess homes’ deficiency levels. For homes that are not connected to a public water system, such as homes with private wells, IHS staff may need to visit homes to identify any existing deficiencies, with permission of the tribe. For homes connected to a public water system, staff can assign the homes the deficiency level associated with the water system but may need to visit the community to assess the system’s overall deficiency level. IHS officials from the California and Oklahoma City areas said they did not have the staff resources to begin the process of identifying whether the deficiency level 0 homes in their areas had deficiencies and developing projects for the SDS to serve them.
IHS headquarters officials stated that they have identified homes that the Sanitation Facilities Construction program has served since implementing HITS in 2015. For example, IHS officials stated that of the about 229,400 homes with a deficiency level 0 in HITS, they had determined that about 90,700 correctly show that deficiency level because they have been included in a project in the SDS since 2015. IHS had not included the remaining approximately 138,700 homes with a deficiency level 0 in a project in the SDS. Therefore, using HITS, IHS could not determine if these homes had (1) no deficiency, (2) a deficiency that IHS addressed prior to 2015, or (3) an unknown deficiency because the homes had not been assessed.
IHS officials stated that in the future they will be able to use HITS to better track the agency’s service and project history at the individual home level. However, IHS officials did not explain what steps they would take to identify deficiencies for the approximately 138,700 homes in HITS that had not been included in an SDS project. Standards for Internal Control in the Federal Government calls for management to use quality information to achieve the entity’s objectives; such information is appropriate, current, complete, accurate, accessible, and provided on a timely basis. IHS officials said that improving the system’s accuracy would be beneficial. By implementing a mechanism to indicate in HITS whether each home with a deficiency level of 0 has been assessed, IHS could also have more efficient ways to take steps to address the deficiencies of the homes contained in HITS.
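A minimal sketch of such a mechanism, assuming a simple record design rather than IHS’s actual system, shows how an explicit assessment indicator would distinguish an assessed level 0 home from an unassessed one:

```python
from enum import Enum
from typing import Optional

class AssessmentStatus(Enum):
    UNASSESSED = "unassessed"
    ASSESSED = "assessed"

class HomeRecord:
    """Hypothetical home record with an explicit assessment indicator, so that
    deficiency level 0 no longer doubles as 'not yet assessed'."""
    def __init__(self, home_id: str):
        self.home_id = home_id
        self.status = AssessmentStatus.UNASSESSED
        self.deficiency_level: Optional[int] = None  # set only once assessed

    def record_assessment(self, level: int) -> None:
        if not 0 <= level <= 5:
            raise ValueError("deficiency level must be 0 through 5")
        self.status = AssessmentStatus.ASSESSED
        self.deficiency_level = level

home = HomeRecord("OK-1234")
print(home.status)         # AssessmentStatus.UNASSESSED, not a misleading level 0
home.record_assessment(0)  # an assessed home with no deficiency
print(home.status, home.deficiency_level)
```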
Federal Agencies Provided Funding for Tribal Water Infrastructure Projects, but Processes May Not Prioritize Projects That Address the Most Severe Deficiencies
In fiscal year 2016, federal agencies obligated approximately $370 million for tribal drinking water or wastewater infrastructure projects. The agencies with tribal-specific programs for drinking water and wastewater infrastructure—IHS, EPA, and USDA—funded some projects to address what they identified as the most severe sanitation deficiencies—communities and homes that do not have safe drinking water or wastewater disposal facilities. However, the agencies’ processes may not always prioritize projects that address the most severe sanitation deficiencies. In addition, during the course of our review, we identified issues with how USDA awarded grants under its Rural Alaska Village Grant program.
Federal Agencies Provided About $370 Million in 2016 for Tribal Water Infrastructure Projects
In fiscal year 2016, federal agencies provided about $370 million to develop, construct, or repair tribal drinking water and wastewater infrastructure projects to address tribes’ needs. This amount represents about 11 percent of the more than $3 billion in total existing tribal drinking water and wastewater infrastructure needs that IHS identified in 2016. Appendix III contains additional detail about agency obligations for tribal drinking water and wastewater infrastructure projects for fiscal years 2012 through 2016.
Federal agency obligations were used to address a variety of tribal drinking water and wastewater infrastructure needs. For example, IHS, EPA, USDA, and the State of Alaska provided approximately $15.9 million for multiple, phased projects to bring first-time, in-home piped drinking water and wastewater service to approximately 90 homes in the Native Village of Eek in Alaska (see fig. 1). The residents of Eek obtain their drinking water by hauling water from the village washeteria, a building that contains toilets, washing machines, and a spigot for purchasing water for use in the home. Most homes in the community do not have piped water or sewer service to kitchens or bathrooms, and residents use washbasins for handwashing and food preparation and honeybuckets for wastewater disposal. As of April 2017, construction was ongoing, and officials estimated that the entire community of about 300 people would be served by the fall of 2018. See appendix IV for other examples of tribal drinking water and wastewater infrastructure projects that we visited.
In addition to providing financial assistance for projects to design or construct water infrastructure, federal programs provided grants for technical assistance and training for tribal utilities and staff. For example, in fiscal year 2016, USDA awarded a $130,000 grant from its Technical Assistance and Training program to one organization that works with tribes. USDA also awarded a contract to the National Rural Water Association for it to employ a network of technical consultants who can provide on-site technical assistance to eligible systems, including tribally operated systems experiencing day-to-day operational issues, among other challenges.
Federal programs mostly did not provide financial assistance for routine operations and maintenance of installed community or individual infrastructure. Tribal officials we interviewed, however, said that paying for operations and maintenance is often the tribe’s biggest challenge once a system is constructed or upgraded. For example, officials from one tribe said that the tribe did not have sufficient resources to operate and maintain a newly constructed water treatment system. Tribal officials we interviewed stated that their members are often unable to afford the utility fees needed to support the water system. For private systems, officials from two tribes said some of their members have trouble maintaining new drinking water filtration and septic systems because, for example, the systems are technically complex and costly to maintain. Officials from another tribe said homeowners who have difficulty operating and maintaining a system may return to using an unsafe drinking water source they previously used, for example. According to IHS officials, the agency has been collaborating with EPA, USDA, and tribes to improve collection of information about the cause of some systems’ premature failure and to analyze best practices for operations and maintenance of tribal water systems.
Agencies Funded Some Projects to Address the Most Severe Sanitation Deficiencies
Agencies with tribal-specific programs for water infrastructure—IHS, EPA, and USDA—selected and funded projects that address the most severe sanitation deficiencies. Three of these agencies’ programs—IHS’s Sanitation Facilities Construction, EPA’s clean water set-aside, and USDA’s Native American program—documented in regulation or policy their goal of funding projects to address these needs. Specifically, according to IHS’s Sanitation Facilities Construction program policy, the program’s goal includes providing funding first and in greater degree to homes and communities with the greatest needs, that is, those that lack safe drinking water or wastewater disposal, or both. EPA’s clean water set-aside program policy states the program’s goal is to protect public health in Indian country by addressing the lack of access to sanitation facilities (i.e., deficiency levels 4 and 5 for IHS and EPA). Finally, under the applicable requirements and policy, USDA’s Native American program’s objective is to provide water and waste disposal facilities and services to low-income rural communities whose residents face significant health risks. The program’s goal includes funding the neediest projects, giving priority to areas that lack running water, flushing toilets, and modern sewage disposal systems.
According to agency policy, IHS’s Sanitation Facilities Construction program and EPA’s clean water set-aside program prioritize and select projects to fund according to the projects’ rankings in each IHS area’s SDS list. To create the ranked lists, IHS staff assign scores to each project based on a set of eight scoring factors, each with a different number of points that may be assigned to a project (see table 2).
USDA prioritizes and selects projects to fund from its Native American program using a different process than IHS and EPA. USDA’s process involves tribes, working with USDA state offices, submitting project grant applications to the headquarters office. USDA state offices score project applications before submitting them to the headquarters office. USDA policy directs the program to make funds available according to priority, and the agency accepts and evaluates applications and awards grants throughout the year. USDA officials said the program maintains a wait list for eligible applications received after all available funds have been obligated each year. According to USDA’s scoring sheet for the Native American program, the agency evaluates project applications based on a set of five scoring factors, each with a different number of points to award. These scoring factor categories include population, income, joint financing, and discretionary points that can be awarded at state offices and headquarters (see table 3). USDA officials said that they also take SDS deficiency levels into account when reviewing project applications, but that the statute authorizing the Native American program does not specifically reference IHS’s deficiency level definitions.
Using their respective processes to prioritize and select projects for funding, IHS’s Sanitation Facilities Construction program, EPA’s clean water set-aside program, and USDA’s Native American program obligated a total of nearly $110 million in fiscal year 2016 for projects to meet a mixture of needs. For example, of the approximately 190 projects from the SDS that IHS, EPA, and others funded in fiscal year 2016, about 40 percent addressed deficiency levels 2 and 3, and about 60 percent addressed deficiency levels 4 and 5. Further, in fiscal year 2016, USDA reported that its Native American program funded four projects that provided new drinking water and wastewater service to some tribal communities and funded nine projects that replaced, renovated, or expanded existing infrastructure. Based on our review of IHS and USDA documents, deficiency level 2 and 3 projects as well as replacement and renovation projects can address important water quality and other problems, but they generally do not address the most severe deficiencies or the most significant health risks.
Based on our review of IHS and EPA documents and interviews with these agencies’ officials, we found that their process for prioritizing and selecting projects to fund from the SDS can discourage funding some deficiency level 4 and 5 projects, especially those with a relatively high cost per home. According to some IHS area officials we interviewed, applying IHS’s scoring factors and the points associated with each factor means that deficiency level 3 projects may score higher than—and therefore receive funding before—deficiency level 4 or 5 projects. For example, a project’s cost per home is a significant contributor to its score because IHS assigns as few as minus 20 points to projects that have a relatively high cost to implement per home. IHS officials said that, typically, deficiency level 3 projects replace existing community infrastructure and serve more homes, which makes those projects’ relative cost per home lower than that of deficiency level 4 and 5 projects. IHS headquarters officials explained that they developed the SDS scoring system in consultation with tribes so the system could balance the need to fund deficiency level 4 and 5 projects with the need to fund projects with lower deficiencies that address health needs and serve a larger number of homes. However, deficiency level 4 and 5 projects may rank lower than some projects that address less severe deficiencies and thus too low to be funded in a given year; based on our review of SDS lists, hundreds of feasible projects to address the most severe sanitation deficiencies have remained on these lists for 5 years or more. As of the end of fiscal year 2016, many of these projects had not been selected for funding by IHS or EPA.
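To make this dynamic concrete, the following simplified sketch shows how a point-based ranking of this general kind can place a deficiency level 3 project ahead of a deficiency level 4 project. It is an illustration only, not IHS’s actual SDS scoring system: the factor names, point values, thresholds, and example projects are hypothetical, except that the cost-per-home factor can contribute as few as minus 20 points, as noted above.

```python
# Illustrative sketch only: factor names, point values, and thresholds are
# hypothetical, except that a relatively high cost per home can contribute
# as few as minus 20 points, as described in the accompanying text.

def score_project(deficiency_level, cost_per_home, homes_served):
    """Sum points from a few weighted factors into a single project score."""
    # Hypothetical points rewarding higher deficiency levels (1-5).
    deficiency_points = {1: 5, 2: 10, 3: 20, 4: 30, 5: 40}[deficiency_level]

    # Cost-per-home factor: expensive projects lose up to 20 points.
    if cost_per_home > 100_000:
        cost_points = -20
    elif cost_per_home > 50_000:
        cost_points = -10
    else:
        cost_points = 10

    # Hypothetical factor rewarding projects that serve more homes.
    homes_points = min(homes_served // 10, 15)

    return deficiency_points + cost_points + homes_points

projects = [
    # (name, deficiency level, cost per home in dollars, homes served)
    ("Replace aging community water main", 3, 8_000, 150),
    ("First piped service to remote homes", 4, 120_000, 12),
]

# Rank projects by score, highest first, as on a ranked area list.
for name, level, cost, homes in sorted(
        projects, key=lambda p: score_project(*p[1:]), reverse=True):
    print(f"{name}: deficiency level {level}, "
          f"score {score_project(level, cost, homes)}")
```

Under these hypothetical values, the level 3 project outranks the level 4 project (45 points versus 11) solely because its lower cost per home and larger number of homes served outweigh its lower deficiency level.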
IHS headquarters officials said that the agency is working to improve the extent to which it funds feasible projects to address the highest sanitation deficiencies. For example, these officials said that they are updating the 2003 Sanitation Facilities Construction program guidelines to incorporate subsequently issued guidance, and this update should also more directly align the guidelines with the program’s original focus—to prioritize service to Indian homes and communities that lack access to piped water and sewer systems. However, a senior IHS official said that changing the SDS scoring factors is not part of this effort because the current scoring factors balance a number of interests in addition to projects’ deficiency levels. The official said that when higher deficiency level projects rank lower than other projects on the SDS list in a given year, this does not mean that public health needs are going unaddressed. Yet, our analysis shows that projects to address the highest deficiency levels have remained in the SDS for many years. We recognize that IHS faces trade-offs when selecting tribal infrastructure projects to fund. By reassessing the point distribution across the SDS scoring factors as part of IHS’s program guidelines update, in light of trade-offs between funding projects that address the most severe sanitation deficiencies and projects that meet other needs, IHS may have better assurance that its projects address the most severe sanitation deficiencies in Indian communities.
Regarding USDA’s Native American program, based on our review of agency documents and interviews with USDA officials, we found that the agency’s process for prioritizing and selecting projects may not provide USDA with reasonable assurance that it is selecting projects to fund that address the most severe sanitation deficiencies. Specifically, USDA’s scoring factors for its Native American program do not include a scoring factor category to evaluate the extent to which projects will address health risks that stem from tribes’ lack of drinking water and wastewater infrastructure. In contrast, USDA prioritizes projects to fund under its Colonias grant program using an additional scoring factor that awards points based on the extent to which a proposed project will address health risks stemming from lack of safe drinking water and wastewater disposal in a colonia. For example, USDA awards 50 points for projects in colonias where a lack of access to safe drinking water and wastewater disposal results in a significant health risk. We recommended in December 2009 that USDA take steps to better target its limited funds for the Colonias program, and USDA responded in part by creating the additional scoring factor for colonias to ensure that the neediest colonias receive funding.
To prioritize Native American program applications that address significant health risks, USDA officials said they use discretionary points. However, according to program policy, USDA state office and headquarters officials may award discretionary points to meet other purposes that are not related to addressing health risks, such as encouraging projects with green infrastructure or promoting geographic diversity among grantees, or they may not award these points at all. As a result, USDA may not have reasonable assurance that it is consistently evaluating or funding project applications in a way that aligns with the Native American program’s goal. USDA policy states that both the Native American and Colonias programs are to prioritize areas that lack running water, flushing toilets, and modern sewage disposal systems. By implementing a scoring factor similar to the one in the Colonias program—that is, one that awards points for proposed projects that address health risks from a lack of access to safe drinking water and wastewater disposal—for the Native American program, USDA would have more assurance that it is evaluating project applications consistently and funding projects to address the most severe sanitation deficiencies in Indian communities, consistent with the program’s goal.
USDA Did Not Always Award Rural Alaska Village Grants to Authorized Recipients, and the Program’s Regulations Are Inconsistent with Its Authority
During the course of reviewing funding for tribal drinking water and wastewater infrastructure projects, we encountered several issues with one of USDA’s tribal drinking water and wastewater infrastructure programs, the Rural Alaska Village Grant program. Specifically, section 306D of the Consolidated Farm and Rural Development Act authorizes USDA to make grants to the State of Alaska for the benefit of rural or Native villages to provide for the development and construction of drinking water and wastewater systems. According to USDA reports, these grants are used for projects that have provided, for example, rural Alaska Native residents with access to safe drinking water and flush toilets in their homes. From the program’s beginning in fiscal year 1997 through fiscal year 2016, USDA awarded 455 grants totaling more than $444 million to provide safe drinking water and wastewater disposal to thousands of Alaska Natives in remote communities.
We found that from fiscal year 1997 through fiscal year 2016, USDA awarded 159 Rural Alaska Village grants totaling about $157 million to recipients not authorized by section 306D. These recipients were Alaska Native villages, municipalities, and the Alaska Native Tribal Health Consortium, which is the tribal organization that administers IHS’s Sanitation Facilities Construction program in Alaska. USDA’s appropriations acts for fiscal years 2012 through 2017, however, authorized USDA to provide Rural Alaska Village grants to the Consortium. Of the 159 grants, USDA awarded 127 grants (about $121 million) to municipalities and Alaska Native villages from 1997 through 2016, and it awarded 32 grants (about $36 million) to the Consortium in 2011 before first receiving authority to do so in fiscal year 2012. Based on our review of a list of USDA grant agreements and of selected agreements, and according to agency officials, USDA signed 32 such agreements in 2011 with the Consortium and the communities on whose behalf the Consortium administered the grants. USDA officials stated that the agency made seven total obligations to the Consortium in 2011 for these grants.
USDA officials stated that they did not agree that the agency had awarded Rural Alaska Village grants to ineligible entities because the program’s authorizing statute gives the State of Alaska control over the use of the grants, and the state concurred with USDA making some grants directly to other parties. For example, the USDA officials stated that a 2011 memorandum of agreement between USDA, the State of Alaska, IHS, and the Consortium was a vehicle for the state to direct a portion of the Rural Alaska Village grants to other parties. These officials stated that since the statute does not prevent the state from redirecting portions of the grant to other parties, it is not improper for USDA to enter into an agreement with the state to award the grants directly to other parties so that the state does not have to redirect them. In commenting on a draft of this report, USDA noted that the agency has awarded two grants to municipalities since signing the 2011 agreement.
In addition, USDA officials said that they entered into the memorandum of agreement and began awarding grants to the Consortium in 2011 to address problems with the program’s administration, which resulted in projects that were delayed or halted. For example, USDA stated in a 2010 report that the State of Alaska had not adequately documented project costs and that USDA staff were concerned that the state had not applied the obligations it received from USDA to the intended communities. According to USDA officials, they have seen a significant improvement in the state’s grant administration and more timely delivery of projects since the 2011 agreement. In addition, the Rural Alaska Village Grant program manager said the agency awards grants directly to Native villages that have the capacity to administer them. In commenting on a draft of this report, USDA stated that the agency has made all grants to the Consortium pursuant to the 2011 memorandum of agreement.
The State of Alaska can choose to make subgrants once it receives the Rural Alaska Village grant, but section 306D of the Consolidated Farm and Rural Development Act only authorizes USDA to award grants to the State of Alaska. Moreover, the 2011 memorandum of agreement cannot authorize USDA to award grants to recipients that are not authorized by statute. By ensuring that all Rural Alaska Village grants are awarded only to recipients identified as eligible in section 306D or USDA appropriations acts, USDA will have assurance that it is complying with the law. If USDA wants to award Rural Alaska Village grants to municipalities and Alaska Native villages, it should seek authority to do so, as it did to award such grants to the Consortium.
In addition, the regulations governing the Rural Alaska Village Grant program identify rural or native villages in Alaska as eligible grant recipients. USDA officials explained that the agency amended the Rural Alaska Village Grant program regulations in 2015 to codify the 2011 memorandum of agreement. However, this regulation identifying rural and Alaska Native villages as eligible grant recipients expands USDA’s authority to award grants beyond the existing statutory authorities, which do not list rural or Alaska Native villages as eligible recipients. Until USDA amends the Rural Alaska Village Grant program regulations to be consistent with USDA’s authority, the agency’s regulations will continue to recognize recipients not authorized by statute.
The Extent to Which Federal Agencies Collaborated to Meet Tribes’ Water Infrastructure Needs Varied at the National Level and in Six Selected States
Most of the seven federal agencies that administer programs to provide drinking water and wastewater infrastructure assistance to Indian tribes have taken actions to collaborate at the national level, and the agencies have identified additional opportunities to collaborate. At the regional level, seven federal agencies we surveyed reported collaborating on a range of activities within six selected states—with some agencies frequently working together and others rarely collaborating—and the agencies identified opportunities to increase collaboration at the regional level to better serve tribes.
Most Reviewed Agencies Have Taken Actions to Collaborate at the National Level on Tribal Water Infrastructure and Have Identified Additional Opportunities to Increase Collaboration
Most of the seven federal agencies we reviewed have taken actions to collaborate at the national level and have identified additional opportunities to collaborate that they have not yet pursued. In our previous work, we found that achieving important national outcomes—such as providing access to safe drinking water and wastewater disposal—often requires coordinated and collaborative efforts of a number of programs spread across the federal government. For example, IHS, EPA, USDA, HUD, and Reclamation have formed a national tribal infrastructure task force to facilitate the agencies’ collaborative efforts when providing services, support, and technical assistance to tribes.
The tribal infrastructure task force’s efforts reflect some of the key features that, as we have found in our previous work, benefit all collaborative mechanisms:
Written guidance and agreements. We have previously reported that agencies that articulate their agreements in documents can strengthen their commitment to working collaboratively. The members of the tribal infrastructure task force first documented their agreement in a memorandum of understanding in 2007, the year the task force was created. The agencies updated the memorandum most recently in 2013, and they use the document to formally agree on the group’s common goal and purposes and to clarify and define roles and responsibilities. Having participating agencies document their agreements on how they will be collaborating, and continually updating and monitoring these agreements, are practices that are consistent with our prior work.
Outcomes and accountability. In our previous work, we have reported on the importance of groups having clear goals. In its 2013 memorandum of understanding, the tribal infrastructure task force identified a common goal of improving access to safe drinking water and basic sanitation for American Indians and Alaska Natives. In the memorandum, the member agencies also agreed on the task force’s stated purposes, one of which is to enhance the efficient leveraging of funds.
Leadership. We have found that identifying one agency as the leader of a collaborative group is often beneficial because it centralizes accountability and can speed decision making. EPA has served as the federal focal point for the task force; this has included hosting the task force’s website that serves as a common source for documents the group has produced. According to an official involved with the task force, EPA’s role has provided continuity in leadership.
The task force’s efforts have yielded some specific results. For example, in 2013, tribal infrastructure task force members agreed to adopt a uniform preliminary engineering report template, a key supporting document that multiple agencies require in their project application and evaluation processes. Task force members created this template in part in response to our October 2012 recommendation that EPA and USDA develop such a document. According to agency officials we interviewed, the report template has been helpful for tribes since they no longer have to produce different versions of the same document when submitting multiple applications to different agencies. USDA officials said they have since worked with other agencies to develop an online version of the preliminary engineering report that is accessible to task force members and others to further improve collaboration.
However, according to agency officials involved with the task force, there may be additional opportunities to improve the efficiency of their collaboration at the national level. For example, in 2011, a workgroup of the task force identified a series of 10 options to increase the efficiency of collaboration by streamlining their application processes. As of November 2017, according to agency officials, the task force had not acted on most of the options. One such option was for agencies to better align their different funding and application cycles where possible. Several tribal officials and representatives from a tribal organization we interviewed cited challenges with complying with the agencies’ different application requirements. For example, they said that doing so can be resource intensive and can make it difficult to obtain funds for one project. Other tribal officials we interviewed also identified ways that agencies could improve their collaboration that would benefit tribal applicants and that the task force did not identify in its 2011 report. For example, various tribal officials suggested that agencies standardize federal program application processes and coordinate their outreach to tribes to discuss agency programs and their requirements.
According to an agency official involved with the task force, when the group considered which options from the 2011 report to implement, member agencies focused their efforts on implementing those that were most achievable given the agencies’ limited resources. Other officials also said that it would be worthwhile to reconsider some of the options identified in the report. As stated in the task force’s 2013 memorandum of understanding, one of its purposes is to enhance the member agencies’ efficient leveraging of funds. By reviewing the 2011 task force report and identifying and implementing additional actions to help increase their collaboration, the task force member agencies could improve their ability to leverage limited program funds.
Federal Agencies’ Regional Offices Collaborated to Varying Extents within Six Selected States and Reported That Additional Collaboration Would Be Beneficial
The regional offices of the seven federal agencies we surveyed collaborated with each other to varying extents in the six selected states (Alaska, Arizona, California, New York, Oklahoma, and South Dakota). In the 2013 memorandum of understanding, the tribal infrastructure task force member agencies—IHS, EPA, USDA, HUD, and Reclamation—agreed that they are expected to collaborate at the regional level to achieve a common goal of providing safe drinking water and basic sanitation for tribes. However, based on our review of agency survey responses, these agencies did not always collaborate in each of the six states. We measured agency collaboration in terms of the number of instances in which one agency regional office reported using a collaborative mechanism with another agency. These collaborative mechanisms include state- or project-level working groups, memorandums of understanding, and shared databases, among others. In responses to our survey, we found that the number of instances of agency regional offices reporting that they used one or more collaborative mechanism with other agencies varied across the six states. For example, the agencies’ regional offices collaborated the most in Alaska and the least in New York and Oklahoma. Figure 2 shows the percentage of instances where an agency reported using a collaborative mechanism with another agency when jointly working on tribal drinking water and wastewater activities in the six states, out of the total possible instances. Appendix II contains additional details about our survey and agency responses.
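As a simplified illustration of this measure, the sketch below computes the share of possible reporting instances in a state in which one agency indicated using at least one collaborative mechanism with another. The responses shown are hypothetical placeholders, not our actual survey data.

```python
from itertools import permutations

# Hypothetical responses: an ordered pair (A, B) means agency A's regional
# office reported using at least one collaborative mechanism with agency B.
# These entries are placeholders, not actual survey results.
reported = {
    "Alaska": {("IHS", "EPA"), ("EPA", "IHS"), ("USDA", "EPA"), ("IHS", "USDA")},
    "Oklahoma": {("IHS", "EPA")},
}

# Agencies assumed, for illustration, to operate in each state.
agencies_present = {
    "Alaska": ["IHS", "EPA", "USDA", "HUD"],
    "Oklahoma": ["IHS", "EPA", "USDA", "HUD"],
}

for state, agencies in agencies_present.items():
    # Each ordered pair of distinct agencies is one possible instance.
    possible = list(permutations(agencies, 2))
    used = [pair for pair in possible if pair in reported.get(state, set())]
    share = 100 * len(used) / len(possible)
    print(f"{state}: {len(used)} of {len(possible)} possible instances "
          f"({share:.0f}%)")
```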
In responses to our survey, certain agencies’ regional offices reported collaborating with each other in some states but not in other states. For example, EPA and USDA regional offices both reported working together in Alaska, Arizona, and California, but not in New York, Oklahoma, or South Dakota. IHS and HUD reported collaborating with each other in three states but not in the other states. Not all agencies work with tribes in every state. For example, Reclamation does not operate in Alaska or New York, so we did not survey the agency in those states. The Corps’ regional offices responded that they are not authorized to work on drinking water or wastewater infrastructure with tribes in two of the six states. In contrast, IHS and EPA regional offices reported collaborating with each other in all six states, the most of any agency pair.
In their responses to our survey and in interviews, the seven federal agencies’ regional offices most frequently identified three key factors that limited how much they collaborated in the six states. Specifically:

Incompatibility of agency policies and missions. Agencies’ regional offices reported that having incompatible policies or agency missions was a factor that had hindered their collaboration with other agencies. For example, IHS and HUD regional offices in four states reported that a restriction on IHS’s ability to serve new homes constructed with grants from HUD’s housing programs limited their collaboration. Several agencies’ regional offices reported that having compatible policies helped their collaboration. For example, IHS and USDA regional offices in Alaska responded that multiple agencies’ use of IHS’s SDS list as a common source for identifying potential projects to fund has helped collaboration. We previously found that adopting compatible policies and procedures is one way for agencies to establish means of operating across agency boundaries.
Insufficient resources. An additional factor that hindered agencies’ collaboration was insufficient staff and financial resources. For example, HUD and IHS regional officials we interviewed in Arizona said that a state-level tribal infrastructure working group they were involved in became inactive in 2016, after the lead agency determined it was unable to continue dedicating staff resources to that role and none of the other agencies picked up the lead. In contrast, several IHS and EPA regional offices reported that the existence of standard interagency agreements that facilitate transferring EPA funds to IHS helped them collaborate and leverage funding for projects that each agency would not have funded on its own. Identifying and leveraging the resources needed to initiate or sustain a collaborative effort is a key consideration for implementing the interagency collaborative mechanisms we previously identified.
Absence of personal relationships. Agencies’ regional offices also reported that the absence of relationships with staff from other agencies hindered their collaboration. In contrast, agencies’ regional offices reported that having good working relationships with staff from other agencies helped their collaboration. For example, USDA’s regional office and the State of Alaska reported that their strong relationships with each other and other agencies in Alaska helped their collaboration, and that these relationships were enhanced by agency staff’s frequent communication through regular meetings. We previously found that having positive working relationships can bridge organizational cultures and build trust.
In their responses to the survey and in interviews, several agencies’ regional offices identified examples of inefficiencies that have occurred when they did not collaborate, including inefficient use of their limited resources. For example, officials from one EPA region we interviewed said that there have been years when EPA staff spent time developing a project only to learn that USDA had already funded the same project. The officials stated that this inefficiency could have been avoided if they had been communicating with their USDA counterparts about the projects that each agency was considering to fund. Also, in two states, EPA’s regional office reported that EPA and USDA may be missing opportunities to leverage funding for individual projects by not sharing information about projects.
In all six states, nearly all of the federal agency regional offices responded that it would be beneficial to increase their collaboration. Specifically, more than 90 percent of the federal agency respondents identified at least one collaborative mechanism that would be beneficial for them to begin using with another agency. The specific mechanisms that the agencies identified appeared to relate to the amount of collaboration in which they had already engaged. For example, agency regional offices that reported not having collaborated with another agency most frequently said that it would be beneficial to begin having informal communications with their counterparts in other agencies and to start sharing project-specific documents such as preliminary engineering reports. Alternatively, agency regional offices that reported having collaborated with another agency most frequently responded that it would be beneficial to begin using a memorandum of understanding as an additional mechanism for collaborating where they had not already done so. In the tribal infrastructure task force’s 2013 memorandum of understanding, the member agencies—IHS, EPA, USDA, HUD, and Reclamation—agreed that they are expected to collaborate at the regional level to provide safe drinking water and basic sanitation for tribes and to more efficiently leverage program funds. By directing their regional offices to identify and pursue additional mechanisms to increase their collaboration, the task force member agencies would have better assurance that their regional offices are efficiently leveraging limited program funds and following through on the commitment to collaborate.
Conclusions
Identifying and addressing drinking water and wastewater infrastructure needs in Indian country is a difficult undertaking. IHS dedicates a significant effort each year to working with tribes to identify their existing drinking water and wastewater infrastructure needs. However, one of IHS’s systems—HITS—may be missing tens of thousands of eligible Indian homes, an unknown number of which may have existing sanitation deficiencies. Additionally, some homes’ deficiency levels in HITS are inaccurate. By implementing a targeted, resource-efficient method to identify additional eligible Indian homes that may have existing sanitation deficiencies to include in HITS, IHS could have better assurance that it has more complete information to help improve its estimate of the number of eligible Indian homes that may need sanitation facilities assistance. Also, IHS officials said that improving the system’s accuracy would be beneficial. By implementing a mechanism to indicate in HITS whether each home with a deficiency level of 0 has been assessed, IHS could also more efficiently take steps to address the deficiencies of the homes contained in HITS.
IHS and USDA funded some projects to address the most severe sanitation deficiencies, but residents of many Indian homes remain without safe drinking water or wastewater disposal as the agencies also prioritized and funded projects that addressed other needs. We recognize that IHS faces trade-offs when selecting tribal infrastructure projects to fund. By reassessing the point distribution across the SDS scoring factors as part of IHS’s program guidelines update, in light of trade-offs between funding projects that address the most severe sanitation deficiencies and projects that meet other needs, IHS may have better assurance that its projects address the most severe sanitation deficiencies in Indian communities. Also, by implementing a scoring factor for the Native American program similar to the one in its Colonias program—that is, one that awards points for proposed projects that address health risks from a lack of access to safe drinking water and wastewater disposal—USDA would have more assurance that it is evaluating project applications consistently and funding projects to address the most severe sanitation deficiencies in Indian communities, consistent with the program’s goal.
USDA has provided thousands of Alaska Natives with safe drinking water and wastewater infrastructure through its Rural Alaska Village Grant program. However, USDA awarded some grants to recipients not authorized by statute. By ensuring that all Rural Alaska Village grants are awarded only to recipients authorized by statute, USDA will have assurance that it is complying with the law. If USDA wants to award Rural Alaska Village grants to municipalities and Alaska Native villages, it should seek authority to do so as it did to award these grants to the Alaska Native Tribal Health Consortium. Also, until USDA amends the Rural Alaska Village Grant program regulations to be consistent with USDA’s authority, the agency’s regulations will continue to recognize recipients not authorized by statute.
The five agencies that participate in the national tribal infrastructure task force have committed to working together at the national and regional levels to increase tribes’ access to safe drinking water and basic sanitation. In our previous work, we have found that achieving important national outcomes, such as providing access to safe drinking water and wastewater disposal, often requires collaborative efforts by a number of programs across the federal government. At the national level, the task force has not acted on most of the options it previously identified to improve member agencies’ collaboration. By reviewing the 2011 task force report and identifying and implementing additional actions to help increase their collaboration, the task force member agencies could improve their ability to leverage limited program funds. At the regional level, we found that the task force member agencies had not fulfilled their commitment to collaborate in all of the six states we reviewed. Responses to our survey also indicated that there is unrealized potential for the task force member agencies’ regional offices to increase the extent of their collaboration. By directing their regional offices to identify and pursue additional mechanisms to increase their collaboration, the task force member agencies would have better assurance that their regional offices are leveraging limited program funds and following through on their commitment to collaborate.
Recommendations for Executive Action
We are making 16 recommendations—two to IHS to improve information in HITS; one each to IHS and USDA to review their project selection processes; two to USDA to address issues with its Rural Alaska Village Grant program; and two each to IHS, USDA, EPA, HUD, and Reclamation to increase collaboration at the national and regional levels.
The Director of IHS should implement a targeted, resource-efficient method to identify additional eligible Indian homes that may have existing deficiencies to include in HITS. (Recommendation 1)
The Director of IHS should implement a mechanism to indicate in HITS whether each home with a deficiency level of 0 has been assessed. (Recommendation 2)
The Director of IHS should reassess the point distribution across the SDS scoring factors as part of its program guidelines update, in light of trade-offs between funding projects that address the most severe sanitation deficiencies and projects that meet other needs. (Recommendation 3)
The Assistant to the Secretary of Agriculture for Rural Development should implement a scoring factor that awards points for proposed Native American program grant projects that address health risks from a lack of access to safe drinking water and wastewater disposal, as it does with the Colonias grant program. (Recommendation 4)
The Assistant to the Secretary of Agriculture for Rural Development should ensure that all Rural Alaska Village grants are awarded only to recipients authorized by law or seek authority to award grants to municipalities and Alaska Native villages. (Recommendation 5)
The Assistant to the Secretary of Agriculture for Rural Development should amend the Rural Alaska Village Grant program regulations so that they are consistent with USDA’s authority. (Recommendation 6)
The Director of IHS, in cooperation with other members of the tribal infrastructure task force, should review the 2011 task force report and identify and implement additional actions to help increase the task force’s collaboration at the national level. (Recommendation 7)
The Administrator of EPA, in cooperation with other members of the tribal infrastructure task force, should review the 2011 task force report and identify and implement additional actions to help increase the task force’s collaboration at the national level. (Recommendation 8)
The Assistant to the Secretary of Agriculture for Rural Development, in cooperation with other members of the tribal infrastructure task force, should review the 2011 task force report and identify and implement additional actions to help increase the task force’s collaboration at the national level. (Recommendation 9)
The Deputy Assistant Secretary of the Department of Housing and Urban Development’s Office of Native American Programs, in cooperation with other members of the tribal infrastructure task force, should review the 2011 task force report and identify and implement additional actions to help increase the task force’s collaboration at the national level. (Recommendation 10)
The Commissioner of Reclamation, in cooperation with other members of the tribal infrastructure task force, should review the 2011 task force report and identify and implement additional actions to help increase the task force’s collaboration at the national level. (Recommendation 11)
The Director of IHS, in cooperation with other members of the tribal infrastructure task force, should direct IHS area offices to identify and pursue additional mechanisms to increase their collaboration. (Recommendation 12)
The Assistant to the Secretary of Agriculture for Rural Development, in cooperation with other members of the tribal infrastructure task force, should direct USDA state offices to identify and pursue additional mechanisms to increase their collaboration. (Recommendation 13)
The Administrator of EPA, in cooperation with other members of the tribal infrastructure task force, should direct EPA regional offices to identify and pursue additional mechanisms to increase their collaboration. (Recommendation 14)
The Deputy Assistant Secretary of the Department of Housing and Urban Development’s Office of Native American Programs, in cooperation with other members of the tribal infrastructure task force, should direct HUD regional offices to identify and pursue additional mechanisms to increase their collaboration. (Recommendation 15)
The Commissioner of Reclamation, in cooperation with other members of the tribal infrastructure task force, should direct Reclamation regional offices to identify and pursue additional mechanisms to increase their collaboration. (Recommendation 16)
Agency Comments and Our Evaluation
We provided a draft of this report for review and comment to the Department of Health and Human Services (for IHS), HUD, the Department of the Interior (for Reclamation), EPA, USDA, the Department of Defense (for the Corps), and the Department of Commerce (for EDA). Of the five agencies to which we directed recommendations, three—Health and Human Services, HUD, and Interior—agreed with the recommendations directed to them. The fourth agency, EPA, agreed with one of the recommendations and agreed with the intent of the second recommendation but proposed revised language, as discussed below. The Acting Director of Grants Evaluation for HUD’s Office of Native American Programs provided comments by e-mail, and Health and Human Services, Interior, and EPA provided written comments that are reproduced in appendixes V, VI, and VII, respectively. The fifth agency to which we directed recommendations, USDA, disagreed with the two recommendations regarding the Rural Alaska Village Grant program and neither agreed nor disagreed with the other three recommendations directed to it, although the agency proposed alternative language for two of these recommendations in its written comments, reproduced in appendix VIII. Of the two agencies to which we did not direct recommendations, Defense provided a letter, reproduced in appendix IX, in which it indicated the agency had no comments on the report, and Commerce’s Audit Liaison stated in an e-mail that Commerce would not send a formal comment letter. In addition, Health and Human Services, USDA, and EDA (for Commerce) provided technical comments, which we incorporated in the report as appropriate.
In its written comments, EPA requested that we revise the language of the recommendation that the members of the tribal infrastructure task force direct their regional, state, or area offices to identify and pursue additional mechanisms to increase their collaboration. EPA stated that it agreed with the intent of the recommendation but that it was concerned that, as worded, the recommendation may not achieve the intended goal. Instead, EPA stated that it can accomplish increased regional collaboration through multiple avenues and, as such, provided revised language that would remove reference to its regional offices taking the recommended action. We encourage EPA to take advantage of increasing regional collaboration through all avenues it sees fit. However, because EPA’s regional offices are the entities that collaborate with other agencies in the various regions, we continue to believe it is important for these offices to participate in identifying and implementing the means for increasing collaboration in their respective regions. As a result, we did not modify the recommendation language in response to EPA’s comment.
In its written comments, USDA stated it disagreed with our statements concerning the Rural Alaska Village Grant program and asked that we remove the two corresponding recommendations from our report. Specifically, USDA stated that our recommendations are unnecessary because the agency is operating within its authorities. USDA stated that it believes providing grants directly to parties other than the state—including Alaska Native villages and municipalities—under the 2011 memorandum of agreement is consistent with the purpose of section 306D of the Consolidated Farm and Rural Development Act and appropriations made for the program. As we state in the report, we agree that the State of Alaska can choose to make subgrants once it receives the Rural Alaska Village grant. We also state in the report that we did not see any evidence of grants being used for other than their intended purposes during the course of our review. However, the language of section 306D only authorizes USDA to award grants to the State of Alaska and not directly to other entities. Therefore, we believe that our recommendations are necessary. If USDA wants to make Rural Alaska Village grants to municipalities and Alaska Native villages, it should seek authority to do so as it did to award such grants to the Alaska Native Tribal Health Consortium.
Regarding our fourth recommendation that USDA implement a scoring factor that awards points for proposed Native American program projects that address health risks, USDA stated that it would like clarification as to what form of scoring factor would be acceptable to address this recommendation. USDA stated that it would prefer to use its discretionary points under the program’s existing regulations to award additional points to give a higher priority to projects that address a lack of access to safe drinking water and wastewater disposal, and that the agency could implement this change at the start of fiscal year 2019 or sooner. In contrast, USDA stated that changing the program’s regulations to implement the scoring factor could take 18 months or longer. USDA also stated that this approach would only have a programmatic effect in fiscal years when demand for Native American program grant funds exceeds the available funding. Our intent for the recommendation as written is to provide USDA with the flexibility to best determine how to implement it. If USDA has determined that using its discretionary points under the program’s existing regulations gives greater priority to addressing health risks faced by Native American communities, and that such an approach is consistent with applicable law, such an approach could meet the intent of our recommendation.
USDA also requested in its written comments that we modify the language of the ninth recommendation aimed at increasing collaboration at the national level by removing reference to increasing national collaboration and that we modify the thirteenth recommendation aimed at increasing regional collaboration by removing reference to the agency’s state offices and regional level collaboration. USDA did not provide a clear rationale for its requested change for either recommendation. We continue to believe that implementing these recommendations, as worded, would help improve collaboration at the national and regional levels. Therefore, we did not modify the language in response to USDA’s comments.
In several places in its written comments, USDA stated that our draft either omitted information or contained inaccurate information and requested that we make modifications. Specifically, USDA stated that we omitted statutory language for the Native American program in a few places in the report. In response, we added additional language from and about the Native American program’s authorizing statute in several places. USDA also stated the report is missing information about the scope of some of its programs, including its Technical Assistance and Training program. In response, we added more information about this program, including obligations made to non-profit organizations that work on behalf of tribes. Further, USDA stated that we did not accurately characterize certain activities that USDA conducts under some of its programs, including identifying tribal needs and conducting operations and maintenance. In response, we modified language to reflect additional information about how USDA identifies tribal needs and to indicate that the Native American program is not authorized to fund operations and maintenance. Regarding the Rural Alaska Village Grant program, USDA stated that we did not accurately represent information shared by a USDA official and information about the number of grants made to the Consortium. We revised language attributed to the official and clarified information about the number of grants awarded based on additional information that USDA provided by e-mail after submitting its written comments.
In other cases where USDA requested revisions to the draft in its written comments, we did not make suggested changes because they did not align with the scope of our review. Specifically, in addition to its Technical Assistance and Training program, USDA asked that we add information about tribal obligations under its Solid Waste Management program. Since federal agency efforts to fund solid waste management projects are outside the scope of this review, we did not make this revision. In addition, USDA requested that we limit our discussion of Rural Alaska Village Grant awards to fiscal year 2011 and forward. We did not make this change because USDA’s grants to municipalities and Native villages prior to 2011 are directly relevant to our findings and are within the scope of this review. Finally, USDA asked that we edit our description of the findings of a 2010 report to Congress by citing a different report instead. We did not make this change because the original report contained relevant information for our findings.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of Agriculture, the Secretary of Commerce, the Secretary of Defense, the Secretary of Health and Human Services, the Secretary of Housing and Urban Development, the Secretary of the Interior, the Administrator of the Environmental Protection Agency, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact Anne-Marie Fennell at (202) 512-3841 or [email protected] or J. Alfredo Gómez at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix X.
Appendix I: Objectives, Scope, and Methodology
The objectives of our review were to examine the extent to which the seven federal agencies, as applicable, (1) identified Indian tribes’ drinking water and wastewater infrastructure needs; (2) funded tribal drinking water and wastewater infrastructure projects, including projects to address the most severe sanitation deficiencies; and (3) collaborated to meet Indian tribes’ drinking water and wastewater infrastructure needs.
To address these objectives, we reviewed our previous reports, other agency reports, and agency obligations to identify the federal agencies that provide financial or other assistance to Indian tribes for drinking water and wastewater infrastructure. We identified seven agencies, as shown in table 4. We identified the Indian Health Service (IHS), Environmental Protection Agency (EPA), and U.S. Department of Agriculture (USDA) as federal agencies that have drinking water and wastewater infrastructure programs specifically targeted to provide financial assistance for planning and construction to address Indian tribes’ needs. According to IHS documentation, such needs arise from a sanitation deficiency in existing drinking water or wastewater infrastructure (or lack thereof) that can negatively affect public health. In addition, the Department of Housing and Urban Development (HUD), the Department of the Interior’s Bureau of Reclamation, the Department of Commerce’s Economic Development Administration (EDA), and the U.S. Army Corps of Engineers administer programs that may assist tribes with drinking water and wastewater infrastructure planning and construction. The types of assistance these agencies provide vary by program, and each program has its own eligibility requirements and authorities.
To determine the extent to which these federal agencies identified Indian tribes’ drinking water and wastewater infrastructure needs, we identified requirements for IHS and EPA to collect and report information on needs, but we did not identify such requirements for the other agencies. We reviewed IHS’s project-level tribal drinking water and wastewater infrastructure needs data from the Sanitation Deficiency System (SDS) for fiscal year 2016, the most recent year of data available at the time of our review. The SDS contains information about proposed drinking water and wastewater infrastructure projects, including each project’s estimated cost. IHS policy directs area staff to invite all federally recognized tribes to identify potential projects each year. Area staff then work with interested tribes to develop projects and enter project information into standard fields in the SDS. As of the end of fiscal year 2016, the SDS included more than 2,000 projects for 373 tribes. We also reviewed IHS’s most recent reports describing tribal drinking water and wastewater needs. In addition, we reviewed information about tribal public drinking water systems reported in EPA’s 2013 Drinking Water Infrastructure Needs Survey and Assessment report. EPA assesses and reports on the nation’s public water systems’ capital improvement needs every 4 years, including needs of tribally owned or operated drinking water systems. For its 2013 report, EPA assessed tribal water system needs by administering a survey to a statistical sample of 306 tribal water systems out of 956 identified tribal public drinking water systems.
We assessed the reliability of SDS project-level needs data and information from EPA’s 2013 Drinking Water Infrastructure Needs Survey and Assessment report by reviewing our previous related work regarding the use of these data and documentation from IHS and EPA. We also interviewed IHS and EPA officials involved with identifying tribal water needs, from headquarters and from all 12 IHS areas and 9 EPA regions that administered the drinking water and clean water set-aside programs, and discussed the data and any limitations. We tested the data for accuracy and completeness by identifying any duplicate, missing, or invalid records and cross-referencing with relevant datasets. We determined that IHS’s SDS project-level needs data and information from EPA’s 2013 report were sufficiently reliable to provide descriptive information on tribes’ needs for drinking water and wastewater infrastructure projects for this report.
Further, we reviewed documentation on the Home Inventory Tracking System (HITS)—IHS’s database containing home-specific information that the agency also uses in administering the Sanitation Facilities Construction program. The information in HITS includes each home’s geographic location and individual sanitation deficiency, and IHS officials said in February 2018 that the system contained a total of 405,986 homes. We also interviewed IHS headquarters and area officials about this system’s contents, uses, and limitations, and we compared this information to the agency’s implementation plan and other documentation for HITS. We identified issues with the information contained in the system related to its completeness (whether it contains the correct number of homes in light of its purpose) and related to the accuracy of homes identified as having no deficiency, as we discuss in the report. These issues were sufficient for us to determine that the set of homes in the system was incomplete and that deficiency level information was not accurate for all homes in the system. As a result, we did not assess the reliability of other information in HITS that was not relevant to our review. We also interviewed officials from the other five agencies regarding any efforts to collect information on tribal drinking water and wastewater infrastructure needs.
To determine the extent to which the agencies funded tribal drinking water and wastewater infrastructure projects, we analyzed data from the seven agencies administering programs that provide assistance to tribes for drinking water and wastewater infrastructure—IHS, EPA, USDA, HUD, Reclamation, EDA, and Corps. Specifically, we obtained and analyzed obligations data for drinking water and wastewater projects under programs that are specifically for or available to tribes. Generally, we reviewed each agency’s obligations data for fiscal years 2012 through 2016, the most recent 5 years of data available at the time of our review. Corps provided us with information on obligations for projects that involved tribal drinking water or wastewater infrastructure, but none of these obligations were in fiscal years 2012 through 2016. We assessed the reliability of the other agencies’ data by reviewing our previous related work regarding the use of these data and any available documentation from each agency; interviewing knowledgeable agency officials involved with collecting or analyzing these data; and testing data for accuracy and completeness by identifying any duplicate, missing, or invalid records. We present more details about each agency’s data, any limitations, and how we addressed those limitations below. On the basis of these efforts, we determined that the data obtained from these agencies were sufficiently reliable for our descriptive purposes unless otherwise noted below.
IHS. IHS provided us with project-level obligations data from fiscal years 2012 through 2016 for tribal drinking water and wastewater infrastructure projects from its Project Data System. In reviewing these data, we found data reliability issues that posed challenges to accurately reporting IHS’s project obligations separate from other agencies’ contributions to projects, which IHS also tracks in the system. We determined that the project-level obligations data from the Project Data System were not sufficiently reliable for the purposes of this objective. However, we determined that using IHS’s information on allocations to areas for the same time frame would introduce fewer limitations to our reporting. IHS provided us with information from fiscal years 2012 through 2016 on allocations to each of its 12 areas by Sanitation Facilities Program activity (i.e., sanitation deficiencies, new housing, and emergency and special projects). IHS officials stated that the IHS Director of the Division of Sanitation Facilities Construction determines the area allocations amounts annually, and that IHS obligated all of its area allocations each fiscal year. IHS did not separate the area allocations information by drinking water, wastewater, or solid waste projects; therefore, we report total obligations with solid waste projects included.
EPA. EPA provided us with project-level obligations data from fiscal years 2012 through 2016 from each of its three tribal-specific programs listed in table 4. EPA uses its Tribal Direct Implementation Nexus system to track project obligations for the Clean Water Indian Set-Aside and Drinking Water Infrastructure Grants Tribal Set-Aside programs, but the agency relies on the State of Alaska to provide similar project-level information for its Alaska Native Villages and Rural Community Water Grant program. In reviewing EPA’s data, we found several duplicate project records. We confirmed the issue with EPA officials and deleted those duplicate records to accurately aggregate EPA’s obligations by fiscal year for our report.
USDA. USDA provided us with grant and loan obligations data from fiscal years 2012 through 2016 for all of its programs specifically for or available to tribes from its Community Program Application Processing system. First, we removed solid waste and landfill projects that were indicated as such in the project name. To determine the project obligations for programs specifically for tribes, we included all obligations from USDA’s Native American and Rural Alaska Village Grant programs. USDA also awarded grants and loans to tribes or non-profit organizations working on behalf of tribes from non-tribal specific programs such as from its Water and Waste Disposal program as well as the Section 306C Colonias, Emergency Community Water Assistance Grant, Predevelopment Planning Grants, Special Evaluation Assistance for Rural Communities and Households, and Technical Assistance and Training programs. To determine the project obligations for those programs, we included projects that had an applicant or customer type as a tribe or tribal entity (e.g., an organization working on behalf of a tribe or tribes such as tribal health consortia or tribal utility authorities) and projects that served a population of at least 50 percent tribal users. For these awards, we included the full amount of the award regardless of the percent of tribal users served.
HUD. HUD provided us with project-level obligations data from fiscal years 2012 through 2016 for its Indian Community Development Block Grant program from its Performance Tracking Database. We worked with HUD officials to identify projects that included drinking water and wastewater infrastructure and to identify the amount of the obligations used for those purposes to determine HUD’s overall fiscal year project obligations for tribal water infrastructure.
Reclamation. Reclamation provided us with project-level obligations data from fiscal years 2012 through 2016 for the tribal portions of authorized water system projects, including projects authorized by enacted Indian water rights settlements. For the Indian water rights settlement project obligations, Reclamation provided both mandatory and discretionary amounts. We included both rural water system projects and Indian water rights settlements projects in reporting Reclamation’s overall fiscal year obligations.
EDA. EDA provided us with project-level obligations data from fiscal years 2012 through 2016 for tribal projects funded by its Public Works, Economic Adjustment Assistance, and Planning programs from its Operations Planning and Control System. To determine whether the EDA projects included drinking water or wastewater infrastructure, we reviewed each project’s description or scope of work for mention of a drinking water or wastewater infrastructure component. If we determined that the project included water infrastructure, we included the entire project’s obligation amount for each fiscal year we report.
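As a simplified illustration of the duplicate, missing, and invalid record checks described in the data reliability discussion above, the following sketch applies such tests to a small set of made-up obligation records. The column names, values, and validity rules are hypothetical, not any agency’s actual data schema.

```python
import pandas as pd

# Hypothetical obligation records; the fields and values are placeholders
# used only to demonstrate the kinds of checks described in the text.
records = pd.DataFrame({
    "project_id": ["A-1", "A-2", "A-2", "B-7", None],
    "fiscal_year": [2014, 2015, 2015, 2016, 2016],
    "obligation": [250_000, 90_000, 90_000, -5, 40_000],
})

# Duplicate records: rows identical across all fields.
duplicates = records[records.duplicated(keep=False)]

# Missing records: rows lacking a project identifier.
missing = records[records["project_id"].isna()]

# Invalid records: obligation amounts outside a plausible range.
invalid = records[records["obligation"] <= 0]

print(f"{len(duplicates)} duplicate, {len(missing)} missing, "
      f"{len(invalid)} invalid of {len(records)} records")
```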
In addition, to determine the extent to which agencies’ funding addressed the most severe sanitation deficiencies, we identified programs that have documented goals in regulation and policy to fund projects that meet these needs, which the programs identify as the absence of safe drinking water or wastewater disposal facilities. These selected programs included IHS’s Sanitation Facilities Construction program, EPA’s clean water set-aside program, and USDA’s Native American program. For these programs, we compared the number of funded projects to address the most severe sanitation deficiencies with the number of funded projects that met other needs for fiscal year 2016. Specifically, for IHS and EPA, we calculated the percentage of projects for each deficiency level that the agencies and other entities selected to fund from the fiscal year 2016 SDS list. For USDA, we reviewed the list of Native American program project obligations in fiscal year 2016 and determined the number of projects where USDA reported the purpose as new, replacement, renovation, or expansion. We also reviewed documentation of the agencies’ project identification and selection methods to determine whether these methods aligned with stated goals. We interviewed IHS and EPA officials from headquarters and all area and regional offices that administer these programs, and USDA officials from headquarters and six state-level offices (see below for state selection information), regarding their administration of these programs. Additionally, we analyzed IHS’s data from the SDS from fiscal years 2005 through 2016 to identify projects that remained unfunded and that were in the SDS for more than 5 years. We did not review the extent to which EPA’s drinking water set-aside program addressed the most severe sanitation deficiencies because EPA regions implement the program using a variety of different processes.
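The two calculations described above (the share of projects funded at each deficiency level and the identification of long-unfunded projects) could be sketched as follows, assuming hypothetical SDS field names and invented records:

```python
import pandas as pd

# Hypothetical slice of the fiscal year 2016 SDS project list; real SDS
# fields and deficiency-level codes may differ from this illustration.
sds = pd.DataFrame({
    "project_id":       ["P1", "P2", "P3", "P4", "P5"],
    "deficiency_level": [5, 5, 3, 2, 4],  # higher values assumed more severe
    "funded_fy2016":    [True, False, True, True, False],
    "first_listed_fy":  [2009, 2006, 2015, 2014, 2010],
})

# Percentage of listed projects funded at each deficiency level.
funded_pct = (sds.groupby("deficiency_level")["funded_fy2016"]
                 .mean()
                 .mul(100)
                 .round(1))
print(funded_pct)

# Projects that remained unfunded and were in the SDS for more than 5 years.
stale = sds[(~sds["funded_fy2016"]) & (2016 - sds["first_listed_fy"] > 5)]
print(stale["project_id"].tolist())  # ["P2", "P5"] in this illustration
```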
In evaluating the extent to which federal agencies have provided funding for tribal drinking water and wastewater infrastructure projects, we identified issues with USDA’s Rural Alaska Village Grant program. We reviewed obligations data in light of the program’s authorizing statute, implementing regulations, and relevant provisions in USDA appropriations acts. USDA provided us with the Rural Alaska Village Grant program’s award amounts for fiscal years 1997 through 2016, and we determined whether the grant recipients were eligible or ineligible at the time of the award. We interviewed agency officials who manage the program, as well as officials from USDA’s Office of the General Counsel.
To determine the extent to which the federal agencies collaborated to meet tribal water needs, we reviewed documentation of national-level collaboration, including federal program and interagency documents, such as national-level memorandums of understanding and interagency agreements. We interviewed headquarters officials from the seven agencies about their interagency collaboration. We compared the agencies’ actions to the key features of interagency collaboration that we have previously identified. We reviewed agencies’ collaboration at the regional level by surveying the seven agencies about their joint actions on activities related to tribal drinking water and wastewater in six states—Alaska, Arizona, California, New York, Oklahoma, and South Dakota—and by conducting a network analysis using the survey responses. We selected agency regional offices within these six states as the unit of analysis because the federal agencies organize their field structures differently, with some using region, district, area, or state offices to work with tribes—we refer to all of these office types as regional offices. We selected the nonprobability sample of six states to include a large percentage of the number of federally recognized tribes, to obtain a range in the total federal obligations to tribes and identified needs of tribes in the SDS, and for geographic diversity. The sample of states is not generalizable, and the results of our work do not apply to all states where Indian tribes are located. However, reviewing federal agency collaboration in these states provides illustrative examples of interagency collaboration within the six selected states, which include about 70 percent of the 573 federally recognized tribes. We compared the agencies’ reported collaboration with a national-level memorandum of understanding that contained commitments for collaborating at the regional level. For a detailed description of our survey methodology and the analysis of our results, see appendix II.
We also interviewed federal agency and State of Alaska officials to discuss the extent to which their drinking water and wastewater assistance programs collaborate with other agencies to meet tribal needs in the six selected states. We interviewed, either in person or by telephone, officials from the eight IHS areas, five EPA regions, and six USDA state offices that work with tribes and other agencies in the six states. We conducted site visits from February through April 2017 to three of the six states—Alaska, Arizona, and Oklahoma. During these visits, we met with tribal officials and staff and federal agency officials, and we visited tribal water infrastructure project sites. We selected these states for site visits based on geographic diversity and to obtain a range in the amount of tribal water infrastructure needs identified in the SDS. We met with or interviewed by telephone officials from 22 Indian tribes and representatives from 8 intertribal organizations that represent and work with tribes on water infrastructure issues to obtain their views about the water and wastewater infrastructure assistance that they receive from federal agencies. We judgmentally selected these tribes and organizations to obtain a range in their geographic locations and the amount and variety of federal drinking water and wastewater infrastructure assistance they have received. Our findings are not generalizable to all tribes but provide illustrative examples of input provided by tribal officials.
We conducted this performance audit from August 2016 to May 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: GAO Survey of Federal Agency Collaboration on Tribal Water Infrastructure Projects in Six Selected States
This appendix describes how we selected the sample and administered the survey, designed the survey questionnaire, and conducted the network analysis for our survey on interagency collaboration regarding tribal drinking water and wastewater infrastructure projects.
Sample Selection and Survey Administration
To determine the extent to which the selected federal agencies have collaborated to meet tribal water needs, we surveyed officials at seven federal agencies: Indian Health Service (IHS), Environmental Protection Agency (EPA), U.S. Department of Agriculture (USDA), Department of Housing and Urban Development (HUD), Economic Development Administration (EDA), Bureau of Reclamation, and the U.S. Army Corps of Engineers. Specifically, we surveyed agency officials in six states: Alaska, Arizona, California, New York, Oklahoma, and South Dakota. Appendix I describes how we selected these agencies and states. The results of this survey are not generalizable beyond these agencies in these states.
We reviewed maps of each agency’s regional or state offices and identified and confirmed the offices that work with tribes and other agencies in the six selected states. If one state included multiple regions from the same agency, we administered the survey to officials in all relevant regional offices. In addition, if one agency’s region covered more than one of the selected states, we administered a survey to the agency’s regional office for each state. The federal agencies and regional offices we included in our survey were:
Corps divisions: Great Lakes & Ohio River, Northwestern, Pacific Ocean, South Pacific, Southwestern;
EDA regions: Austin, Denver, Philadelphia, Seattle;
EPA regions: 2, 6, 8, 9, 10 (Alaska Operations Office);
HUD regions: Alaska, Eastern Woodlands, Northern Plains, Southern Plains;
IHS areas: Alaska, California, Great Plains, Nashville, Navajo, Oklahoma City, Phoenix, Tucson;
Reclamation regions: Great Plains, Lower Colorado, Mid-Pacific; and
USDA state offices: Alaska, Arizona, California, New York, Oklahoma, South Dakota.
In Alaska, we also included the Alaska Department of Environmental Conservation as a respondent because the state provides a 25 percent match for two federal water infrastructure programs. We did not include other state agencies because they do not provide a similar match. We also included the Alaska Native Tribal Health Consortium because it administers IHS’s Sanitation Facilities Construction program in Alaska.
The practical difficulties of conducting any survey may introduce errors, commonly referred to as non-sampling errors. For example, respondents may have difficulty interpreting a question, they may have limited information to respond to a question, or officials from different agencies may have different recollections regarding the extent of collaboration on a particular project. We sought to minimize the impact of non-sampling error by conducting six pretests of the draft questionnaire with agency officials; five pretests were conducted by telephone and one pretest was conducted in person. We selected officials to cover a range of agencies and locations. During these pretests, we sought to determine whether (1) the questions were clear and unambiguous, (2) terminology was used correctly, (3) the questionnaire did not place an undue burden on agency officials, (4) the information could feasibly be obtained, and (5) the survey was comprehensive and unbiased. We modified the questionnaire in response to these pretests. To further minimize the impact of non-sampling error, we conducted a sensitivity analysis.
We customized the questionnaire for each agency regional office so that we asked each office to respond about its collaboration only with the other agencies located in its state. We e-mailed these questionnaires to 46 respondents from May 15 through May 17, 2017, and conducted follow-up as necessary. We received a 100 percent response rate.
Survey Questionnaire Design
In the survey, we asked each agency regional office whether it had jointly conducted activities related to tribal drinking water or wastewater projects during the past 3 years with each of the other agencies’ regional offices within the same state. If an agency regional office responded “yes,” we then provided a list of tribal drinking water and wastewater activities and asked the agency regional office if it had jointly conducted any of the listed activities related to tribal drinking water and wastewater infrastructure projects in collaboration with the other agency. The activities included: identifying infrastructure needs, communicating information to tribes about programs that fund projects, planning and designing proposed projects, evaluating proposed projects according to eligibility and scoring criteria, selecting projects to fund, constructing projects, providing technical assistance for operating and maintaining water infrastructure, and negotiating or implementing Indian water rights settlements. We developed the list of activities based on our initial interviews and pretests with agency officials.
We next provided a list of collaborative mechanisms. For each of these collaborative mechanisms, we asked the agency regional office if it had used the mechanism when jointly conducting activities in collaboration with the other agency related to tribal drinking water and wastewater infrastructure projects within the same state during the past 3 years. The mechanisms included: state-, regional-, or project-level memorandum of understanding or agreement; interagency agreement to transfer funding; working group, task force, or committee; consulting on project selection; sharing project documents; geographic co-location; shared database or other data sharing; conferences or forums; informal or ad hoc communication; and personnel detailing or sharing. If the agency regional office responded that it had not used one of the listed mechanisms, we asked if it would be beneficial to use that mechanism to collaborate in the future. We identified the list of mechanisms based on our prior work on interagency collaboration and pretests with agency officials. We also asked the agency regional office what factors, if any, helped it to collaborate with the other agency on tribal drinking water and wastewater infrastructure projects in the state and what factors, if any, hindered it from collaborating with the other agency. For both questions, we asked the agency regional office to consider agency policies and procedures, available resources, leadership, personalities, presence of written agreements, and accountability measures.
If an agency regional office responded “no” to the initial question of whether it had jointly conducted activities related to tribal drinking water or wastewater projects during the past 3 years with another agency’s regional office, we asked a shorter set of follow-up questions. We provided the list of collaborative mechanisms and asked if it would be beneficial for the agency regional office to use any of the listed mechanisms to collaborate with the other agency on activities related to tribal drinking water and wastewater infrastructure projects in the future in the state. We also asked the agency regional office to describe the factors, if any, that hindered its collaboration with the other agency.
Network Analysis
To quantify the extent of interagency collaboration during the past 3 years and the potential for future collaboration among the federal agencies we surveyed, we conducted a network analysis—a method of analyzing the patterns of interaction among multiple entities. Specifically, we aggregated the survey responses to our questions about drinking water and wastewater activities and collaborative mechanisms for each pair of agencies in all six states. We configured these aggregated data into networks representing the pattern of collaboration among the agencies. We then analyzed these networks to determine how extensively the agencies have collaborated and the extent to which additional future collaboration could be beneficial for them. We also analyzed these networks to assess how the pattern of collaboration varied by state. We describe the steps of our analysis and agency survey responses below.
Quantifying Collaboration between Pairs of Federal Agencies
To quantify the extent of collaboration among the federal agencies across the six states during the past 3 years, we aggregated the responses to our survey by agency pair. The seven federal agencies form 21 possible agency pairs. For each agency pair, we combined the first agency’s responses regarding its collaboration with the second agency and the second agency’s responses regarding its collaboration with the first agency. We aggregated the agency pair responses in this way for each of the three measures of collaboration for all six states (a short code sketch of this pairwise aggregation appears after the list below), specifically:
Drinking water and wastewater activities. We calculated the total number of instances in which each agency in a pair reported having worked on an activity with the other agency in that pair during the past 3 years (see column 2 in table 5). We examined this measure to identify the pairs of agencies that collaborated most and least extensively. For example, IHS and EPA reported the highest number of instances of jointly conducting tribal drinking water and wastewater activities across the six states. In contrast, EDA and IHS reported no such instances of collaboration.
Use of collaborative mechanisms. We calculated the total number of instances in which each agency in a pair reported having used a mechanism to collaborate with the other agency in that pair during the past 3 years (see column 3 in table 5). We examined this measure to identify the pairs of agencies that collaborated most and least extensively. The pattern of collaboration based on this measure is similar to the pattern based on drinking water and wastewater activities. For example, IHS and EPA also reported the highest number of instances of using specific collaborative mechanisms across the six states.
Potential future collaboration. We calculated the total number of instances in which each agency in a pair reported that it would be beneficial to use a mechanism to collaborate with the other agency in that pair in the future (see column 4 in table 5). We compared this measure to the number of mechanisms the agency pairs reported having used during the past 3 years. Each of the agency pairs reported that it would be beneficial to use additional collaborative mechanisms in the future, including those pairs that had reported not collaborating. For example, the agency pairs of EDA-IHS and EDA-Reclamation both reported no instances of using a mechanism to collaborate with each other and both reported multiple instances in which use of a collaborative mechanism would be beneficial in the future.
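The pairwise aggregation referenced above can be sketched as follows; the response counts are invented, and the dictionary stands in for the actual tallies of survey responses:

```python
from itertools import combinations

AGENCIES = ["IHS", "EPA", "USDA", "HUD", "EDA", "Reclamation", "Corps"]

# Hypothetical survey tallies: responses[(reporter, partner)] is the number
# of instances the reporting office said it worked with the partner agency.
responses = {("IHS", "EPA"): 20, ("EPA", "IHS"): 18, ("EPA", "USDA"): 7}

# Combine both directions of reporting for each of the 21 agency pairs.
pair_totals = {}
for a, b in combinations(AGENCIES, 2):
    pair_totals[(a, b)] = responses.get((a, b), 0) + responses.get((b, a), 0)

print(len(pair_totals))             # 21 possible pairs
print(pair_totals[("IHS", "EPA")])  # 38 in this illustration
```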
Quantifying the Potential for Increased Collaboration
To quantify the potential to increase collaboration among the federal agencies, we configured the agency pair data into two networks. The first network represented recent collaboration among the agencies—the instances in which agencies reported having used a mechanism to collaborate during the past 3 years (based on column 3 in table 5). The second network represented potential future collaboration among the agencies (based on the sum of columns 3 and 4 in table 5). As such, it captures the instances in which agencies reported having used a mechanism to collaborate during the past 3 years plus the instances in which they reported it would be beneficial to use an additional mechanism in the future.
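A minimal sketch of constructing these two weighted networks, using the networkx library and invented mechanism counts in place of the actual table 5 data:

```python
import networkx as nx

# Illustrative mechanism counts per agency pair (not the actual survey data):
# "recent" = mechanisms used during the past 3 years (column 3 of table 5);
# "beneficial" = additional mechanisms reported as beneficial (column 4).
pairs = {
    ("IHS", "EPA"):         {"recent": 30, "beneficial": 5},
    ("EDA", "Reclamation"): {"recent": 0,  "beneficial": 8},
}

recent_net = nx.Graph()
future_net = nx.Graph()
for (a, b), counts in pairs.items():
    recent_net.add_edge(a, b, weight=counts["recent"])
    # Potential future collaboration = recent use plus the additional
    # mechanisms the agencies reported would be beneficial.
    future_net.add_edge(a, b, weight=counts["recent"] + counts["beneficial"])
```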
Figure 3 shows a graphical illustration of these two networks. In this figure, the circles represent agencies and the lines represent collaboration between the agencies. Specifically, the darkness of the lines indicates the number of mechanisms used by the corresponding pair of agencies. The left side of figure 3 illustrates reported use of collaborative mechanisms during the past 3 years, and the right side of figure 3 illustrates potential future collaboration. The figure shows that overall collaboration would increase if the agencies began using the additional mechanisms that they reported would be beneficial.
We quantified the difference between these networks in two ways. First, we calculated the increase in overall collaboration that would occur if agencies began using the additional mechanisms that they reported would be beneficial. Based on this calculation, the number of instances of agencies using collaborative mechanisms would approximately triple. Specifically, agencies reported 403 instances of having used a specific mechanism to collaborate with another agency—this number would increase to 1,249 if agencies began using all of the identified mechanisms that they reported would be beneficial. This difference is shown in figure 3, in which the right side of the figure (potential future collaboration) has a greater number of darker lines connecting the agencies compared with the left side of the figure (recent collaboration).
Second, we measured how the relative amount of collaboration for each agency would change if the agencies began using additional mechanisms they reported would be beneficial. To do this, we aggregated the agency pair data for each of the agencies. For the network of recent collaboration, for example, we added (1) the number of instances that each agency reported using a collaborative mechanism with any of the other agencies and (2) the number of instances that any of the other agencies reported using a collaborative mechanism with the first agency. We performed a similar calculation using the agency pair data for the network of potential future collaboration. The analysis shows that the use of collaborative mechanisms during the past 3 years was primarily centered on three agencies (IHS, EPA, and USDA). If all of the agencies began using the additional mechanisms that they reported would be beneficial, however, collaboration would be distributed more evenly across the entire network of agencies. This difference is also shown in figure 3, in which agencies such as HUD, Reclamation, and Corps are connected to other agencies with dashed lines on the left side of the figure (representing less extensive recent collaboration), but with thick lines on the right side of the figure (representing more extensive potential future collaboration).
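The two summary measures described above (overall collaboration as the sum of edge weights and each agency's weighted degree) can be computed directly from such networks. The edge lists below are invented; in the actual survey data, the recent total was 403 instances and the potential total was 1,249.

```python
import networkx as nx

# Invented edge weights standing in for the survey's agency pair counts.
recent_edges = [("IHS", "EPA", 30), ("IHS", "USDA", 22), ("HUD", "EPA", 2)]
future_edges = [("IHS", "EPA", 35), ("IHS", "USDA", 30), ("HUD", "EPA", 12)]

recent_net = nx.Graph()
recent_net.add_weighted_edges_from(recent_edges)
future_net = nx.Graph()
future_net.add_weighted_edges_from(future_edges)

# Overall collaboration: the sum of edge weights across each network.
print(recent_net.size(weight="weight"), future_net.size(weight="weight"))

# Weighted degree per agency shows how evenly collaboration is spread.
print(dict(recent_net.degree(weight="weight")))
print(dict(future_net.degree(weight="weight")))
```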
Quantifying the Variation in Collaboration by State
To quantify the extent of variation in collaboration by state, we disaggregated the agency pair data reported in table 5 by each of the states for the three measures of collaboration we asked about in our survey. In particular, tables 6, 7, and 8 show the number of instances in which an agency reported collaborating on drinking water and wastewater infrastructure activities with another agency during the past 3 years (table 6), using collaborative mechanisms with another agency during the past 3 years (table 7), and collaborative mechanisms that would be beneficial to use with another agency in the future (table 8). The totals in the bottom rows of these tables show the extent of collaboration based on these measures by state. Specifically, tables 6 and 7 show that agencies worked together on activities and used collaborative mechanisms most extensively in Alaska and least extensively in New York and Oklahoma. Table 8 shows that agencies in New York and Oklahoma reported the greatest potential for using additional collaborative mechanisms. The totals in the far right columns of these tables show the extent of reported collaboration by activity (table 6), collaborative mechanism (table 7), and the extent of potential future collaboration by collaborative mechanism (table 8).
Appendix III: Federal Agency Obligations for Tribal Drinking Water and Wastewater Infrastructure Projects, Fiscal Years 2012 through 2016
Notes to the table of agency obligations: According to Environmental Protection Agency officials, obligations listed may not match annual appropriations because the agency may have de-obligated and re-obligated any unexpended obligations to other projects. We determined that the U.S. Department of Agriculture awarded a grant or loan from its non-tribal-specific programs for a tribal drinking water or wastewater infrastructure project if the recipient was a tribe or tribal entity (for example, an organization working on behalf of a tribe or tribes, such as tribal health consortia or tribal utility authorities) and if the project was to serve a population of at least 50 percent American Indian or Alaska Native. The Economic Development Administration obligated approximately $34,000 for one project in fiscal year 2012, which is not reflected in the table due to rounding. We determined that the Economic Development Administration awarded a grant for a tribal drinking water or wastewater infrastructure project if the project’s description or scope of work mentioned a drinking water or wastewater infrastructure component. Obligations are combined from three programs: Public Works, Economic Adjustment Assistance, and Planning.
Appendix IV: Examples of Tribal Water Infrastructure Projects We Visited
This appendix contains summaries and photographs of selected tribal drinking water and wastewater infrastructure projects we visited from February through April 2017 in Alaska, Arizona, and Oklahoma.
Portable Alternative Sanitation System Pilot Project, Native Village of Kivalina, Alaska
The Native Village of Kivalina, located on a barrier island above the Arctic Circle, is one of approximately 30 communities in Alaska where residents do not have access to safe drinking water and wastewater disposal facilities in their homes. Kivalina, a community of 469 residents, has a community washeteria with washing machines, dryers, and drinking water available for purchase. As in many Alaska Native villages, a harsh winter climate, limited revenue, and isolation create challenges for installing and operating water infrastructure. Erosion due to diminishing sea ice and other factors threatens Kivalina, and the community is considering relocation. As such, infrastructure improvements are limited to small projects consisting of moveable, low-water use infrastructure to provide interim sanitation improvements. In 2015, the Alaska Native Tribal Health Consortium installed a pilot sanitation system in nine homes. This system is called the Portable Alternative Sanitation System and consists of a bathroom sink, rainwater catchment, in-home water treatment, and a separating toilet, where liquid waste is collected separately from solid waste. According to a Consortium report, the system is a low-cost alternative to traditional piped infrastructure. The total cost was $633,000 to design, install, and monitor the system, with the Indian Health Service (IHS) and the Consortium contributing to the project. The Consortium recommended expanding the pilot system to the rest of Kivalina, and a Consortium official said it is working with IHS to test the system in several homes in three other unserved communities in Alaska.
Village of Shungopavi Sewer Line Q & Dump Stations Construction Project, Hopi Tribe, Arizona
As of 2015, more than 30 percent of the nearly 80 homes in the Hopi Village of Shungopavi did not have adequate wastewater disposal. The Sewer Line Q and Dump Stations construction project included installing a sewer main to connect nine homes to sewer service. Previously, some of these homes had discharged wastewater directly onto the ground, and one had a septic system. The project also involved installing three honeybucket dump stations in the village and connecting them to the existing sewer system so that an additional 19 homes could dispose of raw sewage in an environmentally safe manner. According to IHS officials, solid rock a few feet beneath the surface made it challenging and expensive to lay the sewer pipes. The total estimated cost was $666,000, with the Environmental Protection Agency (EPA), the Village of Shungopavi, and IHS contributing to the project. According to IHS officials, the project is expected to be fully constructed in 2018.
Oaks Wastewater Lagoons Construction Project, Cherokee Nation, Oklahoma
The Cherokee Nation’s Oaks Wastewater Lagoons project serves an estimated 85 Indian-owned homes in the community of Oaks, Oklahoma. The project consisted of constructing three wastewater lagoons and a spray irrigation field. According to a tribal official, because the previous lagoons leaked into the adjacent creek, local residents who used the creek for swimming, fishing, and other traditional purposes were at high risk of coming in contact with lagoon leakage. The total cost of the project was an estimated $1.22 million, and the U.S. Department of Agriculture, EPA, IHS, the Department of Housing and Urban Development, and the Oklahoma Water Resources Board made contributions to the project. The Cherokee Nation completed the project in 2012 under the provisions of its self-governance compact with IHS.
Drinking Water Pump Station Replacement Project, Sasakwa Rural Water District, Seminole Nation of Oklahoma
The Sasakwa Rural Water District is owned and operated by the Seminole Nation of Oklahoma and serves 61 households—about 60 percent of which are Indian homes, according to tribal officials. The Drinking Water Pump Station Replacement project involved drilling new wells and constructing a new pump station and treatment system. IHS constructed the original Sasakwa water treatment plant in 1972. According to an IHS project summary, the problems with the prior system included (1) recurring leaks in the water transmission line and distribution system and (2) deterioration of the pump and treatment building and equipment due to weather, vandalism, and poor water quality. The project cost approximately $700,000, with EPA funding the project. According to tribal officials, the replacement water treatment plant became operational in 2014.
Appendix V: Comments from the Department of Health and Human Services
Appendix VI: Comments from the Department of the Interior
Appendix VII: Comments from the Environmental Protection Agency
Appendix VIII: Comments from the U.S. Department of Agriculture
Appendix IX: Comments from the Department of Defense
Appendix X: GAO Contacts and Staff Acknowledgments
GAO Contacts
Staff Acknowledgments
In addition to the contacts named above, Jeffery D. Malcolm (Assistant Director, in memoriam), Leslie Kaas Pollock (Analyst in Charge), Carolyn S. Blocker, Mark Braza, John Delicath, David Dornisch, Cynthia Grant, Susan Iott, Serena Lo, Elizabeth Luke, Micah McMillan, Jon Melhus, Jeanette Soares, Sara Sullivan, Kiki Theodoropoulos, and Sarah Veale made key contributions to this report.
Why GAO Did This Study
Tens of thousands of American Indians and Alaska Natives do not have safe drinking water or wastewater disposal in their homes—referred to as needs arising from a sanitation deficiency—and, according to IHS, they lack these facilities at a higher rate than the general population. Among other things, IHS assesses homes, either individually or by reviewing public water systems, to determine any deficiencies. Seven agencies, including IHS, EPA, and USDA, have programs that provide drinking water and wastewater infrastructure assistance to Indian tribes.
GAO was asked to review federal efforts to provide water infrastructure assistance to Indian tribes. This report examines, among other objectives, the extent to which selected federal agencies (1) identified tribes' drinking water and wastewater infrastructure needs and (2) funded tribal water infrastructure projects, including tribes' most severe sanitation deficiencies. GAO reviewed agency data on tribal needs, analyzed agency funding data for tribal water infrastructure projects, reviewed agency policy documents, and interviewed agency officials and officials from 22 tribes representing different geographic locations.
What GAO Found
Federal agencies have identified several billion dollars in existing and future tribal drinking water and wastewater infrastructure needs. Specifically, the Indian Health Service (IHS) worked with tribes to identify, in fiscal year 2016, an estimated $3.2 billion in water infrastructure projects to address existing sanitation deficiencies in Indian homes, and the Environmental Protection Agency (EPA) identified an additional $2.4 billion in future tribal drinking water infrastructure needs over the next 20 years. However, IHS could enhance the accuracy of its information about the water infrastructure needs of some Indian homes. In February 2018, the database that IHS uses to track Indian homes' sanitation deficiencies showed that about one-third of the homes (138,700) had no deficiency. However, because the database does not provide IHS with a way to record if a home's deficiency has been assessed, IHS could not determine whether these homes had no deficiency or if they had not yet been assessed to identify a deficiency. IHS officials stated that improving the database's accuracy would be beneficial. By implementing a way to indicate in its database whether these homes' deficiencies have been assessed, IHS could also more efficiently address any deficiencies in these homes.
Federal agencies provided about $370 million for tribal drinking water and wastewater infrastructure projects in fiscal year 2016, including some projects to address what the agencies identified as the most severe sanitation deficiencies (i.e., communities that lack safe drinking water or wastewater disposal). IHS and U.S. Department of Agriculture (USDA) policies direct the agencies to fund tribal projects that address these deficiencies. However, agency scoring processes may not always prioritize the projects that address them:
IHS assigns points to projects using eight scoring factors, including sanitation deficiency and cost. Based on GAO's review of IHS documents and interviews with agency officials, IHS's process for selecting projects can discourage funding some projects that address the most severe sanitation deficiencies, especially those with a relatively high cost per home. As a result, some projects to serve homes without water infrastructure can remain unfunded for many years. IHS officials said the scoring factors balance a number of interests, and the agency is looking to improve the extent to which it funds projects that address these deficiencies.
USDA uses a different set of scoring factors to assign points when evaluating project applications for its tribal water program, including rural population and income levels. However, USDA does not have a scoring factor to assign points to a project based on whether it will serve homes that lack safe drinking water or wastewater disposal, as it does with another program with similar goals. Instead, USDA officials said they use discretionary points to score projects on this basis, but these points may not be awarded at all. As a result, USDA may not have reasonable assurance that it consistently evaluates project applications in a way that aligns with agency policy to fund projects that address the most severe sanitation deficiencies.
By IHS reviewing and USDA updating their scoring processes, the agencies could have more assurance that the projects they fund address the most severe sanitation deficiencies in Indian communities.
What GAO Recommends
GAO is making 16 recommendations, including that (1) IHS develop a way to indicate in its database if homes' deficiencies have been assessed and (2) IHS and USDA review and update project scoring processes. IHS agreed with these recommendations, and USDA proposed an approach for addressing the recommendation on scoring, as discussed in the report.
Background
Overview of SEC
SEC’s mission is to protect investors; maintain fair, orderly, and efficient markets; and facilitate capital formation. As part of SEC’s strategic plan, SEC strives to promote a securities market that is worthy of the public’s trust and is characterized by, among other things, transparent disclosure to investors of the risks of particular investments.
SEC is headed by a five-member Commission composed of the Chair and four Commissioners. SEC’s responsibilities are divided among five divisions and 24 offices, including the following offices that are responsible for filing review or investor outreach:
Corporation Finance is responsible for reviewing documents that publicly held companies are required to file with SEC, including annual reports, which may include climate-related disclosures. Corporation Finance performs its filing review responsibilities through accounting and legal staff in 11 offices, organized by industry. The division’s staff also provides companies with assistance interpreting the Commission’s rules and assists the Commission with rule making.
The Investor Advisory Committee was established under the 2010 Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) to advise the Commission on regulatory priorities, the effectiveness of disclosure, and initiatives to protect investor interests and to promote investor confidence, among other things. The committee has the authority to submit findings and recommendations for review and consideration by the Commission.
The Office of the Investor Advocate was established in 2014 pursuant to the Dodd-Frank Act to provide a voice for investors, assist retail investors, study investor behavior, and support the Investor Advisory Committee. The Investor Advocate is required to submit reports directly to Congress, without any prior review or comment from the Commissioners or SEC staff.
SEC Disclosure Requirements, Rule Making, and Guidance
SEC rules generally require public companies to disclose, among other things, known trends, events, and uncertainties that are reasonably likely to have a material effect on the company’s financial condition or operating performance through annual and other periodic filings. Information is material if there is a substantial likelihood that a reasonable investor would consider it important in making an investment decision. Regulation S-K, promulgated by SEC, contains disclosure requirements that are applicable to the nonfinancial statement portion of annual filings and other periodic reports filed with SEC.
The Commission occasionally provides guidance on topics of general interest to the business and investment communities by issuing interpretive releases, which publish the Commission’s views and interpret federal securities laws and SEC regulations. The 2010 Guidance was published by the Commission to provide guidance to companies on how existing disclosure requirements apply for climate-related matters.
The 2010 Guidance identifies four items in Regulation S-K that may be most likely to require climate-related disclosure in companies’ annual filings. The four items are as follows:
Description of business. This section of a company’s annual filing requires a description of the company’s business, including its main products and services and the markets in which it operates. This item expressly requires disclosure of certain material effects of complying with environmental laws.
Legal proceedings. This section requires a company to include information about certain material pending legal proceedings, including, in certain circumstances, those arising under any federal, state, or local provisions that have been enacted or adopted regulating the discharge of materials into the environment or primarily for the purpose of protecting the environment.
Risk factors. This section discusses the most significant factors that make investment in the company speculative or risky. Disclosure under this section should clearly state risks and specify how each risk affects the particular company and should not present risks that could apply to any company.
Management’s discussion and analysis. This section presents management’s perspective on material past and anticipated future business results. The information provided in this section is intended to give the investor an opportunity to look at the company through the eyes of management by providing both a short- and long-term analysis of the company’s financial condition. Additionally, in this section companies must identify and disclose known trends, events, demands, commitments, and uncertainties that are reasonably likely to have a material effect on their financial condition or operating performance.
The 2010 Guidance also identifies four different topics under which climate-related risks can be categorized (see table 1). Regardless of whether a company’s identified risk falls under one of these categories, companies need to disclose the information required by the federal securities laws and regulations, and any additional material information necessary to make the required statements, in light of the circumstances under which they are made, not misleading.
Additionally, SEC staff may issue guidance that includes a summary or explanation of rules adopted or amended by the Commission. For example, SEC staff issued a Staff Accounting Bulletin on materiality that provides guidance in applying quantitative materiality thresholds to the preparation of financial statements filed with SEC. According to SEC, staff guidance is not a substitute for any rule, and only the rule itself can provide complete and definitive information on its requirements.
The Commission can adopt new rules through the rule-making process. According to SEC, rule making can involve several steps: concept release, rule proposal, and rule adoption.
Concept release. The Commission at times issues a concept release to seek public input to help identify the appropriate regulatory approach, if any, prior to issuing a rule proposal. In a concept release, SEC describes the area of interest and the Commission’s concerns; identifies different approaches to address the problem; and includes a series of questions that seek the views of the public on the issue.
Rule proposal. The Commission publishes a detailed formal rule proposal for public comment. A rule proposal advances specific objectives and methods for achieving them. The Commission typically provides between 30 and 90 days for public review and comment. Public comment is considered vital to the formulation of a final rule.
Rule adoption. The Commissioners consider what they have learned from public input on the rule proposal and seek to agree on the specifics of a final rule. If a final rule is adopted by the Commission, it becomes part of the official rules that govern the securities industry.
SEC’s Annual Filing and Disclosure Review Process
According to SEC senior staff, SEC reviewers examine climate-related disclosures as part of their review of all disclosures included in the companies’ annual filings. Corporation Finance selects annual filings for review and determines the extent to which annual filings are reviewed based on the requirements of the Sarbanes-Oxley Act of 2002 and review goals established by senior leadership (see fig. 1). The Sarbanes-Oxley Act requires SEC to review the financial statements of each reporting company at least once every 3 years. According to SEC senior staff, SEC staff review the financial statements of a significant number of companies more frequently. SEC staff may also review companies’ nonfinancial disclosures, which may be reviewed as (a) a part of a full cover-to-cover review or (b) a targeted issue review. SEC reviewed the disclosures of approximately 4,400 companies each in fiscal years 2015 and 2016 and approximately 4,200 companies in fiscal year 2017. Of the reviews in fiscal years 2016 and 2017, over 1,400 and 1,250 resulted in comment letters, respectively.
Corporation Finance generally conducts two levels of review at key steps in the filing review process. Once selected for review, a filing enters the review cycle, which generally includes evaluating the disclosure for material compliance with securities laws, preparation and review of comments, review of company responses to comments, and public posting of filing review correspondence on the SEC website. For most filings, a second-level review is required during each of these phases.
According to some SEC staff, as part of SEC’s filing reviews, SEC staff focus on the company’s filing for the current year and can supplement the review with information from the company’s prior years’ filings, filings of other companies in the same industry, SEC’s prior filing review reports, and other external data outside of the filings, including companies’ sustainability and earning reports and financial analyst reports. Companies may voluntarily disclose climate-related risks through channels outside of SEC filings, including nongovernmental organizations, company websites, and in response to reporting requirements in foreign countries.
As part of the review process, SEC staff may issue “comment letters” to companies to obtain additional information, clarification on the companies’ disclosures, or elicit better compliance with applicable requirements. In a review of Corporation Finance’s comment letter process, SEC’s OIG reported in September 2017 that Corporation Finance has established policies, procedures, and internal controls that provide overall guidance for how staff should conduct disclosure reviews and for how information, including comments, should be documented, tracked, and disseminated to companies and the public. However, the report also found, among other things, that SEC reviewers (1) did not always properly document comments before issuing comment letters to companies and (2) inconsistently documented oral comments to companies. The report recommended that Corporation Finance establish mechanisms or controls and provide detailed guidance to staff to improve documentation in the comment letter process. SEC management agreed with these recommendations.
Furthermore, if SEC reviewers find a material inadequacy in a company’s disclosures, the reviewers may refer the potential violations to the Division of Enforcement for investigation. If the Division of Enforcement finds sufficient evidence of a potential violation, SEC may file an action in federal district court or institute an administrative proceeding.
Corporation Finance maintains four distinct electronic databases to track, document, and report on different aspects of its filing review program. One of these is Electronic Data Gathering, Analysis, and Retrieval (EDGAR), which is Corporation Finance’s primary record-keeping system of documents related to filing reviews, including companies’ filings, SEC’s comment letters to companies and their responses to the letters, and SEC staff’s filing review reports.
Developments Associated with Climate-Related Disclosures since the 2010 Guidance
In April 2016, SEC published a Concept Release to seek public comment on modernizing certain business and financial disclosure requirements in Regulation S-K. The 2016 Concept Release specifically requested comments about “Disclosure of Information Relating to Public Policy and Sustainability Matters.” Sustainability disclosures—including topics on climate change, resource scarcity, corporate social responsibility, and good corporate citizenship—are often characterized broadly as environmental, social, or governance concerns. The public comment period for the Concept Release ended on July 21, 2016. According to SEC staff, the agency received approximately 370 unique comment letters on the Concept Release.
Since 2010, several voluntary reporting frameworks have become available for companies to use to report climate-related information, including the following:
In June 2017, the FSB Task Force issued final recommendations for four areas of voluntary climate-related disclosures that companies can choose to adopt, which are applicable to organizations across sectors and jurisdictions.
In October 2016, the Sustainability Accounting Standards Board (SASB) developed a Climate Risk Framework that enables, among other things, the identification of climate-related risks and the development of metrics that help companies disclose material sustainability information to investors.
In May 2013, the Global Reporting Initiative and CDP (formerly known as the Carbon Disclosure Project) signed a Memorandum of Understanding for the two organizations to work together to align areas of their reporting frameworks. This will provide more consistency in companies’ voluntary climate-related disclosures and improve comparability of data for investors.
SEC Issued the 2010 Guidance and Comment Letters to Specific Companies to Clarify Climate-Related Disclosure Requirements
SEC issued the 2010 Guidance, and comment letters to specific companies, to clarify existing disclosure requirements as they apply to climate-related matters. SEC staff said the issuance of the 2010 Guidance was the primary form of communication they used to clarify to companies their climate-related disclosure requirements. However, SEC staff also noted that companies should consider the 2010 Guidance along with all other guidance and securities laws and regulations applicable to their filings. In addition to publishing the 2010 Guidance, SEC staff discussed it immediately following its release in webinars and other public events. For example, an SEC staff member presented information on the 2010 Guidance at a panel discussion for an October 2010 webinar hosted by the National Asian Pacific American Bar Association. Representatives from the industry associations with whom we spoke, which represent the five industries we selected, all agreed that the 2010 Guidance helped clarify climate-related disclosure requirements and stated that they consider the disclosure requirements for climate-related risks to be clear and have no need for additional guidance.
In addition, since the release of the 2010 Guidance, SEC staff has issued individual comment letters to specific companies on their climate-related disclosures. For example, on September 26, 2016, SEC staff issued a comment letter to an oil company requesting that the company expand on its disclosures in the risk factor section of the filing to provide a more in-depth description of its climate-related compliance obligations. SEC publishes comment letters in EDGAR, and other interested companies can view these letters to understand SEC’s assessment of a particular company’s disclosures. Ceres, a nonprofit organization that advocates for climate-related disclosure, analyzed SEC’s comment letters from February 2, 2010—the release date of the 2010 Guidance—to December 31, 2013, to determine how many were related to climate-related disclosures. Ceres reported that SEC staff sent 25 letters relating to climate-related disclosures to 23 companies (2 companies received two letters as a result of back-and-forth correspondence) out of the more than 45,000 comment letters sent during this period. Using the same specific keyword search terms—such as “climate change” and “climate mitigation”—that were identified in the Ceres report, we found 14 comment letters to 14 companies that SEC staff issued relating to climate-related disclosures out of the over 41,000 comment letters issued from January 1, 2014, through August 11, 2017. These comment letters were found during our search but may not represent all climate-related comment letters SEC staff has issued during that time frame.
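A sketch of the kind of keyword screen used to identify climate-related comment letters follows; the letter texts and accession numbers are invented, and the keyword list is only a subset of the terms identified in the Ceres report:

```python
# Hypothetical in-memory corpus of comment-letter texts keyed by accession
# number; the actual review searched letters published in EDGAR.
letters = {
    "0000000000-16-000001": "Please expand your discussion of climate change risks...",
    "0000000000-16-000002": "We note your disclosure regarding executive compensation...",
}

# Illustrative keyword terms; the full Ceres keyword list is longer.
KEYWORDS = ["climate change", "climate mitigation", "greenhouse gas"]

climate_letters = [
    accession
    for accession, text in letters.items()
    if any(kw in text.lower() for kw in KEYWORDS)
]
print(climate_letters)  # first letter only in this illustration
```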
SEC Examined Climate-Related Disclosures for Reports to Congress and Issued a Concept Release Seeking Public Input on Disclosure Requirements
After the issuance of the 2010 Guidance, the Senate Committee on Appropriations directed SEC to conduct two reviews of climate-related disclosures in 2012 and 2014. In response, SEC staff examined climate-related disclosures of a total of 60 companies in six industries each year in 2012 and 2014. In both reports, SEC staff focused on the business description, risk factors, and management’s discussion and analysis sections of companies’ filings and found that most of the filings included some level of climate-related disclosure in one or more of these areas. SEC staff also found that the disclosures they reviewed varied in the level of detail provided. Additionally, in the 2012 report, SEC staff reported that they did not find any notable year-to-year changes in the disclosures reviewed from the year before the 2010 Guidance to the year after. According to SEC senior staff, in addition to its regular evaluation of climate-related disclosures in individual filing reviews, SEC staff continues to periodically assess climate-related disclosures within these industries.
SEC senior staff said they did not expect changes in companies’ climate-related disclosures as a result of the 2010 Guidance since SEC did not adopt any new disclosure requirements. As previously mentioned, SEC published the 2010 Guidance to provide guidance to companies on how existing disclosure requirements apply for climate-related matters. At the time the 2010 Guidance was issued, “cap and trade” legislation was pending in Congress; the Environmental Protection Agency was taking steps to regulate greenhouse gas emissions; and there were efforts to launch an international “cap and trade” system. The 2010 Guidance in part provided clarification on how such changes—if they took place—could be incorporated into companies’ filings. However, some of these changes did not occur.
Through the April 2016 Concept Release related to business and financial disclosures in Regulation S-K, SEC sought input from investors, companies, and other interested parties on the effectiveness of its disclosure requirements, including a request for comment on climate-related disclosures in SEC’s filings. In the April 2016 Concept Release, SEC discussed comments previously received that both noted a growing interest in environmental, social, or governance disclosure among investors and recommended increased sustainability disclosure requirements. According to SEC staff, some comments criticized the primarily voluntary nature of current corporate sustainability reporting outside of companies’ SEC filings. As of December 2017, SEC senior staff said they were weighing possible recommendations for the Commission’s consideration based on comments received on the Concept Release.
SEC Faces Constraints in Reviewing Climate-Related Disclosures as It Primarily Relies on Information That Companies Determine Is Material
As SEC reviews climate-related and other disclosures in companies’ filings, SEC relies primarily on information that companies determine is material. SEC may not have details of the information companies used to support their determination of material climate-related risks. Also, this climate-related information varies in format and specificity among companies. SEC has tools, mechanisms, and resources to help ensure that its staff conducts reviews consistently across filings. Stakeholders, including investor and industry groups, have mixed views on the need for more climate-related disclosures with additional specificity and a consistent format for these disclosures to allow for comparison across filings. Additional disclosure requirements or increased scrutiny of companies’ climate-related information—which, if necessary, SEC and Congress can consider—could have mission and resource implications for SEC’s Division of Corporation Finance.
SEC May Not Have the Details of the Information Companies Rely on in Determining Materiality
SEC reviewers may not have access to the detailed information that companies use to arrive at their determination of whether risks, including climate-related risks, must be disclosed in their SEC filings. SEC’s scope of review of companies’ disclosures under federal securities laws differs from the scope of review that may be possible through the investigative authority of the state attorneys general under state laws. SEC senior staff further noted that Corporation Finance staff assess companies’ filings for compliance with the disclosure requirements under federal securities laws but do not have the authority to subpoena companies’ information. As previously noted, if SEC reviewers find a material inadequacy in a company’s disclosures, the reviewers can refer potential violations to the Division of Enforcement for investigation. SEC senior staff stated that the Division of Enforcement can subpoena company information only after obtaining a formal order of investigation.
In an investigation of Peabody Energy under a New York State law, the Attorney General of New York State subpoenaed the company’s internal documents and found that although the company’s disclosures denied it had the ability to reasonably predict the impact of future climate change laws and regulations on its business, Peabody had made internal market projections showing severe negative impacts from certain potential laws and regulations and failed to disclose those projections to the public. As a result of this investigation, Peabody agreed to disclose, among other things, concerns that the environmental impacts of coal combustion are resulting in increased regulation, which could affect demand for Peabody’s products or services. SEC staff explained that when they become aware of an investigation of a company, they look for and assess disclosures related to any pending legal proceedings and the potential impacts. SEC senior staff told us SEC staff reviewed Peabody Energy’s filings and other publicly available information, including its climate-related disclosures, and did not issue climate-related comments in its review of Peabody Energy’s filings; SEC has not taken any public actions against Peabody Energy following the New York Attorney General’s investigation. Also, SEC staff noted that the additional disclosures Peabody Energy is asked to provide by the New York Attorney General may not be applicable for other companies, but these disclosures may be required if the information is material and necessary to make the disclosures not misleading under the current federal disclosure rules.
If SEC reviewers are aware of publicly available information outside of the filings that contradicts a company's disclosures, they can request additional information or clarification from the company on its climate-related and other disclosures through comment letters. However, a company possesses the information necessary to determine whether environmental regulations will have a material effect on its financial condition or results of operations, and it may claim that the effect of environmental regulations raised by SEC is not material and hence does not need further disclosure. For example, in a 2016 comment letter, SEC staff requested that an oil company expand and clarify its discussion of climate-related compliance with a California environmental law. The company responded that the current costs and impact of compliance with the state law had not been material to the company and that it would seek to more clearly disclose such information in its annual filing for the coming year. SEC staff did not issue any further comment on this issue. SEC senior staff told us that they determine whether further comments are needed based on whether the company's response is consistent with other information the company has reported in other publicly available documents, such as financial analyst reports or the company's sustainability report.
Climate-Related Disclosures Vary in Format and Specificity
Climate-related disclosures vary in format because companies may report similar climate-related disclosures in different sections of their annual filings. We reviewed and identified illustrative examples of climate-related disclosures in the annual filings of 116 S&P 500 Index companies, filed with SEC in 2016, in the five industries in our review (see app. II for additional information). We found, for example, that one beverage company reported its goal to reduce greenhouse gas emissions in the business description section of its filing, while another beverage company reported a similar goal on carbon footprint reduction in the risk factors section of its filing. As previously noted, SEC reviewers may compare a company's disclosures to those of other companies in the same industry to identify potentially missing disclosures when other companies in the same industry have made similar disclosures. When companies report climate-related disclosures in varying formats, SEC reviewers and investors may find it difficult to navigate through the filings to identify, compare, and analyze the climate-related disclosures across filings, especially given the size of each individual filing. In addition, companies' filings may include only a few mentions of climate-related disclosures. For instance, the annual filings we reviewed for an insurance company, an oil company, and a food company were 389 pages, 117 pages, and 136 pages long, respectively. Within these filings, the corresponding numbers of mentions of climate-related disclosures were 9, 13, and 6, based on our analysis using Ceres' SEC Sustainability Disclosure Search Tool. Given that SEC reviewers rely primarily on information companies disclose in filings, it may be difficult to determine whether a low level of disclosure indicates that a company does not face any climate-related risks or does not consider the risks to be material.
Also, climate-related disclosures in some companies' filings use boilerplate language that is not specific to the company and provide information that is unquantified. Our review of the annual filings of 116 S&P 500 Index companies found that some companies' climate-related disclosures provided quantitative information, while others listed existing environmental regulations without specifying the associated impacts on the companies. For example, one oil and gas company stated in its annual filing that the imposition and enforcement of stringent greenhouse gas emissions reduction targets could severely and adversely impact the oil and gas industry and significantly reduce the value of the company's business. However, the company did not provide any quantitative information on such impacts on its business. Additionally, SASB reported in October 2016 that its analysis of almost 1,500 disclosures in annual filings of 637 companies in 72 industries found that almost 30 percent of the disclosures SASB reviewed did not include any climate-related information, some contained boilerplate language or company-tailored narratives, and less than 20 percent of these disclosures included quantitative metrics.
SEC Has Mechanisms, Tools, and Resources to Help Its Staff Consistently Review Filing Disclosures
Although SEC relies primarily on information companies provide in their filings when reviewing climate-related and other disclosures, it has mechanisms, tools, and resources to help its staff consistently review filing disclosures, according to SEC documents and SEC staff we interviewed.
Internal supervisory control testing. As we reported in 2016, Corporation Finance's Disclosure Standards Office (DSO) helps improve consistency in oversight of filing reviews by conducting testing of internal supervisory controls throughout the year. DSO is responsible for managing Corporation Finance's internal supervisory controls and contributes to Corporation Finance's quality and process improvement efforts. DSO senior staff told us that the office examined filing reviews conducted by SEC staff on a random sample of filings in each year from 2014 through 2016. In these reviews, DSO examined the documents that are part of the filing reviews conducted by SEC staff, including the underlying filings, filing review reports prepared by SEC staff, comment letters issued, and the associated responses, among other things. DSO staff also assessed whether SEC staff had followed the relevant Corporation Finance policies and procedures. For example, DSO checked whether staff followed procedures for second-level reviews and issuing comment letters. However, DSO senior staff said they have not conducted any review specific to climate-related disclosures. Corporation Finance senior staff said DSO submits the results of its testing to its managing executive for use in the division's management assurance statements. We also reported in 2016 that DSO helped strengthen components of Corporation Finance's internal control.
Two-level review process. As discussed earlier, SEC generally conducts two levels of review at key steps in the filing review process. The two-level review process helps ensure that staff consistently review disclosures across filings, according to SEC staff we interviewed. For example, the second-level reviewers review the comment letters prepared by the first-level reviewers before sending the letters to companies, according to SEC’s internal policies and procedures. Also, assistant directors and senior assistant chief accountants of the 11 Corporation Finance offices generally meet monthly to discuss recent trends and issues identified in filing reviews in general, which helps ensure that staff assess materiality consistently across industries, according to some SEC staff.
Regulations and guidance. SEC staff can consult regulations and formal and informal SEC guidance for their filing reviews (see table 2 for examples), according to SEC documents and staff we interviewed. SEC posts relevant guidance and other information on its intranet site. Nearly all SEC staff we interviewed said current guidance was sufficient to guide their filing reviews, including the reviews of climate-related disclosures.
Internal and external data. According to SEC's internal review guidance, SEC staff are expected to consider internal and external data as part of the filing review. As previously noted, some SEC staff told us they consider information from prior filings, internal filing review reports, other filings of companies in the same industry, and external data outside of the filings to supplement their filing reviews. For example, SEC staff can generally use internal and external databases to search prior years' filings and filing-review-related comments and correspondence with companies. Some SEC reviewers told us that they also compare disclosures with external information, such as companies' voluntary sustainability reports and financial analyst reports on companies' earnings and operations, to look for inconsistencies in the companies' reporting. However, although Corporation Finance staff can review external information such as a company's sustainability report, they do not have the underlying information the company used to determine whether a potential disclosure was material or to prepare the sustainability report, and they therefore cannot perform an independent assessment of the disclosure based on the materiality of that underlying information. For example, SEC staff noted in a 2016 comment letter to an oil company that they had identified a potential inconsistency between the company's disclosures on uncertainty about a new climate-related regulation and physical risks and information in the company's sustainability report. The company stated that the climate-related regulatory risks were not material and that the climate-related risks in its filing were consistent with information in its sustainability report. SEC did not issue any further comments.
Staff training. SEC staff have had some training on assessing materiality and industry-specific issues but less training that discussed climate-related disclosures, according to SEC staff.
Training on materiality. Most SEC staff we interviewed said training on materiality assessment was part of staff training or their ongoing on-the-job learning in their day-to-day work. Our review of some SEC training materials showed that training discussions covered federal securities laws and disclosure requirements, disclosure review, and materiality but did not focus specifically on climate-related issues. Also, some SEC staff said they consider materiality based on a given company’s specific facts and circumstances as they review filings in their day-to-day work. For example, two SEC staff we interviewed explained that second-level reviewers help first-level reviewers understand how to apply specific facts and circumstances as they consider materiality when they review filings.
Training on industry-specific issues. All SEC staff we interviewed noted that industry-specific training is provided by individual assistant director offices. For example, some staff mentioned training on disclosures for the oil and gas industry. Other staff noted that they also share information on industry-specific issues as part of their communication or meetings with supervisory staff. However, SEC staff we interviewed did not recall any industry-specific training on climate-related disclosures offered by individual assistant director offices.
Training on climate-related disclosures. Some SEC staff we interviewed recalled training on the 2010 Guidance when the guidance was issued or a 2016 brownbag discussion on climate-related disclosure issues, including the Peabody Energy investigation. According to SEC senior staff, the 2016 brownbag included a discussion of the 2010 Guidance and was offered to all Corporation Finance staff. In addition, our review of some meeting agendas showed that these meetings sometimes included discussion items on issues related to climate-related disclosures, such as the Peabody Energy investigation and a proposed environmental regulation. Furthermore, new SEC staff receive training on how to conduct filing reviews in general but not specifically on climate-related disclosures, according to some SEC staff.
Most of the SEC staff we interviewed told us they consider the training they have received to be sufficient for conducting filing reviews. Additionally, an SEC OIG survey of SEC staff published in September 2017 asked both first- and second-level reviewers if they felt they had received adequate training and guidance from SEC on how to conduct a disclosure review. Of the 159 who answered as first-level reviewers, 82 percent said that they felt they received adequate training and guidance to conduct disclosure reviews; and of the 130 who answered as second-level reviewers, 83 percent said that they felt they received adequate training and guidance to conduct disclosure reviews. Other staff we interviewed also noted that they receive training through their day-to-day work on an ongoing basis or when new regulations are issued or the need arises.
Staff experience. All eight supervisory staff we interviewed indicated that, as of August 2017, they had at least 10 years of experience at SEC as filing reviewers, while the 12 nonsupervisory staff we interviewed noted that they had from 2 to 18 years of such experience. Also, most of the SEC staff we interviewed indicated that they had some prior accounting or legal experience related to annual filing preparation or review, but they did not have any direct prior experience on climate-related disclosures. However, most SEC staff we interviewed said they generally do not need technical expertise to understand climate-related disclosures. Some staff said they can consult mining or petroleum engineers within Corporation Finance if the disclosures relate to other subjects, such as oil and gas reserves.
Stakeholders Have Mixed Views on the Amount and Specificity of the Current Climate-Related Disclosures
Stakeholders, including investor and industry groups, have different views on whether additional climate-related disclosures, including the amount and specificity, are needed. Some asset management firms and investor groups have highlighted the need for companies to disclose more climate-related information to help investors make more informed investment decisions. Three large asset management firms stated that they are committed to engaging with and encouraging companies to provide additional climate-related disclosures. For example, in 2017, one firm supported shareholder proposals for two companies to report the impacts of climate change on their operations. The proposals passed with majority shareholder support. The Council of Institutional Investors and Ceres stated in their letters commenting on SEC's April 2016 Concept Release and also told us that the information on environmental risks, including climate risks, has become more significant for investors and companies. The two investor associations also noted that companies' climate-related disclosures in the risk factors and management's discussion and analysis sections of the filings generally do not provide investors with sufficient details. They further stated in their letters commenting on SEC's April 2016 Concept Release that current climate-related disclosures are generally not comparable across companies' filings. Additionally, SASB reported that climate-related disclosures using quantitative metrics may not be comparable because they lack standardization.
In contrast, representatives from the five industry associations with whom we spoke all noted that they consider the current requirements for climate-related disclosures adequate. They also do not believe additional climate-related disclosures are needed in SEC filings because filings should include climate-related information only if it is material. Additionally, some companies are providing climate-related information through channels outside of SEC filings. Three of these industry associations also stated in their letters commenting on SEC's April 2016 Concept Release that they would like to keep the existing requirements for climate-related disclosures.
While some investor organizations we spoke with generally believe more climate-related disclosures are needed, investors have not reached agreement on the priority of advocating for climate-related disclosures or the framework companies should use to report these disclosures. For example, some members of a subcommittee of SEC's Investor Advisory Committee have identified climate-related disclosures as a priority issue, but the subcommittee as a whole did not reach agreement that climate-related disclosures should be among its highest priorities. In addition, as previously described, existing reporting frameworks include those developed by CDP, Global Reporting Initiative, SASB, and the June 2017 FSB Task Force final recommendations. Given that these are voluntary frameworks, companies can report climate-related information using any of the frameworks or not use a framework at all. Further, stakeholders advocating for climate-related disclosures have not agreed on whether to adopt one of the existing reporting frameworks or develop a new framework for companies to use in reporting climate-related disclosures. For example, companies have not determined which of the existing reporting frameworks to use or are uncertain about which framework investors prefer for reporting climate-related disclosures, according to one investor association, representatives of SEC's Investor Advisory Committee, and a senior SEC staff member from the Office of the Investor Advocate.
The SEC senior staff further stated that SEC may be hesitant to recommend a particular framework for companies to use given the uncertainties. Another organization focusing on climate-related disclosures in its letter commenting on SEC’s April 2016 Concept Release suggested that SEC review and consider elements of existing reporting frameworks. Furthermore, SEC’s Investor Advisory Committee, in its letter commenting on the Concept Release, recommended SEC develop an analytical framework on climate-related disclosures, among other things. Most recently in June 2017, the FSB Task Force reported that its recommendations aim to provide a framework to help companies more consistently disclose climate-related information and align their reporting frameworks over time. In particular, the Task Force recommends that companies include material climate-related disclosures in their public filings and encourages standard-setting bodies to support adoption of the recommendations. According to SEC senior staff, while the Task Force recommendations may be helpful if the Commission were to consider new rules on climate-related disclosures in the future, SEC staff is not aware of any specific SEC actions or plans based on the recommendations. Also, additional disclosure requirements or increased scrutiny of companies’ climate-related information—which, if necessary, SEC and Congress can consider—could have mission and resource implications for SEC’s Division of Corporation Finance.
Agency Comments and Our Evaluation
We provided a draft of this report to SEC for review and comment. In oral comments provided on January 10, 2018, senior staff in SEC’s Division of Corporation Finance generally agreed with our findings and provided technical comments, which we incorporated into the report, as appropriate.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees, the Chair of SEC, and other interested parties. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact Michael Clements at (202) 512-8678 or [email protected], or J. Alfredo Gómez at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
Appendix I: Objectives, Scope, and Methodology
This report examines: (1) steps the Securities and Exchange Commission (SEC) has taken to help companies understand disclosure requirements for climate-related risks, (2) steps SEC has taken to examine changes in climate-related disclosures since the release of its 2010 Commission Guidance Regarding Disclosure Related to Climate Change (hereafter referred to as the 2010 Guidance), and (3) constraints SEC faces when reviewing climate-related disclosures and stakeholders’ views of those disclosures.
To address all objectives, we reviewed SEC documents, including the 2010 Guidance and internal filing review guidance, related to SEC’s review of climate-related and other disclosures in companies’ annual filings. We also reviewed SEC’s 2012 and 2014 congressional reports, titled Staff Report to the Senate Committee on Appropriations Regarding Climate Change Disclosure, and additional information on SEC staff’s ongoing reviews of climate-related disclosures. In addition, we reviewed prior GAO and SEC Office of Inspector General reports related to the 2010 Guidance, climate-related risks, and SEC’s filing review process and reports from stakeholders, including the report on recommendations from the Financial Stability Board Task Force on Climate-related Financial Disclosures (FSB Task Force).
We selected five industries to focus on for this report: oil and gas, mining, insurance, electric and gas utilities, and food and beverage. We selected the first four industries because they were identified by SEC staff, in its 2012 and 2014 congressional reports, as more likely than other industries to be affected by climate change-related matters due to the nature of their operations. We also selected the food and beverage industry because we identified companies in this industry that have submitted climate-related disclosures and can provide perspectives on these disclosures, and SEC had not selected companies in this industry for review in its 2012 and 2014 congressional reports or ongoing periodic reviews of climate-related disclosures. For all five industries, we searched companies' annual filings to determine whether the industries include companies that have submitted climate-related disclosures in SEC filings or are represented by associations that have submitted comments on SEC's April 2016 Concept Release related to business and financial disclosures in Regulation S-K. Because we did not search companies' filings in all industries, the industries we focused on in this report may not be a comprehensive list of industries affected by climate-related risks, and views on the selected industries are not generalizable to industries we did not include in our review.
To address the first objective, we reviewed SEC’s 2010 Guidance and Division of Corporation Finance (Corporation Finance) policies and procedures on review of disclosures. We determined the number of comment letters SEC issued to individual companies on climate-related disclosures from February 2010 to August 2017. Specifically, we reviewed a 2014 report by Ceres, a nonprofit organization that works with investors, companies, and public interest groups on sustainable business practices, that analyzed and determined the number of SEC comment letters to companies from February 2, 2010 (the date the 2010 Guidance was released) to December 31, 2013. Additionally, using the same keyword search terms—such as “climate change” and “climate mitigation”—that were used in the Ceres report, we determined the number of SEC comment letters issued to individual companies on issues related to climate-related disclosures from January 1, 2014, through August 11, 2017. Specifically, we searched for SEC’s comment letters in its EDGAR (Electronic Data Gathering, Analysis, and Retrieval) system— which is SEC’s record-keeping system for comment letters to companies, among other things—using the keyword search functionality. The search terms we used were not intended to represent a comprehensive list of keywords that may relate to climate-related issues. Therefore, the nongeneralizable sample of comment letters we identified is not intended to be a comprehensive list or representative sample of comment letters on climate-related information in SEC filings. We reviewed the comment letters identified through our search to understand the climate-related disclosure issues SEC staff has identified.
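A keyword screen of this kind can be illustrated with a short script. The sketch below is our own illustration, not the tool we actually used: it assumes the comment letters have already been retrieved from EDGAR and saved as local text files, the directory name is hypothetical, and only two of the search terms named above are shown.

```python
# Illustrative keyword screen over locally saved comment letter text files.
# Assumes letters were previously retrieved from EDGAR and saved as .txt files;
# the "comment_letters" directory path is hypothetical.
import re
from pathlib import Path

SEARCH_TERMS = ["climate change", "climate mitigation"]  # two of the terms used
pattern = re.compile("|".join(re.escape(t) for t in SEARCH_TERMS), re.IGNORECASE)

matches = []
for letter in Path("comment_letters").glob("*.txt"):
    text = letter.read_text(errors="ignore")
    if pattern.search(text):  # flag any letter mentioning a climate-related term
        matches.append(letter.name)

print(f"{len(matches)} letters mention at least one climate-related term")
```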
To understand SEC’s efforts to clarify climate-related disclosure requirements for companies and industry groups’ views on SEC’s efforts, we interviewed SEC staff from Corporation Finance and representatives from a nongeneralizable sample of industry groups representing companies in the five industries we selected. Specifically, we interviewed representatives from the following industry groups: American Insurance Association, American Petroleum Institute, Edison Electric Institute, Grocery Manufacturers Association, and National Mining Association. We selected these groups because they represent companies in the five industries in our review and they or their members submitted letters commenting on SEC’s April 2016 Concept Release or their members submitted climate-related disclosures to SEC in 2016. Additionally, we reviewed the letters these groups submitted commenting on the Concept Release to understand their views on climate-related disclosures. Views from the industry representatives with whom we spoke cannot be generalized to those we did not include in our review.
To address the second objective, we reviewed SEC's 2012 and 2014 congressional reports and additional information on ongoing periodic reviews of climate-related disclosures. We also reviewed SEC's April 2016 Concept Release, particularly the section that focuses on climate-related disclosures in SEC's filings. Further, we interviewed Corporation Finance staff to understand steps SEC has taken to assess the effect of the 2010 Guidance and planned actions related to comments on climate-related disclosures for the Concept Release.
To address the third objective, we reviewed SEC documents on the review of climate-related and other disclosures in companies' filings, including the 2010 Guidance, filing review guidance, and examples of staff training materials. We also reviewed information related to the New York State Attorney General's investigation of and agreement with Peabody Energy on the company's climate-related disclosures in SEC filings. To understand the specificity of companies' climate-related disclosures in annual filings, we reviewed the Sustainability Accounting Standards Board's (SASB) October 2016 report that analyzed and categorized selected companies' climate-related disclosures according to their level of specificity. To identify illustrative examples of climate-related disclosures, we used Ceres' SEC Sustainability Disclosure Search Tool to search annual filings of S&P 500 Index companies, filed with SEC in 2016, in the five industries we selected. We used Ceres' SEC Sustainability Disclosure Search Tool because it searches companies' SEC annual filings by industry, identifies relevant climate-related disclosures and their locations within the filings, and reproduces the excerpts of these disclosures in a single report. In a search of Ceres' database on September 20, 2017, we identified 116 S&P 500 Index companies that included climate-related disclosures in their annual filings filed in 2016. See appendix II for examples of disclosures with varying levels of specificity.
To obtain information on SEC staff's review of climate-related disclosures—including information on the review process, tools and guidance used in the review, and staff training and experience—we interviewed 20 Corporation Finance staff. Specifically, we interviewed 8 senior supervisory staff from the four Corporation Finance offices that cover reviews of filings of companies in the five industries we selected. We also randomly selected 12 nonsupervisory staff from these same four offices, with a mix of accountants and attorneys and years of experience at SEC. In addition, we interviewed senior staff from Corporation Finance's Disclosure Standards Office to obtain information on the office's examinations of the filing review process conducted from 2014 through 2016. Furthermore, we interviewed Corporation Finance senior staff to obtain an understanding of SEC's enforcement authority in its filing review program and how that differs from the investigative power of state attorneys general.
To understand stakeholders’ views on climate-related disclosures, we reviewed SEC’s April 2016 Concept Release and individual letters commenting on the Concept Release from organizations that represent investors, companies in the five industries we selected, or organizations that focus on climate-related issues. We also reviewed the websites and documents of three investment management firms—BlackRock Advisors LLC, State Street Global Advisors Limited, and Vanguard Group, Inc.—on their efforts to seek additional climate-related disclosures from companies. We reviewed reports by stakeholders, including SASB and the FSB Task Force, to provide perspectives on investors’ views on the current state of climate-related disclosures. We identified these stakeholders because they represent major investor interests or have submitted letters commenting on SEC’s April 2016 Concept Release.
Furthermore, we interviewed representatives from the five industry groups we selected and other nonprofit organizations representing investors or focusing on climate-related issues. Specifically, we interviewed representatives from the following organizations representing investors or focusing on climate-related issues: Center for Climate and Energy Solutions (C2ES)—an independent, nonpartisan, nonprofit organization that works to address climate and energy challenges; Ceres; and the Council of Institutional Investors—a nonprofit, nonpartisan association that represents corporate, public, and union employee benefit funds and endowments. We selected these organizations because they represent investors or focus on climate-related issues and have submitted letters commenting on SEC’s April 2016 Concept Release. Views from the representatives of investor groups with whom we spoke cannot be generalized to those we did not include in our review. Additionally, we interviewed SEC senior staff from the Investor Advisory Committee and the Office of Investor Advocate and an industry representative who is a member of the Investor Advisory Committee to obtain information on investors’ views on climate-related disclosures. We also interviewed Corporation Finance senior staff to understand SEC’s planned efforts, if any, on climate-related disclosures.
Throughout this report, we use certain qualifiers when describing results from interview participants, such as “few,” “some,” and “most.” We define few as two or three; some as four or more but less than most; and most as more than half or nearly all relative to the total number possible. The views of interviewees we selected cannot be generalized to all SEC staff or stakeholders on issues related to climate-related disclosures.
We conducted this performance audit from November 2016 to February 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Examples of Climate-Related Disclosures in Securities and Exchange Commission (SEC) Form 10-K Filings
This appendix provides illustrative examples of climate-related disclosures by two companies in the oil and gas industry. The first example contains boilerplate and unquantified information. The second example contains some quantitative information and metrics. Filings we identified are not intended to be a comprehensive list or representative sample of companies that disclose climate-related information in SEC filings. See appendix I for additional information on the analysis.
Other Items
The amount of insurance covering physical damage to our property and liability related to negative environmental effects resulting from a sudden and accidental pollution event, excluding Atlantic Named Windstorm coverage for which we are self insured, varies by asset, based on the asset's estimated replacement value or the estimated maximum loss.
Risk Factors
Climate change initiatives may result in significant operational changes and expenditures, reduced demand for our products and adversely affect our business. We recognize that climate change is a global environmental concern. Continuing political and social attention to the issue of climate change has resulted in both existing and pending international agreements and national, regional or local legislation and regulatory measures to limit greenhouse gas emissions. These agreements and measures may require significant equipment modifications, operational changes, taxes, or purchase of emission credits to reduce emission of greenhouse gases from our operations, which may result in substantial capital expenditures and compliance, operating, maintenance and remediation costs. In addition, our production is used to produce petroleum fuels, which through normal customer use may result in the emission of greenhouse gases. Regulatory initiatives to reduce the use of these fuels may reduce demand for crude oil and other hydrocarbons and have an adverse effect on our sales volumes, revenues and margins. The imposition and enforcement of stringent greenhouse gas emissions reduction targets could severely and adversely impact the oil and gas industry and significantly reduce the value of our business.
Management's Discussion and Analysis of Financial Condition and Results of Operations
We recognize that climate change is a global environmental concern. We assess, monitor and take measures to reduce our carbon footprint at existing and planned operations. We are committed to complying with all Greenhouse Gas (GHG) emissions mandates and the responsible management of GHG emissions at our facilities.
Risk Factors
We expect to continue to incur substantial capital expenditures and operating costs as a result of our compliance with existing and future environmental laws and regulations. Likewise, future environmental laws and regulations, such as limitations on greenhouse gas emissions, may impact or limit our current business plans and reduce demand for our products. Our businesses are subject to numerous laws and regulations relating to the protection of the environment. These laws and regulations continue to increase in both number and complexity and affect our operations with respect to, among other things:
The discharge of pollutants into the environment.
Emissions into the atmosphere, such as nitrogen oxides, sulfur dioxide, mercury and greenhouse gas emissions.
Carbon taxes.
The handling, use, storage, transportation, disposal and cleanup of hazardous materials and hazardous and nonhazardous wastes.
The dismantlement, abandonment and restoration of our properties and facilities at the end of their useful lives.
Exploration and production activities in certain areas, such as offshore environments, arctic fields, oil sands reservoirs and tight oil plays.
We have incurred and will continue to incur substantial capital, operating and maintenance, and remediation expenditures as a result of these laws and regulations. To the extent these expenditures, as with all costs, are not ultimately reflected in the prices of our products and services, our business, financial condition, results of operations and cash flows in future periods could be materially adversely affected. Although our business operations are designed and operated to accommodate expected climatic conditions, to the extent there are significant changes in the Earth’s climate, such as more severe or frequent weather conditions in the markets we serve or the areas where our assets reside, we could incur increased expenses, our operations could be materially impacted, and demand for our products could fall. Demand for our products may also be adversely affected by conservation plans and efforts undertaken in response to global climate change, including plans developed in connection with the recent Paris climate conference in December 2015. Many governments also provide, or may in the future provide, tax advantages and other subsidies to support the use and development of alternative energy technologies.
Management's Discussion and Analysis of Financial Condition and Results of Operations
Climate Change
There has been a broad range of proposed or promulgated state, national and international laws focusing on greenhouse gas (GHG) reduction. These proposed or promulgated laws apply or could apply in countries where we have interests or may have interests in the future. Laws in this field continue to evolve, and while it is not possible to accurately estimate either a timetable for implementation or our future compliance costs relating to implementation, such laws, if enacted, could have a material impact on our results of operations and financial condition. Examples of legislation or precursors for possible regulation that do or could affect our operations include:
European Emissions Trading Scheme (ETS), the program through which many of the European Union (EU) member states are implementing the Kyoto Protocol. Our cost of compliance with the EU ETS in 2015 was approximately $0.4 million (net share pre-tax).
In Canada during 2015, the Alberta government amended the regulations of the Climate Change and Emissions Act. The regulations now require any existing facility with emissions equal to or greater than 100,000 metric tonnes of carbon dioxide or equivalent per year to reduce its net emissions intensity from its baseline. The reduction is increasing from the current 12 percent in 2015, to 15 percent in 2016 and to 20 percent in 2017. We also incur a carbon tax for emissions from fossil fuel combustion in our British Columbia operations. The total cost of compliance with these regulations in 2015 was approximately $4.7 million.
The U.S. Supreme Court decision in Massachusetts v. EPA, 549 U.S. 497, 127 S.Ct. 1438 (2007), confirming that the EPA has the authority to regulate carbon dioxide as an "air pollutant" under the Federal Clean Air Act.
The U.S. EPA's announcement on March 29, 2010 (published as "Interpretation of Regulations that Determine Pollutants Covered by Clean Air Act Permitting Programs," 75 Fed. Reg. 17004 (April 2, 2010)), and the EPA's and U.S. Department of Transportation's joint promulgation of a Final Rule on April 1, 2010, that triggers regulation of GHGs under the Clean Air Act, may trigger more climate based claims for damages, and may result in longer agency review time for development projects.
The U.S. EPA's announcement on January 14, 2015, outlining a series of steps it plans to take to address methane and smog-forming volatile organic compound emissions from the oil and gas industry. The current U.S. administration has established a goal of reducing the 2012 levels in methane emissions from the oil and gas industry by 40 to 45 percent by 2025.
Carbon taxes in certain jurisdictions. Our cost of compliance with Norwegian carbon tax legislation in 2015 was approximately $31 million (net share pre-tax).
The agreement reached in Paris in December 2015 at the 21st Conference of the Parties to the United Nations Framework on Climate Change, setting out a new process for achieving global emission reductions.
In the United States, some additional form of regulation may be forthcoming in the future at the federal and state levels with respect to GHG emissions. Such regulation could take any of several forms that may result in the creation of additional costs in the form of taxes, the restriction of output, investments of capital to maintain compliance with laws and regulations, or required acquisition or trading of emission allowances. We are working to continuously improve operational and energy efficiency through resource and energy conservation throughout our operations. Compliance with changes in laws and regulations that create a GHG emission trading scheme or GHG reduction policies could significantly increase our costs, reduce demand for fossil energy derived products, impact the cost and availability of capital and increase our exposure to litigation. Such laws and regulations could also increase demand for less carbon intensive energy sources, including natural gas. The ultimate impact on our financial performance, either positive or negative, will depend on a number of factors, including but not limited to:
Whether and to what extent legislation or regulation is enacted.
The timing of the introduction of such legislation or regulation.
The nature of the legislation (such as a cap and trade system or a tax on emissions) or regulation.
The price placed on GHG emissions (either by the market or through a tax).
The GHG reductions required.
The price and availability of offsets.
The amount and allocation of allowances.
Technological and scientific developments leading to new products or services.
Any potential significant physical effects of climate change (such as increased severe weather events, changes in sea levels and changes in temperature).
Whether, and the extent to which, increased compliance costs are ultimately reflected in the prices of our products and services.
The company has responded by putting in place a corporate Climate Change Action Plan, together with individual business unit climate change management plans in order to undertake actions in four major areas:
Equipping the company for a low emission world, for example by integrating GHG forecasting and reporting into company procedures; utilizing GHG pricing in planning economics; developing systems to handle GHG market transactions.
Reducing GHG emissions—In 2014, the company reduced or avoided GHG emissions by approximately 900,000 metric tonnes by carrying out a range of programs across a number of business units.
Evaluating business opportunities such as the creation of offsets and allowances; carbon capture and storage; the use of low carbon energy and the development of low carbon technologies.
Engaging externally—The company is a sponsor of MIT's Joint Program on the Science and Policy of Global Change; constructively engages in the development of climate change legislation and regulation; and discloses our progress and performance through the Carbon Disclosure Project and the Dow Jones Sustainability Index.
The company uses an estimated market cost of GHG emissions in the range of $8 to $35 per tonne depending on the timing and country or region to evaluate future opportunities.
Appendix III: GAO Contacts and Staff Acknowledgments
GAO Contacts
Staff Acknowledgments
In addition to the contacts named above, Barbara L. Patterson (Assistant Director), Giselle Cubillos-Moraga (Analyst in Charge), Anna Chung, Cindy Gilbert, Jesse Lamarre-Vincent, Marc Molino, Tovah Rom, Grant Simmons, and Tyler Spunaugle made key contributions to this report.
Why GAO Did This Study
Impacts from a changing climate can pose serious risks to the global economy and affect many economic sectors, according to reports. Public companies are generally required to disclose certain risks in their SEC filings. In 2010, SEC issued guidance to clarify how existing disclosure requirements apply for climate-related matters.
GAO was asked to review (1) steps SEC has taken to clarify to companies their disclosure requirements for climate-related risks, (2) steps SEC has taken to examine changes companies may have made to their climate-related disclosures since the release of its 2010 Guidance, and (3) constraints SEC faces when reviewing climate-related disclosures and stakeholders' views of those disclosures.
GAO reviewed SEC's disclosure requirements, guidance, and reports on changes in climate-related disclosures; queried SEC's filings system to identify comment letters with issues on climate-related disclosures; identified examples of climate-related disclosures in companies' filings; and interviewed SEC staff and representatives of stakeholder groups, such as five industry associations and nonprofit organizations that work with investors. GAO selected these stakeholders because they either were from industries likely to be affected by climate change-related matters due to the nature of their operations or have a key interest in climate-related issues.
Senior staff from SEC's Division of Corporation Finance generally agreed with GAO's findings.
What GAO Found
To help clarify to companies their disclosure requirements for climate-related matters, the Securities and Exchange Commission (SEC) issued the Commission Guidance Regarding Disclosure Related to Climate Change in 2010 (2010 Guidance). The 2010 Guidance was SEC's primary form of communication to clarify companies' climate-related disclosure requirements. In addition, SEC issued individual comment letters to specific companies on their climate-related disclosures. These letters are publicly available, and companies can view them to understand SEC's assessment of a particular company's disclosures. Representatives from industry associations with whom GAO spoke stated that they consider the disclosure requirements for climate-related risks to be clear and have no need for additional guidance.
SEC issued two reports to Congress, in 2012 and 2014, that examined changes in climate-related disclosures in select industries. SEC found that most of these filings included some level of climate-related disclosures and reported that there were no notable year-to-year changes. SEC staff also continue to periodically assess climate-related disclosures in addition to the regular disclosure review process. Additionally, in April 2016, SEC requested public input on modernizing certain business and financial disclosure requirements, including potential changes to reporting climate-related risks in SEC's filings. As of December 2017, SEC staff said they were developing recommendations for the Commission's consideration based on the comments received.
SEC faces constraints in reviewing climate-related and other disclosures because it relies primarily on information that companies provide. SEC senior staff explained that SEC's Division of Corporation Finance staff assess companies' filings for compliance with federal securities laws—which require companies to disclose material risks—but do not have the authority to subpoena additional information from companies. Additionally, companies may report similar climate-related disclosures in different sections of their filings, and some filings use generic language that is not tailored to the company and do not include quantitative metrics. When companies report climate-related disclosures with varying formats and specificity, SEC reviewers and investors may find it difficult to compare and analyze related disclosures across companies' filings. SEC has tools, mechanisms, and resources—including internal supervisory controls, regulations and guidance, a two-level filing review process, internal and external data, and staff training and experience—that help SEC staff consistently review filing disclosures, according to SEC documents and staff. Representatives of industry associations told GAO that they consider the current climate-related disclosure requirements adequate and that no additional climate-related disclosures are needed. However, some investor groups and asset management firms have highlighted the need for companies to disclose more climate-related information, although members of SEC's Investor Advisory Committee told GAO that investors have not agreed on the priority of climate-related disclosures. Also, additional disclosure requirements or increased scrutiny of companies' climate-related information—which, if necessary, SEC and Congress can consider—could have mission and resource implications for SEC's Division of Corporation Finance.
Background
Black lung benefits include both cash assistance and medical benefits. Maximum cash assistance payments generally ranged from about $650 to $1,300 per month in fiscal year 2017, depending on the number of dependents a miner has. Miners receiving cash assistance are also eligible for medical benefits that cover the treatment of their black-lung-related conditions, which may include hospital and nursing care, rehabilitation services, and drug and equipment charges, according to DOL documentation. DOL estimates that the average annual cost for medical treatment in fiscal year 2017 was approximately $6,980 per miner.
There were about 25,700 total beneficiaries (primary and dependents) receiving black lung benefits during fiscal year 2017 (see fig. 1). The decrease in the number of beneficiaries over time has resulted from a combination of declining coal mining employment and an aging beneficiary population, according to DOL officials. However, the number of black lung beneficiaries could increase in the near term due to the increased occurrence of black lung disease and its most severe form, progressive massive fibrosis, particularly among Appalachian coal miners, according to HHS officials.
Black lung claims are processed by DOL’s Office of Workers’ Compensation Programs. Contested claims are adjudicated by DOL’s Office of Administrative Law Judges, which issues decisions that can be appealed to the Benefits Review Board. Claimants and mine operators may further appeal these agency decisions to the federal courts. If an award is contested, claimants can receive interim benefits, which are generally paid from the Trust Fund according to DOL officials, while their claims are in the appeals process. Final awards are either funded by mine operators—who are identified as the responsible employers of claimants—or the Trust Fund, when responsible employers cannot be identified or do not pay. In fiscal year 2017, black lung claims had an approval rate of about 29 percent, according to DOL data. Of the 19,430 primary black lung beneficiaries receiving benefits during fiscal year 2017, 64 percent (12,464) were paid from the Trust Fund, 25 percent (4,798) were paid by liable mine operators, and 11 percent (2,168) were receiving interim benefits, according to DOL officials.
Black Lung Disability Trust Fund revenue is primarily obtained from mine operators through the coal tax. The coal tax is imposed at two rates, depending on whether the coal is extracted from underground or surface mines. The current tax rates are $1.10 per ton of underground-mined coal and $0.55 per ton of surface-mined coal, up to 4.4 percent of the sales price. Therefore, if a ton of underground-mined coal is sold for less than $25, then the tax paid would be less than $1.10. For instance, if a ton of underground-mined coal sold for $20, then it would be taxed at 4.4 percent of the sales price, or $0.88. To a lesser extent, the Trust Fund also receives other miscellaneous revenue from interest payments and various fines and penalties paid by mine operators, among other sources, according to DOL documentation. Coal tax revenue is collected from mine operators by Treasury's Internal Revenue Service and then transferred to the Trust Fund, where it is used by DOL officials to pay black lung benefits and the costs of administering the program.
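Expressed as a formula, the per-ton tax is the lesser of the fixed rate and 4.4 percent of the sales price. The following is a minimal sketch of that calculation; the function and constant names are ours, for illustration only:

```python
# Illustrative calculation of the coal excise tax described above.
# Rates shown are those in effect through December 31, 2018: $1.10 per ton
# (underground) and $0.55 per ton (surface), capped at 4.4% of the sales price.

UNDERGROUND_RATE = 1.10   # dollars per ton
SURFACE_RATE = 0.55       # dollars per ton
PRICE_CAP_SHARE = 0.044   # 4.4 percent of the sales price

def coal_tax_per_ton(sales_price: float, underground: bool) -> float:
    """Return the excise tax owed on one ton of coal sold at sales_price."""
    fixed_rate = UNDERGROUND_RATE if underground else SURFACE_RATE
    return min(fixed_rate, PRICE_CAP_SHARE * sales_price)

# The example from the text: a $20 ton of underground-mined coal is taxed
# at 4.4 percent of the price, or $0.88, because $20 is below $25.
assert abs(coal_tax_per_ton(20.0, underground=True) - 0.88) < 1e-9
# At $25 or more per ton, the full $1.10 fixed rate applies.
assert coal_tax_per_ton(30.0, underground=True) == 1.10
```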
Trust Fund expenditures include, among other things, black lung benefit payments, certain administrative costs incurred by DOL and Treasury to administer the black lung benefits program, and debt repayments. When necessary to make relevant expenditures under federal law, the Trust Fund borrows from Treasury's general fund. When this occurs, the federal government is essentially borrowing from itself—and hence from the general taxpayer—to fund its benefit payments and other expenditures.
Multiple Factors Have Challenged Trust Fund Finances Resulting in Growing Debt
Multiple factors have challenged Trust Fund finances since the fund was established about 40 years ago. Its expenditures have consistently exceeded its revenue, interest payments have grown, and legislative actions that were expected to improve Trust Fund finances did not completely address its debt. Combined black lung benefit payments and program administrative costs exceeded Trust Fund revenue every year for the program's first decade (fiscal years 1979 through 1989), resulting in the accrual of debt. During the Trust Fund's first three fiscal years in particular, revenue covered less than 40 percent of the Trust Fund's combined benefit payments and administrative costs. For instance, in fiscal year 1980, the Trust Fund received about $251 million in revenue and paid about $726 million in black lung benefits and administrative costs.
Beginning in 1982, revenue increased as a result of the Black Lung Benefits Revenue Act of 1981 that doubled the coal tax rates from $0.50 to $1 per ton of underground-mined coal and from $0.25 to $0.50 per ton of surface-mined coal, up to 4 percent of the sales price. Even with the tax rate increase, combined benefit payments and administrative costs continued to exceed revenue throughout the 1980s (see fig. 2). As a result, the Trust Fund borrowed from Treasury’s general fund to cover the annual differences between its expenditures and revenues, and by fiscal year 1989 the Trust Fund’s outstanding debt to Treasury’s general fund exceeded $3 billion.
Beginning in fiscal year 1990, Trust Fund revenue generally began to exceed combined benefit payments and administrative costs; in fact, total cumulative revenue collected from fiscal years 1979 through 2017 exceeded total cumulative benefit payments and administrative costs incurred during those years. However, interest owed from earlier years of borrowing led to more borrowing and debt. From fiscal years 1979 through 1989, the Trust Fund borrowed—primarily through 30-year term loans, according to Treasury officials—from Treasury's general fund at interest rates that varied from about 6.5 percent to about 13.9 percent. In fiscal year 1985, for instance, the Trust Fund paid about $275 million in interest, which was equal to about half of the total revenue collected that year. Since fiscal year 1990, revenue has generally exceeded combined benefit payments and administrative costs, but interest payments on the Trust Fund's outstanding debt kept total expenditures above total revenue. As a result, the principal amount of the Trust Fund's total outstanding debt to Treasury's general fund increased and exceeded $10 billion by fiscal year 2008.
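The arithmetic behind this dynamic is straightforward: whenever annual interest charges exceed the annual surplus of revenue over benefit payments and administrative costs, the difference must be borrowed, so the principal keeps growing. A minimal illustration using rounded, hypothetical figures rather than actual Trust Fund data:

```python
# Hypothetical illustration of how interest can outpace an operating surplus.
# Figures are rounded examples for demonstration, not actual Trust Fund data.
debt = 3.0e9            # outstanding debt (dollars)
interest_rate = 0.10    # assumed 10% annual interest on prior borrowing
surplus = 200e6         # revenue minus benefit payments and admin costs

for year in range(1, 6):
    interest = interest_rate * debt
    debt += interest - surplus   # the shortfall is borrowed, adding to principal
    print(f"Year {year}: interest ${interest/1e6:.0f}M, debt ${debt/1e9:.2f}B")
```

Even with a steady $200 million operating surplus, the $300 million first-year interest charge in this example forces new borrowing, and the gap widens each year as interest compounds.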
Legislation has been enacted over the years that was expected to improve Trust Fund finances: In 1981, the Black Lung Benefits Revenue Act of 1981 doubled the coal tax rates from $0.50 to $1 per ton of underground-mined coal, and from $0.25 to $0.50 per ton of surface-mined coal, up to 4 percent of the sales price (as mentioned previously).
In 1986, the Consolidated Omnibus Budget Reconciliation Act of 1985 established a 5-year moratorium on interest accrual with respect to repayable advances to the Trust Fund (which we refer to as annual borrowing from Treasury's general fund), and increased the coal tax rates to $1.10 per ton of underground-mined coal and $0.55 per ton of surface-mined coal (up to 4.4 percent of the sales price), where they have remained since.
In 2008, the EIEA included provisions that were expected to eliminate the Trust Fund's debt. Specifically, EIEA (1) generally extended the coal tax rates at their current rates until December 31, 2018 (after which they are scheduled to decrease to their original levels of $0.50 per ton of underground-mined coal, and $0.25 per ton of surface-mined coal, up to 2 percent of the sales price); (2) provided for a one-time federal appropriation toward Trust Fund debt forgiveness (about $6.5 billion, according to DOL data); and (3) provided for the refinancing of the Trust Fund's debt that was not forgiven as a result of EIEA (which we refer to as the Trust Fund's legacy debt). Specifically, the Trust Fund's legacy debt was refinanced with more favorable interest rates, according to DOL data. Interest rates on the refinanced legacy debt range from about 1.4 percent to about 4.5 percent.
The forgiveness and refinancing of Trust Fund debt, along with extending the current coal tax rates through 2018, were expected to result in annual tax revenue that could be used to pay down interest and principal on the Trust Fund's legacy debt, according to DOL and Treasury officials. These officials said that models showed the debt would be eliminated by fiscal year 2040; however, they noted that coal tax revenue has been less than originally projected due, in part, to the 2008 recession and increased market competition from other energy sources. As a result, the Trust Fund's total expenditures continued to exceed revenue, and the Trust Fund borrowed from Treasury's general fund each year from fiscal years 2010 through 2017 to cover debt repayment expenditures. In fiscal year 2017, the Trust Fund's total principal amount of outstanding debt, which includes its legacy debt and the amount borrowed from Treasury's general fund that year, was about $4.3 billion (see fig. 3).
Trust Fund Borrowing Will Likely Continue to Increase through 2050, and Multiple Options Could Reduce Future Debt
Trust Fund Borrowing Will Likely Continue to Increase through 2050
Trust Fund borrowing will likely continue to increase from fiscal years 2019 through 2050 due, in part, to the scheduled coal tax rate decrease of about 55 percent that will take effect in 2019 and declining coal production, according to our moderate simulation. We simulated the effects of the scheduled 2019 tax rate decrease on Trust Fund finances through 2050, and in this report, we generally present the results of a moderate case set of assumptions (see table 1). These simulations are not predictions of what will happen, but rather models of what could happen given certain assumptions. For more information on our simulation methodology, see appendix I. In addition to the moderate case assumptions, we also simulated how Trust Fund debt could change through 2050 given various other assumptions, and the full range of results for all of our simulations is presented in appendix II.
Our moderate case simulation suggests that Trust Fund revenue may decrease from about $485 million in fiscal year 2018 to about $298 million in fiscal year 2019 due, in part, to the scheduled decrease of about 55 percent in the coal tax. Our simulation, which incorporates EIA data on future expected coal production, also shows that annual Trust Fund revenue will likely continue to decrease beyond fiscal year 2019 due, in part, to declining coal production. Domestic coal production declined from about 1.2 billion tons in 2008 to about 728 million tons in 2016, according to EIA. Based on EIA's projections of future production, our moderate simulation shows that Trust Fund annual revenue may continue to decrease from about $298 million in fiscal year 2019 to about $197 million in fiscal year 2050 (see fig. 4).
With the scheduled 2019 tax rate decrease, our moderate case simulation suggests that expected revenue will likely be insufficient to cover combined black lung benefit payments and administrative costs, as well as debt repayment expenditures. Specifically, our moderate case simulation suggests that revenue may not be sufficient to cover beneficiary payments and administrative costs from fiscal years 2020 through 2050 (see fig. 5). For instance, in fiscal year 2029, simulated benefit payments and administrative costs will likely exceed simulated revenue by about $99 million. These annual deficits will likely decrease over time to about $4 million by fiscal year 2050 due, in part, to the assumed continued net decline in total black lung beneficiaries. Our simulation therefore suggests that Trust Fund revenue may also be insufficient to cover the debt repayment expenditures the fund must continue to make through fiscal year 2040, per the payment schedule established following the 2008 EIEA.
Our moderate simulation suggests that the amount borrowed by the Trust Fund will likely increase from about $1.6 billion in fiscal year 2019 to about $15.4 billion in fiscal year 2050 (see fig. 6). Although the Trust Fund's legacy debt decreases through fiscal year 2040, total Trust Fund expenditures—including combined benefit payments and administrative costs as well as debt repayments—will likely continue to exceed revenue, which will require continued annual borrowing from Treasury's general fund. However, the amount borrowed by the Trust Fund could vary depending, in part, on future coal production and the number of new beneficiaries and could range between about $6 billion and about $27 billion in 2050, according to our simulations (see appendix II).
Adjusting Coal Tax Rates, Forgiving Interest, and Forgiving Debt Are Options That Could Improve the Trust Fund’s Future Financial Position
We simulated three options that could affect Trust Fund finances through fiscal year 2050: (1) adjusting the coal tax, (2) forgiving interest, and (3) forgiving debt. In each of the simulations, we compared the results of the option to a baseline in which the coal tax rates decrease by about 55 percent, which we refer to as the scheduled 2019 tax rate decrease. For the interest and debt forgiveness options, the baseline also assumes that there is no interest or debt forgiveness. The simulated options are not intended to be exhaustive, and we are not endorsing any particular option or combination of options.
Adjust Coal Tax Rates
Using the moderate case, we simulated four options: (1) implementing the 2019 coal tax rate reduction to $0.50 per ton of underground-mined coal and $0.25 per ton of surface-mined coal; (2) maintaining the current coal tax rates of $1.10 per ton for underground-mined coal and $0.55 per ton of surface-mined coal; (3) reducing the tax rates by 25 percent (from $1.10 and $0.55); and (4) increasing these tax rates by 25 percent (see fig. 7). Increasing the tax rates by 25 percent was the only option that eliminated simulated Trust Fund debt by fiscal year 2050, according to our moderate case simulation.
Forgive Interest

We simulated three interest forgiveness options: forgiving interest on (1) legacy debt, (2) annual borrowing, and (3) all debt. Our moderate case simulation suggests that forgiving interest would not eliminate simulated debt by fiscal year 2050 (see fig. 8).
Forgive Debt

We simulated two debt forgiveness options: forgiving principal and interest on (1) legacy debt and (2) all debt. Our moderate case simulation suggests that both debt forgiveness options would reduce simulated Trust Fund borrowing by fiscal year 2050, but these options would not eliminate debt altogether because simulated revenue will likely not be enough to cover simulated expenditures (see fig. 9). In these cases, the Trust Fund would need to continue borrowing from Treasury's general fund to cover annual deficits, and thus would continue to accumulate debt.
While adjusting coal tax rates and forgiving interest or debt could reduce the Trust Fund’s simulated borrowing by 2050, implementing them could affect the coal industry or general taxpayers, according to stakeholders we interviewed. For instance, a coal industry representative noted that maintaining the coal tax at its current rate would continue to burden the coal industry and increasing the tax would exacerbate the burden at a time when coal production has been declining. Treasury officials noted that the costs associated with forgiving Trust Fund interest or debt would be borne by the general taxpayer since Treasury borrows from taxpayers to lend to the Trust Fund as needed. These officials also said that making a one-time federal appropriation to forgive interest or debt would be the most transparent way to satisfy the Trust Fund’s outstanding debt to Treasury’s general fund.
In addition to the simulations, other options could affect the financial position of the Trust Fund, including reducing black lung benefits, eliminating or adjusting the coal tax cap, or creating a variable coal tax. Our moderate case simulation suggests that completely eliminating black lung benefits as of fiscal year 2019 could reduce the Trust Fund's borrowing from Treasury's general fund in fiscal year 2050 from about $15.4 billion to about $6.4 billion. However, doing so would generally mean that coal tax revenue would be collected solely to fund the repayment of Trust Fund debt. Another option could be to eliminate or adjust the coal tax cap, which currently prevents mine operators from paying a coal tax of more than 4.4 percent of the price per ton of coal sold. If the coal tax cap were eliminated, for instance, mine operators would pay $1.10 per ton of underground-mined coal and $0.55 per ton of surface-mined coal regardless of the sales price, which could increase revenue. As an additional option, changing the structure of the coal tax to flexible rates that change based on an annual actuarial assessment of the Trust Fund could help to ensure that coal mine operators pay the necessary amount of tax to cover Trust Fund expenditures, without resulting in a Trust Fund balance or deficit.
Multiple Options Could Reduce Future Trust Fund Debt and Would Distribute the Financial Burden Differently Among General Taxpayers and Industry
Multiple options could reduce the Trust Fund’s future debt and distribute the financial burden among the coal industry and general taxpayers. We simulated whether various coal tax and debt forgiveness options could balance the Trust Fund by fiscal year 2050, whereby its simulated revenue would be sufficient to cover its simulated expenditures. These options were selected, in part, based on interviews with Trust Fund stakeholders and the availability of DOL and other data. We approached these simulations from two perspectives. First, we simulated how much Trust Fund debt would need to be forgiven based on various coal tax rates. Second, we simulated the average tax collected per ton needed to balance the Trust Fund by 2050, based on certain debt forgiveness options. The simulated options are not intended to be exhaustive and we are not endorsing any particular combination of options.
Our first set of options using the moderate case simulations is based on the current coal tax rates of $1.10 per ton of underground-mined coal and $0.55 per ton of surface-mined coal, and shows the amount of debt forgiveness in fiscal year 2019 needed to balance the Trust Fund by fiscal year 2050 based on certain tax rates (see fig. 10). Specifically, our moderate case simulations show the following:

Increasing current coal tax rates by 25 percent could balance the Trust Fund by 2050 and would likely require no debt forgiveness. For this option, the simulated coal tax revenue would likely be sufficient to cover simulated Trust Fund expenditures, including combined benefit payments and administrative costs, as well as debt repayments. However, this option would place the burden solely on the coal industry, which would be paying higher taxes at a time when coal production has been declining.
Maintaining current coal tax rates could balance the Trust Fund by 2050 if coupled with about $2.4 billion of debt forgiveness. This option would distribute the burden among the coal industry and general taxpayers.
Decreasing current coal tax rates by 25 percent could balance the Trust Fund by 2050 if coupled with about $4.8 billion in debt forgiveness. This option would burden the coal industry less than maintaining the current tax rates, but would increase the burden on general taxpayers.
Decreasing current tax rates by 55 percent, which we refer to as the scheduled 2019 tax rate decrease, would balance the Trust Fund by 2050 if coupled with about $7.8 billion in debt forgiveness. This figure comprises the Trust Fund's total simulated outstanding debt in fiscal year 2019 ($6.6 billion) and about $1.2 billion more, which would be required because the Trust Fund will accrue additional debt from fiscal years 2020 through 2050, according to our moderate case simulations. The coal industry would bear some of the financial burden under this option, while general taxpayers would also bear a financial burden.
Our second set of options using moderate case simulations show the change in average coal tax revenue collected per ton to balance the Trust Fund by fiscal year 2050 based on certain debt forgiveness options (see fig. 11). Specifically, our moderate simulations show the following:
Forgiving the Trust Fund’s legacy debt would allow for an average tax collected of about $0.59 per ton to balance the Trust Fund by 2050. Based on certain assumptions, this could be accomplished with a tax of $0.88 per ton on underground-mined coal and $0.44 per ton on surface-mined coal.
Forgiving all Trust Fund debt would allow for an average tax collected of $0.47 per ton of coal sold to balance the Trust Fund by 2050. Based on certain assumptions, this could be accomplished with a tax of $0.70 per ton on underground-mined coal and a tax of $0.35 per ton of surface-mined coal.
Agency Comments
We provided a draft of this report to the Departments of Labor (DOL), Treasury, and Health and Human Services (HHS) for review and comment. DOL, Treasury, and HHS provided technical comments, which we incorporated as appropriate.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time we will send copies of this report to the appropriate congressional committees, the Secretaries of Labor, Treasury, and Health and Human Services, and other interested parties. In addition, the report will be available at no charge on GAO’s web site at http://www.gao.gov.
If you or your staff should have any questions about this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
Appendix I: Black Lung Disability Trust Fund Simulation Methodology
We examined (1) the extent to which Black Lung Disability Trust Fund (Trust Fund) debt may change through 2050 and (2) selected options that could improve its future financial position. We interviewed officials from the Departments of Labor (DOL), Treasury, and Health and Human Services (HHS), as well as representatives from the National Mining Association and the United Mine Workers of America. We then selected options to simulate based, in part, on these interviews and the availability of DOL and other data. These options included adjusting the coal tax, forgiving interest on some or all Trust Fund debt, forgiving some or all Trust Fund debt, and various combinations of these options. The options we simulated are not intended to be exhaustive and we are not endorsing any particular option or combination of options. Our simulations are based on various assumptions and simulate Trust Fund revenues and expenditures from fiscal years 2016 through 2050. To develop these simulations, we used actual and projection data from (1) DOL for fiscal years 2015 through 2040; (2) Treasury's Office of Tax Analysis for fiscal years 2011 through 2015; (3) the Department of Energy's Energy Information Administration (EIA) for calendar years 2015 through 2050; and (4) the Office of Management and Budget for fiscal year 2017.
Black Lung Benefit Expenditures
To simulate future Trust Fund benefit expenditures, we simulated the number of beneficiaries each fiscal year, and the annual average amount of benefits received (cash assistance and medical benefits). To simulate the numbers of beneficiaries, we used DOL data on the (1) age distributions of miner and widow beneficiaries for fiscal year 2015; (2) mortality rates by age for miner and widow beneficiaries as of fiscal year 2015; and (3) numbers of beneficiaries—including married miners, single miners, widows, and miners receiving medical benefits only—in fiscal year 2015. We assumed—as DOL does in its Black Lung Budget and Liability Model—that all miners are men, all widows are women, and all spouses are 3 years younger than the miner. We also assumed that the age distribution of single miners is the same as for married miners, and that the age distribution of new miner and widow beneficiaries is the same as for miner and widow beneficiaries during fiscal year 2015. We used DOL’s mortality rates to simulate the number of beneficiaries of each age and type in each year, and used those numbers to then simulate the total number of beneficiaries of each type each year (see table 2).
We also assumed that there will be no new medical-benefit-only recipients.
The number of married miner beneficiaries age a in fiscal year y is equal to the number of new married miner beneficiaries age a in fiscal year y, plus the number of married miner beneficiaries age a-1 in fiscal year y-1 who survived and whose spouse survived. The total number of married miner beneficiaries in fiscal year y is then the sum of the number of married miner beneficiaries of all ages in fiscal year y. Finally, we computed the reported number of married miner beneficiaries as the average of the prior fiscal year's total and the current fiscal year's total.
The number of single miner beneficiaries age a in fiscal year y is equal to the number of new single miner beneficiaries age a in fiscal year y, plus the number of single miner beneficiaries age a-1 in fiscal year y-1 who survived, plus the number of married miner beneficiaries age a-1 in fiscal year y-1 who survived but whose spouse did not survive. The total number of single miner beneficiaries in fiscal year y is then the sum of the number of single miner beneficiaries of all ages in fiscal year y. Finally, we computed the reported number of single miner beneficiaries as the average of the prior fiscal year's total and the current fiscal year's total.
The number of widow beneficiaries age a in fiscal year y is equal to the number of new beneficiaries who are widows age a in fiscal year y, plus the number of widow beneficiaries age a-1 in fiscal year y-1 who survived, plus the number of married miner beneficiaries age a+2 in fiscal year y-1 who did not survive but whose spouse did survive. The total number of widow beneficiaries in fiscal year y is then the sum of the number of widow beneficiaries of all ages in fiscal year y. Finally, we computed the reported number of widow beneficiaries as the average of the prior fiscal year's total and the current fiscal year's total.
The number of medical-benefit-only (MBO) beneficiaries of age a in fiscal year y is equal to the number of MBO beneficiaries of age a-1 in fiscal year y-1 who survived. The total number of MBO beneficiaries in fiscal year y is then the sum of the number of MBO beneficiaries of all ages in fiscal year y. Finally, we computed the reported number of MBO beneficiaries as the average of the prior fiscal year's total and the current fiscal year's total.
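Though the report presents these recurrences only in words, they translate directly into code. The following is a minimal sketch of the married-miner recurrence under the report's stated assumptions (spouses 3 years younger than miners; reported totals averaged across adjacent years); the age range, variable names, and toy inputs are our own illustrative placeholders, not DOL's actual Black Lung Budget and Liability Model.

```python
# Illustrative sketch of the married-miner cohort recurrence (not DOL's model).
# Hypothetical inputs: new_married maps age -> new beneficiaries this year;
# p_miner[a] and p_spouse[a] are one-year survival probabilities at age a.
AGES = range(30, 111)  # assumed age range, for illustration only

def step_married(married_prev, new_married, p_miner, p_spouse):
    """Advance the married-miner cohort by one fiscal year.

    A married miner of age a this year is either a new beneficiary of age a,
    or a miner of age a-1 last year who survived and whose spouse (3 years
    younger, so age a-4 last year) also survived.
    """
    married = {}
    for a in AGES:
        carried = married_prev.get(a - 1, 0.0) * p_miner[a - 1] * p_spouse[a - 4]
        married[a] = new_married.get(a, 0.0) + carried
    return married

def reported_total(total_prev, total_curr):
    """Reported counts average the prior- and current-year totals."""
    return (total_prev + total_curr) / 2.0

# Toy usage: flat 0.97 survival, 100 beneficiaries at each age, no new entrants.
p = {a: 0.97 for a in range(0, 115)}
cohort = {a: 100.0 for a in AGES}
next_cohort = step_married(cohort, {}, p, p)
avg = reported_total(sum(cohort.values()), sum(next_cohort.values()))
```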
Coal Tax Revenues
To simulate future coal tax revenue, we used Treasury and EIA data to calculate (1) the amounts of underground and surface-mined coal taxed at fixed dollar amounts of $1.10 and $0.55 per ton, respectively, in 2015; (2) the amounts of underground and surface-mined coal taxed at variable dollar amounts per ton equal to 4.4 percent of the price in 2015; and (3) average prices of underground and surface-mined coal taxed at 4.4 percent of the price in 2015. We then used EIA data on projected amounts of total coal production, underground-mined coal production, lignite coal production, and coal exports, as well as projected average coal prices, for the period from 2015 through 2050 to simulate future coal tax revenues (see table 3).
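The tax structure described above reduces to a per-ton levy equal to the statutory rate or 4.4 percent of the sales price, whichever is lower. The sketch below illustrates that identity for one year; the function name, tonnage split, and prices are assumptions for illustration, and exported and lignite coal are assumed here to be excluded from the tonnage inputs.

```python
def coal_tax_revenue(tons_ug, tons_sf, price_ug, price_sf,
                     rate_ug=1.10, rate_sf=0.55, cap=0.044):
    """One year of simulated coal tax revenue, in dollars.

    Each taxable ton is taxed at the statutory per-ton rate, limited to
    `cap` (4.4 percent) of its average sales price.
    """
    tax_ug = min(rate_ug, cap * price_ug)  # underground-mined coal
    tax_sf = min(rate_sf, cap * price_sf)  # surface-mined coal
    return tons_ug * tax_ug + tons_sf * tax_sf

# Placeholder figures only: 200 million underground tons at $50/ton (the
# $1.10 per-ton rate applies) and 450 million surface tons at $11/ton
# (the price cap binds, since 0.044 * $11 = $0.484 < $0.55).
revenue = coal_tax_revenue(200e6, 450e6, 50.0, 11.0)
```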
Other Trust Fund Expenditures and Revenues

We simulated other Trust Fund expenditures and revenues, including administrative costs and debt repayments (see table 4). For our simulations, total Trust Fund expenditures are the sum of black lung benefits (cash assistance and medical benefits), total administrative costs, repayment of interest and principal on outstanding debt to Treasury's general fund, and other expenditures. Total Trust Fund revenues are the sum of coal tax revenue and other miscellaneous revenue, and exclude annual borrowing from Treasury's general fund. Annual borrowing from Treasury's general fund is the difference between total Trust Fund expenditures and revenues and is assumed to be repaid with interest the following year. If total revenues are greater than total expenditures, then the Trust Fund has a balance and would not have to borrow that year. In this case, we assumed that the Trust Fund will earn interest on that balance at the same rate at which interest would accrue on annual borrowing.
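The borrowing rule just described is a simple cash-flow recursion: each year's shortfall is borrowed and repaid with interest the following year, and a surplus is assumed to earn interest at the same rate. A minimal sketch with hypothetical inputs follows.

```python
def simulate_borrowing(years, expenditures, revenues, rate):
    """Roll annual borrowing forward under the rule described above.

    `expenditures` and `revenues` map fiscal year -> dollars, excluding
    repayment of the prior year's advance, which is added here; `rate` is
    the assumed interest rate on annual borrowing (and on any surplus).
    """
    carry = 0.0  # prior-year advance; negative values represent a surplus
    borrowing = {}
    for y in years:
        total_exp = expenditures[y] + carry * (1.0 + rate)
        carry = total_exp - revenues[y]
        borrowing[y] = max(carry, 0.0)  # no borrowing in surplus years
    return borrowing

# Hypothetical three-year example:
years = [2019, 2020, 2021]
exp = {2019: 500e6, 2020: 480e6, 2021: 460e6}
rev = {2019: 300e6, 2020: 310e6, 2021: 320e6}
advances = simulate_borrowing(years, exp, rev, rate=0.02)
```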
We simulated how the scheduled 2019 tax rate decrease and various options including adjusting the coal tax, forgiving debt interest, and forgiving debt principal and interest may affect Trust Fund finances through fiscal year 2050 (see table 5). The options listed are not intended to be exhaustive and we are not endorsing any particular option or combination of options.
We simulated option combinations for coal tax rates, interest forgiveness, and debt forgiveness to demonstrate how potential financial adjustments could affect future Trust Fund borrowing from Treasury’s general fund through fiscal year 2050. For options that involve adjusting coal tax rates, we estimated the amount of debt that would need to be forgiven in fiscal year 2019 for the Trust Fund’s revenues to be sufficient to cover its expenditures through fiscal year 2050, assuming the Trust Fund does not borrow from Treasury’s general fund after fiscal year 2018. To do so, we first calculated the real discounted present value of Trust Fund expenditures for fiscal years 2019 through 2050, including benefit payments, administrative costs, legacy debt repayments, and repayment of annual borrowing from Treasury’s general fund. Second, we calculated the real discounted present value of Trust Fund revenue for the same period, including coal tax revenue and other miscellaneous revenue. Third, we calculated debt forgiveness as the difference between the real discounted present value of Trust Fund expenditures from the first calculation and the real discounted present value of Trust Fund revenues from the second calculation. When the amount of debt forgiveness is greater than the amount of debt outstanding, the Trust Fund would need an additional cash inflow in addition to forgiveness of all outstanding debt. Amounts of debt forgiveness less than zero suggest that no debt forgiveness is required.
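In equation form (notation ours, since the report describes these steps only in words), the debt forgiveness amount is the present-value shortfall of revenues against expenditures, with $E_y$ and $R_y$ denoting real Trust Fund expenditures and revenues in fiscal year $y$ and $r$ a real discount rate, which the report does not specify:

$$D_{\text{forgive}} = \sum_{y=2019}^{2050} \frac{E_y}{(1+r)^{y-2019}} - \sum_{y=2019}^{2050} \frac{R_y}{(1+r)^{y-2019}}$$

A value of $D_{\text{forgive}}$ above the outstanding debt implies that an additional cash inflow is needed; a negative value implies that no forgiveness is required, matching the interpretations in the text.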
For options involving forgiving debt (interest or principal), we estimated the average tax per ton of coal that, if implemented in fiscal year 2019, would provide the Trust Fund sufficient revenue to cover its expenditures through fiscal year 2050, assuming the Trust Fund does not receive any advances from Treasury’s general fund after fiscal year 2018. To do so, we first calculated the real discounted present value of Trust Fund expenditures for the period from fiscal year 2019 through fiscal year 2050, again including benefit payments, administrative costs, legacy debt repayments, and repayment of annual borrowing from Treasury’s general fund, minus the real discounted present value of miscellaneous revenues for the same period. Second, we calculated the real discounted present value of coal production for the same period. Third, we calculated the average tax per ton of coal as the first amount divided by the second amount.
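Likewise, the average tax per ton described here (again in our notation) divides the present value of net expenditures by the present value of taxable coal production, where $M_y$ is miscellaneous revenue and $Q_y$ is tons of taxable coal in fiscal year $y$:

$$t^{*} = \frac{\sum_{y=2019}^{2050} (E_y - M_y)/(1+r)^{y-2019}}{\sum_{y=2019}^{2050} Q_y/(1+r)^{y-2019}}$$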
To assess the sensitivity of each option, we ran each simulation 36 times, using four different sets of assumptions about the numbers of future beneficiaries and nine different sets of assumptions about future coal production and prices (see table 6). Doing so provided a range of estimates about the Trust Fund's future borrowing needs and provided insight on the sensitivity of its overall financial position relative to its various expenditures and revenues. The analysis also provided a range of estimates of the amount of debt forgiveness needed to bring the Trust Fund into balance by fiscal year 2050, assuming various coal tax rates, and of the average tax collection per ton needed to do the same, assuming various amounts of debt forgiveness.
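The 36 runs per option follow from crossing the two assumption sets (4 beneficiary scenarios × 9 coal outlooks). Schematically, with hypothetical scenario labels and a placeholder run_simulation function standing in for the full model:

```python
from itertools import product

# Hypothetical labels: 4 beneficiary-growth assumptions and 9 coal
# production/price outlooks give 4 x 9 = 36 runs per simulated option.
beneficiary_scenarios = ["none", "low", "moderate", "high"]
coal_outlooks = [f"eia_case_{i}" for i in range(1, 10)]

def run_simulation(beneficiary_case, coal_outlook):
    """Placeholder for the full Trust Fund simulation (not shown)."""
    return {"fy2050_borrowing": None}

results = {
    (b, c): run_simulation(b, c)
    for b, c in product(beneficiary_scenarios, coal_outlooks)
}
```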
From the range of estimates that resulted from our sensitivity analysis, we selected cases with moderate expectations related to future Trust Fund expenditures and revenue. Specifically, for future expenditures, we assumed an average growth rate of new black lung beneficiaries for fiscal years 2003 through 2015 as a moderate case that reflects historical experience. For future revenue, we used a moderate coal production outlook based on EIA’s reference case, which reflects moderate expectations about future coal production based on various assumptions about economic growth, oil prices, technological innovation, and energy policy.
Appendix II: Results of GAO’s Black Lung Disability Trust Fund Simulations
We summarized the results of our simulations by showing the extent to which the Black Lung Disability Trust Fund’s (Trust Fund) balance—the sum of tax revenue and miscellaneous revenue less expenditures—may change in fiscal year 2050 for each option simulated. For example, with the scheduled 2019 tax rate decrease, our moderate case simulations suggest that the Trust Fund would likely have a deficit in fiscal year 2050 of about $15.4 billion.
Multiple options could reduce the Trust Fund’s future debt and distribute the financial burden among the coal industry and general taxpayers. We simulated how various coal tax and debt forgiveness options could balance the Trust Fund by fiscal year 2050, whereby its simulated revenue would be sufficient to cover its simulated expenditures. We approached these simulations from two perspectives. First, we simulated how much Trust Fund debt would need to be forgiven based on various coal tax rates. Second, we simulated the average tax collected per ton needed to balance the Trust Fund by 2050, based on certain debt forgiveness options.
For our first set of simulations, we calculated the amount of debt outstanding in fiscal year 2019 and the amount that would likely need to be forgiven in fiscal year 2019 for the Trust Fund to have sufficient revenues to cover its expenditures by fiscal year 2050, assuming that it does not borrow from Treasury’s general fund after fiscal year 2018. For example, before any options are implemented, our moderate case simulations suggest that the Trust Fund’s outstanding debt in fiscal year 2019—including both legacy debt and annual borrowing from Treasury’s general fund—would likely be about $6.6 billion (after discounting and adjusting for inflation). Therefore, with implementation of the coal tax rate decrease of about 55 percent as scheduled in calendar year 2019, about 117.7 percent of that debt would need to be forgiven to balance the Trust Fund. In other words, balancing the Trust Fund would require forgiveness of $6.6 billion and an additional cash inflow of about $1.2 billion because the Trust Fund will accrue additional debt from fiscal years 2020 through 2050, according to our moderate case simulations (see table 8).
For our second set of simulations, we estimated the average tax per ton of coal that, if implemented in fiscal year 2019, would likely provide the Trust Fund sufficient revenues to cover its expenditures in fiscal year 2050, assuming that it does not borrow from Treasury’s general fund after fiscal year 2018. For example, if all principal and interest on Trust Fund legacy debt is forgiven, as of 2019, the estimated average tax that balances the Trust Fund is about $0.59 per ton (see table 9). Based on certain assumptions, this could be accomplished with a tax of $0.88 per ton on underground-mined coal and $0.44 per ton on surface-mined coal.
Appendix III: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Blake Ainsworth (Assistant Director), Justin Dunleavy (Analyst-in-Charge), Angeline Bickner, Courtney LaFountain, and Rosemary Torres Lerma made key contributions to this report. Also contributing to this report were James Bennett, Melinda Bowman, Lilia Chaidez, Caitlin Cusati, Holly Dye, Alex Galuten, Carol Henn, John Lack, Emei Li, Almeta Spencer, Kate van Gelder, and Shana Wallace.

Why GAO Did This Study
With revenue of about $450 million in fiscal year 2017, the Trust Fund paid about $184 million in benefits to more than 25,000 coal miners and eligible dependents. However, the Trust Fund also borrowed about $1.3 billion from the Treasury's general fund in fiscal year 2017 to cover its debt repayment expenditures. Adding to this financial challenge, the coal tax that supports the Trust Fund is scheduled to decrease by about 55 percent beginning in 2019. GAO was asked to review the financial position of the Trust Fund and identify options to improve it.
This report examines (1) factors that have challenged the financial position of the Trust Fund since its inception and (2) the extent to which Trust Fund debt may change through 2050, and selected options that could improve its future financial position. GAO reviewed Trust Fund financial data from fiscal years 1979 through 2017. GAO also interviewed officials from the Departments of Labor, Treasury, and Health and Human Services (HHS), as well as representatives of coal industry and union groups. Using assumptions, such as the scheduled coal tax decrease of about 55 percent and moderately declining coal production, GAO simulated the extent to which Trust Fund debt may change through 2050. GAO also simulated how selected options, such as forgiveness of debt, could improve finances. The options simulated are not intended to be exhaustive. Further, GAO is not endorsing any particular option or combination of options.
GAO provided a draft of this report to DOL, Treasury, and HHS. The agencies provided technical comments, which were incorporated as appropriate.
What GAO Found
Multiple factors have challenged Black Lung Disability Trust Fund (Trust Fund) finances since it was established about 40 years ago. Its expenditures have consistently exceeded its revenues, interest payments have grown, and actions taken that were expected to improve Trust Fund finances did not completely address its debt. When necessary to make expenditures, the Trust Fund borrows with interest from the Department of the Treasury's (Treasury) general fund. Because Trust Fund expenditures have consistently exceeded revenue, it has borrowed almost every year since 1979, its first complete fiscal year, and, as a result, its debt and interest payments increased. Legislative actions were taken over the years, including (1) raising the rate of the coal tax that provides Trust Fund revenues and (2) forgiving debt. For example, the Energy Improvement and Extension Act of 2008 provided an appropriation toward Trust Fund debt forgiveness; about $6.5 billion was forgiven, according to Department of Labor (DOL) data (see figure). However, coal tax revenues were less than expected due, in part, to the 2008 recession and increased competition from other energy sources, according to DOL and Treasury officials. As a result, the Trust Fund continued to borrow from Treasury's general fund from fiscal years 2010 through 2017 to cover debt repayment expenditures.
GAO's simulation suggests that Trust Fund borrowing will likely increase from fiscal years 2019 through 2050 due, in part, to the coal tax rate decrease of about 55 percent that will take effect in 2019 and declining coal production. The simulation estimates that Trust Fund borrowing may exceed $15 billion by 2050 (see figure). However, various options, such as adjusting the coal tax and forgiving interest or debt, could reduce future borrowing and improve the Trust Fund's financial position. For example, maintaining the current coal tax rates and forgiving debt of $2.4 billion could, under certain circumstances, balance the Trust Fund by 2050, whereby revenue would be sufficient to cover expenditures. However, a coal industry representative said that maintaining or increasing the coal tax would burden the coal industry, particularly at a time when coal production has been declining. Further, Treasury officials noted that the costs associated with forgiving Trust Fund interest or debt would be paid by taxpayers. |
This section provides information on select agent regulations and program roles, responsibilities, and requirements, as well as the history of the Select Agent Program.
Select Agent Regulations and Program Roles, Responsibilities, and Requirements
The Select Agent Program is fragmented because oversight responsibility is, by law, split between the Centers for Disease Control and Prevention (CDC) and the Animal and Plant Health Inspection Service (APHIS). The two agencies have delineated roles and responsibilities to regulate laboratories—including conducting inspections and other activities—that possess, use, or transfer biological select agents. CDC's Division of Select Agents and Toxins is responsible for the oversight and regulation of select agents that could pose a threat to public health and safety, such as the Ebola virus. APHIS's Agriculture Select Agent Services is responsible for the oversight and regulation of select agents that could pose a threat to animal or plant health or animal or plant products, such as the virus that causes foot-and-mouth disease. Some select agents, such as Bacillus anthracis (the bacterium that causes anthrax), are regulated by both agencies because they pose a threat to both human and animal health; these agents are known as overlap agents. As part of their oversight, CDC and APHIS maintain a list of select agents that they are required to review and republish at least every 2 years.
Generally, laboratories (including those at federal agencies and private institutions) and individuals who possess, use, or transfer these select agents must register with CDC or APHIS and renew their registration every 3 years. Most laboratories registered with the Select Agent Program are registered with CDC (238 of 276). (See fig. 1 for information about the laboratories registered with the program.) In fiscal year 2016, CDC’s budget to manage its component of the Select Agent Program was about $21 million and APHIS’s was about $5.5 million.
Select agent regulations govern the possession, use, and transfer of designated select agents. To apply for a certificate of registration, the laboratory must submit an application package to either CDC or APHIS, and laboratory personnel must submit to a security risk assessment conducted by the Federal Bureau of Investigation (FBI). The Select Agent Program conducts an on-site inspection before issuing a new certificate of registration or renewing an existing registration; both are valid for a maximum of 3 years. Once approved, a laboratory's certificate of registration may be amended to reflect changes in circumstances, such as replacement of the responsible official or other personnel changes, changes in ownership or control of the laboratory, changes in the activities involving any select agents, or the addition or removal of any select agents. As a condition of registration, the select agent regulations require each laboratory to designate an individual to be its responsible official, who is responsible for ensuring compliance with the regulations. In addition, the regulations require laboratories to develop various written plans, as well as provide training and maintain records of training and other activities. For example, the regulations require that laboratories registered with the program develop and implement a written security plan sufficient to safeguard each select agent against unauthorized access, theft, loss, or release; develop and implement a written biological safety plan that is commensurate with the risk of the select agent, given its intended use; provide training on biological safety and security for individuals with access to select agents; and maintain records on the activities covered by the select agent regulations.
History of the Select Agent Program
Several historical security incidents involving hazardous pathogens resulted in a series of laws and other regulatory activity that served to establish and amend the Select Agent Program. First, Congress passed section 511 of the Antiterrorism and Effective Death Penalty Act of 1996 after an individual in the United States unlawfully obtained Yersinia pestis, the bacterium that causes plague, by mail order. Section 511 directed the Secretary of HHS to promulgate regulations identifying a list of biological agents that have the potential to pose a severe threat to public health and safety, providing procedures governing the transfer of those agents, and establishing safeguards to prevent unauthorized access to those agents for purposes of terrorism or other criminal activities. The HHS Secretary delegated the authority to regulate select agents to CDC, thus establishing the Select Agent Program in its initial form. In carrying out this authority, CDC required laboratories transferring select agents to be registered with the program.
After the terrorist events of September 11, 2001, and the subsequent anthrax attacks in October 2001, Congress passed the USA PATRIOT Act of 2001 and the Public Health Security and Bioterrorism Preparedness and Response Act of 2002. These acts significantly expanded the Select Agent Program by restricting access to select agents and increasing safeguards and security measures for select agents. The 2002 act also expanded the program to include not only the regulation of the transfer but also the use and possession of select agents, and it granted comparable authority to USDA for select agents that pose a threat to animal or plant health, or animal or plant products. The Secretary of Agriculture delegated the authority to regulate select agents that affect animal or plant health to APHIS. The act also required HHS and USDA to coordinate on overlap agents and required the Secretaries of both departments to establish, maintain, and biennially review and republish the select agent list, making revisions as appropriate to protect the public.
On July 2, 2010, the President signed Executive Order 13546, “Optimizing the Security of Biological Select Agents and Toxins in the United States.” The executive order directed HHS and USDA, as a part of their ongoing review, to tier the select agents on the list, consider shortening the list, and establish physical security standards for select agents with the highest risk of misuse; HHS and USDA did so in final rules published October 5, 2012. About half of the laboratories registered with the program as of December 2016 were registered to work with tier 1 agents (142 of 276).
Select Agent Program Does Not Fully Meet Key Elements of Effective Oversight or Have Joint Strategic Planning Documents to Guide Its Efforts

Select Agent Program Does Not Fully Meet Oversight Elements Related to Independence, Performing Reviews, Technical Expertise, Transparency, and Enforcement
The Select Agent Program does not fully meet key elements of effective oversight. In particular, the program has oversight shortcomings related to each of the five key elements: independence, performing reviews, technical expertise, transparency, and enforcement. In addition, the program does not have joint strategic planning documents to guide its oversight efforts, such as a joint strategic plan and workforce plan; it did, however, begin taking steps to develop a joint strategic plan over the course of our review.
The Select Agent Program does not fully meet our key elements of effective oversight. Specifically, the program is not independent from all laboratories it oversees, and it has not formally assessed the potential risks posed by its current organizational structure. In addition, the program regularly performs reviews of laboratories’ compliance with regulatory and program requirements, but these reviews may not target the activities that pose the highest risk to biological safety and security. Moreover, even though the program has taken steps to hire additional staff and enhance the technical expertise of its staff, workforce and training gaps remain. The program has increased transparency since 2016, but the information it shares is limited and there is no consensus about what additional information could be shared, given security concerns. Lastly, the Select Agent Program has authority to enforce compliance with program requirements, but is still working to address past concerns about the need for greater consistency and clarity in actions it takes in exercising this authority.
Program Is Not Independent and Has Not Formally Assessed All Risks Posed by Its Current Structure
Independence: The organization conducting oversight should be structurally distinct and separate from the entities it oversees.
According to our key elements of effective oversight, to be independent, the organization conducting oversight should be structurally distinct and separate from the entities it oversees. The Select Agent Program is not structurally distinct and separate from all of the laboratories it oversees but has taken some steps to reduce conflicts of interest potentially posed by its current structure within CDC and APHIS. The two components of the Select Agent Program are located in CDC and APHIS, both of which also have high-containment laboratories registered with the program. Many experts at our meeting raised concerns that the Select Agent Program cannot be entirely independent in its oversight of CDC and APHIS laboratories because the Select Agent Program is composed of divisions of those agencies. In particular, one expert stated that to be independent, the agencies cannot regulate themselves, and others said that the agencies’ oversight of their own laboratories may present a conflict of interest. However, laboratories owned by CDC and APHIS are not generally located within the same agency divisions and thus are not in the same chain of command as the Select Agent Program. The one exception is an APHIS-owned complex of laboratories in the same division as the APHIS component of the program, but that complex is registered with CDC, which means that CDC leads its inspections and oversight.
Senior program officials, many laboratory representatives, and some experts cited a number of benefits to the Select Agent Program’s current structure within CDC and APHIS, including the ability for inspectors to have access to experts and other support from their respective divisions. For example, program officials said that the Select Agent Program had reached out to CDC scientists for assistance in developing guidance documents for the program. In addition, inspectors sometimes obtain technical assistance from experts in CDC and APHIS, such as in cases where the inspectors are not familiar with certain techniques or equipment being used in a registered laboratory. However, program officials also said that they have tried to limit the extent that they rely on CDC and APHIS scientists from outside the program, so as not to raise concerns about conflicts of interest. Senior program officials from CDC and APHIS also said that the Select Agent Program’s current locations within the two agencies allow for access to additional support as needed, including additional funds and administrative services. Senior program officials from CDC further stated that being located in an office focused on preparedness and response is advantageous because the Select Agent Program can quickly pivot into incident response mode, allowing for rapid response and assessment of incidents that occur in registered laboratories. They noted that this location proved advantageous during an incident in 2015, for example, when the program responded to the discovery that a DOD laboratory had inadvertently sent live Bacillus anthracis, the bacterium that causes anthrax, to nearly 200 laboratories.
The location of the program has also raised some concerns in the past, which the Select Agent Program has taken some steps to address. In response to past concerns about conflicts of interest and separation of duties raised by HHS OIG, APHIS, and us, both CDC and APHIS made structural changes to increase the Select Agent Program’s independence within their respective agencies. In particular, in 2003, in response to concerns from HHS OIG and us, CDC moved its component of the Select Agent Program into the agency’s Office of Public Health Preparedness and Response because that office did not have any laboratories registered with the program. (See fig. 2 for HHS’s organizational chart, including a depiction of where CDC’s Select Agent Program component currently sits in relation to other agency divisions.) According to CDC officials, the director of the CDC component of the Select Agent Program has access to senior leadership at CDC as needed.
Similarly, since 2013, APHIS has also made some organizational changes, including realigning supervisory responsibilities for the program and creating a direct line of communication from the director of the APHIS component of the Select Agent Program to the APHIS administrator. Previously, the program reported to a director whose division had a suite of laboratories that the program inspects. Now it is managed through APHIS’s National Import Export Services, which has different senior-level managers that report directly to the Office of the Administrator rather than the managers who oversee registered laboratories. According to agency officials, these changes increased the level of independence between the Select Agent Program and APHIS-owned laboratories but did not fully address the appearance of a lack of independence within APHIS, since the agency’s organizational chart still places the APHIS component of the Select Agent Program under Veterinary Services. (See fig. 3 for USDA’s organizational chart, including a depiction of where APHIS’s Select Agent Program component currently sits in relation to other agency divisions). The APHIS director of the Select Agent Program and the Associate Administrator of APHIS meet regularly to discuss incidents involving select agents, enforcement actions, and operation of the Select Agent Program, among other issues, according to agency officials, but this reporting structure is not documented. According to federal standards for internal control, management should establish an organizational structure, assign responsibility, and delegate authority to achieve the entity’s objectives and should develop and maintain documentation of its internal control system. Until APHIS formally documents the reporting structure for its component of the Select Agent Program from the APHIS director of the program to the administrator of APHIS, it will continue to appear to have conflicts of interest in its oversight of APHIS-owned laboratories.
In addition to these structural changes, the program has put mechanisms in place to reduce organizational conflicts of interest, but the agencies do not always follow a key mechanism. In particular, CDC and APHIS signed a memorandum of understanding in 2012 that stated that APHIS would provide the lead inspector for all inspections of registered laboratories owned by CDC. However, in practice, CDC inspectors still participate in inspection activities because of their expertise in human agents. In March 2015, the memorandum was amended to state that CDC would lead inspections of all USDA-owned laboratories.
However, since the memorandum was amended, the APHIS component of the Select Agent Program has led at least three inspections of USDA- owned or operated laboratories. In particular, APHIS led an inspection of a laboratory owned by another USDA agency, the Agricultural Research Service, in November 2015; one run by the Agricultural Research Service and APHIS scientists in May 2015; and one owned by APHIS in December 2015. APHIS officials we interviewed said that they had overlooked this amendment to the memorandum of understanding and the program does not have a process in place to help ensure the memorandum is followed. According to federal standards for internal control, management should design control activities to achieve objectives and respond to risk. Such internal control activities help ensure that management directives such as those outlined in the memorandum of understanding are carried out, and should be effective and efficient in accomplishing the program’s control objectives. One example of a control activity would be establishing a process to ensure APHIS and CDC comply with the memorandum to help ensure APHIS does not inspect its own laboratories. Without establishing control activities to help ensure that each component of the program carries out its inspection responsibilities as outlined in the program’s memorandum of understanding, the Select Agent Program cannot have reasonable assurance that its key mechanism to reduce conflicts of interest is implemented.
Although the Select Agent Program has taken steps to help reduce conflicts of interest, it has generally done so in response to concerns raised by others. The program itself has not formally assessed all potential risks posed by its current structure and the effectiveness of its mechanisms to address those risks. For example, the program did not identify all of the areas noted above that may present conflicts of interest and has not considered whether there may be additional areas of concern. An expert in our meeting identified benefits of an independent, third-party review of the Select Agent Program. For example, we and other audit organizations are subject to an external peer review at least once every 3 years that includes a review of documentation related to independence, among other issues. According to senior program officials we interviewed, the program as a whole has not engaged in comprehensive risk management activities but they would be willing to do so in the future.
OMB’s Circular A-123 requires federal agencies to integrate risk management activities into their program management to help ensure they are effectively managing risks that could affect the achievement of agency objectives. According to the circular, once initial risks are identified, it is important for agencies to regularly re-examine risks to identify new risks or changes to existing risks. In addition, federal internal control standards state that management should identify, analyze, and respond to risks related to achieving defined objectives. Without (1) regularly assessing the potential risks posed by the program’s current structure and the effectiveness of its mechanisms to address them, such as by commissioning external reviews, and (2) taking actions as necessary to ensure any identified risks are addressed, the program may not be aware of or effectively mitigate impairments to its independence that could affect its ability to achieve its objectives.
Reviews May Not Target the Highest-Risk Activities
Ability to perform reviews: The organization should have the access and working knowledge necessary to review compliance with requirements.
According to our key elements of effective oversight, the organization conducting oversight should have the ability to perform reviews, including access to facilities and working knowledge necessary to review compliance with requirements. The Select Agent Program performs several types of reviews to ensure compliance with regulatory and program requirements, including registration inspections for laboratories seeking certification to use select agents, renewal inspections for laboratories seeking to renew their registration, and verification inspections. (See fig. 4 for additional information on these inspections). The program has the ability to access any registered laboratory for inspection, including without prior notification. Inspections typically include review of registration and other documents—such as biological safety and security plans and inventory and personnel training records— as well as physical inspections of laboratory workspace and interviews with laboratory representatives, among other inspection activities. During inspections, Select Agent Program inspectors go through checklists that are based on the select agent regulations, the Biosafety in Microbiological and Biomedical Laboratories manual, and guidelines from NIH. The inspections cover a variety of topics—such as facility design and operation, incident response, security, training, records management, and biological safety—and may last anywhere from 1 day with 1 or 2 inspectors for simpler laboratories, to a couple of weeks with up to 10 inspectors for larger and more complex laboratories. Most laboratory representatives we spoke with said that the inspectors generally had the working knowledge necessary to review compliance and that the inspections and resulting reports were in-depth and generally fair and accurate.
However, the program may not target the highest-risk activities in its inspections, in part because it has not formally assessed which activities pose the highest risk to biological safety and security. According to Select Agent Program officials, the program's policy is to conduct at least one verification inspection of all registered laboratories—regardless of their past history or performance—between each 3-year renewal inspection, and the program may consider additional inspections at laboratories that pose a higher risk. Specifically, the program scores laboratories' risk based on a number of factors, such as past inspection findings. However, a 2017 HHS OIG report found that the CDC component of the Select Agent Program had evaluated some, but not all, variables that could inform the risk a laboratory poses to health and safety and concluded that CDC may wish to enhance its risk assessment by considering additional factors, such as whether a laboratory has previously reported losses or releases of a select agent, to better inform a laboratory's level of risk over time. In addition, some experts at our meeting and laboratory representatives we interviewed raised concerns that the program's inspections do not target resources to the highest-risk activities. For example, some experts said that the program has historically not put enough emphasis on verifying that certain laboratory procedures are safe and effective, which some said may have contributed to high-profile incidents in 2014 and 2015 in which select agents were inadvertently released from high-containment laboratories. However, according to the Select Agent Program, the program does not validate or verify laboratory procedures because it is the responsibility of the laboratories themselves to do so. Further, many experts at our meeting and laboratory representatives we interviewed raised concerns about the amount of time inspectors spend assessing compliance with inventory controls (e.g., by counting and examining vials containing select agents) and reviewing inventory records during the inspection process, which takes time away from inspecting other aspects of biological safety and security. Experts at our meeting said that these activities do little to reduce the risk of theft of select agents because samples could be clandestinely removed from vials and replicated without being detected by the inventory controls currently in place. Finally, other laboratory representatives told us that activities to assess compliance with certain program requirements did little to reduce risk and were unnecessarily burdensome, such as time-consuming reviews of records to confirm that nicknames such as "Rob" match up to registered names such as "Robert." These inspection activities are generally intended to address biological security concerns, such as theft; however, recent high-profile incidents at registered laboratories have been related to biological safety rather than security, and no thefts have been reported since 2003, when notification requirements were first implemented, according to program officials and documents.
Experts at our meeting generally agreed that the Select Agent Program has historically put more focus on security than on biological safety in its reviews, given that the program was established in response to terrorist incidents. For example, some experts said that the program has not focused enough on ensuring the health and safety of researchers and reducing the potential for their exposure to select agents, which some noted is more likely to occur than theft due to security issues. Many experts questioned whether the focus on security continues to be appropriate, in light of recent biological safety incidents. According to senior APHIS officials we interviewed, the Select Agent Program has been mandated to focus on security, and if the program's focus moves too far from security to biological safety, the goals established when the program was formed may be lost. They also noted that, according to the select agent regulations, laboratories are responsible for developing and implementing a written biological safety plan, and therefore a balance should be maintained between the laboratories' execution of these plans and the level of oversight from the Select Agent Program. In addition, these officials stated that, during inspections, it is much easier for inspectors to ensure laboratories are meeting security requirements than to verify that they are carrying out their biological safety plans. For example, inspectors can easily check to make sure laboratories have required security barriers in place, such as locks on doors, but it is harder to measure whether laboratories are carrying out laboratory procedures safely. They also noted that the program does not want to be prescriptive with respect to biological safety so that laboratories can implement those biological safety practices that are most appropriate for their facility.
A 2015 internal review of the CDC component of the Select Agent Program acknowledged uncertainties and gaps in understanding how best to balance laboratories’ ability to conduct critical research using select agents with the program’s need to ensure the safety and security of the public and laboratory workers. The resulting report recommended that the CDC and APHIS components of the program work together to analyze inspection and investigation data to identify trends and associations between inspection findings and risk and to improve the inspection process. According to program officials we interviewed, the Select Agent Program has not yet addressed the recommendation because it does not currently have adequate tools to do so. They noted that the program is transitioning to a new database that will enhance its ability to analyze program data to identify such trends and associations and thereby guide improvements to the inspection process. However, the program did not provide a plan for when or how it will carry out these analyses or use the information to improve the inspection process. Federal internal control standards state that management should identify, analyze, and respond to risks related to achieving defined objectives. In addition, the Project Management Institute’s Standard for Program Management calls for program scheduling planning as a leading practice to ensure organizational activities are completed. Without developing and implementing a plan to identify which laboratory activities carry the highest biological safety and security risks and to respond to those risks by aligning inspections and other oversight efforts to target those activities, the Select Agent Program will not have assurance that it is effectively balancing the potential safety and security gains from its oversight efforts against the use of program resources and the effect on laboratories’ research.
Select Agent Program Has Taken Steps to Hire Additional Expert Staff and Improve Technical Expertise, but Gaps in Workforce and Training Remain
According to our key elements of effective oversight, the organization conducting oversight should have sufficient staff with the expertise to perform sound safety and security assessments. CDC and APHIS have hired additional staff for the program and improved training to enhance expertise, but workforce and training gaps remain.
Technical expertise

The organization should have sufficient staff with the expertise to perform sound safety and security assessments.
The CDC and APHIS components of the Select Agent Program increased the number of full-time federal inspectors in 2016 and 2017, but have faced challenges in hiring and retaining sufficient staff with the requisite expertise to perform the necessary work in a timely manner. According to agency reports, agency officials, and laboratory representatives, Select Agent Program inspectors are subject to a large workload with an intensive travel schedule. Inspectors perform a variety of tasks, including conducting on-site inspections of laboratories, developing written reports of inspection results, processing requests for amendments to laboratory registrations, and communicating program requirements to laboratory representatives.
According to agency reports and inspectors we spoke with, inspectors often spend 30 to 50 percent or more of their time traveling to perform their duties. This intensive workload and travel schedule has led to delays in both the issuance of inspection reports and the processing of registration amendments. According to a 2017 CDC report, the time to process CDC’s inspection reports in 2016 ranged from 4 to 224 business days, with about 27 percent of reports exceeding the Select Agent Program’s 30-day target for issuance. Workload issues were cited as one of the key reasons for delays. A 2016 APHIS internal report also identified delays in issuing inspection reports. According to the 2016 report, the time to process APHIS’s inspection reports in 2014 averaged 36 days, but some reports were issued more than 100 days from the date the inspection concluded. Similarly, the processing time for amendments to registrations, which the program has not routinely tracked in the past, generally ranges from a few weeks or months for simpler amendments (such as personnel changes) to a year or more for major changes to facilities (such as adding new laboratory space), according to laboratory representatives. Delays in issuing inspection reports or processing amendments may hamper the implementation of corrective measures to address safety issues identified in inspections or impede laboratories’ research on select agents, according to agency reports and laboratory representatives. For example, representatives from one laboratory told us that they lost grant funding because it took over a year for the Select Agent Program to review and approve an amendment to its registration to allow the proposed research to be conducted.
Workload issues have also created problems with retention, according to agency documents and program officials we interviewed, and have sometimes resulted in staff from the APHIS component of the Select Agent Program being assigned responsibilities outside their areas of expertise. For example, at the time of our review, an APHIS security specialist was given the additional responsibility of conducting reviews not related to his area of expertise, such as inspecting ventilation systems, which are critical to ensuring select agents are not released into the environment. According to the 2016 internal APHIS report, the APHIS component of the program has historically struggled with resource deficiencies and has had to implement strategies to fulfill its legal mandates and meet basic goals and objectives within its limited resources.
Both the CDC and APHIS components of the Select Agent Program have individually taken steps to identify and address gaps in their workforce but have not coordinated these actions to manage fragmentation across the program. CDC developed a formal workforce plan for its component of the Select Agent Program in 2016, identified and secured the necessary resources to implement the plan, and is working to fill needed positions. As of August 2017, the CDC component of the program had 7 vacancies out of its 51 total inspector positions. APHIS also identified additional needed positions, through development of its 5-year business plan, and has used money from an APHIS contingency fund to fill them. APHIS hired additional inspectors in 2016 and 2017 and now has 11 inspector positions, up from 7 in 2015. APHIS also added several other new positions in the first half of 2017, including a scientific officer, a security manager, and a program analyst, among others.
However, according to program officials we interviewed, even with the additional recently hired inspectors, the program may not have adequate staff to handle surges in workload. For example, if there is a need to respond to critical incidents similar to those that occurred at CDC and DOD in 2014 and 2015, the program may find it challenging to respond to those incidents in addition to meeting its annual inspection schedule. Moreover, according to the 2016 APHIS internal review and CDC and APHIS officials we interviewed, the complexity of laboratories that work with select agents, of the select agent regulations, and of inspections has continued to increase, which may continue to contribute to workload issues in the future. Program officials we interviewed said they are hopeful that the new database the program is implementing will allow the program to gain efficiencies in amendment processing and other areas, which may reduce these workload issues.
Training to Improve or Maintain Expertise

Most laboratory representatives we interviewed said that, in their experience, Select Agent Program inspectors generally had appropriate expertise to perform reviews. According to agency documents, the vast majority of the program’s inspectors have advanced degrees, including many inspectors from CDC with doctoral degrees in microbiology or related fields and many inspectors from APHIS with doctoral degrees in veterinary medicine. However, CDC and APHIS internal reviews from 2015 and 2016, respectively, as well as some laboratory representatives we interviewed, identified some shortcomings and inconsistencies in inspectors’ expertise and approach related to their regulatory responsibilities. In particular, the reports found that inspectors had inconsistent knowledge about the select agent regulations, variabilities in skill level, and divergent approaches to inspections, both within and across the two components of the Select Agent Program. In addition, several laboratory representatives said that some inspectors imposed requirements on laboratories that the inspectors considered to be best practices rather than requirements of the select agent regulations or items on inspection checklists.
Both CDC and APHIS officials in the program identified gaps in the training available to maintain their expertise. CDC inspectors we interviewed told us they need additional training opportunities to keep up with scientific changes in the field, such as advances in laboratory techniques and equipment. APHIS officials we interviewed also identified areas where they need additional training, including in facilities and engineering aspects of laboratories; decontamination; and new laboratory techniques, technologies, and equipment. In addition, some APHIS inspectors we interviewed said that they sometimes do not have the necessary knowledge to effectively perform all aspects of inspections and, in some cases, depend on inspectors from CDC to address gaps in expertise. Relying on CDC inspectors when APHIS is inspecting CDC-owned laboratories raises conflict of interest concerns. Furthermore, according to inspectors from both CDC and APHIS, they are rarely able to attend external conferences or other external training because of their intensive workload and travel schedules and because they must compete for training funds with CDC or APHIS scientists who are not assigned to the program. Priority is given to those scientists presenting information at conferences, which Select Agent Program staff rarely do because their inspection work is not the type of information shared at conferences, according to program officials.
In response to these concerns, both the CDC and APHIS components of the Select Agent Program have individually taken steps to improve training for program staff, including inspectors, but have not always coordinated steps to manage fragmentation across the program. For example, in 2016, APHIS increased training opportunities for two inspectors to better enable them to inspect BSL-4 laboratories. In addition, CDC developed a training strategy that identified various areas in its training program that needed improvement, including the need to provide funding support for existing training activities and enhanced professional development opportunities.
According to CDC’s training strategy, the complexity of the inspector position and evolving science on select agents demand ongoing training and professional development opportunities for staff. Among other recommendations, the strategy identified the need for three additional full-time-equivalent positions in the training area—in addition to the one the CDC component of the program currently has; as of August 2017, CDC was in the process of hiring one additional training specialist. APHIS has not developed a similar formal training strategy, but during the course of our review, APHIS sought and received approval and funds to hire a full-time training coordinator, which it was in the process of filling as of July 2017. Because APHIS has not had a training coordinator dedicated to the Select Agent Program in the past, the APHIS component of the program has generally relied on CDC to address training needs, although APHIS does provide its own training to its inspectors and has coordinated with CDC to develop some training, according to APHIS officials. A senior APHIS official noted that having its own training coordinator moving forward will help ensure APHIS’s training needs are met, as animal inspection needs have not explicitly been addressed in the past when CDC has taken the lead on training.
Security Concerns Have Limited the Program’s Transparency
Transparency

The organization should provide access to key information, as applicable, to those most affected by operations.
According to our key elements of effective oversight, the organization conducting oversight should provide access to key information, as applicable, to those most affected by operations. Past White House and other reports, as well as experts at our meeting, also emphasized the importance of transparency, including the sharing of information on incidents and lessons learned, in the Select Agent Program. However, the program limits the information it shares about registered laboratories and violations of the select agent regulations, mainly because of security concerns. For example, the program does not disclose to the public or other laboratories the locations of laboratories registered with the program, the agents that laboratories work with, or details on violations of select agent regulations.
The Select Agent Program has recently increased its transparency by sharing more high-level laboratory and program information with the public and registered laboratories, partly in response to recent federal reports. For example, in 2016, the Select Agent Program issued its first annual public report on the program. The report provided a variety of information, such as background information on the program, statistics about registered laboratories, and aggregated information on the potential losses and releases reported to the program. In 2015, the program developed a mechanism for laboratories to request interpretation of the select agent regulations from the program and has since published several regulatory interpretations on its website. In addition, starting in summer 2016, the Select Agent Program worked with a nongovernmental organization, the American Biological Safety Association International, to develop an online forum for registered laboratories to share information with one another, which laboratory representatives told us has been very helpful. The Select Agent Program also held a workshop for responsible officials from registered laboratories in December 2016 to disseminate program information; the workshop also provided the opportunity for attendees to interact. Many laboratory representatives told us that this was very helpful, and some noted that they had not had an opportunity to communicate and share lessons learned with responsible officials from other registered laboratories in the past.
Even so, some experts, agency officials, and laboratory representatives we interviewed said there needs to be more transparency to the public about select agent research and incidents in order to increase public trust concerning the activities conducted at high-containment laboratories. For example, several laboratory representatives noted that the media has incorrectly described their laboratories as conducting “bioterror” research, when the research they conduct is to mitigate the consequences of a bioterrorist attack—for example, by developing vaccines and other measures to help diagnose, prevent, or treat exposure to or infection with select agents. On the other hand, many laboratory representatives told us that the program was already sharing an appropriate amount of information with the public. According to officials from HHS and USDA, this issue has been examined and discussed extensively within their departments, partly in response to recent federal reports. CDC officials pointed out that laboratories themselves could share additional information about their select agent research and any incidents. For example, the U.S. Army Medical Research Institute for Infectious Diseases and the National Biodefense Analysis and Countermeasures Center, both at Fort Detrick in Maryland, and the Galveston National Laboratory in Galveston, Texas, voluntarily share information about their select agent research and incidents with the public via their websites.
In addition, many laboratory representatives we interviewed said the program needs to be more transparent for registered laboratories. In particular, some said that it would be helpful for the program to share more information among laboratories about select agent research and incidents to enhance the sharing of lessons learned to improve biological safety and security. According to experts at our meeting, it is important for information, such as lessons learned from incidents, to be shared among laboratories so that they can learn from one another’s experiences to improve their own operations. Some laboratory representatives also said that it would be helpful for the Select Agent Program to provide additional guidance in certain areas, such as regarding the use and storage of toxins. Federal internal control standards state that management should internally and externally communicate the necessary quality information to achieve the entity’s objectives. However, there is no consensus about what additional information should be shared with laboratories. Without determining what additional information about laboratories’ use of select agents, incidents, and violations of the select agent regulations is appropriate for the Select Agent Program to share with registered laboratories, the program may be missing opportunities to provide key information that ultimately could help improve biological safety and security.
Program Has Authority to Enforce Compliance with Requirements and Is Working to Address Concerns about Clarity and Consistency of Enforcement Actions
According to our key elements of effective oversight, the organization conducting oversight should have clear and sufficient authority to require entities to achieve compliance with requirements. The Select Agent Program has the authority to take, and has taken, a range of enforcement actions for violations of the select agent regulations, and it is working to address concerns about the clarity and consistency of those actions. When the Select Agent Program identifies a possible violation of the select agent regulations, the program may take several types of compliance or enforcement actions, as follows:
Administrative actions: The Select Agent Program can propose a corrective action plan; suspend or revoke a registered laboratory’s registration; or deny a laboratory’s application to possess, use, or transfer select agents.
Referrals to HHS OIG or APHIS’s Investigative and Enforcement Services: The Select Agent Program may refer violations to HHS OIG or APHIS’s Investigative and Enforcement Services, both of which can levy civil money penalties, issue a Notice of Violation letter, or close the case.
Referral to the FBI: The Select Agent Program can refer possible violations involving criminal negligence, criminal intent, or suspicious activities or persons to the FBI for further investigation. Criminal enforcement may include imprisonment for up to 5 years, a fine, or both.
The Select Agent Program has taken enforcement actions against laboratories but has not always done so consistently or according to available criteria. The program has taken a range of enforcement actions for violations of the select agent regulations—including suspending or revoking registrations or proposing corrective action plans—as well as referring violations to HHS OIG or APHIS’s Investigative and Enforcement Services for further investigation.
Following investigation, HHS OIG and APHIS’s Investigative and Enforcement Services have taken other enforcement actions, including levying civil money penalties and issuing Notice of Violation letters. However, in 2016 we found that the Select Agent Program did not consistently refer laboratories to investigative entities or enforce the regulations in incidents involving incomplete inactivation, and this inconsistency appears to extend beyond such incidents. For example, from 2003 through 2016, the program suspended or revoked 10 laboratories’ registrations in response to violations of the select agent regulations, only 1 of which was a federal laboratory, and neither HHS OIG nor APHIS’s Investigative and Enforcement Services has levied a civil money penalty against a federal laboratory. Moreover, we previously found that the program referred various laboratories to HHS OIG for incidents involving incomplete inactivation but did not refer HHS laboratories for two such incidents in 2014. We recommended in 2016 that the Select Agent Program develop and implement consistent criteria and documentation requirements for referring laboratories to investigative entities and enforcing regulations.
The Select Agent Program is taking steps to address such past concerns about the need for greater consistency and clarity in enforcement actions and to implement our recommendation. In particular, in September 2017, the program finalized a document that provides guidance on when to refer laboratories for violations and options for enforcement. This document categorizes regulatory departures along a spectrum of severity with associated enforcement options, so that inspectors and laboratories have a clear understanding of what to expect during and as a result of inspections, regardless of which Select Agent Program component conducts them. In addition, the CDC component of the program worked with HHS OIG to develop criteria to guide referrals to OIG, which CDC finalized and implemented in June 2017. APHIS is not developing a similar document at this time because APHIS officials believe the enforcement document described above provides sufficient guidance on referrals for the Select Agent Program. The program’s development of guidance with criteria is a positive step, and the program continues to develop associated documentation requirements for referring violations to investigative entities and enforcing regulations, according to a senior program official.
Select Agent Program Does Not Have Joint Strategic Planning Documents to Guide Oversight
As of August 2017, the Select Agent Program does not have joint strategic planning documents to guide its shared oversight efforts across CDC and APHIS. For example, the program does not have a joint mission statement to collectively define what the program seeks to accomplish through its oversight. It also does not yet have a strategic plan, although it is taking steps to develop one. Agencies can use strategic plans to set goals and identify performance measures for gauging progress towards those goals. Strategic plans can also outline how agencies plan to collaborate with each other to help achieve goals and objectives, as well as describe the strategies and resources required to achieve the goals and objectives.
Mission statements for the two components of the Federal Select Agent Program

The Centers for Disease Control and Prevention’s (CDC) Division of Select Agents and Toxins reduces the risks for thefts, losses, and releases of biological agents by ensuring regulated laboratories or importers are safe and select agents are secure through its monitoring of facilities and enforcement of regulations.

The Animal and Plant Health Inspection Service’s (APHIS) Agriculture Select Agent Services is a team of Agriculture Health Professionals dedicated to providing superior customer service to safeguard the health of domestic animals, plants, and their products from agricultural biological agents and toxins.
Each component of the program has conducted some strategic planning—each has an individual mission statement, some strategic planning documents, and performance measures—but the components differ in what they seek to achieve and how they measure the effectiveness of their efforts. For example, according to CDC officials, in the past, the CDC component has developed yearly strategic goals, such as to improve regulatory oversight through inspections and the biological safety and security of laboratories. In contrast, APHIS developed a 5-year business plan for its component of the Select Agent Program in 2014, which it updated in July 2017. In addition, it identified a number of annual goals in 2015, 2016, and 2017, such as developing additional BSL-4 training and filling vacancies in existing and new positions. CDC’s and APHIS’s performance measures also differ. For example, CDC has a range of performance measures, such as tracking the number of laboratory-acquired infections and the timeliness of inspection reports, whereas APHIS’s performance measures address the number of thefts, losses, and releases involving select agents and the processing of amendments.
The Select Agent Program also does not have a joint workforce plan that collectively identifies workforce and training needs to ensure the program as a whole has the appropriate workforce with sufficient expertise to carry out its responsibilities and that resources are being leveraged appropriately across the two components of the program. According to our past work, strategic workforce planning is an essential tool to help agencies align their workforces with their current and emerging missions and develop long-term strategies for acquiring, developing, and retaining staff. Moreover, the Select Agent Program has not collectively determined its training needs. The APHIS component of the program has generally relied on CDC to help meet its ongoing training needs, as noted, but we found through our review of CDC’s training strategy that it did not specifically address APHIS’s training needs. According to program officials, joint training provided in the past has not always explicitly addressed animal inspection needs. Program officials noted that the program has taken some steps to coordinate training, such as holding joint inspector training and webinars.
Senior program officials told us that, even without joint strategic planning documents, the CDC and APHIS components of the Select Agent Program manage fragmentation by collaborating on many aspects of the program, such as by maintaining frequent communication at the director level. They also said that the program had not developed a joint mission statement or strategic planning tools in the past because they prioritized other efforts in recent years, including responding to incidents that occurred in 2014 and 2015, addressing recommendations from recent reports, and developing a new database for the Select Agent Program. In addition, each component of the program has generally focused on its own agency’s needs when conducting workforce planning. One senior CDC official said that the Select Agent Program had always been in “reactive mode” and noted that the program could improve its oversight if it took a more strategic view.
During the course of our review, senior program officials told us that they were taking steps to develop a joint strategic plan for the Select Agent Program and, in August 2017, the program began soliciting bids from contractors for the plan’s development. The statement of work for the contract states that the contractor shall develop guiding principles for the Select Agent Program along with a mission statement, strategic goals and objectives, and performance measures, among other requirements. However, the statement of work for the contract does not have any requirements related to development of a joint workforce plan. We have found in the past that agencies’ strategic workforce planning should be clearly linked to the agency’s mission and long-term goals developed during the strategic planning process. Developing a joint workforce plan that assesses workforce and training needs for the program as a whole would help the program to better manage fragmentation by improving how it leverages resources to ensure all workforce and training needs are met; this assessment should be done in conjunction with the development of the strategic plan. Leveraging of resources is especially important given fiscal constraints and the uneven level of resources across the two components of the program.
Selected Countries and Regulatory Sectors Employ Other Approaches to Promote Effective Oversight
Selected countries and regulatory sectors employ approaches to promote effective oversight that, in some cases, differ from those of the Select Agent Program. For example, other countries and sectors have regulatory bodies that are structurally independent from the entities they oversee, take a risk-based approach to performing reviews, rely on scientists and other laboratory personnel to have requisite technical expertise on the pathogens and activities in their laboratories, share incident information on their public websites, and have prosecutorial authority when incidents occur.
Structural Independence of Oversight Bodies
Some countries and sectors we reviewed have regulatory bodies that are structurally independent from the entities they oversee. For example, Great Britain’s Health and Safety Executive, whose mission is to protect worker and public health and safety and which oversees laboratories that work with pathogens, is an independent central government agency, according to officials. It has a chief executive accountable to the UK government’s Department for Work and Pensions and a public-private board composed of representatives from a range of industries, including trade unions. Officials noted that this structure, an independent agency with direct access to a departmental head, allows the Health and Safety Executive to have control over defining its own budget and staffing needs. According to officials from the Health and Safety Executive and laboratory representatives we interviewed, one strength of this approach is that it avoids potential organizational conflicts of interest because none of the laboratories that the Health and Safety Executive oversees are part of the same agency.
Great Britain’s Health and Safety Executive

The Health and Safety Executive is an independent regulator in Great Britain whose mission is to prevent death, injury, and illness in the workplace. It was originally established following a government review of the health and safety system in the country in 1974. One division within the Health and Safety Executive—the Chemical, Explosives and Microbiological Hazards Division—regulates sectors that have the potential for low-probability, high-consequence incidents, including work in high-containment laboratories. It began overseeing laboratories following a smallpox outbreak in 1978. Great Britain reviewed the regulations for animal pathogens and rewrote them to make them more aligned with the human pathogen and genetically modified organism frameworks after a 2007 safety incident in which a Great Britain laboratory inadvertently released foot and mouth disease into the environment. The Health and Safety Executive is responsible for safety oversight of pathogens that present a risk to human health as well as animal pathogens. A separate entity, the National Counter Terrorism Security Office, is responsible for security oversight of a subset of pathogens that pose biological security concerns, similar to the United States’ select agents. The Health and Safety Executive and the National Counter Terrorism Security Office work closely together in providing oversight, according to officials. As of July 2017, Great Britain had a total of 434 registered high-containment laboratories across the government, academic, and private sectors.
In some other regulatory sectors in the United States, oversight bodies are also structurally separate from the facilities they regulate as a mechanism to ensure independence. For example, prior to the creation of NRC in 1974, the U.S. Atomic Energy Commission was responsible for both promotion and oversight of the nuclear industry. The Energy Reorganization Act of 1974 established NRC as a separate, independent entity. According to a relevant Senate committee report, this was a response to growing criticism that there was a basic conflict between the U.S. Atomic Energy Commission’s regulation of the nuclear power industry and its development and promotion of new technology for the industry. Independence is one of NRC’s “Principles of Good Regulation” that the commission seeks to follow in carrying out its regulatory activities. NRC’s Office of Nuclear Reactor Regulation uses performance metrics associated with these principles—including measures of the objectivity and independence of its inspectors—to annually evaluate the effectiveness of its Reactor Oversight Process in meeting its pre-established goals and intended outcomes. This office reports the results of this analysis to NRC in an annual report on the self-assessment of the Reactor Oversight Process.
Risk-Based Approaches to Performing Reviews
Other countries and sectors we reviewed have adopted risk-based approaches to reviewing compliance with regulatory requirements. In particular, regulators in some countries, including Great Britain and Canada, apply a risk-based approach to target their reviews to laboratories with a documented history of performance issues or those conducting higher-risk activities. Great Britain’s Health and Safety Executive prioritizes which laboratories to inspect during the year by assessing the level of risk a specific laboratory or program may have on worker or public health and safety or the environment, according to officials. This assessment takes into consideration factors such as which pathogens pose a greater risk, how these pathogens are used in the laboratory, and the potential consequences of an incident. For example, officials noted that a laboratory complex that works with many pathogens that may pose a significant risk to the country—such as animal pathogens that affect livestock and the food supply—may be subject to more oversight and additional inspections from regulators, based on the associated risk assessment, than a diagnostic laboratory that may destroy samples after testing.
The Public Health Agency of Canada is responsible for promoting and protecting the health of Canadians through various public health initiatives. It was established in 2004, partly in response to an outbreak of severe acute respiratory syndrome (SARS) in 2003, when it became evident that Canada had no legal requirements for domestic laboratories to report information such as whether they were working with SARS samples, and therefore officials could not determine the potential scope of the problem. The agency sits under Canada’s Minister of Health, and its Centre for Biosecurity is responsible for administering and enforcing Canada’s Human Pathogens and Toxins Act to oversee the safe and secure handling of human pathogens and toxins. The act came into full force in December 2015, following an extensive consultation process with stakeholders. The Centre for Biosecurity has authority to license and oversee laboratory activities involving human pathogens and toxins, some animal pathogens, and a subset of human pathogens that have additional biological security concerns. Oversight responsibility for the other animal pathogens rests with the Canadian Food Inspection Agency. As of June 2017, Canada had a total of 63 licensed high-containment laboratories across the government, academic, and private sectors.
Similarly, officials from the Public Health Agency of Canada’s Centre for Biosecurity, whose mission is to protect the health and safety of the public against the risks posed by human pathogens and toxins, stated that their division for the oversight of laboratories that work with pathogens also has a risk-based licensing and inspection scheme. Under this scheme, the stringency of licensing and inspection requirements largely depends on the pathogen’s risk level. In addition, the Public Health Agency of Canada places different requirements on activities carried out in laboratories depending on their sector (e.g., public health or research) because it determined that activities in certain sectors present a higher risk than others, with the research sector having the highest associated risks. As such, the Public Health Agency of Canada places more requirements on research scientists conducting certain activities with pathogens than it does on personnel conducting activities in other types of laboratories. For example, the agency requires research scientists to develop and submit documentation that demonstrates a reasonable plan to manage risk and promote compliance with requirements. Officials noted that this approach helps the agency to understand where best to focus its efforts to achieve the desired risk mitigation results. According to officials from both Great Britain and Canada, this risk-based approach helps the oversight bodies in both countries focus their limited resources on laboratories they have identified as having the highest risks.
In addition, Great Britain’s Health and Safety Executive and the Public Health Agency of Canada apply a risk-based approach in determining the focus of their inspections. For example, according to agency officials in Great Britain and Canada, because they have not found stringent inventory requirements to be effective in reducing biological safety risks in the laboratory, neither country places as much focus, time, or resources on inventory management as the Select Agent Program does. In particular, neither country spends time during every inspection counting and examining vials and comparing them to inventory logs, according to officials. Instead, Great Britain’s Health and Safety Executive’s approach is to sample laboratories’ biological safety measures and assess whether they have mechanisms in place to mitigate the consequences of incidents should they occur. Similarly, in Canada, the Canadian Biosafety Standard requires that laboratories working with pathogens in high containment have an inventory tracking system that is based on the risks internally identified by the laboratory, in order to allow for timely identification of missing vials if necessary.
In addition to having less prescriptive inventory requirements than the Select Agent Program, both Great Britain’s Health and Safety Executive and the Public Health Agency of Canada generally focus their oversight on (1) biological safety and (2) regulation of all potentially hazardous pathogens in laboratories. In contrast, the Select Agent Program originated from security-related concerns and regulates only those pathogens identified on the U.S. select agent list and no other pathogens, such as West Nile virus, that may be handled in high-containment laboratories but are not select agents. In both Great Britain and Canada, specific biological safety incidents provided the impetus for establishing oversight for laboratories that work with pathogens and, as a result, their regulatory agencies generally focus on biological safety. Both Great Britain and Canada have additional oversight requirements, such as security clearances for personnel, for a limited number of pathogens for which they have heightened security concerns, similar to the security requirements for working with select agents in the United States. For example, in Great Britain, the Health and Safety Executive focuses only on biological safety in its oversight of high-containment laboratories and works with the National Counter Terrorism Security Office for oversight of pathogens with biological security concerns. In addition, to ensure compliance with biological safety regulations, officials we interviewed in Great Britain and Canada told us it was beneficial for their programs to have oversight over all hazardous pathogens that present biological safety risks to laboratory workers and the public, regardless of their containment level and their potential to pose biological security concerns. For example, the Public Health Agency of Canada regulates any pathogens with characteristics that require handling in laboratories equivalent to U.S. BSL-2, -3, or -4, which currently covers thousands of pathogens, according to officials, as opposed to the 66 agents on the U.S. select agent list.
NRC also considers risk in its oversight of nuclear reactors, fuel cycle facilities, and radioactive materials. In particular, for facilities that work with nuclear materials, NRC conducts inspections of a fraction of these facilities each year because, according to officials, there is a lower risk associated with nuclear materials than there is with nuclear power plants. There are no resident inspectors at these facilities; instead, the frequency of inspections for nuclear materials is based on the risk associated with, among other things, the specific material and each facility’s past performance. Sites with past issues will receive more attention, while sites with a history of good performance will generally be subject to the minimum frequency of inspections applicable to that type of site. In contrast, as part of its Reactor Oversight Process, NRC places at least two resident inspectors at each of the country’s commercial nuclear power plants because they pose a higher risk. For nuclear power plants, potential incidents can have high consequences and far-reaching effects, such as the effects of the 2011 nuclear accident at the Fukushima Daiichi reactor in Japan. To ensure that each nuclear power plant is complying with federal safety requirements, these inspectors oversee a variety of activities on a daily basis, including by visiting control rooms, reviewing logbooks, performing visual assessments, and observing tests and repairs.
Drawing on Technical Expertise of Advisory Panels and Laboratories
Other countries have adopted various approaches to help ensure they have access to individuals with the appropriate expertise to perform sound safety and security assessments. According to officials in Great Britain, regulators at the Health and Safety Executive have access to external expert advisory committees to advise on issues related to new or emerging pathogens, diseases, or other scientific issues that inspectors may encounter during inspections or when developing policy. Health and Safety Executive officials noted that they generally go to the committees with questions of science and not regulation, as the inspectors are expected to be experts in biological safety and Great Britain regulations. Both France and Germany also have expert advisory committees that regulators can consult on scientific and technical issues, according to officials from these countries.
Merging Oversight of Human and Animal Pathogens in Great Britain and Canada

Great Britain merged the inspection and oversight responsibilities for human and animal pathogens into one oversight body, the Health and Safety Executive, in 2008, following the 2007 accidental release of foot and mouth disease into the environment. Oversight of animal pathogens was originally under the United Kingdom’s Department for Environment, Food, and Rural Affairs (DEFRA). When oversight of animal pathogens was first transferred to the Health and Safety Executive, DEFRA initially retained the licensing of sites with animal pathogens. In 2015, DEFRA transferred all oversight responsibilities, including licensing, to the Health and Safety Executive, but retained responsibilities for policy matters. According to agency officials and laboratory representatives in Great Britain, this change had a number of benefits, including creating a single agency contact for laboratories that work with regulated pathogens, strengthening the oversight of animal pathogens, and improving the logistics and ease of the system. Similarly, in 2013, Canada transferred the oversight responsibility for a subset of animal pathogens from the Canadian Food Inspection Agency (CFIA) to the Public Health Agency of Canada to strengthen and harmonize its biological safety oversight framework and reduce the regulatory and administrative burden on researchers and laboratory officials. CFIA continues to issue permits for other animal pathogens, such as emerging animal diseases, which, according to officials, only make up a small number of pathogens.
Officials from the Public Health Agency of Canada noted that they address the issue of technical expertise in part by placing substantial responsibility on the scientists and other personnel in each laboratory to understand and address the risks associated with their specific work, such as the equipment and procedures used in that laboratory. These officials noted that personnel working in licensed laboratories are the ones most at risk if a safety lapse or other incident occurs, so the agency expects the responsible individuals at the laboratories to reinforce the requirements and help ensure everyone works safely and is in compliance with requirements. Under this approach, the main responsibility rests with laboratory officials to understand and manage the risks inherent in the work being performed at their facility, while the role of the inspector is to verify that they have taken appropriate steps to identify and address the risks.
According to officials in the Netherlands, regulators place responsibility for laboratory biological safety on biological safety officers at each of the laboratories by accrediting them for the oversight of biological safety. Regulators conduct the accreditation process, which includes a review of personnel credentials. A 2-day course on the laws—such as details of biological safety requirements, case studies, review of transportation rules, and incident examples—is offered to each new accredited biological safety officer. Biological safety officers usually first seek accreditation for the equivalent of U.S. BSL-1 or -2 laboratories and must request additional reviews to receive accreditation for higher levels after acquiring the requisite knowledge and applied laboratory experience for those levels. Officials from the Netherlands noted that it is important to have biological safety officers in laboratories because these individuals are versed in biological safety and can convey to researchers what they should be doing to ensure safety, as the regulator cannot be on-site every day.
Transparency through Sharing Information on Agency Websites and Other Means
Some countries and regulatory sectors have approaches that provide transparency to entities and the public in a number of ways. For example, in Great Britain, the Health and Safety Executive shares information on licensing, enforcement actions, and prosecutions, among other information, through its website and the public register. Health and Safety Executive officials noted that the agency also issues information to licensed laboratories when there are safety alerts, lessons learned, or key decisions that it feels are pertinent to the regulated community. However, officials limit the sharing of any information that is sensitive or raises security concerns, such as the names of individuals cleared to work with pathogens. Regulators in the Netherlands stated that they are also authorized to share a great deal of information related to some regulated pathogens, such as laboratory risk assessments, with the public and individuals who request the information. Similarly, in Switzerland, the public can request some information about laboratory licenses and the types of activities that occur at laboratories, but regulators do not share information on laboratory exposures because, according to a Swiss official, the public is generally not affected by them.
NRC shares safety-related information on nuclear facilities with the public, including by posting the locations of nuclear facilities, inspection reports, and policies on its website. According to NRC officials, NRC believes transparency is important because, otherwise, secrecy can lead to distrust and negatively affect NRC’s relationship with industry and the public. In addition, NRC has written policies available on its website that detail what information it shares with registered facilities and the public, as well as guidance for NRC staff on what they can and cannot share.
NRC officials stated that NRC strives for a balance between openness and security and that, because the nuclear sector’s needs and the public’s concerns are constantly changing, it is important to reassess policies as the necessity arises. For example, after the September 11, 2001, terrorist attacks, NRC decided to remove some information from the public sphere in response to concerns that such information could be misused and exploited for future terrorist attacks.
The Federal Aviation Administration also shares information with the public through its Aviation Safety Information Analysis and Sharing System, which collects information from multiple databases, including voluntarily reported near-miss data and accident information. This system is intended to promote an open exchange of safety information to continuously improve aviation safety, and it allows users to perform integrated queries, search safety data, and review incident investigations conducted by the National Transportation Safety Board. For example, analysts from the Federal Aviation Administration analyzed data from the Aviation Safety Information Analysis and Sharing System to determine which weather-related factors posed the biggest threats to pilots and aircraft. In addition, the Federal Aviation Administration provides public access to a library of lessons learned from historically significant, policy-shaping accidents to share key knowledge across the industry to improve aviation safety through the application of such lessons and to understand how the current safety regime has been influenced by past accidents. For example, the library discusses how two similar high-terrain crashes in the 1990s led to a requirement in 2000 to install a warning system in aircraft to reduce the incidence of such terrain accidents.
Mechanisms of Enforcement and Nonpunitive Reporting Systems
Countries and regulatory sectors we reviewed employ a range of mechanisms to take enforcement actions against entities or to encourage incident reporting. For example, Great Britain, Canada, France, and Switzerland all have the ability to pursue criminal prosecution in response to serious violations of their laws or regulations governing high-containment laboratories, in addition to the ability to suspend work or shut down laboratories. In Canada, penalties for the most serious violations can include up to 10 years in prison. Officials from the Public Health Agency of Canada and representatives from laboratories we spoke with noted that laboratory personnel are still encouraged to report incidents in laboratories, such as laboratory-acquired infections, regardless of the potentially heavy penalties, because certain information that is voluntarily provided during the course of an incident cannot then be used in any subsequent criminal proceedings against that individual. In addition, experts at our meeting noted that the nonpunitive nature of airline reporting systems also encourages people to report incidents, which in turn provides valuable information to regulators, pilots, airlines, and the public that has been used to improve airline safety, as noted.
Conclusions
In their joint management of the Select Agent Program, CDC and APHIS share a critical role in ensuring that important research on select agents can be conducted in high-containment laboratories in a safe and secure manner. This role is especially important given the significant risks that pathogens handled in high-containment laboratories may pose to laboratory workers and the public. The Select Agent Program has made a number of improvements over the past few years, such as hiring additional staff and sharing more information with the public and registered laboratories. Nevertheless, the program does not fully meet all key elements of effective oversight. For example, the program is not independent in that it is not structurally distinct and separate from all of the laboratories it oversees. Both CDC and APHIS have individually made structural changes and put mechanisms in place to reduce conflicts of interest, but the APHIS component of the program has not documented the reporting process it developed to reduce conflicts of interest. Until APHIS formally documents the reporting structure for its component of the program from the APHIS director of the program to the administrator of APHIS, it will continue to appear to have conflicts of interest in its oversight of APHIS-owned laboratories. Moreover, APHIS has, on at least three occasions, inspected its own or other USDA laboratories, which is not in keeping with the memorandum of understanding it signed with the CDC component of the program. Without establishing control activities to help ensure that each component of the program carries out its inspection responsibilities as outlined in the program’s memorandum of understanding, the Select Agent Program cannot have reasonable assurance that its key mechanism to reduce conflicts of interest is implemented.
In addition, the program has not formally assessed all potential risks posed by its current structure and the effectiveness of its mechanisms to address those risks. For example, the program did not identify some areas that may present conflicts of interest, such as APHIS carrying out inspections of its own laboratories, and has not considered whether there may be additional areas of concern. Without (1) regularly assessing the potential risks posed by the program’s current structure and the effectiveness of its mechanisms to address them, such as by commissioning external reviews, and (2) taking actions as necessary to ensure any identified risks are addressed, the program may not be aware of or effectively mitigate impairments to its independence that could affect its ability to achieve its objectives.
Further, regarding the ability to perform reviews, the program may not be targeting the highest-risk laboratory activities in its inspections and other oversight efforts. Without developing and implementing a plan to identify which laboratory activities carry the highest biological safety and security risks and to respond to those risks by aligning inspections and other oversight efforts to target those activities, the program will not have assurance that it is effectively balancing the potential safety and security gains from its oversight efforts against the use of program resources and the effect on laboratories’ research. Moreover, the program is not fully transparent because it shares only limited information about lessons learned and other matters with registered laboratories, and there is no consensus about what additional information should be shared. Without determining what additional information about laboratories’ use of select agents, incidents, and violations of the select agent regulations is appropriate for the Select Agent Program to share with registered laboratories, the program may be missing opportunities to provide key information that ultimately could help improve biological safety and security. In addition, the program has not always been clear and consistent in its enforcement actions, and it is taking steps to address our past recommendation on this issue.
Further, regarding technical expertise, the two components of the Select Agent Program have individually hired additional staff for the program and improved training to enhance expertise, but workforce and training gaps remain. Although the program has begun to take steps towards development of a joint strategic plan to collectively guide oversight efforts, it does not have a joint workforce plan. Developing a joint workforce plan that assesses workforce and training needs for the program as a whole would help the program to better manage fragmentation by improving how it leverages resources to ensure all workforce and training needs are met; this assessment should be done in conjunction with the development of the strategic plan. Leveraging of resources is especially important given fiscal constraints and the uneven level of resources across the two components of the program.
Recommendations for Executive Action
We are making 11 recommendations to the agencies that manage the Select Agent Program: 6 to APHIS and 5 to CDC.

To improve independence, the Administrator of APHIS should formally document the reporting structure for the APHIS component of the Select Agent Program from the APHIS director of the program to the Administrator of APHIS. (Recommendation 1)
To improve independence, the CDC director of the Select Agent Program should work with APHIS to establish control activities to help ensure that each component of the program carries out its inspection responsibilities as outlined in the program’s memorandum of understanding. (Recommendation 2)
To improve independence, the APHIS director of the Select Agent Program should work with CDC to establish control activities to help ensure that each component of the program carries out its inspection responsibilities as outlined in the program’s memorandum of understanding. (Recommendation 3)
To improve independence, the CDC director of the Select Agent Program should regularly assess the potential risks posed by the program’s structure and the effectiveness of its mechanisms to address those risks, such as by commissioning external reviews, and take actions as necessary to ensure that any identified risks are addressed so that impairments to independence do not affect its ability to achieve its objectives. (Recommendation 4)
To improve independence, the APHIS director of the Select Agent Program should regularly assess the potential risks posed by the program’s structure and the effectiveness of its mechanisms to address those risks, such as by commissioning external reviews, and take actions as necessary to ensure any identified risks are addressed so that impairments to independence do not affect its ability to achieve its objectives. (Recommendation 5)
To improve the ability to perform reviews, the CDC director of the Select Agent Program should work with APHIS to develop and implement a plan to identify which laboratory activities carry the highest biological safety and security risks and to respond to those risks by aligning inspections and other oversight efforts to target those activities. (Recommendation 6)
To improve the ability to perform reviews, the APHIS director of the Select Agent Program should work with CDC to develop and implement a plan to identify which laboratory activities carry the highest biological safety and security risks and to respond to those risks by aligning inspections and other oversight efforts to target those activities. (Recommendation 7)
To improve transparency, the CDC director of the Select Agent Program should work with APHIS to determine what additional information about laboratories’ use of select agents, incidents, and violations of the select agent regulations is appropriate for the program to share with registered laboratories. (Recommendation 8)
To improve transparency, the APHIS director of the Select Agent Program should work with CDC to determine what additional information about laboratories’ use of select agents, incidents, and violations of the select agent regulations is appropriate for the program to share with registered laboratories. (Recommendation 9)
To improve technical expertise and overcome fragmentation, the CDC director of the Select Agent Program should work with APHIS to develop a joint workforce plan that assesses workforce and training needs for the program as a whole. This assessment should be done in conjunction with the development of the strategic plan. (Recommendation 10)
To improve technical expertise and overcome fragmentation, the APHIS director of the Select Agent Program should work with CDC to develop a joint workforce plan that assesses workforce and training needs for the program as a whole. This assessment should be done in conjunction with the development of the strategic plan. (Recommendation 11)
Agency Comments and Third-Party Views
We provided a draft of this report for review and comment to DOD, HHS, the Department of Homeland Security, NRC, the Department of Transportation, and USDA. We also provided copies to officials from Great Britain, Canada, and the Netherlands, as well as experts who participated in our expert meeting at the National Academy of Sciences.
HHS and USDA—the agencies to whose components our recommendations are directed—both provided written comments agreeing with all of our recommendations. These comments are reprinted in appendixes III and IV, respectively. In their comments, HHS and USDA provided additional information about steps they are taking, or planning to take, to improve their oversight of select agents and to address our recommendations. For example, HHS and USDA stated that the Select Agent Program will explore options to improve independence, including reexamining previous reviews and assessing the need for additional reviews to ensure potential risks posed by the program’s structure are adequately assessed and addressed. In addition, to improve the ability to perform reviews, HHS and USDA stated that the Select Agent Program is transitioning to a new secure information system that will allow the program to develop analytical tools and procedures to analyze risk- related data to improve the inspection process. Further, to enhance transparency, HHS and USDA said the program is exploring ways to disseminate information regarding common deficiencies identified during inspections. Finally, to improve technical expertise and overcome fragmentation, HHS and USDA said that the program has initiated contract support for development of a joint strategic plan that will include the assessment of workforce and training needs.
HHS and USDA also provided technical comments, as did the Department of Homeland Security; officials from Great Britain, Canada, and the Netherlands; and a number of experts who participated in our expert meeting at the National Academy of Sciences. We incorporated these comments as appropriate. DOD, NRC, and the Department of Transportation did not comment on this report.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees; the Secretaries of Agriculture, Defense, Health and Human Services, Homeland Security, and Transportation; the Chairman of NRC; the Director of CDC; the Administrator of APHIS; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact Timothy M. Persons, Chief Scientist, at (202) 512-6412 or [email protected] or John Neumann, Director, Natural Resources and Environment, at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.
Appendix I: Key Elements of Effective Oversight
This appendix describes the steps we took to confirm the applicability of the five key elements of effective oversight that we used in our evaluation of the Federal Select Agent Program (Select Agent Program). We have used these elements in the past to assess the effectiveness of oversight in other areas where low-probability adverse events can have significant and far-reaching effects. These elements are as follows:

Independence: The organization conducting oversight should be structurally distinct and separate from the entities it oversees.
Ability to perform reviews: The organization should have the access and working knowledge necessary to review compliance with requirements.
Technical expertise: The organization should have sufficient staff with the expertise to perform sound safety and security assessments.
Transparency: The organization should provide access to key information, as applicable, to those most affected by operations.
Enforcement authority: The organization should have clear and sufficient authority to require that entities achieve compliance with requirements.
We took several steps to confirm the applicability of these elements for our examination of the Select Agent Program. First, we discussed the applicability of the criteria with senior officials from both components of the Select Agent Program, within the Centers for Disease Control and Prevention (CDC) and the Animal and Plant Health Inspection Service (APHIS). Second, we discussed the elements with representatives from the American Society of Microbiology and American Biological Safety Association International, which were selected because of their focus on microbiology and biological safety, respectively. Finally, we discussed the elements with experts during our National Academy of Sciences meeting (see app. II for information on this meeting). The officials, representatives, and experts generally agreed that the five elements were appropriate for our examination of the Select Agent Program. We compared information from federal documents about the Select Agent Program’s oversight, interviews with laboratory representatives and agency officials, and our expert meeting against the five elements of effective oversight.
Appendix II: List of Experts and Selection Methodology
• Bob Buchanan, Ph.D., Professor and Director of Center for Food Safety and Security Systems, University of Maryland

• Andrew Cottam, Ph.D., Head of the Microbiology and Biotechnology Unit, Health and Safety Executive, United Kingdom

• John Eakin, Principal Investigator, Air Data Research

• David Franz, DVM and Ph.D., Former Commander, United States Army Medical Research Institute for Infectious Diseases

• Gigi Kwik Gronvall, Ph.D., Senior Associate, Johns Hopkins Center for Health Security

• Marianne Heisz, Ph.D., Director, Office of Biosafety Programs and Planning, Public Health Agency of Canada

• Ruthanne Huising, Ph.D., Associate Professor, McGill University

• Gavin Huntley-Fenner, Ph.D., Principal Consultant, Huntley-Fenner Advisors

• Joseph Kanabrocki, Ph.D. and NRCM(SM), Associate Vice-President for Research Safety, Professor of Microbiology, University of Chicago

• Paul Keim, Ph.D., Regents Professor and Cowden Chair, Northern Arizona University

• James LeDuc, Ph.D., Director, Galveston National Laboratory, University of Texas Medical Branch

• Carol Linden, Ph.D., Director, Office of Regulatory Science and Innovation, Food and Drug Administration

• Allison MacFarlane, Ph.D., Professor and Director, Center for International Science and Technology Policy, George Washington University

• Brian O’Shea, Ph.D., Senior Biological Safety Officer, Battelle Memorial Institute

• Karlene Roberts, Ph.D., Professor Emeritus, Haas School of Business, University of California, Berkeley

• Jonathan Rosen, Principal Industrial Hygiene Safety and Health Consultant, AJ Rosen and Associates, LLC

The comments of these experts generally represented the views of the experts themselves and not the agency, university, or company with which they are affiliated.
The meeting with these experts was held at the National Academy of Sciences (NAS) in January 2017. To identify experts to participate in the meeting, we worked iteratively with NAS staff to identify and review biographical information and relevant qualifications of experts, as well as factors such as representation from academia, industry, and federal government and expertise in a range of areas. The Board on Life Sciences of NAS solicited nominations for the expert panel from its extensive contacts in laboratory safety, biological security, and other regulatory sectors, such as occupational safety and health, airline safety, food safety, and chemical safety. These contacts included current and former committee members, current and former members of the Board on Life Sciences, and select members of NAS. NAS received responses from approximately 45 nominees. From this initial list, NAS selected experts based on their knowledge and expertise in the above-mentioned areas as well as their ability to attend the meeting on the chosen dates and obtained our approval of its selections. In order to facilitate discussion among participants, NAS did not include any federal employees or contractors of the Select Agent Program. The final list of 18 experts was then evaluated for any conflicts of interest. A conflict of interest was considered to be any current financial or other interest that might conflict with the service of an individual because it (1) could impair objectivity and (2) could create an unfair competitive advantage for any person or organization. The 18 experts were determined to be free of conflicts of interest, and the group as a whole was judged to have no inappropriate biases.
We developed the session topics for the 2-day meeting based on our researchable objectives and issues that we identified in our audit work, including our analysis of agency documents and interviews with agency officials and representatives from registered laboratories. The meeting was recorded and transcribed to ensure that we accurately captured the experts’ statements, and we reviewed and analyzed the transcripts as a source of evidence. Although the expert meeting was not designed to reach formal consensus on the issues, a number of themes emerged from the group’s discussion to which there was general agreement.
Appendix III: Comments from the Department of Health and Human Services
Appendix IV: Comments from the Department of Agriculture
Appendix V: GAO Contacts and Staff Acknowledgments
GAO Contacts
Staff Acknowledgments
In addition to the individuals named above, Mary Denigan-Macauley (Assistant Director), Sushil Sharma (Assistant Director), Amy Bowser, William Carrigg, Marcia Crosse, Caitlin Dardenne, Shana Deitch, Karen Doran, Jack Melling, Cynthia Norris, Lesley Rinner, Sara Sullivan, Walter Vance, and Elizabeth Wood made key contributions to this report. | Why GAO Did This Study
Safety lapses continue to occur at some of the 276 laboratories in the United States that conduct research on select agents—such as Ebola virus or anthrax bacteria—that may cause serious or lethal infection in humans, animals, or plants, raising concerns about whether oversight is effective.
GAO was asked to review the federal oversight approach for select agents and approaches from other countries or regulatory sectors. This report (1) evaluates the extent to which the Select Agent Program has elements of effective oversight and strategic planning documents to guide it, and (2) identifies approaches selected countries and regulatory sectors have used to promote effective oversight.
GAO convened a meeting of experts with the help of the National Academy of Sciences to discuss oversight of select agents. GAO also reviewed relevant laws, regulations, and guidance, and interviewed officials from the Select Agent Program and laboratories it oversees. GAO also reviewed documents and interviewed officials from two countries and other U.S. sectors selected because they have alternate oversight approaches.
What GAO Found
The Federal Select Agent Program (Select Agent Program)—jointly managed by the Departments of Health and Human Services (HHS) and Agriculture (USDA)—oversees laboratories' handling of certain hazardous pathogens known as select agents, but the program does not fully meet all key elements of effective oversight, as illustrated in the following examples:
GAO's past work identified independence as a key element of effective oversight. However, the Select Agent Program is not structurally independent from all laboratories it oversees, and it has not assessed risks posed by its current structure or the effectiveness of mechanisms it has to reduce organizational conflicts of interest. Without conducting such assessments and taking actions as needed to address risks, the program may not effectively mitigate impairments to its independence.
Another key element of effective oversight is the ability to perform reviews. Some experts and laboratory representatives raised concerns that the program's reviews may not target the highest-risk activities, in part because it has not formally assessed which activities pose the highest risk. Without assessing the risk of activities it oversees and targeting its resources appropriately, the program cannot ensure it is balancing its resources against their impact.
Technical expertise is another key element GAO identified in past work. The Select Agent Program has taken steps to hire additional expert staff and improve training, but workforce and training gaps remain.
Moreover, the program does not have joint strategic planning documents to guide its oversight. Although it began taking steps to develop a joint strategic plan during GAO's review, the program is not developing workforce plans as part of this effort. GAO's past work has found that strategic workforce planning is an essential tool to help agencies align their workforces with their missions and develop long-term strategies for acquiring, developing, and retaining staff. Developing a joint workforce plan that assesses workforce and training needs for the program as a whole would help the program leverage resources to ensure all workforce and training needs are met.
Selected countries and regulatory sectors GAO reviewed promote effective oversight using approaches that differ from the U.S. Select Agent Program's approaches:
In Great Britain, oversight of laboratories that work with pathogens is under an independent government agency focused on health and safety.
In both Great Britain and Canada, regulators focus their oversight on (1) biological safety, due to safety incidents which provided the impetus for laboratory oversight in these countries; and (2) regulation of all potentially hazardous pathogens and activities in laboratories.
What GAO Recommends
GAO is making 11 recommendations for the Select Agent Program, including to (1) assess risks from its current structure and the effectiveness of its mechanisms to reduce conflicts of interest and address risks as needed, (2) assess the risk of activities it oversees and target reviews to high-risk activities, and (3) develop a joint workforce plan. HHS and USDA agreed with GAO's recommendations.
For more information, contact Timothy M. Persons at (202) 512-6412 or [email protected], or John Neumann at (202) 512-3841 or [email protected].
Background
Overview of Software Sustainment Activities
DOD defines software maintenance and software sustainment synonymously, to comprise any activities or actions that change the software baseline, as well as modifications or upgrades that add capability or functionality. For example, software sustainment activities involve the correction of software errors after the software is released and adaptations to enable interfacing with changing environments. The four categories of software sustainment actions are defined in figure 1 below.
A software sustainment activity can be categorized in multiple areas. For example, an Army command is modifying software to incorporate Windows 10. This action may be described as corrective in that it addresses errors in previous versions of Windows; perfective in that it upgrades the software to support new capabilities and functionality provided by Windows 10; adaptive in that it can accommodate changes to firmware and hardware environments; and preventive in that it improves reliability.
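To make the four categories concrete, the short sketch below (our illustration in Python; the class and variable names are ours, not DOD's) models a single sustainment action, such as the Windows 10 modification above, as a combination of the categories defined in figure 1:

    from enum import Flag, auto

    class SustainmentCategory(Flag):
        """The four categories of software sustainment actions (figure 1)."""
        CORRECTIVE = auto()  # corrects software errors after release
        PERFECTIVE = auto()  # adds or upgrades capability and functionality
        ADAPTIVE = auto()    # adapts the software to changing environments
        PREVENTIVE = auto()  # improves reliability and maintainability

    # The Windows 10 modification described above falls into all four categories.
    windows_10_modification = (SustainmentCategory.CORRECTIVE
                               | SustainmentCategory.PERFECTIVE
                               | SustainmentCategory.ADAPTIVE
                               | SustainmentCategory.PREVENTIVE)

    assert SustainmentCategory.ADAPTIVE in windows_10_modification

A set-valued classification like this reflects the point above: one action can belong to several categories at once rather than exactly one.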
Sustaining software is normally different from sustaining hardware. For example, when hardware breaks, technicians can remove the broken part—such as tread on a tracked vehicle—and install a working part. In contrast, sustaining software typically requires writing, testing, and deploying lines of code. Software provides critical functionality to nearly every hardware system that DOD uses: surface (for example, mobile network systems); air (for example, secure communications arrays in aircraft); sea (for example, submarine guidance systems); missile (for example, targeting systems); ordnance (for example, Common Remotely Operated Weapon Station); and space (for example, positioning software), as shown in figure 2.
Further, a weapon system may comprise numerous software systems, each supporting different components of the system. Hundreds, or even thousands, of software systems can be embedded in a single weapon system. Interoperability and integration within the weapon system as a whole constitute key software considerations for the overall weapon system’s sustainability. For example, the military departments include system-of-systems and family-of-systems considerations. These considerations are defined as a set or arrangement of systems that results when independent systems are integrated within a larger system that delivers unique capabilities. Missions are performed by a system-of-systems arrangement of the platforms and systems that deliver the mission capability.
Weapon System Software and the Acquisition Life-Cycle
Decisions affecting the software on a weapon system are made throughout the acquisition life-cycle. The life-cycle is outlined in DOD Instruction 5000.02, Operation of the Defense Acquisition System. This instruction includes four basic and two hybrid models that serve as examples of defense program structures. The hybrid models combine models, such as a weapon system development that includes significant software development. The instruction also includes phases and milestones to oversee and manage acquisition programs, including major weapon systems. It outlines considerations affecting software sustainment for each milestone, including, for example, the following:
Milestone A: The understanding of the technical, cost, and schedule risks of acquiring the materiel solution; the determination of core requirements; and the development of an intellectual property strategy, to include technical data and computer software deliverables. For example, for incrementally deployed software-intensive programs, the preliminary scope of limited deployment is determined for evaluation prior to a full deployment decision for each capability increment.
Milestone B: A standard series of design reviews performed prior to converging on a final design for production. For example, for a hybrid acquisition program such as the combination of a major weapon system’s basic structural hardware development with a simultaneous software-intensive development, criteria establishing maturity for the development of software functional capability are to be identified.
Milestone C: The point at which a program or increment of capability is reviewed for entrance into the production and deployment phase or for limited deployment. For example, a general criterion applied during review would be to have a mature software capability consistent with the software development schedule.
Figure 3 depicts the milestones and decision points that inform a typical acquisition program.
Decisions affecting the software of a weapon system are made throughout the acquisition life-cycle and involve stakeholders across a number of domains. For example, DOD officials are involved in software development, architecture and design, engineering, coding, integration and testing, cost estimation and collection, and intellectual property. Many decisions affecting software sustainment, such as software data rights decisions, typically occur in one of the phases prior to operations and support. Decisions made in the early phases may have long-term effects on a weapon system’s sustainability, especially for systems that endure beyond their originally intended design life. Software sustainment decisions are often revisited during the operations and support phase, as hardware breaks or needs to be replaced, a new capability or requirement is added, or a modification is made due to feedback received after a weapon system is fielded.
Software Sustainment as Part of Depot Maintenance, Core Requirements, and Core Sustaining Workloads
DOD conducts software sustainment at a variety of depot-level maintenance locations. DOD and military policy refer to these locations variously as DOD depot-level software sustainment activities, Software Engineering Centers, Software Support Activities, and Life-Cycle Software Engineering Centers. For purposes of this report, we will refer to these facilities as DOD software centers.
Section 2460 of title 10 of the United States Code defines depot-level maintenance and repair. This term includes all aspects of software maintenance classified by DOD as of July 1, 1995, as depot-level maintenance and repair—regardless of the source of funds for the maintenance or repair, or of the location at which the maintenance or repair is performed. DOD maintains many weapon systems (such as aircraft and ships) and equipment (such as radar) at the depot level because the systems are too complex to maintain exclusively at the unit, or organizational, level.
Section 2464 of title 10 of the United States Code requires DOD to maintain a core depot-level maintenance and repair capability that is government-owned and -operated. Maintaining this capability provides a ready and controlled source of technical competence and resources to enable effective and timely response to mobilizations, contingencies, or other emergencies. Additionally, DOD must assign these government-owned and -operated facilities (the depots) sufficient workload to ensure cost efficiency and technical competence during peacetime, while preserving the surge capacity and reconstitution capabilities necessary to fully support the strategic and contingency plans prepared by the Chairman of the Joint Chiefs of Staff.
Data Rights in DOD
The term “data rights” in the DOD context typically refers to the license rights that the department acquires in two types of deliverables: technical data and computer software. These rights are addressed in law, in the Defense Federal Acquisition Regulation Supplement (DFARS), and in DOD guidance. The deliverables to which these rights attach are defined as follows:
Technical data: recorded information, regardless of the form or method of recording, of a scientific or technical nature (including computer software documentation).
Computer software: computer programs, source code, source code listings, object code listings, design details, algorithms, processes, flow charts, and related material that would enable the software to be reproduced, recreated, or recompiled.
Computer software documentation: owner’s manuals, user’s manuals, installation instructions, operating instructions, and other similar items, regardless of how this documentation is stored, that will explain the capabilities of the computer software or provide instructions for using the software.
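As an illustration only, the sketch below records these deliverable types in a simple structure of the kind a program office might use to track what a contract delivers. The type names mirror the definitions above; the field names and the example entry are our assumptions, not DOD or DFARS terminology:

    from dataclasses import dataclass
    from enum import Enum

    class DeliverableType(Enum):
        TECHNICAL_DATA = "technical data"
        COMPUTER_SOFTWARE = "computer software"
        SOFTWARE_DOCUMENTATION = "computer software documentation"

    @dataclass
    class ContractDeliverable:
        """One contract deliverable and the license rights acquired in it."""
        description: str
        kind: DeliverableType
        license_rights: str  # however the rights were negotiated in the contract

    # Hypothetical example entry; not drawn from any actual DOD contract.
    example = ContractDeliverable(
        description="Mission computer source code and build scripts",
        kind=DeliverableType.COMPUTER_SOFTWARE,
        license_rights="as negotiated under the applicable DFARS clause",
    )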
DOD Has Policies and Organizations within Weapon System Management and Depot Maintenance to Manage Operational System Software Sustainment
DOD has policies and organizations in place within weapon system management and depot maintenance to manage the sustainment of operational system software. We found that DOD has policies for managing the life-cycle of weapon systems, including sustainment; and that DOD policy on depot maintenance and cost also considers weapon system software issues. Several organizations, including the Under Secretary of Defense for Acquisition and Sustainment and DOD software centers, play key roles in overseeing and managing software sustainment. Software sustainment activities are conducted at numerous facilities, including military department software centers, weapon system program management offices, government laboratories or software integration laboratories, and contractor facilities. Additionally, while DOD has defined software sustainment and software maintenance activities synonymously, and it defines these functions as part of depot maintenance, we determined that the Navy categorizes and reports software sustainment differently.
DOD Has Policies for Life-Cycle Management of Major Weapon Systems That Include Considerations for Software Sustainment
DOD has published a directive and an instruction to guide the military departments in life-cycle management of major weapon systems, including considerations relating to software and weapon system sustainability. First, DOD’s acquisition publications provide DOD-wide policy and assign responsibilities to OSD and the military departments for executing weapon system development, production, and sustainment. For example, weapon system software considerations, including cost and access to technical data (for example, product specifications) and computer software (for example, source code), are to be included in required documentation, such as the Life-Cycle Sustainment Plan and the Systems Engineering Plan. Regulatory and reporting requirements differ depending on a system’s cost and acquisition category. These policies are in accordance with statute directing the Secretary of Defense to issue and maintain comprehensive guidance on life-cycle management.
Second, DOD includes weapon system software considerations in its instruction regarding depot maintenance core capabilities. DOD-wide policy assigns responsibilities to OSD and the military departments for the performance of DOD core depot-level maintenance, including software. DOD policy states that maintenance tasks are performed to restore safety and reliability when deterioration has occurred. These tasks help to ensure military readiness, including mobilization and surge capabilities, to support national defense strategic and contingency requirements. Additionally, DOD policy states that, for inherently governmental and core capability requirements, maintenance programs are to use organic—or DOD personnel, rather than contractors—in accordance with the law. These DOD policies accord with the statute directing the Secretary of Defense to maintain a core depot-level maintenance and repair capability to ensure technical competence in peacetime while preserving the surge capacity necessary to fully support strategic and contingency needs.
Third, DOD includes weapon system software considerations in its cost policy and manuals. These policies assign responsibilities for estimation of costs and collection of costs (including operations and support costs). They also prescribe cost data reporting and software resource data reporting requirements.
Several DOD Organizations Play Roles in Weapon System Software Sustainment Policy
Several DOD organizations establish policies and procedures for weapon system software sustainment. First, the Under Secretary of Defense for Research and Engineering and the Under Secretary of Defense for Acquisition and Sustainment play key roles in the establishment and maintenance of policy and procedures for software sustainment. For example:
Research and Engineering: This office establishes policy and oversees research, system engineering, and developmental test processes, especially during formative stages of programs. It also supports the Joint Federated Assurance Center, a cross-DOD working group with a mission to develop, maintain, and offer software and hardware vulnerability detection, analysis, and remediation capabilities.
Acquisition and Sustainment: This office establishes policy and manages acquisition and sustainment of major weapon systems. In April 2018 the Under Secretary appointed the first special assistant for software acquisition to advise and assist in addressing software challenges. According to officials, the special assistant will, among other responsibilities, oversee the development of software development policies and standards across DOD practices, and will advise leadership on best practices in software sustainment and data rights issues.
Second, the Deputy Assistant Secretary of Defense for Materiel Readiness, under the Assistant Secretary of Defense (Sustainment), establishes policy for and manages DOD depot-level maintenance, including software sustainment. Third, the Office of Cost Assessment and Program Evaluation analyzes resource allocation and cost estimation, and provides independent analytic advice on, among other things, the cost-effectiveness of defense systems. Figure 4 highlights select organizations that establish and maintain software sustainment policy and procedures.
Software Sustainment Activities Are Conducted at DOD Software Centers or Contractor Facilities
Software sustainment is conducted either at DOD software centers—which include military department software centers, weapon system program management offices, government laboratories, and software integration laboratories—or at contractor facilities. The specifics of how the software sustainment is conducted vary by weapon system, in accordance with what the program manager negotiates with the DOD software center or contractor. At DOD software centers, software is developed, tested, and distributed by government staff, contractor staff, or both to maintain operational capability, correct faults, improve performance, and adapt the software to environmental changes. Activities range from small fixes for software errors to large releases that provide weapon systems with new capabilities or address cybersecurity vulnerabilities.
The DOD software centers sustain a range of different systems. For example,
U.S. Army Communications-Electronics Command’s Software Engineering Center sustains software for Army communications systems; and the U.S. Army Aviation and Missile Research, Development and Engineering Center sustains software for missiles, space, and aviation;
The Oklahoma City Air Logistics Complex’s 76th Software Maintenance Group at Tinker Air Force Base provides DOD with capabilities in operational flight programs, mission planning systems, space systems, ground-based radar, weapons support, mission support, jet engine test, training and simulation systems, and diagnostics and repair; and
Space and Naval Warfare Systems Center Pacific supports and maintains Naval systems in the areas of command and control, communications, computers, and intelligence, surveillance, and reconnaissance, as well as cyber and space.
This work is necessary to maintain and upgrade weapon system software and to meet immediate military operational needs. During our review, officials at DOD software centers provided additional examples of software sustainment activities they conduct on a wide variety of weapon systems. Appendix II provides these additional examples.
DOD Includes Software Sustainment as Part of Depot Maintenance and the Core Logistics Capabilities Determination Process, but Navy’s Approach Differs
DOD has defined software sustainment and software maintenance activities synonymously, and it defines these functions as part of depot maintenance and the core logistics process. The Departments of the Army and the Air Force categorize and report software sustainment as part of depot maintenance and the core logistics process. Specifically, the Army and the Air Force have policies that categorize and report software sustainment as part of their core logistics requirements, in accordance with DOD instruction.
Contrary to DOD policy, the Department of the Navy does not categorize and report software sustainment as part of depot maintenance. Specifically, Navy officials said that the Navy views software sustainment as an engineering function, not a depot maintenance function. They said that Navy policy reflects the Navy’s view of software sustainment as a continuous engineering process that occurs throughout a weapon system’s life-cycle, rather than a discrete set of activities categorized as depot maintenance.
These officials stated that while the Navy believes software sustainment to be critical to maintaining its weapon systems, it also believes that managing software sustainment as part of depot maintenance is not the most effective approach for the Navy. In particular, Navy officials expressed several concerns about how reporting and categorizing software sustainment as part of depot maintenance could affect their activities. For example, Navy officials noted that this shift would require software engineering to be reported as depot maintenance, which in turn would require the Navy to carry out a greater portion of the work at Navy depots using DOD’s workforce. Navy officials stated that, in their opinion, the Navy does not have the capacity to conduct this level of effort with the current DOD workforce within the Navy depot structure, and that the Navy’s ability to develop adequate capacity in its DOD software engineering workforce in the future is uncertain. They also stated that shifting this capacity away from private industry to the DOD software engineering workforce could create instability in the management of current and future Navy programs, and would be inconsistent with the Navy’s efforts to broaden private-sector software engineering capability and capacity.
We also found that the Department of the Navy does not categorize and report software sustainment as part of its core logistics requirements, in accordance with DOD policy. DOD Instruction 4151.20, Depot Maintenance Core Capabilities Determination Process, assigns responsibilities and prescribes procedures to identify required core capabilities for depot maintenance and the associated workloads needed to sustain those capabilities. It is DOD policy that the core capability requirements determination process underpins the establishment and retention of a broad set of public-sector depot maintenance capabilities necessary for DOD, and that the required core capabilities and depot maintenance workloads necessary to sustain those capabilities will be calculated by military services and then aggregated to determine the overall DOD core requirements. As such, DOD requires the military services to use a computational methodology to identify their essential core capability requirements and their planned workload to support this core maintenance capability.
The Navy’s differing approach to categorizing and reporting software sustainment has created challenges for DOD-wide reporting on core logistics capabilities. DOD is required by law to submit a Biennial Core Report to Congress that identifies core logistics capabilities—and DOD has included software sustainment—at depots, and the workload required to maintain those capabilities. The Army and the Air Force included direct labor hours and estimated sustainment costs for DOD depot-level software sustainment in the 2018 DOD Biennial Core Report. However, while the Navy conducted software sustainment activities, it did not consider these activities to be part of depot maintenance or a core logistics capability, as previously discussed. As a result, the Navy reported no direct labor hours or estimated cost of sustaining its software workload for inclusion in the 2018 DOD Biennial Core Report, as shown in table 1. OSD accepted the Navy’s core report submission for the 2018 DOD Biennial Core Report.
The Department of the Navy’s position that software sustainment is not part of depot maintenance is contrary to DOD Instruction 4151.20, which specifically includes software sustainment as part of depot maintenance. Without the Department of the Navy’s categorizing and reporting of its software sustainment costs, in accordance with DOD policy on the Depot Maintenance Core Capabilities Determination Process, DOD and Congress are not fully informed of the magnitude and cost of core software sustainment capability requirements for the Navy. Accordingly, DOD is impeded in its efforts to plan for a ready and controlled source of technical competence, and to budget resources in peacetime while preserving the surge capabilities necessary to fully support strategic and contingency needs.
Limitations in DOD’s and the Military Departments’ Data Reporting Impede DOD’s Tracking of Weapon System Software Sustainment Costs
DOD’s ability to track weapon system software sustainment costs is impeded by limitations in the collection of software data by both the Office of Cost Assessment and Program Evaluation (CAPE) and the military departments. CAPE oversees the primary cost data collection systems: the Cost and Software Data Reporting (CSDR) system and the military departments’ Visibility and Management of Operating and Support Costs (VAMOSC) systems. Further, CAPE has limitations in its CSDR system for data collected from DOD software centers. We also found that the military departments collect incomplete data on software sustainment costs in their VAMOSC systems.
CAPE Has Limitations in Its Cost and Software Data Reporting System
CAPE collects software sustainment cost data from contractors on certain major weapon systems through its CSDR system. According to CAPE’s CSDR manual, this system serves as the primary repository of contractor costs for use in most DOD resource analysis efforts, including cost database development, applied cost-estimating, cost research, program reviews, analysis of alternatives, and life-cycle cost estimates. Data from the two principal components of the CSDR system—contractor cost data reporting and software resources data reporting systems—can be used in managing software sustainment costs. Data in the CSDR system may also be used to prepare acquisition and life-cycle cost estimates for weapon system milestone reviews, as well as to estimate and project software sustainment costs.
We identified limitations, however, in CAPE’s CSDR system. First, the system has historically not collected information from contractors for weapon system acquisition programs whose spending levels did not reach the major defense acquisition program threshold. Although collecting this information was not a requirement in the past, in 2016 Congress directed DOD to begin to collect additional information necessary to facilitate cost estimation and comparison across acquisition programs, including costs from programs with eventual total expenditures greater than $100 million. In February 2018, as part of its overall efforts to make data collection more robust, CAPE issued a memo stating that the Army, Navy, and Air Force proposed pilot programs to collect contractor cost data from 26 weapon system programs whose spending levels were below the major defense acquisition program threshold. CAPE plans to use the results of these pilot programs to inform future efforts to improve information-gathering on, and visibility into, the actual expenditures for lower-dollar programs. Additionally, CAPE plans to update its cost-collection policies and manual, if necessary, upon completion of the pilot programs. Because the department is in the midst of these pilot programs and has outlined next steps to be taken upon their completion, we are not making a recommendation about this matter at this time.
Second, CAPE’s CSDR system does not collect any weapon system cost or software data from DOD software centers. Prior to 2017, CAPE required only contractors—and not DOD software centers—supporting major defense acquisition programs to report software sustainment costs into the CSDR system. However, in January 2017 CAPE recognized that the lack of cost and software data from government-executed elements of acquisition and sustainment programs was impeding accurate compilation of total program costs. Accordingly, it issued a memorandum to the military departments directing that cost and software data efforts on major defense acquisition programs should also be collected and submitted into the CSDR system by government-performed efforts, which include DOD software centers. Also, the Standards for Internal Control in the Federal Government states that management should use quality information to achieve an entity’s objectives, and that management should obtain data from reliable internal and external sources in a timely manner based on the identified information requirements for effective monitoring.
According to a CAPE official, as of September 2018, CAPE had not received any inputs into the CSDR system for DOD-performed software sustainment efforts. CAPE officials told us that compliance with this requirement in the memorandum has been very low, and they attributed this to the absence of an implementation plan. The official said that CAPE is currently in the early stages of evaluating cost data systems—that is, CSDR and the military departments’ VAMOSC systems—to determine which is the more effective for use in collecting and submitting cost and software data from DOD software centers. The official acknowledged that after completing this evaluation of the systems, CAPE will develop an implementation plan. However, CAPE is still in the early stages of completing its evaluation. Having a robust implementation plan with time frames for key milestones will be important to executing and monitoring CAPE’s actions to improve the reporting of software sustainment costs. Without cost and software data from the DOD software centers, CAPE is challenged in its ability to accurately compile total program costs for program managers, cost estimators, and Congress, among other information recipients.
Military Departments Collect Incomplete Software Sustainment Costs in Operating and Support Cost Systems
CAPE’s cost guidance defines cost elements that cover the range of weapon system operating and support costs, including software sustainment. CAPE’s cost guide defines the software sustainment cost element as the labor, material, and overhead costs incurred after deployment to maintain, modify, and integrate software. According to the CAPE cost estimating guide, the software sustainment element excludes the costs of new development or major redesigns that provide new capabilities. However, if these costs cannot be isolated, they will be considered as part of software sustainment and should be so noted in the estimate documentation.

The military departments’ VAMOSC data on software sustainment are incomplete, however. For example, a Navy official explained that shipboard software sustainment costs are not automatically reported into the Navy VAMOSC system by the major commands. Therefore, in order to include software sustainment costs for all shipboard systems in the VAMOSC system, Navy officials must manually collect these cost data. This official explained that since the Navy collects these costs manually, officials focus their efforts on the most expensive and most populous shipboard systems. According to the official, they intend to address the Navy VAMOSC system’s incomplete software sustainment data issue by expanding their manual data collection efforts to include additional Navy systems.
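As a purely arithmetic illustration of the cost-element definition above (the function and its inputs are our own construction, not CAPE's):

    def software_sustainment_cost(labor, material, overhead,
                                  new_dev_cost=0.0, new_dev_isolable=True):
        """Roll up the software sustainment cost element.

        Labor, material, and overhead incurred after deployment are included.
        New-development costs are excluded when they can be isolated; when they
        cannot, they are folded in and flagged for the estimate documentation.
        """
        total = labor + material + overhead
        note = None
        if not new_dev_isolable:
            total += new_dev_cost
            note = ("includes non-isolable new-development costs; "
                    "note this in the estimate documentation")
        return total, note

    # Example (all figures hypothetical, in millions): $4.0 labor, $0.5 material,
    # $1.1 overhead, plus $2.0 of new-development work that cannot be isolated.
    total, note = software_sustainment_cost(4.0, 0.5, 1.1,
                                            new_dev_cost=2.0,
                                            new_dev_isolable=False)
    # total == 7.6; note records why the new-development costs were included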
According to DOD policy, CAPE’s executive oversight responsibilities include annually reviewing the services’ VAMOSC systems to address data accessibility, completeness, timeliness, accuracy, and compliance with CAPE guidance. CAPE formed a VAMOSC task force in partnership with the service cost-analysis agencies and the Product Support Division in the office of the Assistant Secretary of Defense for Sustainment. The task force is aware of gaps in the military departments’ reporting of software sustainment costs within their VAMOSC systems, particularly within the Army and the Navy, and it has included data completeness in the scope of its efforts. However, closing data gaps is not one of the specific purposes of the task force; these purposes include (1) discussing integration of operating and support cost collection across the department and (2) clearly defining the technical differences across the military services’ VAMOSC systems.
The task force is concerned with multiple cost-reporting issues. We recognize that the task force can enable DOD to improve the completeness of its software sustainment cost reporting. Further, systematic and institutionalized cost data collection by each military department is important to support credible cost estimates of current and future programs. However, without CAPE taking steps to prioritize obtaining complete information on operating and support costs for software sustainment, it cannot provide reliable life-cycle cost estimates to DOD acquisition or maintenance officials—or Congress—to assist with current and future years funding decisions.
DOD Has Begun Addressing Challenges with Data Rights for Weapon Systems’ Software Sustainment but Has Not Yet Reported to Congress on Required Studies
DOD Makes Decisions about Securing Data Rights throughout Weapon Systems’ Life-Cycles
DOD continuously makes decisions about securing data rights, both early and throughout the life-cycle of a weapon system (see sidebar).
DOD may obtain data rights, including access to technical data and computer software related to weapon systems, for a variety of reasons. For example, as we have previously reported, data rights may be obtained to help control costs and maintain flexibility in future acquisition and sustainment of systems and subsystems, including maintenance and upgrade of weapon system software. DOD officials we spoke with emphasized that there is no one-size-fits-all approach. Further, obtaining data rights for software sustainment constitutes only one of many competing priorities that must be considered along with cost, schedule, and performance in the acquisition of weapon systems. In some cases, DOD does not acquire all of the underlying data, such as technical data or computer software, to be delivered under a contract. DOD officials told us that this was due to cost and proprietary reasons—that is, the contractor retains ownership of the intellectual property, such as the source code. DOD strives to balance the cost of purchasing the rights against the extent of data rights it expects it will need to maintain and support the system for years into the future. For example, DOD obtains data rights for the following reasons:

To support its ability to evaluate weapon system design in order to sustain weapon system software.

To re-compete upgrades and sustainment activities to achieve cost savings. Re-competing requires complete technical data packages that enable the manufacture of equipment from specifications.
During the operating and support phase of a weapon system, DOD may need to reconsider its earlier decisions about the extent of data rights it acquired. DOD officials we spoke with emphasized that there are situations in which the data rights needed may not be known until years into sustainment. A senior-level DOD official told us that it would be useful if data rights could have a pre-negotiated price and be an option as part of the initial contract. Such an option would give the government the right, but not the obligation, to purchase the data rights at the pre-negotiated price if needed in the future.
DOD Faces Challenges with Data Rights and Has Initiated Steps to Mitigate Them
DOD has faced challenges in securing the necessary data rights to sustain weapon system software. Specifically, having partial or incomplete data, unclear data rights, or both can impede the government’s ability to support the weapon system as intended. For example, our recent work on the F-35 Joint Strike Fighter Program found that DOD has not defined all of the technical data it needs from the prime contractor, and at what cost, to enable competition of future sustainment contracts.
Officials at DOD software centers told us that they take steps to mitigate challenges posed by having partial or incomplete data, unclear data rights, or both for decades-old weapon systems and new acquisitions. For decades-old weapon systems, officials at some DOD software centers stated that they use public-private partnerships to bridge gaps for systems that lack access to the necessary data rights. For example, an Air Force official at Robins Air Force Base told us that the C-5 software sustainment workload has been successful due to a public-private partnership involving the C-5 System Program Office, the 402nd Software Maintenance Group, and the contractor. As part of this partnership, a C-5 software integrated laboratory was established at Robins Air Force Base for DOD personnel to perform software sustainment activities, including deficiency report investigations and testing. In doing so, the 402nd Software Maintenance Group supports $8.4 million in annual C-5 software sustainment requirements.
Officials at DOD software centers further explained that they have the expertise to optimize software that is transferred from a contractor to a DOD software center or to reverse-engineer software for weapon systems, if needed. In some cases, for example, a contractor may decide that it is no longer profitable or advantageous to continue performing the software sustainment; the activities can then be transferred to a DOD software center. Air Force officials at the 402nd Software Maintenance Group stated that on many occasions they have worked to take over software from a contractor without any transition period. In 2013 this DOD software center assumed sustainment responsibility from a contractor without any transition period for a radar system on the F-15 aircraft in order to maintain and upgrade its software. After assuming sustainment responsibility, according to an Air Force official, this DOD software center corrected latent defects and added new capabilities to adapt the radar to a changing threat environment. According to the official, this occurred because the contractor shifted focus to newer radar systems. Further, the contractor priced the support for the older radar system above what the Air Combat Command had budgeted for the updates.
Officials at some DOD software centers told us that if they have the source code but do not have the computer software documentation— such as manuals or instructions—they may need to reverse-engineer the software. For example, engineers at U.S. Army Research, Development and Engineering Command, Armament Research, Development, and Engineering Center (ARDEC) reverse-engineered a key software function, as shown in figure 5 below.
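The report does not describe the specific techniques ARDEC used. As one hedged illustration of a common first step when source code exists but design documentation does not, the sketch below builds a crude cross-reference of which C source files mention which known functions; a real reverse-engineering effort would rely on proper parsing and analysis tools rather than regular expressions:

    import re
    from collections import defaultdict
    from pathlib import Path

    def cross_reference(src_dir, function_names):
        """Map each .c file to the known function names it references.

        A crude textual pass (comments and string literals are not stripped),
        but an index like this is often among the first artifacts rebuilt when
        software documentation is missing. Assumes function_names is non-empty.
        """
        pattern = re.compile(
            r'\b(' + '|'.join(map(re.escape, function_names)) + r')\s*\(')
        index = defaultdict(set)
        for path in Path(src_dir).rglob('*.c'):
            text = path.read_text(errors='ignore')
            for match in pattern.finditer(text):
                index[path.name].add(match.group(1))
        return index

An index like this helps engineers reconstruct which parts of a code base exercise a given function before attempting to document or modify it.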
For newer acquisitions, DOD has increased the consideration it affords to the potential needs for access to and delivery of data. For example, Air Force officials said that because of past issues with data rights on legacy systems, they had launched an initiative to ensure that program offices use standardized contract clauses (for example, DFARS software data rights) and contract delivery requirements (for example, models, drawings, associated lists, and specifications) for data rights. To illustrate this, an Air Force official told us that the HH-60W Combat Rescue Helicopter program committed early in the life-cycle to securing the necessary data rights for a DOD software center in the 402nd Software Maintenance Group to perform the software sustainment activities. The official told us that the Statement of Work requests that the contractor provide the DOD software center with the source code and full technical data package, to include a complete software-supporting documentation package.
DOD Has Begun Establishing Intellectual Property Policy and Experts but Has Not Yet Reported to Congress on Required Studies on Data Rights
Provisions in the fiscal years 2016 and 2018 National Defense Authorization acts (NDAA) directed the Secretary of Defense to commission studies related to DOD intellectual property, establish an intellectual property policy, and establish a cadre of intellectual property experts. In response, DOD is in the early stages of developing intellectual property policy and establishing a cadre of intellectual property experts. Also, DOD has commissioned studies to review its access to intellectual property for DOD weapon systems, including necessary data rights. However, the department has missed some required reporting time frames, and it has not yet reported to congressional defense committees on the studies’ findings and recommendations.
Congress Directed DOD to Establish Intellectual Property Policy and Identify a Cadre of Intellectual Property Experts
In the fiscal year 2018 NDAA, Congress directed the Secretary of Defense, through the Under Secretary of Defense for Acquisition and Sustainment, to (1) develop policy on the acquisition or licensing of intellectual property; and (2) establish a cadre of intellectual property experts to help support the acquisition workforce on intellectual property matters, including acquiring or licensing intellectual property. The law did not include a time frame for completion. The department is in the early stages of addressing these statutory provisions.
According to the law, the policy is intended to enable DOD-wide coordination and consistency in strategies for acquiring or licensing intellectual property; to help ensure that program managers are aware of DOD's rights and consider and use best practices early in the acquisition process; and to encourage customized intellectual property strategies based on the unique characteristics of each system. The cadre of experts is intended to ensure a consistent, strategic, and knowledgeable approach to acquiring or licensing intellectual property by providing expert advice, assistance, and resources to the acquisition workforce on intellectual property matters.
While the department is in the early stages of addressing these statutory provisions, senior-level DOD officials have acknowledged a delay in these efforts, primarily due to the department's recent reorganization. DOD officials stated that the details concerning organizational structure, roles, responsibilities, and realignment of resources had to be finalized in order for the newly formed organizations to implement these provisions. Regarding the intellectual property policy, a senior-level DOD official told us that the Office of Strategy and Design, within the Office of the Secretary of Defense, will facilitate the collaboration of stakeholders to assist in developing the intellectual property policy, which the Assistant Secretary of Defense (Acquisition) will then issue and oversee. Senior-level DOD officials spoke with us regarding the complexity of developing this intellectual property policy, as it spans the weapon system life-cycle, including research, development, acquisition, and operating and support considerations.
Regarding the intellectual property cadre, a senior-level DOD official told us that the Assistant Secretary of Defense (Acquisition) may house the cadre. As of August 2018 the department had not yet specified details on the potential size or scope of the intellectual property cadre, or a time frame to guide implementation. Although not required by law, development of a robust implementation plan with time frames for key milestones could help DOD to execute and monitor its actions.
DOD Established a Government-Industry Panel to Review Technical Data Rights, but the Panel Has Missed Deadlines for Reporting to Congress
In the fiscal year 2016 NDAA, Congress directed DOD to establish a Government-Industry Advisory Panel to review technical data rights, and to submit its final report and recommendations to the Secretary of Defense not later than September 30, 2016. The panel, comprising members from both the public and private sectors, was to review defense regulations on technical data and proprietary restrictions to ensure, among other considerations, that DOD does not pay more than once for the same work and that contractors are appropriately recompensed for innovation and invention. The law also directs that the Secretary of Defense submit comments or recommendations to congressional defense committees not later than 60 days after receiving the report. DOD established the panel, as legislatively required.
As of November 2018 the panel had submitted its report to DOD but not to Congress. Panel members acknowledged that the panel is late in reporting to the congressional defense committees, and they attributed the lateness to the complexity of the task. Panel members told us that obtaining consensus between DOD and industry has been difficult, in part because of competing interests. For example, panel members discussed balancing DOD's needed ability to upgrade and support weapon systems—which is difficult to forecast 30 to 40 years into the future—with industry's need for a fair return on its intellectual property investments. In November 2018 the panel submitted the report to the Under Secretary of Defense for Acquisition and Sustainment. The report includes 19 recommendations for legislative, regulatory, and policy changes that, according to the panel chairman, recognize and seek to balance the equities of both government and industry. As of November 21, 2018, the panel had not yet transmitted the report to Congress, but the panel chairman stated that it planned to do so before the end of the month.
DOD Is Late in Reporting to Congress on a 2017 Study on Access to Intellectual Property for Weapon System Sustainment
In the fiscal year 2016 NDAA, Congress directed DOD to contract with an independent entity to review DOD regulations, practices, and sustainment requirements related to government access to and use of intellectual property rights of private-sector firms. The law also directs the Secretary of Defense to submit a report to the congressional defense committees on the findings of the independent entity, along with a description of any actions the Secretary proposed in order to revise and clarify laws, or actions the Secretary may take to revise or clarify regulations, related to intellectual property rights.
In response, DOD contracted with the Institute for Defense Analyses to review the intellectual property for weapon system sustainment. In May 2017 the Institute released its report on access to intellectual property for weapon system sustainment. The report made six recommendations, including that DOD establish or expand existing organizational capabilities within the DOD components (with OSD support) to provide expertise in the acquisition of intellectual property data and rights to program managers throughout their programs’ life-cycles, as well as to other staff involved in weapon system acquisition.
However, DOD has not yet submitted its report to the congressional defense committees on the study’s findings and recommendations, though it was required to do so by March 1, 2016. OSD officials acknowledged that they are late in reporting to congressional defense committees on the study’s findings and recommendations. They attributed the delay to their intent of awaiting the findings and recommendations on technical data rights, if any, of the Government-Industry Advisory Panel, as discussed above. DOD informed the congressional defense committees twice—most recently in January 2018—that the department would consider the recommendations of the Institute for Defense Analyses and those of the Panel collectively, and would provide its recommendations in a single report after receiving the Panel’s report. In this January 2018 update, DOD noted that the Panel expected to complete its report by March 2018. However, the Panel did not complete its report—for which DOD was waiting before responding to the Institute’s study—until November 2018. DOD’s report to Congress on any actions it might take in response to the study’s findings and recommendations could provide insight into whether laws or regulations related to intellectual property rights need to be revised or clarified.
Conclusions
Software is essential to the capabilities and operations of a vast range of military systems, including tactical and combat vehicles, aircraft, ships, submarines, and strategic missiles. DOD has policies and organizations within weapon system management and depot maintenance to manage operational system software sustainment. DOD has defined software sustainment and software maintenance activities synonymously, and the department includes software maintenance as part of depot maintenance core capabilities. However, the Department of the Navy does not categorize or report software sustainment as part of depot maintenance. Without the Department of the Navy’s categorizing and reporting of its software sustainment costs, in accordance with DOD policy on the Depot Maintenance Core Capabilities Determination Process, DOD and Congress are not fully informed of the magnitude and cost of core software sustainment capability requirements. As such, DOD is impeded in its efforts to plan for a ready and controlled source of technical competence and to budget resources in peacetime while preserving the surge capabilities necessary to fully support strategic and contingency needs.
Limitations exist in DOD's cost and software data reporting system with regard to obtaining cost data from DOD software centers, as well as in the military departments' operating and support cost systems. These limitations impede DOD's tracking of weapon system software sustainment costs. Without cost and software data from the DOD software centers, as well as complete information on the military departments' operating and support costs for software sustainment, CAPE is challenged in its ability to accurately compile total program costs for program managers, cost estimators, and Congress, among other information recipients.
Lastly, while DOD makes decisions about securing data rights both early and throughout the life-cycle of a weapon system, the department faces challenges in balancing the cost of purchasing the rights against the extent of data rights it expects it will need over the life of the system. DOD has begun taking actions to address these challenges. For example, DOD has commissioned several studies, at congressional direction, to examine DOD’s access to and use of intellectual property, including technical data rights and proprietary restrictions. However, Congress has yet to receive two of those studies. Reporting on the findings and recommendations, as well as on any actions DOD may take in response to both studies, would provide insight and would highlight timely issues with technical data rights to keep Congress and DOD informed of government and industry concerns and enable them to use that knowledge in their decision making on weapon systems that may be in operation for decades to come.
Recommendations for Executive Action
We are making five recommendations to the Department of Defense— one to the Secretary of the Navy and four to the Secretary of Defense.
We recommend that the Secretary of the Navy categorize and report the Navy’s software sustainment costs, in accordance with DOD policy on the Depot Maintenance Core Capabilities Determination Process.
We recommend that the Secretary of Defense ensure that the Director for Cost Assessment and Program Evaluation complete its evaluation and select the most effective system to obtain cost and software data from DOD software centers, and develop an implementation plan that includes time frames for key milestones to execute and monitor the centers’ submission of required data.
We recommend that the Secretary of Defense ensure that the Director for Cost Assessment and Program Evaluation take steps to prioritize the respective military departments' obtaining and reporting of complete operating and support costs for software sustainment through their VAMOSC systems.
We recommend that the Secretary of Defense develop an implementation plan with time frames for key milestones for establishing a cadre of intellectual property experts.
We recommend that the Secretary of Defense submit a report, as required by law, to Congress about the study on access to intellectual property for weapon system sustainment conducted by the Institute for Defense Analyses, along with a description of any actions that the Secretary proposes, or may take, to revise or clarify regulations related to intellectual property rights.
Agency Comments and Our Response
We provided a draft of this report to the Department of Defense for review and comment. DOD provided written comments, which are reprinted in appendix III. In its comments, DOD concurred with our recommendations and stated it has actions underway or plans to take actions in response to all five of our recommendations.
We are sending copies of this report to the appropriate congressional committees and the Acting Secretary of Defense. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-9627 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
Appendix I: Objectives, Scope, and Methodology
This report examines the extent to which (1) DOD has policies and organizations in place to manage the sustainment of operational system software for weapon systems; (2) DOD and the military departments track costs to sustain weapon system software; and (3) DOD has addressed challenges securing necessary data rights to sustain weapon system software. Our scope included software sustainment of operational weapon systems.
For objective one, we reviewed DOD policies and organizations in place to manage the sustainment of operational system software for weapon systems. This included DOD Directive 5000.01 and DOD Instruction 5000.02, which establish acquisition program policies; and DOD Directive 4151.18 and DOD Instruction 4151.20, which outline requirements for DOD materiel maintenance and DOD programs' core capabilities. We reviewed statutory requirements, including 10 United States Code § 2337, which requires the Secretary of Defense to issue and maintain comprehensive guidance on life-cycle management and the development and implementation of product support strategies for major weapon systems. We compared the processes used by DOD and the military departments against those outlined in DOD policy and statute, and against software sustainment activities performed at several DOD software centers. We identified the roles and responsibilities for conducting software sustainment activities among personnel at each level of DOD bureaucracy. We also interviewed officials from the Office of the Secretary of Defense (OSD) and the military departments regarding the department's guidance and the processes used to collect the data for DOD's Biennial Core Report. As in our previous reviews of DOD's biennial core reports, we did not assess the reliability of the underlying data provided by the military services for the 2018 DOD Biennial Core Report. However, we determined that the data were sufficiently reliable for the purpose of determining whether the military services had reported costs of workloads from 2012 through 2018.
We interviewed officials from OSD, including officials within the Office of the Under Secretary of Defense for Research and Engineering and the Office of the Under Secretary of Defense for Acquisition and Sustainment. Using a semi-structured questionnaire, we also interviewed officials from each of the military department headquarters—U.S. Army G-4, Air Force Acquisition office, and the Assistant Secretary of the Navy for Research, Development, and Acquisition—to understand policies and organizations in place to manage the sustainment of operational system software for major weapon systems. We also interviewed industry officials, such as from the Center for Strategic and Budgetary Assessments and the Software Engineering Institute at Carnegie Mellon University. We conducted interviews using a semi-structured questionnaire with officials at select DOD depot-level software sustainment activities, also referred to as DOD software centers for the purposes of this report. We used DOD's Fiscal Year 2016 Maintenance Fact Book to select 11 of 20 DOD depot-level software sustainment activities based on several criteria, including (1) military department, (2) weapon system type, (3) geographical location, and (4) random selection. Although this sample is not generalizable to the population of DOD depot-level software centers, the use of a random sample of software centers helped mitigate any potential selection bias, and the interviews provided valuable information on those sites selected. The officials we interviewed at DOD software centers included a variety of engineers and others who perform software sustainment activities for weapon system software on several DOD weapon systems, including air and sea platforms, targeting systems, and communications systems, among others. We interviewed these officials to gain an understanding of policies and procedures they follow to guide their software sustainment activities, how they are organized, and the activities they undertake to sustain the software.
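To illustrate the kind of selection this describes, the sketch below pairs coverage of the selection criteria with a random fill. It is hypothetical: the report does not describe the mechanics of the draw, and the site names and attributes are invented.

```python
import random

# Hypothetical sketch: choose 11 of 20 sites so each military department
# and weapon system type appears at least once, then fill the remaining
# slots at random. Site names and attributes are invented.

def select_sites(sites, target=11, seed=None):
    rng = random.Random(seed)
    remaining = list(sites)
    rng.shuffle(remaining)  # randomizes both coverage picks and the fill
    selected = []
    for attr in ("department", "system_type"):
        for value in {s[attr] for s in sites}:
            if not any(s[attr] == value for s in selected):
                pick = next(s for s in remaining if s[attr] == value)
                selected.append(pick)
                remaining.remove(pick)
    while len(selected) < target and remaining:
        selected.append(remaining.pop())
    return selected

departments = ["Army", "Navy", "Air Force"]
system_types = ["air platform", "sea platform", "targeting", "communications"]
sites = [{"name": f"Center {i}", "department": departments[i % 3],
          "system_type": system_types[i % 4]} for i in range(20)]
print(sorted(s["name"] for s in select_sites(sites, seed=1)))
```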
For objective two, we reviewed DOD policy and military department guidance regarding software sustainment cost reporting requirements, including Department of Defense Manual 5000.04, Cost and Software Data Reporting Manual, and applicable financial management regulations. We reviewed the Office of Cost Assessment and Program Evaluation (CAPE) Reports to Congress for Fiscal Years 2016 and 2017 to learn about steps that CAPE is taking to address challenges. We interviewed officials at the DOD software centers responsible for weapon system software on several DOD weapon systems to gain an understanding of how they track cost data. We also interviewed officials from OSD, including officials from CAPE, and officials from the three cost analysis agencies responsible for collecting operating and support costs for the military departments’ Visibility and Management of Operating and Support Costs (VAMOSC) data collection systems. These agencies include Office of the Assistant Secretary of the Army for Cost and Economics, the Air Force Cost Analysis Agency, and the Naval Center for Cost Analysis.
For objective three, we reviewed statutes governing DOD intellectual property, including technical data rights, computer software, and computer software documentation. These statutes included, for example, 10 U.S.C. § 2320, "Rights in Technical Data," and 10 U.S.C. § 2321, "Validation of Proprietary Data Restrictions." Both of these statutes are implemented, in part, by the Federal Acquisition Regulation and the Defense Federal Acquisition Regulation Supplement (DFARS), which we also reviewed. Specifically, we reviewed DFARS Subpart 227.71, "Rights in Technical Data," and DFARS Subpart 227.72, "Rights in Computer Software and Computer Software Documentation." Both include sections that address DOD definitions of technical data; computer software; and computer software documentation, policy, acquisition, licensure, and delivery rights, among other items.
We also reviewed DOD policy and guidance, including DOD 5010.12-M, Procedures for the Acquisition and Management of Technical Data. We reviewed the Defense Acquisition Guidebook, which addresses the acquisition and maintenance of technical data rights to sustain and upgrade software on major weapon systems. We also reviewed guidance put forth on intellectual property strategies, including a checklist arranged by contract phase for key intellectual property management activities and considerations.
We interviewed officials from OSD, including from the Office of General Counsel and the Office of Strategy and Design, as well as officials from the military department headquarters, to gain an understanding of the necessary technical data rights to sustain weapon system software, the reasons that technical data rights are needed, and challenges faced by the department. We interviewed officials at the DOD software centers covering a variety of DOD weapon systems to gain an understanding of what technical data rights they need for their respective weapon systems, and the ways in which they manage issues they may encounter in which contractors own the technical data. We analyzed select weapon systems for which DOD had complete data and rights, as well as weapon systems for which DOD had partial or incomplete data rights, and the actions DOD took for sustainment, such as public-private partnerships. We also interviewed members of the Government-Industry Panel examining technical data rights and proprietary data restrictions to gain an understanding of necessary data rights for sustaining weapon systems coupled with proprietary concerns from industry. Finally, we reviewed statutory provisions in the fiscal years 2016 and 2018 National Defense Authorization Acts, which directed the Secretary of Defense to commission studies related to DOD intellectual property, and we interviewed officials to understand DOD's status on the provisions.
Table 2 lists the offices that we visited or contacted during our review.
Appendix II: Select Software Sustainment Activities
Appendix III: Comments from the Department of Defense
Appendix IV: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact listed above, Sally Newman (Assistant Director), Laura Czohara (Analyst-in-Charge), Steven Bagley, Steven Boyles, Vincent Buquicchio, Amie Lesser, Janine Prybyla, Andrew Stavisky, and Cheryl Weissman made key contributions to this report.
Appendix V: Related GAO Products
GAO, F-35 Joint Strike Fighter: Development is Nearly Complete, but Deficiencies Found in Testing Need to Be Resolved, GAO-18-321 (Washington, D.C.: June 5, 2018).
GAO, F-35 Aircraft Sustainment: DOD Needs to Address Challenges Affecting Readiness and Cost Transparency, GAO-18-75 (Washington, D.C.: Oct. 26, 2017).
GAO, Military Acquisitions: DOD Is Taking Steps to Address Challenges Faced by Certain Companies, GAO-17-644 (Washington, D.C.: July 2017).
GAO, F-35 Sustainment: DOD Needs a Plan to Address Risks Related to Its Central Logistics System, GAO-16-439 (Washington, D.C.: Apr. 14, 2016).
GAO, F-35 Joint Strike Fighter: Preliminary Observations on Program Progress, GAO-16-489T (Washington, D.C.: Mar. 23, 2016).
GAO, Defense Contracting: Early Attention in the Acquisition Process Needed to Enhance Competition, GAO-14-395 (Washington, D.C.: May 5, 2014).
GAO, F-35 Joint Strike Fighter: Problems Completing Software Testing May Hinder Delivery of Expected Warfighting Capabilities, GAO-14-322 (Washington, D.C.: Mar. 24, 2014).
GAO, F-35 Joint Strike Fighter: Program Has Improved in Some Areas, but Affordability Challenges and Other Risks Remain, GAO-13-500T (Washington, D.C.: Apr. 17, 2013).
GAO, Defense Acquisition: DOD Should Clarify Requirements for Assessing and Documenting Technical-Data Needs, GAO-11-469 (Washington, D.C.: May 11, 2011).
GAO, Federal Contracting: Opportunities Exist to Increase Competition and Assess Reasons When Only One Offer Is Received, GAO-10-833 (Washington, D.C.: July 26, 2010).
GAO, Weapons Acquisition: DOD Should Strengthen Policies for Assessing Technical Data Needs to Support Weapon Systems, GAO-06-839 (Washington, D.C.: July 14, 2006).
GAO, Defense Management: Opportunities to Enhance the Implementation of Performance-Based Logistics, GAO-04-715 (Washington, D.C.: Aug. 16, 2004).
GAO, Defense Logistics: Opportunities to Improve the Army's and the Navy's Decision-making Process for Weapons Systems Support, GAO-02-306 (Washington, D.C.: Feb. 28, 2002).
GAO, Defense Logistics: Air Force Lacks Data to Assess Contractor Logistics Support Approaches, GAO-01-618 (Washington, D.C.: Sept. 7, 2001).
GAO, Test and Evaluation: DOD Has Been Slow in Improving Testing of Software Intensive Systems, GAO/NSIAD-93-198 (Washington, D.C.: September 1993).
GAO, Mission Critical Systems: Defense Attempting to Address Major Software Challenges, GAO/IMTEC-93-13 (Washington, D.C.: December 1992).
GAO, Risk and Control of the Software Maintenance Process (Washington, D.C.: January 1987).
GAO, Federal Agencies' Maintenance of Computer Programs: Expensive and Undermanaged, AFMD-81-25 (Washington, D.C.: Feb. 26, 1981).
Why GAO Did This Study
Software is integral to the operation and functionality of DOD equipment, platforms, and weapon systems, including tactical and combat vehicles, aircraft, ships, submarines, and strategic missiles. DOD estimates that software sustainment funding will total at least $15 billion over the next 5 fiscal years. DOD carries out software sustainment at various locations, where DOD uses its maintenance capabilities to maintain, overhaul, and repair its military weapon systems.
GAO was asked to review several issues relating to the sustainment of operational system software for DOD weapon systems. This report examines, among other things, the extent to which (1) DOD has policies and organizations in place to manage the sustainment of operational system software for weapon systems; and (2) DOD and the military departments track costs to sustain weapon system software. GAO reviewed DOD policies and procedures and interviewed cognizant officials from select DOD software centers, among others, who perform weapon system software sustainment activities.
What GAO Found
The Department of Defense (DOD) has policies and organizations to manage the sustainment of operational system software. DOD policy defines software sustainment and software maintenance activities synonymously, to comprise any activities or actions that change the software baseline, as well as modifications or upgrades that add capability or functionality. One example of such an action is the Air Force's modifying the security software on the B-52 bomber to better protect against attempted system penetration. The figure below defines the four categories of software sustainment actions.
DOD policies on life-cycle management of weapon systems address software sustainment, and several DOD organizations—including DOD software centers—play key roles in overseeing and managing software sustainment. DOD policy includes software maintenance as part of core logistics, and it requires the military departments to report biennially to Congress on their estimated workloads to sustain core logistics capabilities, including estimated costs of these workloads. However, while the Army and Air Force categorize and report software sustainment as part of core logistics, the Navy does not. Without the Navy's categorizing and reporting its software sustainment costs, DOD and Congress are not fully informed of the magnitude and cost of core software sustainment capability requirements. This impedes DOD's efforts to plan for a ready and controlled source of technical competence, and to budget resources in peacetime while preserving necessary surge capabilities.
DOD's ability to track weapon system software sustainment costs is impeded by limitations in its collection of software cost data. First, GAO found that the Office of Cost Assessment and Program Evaluation's (CAPE) Cost and Software Data Reporting system did not collect weapon system cost data from DOD software centers. Recognizing this, CAPE directed in January 2017 that cost and software data efforts on major acquisition programs should begin to be collected from government organizations, including DOD software centers. However, CAPE acknowledges that it lacks an implementation plan to execute and monitor the requirement for these centers to submit cost and software data. Second, GAO also found that the military departments' operating and support cost systems have incomplete software sustainment cost data. DOD policy requires the military departments to collect and maintain actual operating and support costs, including software sustainment costs. Without CAPE's taking steps to prioritize obtaining complete information on operating and support costs for software sustainment, CAPE is challenged in its ability to accurately compile total program costs or provide reliable life-cycle cost estimates to DOD and Congress.
What GAO Recommends
GAO is making five recommendations, including that (1) the Navy categorize and report its software sustainment costs in accordance with DOD policy; and (2) CAPE improve the collection of weapon system software cost data. DOD concurred with GAO's recommendations. |
Background
Enacted in July 2014, WIOA emphasizes the alignment and integration of workforce programs, primarily administered by the departments of Labor and Education, that provide education and training services to help job seekers obtain employment and advance in the labor market. WIOA also provides for state workforce development boards to help oversee a system of local workforce development boards that, in turn, deliver services through a network of one-stop centers. In its guidance on implementing WIOA, DOL states that this network is a shared responsibility of states, local boards, and other partners, including one-stop programs. It also encourages integration of services across one-stop programs to promote seamless service delivery.
The public workforce system is available to all job seekers, including UI claimants, and through it claimants may access reemployment services from a variety of federally funded workforce programs. At one-stop centers, states make services such as job search assistance and career counseling available to UI claimants and other job seekers using programs including the DOL-administered Wagner-Peyser Employment Service, the WIOA Adult program, and the WIOA Dislocated Worker program. The WIOA Adult program and WIOA Dislocated Worker program may also be used to provide training (see table 1).
UI claimants may also access services from other programs offered through the public workforce system. One such program, RESEA, is designated for the provision of reemployment services to UI claimants specifically. Established as a discretionary grant program in 2015, RESEA makes funding available to states for reemployment services to UI claimants identified by their state as most likely to exhaust their benefits, as well as veterans who receive UI benefits through the Unemployment Compensation for Ex-Servicemembers (UCX) program. During fiscal year 2017, 49 states and the District of Columbia participated in RESEA, and DOL made $115 million in grant funds available through the program. In February 2018, legislation was enacted that established RESEA as a formula grant program with incentive payments for states meeting or exceeding outcome goals, and authorized up to approximately $3.9 billion in funding for the program through fiscal year 2027. In July 2018, DOL announced that it was developing an implementation plan for the new RESEA program provisions, and would provide details on this plan in the coming months.
RESEA aims to link UI claimants to the public workforce system, address their individual reemployment needs, and help states prevent and detect improper payments by conducting UI eligibility reviews. Once a UI claimant is selected for RESEA, the claimant is required to attend a one-stop orientation and meet one-on-one with a caseworker, who conducts a UI eligibility assessment, helps the claimant develop an individualized reemployment plan, and provides or refers the claimant to other reemployment services, as appropriate (see fig. 1). In some states, claimants participate in a second caseworker meeting to receive follow-up services, either in person or by phone.
UI Claimant Profiling Requirements
Since 1994, states have been required by law to develop and use profiling systems to identify UI claimants who are likely to exhaust their benefits, and to refer such claimants to reemployment services. In response to this legislation, DOL launched a Worker Profiling and Reemployment Services (WPRS) initiative in 1994. Currently, most states provide services to such claimants through their RESEA programs, using the profiling systems they developed under the WPRS initiative.
DOL issued WPRS guidance in 1994 describing minimum profiling requirements for all states and listing two profiling options:
Statistical profiling systems predict each UI claimant’s likelihood of exhausting benefits based on claimant characteristics (such as education level, prior claims history, and industry or occupation) and other factors. The system produces a ranked list, and claimants with the highest predicted likelihood of exhausting benefits are selected for reemployment services.
Non-statistical characteristic screens sort claimants into two groups, based on the presence of certain characteristics. Claimants with one or more of these characteristics are considered not likely to exhaust their benefits, and are excluded from selection for services. Remaining claimants are considered likely to exhaust their benefits, and a subset is randomly selected for reemployment services.
This guidance also specifies characteristics that states must, may, and are forbidden to use in their profiling systems. Specifically, states are required to include certain characteristics to identify UI claimants who are permanently laid off and unlikely to return to their previous industry or occupation. States may also use a claimant’s education, tenure at a previous job, and the state unemployment rate. States are prohibited from using claimant age, race or ethnic group, sex, disability, religion, political affiliation, and citizenship, among others. DOL determined that use of these characteristics could produce discriminatory effects, as UI claimants selected for reemployment services through the profiling process are required to attend services, or may lose their eligibility to receive UI benefits.
Research on Effectiveness of Reemployment Services
DOL-commissioned research suggests that reemployment services may help UI claimants find work more quickly and reduce UI program expenditures, though results have differed across states reviewed. A 2008 study found that the Reemployment and Eligibility Assessment (REA) program, the predecessor to RESEA, was effective in reducing the average duration of UI benefits in one of two states reviewed. Specifically, this study found that the REA program led to a statistically significant reduction in the duration of UI benefit claims of about a week for claimants with multiple caseworker meetings in Minnesota, but did not find statistically significant effects for claimants in North Dakota. A subsequent 2011 study found significant reductions in UI benefit duration and amount of benefits received among REA participants in three of four states reviewed, with the largest effects exhibited in Nevada. A more in-depth 2012 evaluation of Nevada's REA program during the 2007 to 2009 Great Recession found that, on average, REA participants exited the UI program about three weeks sooner and used $873 less in benefits than non-participants as a result. This impact on UI benefit duration and benefit amounts includes both reductions in regular UI benefits and in Emergency Unemployment Compensation (EUC) benefits. Additionally, REA participants were nearly 20 percent more likely to obtain employment in the first two quarters after entering the program.
Selected States Provide Services to Help UI Claimants Find Work Using a Variety of Key Approaches
Selected States All Provide Reemployment Services to Connect UI Claimants to Jobs Quickly
Officials from all six of our selected states said they provide reemployment services designed to help UI claimants get back to work quickly. These services include assessing claimant skills and service needs, providing job search assistance and referrals, and conducting interviewing and resume workshops, among others. State officials said they may also refer claimants with more extensive needs to additional services, such as longer-term case management or retraining.
Selected States Vary in How They Deliver Services through their Primary Reemployment Programs for UI Claimants
Officials from all six of our selected states described operating reemployment programs that connect many UI claimants to the state’s public workforce system; we refer to these as primary reemployment programs. While the services available through these programs are similar, state approaches to selecting participants for and delivering services through these programs vary. According to information from state officials, these selected states’ primary reemployment programs generally follow the RESEA model of a one-stop center orientation and one-on-one meeting with a caseworker.
Officials in all six of our selected states said they served UI claimants identified as most likely to exhaust their benefits, as required by law, through their primary reemployment programs, but some select additional claimants for these programs as well. Officials in two states, Massachusetts and Nebraska, said they believe it is important for all claimants to have access to reemployment services and that they require all claimants to report to a one-stop center for an orientation and meeting with a caseworker. (See text box.)
State Spotlight: Service Goals
In 2015, Nebraska expanded its primary reemployment program, called NEres, to all unemployment insurance claimants, with state officials noting that all claimants can benefit from the high-quality services it offers.
In contrast, officials from three selected states said they prioritize claimants who are most likely to exhaust their benefits for reemployment services, and noted that these claimants have the greatest service needs. Officials from Wisconsin, for example, said claimants who are not selected for the state’s RESEA program are considered job ready and typically do not need in-person services. In addition to prioritizing claimants who are most likely to exhaust their benefits, our sixth selected state, Nevada, randomly selects additional claimants to participate in a state-funded reemployment program that is similar to the state’s RESEA program. Officials in Nevada said they believe their state-funded program allows them to serve claimants with less intensive needs more efficiently and builds upon the success of the state’s prior REA program.
Officials in the six selected states described varying approaches to providing reemployment services online versus in person. Officials in two states said their state strongly encourages the use of online services. For example, officials in Utah said all UI claimants are required to fill out an online needs assessment when filing a claim, and based on their responses, are required to complete up to five additional online workshops. These officials said leveraging online self-service options helps UI claimants adapt to using technology in the workplace and helps the state preserve limited financial resources (see text box). Similarly, officials in Wisconsin said claimants are required to complete an online needs assessment and orientation, and claimants can access various online workshops to address identified service needs. These officials believe this emphasis on online services will help claimants become more self-sufficient and in control of their job search.
State Spotlight: Online Services
Officials in Utah described the one-stop center's motto as "self-directed." One-stop center staff encourage customers to access services independently through the state's online portal in the computer lab so that they feel empowered to use online services at home.
In contrast, officials in three other selected states emphasized the benefits of in-person service provision. In Nebraska, officials said in-person meetings help one-stop center staff observe a claimant's potential employment barriers that might otherwise be hard to identify. Officials provided an example of a claimant who seemed well-positioned on paper to obtain employment, but in person clearly lacked good interviewing skills, prompting the caseworker to refer the claimant to additional interviewing support. In Texas, officials said in-person service provision, where possible, also helps promote program integrity as it enables caseworkers to more easily set the expectation that claimants must search for work to qualify for UI benefits. Additionally, officials in Nevada said establishing a personal connection with claimants can help one-stop staff encourage those struggling with the experience of applying for dozens of jobs online without receiving any feedback from prospective employers (see text box).
Officials in the six selected states also described varying approaches in the extent to which they provide reemployment services in group settings or on an individual basis. In RESEA guidance, DOL has encouraged the use of group services as a way to enhance efficiency, and officials in four selected states said they conduct group orientations through their primary reemployment programs. For example, in Massachusetts, officials said that all UI claimants attend a group Career Center Seminar, where one-stop center staff provide an overview of available reemployment services and local labor market conditions, and UI claimants complete a needs assessment and career action plan. In Nebraska, a caseworker said the use of group orientations is a strength of the state's program because it provides an opportunity for claimants to discuss shared challenges and network with each other. In contrast, Nevada provides all services through its primary reemployment program individually, which officials said they believe is more effective than group service provision. Officials said that during these individual meetings, caseworkers identify each claimant's barriers to employment and assess whether the claimant needs ongoing individual case management or if additional service referrals would be appropriate.
Selected States Leverage Technology and Integrate Program Resources to Help Improve Services
Officials from all six selected states said they use technology and integrate resources from across federally funded workforce programs as strategies that help enhance efficiency and improve UI claimant customer experiences.
Leveraging Technology
To help provide services more cost-effectively and enhance service delivery capacity, officials in two selected states, Utah and Wisconsin, said they invested resources into expanding the array of online self-service options available to UI claimants. Utah officials said the state increased its use of technology to meet heightened service demand during the Great Recession, and continues to encourage online self-service as a cost-effective, fiscally sustainable means of maintaining service levels with fewer staff. Similarly, officials in Wisconsin said the state's enhanced self-service options are central to its strategy for meeting current UI claimant needs and preparing the state for potential increases in UI claimant demand in an economic downturn.
Officials in five selected states said they have also used technology to help make services more customer-friendly, including the four selected states in which officials described improvements to their online job banks.
One of these states, Nebraska, added a mobile job bank application that, according to officials, has made it easier for UI claimants to use job bank features on their mobile devices and allows them to search for postings within a certain radius of their physical location. Nevada and Wisconsin officials also described other investments in mobile technology. Nevada, for instance, plans to implement a tool that will allow UI claimants to communicate with caseworkers via text message, such as by sending a picture of their first paystub to document that they found a job. Additionally, Wisconsin implemented a self-scheduling feature for initial RESEA meetings as part of broader upgrades to the state’s UI and workforce data systems.
Officials in all six selected states said they use technology to help caseworkers maximize their time. For example, officials in four states said integrating their state UI and workforce data systems has enabled them to automate some caseworker responsibilities. In Massachusetts and Wisconsin, officials said data system integration allows caseworkers to instantly transfer relevant information from the workforce data system to the UI data system, enabling them, for instance, to automatically trigger UI adjudication proceedings after a UI claimant fails to meet RESEA requirements. Officials from Wisconsin, Massachusetts, and Utah said their online self-scheduling features help save time that caseworkers would otherwise spend scheduling and rescheduling missed appointments. (See text box.) Officials in four selected states said they also use technological tools to help caseworkers focus their time on providing individualized services. For example, Nebraska developed a series of orientation videos designed to deliver clear, standardized information on job search requirements and available resources for claimants. As a result, caseworkers who manage in-person orientation sessions are able to focus on answering participant questions and emphasizing key information.
State Spotlight: Self-Scheduling Tool
Wisconsin officials said their online self-scheduling tool for participants in the Reemployment Services and Eligibility Assessment (RESEA) program has both freed up staff time and increased RESEA attendance rates. According to data provided by state officials, the percentage of scheduled RESEA meetings attended by claimants increased from about 69 percent in 2014 to 87 percent in 2016. Officials attributed this increase to the implementation of the self-scheduling tool in March 2015.
Program Integration
Officials from all six selected states cited the benefits of enhancing program integration, such as improving UI claimant access to services. Officials from four selected states said they aim to improve UI claimants' customer experience using a "no wrong door" service delivery framework in which one-stop center staff guide claimants and other job seekers to the services they need without requiring them to approach different siloed programs for services (see text box). Additionally, officials from three selected states said state workforce agencies work behind the scenes using integrated budgeting, or "braided funding," to align the appropriate federal resources so one-stop center staff can focus on service provision rather than funding source constraints. Officials in Utah and Wisconsin said integrated budgeting helped them support system-wide improvements, such as IT updates. For example, Wisconsin state officials said they strategically set aside funding from multiple programs to support the technology upgrades needed for a redesign of their reemployment program.
State Spotlight: Program Integration
Massachusetts cross-trains one-stop center staff on available workforce programs to increase collaboration and make the experiences of "shared" customers—those who receive services from more than one program—more seamless.
Finally, officials from all six of our selected states said that the Wagner-Peyser Employment Service—a federally funded workforce program that can be used to support any job seeker—is a critical federal resource that they use in conjunction with other workforce programs to meet the needs of UI claimants specifically. These six selected states described using the Wagner-Peyser Employment Service for a wide range of functions, including expanding reemployment service provision to claimants, supporting one-stop center staff or computer labs, and maintaining continuity of RESEA operations in periods of funding uncertainty.
States Served UI Claimants through Four Key Federally Funded Workforce Programs, but Data on Reemployment Service Expenditures Are Not Available
States Report that They Most Often Served UI Claimants through the Wagner-Peyser Employment Service
In program year 2015 (July 2015 through June 2016), states reported providing services to UI claimants through four key federally funded workforce programs, most often the Wagner-Peyser Employment Service, followed by RESEA, the WIOA Dislocated Worker program, and the WIOA Adult program (see fig. 2). (See appendix I for selected state participation data.)
States likewise served the largest number of all job seekers through the federally funded Wagner-Peyser Employment Service in program year 2015, followed by RESEA, the WIOA Adult program, and the WIOA Dislocated Worker program. The proportion of service recipients who were UI claimants, and the amount of DOL funding provided to states under these programs, also varied (see fig. 3).
The following sections discuss these programs in more detail.
Selected States Do Not Track All Reemployment Service Spending on UI Claimants, and DOL Officials Said Such Tracking Would Be Burdensome
Officials from all six of our selected states said their accounting systems did not generally track expenses by the UI claimant status of job seekers served, and as a result, they could not isolate all reemployment service spending on UI claimants specifically. For instance, Utah officials said they allocated workforce system costs across multiple funding streams by surveying staff members about their activities at random moments in time. Officials said that while a job seeker's UI claimant status may be relevant to some staff time charges (such as helping a job seeker apply for UI benefits), it would not be relevant, or even known, in other cases (such as providing computer lab assistance).
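As a simplified, hypothetical illustration of the random-moment approach Utah officials described (the activity categories, funding streams, and dollar figure below are invented, not drawn from Utah's actual system), shared costs can be allocated in proportion to the activities observed at randomly sampled moments:

```python
import random
from collections import Counter

# Hypothetical random-moment time study: staff are observed at random
# moments, and shared costs are allocated to funding streams in
# proportion to observed activity. Categories and streams are invented.

ACTIVITY_TO_STREAM = {
    "helping a claimant file a UI claim": "UI",
    "RESEA eligibility review": "RESEA",
    "resume workshop": "Wagner-Peyser",
    "computer lab assistance": "Wagner-Peyser",
}

def sample_moments(n, rng=random):
    """Simulate n random-moment observations of one-stop staff."""
    return [rng.choice(list(ACTIVITY_TO_STREAM)) for _ in range(n)]

def allocate(total_cost, observations):
    """Split a shared cost across funding streams in proportion to how
    often each stream's activities were observed."""
    tallies = Counter(ACTIVITY_TO_STREAM[a] for a in observations)
    return {stream: total_cost * count / len(observations)
            for stream, count in tallies.items()}

observations = sample_moments(1000)
for stream, cost in sorted(allocate(250_000.0, observations).items()):
    print(f"{stream}: ${cost:,.2f}")
```

Note that, as the Utah officials' example suggests, an observation such as computer lab assistance carries no information about whether the person helped was a UI claimant, which is why such systems cannot isolate spending on claimants specifically.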
Officials from DOL said it would be burdensome for states to track and report workforce program expenditures on reemployment services provided to UI claimants specifically, as states have flexibility to use funds from multiple federal sources on services to both claimants and other job seekers. DOL officials said they believe states mainly rely on RESEA, Wagner-Peyser, WIOA Dislocated Worker, and WIOA Adult funds to support UI claimant reemployment services. DOL has also reported that some states, including one of our selected states (Nevada), collect taxes designated for purposes that may include reemployment services.
Our six selected states also provided some UI claimant reemployment services through their primary reemployment programs, and five of these states were able to provide us with summary expenditure data from these programs. These five states chiefly leveraged RESEA funds to support these programs in state fiscal year 2017, and three states supplemented RESEA funds with funds from other sources (see fig. 8).
Of the three states that supplement RESEA funds with other sources, two (Nebraska and Wisconsin) used Wagner-Peyser funds, and one (Nevada) used state funds. Nebraska officials said they leveraged flexible Wagner- Peyser funds to enable the state to serve all UI claimants through its primary reemployment program. Wisconsin officials said that they, too, used Wagner-Peyser funds to expand the capacity of their state’s primary reemployment program, but did not aim to serve all UI claimants. Nevada officials said they used state funds from an employer payroll tax to provide reemployment services to randomly selected UI claimants not already selected for RESEA.
States Use Different Profiling Systems to Target UI Claimants for Services, but DOL Has Not Collected Needed Information or Fully Advised States about Profiling Options
States Use a Range of Practices to Profile UI Claimants for Reemployment Services
Past national studies and our review of information from nine selected states indicate that the practices used by states to profile, or identify, UI claimants who are most likely to exhaust their benefits and need assistance returning to work differ. A 2007 DOL-sponsored study and a 2014 follow-up questionnaire to states found that, nationally, a large majority of states reported using statistical profiling systems, while a few states used a type of non-statistical profiling system known as a characteristic screen. (See text boxes.) The 2007 study also found that the performance of states’ profiling systems varied widely. Specifically, while some systems predicted claimants’ likelihood of benefit exhaustion relatively well, others did not perform much better than random chance. Accepted statistical practices recommend that profiling systems be updated regularly, and DOL has recommended that states update their profiling systems every 2 to 4 years. However, more than half of states that responded to the 2014 questionnaire reported that they had not updated their systems since before 2008.
Statistical Profiling Systems
Statistical profiling systems predict each unemployment insurance (UI) claimant's likelihood of exhausting benefits based on claimant characteristics (see example below), which are each assigned weights through a statistical process. The system produces a ranked list, and claimants with the highest predicted likelihood of exhausting benefits are selected for reemployment services.
Sample Characteristics Used to Predict Benefit Exhaustion
Weeks of UI benefits used in the past 3 years
Non-Statistical Profiling Systems (example: Characteristic Screen)
Non-statistical profiling systems select claimants for services using a process that does not rely on statistical analysis. One example of these, characteristic screens, sorts unemployment insurance (UI) claimants into two groups, based on the presence of certain characteristics (such as being only temporarily unemployed or in an approved training program). Claimants with one or more of these characteristics are considered not likely to exhaust their benefits, and are excluded from service requirements. Remaining claimants are considered likely to exhaust their benefits, and a subset is randomly selected for reemployment services.
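To make the two approaches concrete, the following is a minimal sketch pairing a logistic-regression-style statistical ranking with a simple characteristic screen. The characteristics, weights, and screen rules are assumptions for illustration, not any state's actual system, and a real system would be limited to the characteristics DOL's guidance permits.

```python
import math
import random

# Illustrative only: characteristics, weights, and screen rules are
# invented and are not drawn from any state's actual profiling system.

def statistical_score(claimant, weights, intercept):
    """Predict a claimant's likelihood of exhausting benefits with a
    logistic-regression-style model: sigmoid(intercept + weighted sum)."""
    z = intercept + sum(w * claimant[name] for name, w in weights.items())
    return 1.0 / (1.0 + math.exp(-z))

def statistical_selection(claimants, weights, intercept, capacity):
    """Rank claimants by predicted likelihood of exhaustion and select
    the highest-ranked, up to available service capacity."""
    ranked = sorted(claimants,
                    key=lambda c: statistical_score(c, weights, intercept),
                    reverse=True)
    return ranked[:capacity]

def characteristic_screen(claimants, capacity, rng=random):
    """Exclude claimants who are only temporarily unemployed or in
    approved training; randomly select a subset of the remainder."""
    likely = [c for c in claimants
              if not c["temporarily_unemployed"]
              and not c["in_approved_training"]]
    return rng.sample(likely, min(capacity, len(likely)))

weights = {"prior_ui_weeks": 0.05, "education_years": -0.10}  # assumed
claimants = [
    {"prior_ui_weeks": 20, "education_years": 12,
     "temporarily_unemployed": False, "in_approved_training": False},
    {"prior_ui_weeks": 2, "education_years": 16,
     "temporarily_unemployed": True, "in_approved_training": False},
]
print(statistical_selection(claimants, weights, intercept=-1.0, capacity=1))
print(characteristic_screen(claimants, capacity=1))
```

Either way, the output is the subset of claimants referred to mandatory reemployment services; the approaches differ in whether that subset is chosen by predicted risk or by random draw from the non-exempt pool.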
Of the nine selected states whose profiling systems we reviewed, six use statistical systems and three use non-statistical systems, and profiling practices vary widely, even among states using the same type of system. The six states with statistical systems have varying levels of system sophistication, and different system assessment and updating practices. For example, officials in one state said they invested substantial time and resources in building a sophisticated statistical profiling system and assessing its performance. To maintain the system, officials said they update it biannually through a yearlong, resource- intensive process. Officials described this process as important, noting that employer needs and the economy change over time, as do other factors that influence UI claimants’ likelihood of exhausting their benefits. State officials further said that as part of a large umbrella agency with oversight of numerous federal workforce programs, they have the resources needed to sustain a centralized data office with the capacity to build and maintain a sophisticated statistical system.
Officials in another state told us they had recently replaced their sophisticated statistical profiling system, which was based on the principles of machine learning, with a new, more straightforward statistical system. While DOL officials said the state's prior system was innovative, state officials said that after the person who developed it left the agency, they did not know how to update it. The official charged with developing the state's new profiling system said he had to re-familiarize himself with statistical modeling practices in order to build it, and that it took months to complete. State officials said they had not yet established a performance assessment and updating process for the new system, and that they would need to gather additional data and determine how to address certain analytical challenges before doing so.
Officials from a third state agency said they were using a statistical profiling system that had not been updated in over 25 years, and had asked DOL to help them develop a new statistical profiling system because they lacked the expertise to do so themselves. In March 2017, DOL provided the new system to the contractor that maintains the state’s UI data system and will be responsible for running the new system. However, in June 2018, state officials told us they had delayed implementing the new system until the state completed a UI modernization project. Further, while state officials said they plan to keep the system up-to-date once implemented, they acknowledged that they do not have staff with the skills to do so, and will likely need continued DOL support.
For the three selected states that use non-statistical profiling systems, state officials said that these systems generally require little effort to maintain. Officials in two of these states reported using characteristic screens, which sort claimants into two groups to identify and exempt from service requirements those claimants who meet certain conditions, such as being only temporarily unemployed or in an approved training program. An official from each state said they aim to serve all non-exempt claimants through their reemployment programs.
The third state recently implemented a non-statistical claimant needs assessment that replaced the state’s outdated statistical profiling system, which officials said had never been updated and was only used to comply with the federal profiling requirement. With the new needs assessment, claimant responses to questions such as, “Do you have a resume?” and “How many job interviews have you had in the last month?” are scored to determine whether the claimant is job-ready or needs reemployment services. (See text box.) Caseworkers can also use these responses to make more effective service referrals during their appointments with claimants. For instance, if a claimant reported not having a current resume, a caseworker might refer the claimant to a resume workshop. In addition, officials said that program administrators can easily adjust the scoring and weights used in the assessment, and that they review it each year for potential updates.
Sample Alternative Non-Statistical Profiling System (Needs Assessment)

One selected state's claimant needs assessment scores claimant responses to a questionnaire about job readiness to determine if claimants need reemployment services. Those responses also provide caseworkers with direct information about claimant needs.
How long have you been looking for work?
Do you have a cover letter?
Do you need help preparing for an interview?
Do you have the computer skills needed to complete online job applications?
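As an illustration of how such scoring might be implemented, the following minimal sketch (in Python) assigns points and weights to answers like those above and compares the total to a referral threshold. The weights, point values, and threshold are invented for illustration; as the officials noted, administrators would adjust them over time.

# Illustrative weights and point values only; not the state's actual scoring.
WEIGHTS = {
    "months_looking": 2.0,
    "no_cover_letter": 1.0,
    "interview_help": 1.5,
    "no_computer_skills": 1.0,
}
REFERRAL_THRESHOLD = 8.0  # at or above: refer to reemployment services

def needs_score(answers: dict) -> float:
    """Higher scores indicate greater need for reemployment services."""
    score = WEIGHTS["months_looking"] * min(answers["months_looking_for_work"], 6)
    score += WEIGHTS["no_cover_letter"] * (3 if not answers["has_cover_letter"] else 0)
    score += WEIGHTS["interview_help"] * (3 if answers["needs_interview_help"] else 0)
    score += WEIGHTS["no_computer_skills"] * (3 if not answers["has_computer_skills"] else 0)
    return score

claimant = {
    "months_looking_for_work": 4,
    "has_cover_letter": False,
    "needs_interview_help": True,
    "has_computer_skills": True,
}
total = needs_score(claimant)
print(total, "refer to services" if total >= REFERRAL_THRESHOLD else "job-ready")

A caseworker reviewing the same answers would also see, for instance, that this claimant lacks a cover letter and could make a targeted referral, mirroring the dual use of the assessment described above.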
DOL Has Not Systematically Collected Information on State Profiling Systems that Could Inform Its Oversight and Technical Assistance Efforts
Despite past research identifying weaknesses in state profiling systems, DOL has not systematically collected information on these systems, which limits its ability to oversee their performance. DOL officials said that they communicate with states about their profiling practices and gather some profiling system information in the course of their periodic UI and RESEA reviews. However, DOL technical staff do not review or maintain this profiling system information for oversight purposes, and DOL does not have a systematic method of tracking profiling practices across states. DOL officials said that they view their primary role related to profiling systems as providing technical assistance; however, by law, DOL is also responsible for ensuring that states' profiling systems meet federal requirements. Further, we recommended in a 2007 report that DOL take a more active role in ensuring profiling system accuracy, and federal internal control standards state that agencies should obtain timely and relevant data to conduct effective monitoring. Without such data, DOL's ability to effectively oversee state profiling practices is limited.
In addition, DOL provides technical assistance—which can range from answering specific questions to developing a new statistical profiling system on a state’s behalf—to individual states only upon request, rather than identifying and providing assistance to states at higher risk of poor profiling system performance. This approach necessitates that states recognize when they need technical assistance and request it. However, states may not know that their profiling systems are performing poorly and may not request needed technical assistance as a result. For example, officials from four of our six selected states with statistical systems told us that they do not currently have a process to assess their systems’ performance. As a result, these states may not be aware of potential issues they may need to address to improve their system performance. Additionally, officials responsible for maintaining another selected state’s profiling system had incorrectly identified the system type. As a result, officials may have difficulty identifying problems and seeking support.
DOL has an opportunity to use its new UI state self-assessment to systematically collect information that could inform its oversight of state profiling practices and technical assistance efforts. This questionnaire, which DOL designed to help states self-identify and correct UI system weaknesses, covers 15 functional areas. Self-assessment questions in one of these areas will collect some information on state profiling systems, such as system type and date of last update. However, as currently designed, the self-assessment will not solicit other information that could help DOL identify states at risk of poor system performance. For example, it does not ask whether states have experienced challenges maintaining their systems (for instance, due to staff turnover), or how states have assessed system performance. DOL officials told us regional staff will review state responses to the self-assessment, the first of which are due in March 2019, and which will be one piece of information used to identify states that DOL might prioritize for general UI program oversight.
While DOL officials said it would make sense to use the information gathered to inform oversight of profiling systems as well, they did not have specific plans about how they would do so. Federal internal control standards state that agencies should identify, analyze, and respond to risks. Without collecting more detailed and consistent profiling system information and having a clearer plan for how to use it, DOL’s ability to conduct effective monitoring and respond to risks will continue to be limited. More specifically, DOL may miss opportunities to help states at risk of poor profiling system performance better identify UI claimants most in need of reemployment services.
DOL Guidance Does Not Fully Address State Options for Meeting Profiling System Requirements
DOL’s current profiling guidance does not clearly and comprehensively communicate the profiling system options available to states, which may prevent states from using the profiling systems that best suit their needs. While the law does not specify a particular type of profiling system states must use, DOL’s only formal profiling guidance, issued in 1994, describes only two state options: statistical systems and characteristic screens, a type of non-statistical system. Further the guidance encourages states to use statistical systems, which it asserts are more efficient and precise, and easier to manage and adapt, than non-statistical systems. DOL officials who provide technical assistance to states told us they also encourage all states to use statistical profiling systems for the same reasons. However, DOL officials acknowledged that, in practice, not all statistical profiling systems predict benefits exhaustion well, particularly outdated systems. The 2007 DOL-sponsored study similarly found that some state profiling systems did not predict benefit exhaustion much more accurately than random chance.
Additionally, statistical profiling systems may be more difficult for some states to develop and maintain than non-statistical systems. DOL officials acknowledged that states with technical capacity issues, such as staffing and data system limitations, may experience particular challenges. Officials we spoke to in four of our six selected states with statistical profiling systems told us that they have faced these challenges. In contrast, officials from all of our selected states with non-statistical profiling systems said their systems are easy to maintain. Officials from one state that uses a claimant needs assessment said this system also provides useful information that caseworkers can review prior to one-on-one meetings with claimants.
DOL officials told us they are supportive of state experimentation with alternative profiling approaches. However, officials in our selected states had differing perspectives on DOL's views on state flexibility and options for pursuing experimentation. For example, an official in one state was interested in changing the outcome variable that the state's statistical system predicted, believing the change could reduce UI program expenditures. The state consulted with regional DOL staff about the possible revision and made the change with DOL's support. In contrast, an official in another state who wanted to make a similar change to that state's statistical profiling system has not pursued the change or discussed it with DOL officials because he believes such a change would not be allowed.
Further, some of our selected states differed in their understanding of state flexibility to use the type of profiling system that works best for them. For example, officials in one of our selected states said they are switching to a statistical system after longstanding encouragement by DOL to do so, even though a key official expressed concern that a statistical system may not be useful, given the state’s goal of providing services to all UI claimants. In contrast, officials in another state said they had recently replaced their outdated statistical profiling system with a claimant needs assessment that differs from the options described in DOL’s 1994 guidance, after requesting DOL review of their revised approach.
The differences in states’ perspectives on allowable options for profiling systems may in part be due to the fact that DOL’s current profiling guidance is limited and outdated. The guidance was issued in 1994, and it does not clearly reflect all of the options available to states, such as using a different outcome variable in a statistical system, or implementing an alternative type of non-statistical system to meet worker profiling requirements. Further, while a key DOL official said they are open to reviewing alternative state profiling approaches, they do not have a formal process for doing so, nor does guidance address the option for DOL to review alternative approaches. DOL officials said they believe the existing guidance provides states relatively wide latitude in designing their profiling systems and, as a result, they have not found the need to change those guidelines. However, federal internal control standards emphasize the importance of periodically reviewing policy for continued relevance and effectiveness in achieving objectives. Without clearer, more current policy information from DOL on profiling requirements and available options, state officials may continue to have differing understandings of what they can do, and states may not pursue innovations that could improve their profiling systems, better suit their technical capacity, and, ultimately, better target claimants for reemployment services.
Conclusions
With 5.7 million UI claimants receiving nearly $30 billion in unemployment benefits in 2017, reemployment services have the potential to substantially improve employment outcomes and conserve resources by shortening UI claimants’ periods of unemployment. Earlier this year, Congress authorized up to approximately $3.9 billion in funding over the next decade for the RESEA program, which states use to provide services to UI claimants most likely to exhaust their benefits. However, DOL has not taken key steps to help states effectively identify and select such claimants for the program. DOL has the opportunity to collect more systematic information on state practices for profiling UI claimants through its new UI state self-assessment, but the information it is planning to collect is limited and may not enable DOL to identify states that are having trouble identifying claimants in need of services. Further, DOL does not have a process for how it can use information on state risks of poor profiling system performance to guide its oversight and technical assistance efforts, choosing largely to assist individual states only when asked. Some states may not be equipped to identify weaknesses in their profiling systems, and as a result may not request the assistance they need. In addition, DOL encourages all states to use statistical profiling systems despite acknowledging that some states’ statistical systems, particularly outdated ones, may not perform well in practice. Moreover, its profiling guidance to states has not been updated since 1994, and may not reflect the flexibility afforded states to pursue alternative profiling options. Without clearer, more current information from DOL, states may not pursue innovations that could help them better identify the UI claimants who need reemployment services most.
Recommendations for Executive Action
We are making the following three recommendations to the Department of Labor:
The Secretary of Labor should systematically collect sufficient information on state profiling systems, possibly through DOL’s new UI state self-assessment process, to identify states at risk of poor profiling system performance. For instance, DOL could collect information on challenges states have experienced using and maintaining their profiling systems, planned changes to the systems, or state processes for assessing the systems’ performance. (Recommendation 1)
The Secretary of Labor should develop a process to use information on state risks of poor profiling system performance to provide technical assistance to states that need to improve their systems. DOL may also wish to tailor its technical assistance based on state service delivery goals and technical capacity. (Recommendation 2)
The Secretary of Labor should update agency guidance to ensure that it clearly informs states about the range of allowable profiling approaches. (Recommendation 3)
Agency Comments and Our Evaluation
We provided a draft of this product to the Department of Labor for comment. In its comments, reproduced in appendix II, DOL agreed with our recommendations and stated that it would take action to address them. DOL reiterated its commitment to providing technical assistance to states and strengthening the connection between the UI program and the public workforce system. DOL also provided technical comments, which we incorporated as appropriate. Additionally, we provided relevant excerpts of the draft report to officials in the selected states we included in our review. We incorporated their technical comments as appropriate.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Secretary of the Department of Labor, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at 202-512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
Appendix I: Selected State Program Participation Data
We selected six states—Massachusetts, Nebraska, Nevada, Texas, Utah, and Wisconsin—for in-depth review. These six selected states all served unemployment insurance (UI) claimants through several key federally funded workforce programs in program year 2015 (July 2015 through June 2016). For the five states that confirmed the reliability of the data they reported to the Department of Labor (DOL) over this time period, the numbers of UI claimants served through each program and the percentages of all service recipients who were UI claimants varied. Summary data from each of these five states are presented in figures 9 through 13.
Appendix II: Comments from the Department of Labor
Appendix III: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Rebecca Woiwode (Assistant Director), Ellen Phelps Ranen (Analyst-In-Charge), Caitlin Croake, Margaret Hettinger, Efrain Magallan, and Amrita Sen made key contributions to this report. Also contributing to this report were Lilia Chaidez, Alex Galuten, Thomas James, Nicole Jarvis, Serena Lo, Mimi Nguyen, Jessica Orr, Karissa Robie, Almeta Spencer, and Jeff Tessin.

Why GAO Did This Study
In 2017, the UI program provided about $30 billion in temporary income support to 5.7 million claimants who became unemployed through no fault of their own. The federal government provides various resources states can use to help UI claimants achieve reemployment. GAO was asked to review how states identify and serve claimants who need such assistance.
This report examines, among other things, (1) what key federal programs and approaches states used to help UI claimants return to work, and (2) how states used profiling systems to identify claimants who are most likely to exhaust their benefits and need assistance returning to work. GAO reviewed relevant federal laws and guidance; analyzed the most recent available national data on UI claimant participation in key workforce programs, from July 2015 through June 2016; interviewed officials from DOL, six states with key reemployment practices, and three additional states with a variety of profiling practices; and reviewed national studies examining state profiling systems.
What GAO Found
Nationwide, four key federally funded workforce programs helped states provide reemployment services, such as career counseling and job search assistance, to millions of unemployment insurance (UI) claimants, according to data from July 2015 through June 2016, the most recent period available (see table). The six selected states GAO reviewed in-depth reported using these key programs to support their efforts to help claimants return to work. Selected state officials described skills assessments, job search assistance, and interview and resume workshops as the types of services they use to connect UI claimants to jobs quickly. Officials also described varying service delivery approaches, with some of the selected states emphasizing the use of online services, while others relied to a greater extent on in-person services.
According to a 2014 national questionnaire to states, most states used a statistical system to identify UI claimants who are most likely to exhaust their benefits and need assistance returning to work (known as profiling). Six of the nine states GAO reviewed used statistical systems and three used non-statistical approaches. GAO identified several concerns with the Department of Labor's (DOL) oversight and support of state UI profiling systems:
Although a 2007 DOL-commissioned study found that some statistical systems may not perform well, DOL has not collected the information needed to identify states at risk of poor profiling system performance.
Some selected states have faced technical challenges in implementing and updating their statistical systems. However, DOL does not have a process for identifying and providing technical assistance to states at risk of poor system performance or those facing technical challenges. Instead, it only provides assistance to those states that request it.
While states have latitude to choose their preferred profiling approach, DOL's 1994 guidance encourages all states to use statistical systems. Because DOL has not updated this guidance to ensure that it clearly communicates all available profiling system options, some states may not be aware that they have greater flexibility in choosing an option that best suits their needs.
What GAO Recommends
GAO recommends that DOL (1) systematically collect sufficient information to identify states at risk of poor profiling system performance, (2) develop a process for providing risk-based technical assistance to such states, and (3) update guidance to clarify state profiling options. DOL agreed with these recommendations.
Background
NASA’s mission is to drive advances in science, technology, aeronautics, and space exploration to enhance knowledge, education, innovation, economic vitality, and stewardship of Earth. The NASA Administrator is responsible for leading the agency and is accountable for all aspects of its mission, including establishing and articulating its vision and strategic priorities and ensuring successful implementation of supporting policies, programs, and performance assessments.
Within NASA headquarters, the agency has four mission directorates that define its major core mission work: (1) Aeronautics Research conducts cutting-edge research to enable revolutionary advances in future aircraft, as well as in the airspace in which they will fly; (2) Human Exploration and Operations is responsible for NASA space operations, developing new exploration and transportation systems, and performing scientific research; (3) Science carries out the scientific exploration of Earth and space to expand the frontiers of Earth science, planetary science, and astrophysics; and (4) Space Technology develops revolutionary technologies through transparent, collaborative partnerships that expand the boundaries of aerospace. The agency also has a mission support directorate to manage its business needs and administrative functions, such as human capital management.
In addition to NASA headquarters in Washington, D.C., the agency is composed of nine field centers managed by NASA employees, and one federally funded research and development center that are responsible for executing programs and projects. NASA centers are located throughout the country and manage projects or programs for multiple mission directorates. For example, the Goddard Space Flight Center supports various IT programs within the Science mission directorate, while the Johnson Space Center supports multiple programs in the Human Exploration and Operations mission directorate.
According to NASA documents, the agency planned to spend $1.6 billion of its fiscal year 2018 budget authority on IT. Of this total, $888 million was to be used for business IT and $672.8 million was to be used for mission IT. Business IT includes the infrastructure and systems needed to support internal agency operations, such as commodity IT (e.g., e-mail and communications systems), infrastructure, IT management, administrative services, and support systems, whereas mission IT includes the technology needed to support space programs and research for the agency’s mission programs. The technology that the agency uses to support its mission programs includes highly-specialized IT, defined by NASA as any equipment, system, and/or software that is used to acquire, store, retrieve, manipulate, and/or transmit data or information when the IT is embedded in a mission platform or provides a platform required for simulating, executing, or operating a mission.
Historically, NASA and its Inspector General have reported that funding for and oversight of highly-specialized IT has been decentralized among mission directorates and embedded within launch programs and other mission activities instead of being identified as IT to be managed as part of the agency’s IT portfolio. According to the Inspector General, the agency’s decentralized funding for and oversight of IT has minimized agency-wide visibility into and oversight of NASA’s spending on these systems.
NASA’s IT Management and Governance Structure
The agency’s Chief Information Officer (CIO) reports directly to the NASA Administrator and serves as the principal advisor to the NASA Administrator and senior officials on all matters pertaining to IT. The CIO is to provide leadership, planning, policy direction, and oversight for the management of NASA’s information and systems. Toward this end, the CIO’s responsibilities include developing and implementing approaches for executing the goals and outcomes in the NASA strategic plan; ensuring that the agency’s human resources possess the requisite knowledge and skills in IT and information resources management; maximizing the value of NASA IT investments through an investment management process; and leading and implementing the agency’s IT security program. The CIO also is responsible for developing and implementing agency-wide IT policies and processes.
NASA’s CIO also is to direct, manage, and provide policy guidance and oversight of the agency’s center CIOs. Each center has a CIO responsible for supporting center leadership and managing IT staff.
Similarly, each mission directorate has a representative who coordinates with programs on IT-specific issues and, as needed, obtains support from the Office of the CIO. Both center CIOs and mission directorate IT representatives report to the NASA CIO and to the leadership of their respective centers and mission directorates.
The CIO is supported by staff in the Office of the CIO. This office is organized into four divisions responsible for (1) IT security, (2) capital planning and governance, (3) technology and innovation, and (4) enterprise services and integration. Collectively, these divisions support NASA’s approach to IT strategic and workforce planning, governance boards and practices, and cybersecurity.
In March 2017, the Office of the CIO submitted plans to establish a fifth division focused on new applications, and also to rename existing divisions to better represent the services they provide. For example, the Office of the CIO proposed that the Capital Planning and Governance Division be renamed the IT Business Management Division. As of March 2018, NASA had not yet approved or implemented the planned reorganization.
Figure 1 depicts the organization of the Office of the CIO, including relevant reporting relationships for center CIOs and mission directorate IT representatives, as of March 2018.
GAO and NASA’s Office of Inspector General Have Reported on Longstanding Weaknesses in IT Management
We and NASA’s Office of Inspector General have reported on longstanding IT management weaknesses within the agency. For example, in October 2009, we reported that NASA had made progress in implementing IT security controls and aspects of its information security program, but that it had not always implemented appropriate controls to sufficiently protect the confidentiality, integrity, and availability of information and systems. We also identified control vulnerabilities and program shortfalls, which, collectively, increased the risk of unauthorized access to NASA’s sensitive information, as well as inadvertent or deliberate disruption of its system operations and services. We recommended that the NASA Administrator take steps to mitigate control vulnerabilities and fully implement a comprehensive information security program. The agency concurred with our eight recommendations and stated that it was taking actions to mitigate the information security weaknesses identified.
In addition, NASA’s Office of Inspector General has issued 24 reports over the last 7 years on IT governance and security weaknesses at the agency. For example, in June 2013, the office reported that the decentralized nature of NASA’s operations and its longstanding culture of autonomy had hindered the agency’s ability to implement effective IT governance. Specifically, the report stated that the CIO had limited visibility and control over a majority of IT investments, operated in an organizational structure that marginalized the authority of the position, and could not enforce security measures across NASA’s computer networks. Moreover, the IT governance structure in place at the time was overly complex, did not function effectively, and operated under a decentralized model that relegated decision making about critical IT issues to numerous individuals across NASA, leaving such decisions outside the purview of the CIO.
The Office of Inspector General made eight recommendations to the NASA Administrator for improving IT governance, including calling for all governance to be consolidated within the Office of the CIO to ensure adequate visibility, accountability, and integration into all mission-related IT assets and activities. The Administrator concurred with six and partially concurred with two of the recommendations and planned actions sufficient for the Office of Inspector General to close all eight recommendations as implemented. However, the Office of Inspector General later reported that the extent to which NASA had implemented the agreed-upon changes was in doubt based on subsequent audit findings that NASA was still struggling with limited agency CIO authority, decentralized IT operations, and ineffective IT governance.
A follow-on report issued in October 2017 described a continued lack of progress in improving IT governance, determined that the CIO’s visibility into investments across the agency continued to be limited, and identified flaws in the process developed to improve governance. Specifically, the Office of Inspector General noted that the Office of the CIO had made changes to its IT governance boards over the past few years, but the boards had not made strategic decisions to substantively impact how NASA IT would be managed. According to the Office of Inspector General, slow implementation of the revised governance structure had left many IT officials operating under the previous inefficient and ineffective framework.
The report also noted that, as of August 2017, the Office of the CIO had not finalized the roles and responsibilities for IT management and lingering confusion regarding security roles, coupled with poor IT inventory practices, had negatively impacted NASA’s security posture. Importantly, the report explained that the Office of the CIO continued to have limited influence over IT management within the mission directorates and at centers.
The Office of Inspector General made five recommendations to the CIO that were intended to improve, among other things, governance and security. As of October 2017, NASA had concurred with three recommendations, partially concurred with two recommendations, and described corrective actions taken or planned. However, the Office of Inspector General found that NASA’s original proposed action to address the fourth recommendation was insufficient; thus, in December 2017, the agency established additional proposed actions to address that recommendation.
Key IT Management Disciplines
We have identified a set of essential and complementary management disciplines that provide a sound foundation for IT management. These include the following:
Strategic planning: Strategic planning defines what an organization seeks to accomplish and identifies the strategies it will use to achieve desired results. We have previously reported that a defined strategic planning process allows an agency to clearly articulate its strategic direction and establish linkages among planning practices, such as goals, objectives, and strategies and identified leading practices for agency planning.
Workforce planning: We have previously reported that it is important for an agency to have a strong IT workforce to help ensure the timely and effective acquisition of IT. In November 2016, we identified eight key workforce planning activities derived from the Clinger-Cohen Act of 1996 and relevant guidance, including memorandums and guidance from OPM and OMB, and prior GAO reports. These laws and guidance focus on the importance of setting the strategic direction for workforce planning, analyzing the workforce to identify skill gaps, developing strategies to address skill gaps, and monitoring and reporting on progress in addressing skill gaps.
IT governance: IT projects can significantly improve an organization’s performance, but they can also become costly, risky, and unproductive. In 1996, Congress passed the Clinger-Cohen Act, which requires executive branch agencies to establish a process for selecting, managing, and evaluating investments in order to maximize the value and assess and manage the risks of IT acquisitions. Agencies can maximize the value of their investments and minimize the risks of their acquisitions by having an effective and efficient governance process, as described in GAO’s guide to effective IT investment management.
Cybersecurity: Federal systems and networks are often interconnected with other internal and external systems and networks, including the Internet. When systems are interconnected, the number of avenues of attack increases and the attack surface expands. Effective security for agency systems and data is essential to prevent data tampering, disruptions in critical operations, fraud, and inappropriate disclosure of sensitive information, including personal information entrusted to the government by members of the American public. Taking action to assure that an agency’s contractors and partners are adequately protecting the agency’s information and systems is one way an agency can address cybersecurity risks.
NIST has issued a suite of information security standards and guidelines that, collectively, provide comprehensive guidance on managing cybersecurity risk to agencies and any entities performing work on the agencies’ behalf. NIST’s cybersecurity framework was issued in February 2014 in response to Executive Order 13636. The framework outlines a risk-based approach to managing cybersecurity risk and protecting an organization’s critical information assets. Subsequent to the issuance of the cybersecurity framework, a May 2017 executive order required agencies to use the framework to manage cybersecurity risks. The order outlined actions to enhance cybersecurity across federal agencies and critical infrastructure to improve the nation’s cyber posture and capabilities against cybersecurity threats to digital and physical security.
NASA Has Not Yet Effectively Established and Implemented Leading IT Management Practices
NASA has not yet effectively established and implemented leading IT management practices for strategic planning, workforce planning, governance, and cybersecurity. Specifically,
The agency’s IT strategic planning process is not yet fully documented and its IT strategic plan lacks key elements called for by leading practices.
NASA has not yet established an IT workforce planning process consistent with leading practices.
The agency has taken recent action to improve its IT governance structure; however, it has not yet fully established that structure, documented improvements to its investment selection process, fully implemented investment oversight leading practices, or fully defined its policies and procedures for IT portfolio management.
NASA has not fully established an effective approach to managing agency-wide cybersecurity risk. While it has designated a risk executive, the agency lacks a dedicated office to provide comprehensive executive oversight of risks. In addition, the agency-wide cybersecurity risk management strategy is currently in development, and the agency's information security program plan does not address all leading practices and has not been finalized. Further, policies and procedures for protecting NASA's information systems are in place, but the agency has not ensured that they are always current or integrated.
NASA Has Not Fully Documented Its IT Strategic Planning Process
Leading practices of IT strategic planning established in OMB guidance call for an agency to document its IT strategic planning process, including, at a minimum, documenting the responsibilities and accountability for IT resources across the agency. It also calls for documenting the method by which the agency defines its IT needs and develops strategies, systems, and capabilities to meet those needs.
NASA’s documented IT strategic planning process describes the responsibilities and accountability for IT resources across the agency. For example, NASA has assigned specific governance bodies with responsibility for developing and overseeing the implementation of the IT strategy. Also, in its IT strategic plan, NASA described key stakeholders across the agency that are responsible for the development of the plan. These stakeholders include the Associate CIOs, representatives from mission directorates, mission support organizations, and the centers.
On the other hand, the methods by which the agency defines its IT needs and develops strategies, systems, and capabilities to meet those needs are not documented. For example, according to the IT strategic plan, the Office of the CIO is to perform a gap analysis to inform the development of NASA’s roadmap that translates its IT needs and the strategies identified for meeting those needs into tactical plans. The tactical plans are to define how the strategic plan will be incrementally executed to achieve the longer term goals.
However, the Office of the CIO has not documented in its strategic planning policies and procedures how the CIO will perform the gap analysis or the methods for developing these tactical plans and roadmaps. This is particularly important since, according to officials in NASA’s Office of the CIO, the centers vary as to whether they have developed their own IT strategic plans or tactical plans, and the office does not oversee or review any center-level plans to ensure they align with the NASA IT strategic plan.
According to officials in the Office of the CIO, NASA used a new model in formulating its IT strategy for fiscal years 2018 to 2021, such as including a broader set of stakeholders in the strategic planning cycle before documenting the strategic planning process. The officials stated that they intend to identify lessons learned from using this new model and formally document a complete and repeatable IT strategic planning process in the future. However, the agency has not established time frames for when the Office of the CIO will fully document its strategic planning process. Without a fully documented strategic planning process, NASA risks not being able to clearly articulate what it seeks to accomplish and identify the IT resources needed to achieve desired results in a way that is consistent and complete.
NASA Has Improved Its IT Strategic Plan, but Has Not Yet Established a Comprehensive Plan
In addition to calling for agencies to fully document the strategic planning process, leading practices from OMB guidance and our prior research and experience at federal agencies have shown that an agency should develop a comprehensive and effective IT strategic plan that (1) is aligned with the agency’s overall strategy; (2) identifies the mission of the agency, results-oriented goals, and performance measures that permit the agency to determine whether implementation of the plan is succeeding; (3) includes strategies, with resources and time frames, that the governing IT organization intends to use to achieve desired results; and (4) provides descriptions of interdependencies within and across projects so that they can be understood and managed. The resulting plan is to serve as an agency’s vision, or road map, and help align information resources with business strategies and investment decisions.
NASA has taken steps to improve its IT strategic plan, but the updated plan is not comprehensive in that it does not fully address all four elements of a comprehensive and effective plan outlined above. In this regard, the agency had a prior strategic plan covering the time frame of March 2014 to November 2017. More recently, in December 2017, the CIO and Associate Administrator approved an updated plan for implementation. The updated plan is intended for use from the date it was approved through fiscal year 2021.
Regarding the four elements of a comprehensive IT strategic plan, NASA’s prior plan addressed one element, partially addressed two elements, and did not address one element. The updated plan was slightly improved in that it addressed two elements, partially met one element, and did not meet one element of a comprehensive strategic plan. Table 1 provides a summary of the extent to which NASA’s prior IT strategic plan (covering the time frame of March 2014 to November 2017) and recently updated IT strategic plan (covering the time frame of December 2017 to fiscal year 2021) addressed key elements of a comprehensive strategic plan.
NASA’s prior IT strategic plan was aligned with the agency’s overall strategic plan and identified the mission of the agency and results- oriented goals. However, these goals were not linked to specific performance measures that were needed to track progress and did not always describe strategies to achieve desired results. Additionally, this plan lacked descriptions of interdependencies within and across projects.
NASA’s updated IT strategic plan is aligned with the agency’s overall strategic plan and identifies the mission of the agency and results- oriented goals. For example, the plan describes the agency’s IT vision, mission, principles, and objectives of five strategic goals—excellence, data, cybersecurity, value, and people. To support these goals, the plan defines 14 objectives to be accomplished over 4 years. For example, the plan defines objectives for increasing the effectiveness of NASA’s IT strategy execution through disciplined program and project management.
In addition, NASA has improved upon the prior plan by identifying performance measures that allow the agency to determine whether it is succeeding in the implementation of its goals. For example, in order to increase the effectiveness of its IT strategy execution, the Office of the CIO expects 85 percent of projects to be in conformance with approved project plans by the end of fiscal year 2018. As another example, to prepare its employees to achieve NASA’s IT vision, the Office of the CIO plans to, by the end of fiscal year 2020, identify skills gaps and ways to close the gaps based on the workforce strategy.
However, similar to the prior plan, the updated plan does not fully describe strategies NASA intends to use to achieve the desired results or descriptions of interdependencies within and across projects. Specifically, the plan discusses how the agency intends to achieve its strategic goals and objectives through various activities. For example, according to the plan, to increase the effectiveness of investment analysis and prioritization, NASA intends to implement a financial management process that integrates Office of the CIO, center, and mission directorate IT spending. The plan states that this process will map IT investments to NASA’s vision and strategy, as well as enable high-quality internal and external investment insight and reporting.
However, the updated plan does not further describe the strategies NASA intends to use to accomplish these activities, including a schedule for significant actions and the resources needed to achieve this objective. For instance, the plan states that the Office of the CIO will define clear lines of authority and accountability for IT between the agency and NASA’s centers, but does not describe a strategy, including time frames and resources, for accomplishing this. Additionally, the plan does not describe interdependencies between projects, which is essential to help define the relationships within and across projects and major initiatives.
According to NASA’s CIO, the updated strategic plan was kept at a higher level with the expectation that more detailed implementation plans (e.g., tactical plans and roadmaps) would define the necessary projects and interdependencies. However, NASA has not defined guidance for developing the implementation plans to ensure that any plans developed will fully describe strategies and interdependencies, or time frames for when these plans will be completed. Until NASA incorporates the key elements of a comprehensive IT strategic plan, it will lack critical information needed to align information resources with business strategies and investment decisions.
NASA Has Gaps in Its IT Workforce Planning Efforts
Key to an agency’s success in managing its IT investments is sustaining a workforce with the necessary knowledge, skills, and abilities to execute a range of management functions that support the agency’s mission and goals. Achieving such a workforce depends on having effective human capital management consistent with workforce planning activities pursuant to federal laws and guidance.
Specifically, OMB requires agencies to develop and maintain a current workforce planning process. In addition, we reported in 2016 on the importance of setting a strategic direction for IT workforce planning, identifying skills gaps and implementing strategies to address them, and monitoring and reporting on progress in addressing the identified skills gaps. We identified eight key IT workforce planning activities that are essential to agency efforts to establish an effective IT workforce:

1. establish and maintain a workforce planning process;
2. develop competency and staffing requirements;
3. assess competency and staffing needs regularly;
4. assess gaps in competencies and staffing;
5. develop strategies and plans to address gaps in competencies and staffing;
6. implement activities that address gaps (including IT acquisition cadres, cross-functional training of acquisition and program personnel, career paths for program managers, plans to strengthen program management, and use of special hiring authorities);
7. monitor the agency's progress in addressing competency and staffing gaps; and
8. report to agency leadership on progress in addressing competency and staffing gaps.
The Office of the CIO has had IT workforce planning efforts underway since 2015 that are intended to address the workforce planning activities listed above; however, the office has not finalized or implemented any of the planned actions. The office recently began working to establish a more comprehensive workforce strategy for fiscal year 2019 to align with the agency’s increased emphasis on improving the overall workforce. Specifically, in the draft NASA Strategic Plan, the agency established a workforce development goal and two strategic objectives that relate to its IT workforce and call for, among other things, workforce training and efforts to increase cybersecurity awareness to reduce cybersecurity risks.
Nevertheless, NASA has gaps in its IT workforce planning efforts. Of the eight key IT workforce planning activities that we previously outlined, NASA partially implemented five and did not implement three. Table 2 shows the extent to which NASA has implemented each IT workforce planning activity and provides examples of workforce practices planned or implemented, as well as those not yet undertaken.
According to NASA’s CIO, the Office of the CIO put IT workforce planning activities on hold in 2015 pending the outcome of more comprehensive, agency-wide efforts. Specifically, the agency began planning and developing a new phased program—the Mission Support Future Architecture Program—designed to deliver workforce and other mission support services, including a talent management program. Phase 1 of the new phased Mission Support Future Architecture Program began in May 2017.
According to the NASA CIO, the Office of the CIO is expected to be part of a future phase and to renew its IT workforce planning as part of that effort. However, the CIO did not have an estimate for when the Office of the CIO would join the program. Until NASA implements all of the key IT workforce planning activities discussed in this report, the agency will have difficulty anticipating and responding to changing staffing needs. Further, NASA will face challenges in controlling human capital risks when developing, implementing, and operating IT systems.
NASA’s IT Governance Approach Does Not Fully Address Leading Practices
Leading practices for governing IT, such as those identified by GAO in its IT investment management framework, call for agencies to establish and follow a systematic and organized approach to investment management to help lay a foundation for successful, predictable, and repeatable decisions. Critical elements of such an approach include instituting an IT investment board (or boards), developing and documenting a governance process for investment selection and for investment oversight, and establishing governance policies and procedures for managing the agency’s overall IT investment portfolio.
NASA Has Not Fully Instituted an Effective Governance Structure
Instituting an effective IT governance structure involves establishing one or more governance boards, clearly defining the boards’ roles and responsibilities, and ensuring that they operate as intended. Moreover, Section 811(a) of the National Aeronautics and Space Administration Transition Authorization Act of 2017 directs the agency to ensure that the NASA CIO, mission directorates, and centers have appropriate roles in governance processes. The act also calls on the Administrator to provide, among other things, an IT program management framework to increase the efficiency and effectiveness of IT investments, including relying on metrics for identifying and reducing potential duplication, waste, and cost.
NASA has established three boards focused specifically on IT governance: an IT Council, which is its executive-level IT board; a CIO Leadership Team; and an IT Program Management Board, which provides oversight of programs and projects. Meeting minutes for these three IT-specific governance bodies revealed that the groups are meeting as required by their charters.
Further, two of NASA’s agency-wide councils (whose governance responsibilities extend beyond IT) also play a role in IT governance. Specifically, the Mission Support Council is the governance body to which the IT Council escalates unresolved decisions, and the Agency Program Management Council is responsible for reviewing and approving highly- specialized IT. In addition, NASA centers have the option to create center-specific IT governance boards to make decisions about center- level IT investments under the authority of center CIOs.
Table 3 describes the roles of the IT-specific governance boards, the agency-wide councils with roles in IT governance, and the center-level IT governance boards. The table also includes additional details on how frequently the councils and boards meet, the dollar thresholds NASA has established to determine which investments each council or board reviews, and which officials serve as members of the boards.
Although it has established and assigned responsibilities for the aforementioned governance councils and boards, NASA has not yet fully instituted an effective investment board governance structure for several reasons.
Planned improvements to the IT governance structure are not yet complete. NASA has established new governance boards in addition to the boards listed above, but has not yet approved charters to guide their operations. Specifically, the Office of the CIO has revised its governance structure to establish six new boards, one for each of its IT programs. Agency officials, including the IT governance lead, reported that the boards had been established; however, as of December 2017, NASA had not yet approved charters defining the new governance bodies’ membership, functions, and interactions with other governance boards.
Roles and responsibilities of the IT governance boards and agency-wide governance councils are not clearly defined. NASA continues to operate a federated governance model with decentralized roles and responsibilities for governance of mission and business IT investments. Business IT is selected and approved by the IT-specific governance boards, but mission IT follows a different path for investment selection in that it is not reviewed and approved by the CIO along with other IT investments proposed for selection. Instead, the Agency Program Management Council’s reviews focus on the selection of overall mission programs, and not on selecting IT. As a result, mission IT has historically been reported to the Office of the CIO only if the program has been designated as a major agency IT investment to be reported to OMB.
NASA has begun making changes to its decentralized governance approach in response to provisions in legislation commonly referred to as the Federal Information Technology Acquisition Reform Act that are intended to ensure that the CIO has visibility into both mission and business IT investments. However, the agency has not yet developed policies and procedures to clarify how these changes will affect the CIO’s and governance boards’ roles and responsibilities. For example, in January 2017, the IT Council approved an updated definition for highly-specialized IT and established new expectations about the extent to which highly-specialized IT investments would be reviewed by the NASA CIO.
However, NASA has not clarified roles and responsibilities for identifying such investments and ensuring they are reported by mission directorate programs to the CIO. In addition, the agency has not yet outlined procedures for how these investments that are overseen by the agency-wide Agency Program Management Council are to be reported to the CIO or IT-specific governance boards.
During a January 2017 IT Council meeting, the NASA CIO acknowledged that roles and responsibilities for IT governance were unclear and that it would take 1 to 2 years to clarify them. In July 2017, the Deputy CIO recognized that significant work remained for NASA to achieve a consistent agency-wide governance approach with established roles and responsibilities.
While the IT governance boards are meeting regularly, they are not consistently operating as intended. Board charters finalized in 2016 defined the membership for the governance boards and established expectations for the expertise to be made available to support board decisions. However, the boards are not consistently operating with all designated board members in attendance. For example, the Chief Engineer was designated as a member of the IT Council, but the council’s meeting minutes indicated that the Deputy Chief Engineer regularly attends the council meetings instead.
In addition, IT Program Management Board meetings are consistently held with fewer voting members than designated by the board’s charter. The board’s meeting minutes indicated that fewer than six voting members regularly attend board meetings instead of the eight voting members outlined in the board charter. For example, the minutes showed that each meeting has been held with only one center and mission support directorate representative—instead of the two required by the charter.
NASA officials, including the Associate CIO for Capital Planning and Governance, stated that planned efforts to update the governance structure and develop additional guidance for IT investment management have impacted the agency’s time frames for fully establishing its new boards and defining their roles and responsibilities. Specifically, these officials stated that the Office of the CIO is working to develop a comprehensive IT framework intended to update the governance structure, fully establish the new governance boards, and define governance roles and responsibilities. According to the officials, this framework is expected to be finalized in 2018, but the office did not provide a detailed schedule with milestones for completing the framework. Without a detailed schedule for updating the governance structure and establishing a comprehensive IT framework to help ensure that the revised governance boards are fully established and operating as intended, NASA may not be able to improve IT governance in accordance with the requirements in the National Aeronautics and Space Administration Transition Authorization Act of 2017.
NASA Has Not Completed or Updated Governance Selection Process Policies and Procedures and Lacks Established Guidance for Reselecting Investments
According to our IT investment management guide, defining policies and procedures for selecting investments provides investment boards and others with a structured process and a common understanding of how investments will be selected. Selection policies and procedures should, among other things, establish thresholds or criteria (e.g., investment size, technical difficulty, risk, business impact, customer needs, and cost-benefit analysis) for boards to use in identifying, analyzing, prioritizing, and selecting new IT proposals.
In addition, outlining a process for reselecting ongoing projects is intended to support board decisions about whether to continue to fund projects not meeting established goals or plans. Using the defined selection process promotes consistency and transparency in IT governance decision making. Further, after the guidance has been developed, organizations must actively maintain it, making sure that it always reflects the board’s current structure and the processes that are being used to manage the selection of the organization’s IT investments.
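To illustrate what documented thresholds and criteria could look like in practice, the following minimal sketch (in Python) routes proposals to a review board by an assumed dollar threshold and ranks them with an assumed weighted criteria score. The threshold, criteria, weights, and example proposals are invented for illustration and do not reflect NASA's actual process.

from dataclasses import dataclass

@dataclass
class Proposal:
    name: str
    cost_musd: float      # estimated cost, millions of dollars
    risk: int             # 1 (low) to 5 (high)
    business_impact: int  # 1 (low) to 5 (high)
    customer_need: int    # 1 (low) to 5 (high)

WEIGHTS = {"business_impact": 0.5, "customer_need": 0.3, "risk": -0.2}
EXEC_BOARD_THRESHOLD_MUSD = 10.0  # larger proposals go to the executive board

def priority_score(p: Proposal) -> float:
    """Weighted criteria score; higher-scoring proposals are selected first."""
    return (WEIGHTS["business_impact"] * p.business_impact
            + WEIGHTS["customer_need"] * p.customer_need
            + WEIGHTS["risk"] * p.risk)

proposals = [
    Proposal("Network refresh", 12.0, risk=2, business_impact=4, customer_need=3),
    Proposal("HR portal upgrade", 3.5, risk=4, business_impact=3, customer_need=5),
]
for p in sorted(proposals, key=priority_score, reverse=True):
    board = ("executive board" if p.cost_musd > EXEC_BOARD_THRESHOLD_MUSD
             else "program-level board")
    print(f"{p.name}: score={priority_score(p):.1f}, reviewed by {board}")

Documenting rules of this kind is what gives governance boards a consistent, transparent basis for comparing and selecting proposals.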
NASA’s defined selection process policies and procedures designated the CIO with responsibility to ensure that IT governance, investment management, and program/project management processes are integrated to facilitate the selection of appropriate IT investments. The agency has established multiple policies and procedures outlining certain aspects of how both mission programs and business IT investments are to be planned, such as standardized templates for requesting approval to plan investments and direction for teams to use in planning for investments. In addition, the Office of the CIO has established a Capital Planning and Investment Control Guide for business IT investments and issues annual budget guidance for requesting funding for IT investments.
The agency’s selection process also includes specific IT governance processes developed by centers for the investments they review. For example, Goddard Space Flight Center had developed additional center- specific guidance assigning lead responsibility for assessing new and ongoing projects. The center also has established predetermined criteria, such as whether projects conflict, overlap, or are redundant with other projects, and the risk if the investment was not funded.
Nevertheless, NASA’s established process does not yet define thresholds or criteria (e.g., qualitative or quantitative data) to be analyzed and compared when governance boards make decisions to select investments. Charters for NASA’s governance boards outline the functions these boards are to perform and direct them to be involved in IT governance. However, the charters do not outline specific thresholds or procedures that the boards are to follow in selecting investments.
For example, NASA’s process does not fully define how investment risks are to be evaluated. NASA policy establishes dollar thresholds for IT governance board reviews, but does not define any other parameters for how risk will be evaluated. In addition, NASA has established an expectation that the new capital investment review process is to yield risk-based decisions for all investments and help mitigate IT security risks. However, guidance for capital investment reviews does not address how investment risks are to be evaluated.
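To illustrate what such thresholds and criteria could look like once defined, the sketch below routes a proposed investment to a review body based on lifecycle cost and a risk score. This is a hypothetical sketch only: the dollar thresholds, the 1-to-5 risk scale, and the routing rules are invented for illustration and are not NASA's documented criteria, which the report notes do not yet exist.

```python
# Hypothetical sketch of threshold-based investment routing. Dollar
# thresholds, the risk scale, and the center-level tier are illustrative
# assumptions, not NASA's documented criteria.
from dataclasses import dataclass

@dataclass
class Investment:
    name: str
    lifecycle_cost: float  # total estimated cost in dollars
    risk_score: int        # 1 (low) to 5 (high), from a defined rubric

def assign_review_body(inv: Investment) -> str:
    """Route an investment to a review body by cost and risk."""
    if inv.lifecycle_cost >= 10_000_000 or inv.risk_score >= 4:
        return "IT Council"
    if inv.lifecycle_cost >= 1_000_000 or inv.risk_score == 3:
        return "IT Program Management Board"
    return "Center-level review"

print(assign_review_body(Investment("Security tool", 1_300_000, 3)))
# -> IT Program Management Board
```

Writing criteria down in this form forces the choices the report says are missing: which measures count, what the cutoffs are, and which board owns each tier of risk.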
Moreover, NASA’s selection process policies and procedures have not been updated to reflect efforts to improve governance. Its guidance for selecting investments (and for all aspects of its governance process) is fragmented, and the agency has not updated its policies and procedures to reflect current selection practices. In addition, this guidance does not yet reflect recent efforts to clarify and standardize the definitions of fundamental IT investment terms, such as “information technology” and “major” investments.
Further, while NASA has begun changing its selection process to ensure that the CIO and IT governance boards will be provided data about all IT investments, including mission IT investments such as highly specialized IT, the agency’s selection policies have not been updated to reflect these changes. NASA’s Capital Planning and Investment Control Guide does not require all investments to be included in the selection process (or other IT governance processes), and the NASA Space Flight Program and Project Management procedures for mission program governance do not address whether or how the investments within mission programs are to be reported to the agency’s IT-specific governance boards.
In addition, NASA has not yet defined a reselection process for IT investments. Current policies and guidance for selecting investments do not clearly define a consistent approach for how performance is to be considered in reselecting investments. Without a defined reselection process, the agency’s boards lack structure and a common understanding about how to make decisions about whether to continue to fund projects not meeting established goals or plans.
NASA officials acknowledged that the current policies and procedures do not establish sufficient content within the business cases and IT plans for proposed investments to support effective governance decision making.
The agency has begun working to update its policy for IT program and project management but did not expect to complete the update until April 2018. Further, even when this key IT investment management policy is updated, the agency will still need to update related policies and procedures to reflect changes it has made but not yet documented in the investment selection process. NASA has not yet established plans for when all needed updates to the policies and procedures will be completed.
Until NASA updates its IT governance policies and procedures to establish thresholds and procedures to guide its boards in decision making and outline a process for reselecting investments, the agency will be limited in its assurance that the investment selection process will provide a consistent and structured method for selecting investments. Further, until all relevant governance policies and procedures are updated to reflect current investment selection practices and proposed changes intended to provide the CIO with data about mission IT, the CIO will not be positioned to minimize investments that present undue risk to the agency and ensure accountability for both business and mission IT.
NASA Lacks Criteria for Assessing Investment Performance and Ensuring Oversight of All Investments
Organizations that provide effective IT investment oversight have documented policies and procedures that, among other things, ensure that data on actual performance (e.g., cost, schedule, benefit, and risk) are provided to the appropriate IT investment board(s). In addition, such organizations establish procedures for escalating or elevating unresolved or significant issues; ensure that appropriate actions are taken to correct or terminate underperforming IT projects based on defined criteria; and regularly track corrective actions until they are completed.
As with investment selection, NASA has established multiple policies and procedures for the oversight of IT investments. In October 2015, the agency added to its oversight processes by establishing a capital investment review process to improve the quality of the information available for investment oversight and established a matrix defining dollar thresholds to delineate oversight among the IT governance boards. The IT Program Management Board is also assigned specific oversight responsibilities for reviewing investment cost, schedule, performance, and risk at key lifecycle decision points for investments submitted for its review. In addition, the IT Program Management Board’s charter requires this board to track, among other things, board decisions about investments and action items.
In implementing NASA’s oversight practices, the IT Program Management Board consistently reviewed updates on investment performance (i.e., cost, schedule, and benefits) and progress. In addition, the IT Program Management Board’s oversight decisions about IT investments are documented in meeting minutes, and the board also records any action items identified for investments in the decision memorandums it submits to the CIO.
Nevertheless, we identified limitations in NASA’s established oversight policies and procedures. For example, the agency’s policies and procedures require IT investments to report data to the governance boards at key decision points but do not establish specific thresholds or other criteria for the governance boards to use in overseeing the investments’ performance or escalating investments to review by other boards. The oversight guidance also does not specify the conditions under which a project would be terminated.
In addition, weaknesses we identified in oversight of specific NASA IT investments highlighted additional limitations of the established oversight process.
Specifically, NASA did not have a mechanism for alerting the IT Program Management Board to provide oversight if investments were underperforming or overdue for review. For example, significant schedule overruns did not trigger additional oversight for one investment. In March 2015, NASA approved the proposed design for an investment to implement a security tool, with implementation planned for June 2015 at an expected cost of $1.3 million. Although the project fell 13 months behind schedule and encountered unforeseen challenges, the IT Program Management Board did not review the investment again until June 2017, 2 years later.
Not all IT investments followed the established oversight process. For example, in our review of governance board meeting minutes and documentation, we identified an investment that was close to completion before the IT Program Management Board reviewed its proposed design. Specifically, in February 2016, the board was asked—1 day before the investment was to become operational—to (1) approve the proposed design and (2) grant authority to operate for the investment intended for use by NASA staff and external partners. Although concerns about limited oversight were noted, the investment was approved.
Further, NASA lacks procedures to ensure that identified action items are tracked. We identified instances in which the IT Program Management Board did not consistently track action items identified for IT investments. NASA’s investments typically report back to the IT Program Management Board at future decision point reviews about steps taken to address documented action items. However, the board’s meeting minutes and documentation identified multiple examples of investments that were returned to the board at future decision points without reporting on whether identified action items had been addressed.
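An alerting mechanism of the kind described above as missing could be as simple as a rule that flags any investment whose schedule slip, or time since its last board review, exceeds a defined threshold. The sketch below is a hypothetical illustration, not NASA's process; the 3-month and 12-month thresholds are assumptions, and the dates approximate the security tool example above (a June 2015 target that slipped 13 months, putting completion around July 2016).

```python
# Hypothetical sketch of an oversight trigger. Thresholds are assumptions;
# the report notes NASA had no such alerting mechanism.
from datetime import date

def needs_escalation(planned_finish: date, forecast_finish: date,
                     last_review: date, today: date,
                     max_slip_months: int = 3,
                     max_review_gap_months: int = 12) -> bool:
    """Flag an investment when schedule slip or review age exceeds limits."""
    slip = ((forecast_finish.year - planned_finish.year) * 12
            + forecast_finish.month - planned_finish.month)
    review_gap = ((today.year - last_review.year) * 12
                  + today.month - last_review.month)
    return slip > max_slip_months or review_gap > max_review_gap_months

# The security tool example: design approved March 2015, completion
# planned for June 2015, slipping 13 months (to about July 2016).
print(needs_escalation(planned_finish=date(2015, 6, 1),
                       forecast_finish=date(2016, 7, 1),
                       last_review=date(2015, 3, 1),
                       today=date(2016, 7, 1)))  # -> True
```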
Moreover, NASA’s oversight processes do not encompass highly specialized or other IT that supports mission programs. After reviewing NASA’s fiscal year 2015 budget request, OMB directed NASA to identify unreported IT investments throughout the agency to ensure that all related spending would be documented. NASA established a team in 2016 to explore how to identify such investments so that they could be reported to the CIO. The team initiated efforts to identify such investments in mission directorates and evaluated various mechanisms that NASA could employ to detect unreported IT. However, the agency has not yet finalized decisions about how to implement the team’s recommendations, including those for fully identifying investments for all mission directorates or determining which mechanisms to employ to identify unreported IT. According to NASA officials, time frames for completing these activities have not yet been established.
In July 2017, NASA officials, including the Deputy CIO, acknowledged in governance board meeting minutes describing needed improvements that the agency had not yet fully identified its IT footprint and needed to establish a comprehensive investment management process to address federal requirements, including those governing processes for selecting, reselecting, and overseeing IT investments. NASA officials explained that important progress had been made in improving oversight practices, but that efforts to implement more thorough capital investment reviews and identify IT investments across the agency had not yet been completed. The officials reported that they anticipated additional improvement to be made by the next annual budget cycle.
However, expanding NASA’s oversight of IT will require continued coordination with the mission directorates to work through any needed changes to the longstanding differences in NASA’s management of mission and business IT. The scope and complexity of such efforts are likely to be significant and may take time to plan and implement. Clearly defining how IT across the agency is to be identified and reported to the CIO would likely involve changes to policies and processes within and across NASA’s IT, engineering, and mission program areas and would involve expertise and collaboration from those same groups. Until such practices are fully established, NASA will continue to operate with limitations in its oversight process and projects that fall short of performance expectations. In addition, the agency will face increased risk that its oversight will fail to (1) prevent duplicative investments, (2) identify opportunities to improve efficiency and effectiveness, and (3) ensure that investment progress and performance meet expectations.
NASA Has Not Yet Fully Defined Policies and Procedures for Managing Investments as a Portfolio
The IT investment management framework developed by GAO notes that, as investment management processes mature, agencies move from project specific processes to managing investments as a portfolio. The shift from investment management to IT portfolio management enables agencies to evaluate potential investments by how well they support the agency’s missions, strategies, and goals. According to the framework, the investment board enhances the IT investment management process by developing a complete investment portfolio. As part of the process to develop a complete portfolio, an agency is to establish and implement policies and procedures for developing the portfolio criteria, creating the portfolio, and evaluating the portfolio.
NASA has not yet fully defined its policies and procedures for developing the portfolio criteria, creating the portfolio, and evaluating the portfolio. In its Annual Capital Investment Review Implementation Plan, dated October 2015, NASA began documenting policies for IT portfolio management and procedures for creating and evaluating the portfolio. For example, the procedures state that NASA is to update its IT portfolio annually in conjunction with the agency’s planning and budgeting process. Additionally, in its IT Capital Planning and Investment Control Process guide, dated October 2006, NASA outlined procedures the agency can use to analyze the portfolio by establishing factors that should be taken into consideration, including the relative benefits, costs, and risks of the investment compared to all other proposals and the strength of the investment’s linkage to NASA’s strategic business plan.
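As one way to picture how the factors named in the 2006 guide (relative benefits, costs, risks, and strategic linkage) might be turned into concrete portfolio selection criteria, the sketch below ranks proposals with a weighted score. The weights, the 0-to-10 scales, and the proposal data are illustrative assumptions, not NASA's documented method.

```python
# Hypothetical weighted portfolio scoring using the factors named in
# NASA's 2006 guide. Weights, scales, and data are illustrative only.
WEIGHTS = {"benefit": 0.35, "cost": 0.20, "risk": 0.20, "alignment": 0.25}

def portfolio_score(benefit: int, cost: int, risk: int, alignment: int) -> float:
    """Score a proposal on 0-10 scales; higher cost and risk lower it."""
    return (WEIGHTS["benefit"] * benefit
            + WEIGHTS["cost"] * (10 - cost)
            + WEIGHTS["risk"] * (10 - risk)
            + WEIGHTS["alignment"] * alignment)

proposals = {"Proposal A": (8, 6, 4, 9), "Proposal B": (6, 3, 2, 5)}
ranked = sorted(proposals, key=lambda name: portfolio_score(*proposals[name]),
                reverse=True)
print(ranked)  # -> ['Proposal A', 'Proposal B']
```

A defined scheme of this kind is what distinguishes portfolio management from project-by-project review: every proposal is scored against the same criteria and compared to all other proposals, not evaluated in isolation.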
However, these documents do not constitute a comprehensive IT portfolio management process in that they do not specifically define the procedures for creating and modifying the IT portfolio selection criteria; analyzing, selecting, and maintaining the investment portfolio; or reviewing, evaluating, and improving the performance of its portfolio. Further, the policies and procedures have not been updated to reflect current NASA practices. Specifically, the current policies and procedures have not been updated to reflect changes the agency made to its capital investment review process that are relevant to portfolio management.
According to NASA officials, the agency has not fully defined its policies and procedures because they are intended to be part of a new IT portfolio management framework that also requires NASA to make changes to its investment management process. Specifically, the IT portfolio management plan that NASA drafted in January 2017 called for the agency to develop new IT investment criteria, discover currently unreported IT investments, develop an investment review process, and implement an IT investment dashboard and reporting tool and a communications plan.
Although the IT Council has not yet approved the IT portfolio management plan, NASA has begun work to address elements of the draft plan, including building the requirements for an IT dashboard and reporting tool for implementation in 2018. In addition, according to Office of the CIO officials, the capital planning team is continuing to work with stakeholders to develop a comprehensive IT framework and investment review process. However, no firm dates have been established for the approval and implementation of the final plan or the framework. Until NASA fully defines its policies and procedures for developing the portfolio criteria, creating the portfolio, and evaluating the portfolio, the agency will lack assurance it is identifying and selecting the appropriate mix of IT projects that best meet its mission needs.
NASA Has Not Fully Established an Effective Approach for Managing Cybersecurity Risk
We have previously reported that securing federal government computerized information systems and electronic data is vital to the nation’s security, prosperity, and well-being. Yet, the security over these systems is inconsistent and agencies have faced challenges in establishing cybersecurity approaches. Accordingly, we have recommended that federal agencies address control deficiencies and fully implement organization-wide information security programs.
NIST’s cybersecurity framework is intended to support federal agencies as they develop, implement, and continuously improve their cybersecurity risk management programs. In this regard, the framework identifies cybersecurity activities for achieving specific outcomes over the lifecycle of an organization’s management of cybersecurity risk. According to NIST, the first stage of the cybersecurity risk management lifecycle—which the framework refers to as “identify”—is focused on foundational activities for effective risk management that provide agencies with the organizational understanding to manage cybersecurity risk to systems, assets, data, and capabilities. NIST also provides specific guidance for implementing foundational activities and achieving desired outcomes that calls for, among other things, the following:
A risk executive in the form of an individual or group that provides agency-wide oversight of risk activities and facilitates collaboration among stakeholders and consistent application of the risk management strategy.
A cybersecurity risk management strategy that articulates how an agency intends to assess, respond to, and monitor risk associated with the operation and use of the information systems it relies on to carry out the mission.
An information security program plan that describes the security controls that are in place or planned for addressing an agency’s risks and facilitating compliance with applicable federal laws, executive orders, directives, policies, or regulations.
Risk-based policies and procedures that act as the primary mechanisms through which current security requirements are communicated to help reduce the agency’s risk of unauthorized access or disruption of services.
However, NASA has not yet fully implemented these foundational activities of effective cybersecurity risk management.
Efforts to Establish Executive Oversight of Cybersecurity Are Underway
According to NIST guidance, federal agencies should establish a risk executive in the form of an individual or group that provides organization-wide oversight of risk activities and facilitates collaboration among stakeholders and consistent application of the risk management strategy. This functional role helps to ensure that risk management is institutionalized into the day-to-day operations of organizations as a priority and integral part of carrying out missions.
NASA has developed a policy regarding the establishment of a risk executive function in accordance with NIST guidance, but it has not fully implemented the policy. Specifically, the agency’s policy designates the Senior Agency Information Security Officer (SAISO) as the risk executive. According to the policy, the SAISO is charged with ensuring that cybersecurity is considered and managed consistently across the systems that support the agency and its partnerships—academic, commercial, international, and others that leverage NASA resources and extend scientific results. The policy also calls for the SAISO to establish an office with the mission and resources for information security operations, security governance, and cyber-threat analysis.
In accordance with its policy, NASA has designated an Acting SAISO. Since April 2017, the Acting SAISO has led the IT Security Division within the Office of the CIO—an office that coordinates information security operations, security governance, security architecture and engineering, and cyber-threat analysis.
However, the agency has not yet established a risk executive office with assigned leadership positions and defined roles and responsibilities. According to NASA documentation, the agency had planned for the office to become operational by mid-December 2016. Agency officials, including the Acting Deputy Associate CIO for Information Security, explained that an IT security program office was not established in 2016 because the planned time frame for doing so was not realistic and failed to take into account other risk management efforts competing for available resources. For example, the officials stated that the agency was focused on a priority goal of deploying a centralized tool across its centers that would provide monitoring of implemented security controls to ensure they are functioning adequately.
According to the NASA CIO, the agency planned to establish a comprehensive risk executive function by employing a cybersecurity risk manager in April 2018 and forming a program office—called the Enterprise Security Office—by September 2018. NASA’s new cybersecurity risk manager began work on April 2, 2018. The agency’s plan to have the new cybersecurity risk manager establish a comprehensive risk executive function should help ensure that current risk management efforts and decisions are appropriate and consistently carried out across the agency and its external partnerships.
NASA Has Not Yet Established an Agency-Wide Cybersecurity Risk Management Strategy
NIST guidance states that federal agencies should establish and implement an organizational strategy for managing cybersecurity risk that guides and informs how the agency assesses, responds to, and monitors risk to the information systems being relied on to carry out its mission. The strategy should, among other things, make explicit an agency’s risk tolerance, accepted risk assessment methodologies, a process for consistently evaluating risk across the organization, risk response strategies, approaches for monitoring risk over time, and priorities for investing in risk management.
In 2015, NASA recognized the need to establish and implement an agency-wide strategy for managing its cybersecurity risks to address weaknesses it had identified with the decentralized approach it was using. Specifically, because the agency’s centers had independently developed approaches for managing cybersecurity risk, there was little integration regarding risk management and practices across the agency. Further, NASA determined that the decentralized, center-level approach did not provide sufficient transparency regarding risks that could affect mission directorate programs.
To overcome the limitations of its decentralized approach, NASA planned to develop and begin implementing a comprehensive cybersecurity strategy by the end of September 2016 that was expected to include the key elements identified in NIST guidance. For example, it was expected to define the agency’s risk tolerance, establish a methodology for identifying and assessing risks, and provide a clear understanding of NASA’s risk posture.
However, the strategy was not completed as planned and is currently in development. According to officials in the Office of the CIO, including the Acting Deputy Associate CIO for Information Security, the strategy was not completed as planned due to the complexity and scope of the effort. For example, the officials stated that establishing an effective agency-wide strategy required insight into center-specific practices and significant input from stakeholders at all levels of NASA. In addition, these officials and the NASA CIO explained that the agency’s efforts were redirected in order to respond to a new executive order from the President to develop an action plan for adopting NIST’s cybersecurity framework in phases.
According to NASA’s CIO, the agency plans to move forward with drafting an agency-wide cybersecurity strategy that reflects the agency’s approach to using NIST’s framework; however, the agency has not yet established time frames for completing this effort. Until NASA establishes and implements a comprehensive strategy for managing its cybersecurity risks using NIST’s framework, its ability to make operational decisions that adequately address security risks and prioritize IT security investments will be hindered.
NASA’s Information Security Program Plan Does Not Fully Address Relevant Leading Practices and Is Not Finalized
NIST recommends that federal agencies develop and disseminate an information security program plan that describes the organization-wide security controls that are in place or planned for addressing the agency’s risks and complying with applicable federal laws, executive orders, directives, policies, or regulations. Specifically, the plan should provide a description of the agency’s program management controls and common controls in place or planned for meeting relevant federal, legal, or regulatory requirements; include the identification and assignment of roles, responsibilities, and coordination among organizational entities responsible for different aspects of information security; define the frequency for reviews of the security program plan; and receive approval from a senior official with responsibility and accountability for the risk being incurred.
NASA issued a draft information security program plan in November 2017 that addresses many of the components called for in NIST guidance. For example, the plan discusses program management controls that will be established, including the development of an inventory of its information systems, measures to determine information security performance, and an information security workforce development and improvement program; common controls that are to be implemented agency-wide, including configuration management, contingency planning, and personnel security; roles and responsibilities for promoting collaboration and providing consolidated unclassified security operations, and incident response and IT security awareness and training capabilities; and responsibility for ensuring that the information security program plan is maintained, approved by the NASA CIO, and reviewed annually.
However, the plan is currently in draft and incomplete. For example, it does not yet describe the majority of the security functions and services that are to be carried out by the agency’s IT Security Division to address the relevant federal statutory and regulatory requirements. Specifically, the plan does not identify the agency-wide privacy controls derived from standards promulgated pursuant to federal law and guidance that, according to the agency, are an integral part of its security program.
According to NASA’s Acting Deputy Associate CIO for Information Security, the information security program plan has not been finalized because of an upcoming revision to NIST’s guidance for implementing security controls. Specifically, a fifth revision of NIST SP 800-53 is planned for release in December 2018. NASA’s Acting Deputy Associate CIO for Information Security stated that the agency intends to finalize its draft plan after incorporating the updated NIST guidance.
In the absence of an established information security program plan, NASA’s view of the security controls that protect its systems will remain decentralized, and it will lack assurance that it has established oversight over security controls for all of its systems. In addition, the agency will continue to operate its systems without defined and established information security requirements that are essential to agency-wide operations.
NASA’s Security Policies and Procedures Are Not Always Current or Integrated
NIST Special Publication 800-53 recommends that agencies create policies and procedures to facilitate the appropriate application of security controls. If properly implemented, these policies and procedures may be able to effectively reduce the risk that could come from cybersecurity threats such as unauthorized access or disruption of services. Because risk-based policies and procedures are the primary mechanisms through which federal agencies communicate views and requirements for protecting their computing environments, it is important that they are established and kept current.
NASA has taken steps to document policies and procedures that address the security controls identified in NIST guidance for protecting information systems. For example, the agency established an overarching security policy that identified roles and responsibilities related to configuration management, contingency planning, and incident response. In addition, the agency issued procedures for implementing each of the NIST controls.
However, NASA does not have current and fully integrated policies and procedures. For example, the agency’s overarching policy for implementing security controls expired in May 2017. In addition, approximately one-third of the documents that guide the implementation of these controls remained in effect past their expiration dates, rather than being updated before expiration as NASA policy requires.
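A gap of this kind lends itself to a simple automated check that flags guidance documents past their expiration dates, sketched below. The document names and exact dates are invented for illustration; only the May 2017 expiration of the overarching policy is drawn from this report.

```python
# Hypothetical expiration check for policy documents. Names and exact
# dates are invented; the May 2017 expiration is the only detail drawn
# from this report.
from datetime import date

POLICIES = {
    "Overarching security controls policy": date(2017, 5, 31),  # assumed day
    "Access control handbook": date(2019, 1, 31),
}

def expired(policies: dict, today: date) -> list:
    """Return the documents whose expiration date has passed."""
    return [name for name, expires in policies.items() if expires < today]

print(expired(POLICIES, date(2018, 5, 1)))
# -> ['Overarching security controls policy']
```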
Further, in July 2017, NASA determined that cybersecurity roles and responsibilities were not always clear and sufficiently integrated across policies. For example, responsibilities were not consistently well-defined in the policies for governance, IT security, program and project management, and systems engineering. In addition, although NASA’s Policy Directive 2810.1E, NASA Information Security Policy, provided the SAISO with responsibility for the agency’s cybersecurity risk, the policy assigned mission directorates control over risk decisions for their missions and programs, and the centers were given the authority to implement any technical changes needed to address risk.
NASA’s Procedural Requirement 2810.1A, Security of Information Technology, states that the agency’s SAISO is responsible for ensuring that information security policies and procedures are reviewed and appropriately updated. However, according to officials in the Office of the CIO, including the specialist for IT security, responsibilities for establishing, reviewing, and updating policies and procedures are shared by two groups: the IT Security Division, led by the SAISO, and the Capital Planning and Governance Division. Specifically, the IT Security Division controls the content of IT-related policies and procedures but does not have control over the established NASA-wide process for reviewing the policies and procedures to determine if any changes are needed to the content. Instead, the Capital Planning and Governance Division is responsible for ensuring formal review and approval of any IT-related policies and procedures through the standard agency process and schedule.
Officials from the Office of the CIO, including the specialist for IT security, also stated that they intend to (1) establish a policy management framework that would provide the SAISO with more control over policies and procedures and include an annual document review, and (2) clarify and update cybersecurity roles and responsibilities in NASA policies. However, the agency has not yet developed a plan and specific time frame for completing these activities.
In addition, the Acting Deputy Associate CIO for Information Security stated that having expired policies and procedures is not significant because they will remain in use until they are rescinded or superseded by updated versions. However, until NASA fully updates its policies and procedures to govern security over the agency’s computing environments, it will have limited assurance that controls over information are appropriately applied to its systems.
Conclusions
NASA continues to pursue efforts to improve IT strategic planning, workforce planning, IT governance, and cybersecurity, but consistently lacks the documented processes needed to ensure that policies and leading practices are fully addressed. Specifically, the agency has taken steps to improve the content of its strategic plan and established an agency-wide goal for improving its workforce. In addition, after analyzing its IT management and governance structure, NASA took action to streamline its governance boards and standardize and strengthen its selection and oversight of investments, including initiating a portfolio management process. NASA has also moved toward new strategies and plans to bolster cybersecurity.
Nevertheless, while NASA has made progress, the agency has not yet fully addressed many of the leading IT management practices noted in this report or completed efforts to increase the CIO’s authority over, and visibility into, agency-wide IT. Among other things, NASA has not fully documented a process for IT strategic planning or addressed all key elements of a comprehensive plan. In addition, it has not yet fully implemented a workforce planning process and has gaps in efforts to address leading practices. Regarding IT governance, its efforts to institute an effective governance structure and update policies and procedures for selecting IT investments are not yet complete. Moreover, NASA has not yet addressed weaknesses in its oversight practices or fully defined policies and procedures for developing an effective portfolio management process.
Similarly, although NASA continues cybersecurity improvement efforts, important elements of an effective cybersecurity approach have not been completed, including establishing a risk management strategy, an information security program plan, and updated policies and procedures. Until NASA leadership fully addresses these leading practices, its ability to overcome its longstanding weaknesses and ensure effective oversight and management of IT across the agency will remain limited. Moreover, NASA may be limited in its ability to strengthen its risk posture, including ensuring effective cybersecurity across partnerships with commercial entities, federal agencies, and other countries.
Recommendations for Executive Action
We are making 10 recommendations to the National Aeronautics and Space Administration:
The Administrator should direct the Chief Information Officer to develop a fully documented IT strategic planning process, including methods by which the agency defines its IT needs and develops strategies, systems, and capabilities to meet those needs. (Recommendation 1)
The Administrator should direct the Chief Information Officer to update the IT strategic plan for 2018 to 2021 and develop associated implementation plans to ensure it fully describes strategies the agency will use to achieve the desired results and descriptions of interdependencies within and across programs. (Recommendation 2)
The Administrator should direct the Chief Information Officer to address, in conjunction with the Chief Human Capital Officer, gaps in IT workforce planning by fully implementing the eight key IT workforce planning activities noted in this report. (Recommendation 3)
The Administrator should direct the Chief Information Officer to institute an effective IT governance structure by completing planned improvement efforts and finalizing charters to fully establish IT governance boards, clearly defining roles and responsibilities for selecting and overseeing IT investments, and ensuring that the governance boards operate as intended. (Recommendation 4)
The Administrator should direct the Chief Information Officer to update policies and procedures for selecting investments to provide a structured process, including thresholds and criteria needed for, among other things, evaluating investment risks as part of governance board decision making, and outline a process for reselecting investments. (Recommendation 5)
The Administrator should direct the Chief Information Officer to address weaknesses in oversight practices and ensure routine oversight of all investments by taking action to document criteria for escalating investments among governance boards and establish procedures for tracking corrective actions for underperforming investments. (Recommendation 6)
The Administrator should ensure that the Chief Information Officer fully defines policies and procedures for developing the portfolio criteria, creating the portfolio, and evaluating the portfolio. (Recommendation 7)
The Administrator should direct the Chief Information Officer to establish an agency-wide approach to managing cybersecurity risk that includes the following:

a cybersecurity strategy that, among other things, makes explicit the agency’s risk tolerance, accepted risk assessment methodologies, a process for consistently evaluating risk across the organization, response strategies and approaches for monitoring risk over time, and priorities for risk management investments (Recommendation 8);

an information security program plan that fully reflects the agency’s IT security functions and services and agency-wide privacy controls for protecting information (Recommendation 9); and

policies and procedures with well-defined roles and responsibilities that are integrated and reflect NASA’s current security practices and operating environment (Recommendation 10).
Agency Comments and Our Evaluation
We provided a draft of this product to NASA for comment. In its comments, which are reproduced in appendix II, NASA concurred with seven of the recommendations, partially concurred with two recommendations, and did not concur with one recommendation.
NASA partially concurred with our first and second recommendations. Specifically, consistent with the first recommendation, NASA agreed to fully document its strategic planning process, including the methods by which the agency defines IT needs and develops outcomes, strategies, major actions, and performance measures to meet those needs.
In addition, our second recommendation called for NASA to update the strategic plan and develop associated implementation plans. With regard to updating the plan, NASA stated that its strategic plan provides the context and parameters to support achievement of the agency's vision and mission through the strategic use of IT. The agency also stated that this plan describes the business outcomes, strategies, major actions, and performance measures to achieve the desired results.
With regard to the implementation plans related to our first and second recommendations, NASA agreed to develop the associated implementation plans for accomplishing the IT strategic plan, including descriptions of the interdependencies within and across programs. Nevertheless, in commenting on both recommendations, NASA stated that it does not believe that implementation plans, including specific IT capability and system changes, should be part of a strategic plan. The agency also maintained that the implementation plans, including descriptions of interdependencies within and across programs, are at a lower level than the IT strategic plan, since detailed IT implementation plans are more dynamic than the four-year NASA IT Strategic Plan.
However, our first and second recommendations do not call for NASA to incorporate implementation plans within the strategic plan. Rather, as discussed in the report, it is important that NASA document how it intends to accomplish the activities outlined in the strategic plan. Further, we continue to believe that NASA should address the weaknesses we identified in this report by updating the strategic plan to incorporate strategies on resources and time frames to achieve desired results and descriptions of interdependencies within and across projects so that they can be understood and managed. Thus, we stand by both recommendations (recommendations 1 and 2) that the agency take these actions.
NASA did not concur with our third recommendation to implement the IT workforce planning activities noted in our report. In this regard, the agency stated that its workforce improvement efforts were already underway. Specifically, NASA stated that IT workforce planning is part of the agencywide Mission Support Future Architecture Program. It added that, among other things, this program is intended to ensure that mission support resources, including the IT workforce, are optimally structured to support NASA’s mission. In addition, NASA referenced our two additional ongoing audits of the agency’s IT workforce, and noted that its activities related to IT workforce planning would be centered on any recommendations resulting from those audits.
In our view, neither of these circumstances should hinder NASA from addressing our recommendation in this report. As of March 2018, the agency’s IT workforce plans were out-of-date and incomplete because activities the agency had been planning since 2015 had not been finalized in an approved plan or implemented. Further, NASA had not yet determined when the Office of the CIO would become an active part of the agencywide Mission Support Future Architecture program or developed plans for when that program’s assessment of the IT workforce would be completed.
Thus, instead of limiting NASA’s ability to address our recommendation, implementing the workforce planning activities discussed in this report could complement the agency’s ongoing and future efforts. Specifically, NASA could use the IT workforce leading practices described in this report to strengthen any new workforce plans and assess the implementation of any planned improvements. Until NASA documents an IT workforce planning process and implements all of the key IT workforce planning activities, the agency may not be effectively positioned to anticipate and respond to changing staffing needs. Further, the agency is likely to face challenges in controlling human capital risks when developing, implementing, and operating IT systems.
NASA concurred with our four recommendations aimed at addressing deficiencies in its IT governance (recommendations 4 through 7). In this regard, the agency described planned actions intended to address each of these recommendations. For example, among other activities, the agency stated that it intended to publish charters for all IT governance boards; have the IT Council review governance board operations annually; document criteria for escalating investments among governance boards; and update policies and procedures for managing its investments as a portfolio.
Similarly, NASA concurred with our three recommendations related to establishing an agency-wide approach to managing cybersecurity risk (recommendations 8, 9, and 10). The agency described actions it had taken or planned to address each of these recommendations. In particular, with regard to establishing a cybersecurity risk management strategy (recommendation 8), NASA asserted that it had already taken actions that met the requirements of our recommendation. Specifically, NASA stated that it had established an approach to developing its cybersecurity risk management strategy by approving a charter for an agency-wide team to address cybersecurity risk management needs and hiring a Chief Cybersecurity Risk Officer to oversee agency-wide risk management initiatives.
While these actions constitute steps toward addressing the recommendation, we disagree that establishing a charter for a team and hiring a Chief Cybersecurity Risk Officer fully addresses the recommendation. As previously noted in this report, the agency does not have a cybersecurity risk management strategy that includes elements of NIST guidance. The strategy should, among other things, make explicit the agency’s risk tolerance, accepted risk assessment methodologies, a process for consistently evaluating risk across the organization, risk response strategies, approaches for monitoring risk over time, and priorities for investing in risk management. Ensuring that the established agency-wide team and the Chief Cybersecurity Risk Officer develop a cybersecurity risk management strategy that aligns with the NIST guidance will be essential to fully address our recommendation.
NASA also provided technical comments on the draft report, which we incorporated, as appropriate.
We are sending copies of this report to the appropriate congressional committees, the Administrator of the National Aeronautics and Space Administration, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
Should you or your staffs have any questions on information discussed in this report, please contact Carol Harris at (202) 512-4456 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.
Appendix I: Objective, Scope, and Methodology
The National Aeronautics and Space Administration Transition Authorization Act of 2017 included a provision for us to review the effectiveness of the agency’s approach to overseeing and managing information technology (IT), including its ability to ensure that resources are aligned with agency missions, cost effective, and secure. Our specific objective for this review was to address the extent to which the National Aeronautics and Space Administration (NASA) has established and implemented leading IT management practices in strategic planning, workforce planning, governance, and cybersecurity.
To address this objective, we compared NASA’s IT management policies, procedures, and other documentation to criteria established by federal laws and leading practices. This documentation included the agency’s strategic plans, workforce gap assessments, governance board meeting minutes and briefings, charters, policies and procedures, and other documentation of the Chief Information Officer’s (CIO) authority. We also reviewed relevant reports by GAO and the NASA Office of Inspector General.
With regard to IT strategic planning, we identified the strategic plans and related planning guidance issued by NASA and the Office of the CIO, including NASA’s Governance and Strategic Management Handbook, dated November 26, 2014; NASA’s Information Resources Management Strategic Plan, dated March 2014; and NASA’s updated Information Technology Strategic Plan for fiscal years 2018 to 2021. We then reviewed the agency’s overall strategic plan, and evaluated its previous and current IT strategic plans against key practices for IT strategic planning that we have previously identified. These practices call for documenting the agency’s IT strategic planning processes and developing an IT strategic plan that aligns with the agency’s overall strategy; identifies the mission of the agency, results-oriented goals, and performance measures that permit the agency to determine whether implementation of the plan is succeeding; includes strategies the governing IT organization will use to achieve desired results; and provides descriptions of interdependencies within and across projects so that they can be understood and managed.
To determine the extent to which NASA has established and implemented leading IT workforce planning practices, we conducted a comparative analysis of NASA’s IT workforce planning policies and documents. Specifically, we compared agency documents, such as NASA policy directives, the desk guide, and documentation of efforts to establish IT workforce competencies and staffing requirements and conduct gap assessments, to GAO’s IT workforce framework. GAO’s framework consists of four IT workforce planning steps and eight key activities. The eight key activities were identified in federal law, regulations, and guidance, including the Clinger-Cohen Act of 1996, the legislation referred to as the Federal Information Technology Acquisition Reform Act, Office of Management and Budget (OMB) guidance, the Office of Personnel Management’s Human Capital Framework, and GAO reports.
Based on our assessment of the documentation and discussions with agency officials, we assessed the extent to which the agency implemented, partially implemented, or did not implement the activities. We considered an activity to be fully implemented if NASA addressed all of the underlying practices for the activity; partially implemented if it addressed some but not all of the underlying practices for the activity; and not implemented if it did not address any of the underlying practices for the activity.
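This rating scale can be expressed directly as a small decision rule, shown below for clarity; the practice counts in the example are illustrative.

```python
# Minimal sketch of the rating logic described above.
def rate_activity(practices_addressed: int, practices_total: int) -> str:
    if practices_addressed == practices_total:
        return "fully implemented"
    if practices_addressed > 0:
        return "partially implemented"
    return "not implemented"

print(rate_activity(3, 5))  # -> partially implemented
```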
We assessed IT governance practices by comparing NASA documentation to critical processes identified by GAO in the IT investment management framework. To align our work with the provision in Section 811(a) of the National Aeronautics and Space Administration Transition Authorization Act of 2017 calling for NASA to take actions regarding IT governance, we selected critical processes from Stage 2 of the framework: instituting the investment board; selecting and reselecting investments that meet business needs; and providing investment oversight.
For each critical process, we compared key practices outlined in the framework to NASA documentation. The documentation we reviewed included NASA’s IT governance policies and procedures, and charters and other guidance. We also reviewed governance board meeting minutes and briefings from each board’s first meeting in 2016 through meetings held in August 2017.
In addition, we selected key practices for effective governance from Stage 3 of the IT investment management framework regarding establishing and implementing policies and procedures for developing the portfolio criteria, creating the portfolio, and evaluating the portfolio. We then compared documentation, including NASA’s IT Capital Planning and Investment Control Process guide, dated October 2006; its Annual Capital Investment Review Implementation Plan, dated October 2015; and its draft IT portfolio management plans, against these practices.
Using standards and guidance from the National Institute of Standards and Technology (NIST), which identify foundational elements of effective cybersecurity risk management, we evaluated NASA’s cybersecurity risk management approach by analyzing policies and plans for establishing a comprehensive risk executive function; evaluating documents and plans for establishing a cybersecurity risk management strategy; comparing a draft Information Security Program Plan to determine if it was consistent with NIST guidance; and analyzing policies and procedures to determine if they address relevant NIST security controls and are current.
In addition to assessing NASA headquarters, we reviewed IT management practices at two of the agency’s nine centers (Marshall Space Flight Center in Huntsville, Alabama; and Johnson Space Center in Houston, Texas) and at one of NASA’s four mission directorates (the Human Exploration and Operations Mission Directorate). The two centers and one mission directorate were selected because they had the largest fiscal year 2017 IT budgets, respectively, as reported on the federal IT dashboard. We also visited the Goddard Space Flight Center in Greenbelt, Maryland, because of the center’s proximity to GAO. The results of our work at the selected NASA centers and mission directorate are not generalizable to other NASA centers and mission directorates.
To assess the reliability of these data, we compared them to budgetary data obtained directly from NASA’s Office of the CIO. We found the data to be sufficiently reliable for the purpose of identifying the NASA centers and mission directorate with the largest IT budgets. We also interviewed cognizant officials with responsibilities for IT management at NASA headquarters and for the selected centers and mission directorate.
We conducted this performance audit from May 2017 to May 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objective.
Appendix II: Comments from the National Aeronautics and Space Administration
Appendix III: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact name above, the following staff also made key contributions to this report: Eric Winter (Assistant Director), Donald Baca, Rebecca Eyler, Amanda Gill (Analyst in Charge), Tom Johnson, Kate Nielsen, Teresa Smith, and Niti Tandon.

Why GAO Did This Study
NASA depends heavily upon IT to conduct its work. The agency spends at least $1.5 billion annually on IT investments that support its missions, including ground control systems for the International Space Station and space exploration programs.
The National Aeronautics and Space Administration Transition Authorization Act of 2017 included a provision for GAO to review the effectiveness of NASA's approach to overseeing and managing IT, including its ability to ensure that resources are aligned with agency missions and are cost effective and secure. Accordingly, GAO's specific objective for this review was to determine the extent to which NASA has established and implemented leading IT management practices in strategic planning, workforce planning, governance, and cybersecurity. To address this objective, GAO compared NASA IT policies, strategic plans, workforce gap assessments, and governance board documentation to federal law and leading practices. GAO also assessed NASA IT security plans, policies, and procedures against leading cybersecurity risk management practices.
What GAO Found
The National Aeronautics and Space Administration (NASA) has not yet effectively implemented leading practices for information technology (IT) management. Specifically, GAO identified weaknesses in NASA's IT management practices for strategic planning, workforce planning, governance, and cybersecurity.
NASA has not documented its IT strategic planning processes in accordance with leading practices. While NASA's updated IT strategic plan represents an improvement over its prior plan, the updated plan is not comprehensive because it does not fully describe strategies for achieving desired results or describe interdependencies within and across programs. Until NASA establishes a comprehensive IT strategic plan, it will lack critical information needed to align resources with business strategies and investment decisions.
Of the eight key IT workforce planning activities, the agency partially implemented five and did not implement three. For example, NASA does not assess competency and staffing needs regularly or report progress to agency leadership. Until NASA implements the key IT workforce planning activities, it will have difficulty anticipating and responding to changing staffing needs.
NASA's IT governance does not fully address leading practices. While the agency revised its governance boards, updated their charters, and acted to improve governance, it has not fully established the governance structure, documented improvements to its investment selection process, fully implemented investment oversight practices and ensured the Chief Information Officer's visibility into all IT investments, or fully defined policies and procedures for IT portfolio management. Until NASA addresses these weaknesses, it will face increased risk of funding duplicative investments and may miss opportunities to ensure that investments perform as intended.
NASA has not fully established an effective approach to managing agency-wide cybersecurity risk. An effective approach includes establishing executive oversight of risk, a cybersecurity risk management strategy, an information security program plan, and related policies and procedures.
As NASA continues to collaborate with other agencies and nations and increasingly relies on agreements with private companies to carry out its missions, the agency's cybersecurity weaknesses make its systems more vulnerable to compromise. Until NASA leadership fully addresses these leading practices, its ability to ensure effective management of IT across the agency and manage cybersecurity risks will remain limited.
What GAO Recommends
GAO is making 10 recommendations to NASA to address the deficiencies identified in NASA IT strategic planning, workforce planning, governance, and cybersecurity. NASA concurred with seven recommendations, partially concurred with two, and did not concur with one. GAO maintains that all of the recommendations discussed in this report remain valid. |
DOD’s MWR Program Categories and Funding Sources
DOD Instruction 1015.10, Military Morale, Welfare, and Recreation (MWR) Programs, establishes policy, assigns responsibilities, and prescribes procedures for operating and managing military MWR programs. Specifically, the policy states that the services are to establish MWR programs in order to maintain individual, family, and mission readiness and that these programs are an integral part of the military and its benefits package. The Office of USD(P&R) oversees DOD’s MWR programs, develops policy, and oversees MWR programs’ funding. DOD’s instruction specifies the purpose of, the funding sources for, and the activities within each of MWR’s three designated program categories—all of which are summarized below in table 1. For a complete listing of the activities by program category, see appendix I.
Each service supports MWR programs with a mix of appropriated and nonappropriated funding. According to officials, the services allocate appropriated funding amounts for MWR purposes, which primarily supports Category A and B programs. Nonappropriated funding is government money from sources other than amounts appropriated by Congress and may be generated in a number of ways to support MWR programs. For example, bowling programs, marinas, and golf programs generate nonappropriated funding revenue through participation fees for recreational activities paid by servicemembers and their families. Services must use any nonappropriated funding generated from or associated with MWR programs within their MWR programs.
DOD’s MWR Program Funding Targets
According to DOD Instruction 1015.10, the MWR programs are divided into three distinct categories, two of which also have specific funding targets. According to DOD's 2016 report to Congress on appropriated funding support for MWR programs, the funding targets are intended to ensure that the services adequately fund MWR programs instead of requiring servicemembers and their families to pay out of their own pockets for costs that should be borne by appropriated funding. While DOD Instruction 1015.10 establishes minimum funding targets for MWR Category A and B programs, it directs that the basic funding target, regardless of program category, is to use appropriated funding for 100 percent of the costs for which appropriated funds are authorized. While the instruction allows the services to use appropriated funding for 100 percent of authorized costs, according to service officials this is generally not possible given budget constraints. Therefore, for MWR Category A mission-sustaining programs, the instruction establishes a funding target stating that DOD is to use appropriated funding for a minimum of 85 percent of total expenditures. For MWR Category B community support system programs, the instruction establishes a funding target of appropriated funding covering a minimum of 65 percent of total expenditures. For MWR Category C recreational activities for servicemembers and their families, appropriated funding support should generally be limited because this category has the highest capability of generating nonappropriated funding revenues.
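To make the arithmetic behind these targets concrete, the short sketch below checks whether a program category's appropriated-funding share meets its minimum. It assumes total expenditures are simply the sum of appropriated and nonappropriated amounts; the function name and dollar figures are hypothetical.

```python
# Illustrative check of the minimum appropriated-funding targets, assuming
# total expenditures = appropriated + nonappropriated amounts.
# Dollar figures (in millions) are hypothetical.

TARGETS = {"A": 0.85, "B": 0.65}  # minimum appropriated share by category

def met_target(category: str, appropriated: float, nonappropriated: float) -> bool:
    """Return True if appropriated funds cover at least the target share."""
    share = appropriated / (appropriated + nonappropriated)
    return share >= TARGETS[category]

# A Category B program with $6.0M appropriated and $4.0M nonappropriated
# expenditures has a 60 percent appropriated share, below the 65% minimum.
print(met_target("B", appropriated=6.0, nonappropriated=4.0))  # False
```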
Budget, Funding, and Accounting Processes for MWR Programs
Budget Processes
The services have annual budget processes for MWR programs that vary based on whether appropriated or nonappropriated funding is being used. For MWR programs supported by appropriated funding, according to officials, the services submit and validate program requirements through DOD's Planning, Programming, Budgeting, and Execution process. DOD and service guidelines for certain MWR programs, as well as annual service-issued budget guidance, provide input for determining MWR programs' requirements. Service officials from the Army, the Marine Corps, and the Air Force also stated that they determine program requirements using input from installations and service components, while service officials from the Navy stated that they use a budget model along with performance measures and budget guidance to determine program requirements. The requirements are then submitted to higher-level components within the services for review, adjustment, and approval. Once the services validate the requirements, they are provided to the Office of the Secretary of Defense for inclusion in the President's Budget. Figure 1 provides an overview of the general process the services use to budget for appropriated funding support of MWR activities.
Budget processes and authorities for nonappropriated funding, or program-generated revenue, vary by service. Specifically, the services maintain nonappropriated funding budgets and budget approvals at different levels within the service organization. For example, officials stated that Marine Corps and Air Force installations maintain and manage nonappropriated funding generated at their locations while Army and Navy installations submit nonappropriated funding and budgets to a higher level of command, Installation Directorates for the Army and Regions for the Navy, as well as the service headquarters component. The services plan for and manage their nonappropriated funding budgets based on a number of factors, including revenue generated; projected revenues; and the amount, if any, of appropriated funding available. Figure 2 provides an overview of the general process the services use to approve and manage nonappropriated funding generated within the service.
Each service uses processes to provide funds for the implementation of its MWR programs. Service officials stated that the services execute their programs and adjust their budgets based on funding authorized from appropriated and nonappropriated sources. Commanders have authority over budget implementation, and the guidelines and parameters for commanders vary by service. For example, according to Army officials, during the fiscal year Army commanders can change MWR program budgets and have some flexibility to move funding to other non-MWR command priorities. Installations report to the services actual expenditures and income generated, which are included in the services' annual reports. Figure 3 provides an overview of the general process the services use to provide funding for MWR programs.
Each service uses accounting processes for its MWR programs. According to service officials, accounting is handled differently at each service depending on the service's organizational structure. According to service officials, the Navy and the Marine Corps centrally manage their MWR accounting processes at their service headquarters; the Army manages its accounting process at its headquarters and at the Defense Finance and Accounting Service's Nonappropriated Financial Services; and the Air Force manages its accounting process at its Secretariat and at the service components. According to service officials, program managers at the service headquarters and activity level are able to review financial data, such as expenditures and revenues, for MWR programs on a recurring basis. DOD's Instruction 1015.10 states that the services should identify appropriated and nonappropriated funding accounts in annual budgets, and the services have designated codes to categorize expenditures. Service officials stated they use the codes to report annually to USD(P&R) on MWR programs' expenditures for both appropriated and nonappropriated funding.
The Services Did Not Consistently Meet One of the Two Appropriated Funding Targets and Are Taking Steps to Address This, but DOD Has Not Comprehensively Evaluated the Targets to Ensure They Are Appropriate
The services generally met the funding target for MWR Category A mission-sustaining programs for fiscal years 2012 through 2017, but during the same period did not consistently meet the target for Category B programs, which provide community support systems to servicemembers and their families. Service officials said they are taking steps to meet the Category B target, such as restoring targeted levels of appropriated funding support in future budget planning. Data indicate that the services are getting closer to meeting the target. However, DOD has not comprehensively evaluated the funding targets, which were established more than 20 years ago, to ensure they currently are appropriate.
The Services Generally Met the Funding Target for MWR Category A Mission-Sustaining Programs
For MWR Category A mission-sustaining programs, the services generally met the 85-percent target for appropriated funding support. Specifically, the Navy and the Air Force consistently met or exceeded the 85-percent funding target in fiscal years 2012 through 2017, and the Army met or exceeded the target every year except for fiscal year 2012, when it reported that 84 percent of its Category A programs were supported with appropriated funds. The Marine Corps exceeded the minimum funding target for Category A programs in fiscal years 2012 and 2017, but consistently fell below the target, with appropriated funding support ranging from 77 percent to 84 percent, from fiscal years 2013 through 2016. Table 2 provides additional detail on the extent to which each service met the 85-percent funding target for MWR Category A mission-sustaining programs in fiscal years 2012 through 2017.
The Services Did Not Consistently Meet the Funding Target for MWR Category B Community Support Programs, but Are Taking Steps to Meet the Target in the Future
For MWR Category B community support programs, the services missed the 65-percent target for appropriated funding support with increasing frequency from fiscal years 2012 through 2017. Service officials stated that constrained budgets and competing priorities have made it difficult to allocate the appropriated funding needed to support their programs. However, service officials said they are taking steps to meet the Category B funding target in the future. Specifically, we found that the services collectively missed the funding target over 60 percent of the time from fiscal years 2012 through 2017. All four services missed the funding target in fiscal years 2015 and 2016, with appropriated fund support ranging from 55 to 63 percent. Most recently, in fiscal year 2017 the Army met the 65-percent funding target, but the Navy, the Marine Corps, and the Air Force fell below it, with appropriated funding support ranging from 60 percent to 62 percent. Although the Air Force did not meet the 65-percent target for fiscal years 2012 through 2017, citing resource issues, Air Force leadership has increased appropriated funding for the MWR programs each year to help get closer to meeting the Category B funding target. Air Force officials said they plan to continue to increase funding each year so they can meet the target in the future. Table 3 provides additional detail on the extent to which each service met the 65-percent funding target for MWR Category B community support programs in fiscal years 2012 through 2017.
The USD(P&R) monitors the services' compliance in meeting the targets. When a funding target is missed, USD(P&R) officials said, a memorandum is sent to the service asking for a detailed plan on how it will achieve the required level of appropriated funding support for the missed target in the future, and these officials said that each service has provided such a plan when it fell below the 65-percent funding target. In instances when a service does not respond to the initial request for a remediation plan, USD(P&R) officials said a second memorandum is sent notifying the service that it missed the funding target and needs to submit a plan detailing how it intends to come into compliance. For example, in fiscal year 2015 the Army did not meet the 65-percent funding target for Category B programs. In June 2016, the Assistant Secretary of Defense for Manpower and Reserve Affairs sent the Army a memorandum asking it to submit a plan on how it would meet the target. After not receiving a response, the Assistant Secretary of Defense for Manpower and Reserve Affairs sent the Army a second memorandum in September 2016 that noted the missed target and reiterated the need to submit a plan for achieving compliance with designated funding targets. Following the second memorandum, the Army issued a memorandum in December 2016 stating it would fully fund Category A and B programs to the required targets in fiscal year 2017. Following these communications, in February 2018, the Army sent USD(P&R) its fiscal year 2017 program and metric report showing that it had successfully met the Category A and B funding targets as planned.
Service officials said they are taking steps to meet the Category B target, and data from fiscal years 2015 through 2017 indicate that the services are getting closer to meeting it. However, in prior years when the services did not meet appropriated funding targets for Category B programs, officials said that the services relied on nonappropriated funding as supplemental support to help ensure that such programs continued to operate. Specifically, according to USD(P&R) officials, the services have used nonappropriated funding—that is, revenue generated largely through user fees incurred by servicemembers and their families—to cover MWR program costs for which appropriated funding was authorized. However, the use of nonappropriated funds to cover shortfalls in appropriated funding support for MWR programs has been a long-standing issue about which Congress has previously expressed concern. Specifically, in House Report 104-563, which accompanied H.R. 3230, a bill for the National Defense Authorization Act for Fiscal Year 1997, the House Committee on National Security established the annual DOD reporting requirement to Congress on Category A and B MWR programs after receiving testimony from the services' MWR managers and noting a disparity in the degree of appropriated funding support afforded these programs, particularly Category A and B programs. While the committee recognized that shortfalls in appropriated funding support for MWR programs require the use of nonappropriated funding to meet requirements, it also stated that the use of nonappropriated funding resources—soldier, sailor, airman, and Marine money—to subsidize appropriated funding activities should be minimized.
While the Army met the Category B funding target for fiscal year 2017, the Navy, the Marine Corps, and the Air Force have each submitted plans and briefed USD(P&R) on how they plan to meet the target in the future. Navy officials said that they acknowledged the Navy’s challenges with meeting the Category B funding target and, as a result, began assessing their Category B programs to eliminate those that had limited use, consolidate some where possible, and implement operational efficiencies. Marine Corps officials indicated that the Marine Corps is committed to preserving valuable MWR programs and restoring appropriate levels of appropriated funding support in future budget planning. Specifically, the Marine Corps plans to readdress appropriated funding levels in the budget planning process in 2019. However, Marine Corps officials noted they may continue to have challenges meeting the 65-percent funding target in fiscal year 2018. Air Force officials said they will continue to advocate for retaining established MWR program funding in the budget process. Air Force officials said that for fiscal years 2014 through 2017, Air Force leadership has increased appropriated funding for the MWR programs each year to help get the Air Force closer to meeting the Category B funding target.
DOD Has Not Comprehensively Evaluated the Funding Targets to Ensure They Are Appropriate
DOD has not comprehensively evaluated the funding targets for Category A and B programs, which were instituted more than 20 years ago, to ensure they are appropriate. Standards for Internal Control in the Federal Government recommends that management periodically review policies and procedures for continued relevance and effectiveness in achieving an entity's objectives. According to USD(P&R) officials, a limited evaluation took place prior to 1995 that resulted in the Category A funding target in DOD's instruction being changed from 100 percent to 85 percent. USD(P&R) officials said that the Category A appropriated funding target was changed because some of the activities within the category have expenses, such as for food and beverage elements, that are able to generate revenue and thus are not authorized to use appropriated funds.
USD(P&R) officials stated that since that time there have been no further evaluations of the Category A or Category B targets, and they agree that it is time to evaluate the targets' current relevance. Specifically, they noted the considerable changes to the budgeting and funding environment that have taken place in the more than 20 years since the Category A funding target was modified. In addition, officials told us they also agree that it is time to evaluate the relevance of the Category B funding target, which has never been modified. Specifically, officials said that the services' extended engagement in overseas conflicts and constrained budgets have resulted in an operating environment that is substantially different from the peacetime setting in which the targets were first established.
Moreover, Standards for Internal Control in the Federal Government requires management to document internal controls to meet operational needs. Documentation of controls, including changes to controls, is evidence that controls are identified, capable of being communicated to those responsible for their performance, and capable of being monitored and evaluated by an entity. Documentation also provides a means to retain organizational knowledge and mitigate the risk of having that knowledge limited to a few personnel, as well as a means to communicate that knowledge as needed to external parties, such as external auditors. As previously noted, officials said that the Category A funding target was updated sometime prior to 1995; however, officials did not have any specific documentation related to this change. Furthermore, USD(P&R) officials said the targets were developed so long ago that there is a general lack of information on their origins and that they are not sure of the process or methodology used to develop them.
The amount of time that has passed since Category A’s target was modified, recent challenges in meeting the Category B target, and the general lack of information on the funding targets’ origins raise concerns about the appropriateness and continued relevance and effectiveness of the targets in achieving MWR programs objectives. Until DOD comprehensively evaluates the appropriateness of current targets for Category A and B programs and, based on its evaluation, documents any changes it makes to its funding targets, DOD cannot be certain that the targets reflect the current operating environment and do not pose undue financial burden on the servicemembers.
DOD Has Established an Oversight Structure and Performance Measures for MWR Programs but Has Not Developed Measurable Goals for Determining Whether MWR Programs Are Cost-Effective
DOD Has Established a Structure to Provide Oversight of MWR Programs
DOD has established a structure that specifies roles, responsibilities, and procedures for overseeing MWR programs. Specifically, DOD Instruction 1015.10 assigns roles and responsibilities for oversight of MWR programs to the USD(P&R), the Secretaries of the military departments, and the Chiefs of the military services (i.e., the Chiefs of Staff for the Army and the Air Force, the Chief of Naval Operations, and the Commandant of the Marine Corps). In addition, the services’ respective policies assign roles and responsibilities for MWR program oversight to the commander level. Table 4 summarizes the general oversight roles and responsibilities for DOD’s MWR programs.
The first level of oversight responsibility for MWR programs is assigned to the USD(P&R). Specifically, responsibilities include the development of department-level policies, program goals, performance measures, funding targets, and the oversight of appropriated and nonappropriated funding and expenditures for all MWR programs. To help ensure consistent quality, USD(P&R) monitors the services’ compliance in meeting minimum MWR funding targets and performance measures. As previously discussed, if a service misses a funding target, USD(P&R) officials said they ask that service to submit a remediation plan that summarizes its intent to meet the target in the future, as USD(P&R) did in fiscal year 2015 when several services missed appropriated funding targets for Category A and B activities.
The second level of oversight is assigned to the Secretaries of the military departments who are responsible for designating a central point of contact within their respective service to facilitate MWR programs policy compliance, coordinating with USD(P&R), and establishing funding priorities and strategy for MWR programs. For example, service officials we met with from the military departments said they have designated their respective Assistant Secretary Offices for Manpower and Reserve Affairs as the central point of contact for the services’ MWR programs.
The third level of oversight is assigned to the Chiefs of the military services who are responsible for the development of overall goals and uniform quality measures, which could include performance measures, for MWR programs consistent with the performance measures set by DOD in its instruction. For example, the Commander, Navy Installations Command has developed uniform quality measures for the Navy MWR Fitness program based on items such as customer satisfaction, usage rates, and equipment maintenance, among other things. According to officials, these quality measures provide a common tool to measure customer satisfaction and the quality of each installation’s MWR Physical Fitness program. Additionally, these Chiefs are also responsible for helping to ensure MWR programs are resourced with appropriated and nonappropriated funding according to financial categories and for identifying their respective appropriated and nonappropriated accounts in annual budgets to meet DOD funding goals. Service Chiefs are also responsible for ensuring that military installations operate customer-driven MWR programs that are determined locally by market analysis.
Lastly, the services’ respective policies assign roles and responsibilities for MWR program oversight to the commander level. Additionally, according to service officials, commanders assist with preparing an annual briefing for USD(P&R) on their MWR programs, which includes initiatives, challenges, program trends, and financial information. For example, in fiscal year 2017, each of the services reported on new initiatives to support MWR programs for servicemembers and their families, some of which are highlighted in table 5.
DOD and the Services Have Performance Measures to Assess MWR Programs but These Measures Lack Measurable Goals for Determining Cost-Effectiveness
DOD Instruction 1015.10 identifies six broad categories of performance measures that the services use to assess their respective MWR programs. However, these measures do not include measurable goals, which are needed to assess the cost-effectiveness of the 55 activities that currently make up the MWR programs. Specifically, DOD identifies six broad performance measure categories in its instruction and, according to service officials, the services collect and use various types of information within these categories to periodically assess and adjust these activities, as appropriate. Table 6 summarizes the types of information that DOD requires the services to collect across the six categories established in its instruction.
In addition to the information that is to be collected across these six broad categories, DOD established separate, more specific performance measures for 2 of the 55 activities—namely, for Physical Fitness and for Library Programs and Information Services. For the Physical Fitness activity, the services are required to submit annual reports to DOD on their compliance with meeting more specific performance measures in a variety of areas such as administrative operations, staff qualifications, facility equipment, and child play areas. Similarly, DOD requires the services to report on a variety of areas related to the Library Programs and Information Services activity, such as library operation plans, customer programs and service, and technology infrastructure. Unlike the broad measures contained in DOD’s Instruction, the specific performance measures DOD established for the Physical Fitness and Library Programs and Information Services activities tell the services exactly what information to collect and report in each performance measure category instead of the services having to develop specific measures on their own.
In an effort to better evaluate MWR programs, the services also have the following efforts underway to develop specific performance measures for their programs beyond the broad performance measures contained in DOD Instruction 1015.10.
Army. Army officials told us that they partnered with the Army Public Health Center to build evidence-based MWR programs. Based on this review, the Army found that Army MWR Community Recreation and Fitness programs have not been formally evaluated as directed by DOD Instruction 1015.10 requirements to measure and assess programs. Additionally, the Army found that, while the Army Office of the Assistant Chief of Staff for Installation Management provides program oversight, it does not possess the capability to conduct program evaluations. According to the results of the Army Public Health Center report issued in June 2017, the Army initiated a three-phase approach for evaluating its MWR programs. The report showed that assessing the evaluability of the Army MWR programs is phase one. According to the Army, these evaluations will enable the Army to validate program outcomes and better position itself to compete for scarce resources. The report also showed that many of the 13 Army MWR programs selected for review do not have direct links between activities and the priority outcomes related to behavioral, social, and physical health, and that they do not have sufficient outcomes data that have been consistently collected. Army officials said that phase two will include the development of formal evaluation plans for selected evaluable MWR programs. Lastly, Army officials said that phase three will be the execution of the evaluation for two selected MWR programs, which is on target to be completed by December 2018. While Army officials are learning how to evaluate programs through this partnership with the Army Public Health Center, they said that they have also learned that these endeavors are costly. Officials said that a very modest program evaluation requires approximately $300,000 to $500,000. Army officials also stated that program evaluation requires support and participation by the organizations and people that deliver the programs. Furthermore, according to Army officials, resource reductions at the operational level (garrisons) are increasingly restrictive, preventing them from collecting critical information to support this multiphase effort.
Navy. Navy officials said that they use the MWR Enterprise Modeling System, which is based on performance measures that have been developed and routinely reviewed and updated by headquarters, regional, and installation program managers. The MWR Enterprise Modeling System is used as the baseline for the annual MWR performance data call that measures actual program performance against performance standards. Navy officials said that the performance measures provide the business strategy and guidance to ensure efficient, effective and market-driven delivery of programs and services.
Marine Corps. Marine Corps officials said they collaborated with the RAND Corporation to provide an analytically rigorous assessment framework to evaluate program performance. The RAND Corporation provided draft measures of performance, as well as a user guide that outlines an evaluation methodology and ensures consistent and standard application. Marine Corps officials said that they are reviewing the draft measures to determine appropriate data collection and have drafted an implementation plan. Specifically, Marine Corps officials said that they plan to brief Marine Corps installations in June 2018 on the performance measures for which they plan to collect data, with data collection beginning in fall 2018.
Air Force. Air Force officials said that they are building off the work that the RAND Corporation undertook for the Marine Corps and have also started collaborating with the RAND Corporation. The objective of the Air Force study is to develop an evidence-based evaluation framework for MWR programs that identifies immediate and mid-term outcomes that contribute to airman and family readiness and resilience. Specifically, the goal is to provide the Air Force with logic models and performance measures that are tied to each of the programs and services in the MWR portfolio. Air Force officials said they expect to finish this study by June 2018. However, the officials noted that implementing the performance measures will be a challenge since these types of MWR programs are difficult to measure and hard to capture data for.
While both the broad and specific measures established by DOD and the services can provide useful context about the status of individual MWR activities, they do not contain measurable goals that service officials could use to compare program results with costs to determine whether an individual activity is operating cost-effectively. Because the services' efforts to develop specific performance measures are in the early stages of development, it is too early to determine whether these efforts will result in measurable goals that can be used to assess the cost-effectiveness of the MWR programs.
DOD's Financial Management Regulation specifies that performance measurement should include program accomplishments in terms of outputs and how those outputs effectively meet intended agency mission goals. Further, cost itself can be a performance metric, but it should also be combined with an effectiveness measure, such as the percentage of a goal achieved at a level of expected performance, to ensure that the resulting output is cost-effective. Additionally, through our prior work on performance measurement, we have reported that performance goals and measures should align with an agency's goals and mission. However, in reviewing DOD Instruction 1015.10, we found no mention of any goals, mission, objectives, or purpose for the MWR programs. There is one section entitled "policy" in the instruction that includes items that resemble goals. Specifically, the instruction states that MWR programs:

1. are an integral part of the military and benefits package;

2. build healthy families and communities and provide consistently high-quality support services that are commonly furnished by other employers or by state and local governments to their employees and citizens;

3. encourage positive individual values and aid in recruitment and retention of personnel; and

4. promote esprit de corps and provide for the physical, cultural, and social needs; general well-being; quality of life; and hometown community support of servicemembers and their families.
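As one illustration of the Financial Management Regulation's point that cost should be paired with an effectiveness measure, the sketch below computes cost per percentage point of a goal achieved. The activity, dollar amount, and goal figure are hypothetical, and this is only one of many ways such a metric could be constructed.

```python
# Minimal sketch pairing cost with an effectiveness measure, per the
# Financial Management Regulation guidance above. The activity, cost,
# and goal-achievement figures below are hypothetical.

def cost_per_goal_point(cost: float, goal_pct_achieved: float) -> float:
    """Dollars spent per percentage point of the performance goal achieved."""
    return cost / goal_pct_achieved

# A hypothetical fitness activity: $2.4 million spent, 80 percent of its
# usage goal achieved -> $30,000 per percentage point of goal.
print(cost_per_goal_point(2_400_000, 80))  # 30000.0
```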
USD(P&R) officials who have responsibility for developing MWR program goals acknowledged that these policy items function as strategic goals but were not clearly identified as such in the instruction and also acknowledged that the instruction does not include measurable goals for assessing cost-effectiveness. In addition, USD(P&R) officials said that they are starting a review of DOD Instruction 1015.10 and did not know yet whether they would make any changes to the goals or expand the reporting requirement to include all 55 activities. Until DOD develops performance measures that include measurable goals, DOD officials and other decision makers, such as Members of Congress, may find it difficult to determine whether the MWR programs and the activities that make up the MWR programs are meeting servicemember needs in a cost-effective manner.
Conclusions
DOD's multibillion dollar MWR programs provide a wide range of benefits for servicemembers and their families that ultimately help support military missions and readiness, both in times of war and peace. DOD has established funding targets for providing appropriated funding support for Category A and B MWR programs. However, the funding targets have not been comprehensively evaluated in the last 20 years to determine their current relevance. Until DOD comprehensively evaluates the appropriateness of current funding targets and documents any changes made to the targets, DOD's funding targets may not reflect the current operating environment and may pose an undue burden on servicemembers. DOD has also not developed performance measures with measurable goals that would allow it to assess the cost-effectiveness of its MWR programs. Without performance measures that include such measurable goals, it will be difficult for DOD and Congress to determine whether the individual activities and overall MWR programs are meeting desired outcomes in a cost-effective manner.
Recommendations for Executive Action
We are making the following two recommendations to DOD.
We recommend that the Secretary of Defense ensure that the USD(P&R), in consultation with the Secretaries of the military departments, comprehensively evaluate the funding targets for Category A and B MWR programs and document any changes made to the targets and the methodology used. (Recommendation 1)
We recommend that the Secretary of Defense ensure that the USD(P&R), in consultation with the Secretaries of the military departments, develop measurable goals for its MWR programs’ performance measures to determine the programs’ cost-effectiveness. (Recommendation 2)
Agency Comments and Our Evaluation
We provided a draft of this report to DOD for review and comment. In its comments, DOD concurred with our recommendations and noted actions that it is taking. DOD’s comments are reprinted in their entirety in appendix II. DOD also provided technical comments, which we incorporated into the report as appropriate.
We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; and the Under Secretary of Defense for Personnel and Readiness. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-3604 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix III.
Appendix I: Department of Defense’s Morale, Welfare, and Recreation Program Categories
Appendix II: Comments from the Department of Defense
Appendix III: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Kimberly A. Mayo, Assistant Director; Rebekah Boone; Mae Frances Jones; Felicia Lopez; Stephanie Moriarty; Cynthia Saunders; John W. Van Schaik; Paul Seely; Carter Stevens; and Roger Stoltz made key contributions to this report.

Why GAO Did This Study
DOD's MWR programs provide servicemembers and their families with three categories of programs: Category A (e.g., fitness and libraries), Category B (e.g., camping and performing arts), and Category C (e.g., golf). DOD oversees the percentage of appropriated funding allocated to MWR programs by category and measures the military services' compliance with established funding targets. DOD set the targets at 85 percent for Category A and 65 percent for Category B. DOD did not set a target for Category C since this category has the ability to generate revenue from user fees.
House Report 115-200, accompanying a bill for the National Defense Authorization Act for Fiscal Year 2018, includes a provision for GAO to review DOD's MWR programs. GAO assessed the extent to which (1) the services have met DOD's established funding targets for each category of MWR programs and DOD has comprehensively evaluated the relevance of its targets, and (2) DOD has oversight structures and performance measures that include measurable goals, including those for cost-effectiveness, by which to review MWR programs. GAO analyzed MWR program information for fiscal years 2012-2017 and compared DOD's MWR policy with guidance for using measures and evaluating goals.
What GAO Found
The Department of Defense (DOD) established funding targets for two categories of Morale, Welfare, and Recreation (MWR) programs—Category A, which promotes the physical and mental well-being of servicemembers, and Category B, which funds community support systems for servicemembers and their families. These targets are intended to ensure that the military services adequately fund these programs with appropriated funds instead of requiring servicemembers and their families to pay fees out of pocket to cover program costs. However, GAO found the following:
In fiscal years 2012-2017, the military services generally met the DOD-set target to provide 85 percent of appropriated funding for Category A programs but not the 65-percent target for Category B programs. Service officials said they are taking steps to meet the Category B target, such as by restoring targeted levels of appropriated funding support in future budget planning. Data GAO reviewed indicate that these steps are helping the services get closer to meeting the target for Category B.
DOD has not comprehensively evaluated the targets, established more than 20 years ago, to ensure that they are appropriate. DOD officials said they agree that it is time to evaluate the relevance of the targets, as the current operating environment is fundamentally different from when the targets were established 2 decades ago. Further, DOD officials said that they are unsure of the process or methodology used to originally develop the targets because they have no documentation supporting these decisions. Until DOD comprehensively evaluates the appropriateness of the targets and, based on its evaluation, documents any changes made, it cannot be certain that the targets reflect the current operating environment and do not pose undue financial burden on servicemembers.
DOD established oversight structures and performance measures for MWR programs, but has not established measurable goals to assess the cost-effectiveness of the 55 activities that make up MWR programs. DOD's MWR policy identifies six broad performance measure categories for the program. DOD officials responsible for developing MWR program goals acknowledged that DOD's MWR policy does not include measurable goals for assessing the cost-effectiveness of program activities, and they said they do not currently have plans to make any changes to the goals. Service officials told GAO that they collect and use various types of information within the categories to assess specific activities. While both the categories established by DOD and the service-specific efforts provide useful context about the status of individual MWR activities, they do not replace the need for measurable goals that can be used to assess whether the programs are operating cost-effectively. The services are in the early stages of developing more specific performance measures, but it is too early to determine whether these efforts will result in measurable goals that can be used to assess cost-effectiveness. Until DOD develops performance measures that include measurable goals, it cannot ensure that MWR programs meet servicemember needs in a cost-effective manner.
What GAO Recommends
GAO recommends that DOD evaluate the funding targets and document any changes needed, and develop measurable goals for MWR programs' performance measures. DOD concurred with the recommendations.
Background
Mortgages under RHS's program can be used to build, acquire, and rehabilitate rental housing in rural areas. They are generally 30-year loans with 50-year amortization periods and include subsidized interest rates as low as 1 percent. To help finance housing projects and keep rents affordable to low-income tenants, RHS offers rental assistance subsidies to some property owners, which cover the difference between the tenant's contribution and a unit's rent.
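A brief sketch of the loan structure just described may help: because payments are set on a 50-year amortization schedule but the loan term is 30 years, a balance remains when the mortgage matures. The loan amount below is hypothetical, and the standard fixed-payment amortization formula is assumed.

```python
# Illustrative amortization math for an RHS-style loan: payments follow a
# 50-year schedule at 1 percent interest, but the loan term is 30 years,
# leaving a balance at maturity. The $1M loan amount is hypothetical.

def monthly_payment(principal: float, annual_rate: float, amort_years: int) -> float:
    """Fixed monthly payment on a fully amortizing schedule."""
    r = annual_rate / 12
    n = amort_years * 12
    return principal * r / (1 - (1 + r) ** -n)

def balance_after(principal: float, annual_rate: float, amort_years: int,
                  years_elapsed: int) -> float:
    """Remaining balance after a given number of years of payments."""
    r = annual_rate / 12
    pmt = monthly_payment(principal, annual_rate, amort_years)
    bal = principal
    for _ in range(years_elapsed * 12):
        bal = bal * (1 + r) - pmt  # accrue one month's interest, then pay
    return bal

loan = 1_000_000
print(f"payment: ${monthly_payment(loan, 0.01, 50):,.2f}/month")
print(f"balance at 30-year maturity: ${balance_after(loan, 0.01, 50, 30):,.2f}")
```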
The rental assistance program, authorized in 1974, provides the rental subsidies through agreements with property owners for an amount estimated to last for 1 year, as required under the program's appropriations acts. Eligible tenants pay no more than 30 percent of their income toward the rent, and RHS pays the balance to the property owner. Tenants must be low-income (incomes above 50 percent of area median income but not more than 80 percent of area median income) or very-low-income (incomes not more than 50 percent of area median income) to be eligible for rental assistance. The agreements with the owners expire when the original dollar amount obligated is fully expended. Agreements specify that owners will receive payments on behalf of tenants in a designated number of units at the property. In addition, property owners must certify tenants' incomes annually or when a tenant experiences a substantial change in income. Statutorily, rental assistance is tied to RHS loans for rural rental housing and is no longer provided to property owners once mortgages mature.
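A minimal sketch of the subsidy split described above follows, assuming rent is compared against 30 percent of monthly income; the actual program uses adjusted income and other rules not modeled here, and the rent and income figures are hypothetical.

```python
# Minimal sketch of the rental assistance split: an eligible tenant pays
# at most 30 percent of income toward rent, and RHS pays the owner the
# balance. Figures are hypothetical; real determinations use adjusted income.

def split_rent(unit_rent: float, monthly_income: float) -> tuple[float, float]:
    """Return (tenant share, RHS subsidy) for one unit-month."""
    tenant_share = min(unit_rent, 0.30 * monthly_income)
    rhs_share = unit_rent - tenant_share
    return tenant_share, rhs_share

tenant, rhs = split_rent(unit_rent=700, monthly_income=1_200)
print(f"tenant pays ${tenant:.2f}, RHS pays ${rhs:.2f}")  # $360.00 / $340.00
```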
The program supports five general types of rural rental housing projects— family; elderly (units may be occupied by an income-eligible household that includes a tenant or co-tenant who has a disability or is age 62 or older, or both); mixed (project has both family and elderly units); congregate housing (project may be occupied by income-eligible elderly households that need meals or other services); and group homes (may be occupied by income-eligible elderly persons or individuals with disabilities who share living space within a rental unit).
Properties with RHS rental housing mortgages can exit the program in three ways—foreclosure, prepayment, and natural maturity of the mortgage. When an owner defaults on loan payments and the property is foreclosed, it may exit RHS's program. Properties can also exit the program when loans mature naturally, meaning the loan is paid off as scheduled by the original loan term. Loans can also be prepaid, meaning payments are made ahead of schedule, which ends the loan term early. Only those loans made on or after December 15, 1989, are ineligible for prepayment. As previously noted, once a property exits RHS's program, owners are generally no longer required to provide housing for low-income tenants and properties are no longer eligible to receive rental assistance that is used to keep rents affordable for tenants.
Some owners that are reaching the end of their RHS mortgage terms may wish to exit the program. Other owners may wish to remain in the program and continue renting to low-income tenants. RHS has offered tools and incentives to help owners stay in the program and preserve the affordability of rural rental housing. Some of these tools involve extending mortgage terms, which extends the availability of rental assistance to properties.
RHS’s June 2017 data showed that the program had approximately 14,000 properties containing 427,000 rental units. Of these, approximately 12,000 properties (85 percent) and 282,000 units (66 percent) received rental assistance. According to RHS, the agency has not financed any new rental housing properties since 2011. Instead, RHS has generally used program funding to repair and rehabilitate existing program properties.
RHS properties are geographically dispersed, but one-quarter of the RHS program, or about 3,500 properties, was concentrated in six states as of June 2017: Texas (670 properties); Missouri (609); North Carolina (595); Michigan (564); Illinois (534); and Minnesota (509) (see fig. 1). Appendix II provides data in table form for RHS properties, units, and units with rental assistance.
RHS's Multi-Family Housing Portfolio Management Division and the Multi-Family Preservation-Direct Loan Division administer USDA's rural rental housing loan program. RHS's national office also maintains the Automated Multi-Family Housing Accounting System (AMAS) and Multi-Family Information System (MFIS) databases, develops program policy, and oversees management of the program. RHS state offices administer the day-to-day operations of the rural rental housing program, including entering key mortgage and project information contained in hard copy mortgage closing documents into the AMAS and MFIS databases.
RHS Developed a Tool That Estimates That Large Numbers of Mortgages Will Mature Starting in 2028, and Better Controls Could Improve Data Accuracy
RHS Developed a Tool to Estimate Property Exit Dates
In March 2016, RHS developed the Multi-Family Housing Property Preservation Tool (preservation tool), an electronic system designed to use data from AMAS and MFIS to estimate mortgage maturity and property exit dates and to calculate new dates that may result from RHS’s preservation efforts. Before introducing the preservation tool in 2016, RHS officials manually calculated exit dates for rural rental properties, a process that was subject to errors and inconsistencies due to properties with multiple mortgages and mortgages that could be prepaid. AMAS and MFIS track loan closing dates; loan amounts; interest rates; and property location, among other information, but were not designed to estimate property exit dates.
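RHS does not publish the preservation tool's internal logic, but a simplified version of the exit-date calculation it automates might look like the sketch below: each loan's maturity is its closing date plus its repayment period, and a property's estimated exit is the latest maturity among its mortgages, which is one reason manual calculation was error-prone for properties with multiple loans. Field names and records are hypothetical, and prepayment is ignored.

```python
# Simplified sketch of estimating a property exit date from loan records,
# assuming a property exits when its last mortgage matures. Field names
# and loan records are hypothetical; prepayment and leap-day edge cases
# are ignored.
from datetime import date

def maturity(closing: date, term_years: int) -> date:
    """Scheduled maturity: closing date plus the repayment period."""
    return closing.replace(year=closing.year + term_years)

def estimated_exit(loans: list[dict]) -> date:
    """A property with several mortgages stays in until the latest matures."""
    return max(maturity(l["closing_date"], l["term_years"]) for l in loans)

loans = [
    {"closing_date": date(1988, 5, 1), "term_years": 50},
    {"closing_date": date(1998, 9, 1), "term_years": 30},
]
print(estimated_exit(loans))  # 2038-05-01
```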
According to RHS officials, the preservation tool and the underlying data it uses are publicly accessible via the Internet and are intended to improve program transparency and help support the agency’s preservation efforts. Users can search for the date a property began operating; total number of units; units receiving RHS rental assistance; mortgage amount and interest rate; mortgage prepayment eligibility; and property exit date estimates, among other information. The preservation tool enables RHS to look at trends in property exits across years and help determine when RHS will need to take preservation actions. As of April 2018, RHS had estimated property exit data available from 2017 to 2050, but not information on properties whose mortgages may mature in 2051 or beyond. RHS officials said that data will be released publically on its website when available.
RHS Data Show a Significant Increase in Maturing Mortgages after 2027
Our analysis of data used by the preservation tool showed that approximately 900 properties (6 percent of the program’s portfolio), including 20,000 units (5 percent), will have maturing mortgages and could exit the program between 2017 and 2027. Industry stakeholders said that low-income tenants living in these properties could face escalating rents or lose their housing altogether. In addition, over 13,000 properties (94 percent) and about 407,000 units (95 percent) are estimated to have mortgages that will mature between 2028 and 2050 (see fig. 2).
Our analysis of RHS's June 2017 data, the most recent data available, also showed that 35 percent of RHS's rural rental properties (4,944 out of 14,075 properties) have mortgages that are eligible for prepayment and could exit the RHS program ahead of their original mortgage maturity date. This earlier exit could cause tenants to face rent increases or search for alternative affordable housing earlier than expected (see fig. 3). According to RHS, if an owner prepays and a property exits the RHS program, rental assistance is no longer available to assist that property's tenants. Concerns about the loss of affordable units led Congress to enact legislation that precluded prepayment for loans made on or after December 15, 1989. For those properties that are eligible for prepayment, RHS officials said they cannot predict which owners might make this choice, and the agency has not been collecting data on borrowers' prepayment choices. As a result, outreach to these owners is particularly important for possible preservation of affordable housing.
Better Controls Could Improve the Accuracy and Utility of Maturing Mortgage Data
Our review identified three internal control shortcomings that could impact the accuracy, completeness, and timeliness of RHS’s data on properties with maturing mortgages.
First, RHS lacks sufficient controls to help ensure the accuracy of all loan information for each mortgage at the time of initial data entry because it only retroactively reviews a sample of loan document information entered into AMAS and MFIS. Although RHS staff reviews some loan information through the agency’s State Internal Review process, officials noted that the review of mortgage data entered into AMAS and MFIS only occurs for each field office at least once every 5 years and includes a step for staff to review and reconcile AMAS information with loan documents to help ensure the accuracy of RHS debt instruments. RHS officials added that they improved the guidance in October 2017 by adding specific data checks intended to help ensure that loan amount, interest rate, and amortization period information were correct. In addition, during our review of RHS’s rural rental housing loan documents, we identified mismatches between loan document information and the data in AMAS and MFIS used by the preservation tool. We found errors in the data on mortgage amounts, closing dates, and repayment periods in an estimated 3 percent to 5 percent of the properties in five states we examined. While the data we reviewed had limited errors, without appropriate internal controls, RHS cannot be assured that the data that is used by the preservation tool will be reliable in the future, and the mismatches suggest that RHS could improve how data are entered into AMAS and MFIS.
According to RHS officials, these systems were not designed to estimate the expected maturity of rural rental housing mortgages. At the time of the systems’ development, officials said that it was not a priority to build in controls to ensure the accuracy of such estimates. RHS officials said that rural rental housing mortgages would not mature for many years after they were originated. As a result, RHS did not create controls intended to ensure the accuracy of data related to mortgage maturity and did not prioritize establishing a process to check that data.
Federal internal control standards state that management is responsible for designing control activities for information systems and information processing objectives to support the completeness, accuracy, and validity of information processing. Without these controls, mortgage information used by the preservation tool to estimate property exit dates may be inaccurate and could affect the reliability of exit date estimates needed to identify properties for possible preservation.
Second, RHS lacked controls to check the accuracy and completeness of underlying data used by the preservation tool. When we examined the underlying data the preservation tool uses to estimate property exit dates, we observed missing (blank) values for some property address, property state, borrower address, and management company name information. For example, borrower address and property address were missing for 588 and 141, respectively, of the roughly 14,000 properties. In addition, some properties in RHS's data included estimated property exit dates but contained incomplete information ("N/A" designations) for key variables such as property name, property address, property state, number of units, and type of housing.
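A completeness check of the kind these findings suggest is missing can be quite simple. The sketch below flags records with blank or "N/A" values in key fields; the field names mirror those discussed above, and the sample record is hypothetical.

```python
# Minimal completeness check: flag records with blank or "N/A" values in
# key fields. Field names mirror those discussed in the report; the
# sample record is hypothetical.

KEY_FIELDS = ["property_name", "property_address", "property_state",
              "borrower_address", "number_of_units", "housing_type"]

def incomplete_fields(record: dict) -> list[str]:
    """Return the key fields that are missing, empty, or marked N/A."""
    return [f for f in KEY_FIELDS if record.get(f) in (None, "", "N/A")]

record = {"property_name": "Maplewood Apts", "property_address": "",
          "property_state": "MO", "borrower_address": None,
          "number_of_units": 24, "housing_type": "N/A"}
print(incomplete_fields(record))
# ['property_address', 'borrower_address', 'housing_type']
```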
Although RHS has been developing and implementing the preservation tool since 2016 and has made the preservation tool’s exit date estimates available on its website, the agency has not yet developed a control process to identify potential issues with its underlying data. As noted above, federal internal control standards require activities to help ensure the completeness, accuracy, and validity of program information. Without information that has been checked for accuracy, RHS might not be assembling the most complete and accurate information with which to estimate exit dates and begin possible preservation of rural rental housing for low-income tenants. In addition, RHS is missing an opportunity to improve data on properties with maturing mortgages and be better positioned to address those properties to protect low-income tenants.
Third, the agency has not developed a regular, timely process for updating the preservation tool’s underlying data and exit date information. Since RHS developed the tool in March 2016, RHS updated the underlying data for September and December 2016 and June 2017 but not for 2018. RHS staff said the data were intended to be updated quarterly because information that affects exit date calculations changes as RHS preserves rural rental housing or properties exit the RHS program. However, RHS officials said that they have been unable to update the preservation tool quarterly due to staff attrition and competing program demands across RHS.
Federal internal control standards require activities to help ensure the accuracy and validity of program information. For RHS’s information to be accurate and valid, it needs to be as current as possible for program management purposes. Since the mortgage maturity dates of properties are affected by RHS’s preservation options and the exit dates of properties can change over time as mortgages mature, it is critical for RHS to have accurate, complete, and timely rural rental housing information.
Without controls to help ensure that the underlying data are regularly updated, RHS, industry stakeholders, and the public might not have the most current information available for estimating property exit dates and starting preservation.
RHS Has Taken Steps to Address Properties with Maturing Mortgages, but Lacked Comprehensive Planning and Faces Statutory Constraints That Limit Preservation
While RHS has taken steps to address properties with maturing mortgages, such as identifying various options and incentives intended to preserve the affordability of properties for low-income tenants, a majority of properties with maturing mortgages from 2014 to 2017 have exited RHS's rural rental housing program. Moreover, RHS has not taken important steps to comprehensively plan and prepare for the much larger number of potential property exits in future years, such as developing goals and metrics to assess the effectiveness of its preservation efforts and analyzing risks to its ability to preserve properties. While taking these steps would help RHS's preservation efforts, some tenants may still be at risk of losing rental assistance because RHS cannot continue to provide it once mortgages mature. RHS also cannot provide vouchers to tenants residing in properties whose mortgages have matured.
RHS Has Taken Steps to Preserve Properties with Maturing Mortgages with Limited Success to Date
In addition to developing the preservation tool as a first step in preserving properties with maturing mortgages, RHS officials said they commissioned two studies on the impacts of maturing mortgages to advance the agency’s understanding of key issues. Officials said they hoped the two studies would help the agency prepare for maturing mortgages. In September 2016, the Housing Assistance Council completed its first study for RHS, which identified the characteristics of RHS’s rural housing program and the impact that maturing mortgages may have on tenants and geographic regions. The report noted that understanding these characteristics and effects is important for planning and implementing strategies to preserve the properties. According to officials, the second study, which was under review by the agency as of December 2017, was intended to outline issues facing RHS’s multifamily housing program, such as the estimated $5.6 billion needed to rehabilitate properties program-wide, and possible policy solutions for addressing potential property exits.
RHS has offered property owners several options to prevent property exits and preserve the access to and affordability of housing for low-income tenants (see fig. 4).
Reamortization: Loan reamortization and a shortened reamortization process (known as "Re-Am Lite") allow borrowers to repay outstanding loan balances over new, longer repayment periods. Officials said that by extending the term of the loan, the agency can continue providing rental assistance to that property (see the sketch following this list for the basic arithmetic). Re-Am Lite does not require borrowers to have their properties appraised, which officials said can shorten the reamortization application process by 60 to 90 days.
Deferral: Borrowers can defer repayment of direct loans for up to 20 years. This prevents property exits and preserves affordability for low-income tenants by continuing the payment of rental assistance to property owners. Loan deferrals can be offered under the Multi-Family Housing Preservation and Revitalization program. This 12-year-old demonstration program offers a combination of property rehabilitation funding and the opportunity for owners to reamortize or defer loan payments to help keep rents affordable. Officials said the program can also be used to attract new owners who wish to stay in the affordable housing program by offering a funding source for property rehabilitation.
Prepayment Offer: If borrowers decline RHS’s options that extend loan terms (reamortization, Re-Am Lite, and deferral), but wish to remain in the RHS portfolio, the agency encourages property owners to submit a request to prepay their mortgage, if eligible to do so and if their mortgages are 12 or more months from maturity. After an owner submits a prepayment request, RHS is authorized to offer owners incentives to avert prepayment. These incentives include increased returns on investment to for-profit owners, additional rental assistance units, and equity loans.
Prepayment: If borrowers decline RHS’s options, the agency encourages property owners to prepay their loans. While owners who prepay would no longer have rural rental housing loans with RHS or be eligible to receive rental assistance from the agency, prepayment of a loan allows RHS to provide vouchers to tenants affected by the loss of affordable housing. According to RHS data, only about 5,000 of the 14,000 properties within RHS’s multifamily housing program are eligible to prepay loans.
Transfer: RHS has taken steps to facilitate the sale (transfer) of properties to new owners to prevent property exits. Officials described this as a key preservation tool because new RHS mortgage terms typically accompany the sale and allow for rental assistance to continue at properties where applicable. First, the agency established a more centralized and standardized transfer process based on input from developers, owners, and other stakeholders, which officials said reduced the average property transfer time from 156 to 112 days. Second, RHS maintains a spreadsheet available on the agency’s website, called the Preliminary Assessment Tool, which officials said streamlines and provides greater transparency to the property transfer process for potential buyers and sellers. Third, the agency hosted three conferences in 2016 designed to help find new buyers for RHS properties whose owners were seeking to sell. Finally, in September 2016, RHS announced a 2-year pilot program to encourage nonprofit organizations to purchase rural rental properties with maturing mortgages, which could create new loan terms that would extend the repayment period and continue the properties’ affordability. Prior to the pilot, nonprofit owners did not make initial equity contributions to projects and therefore could not earn any return on investment. Under the pilot, loan transfers to nonprofits would allow nonprofits to earn returns on their own resources initially invested in the property.
Despite the preservation options and incentives identified by RHS, 61 percent (148 of 244) of the properties with mortgages that matured between January 2014 and December 2017 exited the agency’s rural rental housing program (see table 1). Some industry stakeholders said that the options and incentives did not adequately or broadly appeal to property owners, adding that existing options and incentives would be used primarily by owners who have no other choice but to stay in the program. Stakeholders explained that owner choice might be limited by the condition of a property or because the property is located in a market that would not support a sale or rent increases to market levels. Some stakeholders also said options that extend loan terms offer only a short-term solution to preservation challenges because mortgages cannot be extended indefinitely.
RHS Has Not Comprehensively Planned to Preserve Properties with Maturing Mortgages
RHS’s efforts lacked a number of important steps that would better position the agency to preserve properties. First, RHS lacks documented goals for preserving its program and has not created measures for tracking progress toward such goals. In the absence of documented goals, RHS national officials stated that the agency’s goal is to preserve all properties within its program that are needed to ensure sufficient affordable housing, though they acknowledged that current resource levels would preclude that possibility and that some owners may leave the program regardless of the options the agency offers.
Second, RHS is not monitoring and assessing the options and incentives it provides in a way that would inform or improve their use. While the agency can track preservation status—meaning whether a property is still within the program or not—through its preservation tool discussed above, it is not actively tracking preservation outcomes. RHS is also not systematically collecting data for monitoring purposes. RHS officials said agency databases contain variables that would show which options owners choose to use, but added that this information is not available in a single source. RHS is also not collecting information that would help it assess options. For example, the agency is not collecting information from property owners on which options and incentives appeal to them, information that would help the agency assess how well preservation options are being received by borrowers. Similarly, RHS is not monitoring the results of efforts to preserve properties, including how many properties were transferred as a result of its three buyers-sellers conferences.
Finally, RHS has not fully analyzed or responded to the risks facing its rural rental housing program, such as the following:
Owner behavior—RHS officials told us a key risk to preserving its rural rental housing program is that the agency cannot predict whether owners will choose to leave the program or stay. To help respond to this risk, the agency directed staff to notify owners 3 years in advance that their loan is maturing and that options are available for preserving the property within the program. While this window could provide RHS with the time to plan for property exits, RHS is not collecting information from owners on why they may choose to exit rather than stay in the affordable housing portfolio. The agency’s effort to predict owner behavior would be aided by collecting and analyzing data on how many owners choose to leave the program and why.
Resource constraints—During a May 2017 conference, a senior RHS official highlighted the issue of agency resource constraints for addressing maturing mortgages, saying that the agency does not have the ability or the financial resources to preserve all of the properties that could leave the program once the loans mature. RHS has also acknowledged that, even at lower levels of about 80 maturing mortgages each year, the agency does not have the resources to provide all preservation options to every owner who wishes to use them. In addition, RHS has not analyzed or planned for how it would prioritize the use of limited resources. RHS national office officials said there is some guidance that state offices could use to prioritize resources, but this guidance was not specific to maturing mortgages and was being updated to include information that could help prioritize limited resources for preserving properties. That update is expected by January 2019.
Management of maturing mortgages—RHS has not analyzed or responded to risks involving staff management of maturing mortgages. For example, the agency’s national office said that staff attrition and turnover in the national, state, and field offices that manage mortgages have resulted in fewer staff managing its program in general, and that it was not sure what effect maturing mortgages would have on staff workloads. RHS staff in some of the states we visited expressed concern that workloads are already heavy and that any increase caused by maturing mortgages, including the smaller numbers occurring now, might affect their ability to be responsive to program needs. Similarly, some state office staff expressed concerns that they were not trained to manage and respond to properties with maturing mortgages and needed additional guidance from the national office. RHS national office officials said that while the agency does not provide training specific to maturing mortgages, it does provide training on loan servicing, which includes the use of preservation options. The national office also conducts monthly conference calls with all state offices, which have included maturing mortgages as a topic and which can be used to answer staff questions.
Rehabilitation costs—RHS has commissioned two studies on the risks that program-wide rehabilitation costs pose to its ability to preserve its program, but has not analyzed or planned for how it would address the estimated $5.6 billion needed to rehabilitate its aging portfolio of properties. Officials said that they have met with industry stakeholders and Congress about capital needs estimates, but that no additional steps, such as requesting additional funding, were taken. Officials added that federal budget uncertainties caused by years of continuing resolutions and sequestration have made planning for maturing mortgages and program-wide rehabilitation more difficult. However, RHS has been aware of growing rehabilitation needs since at least 2004, when the agency released a commissioned study that said capital needs program-wide would continue to increase and cost more if not addressed.
Federal internal control standards call for agencies to define objectives in specific and measurable terms to enable management to identify, analyze, and respond to risks related to achieving those objectives. Specifically, these standards call for agencies to establish goals and performance measures for tracking progress toward achieving goals; establish activities that monitor performance and assess results so that appropriate action is taken; and identify, analyze, and respond to risks related to achieving their goals.
RHS officials said that, as of December 2017, they had not taken steps to develop goals and measures, perform key monitoring and assessments, and analyze and respond to risks because the larger number of potential property exits is not expected to begin for another 10 years (2028). RHS officials said that they were using this time to see how their existing options and resources perform, and that the agency would make resource and other adjustments over time as they gained experience with preservation. However, as discussed above, mortgages have already begun to mature, and the majority of properties with mortgages maturing between 2014 and 2017 exited the agency’s rural rental housing program. Some property owners may have chosen to exit the program regardless of additional actions or incentives. For example, RHS officials noted that many of the property owners whose mortgages are currently maturing are nearing retirement or prefer market returns to RHS’s options and incentives. However, the percentage of exits (61 percent) suggests that RHS’s current planning efforts have not been sufficient to keep most properties with matured mortgages in its rural housing program.
By not having taken the planning steps identified above, RHS is not well positioned to respond to properties that currently have maturing mortgages and require action, nor is it prepared for the much larger number of potential property exits beginning in 2028. In particular, without developing goals and measures, conducting sound monitoring and assessments of rural rental housing program developments, and analyzing and responding to risks, RHS may not have the key information, staff, tools, and resources in place to effectively preserve properties and prevent rural low-income tenants from facing financial hardship due to increasing housing costs or losing their housing altogether.
Law Limits RHS’s Ability to Offer Rental Assistance and Vouchers to Low-Income Tenants
RHS has options to extend loan terms in order to continue rental assistance at properties, but it cannot continue providing rental assistance to tenants once the loan is paid off and the property exits RHS’s affordable housing program. Some owners of properties with maturing mortgages may be open to continuing to offer rental assistance and agreeing to restrict the units to eligible low-income tenants after mortgage maturity. Further, some industry stakeholders said that the ability to extend existing rental assistance contracts after mortgage maturity would be useful in protecting tenants from rent increases or displacement.
However, in some cases, property owners may not want to extend rental assistance contracts after mortgage maturity, and tenants living in these properties could be subject to rent increases or the risk of displacement. RHS lacks the authority to provide vouchers to tenants in these situations. Vouchers would allow RHS to help these tenants pay rent in their existing units or at other rental housing in the private market without requiring the owner to serve low-income tenants exclusively.
In 2016, legislation was introduced that would have allowed RHS to continue providing rental assistance to properties through new contracts with owners after their loans matured or to provide vouchers to tenants under different circumstances, including mortgage maturity. In exchange for accepting rental assistance payments on behalf of eligible tenants, the legislation would have required property owners to enter into an agreement with RHS to ensure that the property remained subject to low-income use restrictions for an additional period of time. In cases where a new rental assistance contract was not possible, RHS would have offered vouchers to tenants after mortgage maturity. The proposed legislation was introduced on April 12, 2016, but no further action on the bill was taken.
In the past, Congress has taken legislative action to continue rental assistance to low-income tenants and protect them from the impact of terminated assistance. For example, beginning in fiscal year 2006, Congress authorized RHS to provide vouchers to tenants affected by loan prepayments, which result in the property owners’ exit from RHS’s housing program. Tenants receiving vouchers after the prepayment of a loan could use them to remain in the property after it exits RHS’s program or to find other suitable housing in the private market. Congress has limited the amount that RHS pays in subsidies: the amount of a voucher is limited to the difference between the comparable fair market rent for the housing unit occupied by a tenant and the rent paid by the tenant on the date of prepayment or foreclosure.
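This statutory cap reduces to a simple subtraction. The sketch below is a minimal illustration with hypothetical rent figures; the zero floor is our assumption rather than statutory language.

```python
# Illustrative only: the rent figures are hypothetical, not actual RHS data.
def voucher_amount(fair_market_rent, tenant_rent_at_prepayment):
    """Voucher = comparable fair market rent minus the tenant's rent on the
    date of prepayment or foreclosure (floored at zero, an assumption)."""
    return max(fair_market_rent - tenant_rent_at_prepayment, 0.0)

# A unit with a $750 comparable fair market rent and a $400 tenant rent
# at prepayment would yield a $350 monthly voucher.
print(voucher_amount(750, 400))  # 350.0
```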
In addition, when the Department of Housing and Urban Development (HUD) faced a similar loss of affordable housing, Congress gave the department authority in 2011 to further protect tenants through the creation of the Rental Assistance Demonstration (RAD). Before the RAD program, HUD had limited authority to extend rental assistance at these properties when contracts expired or owners terminated contracts. However, this demonstration, among other things, allowed HUD to continue providing rental assistance to property owners after the original contracts expired. In 2014, we reported that the conversion of rental assistance should not have an effect on voucher program costs because HUD uses the same calculation for providing budget authority for the project-based vouchers converted under RAD as it does for calculating budget authority for tenant-protection vouchers.
Without the authority to continue providing rental assistance or to provide vouchers to tenants at existing properties whose mortgages have matured, RHS is not well positioned to protect tenants from potential rent increases or displacement from their units. The agency could lose important sources of low-income housing, which for some communities may be the only source of affordable housing. Further, without the authority to offer vouchers to tenants living in units that were receiving rental assistance at mortgage maturity, tenants in properties where owners choose not to extend rental assistance contracts may face rent increases they cannot afford. Continued provision of rental assistance could be limited to units or tenants that were receiving rental assistance at mortgage maturity and would not represent an expansion of the number of units or tenants assisted. Furthermore, Congress could structure this authority to have no or limited budgetary impact, similar to what was done under HUD’s RAD program. For example, subsidies could be kept at a level similar to what was provided at mortgage maturity.
Conclusions
RHS’s preservation tool is a positive first step to help the agency estimate property exit dates, alert stakeholders to properties with maturing mortgages, and begin to preserve their affordability. However, the lack of controls over the underlying data used by the preservation tool, along with missing information on some properties, raises concerns about the data behind the tool, especially as RHS applies preservation options that extend mortgages and result in new exit dates. These gaps demonstrate that RHS has opportunities to improve rural housing program data as properties continue to have maturing mortgages. RHS has also not been able to update the preservation tool’s data on a regular basis. Developing controls with clear guidance on the frequency and process for routinely updating data on RHS’s website could help ensure that preservation efforts are based on the most current information available. Regularly updated information would also help ensure that industry and other stakeholders have the most recent information available on RHS’s rural rental housing program.
While RHS has taken steps to better understand maturing mortgage challenges and preserve properties, RHS’s strategy of using the next several years to plan for the larger number of expected future maturations and test available preservation options does not address the significant number of mortgages that will mature before then. The agency has also not taken important planning steps called for by federal internal control standards: establishing goals and performance measures for tracking progress toward achieving those goals; establishing activities that monitor performance and assess results so that appropriate action is taken; and identifying, analyzing, and responding to risks related to achieving its goals. Actions to enhance the agency’s data and controls, and to strengthen its comprehensive planning and program evaluation processes, would better position RHS to respond to maturing mortgages, preserve its rural rental housing program, and maintain affordable housing for low-income tenants.
Further, the agency lacks the authority to continue rental assistance to properties with matured mortgages and is limited in its ability to issue vouchers to tenants affected by property exits. Even if the agency takes additional steps to plan for maturing mortgages or increases options and incentives for preserving housing, these limits on rental assistance and vouchers restrict RHS’s ability to protect tenants. These limits also affect RHS’s ability to meet the agency’s objective of providing decent, safe, and sanitary housing to low-income rural residents. Expanding RHS’s ability to protect existing tenants would give the agency tools that are available to other affordable rental housing programs, and could be implemented in a way that maintains, rather than increases, program size and costs.
Matter for Congressional Consideration
We are making the following matter for congressional consideration:
For RHS properties whose mortgages have matured, Congress should consider granting RHS the authority to renew annual rental assistance payments to owners who wish to continue to receive them and provide vouchers to tenants living in rental assistance units in properties whose owners choose to no longer receive rental assistance.
Recommendations for Executive Action
We are making the following five recommendations to RHS:
The RHS Administrator should establish additional controls to check the accuracy of all loan information entered into RHS information technology systems, to help ensure complete, accurate, and reliable data for estimating rural rental housing property exit dates. (Recommendation 1)
The RHS Administrator should establish a process to help ensure regular and frequent updates for the preservation tool and its underlying data. (Recommendation 2)
The RHS Administrator should establish performance goals and measures for its rural rental housing preservation and rehabilitation efforts and report out these outcomes. (Recommendation 3)
The RHS Administrator should monitor the results of rural rental housing preservation efforts and assess the degree to which those efforts yielded intended outcomes. (Recommendation 4)
The RHS Administrator should identify, analyze, and respond to risks to achieving its preservation goals, including resource and staffing limitations. (Recommendation 5)
Agency Comments
We provided a draft of this report for review and comment to RHS and HUD. RHS provided technical comments, which we incorporated into the report, and stated that it agreed with all five of our recommendations but did not provide a formal agency comment letter. HUD stated that it had no comments on the draft.
We are sending copies of this report to the appropriate congressional committees, the Secretary of Agriculture, the Secretary of Housing and Urban Development, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III.
Appendix I: Objectives, Scope, and Methodology
Our objectives were to examine the Rural Housing Service’s (RHS) efforts to (1) estimate the dates that properties may exit the rural rental housing program due to mortgage maturity, and (2) preserve the affordability of rural rental properties with maturing mortgages.
To examine RHS’s efforts to estimate property exit dates, we analyzed RHS documentation and interviewed RHS officials about the data the agency uses to identify and preserve properties with maturing mortgages. To determine what steps RHS has taken to help ensure the accuracy and reliability of RHS’s Multi-Family Housing Property Preservation Tool (preservation tool), we reviewed documentation that included the preservation tool’s user guide and the capabilities it offered the agency and the public. We also conducted interviews with RHS national and state office officials about the preservation tool and about how the agency’s Automated Multi-Family Housing Accounting System (AMAS) and the Multi-Family Housing Information System (MFIS) operate. AMAS contains data on loans and rental assistance contracts, and MFIS tracks monthly loan and rental assistance payments and contains data on the location of RHS’s rural rental properties. Both systems provide data used by the preservation tool to calculate mortgage maturity and exit dates for rural rental housing properties. To determine how the preservation tool was built, the main information it uses to determine mortgage maturity and property exit dates, and the information it calculates for users, we interviewed the contractor hired by the agency to create and populate the preservation tool.
To analyze the accuracy of AMAS and MFIS data used by the preservation tool to calculate mortgage maturity and property exit dates, we reviewed mortgage documents that RHS uses to populate those systems. We reviewed loan documents for a generalizable stratified random sample of 100 properties in five states—California, Illinois, Minnesota, Pennsylvania, and Virginia—to determine if loan information found within mortgage documents matched data contained in AMAS and MFIS for selected variables relevant to mortgage maturity and property exit date calculations. We stratified the population of 2,152 loan documents in the five states by state, number of loans per property, and age group. We computed an initial sample size of 60 properties for a simple random sample to achieve an upper bound of no more than 5 percentage points, an expected error (inaccurate data field) rate of 0 percent, and a 95 percent confidence level. We then proportionally allocated the sample across the strata and increased sample sizes in strata within each state so that we selected at least 10 properties with more than 1 loan and 10 properties older than 20 years. States we visited were selected based on their geographic diversity, the diversity (age and size of program) of their rural rental housing properties, and their proximity to GAO offices.
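For illustration, the following minimal sketch shows how such a sample size can be derived under the standard exact (Clopper-Pearson) one-sided upper bound for zero observed errors; the precise formula actually used may differ, so this is an approximation of the computation, not a reproduction of it.

```python
# Minimal sketch, assuming the exact (Clopper-Pearson) one-sided upper
# bound with zero observed errors: (1 - p_upper)^n = alpha, so
# p_upper = 1 - alpha**(1/n). Find the smallest n whose 95 percent upper
# bound is at most 5 percentage points.
alpha = 0.05          # 95 percent confidence level
target_upper = 0.05   # upper bound of no more than 5 percentage points

n = 1
while 1 - alpha ** (1 / n) > target_upper:
    n += 1

print(n)  # 59, in line with the initial sample size of 60 described above
```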
To select properties’ loan files for this review, we created a nongeneralizable sample of 20 properties in each of the five states. We also interviewed agency officials knowledgeable about the data— including officials from RHS, Rural Development’s Office of Operations and Management, and the U.S. Department of Agriculture’s (USDA) National Financial and Accounting Operations Center—about the processes used to populate these systems and any quality checks in place for ensuring that data were inputted completely and accurately, including any available documentation on these steps. We also interviewed RHS state office officials, who service loans, about the process for identifying errors in these systems and making corrections.
To determine which rural rental housing properties were estimated to exit the RHS program and where these properties were located, we analyzed RHS’s raw data from June 2017 (the latest available RHS data). We analyzed the data to determine the number of properties, units, and rental assistance units with property exit dates by state and by year from 2017 to 2050. We also generated summary statistics on the number of properties that were eligible to prepay their mortgages. In assessing RHS’s data, we also checked for outliers and missing information. Although we found a number of data anomalies that point to the need for better data controls, we determined the data we used were sufficiently reliable for purposes of describing the estimated number of properties that could exit the RHS program between 2017 and 2050.
To better understand the calculations used by the preservation tool, we reviewed the logic, or code, it uses to calculate mortgage maturation dates. For this analysis, we used documentation on the program used to generate estimates and compared this documentation to the code to identify any operational differences. Additionally, we reviewed each of the functions within the code and looked for internal inconsistencies or deviations from financial convention that might cause incorrect predictions.
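For readers unfamiliar with what such logic looks like, the fragment below is a deliberately simplified, hypothetical illustration of a maturity-date calculation of the kind reviewed. The preservation tool's actual code is not public, so the function and its handling of extensions are assumptions, not RHS's implementation.

```python
from datetime import date

# Hypothetical illustration only, not the preservation tool's actual code.
# At its simplest, a maturity date is the origination date advanced by the
# loan term, plus any extension from a deferral or reamortization.
def estimated_maturity_date(origination, term_years, extension_years=0):
    return origination.replace(year=origination.year + term_years + extension_years)

# A 50-year loan originated in mid-1978 with no extension matures in 2028,
# at the front of the large wave of maturities the report describes.
print(estimated_maturity_date(date(1978, 6, 1), 50))  # 2028-06-01
```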
To examine steps RHS has taken to preserve properties with maturing mortgages, we reviewed documents that listed options available for retaining properties with maturing mortgages. We gathered and analyzed documentation on any comprehensive planning efforts by RHS to address rural rental housing maturing mortgages, including documentation showing preservation goals and measures, and any assessments of RHS’s plans, efforts, or resources needed to address maturing mortgages. We also analyzed documentation and interviewed national and state RHS officials about any training and guidance being provided to staff on maturing mortgages. In addition, we interviewed RHS national and state officials about what tools, resources, and plans were in place for addressing maturing mortgages and their limits. Further, we asked about ongoing efforts to address maturing mortgages, including any plans to obtain additional resources for managing maturing mortgages now and in the future, when a larger number of properties are expected to have loans mature. We reviewed, and interviewed officials about, studies commissioned by RHS on the effects of maturing mortgages on the rural affordable rental housing program and affected communities and on program-wide rehabilitation needs and cost estimates. We also assessed the studies’ approaches and methodologies for any flaws.
To determine stakeholder perspectives on how RHS was managing maturing mortgages, we interviewed officials from a judgmental sample of rural housing industry organizations. We took multiple steps to identify these industry organizations. First, we met with an affordable housing organization with a national membership that represents owners, developers, housing advocates, and tenants. We asked this national organization to identify industry organizations that work with RHS. From that list, we focused on organizations that also had a multi-state or national focus. Second, during interviews with these organizations, we requested additional contacts and interviewed organizations that were named during multiple interviews. This selection process allowed us to identify stakeholders representing a diverse range of roles in the rural housing industry, including developers, borrower and tenant advocacy organizations, and organizations advocating for the retention or expansion of affordable housing.
To determine how other agencies approached expiring rental assistance contracts and low-income housing preservation, we also interviewed Department of Housing and Urban Development officials. More specifically, we determined what key steps and best practices the department used to preserve its multifamily housing program properties, including properties with maturing mortgages, and what tools and resources were required for managing its housing program.
We conducted this performance audit from May 2016 to May 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Number of Properties and Units That Could Exit the Rural Housing Service’s Program between 2017 and 2050
Appendix III: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Harry Medina (Assistant Director), Steve Ruszczyk (Analyst in Charge), Holly Hobbs, Enyinnaya David Aja, Jim Ashley, Stephen Brown, William Chatlos, DuEwa Kamara, John McGrail, Marc Molino, and Tovah Rom made key contributions to this report.
Why GAO Did This Study
Under its rural housing program, RHS provides mortgages and rental assistance to support affordable rental units for low-income tenants (see figure). When these mortgages reach the end of their terms (mature), property owners may exit the program; current law does not allow RHS to continue providing rental assistance when such exiting occurs. As a result, tenants in properties with mortgages that are maturing may face rent increases or lose their housing altogether.
GAO was asked to examine how RHS is addressing the risks posed by maturing mortgages. This report examines RHS's efforts to (1) estimate rural housing property exit dates and (2) preserve the affordability of rural rental properties with maturing mortgages. GAO reviewed RHS mortgage loan data and preservation documents, and interviewed RHS officials and industry stakeholders.
What GAO Found
The U.S. Department of Agriculture's Rural Housing Service (RHS) implemented an automated tool to estimate when properties could exit the rural rental housing program, but RHS lacked sufficient controls to ensure the accuracy, completeness, and timeliness of those estimates. In 2016, RHS developed its Multi-Family Housing Property Preservation Tool to replace a manual process of estimating exit dates. RHS data suggest that a smaller number of properties could exit RHS's program in the near term, but between 2028 and 2050, over 90 percent of RHS's properties and units could exit the program (about 13,000 properties with 407,000 units). However, RHS lacked controls that would better ensure the accuracy and completeness of these estimated exit dates, such as the verification of key data input at mortgage origination. In addition, RHS had not established a regular process to update the preservation tool's underlying data due to staff turnover and data system challenges. Without these controls, RHS may lack assurance that is has reliable data for calculating exit dates and initiating preservation efforts.
While RHS has taken actions to address properties with maturing mortgages, such as offering property owners options designed to prevent property exits, about 60 percent of properties with maturing mortgages exited the program from 2014 through 2017. The agency's planning efforts lacked key steps such as (1) establishing preservation goals, (2) developing metrics for evaluating preservation efforts, and (3) analyzing and responding to risks facing its portfolio, such as resource limits and growing capital rehabilitation needs. Without taking these actions, RHS is not well positioned to preserve affordable housing in the near term or when much larger numbers of properties and units could exit the program starting in 2028. Although taking the steps above would help RHS's preservation efforts, some tenants may still be at risk of losing rental assistance when mortgages mature. Accordingly, allowing RHS to renew rental assistance after mortgage maturity could protect assisted low-income tenants from increased rents or displacement from their units. When the Department of Housing and Urban Development (HUD) faced a similar loss of affordable housing subsidies, Congress authorized the department in 2011 to continue providing rental assistance at properties after contracts expired.
What GAO Recommends
Congress should consider granting RHS authority to continue providing rental assistance to tenants in properties with maturing mortgages. GAO is also making five recommendations, including that RHS improve data quality and take steps to comprehensively plan for preserving properties with maturing mortgages. GAO provided a draft of this report to RHS and HUD for review and comment. RHS agreed with all five of GAO's recommendations.
Background
VA provides education benefits to eligible veterans and their beneficiaries enrolled in approved programs of education and training to help them afford postsecondary education. VA staff conduct oversight of schools receiving these benefits. In addition, each year, VA contracts with state agencies to help provide this school oversight. In fiscal year 2017, there were about 14,460 schools receiving VA education benefits for about 750,000 veterans and their beneficiaries across the country.
State agencies’ core oversight functions, as generally required by statute, VA regulations, and their VA contracts, include approval of schools to receive VA education benefits, annual compliance surveys of schools—which are reviews to ensure schools’ compliance with program requirements—and technical assistance to schools, among other things (see fig. 1). VA and state agencies both conduct annual compliance surveys of selected schools, which generally entail a visit to the school. For veterans to receive the education benefits, school employees must certify to VA that they are enrolled in classes and notify VA of any changes in enrollment.
The National Association of State Approving Agencies (NASAA) was founded to coordinate the efforts of state agencies and is managed and administered by an executive board and several leadership committees, such as a contract committee and a legislative committee. All members of NASAA leadership are also either directors of, or hold other roles at, individual state agencies. VA’s Education Service is led by a Director and is under the Veterans Benefits Administration. This office works with NASAA to prepare annual contracts to allocate federal funding and specify workload requirements for each state agency.
Limited Funding Has Impacted States’ Oversight Abilities, Leading State Agencies to Withdraw from This Role, and VA Has Not Assessed How It Will Respond to Future State Withdrawals
Funding Has Remained Relatively Constant Over a Decade and VA Recently Revised Its Allocation Method
For over a decade, funding provided by VA to state agencies remained at the same level of $19 million. In fiscal year 2018, VA allocated $21 million for state agencies—the first increase in funds allocated to states since fiscal year 2006 (see fig. 2).
Each year, state agencies can also request supplemental funding from VA if their costs exceed their allocated funding amount. VA has the discretion to approve an agency’s request based on its justification of need and the amount of VA funding available for supplemental requests. NASAA officials said that supplemental funding is helpful, but that it is not a reliable funding source because there is no guarantee that VA will be able to provide states with the requested amount. According to NASAA officials, some state agencies also receive additional funding from their state governments if they request these funds, but many states do not provide this additional funding. NASAA officials also noted that in some cases, states do not want to provide their own funds to state agencies because their view is that the agencies already receive VA funding through their federal contracts.
VA recently changed its method of allocating funding to state agencies. VA hired an external contractor to develop a new funding allocation method. Before fiscal year 2017, VA funded state agencies primarily based on the number of schools in the state with at least one veteran student receiving VA education benefits in the previous year. In fiscal year 2017, VA implemented a new funding allocation method. VA officials told us this new method was a significant improvement over the previous method they used, which was very limited. For example, VA officials said the prior funding method did not estimate how long it took state agencies to perform certain oversight activities. The officials said this limitation was a key reason they decided to develop a new funding method. VA’s new method to fund states more equitably is based on their work requirements, i.e., their school oversight activities and the amount of time needed to complete them. The new funding method factors in, among other things:
- the number of staff needed to complete a state’s workload in overseeing schools;
- national salary averages ($80,000 for professional and $50,000 for support staff), including benefits;
- a national travel allowance based on the number of professional staff required to complete work requirements;
- the number of schools receiving VA education benefits in the state; and
- the estimated time needed to review different school types, the type of review (such as approvals vs. compliance surveys), and the number of student veterans enrolled.
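To make the interaction of these factors concrete, the sketch below shows one way such a workload-based allocation could be computed. It is a minimal illustration under stated assumptions: VA has not published its formula, and every helper value here other than the $80,000 and $50,000 salary averages (the hours-per-staff-year figure, support-staff ratio, and per-person travel allowance) is hypothetical.

```python
# Minimal sketch of a workload-based allocation, assuming hypothetical
# values for everything except the salary averages cited above. This
# illustrates the approach, not VA's actual funding method.
PROF_SALARY = 80_000     # national average salary plus benefits, professional
SUPPORT_SALARY = 50_000  # national average salary plus benefits, support

def estimate_state_allocation(hours_by_activity,
                              hours_per_staff_year=1_800,  # assumption
                              support_ratio=0.25,          # assumption
                              travel_per_prof=5_000):      # assumption
    """Estimate one state's funding from its estimated oversight hours,
    which in practice would be derived from the number of schools, school
    types, review types, and student veteran enrollment in the state."""
    total_hours = sum(hours_by_activity.values())
    prof_staff = total_hours / hours_per_staff_year   # professional staff needed
    support_staff = prof_staff * support_ratio        # support staff needed
    salaries = prof_staff * PROF_SALARY + support_staff * SUPPORT_SALARY
    travel = prof_staff * travel_per_prof             # national travel allowance
    return salaries + travel

# Example: a state whose approvals, compliance surveys, and technical
# assistance are estimated to take 3,600 hours in total.
print(estimate_state_allocation({"approvals": 1_600,
                                 "compliance_surveys": 1_500,
                                 "technical_assistance": 500}))  # 195000.0
```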
State Agencies Identified Impacts of Limited Funding on Their Ability to Fulfill Oversight Responsibilities
VA, NASAA, and selected state agency officials we spoke with said that limited funding before and after the recent changes to the funding method has impacted state agencies’ ability to fulfill their oversight responsibilities in three areas: (1) ability to pay and train oversight staff, (2) ability to visit geographically dispersed schools due to travel costs, and (3) ability to provide technical assistance and training to schools. Under their contracts with VA, state agencies have been meeting their core school oversight functions, according to NASAA officials. VA and NASAA officials we interviewed, however, said state agencies have been underfunded for many years. They said states’ funding concerns and challenges existed prior to the new method to allocate funds to state agencies and remain despite a total funding increase to state agencies from about $19 million to $21 million in fiscal year 2018.
NASAA officials we interviewed said some state agencies have difficulty paying for the number of staff they need because there is a mismatch between VA’s average salary and benefits used to calculate states’ funding and the actual salaries and benefits some state agencies are required to pay under state laws. VA officials acknowledged that some states have required salary and benefit levels that exceed the average levels used in VA’s new funding allocation method. VA’s new funding method uses an average salary of $80,000 (including benefits) for professional staff. VA officials noted that some states have annual salaries for professional staff of over $100,000 excluding benefits. A state agency official we spoke with said the salary and benefit costs for professional staff in her state average $130,000, with some salary and benefits costing up to about $150,000. The official said this can make it difficult for the state agency to be able to pay a sufficient number of staff, which hinders its ability to fulfill its VA-contracted oversight. In another case, a NASAA official said his state agency did not have enough funds to pay for a second full-time employee because the state’s required salary and benefits were higher than VA’s $80,000 allotment for professional staff.
Limited funding for state agency oversight staff has led to state requests for additional funds, as well as higher turnover and less training of the staff. VA officials said that the primary reason that some state agencies requested supplemental funding from VA in fiscal years 2016 and 2017 was that their initial allocation was not sufficient to cover salary, benefits, and travel expenses. Some state governments have had to cover those costs, hoping that VA would reimburse the state at the end of the fiscal year, according to VA officials. In addition, some state agencies have had significant turnover due, in part, to the uncertainty about the amount of annual VA funding, according to NASAA officials. NASAA officials also said that funding amounts limit the professional development provided to state agency staff, including travel to conferences. VA officials said that they support professional development and routinely provide funding for travel to conferences. However, according to VA officials, VA has denied requests from state agencies for travel to additional, repetitious conferences during the same year.
NASAA officials said limited VA funding also makes it difficult for state agencies in geographically large states to pay travel expenses to visit schools as part of their oversight responsibilities. For example, NASAA officials said state agencies in Alaska, Montana, and Washington find it difficult to afford mileage and hotel costs for school visits that require travelling long distances—sometimes over mountain ranges—and overnight stays. NASAA officials also said VA’s new funding method does not allocate sufficient funding for travel.
Officials we interviewed at selected state agencies have had mixed experiences with travel costs. One state agency official told us her agency selected schools to visit that were physically near her office because of insufficient travel funds. In contrast, a state agency official in a geographically small state said the agency has sufficient funding to travel throughout the state to visit schools, mainly because overnight stays are unnecessary. VA and NASAA officials said some state agencies have been able to address travel costs by stationing agency staff in different parts of the state. VA officials, however, acknowledged that this is not possible in all states because some states require agency staff to be located in a central office.
VA’s new funding allocation method calculates a national travel allowance for all states based on the total number of professional staff it estimates would be required to complete work requirements in all states. VA officials explained that this travel allowance does not account for individual differences in geographic size among states. VA officials said that in developing the new funding method, the contractor reviewed the historical travel costs of states and determined that a distinction by the geographic size of a state did not need to be factored into the funding method. The contractor based this decision on several factors, including that some state agencies: (1) paid their travel costs using state funds, not VA funds; (2) have located their staff in offices across the state and, as a result, their travel costs were lower than in other states; and (3) planned their travel so they visited schools within a short timeframe, which reduced travel costs.
When faced with funding difficulties, many state agencies reduce their technical assistance to schools and outreach activities because they need to use available funds on salaries, benefits, and travel related to compliance survey and approval workloads, according to NASAA officials. For example, one state agency official told us her agency has significantly reduced its technical assistance to schools because it does not have the funds to travel across the large, rural state to provide it. A NASAA official said available funding has reduced his state agency’s ability to conduct outreach, such as connecting veterans with education and benefit resources, or holding in-person meetings to educate employers on providing apprenticeships to veterans using VA education benefits.
NASAA officials also said that many state agencies have reduced the number of visits to train school employees on VA education benefits requirements. They noted that this training is important because it helps reduce over- and under-payments and the misuse of VA education benefits. A 2016 report from VA’s Inspector General estimated that VA makes $247.6 million in improper payments of VA education benefits annually, mostly over-payments. The Inspector General found that many of the improper payments occurred because school employees provided VA incorrect or incomplete information on student enrollment.
VA Plans to Revise the New Funding Method to Address Ongoing Concerns by States
NASAA officials told us that they continue to have concerns that the new funding method’s time estimates for completing certain oversight activities are inaccurate and, as a result, this method does not allocate sufficient funds. For example, NASAA officials said the funding method does not properly estimate the time it takes state officials to travel to schools and carry out oversight functions, including conducting certain school approvals and providing schools with technical assistance and training. NASAA officials said the time estimates used to fund approvals are inaccurate and need to be revised because different types of schools and education programs—including flight schools, degree programs, and non-degree programs—take different amounts of time to review and approve. For example, NASAA officials said that state agencies need less time to conduct an approval for an on-the-job training program than for a large public university.
VA officials said they are aware of the concerns that NASAA and state agencies have raised that the time estimates for oversight in the new funding method are inaccurate—with some being too high and others too low. They are also aware that NASAA and state agencies believe that the analysis to develop these estimates should have more accurately factored in the time needed to approve and review different types of schools and education programs.
To address the concerns states have raised about its new funding allocation method, VA provided us documentation of its plans to hire a contractor in fiscal year 2018 to improve and update the method. In September 2018, VA hired a contractor to carry out a contract with a 6-month period of performance. VA reported that the contractor would review the new funding allocation method to determine if any specific changes are needed to more equitably distribute funding across state agencies. Specifically, VA officials said the contractor would review the accuracy of the funding method’s allowances for state agencies’ salary, benefits, and travel costs, and its time estimates for states to conduct oversight activities, to determine if changes are needed. VA officials reiterated that allowances for salaries and travel, and the time estimates, are critical factors in the funding method. VA officials noted, however, that regardless of how VA divides the funding among the state agencies, the total amount of program funding to these agencies will remain the same within any one fiscal year.
Two State Agencies Have Discontinued Their Oversight Contracts, but VA Has Not Assessed These Impacts or How It Will Address Future Withdrawals
States have the option of not renewing their school oversight contracts with VA, and two have exercised this option in recent years, citing insufficient funding levels from VA to fulfill their responsibilities. When this happens and the state withdraws from its school oversight role, VA must perform all oversight responsibilities for VA education benefits in that state. New Mexico—which currently has 4,754 veteran students and 107 schools receiving VA education benefits—did not renew its contract with VA in fiscal year 2018 because funding was not sufficient to cover its costs for salaries, travel, and technical assistance to schools, according to VA officials (see text box).
New Mexico Did Not Renew Department of Veterans Affairs (VA) Contract Due to Lack of Funding New Mexico’s state agency began to face significant funding difficulties starting in fiscal year 2015, according to a state official, and it did not renew its VA contract to oversee schools receiving VA education benefits in fiscal year 2018. Although the state agency was able to conduct the oversight activities required by its VA contract in fiscal year 2017, the official said the agency had to reduce its staff, and the one remaining employee was frequently required to work long hours and weekends to meet contract requirements. Further, New Mexico did not receive adequate funding for travel costs to visit schools in its geographically large, rural state, the state official noted. As a result, the official said the state agency opted not to renew its VA contract in fiscal year 2018. VA and New Mexico officials have differing views on how well VA staff will be able to provide effective oversight of schools receiving veterans’ education benefits in the state. In January 2018, New Mexico state officials stated that although VA regional staff have assumed the former state agency’s oversight responsibilities, they are unlikely to be able to provide the same level of oversight the state agency did because the VA staff are also responsible for overseeing schools in three other states in addition to New Mexico. As a result, state agency officials said schools in New Mexico would likely receive fewer oversight visits. VA officials, on the other hand, believe that their regional staff are handling oversight of schools in New Mexico effectively, although they acknowledged the staff may be conducting fewer compliance surveys and providing schools less technical assistance.
Other states have also expressed concerns about their ability to conduct oversight given available funding levels. For example, Alaska—which currently has 4,011 veteran students and 53 schools receiving VA education benefits—also chose not to contract with VA for about 5½ years (fiscal year 2012 through January 2017), according to VA officials and the director of Alaska’s veterans affairs office. Alaska’s director also said that a major reason that Alaska did not renew its contract was limited VA funding. During this time, regional VA staff based in Oklahoma handled Alaska’s oversight, which VA officials said often had to be conducted remotely given that schools are spread throughout the state, and travel to those areas can be expensive as well as challenging given weather conditions. VA officials said that VA’s presence was not as strong in Alaska as in other states because VA staff overseeing Alaska are located in another state and in a different time zone. Further, according to VA data for fiscal years 2014 and 2015, VA staff were unable to complete all the compliance surveys they were assigned in Alaska. In addition, California officials told us they almost did not renew their oversight contract in fiscal year 2018 due in part to funding concerns. California has the largest number of veteran students (86,926) and schools receiving VA education benefits (1,091) of any state, yet state agency officials told us that they lacked sufficient funding to pay salaries for staff to conduct necessary oversight of these schools, including approvals and technical assistance visits. VA officials noted, however, that California receives the most funding of any state and has received the greatest increases of any state in the last two years.
Although VA stepped in to provide oversight of schools in New Mexico and Alaska, the agency does not have a plan for how it will oversee additional schools if other states choose not to renew their oversight contracts. VA officials told us their current approach is to assign the state agency’s workload to regional VA staff who already have their own school oversight responsibilities. However, providing oversight in states without a contract, on top of VA staff’s existing workload, is likely to stretch agency resources. For example, existing VA regional staff may not be able to oversee all schools in states with a large number of schools. In addition, VA staff may be strained in providing oversight in geographically large states where schools are widely dispersed, because school visits would be time consuming and costly.
VA has begun some initial steps to identify and assess how it would handle additional oversight. In August 2017, VA began working with its Office of General Counsel regarding what options the agency has when a state agency chooses not to contract with VA, and the Office issued a legal opinion in September 2017. In April 2018, VA formed a workgroup, which also met a few times in May and once in July, to prepare a draft paper of possible scenarios and response options based on this legal opinion. In August 2018, the workgroup followed up with the field supervisor responsible for approval, compliance, and liaison and produced a new draft paper of scenarios and options. As of September 2018, VA’s Education Service Director is holding discussions with VA leadership regarding assessing the options and developing a formal plan. However, VA has not completed an assessment to ensure the agency can handle additional school oversight responsibilities in states that do not renew their contracts and has yet to prepare a contingency plan.
Federal standards for internal control state that agencies should identify, assess, and respond to risks related to achieving objectives. After identifying risks, the agency should assess the significance—or effect on achieving the objective—of these risks, which provides a basis for responding to the risks. Then, in responding to these risks, the standards state that agencies should define contingency plans for assigning responsibilities if key roles are vacated to help the entity continue to achieve its objectives. Specifically, if the agency relies on a separate organization to fulfill key roles, then the agency should assess whether this organization can continue in these key roles, identify others to fill these roles as needed, and implement knowledge sharing with replacement personnel. Without fully identifying and assessing the risks of additional state withdrawals, and without a contingency plan to address how VA can oversee additional schools, the agency runs the risk that if more states withdraw from their oversight responsibilities, then VA will be unprepared to oversee the schools in these states.
VA and State Agencies Use Certain Risk Factors to Select Schools for Review, and Have Taken Steps toward a New Oversight Approach
VA and State Agencies Use Payment Errors and Other Risk Factors to Select Schools for Compliance Surveys
Each year, VA uses findings from prior compliance surveys and other information to develop a strategy for prioritizing a sample of schools to receive annual reviews, according to VA officials. VA is generally required by statute to conduct an annual compliance survey of schools with 20 or more enrolled veterans at least once every 2 years. VA officials said with the help of state agencies, VA uses these surveys to determine if schools are meeting legal requirements and are using VA education benefits funds appropriately, including whether they are making over- or under-payments on students’ education expenses. According to a VA document, in conducting the surveys, VA and state agencies review various statutory and regulatory requirements, such as the accuracy of a school’s student enrollment records, tuition payments, and whether a school has corrected deficiencies identified in previous compliance surveys.
According to VA officials, the agency has taken steps to incorporate risk factors into its compliance survey strategy in response to recommendations from our prior work and recent VA studies. The examples below show how VA has responded to recommendations to use risk in overseeing schools.
In 2011, we recommended that VA adopt risk-based approaches to ensure proper oversight of schools. As part of the agency’s official response to this recommendation, VA reported to us that in fiscal year 2012 the agency began prioritizing compliance surveys at for-profit schools. Further, VA officials said that the agency added this focus to its written annual compliance survey strategy for fiscal years 2016 and 2017 based on prior years’ compliance survey findings and congressional priorities.
In a 2016 report, VA’s Inspector General recommended that VA consider particular risk factors in selecting schools for compliance surveys. Specifically, the report recommended that VA prioritize schools at risk of payment errors including (1) making errors resulting in over- or under-payments of VA education benefits, and (2) neglecting to recover unspent VA education benefit funds, such as when students receive funds but then reduce their course loads or repeat classes. In response, VA officials stated that the agency began using data on these payment errors to prioritize schools with high error rates. For example, VA officials said that when data revealed that flight schools were particularly prone to such errors—along with charging high tuition and fees and failing to meet some VA education benefits criteria, among other issues—VA decided to prioritize these schools for compliance surveys in its fiscal year 2018 strategy (see text box).
VA's Compliance Survey Strategy for Schools Receiving VA Education Benefits for Fiscal Year 2018
The Department of Veterans Affairs (VA) is generally required by statute to conduct a compliance survey of schools receiving VA education benefits that have 20 or more enrolled veterans at least once every 2 years. For its fiscal year 2018 compliance survey strategy, VA prioritized the following types of schools for review:
100 percent of schools with flight programs;
100 percent of schools with fewer than 20 veterans, with priority to those that had not received surveys for the longest time period;
100 percent of federal on-the-job training and apprenticeship programs;
schools with serious deficiencies identified in previous compliance surveys;
schools newly approved for the program with enrolled VA beneficiaries;
schools that have never received a compliance survey (for example, VA officials said some schools have not received a compliance survey because of a shortage of VA oversight staff or because, in prior years, the statute did not require VA to conduct compliance surveys at schools with fewer than 300 veterans); and
a sample of foreign schools receiving VA education benefits for students from the United States (conducted by VA via remote survey).
An August 2017 study, conducted by an external contractor hired by VA, reviewed ways to strengthen VA’s compliance survey process and outcomes. The report found that VA has not placed enough emphasis on improving school compliance over time. For example, VA has historically prioritized completing a certain number of surveys each year rather than ensuring that schools are actually demonstrating compliance. Among other recommendations, the report identified the need for VA to more effectively use data to measure schools’ compliance over time and to establish priorities to select schools for compliance surveys based on their risk level. As of July 2018, VA officials said that the agency has begun analyzing the study’s recommendations to improve its compliance survey process and that its new compliance survey strategy for fiscal year 2019 and future years will address many of these study recommendations.
VA Conducts Reviews in Response to Complaints at Schools
VA officials said that in 2014 they began conducting targeted reviews of schools in response to complaints received from students, government officials, or others. VA's policies and procedures state that, in addition to complaints, other factors that could trigger a targeted review include compliance survey results, management mandates, and a school self-reporting a violation, among others. VA officials said, however, that VA has not initiated a targeted review in response to anything other than a complaint.
To determine whether to conduct a targeted review, VA officials said they review each complaint and may corroborate it with other sources of information, such as compliance survey data on that school and input from states or other agencies. According to VA’s policies and procedures, the focus of targeted reviews varies based on the nature of the complaint, and VA assigns a higher priority to complaints that are higher risk, i.e., those that allege fraud, waste, or abuse (see table 1). As of July 2018, VA and state agencies have conducted about 160 targeted reviews of schools in response to complaints since 2014, resulting in the withdrawal of program approval for 21 schools, according to data provided by VA officials.
VA Has Taken Steps to Adopt a New Risk-Based Oversight Approach
VA has taken steps to adopt a new risk-based approach to overseeing schools receiving VA education benefits, including selecting schools based on risk factors such as those identified in the Colmery Act. Among other things, the Colmery Act explicitly authorizes VA to use the state agencies for risk-based surveys and other oversight based on a school’s level of risk, and identifies specific risk factors that can be used for school oversight (see text box).
Risk Factors Identified in the Harry W. Colmery Veterans Educational Assistance Act of 2017
The Colmery Act explicitly authorizes the Department of Veterans Affairs (VA) and state agencies to use risk-based surveys (reviews) in oversight of schools receiving VA education benefits. The Colmery Act identifies specific risk factors that can be used for school oversight, but does not require VA or state agencies to use these risk factors in their oversight of these schools:
rapid increases in veteran enrollment,
increases in the amount of VA education benefits a school receives per veteran student,
volume of student complaints,
rates of federal student loan defaults of veterans,
veteran completion rates,
deficiencies identified by accreditors and other state agencies, and
deficiencies in VA program administration compliance.
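Neither the Colmery Act nor VA prescribes how these factors should be weighted or combined. Purely to illustrate how such factors could feed a risk-based selection of schools, the sketch below ranks hypothetical schools by a weighted composite score. The factor names follow the Act, but the weights, 0-1 scales, and data structures are assumptions for illustration, not VA's methodology.

```python
from dataclasses import dataclass

@dataclass
class SchoolRiskProfile:
    # Hypothetical per-school inputs; each factor is assumed to be
    # normalized to a 0-1 scale before scoring.
    name: str
    enrollment_growth: float           # rapid increases in veteran enrollment
    benefit_growth_per_student: float  # growth in VA benefits received per veteran
    complaint_volume: float            # volume of student complaints
    loan_default_rate: float           # federal student loan defaults of veterans
    non_completion_rate: float         # inverse of veteran completion rates
    accreditor_deficiencies: float     # deficiencies found by accreditors or state agencies
    va_compliance_deficiencies: float  # deficiencies in VA program administration

# Hypothetical weights (summing to 1.0); actual priorities would be a policy decision.
WEIGHTS = {
    "enrollment_growth": 0.15,
    "benefit_growth_per_student": 0.15,
    "complaint_volume": 0.20,
    "loan_default_rate": 0.10,
    "non_completion_rate": 0.10,
    "accreditor_deficiencies": 0.15,
    "va_compliance_deficiencies": 0.15,
}

def risk_score(school: SchoolRiskProfile) -> float:
    """Weighted sum of the normalized factors; higher scores mean higher risk."""
    return sum(getattr(school, factor) * weight for factor, weight in WEIGHTS.items())

def select_for_review(schools: list, n: int) -> list:
    """Return the n highest-scoring schools as candidates for risk-based review."""
    return sorted(schools, key=risk_score, reverse=True)[:n]
```

Any scoring of this kind would also require agreed-upon data sources for each factor, one of the items the VA and NASAA working group planned to identify, as discussed below.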
VA officials told us that they have not yet used the risk factors cited in the Colmery Act in conducting their compliance surveys. VA officials acknowledged, however, that adopting a more risk-based oversight approach could help prevent problems, such as some schools’ use of deceptive practices in recruiting veterans and receipt of overpayments from VA. VA officials said that the agency is exploring risk factors to consider in developing its compliance survey strategy for selecting schools in fiscal years 2019 to 2021.
State agency officials we spoke to said that they use the risk factors cited in the Colmery Act to varying degrees in their oversight of schools receiving VA education benefits. For example, one state agency official said that he tracks all of the risk factors cited in the Colmery Act except the rates of veterans’ student loan defaults. On the other hand, a NASAA official said that her state agency tracks the volume of student complaints and deficiencies identified by accreditors and other state agencies. States generally have limited opportunities to select specific schools for compliance surveys, because VA develops the annual priorities for compliance surveys, according to NASAA officials. In some cases, NASAA officials told us, state agency staff work with regional VA staff to select schools for visits based on VA’s priorities.
VA has recently taken steps to explore a new risk-based approach to oversee schools receiving VA education benefits that would be in addition to compliance surveys, according to VA officials. Specifically, VA officials told us that VA has participated in a joint working group with NASAA officials focused on developing a new type of school review in which VA would select schools based on specific risk factors, including those identified in the Colmery Act. NASAA officials told us they were supportive of VA's efforts in this area. As of February 2018, NASAA officials had drafted a possible approach to state agencies' oversight to monitor one risk factor—rapid increases in veteran enrollment—for VA's consideration. VA officials told us the working group plans to build on this effort in reviewing other risk factors. In May 2018, VA prepared a draft charter for the working group, which, among other things, outlines the potential scope and implementation of new risk-based surveys, and provided it to NASAA for review. Documentation we reviewed from a VA and NASAA working group meeting held in May 2018 stated that in its upcoming meetings, the working group plans to continue developing the charter, including agreeing to roles and responsibilities, establishing the risk factors to be used, and identifying data sources related to these risk factors. VA officials said that at an August 2018 joint working group meeting, the charter was deemed to have served its purpose and the decision was made to establish a risk-based review policy and procedures moving forward. According to VA officials, as of mid-October 2018, VA used this strategy to select five schools to undergo risk-based reviews. VA officials said they expect these five reviews to be completed by late December 2018.
VA and State Agencies Have Approaches to Coordinate Oversight Activities and VA Is Developing Additional Guidance for States on Targeted Reviews
VA and State Agencies Identified Various Ways They Coordinate on Oversight Activities
VA and state agencies coordinate to divide responsibility for who will conduct compliance surveys of schools receiving VA education benefits in a variety of ways, according to VA and NASAA officials. After VA provides state agencies information about its annual strategy for selecting schools for these surveys, VA regional staff work with state agency staff to select the specific schools for that year, according to these officials. NASAA officials we interviewed said their working relationships with regional VA staff are excellent—they have good communication and understand and help each other. For example, one state official we interviewed said the state agency and regional VA staff in the state coordinate to make sure they alternate who visits which schools to obtain multiple perspectives. They also have discussions before and after each visit, the official said. In some cases, VA officials said, VA and state agency officials collaborate to conduct compliance surveys together.
VA also provides information to states on how to conduct and report on compliance surveys, including a checklist to help guide the states’ review of items tied to specific statutory requirements, as well as a template for reporting compliance survey results. VA leadership also holds conferences twice a year that NASAA and state agency staff can attend, and communicates throughout the year on school oversight issues, according to officials from these entities.
In addition, VA officials told us they collaborate with NASAA on providing training for state agency staff that NASAA provides through the National Training Institute. According to NASAA’s website, the Institute provides an overview of state agency responsibilities and activities, including information on public laws, accreditation, VA education benefits approval criteria, and compliance surveys. New state agency staff must attend this training, according to NASAA officials.
VA Is Developing New Guidance for States on Targeted Complaint-Based Reviews
NASAA officials told us that VA has not provided state agencies with sufficient information on how to conduct targeted school reviews in response to complaints, and as a result it is difficult for states to conduct these types of reviews. VA officials acknowledged this lack of information. NASAA officials reported that many state agencies want more direction on how to conduct and report on targeted school reviews in response to complaints. A policy and procedures document on targeted school reviews that VA developed in 2014 describes the criteria to use in determining when to conduct targeted, complaint-based reviews, including what issues to prioritize. VA officials acknowledged, however, that the document is outdated and does not provide sufficient detail. VA officials said the agency is in the process of revising the document to provide more clarity. In July 2018, VA provided a draft document to us showing the changes it plans to make in its policy and procedures on targeted, complaint-based school reviews, which includes specific information about how state agencies should conduct and report on these reviews. As of late October 2018, VA officials said these procedures were undergoing internal review. VA officials said they are open to state agency feedback on the new procedures. In addition, VA officials said they are currently updating their database for complaint-based reviews to add specific, standard data fields for states to use in reporting the results of these reviews. VA officials told us that the revised database and procedures will allow state agencies to develop their own template to electronically report information collected during these reviews in a standardized way. We believe that when implemented, VA’s new procedures could help enhance VA’s and state agencies’ efforts in responding to complaints about schools receiving VA education benefits.
Conclusions
It is critical for VA to ensure that schools receiving VA education benefits are complying with program requirements and that veterans receive the education they have been promised. Because funding concerns have led states to withdraw from their oversight roles, decisions by other states not to renew their school oversight contracts could result in VA taking on additional school oversight responsibilities. However, VA has not completed its identification and assessment of the risks posed by future state withdrawals, which could leave the agency unprepared to conduct oversight in those states. Further, VA's lack of a contingency plan for assuming the responsibilities of state agencies in these cases raises the risk that schools receiving VA education benefits would not be overseen and student veterans could be adversely affected.
Recommendation for Executive Action
We recommend that the Secretary of Veterans Affairs direct the Under Secretary for Benefits to: (1) Complete efforts to identify and assess risks related to future withdrawals by state agencies in overseeing schools and (2) address these risks by preparing a contingency plan for how VA will oversee additional schools if more states choose not to renew their oversight contracts. (Recommendation 1)
Agency Comments and Our Evaluation
We provided a draft of this report to VA for review and comment. VA’s comments are reproduced in appendix I. VA agreed with our recommendation. VA also provided technical comments, which we considered and incorporated as appropriate.
In addition, we provided relevant excerpts from a draft of this report to NASAA leadership for review and comment. NASAA provided technical comments, which we considered and incorporated as appropriate.
We are sending copies of this report to the appropriate congressional committees; the Secretaries of Veterans Affairs and Education; and other interested parties. In addition, the report is available at no charge on GAO’s website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (617) 788-0534 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.
Appendix I: Comments from the Department of Veterans Affairs
Appendix II: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Elizabeth Sirois (Assistant Director), Linda L. Siegel (Analyst-in-Charge), Jessica Ard, and Rachel Pittenger made key contributions to this report. Also contributing to this report were Susan Aschoff, James Bennett, Deborah Bland, Sheila R. McCoy, Jean McSween, Benjamin Sinoff, and Sarah Veale. | Why GAO Did This Study
In fiscal year 2017, VA provided about $11 billion in education benefits to about 14,460 schools to help eligible veterans and their beneficiaries pay for postsecondary education and training. VA typically contracts with state agencies to help it provide oversight of schools participating in this education benefit program.
The Harry W. Colmery Veterans Educational Assistance Act of 2017 included a provision for GAO to review VA's and states' oversight of schools receiving VA education benefits. This report examines (1) how, if at all, the available level of funding to state agencies has affected states' and VA's ability to carry out their oversight responsibilities, (2) to what extent VA and state agencies use risk-based approaches to oversee schools, and (3) to what extent VA coordinates and shares information with the states to support their oversight activities. GAO reviewed VA documents; assessed VA funding data for fiscal years 2003-2018; interviewed VA and selected state agency officials; and reviewed correspondence between these officials. GAO interviewed officials from eight state agencies who were past or present officials at the association representing state agencies, and officials from three other states, including one that did not renew its contract with VA in fiscal year 2018.
What GAO Found
The Department of Veterans Affairs (VA) is responsible for overseeing schools nationwide that provide VA education benefits to veterans. To help provide this oversight, VA contracts with state agencies to oversee schools in their states and provide outreach and training to school officials and allocates them funding to cover the cost of oversight, outreach, and training activities. However, since fiscal year 2006, funding for oversight, outreach, and training has remained at about $19 million, and only recently increased in fiscal year 2018 to $21 million. State agency officials told GAO that the limited level of funding they have received from VA has been a long-standing problem that has strained their ability to (1) adequately cover staff costs, (2) pay for travel for school visits, and (3) provide needed technical assistance and training to the schools about VA education benefit requirements. As a result, a few states, such as New Mexico, have chosen to withdraw from their school oversight roles. When this happens, VA must take over the state agencies' oversight responsibilities. GAO found that assuming additional oversight responsibilities is likely to stretch VA's staff resources, especially in large states, where schools are geographically dispersed and school visits are time consuming and costly. VA has begun but has not completed an assessment of the risks that potential future state agency withdrawals could have on its ability to provide school oversight. Moreover, VA has not developed a contingency plan for how it will oversee more schools if additional states do not renew their oversight contracts. Federal standards for internal control state that agencies should identify and assess risks related to achieving objectives, and define contingency plans for assigning responsibilities if key roles are vacated. Until VA takes these steps, the agency runs the risk of being unprepared to conduct effective oversight in the event that more state agencies withdraw from their contracts in the future.
VA and state agencies use certain risk factors to select schools for oversight. VA officials said that they prioritize schools for annual reviews of compliance with program requirements based on findings from prior reviews as well as other risk factors, such as schools with a history of VA benefit payment errors. GAO found that VA and state agencies have recently begun a joint effort to explore a new strategy that they expect will strengthen the school review selection and prioritization process. According to VA officials, as of mid-October 2018, VA used this strategy to select five schools to undergo risk-based reviews. VA officials said they expect these five reviews to be completed by late December 2018.
VA and state agencies coordinate and share information about their oversight activities in a variety of ways. For example, VA has shared information with the state agencies on how to conduct annual reviews of schools in their states. However, according to officials at the association representing state agencies, VA has not provided specific direction on conducting targeted reviews in response to complaints. VA officials acknowledged that the procedures they currently have in place are outdated and said that they are being revised to provide state agencies with more details. As of late October 2018, VA officials said these procedures were undergoing internal review. Once implemented, VA's new procedures have the potential to enhance VA's and state agencies' efforts to conduct reviews at those schools for which they have received complaints.
What GAO Recommends
GAO recommends that VA complete the identification and assessment of oversight risks, and prepare a contingency plan for overseeing schools if additional states do not renew their oversight contracts. VA concurred with the recommendation. |
Background
Key Stakeholders in the Federal Criminal Justice Process
Various DOJ and federal judiciary stakeholders play key roles in the federal criminal justice process, and as such, they can also have key roles in considering whether to use incarceration alternatives for a given offender or inmate. For example, in the course of the federal criminal justice process, a U.S. attorney is involved in the process of investigating, charging and prosecuting an offender, among other responsibilities. Federal defenders are called upon to represent defendants who are unable to financially retain counsel in federal criminal proceedings. The U.S. Probation and Pretrial Services Office (PPSO), an office within the judiciary, also has responsibilities including supervising an offender pretrial or after conviction. Federal judges are responsible for determining an offender’s sentence, and, in the case of incarceration, BOP is responsible for caring for the inmate while in custody.
Federal Criminal Justice Process
Federal laws and guidelines determine what, if any, incarceration is appropriate for offenders. The Sentencing Reform Act of 1984 established the independent U.S. Sentencing Commission (USSC) within the judicial branch and charged it with, among other things, developing federal sentencing guidelines. The guidelines specify sentencing guideline ranges—a range of time (in months) that offenders should serve given the nature of their offense and other factors—but also permit sentences to depart upward or downward from guideline ranges because of aggravating or mitigating circumstances. In 2005, the Supreme Court found the sentencing guidelines, which had previously been binding for federal judges to follow in sentencing criminal defendants, to be advisory in nature. Regardless of the guidelines’ advisory nature, judges are still required to calculate sentences properly and to consider the guideline ranges as well as the nature and circumstances of the offense, the defendant’s history, and the need for deterrence, among other sentencing goals.
As we reported in June 2016, alternatives to incarceration were available at various steps in the federal criminal justice process, from charging and prosecution through incarceration (see figure 1).
For instance, at the front-end of the criminal justice process, there are pretrial diversion programs that can provide offenders an opportunity to avoid prosecution or incarceration if they satisfy program requirements. In addition, toward the end of inmates’ periods of incarceration, BOP may place inmates in residential reentry centers (RRC, also known as halfway houses), in which inmates are housed outside of a prison environment prior to their release in the community. During their time in RRCs, inmates are authorized to leave for approved activities, such as work; are monitored 24 hours a day, such as through sign-out procedures; are required to work or be actively seeking work; and are required to pay a percentage of their salaries as a subsistence fee to cover some of their expenses at the RRC.
In addition, BOP may place inmates in home confinement toward the end of their sentences. While in home confinement, inmates are required to remain in their homes when not involved in approved activities, such as employment, and are supervised and monitored, such as through curfews, random staff visits, or electronic monitoring. RRC staff may provide the supervision of inmates in home confinement. Through an interagency agreement, BOP and the PPSO also established the Federal Location Monitoring Program, through which PPSO officers provide supervision for BOP inmates on home confinement under certain conditions. Among other things, to qualify inmates ordinarily must be classified as minimum security level; seek and maintain employment; and pay for all or part of the costs of the Federal Location Monitoring Program.
Overview of BOP’s Institutions and Role in Transitioning Offenders into Society
BOP is responsible for the custody and care of federal inmates. As of December 2017, there were a total of about 184,000 federal inmates, according to BOP. According to BOP data, 83 percent of these inmates are in the 122 institutions managed by BOP. The remainder are confined in secure privately managed or community-based facilities, local jails, or in home confinement.
BOP has a role to help ensure that offenders properly transition into society and avoid a return to prison or criminal behavior (recidivism) after they have completed their terms of incarceration. Among other activities, BOP provides reentry services to inmates within federal prisons that may include drug treatment programs, education and vocational training, and psychology services. BOP also is to facilitate the transfer of inmates into RRCs, which provide assistance as inmates transition into communities, to include home confinement. RRCs provide employment counseling and job placement assistance, financial management assistance, and substance abuse treatment or counseling as well as other services, which may vary by facility. According to BOP, approximately 180 RRCs provide housing for over 7,500 federal offenders prior to release into their communities.
Federal Collateral Consequences Can Affect Reentry
As we reported in September 2017, individuals convicted of a crime may have limitations placed upon them that can affect their reentry. Individuals convicted of a crime generally face a sentence, which can include fines, probation, and incarceration in jail or prison. In addition to the sentence, individuals may also face collateral consequences—penalties and disadvantages, other than those associated with a sentence, which can be imposed upon an individual as a result of a conviction. For example, collateral consequences may prohibit people who committed crimes involving a sex offense or offense involving a child victim from working in a child care facility. Collateral consequences can be contained in federal and state laws and regulations. Notably, federal collateral consequences can serve various functions, such as enhancing public safety or protecting government interests. In 2012, the American Bar Association began compiling the first nationwide inventory of collateral consequences, known as the National Inventory of the Collateral Consequences of Conviction (NICCC). As of December 31, 2016, the NICCC contained roughly 46,000 collateral consequences established through federal and state laws and regulations.
We reported on collateral consequences contained in federal laws and regulations (i.e., federal collateral consequences) that can be imposed upon individuals with nonviolent drug convictions (NVDC). Our review of the NICCC found that, as of December 31, 2016, there were 641 collateral consequences in federal laws and regulations that can be triggered by NVDC. The NICCC data indicated that these 641 collateral consequences can limit many aspects of an individual’s life, such as employment, business licenses, education, and government benefits. For example, individuals may be ineligible for certain professional licenses, federal education loans, or federal food assistance. Moreover, we found that the NICCC identified that 78 percent of these 641 collateral consequences can potentially last a lifetime.
We also reported on selected stakeholders' views. We spoke to 14 individuals who were leaders of organizations representing judges, victims of crime, and states, among others, on actions the federal government could consider to mitigate these collateral consequences. Most of the stakeholders that we interviewed—13 of 14—said it was important for the federal government to take action to mitigate federal collateral consequences for NVDC. Thirteen stakeholders said that mitigating federal collateral consequences could potentially reduce the likelihood that individuals with NVDC reoffend. Similarly, 11 stakeholders said that mitigation could potentially increase the likelihood that individuals with NVDC successfully reenter the community after jail or prison. The text box below identifies some of the statements made by stakeholders during our interviews from our prior work regarding federal collateral consequences for NVDC.
Stakeholder Perspectives on Federal Collateral Consequences for Nonviolent Drug Convictions, as Reported in GAO-17-691
"The breadth of federal collateral consequences for nonviolent drug convictions is so massive and affects so many aspects of a person's life, such as family life, immigration, jury service, housing, employment, and voting, that they contribute to an underclass of people."
"Many instances wherein the federal collateral consequences for nonviolent drug convictions end up making it hard for people to live a law abiding life. For example, they may not be able to live in public housing or may be barred from getting an occupational license or doing a particular job. This may push them to turn back to committing crimes to make some money."
"…some federal collateral consequences for nonviolent drug convictions are sensible and appropriate. If we abolish [those that] exist, you could imperil public safety…"
"We can't just say we're going to err on the side of public safety and implement a wide range of collateral consequences strictly across the board. The problem is that public safety is undermined by making it impossible for individuals to move on from the criminal offense."
"It is important not to assume that nonviolent means that there is no victim."
Since 1980, the federal prison population has increased from about 25,000 to about 184,000 as of December 2017. In June 2015 and June 2016, we reported that, in part to help address challenges associated with overcrowding in certain institutions and related costs of incarceration, DOJ had taken steps to reduce the prison population by pursuing initiatives to: use alternatives to incarceration for low-level nonviolent crimes; prioritize prosecutions to focus on serious cases; and commute, or reduce, sentences of qualified federal inmates. In these reports, we highlighted potential areas for continued oversight of these initiatives and made six recommendations. DOJ concurred with five of these recommendations and partially concurred with the other. As of December 2017, DOJ has implemented two of the six recommendations and has not fully addressed the remaining four.
DOJ could better measure effectiveness of pretrial diversion alternatives. In June 2016, we reported that DOJ had taken steps to pursue alternatives to incarceration for certain offenders, but could improve data collection and efforts to measure outcomes resulting from the use of pretrial diversion alternatives. Our review examined two pretrial diversion programs on the front-end of the criminal justice process that provide offenders an opportunity to avoid incarceration if they satisfy program requirements. Title 9 of the U.S. Attorneys' Manual permits U.S. Attorneys' Offices to divert, at the discretion of a U.S. Attorney, certain federal offenders from prosecution into a program of supervision and services administered by the PPSO. Under the Title 9 diversion program, if the offender fulfills the terms of the program, the offender will not be prosecuted, or, if the offender has already been charged, the charges will be dismissed.
In addition to the Title 9 Pretrial Diversion Program, federal criminal justice stakeholders within some judicial districts have voluntarily established court-involved pretrial diversion practices. Court-involved pretrial diversion allows certain federal offenders the opportunity to participate in supervised programs or services, such as a drug court to address criminal behavior that may be linked to addiction to drugs or alcohol. Program participants are to meet regularly with court officials including a judge and pretrial services officer to discuss their progress in the program. If the offender satisfies program requirements, the offender may not be prosecuted, charges may be dismissed, or the participant may receive a reduced sentence.
While DOJ had collected some data on the use of pretrial diversion, we found that the data were of limited usefulness and reliability because its case management system did not distinguish between the different types of diversion and DOJ had not provided guidance to U.S. Attorneys’ Offices as to when and how pretrial diversion cases are to be entered into the system. In addition, we found that DOJ had not measured the outcomes or identified the cost implications of its pretrial diversion programs. To address these deficiencies, we made four recommendations to DOJ. The first two relate to tracking and entering pretrial diversion data, while the second two relate to assessing outcomes based on the data. Specifically, we recommended that DOJ (1) separately identify and track the different types of pretrial diversion programs, (2) provide guidance to its attorneys on the appropriate way to enter data, (3) identify, obtain, and track data on the outcomes and costs of pretrial diversion programs, and (4) develop performance measures to assess diversion program outcomes. DOJ concurred with all four of our recommendations.
In October 2016, DOJ took actions to fully implement the first two recommendations. Specifically, in September 2016, DOJ provided guidance to staff in its U.S. Attorneys’ Offices that outlines (1) the use of two new pretrial diversion codes—one for Title 9 pretrial diversion and another for court-involved diversion and (2) the appropriate entries to create and dispose of each type of pretrial diversion. Attorneys were instructed to use the codes starting on October 1, 2016. However, as of December 2017, DOJ has not implemented the third and fourth recommendations. We continue to believe that by obtaining data on the costs and outcomes of pretrial diversion programs and establishing performance measures, DOJ would gain multiple advantages in its ability to manage these programs and optimize their outcomes and cost implications.
DOJ could better assess initiatives to address prison overcrowding and costs. In June 2015, we reported that DOJ could better measure the efficacy of two incarceration initiatives designed to address challenges related to overcrowding and rising costs. One of these was the Smart on Crime initiative, announced in August 2013 as a comprehensive effort to: prioritize prosecutions to focus on the most serious cases; reform sentencing to eliminate unfair disparities and reduce overburdened prisons; pursue alternatives to incarceration for low-level nonviolent crimes; improve reentry to curb repeat offenses and re-victimization; and surge resources to prevent violence and protect the most vulnerable populations.
In our report, we found that DOJ had established indicators that were well-linked to these goals; however, the indicators lacked other key elements of successful performance measurement systems, such as clarity, a measurable target, or context. For example, none of the indicators had numerical targets by which to assess whether overall goals and objectives are achieved. To address this deficiency, we recommended that DOJ modify its Smart on Crime indicators to incorporate key elements of successful performance measurement systems. DOJ partially concurred with the recommendation, and agreed to continually refine and enhance the indicators to improve their clarity and context. However, DOJ did not agree that establishing measurable targets for its indicators was appropriate. We recognized that it might not be appropriate to create targets for every indicator. Nevertheless, we maintained that measurable performance targets that are properly developed, communicated, and managed, can aid Department leadership in the admittedly challenging task of assessing progress in the Smart on Crime Initiative.
In March 2017, DOJ noted that, due to a change in administration, the status of the Smart on Crime Initiative was uncertain. In May 2017, the Attorney General issued a new charging and sentencing policy to all federal prosecutors that effectively rescinded any previous policy of DOJ that is inconsistent with the new charging and sentencing policy, including certain aspects of the Smart on Crime Initiative. In December 2017, DOJ stated it would start to collect data on and monitor the implementation of this new policy. However, DOJ did not provide information on how it plans to modify its indicators to incorporate key elements of successful performance measurement systems. To the extent that DOJ continues to implement other aspects of the Smart on Crime initiative, such as improving reentry and surging resources to prevent violence, we continue to believe this recommendation is valid.
The second initiative we addressed in our June 2015 report was the Clemency Initiative, which encourages nonviolent, low-level federal offenders to petition to have their sentences commuted, or reduced, by the President. Commutation of sentence, as we reported, has long been considered to be an extraordinary remedy that is rarely granted. According to DOJ, in 2013, then-President Obama expressed a desire to review more petitions, and DOJ pledged to expedite the review of such petitions in order to provide them to the President for consideration. However, we found that DOJ had not adequately assessed the extent to which the Clemency Initiative is expeditiously identifying meritorious petitions because it had not tracked how long it takes for petitions to clear each step in its review process or identified and addressed any processes that may contribute to unnecessary delays. We made a recommendation to DOJ to address this deficiency. DOJ concurred, but in March 2017 DOJ stated that it had no standard review process to evaluate. In December 2017, DOJ reported to us that it has taken steps to accelerate the review of commutation cases, such as assigning two attorneys to spend additional time on commutation cases. Although DOJ’s actions are consistent with our recommendation, DOJ has not tracked how long it takes for petitions to clear each step in its review process. This makes it unclear whether DOJ’s actions are addressing the processes that contribute to unnecessary delays.
DOJ Has Addressed Two of Four GAO Recommendations Related to its Reentry Programs
As part of its mission to protect public safety, BOP provides reentry programming that aims to facilitate offenders’ successful return to the community and reduce recidivism. These reentry efforts include programs offered in BOP facilities, as well as RRC and home confinement services that allow inmates to serve the final months of their sentences in the community. In our February 2012, June 2015, and June 2016 reports we highlighted potential areas for continued oversight and made four recommendations to BOP. As of December 2017, BOP has implemented two of the four recommendations and has taken action to address one other recommendation.
BOP has developed a plan to evaluate its reentry programs. In June 2015, we reported that BOP had 18 reentry programs available to inmates in BOP institutions in the areas of inmate treatment and education. We found that while BOP had plans to evaluate the performance of some of its reentry programs, it did not have a plan in place to prioritize evaluations across all of these programs. As a result, we recommended that BOP include, as part of its current evaluation plan, all 18 of BOP’s national reentry programs, and prioritize its evaluations by considering factors such as resources required for conducting evaluations. In May 2016, BOP provided to us an evaluation plan that was consistent with our recommendation. BOP has continued to update the evaluation plan to reflect changes in priority. For example, the most recent plan, updated in July 2017, lists BOP’s Mental Health Step Down Unit program as its top priority, with a target evaluation date of fiscal year 2018. According to BOP, this reflects the need for analysis of services for seriously mentally ill inmates.
BOP has taken steps to assess costs of home confinement services. In February 2012, we reported that BOP did not know the actual cost of home confinement services. To facilitate inmates’ reintegration into society, BOP may transfer eligible inmates to community corrections locations for up to the final 12 months of their sentences. Inmates may spend this time in a RRC and in confinement in their homes for up to 6 months. BOP contracts with private organizations to manage the RRCs and monitor inmates in home confinement. At the time of our review, BOP was paying a rate of 50 percent of the overall per diem rate negotiated with the RRC for each inmate in home confinement. For example, if BOP paid a contractor the average community corrections per diem rate of $70.79 for each inmate housed in a RRC, BOP would pay $35.39 per day for that contractor’s supervision of each inmate in home confinement. However, according to BOP, the agency did not require contractors to provide the actual costs for home confinement services as part of their contract and therefore did not know the cost of home confinement. To help BOP better manage its costs, we recommended that BOP establish a plan for requiring contractors to submit separate prices of RRC beds and home confinement services. BOP implemented this recommendation and determined that all new solicitations as of February 1, 2013, will have separate line items for RRC in-house beds and home confinement services. According to BOP, as of November 2017, 184 solicitations with separate RRC bed and home confinement service line items have been issued since February 2013.
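To make the flat-rate payment rule concrete, the short sketch below recreates the 50 percent calculation from the example above. It is illustrative only; the truncation to whole cents is an assumption made so the result matches the $35.39 figure cited.

```python
from decimal import Decimal, ROUND_DOWN

# Flat-rate rule described above: BOP paid 50 percent of the negotiated
# RRC per diem for each inmate in home confinement, regardless of the
# contractor's actual cost of home confinement supervision.
rrc_per_diem = Decimal("70.79")  # average community corrections per diem cited above

home_confinement_rate = (rrc_per_diem * Decimal("0.5")).quantize(
    Decimal("0.01"), rounding=ROUND_DOWN  # truncate to the cent (assumed)
)

assert home_confinement_rate == Decimal("35.39")  # matches the example above
```

Because this rate was derived from the RRC per diem rather than from reported costs, the separate contract line items that BOP began requiring in 2013 are what allow the agency to see the actual price of home confinement services.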
BOP could better measure the outcomes of RRCs and home confinement. In June 2016, we reported that BOP was not positioned to track the information it would need to help measure the outcomes of inmates placed in RRCs and home confinement and did not have performance measures in place. Specifically, we found that, as part of its strategic plan, BOP had two measures—one to track the number of inmates placed into RRCs, and another to track the number of inmates placed in home confinement. However, these measures did not help assess the outcomes of RRCs and home confinement, such as how these programs may or may not affect the recidivism rates of inmates. To address this deficiency, we made two recommendations to BOP to (1) identify, obtain, and track data on the outcomes of the RRC and home confinement programs; and (2) develop performance measures by which to help assess program outcomes. DOJ concurred with these recommendations.
As of December 2017, BOP has taken steps to implement our recommendation to identify, obtain, and track data on the outcomes of RRCs and home confinement. In particular, BOP reported to us that it has developed a revised Statement of Work for use with its RRC contractors that requires the contractors to track and report quarterly to BOP on, among other things, the number of placements into and releases from RRCs and home confinement; revocations from RRCs or home confinement; and RRC and home confinement residents that have secured full, part-time, or temporary employment. BOP plans to compile these data to track contractor performance and program outcomes. Further, BOP reported to us that it has developed a voluntary survey that asks RRC residents about their RRC experiences, including the amount of help they received in finding and keeping a job, and finding a place to live. These actions are in line with our recommendation and we will continue to monitor their implementation. However, as of December 2017, BOP has not provided evidence to us that it has developed performance measures by which to help assess program outcomes. We continue to believe BOP should do so.
Chairman Gowdy, Ranking Member Cummings, and Members of the Committee, this concludes my prepared statement. I would be pleased to respond to any questions that you may have.
GAO Contacts and Staff Acknowledgments
For further information about this statement, please contact Diana Maurer at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals who made key contributions to this statement include Brett Fallavollita (Assistant Director), David Alexander, Pedro Almoguera, Joy Booth, Billy Commons, III, Tonnye’ Connor-White, Jessica Du, Lorraine Ettaro, Michele Fejfar, Christopher Hatscher, Susan Hsu, Tom Jessor, Matt Lowney, Heather May, and Jill Verret. Key contributors for the previous work on which this testimony is based are listed in each product.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Why GAO Did This Study
BOP's rising costs and offender recidivism present incarceration challenges to both DOJ and the nation. For example, BOP's operating costs have generally increased over time, and in fiscal year 2017 amounted to more than $6.9 billion, or 24 percent of DOJ's total discretionary budget. In addition, from 1980 through 2013, BOP's prison population increased by almost 800 percent, from 24,640 to 219,298. While the prison population began to decline in 2013, DOJ has continued to identify prison crowding as a critical issue. GAO has examined a number of DOJ efforts to slow the growth of the prison population and to reduce recidivism through the use of reentry programs to help offenders successfully return to the community.
This statement summarizes findings and recommendations from recent GAO reports that address (1) DOJ's incarceration reduction initiatives, and (2) BOP reentry programs.
This statement is based on prior GAO products issued from February 2012 through June 2016, along with updates on the status of recommendations obtained as of December 2017. For the updates on DOJ's progress in implementing recommendations, GAO analyzed information provided by DOJ officials on actions taken and planned.
What GAO Found
The Department of Justice (DOJ) has fully addressed two of six GAO recommendations related to its incarceration reduction initiatives. In June 2015 and June 2016, GAO reported that to help address challenges associated with incarceration, DOJ had, among other things, taken steps to reduce the prison population by pursuing initiatives to use alternatives to incarceration for low-level nonviolent crimes. GAO made six recommendations to DOJ related to these efforts. As of December 2017, DOJ has implemented two of the six recommendations and has not fully addressed the remaining four. Specifically, to enhance efforts to measure program outcomes, DOJ issued guidance on proper data entry and began tracking data on different types of pretrial diversion programs that allow certain offenders to avoid incarceration if they satisfy program requirements. In addition, as of December 2017, DOJ has taken steps to partially implement GAO's recommendation to address unnecessary delays in reviewing inmates' petitions to commute their sentences.
DOJ has not taken action to address recommendations to better assess the results of pretrial diversion programs or of another effort to prioritize prosecutions and reform sentencing to eliminate unfair disparities, among other goals. Further, in December 2017, DOJ noted there had been policy changes since GAO made a recommendation related to enhancing measures for monitoring efforts to prioritize prosecutions and reform sentencing. Although DOJ reported taking some actions to implement GAO's recommendation, these actions did not include establishing measures that incorporate key elements of successful performance measurement systems.
DOJ has addressed two of four GAO recommendations related to its reentry programs. As part of its mission to protect public safety, DOJ's Federal Bureau of Prisons (BOP) provides reentry programming that aims to facilitate offenders' successful return to the community and reduce recidivism (a return to prison or criminal behavior). These reentry efforts include programs offered in BOP facilities as well as contractor-managed residential reentry centers (RRC)—also known as halfway houses—and home confinement services that allow inmates to serve the final months of their sentences in the community. GAO issued three reports in February 2012, June 2015, and June 2016 and made four recommendations to BOP in this area.
As of December 2017, DOJ has implemented two of the four recommendations and has begun to take action to address one of the remaining two. Specifically, to implement one of GAO's recommendations, DOJ established a plan to evaluate the effectiveness of all 18 of the reentry programs it offers to inmates in BOP facilities. To implement another GAO recommendation to improve cost management, DOJ began requiring contractors to submit separate prices for RRC beds and home confinement services. As of December 2017, DOJ noted it has taken initial steps to address a recommendation to track outcome data for its RRC and home confinement programs; however, it has not taken action to develop measures to assess the performance of these programs.
What GAO Recommends
GAO has made 10 recommendations to DOJ in prior reports to help improve performance measurement and resource management. DOJ generally concurred and has addressed or taken steps to address several. GAO continues to believe all of these recommendations should be fully implemented. |
Background
Risk management, as applied to security of federal facilities, entails a continuous process of applying a series of mitigating actions—assessing risk through the evaluation of threats, vulnerabilities, and consequences; responding to risks with appropriate countermeasures; and monitoring risks using quality information (see fig. 1).
In 1995, Executive Order 12977 established the ISC after the bombing of the Oklahoma City Alfred P. Murrah Federal Building in April 1995. The ISC’s mandate is to enhance the quality and effectiveness of security in and protection of federal facilities in the United States occupied by federal employees for nonmilitary activities. The order directs the ISC to develop and evaluate security standards for federal facilities, develop a strategy to ensure executive agencies and departments comply with such standards, and oversee the implementation of appropriate security measures in federal facilities. The ISC has released a body of standards, including the ISC Standard, designed to apply to the physical security efforts of all federal, non-military agencies. The ISC Standard prescribes a process for agencies to follow in developing their risk assessment methodologies (see fig. 2).
Most federal departments and agencies are generally responsible for protecting their own facilities and have physical security programs in place to do so. The ISC Standard requires executive departments and agencies to follow the risk-management process when conducting risk assessments for each of their facilities. That process begins with determining the facility security level, ranging from level I (lowest risk) for facilities generally having 100 or fewer employees to level V (highest risk) for the most critical facilities and generally having greater than 750 employees. The security level designation determines the facility's baseline countermeasures. For each facility, departments and agencies are required to (a) consider all of the "undesirable events" that could pose a risk to their facilities—such as active shooters, vandalism, and explosions—and (b) assess three factors of risk (threats, vulnerabilities, and consequences) for specific undesirable events. Subsequently, agencies are to combine all three factors to yield a measurable level of risk for each undesirable event (see app. III). Based on the results of these assessments, agencies should customize (either increase or decrease) the countermeasures to adequately reflect the assessed level of risk.
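The ISC Standard's scoring criteria are not reproduced in this report, so the sketch below should be read only as a schematic of the process just described: rate threat, vulnerability, and consequence for each undesirable event, combine them into a risk level, and compare the result against the level of protection the facility's baseline countermeasures assume. The 1-to-5 scales and the multiplicative combination rule are common risk-assessment conventions used here as stand-ins, not the ISC's actual formulas.

```python
from dataclasses import dataclass

@dataclass
class EventAssessment:
    event: str          # an undesirable event, e.g., "active shooter"
    threat: int         # each factor rated on an assumed 1 (low) to 5 (high) scale
    vulnerability: int
    consequence: int

    @property
    def risk(self) -> int:
        # Stand-in combination rule (risk = threat x vulnerability x consequence);
        # the ISC Standard prescribes its own criteria for combining the factors.
        return self.threat * self.vulnerability * self.consequence

def countermeasure_adjustments(assessments, baseline_risk: int):
    """Flag events whose assessed risk departs from the level of risk that the
    facility's baseline countermeasures (set by its security level) reflect."""
    for a in assessments:
        if a.risk > baseline_risk:
            yield a.event, "customize countermeasures upward"
        elif a.risk < baseline_risk:
            yield a.event, "countermeasures may be reduced"
```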
In addition, as part of planning for physical security resources within an agency’s budget process, the ISC has identified the need to balance allocations for countermeasures with other operational needs and with competing priorities. The ISC Best Practices have some similarities with leading practices in capital decision-making. For example, both state that the allocation of resources should be integrated into the agency’s mission, objectives, goals, and budget process. However, beyond the ISC Best Practices, the Office of Management and Budget and we have developed more comprehensive leading practices in capital decision- making that provide agencies with guidance for prioritizing budget decisions such as for countermeasure projects. The Office of Management and Budget and our guidance also emphasize evaluating a full range of alternatives, informed by agency asset inventories that contain condition information, to bridge any identified performance gap. Furthermore, the guidance calls for a comprehensive decision-making framework to review, rank, and select from among competing project proposals. Such a framework should include the appropriate levels of management review, and selections should be based on the use of established criteria.
The following describes the mission and physical security program characteristics for the agencies in our review:
CBP, the nation’s largest law enforcement agency, has responsibility for securing the country’s borders. It also has responsibility for conducting security assessments at about 1,200 facilities, including approximately 215 federally owned and agency-controlled higher-level facilities (facility security levels III and IV). These facilities include border patrol stations with holding cells for people detained at the border, office buildings, and canine-training centers. CBP conducts these assessments.
FAA’s mission is to provide a safe and efficient aerospace system for the country. According to agency data, FAA has 55 federally owned and agency-controlled higher-level facilities—including critical air traffic control towers. According to FAA officials, FAA specialists conduct security assessments.
ARS conducts research related to agriculture and disseminates information to ensure high-quality safe food and to sustain a competitive agricultural economy. According to agency data, ARS has security responsibility for four domestic federally owned and agency- controlled higher-level facilities—including laboratories for research to improve food and crop quality, office buildings, and warehouses. ARS security personnel have responsibility for conducting security assessments.
The Forest Service sustains the health, diversity, and productivity of the nation’s forests and grasslands. According to agency officials, the Forest Service has one federally owned and agency-controlled higher-level facility—a regional headquarters office building. The Forest Service’s security officials have responsibility for conducting security assessments, but at the time of our review, USDA security officials conducted the assessment at the Forest Service’s one higher-level facility.
Selected Agencies’ Assessment Methodologies Do Not Fully Align with the ISC’s Risk Management Standard
None of the four selected agencies’ security assessment methodologies fully aligned with the ISC Standard. The ISC gives agencies some flexibility to design their own security-assessment methodologies for identifying necessary countermeasures as long as the chosen methodology adheres to fundamental principles of a sound risk-management methodology. Specifically, methodologies must (1) consider all of the undesirable events identified in the ISC Standard as possible risks to federal facilities and (2) assess three factors of risk (threats, vulnerabilities, and consequences) for each of the events.
Furthermore, the ISC Standard requires executive departments and agencies to document decisions that deviate from the ISC Standard. The agencies’ policies and methodologies reference the ISC Standard, and each agency used some type of risk assessment methodology. However, none of the agencies’ methodologies considered all of the undesirable events during assessments, and the agencies did not always adhere to these principles of risk management (see table 1).
At the time of our review, CBP’s methodology did not fully align with the ISC Standard because it neither considered all of the 33 undesirable events nor assessed threat and consequence. CBP security specialists assessed vulnerabilities at building entrances and exits, in interior rooms, and around the perimeter using a yes/no checklist during the assessment process. However, assessment reports showed that specialists did not assess the threats and consequences of undesirable events at each facility. According to security officials, the gap occurred because they designed the checklist to meet requirements in the 2009 CBP Security Policy and Procedures Handbook, which predates the first edition of the ISC Standard issued in 2010. CBP officials told us that as of January 2017, they began using an improved methodology to assess the threats, vulnerabilities, and consequences for 30 of 33 undesirable events—omitting three now identified in the November 2016 revision to the ISC Standard. However, CBP has not yet updated its handbook to align with the ISC Standard, even though it started this effort over 3 years ago in December 2013. CBP officials did not provide a draft of the updated handbook, but they provided a plan with milestone dates for issuing the handbook by September 2018. CBP officials also told us that updates to the handbook may have to wait due to competing priorities, including efforts to address the backlog of assessments (which we discuss later in this report). Delays in updating the handbook mean that CBP’s policy will remain out of alignment with the ISC Standard. Furthermore, although CBP security officials told us that all of the agency’s security specialists have been trained to use the improved assessment methodology, without documentation of the methodology in agency policy, there may be greater risk of inconsistent application. Standards for Internal Control in the Federal Government emphasizes the importance of agencies developing and documenting policies to ensure agency-wide objectives are met. Documentation also serves to retain institutional knowledge over time when questions about previous decisions arise. Without an updated policy handbook requiring a methodology that assesses all undesirable events consistent with the ISC Standard, CBP cannot reasonably ensure that its facilities will have levels of protection commensurate with their risk.
FAA’s methodology does not fully align with the ISC Standard because it neither considers all of the 33 undesirable events nor assesses all three factors of risk. FAA security specialists assess vulnerabilities to the site perimeter, entryways, and interior rooms using a yes/no checklist, but the checklist does not assess the consequences from each of the undesirable events at each facility. With respect to threat, FAA applies the ISC’s baseline threat—a general federal facilities threat level that relates directly to a set of baseline countermeasures—across all its higher-level facilities because FAA policy states that there is no agency-specific threat that exceeds the current baseline threat. According to FAA officials, the baseline threat standardizes security needs across facilities rather than addressing individual facilities’ security needs against specific threats. When necessary, FAA policy allows specialists to modify countermeasures based on an evaluation of conditions at the facility.
FAA realized that this approach was no longer appropriate given the agency-wide goal to make risk-based decisions, a review of the assessment process after a 2014 Chicago fire incident that destroyed critical FAA equipment, and an awareness of ISC initiatives to assess compliance. To address the resulting methodological gaps, FAA hired a contractor to design, develop, test, and validate an improved risk-assessment methodology. Subsequently, FAA improved its methodology in January 2017 to assess the threats, vulnerabilities, and consequences for 30 of the 33 undesirable events identified in the November 2016 revision to the ISC Standard—and tested the methodology at lower- and higher-level facilities. This revised methodology addresses the need to assess individual facility needs rather than using a standardized baseline approach. In April 2017, FAA officials told us of their plan for implementing this methodology and provided tentative milestone dates to conduct further testing, training, and analysis before deciding to use the improved methodology, which they expect to complete by January 2018. However, their plan lacks the necessary information to ensure successful implementation, such as detail on how many facilities they will test and how they will use the results of testing, training, and analysis to implement the improved methodology within the identified 9-month time frame. Furthermore, the improved methodology does not address undesirable events for which ISC issued countermeasures in May 2017. Without a detailed implementation plan to assess the methodology’s impact on its security program, FAA cannot reasonably ensure that its facilities have the proper countermeasures. With ongoing changes to its security program, FAA has an opportunity to fully align its improved methodology with the ISC Standard by including all 33 undesirable events and to update its policy requiring the use of such a methodology.
Unlike CBP and FAA—which developed their own methodologies separate from their parent departments (Department of Homeland Security (DHS) and Department of Transportation (DOT), respectively)—ARS and the Forest Service follow an assessment methodology developed by USDA. USDA’s methodology does not fully align with the ISC Standard because it does not consider all of the 33 undesirable events for which ISC issued countermeasures in May 2017. Security specialists from USDA headquarters typically assess ARS’s and the Forest Service’s higher-level facilities using a risk-based methodology that considers the 31 undesirable events listed in the previous version of the ISC Standard dated August 2013. However, until recently, USDA did not assign ratings to each of the three risk factors—threat, vulnerability, and consequence—and then combine these ratings to yield a measurable level of risk for each undesirable event. USDA security officials said that they have revised the assessment-reporting format to include this risk calculation and trained their specialists to measure risk in this way. USDA officials provided us with a new assessment template that addresses all 33 undesirable events and includes measuring risk. Additionally, USDA officials said that they are revising their outdated physical security manual and expect to complete it by April 2018. With a revised manual and application of the new assessment template, USDA should be better positioned to assess risk at its facilities.
When agencies do not use methodologies that fully align with the ISC Standard, they could face deleterious effects, ranging from facilities having inappropriate levels of protection to agencies being unable to make informed resource allocation decisions for their physical security needs. Specifically, the ISC Standard states that facilities may face the effect of either having (1) less protection than needed, resulting in inadequate security, or (2) more protection than needed, resulting in an unnecessary use of resources. The ISC Standard also states that these effects can be negated by determining the proper protection according to a risk assessment. Identified excess resources in one risk area can then be reallocated to underserved areas, thus ensuring that the most cost-effective security program is implemented. As an illustration of such potential effects, we found that two agencies assessing two higher-level facilities came to two different conclusions in terms of their need for X-ray machines to screen for guns, knives, and other prohibited items in federal facilities. Specifically, one agency based its decision on a policy that does not deviate from the ISC’s baseline set of countermeasures, and the other agency based its decision on professional judgment that deviated from the ISC’s baseline set of countermeasures. Neither agency based its decision on a risk assessment nor documented its decision—both ISC requirements, specifically:
Without conducting a risk assessment, FAA recently expanded a policy requirement calling for all higher-level facilities to have X-ray machines and magnetometers. This new requirement poses a potentially sizeable investment for the agency, with X-ray machines estimated to cost about $24,000 each and magnetometers about $4,000 each (an illustrative cost sketch follows these examples). FAA may need such equipment at all its higher-level facilities. However, the ISC Standard requires that agencies conduct risk assessments first to justify their needs. Without conducting risk assessments, FAA managers could unnecessarily use resources by installing such equipment in all higher-level air traffic facilities when there may be higher priority needs.
A USDA security specialist decided, despite an ISC baseline requirement that higher-level facilities have X-ray machines, not to recommend an X-ray machine at a higher-level Forest Service facility. The specialist reasoned that unlike other federal buildings with numerous unknown visitors, this facility receives mostly known individuals and a limited number of visitors. The ISC Standard allows for professional judgment; however, the ISC requires that agencies document deviations from the baseline set of countermeasures. Reducing the facility’s level of protection without documenting an assessment of risk could result in no record of the basis of the decision for current and future facility managers and security officials to review or use as justification in the case of a question of compliance.
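To make the potential scale of the FAA investment described in the first case concrete, the sketch below combines the per-unit cost estimates cited above with FAA's 55 higher-level facilities. It is a rough illustration only; installation, maintenance, and staffing costs are not included, and whether every facility would actually need both devices is the question a risk assessment would answer.

```python
# Rough, illustrative estimate of FAA's potential equipment investment,
# combining the per-unit costs cited in the report with FAA's 55
# higher-level facilities. Installation and upkeep are not included.
FACILITIES = 55             # FAA's reported higher-level facilities
XRAY_COST = 24_000          # estimated cost per X-ray machine
MAGNETOMETER_COST = 4_000   # estimated cost per magnetometer

total = FACILITIES * (XRAY_COST + MAGNETOMETER_COST)
print(f"Equipping all {FACILITIES} facilities: about ${total:,}")
# Prints: Equipping all 55 facilities: about $1,540,000
```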
In another case, we found that one higher-level facility did not have access control for employees or visitors nor did it have armed guard patrols. The facility manager told us that intelligence and a history without incidents gave leadership reason to believe that these measures were not needed and that therefore the agency did not require and would not fund such protective measures for this facility—in effect, accepting the risks to the facility. Security officials said they also had the same understanding and did not document the matter in the assessment report even though agency policy and the ISC Standard require written documentation when officials deviate from the baseline requirement.
Without security assessments that fully align with the ISC Standard and provide measurable levels of risk, agencies do not have the information they need to determine priorities and make informed resource allocation decisions. For example, they may not be able to assess whether to acquire or forego costly physical-security countermeasures—such as X-ray machines, access control systems, and closed-circuit television systems—for facilities. Additionally, after determining the need to acquire a countermeasure, agencies must fund the countermeasure. As previously discussed, leading practices in capital decision-making include a comprehensive framework to review, rank, and select from competing project proposals for funding. In conducting risk assessments that do not fully align with the ISC Standard (i.e., not assessing threats, vulnerabilities, and consequences and not measuring risk), agencies miss the opportunity for more informed funding decisions. Three of the four agencies (CBP, ARS, and the Forest Service) currently prioritize funding for operational needs over physical security needs (see table 2), when agencies’ priorities might be different if they based their decisions on an aligned risk assessment.
Selected Agencies Reported Facing Challenges in Conducting Security Assessments and Monitoring Results
Agencies Have Not Conducted Timely Security Assessments
Standards for Internal Control state that agencies should use quality information on an ongoing basis as a means to monitor program activities and take corrective action, as necessary. The ISC requires that agencies assess higher-level facilities at least once every 3 years—an interval requirement intended to identify and address evolving risks. We found that three of the four agencies (CBP, ARS, and the Forest Service) did not meet this requirement. Officials reported various challenges, including (1) assessments competing with other security activities, (2) an insufficient number of qualified staff to conduct assessments relative to the number of facilities, and (3) not knowing about the required assessment schedule.
CBP data showed a backlog of facilities that had not been reassessed since 2010. CBP security officials attributed the backlog to (1) having too few security specialists assigned to assess about 1,200 facilities and (2) the specialists working on competing priorities, such as revising the security handbook, conducting technical inspections, and reviewing new construction designs and renovation projects. According to CBP security officials, they have developed a plan to eliminate the backlog by the end of fiscal year 2018 by prioritizing the completion of assessments. While we found the plan comprehensive, the schedule did not seem feasible. For example, the plan assumes that one specialist can complete six assessments in 3 consecutive days and that another specialist can complete three assessments in 1 day. In contrast, security officials told us specialists take about 20 work hours (or 2½ days) to conduct an on-site assessment of one facility. CBP officials said that they believe they can meet the time frames of the plan because they have set aside other priorities and have a thorough understanding of the scope of work involved at the facilities. They added that it will not be easy to meet the timeline, but they can accomplish it with a motivated and committed workforce, adequate financial resources, and no unforeseen activities that would require shifting resources. We question the feasibility of setting aside important priorities, such as updating the policy manual and reviewing physical security elements in new construction designs, as well as the workload assumptions for completing the assessments. Further, these other priorities are also key to securing facilities. Without balancing assessments with competing priorities, CBP may not meet its time frames for completing the assessments by the end of fiscal year 2018 and may also fail to address other important physical security responsibilities.
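The gap between the plan's workload assumptions and the pace officials reported can be made concrete with the figures cited above. The short sketch below is illustrative only; the 8-hour workday used to convert hours to days is our assumption.

```python
# Compare the plan's assumed assessment pace with the pace officials
# reported. Figures come from the report text; the 8-hour workday used
# to convert hours to days is an assumption.
HOURS_PER_ASSESSMENT = 20   # officials: about 20 work hours per facility
WORKDAY_HOURS = 8           # assumed standard workday

planned_assessments = 6     # plan: six assessments ...
planned_days = 3            # ... in 3 consecutive days
needed_days = planned_assessments * HOURS_PER_ASSESSMENT / WORKDAY_HOURS

print(f"Plan allows {planned_days} days; the reported pace implies "
      f"{needed_days:.0f} days, about {needed_days / planned_days:.0f} "
      "times the plan.")
# Prints: Plan allows 3 days; the reported pace implies 15 days, about
# 5 times the plan.
```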
Since the ISC issued its standard in 2010, ARS and the Forest Service have assessed their higher-level facilities at least once. However, these agencies have not reassessed all of their higher-level facilities within the 3-year interval requirement. Specifically, security specialists have not conducted required reassessments of two ARS higher-level facilities and one Forest Service higher-level facility. An ARS headquarters official explained that the agency had not reassessed the two facilities due to competing priorities and insufficient internal resources. During the course of our review, ARS headquarters officials said they began assessing one of the two ARS facilities in May 2017 and will begin assessing the second facility in October 2017. The Forest Service official explained that the agency missed its security reassessment of the regional office because the facility staff had not requested one. During our visit, facility staff responsible for security told us that they were not aware of the ISC’s 3-year interval requirement. Facility staff requested a reassessment, and security officials told us that they expected to complete it by mid-June 2017. Completing this one-time assessment may address the facility’s security needs temporarily. However, ARS and the Forest Service have not implemented a long-term schedule with key milestones and lack a means to monitor completion of assessments of higher-level facilities at least once every 3 years. Consequently, these agencies cannot reasonably ensure that they have full knowledge of the risks to their facilities.
FAA data from 2010 through 2016 show that FAA has assessed its 55 higher-level facilities at least once every 3 years. FAA policy requires that specialists schedule assessments of higher-level facilities every 12–18 months, depending on whether the facility has met FAA physical security standards.
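As a simple illustration of the kind of interval monitoring discussed above, the sketch below flags facilities whose most recent assessment falls outside the ISC's 3-year requirement. The facility names and dates are hypothetical; real tracking would draw on an agency-wide information system rather than a hard-coded list.

```python
# Flag higher-level facilities overdue for reassessment under the ISC's
# 3-year interval. The facility records below are hypothetical examples.
from datetime import date

ISC_INTERVAL_YEARS = 3

last_assessed = {               # facility -> most recent assessment date
    "Regional headquarters": date(2013, 6, 1),
    "Research laboratory": date(2016, 5, 15),
}

def overdue(as_of: date) -> list[str]:
    """Return facilities not reassessed within the 3-year interval."""
    cutoff = as_of.replace(year=as_of.year - ISC_INTERVAL_YEARS)
    return [name for name, last in last_assessed.items() if last < cutoff]

print(overdue(as_of=date(2017, 6, 1)))   # ['Regional headquarters']
```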
Data Limitations Affect Agencies’ Ability to Fully Monitor Security Activities
The ISC Standard states that to make appropriate resource decisions, agencies need information, such as what is being accomplished, what needs management attention, and what is performing at expected levels. (An “information system” is the people, processes, data, and technology that management organizes to obtain, communicate, or dispose of information.) We found that agencies’ methods of collecting and storing security information had limitations that affected agency and facility officials’ oversight of the physical security of their facilities (see table 3).
Without long-term, agency-wide information to monitor whether assessments are conducted on schedule, ARS and the Forest Service may not meet the ISC Standard and, as a result, may not adequately protect their facilities and employees.
The ISC Standard also states that agencies should measure their security program’s capabilities and effectiveness to demonstrate the need to fund facility security and to make appropriate decisions for allocating resources. However, the agencies in our review were unable to demonstrate appropriate oversight of their physical security programs because:
CBP’s handbook does not include requirements for data collection and analysis for monitoring physical-security program activities. Facility managers and security officials do not enter assessment results, such as the countermeasures recommended for facilities, in the real property database. Consequently, they do not have comprehensive data to manage their security program, assess overall performance, and take any necessary corrective actions. A CBP official told us that a comprehensive database would allow CBP to set priorities for addressing countermeasures. Without including data collection and analysis requirements in its updated handbook, CBP may be unable to monitor the performance of its physical security program.
FAA’s policy does not require ongoing monitoring of physical security information, such as the status of recommended countermeasures or assessment schedules. As a result, FAA officials do not proactively use physical security information to assess the overall performance of its physical security program and take corrective actions before an incident occurs. Without a policy requiring ongoing monitoring of information—an internal control activity—FAA may be unable to assess the overall performance of its security program and take necessary corrective actions.
USDA has a decentralized security program and places the responsibility on agencies to create their physical security programs. Security officials from ARS and the Forest Service told us that USDA does not have a policy for collecting and managing agency-wide information; however, they said that USDA is drafting a new departmental regulation and manual that will specify (1) the roles and responsibilities of agency and facility managers and (2) electronic data-reporting requirements for monitoring the performance of the physical security program. USDA officials provided a draft of USDA’s regulation and manual for our review. The draft regulation did not mention data reporting and monitoring, while the draft manual only contained a table of contents that included a section entitled “Facility Tracking Database.” USDA officials expect to issue the new policies sometime between October 2017 and April 2018. In the absence of the new departmental regulation and manual, USDA and Forest Service officials told us that they have begun to develop a Forest Service system for storing electronic copies of agency-wide assessments and that they plan to expand the use of this system to track site-specific assessment dates and the status of recommended countermeasures. Forest Service officials provided milestone dates and described the capabilities for a future information system, which they expect to complete in September 2017. However, we could not determine whether the manual will have information system requirements to monitor agencies’ physical security programs—an internal control activity. Without USDA’s including data collection and analysis requirements in its manual, its agencies may not be able to monitor the performance of their physical security programs.
Selected Agencies Vary in Addressing Recommended Corrective Actions
Because the agencies lacked information to monitor security activities, they were unable to provide us with information on the status of countermeasures across their entire portfolios. In order to better understand the status of countermeasures implemented and facilities’ experiences when implementing countermeasures, we determined the status of countermeasures at the 13 facilities we visited.
As previously noted, risk management, as it pertains to physical security, involves agency officials monitoring their physical security programs. During our visits to 13 selected facilities, we found the four agencies differed in the number of countermeasures that they had not implemented. Facility officials provided us with some information on why countermeasures had not been implemented, specifically:
CBP had a significant number of recommended countermeasures from 2010 through 2016 that remained open at the eight selected CBP facilities. CBP facility officials gave reasons why recommended countermeasures had not been implemented. At one facility, officials did not know about the recommended countermeasures from its last 2010 assessment because the individuals previously knowledgeable about the assessments left the organization without communicating the results. By taking action to improve facility security, they implemented some needed countermeasures. However, at the time of our review, a large number of the recommendations remained open. At another facility, officials told us that they too had not known (for the same reason mentioned above) of their 2010 assessment, which contained recommended countermeasures. However, these officials told us that they submitted a funding request a few weeks before our visit to address all except one of the open countermeasures. In other cases, facilities have not implemented needed countermeasures due to resource constraints or physical site limitations.
FAA had a large number of recommended countermeasures from 2010 through 2016 that remained open at the time of our review for the two FAA facilities visited. In this case, the most recent security assessment, completed in late 2016, resulted in one facility’s having little time to implement countermeasures by the time we conducted our analysis.
While ARS had closed almost all recommended countermeasures at two facilities at the time of our review, one Forest Service facility had not yet implemented a recommendation (to secure its entrance doors) that was identified in a 2013 security assessment (see bottom center photo, fig. 3). This countermeasure remained open because facility officials said they continued to explore alternatives to address the recommendation.
Figure 3 shows examples of countermeasures not fully implemented at selected facilities we visited.
During our site visits and discussions with facility staff, we found that physical site limitations or other priorities can make it difficult for facility managers to implement countermeasures. For example, a countermeasure might involve correcting a clear zone violation—that is, moving an object (such as a brick wall) a certain distance away from the facility’s perimeter fence to prevent a potential intruder from using the object to climb over the fence. However, when the object near the fence is a building and the property outside of the fence is not federally owned (see bottom right photo, fig. 3), it may not be cost effective to correct the clear zone violation. In this situation, the agency bears the responsibility for exploring ways to address the vulnerability. In following the ISC Standard, as previously noted, managers are required to justify and document why they could not implement recommended countermeasures—what the ISC calls risk acceptance.
Conclusions
Selected agencies carry a great responsibility for protecting facilities that support border protection activities, provide safe and efficient air traffic around the country, and protect the quality of the nation’s food supply. With this responsibility comes the need to appropriately assess risk to ensure the security of these agencies’ facilities. However, 7 years after the ISC issued its initial risk-management process standard, each of the four selected agencies continued to use assessment methodologies that did not fully align with this standard. During our review, agencies improved their methodologies to better align with the ISC Standard, but the agencies had not yet incorporated the methodologies into their policies and procedures. Without updated policies and procedures requiring a methodology that adheres to the ISC Standard (including all 33 undesirable events now identified in the November 2016 revision to the ISC Standard), agencies may not collect the information needed to assess risk and determine priorities for improved security. This situation could hamper the agencies’ ability to make informed resource allocation decisions or to recommend countermeasures commensurate with the needs at specific facilities. To address challenges in conducting timely assessments, agencies with backlogs developed plans to address them, but the assumptions and time frames used in CBP’s plan did not appear to fully reflect the agency’s competing priorities and actual experience. Additionally, ARS and the Forest Service have not implemented a long-term assessment schedule with key milestones to ensure that higher-level facilities are reassessed at least once every 3 years. Further, in cases where the agencies may have had risk assessment information, CBP, ARS, and the Forest Service lack the means to collect, store, and analyze this information in order to monitor the status of a facility’s security. Without these key aspects of a comprehensive security program—a methodology that meets the standard; policies and procedures that incorporate that methodology; the ability to complete assessments on time; and information to perform monitoring—agencies remain vulnerable to substantial security risks.
Recommendations for Executive Action
To improve agencies’ physical security programs’ alignment with the ISC’s Risk Management Process for Federal Facilities and Standards for Internal Control in the Federal Government for information and monitoring, we recommend that the Commissioner of U.S. Customs and Border Protection take the following three actions: (1) include in the updated Security Policy and Procedures Handbook the ISC’s Risk Management Process for Federal Facilities requirements to assess all undesirable events, consider all three factors of risk, and document deviations from the standard; (2) include in the updated handbook data collection and analysis requirements for monitoring the performance of CBP’s physical security program; and (3) revise the assumptions used in the plan to address the backlog to balance assessments with competing priorities, such as updating the policy manual and reviewing new construction designs, in order to develop a feasible time frame for completing the assessment backlog.
We recommend that the Secretary of Transportation direct the FAA Administrator to take the following three actions: (1) develop a plan that provides sufficient details on the activities needed and the time frames for implementing an improved methodology by FAA’s identified date; (2) update FAA’s policy to require the use of a methodology that fully aligns with the ISC’s Risk Management Process for Federal Facilities by assessing all undesirable events, considering all three factors of risk, and documenting all deviations from the standard countermeasures; and (3) update FAA’s policy to include ongoing monitoring of physical security information.
We recommend that the Secretary of Agriculture take the following two actions: (1) include, in the department’s revised physical-security manual, data collection and analysis requirements for monitoring the performance of agencies’ physical security programs; and (2) direct the Administrator of the Agricultural Research Service and the Chief of the Forest Service to implement and monitor a long-term assessment schedule with key milestones to ensure that higher-level facilities are reassessed at least once every 3 years.
Agency Comments
We provided a draft of this report to the Departments of Homeland Security, Transportation, and Agriculture for review and comment. All three departments agreed with the findings and recommendations for their respective agencies. DHS agreed with our recommendations and provided actions and timeframes for completion. With regard to our recommendation to update the Security Policy and Procedures Handbook, DHS stated that CBP is updating the handbook to include: (1) a discussion and diagram of the ISC risk management process and its application within CBP’s assessment processes; (2) specific guidance for conducting risk assessments in accordance with the ISC’s Risk Management Process for Federal Facilities; and (3) a requirement and guidance for data collection and analysis in support of a robust physical security program. With regard to our recommendation to revise the assumptions used in the plan to address the assessment backlog, DHS stated that CBP has reevaluated current priorities and believes the current plan to eliminate the risk assessment backlog by the end of fiscal year 2018 is achievable. DHS also provided technical comments, which we incorporated as appropriate. DHS’s official written response is reprinted in appendix IV.
DOT also agreed with our recommendations and by e-mail requested that we publish the response to the sensitive version of this report. DOT stated that FAA continues to refine its policy and develop processes that address the ISC threats, vulnerabilities, and consequences. Further, DOT stated that FAA would either validate that current mitigation strategies address those risks or apply additional appropriate countermeasures. DOT stated that it will provide a detailed response to each recommendation within 60 days from the date of this report. DOT’s official written response is reprinted in appendix V.
USDA agreed with our recommendations and provided the agency-wide actions for completion. USDA provided a plan to ensure compliance with the ISC’s Risk Management Process for Federal Facilities by development of a standard physical-security assessment process and by initiation of a compliance program to track assessments and monitor the installation of countermeasures. In an e-mail, USDA provided milestone dates and planned completion by January 2019. USDA’s official written response is reprinted in appendix VI.
If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. GAO staff who made key contributions to this report are listed in appendix VII.
Appendix I: Objectives, Scope, and Methodology
This report examines: (1) how selected agencies’ assessment methodologies align with the Interagency Security Committee’s (ISC) risk management standard for identifying necessary countermeasures and (2) what management challenges, if any, selected agencies reported facing in conducting physical security assessments and monitoring the results.
To determine how selected agencies’ assessment methodologies align with ISC standards for identifying the necessary countermeasures, we identified federal executive branch departments and agencies reported by the Department of Homeland Security (DHS) to have received delegations of authority to protect their own buildings. We reviewed the Federal Real Property Council’s data on the Federal Real Property Profile to identify federally owned and agency-controlled buildings. We determined that these data were sufficiently reliable for the purpose of our reporting objectives based upon our recent report that reviewed these data fields. We selected four agencies based upon their large quantity of reported federally owned and agency-controlled buildings: DHS’s U.S. Customs and Border Protection (CBP); the Department of Transportation’s (DOT) Federal Aviation Administration (FAA); and the United States Department of Agriculture’s (USDA) Agricultural Research Service (ARS) and United States Forest Service (Forest Service). This methodology purposely does not include federal buildings protected by the Federal Protective Service (FPS) and under the control of the General Services Administration, as well as other agencies that we reported on in our previous work. We obtained and reviewed one particular ISC standard, The Risk Management Process for Federal Facilities (the ISC Standard), and its related appendices for assessing physical security and providing recommended countermeasures at federal facilities. We obtained and analyzed the selected departments’ and agencies’ facility-security policies and procedures for a risk assessment methodology. According to the ISC Standard, agencies’ risk assessment methodologies must (1) consider all of the undesirable events identified in the ISC Standard as possible risks to federal facilities, as listed in appendix III; (2) assess the threat, consequences, and vulnerability to specific undesirable events; (3) produce similar or identical results when applied by various security professionals; and (4) provide sufficient justification for deviations from the ISC-defined security baseline.
We limited the scope of this review to the first two standards above because agencies’ adherence to these standards could be objectively verified by reviewing and analyzing agency documentation and interviewing agency officials, and their adherence to the two additional standards could not be verified in this manner. We did not conduct risk assessments with independent security professionals to evaluate (1) the results from prior agency evaluations and (2) the sufficiency of justifications for deviations from the ISC-defined security baseline, as both evaluations were outside the scope of the engagement. Therefore, for the purposes of this report, risk assessment policies, procedures, and resulting methodologies that align with ISC standards are those that consider all of the undesirable events and assess the threats, consequences, and vulnerabilities to specific undesirable events. We reviewed and analyzed information to answer the following five questions: (1) Do the policies and procedures mention the ISC standards? (2) Do the policies and procedures consider all of the undesirable events? (3) Do the policies and procedures assess the threat of specific undesirable events? (4) Do the policies and procedures assess the consequences of specific undesirable events? (5) Do the policies and procedures assess the vulnerability to specific undesirable events?
We answered each of these questions as either a “Yes” or “No” for our selected agencies. The “No” answer to questions 3, 4, and 5 includes the following two possibilities: (a) the agency’s threat, consequence, or vulnerability ratings are not tied to specific undesirable events, or (b) the agency does not have a framework or formalized steps within which it collects and analyzes threat-, consequence-, or vulnerability-related information. If the answer to each of the five questions was “Yes,” then the agency’s overall risk assessment methodology aligns with ISC risk assessment standards for the purposes of this report. If the answer to one or more of the five questions was “No,” then the agency’s methodology does not align with ISC standards for the purposes of this report.
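Because alignment turns on all five questions being answered “Yes,” the decision rule reduces to a simple conjunction, as the illustrative sketch below shows. The example answers are placeholders rather than any agency's actual results.

```python
# Encode the five yes/no questions; for the purposes of the report, a
# methodology aligns with the ISC Standard only if every answer is "Yes".
QUESTIONS = (
    "mentions the ISC standards",
    "considers all of the undesirable events",
    "assesses the threat of specific undesirable events",
    "assesses the consequences of specific undesirable events",
    "assesses the vulnerability to specific undesirable events",
)

def aligns_with_isc(answers: dict[str, bool]) -> bool:
    """True only when all five questions are answered 'Yes' (True)."""
    return all(answers[question] for question in QUESTIONS)

# Hypothetical agency whose checklist skips some undesirable events:
example = dict.fromkeys(QUESTIONS, True)
example["considers all of the undesirable events"] = False
print(aligns_with_isc(example))   # False -> methodology does not align
```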
We interviewed security officials at ISC; three departments (DHS, DOT, and USDA); and four agencies (CBP, FAA, ARS, and the Forest Service). We obtained and analyzed agency guidance on prioritizing physical security needs and interviewed agencies’ facility maintenance and budget officials. We reviewed the ISC’s best practices for planning for physical security resources within an agency budget process. Additionally, we reviewed the Office of Management and Budget’s and our leading practices in capital decision-making that provide agencies with guidance for prioritizing budget decisions such as “countermeasure projects.” We also reviewed Standards for Internal Control in the Federal Government because internal controls play a significant role in helping agencies achieve their mission-related responsibilities. Our findings from our review of the selected agencies are not generalizable to all ISC member agencies, but provide insight into and illustrative examples about selected agencies’ facility risk-assessment methodologies.
To determine what management challenges selected agencies reported facing in conducting physical security assessments and monitoring results, we interviewed agencies’ security, maintenance, and budget officials. We asked agency security officials to provide portfolio-wide data on facility security assessments for our review in order to select sites to visit and to analyze data on the dates of assessments and the status of findings. We assessed the reliability of these data through interviews with knowledgeable agency staff and a review for completeness and any unexpected values. We compiled information from physical security assessments when no portfolio-wide agency data were available. We determined that these data were sufficient for the purpose of our reporting objectives and selected geographically dispersed sites with buildings with higher reported security levels per the ISC Standard, as these higher security levels have greater requirements and therefore the potential for greater resource needs. See appendix II for the 13 sites we selected. For these selected sites, we interviewed agency staff concerning the assessment process, site-specific findings, recommendations, justification for deviations from ISC’s baseline standards, and management challenges faced in addressing physical security needs. We observed and photographed the status of the findings from the site physical security assessments. We did not independently determine what constitutes a management challenge or a physical security finding. Rather, we relied on these stakeholders to determine these physical security concerns as defined in their own standards and guidance. The information from our selected sites is illustrative and cannot be generalized to sites agency-wide.
The performance audit upon which this report is based was conducted from June 2016 to August 2017 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We subsequently worked with DHS, DOT, and USDA from August 2017 to October 2017 to prepare this version of the original report for public release. This public version was also prepared in accordance with these standards.
Appendix II: Selected Facilities GAO Visited
Appendix III: The Interagency Security Committee’s Undesirable Events
Appendix IV: Comments from the Department of Homeland Security
Appendix V: Comments from the Department of Transportation
Appendix VI: Comments from the Department of Agriculture
Appendix VII: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Amelia Shachoy (Assistant Director), Steve Martinez (Analyst-in-Charge), Jennifer Clayborne, George Depaoli, Geoffrey Hamilton, Joshua Ormond, Alison Snyder, Amelia Michelle Weathers, and Elizabeth Wood made key contributions to this report. | Why GAO Did This Study
Protecting federal employees and facilities from security threats is of critical importance. Most federal agencies are generally responsible for their facilities and have physical security programs to do so.
GAO was asked to examine how federal agencies assess facilities' security risks. This report examines: (1) how selected agencies' assessment methodologies align with the ISC's risk management standard for identifying necessary countermeasures and (2) what management challenges, if any, selected agencies reported facing in conducting physical security assessments and monitoring the results.
GAO selected four agencies—CBP, FAA, ARS, and the Forest Service—based on their large number of facilities and compared each agency's assessment methodology to the ISC Standard; analyzed facility assessment schedules and results from 2010 through 2016; and interviewed security officials. GAO also visited 13 facilities from these four agencies, selected based on geographical dispersion and their high risk level.
What GAO Found
None of the four agencies GAO reviewed—U.S. Customs and Border Protection (CBP), the Federal Aviation Administration (FAA), the Agricultural Research Service (ARS), and the Forest Service—used security assessment methodologies that fully aligned with the Interagency Security Committee's Risk Management Process for Federal Facilities standard (the ISC Standard). Under this standard, methodologies used to identify necessary facility countermeasures—such as fences and closed-circuit televisions—must:
1. Consider all of the undesirable events (e.g., arson and vandalism) identified by the ISC Standard as possible risks to facilities.
2. Assess three factors—threats, vulnerabilities, and consequences—for each of these events and use these three factors to measure risk.
All four agencies used methodologies that included some ISC requirements when conducting assessments. CBP and FAA assessed vulnerabilities but not threats and consequences. ARS and the Forest Service assessed threats, vulnerabilities, and consequences, but did not use these factors to measure risk. In addition, the agencies considered many, but not all 33 undesirable events related to physical security as possible risks to their facilities. Agencies are taking steps to improve their methodologies. For example, ARS and the Forest Service now use a methodology that measures risk and plan to incorporate the methodology into policy. Although CBP and FAA have updated their methodologies, their policies do not require methodologies that fully align with the ISC standard. As a result, these agencies miss the opportunity for a more informed assessment of the risk to their facilities.
All four agencies reported facing management challenges in conducting physical security assessments or monitoring assessment results. Specifically, CBP, ARS, and the Forest Service have not met the ISC's required time frame of every 3 years for conducting assessments. For example, security specialists have not conducted required reassessments of two ARS and one Forest Service higher-level facilities. While these three agencies have plans to address backlogs, CBP's plan does not balance conducting risk assessments with other competing security priorities, such as updating its policy manual, and ARS and the Forest Service lack a means to monitor completion of future assessments. Furthermore, CBP, ARS, and the Forest Service did not have the data or information systems to monitor assessment schedules or the status of countermeasures at facilities, and their policies did not specify such data requirements. For example, ARS and the Forest Service do not collect and analyze security-related data, such as countermeasures' implementation. FAA does not routinely monitor the performance of its physical security program. Without improved monitoring, agencies are not well equipped to prioritize their highest security needs, may leave facilities' vulnerabilities unaddressed, and may not take corrective actions to meet physical security program objectives. This is a public version of a sensitive report that GAO issued in August 2017. Information that the agencies under review deemed sensitive has been omitted.
What GAO Recommends
GAO recommends: (1) that CBP and FAA update policies to require the use of methodologies fully aligned with the ISC Standard; (2) that CBP revise its plan to eliminate the assessments backlog; and (3) that all four agencies improve monitoring of their physical security programs. All four agencies agreed with the respective recommendations. |
Background
We and others have identified challenges facing the federal human capital system’s ability to recruit, retain, develop, and engage workers, both today and in the future. For example:
Classification system. The General Schedule classification system—which defines and organizes federal positions, primarily to assign rates of pay—has not kept pace with the government’s evolving requirements.
Recruiting and hiring. Federal agencies need a hiring process that is applicant friendly and flexible, and meets policy requirements.
Pay system. Employees are compensated through an outmoded system that (1) rewards length of service rather than individual performance and contributions, and (2) automatically provides across-the-board annual pay increases, even to poor performers.
Performance management. Federal agencies have faced long-standing challenges developing modern, credible, and effective employee performance management systems and dealing with poor performers.
Employee engagement. Agencies can improve employee engagement and performance through analysis and sharing of promising practices. Employee engagement is generally defined as the sense of purpose and commitment employees feel toward their employer and its mission.
The administration is moving forward with broad efforts to address government-wide human capital challenges, improve government efficiency, and understand how key trends will affect the future of federal work and the workforce. For example, the President’s Management Agenda’s cross-agency priority goal on the 21st century workforce aims to (1) improve employee performance management and engagement, (2) train staff to develop new skills and redeploy human capital resources, and (3) enable simple and strategic hiring practices.
In 2018, OPM issued the first Federal Workforce Priorities Report to communicate key government-wide human capital priorities, suggest strategies, and help inform agency strategic and human capital planning. The report identifies changes in the external environment that will likely affect federal human capital management, including the evolving role of workers, changes in technology, employee health, and shifting generational demographics. In addition, OPM is developing a foresight program to help federal agencies navigate emerging strategic workforce challenges and harness potential opportunities. As part of its foresight efforts, OPM has also hosted a series of symposia that provide human capital specialists insight on addressing workforce challenges of the future.
Federal Work Is Changing Amid Demographic and Technological Trends
We identified key trends in agency operations and attitudes toward work that are affecting how federal work is done and, consequently, the skills and competencies that workers need to accomplish agency missions, as illustrated by figure 1. These trends will require a federal workforce that can better adapt to and leverage constantly evolving technology and mission requirements. They will also require a federal workforce that can effectively collaborate and partner with workers both within and outside of the federal sector to achieve national policy objectives.
Technological Advances
Technological advances will change the way work is done. Advances in automation, artificial intelligence, robotics, and information and communication technology have the potential to accelerate changes in federal work beyond any past experience, but they also involve risks. Advances in automation and robotics are changing the way that work is done by altering the balance between what tasks are completed by humans and those completed by machines. The federal workforce will need to develop new skill sets and expertise to effectively utilize and manage these technological advances.
In 2017, we convened a forum that highlighted several applications of artificial intelligence, many of which could affect agencies and federal work. For example, robots enabled by artificial intelligence could assist patients with medication management and mobility support in clinical settings; developments in automated vehicles could affect work related to government vehicle pools, safety, and transportation management; the use of artificial intelligence in criminal justice and cybersecurity applications could bring benefits but would need to be carefully managed with regard to privacy protection, among other concerns; and the accelerated pace of change associated with artificial intelligence may strain workforce systems’ capacity to train and hire individuals with appropriate skill sets.
Technology is also changing human capital management, according to experts we contacted. Experts stated that technology can help improve recruitment efforts, streamline hiring processes, and match employees to tasks. For example, they said that employers can develop mobile apps to make the hiring process easier to navigate and use artificial intelligence to better screen and align applicants with job positions. Experts also stated that employees will need to constantly update their digital literacy to stay current with emerging technology.
OPM has also explored the effect of technology on the federal workforce. In February 2018, OPM reported that, in most jobs, certain activities may be automated rather than the entire occupation. OPM also reported that machine assistance may amplify the value of expertise and may increase work capacity by providing employees time to focus on more important work. Further, OPM reported that it is seeking to acquire or develop enterprise technological solutions to improve the analytic capabilities of the federal human capital community. Improved data analytics should help support more informed and evidence-based planning and decision-making.
OPM suggested that the technological changes will require agencies to coordinate efforts to (1) fund technological experimentation and pilots; (2) promote acquisition of skills that are not replaced by technology (e.g., creativity, relationship building, and innovation); and (3) engage in strategic foresight activities. Relatedly, OPM, the Office of Management and Budget, and the Department of Defense are developing a plan to identify ways to provide employees impacted by automation with other work, and to identify skills needed in the future. The agencies’ efforts are part of the cross-agency priority goal on the 21st century workforce.
Federal work is also being affected by increased use of virtual communication, which provides flexibility in where employees can do their work. In both the workforce-at-large and the federal workforce, the percentage of employees who telework has increased. For example, OPM reported that the percent of eligible employees teleworking increased from 29 percent in 2012 to 51 percent in 2016.
We have previously reported that the federal government has increasingly recognized telework as an important human capital strategy that can give employees more work-life balance and help agencies continue operations during emergency events. However, federal agencies also face costs associated with telework, including training staff, ensuring supervisors have the necessary skills to manage remote staff, and overseeing the telework program to ensure compliance and reduce the risk of fraud. In July 2016, we found that OPM provided resources to agencies to help them with their telework programs, but was missing other opportunities to help agencies better identify the net cost savings associated with their telework programs.
We recommended that OPM work with the Chief Human Capital Officers Council to provide clarifying guidance on options for developing supporting data for benefits and costs associated with agency telework programs. OPM concurred with the recommendation and in October 2018 provided documentation showing it is developing draft guidance on evaluating work-life programs, including telework. To fully implement this recommendation, it will be important for OPM to finalize and provide this guidance to agencies.
Increased Reliance on Nonfederal Partners
An increased reliance on nonfederal partners to achieve policy goals will require new skills and competencies for which agencies will need to identify, recruit, and hire. Increasingly, the federal government works with state and local governments, as well as other partners, to achieve a wide range of policy goals. The federal government uses grants as a tool to achieve national priorities through nonfederal partners, including state and local governments, educational institutions, and nonprofit organizations.
Federal grant outlays to state and local governments have generally increased, from $230 billion in fiscal year 1980 to $624 billion in fiscal year 2015, as measured in constant fiscal year 2015 dollars. We previously reported that a range of skills are needed to manage the various tasks associated with the grants life cycle. For example, the grants workforce needs to notify grant awardees of the general terms and conditions of the grant, including statutory and regulatory requirements.
In support of their missions and activities, agencies also use contractors to procure a variety of services and products, including products that cannot be easily and clearly defined in advance and that are difficult to verify after delivery. In addition, agencies use contractors to provide the skills needed to help them manage complex operations. In fiscal year 2017, federal agencies obligated almost $306 billion for service contracts. Contractors can help agencies meet surge capacity needs, among other benefits. However, the Office of Federal Procurement Policy and our prior work have identified risks of overreliance on contractors and the need for increased management attention on certain types of services, such as professional and management support services.
In addition to using grants and contractors, Congress has given broad statutory authority across the executive branch to use various open innovation strategies. Open innovation involves using various tools and approaches to harness the ideas, expertise, and resources of those outside an organization to address an issue or achieve specific goals. Our October 2016 report highlighted cases where agencies are using open innovation strategies—such as crowdsourcing and prize competitions—to effectively engage and collaborate with each other, and to leverage knowledge outside the federal workforce to achieve their goals.
For example, at the time we found that every 2 years since 2009 the Federal Highway Administration had engaged a broad range of public- and private-sector stakeholders to identify and implement innovative ideas that measurably improved highway construction projects. Federal workers in charge of such open innovation initiatives will need to be able to work in collaborative, cross-cutting environments. To that end, in June 2017, we identified various government-wide and agency-level resources—such as interagency communities of practice and dedicated staff positions—that the executive branch has put into place to support effective implementation of open innovation initiatives.
Fiscal Constraints
Increasing fiscal constraints require agencies to reevaluate and reprioritize what the federal government does, how it does business, and, as appropriate, who conducts its business. The nation is on a long-term, unsustainable fiscal path. We have previously reported that the federal government is spending far more money than it is collecting and is projected to do so going forward. Further, fiscal pressures have already begun to affect the management of the federal workforce, including decisions to hire, retain, train, contract, and collaborate. Without careful attention to strategic and workforce planning and other approaches to managing and engaging personnel, reduced investments in human capital may have lasting, detrimental effects on the capacity of an agency’s workforce to meet its mission.
In May 2014, we identified strategies to help agencies maintain their human capital capacity while facing fiscal constraints. These strategies include strengthening coordination within the human capital community, using enterprise solutions to address shared challenges, and creating more agile talent management to address inflexibilities in the current system.
Also, guidance from the Office of Management and Budget directs federal agencies to reconsider priorities, determine how to make trade-offs, and evaluate the potential effects of these decisions. In June 2018, we reported that as federal agencies reexamine their role in carrying out specific missions and programs, they should determine whether the federal government is best suited to provide a given service or whether another level of government or sector could provide it more efficiently or effectively.
Evolving Mission Requirements
Evolving mission requirements challenge agencies to adapt their work and workforces as they respond to policy shifts, technology changes, and resource constraints affecting their work.
Our previous work on the Census Bureau (Bureau) highlights this trend. The Bureau is redesigning its approach to the 2020 Census to address rising costs and declining response rates. In May 2017, we reported that the basic design of the enumeration—mail out and mail back of the census questionnaire with in-person follow-up for nonrespondents—has been in use since 1970. However, this traditional design is no longer capable of cost-effectively counting the population, and the Bureau has fundamentally reexamined its approach for conducting the 2020 Census. For example, the Bureau is planning to offer households the option of responding to the survey through the internet.
The Bureau is also leveraging nonfederal partners and technology to respond to evolving mission requirements. For example, the Bureau plans to enhance its work with nonfederal partners to successfully complete the enumeration, particularly for hard-to-count groups, such as minorities, renters, and young children. In July 2018, we reported that to facilitate this effort, the Bureau plans to hire nearly twice as many partnership specialists as it had planned for the 2010 Census.
These partnership specialists will need core relationship-building skills and advanced knowledge of digital media. However, the Bureau faces a significant challenge in hiring staff with these skills because it is operating in a much tighter labor market than it did prior to the 2010 Census. Likewise, the Bureau has had difficulty filling key positions to oversee information technology contracts. In August 2018, we reported that a government program management office is managing the contractor that will integrate all of the Bureau’s key systems and infrastructure for the decennial. However, in June 2018, Bureau officials reported that 33 of the office’s 58 federal employee positions were vacant. These vacancies create risks for the program management office’s ability to oversee contractor cost, schedule, and performance.
Changing Demographics and Shifting Attitudes toward Work
Changing demographics and shifting attitudes toward work may require new skills to manage a diverse workforce that seeks purpose, autonomy, and career mobility. We found that, over the past 10 years, the percentage of federal employees who had a disability, identified as a minority, were veterans, or held an advanced degree increased (see figure 2).
This increasing diversity should help provide agencies with the requisite talent and multidisciplinary knowledge to accomplish their missions. While the percentage of federal employees 40 years and older remained relatively flat, the federal workforce had a higher percentage of individuals 40 and older than the U.S. employed civilian labor force. The federal workforce also had higher percentages of people with a disability, veterans, and holders of advanced degrees than the civilian labor force (see figure 3).
Agencies face a potential risk related to retirement, particularly among the Senior Executive Service (SES). Specifically, we found that retirement rates for SES employees are higher than for all employees, with 7 to 8 percent of SES retiring every year for the past 6 years (see figure 4). Cumulatively, 41 percent of the permanent SES workforce in fiscal year 2012 retired by fiscal year 2017. If turnover is not strategically managed and succession plans are not in place, gaps can develop in an agency’s institutional knowledge and leadership as experienced employees retire. While retirements can aggravate the problem of skill gaps, they also present an opportunity for agencies to realign their workforce with needed skills and leadership levels to better meet existing and newly emerging mission requirements.
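To see how annual rates of this size compound into the cumulative figure, consider a rough illustrative calculation (our simplification for exposition, not the report’s underlying cohort data): applying a constant annual retirement rate of 8 percent to the fiscal year 2012 SES cohort for 6 years gives

\[ 1 - (1 - 0.08)^{6} \approx 1 - 0.61 = 0.39, \]

or about 39 percent, broadly consistent with the observed 41 percent once year-to-year variation within the 7 to 8 percent range is taken into account.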
Based on expert interviews, we also identified shifts in employee attitudes toward work, which present recruiting opportunities and challenges for the federal government. Experts said that employees seek meaningful work (i.e., work that can influence the greater society); autonomy within the workplace (i.e., opportunities to develop creative and innovative solutions to complex problems); control over their work environment (i.e., they want to set a schedule and to work in a location that provides work-life balance); and career mobility, including opportunities for upward mobility (i.e., promotions) and lateral mobility (i.e., opportunities to rotate to different roles or projects within the same agency, a different agency, or outside of government).
Related to career mobility, experts said that employees are seeking greater developmental opportunities and would prefer longer-term employment where they can continue to build their skills and train. Experts noted that while employees change jobs more often than in the past, this phenomenon can be a result of employers investing less in employee development, which has led to greater turnover. OPM also recently reported that millennials are known for frequently transitioning from one job to the next.
While federal agencies offer unique opportunities to pursue meaningful work, achieve autonomy, and have a healthy work-life balance, experts also highlighted key challenges regarding perceptions of federal work among potential applicants. These challenges include perceptions that the government is too bureaucratic, federal work lacks innovation and involves maintaining the status quo, federal work is less prestigious than the private sector, and federal workers do not get to see the immediate effect of their work. Officials from federal employee and manager groups believed that furloughs, government shutdowns, pay freezes, and negative rhetoric from elected officials have all contributed to the negative perceptions among potential applicants. For example, from December 22, 2018, to January 25, 2019, a partial government shutdown occurred as a result of a lapse in appropriations affecting some, but not all, federal agencies. It was the second multiweek lapse in appropriations causing a government shutdown since 2013 and the longest shutdown in American history. Federal employees at the affected agencies did not receive a paycheck during the shutdown. Experts we interviewed noted that the perception of job security offered by federal work is attractive to employees. However, prolonged shutdowns may alter this perception and harm the government’s recruitment and retention efforts.
Given the changing demographic composition of the federal workforce and shifting attitudes toward work, our analysis suggested that it may be important to select and train managers and supervisors who possess several leadership competencies. These competencies include fostering an inclusive workplace (valuing diversity and individual differences and leveraging these differences to achieve the agency’s mission); team building (inspiring and fostering team commitment, spirit, pride, and trust); interpersonal skills (treating others with courtesy, sensitivity, and respect); and managing conflict (encouraging differing opinions to be expressed and resolving disagreements in a constructive manner). Such competencies can help managers and supervisors develop an agency culture where all employees feel valued, respected, engaged, and able to contribute toward an agency’s mission.
Key Talent Management Strategies Can Help OPM and Agencies Better Manage the Current and Future Workforce
In light of the trends discussed above, we identified actionable strategies that agencies may be able to use to effectively manage the future federal workforce in key talent management areas (see table 1). While these strategies are not an exhaustive list, collectively they suggest basic steps that agencies can take within existing authorities to position themselves to meet their talent needs. Agencies already use some of these strategies, and focused attention to leadership, culture, and sound management practices can help them prepare for the future workforce.
For each strategy, we highlight some of the challenges agencies face, actions OPM can take to implement open, related recommendations from our prior work, and practices that may help agencies implement the strategy. These practices are based on our review of related reports, group interviews with federal Chief Human Capital Officers (CHCO), and interviews with selected private organizations and foreign governments.
Align Human Capital Strategy with Current and Future Mission Requirements
Why Is Aligning Human Capital Strategies Important? Strategic workforce planning aligns an organization’s human capital program with its current and emerging mission and programmatic goals, and develops long-term strategies for acquiring, developing, and retaining staff to achieve programmatic goals. This process—in conjunction with identifying skills and competencies and analyzing gaps—enables the organization to be agile, resilient, and responsive to current and future demographic and technological trends, as well as other demands. These efforts can also help agencies tailor their recruiting programs.
In our prior work, we reported that high-performing organizations define what they want to accomplish and what kind of organization they want to be. They then identify and analyze the personnel skills, competencies, numbers, and other factors needed to achieve those objectives. However, these steps are a challenge for agencies that lack the capacity for strategic workforce planning. Consequently, these agencies’ human capital efforts tend to focus on support and transactional activities and compliance with rules and regulations. While these functions are important, successful strategic human capital management requires human capital professionals to integrate human capital strategies with their agency’s core business practices. In addition, high-performing organizations recognize the fundamental importance of measuring both the outcomes of human capital strategies and how these outcomes have helped the organizations accomplish their missions and programmatic goals.
Set Workforce Goals and Assess Skills and Competencies Needed to Achieve Them
Identify existing skills and competencies. In May 2014, we reported that agencies should be aware of existing skills and competencies in their workforce to help inform workforce planning. According to the Department of the Treasury (Treasury) CHCO, establishing a skills inventory can help managers assign the right talent to the right place at the right time. For example, the CHCO told us that during the Puerto Rico debt crisis, Treasury needed to be able to identify the necessary skills to manage the crisis. The agency is now implementing an Integrated Talent Management System to facilitate workforce and succession planning as well as learning and performance management.
In May 2014, we recommended that OPM work with the CHCO Council to review the extent to which new capabilities are needed to develop tools that help identify existing skills. OPM agreed and took a number of actions to address this and other related recommendations. For example, OPM developed an action plan template for closing skills gaps that adheres to our selected best practices for project planning. However, as of November 2018, other actions were still needed to fully address this and other related recommendations.
Assess gaps in existing and future skills and competencies. With shifting attitudes toward work, technological advances, and increased reliance on nonfederal partners, agencies need to assess whether there are gaps in existing and future skills and competencies. We previously reported that most federal human resources (HR) systems—reflecting the General Schedule classification system—only identify employee skills and competencies by their occupational series, job title, and grade. This level of detail does not adequately address the multidisciplinary nature of modern work. For example, cybersecurity spans many occupational families. Similarly, with technological advances, agencies may need interdisciplinary talent such as workforce specialists in information technology. Agencies may be better able to assess gaps in such talent by defining, developing, and deploying workers based on skills and competencies, not by occupational series.
According to the Department of Defense Civilian Human Capital Officer, agencies can assess gaps in skills and competencies through functional communities, in which experienced leaders in areas such as acquisition or financial management define, assess, and determine how to distribute skills and competencies in the workforce. She said that although her department and other agencies have made progress in closing skills gaps, only functional communities themselves can define the skills and competencies needed for current and future work. She also said that a mature functional community can help align workforce planning to agency strategic goals and objectives.
In January 2015, we recommended that OPM work with agency CHCOs to (1) establish a schedule specifying when OPM will modify its Enterprise Human Resources Integration (EHRI) database to capture staffing data that it currently collects from agencies through its annual workforce data reporting process; and (2) bolster agencies’ ability to assess workforce skills and competencies by sharing competency surveys, lessons learned, and other tools and resources. In December 2018, OPM released a memorandum outlining plans for a phased, government-wide competency assessment of program and project managers beginning in May 2019. Additionally, in March 2019, OPM reported that it had identified a data source that was more efficient and accurate in identifying staffing gaps than EHRI data. We will continue to monitor OPM’s progress in implementing its planned actions.
Monitor progress toward closing skills gaps. We previously reported that the federal government faces skills and competencies gaps in a number of agency-specific and government-wide occupations. One such occupation is in the HR profession. Skills gaps in HR occupations can hamper both strategic and transactional HR activity, exacerbate additional skill gaps, and hinder agencies’ ability to accomplish their missions. For example, our December 2016 report highlighted how the Veterans Health Administration’s limited HR capacity undermined its ability to improve delivery of health care services to veterans.
Further, OPM officials said that a challenge to federal hiring efforts is high turnover among HR staff, and one CHCO said her HR staff is not up to date on hiring options. As a result, OPM officials noted that HR offices are missing specialists who understand the agencies’ specific hiring needs and flexibilities.
In January 2015, we recommended that OPM (1) work with the CHCO Council to develop a core set of metrics that all agencies should use to close mission-critical skills gaps, among other HR goals; and (2) coordinate with the interagency working group that identified the list of skills gaps to explore the feasibility of collecting necessary information during a CHCO-led review of HR goals. OPM concurred with the recommendation in 2015. In March 2019, OPM stated it had addressed the recommendation by developing a multifactor model consisting of core metrics. This model included quit rates and retirement rates. OPM said that it provides the model to agencies for identifying mission-critical occupations. OPM added that agencies should have the autonomy to determine which human capital metrics are important for achieving their missions. While this is an important step forward, to close the recommendation, OPM needs to provide evidence that agencies are using the multifactor model as a common set of metrics to close mission-critical skills gaps, regardless of other agency-specific metrics.
Acquire and Assign Talent
Why Is Acquiring and Assigning Talent Important? To ensure agencies have the capacity to address evolving mission requirements, agencies will need to compete with other sectors to acquire top talent, as well as have the flexibility to reassign existing talent to where they are most needed. This helps ensure the right people, with the right skills, are assigned to the right roles at the right time.
According to OPM data, expert interviews, and our previous work, the federal government faces a range of challenges acquiring and assigning talent. These challenges include a lengthy hiring process and negative perceptions of government. In 2017, the average government-wide time-to-hire was 106 days, according to OPM. Candidates do not consider this time frame to be reasonable, according to human capital experts and federal employee and management groups. OPM’s government-wide goal is 80 days.
Further, only 42 percent of respondents to the 2017 Federal Employee Viewpoint Survey (FEVS) think their work unit can recruit the right skills. Human capital experts, CHCOs, and OPM officials reported that agencies face challenges (1) matching applicants with job positions best suited to their skills, and (2) moving existing employees with specific skills to address emerging, temporary, or permanent needs across an agency.
In the sections below, we highlight actions OPM can take to implement open recommendations from our prior work, and practices agencies can follow to address these challenges by (1) sourcing and recruiting talent, (2) assessing and screening candidates, and (3) assigning employees where needed.
Source and Recruit Talent
Sourcing and recruiting is the process of attracting strong applicants who are prepared to perform successfully on the job. Some practices agencies can use to better source and recruit include cultivating a talent pipeline, highlighting agency mission, recruiting continuously, starting the hiring process early in the school year, reviewing available hiring flexibilities, and writing user-friendly vacancy announcements.
Cultivate a diverse talent pipeline. In our prior work, we have noted the importance of active campus recruiting that goes beyond infrequent outreach to college campuses. Active campus recruiting includes developing long-term institutional relationships with faculty, administrators, and students. In addition, OPM guidance emphasizes that agencies should develop an inclusive approach to their talent acquisition strategies. This includes developing strategic partnerships with a diverse range of colleges and universities, trade schools, apprentice programs, and affinity organizations from across the country.
Likewise, representatives of consulting firms we interviewed stated they cultivate a talent pipeline by building a brand on campus, developing relationships with college students, and recruiting on campuses for entry-level positions and internship programs. One consulting firm representative said that the firm sends “brand ambassadors” to build relationships with college freshmen and sophomores, and to discuss working in the professional services industry. Another consulting firm representative said that the firm uses social media to develop relationships with students prior to a campus visit. Consulting firm representatives also noted that they expanded their talent pool by visiting technical conferences, veteran groups, and campuses with students of diverse backgrounds.
Consulting firm representatives stated that their internship programs are among their most successful practices for cultivating a talent pipeline because the firms can offer full-time positions to rising seniors during the internship. Similarly, CHCOs and federal employee and management group representatives we interviewed noted that internships are important for establishing a pipeline for recruitment.
Highlight agency mission. Agencies can help counter negative perceptions of federal work by promoting their missions and innovative work, according to expert and CHCO interviews. For example, the Department of Homeland Security (DHS) provides “Day in the Life” information on its work to promote public awareness of how its everyday tasks tie in with its mission of protecting the United States, according to the DHS CHCO. The DHS CHCO stated that promoting agency mission can be done while cultivating a talent pipeline and assessing applicants’ abilities. For example, the department holds recruitment events where potential candidates can participate in law enforcement-related activities such as fitness testing. The CHCO noted that in addition to promoting homeland security careers, these events help prospective candidates determine if a position is a good fit for them.
Recruit continuously and start the hiring process early in the school year. The ability to hire students is critical to ensuring that agencies have a range of experience levels for succession planning and a talent pipeline to meet mission requirements. One of the key challenges agencies face in recruiting students is managing the timing of recruitment. The federal fiscal year begins on October 1—about when private sector firms we interviewed start recruiting on campus. Frequently, however, federal agencies have been unable to hire at this time of year because of the limitations of continuing resolutions. Yet if agencies wait to start the recruiting and hiring process until they receive funding, many graduates will have taken other job opportunities.
Agencies can overcome these timing challenges by recruiting continuously and starting the hiring process early in the school year. To recruit continuously, CHCOs from the U.S. Departments of Agriculture and Homeland Security said they advertise funding-conditional positions throughout the year. Similarly, representatives of some consulting firms said they post positions that are contingent on funding and complete the hiring paperwork, among other requirements, for these positions before obtaining federal funding. This has helped navigate the timing of annual appropriations because these organizations can onboard candidates as soon as they receive funding.
Representatives of one federal management group also stated that recruiting continuously and starting the hiring process earlier is a good practice even when agencies receive funding in October, since it can reduce stress from cumbersome recruiting and hiring work when a position needs to be filled.
Strategically leverage available hiring flexibilities. CHCOs cited the complex competitive examining process as a cause of the lengthy hiring time. This has been a long-standing concern: In our 2002 report on human capital flexibilities, we noted that for many years prior, federal managers had complained that competitive examining procedures were rigid and complex.
However, agencies can use a number of additional hiring authorities beyond competitive examining. These authorities can add flexibility to the process, and CHCOs expressed a desire for more. Even so, we previously found that agencies relied on only a small number of available authorities. In fiscal year 2014, 20 hiring authorities were used to make around 90 percent of the new appointments, although agencies used 105 hiring authority codes in total.
We recommended that OPM use information from its review of agencies’ use of certain hiring authorities to determine whether opportunities exist to refine, consolidate, or expand agency-specific authorities, and implement changes where OPM is authorized, including seeking presidential authorization or developing legislative proposals if necessary. OPM agreed with our recommendation and has made progress in these areas, although more work is needed. As of July 2018, OPM had started a project to review hiring authority data and to create an inventory of authorities used by agencies. In its July 2018 study on excepted service hiring authorities, OPM identified possible opportunities to streamline authorities and outlined planned actions to promote a more effective and efficient hiring process. As of December 2018, OPM said that it continues to research and examine these streamlining opportunities as part of the broader initiative to modernize federal hiring practices under the President’s Management Agenda. To fully implement the recommendation, OPM needs to complete these efforts and, as appropriate, develop legislative proposals in consultation with the CHCO Council.
Write user-friendly vacancy announcements. We previously reported that some federal job announcements were unclear. This can confuse applicants and delay hiring. In July 2018, OPM officials stated that agencies can develop more effective vacancy announcements when hiring managers partner with HR staff. According to OPM, hiring managers can work with HR staff to identify the critical competencies needed in the job, develop a recruiting strategy, and ensure the job announcement accurately and clearly describes the required competencies and experience. To promote collaboration between hiring managers and HR staff, OPM is training agencies on the role of hiring managers in writing vacancy announcements, according to OPM officials.
As we reviewed human capital practices in foreign governments, Canadian officials told us that Canada’s Public Service Commission shortened job announcements and reduced the number of qualifications required to apply for most positions. Canadian officials also noted that they simplified their job application portal, which reduced the time to apply for a job.
Assess and Screen Candidates
Assessing includes developing and implementing tests, structured interviews, and other evaluations to determine whether candidates are qualified for the position and to gauge their relative levels of knowledge, skills, and abilities. Screening involves reviewing qualified candidates for potential suitability concerns and conducting background investigations. Practices for assessing and screening include using relevant assessment methods, sharing hiring lists, and improving the security clearance process.
Use relevant assessment methods and share hiring lists. CHCOs and OPM officials stated that roadblocks to hiring the right skills include issues with assessment methods. Specifically, agencies may use methods that are less relevant for assessing the desired skills, or agencies may experience issues incorporating multiple assessments in the hiring process. For example, one CHCO said that her agency uses multiple-choice questions to assess candidates, but essay questions more effectively assess the skills she seeks. OPM issued guidance to agencies on how to use additional assessment methods, including how to rank applicants.
Additionally, federal employee and management group representatives said agencies could reduce the time of the assessment process by sharing hiring lists. The Competitive Service Act of 2015 allows agencies to share hiring lists, but agencies have only started to pilot the practice within departments, according to OPM officials. OPM and agencies discussed sharing hiring certificates with the CHCO Council, and OPM is planning virtual training sessions on this topic. However, one federal employee group representative noted that to be consistent with merit principles, agencies may need to refresh the list every 2 to 3 months to give new candidates the opportunity to enter the application pool.
In looking at human capital practices in foreign governments, we found that Australian agencies incorporated more relevant assessment methods and shared hiring certificates. According to officials from the Australian Public Service Commission, Australian agencies previously relied on interviews as the main assessment method. However, the Australian Public Service Commission encouraged agencies to use a range of different assessment methods, such as prescreening questionnaires, video interviews, and technical multiple-choice questions. As a result, officials stated that Australian agencies interview fewer but more suitable candidates, which can save time and resources. Also, Australian agencies can hire from a list of candidates that one agency already determined to be qualified in certain skills.
Improve the security clearance process. The security clearance process can contribute to onboarding delays, according to CHCOs. For example, at one agency, the CHCO said it takes applicants more than 400 days to receive their security clearances. Also, our previous work found that 98 percent of agencies did not meet the 60-day timeliness objectives for initial secret clearances in fiscal year 2016, an increase of 25 percentage points since fiscal year 2012.
In January 2018, we added the security clearance process to our High-Risk List and reported a backlog of more than 700,000 background investigations as of September 2017. In December 2017, we made three recommendations to the National Background Investigations Bureau within OPM.
These recommendations included developing a plan for reducing the security clearance backlog, increasing total investigator capacity, and implementing a comprehensive strategic workforce plan that focuses on the workforce and organizational needs and changes that will enable the National Background Investigations Bureau to meet the current and future demand for its services. OPM concurred with the recommendations, and officials reported in February 2019 that the National Background Investigations Bureau had taken steps to reduce the backlog of pending security clearance investigations to approximately 565,000 and increase the number of investigators to almost 8,700. The National Background Investigations Bureau has also reported publicly on security clearance background investigations, including investigator headcounts, in September 2018, and reports quarterly on performance.gov. While an important step, OPM needs to complete the workforce plan and identify workforce goals to fully implement the recommendation.
The DHS CHCO said the department navigates this challenge by onboarding talented, qualified applicants as soon as possible and then, while they wait for their high-level clearance, assigning them tasks that do not require such clearances. She also said that DHS has issued more interim clearances and has redesignated some positions so they can be held by employees with a lower clearance classification.
Assign Employees Where Needed
Our previous work noted that it is important for agencies to be able to place employees where needed, especially since utilizing the skills of employees already in the workforce could improve agencies’ ability to meet emerging or temporary mission needs more cost-effectively than hiring new employees.
Develop a culture of agility. We previously reported that to develop a culture of agility, agencies need to be able to (1) identify the skills available in their existing workforces, and (2) move people with specific skills to address emerging, temporary, or permanent needs within and across agencies. Agencies can develop a culture of agility to meet mission needs by supporting rotational assignments for employees. For example, the Nuclear Regulatory Commission established an oversight board when it faced a period of downsizing and could not hire externally as a result of contraction within the nuclear industry, according to the agency CHCO. This board helped ensure that employees with the required skill sets were considered first before approval was granted to hire externally. Through its active rotational program and hiring oversight, the commission met its mission amidst the downsizing, according to the agency’s CHCO.
Relatedly, Canada and two of the private government contractors we interviewed have used internal job application platforms to promote a culture of agility. Canada’s internal job platform, Career Marketplace, allows all government employees to share profiles and career opportunities, particularly for short-term projects. One company’s representatives said their internal job platform posts openings in different countries and industries across the company. According to these representatives, this company established a culture where supervisors understand that staff work for the entire company, not just a particular unit or program. Another company supplements its internal job platform with tools to recognize employee skills and find opportunities that best fit those skills.
Incentivize and Compensate Employees
Why Is Incentivizing and Compensating Employees Important? Changing mission requirements and technological trends require the federal government to compete with other sectors for in-demand skill sets, and compensation and incentives are key determinants of where employees choose to work. While federal agencies may struggle to offer competitive compensation for highly skilled workers given fiscal constraints, leveraging existing incentives such as work-life balance programs can help agencies better compete for top talent even in labor markets where federal pay may not be competitive.
While federal agencies may face challenges implementing competitive compensation in certain labor markets, certain benefits and incentives other than pay can help federal agencies better compete in the labor market. However, agencies do not always promote these benefits and incentives as part of a total compensation package, in part because managers are not always aware of the importance of doing so. In the sections below, we highlight practices agencies can use to promote current benefits and incentives, and discuss our open recommendations to leverage existing pay flexibilities.
Leverage Benefits and Incentives
In cases where federal pay may not be competitive, certain benefits and incentives, such as work-life balance programs, tax-exempt health savings plans, and retirement savings plans, could give the government an edge to recruit and retain employees. Some practices agencies can use to leverage these benefits and incentives are as follows.
Increase awareness of benefits and incentives, such as work-life programs. In 2017, the majority of federal employees were satisfied with compensation, and employees who participated in work-life programs were satisfied with those incentives (see table 2 and figure 5). However, OPM’s 2018 Federal Work-Life Survey Governmentwide Report found that one of the most commonly reported reasons employees do not participate in work-life programs is lack of program awareness among employees and supervisors. For example, 23 percent of those who did not participate in the employee assistance program said they were unaware of the program services.
Some agencies are addressing this issue by advertising and helping employees use available benefits, work-life balance programs, and other resources. For example, the National Science Foundation offers employees many opportunities to learn about existing benefits, according to the foundation’s CHCO. These opportunities include triannual retirement seminars where employees receive personalized retirement estimates, quarterly financial planning seminars where employees receive a free 1-hour consultation, and annual benefit fairs where employees can learn about various health care providers, the work-life programs, and the employee assistance program.
Tailor benefits and incentives to employees’ needs. Our analysis of CHCO and expert interviews also found that employees may value different benefits and incentives depending on their stage in life. By better understanding the desires of the workforce at various life stages, agencies can better tailor benefits packages and incentives to their employees. For example, the Social Security Administration’s CHCO said that the agency’s younger workers value work-life and wellness programs, so the agency implemented a health-tracking program and a fitness discount program for all employees. CHCOs also suggested identifying and incorporating the benefits that would be most useful to various groups of employees, such as sabbaticals for midlevel employees or paid parental leave for employees starting families. One CHCO found that her cybersecurity workforce values subsidies for training and additional certifications more than bonus pay.
Further, OPM’s 2018 Federal Work-Life Survey Governmentwide Report found that the number of respondents who anticipate adult dependent care responsibilities in the next 5 years (31 percent) is double the number of respondents with current adult dependent care needs (15 percent). OPM officials stated that, in light of this change, agencies may need to provide greater workplace flexibilities and other support services to retain talent.
Address barriers to telework. Telework can serve as an important recruitment and retention tool. According to OPM’s 2018 Federal Work-Life Survey Governmentwide Report, 68 percent of employees who telework said they intended to remain at their agencies, compared to 62 percent of those who do not telework. However, our previous work and OPM’s 2018 Federal Work-Life Survey Governmentwide Report found that some supervisors discourage telework despite agency participation goals and that managers may make telework decisions before taking relevant training.
In February 2017, we recommended that OPM develop tools to help agencies assess and analyze persistent barriers to telework, including managerial resistance. While OPM disagreed with our recommendation, it took steps consistent with the recommendation. For example, in 2017, OPM administered the first government-wide work-life survey. This survey included questions about a number of work-life programs, including telework, to help identify common barriers to participation in telework, including managerial resistance. Specifically, the survey discussed supervisory perceptions of employees' reported telework participation outcomes, supervisors' confidence to effectively manage telework performance, and key drivers for telework approvals and denials. OPM then provided individualized reports on results to agencies and agency components. OPM also developed and distributed a video tutorial to help agencies analyze their results. In 2019, following receipt and review of documentation from OPM, we determined that these actions will help agencies prioritize ways to improve their telework programs. We then closed the recommendation as implemented.
In our review of other countries’ human capital practices, we found that Australia encouraged managers to support telework by passing legislation outlining standards and developing a culture that supports work-life programs. For example, Australia’s Parliament passed legislation outlining standards for work-life programs, but Australian officials also stated that commitment from top management was instrumental in creating a culture that supported work-life programs, including telework. More than 80 percent of respondents to the Australian Public Service employee census reported that their supervisor actively supports work-life programs.
Leverage Existing Pay Authorities
It is the policy of Congress that pay for federal workers under the General Schedule (GS) classification system—the pay system covering the majority of federal employees—align with pay for comparable nonfederal workers. However, in 2012, we reported that recent studies comparing the compensation of federal employees to workers in other sectors arrived at different conclusions as to which sector had the higher pay and the size of the pay disparities, in part because each study included different sets of assumptions. When necessary, agencies can use special payment authorities strategically to help ensure pay is competitive.
Use special payment authorities strategically. A variety of authorities can help agencies compete in the labor market for top talent, but agencies only use them for a small number of employees. In December 2017, we reported that agencies can tap an array of special payments when they need to recruit or retain experts in engineering, cybersecurity, or other in-demand fields. These payments include, for example, payments for recruitment, retention, or critical positions. We found that agencies reported that these payments were helpful, but few documented their impacts, and OPM had not assessed their effectiveness. Further, we analyzed EHRI data and found that less than 5 percent of employees received payments for recruitment or retention annually in the past 10 years.
In December 2017, we recommended that OPM track the effectiveness of special payment authorities, provide guidance and tools to assess their effectiveness, and review and consider ways to streamline approval procedures. OPM partially concurred with the recommendation to track the effectiveness of special payment authorities, saying that agencies are in the best position to take this action. Moreover, in December 2018, OPM stated that it established a baseline to measure changes in the use of special payment authorities over time, and that it is focused on government-wide, mission-critical occupations to help identify trends where there may be recruitment and retention difficulties. OPM is also working with the CHCO Council to administer a survey to agencies to obtain input on possible improvements to special payment authorities and whether agencies have best practices to share on effective use of special payment authorities. OPM officials said that they plan to review approval procedures in 2019 for ways to streamline them; however, they have not yet provided documentation on how this and future reviews will identify ways to streamline the procedures. We will continue to monitor OPM’s actions to implement this recommendation.
Engage Employees
Why Is Engaging Employees Important? Employee engagement—generally defined as the sense of purpose and commitment employees feel toward their employer and its mission—is important because engaged employees are more innovative, more productive, more committed, more satisfied, and less likely to leave, according to OPM.
OPM’s study on engagement and our prior work found that what matters most in improving engagement levels is valuing employees by authentically focusing on their performance and career development. Specifically, our prior work found that the strongest drivers of engagement were similar across age groups and include constructive performance conversations and communication from management, career development and training, inclusion and involvement in decisions affecting employees’ work, and work-life balance.
The challenge for agencies, then, is to (1) overcome weaknesses in the performance management process, including rewarding strong performers and dealing with poor performers; (2) create support for an inclusive work environment; and (3) develop and implement strategies for prioritizing training during times of fiscal constraint. In the sections below, we highlight actions OPM can take to implement open recommendations from our prior work and practices agencies can take to improve employee engagement.
Manage Employee Performance and Create a “Line of Sight” Between Individual Performance and Organizational Results
Experts said that employees desire an environment where they can collaborate with their peers and feel a sense of camaraderie. In contrast, even a small number of poor performers can negatively affect employee morale and agencies’ capacity to meet their mission, according to CHCOs and our previous work. In the 2017 FEVS, 64 percent of federal employee respondents agreed that their supervisor provides them with constructive suggestions to improve job performance, and 31 percent agreed that steps are taken to deal with poor performers. Without effective performance management, agencies risk not only losing the skills of top talent but also missing the opportunity to effectively address increasingly complex and evolving mission challenges. Agencies can make performance management more effective with the following practices.
Improve selection and training of supervisors and managers. Agencies can improve employee engagement by having a strong management team that can provide constructive performance conversations and deal with poor performers. This can be done by selecting managers who (1) are inclined toward and interested in supervision, and (2) have the ability to coach staff and provide constructive performance feedback. One way agencies can ensure they are selecting managers who want to manage is to establish a dual career ladder structure, which allows advancement opportunities for employees who have technical skills but are not inclined to manage.
Representatives of private consulting firms we interviewed use the dual career ladder and said it helps expand opportunities for employees to move around internally. We recommended in 2015 that OPM determine if promising practices, such as the dual career ladder structure, should be more widely used across government. In November 2018, OPM officials said that the President’s Management Agenda requires agencies to ensure first-line supervisors possess critical leadership competencies within the first year of appointment, either through selection or development. We will continue to monitor OPM’s actions in this area.
Agencies can also train managers to ensure they have skills to address poor performance. In February 2015, we reported that supervisors may not possess confidence or experience in having difficult performance conversations, and they may not have skills or training on addressing poor performance. These factors point to the importance of effective selection, assessment, and development of new supervisors, as well as to the importance of providing refresher training for current supervisors.
Link the agency’s mission and employees’ work. We have previously reported that high-performing organizations create a “line of sight” between individual performance and organizational results by aligning employees’ daily activities with broader results. Further, agencies can motivate and retain employees by connecting them to their agency’s mission, according to human capital experts and federal employee and management group representatives we interviewed. Employee responses to the FEVS indicate that the federal government is performing well in this area. In 2017, 84 percent of employees knew how their work related to agency goals and priorities.
Several private consulting firms we spoke with connect employees to their missions in various ways. One firm aligns individual performance expectations with the organization’s goal of serving federal clients objectively with the highest caliber of scientific and technical excellence. According to the firm’s representative, this effort has improved employee satisfaction scores. Other firms train employees on the firm’s core values and its clients’ missions. According to the firms’ talent directors, this practice helps keep employees interested in working for the firm.
Implement meaningful rewards programs. We have previously reported that high-performing organizations seek to create effective incentive and reward systems that clearly link employee knowledge, skills, and contributions to organizational results. However, agencies sometimes struggle to allocate limited resources between mission requirements and recognition, according to CHCOs and representatives of one federal management group. According to the representatives, some managers may not implement reward programs because they are time intensive, and managers may not understand the importance of reward programs to motivating the workforce. Among 2017 FEVS respondents, 50 percent reported that they were satisfied or greatly satisfied with the recognition received for doing a good job.
Further, our November 2018 report highlighted challenges in recognizing employee performance. We noted that approximately one-third of 2017 FEVS respondents agreed or strongly agreed with the statement, “In my work unit, differences in performance are recognized in a meaningful way.” We also found that employees in supervisory roles responded more positively to statements related to rewarding performance than other employees. For example, in 2017, an estimated 69 percent of senior leaders agreed or strongly agreed with the statement. In contrast, an estimated 48 percent of supervisors and an estimated 33 percent of nonsupervisors and team leaders agreed or strongly agreed.
Human capital experts and federal employee and management group representatives said that recognizing employees for their contribution to achieving the agency’s mission can be as strong an incentive as money. For example, according to the Social Security Administration CHCO, the agency offers a variety of awards programs. These programs include agency-wide monetary awards that are based on performance ratings, monetary awards that are not based on performance ratings, and nonmonetary awards, some of which are showcased in a virtual ceremony during Public Service Recognition Week. The Social Security Administration also incorporates office-level awards to recognize employee contributions. For example, in some offices, supervisors give “Life Saver” or “You Rock” certificates.
Share innovative approaches to performance. In November 2018, we found that opportunities exist to share innovative approaches to performance management. We recommended that OPM work with the CHCO Council to develop a strategic approach for identifying and sharing emerging research and innovations in performance management. Examples of innovations OPM has found include changes in performance ratings models and setting goals that are focused on growth. We also recommended that OPM develop and implement a mechanism for agencies to share promising practices, such as focusing on performance conversations and recognition to increase engagement and performance. OPM agreed with our recommendations and reported that it plans to formalize its processes for sharing emerging research and soliciting views from the CHCOs. We will monitor OPM’s efforts to implement the recommendations.
Involve Employees in Decisions
Our analysis of expert interviews found that employees seek autonomy in the workplace, meaningful work, and opportunities to achieve results by developing creative and innovative solutions. Also, experts noted that in some cases, connecting employees to a sense of inclusion and meaning can compensate for the opportunity to make higher salaries in other sectors. Having an inclusive work environment is one practice that can help increase employee involvement in decisions.
Increase support for an inclusive work environment. An increasingly diverse workforce can help provide agencies with the requisite talent and multidisciplinary knowledge to accomplish their missions. We previously reported that diversity in the workforce can help address complex challenges and foster innovation and creativity. We also reported that fostering a diverse and inclusive workplace could help organizations reduce costs by reducing turnover, increasing employee retention across demographic groups, and improving morale. To harness diverse talent, agencies need to continue using thoughtful strategies to engage employees. In 2017, almost 70 percent of FEVS respondents stated that supervisors work well with employees of different backgrounds, and about half were satisfied in other areas related to inclusiveness (see table 3).
In January 2005, we reported that top management commitment is a fundamental element in the implementation of diversity management initiatives. We have also reported on the importance of diversity in the SES corps. In January 2003, we stated that diversity can bring a wider variety of perspectives to bear on policy development and decision-making, helping agencies achieve results. Other practices that can help agencies support an inclusive environment include having a diversity strategy and plan that are developed and aligned with the organization’s strategic plan. Agencies should also involve employees in driving diversity throughout the organization (e.g., implementing mentoring programs or advisory groups).
Practices implemented by the United Kingdom (UK) and Australia emphasize the importance of setting an inclusive tone from the top. For example, according to country officials, the UK and Australia designate high-level agency officials to champion a particular government-wide initiative, such as increasing diversity and inclusion, work-life balance, and well-being. In the UK, champions promote the initiatives by blogging or chairing interagency groups of senior civil servants to share best practices, among other activities.
Agencies can promote an inclusive work environment by providing employees opportunities to share common interests and involving employees in decisions. Private consulting firms we interviewed help employees feel involved in the organization by sponsoring employee groups where employees can gather around common interests, such as community service, or skill sets, such as cybersecurity or acquisition management. One firm incorporates results of its annual employee survey into its decision-making and modified its career progression trajectory based on feedback from employee focus groups.
Develop Employees
Agencies can use career developmental opportunities, including training, details, and rotations, to (1) help the workforce develop skills to meet evolving mission requirements, (2) ensure managers are well qualified, and (3) appeal to current and future workers’ desires for career mobility. Some actions OPM can implement and practices agencies can take include prioritizing training and encouraging mobility opportunities.
Prioritize training for employees and managers. CHCOs and federal employee and management group representatives said that more can be done to prioritize training, particularly given resource constraints. Further, our past work found that diversity training can help employees develop concrete skills to assist in communicating and increasing productivity. However, in 2017, only 55 percent of FEVS respondents were satisfied with training.
In 2012, we recommended that OPM include in its guidance steps and factors agencies should consider when prioritizing training. OPM partially agreed with our recommendation and has taken steps to implement it. In July 2017, OPM officials reported they were gathering information on agencies’ talent development processes, tools, and procedures, and would use the information they gathered to develop criteria for ranking training. We requested an update in December 2018 and will continue to monitor OPM’s actions to implement this recommendation.
As an example of agency training efforts, the Social Security Administration has national and regional development programs that offer 12 to 18 months of training and rotations for entry-, mid-, and senior-level employees to strengthen foundational, technical, and leadership knowledge and skills, according to the agency’s CHCO. For example, its Leadership Development Program assigns selected GS-9 through GS-12 employees to developmental assignments in new areas of work, and provides leadership training that broadens their perspective of the agency’s mission.
Encourage details, rotations, and other mobility opportunities. According to our group interviews with CHCOs and interviews with human capital experts and federal management groups, upward and lateral mobility opportunities are important for retaining employees. CHCOs also said that in some cases, lateral mobility opportunities such as rotations, details, and opportunities to gain experience in other sectors can help employees gain new skills more cost effectively than training, particularly for rapidly changing skill sets such as those related to the sciences. We previously reported that effective interagency rotational assignments can develop participants' collaboration skills and build interagency networks. Further, providing supervisory candidates with details or rotational opportunities could help them develop and demonstrate supervisory competencies.
Regarding upward mobility, the 2017 FEVS found that only 37 percent of respondents were satisfied with opportunities to get a better job in their organization. Agencies can use details and rotations to meet employees’ desire for mobility, according to our CHCO group interviews and interviews with human capital experts and federal employee and management groups. However, according to OPM data, few federal employees moved horizontally in 2017 (see table 4).
Few employees move horizontally because managers are sometimes reluctant to lose employees, according to federal manager group representatives and our previous work. Furthermore, federal budgeting and account structures create disincentives to share resources across agencies. Additionally, barriers to rotations in other sectors may include challenges identifying willing industry partners and addressing concerns regarding conflict of interest and access to sensitive information. Meanwhile, federal employees who have left for another sector must apply competitively to return at a higher level.
We have previously made recommendations that could help address these challenges. In 2014, we recommended that OPM review the extent to which new capabilities are needed to promote mechanisms for increasing employee mobility within an agency and government-wide. OPM agreed with the recommendation and since October 2016 has been exploring a pilot project, GovConnect, that tests models for workforce agility, including cloud-based skill deployment across organizational components and employee-initiated innovation initiatives. In November 2018, OPM officials also stated that the President's Management Council Interagency Rotations Program offers rotational assignments across agencies. We will continue to monitor OPM's efforts to implement this recommendation.
In 2015, we recommended that OPM determine if promising practices, such as providing detail opportunities or rotational assignments to managerial candidates prior to promotion, should be more widely used across government. OPM partially concurred with this recommendation and agreed to work with the CHCO Council to explore more government-wide use of rotational assignments. However, OPM noted that agencies already have authority to take these actions. As of October 2018, OPM had not provided us with information regarding how it plans to implement the recommendation.
In looking at human capital practices in foreign governments, we found that the UK encourages rotation and promotion opportunities through its developmental programs for entry-, mid-, and senior-level employees. For example, participants in its entry-level program, called Fast Stream, are centrally employed in the UK Cabinet Office. For the first 3 to 4 years, Fast Stream participants rotate among agencies and receive technical training in a specific field, such as accounting, finance, or human capital. When we discussed Fast Stream's feasibility in the federal workforce, one federal employee group representative emphasized the need to provide career development opportunities to all employees, not just selected program participants.
Agency Comments
We provided a draft of this report to the Acting Director of OPM for review and comment. OPM provided technical comments, which we incorporated as appropriate. We revised the report to further emphasize how agencies can use work-life programs to recruit, retain, and engage federal employees. We also added the concept of interpersonal skills to our discussion of the leadership competencies needed to manage the future workforce. OPM’s comments also included updates to prior recommendations on enterprise human capital solutions, skills gaps, telework, and special pay authorities. We incorporated these comments as appropriate.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Acting Director of the Office of Personnel Management, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-2757 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
Appendix I: Objectives, Scope, and Methodology
In this report, we identify (1) key trends affecting federal work and workers, and (2) key talent management strategies agencies can employ to achieve a high-performing federal workforce, given those trends.
To address both objectives, we reviewed our own reports as well as those from the Office of Personnel Management (OPM), academia, think tanks, and public opinion organizations related to human capital and the future of work. We also analyzed data from OPM’s Enterprise Human Resources Integration (EHRI) system. EHRI contains personnel action and onboard data for most executive branch and some legislative branch federal civilian employees. We analyzed government-wide EHRI data on demographics, including veterans status; employee movement such as details and transfers; and retirement eligibility. We analyzed 10-year trends from fiscal years 2008 to 2017, the most recent, complete fiscal year of data available at the time of our review.
For our analysis of demographic trends, we included permanent, temporary, and term-limited employees. However, we focused on permanent employees in our analysis of personnel movement and retirement eligibility because these employees (1) comprise most of the federal workforce and (2) become eligible to retire with an annuity, for which temporary and term-limited employees are ineligible.
To calculate the number of federal civilian employees, we included all onboard staff, regardless of their pay status. Cases with missing values on a variable were excluded from the reported statistics for that variable. To calculate eligibility for retirement within the next 5 years, we computed the date at which the employee would be eligible for voluntary retirement with an unreduced annuity, using length of service, birth date, and retirement plan coverage. Since work schedule does not affect retirement eligibility, we included permanent employees with full-time schedules and part-time, seasonal, and other schedules in these results.
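To make the eligibility computation concrete, the sketch below shows one way it could be coded. It is a minimal illustration under simplified, assumed thresholds: the actual rules vary by retirement plan and birth year (for example, the FERS minimum retirement age ranges from 55 to 57), and the thresholds, field names, and dates here are hypothetical rather than OPM's.

```python
from datetime import date

# Simplified (age, years-of-service) thresholds for voluntary retirement
# with an unreduced annuity. Illustrative only; actual thresholds depend
# on the employee's retirement plan and birth year.
THRESHOLDS = [(57, 30), (60, 20), (62, 5)]

def earliest_eligibility(birth: date, service_start: date) -> date:
    """Earliest date on which the employee meets any threshold.
    (Date arithmetic here ignores February 29 edge cases.)"""
    candidates = []
    for age, years in THRESHOLDS:
        reaches_age = birth.replace(year=birth.year + age)
        reaches_service = service_start.replace(year=service_start.year + years)
        # A threshold requires both the age and the service condition,
        # so take the later of the two dates.
        candidates.append(max(reaches_age, reaches_service))
    # Meeting any one threshold suffices, so take the earliest candidate.
    return min(candidates)

def eligible_within(birth: date, service_start: date,
                    as_of: date, years_ahead: int = 5) -> bool:
    """Flag employees who become eligible within the next N years."""
    horizon = as_of.replace(year=as_of.year + years_ahead)
    return earliest_eligibility(birth, service_start) <= horizon

# Example: an employee born mid-1963 who entered service in 1990 becomes
# eligible in 2020, within 5 years of the end of fiscal year 2017.
print(eligible_within(date(1963, 6, 1), date(1990, 3, 15),
                      as_of=date(2017, 9, 30)))  # True
```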
We assessed the reliability of the EHRI data through electronic testing to identify missing data, out-of-range values, and logical inconsistencies. We also reviewed our prior work assessing the reliability of these data and corresponded with OPM officials knowledgeable about the data to discuss its accuracy and the steps OPM takes to ensure reliability. On the basis of this assessment, we believe the EHRI data we used are sufficiently reliable for the purpose of describing demographic trends and workforce management challenges facing the federal government.
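The kinds of electronic tests described above can be expressed compactly; the sketch below shows one possible form, using hypothetical column names and validity ranges (the actual EHRI file layout and field names may differ).

```python
import pandas as pd

# Hypothetical extract; EHRI's actual file layout and field names differ.
df = pd.read_csv("ehri_extract.csv",
                 parse_dates=["birth_date", "service_start_date"])

# Missing data: count blank values in the fields used for the analysis.
missing_counts = df[["birth_date", "service_start_date",
                     "retirement_plan"]].isna().sum()

# Out-of-range values: flag implausible ages as of the end of the period.
as_of = pd.Timestamp("2017-09-30")
age = (as_of - df["birth_date"]).dt.days / 365.25
out_of_range = df[(age < 16) | (age > 100)]

# Logical inconsistencies: service cannot begin before the employee's birth.
inconsistent = df[df["service_start_date"] < df["birth_date"]]

print(missing_counts, len(out_of_range), len(inconsistent), sep="\n")
```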
Objective 1
To identify key trends in the workforce and workplace, we analyzed data from the U.S. Bureau of Labor Statistics (BLS) Current Population Survey (CPS) and Federal Procurement Data System – Next Generation, reviewed our prior work, and reviewed reports from OPM and selected think tanks and consulting firms.
Key trends in the workforce. To assess key trends in the workforce, we analyzed data from the CPS, a national survey designed and administered jointly by BLS and the Census Bureau. The CPS is a key source of official government statistics on employment and unemployment in the United States, and also contains data on poverty rates, earnings, and labor market demographics. We analyzed 2017 annual averages on age, racial or ethnic minority status, disability status, veteran status, and educational attainment of the U.S. civilian labor force.
The CPS uses a probability sample conducted monthly. As with all samples, estimates produced from the CPS are subject to sampling and nonsampling error. Sampling error results from the fact that the samples are one of a large number of random samples that might have been drawn. We followed the BLS technical guidance for estimating the standard errors of annual average totals from CPS data. We used the standard errors to construct 95 percent confidence intervals for each estimate presented in this report. This is the interval that would contain the actual population value for 95 percent of the CPS samples that the BLS could have drawn. All estimates from the CPS presented in this report have a margin of error of plus or minus 4 percentage points or fewer at the 95 percent confidence level. Nonsampling error results from issues such as inability to obtain information about all people in the sample, or the inability or unwillingness of respondents to provide correct information in the self-reporting process. We assessed the reliability of CPS data by reviewing related technical documentation from the BLS website on the concepts and methodology of the CPS, and obtaining BLS feedback on our analysis. We conducted manual data testing for obvious errors and compared selected underlying data to CPS annual reports. We found the data were sufficiently reliable for the purposes of comparing characteristics of the federal workforce to those of the U.S. civilian labor force.
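As a minimal illustration of the interval construction described above: a 95 percent confidence interval is the point estimate plus or minus 1.96 standard errors. The estimate and standard error below are made-up numbers for the example, not actual CPS results.

```python
Z_95 = 1.96  # standard-normal critical value for 95 percent confidence

def confidence_interval_95(estimate: float, standard_error: float):
    """Return the lower and upper bounds of a 95 percent interval."""
    margin = Z_95 * standard_error
    return estimate - margin, estimate + margin

# Illustrative: an estimated 25.0 percent with a 1.5-point standard error
low, high = confidence_interval_95(25.0, 1.5)
print(f"{low:.1f} to {high:.1f} percent")  # 22.1 to 27.9 percent
```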
Key trends in the workplace. To assess key trends in the workplace, we reviewed our prior work on human capital management and trends in government and the workforce. We also reviewed OPM reports on human capital trends and management, including the 2018 Work-Life Survey Governmentwide Report, 2018 Federal Workforce Priorities Report, and 2016 Federal Employee Benefits Survey Results. We interviewed OPM officials knowledgeable on these topics to better understand the methodology used to obtain report findings, and to understand previous and current efforts to assess federal human capital policies. We also reviewed selected reports from think tanks, public opinion organizations, and consulting firms on workplace trends. For reports used in our analysis, we corresponded with knowledgeable staff to better understand the methodologies used to obtain findings in the report and we assessed the methodologies against our own standards.
Service contracts. To describe the size of service contract obligations in fiscal year 2017, we reviewed data from the Federal Procurement Data System – Next Generation. We found the data sufficiently reliable for this purpose based on our review of related documentation.
Objective 2
To identify key areas to help agencies manage the workforce, we analyzed employee responses to questions from OPM's 2017 Federal Employee Viewpoint Survey (FEVS) and spoke with various groups. Specifically, we interviewed human capital experts and federal employee and management groups, and held moderated group interviews with agency Chief Human Capital Officers (CHCO).
Federal Employee Viewpoint Survey. To obtain information on federal employee attitudes toward work and the workplace, we analyzed employee responses to questions from OPM’s 2017 FEVS, the most recent data available at the time of our analysis. The FEVS provides a snapshot of employees’ perceptions about how effectively agencies manage their workforce. The FEVS includes a core set of 84 questions.
Agencies have the option of adding questions to the surveys sent to their employees. The 84 questions address the following areas: (1) work experience, (2) work unit, (3) agency, (4) supervisor, (5) leadership, (6) satisfaction, (7) work-life, and (8) demographics. OPM has administered the FEVS annually since 2010.
The FEVS is based on a sample of full- and part-time, permanent, nonseasonal employees of departments and large, small, and independent agencies. The total sample size for the 2017 FEVS was 1,139,882 employees and the response rate was 45.5 percent. According to OPM, the 2017 sample size was sufficient to ensure a 95 percent chance that the true population value would be within 1 percent of any estimated percentage for the total federal workforce. Since each sample could have provided different estimates, we express our confidence in the precision of the FEVS statement estimates using the margin of error at the 95 percent level of confidence. This margin of error is the half-width of the 95 percent confidence interval for a FEVS estimate. A 95 percent confidence interval is the interval that would contain the actual population value for 95 percent of the samples that could have been drawn.
For our analysis, we selected FEVS questions related to work unit recruitment, satisfaction with compensation and incentives, management, employee involvement, and career opportunities. We categorized responses into three categories—positive, neutral, and negative—as shown in table 5. In our findings, we included the percentage of positive responses to FEVS questions; neutral responses ranged from 6.7 to 29.3 percent.
To assess the reliability of the FEVS data, we reviewed FEVS technical documentation. On the basis of these procedures, we believe the data were sufficiently reliable for our purposes.
Interviews with experts. To identify key strategies for managing a high-performing workforce, we conducted semistructured interviews with 22 experts in the areas of human capital, strategic foresight, and the future of work. See appendix II for a list of experts interviewed. We selected these experts using a nonprobability sample based on our literature review, suggestions from OPM officials and our own human capital experts, and relevance of their expertise to our objectives. We selected experts from a range of organizations to ensure our analysis included a variety of viewpoints. During these interviews, we asked about, among other things, future trends that are likely to affect the federal workforce and innovative practices to recruit and retain a high-performing workforce.
We analyzed the interviews using qualitative analysis software to describe employees’ shifting attitudes toward work, and to categorize the practices into key strategies for managing a high-performing workforce. We corroborated these practices with federal human capital experts, CHCOs, and federal employee and management groups, and reflected their input in our report.
Interviews with private organizations and foreign governments. To identify examples of human capital practices for managing a high-performing workforce, we conducted semistructured interviews with human capital managers from four private organizations (Noblis, Deloitte, Accenture, and NetImpact Strategies) and officials from three foreign governments (Australia, Canada, and the United Kingdom). We selected the private organizations based on (1) the similarities of their talent pool to that of the federal government, (2) accolades received for being a good place to work, and (3) size of the organization and types of services offered. We selected foreign governments based on (1) similarities to the United States in terms of percent of the labor force in civil service, and (2) the country having recently improved human capital policies or practices, or having been recognized for having human capital practices that positively affect recruitment and retention. In our report, we included examples of human capital practices that managers and officials told us were helpful to improving their organization, and that could feasibly be implemented within the federal government.
Interviews with federal employee and management group representatives. We interviewed representatives from federal employee and management groups to assess the feasibility of applying the identified examples to the federal sector, including identifying any opportunities or challenges. We selected employee groups that represented the broadest population of blue- and white-collar federal employees from all 24 Chief Financial Officers Act agencies: the American Federation of Government Employees and the National Treasury Employees Union. We selected the Federal Managers Association due to its representation of federal managers, supervisors, and executives.
Group Interviews with CHCOs. We also held two virtual, moderated group interviews with a nongeneralizable sample of CHCOs. We invited 23 CHCOs from the 24 Chief Financial Officers Act agencies; of those, nine were available and participated (see table 6). To ensure the questions were valid and understandable, we pretested the questions with our CHCO and Deputy CHCO. During each group interview, one of our own moderators used a standard set of discussion questions to ask participants to (1) assess the feasibility of specific examples for improving employee recruitment and retention, (2) explain challenges to implementing these examples in specific agencies, and (3) identify other agency examples. At the group interviews, at least two analysts took and reconciled their notes to summarize the results. We reviewed our summaries of the group interviews to identify key themes discussed. When highlighting examples from CHCOs, we provided summaries of the examples to the CHCOs for comment and incorporated technical edits, where appropriate. Because of the dynamics inherent in a group interview setting, we cannot be sure whether the participating CHCOs discussed the same information in the group format with other CHCOs present that they might have discussed in individual interviews without other CHCOs present.
We conducted this performance audit from April 2017 to March 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Experts We Interviewed
Appendix III: GAO Contact and Staff Acknowledgments
GAO Contact
Robert Goldenkoff, (202) 512-2757 or [email protected].
Staff Acknowledgments
In addition to the individual named above, Shannon Finnegan, Assistant Director; Shelby Kain, Analyst-in-Charge; Justine Augeri; Jehan Chase; Arpita Chattopadhyay; Ann Czapiewski; Robert Gebhart; John Hussey; Krista Loose; Meredith Moles; Rachel Stoiko; Jessica Walker; and Edith Yuh made major contributions to this report. James Ashley, Chelsa Gurkin, Elizabeth Hennemuth, and Walter Vance also contributed to the report.
Related GAO Products
Best Practices and Leading Practices in Human Capital Management. https://www.gao.gov/key_issues/leading_practices_in_human_capital_management/issue_summary.
Strategic Management of Human Capital—High Risk Issue. https://www.gao.gov/key_issues/strategic_human_capital_management/issue_summary.
Federal Workforce: Opportunities Exist for OPM to Further Innovation in Performance Management. GAO-19-35. Washington, D.C.: November 20, 2018.
Federal Pay: Opportunities Exist to Enhance Strategic Use of Special Payments. GAO-18-91. Washington, D.C.: December 7, 2017.
Federal Hiring: OPM Needs to Improve Management and Oversight of Hiring Authorities. GAO-16-521. Washington, D.C.: August 2, 2016.
Federal Workforce: Additional Analysis and Sharing of Promising Practices Could Improve Employee Engagement and Performance. GAO-15-585. Washington, D.C.: July 14, 2015.
Federal Workforce: Improved Supervision and Better Use of Probationary Periods Are Needed to Address Substandard Employee Performance. GAO-15-191. Washington, D.C.: February 6, 2015.
Federal Workforce: OPM and Agencies Need to Strengthen Efforts to Identify and Close Mission-Critical Skills Gaps. GAO-15-223. Washington, D.C.: January 30, 2015.
Human Capital: OPM Needs to Improve the Design, Management, and Oversight of the Federal Classification System. GAO-14-677. Washington, D.C.: July 31, 2014.
Human Capital: Strategies to Help Agencies Meet Their Missions in an Era of Highly Constrained Resources. GAO-14-168. Washington, D.C.: May 7, 2014.
Federal Workers: Results of Studies on Federal Pay Varied Due to Differing Methodologies. GAO-12-564. Washington, D.C.: June 22, 2012.
Organizational Change and Transformation
Government Reorganization: Key Questions to Assess Agency Reform Efforts. GAO-18-427. Washington, D.C.: June 13, 2018. | Why GAO Did This Study
Much has changed since the federal government's employment policies were designed generations ago. Without careful attention to strategic human capital management, the federal government may continue to struggle to compete for workers with the skills needed to address the nation's social, economic, and security challenges.
GAO was asked to review issues related to the future of federal work and the workforce. This report identifies: (1) key trends affecting federal work and workers, and (2) key talent management strategies for achieving a high-performing workforce, given those trends.
GAO analyzed data from OPM and the Bureau of Labor Statistics, and reviewed reports from GAO, OPM, and selected think tanks. GAO also held group interviews with agency Chief Human Capital Officers, and interviewed human capital experts and representatives of federal labor unions, managers, and executives. Additionally, GAO spoke with private consulting firms and foreign governments regarding human capital strategies that officials said were helpful to improving their organizations.
What GAO Found
Federal work is changing amid demographic and technological trends (see figure below).
Given these trends, key talent management strategies can help agencies better manage the current and future workforce. These strategies are all within agencies' existing authorities:
Align human capital strategy with current and future mission requirements. With shifting attitudes toward work, technological advances, and increased reliance on nonfederal partners, agencies need to identify the knowledge and skills necessary to respond to current and future demands. Key practices include identifying and assessing existing skills, competencies, and skills gaps.
Acquire and assign talent. To ensure agencies have the talent capacity to address evolving mission requirements and negative perceptions of federal work (e.g., that it is too bureaucratic), agencies can cultivate a diverse talent pipeline, highlight their respective missions, recruit early in the school year, support rotations, and assign talent where needed.
Incentivize and compensate employees. While federal agencies may struggle to offer competitive pay in certain labor markets, they can leverage existing incentives that appeal to workers' desire to set a schedule and to work in locations that provide work-life balance.
Engage employees. Engaged employees are more productive and less likely to leave, according to the Office of Personnel Management (OPM). Agencies can better ensure their workforces are engaged by managing employee performance, involving employees in decisions, and developing employees.
What GAO Recommends
GAO has open recommendations to OPM related to key talent management strategies, including developing a core set of metrics that agencies should use to close mission-critical skills gaps. OPM agreed with most of these recommendations and has made some progress, but additional actions are needed. OPM provided technical comments, which GAO incorporated as appropriate. |
WMATA was created in 1967 through an interstate compact—matching legislation passed by the District of Columbia, state of Maryland, and Commonwealth of Virginia, and then ratified by Congress—to plan, develop, finance, and operate a regional transportation system in the National Capital area. A board of eight voting directors and eight alternate directors governs WMATA. The directors are appointed by the District of Columbia, Virginia, Maryland, and the federal government, with each appointing two voting and two alternate directors. WMATA operates six rail lines—the Red, Orange, Blue, Green, Yellow, and Silver Lines—connecting various locations within the District of Columbia, Maryland, and Virginia. WMATA’s rail system has 118 linear miles of guideway: 51 miles of subway, 58 miles at ground level, and 9 miles on aerial structures.
WMATA’s capital investments are funded through multiple sources. These include a combination of grants it receives from the federal government, along with matching funds and other contributions it receives from the states and local jurisdictions in which it operates (see fig. 1).
From fiscal years 2011 through 2017, WMATA received about $5.8 billion in capital funding. Over half of this funding came from the federal government ($3.2 billion), and state and local jurisdictions provided 41 percent ($2.4 billion). WMATA also took on about $230 million in long-term debt to finance its capital program during this time period. The federal funding included grant awards, in addition to annual appropriations authorized under PRIIA. In 2008, PRIIA authorized $1.5 billion to WMATA, available in increments over 10 years beginning in fiscal year 2009, or until expended, for capital improvements and preventive maintenance. PRIIA funding and certain federal grants require state or local jurisdictions to provide matching funds. Additionally, a large portion of funding from state and local jurisdictions is governed by capital-funding agreements, which are periodically negotiated between WMATA and the states and localities. From fiscal years 2011 through 2017, state and local jurisdictions contributed on average about $340 million annually to WMATA, generally for capital purposes. The annual capital contributions from the jurisdictions are expected to more than double as a result of the recent legislation enacted by the District of Columbia, Maryland, and Virginia in 2018. In addition, WMATA officials told us that it will have the ability to further leverage this dedicated funding and issue debt to finance its capital projects.
WMATA has several steps in its capital planning process. These include developing the following:
Capital Needs Inventory. WMATA periodically identifies its capital investment needs in this inventory. WMATA issued a Capital Needs Inventory in February 2010 and another in November 2016, each covering a 10-year period. According to WMATA, Capital Needs Inventories help inform the annual capital budget and capital improvement program.
Annual Capital Budget. Each year, WMATA prepares an annual capital budget, which identifies projects WMATA plans to undertake in the next fiscal year. WMATA’s fiscal year 2019 annual capital budget was approved by the board of directors at $1.3 billion.
Six-Year Capital Improvement Program. Within WMATA's annual capital budget, WMATA includes a Six-Year Capital Improvement Program identifying capital projects WMATA plans to implement over a 6-year period. WMATA's most recent Six-Year Capital Improvement Program (covering the fiscal year 2019-2024 period) was approved by the board of directors at $8.5 billion.
According to WMATA officials, WMATA is currently implementing a new capital planning process through which it will develop its fiscal year 2020 Capital Budget and fiscal year 2020-2025 Six-Year Capital Improvement Program. WMATA adopts and implements the capital budget by June 30 for the new fiscal year, which begins on July 1. The fiscal year 2020 Capital Budget is scheduled to be adopted and implemented by June 30, 2019. Among other things, the goals and objectives of this new capital planning process are to construct an objective, data-driven, and risk-based approach to estimate major rehabilitation and capital asset replacement needs; build a capital investment prioritization methodology aligned with WMATA’s strategic goals and grounded in asset inventory and condition assessments; and develop a process that will support the construction and ongoing stewardship of its Transit Asset Management Plan. The latter is discussed in more detail below.
WMATA has also recently undertaken efforts to address issues related to the condition and maintenance of its track. After SafeTrack concluded in June 2017, WMATA implemented what officials describe as its first track preventive maintenance program designed to incorporate industry-wide best practices related to track maintenance, in order to improve the rail system’s long-term safety and reliability. The new program commenced in June 2017, and WMATA’s board reduced late-night service to allow for longer maintenance work hours.
To make the best use of the extra maintenance hours, WMATA focused its new program on six separate initiatives that together would address what WMATA viewed as its two most pressing track maintenance concerns—electrical fires caused by cable and insulator defects along the track wayside, and defects to the track itself, including unsecured rail fasteners and worn track switches (see table 1). These initiatives are planned to cover the entire transit system and will take various amounts of time to complete.
FTA also plays a role in WMATA activities by providing and directing the use of federal funds, overseeing safety, and requiring transit asset management. FTA provides grants that support capital investment in public transportation, consistent with locally developed transportation plans, and has provided such funding to WMATA as noted above. Additionally, though states play a role in safety oversight of rail transit systems through state safety oversight programs, FTA also has the authority to conduct various safety oversight activities such as inspections and investigations. Furthermore, FTA has the authority to assume temporary, direct safety oversight of a rail transit system if it finds the state safety oversight program is inadequate, among other things. After FTA conducted a safety management inspection and issued a safety directive with 91 required actions, it found WMATA's state safety oversight program to be inadequate and assumed direct safety oversight of WMATA in October 2015. Finally, FTA is responsible for assisting public transportation systems to achieve and maintain their infrastructure, equipment, and vehicles in a state of good repair. Specifically, in July 2016, FTA issued regulations establishing a National Transit Asset Management System. Applicable transit agencies were required to have an initial transit asset management plan completed by October 1, 2018. For "tier I providers," such as WMATA, this plan is to contain nine elements, including an inventory of the number and type of capital assets, and a condition assessment of those inventoried assets for which a provider has direct capital responsibility. WMATA completed its Transit Asset Management plan, dated October 1, 2018. This plan outlines WMATA's policy, approach, and targeted actions to improve its asset management practices over the next 4 years.
WMATA Has Focused Recent Capital Expenditures on Its Vehicle Fleet and Expects Future Expenditures to Increase to Meet State-of-Good-Repair Needs
Since Fiscal Year 2011, WMATA Has Expended the Largest Share of Its Capital Funds on Replacing and Maintaining Its Rail and Bus Fleets
WMATA expends its capital funds on a variety of capital assets as part of its capital budget and Capital Improvement Program. From fiscal year 2011 through 2017, WMATA expended approximately $5.9 billion on capital investments. Of this amount, WMATA expended the largest portion on assets related to the replacement, rehabilitation, and maintenance of its revenue vehicles (railcars, buses, and vans) and lesser amounts on other categories of assets, as discussed below and shown in figure 2.
Rail and Bus Vehicle Fleet: WMATA expended approximately $2.16 billion (36 percent) of the total $5.9 billion on projects related to its rail and bus fleet from fiscal years 2011 through 2017. The $2.16 billion included approximately $1.1 billion (51 percent) on replacing, expanding, and rehabilitating its rail fleet and approximately $956 million (44 percent) on its bus fleet. According to WMATA, it initiated its railcar replacement program in 2005 to increase capacity and reduce maintenance costs. In addition, a June 2009 Red Line collision of two trains near Fort Totten resulted in nine deaths and led the NTSB to recommend that WMATA retire and replace all 1000-series railcars. From fiscal years 2011 through 2017, WMATA expended almost $656 million on replacing these and other railcars and expanding its overall fleet. This effort includes WMATA's planned purchase of a total of 748 new 7000-series railcars (see fig. 3). Approximately $530 million was expended on replacing vehicles from fiscal years 2015 through 2017. For example, in fiscal year 2017 WMATA accepted delivery of about 50 percent (364 railcars) of its planned purchase of 748 new 7000-series railcars. WMATA expects to complete its current railcar replacement program by fiscal year 2024, with an estimated total program cost of about $1.7 billion.
Fixed Rail Infrastructure: WMATA expended about $1.23 billion of the total $5.9 billion (21 percent) to maintain its fixed-rail infrastructure. Of this $1.23 billion, WMATA expended about $650 million (53 percent) on rail infrastructure and rehabilitation projects and $573 million (47 percent) on improvements to its track and structures (e.g., bridges and tunnels). According to WMATA, the rail infrastructure and rehabilitation projects began in 2009 and were the first comprehensive rehabilitation of WMATA’s rail infrastructure in its history. Typical projects included rehabilitating WMATA’s water drainage pumps and tunnel ventilation, fire, and communications systems, among other things. WMATA work related to track and structures involved the maintenance and rehabilitation of the steel rail that guides railcars, the cross ties and fasteners that hold the rail in place, the third rail that provides power to trains, and the bridges and tunnels the track runs on or through. The share of WMATA’s total capital expenditures going to track and structures increased from about $80 million in fiscal year 2016 to $158 million in fiscal year 2017. This expenditure was primarily to implement SafeTrack.
Maintenance Facilities and Equipment: WMATA expended approximately $1.1 billion of the total $5.9 billion (19 percent) on assets related to maintenance facilities and equipment, which include rail yards, bus garages, and equipment used to rehabilitate and maintain WMATA’s track and vehicle fleet. For example, from fiscal years 2011 through 2017 WMATA expended approximately $75 million in constructing the Cinder Bed Road bus maintenance facility in Lorton, Virginia.
Passenger and Other Facilities: WMATA expended about $814 million of the total $5.9 billion (14 percent) on passenger, business, and security support facilities. Such facilities include rail and bus stations, police facilities, and elevator and escalator rehabilitation.
Business Systems and Project Management Support: WMATA also expended about $628 million of the total $5.9 billion (11 percent) on assets related to operations and business support software and equipment.
Prior to Fiscal Year 2017, WMATA Did Not Fully Expend Its Total Annual Capital Budget but Expects to Increase Expenditures to Address Repairs
From fiscal years 2011 through 2017, WMATA frequently overestimated in its annual budgets the annual amount of capital investments it could implement (see fig. 4). Out of the approximately $7.5 billion that WMATA budgeted for capital investments over this period, it expended approximately $5.9 billion (80 percent). WMATA's ability to fully expend its capital budget has varied from year to year. Specifically, WMATA expended about 65 percent ($700 million) of its $1.1 billion capital budget in fiscal year 2015, compared with 85 percent ($1.1 billion) of its $1.2 billion capital budget in fiscal year 2016. In fiscal year 2017, WMATA expended nearly 100 percent of its $1.18 billion capital budget. WMATA attributed the increased expenditures to intensified efforts to address deferred maintenance, primarily through the SafeTrack initiative and an increased delivery and acceptance rate for the new 7000-series railcars, among other things. The amount expended in fiscal year 2017 to replace the older railcars with new vehicles totaled about $335 million.
According to WMATA, there are a number of reasons why it has not fully expended its capital budget in any given year:
Contracting and Scheduling Issues: WMATA officials stated that there were contract and scheduling delays in the implementation of planned capital projects. For example, WMATA officials said contracts were sometimes not executed during the fiscal year in which funds were originally budgeted for the work, and in other instances contract work was not carried out according to schedule and expenditures were delayed.
Changing Priorities: WMATA officials stated that in some instances, the reevaluation and reprioritization of contracted projects affected WMATA’s ability to expend its capital budget. In such cases, new capital needs were sometimes identified and prioritized over other needs, which caused delays in work schedules and potential financial claims by contractors due to delays. For example, WMATA stated that in fiscal year 2011 the initiation of the Red Line rehabilitation program was delayed as a result of the prioritization of the safety needs in response to the 2009 Fort Totten accident.
Federal Reimbursement Restrictions: WMATA officials cited FTA restrictions on its reimbursement of federal funds between fiscal years 2014 and 2015 as a reason for its inability to expend budgeted capital funds in those years. In a financial management oversight review completed by FTA in 2014, FTA found material weaknesses and significant deficiencies in WMATA’s financial management controls, policies, and procedures regarding its receipt of federal grant funds. Based on these preliminary findings, FTA restricted WMATA’s ability to automatically access federal grant reimbursements until WMATA undertook corrective actions. During these years, WMATA reported its management slowed expenditures on targeted capital projects due to concerns over reimbursement of grants. By October 2017, after WMATA implemented an action plan to improve its financial controls, FTA reinstated WMATA’s ability to automatically receive all awarded federal funds on a regular schedule.
Unpredictable Funding: WMATA officials stated that unpredictable funding affected the level of its capital expenditures from year to year. Since WMATA had multi-year capital projects with multi-year procurements, according to WMATA officials, uncertainty with regard to how much capital funding would be received on an annual basis affected the implementation of projects.
Inadequate Capital Planning Process: WMATA attributed some of its inability to expend budgeted capital funds to the absence of a uniform and efficient capital planning process. According to WMATA, it lacked formal procedures to initiate projects and newer projects often experienced delays in implementation, which delayed expenditures on these projects. Later in this report, we discuss WMATA’s efforts to develop a new capital planning process.
Although WMATA expended more of its capital budget in fiscal year 2017 than it had in prior years, it estimated that capital spending will need to increase even more to address state-of-good-repair needs. In 2016, WMATA projected that its state-of-good-repair needs amounted to about $17.4 billion from 2017 through 2026. This level is almost $10 billion more than WMATA estimated for its state-of-good-repair needs from 2011 through 2020 in its February 2010 Capital Needs Inventory. WMATA officials attributed the increase to a capital planning process insufficient to identify capital needs and an increase in the cost of needs that were previously unmet. In addition, WMATA officials said the quality and quantity of asset data had improved over time. To address its state-of-good-repair needs, WMATA estimated in November 2016 that it would need to expend about $1.74 billion annually on capital investments from 2017 through 2026. This is more than twice the $845 million average annual capital expenditures from fiscal year 2011 through fiscal year 2017.
WMATA’s New Capital Planning Process Could Address Some Previous Weaknesses WMATA Identified, but the Process Does Not Have Documented Policies and Procedures and Has Other Weaknesses
WMATA’s new capital planning process could address some of the weaknesses it identified in the previous process, such as better distinguishing capital needs (investments in groups of related assets) from capital projects (investments in specific assets). However, WMATA has not established documented policies and procedures to guide the developed performance measures to assess capital projects and the capital planning process; and developed a plan to obtain complete information about the inventory and condition of WMATA assets.
These remaining weaknesses could hinder sound capital investment decisions.
WMATA’s New Capital Planning Process Could Facilitate Better Identification of Capital Investment Needs
WMATA’s new capital planning process could facilitate better identification of capital investment needs. Leading practices for capital planning, among other things, call for an organization to conduct a comprehensive assessment of its needs to meet its mission. WMATA uses the Capital Needs Inventory to assess its capital needs over a 10- year period across its various assets and help identify specific projects to include on subsequent capital improvement programs. In November 2016, WMATA issued its most recent Capital Needs Inventory, covering calendar year 2017 through 2026, and reported there were weaknesses and limitations in the process used to prepare the previous Capital Needs Inventory, issued in 2010. Those weaknesses and the actions WMATA has taken to address them include the following:
Distinguishing capital needs from capital projects. WMATA reported in 2016 that the 2010 Capital Needs Inventory was primarily a list of proposed projects and did not provide proper attention to evaluating WMATA's overall asset needs and the readiness of projects for programming in the capital improvement program. WMATA has taken actions to potentially address this weakness. In April 2016, WMATA issued a policy/instruction document that established policies and procedures for preparing capital needs inventories. This document defined the process for capital needs identification and established a framework for evaluating and prioritizing capital investment needs. Among other things, this framework requires that WMATA departments develop capital needs justification packages and that these packages be reviewed by the Capital Program Advisory Committee for completeness and accuracy before being forwarded for further review. The guidance also requires that WMATA's strategic objectives be considered when identifying and prioritizing capital projects.
Qualitative rather than quantitative prioritization of needs. In 2016, WMATA reported that the prioritization of capital needs in the 2010 Capital Needs Inventory was primarily based on qualitative assessments by management rather than being driven by quantitative information and condition assessments. According to WMATA, the 2010 Capital Needs Inventory was largely based on the professional judgment of staff in consideration of WMATA's strategic goals but was not data-driven. WMATA has taken actions to address this weakness by issuing a policy that requires WMATA's senior management serving on the Capital Program Advisory Committee to use a more quantitative-based capital prioritization formula in preparing the Capital Needs Inventory. For example, the November 2016 Capital Needs Inventory used a quantitative approach to rank and prioritize capital needs. This approach included the use of four criteria—asset condition, safety and security, service delivery, and ridership impact—to numerically score capital needs; WMATA then used a risk-based weighting approach to combine these criteria into a single overall prioritization score.
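A minimal sketch of such a weighted scoring approach appears below. The four criteria names come from the text above, but the weights, rating scale, and example needs are hypothetical, since the report does not publish WMATA's actual formula.

```python
# Hypothetical risk-based weights; WMATA's actual weights are not
# published in this report. Each need is rated 1 (low) to 5 (high)
# on each criterion.
WEIGHTS = {
    "asset_condition": 0.40,
    "safety_and_security": 0.30,
    "service_delivery": 0.20,
    "ridership_impact": 0.10,
}

def prioritization_score(ratings: dict) -> float:
    """Combine per-criterion ratings into a single weighted score."""
    return sum(WEIGHTS[criterion] * ratings[criterion]
               for criterion in WEIGHTS)

# Illustrative capital needs and ratings (not from the inventory).
needs = {
    "railcar replacement": {"asset_condition": 5, "safety_and_security": 5,
                            "service_delivery": 4, "ridership_impact": 5},
    "parking garage rehab": {"asset_condition": 3, "safety_and_security": 2,
                             "service_delivery": 2, "ridership_impact": 2},
}

ranked = sorted(needs, key=lambda need: prioritization_score(needs[need]),
                reverse=True)
print(ranked)  # ['railcar replacement', 'parking garage rehab']
```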
WMATA Has Not Yet Established Documented Policies and Procedures, or Developed Performance Measures and Complete Asset Inventory Information
While WMATA has addressed some weaknesses it identified in its prior planning, it has not established documented policies and procedures to guide the annual capital planning process, developed measures to assess capital project and program performance, or developed a plan to obtain complete information on its assets and their physical condition.
Policies and Procedures to Guide the New Capital Planning Process
Although WMATA established policies and procedures for prioritizing capital needs—that is, investments in groups of related assets—for the 2016 Capital Needs Inventory, it has not established documented policies and procedures for the new capital planning process, including how WMATA will rank and select individual projects to address those needs through its annual capital budgets and Six-Year Capital Improvement Program. For example, through its Capital Needs Inventory WMATA stated it needed to invest $17.4 billion over a 10-year period to address its state-of-good-repair needs, including replacing vehicles, rehabilitating stations, and investing in other types of assets. WMATA uses the annual capital budget and Six-Year Capital Improvement Program to identify the specific projects to be funded to meet the 10-year investment needs. However, because WMATA has not established documented policies and procedures for the new capital planning process, it has not yet identified the specific methodologies to rank and select projects for funding on an annual basis.
According to WMATA officials, the legacy annual capital planning process was based on implementing the list of projects that resulted from its 2010 Capital Needs Inventory and WMATA did not have a documented capital planning process that it followed on an annual basis. WMATA officials told us that the legacy capital planning process was “ad hoc” in nature, in part because WMATA was reacting to emergencies. For example, because WMATA needed to address the NTSB recommendation to replace the 1000-series railcars and address FTA safety directives after the 2015 smoke incident at the L’Enfant Plaza Station, it did not adhere to a formal annual-planning process.
The COSO internal control standards point out the importance of organizations documenting their processes to facilitate retention and sharing of organizational knowledge. Leading practices contained in the Executive Guide also recommend that organizations have defined processes for ranking and selecting projects for capital funding. In addition, the Executive Guide noted that organizations find it beneficial to rank projects because the number of requested projects often exceeds available funding.
Officials from all five of the peer transit agencies we spoke with told us they had or planned to develop documented processes for making capital investment decisions. For example, officials from four of the five peer transit agencies we spoke with said they use a project scoring and ranking system in their capital planning process, and officials from the fifth agency told us it plans to develop such a system. Officials from one agency provided us with its project evaluation and scoring system that assigns scores using eight selection criteria that are tied to the agency’s strategic business plan and state priorities. The selection criteria include such things as system preservation, safety, and cost-effectiveness. Officials from another agency told us they use an analytical tool to score projects and that every project (new or existing) gets re-scored annually.
As a result of WMATA not having documented policies and procedures for its capital planning process, it is unclear how important parts of the process will work and what the basis for WMATA's investment decisions will be. WMATA has outlined some high-level policies for the capital planning process and prepared limited guidance for certain parts of the process. For example, WMATA officials told us that its recently issued Transit Asset Management Plan contains asset management policies that address the ranking and selecting of capital projects. Although the Transit Asset Management Plan discusses the process for estimating and prioritizing capital needs, which are precursors of projects, the plan does not specifically address how projects would be selected for annual capital budgets and the capital improvement program. In addition, WMATA developed limited guidance for staff to use in developing new capital projects. Under this guidance, capital funds could be provided to evaluate, plan, and develop projects. While this guidance may be useful for developing projects, it does not establish the policies and procedures WMATA will follow to decide which projects will be funded through the annual capital budget and the capital improvement program.
Further, the documentation prepared by WMATA to date does not establish policies and procedures for the entire capital planning process and how decisions will be made throughout the process. WMATA reported in its fiscal year 2019 annual budget that it had created a capital program manual that identifies the roles, responsibilities, processes, and calendars of events to inform the fiscal year 2020 capital program. WMATA officials told us that the previous Director of the Capital Planning and Program Management Department had included this information in the draft budget proposal when these documents were being developed. However, WMATA officials told us that these documents were not completed, and that the information was mistakenly not removed from the budget before the previous director of the department left the agency.
WMATA officials told us they plan to formalize policies, procedures, and manuals for the fiscal year 2021–2026 capital-investment program cycle. The current leadership of the Capital Planning and Program Management Department told us that given the time constraints facing WMATA in the current fiscal year 2020 planning cycle, WMATA decided not to formally document the new capital planning process until after WMATA has had a chance to test it through the current planning cycle to see how it works. According to the official, the department's leadership has instructed staff to document steps taken in implementing the new process so that WMATA will have the opportunity to learn from the new process and make necessary changes before developing formal, written procedures that will guide future planning cycles.
Although delaying formal development of policies and procedures may provide an opportunity to learn from the process while implementing it, it does not provide the guidance necessary now as WMATA uses its new capital planning process to develop the fiscal year 2020 capital program. In particular, because WMATA has not established policies and procedures for ranking and selecting projects, WMATA does not have a framework or clear criteria for programming projects in the annual capital budget for fiscal year 2020. WMATA has proposed a fiscal year 2020 capital budget of $1.4 billion. In addition, WMATA's plan to document steps taken in implementing the new process as it is occurring does not provide reasonable assurance that WMATA is making decisions using a consistent process to direct investments toward WMATA's highest priority needs. A consistent process is all the more important to ensure that WMATA does not continue to use an ad hoc process for capital investment decisions, as it did in its legacy process. WMATA's annual capital spending is anticipated to increase substantially over the fiscal year 2020-2025 period, as WMATA expects to be programming the additional $500 million annually for capital purposes committed by the District of Columbia, Maryland, and Virginia. Without a documented planning process that includes procedures for ranking and selecting projects for funding in the fiscal year 2020 capital budget, WMATA's stakeholders lack reasonable assurance that WMATA's capital investment decisions will be made using a sound and transparent process.
Performance Measures to Assess Capital Projects and the Capital Planning Process
WMATA has also not developed performance measures to assess capital projects and the capital planning process. Leading practices from the Executive Guide suggest that one way to determine if a capital investment achieved the benefits that were intended when it was selected is to evaluate its performance using measures that reflect a variety of outcomes and perspectives. By looking at a mixture of measures, such as financial improvement and customer satisfaction, managers can assess performance based on a comprehensive view of the needs and objectives of the organization. Leading organizations we studied in preparing the Executive Guide, such as private sector companies, use financial and non-financial criteria for success that are tied to organizational goals and objectives. According to the Executive Guide, project-specific performance measures are then used to develop unit performance measures and goals, which are ultimately used to determine how well an organization is meeting its goals and objectives.
WMATA officials told us they have not developed performance measures for assessing the performance of individual projects or the capital planning process as a whole. One WMATA official told us that WMATA would like to evaluate results of the new capital planning process to determine whether organizational goals have been met. The official suggested that WMATA would work with a consultant to demonstrate a linkage between capital planning goals and WMATA's organizational goals. However, the official did not indicate when this step would occur or provide additional information. Moreover, it is unclear whether the official's intentions for this effort would result in measures for assessing individual projects as well as the overall capital planning process. By developing measures, WMATA will be better positioned to assess whether specific capital investments met their intended outcomes and whether the capital planning process itself is helping WMATA achieve its strategic goals and objectives and use taxpayer funds effectively.
Information on Asset Inventories and Physical Condition Assessments
WMATA also does not have a complete inventory or physical condition assessments of its assets. Leading practices for good capital decision-making call for organizations to conduct a comprehensive assessment of their needs and identify the organization's capabilities to meet these needs. This process includes taking an inventory of assets and their condition and assessing where there are gaps in meeting organizational needs. The Transit Cooperative Research Program has also identified asset inventory and condition assessments as the first step in determining what asset rehabilitations and replacements are needed as transit providers address their state-of-good-repair requirements. Asset inventories and condition assessments provide critical information for capital-investment decision-making.
WMATA has initiated various efforts to obtain better information about its assets and their condition. These efforts have included:
Transit Asset Inventory and Condition Assessment Project. In 2016, WMATA began this project to provide a physical inventory of WMATA assets and their condition, in part to comply with FTA’s Transit Asset Management regulations. According to WMATA, this project was to be the cornerstone in ensuring a complete, consistent, accurate, and centralized repository of relevant asset-related data. However, WMATA officials said that the project primarily focused on obtaining an inventory and condition assessment of WMATA facilities and equipment. A February 2018 WMATA memo to senior management stated that even when the project was completed, WMATA would still lack a robust database of track, guideway, infrastructure (e.g., tunnels and bridges), systems, and communication assets—elements that the November 2016 Capital Needs Inventory noted were the largest gaps in the asset information used to support capital needs forecasting. According to WMATA, this project produced inventory and condition assessments for about 30 percent of WMATA’s asset base. As of October 2018, WMATA considered the project complete since it provided information to help prepare WMATA’s completed Transit Asset Management Plan, dated October 1, 2018. WMATA officials noted that they will continue to develop the agency’s asset inventories and condition assessments through the new Enterprise Asset Management Program, described below.
Enterprise Asset Management Program. In December 2017, WMATA began development of an Enterprise Asset Management Program. According to WMATA, this program is an effort to institutionalize asset management practices that are aligned with industry best practices to provide, among other things, high quality asset data for informed decision-making, including for capital planning. Expected program tasks include updating asset records and improving and consolidating asset inventories in WMATA’s asset system of record (called Maximo).
WMATA’s efforts to develop more complete asset inventories and condition assessments remain ongoing. Among other things, WMATA documentation on the Enterprise Asset Management Program cited “inattention, poor standardization, and organizational silos” as factors that have resulted in WMATA having multiple sets of asset records in various states of accuracy and usefulness. The Enterprise Asset Management Program, according to WMATA, is an effort to help address this situation and improve asset data quality, including inventory and condition assessments.
Although WMATA is developing a new Enterprise Asset Management Program, it has yet to develop a plan for obtaining a complete inventory or physical condition assessments of its assets. The Project Management Institute’s A Guide to the Project Management Body of Knowledge (PMBOK® Guide) describes the elements of good project management and their importance in achieving organizational goals. Among these elements are:
Having a project charter that formally authorizes a project, that commits resources to the activity, and that provides a direct link to organizational strategic objectives;
Preparing a project plan to define the basis of the project’s work and how the work will be performed; and
Establishing a monitoring and control process to track, review, and report overall progress in meeting the plan’s objectives.
WMATA has prepared draft documents that describe how it will implement the Enterprise Asset Management Program and that contain some elements of good project management. For example, in January 2018 WMATA circulated a proposed charter that, once approved, would authorize the Enterprise Asset Management Program, identify needed resources, and link to WMATA’s strategic goals. As of October 2018, this proposed charter had not yet been finalized. Draft program documents also indicate there would be a monitoring and control process that would establish regular reporting to internal stakeholders to assess program accomplishments and progress implementing the program.
While WMATA has developed a proposed charter and a monitoring and control process for its Enterprise Asset Management Program, it has not established a plan for collecting asset inventory and condition assessment information. The draft program charter includes general tasks for updating asset records and improving and consolidating asset inventory data in Maximo. However, a plan would provide more specific details for how the work would be completed, such as the information to be collected on different assets, how and when this information would be consolidated into Maximo, milestones for completing the work, and how the effort would be funded. Without a plan to obtain asset inventory and condition assessment information, WMATA will continue to lack critical information needed for good capital planning and sound investment decision-making.
WMATA Reported Significant Progress toward Goals, but the Track Preventive Maintenance Program Does Not Fully Align with Leading Practices
WMATA Has Reduced Both Track Defect Incidents and Electrical Fires but Faces Challenges Implementing Its Track Preventive Maintenance Program
WMATA has reported significant progress toward its goals of reducing track defects and fire incidents but still faces several challenges in implementing its track preventive maintenance program. WMATA defines an incident as any unplanned event that disrupts rail revenue service. According to WMATA officials, within the track preventive maintenance program WMATA seeks to reduce incidents caused by electrical wayside fires and incidents caused by track defects each by 50 percent from fiscal year 2017 to fiscal year 2019. WMATA reported that in fiscal year 2018 it had met its goal for track defect incidents but not for electrical wayside fires. According to officials, track defect incidents—which include incidents caused by defective fasteners, switches, and “ballast”—were reduced by 50 percent, from a total of 778 in fiscal year 2017 to 387 in fiscal year 2018. Electrical-wayside-fire incidents—including incidents caused by cable and insulator fires—declined 20 percent, from a total of 55 in fiscal year 2017 to 44 in fiscal year 2018 (see fig. 5).
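As a check on the arithmetic, the reported reductions can be reproduced from the incident counts above. The following is a minimal sketch; the counts come from WMATA's reporting, while the helper function is illustrative:

```python
def percent_reduction(baseline, current):
    """Percent decrease from a baseline count to a current count."""
    return (baseline - current) / baseline * 100

# Incident counts reported by WMATA for fiscal years 2017 and 2018.
print(f"Track defects: {percent_reduction(778, 387):.0f}% reduction")           # ~50%
print(f"Electrical wayside fires: {percent_reduction(55, 44):.0f}% reduction")  # 20%
```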
Although WMATA has reduced both track defect incidents and electrical fires, the track preventive maintenance program is not intended to address the full range of defects and track fires that may occur on the system. WMATA officials told us that the track preventive maintenance program specifically seeks to reduce electrical-wayside-fire incidents, which are a specific subset of overall track fires, and does not include non-electrical fires or smoke incidents, such as those caused by railcars or debris. WMATA captures and publicly reports the non-electrical fires as part of its quarterly Metro Performance Report, but according to WMATA officials, these fires are not specifically addressed through the track preventive maintenance program. While electrical fires decreased in fiscal year 2018, non-electrical fires did not change, as WMATA reported 23 non-electrical fires for both fiscal years 2017 and 2018. Additionally, the track preventive maintenance program addresses a certain subset of track defect incidents, such as those caused by loose fasteners and defective switches. According to WMATA, these track defect incidents can be addressed through its track geometry, torqueing, and switch maintenance initiatives. WMATA addresses other types of track defects, such as rail breaks and third-rail defects, through its capital program. However, according to WMATA, track defects attributable to the capital program are still included as part of the overall goal to reduce all track defect incidents by 50 percent by fiscal year 2019.
WMATA established goals for completing each of the six track preventive maintenance initiatives within a certain time period and reported that in fiscal year 2018 it was on track to meet or exceed those goals for four of the initiatives. For example, in implementing its “cable meggering” initiative, WMATA established a goal to inspect and replace electric cables across its entire rail system within 4 years. According to WMATA, it met its target for fiscal year 2018 by completing 25 percent of the entire system in that year. In addition to cable meggering, WMATA also met its annual targets for the switch maintenance, track bed cleaning, and stray-current testing initiatives. As for the two initiatives behind schedule, the torqueing initiative was 70 percent complete and the tamping initiative was 90 percent complete against their fiscal year 2018 targets (see table 2). Officials told us they have developed various ways to improve efficiency with these initiatives. For instance, WMATA improved the productivity of its switch maintenance initiative by separating the work to inspect the switches from the follow-up repair work to grind and weld them; these activities had previously been conducted by the same team.
However, WMATA faces challenges in implementing the track preventive maintenance program moving forward. WMATA officials described track preventive maintenance as a necessary operation that must be continuously performed and balanced in conjunction with regular train operations that provide service to their customers. According to WMATA officials, executing this new program requires regular refinements to ensure it continues to progress toward its desired outcomes. Among the implementation challenges identified by WMATA officials were the following:
Securing Sufficient Track Time. WMATA officials told us that getting adequate time to perform track maintenance is difficult because it requires reducing the number of hours in which WMATA provides service to customers. Consequently, increased maintenance hours can result in lost revenue. Officials from the peer transit agencies we interviewed stated that the tension between conducting maintenance and providing service is common in the transit industry. According to WMATA officials, prior to SafeTrack, windows for performing track maintenance were not sufficient to complete all necessary work, partially because of this need to balance maintenance hours and service hours. To address this issue, WMATA increased its weekly overnight work hours from 33 hours to 39 hours during SafeTrack. After SafeTrack was complete, WMATA extended weekly overnight work hours again to a total of 41 hours. However, maintaining these extended overnight work hours past fiscal year 2019 requires approval from WMATA’s board of directors. As a result, the long-term viability of WMATA’s track preventive maintenance program is partially dependent on the board’s decision to balance the competing demands for service hours and maintenance time.
Work Time Productivity. To maintain extended track-maintenance hours into succeeding years, it will be important for WMATA to demonstrate the new program’s productivity. According to WMATA officials, making the most productive use of the extended working hours is a challenge, but doing so will be necessary to justify the extended maintenance windows. WMATA officials told us that only a portion of overnight work hours yields productive maintenance time. For example, once a line ceases operations, it takes an additional hour for all trains to reach their final destination, and another hour after that to safely turn off all power running to the track and then establish a work zone. Once maintenance work is completed, additional time must be allotted for restoring power and allowing trains to move back into position. Because of these requirements, a five-hour work window may only yield two hours of productive work time (called “wrench time”). For this reason, WMATA began tracking its wrench time at the beginning of fiscal year 2018. As of June 2018, WMATA reported that average wrench time had increased from about 2.0 hours per day in July 2017 to 2.37 hours. (A simplified sketch of this calculation appears after this list.)
Resource Constraints. According to WMATA officials, having sufficient people with the necessary skills and experience to perform track maintenance work is a significant challenge. For instance, expanded maintenance windows have increased WMATA’s workforce requirements. As a result, WMATA has used contractors to assist with its stray-current testing and track bed cleaning initiatives. In another example, WMATA’s torqueing initiative is particularly resource-intensive, as the entire rail system contains 135 miles of “direct fixation” track, where the torqueing work is being done, and over 504,000 fasteners to check and tighten as necessary. According to WMATA officials, bolts and fasteners are torqued during their initial installation and then again 90 days afterward as part of the initial capital expenditure. After that, any subsequent torqueing is executed as part of the new track preventive maintenance program. WMATA stated that the torqueing initiative seeks to torque all 135 miles of direct fixation track annually. WMATA officials said the torqueing initiative uses a mix of contractor and in-house staff, with contractors supplementing WMATA forces as needed.
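To illustrate the wrench-time arithmetic described under Work Time Productivity above, the following is a minimal sketch. The function name and default overhead values (about an hour each to clear trains, de-energize the track and set up a work zone, and restore power afterward) are illustrative assumptions drawn from the examples WMATA officials described, not WMATA's actual scheduling model:

```python
def estimated_wrench_time(window_hours, clear_trains=1.0,
                          power_and_setup=1.0, restore=1.0):
    """Estimate productive maintenance hours ("wrench time") remaining in an
    overnight work window after fixed setup and teardown overheads."""
    return max(0.0, window_hours - clear_trains - power_and_setup - restore)

# A 5-hour window yields roughly 2 hours of productive work time.
print(estimated_wrench_time(5.0))  # 2.0
```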
WMATA’s Track Preventive Maintenance Program Does Not Fully Align with Leading Program Management Practices
WMATA’s track preventive maintenance program has followed certain leading program management practices, such as establishing key performance metrics and monitoring progress toward them. Leading practices recommend that organizations establish performance baselines for their programs and communicate performance metrics to key stakeholders. For instance, as previously noted, WMATA established a measurable program goal to reduce track-defect and electrical-wayside-fire incidents by 50 percent within 2 years, and WMATA also established time periods to complete its system-wide preventive maintenance initiatives. In addition, WMATA’s Rail Services Department—which manages the track preventive maintenance program—holds, among other things, a monthly “RailSTAT” meeting in which the teams leading the preventive maintenance initiatives report their progress toward these goals to WMATA’s management.
However, WMATA’s program does not fully align with other applicable internal-control standards or leading program-management practices. Specifically, COSO internal control standards and leading practices identified by the Project Management Institute’s The Standard for Program Management stress the importance of identifying and assessing program risks and developing a program management plan.
COSO recommends that organizations identify risks to the achievement of their objectives and analyze those risks as a basis for determining how they should be managed. Furthermore, the risk identification is to be comprehensive.
The Standard for Program Management also recommends that when identifying risks, the assessments be both qualitative and quantitative in nature.
Regarding program management plans:
The Standard for Program Management recommends that organizations develop program management plans that align with organizational goals and objectives. This includes aligning the program management plan with the organization’s overall strategic plan. Elements of the plan are to provide a roadmap that identifies such things as milestones and decision points to guide program activities.
In developing the track preventive maintenance program, WMATA did not fully identify or quantitatively assess risks associated with the program. WMATA officials told us that in developing the program they used their professional judgment to identify track-defect and fire incidents as the most significant risks to address. However, WMATA’s risk identification was not comprehensive, as it considered only two technical aspects of track maintenance: electrical fires and track defects. As previously mentioned, non-electrical fires—which were not included in the scope of the program or the risk assessment—did not change from fiscal year 2017 through 2018 and represent approximately 30 percent of all fires on the system over those years. Although WMATA officials told us that in designing the program they reviewed track-related incident data from 2016, they did not quantitatively analyze the impact of these incidents on service or safety. In addition, WMATA did not consider broader strategic risks to the program, such as the availability of funding and stakeholders’ support for the continuation of the program. Specifically, while WMATA has identified several challenges with implementing the program—such as securing sufficient track time, demonstrating work time productivity, and overcoming resource constraints—none of these factors, or potential mitigations, were documented in a risk assessment in developing the program.
WMATA has also not prepared a program management plan for the track preventive maintenance program. Although WMATA has identified program goals, officials told us that WMATA has not formally documented the overall structure of the program or how it would be implemented. Instead, the officials said that the presentations they provide to WMATA’s board of directors regarding the track preventive maintenance program, along with their ongoing staff and executive team meetings, cover the relevant information needed for running the program. While providing such information to the WMATA board of directors provides some accountability for the program, these presentations do not represent a formal program management plan that links with WMATA’s strategic plan or that identifies milestones and decision points necessary to guide the program. As we previously reported, WMATA did not develop a project management plan before starting its SafeTrack work, and due to this omission and other issues, we found that WMATA lacked assurance that the approach taken with SafeTrack was the most effective way to identify and address safety issues. Furthermore, as this is the first time WMATA has implemented a track preventive maintenance program, a program management plan could help formally establish the program, provide strategic guidance for this new program by providing accountability for both internal and external stakeholders, and ensure that program goals are met. A program management plan could also provide practical benefits, such as helping ensure that WMATA’s extended overnight work hours are efficiently implemented and that sufficient resources are devoted to the program.
Without the strategic direction provided by a comprehensive risk assessment and a formal program management plan, WMATA lacks a documented vision for how the track preventive maintenance program should be structured and implemented in order to meet the agency’s strategic goals and improve track safety. Specifically, without a risk assessment that uses quantitative and qualitative data to assess risks—such as data for all fires on the system and qualitative risks such as securing sufficient time for maintenance—WMATA lacks assurance that the program is comprehensively designed to address risks affecting the safety of the rail system or other risks that could hinder the program’s success. Moreover, a program management plan that draws on information from a comprehensive risk assessment would provide WMATA officials with the assurance that they are prepared to respond to current and future challenges that could threaten the long-term viability of the program.
Finally, although WMATA developed the track preventive maintenance program to prevent the need for another emergency repair project like SafeTrack, without a formal program management plan, the WMATA employees charged with managing and implementing the program lack an important document to guide their decision-making to meet that objective and the agency’s overall strategic objectives. Developing a program management plan would outline the specific requirements to successfully implement the program, including necessary track time, expected productivity of program initiatives, and required resources. Furthermore, it would provide WMATA’s board of directors with confidence that the program has a clear roadmap with milestones and decision points as the board considers maintaining the extended overnight work hours necessary to implement the program.
Conclusions
WMATA’s rail and bus systems provide nearly a million passenger trips each day, and those passengers rely on WMATA for safe and reliable public transportation in the nation’s capital and the surrounding areas. The federal, state, and local jurisdictions that fund WMATA expect WMATA to wisely use taxpayer funds to ensure the system is safe and reliable. WMATA can better meet these expectations by establishing documented policies and procedures that outline how the new capital planning process will work and the basis of investment decisions. In addition, developing measures to assess the performance of individual projects and the capital planning process would provide greater assurance to WMATA’s funding partners that its investment decisions result in a measurable improvement in operating performance, reliability, or other metrics. Furthermore, WMATA’s recent efforts to establish an Enterprise Asset Management Program, once finalized, could help WMATA develop a more complete inventory of its assets and collect critical information on their condition—both of which are consistent with sound capital planning. However, without a plan that provides specific details for obtaining this information, WMATA will continue to lack the critical asset information necessary to make lasting improvements in its capital planning process and make sound capital-investment decisions.
Similarly, track preventive maintenance plays a critical role as WMATA works to reduce the track defects and fires that have endangered safety and service reliability. WMATA could better demonstrate the direction of the track preventive maintenance program and how it can improve track safety by more comprehensively assessing the technical and broader risks facing the program and by developing a formal plan that provides greater assurance WMATA is prepared to address challenges that could threaten the long-term viability of the program. Both actions would help WMATA better focus the program on critical maintenance needs and demonstrate its value to WMATA’s board of directors and other stakeholders as WMATA endeavors to provide safe, reliable, and quality service to its riders.
Recommendations for Executive Action
We are making the following five recommendations to WMATA.
The General Manager of WMATA should establish documented policies and procedures for the new capital planning process. These policies and procedures should include methodologies for ranking and selecting capital projects for funding in WMATA’s fiscal year 2020 capital budget and fiscal years 2020-2025 Capital Improvement Program and for future planning cycles. (Recommendation 1)
The General Manager of WMATA should develop performance measures to be used for assessing capital investments and the capital planning process to determine if the investments and planning process have achieved their planned goals and objectives. (Recommendation 2)
The General Manager of WMATA should develop a plan for obtaining complete information regarding WMATA’s asset inventory and physical condition assessments, including assets related to track and structures. (Recommendation 3)
The General Manager of WMATA should conduct a comprehensive risk assessment of the track preventive maintenance program that includes both a quantitative and qualitative assessment of relevant program risks. In addition to considering technical program risks, WMATA should also consider broader program risks, such as the availability of funding for the program and stakeholders’ support. (Recommendation 4)
The General Manager of WMATA should prepare a formal program management plan for the track preventive maintenance program that aligns with WMATA’s strategic plan, addresses how the program is linked to overall strategic goals and objectives, and includes program milestones and decision points. (Recommendation 5)
Agency Comments and Our Evaluation
We provided a draft of this report to WMATA and the Department of Transportation for review and comment. WMATA provided written comments, which are reprinted in appendix II, and technical comments, which we incorporated as appropriate in the report. The Department of Transportation provided technical comments, which we incorporated as appropriate.
WMATA concurred in part or with the intent of four of the recommendations and disagreed with the fifth. The first recommendation is that WMATA establish documented policies and procedures for the new capital planning process and that the policies and procedures include methodologies for ranking and selecting capital projects for the fiscal year 2020 capital budget and fiscal year 2020-2025 capital-improvement program.
WMATA stated that it agreed with the recommendation, in part. WMATA said it will continue its efforts to finalize and document policies and procedures for the capital planning process for fiscal year 2021 and beyond. WMATA noted that it already has in place numerous planning tools, such as the 2016 Capital Needs Inventory assessment, which helped inform the fiscal year 2020 capital planning process. According to WMATA, it is currently reviewing policies, procedures, training materials, and other documents for the fiscal year 2020 planning process, and those documents will be updated and formalized through final documentation in fiscal year 2021. WMATA noted that it anticipates that many of the elements we recommend regarding the capital planning process will be part of the process documented in fiscal year 2021. For example, WMATA expects that additional automation, decision-making, governance, and reporting capabilities will be part of the process that will be documented for fiscal year 2021. However, while WMATA has tools available to inform the capital planning process, it has not prepared documented policies and procedures for this process in fiscal year 2020. As we reported, without documented policies and procedures, including those for ranking and selecting projects for the fiscal year 2020 capital budget, WMATA’s stakeholders do not have reasonable assurance that capital investment decisions are made using a sound and transparent process. Taking action now to establish methodologies for ranking and selecting projects for the fiscal year 2020 capital budget would provide WMATA with an opportunity to improve upon those methodologies for the fiscal year 2021 capital planning process and better ensure investments are directed to WMATA’s highest priority needs. As such, we continue to believe this recommendation is valid and that WMATA should fully implement it.
Regarding the second recommendation that WMATA develop performance measures for assessing capital investments and the capital planning process, WMATA stated that it agreed with the intent of the recommendation. WMATA also stated that it has developed such measures through compliance with federal requirements, including the FTA’s performance-based planning requirements and the requirement under MAP-21 that tier I transit providers, such as WMATA, establish state-of-good-repair targets that are linked to the capital program. WMATA noted these targets are set forth in its Transit Asset Management Plan. Although WMATA’s October 2018 Transit Asset Management Plan includes some broad performance measures and targets for the state-of-good-repair of its various asset classes, as we reported, WMATA has not developed performance measures to assess individual capital projects or the capital planning process itself, as suggested by leading practices in the Executive Guide. As discussed in the report, such measures are important to determine if capital investments have achieved their expected benefits and if they have achieved organizational goals. Leading practices also indicate that by using a mixture of measures, managers can assess performance based on a comprehensive view of the needs and objectives of an organization. These needs and objectives can go beyond just the state-of-good-repair to include such things as measures for assessing projects that would improve service reliability, expand capacity, or achieve financial objectives. We continue to believe that fully implementing this recommendation would help ensure that capital investments meet their intended outcomes and that the capital planning process helps WMATA achieve its strategic goals and objectives.
Regarding the third recommendation that WMATA develop a plan for obtaining complete information about asset inventories and condition assessments, WMATA stated that it agreed with the intent of the recommendation and that its 2018 Transit Asset Management Plan outlines plans for continuing its asset inventory update. WMATA also said that it is working to ensure it has a complete asset inventory that addresses legacy information and that includes accurate, up-to-date condition assessments. As we reported, the Enterprise Asset Management Program—the program that WMATA told us it plans to use to continue development of asset inventories and condition assessments—includes some elements of good project management, but it lacks an established plan for collecting asset inventory and condition assessment information. Without a plan to obtain this information, WMATA will continue to lack critical information needed for good capital planning and sound investment decision-making. Thus, we continue to believe that this recommendation is valid and that WMATA should fully implement it.
Regarding the fourth recommendation that WMATA conduct a comprehensive risk assessment of the track preventive maintenance program that includes both quantitative and qualitative assessment of relevant program risks, WMATA stated that it agreed with the intent of the recommendation and is putting in place a new process that will address it. Specifically, WMATA stated it is in the process of developing a new Reliability Centered Maintenance process that will include a comprehensive risk assessment of track infrastructure that includes consideration of broader risks such as costs, funding, and track access. According to WMATA, the new process is an engineering framework that will define the maintenance regimen, including preventive maintenance, and improve safety, reliability, and cost-effectiveness. During our review, WMATA officials did not discuss the Reliability Centered Maintenance process in detail or provide documentation that allowed us to evaluate how this process might interface with the current track preventive maintenance program. As a result, we were not able to evaluate how it might address identification and assessment of risks associated with track preventive maintenance. As we reported, going forward track preventive maintenance will play a critical role as WMATA works to reduce track defects and fires. We will review WMATA’s actions to conduct a comprehensive risk assessment as part of our routine recommendation follow-up process.
Regarding the fifth recommendation that WMATA prepare a formal program management plan for the track preventive maintenance program, WMATA stated that it disagreed with the recommendation. WMATA noted that specific technical details of the track preventive maintenance program are evolving as it better understands the most effective maintenance regime through implementation of the Reliability Centered Maintenance process. WMATA stated that it believes the framework of Reliability Centered Maintenance is better suited to the ongoing mission of physical asset management than traditional project and program management tools. According to WMATA, the purpose of Reliability Centered Maintenance is to ensure that all efforts are focused on the safety, reliability, and cost-effectiveness of assets through their lifecycle, which is more relevant and applicable to WMATA’s strategic plan than any individual preventive maintenance program. As stated above, WMATA did not provide details about Reliability Centered Maintenance during our review so we are not able to evaluate this process in relation to the track preventive maintenance program. We will review WMATA’s actions related to implementation of the Reliability Centered Maintenance process as part of our routine recommendation follow-up process. We continue to believe this recommendation is valid and that WMATA should fully implement it.
We will send copies of this report to appropriate congressional committees, the Secretary of Transportation, the Administrator of the Federal Transit Administration, and the General Manager of WMATA. In addition, we will make copies available to others upon request, and the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
Appendix I: Objectives, Scope, and Methodology
This report examines: (1) How WMATA expended its capital funding from fiscal years 2011 through 2017; (2) How WMATA’s new capital planning process addresses weaknesses it identified in the previous process; and (3) WMATA’s progress toward its track preventive maintenance goals and how the program aligns with leading program management practices.
For each of our objectives, we reviewed pertinent federal statutes and regulations as well as WMATA and FTA policies and documents. We also selected a non-generalizable sample of five similar U.S. transit agencies based on similarity to WMATA in transit route mileage, system use, capital spending, system age, and rail fleet age; we also factored geographical diversity into our selection process. We then interviewed officials from these selected transit agencies, using a standard set of questions, to learn how they utilize their capital funds, conduct capital planning, and oversee maintenance, and we compared their processes to WMATA’s. Transit route mileage, system use, capital spending, and rail fleet age were measured using data from FTA’s National Transit Database. We measured system age according to data available in the American Public Transportation Association’s 2017 Public Transportation Fact Book, and geographical diversity was determined through data available from the U.S. Census Bureau. The transit agencies we selected were: (1) Bay Area Rapid Transit, Oakland, California; (2) Chicago Transit Authority, Chicago, Illinois; (3) Massachusetts Bay Transportation Authority, Boston, Massachusetts; (4) Metropolitan Atlanta Rapid Transit Authority, Atlanta, Georgia; and (5) Southeastern Pennsylvania Transportation Authority, Philadelphia, Pennsylvania.
To assess WMATA’s capital spending from 2011 through 2017, we interviewed knowledgeable officials from WMATA and FTA and also reviewed WMATA annual budgets, fourth-quarter and year-end financial reports, budget reconciliation reports, comprehensive annual financial reports, and FTA grant awards. We selected fiscal year 2011 because it was the first year in which WMATA received federal funding authorized by the Passenger Rail Investment and Improvement Act of 2008 (PRIIA), and we selected fiscal year 2017 because it was the most recent year for which capital expenditure data were available at the time of our review. By analyzing this information, we determined that the following sources provided the most comprehensive and reliable available data on each of the following topics for our report (see table 3). We collected the aforementioned data, analyzed them to identify errors or other anomalies, and interviewed officials to determine how the data are compiled and checked for accuracy. We determined that these data had some limitations, as an external audit report of WMATA financial information for fiscal year 2016 noted a material weakness in WMATA’s process for accounting for acquisition costs of capital assets. Specifically, there were inconsistencies between WMATA’s general ledger and sub-ledger, which are used to record acquisition costs, depreciation, and other financial information related to capital assets. As a result, additional steps were required to reconcile the differences between the two sources; without that reconciliation, the inconsistencies could have resulted in a material error. However, after interviewing WMATA officials about the weakness and assessing the available financial information, we determined that the data we used were sufficiently reliable for our purpose of showing general trends in capital expenditures.
Our analysis sought to depict how WMATA allocates and expends funds according to major asset categories within its capital-improvement plan. These asset categories remained consistent only from 2011 through 2015 and were revised during 2016 and 2017. We determined, however, that each asset category consisted of Capital Improvement Projects that were each assigned a number. These projects and their corresponding numbers remained in existence from fiscal year 2011 through 2017, even though the asset categories were updated in fiscal year 2016. Tracking by Capital Improvement Project number therefore provided a means to report consistently through that time period, so we used the asset categories from fiscal years 2011 through 2015 as our base reporting categories. These categories consisted of: (1) Vehicles/Vehicle Parts, (2) Rail System Infrastructure Rehabilitation, (3) Maintenance Facilities, (4) Systems and Technology, (5) Track and Structures, (6) Passenger Facilities, (7) Maintenance Equipment, (8) Other Facilities, and (9) Project Management and Support. We consolidated WMATA’s nine asset categories into five in order to represent broader categories of investment: Rail and Bus Vehicle Fleet (Vehicles/Vehicle Parts), Fixed Rail Infrastructure (Rail System Infrastructure Rehabilitation and Track and Structures), Maintenance Facilities and Equipment (Maintenance Facilities and Maintenance Equipment), Passenger and Other Facilities (Passenger Facilities and Other Facilities), and Business Systems and Project Management Support (Systems and Technology and Project Management and Support). We then reviewed WMATA’s fiscal year 2016 Fourth Quarter Report, fiscal year 2017 Fourth Quarter Report, and fiscal year 2017 Budget Reconciliation Report to match each project number from those two years to its corresponding category from fiscal years 2011 through 2015.
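To illustrate the consolidation step described above, the category mapping can be expressed as a simple lookup table. This is a sketch of the approach rather than our actual analysis code; the category names come from the report, and the sample lookup is only an example:

```python
# Mapping WMATA's nine fiscal year 2011-2015 asset categories to the five
# consolidated categories used in our reporting (names from the report).
CONSOLIDATED_CATEGORIES = {
    "Vehicles/Vehicle Parts": "Rail and Bus Vehicle Fleet",
    "Rail System Infrastructure Rehabilitation": "Fixed Rail Infrastructure",
    "Track and Structures": "Fixed Rail Infrastructure",
    "Maintenance Facilities": "Maintenance Facilities and Equipment",
    "Maintenance Equipment": "Maintenance Facilities and Equipment",
    "Passenger Facilities": "Passenger and Other Facilities",
    "Other Facilities": "Passenger and Other Facilities",
    "Systems and Technology": "Business Systems and Project Management Support",
    "Project Management and Support": "Business Systems and Project Management Support",
}

def consolidate(original_category):
    """Map an original asset category to its consolidated reporting category."""
    return CONSOLIDATED_CATEGORIES[original_category]

print(consolidate("Track and Structures"))  # Fixed Rail Infrastructure
```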
To assess WMATA’s new capital planning process and how it addresses weaknesses WMATA identified in the previous process, we interviewed WMATA officials about their capital planning process and reviewed WMATA documentation related to that process. This documentation included Capital Needs Inventories, WMATA’s policy for preparation of the 2010 and 2016 Capital Needs Inventories, annual capital budgets (including capital improvement programs), and guidance documents issued by WMATA related to submitting projects for inclusion in the annual capital budget. We also reviewed the fiscal year 2018 business plan for WMATA’s Capital Planning and Program Management Department and interviewed officials from the Metropolitan Washington Council of Governments, the American Public Transportation Association, and FTA to discuss WMATA’s capital planning and budgeting processes. Furthermore, we compared WMATA’s capital planning practices to leading practices identified in GAO’s Executive Guide. We used the Executive Guide because it identifies leading practices for capital decision-making that are applicable to a wide variety of organizations, both public and private. For example, the Executive Guide developed leading capital planning practices by (1) identifying government and private sector organizations recognized for outstanding capital decision-making practices and (2) identifying and describing the leading capital decision-making practices implemented by these organizations. To identify leading practices for capital planning, we also reviewed Transit Cooperative Research Program Report 157, which developed a framework for transit agencies to use when prioritizing the rehabilitation and replacement of capital assets and discusses leading practices for doing so. We also identified project management principles from the Project Management Institute, Inc. Finally, we discussed capital planning with the peer transit agencies and prepared a summary of various aspects of capital planning in these agencies.
To examine progress toward goals in WMATA’s track preventive maintenance program and how the program compares with leading program management practices, we reviewed WMATA documentation about the program, interviewed WMATA officials, and analyzed track-defect data and electrical-wayside-fire data provided by WMATA for fiscal years 2016 through 2018—the only years for which detailed track defect and electrical fire incident data were available. To determine whether the data provided were sufficiently reliable, we checked the data for errors, conducted interviews with knowledgeable officials to learn their procedures for collecting and analyzing the data, and performed independent tests that included verifying WMATA’s final tally of track defect and fire incidents and verifying that there were no extended periods of time in which data were missing. We also provided WMATA officials a set of data reliability questions to determine whether their procedures were sufficient. After performing these steps, we determined that the data were sufficiently reliable for the purposes of our report.
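One of the independent tests described above, verifying that there were no extended periods of missing data, can be sketched as a scan for large gaps between incident dates. The 30-day threshold and the data layout here are illustrative assumptions, not the parameters of the actual test:

```python
from datetime import date

def find_gaps(incident_dates, max_gap_days=30):
    """Return pairs of consecutive incident dates separated by more than
    max_gap_days, flagging possible stretches of missing data."""
    ordered = sorted(incident_dates)
    return [(earlier, later)
            for earlier, later in zip(ordered, ordered[1:])
            if (later - earlier).days > max_gap_days]

sample = [date(2017, 7, 1), date(2017, 7, 15), date(2017, 10, 2)]
print(find_gaps(sample))  # flags the July-October gap
```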
In our interviews with WMATA, officials also described the goals they had created for the track preventive maintenance program and their progress in meeting those goals, and they provided documentation to demonstrate their progress, which we reviewed. We also interviewed officials from the American Public Transportation Association and the American Railway Engineering and Maintenance-of-Way Association about best maintenance practices in the transit industry. We then compared WMATA’s track preventive maintenance program to leading program management practices identified by the Project Management Institute, Inc.’s The Standard for Program Management and internal control standards published by the Committee of Sponsoring Organizations of the Treadway Commission (COSO). The Project Management Institute’s standards are utilized worldwide and provide guidance on how to manage various aspects of projects, programs, and portfolios. In particular, The Standard for Program Management provides guidance that is generally recognized to support good program-management practices for most programs, most of the time.
We conducted our work from November 2017 to January 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Comments from the Washington Metropolitan Area Transit Authority
Appendix III: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Matt Barranca (Assistant Director), Richard Jorgenson (Analyst in Charge), Melissa Bodeau, Lacey Coppage, Cory Gerlach, Erin Guinn-Villareal, Kirsten Lauber, Joshua Ormond, and Patrick Tierney made significant contributions to this report. | Why GAO Did This Study
Safety incidents in recent years on WMATA's rail system have raised questions about its processes for performing critical maintenance and replacing capital assets. WMATA initiated a new preventive maintenance program for its rail track in 2017, and is currently implementing a new capital planning process.
GAO was asked to examine issues related to WMATA's capital funding and maintenance practices. This report examines: (1) how WMATA spent its capital funds from fiscal years 2011 through 2017, (2) how WMATA's new capital planning process addresses weaknesses it identified in the prior process, and (3) WMATA's progress toward its track preventive maintenance program's goals and how the program aligns with leading program management practices. GAO analyzed WMATA's financial and program information, interviewed officials of WMATA, the Federal Transit Administration, and five transit agencies selected for similarities to WMATA. GAO compared WMATA's capital planning process and track maintenance program with leading practices.
What GAO Found
From fiscal years 2011 through 2017, the Washington Metropolitan Area Transit Authority (WMATA) spent almost $6 billion on a variety of capital assets, with the largest share spent on improving its rail and bus fleet (see figure). Over this period, WMATA's capital spending was, on average, about $845 million annually.
WMATA's new capital planning process could address some weaknesses it identified in the prior process. WMATA established a framework for quantitatively prioritizing capital needs (investments in a group of related assets) over a 10-year period. However, WMATA has not established documented policies and procedures for implementing the new process, such as those for selecting specific projects for funding in its annual capital budget. WMATA is currently using its new capital planning process to make fiscal year 2020 investment decisions and has proposed a fiscal year 2020 capital budget of $1.4 billion. Without documented policies and procedures for implementing the new planning process, WMATA's stakeholders do not have reasonable assurance that WMATA is following a sound process for making investment decisions.
WMATA has made significant progress toward its track preventive maintenance program's goals, which are to reduce both track-defect and electrical-fire incidents by 50 percent in fiscal year 2019 compared with 2017. In fiscal year 2018, WMATA met its goal for reducing track defect incidents and reduced electrical fire incidents by 20 percent. However, in designing the program, WMATA did not fully assess risks. For example, WMATA did not quantitatively assess the impact of track defects or electrical fires on its ability to provide service, nor did it consider other risks such as non-electrical track fires, which represent about 30 percent of all fires on the system, or other factors, such as resources or track time. Without a comprehensive risk assessment, WMATA lacks reasonable assurance that the program is designed to address risks affecting the safety of the rail system or other risks that could hinder the new program's success.
What GAO Recommends
GAO is making five recommendations, including that WMATA establish documented policies and procedures for the new capital planning process and conduct a comprehensive risk assessment for the track preventive maintenance program. WMATA described actions planned or underway to address GAO's recommendations. GAO believes the recommendations should be fully implemented, as discussed in the report. |
Background
Recent Legislation Related to IPR Enforcement
The Trade Facilitation and Trade Enforcement Act of 2015 (TFTEA) includes provisions related to IPR enforcement, among other things. According to CBP, the act codified existing CBP activities and supports CBP’s efforts to protect U.S. economic security through trade enforcement, to collaborate with the private sector, and to streamline and modernize business processes to meet the demands and complexities of a changing global supply chain. The act defines trade enforcement as the enforcement of the customs and trade laws of the United States. TFTEA requires the development and implementation of Centers of Excellence and Expertise (Centers), which CBP began piloting in 2010, and centralizes CBP’s trade enforcement and trade facilitation efforts. Among other things, TFTEA

directs the CBP Commissioner to establish IPR as a priority trade issue;

provides CBP with explicit authority to share certain information with trademark and copyright owners prior to completing a seizure;

directs the Secretary of the Department of Homeland Security to establish the government-wide National Intellectual Property Rights Coordination Center (IPR Center) within ICE;

requires the Assistant Director of the IPR Center to coordinate with CBP and ICE, along with other agencies; and

requires the Assistant Director of the IPR Center to work with CBP and other federal agencies to conduct outreach to the private sector.
TFTEA also includes reporting requirements for CBP and ICE. Specifically, TFTEA requires CBP and ICE to submit a joint strategic plan every 2 years that, among other things, describes their efforts to enforce IPR and makes recommendations for the optimal allocation of resources to ensure adequate enforcement. TFTEA also requires the agencies to submit a joint report annually that includes specific IPR criminal and border enforcement metrics, a summary of outreach efforts, and a summary of efforts to address the challenges presented by Internet commerce and the transit of small packages.
Roles of CBP and ICE in IPR Enforcement
CBP and ICE both play critical roles in IPR enforcement. CBP’s responsibilities include identifying and seizing IPR-infringing goods at the U.S. border, a function that also includes assessing penalties and denying entry to certain types of IPR-infringing goods. ICE’s responsibilities include investigating IPR violations, building cases for federal prosecution, and serving as the lead agency for the IPR Center. CBP employs a risk-based approach that uses targeting and other tools to identify for further examination a selection of imported goods that have arrived at U.S. ports; when violations are found, CBP seizes infringing goods and may refer cases to ICE for criminal investigation. Figure 1 shows CBP’s and ICE’s roles in IPR enforcement at the U.S. border.
CBP. CBP’s trade policy, processing, and enforcement operations, including those related to IPR, are primarily carried out by two offices—the Office of Trade and the Office of Field Operations.
The Office of Trade develops policies to guide trade enforcement efforts. The Office of Field Operations conducts a range of trade processing and enforcement activities at more than 300 ports, where people and goods enter the country by land, air, or sea. At these ports, CBP officers and import specialists target potentially IPR-infringing goods, conduct examinations, and detain items if officers suspect they are counterfeit.
Import specialists working for the Office of Field Operations’ 10 Centers appraise and evaluate detained goods to identify any IPR violation. As we reported in June 2017, the creation of the Centers represented a shift in CBP’s approach to trade operations, centralizing the processing of imported goods on a national scale through industry-focused Centers rather than individual ports of entry.
In determining goods’ authenticity, CBP relies on product information supplied by rights holders and prioritizes enforcement of IPR that rights holders have recorded with CBP, using the Intellectual Property Rights e-Recordation database. CBP also uses product identification manuals that are prepared by rights holders and linked to the database. In addition, CBP may consult with rights holders as part of the examination process. If CBP determines that a good is counterfeit, it seizes and destroys the good and may assess penalties if warranted.
IPR enforcement is one of seven priority trade issues around which CBP focuses its activities and resources for trade facilitation and enforcement. Priority trade issues represent high-risk areas that can cause significant revenue loss, harm the U.S. economy, or threaten the health and safety of the American people, according to CBP. In 2017, we evaluated CBP’s trade enforcement efforts and found that CBP’s plans for its priority trade issues generally lacked performance targets that would enable it to assess the effectiveness of its enforcement activities. We recommended that CBP include performance targets in its plans for priority trade issues; CBP concurred with this recommendation.
ICE. ICE’s Homeland Security Investigations is responsible for a wide range of domestic and international criminal investigations arising from the illegal movement of people and goods into, within, and out of the United States, including the importation and exportation of counterfeit goods. ICE field agents work with CBP and various partners in their investigations of identified cases of IP crime. In addition, the ICE-led, multi-agency IPR Center coordinates with other federal agencies on IPR infringement investigations, law enforcement training, and private sector and public outreach. The IPR Center brings together many of the key domestic and foreign investigative agencies to leverage resources and promote a comprehensive response to IP crime.
Risks Associated with the Counterfeit Goods Market
Counterfeit goods may pose risks to the health and safety of consumers. CBP and ICE have seized and investigated counterfeit goods, such as health and personal care products and consumer electronics, that carried a number of health and safety risks. For example, CBP has seized counterfeit versions of personal care products such as contact lenses, perfume, hair removal devices, hair curlers and straighteners, skin cleansing devices, and condoms, which pose risks to the consumer that include damage to skin or eyes caused by dangerous chemicals and bacteria, burning or electrocution due to nonstandardized wiring, or ineffectual family planning protection. ICE has also investigated IP crimes involving counterfeit airbags, phone accessories, pharmaceuticals, and other items that present risks to the health and safety of consumers. Counterfeit electronics and batteries can also pose significant risks, including the risk of injury or death, according to CBP. For instance, in December 2015, CBP seized 1,378 hoverboards with counterfeit batteries that carried a risk of causing fires.
In addition, the sale of counterfeit goods can pose a threat to national security. For example, CBP and ICE have seized and investigated counterfeit goods, such as integrated circuits, destined for Department of Defense supply chains. Additionally, counterfeiting has been linked to transnational organized crime and terrorist organizations. According to the United Nations Office of Drugs and Crime, the illicit trafficking of counterfeit goods is an increasingly attractive avenue for criminal organizations to diversify their product range. Criminal networks use similar routes and methods to move counterfeit goods as they use to smuggle drugs, firearms, and people, according to reports from U.S. law enforcement and international organizations. The high rate of return on investment and perceived low risk of prosecution associated with IP crimes make counterfeiting attractive to criminal organizations as a lucrative source of revenue, according to the IPR Center.
In 2010, we reported that counterfeiting also posed a wide range of economic risks to consumers, industry, government, and the economy as a whole. Counterfeiting’s economic effects on consumers include, for example, financial losses resulting from counterfeit products that fail due to inferior quality. In addition, counterfeiting may pose risks to industry and government by increasing IPR protection and enforcement costs, by affecting sales and brand value for the businesses whose products are counterfeited, and by potentially reducing tax revenue collected by the government. Finally, counterfeiting may harm the U.S. economy as a whole by slowing economic growth, resulting in the loss of jobs in IP-intensive industries, according to the Congressional Research Service.
Accelerated by E-Commerce, Changes in the Counterfeits Market Present Challenges to U.S. Agencies, Consumers, and the Private Sector
Driven in part by the rise of e-commerce, the market for counterfeit goods in the United States has shifted in recent years from one in which consumers often knowingly purchased counterfeits to one in which counterfeiters try to deceive consumers into buying goods they believe are authentic. According to CBP officials and seizure data, the volume, value, and variety of counterfeit goods entering the United States increased in fiscal years 2012 through 2016, and counterfeit goods were increasingly imported in smaller express-carrier or mail packages. The results of our undercover purchases from third-party sellers indicate that counterfeit goods are available on a variety of popular e-commerce websites frequented by U.S. consumers. These changes in the marketplace present a number of challenges for U.S. agencies, the private sector, and consumers.
E-Commerce Has Contributed to a Shift in the Market for Counterfeit Goods
The rise of e-commerce has contributed to a fundamental change in the market for counterfeit goods, according to our analysis of documents from CBP, ICE, and international organizations and our interviews with CBP and ICE officials. U.S. agencies and international organizations have observed a shift in the sale of counterfeit goods from “underground” or secondary markets, such as flea markets or sidewalk vendors, to primary markets, including e-commerce websites, corporate and government supply chains, and traditional retail stores, where consumers typically believe they are purchasing authentic goods. This shift has been accompanied by changes in the ways in which counterfeit goods are sold, as shown in table 1.
In the past, consumers could often rely on indicators such as appearance, price, or location of sale to identify counterfeit goods in the marketplace, but counterfeiters have adopted new ways to deceive consumers. Consumers may have difficulty differentiating between counterfeit and authentic goods in the primary market for several reasons:
The physical appearance of counterfeit goods may no longer serve as a “red flag” for consumers that the good they are considering purchasing is not genuine. Counterfeit goods and their packaging are becoming more sophisticated and closely resemble genuine goods, making it difficult for consumers, law enforcement, and sometimes even manufacturers to identify counterfeit goods, according to CBP and ICE officials.
When selling online, counterfeiters may post pictures of authentic goods on the websites where they are selling counterfeits and may post pseudonymous reviews of their products or businesses in order to appear legitimate.
By setting the price of a counterfeit at, or close to, the retail price of a genuine good, counterfeiters are able to deceive consumers, who pay the higher price because they believe the goods are genuine or because they believe they are getting a slight bargain on genuine goods.
Counterfeiters exploit third-party online marketplaces to gain an appearance of legitimacy and access to consumers, according to the Federal Bureau of Investigation.
The growth of e-commerce has provided additional opportunities for counterfeiters to deceive consumers, according to U.S. agencies and international organizations. In June 2000, approximately 22 percent of Americans reported having made a purchase online, but by December 2016 that portion of the population had risen to 79 percent, according to a study by Pew Research Center. Worldwide e-commerce sales are expected to reach over $4 trillion by 2020, and e-commerce retail sales are expected to reach nearly 15 percent of overall global retail spending in 2020, according to CBP’s E-Commerce and Small Business Branch. CBP also has reported that e-commerce is increasing and altering global trade, as consumers import and export goods and services when they make purchases over the Internet, allowing for more cross-border transactions and giving counterfeiters direct access to consumers through the Internet.
CBP Data Indicate Changes in Several Key Characteristics of Counterfeit Goods Seized
According to CBP seizure data and CBP officials, the volume, value, and variety of counterfeit goods seized by CBP and ICE have increased. CBP reports indicate the number of IPR seizures increased by 38 percent in fiscal years 2012 through 2016, from approximately 22,850 seizures in fiscal year 2012 to an estimated 31,560 seizures in fiscal year 2016. The total estimated value of the seized goods, had they been genuine, increased by 10 percent, from about $1.26 billion in fiscal year 2012 to an estimated value of over $1.38 billion in fiscal year 2016. According to CBP data, most of the goods seized during this period were shipped from China and Hong Kong. Counterfeit goods originating in China accounted for approximately half of all IPR seizures in fiscal years 2012 through 2016, and counterfeit goods shipped from Hong Kong represented over one-third of all IPR seizures over the same time frame. As the number of IPR seizures increased from 2012 to 2016, the proportion of seizures shipped from China and Hong Kong remained fairly stable, ranging from 83 percent of all IPR seizures in 2014 and 2015 to 94 percent in 2013, as shown in figure 2.
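These percentage changes can be reproduced directly from the reported totals. The short Python sketch below is purely illustrative; the input figures are the fiscal year 2012 and 2016 totals from the CBP reports cited above, and the rounding convention is our own.

    def percent_change(start, end):
        """Percentage change from start to end, rounded to the nearest whole percent."""
        return round(100 * (end - start) / start)

    # Number of IPR seizures, fiscal years 2012 and 2016
    print(percent_change(22_850, 31_560))  # 38, i.e., a 38 percent increase

    # Estimated value of seized goods had they been genuine, in dollars
    print(percent_change(1.26e9, 1.38e9))  # 10, i.e., a 10 percent increase

The same arithmetic underlies the express carrier and cargo seizure percentages reported later in this section.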
The variety of products being counterfeited has also increased, according to CBP officials. CBP and ICE noted that, while many consumers typically think of luxury handbags or watches as the most commonly counterfeited goods, counterfeiting occurs in nearly every industry and across a broad range of products. According to CBP officials we interviewed in headquarters and CBP and ICE port officials, almost any product can be counterfeited. For example, major seizure operations in fiscal year 2016 resulted in the confiscation of automobile parts, consumer electronics, pharmaceuticals, sports-related merchandise, semiconductor devices, furniture, and hoverboards. In fiscal year 2016, the commodity types with the highest number of seizures were apparel, consumer electronics, footwear, watches, and jewelry.
In addition, according to CBP data and officials, the ways in which counterfeit goods are imported into the United States have changed in recent years. Specifically, express carriers and international mail have become the predominant forms of transportation for IPR-infringing goods entering the United States, constituting approximately 90 percent of all IPR seizures in fiscal years 2015 and 2016, according to CBP data and officials. The number of IPR seizures from express carrier shipments increased by 105 percent from fiscal year 2012 through fiscal year 2016, while the number of IPR seizures shipped by cargo increased by 6 percent over the same period. Similarly, the total value of express carrier seizures increased by 337 percent from fiscal year 2012 through fiscal year 2016, while the total value of cargo seizures decreased by 34 percent over the same period.
CBP and ICE have attributed the increase in seizures of mail and express carrier shipments to three factors: continued growth of online counterfeit merchandise sales, which facilitate direct-to-consumer shipments of infringing goods; training by rights holders and coordination between CBP and ICE, which have helped CBP and ICE to focus more enforcement efforts on express carrier operations; and counterfeiters’ response to enforcement efforts.
According to an IPR Center report, counterfeiters may assume that multiple, smaller packages are more likely to elude seizure than a single large shipment and may view the seizure of a few packages as the cost of doing business.
Twenty of 47 Items Purchased from Third-Party Sellers on Popular E-Commerce Websites Were Counterfeits, Highlighting Potential Risks to Consumers
In an attempt to understand the frequency with which consumers may unknowingly encounter counterfeit products online, we purchased a nongeneralizable sample of four types of consumer products—shoes, travel mugs, cosmetics, and phone chargers—from third-party sellers on five popular e-commerce websites. According to CBP data and officials, CBP often seizes IPR-infringing counterfeits of these types of products. As table 2 shows, the rights holders for the four selected products determined 20 of the 47 items we purchased to be counterfeit.
We did not identify any clear reasons for the variation between the counterfeit and authentic items we purchased, whether based on the products they represented, the e-commerce websites from which they were purchased, or the third-party sellers from whom they were purchased. For three of the four product types, at least one item we purchased was determined to be counterfeit, with results varying considerably by product. Representatives of the rights holders could not provide a specific explanation for the variation among authentic and counterfeit goods that we received. They noted that the results of undercover purchases can fluctuate depending on enforcement activities and the variety of goods and sellers on a particular website on a given day. Rights-holder testing also showed that we purchased at least one counterfeit item and one authentic item from each of the five e-commerce websites. In addition, our analysis of the customer ratings of third-party sellers from whom we made purchases did not provide any clear indications that could warn consumers that a product marketed online may be counterfeit. For example, we received both counterfeit and authentic items from third-party sellers with ratings that were less than 70 percent positive as well as sellers with ratings that were up to 100 percent positive.
Some counterfeit items we purchased were easily identifiable as likely counterfeit once we received them. Rights holders were able to determine that they were not authentic on the basis of inferior quality, incorrect markings or construction, and incorrect labeling. For example, one item contained misspellings of “Austin, TX” and “Made in China,” as figure 3 shows.
Other items could be more difficult for a typical consumer to identify as counterfeit. For example, the rights holder for a cosmetic product we purchased identified one counterfeit item on the basis of discrepancies in the color, composition, and design of the authentic and counterfeit items’ packaging, as figure 4 shows.
Counterfeit goods may also lack key elements of certification markings and other identifiers. For example, on a counterfeit phone charger we purchased, the UL certification mark did not include all components of the authentic mark, as shown in figure 5.
The risks associated with the types of counterfeit goods we purchased can extend beyond the infringement of a company’s IPR. For example, a UL investigation of counterfeit iPhone adapters found a 99 percent failure rate in 400 counterfeit adapters tested for safety, fire, and shock hazards and found that 12 of the adapters posed a risk of lethal electrocution to the user. Similarly, counterfeits of common consumer goods, such as Yeti travel mugs, may contain higher-than-approved concentrations of dangerous chemicals such as lead, posing health risks to consumers. According to ICE, seized counterfeit cosmetics have been found to contain hazardous substances, including cyanide, arsenic, mercury, lead, urine, and rat droppings.
Representatives of rights holders and e-commerce websites whom we interviewed reported taking independent action to try to protect IPR within their areas of responsibility. Both rights holders and e-commerce websites maintain IPR protection teams that work with one another and with law enforcement to address infringement issues. These teams may include global networks of investigators and contracted brand-protection companies. E-commerce websites may also take a variety of steps to block and remove counterfeit items listed by third-party sellers. These efforts rely on data collected through a variety of means, including consumer reporting of counterfeits, notification by rights holders of IPR infringement, and corporate efforts to vet potential third-party sellers, according to private sector representatives. According to these representatives, both rights holders and e-commerce websites have utilized technology to aid their efforts. For example, one rights holder uses search-engine “crawlers” to find terms commonly associated with counterfeit sales, in an effort to identify illicit sites and the individuals behind them, while one e-commerce website maintains a large database of information on the history and activity of its sellers.
According to representatives of rights holders, consumers can best protect themselves by buying directly from the manufacturer or its authorized retailers online, avoiding prices that look “too good to be true,” and reporting counterfeit purchases. For additional actions that consumer protection organizations, government agencies, and private companies have recommended consumers take to limit the risk of purchasing counterfeits online, see appendix II.
Changes in the Marketplace Can Pose Challenges to U.S. Agencies and the Private Sector
We identified a number of key challenges that the changes in the market for counterfeit goods can pose to CBP and ICE as well as to the private sector. First, the increasing sophistication of counterfeits can make it difficult for law enforcement officers to distinguish between legitimate and counterfeit goods. According to CBP officers, because the quality of counterfeits is improving, inspecting and processing a seizure can be time consuming and often requires working with private industry to test potential counterfeits.
Second, the increased variety and quantity of counterfeit goods crossing the border complicate CBP and ICE enforcement efforts. As the range of counterfeit goods expands, CBP has a wider variety of goods to screen, which requires CBP officials to have in-depth knowledge of a broad range of products and of how to identify counterfeits. The overall volume of goods entering the country—including more than 11 million maritime containers; 13 million containers carried over land borders by truck or rail; and 250 million cargo, mail, and express carrier packages annually—can also be difficult to manage, according to CBP officials. CBP has responsibility for facilitating trade as well as preventing the importation of illicit goods—missions that can conflict when attempts to identify illicit goods threaten to slow the movement of legitimate trade. Additionally, the increased volume of imports at specific locations can strain CBP resources. For example, CBP officials at one international mail facility noted that the volume of both incoming mail and counterfeit goods increased exponentially when some international mail shipments from China were rerouted to enter the United States through that port.
Third, shifts in the mode of transportation of counterfeit goods to the United States pose additional challenges to CBP and ICE. According to CBP officials, seizure processing takes roughly the same amount of time and costs the same regardless of shipment size or value, which means that CBP must expend the same resources to seize an express carrier shipment that contains a few infringing goods as it would to seize a large cargo container with hundreds of infringing goods. Another effect of the shift in transportation mode is that seizures have become less of a deterrent for counterfeiters who break up large shipments into multiple smaller express carrier or mail packages. Each of these smaller packages includes fewer goods than a single large shipment, decreasing the counterfeiter’s risk of losing significant quantities of merchandise to a single seizure. Furthermore, the shift in mode of transportation affects CBP’s ability to target shipments in advance. For example, as we have previously reported, the mail environment generally does not provide CBP with access to advance information that can be used for targeting or package retrieval. In other shipping environments, CBP officials may have access to advance information that they can use to target potentially counterfeit goods.
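The deterrence logic described above can be made concrete with a simple expected-value sketch. The interception probability and shipment sizes below are hypothetical assumptions chosen for illustration, not CBP figures:

    # Hypothetical illustration; probability and shipment sizes are assumptions.
    TOTAL_UNITS = 1_000        # counterfeit items a counterfeiter wants to ship
    P_INTERCEPT = 0.05         # assumed chance that any one shipment is seized

    for packages in (1, 100):  # one cargo container vs. 100 small parcels
        units_per_package = TOTAL_UNITS // packages
        expected_units_lost = P_INTERCEPT * TOTAL_UNITS  # 50 in both cases
        worst_single_seizure = units_per_package         # loss from one seizure
        print(packages, expected_units_lost, worst_single_seizure)

Under these assumptions, the expected number of units lost (50) is the same either way, but a single seizure can cost the counterfeiter all 1,000 units when they travel in one container and only 10 units when they travel in 100 parcels, consistent with officials' observation that seizures have become less of a deterrent.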
Fourth, counterfeiters may use a variety of methods to try to deceive law enforcement or evade detection. A large majority of infringing products are produced overseas and shipped to the United States, according to the Intellectual Property Enforcement Coordinator. According to CBP officials and CBP, IPR Center, and Intellectual Property Enforcement Coordinator reports, counterfeiters may try to evade detection in a number of ways. For example, counterfeiters sometimes separate IPR-infringing labels from counterfeit goods during the transportation process and then complete the labeling and packaging of the goods in the United States (see fig. 6). In fiscal year 2016, CBP seized 572 shipments containing counterfeit labels and tags intended to be applied to articles after importation to create non-genuine products, which CBP estimated would be worth more than $17 million if they were genuine.
Finally, CBP and ICE officials noted that targeting the root causes of IPR infringement requires international cooperation to disrupt the networks that produce, sell, and ship counterfeit goods. IPR enforcement is a global issue, as counterfeit operations may cross several borders; however, officials said some countries are more receptive to working with U.S. agencies than others. For example, ICE officials noted that some countries, such as China, do not have stringent IP laws in place or do not enforce existing laws. Officials added that it can be difficult to convince some countries to take IP theft seriously when it constitutes a large part of their economy.
The changing marketplace also presents challenges to the private sector, according to representatives from rights holders and e-commerce websites. It is more difficult for rights holders and e-commerce websites to identify and investigate individual counterfeit cases as e-commerce websites face growing inventory from a larger registry of sellers. Tracking goods from known counterfeiters through various website fulfillment and delivery mechanisms is also a significant challenge for the private sector.
The growth of e-commerce has accelerated the pace at which counterfeiters can gain access to consumers or reinvent themselves if shut down. E-commerce platforms on mobile devices, for example, represent the newest space in which counterfeiters can operate.
CBP and ICE Engage in Activities to Enhance IPR Enforcement, but CBP Has Not Fully Evaluated the Results of Its Activities
CBP and ICE engage in a number of activities to enhance IPR enforcement and have collected performance data on the activities we reviewed. However, CBP has conducted limited evaluation of its IPR enforcement, while ICE has taken some steps to evaluate the impact of its efforts.
CBP and ICE Undertake Several Types of Activities to Enhance IPR Enforcement
According to our analysis of CBP and ICE documents and interviews with CBP and ICE officials, CBP and ICE undertake a variety of activities to enforce IPR, including (1) detecting potentially infringing goods, (2) conducting special operations, (3) engaging with international partners, and (4) undertaking localized pilot programs or port-led initiatives.
Detecting potentially IPR-infringing goods. CBP and ICE engage in a number of activities to detect imports of potentially IPR-infringing goods. For example, CBP officers at each port have responsibilities for targeting such goods, and CBP conducts targeting and trend analysis at the national level. As we observed during our port visits, CBP also uses its Automated Targeting System to review data on inbound and outbound shipments and to identify shipments of potential concern. CBP has created two IPR targeting models for the system. In addition, CBP and ICE both maintain online systems for reporting allegations of counterfeiting and other IPR infringements.
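At a conceptual level, a targeting model such as those mentioned above scores shipment data against risk indicators and flags high-scoring shipments for inspection. The sketch below is entirely hypothetical; CBP's actual Automated Targeting System models, data fields, and weights are not public, and every rule and threshold shown here is our own assumption:

    # Entirely hypothetical rule-based risk scoring, for illustration only.
    # Field names, rules, and weights are assumptions, not CBP's actual models.
    def risk_score(shipment):
        score = 0
        if shipment.get("declared_value", 0) < 50:
            score += 2   # unusually low declared value for branded goods
        if shipment.get("origin_flagged"):
            score += 3   # origin previously associated with IPR seizures
        if shipment.get("consignee_history") == "new":
            score += 1   # consignee has no prior import history
        return score

    shipment = {"declared_value": 20, "origin_flagged": True,
                "consignee_history": "new"}
    if risk_score(shipment) >= 4:
        print("Flag shipment for physical examination")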
Conducting special operations. CBP and ICE periodically conduct special operations—such as operations focused on particular products or surge operations that provide additional manpower to examine a larger number of shipments—at U.S. ports of entry. CBP’s Mobile Intellectual Property Enforcement Team (MIPET) and ICE’s national operations are examples of activities in this area of effort.
Engaging with international partners. IPR enforcement requires coordination with international partners. The IPR Center includes representatives of the governments of Canada and Mexico, as well as international law enforcement entities like Interpol and Europol. CBP and ICE also work with the customs and law enforcement agencies in other countries to share information, provide training, and conduct joint operations.
Undertaking localized pilots and port-led initiatives. CBP and ICE delegate much of the responsibility for day-to-day enforcement to ports, Centers, and field offices. This allows CBP’s headquarters offices to test pilot programs in a small number of ports and allows ports and Centers to initiate their own activities to enhance IPR enforcement. CBP engaged in localized pilots or port-led initiatives to enhance IPR enforcement at each of the locations we visited.
Within these areas of effort, CBP and ICE have undertaken activities to enhance their IPR enforcement. We selected and reviewed eight activities in these four categories, as shown in table 3.
CBP and ICE Have Collected Some Performance Data on IPR Enforcement Activities
Consistent with federal internal control standards, CBP and ICE have collected some data on the results of each of the eight activities we reviewed. Generally, the agencies collected information on the outputs of the selected activities, such as the number and value of seizures resulting from these activities (see table 4).
CBP Has Conducted Limited Evaluation of Its IPR Enforcement
We found that CBP has conducted limited evaluation of the impact of its efforts to enhance IPR enforcement. In particular, (1) CBP’s metrics for tracking the overall effectiveness of its IPR enforcement have limitations, (2) CBP has not systematically evaluated individual IPR enforcement activities, and (3) CBP lacks a defined process for assessing port-led initiatives and sharing information about effective practices.
First, CBP’s metrics for tracking the overall effectiveness of its IPR enforcement have limitations. When asked how they assess the effectiveness of CBP’s IPR enforcement, CBP officials in headquarters cited an increase in the number and value of IPR seizures as an indication of the effectiveness of CBP’s IPR enforcement efforts. However, while seizure statistics provide important information about CBP activities, using seizure data to measure the effectiveness of CBP’s IPR enforcement has limitations. For example, according to the U.S. Joint Strategic Plan on Intellectual Property Enforcement for fiscal years 2017 through 2019, it is difficult to determine whether an increase in the number of IPR seizures represents a result of more-effective IPR enforcement or reflects a higher volume of trade in counterfeits. Also, according to CBP officials, the increasing shift from seizures of large cargo shipments to seizures of smaller express carrier and mail shipments may partially explain the growth in the number of reported seizures. Further, while CBP officials in headquarters noted that the overall value of IPR seizures has increased, CBP officials in the field observed that presenting CBP seizure statistics in relation to the overall volume of trade could provide additional context on whether CBP is seizing a larger portion of overall shipments or whether increased seizures might be partially attributable to an increase in the volume of trade. Other CBP officials noted that, in theory, effective enforcement could cause the number of seizures to decrease as the number of counterfeits entering the country also decreases. Finally, given the volume of trade in counterfeits, CBP officials commented that CBP cannot “seize its way out of” the problem of IP theft.
Second, CBP has not systematically evaluated its individual IPR enforcement activities and has not followed through on previous plans to conduct such evaluations. We identified one instance in which CBP evaluated an IPR enforcement activity. Specifically, CBP officials conducted an analysis of the fiscal year 2016 expedited seizure processing pilot and identified several benefits, including savings of frontline officer hours and time and cost savings, associated with seizure processing. While CBP has acknowledged the need to evaluate other IPR enforcement activities, it has not followed through on previous plans to conduct evaluations. For example, CBP’s 2010 IPR Enforcement Strategy: 5-Year Plan laid out goals and corresponding activities that it planned to pursue. CBP outlined specific plans to evaluate all but one of these goals at least once over the course of the 5-year period covered by the strategy. In response to our questions about what activities had been undertaken and how they had been evaluated, CBP could not provide evidence that it had conducted evaluations of any of these activities as planned.
CBP has more recently said that it plans to evaluate other IPR enforcement efforts to better understand their impact. For example, one goal of MIPET and other surge operations is to build the capacity of officers at participating ports. The U.S. Joint Strategic Plan on Intellectual Property Enforcement for fiscal years 2017 through 2019 notes that CBP intends to assess ports after surge operations to determine their effect on long-term interdiction rates. Additionally, although CBP tracks the accuracy of its Automated Targeting System’s IPR targeting models, a CBP official stated that CBP has not evaluated the extent to which its officers use these models at ports of entry. Officials said that such evaluation would be beneficial for determining whether to continue using the models and, if so, whether policy changes are needed to improve their use. The U.S. Joint Strategic Plan on Intellectual Property Enforcement also states that CBP plans to evaluate the voluntary abandonments pilot, and CBP officials noted their intention to evaluate compliance rates in various e-commerce environments to inform future enforcement efforts.
Finally, CBP does not have a standard process for collecting information about the results of port-led initiatives to enhance IPR enforcement and for sharing this information internally. We have previously noted that agencies can use pilots and demonstration projects to identify innovative ways to improve performance, because pilots and demonstration projects allow for experiences to be evaluated, shared systematically with others, and adjusted as appropriate. CBP’s decentralized structure allows it to pilot new activities at individual ports. CBP officials stated that they currently collect information on special operations conducted at ports but that they do not have a standardized process for assessing port-led efforts and sharing information on process improvements. Officials also noted that they sometimes share information about port-led efforts during quarterly phone calls and stated that they had shared information about the expedited seizure processing initiative and the Special Operations Team in such calls. However, they were unable to provide examples of information about other port-led initiatives that had been shared through this process. Officials we interviewed in the field and in headquarters indicated that sharing of such information could be useful.
Federal internal control standards state that agency management should use data it collects to make informed decisions and evaluate the agency’s performance in achieving key objectives. According to federal program evaluation guidance, which articulates best practices for program evaluation, a program evaluation is a systematic study using research methods to collect and analyze data to assess how well a program is working and why. Program evaluation is closely related to performance measurement and reporting. Evaluations answer specific questions about program performance; may focus on assessing program operations or results; and can play a key role in strategic planning and program management, providing feedback on both program design and execution. CBP officials acknowledged that further steps to evaluate their IPR enforcement efforts would be useful. Without evaluations of, or more complete information about, the results of its efforts, CBP may not have the information it needs to direct its resources to the most effective enforcement activities.
ICE Has Taken Some Steps to Assess Its Efforts
While ICE officials identified a number of challenges that affect their ability to track the effectiveness of IPR enforcement activities, the agency has taken steps to understand the impacts of some of its efforts. ICE officials noted that evaluating the impacts of specific IPR enforcement activities, including those we reviewed, can be difficult, because these impacts ultimately rely on prosecutors’ decisions to pursue criminal charges—that is, decisions over which ICE has no control. ICE officials also noted factors that limit the usefulness of enforcement statistics, such as arrests or convictions for IPR-related offenses, as measures of the effectiveness of ICE’s IPR enforcement activities. First, according to ICE officials, prosecutors for some cases that start as IPR investigations ultimately pursue money laundering or other, related charges, because they carry harsher penalties. Second, while ICE collects data on enforcement outcomes by fiscal year, the complicated nature of some investigations often causes a significant amount of time to elapse between an investigation’s start and any results. Thus, various IPR enforcement statistics reported for a single fiscal year, such as the number of cases initiated, arrests made, or convictions secured, may be unrelated, making it sometimes difficult to link enforcement outcomes to ICE investigations.
To address some of these challenges, ICE has created a process to track cases it deems significant, which, according to ICE officials, will allow it to better understand the impact of its efforts. ICE officials told us that ICE had developed a set of criteria for what constitutes a significant case and that a panel reviews proposals from the field to determine whether an investigation meets the criteria for a significant case. If a case is deemed significant, ICE tracks it until (1) the criminal activity is disrupted (i.e., actions taken as part of the investigation impede the operations of the target organization) or (2) a criminal organization is dismantled (i.e., the leadership, network, and financial base of the target organization are impeded to the point where it is unable to reconstitute itself). According to ICE, of the 115 IPR-related investigations that were deemed significant cases in fiscal years 2012 through 2016, 59 cases, or about 51 percent, had resulted in a disruption of criminal activity or dismantlement of a criminal organization as of January 2017.
CBP and ICE Generally Collaborate on IPR Enforcement, but CBP Is Restricted in Sharing Information with the Private Sector
Our analysis showed that CBP and ICE collaboration on IPR enforcement is generally consistent with selected key practices for interagency collaboration and that the agencies collaborated to address some challenges they have faced with the creation of the Centers. CBP and ICE also coordinate with the private sector in a variety of ways. However, according to private sector representatives we spoke to, restrictions on CBP’s information sharing limit the ability of rights holders and e-commerce websites to protect IPR.
Collaboration between CBP and ICE on IPR Enforcement Is Generally Consistent with Selected Key Practices
CBP and ICE collaborate on IPR enforcement in ways that are generally consistent with the following selected key practices that we have previously identified as important for enhancing and sustaining collaboration among federal agencies: (1) define and articulate a common outcome; (2) establish mutually reinforcing or joint strategies; (3) identify and address needs by leveraging resources; (4) agree on roles and responsibilities; and (5) establish compatible policies, procedures, and other means to operate across agency boundaries.
Define and Articulate a Common Outcome
In developing the U.S. Joint Strategic Plan on Intellectual Property Enforcement, CBP and ICE, among other agencies, defined and articulated common IPR enforcement outcomes, and they continue to define common outcomes through interagency efforts. The plan’s seven objectives, mandated by the Prioritizing Resources and Organization for Intellectual Property Act of 2008, include reducing counterfeit and infringing goods in domestic and international supply chains, among others. For example, through the IPR Center, CBP and ICE coordinate special interagency operations that target IPR violations for specific industries or product types, such as beauty products, pharmaceuticals, or automotive parts (e.g., airbags).
Establish Mutually Reinforcing or Joint Strategies
CBP and ICE, among other agencies, participated in the development of the U.S. Joint Strategic Plan on Intellectual Property Enforcement for fiscal years 2017 through 2019 and completed a TFTEA-required joint strategic plan. The Prioritizing Resources and Organization for Intellectual Property Act of 2008 requires the U.S. Intellectual Property Enforcement Coordinator to coordinate the development of the Joint Strategic Plan on Intellectual Property Enforcement. This plan serves as a blueprint for the work CBP, ICE, and other federal agencies are to carry out in support of IPR enforcement. The joint strategic plan for fiscal years 2017 through 2019 notes that CBP and ICE will, among other things, engage in joint efforts, such as meeting at least annually with industry stakeholders to discuss potential new opportunities for employing technology to enhance identification and investigation of illicit trade. In addition, TFTEA required CBP and ICE to develop, by February 2017 and every 2 years thereafter, an interagency strategic plan for trade enforcement that includes information related to IPR enforcement. The agencies finalized this strategy in October 2017 and provided us with a copy after we had sent them our draft report for comment.
Identify and Address Needs by Leveraging Resources
CBP and ICE have leveraged IPR enforcement resources in a variety of ways. For example, according to a strategy issued by the IPR Center, ports and field offices may establish Trade Enforcement Coordination Centers and colocate CBP and ICE personnel to enhance information sharing and foster collaboration on enforcement actions. Officials in three of the locations we visited told us that colocating CBP and ICE staff or temporarily assigning some agency staff to the other agency improves the two agencies’ ability to work together. In addition, ICE officials at two of the locations we visited said that CBP officers share their expertise in operating the Automated Targeting System, which CBP officers use more frequently. ICE officials in one location also told us that CBP officers sometimes accompany ICE agents on investigative operations and that the ICE agents without IPR backgrounds find the CBP officers’ expertise helpful.
Internally, CBP also has taken steps to leverage resources. For example, CBP conducts surge operations, such as MIPET operations, to temporarily focus resources on specific IPR violations. In addition, according to CBP, the agency created the Centers to increase CBP’s industry knowledge.
Agree on Roles and Responsibilities
CBP and ICE have defined roles and responsibilities for a variety of interagency IPR enforcement efforts. For example, after CBP established the Centers, CBP and ICE jointly issued guidance that explained the Centers’ role in CBP and clarified CBP’s and ICE’s roles and responsibilities in the case-referral process. This guidance describes the process by which CBP may refer IPR-infringement cases to ICE, which is then responsible for determining whether to initiate an investigation. CBP defines intra-agency roles and responsibilities in its Trade Special Operations Standard Operating Procedures, which provide CBP personnel with direction for initiating, developing, and executing national-level trade targeting operations. For example, the standard operating procedures define the targeting roles for three CBP targeting groups—the National Targeting and Analysis Group, the Commercial Targeting and Analysis Center, and the Tactical Trade Targeting Unit—as well as for the Centers.
Establish Compatible Policies, Procedures, and Other Means to Operate across Agency Boundaries
CBP and ICE have established compatible policies, procedures, and other means to operate across agency boundaries. For example, CBP and ICE developed standard operating procedures for the Commercial Enforcement Analysis Response (CEAR) process—a process to ensure coordination between the agencies when violations are detected, agree on a response best suited to remedy the problem, and follow up on actions taken.
CBP and ICE have also taken steps to address some challenges they encountered following the creation of the Centers. Both CBP and ICE officials noted that the creation of the Centers has posed communication challenges, but the agencies have taken steps to address some of the challenges posed by the new organizational structure. Officials at ports we visited and Centers we interviewed noted that there were challenges associated with integrating the Centers, which operate nationally, into local efforts, like the CEAR process. This is consistent with our June 2017 report, in which we noted that ICE officials have had to adjust to working in the new, nationwide environment of the Centers. For example, ICE officials in one city may be working on a case with an import specialist located in another city. This has diminished cooperation and communication between CBP and ICE and resulted in fewer investigations, according to ICE officials. CBP and ICE have initiated steps to address some of the challenges posed by the new organizational structure. For example, CBP and ICE issued joint guidance in December 2016 outlining how the two agencies would coordinate with one another in light of the creation of the Centers. Additionally, according to CBP officials, the CEAR process was revised in September 2017 with the Centers in a lead role.
CBP officials also noted they have had to adapt to new ways of sharing information within the agency between officers and import specialists at Centers when processing a seizure. Officials at port locations we visited and at the Centers where we conducted interviews noted that the creation of the Centers has enhanced IPR enforcement. However, officials at the Centers and ports also noted challenges related to the sharing of information. For example, Center and port officials stated that sharing information about seizures via email and coordinating remotely—often across time zones—can extend the amount of time needed to process a seizure. Center officials also stated that ports may use different procedures for processing seizures, which can be challenging for the Centers because they operate on a national level and therefore may interact with a number of ports. CBP has initiated steps to address some challenges related to sharing information about seizures. For example, CBP is adding a function to upload photos and forms to its seizures database, allowing for enhanced information sharing across locations, according to CBP officials.
CBP and ICE Coordinate with the Private Sector in Several Ways, but Restrictions on CBP Information Sharing Limit Private Sector IPR Enforcement
CBP and ICE Work with Various Private Sector Entities to Enforce IPR
CBP and ICE work with a variety of private sector entities—including rights holders, industry groups, importers, and e-commerce websites, among others—to enforce IPR and prevent the sale of counterfeit goods on e-commerce websites, according to CBP and ICE documents and our interviews with CBP and ICE officials and private sector representatives. In particular, CBP and ICE work with the private sector to encourage rights holders to record trademarks and copyrights, make determinations on the authenticity of goods, conduct training, and collaborate with e-commerce websites.
Recording trademarks and copyrights. CBP and ICE conduct outreach with rights holders to ensure recordation of trademarks and copyrights in CBP’s online recordation system. According to CBP officials, business owners are often unaware of CBP’s recordation process, and many may not recognize that CBP prioritizes enforcement of IP that has been recorded with CBP after it has been registered with the U.S. Patent and Trademark Office or the U.S. Copyright Office. CBP engages in efforts to enhance awareness of this process, such as meeting with industry groups, according to CBP. Representatives of one rights holder told us that increasing the number of trademarks recorded with CBP was an important component of the company’s enhanced IPR enforcement efforts.
Determining goods’ authenticity. CBP officials noted that they often coordinate with rights holders to determine whether a detained item is counterfeit. ICE also works with rights holders during criminal investigations, according to ICE officials. When CBP officers and import specialists are uncertain about the authenticity of a particular item, they work with rights holders to evaluate the item, because rights holders have the most detailed knowledge of how a product is made and packaged and therefore can determine whether seemingly authentic goods are in fact counterfeit. Representatives of all of the rights holders we spoke with noted that this was an important part of their interaction with CBP. In addition, representatives of rights holders and e-commerce websites stated that they share information to assist with law enforcement and with potential criminal prosecution.
Conducting training. CBP and ICE coordinate with rights holders, industry groups, and other private sector entities to receive training on topics like detection, supply chains, and packaging. For example, CBP officials said they work with rights holders to arrange trainings about specific products to help officers identify potentially counterfeit goods. CBP reported that in fiscal year 2016, rights holders conducted 11 “webinars” and over 50 trainings for agency personnel to increase CBP expertise regarding their products. CBP also conducted three industry roundtables on IPR enforcement. In addition, to combat the illegal importation and distribution of counterfeit goods, the IPR Center engages in training and outreach to rights holders, manufacturers, importers, and others through its Operation Joint Venture initiative. The IPR Center reported that it reached out to more than 14,000 people at over 300 outreach and training events in fiscal year 2016 through Operation Joint Venture. Representatives of one rights holder we spoke with noted that the company hosts two large conferences every year to discuss issues in IPR enforcement with other private sector entities and U.S. and international law enforcement.
Working with e-commerce websites. CBP and ICE officials noted that their agencies collaborate with e-commerce companies in a number of national and international working groups to better understand the challenges associated with IPR enforcement in e-commerce. In 2016, CBP created an E-Commerce and Small Business Branch within its Office of Trade, which, among other things, is charged with helping CBP understand the complexities resulting from the increasing volume of online trade. Representatives of one e-commerce website stated that the IPR Center, in particular, has been effective in private sector outreach. ICE officials noted that in November 2017, the IPR Center hosted a symposium on e-commerce with over 150 attendees from the private sector and government.
Representatives from most rights holders and websites we spoke with stated that coordination with U.S. agencies is effective and that CBP and ICE work well with the private sector. Rights holders told us they are aware that, due to competing priorities, CBP and ICE are unable to focus as extensively on IPR enforcement as rights-holding companies would like, but they noted that the agencies are willing partners in enforcement as resources permit.
Restrictions on CBP Information Sharing Reportedly Limit Private Sector IPR Enforcement
Private sector representatives of rights holders and e-commerce websites stated that restrictions on the amount and type of information that CBP shares about seized goods impede their ability to protect IPR. CBP officials stated that they share information about identified counterfeits with e-commerce websites and rights holders to the extent possible under current regulations. However, the officials noted that there are legal limitations to the amount and type of information they can share, particularly if the e-commerce website is not listed as the importer on forms submitted to CBP. One rights holder representative stated that the information CBP provides, such as importer names from bills of lading, is sometimes not useful, because counterfeiters use fake identities or otherwise mask their identities.
Several private sector representatives stated that receiving additional information from CBP would enhance their ability to protect IPR. Rights holders noted that additional identifying information about the counterfeiter would aid rights-holding companies in their own investigations and enforcement activities. One rights holder said that some European customs agencies are able to share more information than CBP, better enabling rights holders to take action following a seizure. Representatives of one website noted that information on the exterior of seized packages, such as business identifiers on packages destined for distribution centers, would be helpful for identifying groups of counterfeit merchandise from the same seller.
However, according to CBP officials, CBP cannot provide such information to e-commerce websites. Without this information, websites may be unable to identify additional counterfeit goods from the same seller in their distribution centers. Representatives of one e-commerce website noted that ICE sometimes shares information when it relates to an investigation, but ICE’s involvement in the enforcement process begins only after CBP has identified and seized counterfeit items. Representatives of two e-commerce websites stated that, because of the limited information shared by CBP, they may not be aware of IPR-infringing goods offered for sale on their website even if CBP has seized related items from the same seller. CBP officials stated that they have not yet determined whether changes to the amount and type of information provided to e-commerce websites would require regulatory changes or additional legal authorities. These officials noted that CBP is reviewing options for sharing additional information with rights holders and e-commerce websites and is assessing what, if any, additional information would be beneficial to share with private sector entities. They also said that they have discussed differences in CBP’s and ICE’s information sharing with ICE officials.
Representatives of rights holders and e-commerce websites noted that information shared by law enforcement is critical to private sector IPR enforcement, such as pursuing civil action against a counterfeiter or removing counterfeit items from websites. Congress has also demonstrated an interest in CBP’s sharing information with the private sector in certain instances. Specifically, in TFTEA, Congress provided CBP with explicit authority to share certain information with trademark and copyright owners prior to completing a seizure. However, CBP has not yet completed an assessment of additional information that would be beneficial to share with the private sector or determined whether it can share such information under current regulations and statutes. As a result, CBP does not know whether it needs to revise its regulations or seek additional authorities.
Conclusions
Counterfeit goods provide a lucrative market for criminal activity and can pose serious risks to consumers. Growth in e-commerce has changed the way counterfeiters interact with consumers, and the accompanying increase in the volume and sophistication of counterfeit goods has created challenges for CBP and ICE enforcement. While CBP and ICE have undertaken activities to enhance IPR enforcement and collected some performance data on their activities, CBP has conducted limited evaluation of its efforts. Managing the huge volume of both legitimate and counterfeit goods entering the country requires efficient use of resources. Without better information on the effectiveness of its activities, CBP may not be able to focus its resources on the most efficient or effective efforts. Additionally, without collecting and disseminating effective practices resulting from port-led initiatives, CBP may be missing an opportunity to scale up or improve on existing efforts.
With the growth of e-commerce, the private sector—including rights holders and e-commerce websites—can play an important role in helping to enforce IPR and protect consumers. Information shared by CBP plays an important role in facilitating private sector enforcement, but CBP has not determined what, if any, additional information would be beneficial to share with private sector entities. Until it completes an assessment of information sharing, CBP will not know whether sharing additional information requires regulatory or legal changes.
Recommendations for Executive Action
We are making the following two recommendations to CBP: The Commissioner of CBP should take steps to evaluate the effectiveness of CBP’s IPR enforcement efforts, such as by improving its metrics to track the overall effectiveness of its IPR enforcement efforts, evaluating selected activities to enhance IPR enforcement, and developing a process to assess and share information on port-led initiatives to enhance IPR enforcement. (Recommendation 1)
The Commissioner of CBP, in consultation with ICE, should assess what, if any, additional information would be beneficial to share with the private sector and, as appropriate, take action to enhance information sharing, where possible, such as by proposing regulatory revisions or requesting additional legal authorities from Congress. (Recommendation 2)
Agency Comments
We provided a draft of this report to the Department of Homeland Security for comment. In its comments, reproduced in appendix III, the department concurred with our recommendations to (1) take steps to evaluate the effectiveness of CBP’s IPR enforcement efforts and (2) assess what, if any, additional information would be beneficial to share with the private sector. The department also described actions that CBP plans to take to implement our recommendations. CBP and ICE also provided technical comments, which we incorporated as appropriate. Our draft report also included recommendations to CBP and to ICE to complete a joint strategic plan, as required by TFTEA. After the agencies received our draft report, they notified us that this plan had been completed in October 2017, and they provided us with a copy of the plan. As a result, we removed these recommendations from the final report. We also provided relevant excerpts of the draft report to the private sector companies mentioned in it and incorporated their technical comments as appropriate.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and the Secretary of Homeland Security. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8612 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
Appendix I: Objectives, Scope, and Methodology
We examined (1) what is known about counterfeit goods entering the United States and the challenges they present, (2) efforts U.S. Customs and Border Protection (CBP) and U.S. Immigration and Customs Enforcement (ICE) have undertaken to enhance intellectual property rights (IPR) enforcement and the extent to which they have assessed the results of these efforts, and (3) the extent to which CBP and ICE collaborate on IPR enforcement as well as ways in which they coordinate with the private sector in enforcing IPR.
To examine what is known about counterfeit goods that enter the United States and the challenges they present, we reviewed U.S. government reports and strategic plans, including those produced by CBP, ICE, the National Intellectual Property Rights Coordination Center (IPR Center), and the Office of the U.S. Intellectual Property Rights Enforcement Coordinator. We also reviewed reports on the counterfeits market and illicit trafficking from international organizations, including the Organisation for Economic Co-operation and Development and the United Nations Office on Drugs and Crime. In addition, we analyzed data from annual CBP public reports on IPR seizures from fiscal years 2012 through 2016 to identify the types of goods seized, the goods’ countries of origin, the modes of transportation used to import the goods, and the value of the goods. We analyzed data from CBP’s public IPR reports because, according to CBP officials, those data are refined prior to the issuance of the reports and therefore are more accurate than data extracted directly from CBP’s seizure database. We reviewed the data, conducted electronic tests of the data, and interviewed knowledgeable agency officials to determine that these data were sufficiently reliable for our purposes. We interviewed CBP and ICE officials in Washington, D.C., and in field locations in Chicago, Illinois; Los Angeles, California; Miami, Florida; and New York, New York, to discuss the composition of IPR-infringing goods and challenges the agencies face in enforcing IPR. We selected these locations on the basis of the number and composition of IP seizures in each location, the availability of multiple ports of entry covering different modes of transportation, and geographic diversity. We also interviewed representatives of IP rights-holding companies and e-commerce websites to discuss the challenges counterfeit goods pose in online marketplaces.
In addition, in an attempt to understand the frequency with which consumers may unknowingly encounter counterfeit products online, we used investigative tools and techniques to conduct nongeneralizable, undercover purchases of consumer goods from third-party sellers on popular consumer websites and asked the rights holders to test the goods to determine whether they were authentic or counterfeit.
We selected four trademarked consumer products of which CBP often seizes counterfeits, according to CBP seizure data and CBP officials, and that represented a range of consumer goods: Nike Air Jordan shoes, Yeti travel mugs, Urban Decay cosmetics, and UL-certified phone chargers.
We selected five popular e-commerce websites that (1) were among the top 50 consumer shopping websites as of March 2017, according to Alexa, a data analytics company, and (2) received a rating of “B” or better from the Better Business Bureau. From the top 50 consumer shopping websites, we chose those that (1) offered platforms for third-party sales, (2) sold a variety of trademarked products to the public, and (3) offered a minimum of two items from at least two different third-party sellers.
We purchased, and had rights holders test, a total of 47 items from third-party sellers on the five e-commerce websites. We selected items that were advertised as new, brand-name items, and we generally selected the lowest-priced items, factoring in both purchase price and shipping while also targeting a variety of sellers and product options. We did not select items whose cost exceeded the manufacturer’s suggested retail price or exceeded that of an identical item sold and fulfilled by the host website. Where seller ratings were available, we selected items from third-party sellers with ratings of 60 percent (or the equivalent, such as 3 of 5 stars) or higher; on average, the sellers of the items we selected had customer ratings above 90 percent as of August 2017.
For each selected product, we purchased a minimum of two items and a maximum of five items from different third-party sellers on any of the five e-commerce websites that listed the product. Across all the websites, we purchased a minimum of eight items for each product. On each website, we purchased a maximum of one item from any third-party seller.
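As a minimal sketch of how the selection rules above could be applied to a list of candidate listings on one website, the Python fragment below encodes the main constraints; the record fields and helper names are our own illustration, not the actual tools we used:

    # Illustrative only; field names and record structure are assumptions.
    def eligible(listing, msrp, host_price):
        """True if a third-party listing meets the selection criteria."""
        return (listing["condition"] == "new"
                and listing["total_price"] <= msrp         # at or below suggested retail
                and listing["total_price"] <= host_price   # not above the host website's own price
                and (listing["seller_rating"] is None      # no rating shown, or
                     or listing["seller_rating"] >= 0.60)) # rating of 60 percent or better

    def select_items(listings, msrp, host_price, minimum=2, maximum=5):
        """Choose lowest-priced eligible listings, at most one per seller."""
        chosen, sellers = [], set()
        for item in sorted(listings, key=lambda l: l["total_price"]):
            if eligible(item, msrp, host_price) and item["seller"] not in sellers:
                chosen.append(item)
                sellers.add(item["seller"])
                if len(chosen) == maximum:
                    break
        return chosen if len(chosen) >= minimum else []

Here total_price is assumed to include shipping, consistent with our factoring in both purchase price and shipping when selecting the lowest-priced items.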
We contacted the companies that held the trademark or copyright for each of the four products, asking for their assistance in reviewing the items we purchased to determine whether they were authentic or counterfeit. These companies made their assessments with no knowledge of the websites or sellers from which we purchased the items. We discussed the results of these tests with representatives of the rights-holding companies and the e-commerce websites where we purchased the items.
To examine the efforts CBP and ICE have undertaken to improve IPR enforcement and the extent to which they have assessed the results of those efforts, we reviewed agency and government-wide strategic plans for IPR enforcement, and we spoke with agency officials in headquarters and selected field locations. We reviewed a selection of eight CBP and ICE activities, which we grouped under four major areas of effort on the basis of the activities highlighted in these strategic plans and agency interviews. The list of activities we reviewed does not constitute the entirety of activities undertaken by CBP and ICE to enhance IPR enforcement and is intended to highlight significant efforts. We did not review activities that officials told us were in early stages, because it would not be reasonable to expect the agencies to have assessed the results of those activities. Our discussion of activities does not include activities related to private sector engagement, which we discuss elsewhere in the report. We reviewed documentation pertaining to these eight activities, and we interviewed CBP and ICE officials about the activities and any efforts to assess their results. We reviewed federal internal control standards and prior GAO reports to identify good practices for assessing the results of activities, and we determined the extent to which CBP and ICE had followed those practices.
To examine the extent to which CBP and ICE follow selected practices for effective interagency collaboration, we reviewed agency documentation and spoke with CBP and ICE officials in headquarters and in selected field locations. We reviewed prior GAO reports to identify effective practices for interagency collaboration and selected five of eight practices that we had identified in a fiscal year 2006 report. The five practices we selected as most relevant to the ways in which CBP and ICE coordinate with one another are (1) establish mutually reinforcing or joint strategies; (2) define and articulate a common outcome; (3) agree on roles and responsibilities; (4) identify and address needs by leveraging resources; and (5) establish compatible policies, procedures, and other means to operate across agency boundaries. We did not evaluate CBP and ICE’s interagency collaboration against the remaining three practices identified in our fiscal year 2006 report. We also assessed CBP’s intra-agency collaboration against three of the five selected practices on the basis of interviews with CBP officials in headquarters and selected field locations and reviews of CBP documentation. We did not evaluate internal CBP collaboration against the other two practices—establish mutually reinforcing or joint strategies and define and articulate a common outcome—because we determined that such practices were not applicable to intra-CBP collaboration. To determine the ways in which CBP and ICE collaborate with the private sector, we interviewed CBP and ICE officials in headquarters and selected field locations, reviewed CBP and ICE documentation, and interviewed representatives of rights-holding companies and e-commerce websites.
We conducted this performance audit from September 2016 to January 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We conducted our related investigative work in accordance with investigation standards prescribed by the Council of the Inspectors General on Integrity and Efficiency.
Appendix II: Consumer Information and Advice for Avoiding Counterfeits Online
According to consumer protection organizations and government agencies, consumers can take the following steps to try to limit the risks of buying counterfeit goods online.

1. Locate the listed retailer on the product page and determine whether it is a third party. “Fulfilled by” does not mean “Sold by.”

2. Look for external consumer trust-building features, such as a mailing address or telephone number, real-time customer service, customer reviews, or third-party accreditation that can be verified through the accreditor.

3. Buy products only from authorized retailers, such as official brand stores. If uncertain whether a retailer acquired its product from a legitimate distributor, ask for verifiable information from the retailer about the source of the goods.

4. Be aware of pricing. While some counterfeiters may try to legitimize their merchandise with realistic prices, others may attract buyers with low prices. If a price seems too good to be true, it probably is.

5. During checkout, ensure your payments are submitted via a website beginning with https:// and look for a lock symbol in your web browser.

6. After receiving an item, look for signs that it may be counterfeit, such as irregular brand markings; missing “use by” dates, safety seals, or markings; and missing warranty information. Verify the item’s serial number by checking the manufacturer’s website.

7. If you suspect that you have purchased a counterfeit product, notify the brand owner and contact the place of purchase. Also, report the counterfeit at http://www.iprcenter.gov/referral. To report an unsafe consumer product, visit http://www.SaferProducts.gov.
According to the National Intellectual Property Rights Coordination Center, word-of-mouth is the best way to spread information about illegitimate products as well as sources of safe, affordable, and legal alternatives. For further information, consult http://www.stopfakes.gov.
Appendix III: Comments from the Department of Homeland Security
Appendix IV: GAO Contact and Staff Acknowledgments
GAO Contact
Kimberly Gianopoulos, (202) 512-8612 or [email protected].
Staff Acknowledgments
In addition to the contact named above, Joyee Dasgupta (Assistant Director), Kara Marshall (Analyst-in-Charge), Kristen Timko, Katie Bassion, Reid Lowe, Sarah Collins, Neil Doherty, Ramon Rodriguez, Helina Wong, Julie Spetz, Kevin Loh, Wayne McElrath, Grace Lui, James Murphy, Mary Moutsos, Justin Fisher, Rachel Stoiko, and Sarah Veale made key contributions to this report. | Why GAO Did This Study
Infringement of IPR through the illegal importation and distribution of counterfeit goods harms the U.S. economy and can threaten the health and safety of U.S. consumers. CBP leads IPR enforcement at U.S. ports of entry by detecting and seizing counterfeit goods that enter the United States. CBP works with ICE, which investigates IPR violations and builds cases for prosecution.
GAO was asked to review CBP's and ICE's IPR enforcement at U.S. borders. In this report, GAO examines (1) what is known about counterfeit goods entering the United States and the challenges they present, (2) efforts CBP and ICE have undertaken to enhance IPR enforcement and the extent to which they have assessed the results, and (3) the extent of CBP's and ICE's collaboration on IPR enforcement and ways they coordinate with the private sector. GAO reviewed agency data and documents, interviewed agency officials, and conducted field work at port locations selected on the basis of factors such as the volume of IPR seizures and variety of modes of transportation at each location. GAO also conducted undercover purchases of commonly counterfeited consumer goods on popular consumer websites, using investigative tools and techniques.
What GAO Found
Changes in the market for counterfeit goods entering the United States pose new challenges for consumers, the private sector, and U.S. agencies that enforce intellectual property rights (IPR). Specifically, growth in e-commerce has contributed to a shift in the sale of counterfeit goods in the United States, with consumers increasingly purchasing goods online and counterfeiters producing a wider variety of goods that may be sold on websites alongside authentic products. For example, 20 of 47 items GAO purchased from third-party sellers on popular consumer websites were counterfeit, according to testing by the products' rights holders (see table), highlighting potential risks to consumers. The changes in the market for counterfeit goods can also pose challenges to the private sector—for example, the challenge of distinguishing counterfeit from authentic goods listed for sale online—and complicate the enforcement efforts of U.S. Customs and Border Protection (CBP) and U.S. Immigration and Customs Enforcement (ICE).
CBP and ICE engage in a number of activities to enhance IPR enforcement; however, while ICE has assessed some of its efforts, CBP has taken limited steps to do so. CBP's and ICE's IPR enforcement activities broadly include detecting imports of potentially IPR-infringing goods, conducting special operations at U.S. ports, engaging with international partners, and undertaking localized pilot programs or port-led initiatives. CBP and ICE have collected some performance data for each of the eight activities GAO reviewed, and ICE has taken some steps to understand the impact of its efforts. However, CBP has conducted limited evaluation of its efforts to enhance IPR enforcement. Consequently, CBP may lack information needed to ensure it is investing its resources in the most efficient and effective activities.
CBP and ICE generally collaborate on IPR enforcement, but according to private sector representatives, restrictions on CBP's information sharing limit private sector enforcement efforts. GAO found that CBP and ICE have undertaken efforts that align with selected key practices for interagency collaboration, such as participating in developing a national IPR enforcement strategy and agreeing on roles and responsibilities. However, sharing additional information about seized items with rights-holding companies and e-commerce websites could improve enforcement, according to private sector representatives. CBP officials said they share information to the extent allowed under current regulations, but CBP has not completed an assessment of what, if any, additional information would be beneficial to share with private sector entities. Without such an assessment, CBP will not know if sharing additional information requires regulatory or legal changes.
What GAO Recommends
GAO is making two recommendations to CBP, recommending that it (1) evaluate its efforts to enhance IPR enforcement and (2) assess potential additional information sharing with the private sector. CBP agreed with these recommendations. |
Background
Perstempo and Operational Tempo
DOD uses two related but distinct terms to differentiate between individual service members’ time away from home versus unit deployments:
Perstempo: The amount of time individual service members serve on official duty at a location or under circumstances that make it infeasible for them to spend off-duty time in the housing in which they reside, including deployment events, such as operations, exercises, and unit training, and non-deployment events, such as individual training and hospitalization.
Operational tempo: The rate at which military units are involved in all military activities, including contingency operations, exercises, and training deployments.
Operational deployments are one type of deployment event, but do not account for all of the time individuals spend away from home. As a result, individual perstempo is typically higher than operational tempo.
Statutes and DOD Policy Regarding Perstempo
The National Defense Authorization Act for Fiscal Year 2000 included a provision that required the Under Secretary of Defense for Personnel and Readiness to monitor the perstempo of the armed forces, and required DOD to manage the number of days its service members are deployed. Section 991 of title 10 defines “perstempo” as the amount of time members of the armed forces are engaged in their official duties at a location or under circumstances that make it infeasible for a member to spend off-duty time in the housing in which the member resides. The law establishes thresholds for deployment perstempo events—220 deployment perstempo days in a 365-day period and 400 deployment perstempo days in a 730-day period. The law also requires the Secretary of Defense or a delegated official to approve when service members exceed these thresholds, and requires DOD to establish a system for tracking and recording the number of deployment perstempo days for each member of the armed forces. Additionally, DOD obtained the statutory authority to pay service members an allowance for lengthy or numerous deployment perstempo events. Congress authorized DOD to waive the deployment perstempo thresholds and recordkeeping requirement, which in turn would prohibit the payment of high-deployment allowances, if the department found that the waiver is necessary in the interests of national security. See figure 1 for a timeline of these and additional congressional and DOD actions related to perstempo.
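Because the statutory thresholds are expressed as rolling windows (220 days in any 365-day period; 400 days in any 730-day period), checking them requires more than a simple annual tally. The following Python sketch shows one way the two checks might be implemented over a member's recorded perstempo days; it is an illustration only, not DOD's actual tracking system.

```python
from datetime import date, timedelta

def max_days_in_window(perstempo_days: list[date], window: int) -> int:
    """Largest count of recorded perstempo days within any rolling window of `window` days."""
    days = sorted(set(perstempo_days))
    best = left = 0
    for right, day in enumerate(days):
        # Shrink the window until days[left] falls within `window` days of `day`.
        while day - days[left] >= timedelta(days=window):
            left += 1
        best = max(best, right - left + 1)
    return best

def exceeds_statutory_thresholds(perstempo_days: list[date]) -> bool:
    """True if over 220 days in any 365-day period or 400 days in any 730-day period."""
    return (max_days_in_window(perstempo_days, 365) > 220
            or max_days_in_window(perstempo_days, 730) > 400)
```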
In the aftermath of the September 11th attacks, the Deputy Secretary of Defense issued a memorandum that suspended the requirements to manage deployment days for service members and the payment of high-deployment allowances. As a matter of DOD policy, the memorandum did not suspend the recordkeeping requirement included in section 991 of title 10. In May 2001, the Under Secretary of Defense for Personnel and Readiness issued an instruction that described policy, responsibilities, procedures, and information requirements for reporting of active duty military personnel records, and this instruction included requirements for perstempo reporting. In 2009, the Under Secretary of Defense for Personnel and Readiness issued another instruction, DOD Instruction 1336.07, focused on the reporting of perstempo, which identifies responsibilities, procedures, and information-reporting requirements for perstempo. In particular, DOD Instruction 1336.07 states that the:
Under Secretary of Defense for Personnel and Readiness is responsible for providing overall policy guidance for DOD reporting of all perstempo events;
Director of the Defense Human Resources Activity, through the Defense Manpower Data Center, is required to maintain a perstempo events database;
Secretaries of the military departments are responsible for implementing these reporting requirements whenever service members participate in or are associated with a perstempo event or activity; and services must record all perstempo events, including deployment events such as operations, exercises, and unit training as well as non-deployment events such as individual training and hospitalization.
In November 2013, the Under Secretary of Defense for Personnel and Readiness issued a memorandum conveying that the amount of time that a unit, detachment, or individual service member can be operationally deployed should be equal to or less than the amount of time not deployed. Operational deployments are one of the deployment perstempo events. The memorandum also requires the military services to register perstempo events. DOD’s stated intent in the memorandum was for commanders at every level to ensure that individual service members, regardless of unit assignment, are not repeatedly exposed to combat, do not experience disproportionate deployments, and do not spend extended periods of time away from home unless required by operational necessity.
Prior Work on Perstempo and DOD Readiness
We have reported on perstempo and readiness in multiple prior reports. For example, in 1996 we reported on DOD’s actions to mitigate the impact of high perstempo, including efforts to create systems for measuring perstempo. We reported that DOD had not issued regulations for the long-term management of perstempo and had not directed the services to have policies that limit perstempo. Further, we reported that it was difficult for DOD to determine the amount of perstempo time for military personnel for multiple reasons, including that the services had different systems for tracking deployments. We recommended that DOD (1) issue guidance on managing perstempo that states whether each service should have a goal for the maximum perstempo time for personnel and (2) issue regulations defining the minimum perstempo data that each service must collect and maintain. DOD concurred with these recommendations and, as we noted earlier, the Under Secretary of Defense for Personnel and Readiness issued DOD Instruction 1336.5 in 2001 that described policy, responsibilities, procedures, and information requirements for perstempo reporting. However, our recommendation has not been fully implemented because DOD Instruction 1336.5 did not include guidance on managing perstempo that states whether each service should have a goal for the maximum perstempo time for personnel, as discussed later in the report.
In 2007, we found that Army and Marine Corps perstempo data were incomplete and inaccurate due to a lack of quality controls. We recommended that the Office of the Under Secretary of Defense for Personnel and Readiness provide guidance that directs the Army and Marine Corps to develop quality control procedures for validating the accuracy of the perstempo data. DOD concurred with our recommendation and in 2009 issued DOD Instruction 1336.07; however, our recommendation has not been fully implemented because the instruction did not provide guidance that directs the Army and Marine Corps to develop quality control procedures for validating the accuracy of perstempo data, as discussed later in the report.
Finally, our work has identified several challenges with readiness rebuilding due in part to the high pace of operations that drives up perstempo. In 2016, we reported that the global security environment will likely continue to require significant reliance on U.S. military forces to respond to a range of demands, and the military services have attributed low readiness levels to increasingly long and frequent deployments, reduced force structure, and continuing and emerging demands. We also reported that DOD implementation and oversight of department-wide readiness rebuilding efforts did not fully include key elements of sound planning. We recommended, among other things, that DOD and the services establish comprehensive readiness goals and strategies for implementing them, as well as associated metrics that can be used to evaluate whether readiness recovery efforts are achieving intended outcomes. DOD generally concurred with our recommendations and the department has taken some steps to improve the readiness of the military forces, but it has not yet taken steps to fully implement our recommendations.
DOD, Service, and SOCOM Policies Vary in Identifying Perstempo Thresholds for Service Members
DOD, service, and SOCOM policies vary in identifying specific and measurable thresholds on perstempo for individual service members. DOD policy focuses on time away for deployment, which is a part of perstempo but does not encompass the full range of activities that can take service members away from home. Specifically, a 2013 memorandum from the Under Secretary of Defense for Personnel and Readiness states that individual service members should not be deployed longer than they are at their home station. However, the memorandum describes perstempo only in general terms—stating that individual service members should not serve extended periods of time away from their homestation unless required by operational necessity. An official in the Office of the Under Secretary of Defense for Personnel and Readiness acknowledged that the department has not defined DOD’s perstempo threshold—to encompass non-deployment events—in specific and measurable terms and has not directed the services to establish such perstempo thresholds.
The Navy and SOCOM have established perstempo thresholds in their policies and clarified which types of perstempo events apply to their thresholds. While these policies vary slightly, both the Navy and SOCOM describe in their policies the need to balance the pace of operations with the quality of life of their service members. More specifically:
Navy: In 2014, the Navy issued an instruction that includes a perstempo threshold that identifies the number of days that individual Navy service members may serve away from home. The Navy’s instruction established a threshold of 220 days in a 365-day period or 400 days in a 730-day period. The Navy’s instruction also identified that the threshold applies to all deployment perstempo events—which comprise operations, exercises, unit training, temporary duty, and homestation training.
Special Operations Command: In 2016, SOCOM issued a policy memorandum that establishes a perstempo threshold that identifies the number of days that individual SOCOM service members may serve away from home. The policy memorandum established a perstempo threshold of 480 days in a 730-day period. SOCOM’s policy memorandum also clarified that the threshold applies to both deployment perstempo events (e.g., operational deployments and exercises) and non-deployment perstempo events (e.g., serving as a student or trainee at a school and performing administrative, guard, or detail duties in garrison at the service member’s permanent duty station).
In contrast, the Army, the Air Force, and the Marine Corps are either not enforcing or have not established a specific and measurable perstempo threshold in their policies. Officials from these services told us that they focus on managing the impact of deployments consistent with the 2013 memorandum from the Under Secretary of Defense for Personnel and Readiness, but noted that the memorandum does not set specific perstempo limits. As a result, each service has taken a slightly different approach:
Army: In 2015, the Army issued a regulation that identified the number of days that a service member may spend away from home; however, Army officials told us it is not being enforced. The regulation updated the Army’s policy to include a perstempo threshold. The regulation also defined the events that could be counted toward that threshold and included a provision for the Army to manage its personnel to that threshold. However, Army headquarters officials told us that the Army is not enforcing this perstempo threshold and that the Army only added these provisions to emphasize that collecting perstempo data was a priority. According to the Army regulation, the Secretary of the Army may suspend the applicability of this perstempo program in the interest of national security, but Army headquarters officials told us that the Secretary of the Army had not suspended the perstempo program and the officials could not provide any official action that suspended the requirement.
Air Force: The Air Force does not have a specific and measurable perstempo threshold in policy. An Air Force personnel instruction states that the Air Force considers service members who spend more than 120 days on temporary duty to have a high perstempo. However, Air Force headquarters officials told us that this policy does not establish a threshold for the amount of time that Air Force personnel may serve away from their homestation and that the Air Force does not require units to manage the assignments of their personnel to ensure that they do not spend more than 120 days on temporary duty. Air Force headquarters officials told us that they did not think they needed to include thresholds for perstempo in Air Force policies expressed in specific, measurable terms because the Air Force relies on unit commanders to manage the perstempo of individual service members and they believed that a perstempo threshold would affect a small number of their service members.
Marine Corps: The Marine Corps also does not have a specific and measurable perstempo threshold in its policy, but its policy accounts for perstempo time in determining individual service members’ eligibility for overseas deployments, among other things. For example, Marine Corps Order 1300.8 adjusts and delays the date that service members are scheduled to deploy overseas by the amount of perstempo time accrued for those service members. The Marine Corps also issued an administrative message directing unit commanders to manage the perstempo of individual service members. However, neither of these policies establishes a specific and measurable perstempo threshold. Marine Corps officials told us that the Marine Corps has studied the effects of high rates of perstempo on retention and that these studies have not provided evidence that perstempo drives retention.
The approach taken by the Army, the Air Force, and the Marine Corps—to focus primarily on deployments—reflects the focus placed on deployments in DOD’s policy, but this approach omits perstempo events, such as training and exercises. Such activities can take service members away from home for long periods. For example, Air Force officials told us that F-16 pilots spend considerable amounts of time participating in multiple exercises every year that require them to spend significant time away from their homestation. Similarly, a 2011 study conducted by CNA found that perstempo was very high for service members in the III Marine Expeditionary Force in Okinawa and Hawaii because of the number of exercises in which those service members participated. In particular, the study found that service members in the III Marine Expeditionary Force participate in over 70 exercises and training events per year. Additionally, relying on unit commanders to monitor the perstempo of service members without providing specific and measurable guidance leaves it to the interpretation of unit commanders to define excessive time away.
Standards for Internal Control in the Federal Government state that management should define objectives in specific and measurable terms to enable it to identify risks to achieving those objectives. The standards also state that specific terms are those that are fully and clearly set forth so they can be easily understood, and measurable terms are those that allow for the assessment of performance toward achieving objectives. As we reported in 2007, shortly after the September 11, 2001, attacks, DOD shifted its focus away from collecting and maintaining perstempo data and began focusing on collecting and maintaining data to track deployments related to major operations, which does not include the full range of perstempo events. DOD continued this focus on managing deployments versus perstempo in its issuance of the 2013 memorandum.
Furthermore, even as it has continued to waive the statutory perstempo thresholds and cited the effect of the high pace of operations and training on service members, DOD has not taken action to focus attention on the management of perstempo thresholds within the services and DOD. As a result, the services have taken differing approaches, with the Army, the Air Force, and the Marine Corps having no specific and measurable thresholds. Through providing specific and measurable department-wide perstempo thresholds in DOD guidance or directing the services and SOCOM to establish and follow service-specific thresholds for its service members, DOD will be better able to judge whether service members are spending too much total time away from home and, if so, whether there have been any associated effects on military readiness.
DOD and the Services Do Not Have Reliable Data to Monitor Perstempo
DOD does not have reliable perstempo data, which limits its ability to effectively monitor perstempo across the department. In part due to the incompleteness of the perstempo data, an official within the Office of the Under Secretary of Defense for Personnel and Readiness told us that the office cannot monitor perstempo even though section 136 of title 10 makes the office responsible for doing so. For example, a December 2017 Defense Manpower Data Center analysis indicated that perstempo data are missing records for at least 145,000 individuals who deployed in fiscal years 2014-2016. In addition, officials from the Office of Cost Assessment and Program Evaluation told us that they attempted to analyze the effects of high rates of perstempo on unit readiness in 2016 but that they were unable to draw conclusions from the analysis because, among other things, the perstempo data were incomplete. Officials explained that certain events were not captured in the perstempo data consistently, such as Army rotations to a combined training center. Senior service officials also told us that the analysis had limited usefulness due to unreliable data.
Although data are incomplete, our analysis of available data indicates that tens of thousands of service personnel experienced high rates of perstempo in fiscal year 2016. Because the perstempo policies vary widely, we anchored our analysis to the 220 days in a 365-day period identified in the currently waived statutory threshold. Using that benchmark, we estimate that at least 51,000 service personnel spent more than 7 months away from their homestation in fiscal year 2016 (see table 1).
Moreover, we believe these numbers may be far higher because our analysis is limited by incomplete perstempo data, as stated above. Additionally, our estimate likely understates the number of service members because we excluded from our analysis records that were missing an end date for the perstempo event.
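A minimal sketch of the kind of computation underlying this estimate appears below: it counts each member's distinct days away during fiscal year 2016 from event-level records and drops records with no end date, as our analysis did. The record layout (member identifier plus start and end dates) is an assumption for illustration, not the Defense Manpower Data Center's actual schema.

```python
from collections import defaultdict
from datetime import date, timedelta

FY2016_START, FY2016_END = date(2015, 10, 1), date(2016, 9, 30)

def days_away_in_fy2016(events):
    """events: iterable of (member_id, start_date, end_date) tuples; end_date may be None.

    Returns a dict of member_id -> count of distinct days away during fiscal year 2016.
    """
    away = defaultdict(set)
    for member_id, start, end in events:
        if end is None:
            continue  # missing end date: excluded, so totals understate time away
        day = max(start, FY2016_START)   # clip each event to the fiscal year
        last = min(end, FY2016_END)
        while day <= last:
            away[member_id].add(day)
            day += timedelta(days=1)
    return {member: len(days) for member, days in away.items()}

def count_members_over(events, threshold: int = 220) -> int:
    """Number of members with more than `threshold` days away in fiscal year 2016."""
    return sum(1 for total in days_away_in_fy2016(events).values() if total > threshold)
```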
Further, we found that the perstempo records we analyzed for fiscal years 2012 through 2016 were also missing other information, which limits the utility of the data for users and decision makers. For example, we found that 30 percent of perstempo records for fiscal years 2012 through 2016 were missing information that identifies the service member’s occupation, 14 percent were missing information that identifies the purpose of the perstempo event, and 8 percent were missing information that identifies the category of perstempo event.
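Completeness checks like the ones behind these percentages are straightforward to express; the short pandas sketch below computes the share of records with a blank value in each key field. The column names are assumptions for illustration.

```python
import pandas as pd

def missing_rates(records: pd.DataFrame) -> pd.Series:
    """Percent of perstempo records with a blank value in each key field."""
    fields = ["member_occupation", "event_purpose", "event_category"]  # assumed column names
    return records[fields].isna().mean().mul(100).round(1)

# Example: missing_rates(perstempo_df) might return
#   member_occupation    30.0
#   event_purpose        14.0
#   event_category        8.0
```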
Incomplete and unreliable data have presented management problems, particularly for the Navy and SOCOM as they have sought to manage the perstempo of their service members. For example, a Navy Personnel Command official who oversees the Navy’s perstempo program told us that the figure of 18,000 Navy personnel with more than 220 perstempo days in fiscal year 2016—which we estimated using Defense Manpower Data Center data—likely significantly understates the actual number of Navy personnel with high rates of perstempo. The official stated that the Navy’s data showed that more than 31,000 Navy service members were away from home more than 220 days in fiscal year 2016—a difference of about 13,000 personnel. Officials from the Navy and Defense Manpower Data Center were unable to explain the discrepancy. Moreover, Navy officials told us that the Navy oversees perstempo by requiring subordinate commands to obtain waivers when service members exceed 220 days in a year. However, the Navy had waivers for about 6,000 personnel in 2016, or only about one-fifth of the personnel the Navy’s own data indicated were gone more than 220 days. To address this, the Navy Personnel Command official told us that the Navy plans to establish an automated system to verify that Navy service members who have exceeded the Navy’s 220-day perstempo threshold have a waiver.
In addition, a SOCOM headquarters official told us that the command does not have reliable perstempo data on its service members because of limitations in the command’s information technology system. As a result, SOCOM does not currently have the ability to determine whether its units are adhering to the SOCOM perstempo threshold. The official told us that SOCOM is working to address the problem with this information technology system.
We previously reported on challenges DOD has had with collecting reliable perstempo data in 1996 and 2007. While the department has made some progress, the reliability of perstempo data has remained a persistent challenge for the department. In 1996, we reported that DOD could not measure the increase in time away from home because no department-wide data system existed to track it. DOD generally agreed with our findings and recommendations and indicated that it had taken, and would continue to take, initiatives to manage perstempo. In 2007, we reported that Army and Marine Corps perstempo data were inaccurate and incomplete because of the lack of quality controls. We recommended that the Under Secretary of Defense for Personnel and Readiness provide guidance directing the Army and Marine Corps to develop quality control procedures for validating the accuracy of the perstempo data they collect and report to the Defense Manpower Data Center. The department concurred with the recommendation and issued an instruction in 2009 that required the services to report perstempo data to the Defense Manpower Data Center. However, the Under Secretary of Defense for Personnel and Readiness has not fully implemented our recommendation because the instruction did not direct the Army and Marine Corps to develop quality control procedures for validating the accuracy of their perstempo data.
The Standards for Internal Control in the Federal Government state that management should use quality information to achieve its objectives and that such information should be complete and accurate. The underlying reason that perstempo data are not reliable is that DOD has not emphasized the collection of complete and reliable perstempo data. Specifically, an official from the Office of the Under Secretary of Defense for Personnel and Readiness told us that the office last reviewed perstempo data in 2012 and, at that time, determined that these data were not fully reliable. The official also told us that to address this challenge the office reiterated the requirement that the services must collect perstempo data in its 2013 memorandum, but the memorandum did not emphasize that the perstempo data collected should be complete and reliable. Without taking steps to improve the quality of its perstempo data, DOD will be limited in its ability to assess the amount of time service members are serving away from home for all perstempo events and use that information to assist them in monitoring and gauging the stress on the force.
Conclusions
In the years since 2001, senior DOD leaders have expressed concern about the impact of a high pace of military operations and the high pace has continued for portions of the force. DOD has taken steps to limit operational deployments for individual service members, but has been less focused on the impact of total time away from home on personnel, commonly called perstempo. Total time away from home includes the training and other activities that can take service members away from home for long periods.
DOD has two primary and long-standing challenges in managing perstempo: setting clear policy and gathering reliable data. First, DOD has not established a perstempo policy with specific and measurable thresholds even as it has waived a statutory requirement that sets such thresholds. In the absence of clear and specific guidance, the Navy and SOCOM have set their own thresholds. By contrast, the Army set a threshold but does not enforce it and the limits for Air Force and Marine Corps service members are unclear. Unless DOD ensures that perstempo thresholds are established and followed across the department in specific and measurable terms, DOD will be unable to judge when individual service members are spending too much time away from home. Second, perstempo data are unreliable across the department—primarily because they are incomplete—but high perstempo is affecting tens of thousands of personnel. For example, available data indicate that at least 51,000 active duty personnel spent more than 7 months per year away from home in fiscal year 2016, and the number may be considerably higher. Incomplete perstempo data are a persistent problem that continues to hamper efforts to oversee the impact of time on duty away from home on individual service members. Until DOD and the military services take steps to emphasize the collection of complete and reliable perstempo data, DOD will be limited in its ability to oversee the time its personnel are spending away from home or gauge the stress on the force.
Recommendations for Executive Action
We are making two recommendations to DOD.
The Secretary of Defense should ensure that the Under Secretary of Defense for Personnel and Readiness, in conjunction with the Secretaries of the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; and the Commanding General of SOCOM, clarify its guidance on perstempo thresholds as long as the statutory thresholds are waived by either establishing specific and measurable department-wide perstempo thresholds in DOD policy or ensuring that the Army, the Air Force, and the Marine Corps establish and follow their own service-specific guidance on thresholds. (Recommendation 1)
The Secretary of Defense should ensure that the Under Secretary of Defense for Personnel and Readiness, in conjunction with the Secretaries of the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; and the Commanding General of SOCOM, take steps to emphasize the collection of complete and reliable perstempo data so that DOD, the services, and SOCOM can monitor perstempo. (Recommendation 2)
Agency Comments and Our Evaluation
We provided a draft of this report to DOD for review and comment. In written comments, DOD concurred with our two recommendations. DOD separately provided technical comments, which we incorporated as appropriate. DOD’s written comments are reprinted in their entirety in appendix I.
We are sending copies of this report to the appropriate congressional committees; the Secretary of Defense; the Under Secretary of Defense for Personnel and Readiness; the Secretaries of the Air Force, the Army, and the Navy; the Commandant of the Marine Corps; and the Commanding General of SOCOM. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-3489 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members who made key contributions to this report are listed in appendix II.
Appendix I: Comments from the Department of Defense
Appendix II: GAO Contact and Staff Acknowledgments
GAO Contact
John H. Pendleton, (202) 512-3489 or [email protected].
Staff Acknowledgments
In addition to the contact named above, Patricia Lentini, Assistant Director; Suellen Foth; Mae Jones; James P. Klein; Amie Lesser; Ricardo Marquez; Shari Nikoo; Joshua Parr; and Michael Silver made key contributions to this report.
Related GAO Products
Navy Readiness: Actions Needed to Address Persistent Maintenance, Training, and Other Challenges Affecting the Fleet. GAO-17-809T. Washington, D.C.: September 19, 2017.
Navy Readiness: Actions Needed to Address Persistent Maintenance, Training, and Other Challenges Facing the Fleet. GAO-17-798T. Washington, D.C.: September 7, 2017.
Military Readiness: DOD’s Readiness Rebuilding Efforts May Be at Risk without a Comprehensive Plan. GAO-16-841. Washington, D.C.: September 19, 2016.
Special Operations Forces: Opportunities Exist to Improve Transparency of Funding and Assess Potential to Lessen Some Deployments. GAO-15-571. Washington, D.C.: July 16, 2015.
Military Personnel: DOD Lacks Reliable Perstempo Data and Needs Quality Controls to Improve Data Accuracy. GAO-07-780. Washington, D.C.: July 17, 2007.
Military Readiness: A Clear Policy Is Needed to Guide Management of Frequently Deployed Units. NSIAD-96-105. Washington, D.C.: April 8, 1996. | Why GAO Did This Study
In 1999, Congress required DOD to monitor the time that individual service members spend away from home and set a threshold to limit excessive time away. At the time, the threshold was no more than 220 days served away from home in a 365-day period. In the interest of national security, in 2001 DOD exercised a provision in the law and waived the requirement to limit time away for service members. Recently, DOD leaders have stated that the continued high pace of military operations have limited their ability to rebuild readiness.
Senate Report 114-255 includes a provision for GAO to review the root causes of degraded readiness, including reviewing DOD's management of perstempo. This report assesses the extent to which DOD, the services, and SOCOM have (1) policies with specific and measurable thresholds on perstempo and (2) reliable data to monitor perstempo.
GAO analyzed DOD, service, and SOCOM perstempo policies and analyzed DOD-wide perstempo data for fiscal years 2012-2016.
What GAO Found
The Department of Defense (DOD), military service, and U.S. Special Operations Command (SOCOM) policies vary in identifying specific and measurable thresholds on the total time individual service members can be away from home, known as personnel tempo or “perstempo.” DOD's policy issued in 2013 states that service members should not be deployed for longer than they are at home. However, the policy does not set thresholds for perstempo, which includes time away from home for exercises and training in addition to deployment. Service members are sometimes away from home for long periods for training, exercises, or other activities. For example, Air Force officials told GAO that F-16 pilots participate in multiple exercises every year that require them to spend significant time away from home. The Navy and SOCOM set specific and measurable perstempo thresholds in policy in 2014 and 2016, respectively. However, the other services either are not enforcing or have not established specific and measurable perstempo thresholds in their policies. DOD has maintained the waiver of statutory perstempo thresholds since 2001, and officials have cited the effect of the high pace of operations and training on service members; however, DOD has not taken action to focus attention on the management of perstempo thresholds within the services and department-wide. Unless DOD ensures that perstempo thresholds are established and followed while statutory thresholds are waived, DOD will be unable to judge whether service members are spending too much total time away from home and, if so, whether this has resulted in any associated effects on military readiness.
DOD does not have reliable data to monitor perstempo because the data are incomplete. Based on available DOD-wide data, GAO estimated that for fiscal year 2016 at least 51,000 service personnel spent more than 7 months away from home. However, that number is conservative because the analysis is limited by incomplete data. Specifically:
DOD analysis shows that perstempo records are missing for at least 145,000 personnel who deployed in fiscal years 2014-2016.
For fiscal years 2012-2016, 30 percent of DOD's perstempo records were missing information that identifies service members' occupations, 14 percent were missing information that identifies the purpose of the perstempo events, and 8 percent were missing information that identifies the category of perstempo events.
The Navy identified about 13,000 personnel who spent more than 220 days away from home in fiscal year 2016 but were not accounted for in the DOD-wide data, and DOD officials could not explain why they were missing.
Without taking steps to emphasize the collection of complete and reliable perstempo data, DOD will be limited in its ability to assess the amount of time service members are serving away from home for all perstempo events and in its ability to use that information to assist in gauging the stress on the force.
What GAO Recommends
GAO recommends that DOD (1) clarify its policy to include specific and measurable department-wide perstempo thresholds for use while statutory thresholds are waived or ensure service-level policies are established and followed, and (2) take steps to emphasize the collection of complete and reliable perstempo data. DOD concurred with GAO's recommendations. |
Background
Training Is Important for Effective Grants Management
In fiscal year 2017, the federal government awarded approximately $675 billion in grants to state and local governments. As shown in figure 1, approximately 80 percent of the grant dollars awarded by the federal government in fiscal year 2017 came from the three agencies we reviewed for this report—HHS, USDA, and Education.
A range of skills are needed to manage the various tasks associated with the grants lifecycle. For example, during the award phase, grant staff at federal grant-making agencies are to send all grantees a grant award notification that provides details about the grant, including the amount of the award; and the general terms and conditions of the grant, including statutory and regulatory requirements. Figure 2 below illustrates the four distinct phases of the grants lifecycle.
Given the billions of dollars in federal grants funding that are awarded every year, effective training could help provide grants managers with the skills and competencies they need to better manage and oversee those dollars. As one example of the importance of rigorous grants management and training, in April 2017 we found that Education grants staff inconsistently documented key required monitoring activities and, as a result, about $21 million in discretionary grants lacked the correct documentation of grantee performance. We recommended that Education establish and implement detailed written supervisory review procedures for official grant files to provide reasonable assurance that grant staff perform and document key monitoring activities. Education officials agreed with the recommendation and said they would develop a department-wide standard operating procedure (SOP) that will, among other things, provide standards for timeliness of documenting key monitoring and administrative activities and require the periodic review of grant files. Officials expect to complete the SOP by September 30, 2018.
In 2011, OMB established the Council on Financial Assistance Reform (COFAR), an interagency group of executive branch officials with the stated aim of creating a more streamlined and accountable structure to coordinate financial assistance, including grants. In 2012 and again in fiscal years 2016 and 2017, COFAR identified the need to develop a qualified and professional workforce as one of six priorities to guide its work on grants management reform. According to OMB staff, OMB disbanded COFAR on June 15, 2017, as part of its efforts to reduce grants-related requirements, once COFAR had recommended policies and actions to effectively deliver financial assistance. COFAR’s recommendations resulted in the Uniform Administrative Requirements, Cost Principles, and Audit Requirements for Federal Awards, which is intended to improve performance, transparency, and oversight for federal awards.
Moving forward, the responsibility of coordinating financial assistance priorities was given to the Chief Financial Officers Council (CFOC), a group of 24 agency chief and deputy chief financial officers that work together to improve financial management in the U.S. government. According to OMB staff, the controller of OMB’s Office of Federal Financial Management is the chair of the CFOC. In addition, OPM is responsible for providing leadership and guidance over federal agency training to ensure the effective promotion and coordination of federal agency training programs and operations. Further, the President’s Management Agenda established “results-oriented accountability for grants” as a cross-agency priority goal to “maximize the value of grant funding by applying a risk-based, data-driven framework that balances compliance requirements with demonstrating successful results for the American taxpayer.”
Certification Standards for the Grants Workforce
In 2013, we examined grant workforce and training issues and found there were no specific government-wide training requirements for the federal grants workforce. As of June 2018, this continued to be the case. By contrast, there are government-wide training requirements for the acquisitions workforce intended to help ensure its quality and effectiveness. For example, OMB’s Office of Federal Procurement Policy (OFPP) provides government-wide guidance on managing the acquisitions workforce. The Federal Acquisition Institute, which coordinates with the OFPP, promotes the development of the civilian acquisitions workforce. Further, OFPP has developed Federal Acquisition Certification requirements for acquisition professionals serving as contracting staff, contracting officer’s representatives, and program/project managers. Notably, in fiscal year 2017, the federal government spent approximately $166 billion more on grants to state and local governments than it did on federal acquisitions. OMB staff explained that the acquisitions workforce faces more requirements because contracts have more uniform requirements and are specified in law. They stated that grants, on the other hand, are diverse and are established by individual statutes with varying conditions.
Our work in the acquisitions area identifies the importance of providing reasonable assurance of an appropriately trained staff through certification. Certification programs are designed to ensure that individuals attain the knowledge and skills required to perform in a particular occupation or role by establishing consistent standards. For example, for the acquisition workforce, OFPP requires a minimum set of career-specific courses, along with education and experience requirements, to obtain certification. To ensure acquisition professionals remain current on acquisition policies and practices, OFPP also requires the acquisition workforce to meet continuing learning requirements. See appendix I for a comparison of training for the federal acquisition workforce versus the federal grants workforce.
Education, HHS, and USDA delegate to their various sub-agencies the decision of whether grants employees should obtain professional grants certifications. Of the 11 sub-agencies we reviewed, 3 at HHS—the Centers for Medicare and Medicaid Services Discretionary Grants Office, the Health Resources and Services Administration, and the National Institutes of Health—and 2 at Education—the Office of Special Education and Rehabilitative Services and the Office of Post-Secondary Education—required certification of some of their grants employees. Officials at the remaining 6 sub-agencies offered certification to their grants employees on an optional basis. USDA sub-agency officials said they often recommend the certificate program to their grants employees, and Education’s sub-agency officials at the Office of Elementary and Secondary Education said they nominate staff to take the grants certificate program whom they believe would benefit the office most by receiving the training.
While COFAR officials explored the possibility of establishing certification standards for the grants workforce by September 2015, OMB staff said they determined that certification was not the most appropriate course of action for the grants workforce for several reasons including risk management and internal control concerns and the need for a variety of skills for the grants workforce. As previously mentioned, OMB disbanded COFAR in June 2017, and CFOC took over COFAR’s responsibilities. When we spoke with OMB staff in the fall of 2017, they said their focus had shifted from establishing certification standards for the grants workforce to providing guidance on needed competencies and enabling the grants workforce to obtain them.
OPM, OMB, and CFOC Have Taken Some Steps to Help Provide Grants Training but Have Opportunities for Further Improvements
OPM, OMB, and CFOC Developed a Grants Competency Model Among Other Steps
OPM, in consultation with OMB and the CFOC, took several steps to ensure the federal grants management workforce has access to grants management competencies and training. For example, OPM identified grants management competencies that could be used in agency efforts for workforce planning, training and development, performance management, recruitment, and selection. After establishing grants management competencies, OPM officials told us they established the 1109 job series partly because OMB and CFOC staff requested a new grants management job series in response to the increased grant awards and staffing needs created by the American Recovery and Reinvestment Act of 2009. Figure 3 illustrates the timeline of the main steps taken by OPM, OMB, and CFOC over the last decade.
In 2008, OPM initiated a government-wide study to identify critical competencies for grants management work. After the government-wide study was completed, OPM issued a memorandum to all federal agencies announcing a grants management competency model that included general competencies such as accountability, writing, and computer skills. OPM also included technical competencies such as grants management, financial analysis, and compliance. In our prior work, we found that grants management competency models can be used to establish an overall framework to guide agencies’ training efforts.
Before OPM established the 1109 job series in 2010, no other agency-specific job classification series existed for the many federal employees responsible for carrying out managerial and administrative tasks related to grants, including ensuring compliance with OMB and agency policies and procedures. In the absence of a specific job classification, we reported in 2013 that officials at selected agencies told us they had classified these employees under a variety of other job series that did not focus on grants, such as general, administrative, and subject-matter job titles.
According to OPM officials, the agency’s development of the “Position Classification Flysheet for the Grants Management Series (1109)” leveraged the competencies and tasks from the Competency Model for Grants Management and input from federal agencies’ subject matter experts on grants management work. The Flysheet includes a job series definition, a basic job title, general occupational information, and a link to the position classification standard. Employees in the 1109 job series manage, supervise, lead, or perform administrative business, policy, and analytical work involving the: (1) management, award, or obligation of funds for grants; (2) competitive or non-competitive evaluation of grants proposals; and/or (3) administration or termination, and/or closeout of grants and/or grants assistance and agreement awards. The work requires knowledge of laws, regulations, rules, policies, procedures, and financial methods to help ensure accountability of the grant funds.
As of fiscal year 2016, grant-making agencies reported 2,035 federal employees in the 1109 job series, and HHS reported 38 percent of those employees (see figure 4). We used fiscal year 2016 data to determine the agency-wide numbers of 1109 job series employees because this was the most recent set of full year data available at the time of our analysis.
The federal grants workforce also includes a wide range of employees in other non-1109 job series positions. OPM does not collect data on grants workforce employees in these other job series positions as they span a large number of different job series that can vary by agency. Non-1109 employees working on grants typically possess expert knowledge in the specific area necessary to meet a grant’s goals (e.g., announcing the terms and conditions of a grant, recommending potential grantees, and monitoring grantees’ progress in achieving the grant’s goals). Reflecting the wide variety of federal programs that grants support, these individuals typically possess expertise in a specialized program or subject.
A number of factors affect usage of the 1109 job series within agencies. According to OMB staff, various agency employees have told them that many agency employees would rather be classified as subject matter specialists, such as scientists, than as grants management specialists whose primary tasks are grants management under the 1109 job series. In addition, OMB staff said that some agencies preferred recruiting staff using a more general non-1109 job series classification. OMB staff also said that some agencies indicated their grants workforce employees do not want to be classified as grants specialists because the other job series are more general and are a better fit in terms of the needed subject matter expert skills and duties.
We found that one of our selected agencies, Education, does not use the 1109 job series at all because, according to Education sub-agency officials, they require grants employees to have specialized grant program content knowledge in the field of their grant program focus, such as rehabilitation, special education, behavioral science, and other areas (e.g., standards and assessments, state accountability systems). The sub-agency officials said that 1109 grants management specialists would not have the specific content knowledge and experience associated with the specific educational grant programs that Education requires. We also found that over 61 percent of HHS grants workforce employees and over 90 percent of the USDA grants workforce were not part of the 1109 job series.
OPM officials told us that, in April 2017, they started a government-wide Grants Management Post Classification Implementation Study that may change the Grants Management Classification Flysheet and revalidate the Competency Model for Grants Management Work. OPM officials developed the study after meeting with HHS, a grant-making agency, and the study will include a government-wide survey of the grants management workforce. OPM officials also stated they are in the final stages of developing and clearing the government-wide survey and anticipate issuing it in the fall of 2018. They said the study will take several additional months to complete because the team must review the results of the government-wide survey and update competencies, job classifications, and compliance policy/requirements.
OMB and CFOC Have Provided Some Grants Training and Guidance, but Use Has Been Limited Among Selected Agencies
OMB’s role with the grants management workforce includes issuing government-wide guidance and providing a framework that enables agencies to take actions to align their grants training with OMB’s internal control standards. In this role, OMB has taken some actions to provide grants guidance for federal agencies that include the Career Roadmap Report, Career Roadmap Builder, and Grants Training 101. However, we found that almost all of the officials we interviewed at the 11 selected sub-agencies were not familiar with the Career Roadmap Report and Career Roadmap Builder. Additionally, almost none of them mentioned using Grants Training 101 as part of their grants workforce training.
Financial Assistance Career Roadmap
OMB, in collaboration with the CFOC, COFAR, and federal awarding agencies, developed the Financial Assistance Career Roadmap Report in June 2017. OMB staff said that the Career Roadmap Report is one vehicle used to address grants training for the federal agency grants workforce. It is a tool for federal agencies to identify and document the competencies needed for successful job performance of federal financial assistance management professionals. According to the CFOC, the competencies and related elements outlined in the Career Roadmap Report are to be used to identify and prioritize training needs for the federal financial assistance management workforce. This is an optional tool for the federal grants workforce and may be customized to reflect an organization’s unique requirements and specifications. That workforce includes the grants management 1109 job series employees, as well as employees performing grants responsibilities as program, finance, and audit experts who are classified under other job series.
During the initial development of the Career Roadmap Report, a team consisting of OMB staff and industrial and organizational psychologists collected financial assistance research and documentation from OMB, federal awarding agencies, and OPM. The team analyzed this information to identify foundational competencies and create a draft competency model which OMB reviewed. The team also facilitated two workshops with specialists on financial assistance management to gather feedback on the Career Roadmap Report. Figure 5 below shows the 14 different competencies from the Career Roadmap Report that are divided into two types of competencies: functional and leadership.
After the report’s release, CFOC developed and released an interactive version, the Career Roadmap Builder, available to the public online. This version allows users to build their own customized financial assistance management Career Roadmap based on their specific mission and needs. To obtain a custom Career Roadmap Report, users complete several steps in the Career Roadmap Builder: they select one or more of nine functional competencies; one of three job levels (such as foundational or practitioner); one of three proficiency levels for each functional competency (basic, intermediate, or advanced); an optional leadership competency; and, if they include leadership, one of three leadership levels (entry, mid, or senior) and a leadership proficiency rating (basic, intermediate, or advanced).
The user then receives a customized report with relevant competencies, career levels, a sample of the associated developmental experiences and recommended training courses.
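To make the Builder’s selection workflow concrete, the following Python sketch maps a user’s selections to a simple customized report. It is a hypothetical illustration only; the competency names, course titles, and build_roadmap function are our assumptions, not the CFOC’s actual tool or data.

```python
# Hypothetical sketch of a Career Roadmap Builder-style selection workflow.
# The competency names, proficiency levels, and course titles below are
# illustrative assumptions, not the CFOC's actual data or implementation.

SAMPLE_COURSES = {
    ("Risk Management", "basic"): ["Introduction to Grants Risk Assessment"],
    ("Risk Management", "intermediate"): ["Conducting a Cost Analysis and Budget Review"],
    ("Monitoring", "basic"): ["Monitoring Grants Fundamentals"],
}

def build_roadmap(competencies, job_level, proficiency, leadership_level=None):
    """Assemble a customized roadmap report from a user's selections."""
    report = {"job_level": job_level, "recommended_courses": []}
    for competency in competencies:
        # Look up sample courses for each selected competency at the chosen
        # proficiency level; unknown pairs simply contribute nothing.
        report["recommended_courses"].extend(
            SAMPLE_COURSES.get((competency, proficiency), []))
    if leadership_level:  # the leadership competency is optional
        report["leadership_level"] = leadership_level
    return report

print(build_roadmap(["Risk Management", "Monitoring"],
                    job_level="practitioner",
                    proficiency="basic",
                    leadership_level="mid"))
```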
Department-level officials we spoke with at HHS, USDA, and Education were familiar with the Career Roadmap Report. However, almost all of the officials we interviewed at the 11 selected sub-agencies were not aware the Career Roadmap Report was available to them.
All but one of the officials we spoke with at four HHS sub-agencies said they were unaware of the Career Roadmap Report and grants management competencies.
While USDA’s agency-wide Federal Financial Assistance Committee received a copy of the Career Roadmap Report in August 2017 and discussed it at their monthly meetings, almost all of the officials at the four USDA sub-agencies we reviewed said they had not received it. However, three sub-agency officials were familiar with the report because they had been involved with agency-wide efforts to provide grants management competency support and information. All other USDA sub-agency officials with whom we spoke were unfamiliar with the Career Roadmap Report or the grants workforce competencies.
Almost all of the officials we interviewed at three Education sub-agencies were unaware of the Career Roadmap Report. However, one official from one sub-agency was familiar with the report because he had been part of its development process.
OMB staff stated they publicized the report by sending a “Controller Alert” on July 3, 2017, to agency chief financial officers and to members of the Financial Assistance Committee for E-Government notifying them of its availability and OMB’s future plans to map it to existing training resources, place it on OPM’s website, and develop an online interactive tool including position competencies. However, we found it difficult to locate the “Controller Alert” on the CFOC website as it is not located on the same tab where the Career Roadmap Report is published but instead in a news section that users may not know to search. Further, OMB’s “Controller Alert” states that it “does not constitute official guidance or prescribe specific tasks for agencies beyond consideration of appropriate steps to address the issue.” OMB did not issue any official government-wide memorandums to explain that it supported the Career Roadmap Report, or that the report included updated competencies for both the 1109 and non-1109 job series workforce.
Our internal control standards state that management should internally communicate the necessary quality information to achieve the entity’s objectives. However, if all levels of an agency are not aware of government grants workforce competencies and guidance, the agency may not be able to ensure that grants workforce employees have the training resources needed to develop and maintain skills to achieve the objectives of grant awards.
OMB Grants Training 101
OMB also worked with federal grant-making agencies, COFAR, and the CFOC to establish Grants Training 101, a set of five online training modules designed to provide federal officials a basic knowledge of grants and cooperative agreements. According to OMB staff, the Grants Training 101 webpage states that the training is not designed to provide detailed administrative, accounting, and audit requirements specific to statutory provisions, agency regulation, and guidance because agencies need to have flexibility in designing grants training programs to meet those grant-specific statutory requirements. OMB staff said they designed the training modules in response to a request from the federal grants community for a government-wide grants management training resource to ensure some level of consistent training among grant-awarding agencies. In addition, OMB staff said it was optional for agencies to incorporate Grants Training 101 into established grants training and that each agency is responsible for the means by which they conduct grants management training.
Only one of the agencies we reviewed had plans to include OMB’s Grants Training 101 as part of its grants training program. HHS officials said they are developing an internal online grants 101 course and plan to incorporate parts of OMB’s Grants Training 101. However, most agency and sub-agency officials we spoke with did not use OMB’s Grants Training 101 as part of their grants workforce training. OMB staff said that the Grants Training 101 modules cover the grant lifecycle and the requirements of the Uniform Guidance, and are intended to complement other trainings that agencies provide to their grants managers. OMB staff said that agencies make the decision whether to use the Grants Training 101 modules and can integrate parts of the training modules into their agency-specific training requirements. For example, officials at one of the agencies—Education—stated they cover many of OMB’s Grants Training 101 learning objectives through their cross-cutting grant training program courses as well as sub-agency specific training. Furthermore, OMB staff said that each agency would have to internally monitor grants employees’ completion of the grants training modules.
OMB and CFOC Do Not Collect Detailed User Data or Feedback to Determine Usefulness of Grants Training and Guidance
OMB staff told us that OPM initially had the responsibility of hosting the first two modules of Grants Training 101 on the OPM website while the remaining three modules were under development. After these remaining modules were completed, all five of the modules were moved to the CFOC webpage. In addition, OPM was responsible for collecting the Grants Training 101 user and completion data. OMB provided us the Grants Training 101 data, which totaled 1,277 users registered between December 2015 and November 2017; however, we found that the data were incomplete due to missing data fields.
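The kind of incompleteness we found can be surfaced with a simple record check. The Python sketch below illustrates such a check; the field names and sample records are our assumptions, not the actual OPM data.

```python
# Hypothetical sketch of a completeness check on training registration
# records; the field names and sample records are assumptions, not the
# actual OPM data.

REQUIRED_FIELDS = ["registration_date", "agency", "modules_completed"]

def count_incomplete(records):
    """Return how many records are missing at least one required field."""
    return sum(
        1 for record in records
        if any(not record.get(field) for field in REQUIRED_FIELDS)
    )

users = [
    {"registration_date": "2016-03-01", "agency": "HHS", "modules_completed": 2},
    {"registration_date": "2017-07-15", "agency": None, "modules_completed": 1},
    {"registration_date": "2017-11-30"},  # agency and modules fields missing
]
print(f"{count_incomplete(users)} of {len(users)} records are incomplete")
```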
OMB staff stated that the Grants Training 101 website was moved to the CFOC webpage so the general public can access it. The CFOC will not collect data on the access dates, the agency names, or the number of Grants Training 101 users; however, it will collect data on the number of visitors to the Grants Training 101 website. OMB staff also said that agencies can decide to track Grants Training 101 users internally because OMB and the CFOC will not collect specific data on users. In addition, OMB staff said OMB and CFOC have not collected any formal Grants Training 101 feedback from users and have no plans to do so. OMB reported that a total of 175 visitors went to the Career Roadmap Report website between September 2017 and January 2018.
Our Standards for Internal Control in the Federal Government advise management to process data into quality information that is appropriate, current, complete, accurate, accessible, and provided on a timely basis. They further state that management should evaluate the processed information and revise it when necessary so that it can be used to make informed decisions. In addition, our 2004 Human Capital Guide states that it is increasingly important for agencies to be able to evaluate their training and development programs to demonstrate how these efforts help develop employees and improve the agencies’ performance. As part of this approach, the Human Capital Guide also states that assessing training and development efforts should consider feedback from employees.
OMB, CFOC, and COFAR devoted time and multiple resources to developing the Career Roadmap Report to identify and document the competencies needed for successful job performance of federal financial assistance management professionals. Obtaining more detailed user information and regular feedback from federal agencies on the usefulness of the Career Roadmap Report and the online Career Roadmap Builder could help OMB and CFOC to evaluate the effectiveness of these grant training tools. In addition, obtaining user information and feedback from federal agencies on the usefulness of Grants Training 101 can also help OMB and CFOC evaluate its effectiveness.
HHS, USDA, and Education Vary in Following Selected Leading Training Practices
In 2004, we issued a framework of principles and key questions that federal agencies can use to ensure that their training and development investments are targeted strategically and are not wasted on efforts that are irrelevant, duplicative, or ineffective. Our framework identifies four components of the training and development process: (1) Planning, (2) Design and Development, (3) Implementation, and (4) Evaluation. Within each component, the guide identifies leading practices and questions for agencies to consider when assessing each of these four components. We compared current grants training practices at the selected agencies and sub-agencies with selected leading training practices from the guide. We found variation among sub-agencies in following those selected training practices.
Planning: skills and competencies assessment. In our guide, we stated that effective workforce planning and training begins with a skills and competency assessment. A leading practice under this component is that agencies use an organization-wide knowledge and skills inventory and industry benchmarks to help identify performance problems in their workforces. We stated that workforce planning should entail the collection of valid and reliable data on such indicators as distribution of employees’ skills and competencies.
Officials we interviewed at all the selected sub-agencies explained that grants training needs are primarily identified by grants management supervisors or self-identified by grants workforce employees. The training needs are identified on an ad hoc basis during (1) manager evaluations or observations of employee performance, (2) annual and semiannual performance assessments, and (3) employee career individual development plans.
When it came to implementing a more rigorous process involving a knowledge and skills inventory or the collection of valid and reliable data, we found varied use among the three agencies and 11 sub-agencies, with only some employing such a method.
The four HHS sub-agencies we reviewed assess new grants workforce employees’ knowledge, skills, and abilities by identifying skills gaps during onboarding, through supervisor observation of employee performance, or through employee feedback.
In fiscal year 2015, USDA’s Food and Nutrition Service (FNS) sub-agency started holding monthly meetings with its Regional Grants Management Division Directors to identify national training needs for its grants management staff. In fiscal year 2017, FNS also conducted a nationwide qualitative survey of its grants employees to identify training gaps and needs. The remaining three sub-agencies we reviewed informally identify skills gaps and training needs through ongoing discussions between supervisors and grants employees and during annual performance evaluations.
Officials from Education’s central Learning and Development office stated they issue a department-wide competency assessment and training needs assessment to the various department sub-agencies annually or biannually. Officials from Education’s Office of Elementary and Secondary Education sub-agency told us they also conduct their own grants workforce learning needs assessment examining grants tasks, content knowledge, and general skills. Officials at the other two Education sub-agencies told us they assess skills gaps and training needs through ongoing discussions between supervisors and grants employees, supervisor observation of employee performance, and annual performance evaluations.
Without a formal knowledge and skills inventory or collection of valid and reliable data on the grants workforce’s skills and competencies, some sub-agencies may be limited in identifying performance problems, competency gaps, and training needs in their grants workforce.
Design and development: using a mix of approaches, sources, and delivery. Design and Development involves identifying specific training and development initiatives that the agency will use, along with other strategies, to improve individual and agency performance. One of the leading practices under this component is choosing the most appropriate mix of centralized and decentralized management of training programs; internal and external training sources; and training delivery mechanisms (e.g., classroom, computer-based, on the job, etc.). All three agencies provide the majority of their grants training at the sub-agency level. In most cases, the sub-agencies use a mix of training sources and delivery methods in developing and implementing their grants training programs, including identifying training needs and training content, as detailed in appendix III.
HHS and USDA primarily use decentralized approaches to grants training while Education uses a hybrid approach of centralized and decentralized grants training.
Although there is no overarching grants training program across HHS, the department’s central offices provide topic-specific training to Chief Grants Management Officers (CGMO) within each sub-agency on an ad hoc basis as new grant policies or requirements are developed. CGMOs then decide how to disseminate this information within their respective sub-agencies (e.g., through webinars, teleconferences, or ad hoc trainings). An HHS council composed of CGMOs also meets on a quarterly or biannual basis to discuss new grants policy and requirements. Further, HHS’s central grants offices are developing a foundational “Grants 101” course to help standardize a baseline of grants knowledge across all of HHS’s sub-agencies, which they expect to complete by November 2018. Currently, the sub-agencies provide the majority of grants-specific training, which focuses on grants topics and mission requirements relevant to their specific areas.
USDA’s Office of the Chief Financial Officer (OCFO) provides some required training courses across the agency, such as suspension and debarment and federal appropriations law training; however, these trainings are not specific to the grants workforce. The sub-agencies provide all grants-specific training.
Of the three selected agencies, Education provides the most central office training. For example, Education’s OCFO provides agency-wide training on discretionary and formula grants financial and budgetary courses; Learning and Development provides introductory grant courses; and Risk Management Services provides risk-based grants training covering topics including cost analysis, budgetary review, monitoring grants, and uniform guidance. Additionally, Education’s sub-agencies provide mission- and program-specific grants training to augment the centrally provided trainings.
Centralized and decentralized training approaches may present different advantages for agencies and sub-agencies. On the one hand, efficiencies may be achieved by centralizing the design and delivery of some grants training that has widespread applicability throughout the agency. Additionally, if each sub-agency is responsible for implementing its own grants training program, the potential exists for inconsistent grants workforce training across the agency. On the other hand, each sub-agency is able to tailor the training to its own needs when it manages and provides the training itself. In making this decision, it is important for agencies to carefully analyze and consider trade-offs.
Implementation: establishing agency-level accountability. Implementation involves ensuring effective and efficient delivery of training and development opportunities in an environment that supports learning. One of the leading training practices under this component is an agency organization that is held accountable, along with the line executives, for the maximum performance of the workforce. According to our Human Capital Guide, there are different ways of ensuring accountability, including establishing clear lines of authority in agency policies, issuing agency-wide guidance to ensure consistency, and establishing a central oversight office, among others.
We found variation among the three selected agencies in following this leading training practice, with HHS and Education having some agency-level accountability and USDA having less.
HHS’ central Office of Grants Policy, Oversight, and Evaluation assigns desk officers to work with sub-agency CGMOs in helping them understand available training resources and needs. HHS also has an Executive Committee for Grants Administration Policy Council that meets quarterly to discuss regulations, policies, and grants administrative requirements. This committee is made up of CGMOs from each HHS sub-agency. HHS describes the roles of officials involved in overseeing grants management in an agency-wide grants policy manual.
USDA has not defined roles for central offices to hold them accountable for grants training. While its central OCFO provides some guidance on federal financial assistance policies and grants terms and conditions, and ensures department-wide training requirements are met, USDA has no agency-wide grants training guidance, no agency-wide grants manual, and no central office that oversees grants training at the component level.
Education officials stated that the agency has two agency-wide grants policy manuals and some Education offices have roles in overseeing grants training. For example, the central Learning and Development office provides some oversight of employee development, training programs, and providers. Further, Education officials stated that Risk Management Services oversees Education’s licensure training program across the sub-agencies, and OCFO provides agency-wide training on financial management of grants.
Holding a central office accountable for grants training can provide agencies with reasonable assurance that training is being delivered efficiently and effectively and that grant staff have sufficient developmental opportunities. In this way, agencies can better ensure the maximum performance of the grants workforce.
Evaluation: using data to assess training results. Evaluation involves assessing the extent to which training and development efforts contribute to improved performance and results. A selected leading training practice under this component is the use of performance data (both qualitative and quantitative measures) to assess the results achieved through training and development efforts.
The three agencies we reviewed primarily conduct evaluation at the sub-agency level. The sub-agencies vary as to how they carry out their evaluations and few use any quantitative performance measures to determine if training was successful.
HHS officials stated the central offices do not measure the effectiveness of training, nor is there centralized information sharing on how well training works. Officials at the HHS sub-agencies we reviewed told us they primarily use informal feedback such as ongoing conversations between employees and supervisors after training completion and supervisor observations of employee performance to determine if grants training is successful. Officials at HHS’ Health Resources and Services Administration also said they receive data regarding employee scores on required grants training courses. Some HHS sub-agencies use an external vendor for some grants training and employees complete a survey at the end of each of these courses, but HHS officials do not see those results. HHS officials rely on employee feedback after training completion to determine if external vendor training is effective.
Officials at the USDA sub-agencies we reviewed told us they primarily use informal feedback through supervisory review of employee performance and employee individual development training plans; internal local level reviews and audits of grant processes; and some course completion surveys.
Officials at Education’s central Learning and Development office told us they conduct electronic course evaluation surveys. Officials at the Education sub-agencies we reviewed told us they primarily use informal feedback from employees, supervisor observation of an employee’s progress after training, and some course evaluations.
While informal, qualitative feedback from employees taking grants training is useful, it is not quantifiable or measurable. Using a balanced approach that reflects feedback from employees as well as organizational results is more effective in terms of evaluating the usefulness of grants training efforts.
Many of the issues discussed above regarding following leading training practices stem from limited oversight of the sub-agencies, which we describe in the next section.
Selected Agencies Provide Limited Monitoring and Oversight of Sub-agencies’ Grants Training Efforts
Selected Agencies Cannot Readily Identify All Employees Working on Grants and Provide Limited Oversight of Sub-Agencies’ Grants Training Efforts
As previously mentioned, the federal grants workforce consists of employees in the OPM Grants Management Specialist 1109 job series as well as employees in various other OPM job series (referred to as non-1109s in this report). HHS and USDA both employ 1109s as well as non-1109s in their respective grants workforces, while Education only employs non-1109s. According to HHS, USDA, and Education officials, each sub-agency is responsible for identifying its grants workforce employees and ensuring they receive needed grants training. However, the central offices do not have a reporting mechanism tracking sub-agencies’ grants workforce. At our request, officials from the three agencies queried each of their sub-agencies and provided us with data on 1109 and non-1109 grants personnel. As figure 6 shows, the majority of grants personnel at the three agencies we reviewed are non-1109 employees.
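Conceptually, a central reporting mechanism is little more than a roll-up of counts reported by each sub-agency. The Python sketch below illustrates one such roll-up; the sub-agency names and counts are invented and do not represent the agencies’ actual data.

```python
# Minimal sketch of a central roll-up of sub-agency grants workforce
# counts; sub-agency names and counts are invented for illustration.

sub_agency_counts = {
    "Sub-agency A": {"1109": 120, "non_1109": 310},
    "Sub-agency B": {"1109": 45, "non_1109": 520},
    "Sub-agency C": {"1109": 0, "non_1109": 200},
}

totals = {"1109": 0, "non_1109": 0}
for counts in sub_agency_counts.values():
    for series, count in counts.items():
        totals[series] += count

non_1109_share = totals["non_1109"] / (totals["1109"] + totals["non_1109"])
print(totals)                                   # agency-wide counts by series
print(f"non-1109 share: {non_1109_share:.0%}")  # majority are non-1109s
```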
Standards for Internal Control in the Federal Government state that, “Management should demonstrate commitment to recruit, develop, and retain competent individuals.” Furthermore, internal controls state that “management evaluates competence of personnel across the entity in relation to established policies.” Since the agencies we reviewed cannot readily identify their total grants workforce, they have limited ability to evaluate the competence of grants personnel across the entity to ensure they are receiving needed training.
Since the three agencies we reviewed do not centrally monitor their sub-agencies’ identification of grants employees, they cannot readily identify the agency’s total grants workforce. Consequently, the selected agencies do not have reasonable assurance that all employees working on grants across their agency are receiving needed grants training and have the necessary knowledge, skills, and abilities to properly manage, administer, and monitor grants.
Central offices at HHS, USDA, and Education provide limited oversight of the types of training sub-agencies provide to their grants workforce. Our Human Capital Guide identifies having an agency organization that is held accountable, along with the line executives, for the maximum performance of the workforce as a leading practice. Further, the guide states that the agency’s training organization and line executives should work together to establish control mechanisms to ensure that agency employees successfully complete required and assigned training and development. Additionally, the guide states that agencies must assign authority and delegate responsibility to the proper personnel and establish clear accountability for maximizing workforce performance.
However, as mentioned earlier, there is no overarching office at the selected agencies responsible for overseeing the types of grants training sub-agencies provide. Additionally, the central offices at the selected agencies do not evaluate sub-agency grants training efforts. We found variation among the 11 sub-agencies’ grants training programs (as shown in appendix III), which highlights the importance of central office oversight for making sure the training variation is appropriate. As a result of these issues, the selected agencies do not have assurance that grants training provided across the various sub-agencies is sufficient in meeting the needs of the various employees working on grants.
Since there is no overarching central office at any of the three agencies we reviewed that is held accountable for sub-agency grants training programs, HHS, USDA, and Education cannot ensure that all of the sub-agencies working on grants are sufficiently training their grants employees. Without central agency oversight and accountability across sub-agency grants training programs, not all grants employees may be sufficiently trained on grants processes and procedures, which could affect grant oversight because grants employees may not monitor grants properly.
Conclusions
Given the importance of grants as a tool to achieve federal objectives and the large outlays the federal government makes to fund them each year, it is critical that the people who manage these grants—the federal grants workforce—be well-trained to handle their responsibilities. To help provide training to this workforce, OPM, OMB, and CFOC created grants management competencies, a grants job series, some grants training, and a career roadmap. However, they have not widely publicized the roadmap and some sub-agencies we reviewed were unaware of it. Moreover, OMB and the CFOC are not collecting detailed data on users or feedback, which limits their ability to determine how useful these resources are to the federal grants workforce.
The selected agencies varied in following selected leading training practices, and they provided limited monitoring and oversight of their sub-agencies’ grants training efforts. Without sufficient monitoring and oversight, the agencies cannot have reasonable assurance that their sub-agencies are sufficiently training their grants workforce so they have the necessary knowledge, skills, and abilities to properly manage, administer, and monitor the billions of dollars that the federal government spends on grants annually.
Recommendations for Executive Action
We are making a total of five recommendations, including two to OMB and one to each of the selected agencies in our review. Specifically: OMB’s Office of Federal Financial Management’s Controller (the CFOC chair) should ensure CFOC formally publicizes the Career Roadmap guidance among the 24 CFO agencies through memorandums, briefings, trainings, regular CFOC meetings, or technical assistance and clearly posts its “Controller Alert” on the CFOC website with the Career Roadmap Report. (Recommendation 1)
The Director of OMB, working with CFOC, should (1) collect data metrics regularly on the Career Roadmap Builder online tool and Grants Training 101 to determine how widely the resources are being used, and (2) obtain periodic feedback from federal agencies on the usefulness of these tools and any needed improvements. (Recommendation 2)
The Secretary of HHS should establish a process to monitor and evaluate HHS’s grants training at the central office level. This process should include (1) a method for identifying all employees working on grants across the agency, and (2) oversight procedures to evaluate the sufficiency of sub-agencies’ grants training efforts including the incorporation of leading practices related to assessing competencies, training approaches, accountability, and training results. (Recommendation 3)
The Secretary of USDA should establish a process to monitor and evaluate USDA’s grants training at the central office level. This process should include (1) a method for identifying all employees working on grants across the agency, and (2) oversight procedures to evaluate the sufficiency of sub-agencies’ grants training efforts, including the incorporation of leading practices related to assessing competencies, training approaches, accountability, and training results. (Recommendation 4)
The Secretary of Education should establish a process to monitor and evaluate Education’s grants training at the central office level. This process should include (1) a method for identifying all employees working on grants across the agency, and (2) oversight procedures to evaluate the sufficiency of sub-agencies’ grants training efforts including the incorporation of leading practices related to assessing competencies, training approaches, accountability, and training results. (Recommendation 5)
Agency Comments and Our Evaluation
We provided a draft of this product to Education, HHS, OMB, OPM, and USDA for review and comment. In written comments reproduced in appendixes IV and V, respectively, HHS concurred and Education generally concurred with our findings and recommendations directed at them. Both agencies described the steps they were taking to implement our recommendations. In an email, the Chief Learning Officer said that USDA concurred with our findings and recommendation. In an email, a Management Analyst said that OPM had no comments on the draft report.
OMB staff provided us with oral comments stating that the agency partially concurred with our first two recommendations. Specifically, for our first recommendation, OMB generally agreed with our finding that the Career Roadmap guidance should be better publicized. However, OMB believes this is not its responsibility but rather the responsibility of federal agencies. OMB stated that federal agencies could incorporate a method into their improvement plans to ensure that sub-agencies are made aware of the Career Roadmap Guidance. We believe that, as the federal government’s central management agency and developer of the Career Roadmap, OMB has a responsibility for ensuring that federal agencies are aware of the Career Roadmap guidance by formally publicizing it through memorandums, briefings, trainings, regular CFOC meetings, or technical assistance.
For the portion of our first recommendation that discusses clearly posting the “Controller Alert,” OMB stated it will look at the alert’s placement on the CFOC website to see if the agency can make it more prominent. We continue to believe that the “Controller Alert” should be easily accessible to anyone visiting the website and should be located on the same page as the Career Roadmap, where it would have greater visibility.
For our second recommendation, OMB agreed that user feedback data regarding the Career Roadmap Builder and Grants Training 101 is useful. However, OMB stated that while it will continue to collect data on the number of users, it believes that federal agencies should be responsible for collecting specific, detailed user data if they are using those resources. We continue to believe that OMB and CFOC would benefit from collecting specific, detailed user data on these tools, which they devoted time and multiple resources to developing. Collecting detailed data metrics that go beyond the number of users can help OMB and CFOC to better evaluate the effectiveness of these grants training tools. Additionally, OMB stated the agency is committed to working with CFOC to review the Grants Training 101 module to determine how useful it is and if any improvements or adjustments are needed.
All five agencies provided technical comments on the report draft, which we incorporated where appropriate.
We are sending copies of this report to the Secretaries of Education, HHS, and USDA and to the Directors of OMB and OPM. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2757 or [email protected]. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.
Appendix I: Comparison of Federal Acquisition Training and Grants Workforce Training
Appendix II: Grants Workforce by Job Series for Health and Human Services, Agriculture, and Education as of March 2018
Appendix III: Grants Training Programs at the Departments of Health and Human Services, Agriculture, and Education
The Department of Health and Human Services (HHS). HHS is a large agency with 11 sub-agencies administering a wide variety of health and human services, and it takes a decentralized approach to training its grants workforce. While HHS’ central Assistant Secretary for Financial Resources (ASFR) office provides grant policy and regulatory guidance updates to HHS sub-agencies, ASFR officials said they leave the decision on how to implement grants training to each of those sub-agencies. The selected sub-agencies we reviewed—the Administration for Children and Families, Centers for Medicare and Medicaid Services, Health Resources and Services Administration, and National Institutes of Health—all implement their own grants training programs and procedures.
The four sub-agencies at HHS that we reviewed take different approaches in how they implement their respective grants training programs. For example, some sub-agencies require grants personnel to take certain courses while others make them optional; some provide internal grants training while others also use the services of an external training vendor; and some require certification while others make it optional. Table 3 highlights some of the grants training programs’ characteristics at the four HHS sub-agencies we reviewed.
The Department of Agriculture (USDA). USDA is made up of 29 agencies and offices at more than 4,500 locations across the country and abroad. While its central Office of the Chief Financial Officer (OCFO) provides some guidance on federal financial assistance policies and grants terms and conditions, and ensures department-wide training requirements are met, it, like HHS, leaves the decision on how to implement grants training to each of its sub-agencies. The selected sub-agencies we reviewed—the Food and Nutrition Service, Forest Service, National Institute of Food and Agriculture, and Rural Development—all implemented their own respective grants training programs and procedures. Table 4 highlights some of the grants training programs’ characteristics at the four USDA sub-agencies we reviewed.
The Department of Education (Education). Education approaches grants training by combining both centralized and decentralized approaches for its eight principal offices that conduct grant work. Education’s central OCFO offers broad financial grants training such as Oversight of Financial Management of Ed Formula/Discretionary Grants and Discretionary Grant Budget Reviews. Education’s central Learning and Development office offers broad introductory grants training such as Introduction to Grants and Cooperative Agreements, Uniform Administrative Guidance, and Cost Principles. According to Education officials, Education’s Risk Management Services (RMS) offers risk management-based grants training, including Discretionary Grants Overview, Conducting a Cost Analysis and Budget Review, Monitoring Grants, Suspension and Debarment, and Risk Assessment and Risk Mitigation. RMS also manages Education’s licensing program and oversees training for new license holders geared towards grants administration.
In addition to these central office trainings, each Education sub-agency also provides specific training tailored to its mission, as verified by the three Education sub-agencies we reviewed—the Office of Special Education and Rehabilitative Services (OSERS), the Office of Elementary and Secondary Education (OESE), and the Office of Postsecondary Education. For example, according to Education officials, OSERS trains grant staff on the Individuals with Disabilities Education Act grant application review process, and OESE recently identified a need for, and developed and taught, a course on improving the grantee communication process. Table 5 highlights some of the grants training programs’ characteristics at the three Education sub-agencies we reviewed.
Appendix IV: Comments from the Department of Health and Human Services
Appendix V: Comments from the Department of Education
Appendix VI: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Tom James (Assistant Director), Jyoti Gupta (Analyst-in-Charge), Benjamin Adrian, Dawn Bidne, Jeff DeMarco, Karin Fangman, Joseph Fread, Robert Gebhart, Shirley Hwang, Serena Lo, Sharon Miller, Meredith Moles, Steven Putansu, Kayla Robinson, Robert Robinson, Cynthia Saunders, Stewart Small, and Dan Webb made key contributions to this report. | Why GAO Did This Study
In fiscal year 2017, the federal government awarded approximately $675 billion in grants to state and local governments. GAO was asked to review training for the federal grants workforce. GAO reviewed (1) OPM's, OMB's, and the CFOC's actions to address the grants workforce's training needs; (2) the extent to which grants workforce training at selected agencies is consistent with leading practices; and (3) how selected agencies monitor and oversee training of their grants workforce. GAO selected HHS, USDA, and Education and several of their sub-agencies based on their grants spending and numbers of grants management specialists. GAO reviewed OPM and OMB memorandums and guidance, compared selected agency training practices against leading training practices, and interviewed officials.
What GAO Found
The Offices of Personnel Management (OPM) and Management and Budget (OMB) and the Chief Financial Officers Council (CFOC) have taken some steps to help ensure the federal grants workforce receives training. For example, OMB worked with the CFOC to issue five basic grants training modules and a “Career Roadmap” for grants managers; however, they did not widely publicize the resources. Many of the officials with whom GAO spoke at selected sub-agencies at the Departments of Health and Human Services (HHS), Agriculture (USDA), and Education (Education) were unfamiliar with the Career Roadmap and made limited use of the training resources. Further, OMB and CFOC do not collect detailed user data or feedback, limiting their abilities to determine the usefulness of these resources.
GAO found that sub-agencies at HHS, USDA, and Education vary in following leading training practices for planning, designing, implementing, and evaluating their grants training programs. Additionally, HHS, USDA, and Education could not readily identify grants management specialists—the 1109 job series—or employees in other job series working on grants without querying each sub-agency. These agencies cannot do so because their central offices do not have a reporting mechanism tracking their sub-agencies' grants workforce. Further, agency central offices do not evaluate sub-agency grants training efforts. Without sufficient monitoring and oversight, the agencies cannot have reasonable assurance that their sub-agencies are sufficiently training their grants workforce so they have the necessary knowledge, skills, and abilities to properly manage, administer, and monitor the billions of dollars that the federal government spends on grants annually.
What GAO Recommends
GAO is making five recommendations including that OMB, working with the CFOC, should (1) publicize the Career Roadmap and (2) collect data metrics and user feedback on its use. HHS, USDA, and Education should establish processes to centrally monitor and evaluate their grants training, including identifying the grants workforce and ensuring consistency with leading practices. HHS and USDA concurred, Education generally concurred, and OMB partially concurred with our recommendations. OPM had no comments on the report. |
Background
Cobra Dane and other radar systems can provide capabilities that contribute to a range of missions, such as ballistic missile defense, space surveillance, and intelligence-gathering missions. DOD uses Cobra Dane and other radar systems to provide information over a short period of time to ground-based interceptors so they can hit their targets. Such radar systems contribute to ballistic missile defense by tracking incoming missile threats, classifying the missile threat, and determining if a threat was intercepted successfully. In addition, some radar systems can provide discrimination capabilities, which allow a radar to identify a warhead even when a missile threat deploys decoys at the same time. Radar systems can also have the capability to contribute to a space surveillance mission, which provides an awareness of space objects within or near the Earth's orbit and their movements, capabilities, and intent. Finally, radars can also contribute intelligence-gathering capabilities. Each radar system's ability to contribute to various missions can depend on that radar's inherent capabilities and physical location.
See table 1 for a description of selected radar systems that can provide some or all of these capabilities.
Various offices within the Air Force, in coordination with MDA, are responsible for the operation and sustainment of the Cobra Dane radar. Since 2013, Air Force Space Command has overseen the operation of Cobra Dane and has contributed to the sustainment of Cobra Dane's site at Shemya Island. The Air Force Life Cycle Management Center has overall responsibility for the sustainment of the Cobra Dane radar. In addition, MDA works in coordination with the Air Force and combatant commands to develop, test, and field ballistic missile defense assets. MDA also shares funding with the Air Force to operate and sustain Cobra Dane.
U.S. Northern Command and U.S. Strategic Command define priorities for the overall radar infrastructure and establish the various missions that those radar systems are intended to meet. U.S. Northern Command oversees the homeland ballistic missile defense mission, and establishes operational objectives for radar systems operating in its region. U.S. Northern Command officials told us that they are the end user for Cobra Dane. U.S. Strategic Command has established a ballistic missile defense and a space surveillance mission, both of which are supported by Cobra Dane. Further, U.S. Strategic Command’s components coordinate global missile defense and space operations planning.
Air Force Reported That Cobra Dane and LRDR Can Contribute to Various Missions, and We Found That Additional Radar Investments May Reduce Reliance on Cobra Dane
Air Force Reported That Cobra Dane and LRDR Contribute Both Shared and Unique Capabilities to Their Respective Missions
In its January 2018 report to Congress, the Air Force described how Cobra Dane and LRDR can meet mission requirements through their shared and unique capabilities, as well as how their locations affect their ability to provide those capabilities for DOD’s ballistic missile defense mission. MDA studies we reviewed found that locating LRDR at Clear Air Force Station allows for operational advantages and cost savings.
Ballistic Missile Defense and Space Surveillance Missions
The Air Force included information in its report to Congress on the ballistic missile defense capabilities of Cobra Dane and LRDR, and the effects of each radar’s location on those capabilities. Specifically, the Air Force report stated that both radars have the capabilities to track and classify missile threats. However, the report incorrectly stated that both radar systems have the inherent capability to determine if a missile threat is successfully intercepted. MDA documentation that we reviewed shows that Cobra Dane does not yet have this capability. When we shared our finding with Air Force and MDA officials, they agreed that this reported capability was incorrectly identified in the Air Force report to Congress. MDA officials also told us that Cobra Dane could provide this capability in the future if it implements software changes, but they are unlikely to do this until calendar year 2025.
The Air Force report also noted that LRDR would have a unique capability, once it is operational, to discriminate missile threats from any deployed decoys. See table 2 for a summary of what the Air Force reported for the ballistic missile defense capabilities of Cobra Dane and LRDR.
In addition to identifying ballistic missile defense capabilities of each radar, the Air Force report noted that both Cobra Dane and LRDR will have the inherent capabilities to support space surveillance and intelligence-gathering missions. DOD officials we spoke to confirmed that they have plans to use those inherent capabilities to support these other missions. For example, U.S. Strategic Command identified that DOD needs Cobra Dane to support its space surveillance mission. Further, Air Force and MDA officials told us that they use Cobra Dane to track small objects that no other radar system can track. MDA officials told us that LRDR could be used for space surveillance. However, Air Force and U.S. Strategic Command officials stated that there are no plans to use LRDR’s space surveillance capabilities as a replacement for Cobra Dane. Additionally, Air Force officials told us that neither Cobra Dane nor LRDR is required to support an intelligence-gathering mission.
The Air Force also included information in its report on how the locations of Cobra Dane and LRDR affect their abilities to contribute to the ballistic missile defense mission. For example, the Air Force reported that Cobra Dane’s location at Shemya Island, Alaska, allows it to track missile threats from North Korea earlier in their trajectories than LRDR would be able to track at Clear Air Force Station, Alaska. This is consistent with an MDA analysis that we reviewed that outlined additional advantages provided by Cobra Dane’s location at Shemya Island. According to that analysis, Cobra Dane can begin tracking missile threats approximately 210 seconds earlier than LRDR. Air Force officials told us that the additional time to track missile threats allows the warfighter an earlier opportunity to intercept a missile threat and deploy additional interceptors if the first attempt fails. Further, the MDA analysis described a tracking gap between the areas covered by LRDR—once it is operational at Clear Air Force Station—and the two sets of AN/TPY-2 radars that are currently located in Japan. Without Cobra Dane’s coverage of this gap, the analysis found that the warfighter would have a more limited opportunity to intercept a missile threat from North Korea. Figure 2 shows how Cobra Dane covers a gap between the LRDR (once operational) and the two AN/TPY-2 radars in Japan.
The Air Force report also noted that LRDR’s geographic location has its own advantages in contributing to ballistic missile defense compared to Cobra Dane’s location. For example, the Air Force report noted that LRDR’s location would allow it to track missile threats later in their trajectories beyond Cobra Dane’s coverage as those threats make their way to the continental United States. We also found that MDA has determined LRDR will have other advantages due to its location. For example, an MDA analysis that we reviewed found that LRDR’s location will allow for the radar system to contribute to ballistic missile defense from North Korean and Iranian threats. Absent LRDR, this analysis determined that there are no other radar systems that are located in a position to provide the capability to discriminate missile threats and determine if a threat was successfully intercepted.
Determination of LRDR Location
In addition to what the Air Force reported, we found that DOD decided to locate LRDR at Clear Air Force Station in Alaska after considering the advantages and disadvantages of other locations. For example, MDA completed studies that examined how LRDR could perform at various locations in Alaska, and the cost-effectiveness of constructing and sustaining the radar at those sites. In a June 2015 analysis, MDA compared how LRDR could perform in discriminating missile threats when co-locating it with Cobra Dane at Shemya Island or placing it at Clear Air Force Station. MDA determined that LRDR could provide more real-time discrimination information for missile threats targeting Alaska and the continental United States if it constructed the radar at Clear Air Force Station versus Shemya Island. Additionally, MDA identified in an October 2016 study that the department could obtain operational advantages and cost savings by constructing LRDR at Clear Air Force Station, Alaska, when compared to constructing it at Shemya Island, Alaska. Specifically, MDA determined that Clear Air Force Station could provide better results for 11 of the 13 factors it reviewed compared to Shemya Island. For example, MDA determined that locating LRDR at Clear Air Force Station would result in lower costs and enhanced system performance.
DOD Has Made Other Investments in Radar Systems That May Reduce Its Reliance on Cobra Dane to Meet Mission Requirements
According to DOD officials and documents we reviewed, other radar investments may reduce the department’s reliance on Cobra Dane for ballistic missile defense and space surveillance, given that U.S. Northern Command identified it has a need for Cobra Dane after DOD begins operating LRDR in fiscal year 2021. Specifically, the Pacific Radar and Space Fence may reduce DOD’s reliance on Cobra Dane to support ballistic missile defense and space surveillance, respectively.
Pacific Radar: According to DOD officials, the department may no longer need Cobra Dane to meet the ballistic missile defense mission after MDA fields a new radar in the Pacific region in fiscal year 2025. MDA began developing the Pacific Radar to provide additional missile threat tracking and discrimination capabilities. According to U.S. Northern Command and MDA officials, the Pacific Radar may fill the gap in tracking missile threats currently covered by Cobra Dane.
Space Fence: The Air Force has also determined it will no longer have a requirement for Cobra Dane to provide space surveillance once the Space Fence is fully operational. The Air Force plans for the Space Fence to be operational in fiscal year 2019. According to a U.S. Strategic Command briefing, the Space Fence will provide the same capabilities as Cobra Dane. Air Force officials noted that they want to continue relying on Cobra Dane for space surveillance when the Space Fence is operational, as long as the radar is available and used to contribute to ballistic missile defense.
Air Force Reported That Cobra Dane Generally Meets Its Requirements for Operational Availability, and We Found That the Air Force Can Mitigate Radar Downtime for Its Missions
In its January 2018 report to Congress, the Air Force noted that Cobra Dane met its requirement for operational availability—i.e., the percentage of time that the radar system is able to meet its ballistic missile defense and space surveillance missions. Specifically, the Air Force report noted that Cobra Dane had been available an average of 91 percent of the time over a 2-year period (January 2016 through December 2017), which exceeded the 90 percent requirement for operational availability.
Information that we reviewed from a more recent 2-year period (August 2016 through July 2018) showed that Cobra Dane's 2-year average for operational availability had declined to approximately 88 percent—below the 90 percent requirement. Air Force officials stated that the decline in operational availability over the more recent 2-year period was due to a few instances where they needed to take Cobra Dane off-line for extended periods of scheduled downtime (e.g., regular operations and maintenance, calibration of instruments). Further, they noted that when Cobra Dane is not operationally available, the reason is usually scheduled downtime.
Officials also told us there was one instance of unscheduled downtime (e.g., part or system failure) in that 2-year period, which required emergency maintenance on the radar's mission control hardware. We also reviewed Air Force data on the frequency of unscheduled downtime between August 2016 and July 2018, which show that Cobra Dane is able to contribute to its missions without unscheduled downtime 99.7 percent of the time.
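Operational availability figures like these reduce to simple arithmetic on downtime records. The following Python sketch illustrates the calculation; the downtime hours are hypothetical values chosen only to roughly reproduce the percentages above, not the actual Air Force data.

```python
# Minimal sketch of a 2-year operational availability calculation.
# The downtime hours below are hypothetical, not actual Air Force data.
HOURS_PER_2_YEARS = 2 * 365 * 24  # 17,520 hours

def operational_availability(downtime_hours, period_hours=HOURS_PER_2_YEARS):
    """Share of the period during which the radar could meet its missions."""
    return 1.0 - downtime_hours / period_hours

# Scheduled downtime (maintenance, calibration) dominates, with one
# unscheduled outage, mirroring the pattern officials described.
scheduled_hours = 2_050.0   # hypothetical
unscheduled_hours = 52.0    # hypothetical

print(f"Overall availability: "
      f"{operational_availability(scheduled_hours + unscheduled_hours):.1%}")   # ~88.0%
print(f"Availability counting only unscheduled downtime: "
      f"{operational_availability(unscheduled_hours):.1%}")                     # ~99.7%
```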
According to U.S. Northern Command and MDA officials, they can mitigate the effect on the ballistic missile defense mission if they know far enough in advance that Cobra Dane will not be operationally available— such as during scheduled downtime. Officials stated that they do this by moving a transportable radar, known as the Sea-Based X-band radar, to specific locations in the Pacific Ocean to provide additional tracking coverage of missile threats. A U.S. Northern Command analysis that we reviewed describes how DOD can deploy the Sea-Based X-band radar at particular locations in the Pacific Ocean to supplement Cobra Dane. This analysis found that U.S. Northern Command can lose the ability to track some missile threat trajectories if Cobra Dane is not available and the Sea-Based X-band radar is not deployed.
We also reviewed Air Force data on space surveillance, which show that the Air Force would face some limitations in its ability to complete its space surveillance mission when Cobra Dane is not operationally available. According to the data, Cobra Dane tracks 3,300 space objects each day that cannot be tracked by any other radar system. Air Force officials noted that when Cobra Dane is not operationally available for space surveillance for short periods (less than 24 hours), they can overcome that downtime without losing track of those unique objects. However, officials told us that it would take 6 months to reacquire all of the small space objects that Cobra Dane tracks if the radar encountered any significant scheduled or unscheduled downtime. MDA officials told us there are no plans to take Cobra Dane off-line long enough to compromise DOD's ability to conduct space surveillance.
Air Force Reported That DOD Has Plans to Fund Cobra Dane and Its Site, and We Found That It Has Developed Cost Estimates for Some Projects
Air Force Reported the Funding for the Operation and Sustainment of Cobra Dane, and We Found That DOD Has Developed Cost Estimates for Some Modernization Projects
In its January 2018 report to Congress, the Air Force projected that it and MDA would contribute total funding of $278.6 million, based on their fiscal year 2019 budget plans, for the operation and sustainment of Cobra Dane. According to the report, the Air Force and MDA plan to share funding for the operation and maintenance of the Cobra Dane radar and for three modernization projects that make up their sustainment plan for the radar. Table 3 outlines the plan for how the Air Force and MDA will share funding for the operation and maintenance of Cobra Dane.
In addition, the Air Force included information in its report on how the Air Force and MDA plan to share funding to support Cobra Dane's three modernization projects. Specifically, the Air Force and MDA plan to redesign parts for three sets of obsolete systems: (1) the mission system; (2) traveling wave tubes; and (3) transmitter groups. The Air Force has identified that vendors no longer manufacture some critical parts, and failure of any of the three systems could result in Cobra Dane not being available to meet mission requirements. As such, the Air Force determined that it could sustain these three systems more effectively if they were redesigned. Table 4 summarizes the reported funding for the three projects that make up the Cobra Dane sustainment plan.
In addition to what the Air Force reported, we found that the Air Force has developed a total cost estimate for its transmitter group replacement but not for its other two projects. For those two projects, Air Force officials stated that they plan to complete total cost estimates in conjunction with their fiscal year 2020 budget submission. In August 2016, the Air Force estimated that the transmitter group replacement would have a total cost of $91.2 million, but it reported that it would fund this project at $94.0 million (about $2.8 million more than the estimate) through fiscal year 2023 (see table 4). Air Force officials plan to request the transfer of any unused funding to the other projects once the transmitter group project is complete. The Air Force also completed a partial cost estimate for the traveling wave tube redesign—covering the redesign of the parts and replacement of 1 of 12 groups of parts—estimating that this first phase would cost $16.0 million. Further, Air Force officials told us that they have not yet developed a total cost estimate for the mission system replacement.
We also found that the Air Force and MDA expedited Cobra Dane's mission system replacement project, but Air Force officials told us they face challenges in expediting the other two projects without compromising Cobra Dane's operational availability. For the mission system replacement, MDA requested additional funding in fiscal year 2018. Air Force and MDA officials told us that the additional funding they received allowed them to prioritize the mission system replacement and accelerate its timeline that year. Air Force officials stated that they explored ways to expedite the two other projects, the traveling wave tubes and transmitter groups, but that replacing too many parts at the same time would require taking Cobra Dane off-line for longer periods. According to Air Force and MDA officials, they may look for opportunities to expedite time frames for these two projects as long as the amount of scheduled downtime is kept to acceptable levels.
The Air Force Reported the Funding for the Operation and Sustainment of Shemya Island
In its report to Congress, the Air Force reported that it plans to provide $140 million in funding for the sustainment and maintenance of operational access to Cobra Dane's site at Shemya Island, based on its fiscal year 2019 budget plans. According to the report, the Air Force is solely responsible for funding all work related to the operation and sustainment of Shemya Island, with responsibility shared between two of its major commands: Air Force Space Command and Pacific Air Forces. Table 5 summarizes the information the Air Force included in its report on how funding will be shared for Shemya Island.
We also reviewed a support agreement between Air Force Space Command and Pacific Air Forces that identifies how they will sustain the site and how they will calculate shared costs. The agreement describes the specific work needed to sustain the site, including maintaining the airfield, support facilities, and communication infrastructure. Air Force officials told us that maintaining operational access to the site at Shemya Island presents constant challenges, but that Air Force Space Command and Pacific Air Forces work together to address them.
Agency Comments
We provided a draft of this report to DOD for review and comment. DOD told us that it had no comments on the draft report.
We are sending copies of this report to the Secretary of Defense; the Under Secretary of Defense for Acquisition and Sustainment; the Secretary of the Air Force; the Director of the Missile Defense Agency; and the Commanders of U.S. Northern Command and U.S. Strategic Command. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact Joe Kirschbaum at (202) 512-9971 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions to the report are listed in Appendix I.
Appendix I: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Kevin O'Neill (Assistant Director), Scott Bruckner, Vincent Buquicchio, Martin De Alteriis, Amie Lesser, and Richard Powelson made key contributions to the report.

Why GAO Did This Study
First fielded in 1976 on Shemya Island in Alaska, the Cobra Dane radar faces growing sustainment challenges that DOD plans to address through modernization projects. Anticipating future needs, DOD began investing in new radar systems that share capabilities with Cobra Dane to support ballistic missile defense and space surveillance, including the LRDR (Alaska), the Space Fence (Marshall Islands), and the Pacific Radar (location to be determined).
The conference report accompanying a bill for the National Defense Authorization Act for Fiscal Year 2018 included a provision that GAO review the Air Force's report to Congress on the operation and sustainment of Cobra Dane. This report identifies information included in the Air Force's report and describes additional information that GAO reviewed on (1) the capabilities of the Cobra Dane radar and other planned radars to meet DOD's mission requirements, (2) Cobra Dane's operational availability and the plan to mitigate the effect on those missions when Cobra Dane is not available, and (3) DOD's funding plan and project cost estimates for the operation and sustainment of Cobra Dane and its site at Shemya Island. GAO reviewed the Air Force report and related documentation, and interviewed relevant officials.
What GAO Found
In its January 2018 report to Congress, the Air Force reported how the Cobra Dane radar and the Long Range Discrimination Radar (LRDR) have shared and unique capabilities to support ballistic missile defense and space surveillance missions. The report noted that the respective locations of both radar systems affect their ability to provide those capabilities. The Department of Defense (DOD) also has other radar investments—the Pacific Radar and the Space Fence, which, according to DOD officials, may reduce DOD's reliance on Cobra Dane to provide ballistic missile defense and space surveillance capabilities.
The Air Force's report to Congress noted that Cobra Dane met its requirement for operational availability, which refers to the percentage of time that the radar is able to meet its missions. GAO found that the Air Force has developed procedures to mitigate risks when Cobra Dane is not available. For example, U.S. Northern Command and Missile Defense Agency (MDA) officials stated that they can mitigate risks when Cobra Dane is not available by using the Sea-Based X-band radar to provide support for ballistic missile defense. The Air Force would face some limitations in its ability to conduct space surveillance if Cobra Dane were not available, as Cobra Dane tracks objects no other radar can track. However, MDA officials noted there are no plans to take Cobra Dane offline long enough to compromise space surveillance.
The Air Force and MDA plan to contribute total funding of $278.6 million for the operation and sustainment of Cobra Dane, according to their fiscal year 2019 budget plans. Specifically, the Air Force and MDA plan to share funding for the operation and maintenance of the Cobra Dane radar and for three modernization projects that make up their sustainment plan for the radar. Further, the Air Force report noted that the Air Force also plans to provide $140 million in funding for the sustainment and maintenance of operational access to Cobra Dane's site at Shemya Island. In addition, GAO found that the Air Force developed a total cost estimate for one project—known as the transmitter group replacement—but not for its other two projects. Air Force officials plan to complete cost estimates for those two projects in conjunction with their fiscal year 2020 budget submission. |
Background
Biodefense Doctrine
Several presidential directives and national strategies establish biodefense policy for the federal government. These directives establish overall goals and policies as well as assign specific responsibilities to federal agencies. See table 1 for relevant directives and strategies. Among these directives, the White House released HSPD-10 in 2004, which outlines the structure of the biodefense enterprise and discusses various federal efforts and responsibilities that help to support it. The directive organizes biodefense efforts into four key pillars, consisting of threat awareness, prevention and protection, surveillance and detection, and response and recovery. Each of these pillars comprises numerous activities—such as conducting research on emerging pathogens that could pose a threat—that are carried out by multiple federal agencies and generally require coordination across the entire biodefense enterprise.
The Challenges of Threat Awareness in a Vast and Evolving Biological Threat Landscape
The biological threat landscape is vast and requires a multidisciplinary approach to developing threat awareness. Synthetic biology, if used to create and combine agents, also poses a significant threat and potentially complicates the ability to assess the biological threat landscape. Despite ratification of the Biological Weapons Convention in 1975 and the end of the Cold War decades later, the threat of biological warfare persists today. For example, as the Blue Ribbon Study Panel on Biodefense reported, the State Department assessed in 2015 that China, Iran, North Korea, Russia, and Syria continue to engage in dual-use or biological weapons-specific activities and are failing to comply with the convention, to which each of these countries has agreed. Additionally, the revolution in biotechnology presents opportunities to advance the life sciences, yet that same technology in the wrong hands could be used to create biological weapons. For example, nonstate actors such as terrorist organizations, domestic militia groups, and “lone wolves” have both the interest and capacity to develop biological weapons. The intelligence community plays a key role in assessing these types of threats.
Threat awareness is also challenged by the unpredictable nature of naturally occurring disease, which could affect human and animal health and agricultural security, potentially creating global catastrophic biological risks that could lead to loss of life and sustained damage to the economy, societal stability, or global security. To assess and develop means to combat these threats, many federal agencies conduct biological threat awareness activities, which may include a combination of risk assessment and key activities to better understand certain characteristics of biological threats. For example, the genetic compositions of some viruses naturally change, as exemplified in 2009, when an H1N1 influenza virus emerged with a new combination of genes, causing a global pandemic. According to the Centers for Disease Control and Prevention (CDC)—an entity within HHS—when these significant genetic changes occur in a virus, most people have little or no immunity to the new virus. Climate change also has the potential to negatively impact human health and the agriculture industry. As we reported in October 2015, climate change may contribute to the spread of vector-borne diseases that are transmitted to humans by animals, including invertebrate animals such as mosquitoes and ticks. Examples of vector-borne diseases that currently pose health risks in some regions of North America include chikungunya disease, dengue fever, Lyme disease, and West Nile virus fever. Additionally, habitat loss and human encroachment on rural and wildlife environments are bringing populations of humans and animals into closer and more frequent contact, increasing the risk of disease transmission among people, pets, livestock, and wildlife.
Finally, the scientific community must safeguard the biological agents it uses to assess threats. Protecting laboratory workers and the population at large from intentional or accidental release of dangerous pathogens during the pursuit of more knowledge about them is also challenging. Recent high-profile events, such as a DOD laboratory inadvertently shipping incompletely inactivated samples of Bacillus anthracis, the bacterium that causes anthrax, to almost 200 laboratories worldwide over the course of 12 years, and the unexpected discovery of misplaced vials of smallpox (variola) virus at the National Institutes of Health (NIH) campus, also highlight the threat posed by improper handling and unaccounted-for storage of dangerous biological agents.
Federal Roles and Responsibilities
Several federal departments and agencies have responsibilities as part of their mission to assess the threat of biological agents and carry out key biodefense roles as delineated in HSPD-10 and the National Strategy for Countering Biological Threats, among other documents.
National Biodefense Analysis and Countermeasures Center (NBACC)

NBACC consists of two centers:

National Biological Threat Characterization Center. Its mission supports national goals to deter and reduce the impact of current and newly identified biological threats by providing timely scientific data, knowledge products, and expertise required for accurate and informed threat analyses and biodefense planning, preparedness, response, and recovery.

National Bioforensics Analysis Center. It serves as the lead federal facility to conduct and facilitate the technical forensic analysis and interpretation of materials from biocrime and bioterror investigations or those recovered following a biological attack, in support of the lead federal agency.
Department of Homeland Security. DHS is the principal federal department with responsibility for domestic incident management and supports federal efforts to prepare for, respond to, and recover from domestic biological attacks. Within DHS, the Science & Technology Directorate’s (S&T) Chemical and Biological Defense (CBD) Division leads key efforts related to enhancing threat awareness with a focus on bioterrorism. S&T develops Material Threat Assessments in collaboration with HHS, as well as the BTRA, which includes assessments of the relative risks posed by biological agents based on variable threats, vulnerabilities, and consequences. S&T also operates NBACC, which conducts scientific research and develops reports and products specifically intended to address identified knowledge gaps associated with current and future biological threats, including the characterization of key attributes of biological attacks by an adversary such as agent acquisition; agent production; dissemination methods; virulence; and the effectiveness of potential countermeasures.
Department of Defense. DOD is responsible for protecting U.S. armed forces from biological threats worldwide and conducts a range of efforts to support research, development, and acquisition of medical countermeasures and other technologies to prevent or mitigate the health effects of biological agents and naturally occurring diseases. Multiple organizations across DOD are responsible for a number of activities, including (1) determining requirements; (2) providing science and technology expertise; (3) conducting research, development, test, and evaluation; and (4) providing oversight. This enterprise is structured to conduct research and develop defenses against chemical and biological threats.
Department of Health and Human Services. HHS is the federal agency primarily responsible for identifying needed medical countermeasures to prevent or mitigate potential health effects from exposure to biological agents for the nation and engaging with industry to develop them. In 2006, HHS established the Public Health Emergency Medical Countermeasures Enterprise (PHEMCE), a federal interagency body that is responsible for providing recommendations on medical countermeasure priorities and development and acquisition activities. Within HHS, the Office of the Assistant Secretary for Preparedness and Response (ASPR) leads PHEMCE and the federal medical and public health response to public health emergencies, including strategic planning, medical countermeasure prioritization, medical countermeasure requirements development, and support for developing and procuring medical countermeasures for the Strategic National Stockpile. CDC maintains the Strategic National Stockpile and supports state and local public health departments’ efforts to detect and respond to public health emergencies, including providing guidance and recommendations for the mass distribution and use of medical countermeasures, among other activities. The agency also engages in laboratory detection of diseases and epidemiological investigation of outbreaks to protect the nation from health, safety, and security threats, both foreign and in the United States. The Food and Drug Administration (FDA) conducts research and performs vulnerability assessments to help prevent adulteration of the food supply. NIH conducts and funds basic and applied research to develop new or enhanced medical countermeasures and related medical tools and provides oversight and guidance on biosafety and biosecurity to research laboratories.
U.S. Department of Agriculture. USDA is the lead agency with responsibility to protect and improve the health, quality, and marketability of our nation's agricultural products. Within USDA, the Animal and Plant Health Inspection Service (APHIS) is responsible for working to prevent, control, or eliminate harmful pests, pathogens, and diseases of animals and plants. APHIS consists of multiple component units with key roles in biodefense, including Veterinary Services and the Plant Protection and Quarantine (PPQ) program. These offices are supported by multiple research centers and laboratory networks, as well as the Agricultural Research Service (ARS), which conducts a wide range of research addressing agricultural issues of high national priority.
Environmental Protection Agency. EPA is the lead agency for environmental cleanup and remediation, including indoor cleanups. EPA is also the lead federal agency for protecting drinking water and wastewater infrastructure. In addition, EPA provides technical assistance and operational support for sampling, characterization, decontamination, clearance, and waste-management efforts. According to EPA officials, if there is potential for environmental contamination due to a biological incident, HHS collaborates with EPA in developing and implementing sampling strategies and sharing results. EPA’s Office of Research and Development’s Homeland Security Research Program aims to help increase the capabilities of EPA and communities to prepare for and respond to chemical, biological, and radiological disasters. EPA’s Water Security Division also provides resources to monitor incidents and threats.
Intelligence Gathering and Global Surveillance, Research, and Analysis Are Designed to Inform Biological Threat Awareness and Investment Decisions
Key biodefense agencies, including DHS, DOD, HHS, USDA, and EPA rely on intelligence and global surveillance information, scientific study of disease agent characteristics, and analysis to better understand threats and help make decisions about biodefense investments. Figure 1 depicts the three components of threat awareness described in this report.
Agencies Rely on Intelligence Gathering, Scientific Research, and Analysis Activities to Develop Biological Threat Awareness
Intelligence Collection and Global Disease Surveillance
Key federal biodefense agencies use intelligence to understand adversaries’ capabilities to cause harm with a biological weapon and conduct global disease surveillance to monitor threats from naturally occurring agents. DHS and DOD rely on information from the intelligence community about adversaries’ capabilities to acquire, produce, reengineer, and disseminate a biological agent. For example, DHS solicits information from the intelligence community to create models on nonstate actors’ possible target (e.g., a transportation hub), the possible agent and amount used, and the method of attack. DHS also gathers information on terrorist organizations’ financial and technical resources to help determine their capabilities in staging an attack. This information is used to develop the BTRA to support DHS’s responsibilities to protect against non-state actor intentional acts of bioterrorism. For more information on the BTRA and its development and evolution, see appendix I.
Predicting the Threat of Zika Virus Spread to the United States Based on Chikungunya and Dengue

Zika virus is a flavivirus that is primarily spread in humans by the same mosquitos that also spread dengue, chikungunya, and other viruses. The first confirmed local transmission of this emerging threat in Brazil occurred in May 2015. Since that time, the Centers for Disease Control and Prevention's (CDC) Global Disease Detection Operations Center has been monitoring the spread of the epidemic from Brazil to other countries in the Americas. By early 2016, the Zika virus had spread to dozens of countries, including local transmission in U.S. territories. At this time, CDC activated its Emergency Operations Center to respond to outbreaks of Zika occurring in the Americas and enhance disease surveillance and response coordination. In February 2016, the director of CDC said that recent chikungunya and dengue outbreaks in the United States suggest that Zika outbreaks in the U.S. mainland may be relatively small and localized, which can be attributed to better infrastructure and mosquito control than that found in Latin America. In contrast, he said outbreaks of dengue and chikungunya suggest that Zika virus may spread widely in the U.S. territories. CDC estimates of Zika virus cases for 2016 support the CDC director's prediction, with 224 locally acquired mosquito-borne cases in the United States (in Florida and Texas) compared to nearly 36,000 locally acquired cases in U.S. territories (largely in Puerto Rico). Efforts to improve international capacity for virus surveillance support CDC's ability to characterize emerging threats and enhance threat awareness.

DOD also draws on the Defense Intelligence Agency (DIA) for intelligence on these subjects. For example, DIA's Chemical, Biological, Radiological, and Nuclear (CBRN) Warfare Capstone Threat Assessment projects foreign capabilities in particular warfare areas out 20 years in the future.
Other agencies, such as HHS and USDA, rely on global disease surveillance to identify and characterize naturally occurring disease events that may impact human, animal, or plant health. Although surveillance and detection activities constitute an entire separate pillar of the biodefense enterprise, these activities can also help federal agencies enhance threat awareness by providing information about emerging global disease events that might affect the United States. For example, within HHS, CDC’s Global Disease Detection program conducts global surveillance on emerging infectious disease events to rapidly detect and monitor the characteristics of the disease event to determine whether and what kind of threat it poses to the U.S. population.
Similarly, within USDA, APHIS conducts surveillance of foreign animal diseases and plant pests and pathogens to determine what threat they may pose to the U.S. agriculture industry. APHIS officials said they have a number of relationships and sources they use to gather information on traditional and emerging animal diseases. These include the National Center for Medical Intelligence within DIA, DHS’s National Biosurveillance Integration Center, CDC, and the World Organisation for Animal Health. USDA’s Risk Identification and Risk Assessment unit conducts open source monitoring globally to identify situations of greatest risk to the animal agriculture community. For plant surveillance, USDA’s PestLens is an offshore open-source monitoring and analysis function designed to identify emerging pests and diseases. The PestLens team stationed overseas evaluates these potential threats for their impact on trade and identifies threats to look for at ports. It conducts research to determine whether there are outbreaks of disease or pests in other countries.
Scientific Research
Epidemiology Terms

Virulence is the relative capacity of a pathogen to overcome body defenses.
Pathogenesis is the process by which an infection leads to disease.
Infectious dose is an estimate of the amount of a pathogen required to cause illness.
Zoonotic disease is an infectious disease that is transmissible from animals to humans.
Agencies use scientific research to help understand the characteristics of various threat agents, including their virulence, stability, and ability to be dispersed through various methods. Agencies also perform or contract for scientific research on emerging pathogens to understand their means of transmission, host susceptibility, and effects of infection. Research is conducted on agents that may be used intentionally as biological weapons or on disease-causing agents that may exist in nature and contribute to outbreaks or pandemics, such as influenza viruses. One example of DHS-conducted scientific research is NBACC's work to understand properties associated with agent acquisition, production, dissemination, stability, virulence and pathogenesis, and existing medical countermeasure efficacy. A DOD example of scientific research is the Defense Threat Reduction Agency's (DTRA) efforts to characterize biological agents (virulence, dissemination, infectious dose, etc.). For instance, DTRA might fund research to determine whether current diagnostic tools would be adequate if the Ebola virus's genetic sequence were to change.
To characterize naturally occurring threats, HHS and USDA agencies engage in a spectrum of scientific research activities. Within HHS, CDC, NIH, and FDA all conduct various scientific research to characterize biological agents. For example, CDC conducts characterization of infectious diseases, including analyses of pathogenesis, and works to identify uncommon signals of disease and conduct research to assess zoonotic potential. One CDC effort to characterize infectious disease is the Influenza Risk Assessment Tool, which assesses potential pandemic risk. NIH also conducts characterization research—such as pathogenesis, infectious dosage rates, and potential effects if agents are aerosolized—primarily for known public health threats, which may also be used as inputs into modeling. Additionally, FDA conducts scientific food defense research to understand, among other things, thermal stability and inactivation of biological agents.
Within USDA, ARS also conducts basic biological research on animal and plant pathogens. Because of the sheer volume of animal diseases, ARS takes a strategic approach and studies families of viruses rather than a single virus. For example, ARS officials said they were able to leverage ongoing research on flaviviruses when Zika virus, a flavivirus, emerged in the Americas. ARS is also trying to use more predictive biology to anticipate and properly prepare for new and emerging pathogens—such as understanding vector-borne virus adaptability to potentially prevent transmission to humans—to protect public and animal health, particularly since 70 percent of new and emerging diseases are zoonotic. ARS researchers also look at pests and pathogens not currently in the United States to help identify countermeasures, should they appear.
Additionally, EPA conducts research to fill science gaps associated with environmental contamination resulting from accidental or intentional releases of biological agents. For example, EPA studies the behavior of biological agents in the environment to inform strategies for characterization and remediation. Research includes developing methods for characterization of persistent biological contamination, mitigating its impacts, cleaning it up in the environment, and managing the subsequent waste.
Modeling Studies and Other Analytical Work
All agencies we interviewed described modeling studies and other analytical work they conduct to help determine the scope and impact of possible biological threats. For example, because biological threat agents cannot be released into the air in operational environments due to health risks, programs such as DHS’s BioWatch Program rely on computer modeling and attack simulations to assess the performance of biological detection systems. DHS also uses the BTRA modeling to assess potential public health impacts and mitigation efforts for potential biological attacks (see app. I). Similarly, according to DOD officials, DTRA develops and employs modeling and simulation tools for consequence assessment of biological attacks within and outside of the United States.
HHS conducts public health consequence modeling for various types of attacks with specific agents, which uses inputs from DHS Material Threat Assessments to help determine the unmitigated medical consequences. Unmitigated consequence estimates are modeled based on factors such as projected spread patterns, infectious dose rates, and estimated time frames, which can help inform response efforts that could mitigate these consequences such as needed prophylaxis and medical countermeasures as part of the PHEMCE process. The public health and medical consequence assessment is the first step in developing the documents necessary for the PHEMCE to establish medical countermeasure requirements. This analysis allows PHEMCE to determine how many lives could be saved if a medical countermeasure were developed, procured, and deployed, and informs HHS decisions regarding the development of medical countermeasures that might be needed during an event.
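As a simplified illustration of this kind of unmitigated consequence modeling, the Python sketch below runs a basic susceptible-infectious-recovered (SIR) epidemic model. The agent parameters are hypothetical, and the sketch is far simpler than the agent-specific models HHS actually uses; it shows only the general idea of projecting spread from transmission and recovery rates.

```python
# Illustrative-only sketch of unmitigated consequence modeling with a basic
# discrete-time SIR model. All parameters are hypothetical.
def sir_epidemic(population, beta, gamma, days, initially_infectious=10):
    """Return a list of the number of infectious people on each day."""
    s = population - initially_infectious
    i = float(initially_infectious)
    r = 0.0
    history = []
    for _ in range(days):
        new_infections = beta * s * i / population  # transmission
        new_removals = gamma * i                    # recovery or death
        s -= new_infections
        i += new_infections - new_removals
        r += new_removals
        history.append(i)
    return history

# Hypothetical agent: each case infects ~0.4 people per day over a
# ~10-day infectious period, with no prophylaxis or countermeasures applied.
curve = sir_epidemic(population=1_000_000, beta=0.4, gamma=0.1, days=120)
peak_day = max(range(len(curve)), key=curve.__getitem__)
print(f"Peak infectious load: {curve[peak_day]:,.0f} people on day {peak_day}")
```

A mitigated scenario could then be approximated by rerunning the model with a lower transmission rate, with the difference between the two curves serving as a rough proxy for the lives a countermeasure could save.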
HHS and USDA also conduct disease patterns and pathways analysis to determine the routes by which certain pathogens found overseas might arrive in the United States. For example, CDC conducts modeling to identify modes of transmission, sources, and nodes, and to project epidemiological patterns. One such example is a 2015 CDC study to estimate future numbers of Ebola patients needing treatment at any one time in the United States. The model was developed to help public health officials assess the potential risk for Ebola virus infection in individual travelers and the subsequent need for postarrival monitoring. USDA units also use pathways analysis to assess the likelihood and means by which animal diseases and plant pests might arrive in the United States. For example, USDA's PPQ program evaluates the environmental and economic impacts of pest introduction and the pathways by which certain pests might arrive (e.g., imported commodities via ship or rail).
Additionally, EPA supports water utilities by providing models, tools, and guidance that help harden their infrastructure to respond to and recover from contamination incidents and other disasters, as contamination of drinking water can result from acts of terrorism.
Agencies Reported Using Biological Threat Awareness Information to Help Prioritize Their Various Biodefense Activities and Investments
Agency officials in our review described how their threat awareness activities help identify biological threat agents of concern and broad-based capability needs, which help guide their biodefense investment decisions. For example, agencies use threat information to determine which agents represent their highest priorities based on the potential of those agents to cause catastrophic harm. Officials from HHS and USDA also described properties or criteria against which they evaluate emerging or reemerging biological agents while conducting surveillance activities to determine whether they pose a serious threat, such as health effects after exposure to an agent or toxin, degree of contagiousness, economic and trade impact, and likely transmission routes.
This threat assessment activity allows agencies to characterize and respond to urgent or real-time disease events, such as a Zika virus or an avian influenza outbreak.
In addition to agent-specific approaches, some agencies also reported using threat awareness information as part of efforts to identify and develop broader capabilities that would prepare them to respond to more than one agent. For example, DOD looks at what types of protective equipment are needed to complete the mission in the face of various threats, rather than starting with an individual threat agent. DOD’s Joint Requirements Office (JRO) uses a broad capability-based approach by performing operational risk assessments to evaluate current and future capability needs that will translate into military service requirements. Additionally, HHS, through PHEMCE, reported working on broad capabilities-based investments for medical countermeasures that provide more flexible and sustainable capabilities over the long term. In this regard, PHEMCE seeks to promote technologies that have more than one application or are able to be quickly modified to respond to new threats. For example, according to the PHEMCE Strategy and Implementation Plan, HHS agencies continue to expand their broad-spectrum antimicrobial programs to address both biodefense disease threats, such as plague and tularemia, and the more general public health concern of antimicrobial resistance. Investments in multiplex diagnostic tools also represent a move beyond single-agent detection capabilities.
Once threats have been established and capability gaps have been identified, agencies reported using threat awareness information to help prioritize their investments across various biodefense enterprise activities—threat awareness, prevention and protection, surveillance and detection, and response and recovery—to support their missions (see fig. 2).
The following figures present examples, based on our analysis of agency documents and interviews, of how agencies use threat awareness information to help direct resources and investments across the biodefense pillars. This presentation is not a comprehensive catalogue of all biodefense investments in these areas, but rather examples of the diversity of activities agencies conduct to fulfill their biodefense missions for threat awareness, prevention and protection, surveillance and detection, and response and recovery. Appendix II includes information organized by agency.
Multiple Mechanisms Exist to Share Biological Threat Information, and New Biodefense Strategy Could Help Agencies Better Use Threat Information to Leverage Resources across the Enterprise
Federal agencies with key roles in biodefense share biological threat information through many different mechanisms designed to facilitate collaboration among government partners, including working groups and interagency agreements. However, as we and others have observed in recent reports, opportunities remain to enhance threat awareness across the entire biodefense enterprise, leverage shared resources, and inform budgetary tradeoffs among various threats and agency programs.
Federal Partners Share Biological Threat Information through a Combination of Working Groups, Interagency Agreements, and Other Mechanisms
Officials from key federal agencies, including DHS, DOD, EPA, HHS, and USDA, identified multiple mechanisms that facilitated biodefense collaboration and shared awareness of biological threats. These mechanisms often serve multiple purposes; for example, a working group can develop policy and also aid in information sharing, among other benefits. Officials from these key biodefense agencies reported using collaborative mechanisms to share biological threat information, as well as to coordinate activities, avoid duplication and overlap, implement specific programs for addressing biological threats, and assist in policy development at the agency and White House level. The existence of working groups and similar bodies to help promote information sharing, align policies and procedures, and coordinate to leverage resources is consistent with key practices and mechanisms that we have previously reported as useful for enhancing and sustaining interagency collaboration. Figure 7 provides examples of collaborative mechanisms identified for biodefense.
Officials at key federal agencies reported participating in several types of collaborative mechanisms, including interagency bodies, working groups at the agency and executive level, formalized agreements, colocation, joint projects and funding efforts, and shared expertise. Examples within each mechanism include the following:

Interagency bodies. Key federal agencies reported participating in formal interagency bodies that have their own authority and resources and are established to coordinate activities related to biodefense. One such group is PHEMCE, the federal interagency decision-making body for medical countermeasure development and acquisition. PHEMCE is led by HHS, and includes both internal HHS partners, such as CDC, FDA, and NIH, and external interagency partners, such as DOD, DHS, USDA, and the Department of Veterans Affairs. In addition, other key agency officials reported participating in interagency bodies coordinated by HHS and USDA to determine additions and removals to the select agent list.
Working groups. Officials in each of the key agencies said they participate in established and ad hoc working groups to provide subject- matter knowledge and expertise, share information, prioritize research, and avoid duplicating efforts. For example, officials from over a dozen agencies and components participate in an Interagency Bioterrorism Working Group through DHS that provides a conduit for interagency review of technical inputs and assumptions for biological agents and other parameters in the BTRA. DHS officials stated that this working group also works to obtain wider interagency understanding and ownership of the DHS BTRA. Officials from DOD’s JPEO-CBD also stated that they sit on multiple interagency working groups with DHS officials that focus on combating terrorism, biosurveillance, and research and development, among other topics. Similarly, CDC officials stated they participated on approximately 10 to 20 separate working groups with specialized purposes, such as integrated process teams for specific research programs.
Collaborative mechanisms within the Executive Office of the President. Some working groups and other collaboration mechanisms have been led by the National Security Council and other offices within the Executive Office of the President in order to ensure a comprehensive and coordinated approach to biodefense across agencies. For example, the Subcommittee on Biological Defense Research and Development was led by the White House Office of Science and Technology Policy and included representatives from 16 agencies and three White House offices. This subcommittee evaluated U.S. biological defense capabilities to identify future priorities and actions. The National Security Council has also led integrated policy committees focused on a particular threat or range of threats, such as genome editing and synthesis and select agents and toxins.
Written interagency agreements. Agencies have executed written agreements in order to define their relationships for a particular aspect of biodefense. For example, in March 2015, DOD, DHS, and EPA renewed a formalized relationship through a memorandum of understanding for chemical and biological defense research, development, and acquisition—all of which require shared threat awareness. The agreement identifies roles and responsibilities for chemical and biological defense, establishes senior and technical working groups, and establishes cross-agency responsibilities. In particular, DOD, DHS, and EPA agreed to exchange and identify program needs and overlapping interests; establish interagency agreements between parties for joint projects and funding; conduct research and provide data to the partner agencies; and facilitate the establishment of interagency projects and working groups. DOD officials stated that the activities carried out under the memorandum have varied over time, but ongoing collaborative activities included efforts in biosurveillance, wearable sensors, decontamination, and a repository for threat agent data.
Joint facility locations. As we reported in 2014, to maximize resource sharing and facilitate scientific exchange on the study of biological threat agents and other pathogens, DOD, HHS, and DHS share a joint biological campus, known as the National Interagency Biodefense Campus, located at Fort Detrick, Maryland. DHS officials said that, in addition to gaining efficiencies by sharing biosecurity and infrastructure requirements among all three facilities (U.S. Army Medical Research Institute of Infectious Diseases, DHS’s NBACC, and NIH’s Integrated Research Facility), personnel at the three laboratories can communicate more regularly than would otherwise be possible with different locations. The agencies represented on the National Interagency Biodefense Campus also conduct a research consortium to coordinate projects.
Joint funding and program efforts. Key federal biodefense agencies have provided funding to partner organizations and agencies in order to obtain technical assistance or expertise for individual projects. DOD and EPA officials stated that DHS’s S&T Directorate often funds subject- matter experts to perform research and testing to assist in the development of answers to technical questions. For example, DHS funded staff at the U.S. Army Medical Research Institute of Infectious Diseases to research the characteristics of a particular agent in an aerosolized environment.
Leveraging expertise. Agency officials also stated how more informal mechanisms, such as relationships between key personnel and soliciting input for research projects, provide the opportunity to leverage expertise to share threat awareness information and can increase collaboration and positive results between agencies. For example, DHS holds interagency stakeholder panels and outreach events (separate from existing working groups) to gather expertise during development of several biodefense products, including the BTRA. DHS officials said that DOD personnel from DTRA and DHS’s Biological Threat Characterization Program also conduct joint program reviews, and DHS personnel contribute expertise to DTRA’s contract evaluation teams.
Biodefense Strategy Provides an Opportunity to Use Enterprise-Wide Threat Awareness to Help Leverage Resources and Inform Resource Tradeoffs
The collaborative mechanisms in which the key agencies in our review participate may facilitate information sharing in support of specific federal activities and in individual programs, or in response to specific biological events after they begin to unfold, but there is no mechanism in place to develop enterprise-wide threat awareness and assess the relative risks. For example, the BTRA is a dedicated effort to identify and assess the risk of biological events that stem from nonstate actors intentionally seeking to harm U.S. interests using biological agents. By design, it is focused on the consequences and likelihood of terrorist events threatening human health, and does not assess the risk from other types of biological threats. However, there is no similar comprehensive mechanism in place that integrates threat awareness information for all sources of intentional biological threats, as well as naturally occurring events that could harm or destabilize U.S. interests by catastrophically affecting humans, animals, and plants. Similarly, HHS officials stated that PHEMCE is a primary mechanism used to communicate threat awareness and other information on biodefense. However, the primary purpose of PHEMCE is to make decisions about human health countermeasures to be acquired for the Strategic National Stockpile. As a result, biological threat information pertaining to other domains, such as plant or animal health, may not be discussed and shared within this venue without a connection to human health.
In addition, there is no existing mechanism that can leverage threat awareness information to direct resources and set budgetary priorities across all agencies for biodefense. Agencies use threat awareness mechanisms for resource planning according to the individual agency’s mission. For example, DOD guidance states that budgeting and planning for biodefense relies, in part, on DIA’s CBRN Warfare Capstone Threat Assessment. Similarly, DHS officials stated they use the BTRA to help plan DHS investments in future research or to help inform domestic biodefense preparations. According to DOD officials, because the DOD mission is different, they only use the BTRA indirectly and do not specifically rely on it for prioritizing activities or planning efforts.
HSPD-10 requires the development of periodic assessments of the evolving biological weapons threats. DHS officials stated that the BTRA was created, in part, to fulfill the need for an assessment of the risk of intentional use of biological weapons by nonstate terrorists. However, the nation faces other biological threats, including naturally occurring diseases that affect human, animal, and plant health, and biological weapons used by state actors. Without a mechanism that is able to assess the relative risk from biological threats across all sources and domains, the nation may be unable to prioritize resources, defenses, and countermeasures against the most pressing threats.
We previously reported in 2011 that the overarching biodefense enterprise would benefit from strategic oversight mechanisms, including a national strategy, to ensure efficient, effective, and accountable results. We noted that the complexity and fragmentation of roles and responsibilities across numerous federal and nonfederal entities presents challenges to ensuring efficiency and effectiveness across the entire biodefense enterprise. In light of that complexity and fragmentation, we observed that a national biodefense strategy could help address the key fragmentation issues across the biodefense enterprise, such as ensuring strong linkage and identifying gaps in investments across the four pillars. In response to our observations, National Security Council staff in December 2014 identified three presidential policy documents—the National Strategy for Countering Biological Threats, the National Strategy for Biosurveillance, and Presidential Policy Directive 8—that, they reported, work in concert to provide comprehensive strategic guidance. However, none of these documents comprehensively addresses all four pillars of biodefense, and, even when taken together, they do not fully address the fragmentation issues we have previously identified.
Other independent observers have also commented on challenges presented by fragmentation and complexity across the biodefense enterprise. For example, in October 2015, the Blue Ribbon Study Panel on Biodefense reported that the United States lacked strategic leadership to promote collaboration within the federal government and other biodefense partners and achieve innovation throughout the enterprise. The study panel also recommended that the federal government develop, implement, and update a comprehensive national biodefense strategy that would define all organizational structures, future plans, and resource requirements along with unified budgetary authority.
We testified in 2016 that several high-level biodefense strategies had been created in the past. However, there is no broad, integrated strategy that can be used to identify risk, assess resources, and prioritize investments. For example, the National Security Council's National Strategy for Countering Biological Threats is focused solely on outlining the federal government's approach to reducing the risks of biological weapons proliferation and terrorism, while the National Health Security Strategy authored by the Assistant Secretary for Preparedness and Response (ASPR) seeks to strengthen communities' abilities to protect against and respond to any incidents with negative health consequences. While these and other strategies, such as the National Strategy for Biosurveillance, address aspects of biodefense, no single strategy provides a comprehensive approach for the nation to prepare and plan for biological threats. In addition, as we reported in 2016, the individual strategies related to pieces of the biodefense enterprise do not currently address the need for prioritization and tradeoffs among approaches when faced with limited resources and an expansive threat landscape.
In addition, there is no individual or entity with responsibility, authority, and accountability for overseeing the entire biodefense enterprise. White House officials have previously told us that the National Security Council and the Homeland Security Council act together as focal points for federal biodefense efforts. As noted above, many federal departments and agencies participate in National Security Council groups and mechanisms, and biodefense efforts at the White House level are recognized collaboration mechanisms. However, as described in the Blue Ribbon Study Panel report and reported to us by HHS and DHS officials, these mechanisms may not persist from one presidential administration to the next. As a result, any mechanism located within bodies such as the National Security Council and Homeland Security Council may not provide the continuity and leadership needed to address persistent biological threats. The absence of mechanisms to develop shared threat awareness across the full set of biological threats and use that information to identify opportunities for leveraging resources to mitigate risk across the enterprise is another example of the fragmentation we have previously identified.
However, opportunities exist to enhance shared threat awareness across the biodefense enterprise. Enacted on December 23, 2016, the National Defense Authorization Act (NDAA) for Fiscal Year 2017 required DOD, HHS, DHS, and USDA to jointly develop a national biodefense strategy and associated implementation plan. The law requires the strategy and implementation plan to:

- inventory and assess all existing strategies, plans, policies, laws, and interagency agreements related to biodefense;
- describe biological threats from warfare, terrorism, naturally occurring infectious disease, and accidental exposure;
- describe current federal efforts preventing the proliferation and use of biological weapons, preventing accidental or naturally occurring outbreaks, and mitigating the effects of an epidemic;
- describe roles and responsibilities of the agencies for biodefense;
- describe interagency capabilities required to support the national biodefense strategy; and
- recommend actions for strengthening current biodefense capabilities and structures, and for improving interagency coordination.
According to DHS officials, as of September 2017, the White House National Security Council is currently overseeing an interagency workgroup to develop that strategy. DOD officials confirmed that the process to create such a strategy is under way, and the effort may include revising or consolidating existing guidance in addition to developing a new national biodefense strategy.
As the departments fulfill their obligations under the NDAA for 2017, key federal organizations have the opportunity to institutionalize mechanisms to help the nation make the best use of limited biodefense resources, including broader shared threat awareness to inform opportunities to leverage resources. However, until the strategy is developed, we will not know the extent to which it will address shared threat awareness, if at all. The NDAA for 2017 requires the strategy to be submitted to Congress not later than 275 days after enactment (September 2017) and requires us to review it 180 days after the date of submittal. We will continue to monitor progress toward developing strategic mechanisms to help confront fragmentation and complexity across the biodefense enterprise.
NBACC Threat Characterization Research Is Largely Driven by Knowledge Gaps Identified through the BTRA Process
According to DHS officials, the threat characterization research agenda at NBACC is based primarily on the results and knowledge gaps identified through evaluation of the BTRA. Each year NBACC produces an annual plan that, among other elements, outlines new research projects intended to address priority knowledge gaps for identified biological threat agents. These projects are identified through a multistep process that incorporates a combination of DHS-designated priorities, interagency stakeholder input, and additional planning criteria, such as resource availability and ongoing maintenance of required technical capabilities. (See fig. 8.)
Priority Knowledge Gaps and Research Needs
The first step in the project selection process is the identification of knowledge gaps by officials within DHS’s Biological Threat Characterization Program (BTCP) based on their evaluation of the BTRA. According to these officials, identification of the most critical knowledge gaps involves determining which inputs have a relatively high impact on BTRA consequence estimates and have a relatively high degree of uncertainty, for example, because data about agent attributes are limited. The officials said they aim to enhance the value of BTRA conclusions by increasing the accuracy and completeness of the data used as modeling inputs through the work of NBACC. DHS has historically relied on the opinions of subject-matter experts to review the BTRA and support determinations regarding data quality but has also recently developed more quantitative methods to integrate BTRA results into the research planning and prioritization process for NBACC.
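The screening logic these officials described, which flags inputs that both strongly influence consequence estimates and carry high uncertainty, can be illustrated with a simple one-at-a-time sensitivity sketch. The example below is illustrative only: the parameter names, bounds, and consequence function are hypothetical stand-ins, not the BTRA's actual inputs or models.

```python
"""Illustrative one-at-a-time sensitivity screen for risk-model inputs.

All parameter names, ranges, and the consequence function are
hypothetical stand-ins; they are not DHS's actual BTRA inputs.
"""

# Each input: (nominal value, low bound, high bound). Wide bounds
# reflect high uncertainty about that input.
INPUTS = {
    "aerosol_decay_rate":  (0.10, 0.02, 0.50),
    "infectious_dose":     (100.0, 10.0, 1000.0),
    "production_quantity": (1.0, 0.5, 2.0),
}

def consequence(params):
    """Toy consequence model: casualties rise with quantity and potency
    and fall with faster environmental decay (hypothetical form)."""
    return (1e4 * params["production_quantity"]
            / (params["infectious_dose"] * params["aerosol_decay_rate"]))

def sensitivity_screen(inputs, model):
    """Vary one input at a time across its uncertainty range and record
    the swing in model output; large swings mark priority knowledge gaps."""
    nominal = {name: vals[0] for name, vals in inputs.items()}
    swings = {}
    for name, (_, lo, hi) in inputs.items():
        low_case, high_case = dict(nominal), dict(nominal)
        low_case[name], high_case[name] = lo, hi
        swings[name] = abs(model(high_case) - model(low_case))
    return sorted(swings.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    for name, swing in sensitivity_screen(INPUTS, consequence):
        print(f"{name:22s} output swing: {swing:,.0f}")
```

An input that produces a large swing and rests on sparse data would, under this logic, rank high as a knowledge gap for new research.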
Using data from the 2010 BTRA, DHS identified a total of 22 priority knowledge gaps that it is currently working to address through NBACC research and plans to complete within 6 to 10 years. BTCP program officials reported that although research priorities generally target Tier 1 Select Agents, they also seek to advance research projects that broadly encompass (1) a variety of biological threat agents (e.g., bacteria, viruses, and toxins); (2) agents representing different characteristics that affect threat (e.g., means of acquisition or production, dissemination and exposure attributes, and expected medical consequences); and (3) a selection of traditional, emerging, enhanced, and advanced biological threat agents.
In addition to the identification of BTRA-related knowledge gaps, BTCP officials stated that emerging events and specific stakeholder needs could also influence research priorities. For example, during the 2014 Ebola outbreak, BTCP officials directed NBACC to perform research to better understand the risk factors associated with disease transmission, such as the persistence of the virus on various surfaces, and the efficacy of common disinfectants to inform decontamination and public health response efforts. DHS officials also noted that the needs of the Federal Bureau of Investigation, particularly through its casework at the National Bioforensics Analysis Center, may drive some of NBACC’s research priorities.
Proposed Annual Research Plan
The second step in the process for identifying NBACC threat characterization research projects includes the development of a proposed annual research plan. The annual plan is developed using a combination of inputs including DHS's research priorities, annual NBACC budgetary resources, and technical capability and staff development needs. Although the plan documents the DHS knowledge gaps that serve as a key driver for developing specific project proposals, in some cases these gaps are identified only as general areas of research, such as the virulence of specified threat agents, which could require a broad scope of research to address. As the plan notes, these priority knowledge gaps exceed the resources available for threat characterization each year. For this reason, NBACC uses a combination of additional criteria to further refine research priorities and select projects for inclusion in the new scope of work, such as consideration of the time and resources required and which knowledge gaps are most likely to provide clear and compelling answers through experimentation. Other factors that may influence final project selection include prioritizing knowledge gaps that could be addressed reasonably comprehensively in 3 to 4 years or that may have potential to provide a framework to better understand other priority agents or emerging threats, such as the Ebola virus or other infectious diseases (see fig. 9). In developing the annual research plan, NBACC also sets aside a small portion of its threat characterization budget to respond to emerging requests, and the plan notes that project plans may be readjusted due to any emerging requirements.
The annual plan also identifies priorities needed to maintain four core technical capabilities (aerobiology, bacteriology, virology, and comparative medicine) and accreditation standards required to perform ongoing threat characterization research on potential threat agents in a maximum security national biocontainment laboratory. For example, one of the priorities identified within the 2016 annual plan includes the installation and verification of new equipment intended to enhance aerobiology capabilities. Each annual plan includes a crosswalk between the proposed projects and the associated capabilities that will be utilized. For example, the 2016 NBACC annual plan outlines a scope of work that includes seven research studies that collectively cover all four of the core technical capabilities. Examples of some of the research conducted in recent years include assessment of the decay rates of aerosolized Tier 1 agents and the virulence of select agents based on particle size and production methods.
Plan Approval and Oversight
Once NBACC develops a proposed annual research plan, stakeholders review it before the plan goes for S&T approval. According to S&T officials we interviewed, the BTCP program solicits input and feedback on the draft annual plan from interagency stakeholders within DOD, HHS, and the Intelligence Community, among others. According to these officials, the community of practice for conducting this type of research is small and is generally well coordinated to avoid potential duplication of work. Once S&T officials approve the plan, it then undergoes a final approval process through DHS's Compliance Review Group to ensure adherence to the Biological Weapons Convention.
According to S&T officials, they also participate in periodic project reviews to maintain oversight regarding the extent to which each research study is achieving its objectives, and an overall assessment is performed as part of the annual evaluation process of the NBACC contract performer. The purpose of these periodic reviews is to help identify any changes to the project plan that may be required and help ensure that the research is making progress toward addressing identified knowledge gaps. S&T officials stated that although some projects have been modified based on preliminary results, they rely much more heavily on advance review of the experimental methodology by technical subject-matter experts before a project is initiated to help ensure the research will address identified gaps and help inform future iterations of the BTRA.
NBACC Impacts
Consistent with its strategic goals, S&T officials reported that NBACC research has directly contributed to the closing of identified knowledge gaps and the development of capabilities that are used to respond to emerging threat characterization needs. According to these officials, NBACC products have improved BTRA consequence and hazard modeling by reducing the uncertainty associated with key data inputs. Specifically, officials noted that significant changes were made to the underlying risk models as a result of NBACC research conducted since the completion of the 2010 BTRA, including updates to 62 individual data points associated with eight biological hazards. As noted in the 2016 NBACC annual plan, the limited research available on authentic threat agents has historically entailed the use of data from surrogate or unrelated biological agents to evaluate the threat and consequences of a biological attack on the homeland. According to S&T officials, the use of authentic threat agents at NBACC addresses this shortcoming and has enhanced confidence in estimates of risk and operational response planning. Although NBACC research currently remains focused on closing specific knowledge gaps, officials noted that this research is also intended to lay a foundation for more predictive modeling, such as using the data to identify shared characteristics among a class of agents.
Although the focus of NBACC threat characterization research is generally on the intentional use of Tier 1 biological agents, S&T officials stated that NBACC capabilities could also be employed to address challenges associated with emerging infectious diseases. They further noted that, because many of the high-priority biological threat agents that affect humans also may affect livestock, NBACC’s studies could also be useful for informing risk associated with animal health.
NBACC expertise has also been leveraged by other DHS components. For example, S&T officials reported that the U.S. Coast Guard requested information from NBACC to help inform its global vaccine program for its workforce, and DHS's National Protection and Programs Directorate and the Secret Service have requested NBACC to review their own biological risk assessments. Within S&T, CBD officials stated that NBACC-produced products were used to inform the development of new biological sensor technologies. In addition to sharing NBACC research findings through briefings and reports, NBACC officials also reported that they are currently pursuing efforts to establish an electronic repository for NBACC scientific products at the Unclassified/For Official Use Only, Secret, and Top Secret levels. The goal of this repository site is to facilitate the ability of end users to search, view, and download documents according to their approved access.
Agency Comments and Our Evaluation
We provided a draft of this report to DHS, DOD, EPA, HHS, and USDA for review and comment. Each of these departments provided technical comments that we incorporated as appropriate.
We are sending copies of this report to the appropriate congressional committees; the Secretaries of Agriculture, Defense, Health and Human Services, and Homeland Security; and the EPA Administrator. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (404) 679-1875 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
Appendix I: Bioterrorism Risk Assessment
The Department of Homeland Security (DHS) is responsible for assessing the risks posed by biological agents as directed by the Project BioShield Act of 2004 and Homeland Security Presidential Directives 10 (Biodefense for the 21st Century) and 18 (Medical Countermeasures against Weapons of Mass Destruction). To this end, DHS's Science & Technology Directorate (S&T) has developed four Bioterrorism Risk Assessments (BTRA) since 2006 to assess the relative risks posed by various biological agents based on estimates of likelihood and consequence parameters for a number of potential attack scenarios.
BTRA Scope and Methodology
The BTRA is a probabilistic risk assessment intended to quantify risk for rare yet potentially catastrophic intentional attacks using biological agents by nonstate actors. Results are based on risk modeling for a vast number of potential scenarios derived from multiple event trees representing specific decisions or actions an adversary may pursue. The most recent iteration, issued in 2017 and called the BTRA 5.0, includes over 600,000 scenarios with predicted impacts on human health, fatalities, and economic costs. These consequence estimates are based in part on inputs provided or validated by the Intelligence Community, various estimates of likelihood, and applicable consequence parameters, such as specific agent attributes and threat characterization research results from the National Biodefense Analysis and Countermeasures Center (NBACC).
The BTRA incorporates a number of different models related to the various attack scenarios being assessed. For example, DHS utilizes unique models to assess risk for indoor attacks (in 12 different target categories, such as transportation hubs and sporting events), outdoor attacks (including the top 100 most populated U.S. cities and their associated weather patterns), and potential dissemination via food or water systems, as well as a model that estimates the ability for the public health system to mitigate potential illnesses or fatalities based on disease progression, response timelines, and available medical countermeasures. According to S&T officials, one of the key updates in the BTRA 5.0 is the introduction of adversary-decision models, which allow BTRA program officials to incorporate inputs from subject-matter experts and other data sources regarding the likelihood of various attack scenarios. Selected factors that are considered to help identify potential agents or dissemination methods chosen by an adversary include data on agent acquisition or the means of production in various countries, as well as the likelihood of interdiction during transport.
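In general terms, a probabilistic risk assessment of this kind enumerates every path through an event tree, multiplies the branch probabilities along each path, and weights each scenario's consequence estimate by its probability. The sketch below shows that mechanic on a deliberately tiny, hypothetical two-stage tree; the agents, probabilities, and consequence values are invented for illustration and do not reflect the BTRA's actual models or data.

```python
"""Illustrative event-tree risk enumeration.

The tree below (two stages, four leaves) is a toy example; the BTRA's
actual event trees, probabilities, and consequence models are far
larger and are not reproduced here.
"""
from itertools import product

# Hypothetical branch probabilities for each stage of an attack scenario.
AGENT_CHOICE = {"agent_A": 0.6, "agent_B": 0.4}
DISSEMINATION = {"indoor_release": 0.7, "outdoor_release": 0.3}

# Hypothetical consequence estimate (e.g., expected casualties) per leaf.
CONSEQUENCE = {
    ("agent_A", "indoor_release"): 5000,
    ("agent_A", "outdoor_release"): 1200,
    ("agent_B", "indoor_release"): 800,
    ("agent_B", "outdoor_release"): 300,
}

def enumerate_scenarios():
    """Yield (scenario, probability, consequence) for every path through
    the event tree; probabilities multiply along each path."""
    for agent, dissem in product(AGENT_CHOICE, DISSEMINATION):
        p = AGENT_CHOICE[agent] * DISSEMINATION[dissem]
        yield (agent, dissem), p, CONSEQUENCE[(agent, dissem)]

if __name__ == "__main__":
    expected_risk = 0.0
    for scenario, p, c in enumerate_scenarios():
        expected_risk += p * c
        print(f"{scenario}: p={p:.2f}, consequence={c:,}")
    print(f"Probability-weighted expected consequence: {expected_risk:,.0f}")
```

The adversary-decision models introduced in the BTRA 5.0 can be thought of as refining the branch probabilities in such a tree using expert input and intelligence data.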
BTRA 5.0 Updates and Prior Recommendations
According to S&T officials, the BTRA 5.0 is intended to address previous recommendations of the National Research Council of the National Academies (National Academies) and provide additional information regarding data and intelligence inputs provided by subject-matter experts. The BTRA 5.0 was released in May 2017 and represents the first full BTRA product since 2010. According to BTRA program officials, a series of limited reports were issued in 2012, but S&T management instructed the division to address previous criticisms of the BTRA, including the National Academies’ recommendations, before developing another full report. S&T program officials reported taking action on 12 of the 13 National Academies’ recommendations, and determined, after subsequent review by DHS, that no action was required to address the final recommendation. Some notable changes that DHS reported making in response to the National Academies’ recommendations include:
Officials reported implementing adversary-decision models to assess the probabilities of terrorist decisions for transporting materials and selecting targets to respond to the National Academies' criticism that the BTRA methodology may not fully consider adversaries' efforts to maximize their chance of success.
Officials reported publishing models and methodology reports and sending biological data for interagency review to respond to the National Academies’ recommendation to improve transparency. In addition, officials said that DHS had made this information available to stakeholders on a secured electronic site for those with access.
Officials reported developing additional tools and methods to assess consequences and probabilities of changing threats to address the National Academies’ concern that the BTRA did not allow for incorporation of newly recognized threats or those that may not yet be well understood.
Officials reported developing an economic consequence model and beginning to incorporate assessments of agricultural risk in addition to human mortality and morbidity to respond to the National Academies’ recommendation that DHS add economic and agricultural effects, among other losses, to its consequence modeling.
According to S&T officials, another change implemented in the BTRA 5.0 is an effort to collect more detailed information about the sources and confidence level of the data inputs provided by subject-matter experts. These officials reported that they obtained this input primarily by surveying terrorism subject-matter experts, including members of the Intelligence Community. Data results now indicate whether inputs are based upon official reporting or the contributor's opinion based upon subject knowledge.
Additional BTRA Tools and Model Development
DHS also reported working on additional tools and models that officials expected would enhance the BTRA and make the results more useful to stakeholders. The following are examples of new developments identified to us by S&T officials:
Research Prioritization Matrix (RPM) Tool. The RPM tool is intended to help identify areas of research that will be of greatest benefit to further inform future iterations of the BTRA. The RPM tool uses a mathematical formula to develop a score based on numerous factors including (1) estimates of likelihood and consequences calculated by the BTRA, (2) the results of a sensitivity analysis of individual data parameters, and (3) an estimate of the confidence in the underlying and supporting data. According to officials, the result is a parameter- and agent-specific score that can be used to support decisions regarding research prioritization in a structured, transparent manner that can be tracked over time to demonstrate progress. For example, a specific parameter in the RPM tool may include the decay rate of an agent in a particular substance (for example, in food items), and another parameter might be how much of a certain agent can likely be produced by certain adversaries. According to S&T officials, the RPM tool was recently updated with the latest data and results from the BTRA 5.0 and is expected to be more influential on the development of the research plan for fiscal year 2018. S&T program officials also said that the RPM tool will be made available to other federal entities so that they may use it for their own research prioritization needs, as well as customize the results, such as restricting the model to include only indoor attacks.
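DHS has not published the RPM tool's formula, so the sketch below shows only one plausible way a composite score could combine the three factors officials described: a BTRA-derived risk estimate, a sensitivity-analysis result, and confidence in the underlying data. The multiplicative form, the 0-to-1 scales, and all values are assumptions for illustration.

```python
"""Illustrative composite research-prioritization score.

The inputs, scales, and multiplicative form are assumptions for
illustration only; they do not reproduce the actual RPM tool.
"""

# For each (agent, parameter): BTRA risk contribution, sensitivity of
# BTRA results to the parameter, and confidence in the current
# supporting data (all hypothetical, scaled 0 to 1).
PARAMETERS = [
    # (agent,   parameter,           risk, sensitivity, confidence)
    ("agent_A", "decay_in_food",     0.8,  0.9,         0.2),
    ("agent_A", "producible_amount", 0.8,  0.4,         0.7),
    ("agent_B", "aerosol_stability", 0.5,  0.7,         0.3),
]

def rpm_score(risk, sensitivity, confidence):
    """Score rises with risk and sensitivity and falls as confidence in
    existing data rises, so low-confidence, high-impact gaps rank first."""
    return risk * sensitivity * (1.0 - confidence)

if __name__ == "__main__":
    ranked = sorted(PARAMETERS,
                    key=lambda row: rpm_score(*row[2:]), reverse=True)
    for agent, param, risk, sens, conf in ranked:
        print(f"{agent}/{param}: score = {rpm_score(risk, sens, conf):.2f}")
```

A scheme along these lines would let officials re-rank research priorities whenever new BTRA results or updated confidence judgments arrive.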
Agricultural Terrorism Modeling. S&T officials have initiated efforts to develop additional modeling of potential agricultural impacts of a biological attack. Although a risk assessment of agricultural terrorism was completed in 2012 that assessed potential impacts from five animal diseases and two plant pathogens, officials reported that it was criticized for having substandard modeling and employing limited scenarios. The current effort includes representatives from the U.S. Department of Agriculture, the Food and Drug Administration, and the Federal Bureau of Investigation, and is focused on development of modeling for biological attacks on agriculture that may occur pre-harvest (before food processing begins) to differentiate it from attacks on the food system itself. DHS and stakeholders are currently evaluating available modeling tools, and they plan to include the new modeling within the BTRA 6.0.
Appendix II: Key Threat Awareness Activities Identified by Federal Agencies

U.S. Department of Agriculture

The U.S. Department of Agriculture (USDA) operates numerous programs designed to help prevent the entry and spread of agricultural pests and diseases, and protect the health of U.S. agricultural resources by addressing zoonotic diseases (transmissible from animals to humans) and implementing surveillance, preparedness and response, and control efforts. Examples of program activities include the following:
High Consequence List. A three-tier classification system of foreign animal diseases determined to pose a significant threat to animal health if introduced into the United States. The list was developed in 2013 to help prioritize investments in the National Veterinary Stockpile.

Produced annually to provide an assessment of pests deemed most important in terms of likelihood or potential consequence, these guidelines define the procedures that stakeholders are to use to identify, characterize, survey, and respond to a particular pest if detected in the United States.
Vulnerability Assessments. The Food Safety and Inspection Service conducts vulnerability assessments that, among other things, can inform the development of countermeasures to help prevent or mitigate the impacts of an intentional attack on the food supply.
Scientific Research. The Agricultural Research Service conducts research to help characterize the status of diseases worldwide and assess their spread patterns. This work can also include basic research on various biological agents, as well as identification of specific scientific and technology gaps related to effective preparedness and response efforts.
Department of Defense

The Joint Program Executive Office for Chemical and Biological Defense (JPEO-CBD) manages the development and acquisition of different technologies and prototypes in order to provide biological defense products to the military services. These technologies can include biological detection systems and laboratory equipment, medical countermeasures, and protective equipment for individual warfighters, providing deployed units detection and protection capabilities against different types of biological weapons.
Threat Assessment. The Defense Intelligence Agency produces the Chemical, Biological, Radiological, and Nuclear Warfare Capstone Threat Assessment, a report on the chemical and biological programs of countries and on technology that could be used by adversaries in a threat environment. DOD officials said that JPEO-CBD uses the report to identify biological warfare threats against military and civilian populations and to help prioritize resources and investments in research and development.

Environmental Protection Agency

The Environmental Protection Agency (EPA) conducts assessments of potential impacts to water systems and the environment in the event of a biological incident. EPA officials said EPA relies on the Department of Homeland Security's Bioterrorism Risk Assessment and information on adversary capabilities and tactics to better assess potential environmental countermeasures for attacks on water systems and indoor/outdoor areas, to steer research resources, and to support responders who may need to address the consequences of an attack. EPA Water Security Division officials said they develop tools, training, and programs to address intentional contamination, detection in distribution networks, vulnerability assessments, emergency response capabilities, and how to monitor incidents and threats.
Research and Development. The Office of Research and Development's Homeland Security Research Program aims to help increase the capabilities of EPA and communities to prepare for and respond to chemical, biological, and radiological disasters. EPA relies on information from the BTRA in addition to its own research to inform preparedness activities and its research agenda. EPA's homeland security research is organized into three topic areas that support these objectives: (1) characterizing contamination and assessing exposure; (2) water system security and resilience; and (3) remediating wide areas.

Department of Health and Human Services

Public Health Emergency Medical Countermeasures Enterprise (PHEMCE). Includes various HHS agencies and other federal departments, such as the Department of Defense (DOD), DHS, and the U.S. Department of Agriculture, to advise the Secretary of HHS on medical countermeasure priorities and approaches to the development, acquisition, stockpiling, and distribution of medical countermeasures for biological weapons attack agents, pandemic influenza, and other emerging infectious diseases.
Global disease surveillance. Helps identify and respond to emerging infections, including pathogenic avian influenza, which remains an urgent global infectious disease threat.
Medical and Public Health Consequence Modeling. HHS's medical and public health consequence modeling reports use the exposure information from DHS's material threat assessments (MTA) to calculate the number of individuals who may become ill, be hospitalized, or die based on the MTA scenario with and without medical countermeasures. HHS reported using the modeling reports as part of an assessment process to establish requirements for medical countermeasures that need to be developed and acquired to respond to a biological incident.

The Food and Drug Administration (FDA) also reported efforts aimed at reducing large public health consequences of attacks on the food supply. FDA assesses the public health and economic impact of an attack, the accessibility of a target and ease of an attack, the ability to recover, the loss of production due to an attack, and target selection. FDA also said it considers the health, economic, and psychological impacts of an attack on the food industry.
Scientific Research. Studies include the thermal stability of microbial agents and the ability to inactivate biological agents in the food supply, as well as studies of the pathogenic properties of viruses to help understand the epidemiology, transmission, evolution, and origin of an outbreak.

Department of Homeland Security

DHS operates the BioWatch program, which is a system of environmental monitoring intended to provide early warning and detection of a biological attack. DHS also houses and supports the National Biosurveillance Integration Center—a collaboration of 14 federal partners intended to integrate information about threats to human, animal, plant, and environmental health from thousands of sources to develop a more comprehensive picture of the threat landscape.
Research and Analysis. DHS operates the National Biodefense Analysis and Countermeasures Center, which conducts scientific research and develops reports and products intended to address identified knowledge gaps associated with current and future biological threats, including the effectiveness of potential countermeasures and the characterization of key attributes of biological attacks by an adversary, such as agent acquisition, agent production, dissemination methods, and virulence. Additional research and analysis efforts are supported by the Biodefense Knowledge Center and multiple National Laboratories.
Appendix III: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Kathryn Godfrey (Assistant Director), Ryan Lambert (Analyst-in-Charge), Amy Bowser, Ben Emmel, Ashley Grant, Eric Hauswirth, Susanna Kuebler, Cody Raysinger, and Amber Sinclair made key contributions to this report.
Related GAO Products
Biological Defense: Additional Information That Congress May Find Useful as It Considers DOD’s Advanced Development and Manufacturing Capability. GAO-17-701. Washington, D.C.: July 17, 2017.
Chemical and Biological Defense: DOD Has Identified an Infrastructure Manager and Is Developing the Position’s Roles and Responsibilities. GAO-17-522R. Washington, D.C.: July 7, 2017.
Emerging Infectious Diseases: Actions Needed to Address the Challenges of Responding to Zika Virus Disease Outbreaks. GAO-17-445. Washington, D.C.: May 23, 2017.
Avian Influenza: USDA Has Taken Actions to Reduce Risks but Needs a Plan to Evaluate Its Efforts. GAO-17-360. Washington, D.C.: April 13, 2017.
Defense Civil Support: DOD, HHS, and DHS Should Use Existing Coordination Mechanisms to Improve Their Pandemic Preparedness. GAO-17-150. Washington, D.C.: February 10, 2017.
Bioforensics: DHS Needs to Conduct a Formal Capability Gap Analysis to Better Identify and Address Gaps. GAO-17-177. Washington, D.C.: January 11, 2017.
Defense Intelligence: Additional Steps Could Better Integrate Intelligence Input into DOD’s Acquisition of Major Weapon Systems. GAO-17-10. Washington, D.C.: November 1, 2016.
High-Containment Laboratories: Actions Needed to Mitigate Risk of Potential Exposure and Release of Dangerous Pathogens. GAO-16-871T. Washington, D.C.: September 23, 2016.
High-Containment Laboratories: Improved Oversight of Dangerous Pathogens Needed to Mitigate Risk. GAO-16-642. Washington, D.C.: August 30, 2016.
Biodefense: The Nation Faces Multiple Challenges in Building and Maintaining Biodefense and Biosurveillance. GAO-16-547T. Washington, D.C.: April 14, 2016.
Emerging Infectious Diseases: Preliminary Observations on the Zika Virus Outbreak. GAO-16-470T. Washington, D.C.: March 2, 2016.
Air Travel and Communicable Diseases: Comprehensive Federal Plan Needed for U.S. Aviation System’s Preparedness. GAO-16-127. Washington, D.C.: December 16, 2015.
Emerging Animal Diseases: Actions Needed to Better Position USDA to Address Future Risks. GAO-16-132. Washington, D.C.: December 15, 2015.
Biosurveillance: DHS Should Not Pursue BioWatch Upgrades or Enhancements Until System Capabilities Are Established. GAO-16-99. Washington, D.C.: October 23, 2015.
Climate Change: HHS Could Take Further Steps to Enhance Understanding of Public Health Risks. GAO-16-122. Washington, D.C.: October 5, 2015.
Biosurveillance: Challenges and Options for the National Biosurveillance Integration Center. GAO-15-793. Washington, D.C.: September 24, 2015.
Chemical and Biological Defense: Designated Entity Needed to Identify, Align, and Manage DOD’s Infrastructure. GAO-15-257. Washington, D.C.: June 25, 2015.
Biological Defense: DOD Has Strengthened Coordination on Medical Countermeasures but Can Improve Its Process for Threat Prioritization. GAO-14-442. Washington, D.C.: May 15, 2014.
National Preparedness: HHS Is Monitoring the Progress of Its Medical Countermeasure Efforts but Has Not Provided Previously Recommended Spending Estimates. GAO-14-90. Washington, D.C.: December 27, 2013.
Homeland Security: An Overall Strategy Is Needed to Strengthen Disease Surveillance in Livestock and Poultry. GAO-13-424. Washington, D.C.: May 21, 2013.
Influenza: Progress Made in Responding to Seasonal and Pandemic Outbreaks. GAO-13-374T. Washington, D.C.: February 13, 2013.
Managing for Results: Key Considerations for Implementing Interagency Collaborative Mechanisms. GAO-12-1022. Washington, D.C.: September 27, 2012.
Biosurveillance: DHS Should Reevaluate Mission Need and Alternatives before Proceeding with BioWatch Generation-3 Acquisition. GAO-12-810. Washington, D.C.: September 10, 2012.
Chemical, Biological, Radiological, and Nuclear Risk Assessments: DHS Should Establish More Specific Guidance for Their Use. GAO-12-272. Washington, D.C.: January 25, 2012.
Biosurveillance: Nonfederal Capabilities Should Be Considered in Creating a National Biosurveillance Strategy. GAO-12-55. Washington, D.C.: October 31, 2011.
National Preparedness: Improvements Needed for Acquiring Medical Countermeasures to Threats from Terrorism and Other Sources. GAO-12-121. Washington, D.C.: October 26, 2011.
Homeland Security: Challenges for the Food and Agriculture Sector in Responding to Potential Terrorist Attacks and Natural Disasters. GAO-11-946T. Washington, D.C.: September 13, 2011.
Homeland Security: Actions Needed to Improve Response to Potential Terrorist Attacks and Natural Disasters Affecting Food and Agriculture. GAO-11-652. Washington, D.C.: August 19, 2011.
National Preparedness: DHS and HHS Can Further Strengthen Coordination for Chemical, Biological, Radiological, and Nuclear Risk Assessments. GAO-11-606. Washington, D.C.: June 21, 2011.
Live Animal Imports: Agencies Need Better Collaboration to Reduce the Risk of Animal-Related Diseases. GAO-11-9. Washington, D.C.: November 8, 2010.
Biosurveillance: Efforts to Develop a National Biosurveillance Capability Need a National Strategy and a Designated Leader. GAO-10-645. Washington, D.C.: June 30, 2010.
Agricultural Quarantine Inspection Program: Management Problems May Increase Vulnerability of U.S. Agriculture to Foreign Pests and Diseases. GAO-08-96T. Washington, D.C.: October 3, 2007.
Global Health: U.S. Agencies Support Programs to Build Overseas Capacity for Infectious Disease Surveillance. GAO-07-1186. Washington, D.C.: September 28, 2007. | Why GAO Did This Study
Biological threats come from a variety of sources and can pose a catastrophic danger to public health, animal and plant health, and national security. Threat awareness, which consists of activities such as collecting and analyzing intelligence, developing risk assessments, and anticipating future threats, is vital to help federal agencies identify necessary biodefense capabilities and ensure investments are prioritized to make effective use of federal funds.
GAO was asked to review how key federal agencies develop and share threat awareness information, and how that information informs further investments in biodefense. This report describes: (1) the types of actions that key federal agencies have taken to develop biological threat awareness, and how that information is used to support investment decisions; (2) the extent to which these agencies have developed shared threat awareness; and (3) how DHS's NBACC determines what additional threat characterization knowledge to pursue.
GAO analyzed federal policies, directives, and strategies related to biodefense, as well as agency documents such as threat assessments and modeling studies. GAO identified five key biodefense agencies based on a review of the roles designated in these documents. GAO interviewed officials from these agencies about threat awareness activities, and reviewed prior GAO work and related biodefense studies. Each of the key agencies reviewed a draft of this report and provided technical comments that GAO incorporated as appropriate.
What GAO Found
Key biodefense agencies—the Departments of Homeland Security (DHS), Defense (DOD), Agriculture (USDA), and Health and Human Services (HHS), and the Environmental Protection Agency—conduct a wide range of activities to develop biological threat awareness for intentional and naturally occurring threats, and reported using that information to support investment decisions.
Intelligence gathering: Agencies use a combination of intelligence gathering on adversaries' capabilities to cause harm with a biological weapon and global disease surveillance to monitor naturally occurring health threats that might impact humans, animals, or plants.
Scientific research: Agencies use traditional laboratory research to help understand the characteristics of various threat agents, including their virulence, stability, and ability to be dispersed through various methods. Scientific research is also performed on emerging pathogens to understand their means of transmission, host susceptibility, and effects of infection.
Analysis activities: Agencies use modeling studies and other analytical work to help determine the scope and impact of possible biological threats.
These three activities help agencies identify and prioritize the most dangerous biological threats, which can then be used to guide biodefense investments. For example, USDA told GAO it uses threat information to determine which foreign animal diseases represent its highest priorities based on the potential of those agents to cause catastrophic harm, and those priorities are used to inform investments. Similarly, HHS said it conducts threat awareness activities to help inform the development and acquisition of human medical countermeasures.
Federal agencies with key roles in biodefense share biological threat information through many different mechanisms designed to facilitate collaboration among government partners, including working groups and interagency agreements. For example, agency officials reported using collaborative mechanisms to coordinate activities and avoid duplication and overlap. However, as GAO and others have noted, opportunities exist to better leverage shared resources and inform budgetary tradeoffs. Recent legislation requires key biodefense agencies to create a national biodefense strategy that has the potential to help address these issues, by, among other things, supporting shared threat awareness. Until the strategy is developed, the extent to which it will meet this need is unknown.
The threat characterization research agenda at DHS's National Biodefense Analysis and Countermeasures Center (NBACC) is based primarily on the results and knowledge gaps identified through the Bioterrorism Risk Assessment (BTRA). According to DHS officials, the knowledge gaps deemed most critical include data about biological agents that have a high impact on BTRA consequence estimates and also a high degree of uncertainty. Each year NBACC produces an annual plan that outlines new research projects intended to address these knowledge gaps, and incorporates additional planning criteria, such as interagency stakeholder input, resource availability, and maintenance of required technical capabilities. According to DHS officials, the results of NBACC research were used to directly enhance the BTRA, including updating data associated with eight biological agents since 2010. |
Background
Overview of Election Administration
In the United States, election authority is shared by federal, state, and local officials, and election administration is highly decentralized and varies among state and local jurisdictions. Congressional authority to regulate elections derives from various constitutional sources, depending upon the type of election. Federal election laws have been enacted that include provisions pertaining to voter registration, protecting the voting rights of certain minority groups, and other areas of the elections process. States regulate various election activities, including some requirements related to these federal laws, but generally delegate election administration responsibilities to local jurisdictions.
Federal Roles and Responsibilities
Congress has passed legislation in major functional areas of the voting process. For example, the Help America Vote Act of 2002 (HAVA) includes a number of provisions related to voting equipment and other election administration activities, including, for instance, requiring at least one voting system equipped for persons with disabilities at each polling place in federal elections. After HAVA was enacted, Congress appropriated more than $3 billion for the U.S. Election Assistance Commission (EAC) to distribute to states to make election administration improvements, such as the replacement of punch card and mechanical lever voting equipment.
In addition to HAVA, federal laws have been enacted in other areas of the voting process. For example, the Voting Rights Act of 1965, as amended, contains, among other requirements, provisions designed to protect the voting rights of U.S. citizens of certain ethnic groups whose command of the English language may be limited. In accordance with the act, covered states and jurisdictions must provide written materials—such as ballots or registration forms—in the language of certain “language minority groups” in addition to English, as well as other assistance, such as bilingual poll workers.
State and Local Roles and Responsibilities
The responsibility for the administration of elections resides at the state and local levels. States regulate various election activities, such as absentee and early voting requirements and Election Day procedures, but generally delegate election administration responsibilities to local jurisdictions. Some states have mandated statewide election administration guidelines and procedures that foster uniformity in the ways local jurisdictions conduct elections, including the types of voting equipment used. Other states have guidelines that generally permit local election jurisdictions considerable autonomy and discretion in the way they run elections. Although some states bear some election costs, including those associated with voting equipment, local jurisdictions generally pay for most aspects of election administration. Unless states require otherwise, local jurisdictions generally have discretion over activities such as training election officials and, in most states, over the selection and purchase of voting technology. Among other things, local election officials register eligible voters; design ballots; educate voters on how to use voting technology; provide information on the candidates and ballot measures; arrange for polling places; recruit, train, organize, and mobilize poll workers; prepare and test voting equipment for use; and count ballots.
The Voting Process
Voting before Election Day
States have established alternatives for voters to cast a ballot other than at the polls on Election Day, including absentee voting and early voting. All states and the District of Columbia have provisions allowing voters to cast their ballots before Election Day by voting absentee, with variations on who may vote absentee, whether the voter needs to provide an excuse for requesting an absentee ballot, and the time frames for applying for and submitting absentee ballots. Some states also permit registered voters to apply for an absentee ballot on a permanent basis so that those voters automatically receive an absentee ballot in the mail prior to every election without providing an excuse or reason for voting absentee. In addition to absentee voting, some states allow early in-person voting. In general, early voting allows voters from any precinct in the jurisdiction to cast their vote in person without providing an excuse, before Election Day, either at one specific location or at one of several locations. Further, three states and a number of local election jurisdictions in other states conduct vote-by-mail elections, wherein ballots are automatically sent to every eligible voter.
In-Person Voting on Election Day
For in-person voting on Election Day, election authorities subdivide local election jurisdictions into precincts. Voters generally cast their ballots at the polling places for the precincts to which they are assigned by election authorities. In addition, some states provide jurisdictions the discretion to allow voters to cast their ballots at vote centers, which are polling places at which any registered voter in the local election jurisdiction may vote on Election Day, regardless of the precinct in which the voter resides.
Within the polling place, poll workers check in voters and determine their eligibility to vote by verifying their registration using voter lists or poll books—a list of individuals eligible to vote within the voting precinct or local jurisdiction. After checking the voters in, poll workers direct them to a voting booth to mark their electronic or paper ballots, and then voters submit the ballots for counting. The manner in which votes are cast and counted can vary depending on the voting method and technology employed by the jurisdiction.
Postelection Activities
Following the close of the polls on Election Day, election officials and poll workers complete steps such as securing equipment and ballots, transferring paper ballots or electronic records of vote counts to a central location for counting, and determining the outcome of the election. Votes counted include those cast on Election Day, absentee ballots, early votes (where applicable), and valid provisional ballots. While preliminary results are available usually by the evening of Election Day, the certified results are generally not available until a later date.
EAC Voluntary Voting System Guidelines and Testing and Certification Program
Overview of Voluntary Guidelines and Testing and Certification Program
The EAC has responsibility for developing the voluntary voting system guidelines and overseeing the testing and certification of voting systems based on these guidelines. The EAC works in conjunction with the National Institute of Standards and Technology (NIST) and the Technical Guidelines Development Committee (TGDC) to develop the voluntary guidelines. According to the EAC, these guidelines are a set of specifications and requirements against which voting systems, including hardware and software, can be tested to receive a certification from the EAC. According to NIST, the guidelines are intended to ensure that federal testing provides assurance to state and local election officials that the voting systems meet a defined set of requirements. The EAC testing and certification program verifies that voting systems comply with basic functionality, accessibility, and security capabilities established by the voluntary guidelines. Typically, voting system vendors submit their systems to the EAC for testing and certification and the systems are evaluated by EAC-accredited voting system test laboratories against the guidelines. These laboratories make recommendations regarding certification to the EAC. According to the EAC, an EAC-certified voting system means that the voting system has been tested by a federally accredited test laboratory and complies with the guidelines.
Establishment of Federal Voting System Guidelines and Updates
According to the EAC, prior to its establishment and the creation of its voluntary voting system guidelines, the first set of federal voluntary Voting System Standards were adopted in 1990 by the Federal Election Commission. The National Association of State Election Directors voluntarily assumed the role of accrediting voting system test laboratories and certifying voting systems to the federal standards. In 2002, the Federal Election Commission adopted a new version of the federal standards.
After the EAC’s creation, in 2005, the EAC developed and adopted the third iteration of federal standards, in accordance with HAVA, and the standards were renamed the Voluntary Voting System Guidelines (VVSG). This third iteration of federal voting system guidelines was referred to as the 2005 VVSG or VVSG 1.0, as it is called today. According to the EAC, VVSG 1.0 increased security requirements for voting systems and were intended to expand access, including opportunities to vote privately and independently, for individuals with disabilities. In 2006, the National Association of State Election Directors terminated its voting system testing program and subsequently, in 2007, the EAC launched its own testing and certification program. In March 2015, a fourth iteration of the voluntary guidelines was adopted by the EAC, referred to as VVSG 1.1. According to the EAC, VVSG 1.1 clarified the guidelines to improve testability by testing laboratories, among other updates, and focused on areas that could be improved without requiring significant changes to the testing and certification process. In January 2016, the EAC adopted an implementation plan for VVSG 1.1 whereby all new voting systems being tested for certification would be required to be tested against the VVSG 1.1 beginning on July 6, 2017. As of November 2017, no voting systems have been certified using VVSG 1.1.
The EAC, NIST, and TGDC are in the process of developing the next iteration of the voluntary guidelines (known as VVSG 2.0), and these guidelines are expected to be issued in late summer 2018. Typically, a lag exists between when guidelines are issued and when they are used for testing and certification. EAC officials stated that it has generally taken about 18 months before the guidelines are ready for use for testing voting systems. This is due in part to the need for the voting system test laboratories to be reaccredited to test to the new voluntary guidelines by the EAC. According to EAC officials, after the guidelines are approved for use, it typically takes 2 to 4 years before voting system vendors can develop voting systems that are ready for testing and certification.
States’ Participation in the EAC Testing and Certification Program
Participation in the EAC testing and certification program is voluntary. Each state determines its own standards for voting systems in statute or administrative regulation, which can be based on the voluntary guidelines established by the EAC. Specifically, most states require some level of participation in the EAC testing and certification program as mandated by their state laws or regulations. As of December 2017, 13 states require federal certification of their voting systems, 24 states and the District of Columbia require testing by a federally accredited laboratory or require testing to federal voting system standards, and 13 states have no federal requirements. Some states have their own voting system standards and conduct their own testing and certification to these standards, either in addition to or as an alternative to the federal voluntary guidelines. Vendors that want to supply their voting systems to local jurisdictions and states must comply with state requirements. See appendix II for federal certification and testing requirements by state, including the associated statutes and regulations we reviewed.
Local Election Jurisdictions Primarily Used Two Types of Voting Equipment, Monitored Such Equipment, and Were Generally Satisfied with Equipment Performance
Local Election Jurisdictions Primarily Used Optical/Digital Scan and Direct Recording Electronic Equipment during the 2016 General Election
According to our analysis of the predominant type of equipment used to process the largest number of ballots during the 2016 general election, jurisdictions using optical/digital scan equipment represented the largest estimated share of the population nationwide, followed by jurisdictions using direct recording electronic (DRE) equipment. Specifically, on the basis of our local election jurisdiction survey, we estimate that jurisdictions with about 63 percent of the population nationwide used optical/digital scan equipment as their predominant voting equipment during the election, while jurisdictions with an estimated 32 percent of the population nationwide used DREs. Jurisdictions with less than 1 percent of the population nationwide used paper hand-counted ballots. See figure 1.
Within the optical/digital scan equipment category, the most widely used model of optical/digital scan equipment was the precinct count optical/digital scan, with jurisdictions having an estimated 46 percent of the population nationwide using it as their predominant voting equipment. Figure 2 shows the predominant types of voting equipment that were used by jurisdictions during the 2016 general election, broken out by model of equipment used.
While many jurisdictions predominantly used one type of voting equipment, some reported using multiple types. Jurisdictions may choose to use more than one type of equipment as a means to process different types of ballots such as absentee or provisional or to provide accessibility options for voters with disabilities. Overall, we estimate that jurisdictions with about 59 percent of the population nationwide used only one type of equipment during the 2016 general election, while jurisdictions with about 37 percent of the population nationwide used multiple types of equipment during the election. Jurisdictions that used two types of equipment are estimated to have about 30 percent of the population nationwide, while those that used more than two types of voting equipment had approximately 6 percent of the population nationwide. See figure 3 for the types of voting equipment used.
Local Election Jurisdictions Monitored Equipment Performance in Various Ways
According to results from our survey of local election jurisdictions, jurisdictions monitored the performance of their voting equipment during the 2016 general election through a variety of methods, such as equipment testing, performance measurement and tracking of malfunctions, and postelection audits and recounts. Such monitoring can provide information to jurisdictions about how their equipment is functioning and help ensure the accuracy of the outcomes of elections and address any identified issues or problems.
Testing of Voting Equipment
Results from our survey of local election jurisdictions indicate that the extent to which jurisdictions tested their voting equipment varied by test type. Key types of voting equipment testing include acceptance testing, logic and accuracy testing, and parallel testing. Acceptance testing verifies that new equipment or any equipment that has been outside election administrators’ control (e.g., for repair) conforms to the purchase agreements and is identical to equipment that was tested and certified by state or federal testing organizations. According to our local jurisdiction survey results, jurisdictions with an estimated 49 percent of the population nationwide conduct acceptance testing of their equipment. Logic and accuracy (also known as functional or readiness) testing is performed in advance of an election to determine whether voting equipment will function properly, such as displaying the correct ballot, collecting votes, and tabulating results. Parallel testing is performed on Election Day by running test votes cast with known results, then comparing the actual and expected results. Of these two types of testing, according to our local jurisdiction survey results, logic and accuracy testing was the most widely performed type of testing as jurisdictions with 99 percent of the population nationwide conducted such testing for the 2016 general election. Jurisdictions with an estimated 37 percent of the population nationwide conducted parallel testing.
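Conceptually, parallel testing reduces to comparing the tallies the equipment actually produces for a known test deck against the tallies that deck should produce. The sketch below illustrates that comparison in generic terms; the contest choices and counts are hypothetical, and real test decks and tabulator outputs vary by voting system and jurisdiction.

```python
"""Illustrative parallel-test tally comparison.

A known test deck with predetermined results is run through the voting
equipment, and the actual tallies are compared to the expected ones.
All names and counts are hypothetical.
"""

# Expected tallies for the test deck (known in advance).
EXPECTED = {"Candidate X": 25, "Candidate Y": 17, "undervotes": 3}

def compare_tallies(expected, actual):
    """Return a list of discrepancies between expected and actual counts."""
    problems = []
    for choice in sorted(set(expected) | set(actual)):
        e, a = expected.get(choice, 0), actual.get(choice, 0)
        if e != a:
            problems.append(f"{choice}: expected {e}, got {a}")
    return problems

if __name__ == "__main__":
    # Tallies reported by the equipment after running the test deck.
    actual = {"Candidate X": 25, "Candidate Y": 16, "undervotes": 4}
    issues = compare_tallies(EXPECTED, actual)
    print("PASS" if not issues else "FAIL:\n" + "\n".join(issues))
```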
Performance Measures and Reported Errors and Malfunctions
According to our local jurisdiction survey results, jurisdictions monitored the performance of their predominant voting equipment during the 2016 general election using a variety of measures. Accuracy of the equipment in counting votes was tracked, measured, or assessed by jurisdictions having an estimated 87 percent of the population nationwide. Another widely monitored aspect of voting equipment performance was the accuracy of the equipment in recording voter selections before counting— jurisdictions with 78 percent of the population nationwide tracked, measured, or assessed that aspect. Overvotes and undervotes were also widely used measures, with jurisdictions having about 63 and 64 percent of the population nationwide, respectively, tracking, measuring, or assessing those measures.
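For a vote-for-one contest, an overvote is a ballot with more selections than allowed and an undervote is a ballot with none. The sketch below shows how such counts can be derived from per-ballot selection records; the record format and data are hypothetical, since cast-vote-record formats vary by voting system.

```python
"""Illustrative overvote/undervote tally for a vote-for-one contest.

Each record lists the selections marked for one contest on one ballot;
the format and data are hypothetical.
"""

CAST_VOTE_RECORDS = [
    ["Candidate X"],                 # valid vote
    [],                              # undervote (no selection)
    ["Candidate X", "Candidate Y"],  # overvote (too many selections)
    ["Candidate Y"],                 # valid vote
]

def tally(records, votes_allowed=1):
    """Classify each ballot's selections as valid, overvote, or undervote."""
    counts = {"valid": 0, "overvotes": 0, "undervotes": 0}
    for selections in records:
        if len(selections) > votes_allowed:
            counts["overvotes"] += 1
        elif not selections:
            counts["undervotes"] += 1
        else:
            counts["valid"] += 1
    return counts

if __name__ == "__main__":
    print(tally(CAST_VOTE_RECORDS))
```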
According to the results of our local jurisdiction survey, most jurisdictions did not experience extensive or widespread errors or malfunctions with their equipment during the 2016 general election. We estimate that jurisdictions with 93 percent of the population did not experience equipment errors or malfunctions on a “somewhat” or “very” common basis during the election. Of those that did experience equipment errors or malfunctions of some type on a “somewhat” or “very” common basis, the error or malfunction most frequently encountered was jams or misfeeds. We estimate that this error or malfunction was experienced on a “very common” basis by jurisdictions with about 1 percent of the population nationwide and on a “somewhat common” basis by jurisdictions with about 3 percent of the population nationwide. The next most frequent error or malfunction experienced as a “very” or “somewhat” common occurrence was that equipment response was sluggish or slower than acceptable, which was experienced by jurisdictions with an estimated 3 percent of the population nationwide.
Postelection Audits and Recounts
State and local election officials also determined how their voting equipment performed and verified election results by conducting postelection audits and recounts. According to 35 out of 46 respondents to our state survey, the state election agency or local election jurisdictions in their states conducted postelection audits or targeted recounts of results from the 2016 general election. On the basis of our local jurisdiction survey, we estimate that jurisdictions with approximately 45 percent of the population nationwide conducted postelection audits or targeted recounts. Among jurisdictions of different size, large jurisdictions had a higher estimated share of their population within jurisdictions that conducted postelection audits or recounts than did medium or small jurisdictions. Specifically, jurisdictions with 82 percent of the population within large jurisdictions conducted postelection audits or recounts. In contrast, an estimated 55 percent and 37 percent of the population within medium and small jurisdictions, respectively, was represented by jurisdictions that conducted postelection audits or recounts.
Local Election Jurisdictions Experienced Various Benefits and Challenges with Voting Equipment and Were Generally Satisfied or Very Satisfied with Equipment Performance Benefits and Challenges of Predominant Equipment Used
According to the results of our local election jurisdiction survey, jurisdictions using the two main types of voting equipment (DRE or optical/digital scan) experienced mostly similar benefits as a result of using their respective type of predominant equipment. Table 1 shows the top benefits experienced by jurisdictions according to the type of predominant voting equipment used.
In addition to the benefits mentioned above, jurisdictions experienced other benefits associated with using their respective type of predominant voting equipment. For example, jurisdictions that had an estimated half or more of the population within jurisdictions using each of the different types of voting equipment also experienced the following benefits from using their equipment: Jurisdictions predominantly using DREs: accessibility for individuals with disabilities or impairments, timely election night reporting, ease of presenting lengthy ballots in a clear and understandable way, protection and preservation of votes cast against potential non-cybersecurity-related threats, and customer support and problem resolution assistance from vendor.
Jurisdictions predominantly using optical/digital scan equipment: timely election night reporting, ease of troubleshooting or resolving equipment malfunctions during Election Day, preventing or alerting voters of any overvotes or undervotes before ballot is cast, ability to facilitate a postelection audit, security of equipment against outside electronic hacking or intrusion, and ease of conducting routine maintenance.
Jurisdictions also experienced challenges while using their predominant voting equipment, although to a lesser extent overall than they experienced benefits. Table 2 shows the top challenges experienced by jurisdictions according to the type of predominant voting equipment used.
The challenges jurisdictions next most frequently experienced were the following (the values for the 95 percent confidence intervals are shown in parentheses after each estimate):

Jurisdictions predominantly using DREs: cost to maintain voting equipment (an estimated 12 percent; 6, 19); cost to operate voting equipment (8 percent; 3, 14); and ease of conducting routine maintenance (7 percent; 2, 14).
Jurisdictions predominantly using optical/digital scan equipment: cost to operate voting equipment (an estimated 11 percent; 7, 15); preventing or alerting voters of any overvotes or undervotes before ballot is cast (9 percent; 2, 23); and ease of connectivity with other election administration systems (e.g., voter registration, election night reporting) (9 percent; 2, 23).
Satisfaction with Predominant Voting Equipment
On the basis of our local election jurisdiction survey, we estimate that jurisdictions with approximately 96 percent of the population nationwide were very satisfied or generally satisfied with the performance of their predominant voting equipment during the 2016 general election. Specifically, we estimate that jurisdictions with approximately 70 percent of the population nationwide were very satisfied with their voting equipment’s performance and 26 percent were generally satisfied (see fig. 4). Jurisdictions with about 2 percent of the population nationwide were generally dissatisfied or very dissatisfied with the performance of their predominant voting equipment.
When comparing satisfaction with the performance of their predominant voting equipment used in the 2016 general election against the performance of their predominant equipment used in the 2012 general election, we estimate that jurisdictions with 67 percent of the population nationwide were just as satisfied with their equipment's performance in 2016 as in 2012, while 16 percent reported they were more satisfied (see fig. 5). Among jurisdictions that used different predominant types of equipment, a larger estimated share of the population was within jurisdictions predominantly using optical/digital scan equipment that were more satisfied with their equipment's performance in 2016 (20 percent) than within jurisdictions predominantly using DREs (4 percent).
Local Election Jurisdictions and States Consider Multiple Factors and Selected Jurisdictions Have Varying Approaches When Replacing Voting Equipment
Local Election Jurisdictions and States Consider Multiple Factors When Deciding Whether to Replace Voting Equipment
On the basis of our review of literature and studies, interviews with election subject matter experts, and analysis of our local election jurisdiction and state surveys, we identified four key factors and related issue areas within them that jurisdictions and states consider when deciding whether to replace voting equipment. After considering the factors, jurisdictions may decide to replace their equipment or continue using their existing equipment. The four key factors we identified are: (1) the need for voting equipment to meet federal, state, and local voting system standards and requirements; (2) the cost to acquire new equipment and availability of funding; (3) the ability to maintain equipment and receive timely vendor support; and (4) the overall performance and features of voting equipment. In our local election jurisdiction and state surveys, we asked election officials to rate issue areas related to each of these factors as to how important they were when determining whether to replace voting equipment and then rank the issue areas in terms of which were “most important” in making the determination. Analysis of the results of our surveys indicates that the 24 issue areas within the four factors vary in their relative importance to jurisdictions and states when determining whether to replace voting equipment.
Need for Voting Equipment to Meet Federal, State, and Local Voting System Standards and Requirements
The need for voting equipment to meet applicable federal, state, and local voting system standards and requirements is a factor considered by local election jurisdictions and states when determining whether to replace equipment. At the federal level, HAVA generally requires that voting equipment be accessible to individuals with disabilities. As discussed earlier, HAVA also established the EAC, which developed and maintains the voluntary guidelines that voting equipment can be tested against to receive federal certification. In turn, many states have established requirements that voting equipment be federally certified or meet some or all of the standards established by the federal guidelines. According to election subject matter experts we spoke with, in addition to federal requirements and standards, some states have imposed additional requirements that voting equipment must meet, such as having the capability to present all ballot issues and candidates on one page or to present ballots in multiple languages.
We identified four issue areas related to this factor. Figure 6 shows the importance local jurisdictions and state election officials attributed to the various issue areas within this factor when determining whether to replace voting equipment. For example, the need for equipment to meet state and local requirements and standards was considered “very important” by jurisdictions with 87 percent of the population nationwide and as one of the three “most important” issue areas overall by jurisdictions with 36 percent of the population nationwide. Among the states, this issue area was considered as “very important” by 18 out of the 25 states that indicated having a role in determining whether to replace voting equipment and as one of the three “most important” issue areas overall by 7 out of the 25 states.
Cost to Acquire New Equipment and Availability of Funding

According to election subject matter experts we spoke with, the cost to acquire new equipment and the availability of funding to pay that cost is a key factor that jurisdictions and states consider when determining whether to replace voting equipment. Acquiring new voting equipment involves a variety of costs and expenses. For example, in addition to the cost of the equipment itself, there can be other associated costs, such as training for poll workers and elections staff on the new equipment and voter outreach and education about the change in equipment, that may be incurred as existing equipment is replaced. These related acquisition and transition costs and expenses are incurred by the jurisdictions and states, which in turn must obtain or allocate resources to cover those costs.
We identified four issue areas related to this factor. Figure 7 shows the importance local jurisdictions and state election officials attributed to these issue areas when determining whether to replace voting equipment. For example, the availability of state and local funds was considered “very important” by jurisdictions with 62 percent of the population nationwide and as one of the three “most important” issue areas overall by jurisdictions with 18 percent of the population nationwide. Among the states, this issue area was considered as “very important” by 20 out of the 25 states that indicated having a role in determining whether to replace voting equipment and as one of the three “most important” issue areas overall by 9 out of the 25 states.
Given the importance of funding for the acquisition of new voting equipment and the assistance federal HAVA grants have previously provided, we asked states and jurisdictions additional questions in our surveys about their funding practices and the extent to which they have HAVA grant funds remaining to acquire voting equipment. The results from our surveys provided the following additional information about these issues:
Use of local and state funding sources for acquisition of new voting equipment: On the basis of our local election jurisdiction survey, we estimate that, among various potential funding sources, jurisdictions with 79 percent of the population nationwide obtain funds to acquire new voting equipment through local general funds or budgets as a direct appropriation. Additionally, we estimate that jurisdictions with 43 percent of the population nationwide use state financial assistance or cost sharing as a source of funds for new equipment. According to the results from our state survey, states have different levels of involvement in providing funds for the acquisition of voting equipment. Over half (24) of the 46 states that responded to our survey indicated that they do not provide any financial assistance or cost sharing to local jurisdictions for equipment acquisition, while 11 indicated that they cover all acquisition costs. Eight states indicated that their state provides some financial assistance or cost sharing with local jurisdictions for equipment acquisition, while 2 states indicated a different type of involvement in funding the acquisition of voting equipment, such as covering only the costs of acquiring accessible voting equipment.
Availability of HAVA funds: On the basis of our local jurisdiction survey, we estimate that jurisdictions with 10 percent of the population nationwide had HAVA funds remaining to apply toward the acquisition of new voting equipment, with jurisdictions representing 6 percent of the population only having enough HAVA funds to acquire a portion of the equipment needed. Additionally, we estimate that jurisdictions with 42 percent of the population nationwide had no HAVA funds remaining while jurisdictions with 46 percent of the population did not know whether they had any HAVA funds remaining.
Impact of lack of HAVA funds: Among jurisdictions that did not have any HAVA funds remaining or only enough to buy a portion of the equipment needed, jurisdictions with an estimated 36 percent of the population indicated that the lack of HAVA funds had affected their decisions regarding the replacement of voting equipment. Further, jurisdictions with an estimated 57 percent of the population in this subgroup (of jurisdictions that indicated that the lack of HAVA funds affected their replacement decisions) delayed the replacement of voting equipment while jurisdictions with 25 percent of the population in this subgroup were not able to acquire the equipment that would best meet their needs.
Ability to Maintain Equipment and Receive Timely Vendor Support
The ability of local election jurisdictions and states to maintain voting equipment and receive timely vendor support is a factor considered when determining whether to replace equipment, particularly as the equipment ages. Election subject matter experts we spoke with noted the importance of access to replacement parts for existing voting equipment as something jurisdictions and states may consider when determining whether to replace equipment. Without adequate access to replacement parts and technical service, either from vendors or supplied by in-house expertise, it can be difficult for jurisdictions and states to maintain their current equipment at a satisfactory level.
We identified five issue areas related to this factor. Figure 8 shows the importance local jurisdictions and state election officials attributed to these issue areas when determining whether to replace voting equipment. For example, the sufficiency of vendor support and problem resolution was considered “very important” by jurisdictions with 81 percent of the population nationwide and as one of the three “most important” issue areas overall by jurisdictions with 7 percent of the population nationwide. Among the states, this issue area was considered as “very important” by 15 out of the 25 states that indicated having a role in determining whether to replace voting equipment but no state considered it as one of the three “most important” issue areas overall.
Overall Performance and Features of Voting Equipment

The overall performance and features of voting equipment, both the existing equipment and potential replacement equipment, are also a factor considered by local election jurisdictions and states when determining whether to replace voting equipment. For example, jurisdictions and states may consider the age of their current equipment and how well it is performing, as well as how its performance compares to that of new equipment available for acquisition. In addition, according to elections literature we reviewed and election subject matter experts we spoke with, jurisdictions and states may also take into account specific features new voting equipment can provide that might better meet their needs. The desired features may vary from jurisdiction to jurisdiction depending on specific needs and circumstances, but such features may include an enhanced ability to process a high volume of absentee ballots, capability to present ballots in multiple languages, or ease for poll workers to set up and for voters to use.
We identified 11 issue areas related to this factor. Figure 9 shows the importance local jurisdictions and state election officials attributed to these issue areas when determining whether to replace voting equipment. For example, the overall performance of the voting equipment was considered “very important” by jurisdictions with 83 percent of the population nationwide and as one of the three “most important” issue areas overall by jurisdictions with 20 percent of the population nationwide. Among the states, this issue area was considered as “very important” by 18 out of the 25 states that indicated having a role in determining whether to replace voting equipment while 4 out of the 25 states considered it as one of the three “most important” issue areas overall.
Given the potential challenges local election officials have identified with using aging or outdated equipment, in our local election jurisdiction survey we asked jurisdictions when they first used their predominant voting equipment. Based on their responses, we estimate that jurisdictions with over half of the population nationwide used predominant voting equipment in the 2016 general election that was first deployed between 2002 and 2006 (see fig. 10). Jurisdictions with the next largest estimated share of the population (28 percent) used equipment that was first deployed between 2012 and 2016.
Approaches to Replacing Voting Equipment Varied across Selected Jurisdictions
The five local election jurisdictions we selected to include in our review either replaced their voting equipment between 2012 and 2016 or plan to replace their equipment in time for the 2020 general election. We selected these jurisdictions to obtain variation in, to the extent possible, population of jurisdiction, type of voting equipment replaced and selected, and state involvement in selecting and funding voting equipment replacement, among other factors. Table 3 summarizes information related to voting equipment replacement across the five selected jurisdictions.
These jurisdictions illustrate varying approaches that localities have used or are using to replace their voting equipment based on their specific needs, circumstances, and resources. For example,
Los Angeles County, California. The county has a large and diverse electorate and is in the process of self-designing its own voting system, which is expected to consist of ballot marking devices that produce paper ballots to be tallied on central count digital scanners. County officials stated that the current design concept for the new equipment is intended to provide greater flexibility in administering elections, provide a more user-friendly and accessible voting experience, enhance accuracy and auditability, and could potentially lower costs for system upgrades if developed as planned. For example, according to officials, the ballot marking device is intended to provide the ease of use of a touch screen interface, which would incorporate features such as scrolling and tapping that are familiar to voters who use mobile devices, and will include a headset, tactile keypad, and other devices for voters with disabilities. It would also allow the county to have ballots with multiple formats and a large number of races.
The county’s process for developing and deploying its new voting equipment began in 2009 and has five phases—(1) public opinion and stakeholder baseline research, (2) establishment of voting system guiding principles, (3) system design and engineering, (4) manufacturing and certification, and (5) phased implementation. According to officials, the county has taken a user-centered approach to the design of the new voting equipment that prioritizes the specific needs and expectations of the voters. The county is currently in the manufacturing and certification phase and reported that about $19 million had been expended to develop the new voting equipment as of December 31, 2017. County officials told us they plan to retain ownership of the intellectual property rights of the new voting equipment so that the system remains publicly owned and not proprietary like traditional vendor equipment. The county plans to pilot the new equipment in some early voting locations in 2019 and fully roll it out in 2020.
Travis County, Texas. The county began its efforts to design its own voting equipment based in part on findings and recommendations from an election study group it convened in 2009. In 2012, it developed a concept for a DRE with a voter-verified paper audit trail that centered on system security, auditability, and the use of commercial off-the-shelf technology. In September 2017, the county announced that it had decided to no longer pursue building the voting equipment because the proposals it received from vendors and other organizations for developing key components of the equipment were not sufficient to build a complete voting system, among other reasons. According to county officials, the county plans to acquire either DREs or ballot marking devices with precinct count digital scanners from a voting system vendor with the goal that whatever equipment it acquires incorporates some of the key features it had intended for its self-designed equipment. For example, officials stated that the new equipment must produce printed paper records that can be tallied and connected with electronic voting records through an automated process and allow for third party verification of results and better postelection audits. They noted that they are prepared to work with vendors to customize existing equipment to meet the county’s requirements if needed. County officials estimate that the new equipment will cost about $16 million and stated that acquisition will be funded through local bonds. The county issued a request for proposals for the equipment in November 2017 and plans to have it in place for the 2020 election.
Anne Arundel County, Maryland. In 2016, the county replaced its DREs with a system in which voters manually mark paper ballots and insert them into precinct count digital scanners which then count them. Maryland requires the use of uniform voting equipment in polling places statewide and the state and counties each pay 50 percent of the costs of acquiring equipment. In 2007, Maryland enacted a law that prohibited the use of a voting system unless the State Board of Elections (SBE) determined that the system provides a voter-verifiable paper record, thereby requiring the state’s DREs to be replaced. According to Maryland SBE officials, state law specifically required the purchase of precinct count scanners so the board did not consider other types of voting equipment.
The SBE issued a request for proposals for the new voting equipment in July 2014 and four vendors responded. The board formed an evaluation committee to analyze the technical and financial details of the proposals, and according to officials, the committee hosted a public demonstration to collect feedback on the equipment under consideration and worked with the University of Baltimore to perform usability and accessibility testing on the equipment. The SBE decided to lease rather than purchase the equipment for a number of reasons. For example, officials said that leasing provided increased flexibility to update or replace equipment more frequently and had lower upfront costs. According to SBE officials, the current payment to the vendor for leasing the digital scan equipment statewide is approximately $1.1 million per quarter. SBE and Anne Arundel County officials stated that deployment of the new equipment in the 2016 general election went smoothly with no significant challenges. The state contracted with a third party vendor to conduct a postelection audit of the 2016 general election by using independent software to tally all digital ballot images. The audit confirmed the accuracy of the election results. According to SBE officials, the new equipment’s ability to capture and store digital images of the ballots made this type of audit possible. Anne Arundel County officials stated that the ability to conduct such an audit is one of the main benefits of the new equipment.
Lafayette County, Florida. Lafayette County has a small population and, in 2016, replaced its precinct count optical scan equipment with precinct count digital scan equipment. The county formed a consortium with 11 other counties in the state to help acquire its new equipment. According to the county’s Supervisor of Elections, having the consortium approach state officials as a group helped secure HAVA funds to help the counties purchase the voting equipment. In addition, he stated that being a part of the consortium helped the counties negotiate a lower price for their equipment than what they could have obtained individually because they pooled their purchases and acquired a higher volume of machines. According to the Supervisor of Elections, the consortium decided to purchase precinct count digital scanners from the same vendor the counties had used before because county staff were familiar with the vendor and equipment, among other reasons. He stated that the total cost to purchase Lafayette County’s new voting equipment was about $70,000.
The Supervisor of Elections said that the digital scanners have features that were an improvement over the county’s previous optical scan equipment. For example, he told us that the new scanners have more robust security features, such as locking panels, seals, and a requirement for a passcode to access the system. He also noted that the scanners digitally capture and store ballot images. The Supervisor of Elections and the two poll workers we interviewed stated that deployment of the new voting equipment went smoothly and the county did not experience any challenges because the new and previous equipment are both precinct count scanning systems. According to the Supervisor of Elections, a postelection audit that was conducted, in which the county manually tallied ballots from a randomly selected race and precinct, found that the results were accurate.
Beaver County, Utah. Beaver County has a small population and previously used DREs with a voter-verified paper audit trail. In 2014, Beaver County began conducting vote-by-mail elections and replaced its DREs with central count digital scan equipment to support this change. County officials said that, in 2014, they verbally requested proposals for the new equipment from their current vendor and an elections services company that the county had employed in 2012 to provide training, systems testing, and other support for elections. According to the Deputy Clerk, the county requested proposals from these two entities because county officials were familiar with them and were not aware of other vendors that might submit proposals. Officials stated that the county received a proposal from the elections services company, and selected the company because it was the only bid received and the equipment the company sold met the county’s needs and was federally certified. The county reported that the cost to purchase the equipment was about $46,000. Officials said that they are very satisfied with the performance of the new voting equipment. They noted that conducting vote-by-mail elections and using central count scanners allow them to administer elections from one location on Election Day, which requires less time and resources than having to manage multiple polling places. Officials also stated that the new digital scanners are able to count a high volume of ballots in a short period of time. According to officials, the county conducted two postelection audits for the 2016 general election—one required by the state and another that the county initiated. They reported that both audits validated the election results.
See appendix V for additional details about voting equipment replacement in our five selected jurisdictions, including the factors that influenced their decisions to replace voting equipment; selection, acquisition, and implementation of their equipment; and perspectives on the process.
Stakeholders Have Varying Views on How the Voting System Guidelines Affect Equipment Replacement and Development, and the EAC Is Updating the Guidelines with Stakeholder Input
Stakeholders Provided Varying Perspectives on How the Current Voluntary Guidelines and Testing and Certification Processes Affect Replacing and Developing Voting Equipment
On the basis of our survey of state election officials and interviews with officials from selected voting system vendors and subject matter experts—representatives from nongovernmental research and other organizations involved in the field of election administration—we found that these stakeholders have varying perspectives on how the current Voluntary Voting System Guidelines (VVSG 1.0 and VVSG 1.1) and their associated testing and certification processes facilitated or posed challenges to the replacement and development of voting equipment. The states we surveyed and the other selected stakeholders we interviewed primarily had experience with VVSG 1.0. As discussed earlier, the VVSG 1.1 were issued in March 2015, but due to the time it generally takes to implement updates to new guidelines, including developing testing programs, among other things, no systems had been certified under this version of the guidelines as of November 2017. One vendor’s system underwent partial testing using VVSG 1.1 but the vendor withdrew the system before the testing was completed.
Perspectives on How the Voluntary Guidelines Facilitate Replacing and Developing Voting Equipment
States and selected vendors and subject matter experts provided varying perspectives on how aspects of the current voluntary voting system guidelines and their associated testing and certification processes facilitate the replacement and development of voting equipment. Generally, stakeholders indicated that the guidelines and processes provide assurance that new equipment meets certain requirements, provide guidance for equipment developers, provide a model for state standards, and provide cost savings for states that do not have to duplicate federal testing. For example, 15 of the 26 state survey respondents said the guidelines provide assurance that new voting equipment meets baseline requirements related to security, functionality, usability, accessibility, and privacy. One of these 15 state respondents noted that if the EAC certified voting equipment against the federal guidelines, he believes it meets the highest election standards and also meets requirements set by his state. Another of these 15 state respondents noted that voting equipment that has been tested using the federal guidelines and certified by the EAC will have a higher level of reliability than equipment that has not met these guidelines or been certified by the EAC.
Subject matter experts from one nongovernmental organization noted that states that establish their own voting system standards often use the federal guidelines as a base to help develop their standards because the federal guidelines have comprehensive requirements and are well vetted. Experts from another nongovernmental organization said that the guidelines establish a standard for voting equipment features and performance, which may help small jurisdictions that want to acquire new voting equipment but may not have the expertise to independently evaluate the equipment. Further, officials from most of the vendors we interviewed agreed that the federal standards serve as effective baseline requirements. For example, officials from five of the seven vendors we interviewed said that when they are developing voting systems, the federal guidelines help them define the baseline standards that their systems should meet, and five of the nine subject matter experts said the federal guidelines provide baseline requirements.
Further, 4 of the 26 state survey respondents indicated that the current voluntary guidelines help reduce the costs and resources needed for states to test and approve new voting equipment. For example, one of the 4 state respondents reported that states do not have to rely on their own voting system testing laboratories for all aspects of the testing and certification of new voting equipment to meet state requirements because most of the testing and certification relevant to state requirements has already been done by EAC-accredited testing laboratories and the EAC. The official noted that this allows the states to do less testing, which could save them money.
Perspectives on How the Voluntary Guidelines Pose Challenges to Replacing and Developing Voting Equipment
The states we surveyed and selected vendors and subject matter experts we interviewed also reported that aspects of the current voluntary voting system guidelines and their associated testing and certification processes could pose challenges to the replacement and development of voting equipment in a number of ways. Specifically, some stakeholders indicated that aspects of the guidelines and processes could discourage innovation in equipment development, could limit the choices of voting equipment on the market because the testing and certification processes take too long, and could be costly for states and vendors. For example, officials representing three of the seven vendors we interviewed said the current federal guidelines may discourage innovation for new voting equipment because they are too specific or overly prescriptive. Officials from one of these three vendors said the current guidelines require a specific oval size on the ballots, prescribing how tall and wide the oval should be. Instead of such requirements, the officials said they would like the guidelines to be more performance-based and state, for example, that voters should be able to successfully mark a ballot a specified percentage of the time. Further, officials from another vendor said that the current guidelines are generally written for the purpose of testing and certifying end-to-end voting systems rather than system components such as ballot marking devices, which are generally developed by smaller vendors. As a result, according to this vendor, smaller vendors may face challenges getting new technology certified and into the market. EAC officials stated that they recognize that the current guidelines should be more flexible because specificity may limit innovation and they believe the updates to the VVSG 2.0 should help address this issue.
In addition, some stakeholders said they believed that the voluntary guidelines and associated testing and certification processes take too long, and thus limit the choices of voting equipment on the market and make it difficult to make improvements to existing equipment. For example, officials from 8 of the 27 state survey respondents and three subject matter experts said the guidelines and their respective processes limit the number of voting systems that are available for acquisition. Three of the 8 states and three subject matter experts said, in their view, the EAC testing and certification process takes too long. In addition, according to one subject matter expert, if a jurisdiction wants to make changes to its existing voting equipment, such as incorporating new software, it can be a difficult and lengthy process to certify the modified equipment, and in some cases the entire system must be recertified. Also, an official from one vendor said that the federal certification processes are complicated, onerous, and time-consuming and they discourage vendors from making modifications to their voting systems even though the modifications might improve the systems. EAC officials said they have heard from stakeholders that the certification process takes too long but stated that this perception was more accurate in the years immediately following the EAC’s issuance of the VVSG 1.0 in 2005. They said that if voting equipment has been modified and is ready for testing and there are no significant problems encountered during the testing, certifying modifications should take a few weeks to a few months to complete and full system testing and certification of new systems should take about 6 to 9 months.
Further, officials from 4 of the 27 states that responded to our survey said the EAC testing and certification process can be costly. One state election official said that the cost of certification may discourage vendors from developing new systems and pursuing EAC certification for their systems, which could limit their ability to sell or supply their systems to state and local election jurisdictions. In addition, this state election official noted that costly federal certification of voting systems has limited the voting equipment choices for election officials. Further, officials from one vendor said that they submitted a new voting system for EAC testing and certification and spent over $12 million before they learned that there were significant issues with getting their system certified. According to EAC officials, this was an uncommon occurrence that resulted from the vendor submitting a system that needed additional work and was not ready for certification. The vendor decided to withdraw its system from the testing and certification process.
The EAC Is Updating the Voluntary Voting System Guidelines with Stakeholder Input and Plans to Issue the New Version in 2018
Shortly after the adoption of VVSG 1.1 in March 2015, the EAC, in conjunction with NIST and the TGDC, began work to develop the next iteration of the guidelines, VVSG 2.0, and anticipates issuing the new version in late summer 2018. The EAC, NIST, and the TGDC have taken actions to develop VVSG 2.0 that may address some of the issues with the earlier iterations of the guidelines that were raised by stakeholders. For example, they have established goals to guide the VVSG 2.0 development process, established working groups to inform the guidelines, and developed VVSG 2.0 high-level principles and guidelines.
Establishment of Voluntary Voting System Guidelines Development Goals and Working Groups
According to the EAC and NIST, in August 2014, the Future VVSG Working Group, which consisted of officials from state and local election offices, technical experts in such areas as security and disability, and voting system vendors, among others, began work that culminated in the creation of 12 goals to guide the development efforts for the voluntary guidelines. One goal, for example, states that the guidelines’ requirements should be performance based and technology neutral. The goal statement further elaborates that the guidelines should be free from detailed descriptions of any technology, and that the guidelines should be functional in nature so that they can more easily be redefined as technology changes. Another development goal states that the voluntary guidelines and their testing and certification processes should not impose unanticipated cost burdens on organizations. These goals are designed to address some of the issues with the current voluntary guidelines identified by the stakeholders we interviewed as posing challenges to the replacement and development of voting systems, such as discouraging innovation because they are too specific and discouraging vendors and other voting system developers from pursuing EAC certification for their systems because the process is potentially costly.
After the 12 goals for the voluntary guidelines were developed, the EAC and NIST established a new process for developing the next guidelines that is intended to allow for broader and more transparent stakeholder involvement than prior guidelines’ development efforts. This new process brings stakeholders together through a working group structure to develop the guidelines. According to the EAC, the previous process did not fully allow for stakeholder input or effectively leverage stakeholder expertise in developing the guidelines because comments on the guidelines were solicited from the Standards Board and external stakeholders after most of the work had been done.
In 2015, the EAC and NIST established seven working groups to obtain feedback and input from stakeholders early in the voluntary guidelines development process. According to the EAC and NIST, the four constituency and three election cycle working groups were created as a public/private partnership to inform the development of the guidelines and are composed of state and local election officials, representatives from the federal and private sectors, members of standards bodies, EAC committee members, academic researchers, and other interested parties.
The working groups are led by EAC and NIST staff, and have more than 600 participants across the seven groups. EAC and NIST officials stated that they have informed election officials and other stakeholders about opportunities to participate on these working groups to share their ideas. The four constituency working groups represent areas related to human factors (accessibility and usability), cybersecurity, interoperability, and testing and are charged with developing guidance or other deliverables related to these four areas. For example, one objective for the human factors working group is to identify gaps or issues with current accessibility and usability requirements for voting. The election cycle working groups—focused on pre-election, election, and postelection activities—develop process models related to election activities. For example, an objective for the election working group is to identify the necessary functionality of election systems needed to administer early voting and Election Day activities. The work by these seven working groups will help inform the development of the voluntary guidelines’ requirements. Table 4 shows the seven working groups and their respective responsibilities.
Some of the stakeholders we interviewed participate in these working groups. For example, officials from six of the seven voting system vendors we contacted said they have a representative on one or more of the constituency working groups. Generally, these six vendors said the working groups are a positive feature of the voluntary guidelines’ development process. For example, officials from one vendor said they have been encouraged by the amount of collaboration on the working groups, and officials from another vendor said it is beneficial that vendors are part of the working groups because they bring experience and expertise with designing and developing various types of voting systems.
Development of the VVSG 2.0 High-Level Principles and Guidelines
In August 2017, the TGDC adopted high-level principles and supporting guidelines for the VVSG 2.0. These principles and guidelines are intended to provide system design goals and broad descriptions of the functions that make up a voting system, in contrast to the VVSG 1.1 which focused more on device- or system-specific requirements. The VVSG 2.0 will be supplemented by requirements consisting of technical details voting system vendors can use to design devices that meet the new guidelines. The supplemental requirements will also detail test assertions for how the accredited test laboratories will validate that a system complies with the requirements. One of the VVSG 2.0 principles, for example, is that ballots and vote selections should be presented in a clear, understandable way so that they can be marked, verified, and cast by all voters. The corresponding guidelines for this principle focus on ballots being perceivable, operable, and understandable. For example, the guideline for perceivable ballots notes that default voting system settings for displaying ballots should work for the widest range of voters and allow voters to adjust settings and preferences to meet their needs.
Another VVSG 2.0 principle is that the voting system should be designed to support interoperability, including having voting devices that can interface with each other. The corresponding guidelines for this principle include using standard data formats and commercial off-the-shelf devices if they meet applicable requirements. According to NIST officials, one goal of the interoperability working group is to develop guidance that will enable election equipment and interfacing software to interoperate more easily and “speak the same language.” NIST officials stated that this goal is intended to allow vendors to build and certify system components instead of a full voting system. These principles are designed to help address some of the issues reported by stakeholders, such as the impact of prescriptive requirements for ballot designs on vendor innovation and the challenges encountered with component certification under the current voluntary guidelines.
Further, officials from the EAC told us that one key change with the VVSG 2.0 is that the EAC commissioners no longer have to approve changes to the supplemental requirements and test assertions, which will instead be vetted by the EAC’s Board of Advisors and Standards Board. EAC officials noted that this allows for greater flexibility to make improvements to the requirements and testing process, including making changes in response to technological advancements. Additionally, depending on the situation, the new voluntary guidelines are intended to allow for more streamlined testing and certification processes. For example, EAC officials said that under the new guidelines, if there are modifications that have been made to a voting system that has already been certified, the changes can be tested without having the entire voting system go back through the testing and certification process.
Next Steps in Developing the VVSG 2.0
According to EAC officials, the next steps in the VVSG 2.0 development process are to share the high-level principles and guidelines with the EAC’s Board of Advisors and Standards Board for further vetting, provide the public the opportunity to comment on them, and provide them to the EAC commissioners for approval. Specifically, before final adoption of the guidelines, both boards are to review and submit comments and recommendations regarding the guidelines to the commissioners. EAC officials anticipate that the EAC boards will likely review and pass resolutions in support of the principles and guidelines in April 2018. Following the board reviews, there will be a 90-day period for public comment on the VVSG 2.0, as required by HAVA. The EAC hopes that the time it typically takes to respond to public comments will be shorter than for prior voluntary guidelines, due to the extensive feedback and comments received and considered by the working groups during the development phase. EAC officials anticipate that the EAC commissioners will vote on the VVSG 2.0 principles and guidelines in August or September 2018, and the VVSG 2.0 will be issued after they are approved. According to EAC and NIST officials, the working groups have begun developing the supplemental requirements for the new guidelines. They said that the requirements are expected to be drafted by the summer of 2018 and test assertions for most voting systems are expected to be developed by the summer of 2019.
EAC officials noted that it will likely take 12 to 24 months after the EAC commissioners approve the new guidelines before they are ready for use. EAC officials plan to submit to the EAC commissioners a range of recommended dates to consider for implementation. They added that in developing these dates, including when vendors will be required to test new equipment against the updated guidelines, they must consider various factors such as the time voting equipment vendors will need to build their new equipment to VVSG 2.0, and reaccreditation of voting system test laboratories to ensure they can test to VVSG 2.0. Because of the lag between when the guidelines will be issued and when they will be used for testing and certification, EAC officials stated that it is unlikely that systems will be certified in time to be ready for use in the 2020 election. However, these officials noted that they are available to meet with vendors that would like to start developing equipment based on the new guidelines.
Agency and Third- Party Comments
We provided a draft of this report to the EAC, NIST, and election offices in the five local election jurisdictions that we selected and their respective states for review and comment. The EAC, two jurisdictions, and two states provided technical comments, which we incorporated in the report as appropriate. NIST, three jurisdictions, and three states indicated that they had no comments in e-mails received from March 1 through March 23, 2018.
We are sending copies of this report to the EAC, NIST, election offices in the five selected local jurisdictions and their respective states that participated in our research, appropriate congressional committees and members, and other interested parties. In addition, this report is available at no charge on GAO’s website at http://www.gao.gov.
If you or your staff have any questions, please contact Rebecca Gambler at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix VI.
Appendix I: Objectives, Scope, and Methodology
This report addresses the following questions:

1. What types of voting equipment did local election jurisdictions use for the 2016 general election, and what are jurisdiction perspectives on equipment use and performance?

2. What factors are considered when deciding whether to replace voting equipment and what approaches have selected jurisdictions taken to replace their equipment?

3. What are selected stakeholders’ perspectives on how federal voting system guidelines affect the replacement and development of voting equipment, and what actions has the Election Assistance Commission (EAC) taken to update the guidelines?
Objective 1
For our first objective, we conducted a web-based survey of officials from a stratified random sample of 800 local election jurisdictions nationwide to obtain information from the jurisdictions on the voting equipment used during the 2016 general election and perspectives on equipment use and performance. In total, we received 564 completed questionnaires for a weighted response rate of 68 percent. We surveyed the officials about the types of voting equipment they used, various characteristics of the equipment used, their perspectives on the benefits and challenges they experienced while using the equipment, and how satisfied they were with its performance during the election.
Overall, there are 10,340 local election jurisdictions nationwide that are responsible for conducting elections. States can be divided into two groups according to how they delegate election responsibilities to the local election jurisdictions. One group is composed of 41 states that delegate election responsibilities primarily to counties. We also included the District of Columbia in this group of states. However, even within this group there are some exceptions to how election responsibilities are delegated. For example, there are no counties in Alaska, so the state groups all of its Boroughs and Census Areas into four election regions; and 6 states—Illinois, Maryland, Missouri, Nevada, New York, and Virginia—delegate responsibilities to some cities independently from counties. The group of 41 states and the District of Columbia contains about one-fourth of the local election jurisdictions nationwide. The other group is composed of 9 states that delegate election responsibilities to subcounty governmental units, known by the U.S. Census Bureau as Minor Civil Divisions (MCD). This group of states contains about three-fourths of the local election jurisdictions nationwide. The categorization of the 50 states and the District of Columbia by how election responsibilities are organized is as follows (states in bold delegate election responsibilities to some cities independently from counties):
County-level states: Alabama, Alaska (four election regions), Arizona, Arkansas, California, Colorado, Delaware, the District of Columbia, Florida, Georgia, Hawaii, Idaho, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Maryland, Mississippi, Missouri, Montana, Nebraska, Nevada, New Jersey, New Mexico, New York, North Carolina, North Dakota, Ohio, Oklahoma, Oregon, Pennsylvania, South Carolina, South Dakota, Tennessee, Texas, Utah, Virginia, Washington, West Virginia, and Wyoming
MCD-level states: Connecticut, Maine, Massachusetts, Michigan, Minnesota, New Hampshire, Rhode Island, Vermont, and Wisconsin

While 27 percent of election jurisdictions nationwide are in states that delegate election responsibilities primarily to counties, according to the 2010 Census, 89 percent of the U.S. population lived in these states. The U.S. population distribution between the two state groups is shown in table 5.
The sampling unit for our survey was the geographically distinct local election jurisdiction at the county, city, or MCD level of local government (or, in Alaska, the election region). We constructed our nationwide sample frame of all local election jurisdictions using 2010 decennial Census data and information on local jurisdictions from state election office websites. Census population data were available for all counties, county equivalents, and MCDs.
To obtain a representative sample that included a mix of both rural and non-rural jurisdictions, we used a two-level stratified sampling method in which the sample units, or jurisdictions, were broken out into rural and non-rural strata. To do this, we used the U.S. Department of Agriculture’s Economic Research Service’s Rural-Urban Continuum Code (RUCC) system, which classifies counties into a nine-category continuum based on their characteristics and location relative to metropolitan areas. The RUCC continuum coding scheme is shown in table 6.
To assign a continuum code to each local election jurisdiction, we matched the RUCC county code to each county in the population frame. Cities that are independent local election jurisdictions and spread geographically across one or more counties received the lowest numbered code among the counties which contain them (i.e., most urban). For independent cities that administer their own elections but are contained geographically within a single county, the city received the code assigned to the county. Where necessary, the parent state’s 2010 decennial Census report was checked to make sure all counties that included part of the independent city were identified. MCDs in New England and the Midwest received the code of the parent county that contained them. For our sampling purposes, the rural stratum was defined as all local election jurisdictions with an RUCC code of 7, 8, or 9. The non-rural stratum was defined as all local election jurisdictions with a code of 1, 2, 3, 4, 5, or 6. Of the 10,340 local election jurisdictions nationwide, 70 percent were classified as non-rural while 30 percent were classified as rural.
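As an illustration of the assignment rules just described, the sketch below shows one way the RUCC code and stratum for each jurisdiction could be derived in code. This is a hypothetical example, not GAO's actual procedure; the function names and data fields are assumptions made for the illustration.

```python
# Hypothetical sketch of the RUCC assignment rules described above;
# field and function names are assumptions, not GAO's actual code.

RURAL_CODES = {7, 8, 9}  # RUCC codes that define the rural stratum

def assign_rucc(jurisdiction, county_rucc):
    """Return the RUCC code for a jurisdiction.

    jurisdiction: dict with 'type' ('county', 'city', or 'mcd') and
                  'counties' (FIPS codes of the counties containing it).
    county_rucc:  dict mapping county FIPS code to RUCC code (1-9).
    """
    codes = [county_rucc[c] for c in jurisdiction["counties"]]
    if jurisdiction["type"] == "city" and len(codes) > 1:
        # An independent city spread across counties takes the lowest
        # (most urban) code among its containing counties.
        return min(codes)
    # Counties, MCDs, and single-county cities take the parent county's code.
    return codes[0]

def stratum(rucc_code):
    """Map an RUCC code to the sampling stratum used in the design."""
    return "rural" if rucc_code in RURAL_CODES else "non-rural"
```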
We selected a two-level stratified sample of 800 local election jurisdictions. Using the RUCC codes, we allocated 600 sampling units, or jurisdictions, to the non-rural stratum and 200 to the rural stratum. To obtain a sample that also reflected the population distribution across jurisdictions nationwide, we used the population of the local election jurisdiction as the measure of unit size and selected the sample units within each stratum with probability proportionate to population of the local election jurisdiction, without replacement. We used jurisdiction population size, rather than the number of eligible or registered voters, because these Census data were readily available for all counties and MCDs nationwide. Because the sample was selected with probability proportionate to population size, any jurisdiction (county or MCD) with more than about 225,000 people was selected with certainty. Table 7 shows the breakout of jurisdictions by population size, the total population within each size grouping, and the number of jurisdictions sampled.
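To make the selection procedure concrete, the following sketch outlines systematic sampling with probability proportionate to size within a single stratum, including the certainty selection of units whose population meets or exceeds the sampling interval. It is a simplified illustration under assumed data structures, not the sampling software GAO used.

```python
# Simplified, hypothetical sketch of systematic PPS sampling within one
# stratum; assumes each unit is a dict with a 'pop' field and that n
# exceeds the number of certainty units.
import random

def pps_systematic_sample(units, n):
    """Select n units with probability proportionate to population,
    without replacement. Returns (certainty_units, sampled_units)."""
    certainty, remaining = [], list(units)
    while True:
        interval = sum(u["pop"] for u in remaining) / (n - len(certainty))
        big = [u for u in remaining if u["pop"] >= interval]
        if not big:
            break
        certainty += big  # too large for the interval: selected with certainty
        remaining = [u for u in remaining if u["pop"] < interval]
    # Systematic selection: a random start, then equally spaced points.
    start = random.uniform(0, interval)
    points = [start + k * interval for k in range(n - len(certainty))]
    sampled, cum, idx = [], 0.0, 0
    for u in remaining:
        cum += u["pop"]
        while idx < len(points) and points[idx] < cum:
            sampled.append(u)  # pop < interval, so at most one point per unit
            idx += 1
    return certainty, sampled
```

In a design like this, a noncertainty unit's base sampling weight is the sampling interval divided by its population, while certainty units carry a weight of 1, which is consistent with any jurisdiction above roughly 225,000 people being selected with certainty.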
After selecting the units to be included in our survey sample, we obtained contact information for the chief election official within the jurisdictions selected. To do this, we first collected contact information for local election jurisdictions from state election office websites and other publicly available sources. We then called the jurisdiction offices directly to confirm the accuracy of the information and the appropriate official and e-mail address to which the survey URL and the respondent’s login information for the questionnaire should be sent. We launched our web-based local election jurisdiction survey on March 27, 2017, and made it available to respondents to complete online through July 14, 2017. Log-in information to the survey was e-mailed to the chief election official of each sampled jurisdiction. Between April 4, 2017, and July 10, 2017, we conducted follow-up with nonrespondents by phone and e-mail. During this follow-up, we learned that some MCDs in Minnesota contract with their respective counties to carry out election administration responsibilities, including those concerning the use of voting equipment. In these cases, we reassigned and sent the questionnaire for the particular MCD to the appropriate county election official for completion. Finally, we adjusted the sampling weights to compensate for nonresponse using weighting classes within each stratum that were based upon population size of the jurisdictions.
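The weight adjustment described in the last sentence above can be sketched as follows, again with hypothetical field names: within each weighting class, respondent weights are inflated so the class still represents the full weight of every unit sampled from it.

```python
# Hypothetical sketch of a weighting-class nonresponse adjustment;
# field names ('stratum', 'size_class', 'base_weight', 'responded')
# are assumptions for the illustration.
from collections import defaultdict

def adjust_for_nonresponse(sample):
    """Return respondents with nonresponse-adjusted weights."""
    sampled_wt = defaultdict(float)  # total base weight per class
    resp_wt = defaultdict(float)     # respondent base weight per class
    for u in sample:
        key = (u["stratum"], u["size_class"])
        sampled_wt[key] += u["base_weight"]
        if u["responded"]:
            resp_wt[key] += u["base_weight"]
    adjusted = []
    for u in sample:
        if u["responded"]:
            key = (u["stratum"], u["size_class"])
            # Inflate each respondent's weight so the class total is preserved.
            factor = sampled_wt[key] / resp_wt[key]
            adjusted.append({**u, "weight": u["base_weight"] * factor})
    return adjusted
```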
All sample surveys are subject to sampling error—that is, the extent to which the survey results differ from what would have been obtained if the whole population had been observed. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. As each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals based on our web-based survey includes the true values in the study population.
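As a simplified illustration of how a population-share estimate and an accompanying 95 percent confidence interval might be computed from the adjusted weights, consider the sketch below. It uses a crude normal-approximation variance that ignores the stratification and PPS design effects reflected in GAO's actual intervals, so it is illustrative only.

```python
# Illustrative only: point estimate and rough 95 percent confidence
# interval for the share of the population within jurisdictions having
# a given characteristic. Real design-based variance estimation would
# account for strata and unequal selection probabilities.
import math

def weighted_population_share(respondents, has_characteristic):
    """respondents: dicts with 'weight' (population represented);
    has_characteristic: predicate applied to each respondent."""
    total = sum(r["weight"] for r in respondents)
    hits = sum(r["weight"] for r in respondents if has_characteristic(r))
    p = hits / total
    se = math.sqrt(p * (1 - p) / len(respondents))  # naive approximation
    low, high = max(0.0, p - 1.96 * se), min(1.0, p + 1.96 * se)
    return p, (low, high)
```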
In addition to the reported sampling errors, the practical difficulties of conducting any survey may introduce other types of errors, commonly referred to as nonsampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond can introduce unwanted variability into the survey results. We took numerous steps in questionnaire development, data collection, and the editing and analysis of the survey data to minimize nonsampling errors.

For example, to inform the development of our questionnaire, we reviewed existing reports and studies about voting equipment and elections, such as those by various national public policy research organizations and professional associations of state and local officials involved in election administration, as well as previous GAO surveys and work related to this issue area. In addition, we interviewed election subject matter experts and representatives from organizations in the field of election administration and voting equipment to obtain their views and perspectives on potential issues and subject areas to consider covering in our questionnaire. We also pretested the draft questionnaire by telephone with officials in 4 local election jurisdictions (3 counties and 1 MCD) of various sizes in 4 states and had the draft questionnaire reviewed by two election experts. We used these pretests and reviews to further refine our questions, develop new questions, clarify any ambiguous portions of the questionnaire, and identify any potentially biased questions, and made revisions, as necessary.

Further, during our analysis of the responses, we found that due to a higher level of nonresponse by very small jurisdictions of 2,500 persons or less, some national-level estimates that included responses from jurisdictions of all sizes had wider than desired confidence intervals. To improve the precision of these national-level estimates, we subsequently excluded the very small jurisdictions of 2,500 persons or less from our analysis. Computer analyses were conducted to identify any inconsistencies in response patterns or other indications of questionnaire response errors. All computer syntax was peer reviewed and verified by separate programmers to ensure that the syntax had been written and executed correctly.
Unless noted otherwise, the point estimates we report are national-level point estimates representing the experiences, views, and opinions of all local election jurisdictions nationwide with populations greater than 2,500. We also provide some point estimates for jurisdiction population subgroups, such as large jurisdictions (greater than 100,000 persons), medium jurisdictions (25,001 to 100,000 persons), and small jurisdictions (2,501 to 25,000 persons), as well as jurisdictions that used a particular type of voting equipment, in cases where statistically significant differences exist between the subgroups that may be of interest. The jurisdictions we surveyed were selected with probability proportionate to population size, so rather than expressing the point estimates in terms of the percentage of jurisdictions nationwide that had a specified characteristic, we express the point estimates for the survey responses in terms of the percentage of the population nationwide that resides within jurisdictions that had a specified characteristic. Similarly, in instances where we report point estimates for jurisdiction subgroups, we express the point estimate in terms of the percentage of the population that resides within jurisdictions of that respective subgroup that had a specified characteristic.
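To make this population-share framing concrete, the sketch below computes a hypothetical estimate of the form used throughout this appendix: the percentage of the represented population residing in jurisdictions with a given characteristic. The populations, weights, and characteristic are invented for illustration.

```python
# Hypothetical respondent jurisdictions: (population, adjusted weight,
# whether the jurisdiction has the characteristic of interest).
respondents = [
    (150_000, 12.0, True),
    (40_000,  55.0, False),
    (8_000,  140.0, True),
    (300_000,  6.0, False),
]

# Weighted population represented, and the portion of it residing in
# jurisdictions that have the characteristic.
represented = sum(pop * w for pop, w, _ in respondents)
with_char = sum(pop * w for pop, w, has in respondents if has)
print(f"{100 * with_char / represented:.1f}% of the population resides in "
      "jurisdictions with the characteristic")   # -> 42.2%
```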
Objective 2
For our second objective, we used our local election jurisdiction survey as described above to obtain information from jurisdictions about the factors they consider when determining whether to replace their voting equipment. In addition to the local election jurisdiction survey, we also conducted a web-based survey of the state-level election offices in the 50 states and the District of Columbia about issues pertaining to the states' role in selecting and acquiring voting equipment, including the factors considered when determining whether to replace voting equipment. In total, we obtained 46 responses (a 90 percent response rate). We took the same steps to develop the state questionnaire as we did in developing the local election jurisdiction questionnaire described above. We conducted pretests of our draft state questionnaire by telephone with election officials of 4 states with varying election system characteristics, such as type of voting equipment used, population size, use of federal voting equipment certification processes, and age of equipment, among others. We also had the draft questionnaire reviewed by two election experts. We used these pretests and reviews to help further refine our questions, develop new questions, clarify any ambiguous portions of the survey, and identify any potentially biased questions, and made revisions, as necessary.
Prior to fielding our state survey, we contacted the secretaries of state or other responsible state-level officials, as well as officials from the District of Columbia, to confirm the contact information for the director of elections or comparable official for their respective state. We launched our web-based state survey on April 6, 2017, and made it available to respondents to complete online through May 19, 2017. Log-in information to the survey was e-mailed to directors of elections or comparable officials. Between April 12, 2017, and May 16, 2017, we conducted follow-up with nonrespondents by phone and e-mail. The total number of responses to individual questions may be fewer than 46, depending upon how many respondents were eligible or chose to respond to a particular question. For example, survey respondents who indicated that their state did not have a role in determining whether to replace voting equipment were directed to skip all subsequent questions related to the factors considered when determining whether to replace equipment.
Because this survey was not a sample survey, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce nonsampling errors. For example, differences in how a particular question is interpreted, the sources of information available to respondents, or the types of people who do not respond can introduce unwanted variability into the survey results. We included steps in both the data collection and data analysis stages for the purpose of minimizing such nonsampling errors. For example, we examined the survey results and performed computer analyses to identify inconsistencies and other indications of error. Where these occurred, survey respondents were contacted to provide clarification and the response was modified to reflect the revised information. A second, independent analyst checked the accuracy of all computer analyses. The scope of this work did not include verifying states’ survey responses with local election officials.
For additional perspectives and context on the factors considered by jurisdictions and states when replacing voting equipment, we also used our reviews of existing reports and studies about voting equipment and elections and interviews with election subject matter experts, including representatives from nongovernmental research and other organizations involved in the field of election administration and voting equipment. For our review of existing reports and studies, we examined literature covering the period from 2005 through 2017, including general news, trade and industry articles, association and nonprofit publications, and government reports related to voting system technology, specifically on the replacement and development of voting systems and voting system standards or guidelines. For our interviews, we identified and selected nine subject matter experts based on our review of reports and studies on voting equipment, their expertise and work in this area, and recommendations from these and other researchers. These subject matter experts represented the following organizations: (1) Brennan Center for Justice, (2) National Conference of State Legislatures, (3) National Association of Secretaries of State, (4) National Association of Counties, (5) National Association of State Election Directors, (6) Verified Voting, (7) Kennesaw State University Center for Election Systems, (8) Center for Election Innovation and Research, and (9) Election Data Services, Inc. The information we obtained from these experts cannot be generalized; however, these experts provided additional perspectives and information on the factors considered by jurisdictions and states when replacing voting equipment.
In addition, we interviewed election officials from five local jurisdictions—Los Angeles County, California; Travis County, Texas; Anne Arundel County, Maryland; Lafayette County, Florida; and Beaver County, Utah—that replaced their voting equipment between 2012 and 2016 or plan to replace their equipment in time for the 2020 general election to learn about the approaches and practices they used and obtain their perspectives on the replacement process. We selected these jurisdictions to reflect variation in, to the extent possible, population of jurisdiction, type of voting equipment replaced and selected, state involvement in selecting and funding voting equipment, and particular practices used to replace equipment (e.g., self-designing equipment, leasing equipment), among other factors. For each jurisdiction, we interviewed—on site or by phone—local election officials, state election officials in the jurisdiction's state, and, if applicable, individuals who have served as poll workers at the jurisdiction's polling locations. While these five jurisdictions are not representative of all local election jurisdictions nationwide that replaced or plan to replace their voting equipment, they provide examples of various approaches for replacing voting equipment and perspectives on key issues with replacing equipment. We corroborated the information we obtained through these interviews by reviewing relevant state statutes and documentation that these jurisdictions provided to us, such as postelection reports, voting system studies, expenditure summaries, and solicitations for vendor proposals to provide voting equipment and services.
Objective 3
To address objective 3, we used responses to our survey of state election officials and interviews with seven selected voting system vendors, the nine selected subject matter experts mentioned above, and officials from the EAC and National Institute of Standards and Technology (NIST) to obtain perspectives on how federal voting system guidelines and their associated testing and certification processes affect the replacement and development of voting equipment. We obtained perspectives on the most recent federal voluntary voting system guidelines (Voluntary Voting System Guidelines, versions 1.0 and 1.1) because they are currently being used to federally test and certify voting systems. We selected the seven voting system vendors based on the prevalence of jurisdictions’ use of their equipment, and to obtain variation in the type of voting system manufactured, such as optical scanners and direct recording electronic voting equipment, and whether systems were federally certified, under test to be certified, or not certified. We also wanted to include a company that plans to enter the voting system market and potentially submit its product for federal certification. Based on these criteria, we selected the following voting equipment vendors—Dominion Voting Systems, DFM Associates, Election Systems and Software, Everyone Counts, Hart InterCivic, Open Source Election Technology Institute, and Unisyn Voting Solutions.
To determine the actions taken or planned by the EAC to update the federal voluntary voting system guidelines, we reviewed EAC and NIST documents and interviewed officials from the EAC and NIST about these actions. We also interviewed the seven selected voting system vendors about their involvement, if any, in updating the guidelines and their perspectives on these actions.
The perspectives of the seven voting system vendors and nine subject matter experts are not generalizable but provide examples of views on the federal guidelines and their associated testing and certification processes from a range of stakeholders.
We conducted this performance audit from June 2016 to April 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Categories of State Requirements for Federal Certification and Testing of Voting Systems
We reviewed state statutes and regulations as of December 2017 regarding the testing and certification of voting systems to describe the extent to which state laws and regulations reference federal voting system certification or testing standards and the extent to which states require the use of these standards. As shown in table 8 below, we grouped the state laws into three categories for the purposes of this report: (1) requires full federal certification; (2) requires testing by a federally accredited laboratory and/or testing to federal voting system standards; and (3) no federal requirements. Category 2 includes states that use some aspect of the federal testing and certification program but do not require full certification. A number of states in this category require both testing by a federally accredited laboratory and testing to federal standards, but we included in this category states that had either requirement in state law or regulation. Category 3 includes some states that utilize the federal certification or testing standards to some extent but that do not require certification or testing to meet federal standards by law or regulation. We then sent our categorization to state officials in the 50 states and the District of Columbia and incorporated changes that we received from those officials.
Appendix III: Results of GAO’s Survey of Local Election Jurisdictions on Voting Equipment
To determine the types of voting equipment local election jurisdictions used for the 2016 general election, jurisdiction perspectives on equipment use and performance, and the factors jurisdictions consider when deciding whether to replace voting equipment, we conducted a web-based survey of officials from a stratified random sample of 800 local election jurisdictions nationwide. In total, we received 564 completed questionnaires for a weighted response rate of 68 percent. The questions we asked in our survey are shown below. Our survey was composed of closed- and open-ended questions. In this appendix, we include all survey questions and results of responses to the closed-ended questions; we do not provide information on responses provided to open-ended questions.
The tables below represent the estimated percentages of the jurisdictions' responses to the closed-ended questions. The estimates we report are rounded to the nearest percentage point and are national-level point estimates representing the experiences, views, and opinions of all local election jurisdictions nationwide with populations greater than 2,500. Because our estimates are from a generalizable sample, we express our confidence in the precision of our particular estimates as 95 percent confidence intervals, which are also provided in the tables. As the jurisdictions we surveyed were selected with probability proportionate to population size, rather than expressing the point estimates in terms of the percentage of jurisdictions nationwide that had a specified characteristic, we express the point estimates for the survey responses in terms of the percentage of the population nationwide that resides within jurisdictions that had a specified characteristic. For a more detailed discussion of our survey methodology, see appendix I.
Survey Contact
Question 1 (open-ended question): What is the name, title, telephone number, and e-mail address of the primary person completing this questionnaire so that we may contact someone if we need to clarify any responses?
Use of Commercial Off-the-Shelf (COTS) Components
The Election Assistance Commission's (EAC) Voluntary Voting System Guidelines, Version 1.1, defines commercial off-the-shelf (COTS) products as software, firmware, devices, or components that are used in the United States by many different people or organizations for many different applications other than certified voting systems and are incorporated into the voting system with no manufacturer- or application-specific modification. Examples of COTS components include hardware that can be purchased commercially (e.g., tablet devices, scanners, printers, memory cards or chips, etc.) and integrated as part of voting equipment. The next series of questions asks about your jurisdiction's integration of COTS components into voting equipment that was acquired from a vendor or self-designed by your jurisdiction. For the purpose of questions 30-36 (the next 7 questions), the term "voting equipment" refers only to the equipment your jurisdiction used to cast and count votes.
Additional Comments
Question 58 (open-ended question): If you have any additional comments concerning any of the topics covered in this questionnaire, please use the space below.
Appendix IV: Results of GAO’s Survey of States on Voting Equipment
To obtain information on the types of voting equipment used in the 2016 general election and the factors states consider when deciding whether to replace voting equipment, we conducted a web-based survey of state-level election offices in the 50 states and the District of Columbia. The questions we asked in our survey of state election offices are shown below. Our survey was composed of closed- and open-ended questions. In this appendix, we include all survey questions and results of responses to the closed-ended questions; we do not provide information on responses provided to open-ended questions that required manually entered text responses. The tables below represent the frequencies of state responses to the questions. We received surveys from 46 states (a 90 percent response rate), while 5 states did not respond. However, the total number of responses to individual questions may be fewer than 46, depending upon how many states were eligible or chose to respond to a particular question. For a more detailed discussion of our survey methodology, see appendix I.
Survey Contact
Question 1 (open-ended question): What is the name, title, telephone number, and e-mail address of the primary person completing this questionnaire so that we may contact someone if we need to clarify any responses?
Additional Comments
Question 46 (open-ended question): If you have any additional comments concerning any of the topics covered in this questionnaire, please use the space below.
Appendix V: Approaches to Voting Equipment Replacement in Selected Local Election Jurisdictions
The five local election jurisdictions we selected to include in our review—Los Angeles County, California; Travis County, Texas; Anne Arundel County, Maryland; Lafayette County, Florida; and Beaver County, Utah—used varying approaches in replacing their voting equipment. Election officials in these jurisdictions and in their respective state election offices provided a range of perspectives on their experiences and the replacement process.
Los Angeles County, California
Los Angeles County is the most populous local election jurisdiction in the nation. It currently uses hand-marked paper ballots that are tallied using central count optical scan equipment, which has been in place since 2003. From 1968 until 2003, the county used these same ballots with its punch card voting system. The county is in the process of designing its own voting system, which is expected to consist of electronic ballot marking devices (BMDs) that produce paper ballots to be tallied on central count digital scanners, and plans to fully implement it in 2020.
Key Factors That Influenced the County’s Decision to Replace Its Voting Equipment
According to county officials, the overall performance and features of the county’s voting equipment and the need for the equipment to meet potential state and local requirements were among the key factors that influenced the county’s decision to begin the process of replacing its optical scan system. County election officials stated that while the county’s current voting equipment is reliable, accurate, and familiar to voters, the design and the age of the equipment do not offer the technical and functional flexibility necessary to continue to accommodate potential state regulatory changes and the growing and increasingly diverse county electorate. For example, officials stated that the current equipment may not be able to effectively accommodate state mandates that may require changes to ballot formats or length. Specifically, officials said that state legislation enacted in 2015 requires many cities within Los Angeles County to consolidate their elections with the county’s by 2022, and as a result, the number of races and measures on the ballot may exceed the 12-page capacity that the current equipment can accommodate. They also noted that the technical limitations of the equipment present challenges to providing voters with greater voting options, such as early voting or the use of vote centers on Election Day, and features that enhance accessibility and ease of use.
Planned New Voting Equipment and In-Person Voting Process
The county has developed a design concept and specifications for its new voting equipment and is in the process of soliciting and selecting vendors to manufacture it. It has acquired several functional prototypes of the current design for the new equipment and has outlined the planned in-person voting process using this equipment, as shown in figure 11. According to county officials, the equipment specifications and in-person voting process have not been finalized and continue to be refined.
County officials stated that the current design concept for the new equipment is intended to provide greater flexibility in administering elections, provide a more user-friendly and accessible voting experience, enhance accuracy and auditability, and could potentially lower costs for system upgrades if developed as planned:
Greater flexibility for administering elections. According to county election officials, the new equipment is designed to provide more flexibility for administering elections and to respond to changing legislative provisions on conducting elections. For example, the California Voter’s Choice Act, which was enacted in September 2016, generally authorizes Los Angeles County to conduct vote center elections beginning in 2020 if certain conditions are met. Officials stated that the proposed new equipment is expected to facilitate the use of vote centers because it would have the capability to electronically retrieve a voter’s ballot regardless of the precinct in which the voter is registered. They also noted that the BMD would allow the county to have ballots with multiple formats and a large number of races.
A more user-friendly and accessible voting experience. County election officials stated that the BMD is intended to provide the ease of use of a touch screen interface, which would incorporate features such as scrolling and tapping that are familiar to voters who use mobile devices. The BMD would also allow voters to select from English or the 11 other languages the county plans to support and is designed to include accessibility devices, such as a headset and tactile keypad for voters with vision impairments and other disabilities. Voters would be able to make their selections and cast their paper ballot without having to handle the ballot. Officials stated that these features are expected to allow voters with special needs to use the same equipment as all other voters and cast their votes independently and privately. The county’s proposed design also includes an interactive sample ballot which voters can access from their computers or mobile devices to pre-mark their vote selections, convert to a Quick Response (QR) code, and then scan into the voting equipment to populate their ballots. Officials stated that this feature may help reduce lines by decreasing the time it takes for voters to mark their ballots once they reach the BMD.
Enhance accuracy and auditability. The new voting equipment is designed to record vote selections on paper in human readable text. County officials stated that this is expected to more clearly capture voter intent than manually marked ballots, reduce the time and resources needed by county staff to interpret voters’ intent, and increase the accuracy of election results and public trust in the voting process. Officials stated that the new equipment is also expected to improve the county’s auditing capabilities. For example, the digital scanner is designed to allow the county to efficiently audit the results of individual races and measures, including conducting risk-limiting audits in which a specified number of ballots cast for a particular race are reviewed to confirm the election result for that race. According to officials, the county’s current equipment tallies ballots by precinct and does not keep an electronic record of the specific votes cast on individual ballots. As such, it provides the capability of auditing the results by precinct but not individual races at the ballot level.
Easier and less costly upgrades. According to county officials, the design of the voting equipment is intended to be modular so that key components can be replaced individually. Officials stated that this is intended to allow the county to more easily update equipment and incorporate technological advances because it will be able to swap out components if more affordable, better technology becomes available on the market. Officials said that the cost of replacing equipment parts is expected to be lower than with traditional voting systems.
Process for Developing the New Voting Equipment
Los Angeles County’s Voting Systems Assessment Project (VSAP) was established by the Registrar-Recorder/County Clerk in 2009 to help guide the development and acquisition of the county’s new voting equipment. According to county election officials, the VSAP has taken a user-centered approach to the design of the new voting equipment that prioritizes the specific needs and expectations of the voters and incorporates the requirements of county election administrators. Officials also stated that they sought to have a transparent design process that included voter input and participation to help promote public confidence in the new voting equipment. The project has five phases—(1) public opinion and stakeholder baseline research, (2) establishment of voting system guiding principles, (3) system design and engineering, (4) manufacturing and certification, and (5) phased implementation. The county is currently in the manufacturing and certification phase. Officials reported that about $19 million has been expended to develop the new voting equipment as of December 31, 2017. Officials also stated that after the new system is certified, an additional $49 million in state funds from the Voting Modernization Bond Act of 2002 will be available to the county. Table 108 describes the VSAP phases, their associated expenditures and funding sources, and examples of key actions taken or planned in each phase.
County officials told us they plan to retain ownership of the intellectual property rights of the new voting equipment so that the system remains publicly owned and not proprietary like traditional vendor equipment. The county also plans to use an open source technology framework wherein the source code for the system software is available for review and use by other election jurisdictions and entities by license. According to county election officials, this will allow other jurisdictions to, for example, have similar systems manufactured for their use. Officials stated that having the county own the system design on behalf of the public and using an open source software model are expected to provide greater flexibility for any jurisdictions using the software to cost-effectively make modifications to the equipment and adapt it to their varying needs and requirements. For example, jurisdictions would no longer be limited to relying on a single manufacturer if they would like to make an enhancement to the equipment or replace parts.
Officials noted that there is currently no licensing model or institutional framework in use for a publicly owned elections system. However, they stated that open source technology solutions in other industries have been successfully implemented and administered, and the county’s new system software could potentially be licensed and administered in a similar manner. In addition, county officials stated that they have outlined a clear business plan in the Request for Proposal (RFP) and during various information sessions with vendors which officials believe will help incentivize them to participate in building the system without potentially owning the equipment or its intellectual property rights. Specifically, officials noted that vendors would primarily receive revenue from the services they would provide, such as building the equipment and software platform and providing ongoing maintenance and support, rather than from selling the equipment itself.
County officials stated that implementing the new voting equipment and moving to vote center elections in 2020 are changes to administering elections for the county that will require a substantial educational and informational effort. Officials noted that they have involved numerous stakeholders throughout the VSAP process to help effectively prepare for these changes and plan to allocate resources to educate voters and train poll workers. Some of these efforts are already underway. For example, the county has posted information and videos on the planned new voting equipment and process on the VSAP website and has been using the BMD prototype for public demonstrations and internal training on the new voting process.
Travis County, Texas
Travis County currently uses direct recording electronic (DRE) equipment without a voter-verified paper audit trail (VVPAT), which has been in place since 2001. The county also has conducted vote center elections since 2011. Starting in 2009, the county took steps to design and build its own equipment, including developing a concept for a DRE with a VVPAT that centered on system security and auditability. In September 2017, the county decided to no longer pursue building the voting equipment and plans to purchase equipment from a vendor. The county plans to have the new equipment in place for the 2020 election.
Key Factors That Influenced the County’s Decision to Replace Its Voting Equipment
According to county officials, the overall performance and features of the county’s voting equipment was the primary reason for deciding to begin the process of replacing its DREs. In 2009, the Travis County Clerk convened an Election Study Group to assess the county’s current equipment and make recommendations for future equipment. This group was composed of 45 members representing election officials and workers, advocacy organizations, voters with disabilities, computer security experts, academics, and other segments of the community. According to the report that the group issued, most members expressed confidence in the way Travis County conducted elections and in the accuracy of its current equipment. However, they also expressed concerns over the equipment’s age and the lack of a paper trail, which they said decreased voter trust in the system and increased the risk of election equipment tampering. The group noted that the Travis County Clerk’s Office’s use of safeguards and security and testing procedures beyond those required by law helped minimize the risk of tampering. The report recommended that the county move toward using equipment that offers an electronic count and paper record as soon as an alternative that met the county’s requirements became available.
Selection and Acquisition of New Voting Equipment
Development of Self-Designed Voting Equipment
The Election Study Group outlined 19 key requirements that Travis County’s new equipment should meet. The requirements included, for example, producing a paper voting record that can be verified by the voter and be used to independently, transparently, and efficiently reconcile an electronic tally in an audit or recount; allowing voters with special needs to vote using the same equipment as other voters; enabling early voting and the use of vote centers; and having reasonable purchase, operational, and system upgrade costs. The group found that no equipment on the market in 2009 met the needs of the county and, as a result, the county began exploring options to design its own equipment. Officials stated that this effort was also intended to provide an alternative to the current vendor model that could reduce maintenance costs and annual licensing fees that are incurred with proprietary systems.
In 2012, the county Clerk convened a group of election administrators, usability experts, and academic experts in computer science and statistics, and through a series of discussion sessions, developed the concept for the county’s new system, which they named STAR (Secure, Transparent, Auditable, and Reliable) Vote. STAR-Vote was designed to be centered around a DRE that produces verifiable and auditable paper records. At the polling place, voters would make their selections on a DRE device with a commercial off-the-shelf (COTS) tablet, which would also be equipped with an auditory interface for visually impaired voters and other features to assist individuals with special needs. The voters’ selections would be encrypted and stored on the internally networked DRE devices, and voters would also receive a printed paper record with their choices. After reviewing the paper record and confirming their selections, voters would feed the paper record into a ballot box scanner to cast their vote. Once the polls closed, the devices storing the votes would be transported to receiving stations, where voting data are transmitted for electronic tabulation. The paper records would be available for audit or recount purposes.
In addition, county officials stated that the equipment’s proposed encryption technology was designed to potentially allow for the following features without revealing any individual’s vote:
Voters would receive a receipt that was attached to their paper records at the polling place and could go online after Election Day and use a code on the receipt to verify that their ballots had been cast and counted.
Third parties, such as the League of Women Voters or political parties, could access encrypted voting data to verify that the results the county had reported matched vote totals they had independently derived from the data.
The county could conduct risk-limiting audits to verify the consistency between the electronic and printed vote records and test the accuracy of the reported election outcomes. Audits could be conducted on individual ballots or races if needed.
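As a rough sense of scale for the risk-limiting audits described above, the sketch below applies one commonly cited approximation for the expected sample size of a ballot-polling audit, about 2·ln(1/α)/m² for risk limit α and margin of victory m; this rule of thumb is for illustration only and is not a description of STAR-Vote's or any county's actual audit procedure.

```python
import math

def approx_ballot_polling_sample(margin, risk_limit=0.10):
    """Rough expected ballots to examine in a ballot-polling risk-limiting
    audit: about 2 * ln(1/alpha) / margin^2 (an approximation, not any
    jurisdiction's actual procedure)."""
    return math.ceil(2 * math.log(1 / risk_limit) / margin ** 2)

for m in (0.01, 0.05, 0.10):
    print(f"margin of victory {m:>4.0%}: "
          f"~{approx_ballot_polling_sample(m):,} ballots")
# Close races require far larger samples: ~46,052 ballots at a 1 percent
# margin versus ~461 at a 10 percent margin (10 percent risk limit).
```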
In June 2015, the county issued a Request for Information for STAR-Vote to solicit input on the design, development, implementation, and maintenance of the equipment. Based on information gathered from the request, it issued an RFP in October 2016 to solicit proposals from voting system vendors and others for the development and implementation of key components of the equipment for in-person voting. The county also issued a Statement of Intent for the equipment to inform interested parties of the county’s planned approach for the long-term management and support of STAR-Vote. According to these documents, the county planned to own the intellectual property rights for the equipment and provide open source software for its system to the elections community under a licensing agreement, which would allow other jurisdictions to use similar equipment. The Statement of Intent described the formation of a nonprofit organization to manage and support STAR-Vote and sought $25 million in funding from interested parties to complete the development of the open source software components, support the organization’s operating budget for the first 5 years, and provide a cash reserve. The county planned to use these funding commitments and local budget appropriations to develop, build, and deploy the equipment.
In September 2017, the county announced that it had decided to no longer pursue developing and building STAR-Vote. The county stated that it received 12 proposals in response to the RFP but they were not sufficient to build a complete voting system. According to county officials, none of the proposals included the election management system for the equipment that would handle ballot definition and the tallying of results, among other related tasks. In addition, officials stated that they received limited responses to their solicitation for financial commitments in the Statement of Intent and thus lacked the necessary funding to develop and build the equipment. Officials noted that the open source software platform they had envisioned was seen by voting equipment vendors as a low-revenue business model in the current elections marketplace. They added that potential participants in a STAR-Vote entity may not have had a clear concept of how its business model might work, which they said was perhaps due to the county’s more limited focus on this aspect when they were initially designing the system. Given these obstacles and the age of the county’s current equipment, the county decided that it needed to move toward acquiring more immediately deliverable voting equipment through a voting system vendor.
Selection and Acquisition of New Voting Equipment from a Vendor
The county has incorporated some of the features of STAR-Vote into its requirements for new voting equipment. According to county officials, the county plans to acquire either DREs or ballot marking devices with precinct count digital scanners because, in their view, they are accurate (e.g., prevent voter errors, such as overvotes or stray marks on the ballot, and minimize questions about voter intent), allow individuals with disabilities to vote on the same equipment as other voters, support vote center elections, and offer fast reporting of election results. The county also plans to require that its next voting equipment have the following features:
A voter-verified paper list of choices for recount purposes. County officials stated that the equipment must produce printed paper records that can be tallied and connected with electronic voting records through an automated process. This electronic connectivity would allow paper-ballot recounts to be conducted on individual races.
Security features that include support for third-party verification of results and better postelection audits. According to county officials, the equipment they acquire must allow for third parties to independently verify reported election results and must support risk-limiting audits.
Officials stated that they believe there is or will be equipment on the market in the near future that could support these features. They noted that they are also prepared to work with vendors to customize existing equipment to meet the county’s requirements if needed, acknowledging that such additions may increase expenses or require additional time to recertify parts of the voting system. County officials estimate that the new equipment will cost about $16 million and stated that acquisition will be funded through local bonds.
County officials said they would like to have the new equipment in place for the 2020 election, which would require them to start deploying it no later than May 2019. The county issued an RFP for the system in November 2017, and officials stated that they plan to assemble a group of stakeholders similar to those who participated in the 2009 Election Study Group, as well as the individuals who designed STAR-Vote, to help evaluate the proposals received. Officials noted that their current equipment is functioning and robust, but that the new equipment must be deployed before the current equipment begins to degrade. In addition, they stated that the May 2019 implementation date is the latest possible date in order to allow sufficient time to educate voters and train county staff and election judges on the new equipment before using it in the 2020 election.
Anne Arundel County, Maryland
Anne Arundel County had used DREs without a VVPAT since 2004 and replaced its equipment in 2016 with a system in which voters manually mark paper ballots and insert them into precinct count digital scanners which then count them. Maryland requires the use of uniform voting equipment in polling places statewide and the state and counties each pay 50 percent of the costs of acquiring equipment. In our state survey, Maryland officials reported that the state determines when voting equipment is to be acquired and selects the type and model of voting equipment that local jurisdictions use.
Key Factors That Influenced Maryland’s Decision to Replace Its Voting Equipment
According to the Maryland State Board of Elections (SBE) and Anne Arundel County Board of Elections officials, the need for voting equipment to meet state requirements, the overall performance and features of the equipment, and the ability to maintain the equipment were among the key factors that influenced the state’s decision to replace its equipment.
Specifically, in 2007, Maryland enacted a law that prohibited the use of a voting system unless the SBE determined that the system provides a voter-verifiable paper record, thereby requiring the state’s DREs to be replaced. SBE officials said that the passage of the new law was driven primarily by a push from voting advocates to move to new equipment that used paper ballots and provided a verifiable paper trail. Although the law was enacted in 2007, state funding for the new equipment was not available until 2014 due to budgetary constraints.
While the change in state law was the main reason for replacing its voting equipment, both SBE and Anne Arundel County officials noted that the state’s previous DRE equipment was nearing the end of its life cycle and various problems had begun to occur more frequently. For example, SBE officials said that nonresponsive touch screens and battery unit failures became more common with the equipment used in the state. In addition, Anne Arundel County officials stated that while their equipment generally performed satisfactorily, some of the touch screens had begun to degrade and develop calibration issues, which resulted in the appearance of incorrectly recording voters’ selections. In addition, county officials said that the equipment could no longer support certain software or security updates, and replacement parts were challenging to acquire.
Selection and Acquisition of New Voting Equipment
According to SBE officials, state law specifically required the purchase of precinct count scanners so the board did not consider other types of voting equipment. The SBE issued an RFP in July 2014 and four voting system vendors submitted proposals. The SBE formed an evaluation committee to analyze the technical and financial details of the proposals. According to SBE officials, the committee’s members included a state official with expertise on voting systems, a county election director, a county technical specialist, and election experts and researchers, among others. Anne Arundel County election officials stated that the SBE also established various subcommittees to solicit input from county officials as the state made its selection. They said that relevant local elections staff members were involved in the selection process and that in their view, the process had worked well.
According to SBE officials, in addition to assessing the vendors’ proposals, the evaluation committee worked with the University of Baltimore to perform usability and accessibility testing on the equipment under consideration. The committee also hosted a public demonstration to collect feedback on such areas as ease of use and confidence that votes were accurately cast. Officials stated that after conducting its assessment of the equipment, the committee presented its findings to the SBE, and in October 2014, the board selected the voting equipment to be acquired based on the committee’s recommendation.
Maryland requires equipment to be certified by the EAC and the SBE before use in the state. The selected equipment had been certified by the EAC in July 2014 and was certified by the SBE in December 2014. As part of the certification process, the SBE tested the equipment to ensure that it met requirements in the Maryland elections code, including simulating primary and general elections using ballots typically used by jurisdictions in the state, and reviewed the findings from the public demonstration and usability testing performed during the selection process.
The SBE decided to lease rather than purchase the equipment for a number of reasons. Specifically, SBE officials said that leasing provided increased flexibility to update or replace equipment more frequently and had lower upfront costs. In addition, the state did not want to buy new equipment until the implementation of updated federal guidelines. Under the current contract to lease the digital scan equipment, payments are made to the vendor on a quarterly basis. According to SBE officials, the current payment to the vendor for leasing the digital scan equipment statewide is approximately $1.1 million per quarter.
SBE officials said that the process to acquire new equipment is inherently challenging, but in their view, the process generally went well. Knowing what type of equipment the state needed to acquire simplified the process and reduced the number of proposals that officials needed to review. Nevertheless, they noted that the process took more of their time and resources than they had anticipated, which presented challenges because the state was holding elections during the same time period it was selecting and acquiring the equipment. However, the SBE met its goal of implementing the new equipment by 2016.
Deployment of New Voting Equipment
SBE and Anne Arundel County officials stated that deployment of the new equipment in the 2016 general election went smoothly with no significant challenges. The officials said they took a number of steps to help ensure a successful rollout. For example, SBE officials said that they established a strong project management team and hired contractors to assist with tracking progress toward key deadlines; drafting policies, procedures, and training manuals; and testing equipment and sending it to the counties. Anne Arundel County officials said that they hired about 40 temporary staff to assist with deploying the new equipment and other tasks during the general election. In addition, they stated that the county conducted extensive election judge training and held mock elections using the new equipment. The officials noted that with the new paper-based system, the county needed to recruit and train more election judges compared to past elections to hand out ballots, show voters how to operate the equipment, and handle provisional voting. The two election judges we interviewed stated that the training they received was very comprehensive and effectively prepared them for Election Day.
Both SBE and Anne Arundel County officials stated that additional voter education efforts would have been beneficial. According to SBE officials, the SBE had developed plans for a statewide multimedia effort to educate voters on the new equipment but did not receive funding to implement it. A scaled-down effort was carried out instead, which included demonstrating voting equipment at meetings and fairs around the state, producing local media news stories, and posting a video on the SBE’s website on how to use the new equipment. SBE and Anne Arundel County officials stated that the more limited voter education efforts might have contributed to longer lines on Election Day in some polling places because many voters were unfamiliar with the equipment and some had questions or needed assistance with using it. However, these officials noted that voter wait times were not a widespread or significant issue during the general election. The two election judges we interviewed stated that some voters needed help inserting their ballots into the scanner, but observed that voters generally appeared to find the new equipment easy to use. They also noted that some voters commented that paper ballots reassured them about the security of their vote.
SBE and Anne Arundel County officials said that the equipment itself performed satisfactorily in the 2016 general election with only minor problems. For example, state officials said that the scanners jammed occasionally, but this was easily resolved by elections personnel. In addition, most polling locations in the state were allocated only one scanner, so some jurisdictions with two-page ballots, such as Anne Arundel County, experienced lines because of the length of time it took for voters to scan their ballots. Anne Arundel County officials plan to analyze voter registration data to help determine the number of scanners needed at each polling place and share the information with the SBE to help inform allocations for future elections. More generally, SBE officials noted that the new system has less equipment to manage—about 2,600 digital scan units compared to the approximately 18,000 DRE units used statewide in prior elections—so there is less pre-election testing and postelection maintenance that has to be done, saving time and labor for the state and counties.
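The scanner-allocation analysis the county plans could, in its simplest form, resemble the back-of-the-envelope calculation sketched below; every parameter (expected voters, scan time per page, hours open, utilization target) is an assumption for illustration rather than the county's actual data or method.

```python
import math

def scanners_needed(expected_voters, pages_per_ballot, sec_per_page,
                    hours_open, target_utilization=0.5):
    """Size scanner counts so each unit is busy no more than
    target_utilization of the day, leaving headroom for arrival peaks.
    All inputs are illustrative assumptions."""
    demand_sec = expected_voters * pages_per_ballot * sec_per_page
    capacity_sec = hours_open * 3600 * target_utilization
    return math.ceil(demand_sec / capacity_sec)

# A two-page ballot roughly doubles scanning time per voter, so a polling
# place that one scanner could serve with a one-page ballot may need more.
print(scanners_needed(1800, pages_per_ballot=1, sec_per_page=15,
                      hours_open=13))  # -> 2
print(scanners_needed(1800, pages_per_ballot=2, sec_per_page=15,
                      hours_open=13))  # -> 3
```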
The state contracted with a third-party vendor to conduct a postelection audit of the 2016 general election by using independent software to tally all digital ballot images. The audit confirmed the accuracy of the election results. According to SBE officials, the new equipment’s ability to capture and store digital images of the ballots made this type of audit possible. Anne Arundel County officials stated that the ability to conduct such an audit is one of the main benefits of the new equipment.
Lafayette County, Florida
Lafayette County has a small population and, in 2016, replaced its precinct count optical scan equipment with precinct count digital scan equipment. The county formed a consortium with other counties in the state to help acquire its new equipment.
Key Factors That Influenced the County’s Decision to Replace Its Voting Equipment
According to the county’s Supervisor of Elections, the cost to acquire new equipment and the availability of funding, as well as the need to meet state requirements, were among the key factors that influenced the county’s decision to replace its voting equipment. He stated that Lafayette County’s optical scanners were approximately 15 years old but were generally in good condition and performed satisfactorily in prior elections. County officials had planned to replace the county’s aging voting equipment by 2018 or 2020, but decided to replace it in 2016 because of the opportunity to join a consortium of counties that formed to acquire new equipment, which the Supervisor stated helped secure funding for and lower the costs of purchasing the equipment.
In addition, the Supervisor of Elections said that, to comply with state law, the county needed to acquire a paper ballot system with a BMD to replace the DRE it had used for voters with disabilities. Specifically, as of July 2008, Florida law required all voting in the state to be done using mark-sense paper ballots, which are generally counted using optical or digital scanners, except for voting by individuals with disabilities. Current state law requires jurisdictions to use these paper ballots for accessible voting by 2020. As such, according to the Supervisor of Elections, part of the impetus for acquiring new voting equipment was to replace the county’s DRE to meet the 2020 deadline in the law.
Selection and Acquisition of New Voting Equipment
The Supervisor of Elections stated that Lafayette County is a small county and does not have much purchasing power. He said that Lafayette County and other small counties in the state formed a consortium to lobby the state for assistance and to leverage their collective purchasing power. The 12-county consortium was established in a 2015 meeting that was attended by county election officials, the Florida Deputy Secretary of State, and the vendor that supplied the counties’ previous voting system. According to the Lafayette County Supervisor of Elections, the consortium decided to purchase precinct count digital scanners from the same vendor the counties had used before because county staff were familiar with the vendor and equipment, and the cost for the equipment was lower than similar equipment from another vendor that some counties in the consortium had considered. In addition, the Supervisor of Elections stated that the digital scanners have features that were an improvement over the county’s previous optical scan equipment. For example, he stated that the new scanners have more robust security features, such as locking panels, seals, and a requirement for a passcode to access the system. He also noted that the scanners have touch screens that flip up and are back-lit, which are easier for voters and poll workers to read and more clearly identify overvotes. Further, he stated the scanners digitally capture and store ballot images. The two Lafayette County poll workers we interviewed confirmed that the new equipment more clearly identified overvotes for them and for voters than did the previous equipment.
According to the county’s Supervisor of Elections, having the consortium approach state officials as a group helped secure HAVA funds to help the counties purchase the voting equipment. In addition, he stated that being a part of the consortium helped the counties negotiate a lower price for their equipment than what they could have obtained individually because they pooled their purchases and acquired a higher volume of machines. While the consortium negotiated as a unit, each county has an individual contract with the vendor. The Supervisor of Elections stated that the total cost to purchase Lafayette County’s new voting equipment—which included seven digital scanners, seven BMDs for voters with disabilities, and various system components—was about $70,000. The equipment was acquired primarily with HAVA funds, although he noted that the county allocated about $12,000 in local funds to purchase three additional BMDs. A memorandum of agreement for funding and purchasing the equipment was signed by Lafayette County and the state in November 2015 and, according to the Supervisor of Elections, the equipment was acquired in late 2015 and first used in the March 2016 primary election.
Deployment of New Voting Equipment
The Supervisor of Elections and the two poll workers we interviewed stated that deployment of the new voting equipment went smoothly and the county did not experience any challenges because the new and previous equipment are both precinct count scanning systems. The Supervisor noted that the voting process remained the same for the voter, so extensive voter education efforts were not needed. He stated that Lafayette County did not experience any equipment malfunctions during the November 2016 general election, and a postelection audit that was conducted, in which the county manually tallied ballots from a randomly selected race and precinct, found that the results were accurate.
Beaver County, Utah
Beaver County has a small population and previously used DREs with a VVPAT. In 2014, Beaver County began conducting vote-by-mail elections and replaced its DREs with central count digital scan equipment to support this change.
Key Factors That Influenced the County’s Decision to Replace Its Voting Equipment
According to Beaver County officials, the overall performance and features of the equipment and the ability to maintain the equipment were among the key factors in their decision to replace the county’s equipment. Officials stated that the county had been using DREs since 2005 and that by 2013, they had come to the conclusion that the equipment was not very efficient or user-friendly for administering elections. For example, the Deputy Clerk stated that it was time consuming to both set up the equipment and tally the votes, which required collecting and uploading the memory component from each of the DREs. She also noted that the operating software for the equipment’s election management system had become out-of-date and did not have a user-friendly interface. According to the Deputy Clerk, this made it difficult for staff to navigate without detailed training, which was time consuming and costly. In addition, county election officials said that they were unsure about future maintenance and system upgrade costs and decided it would be more cost-effective to spend funds on purchasing new voting equipment rather than on upgrades to equipment with which they were not very satisfied.
In 2013, the county decided to begin conducting vote-by-mail elections the following year and to acquire new equipment to support this change. According to county officials, this decision was due to the performance of their DREs and a desire to reduce costs and increase the efficiency of administering elections, among other reasons. Officials said that because the county was moving to vote-by-mail elections and DREs would no longer be needed for each precinct, the county would instead acquire central count scanners designed to count the mail-in ballots it would receive at the county elections office.
Selection and Acquisition of New Voting Equipment
According to Beaver County officials, the main individuals involved in the process to select and acquire the county’s new voting system included the current Beaver County Clerk, Deputy Clerk, a county information technology official, and the previous county clerk, among others. When the county started the process in 2013, the state had not initiated any efforts to help local jurisdictions acquire new equipment. As such, both Utah and Beaver County election officials said that the state was aware of the county’s decision to replace its equipment but was not involved in the selection and acquisition process.
County officials stated that they wanted to acquire central count scanners to support conducting vote-by-mail elections and a BMD for in-person voting at the elections office for individuals with disabilities. Officials said that, in 2014, they verbally requested proposals from their current vendor and an elections services company that the county had employed in 2012 to provide training, systems testing, and other support for elections. According to the Deputy Clerk, the county requested proposals from these two entities because county officials were familiar with them and were not aware of other vendors that might submit proposals. Officials said that the county received a proposal from the elections services company, and selected the company because it was the only bid received and the equipment the company sold met the county’s needs and was federally certified. They stated that one of the challenges they experienced as a small county looking to purchase equipment was that vendors were not actively marketing to them. In addition, the Deputy Clerk noted that she had limited elections and information technology experience when the county started the selection process. However, she said that the election services company was familiar with Utah’s elections code and federal voting system requirements, helped negotiate with the vendor to acquire the new equipment, and educated county staff on the equipment.
Beaver County reported that the cost to purchase the equipment—two central count digital scanners, a BMD, and associated system components—was about $46,000. Local funds were used to purchase the scanners and HAVA funds were used to purchase the BMD. According to Beaver County officials, county commissioners approved the procurement of the equipment in spring 2014 and it was first used in the June 2014 primary elections.
Deployment of New Voting Equipment
Beaver County officials stated that they deployed the new equipment in 2014 because it was more manageable to conduct such a transition during a non-presidential election year. They noted that they needed to educate the public about both voting by mail and the new voting equipment. Officials stated that the county used local newspaper ads, social media posts, and direct mailings to provide information on these changes. Officials also posted information on the county’s website and allowed people to observe logic and accuracy testing of the equipment. They noted that educating the public on the new voting method and equipment in smaller elections during 2014 and 2015 helped voters become more comfortable with what to expect for the presidential election in 2016.
County officials said that they are very satisfied with the performance of the new voting equipment. They noted that conducting vote-by-mail elections and using central count scanners allow them to administer elections from one location on Election Day, which requires less time and resources than having to manage multiple polling places. Officials also stated that the new digital scanners are able to count a high volume of ballots in a short period of time. They said that, for the November 2016 general election, the vote tallying was completed within an hour of the polls closing, which allowed the county to report results quickly. However, one challenge they experienced was that the new equipment’s data format for election night reporting of results to the state was not compatible with the state’s reporting system. To address this issue, county officials reformatted the data to produce a report that could be uploaded into the state’s system, but cautioned that this may not be feasible for larger jurisdictions.
According to officials, the county conducted two postelection audits for the 2016 general election—one required by the state and another that the county initiated. For the state audit, the county hand counted 1 percent of total ballots from a randomized list. In addition, the county conducted its own audit by running all ballots on its other digital scanner to compare results. According to officials, both audits validated the election results.
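For readers interested in the mechanics of such an audit, the following is a minimal sketch of drawing a fixed-percentage random sample of ballots for a hand count. The ballot identifiers, the ballot count, and the 1 percent rate are hypothetical assumptions for illustration; this is not Beaver County's or Utah's actual audit procedure.

```python
import random

def draw_audit_sample(ballot_ids, rate=0.01, seed=None):
    """Randomly select a fixed share of ballots for a hand-count audit."""
    rng = random.Random(seed)
    sample_size = max(1, round(len(ballot_ids) * rate))  # audit at least one ballot
    return sorted(rng.sample(ballot_ids, sample_size))

# Hypothetical example: 2,400 ballots cast; a 1 percent audit draws 24 ballots.
ballots = [f"ballot-{n:05d}" for n in range(1, 2401)]
print(draw_audit_sample(ballots, seed=42))
```

Fixing the random seed, as in the example, lets observers reproduce the same sample; in practice a public source of randomness is typically used instead.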
Appendix VI: GAO Contact and Acknowledgments
GAO Contact
Acknowledgments
In addition to the contact named above, Tom Jessor (Assistant Director), David Alexander, Carl Barden, Chuck Bausell, Brett Fallavollita, Sally Gilley, Christopher Hatscher, Eric Hauswirth, Richard Hung, Jill Lacey, Serena Lo, Jan Montgomery, Heidi Nielson, Shannin O’Neill, Claire Peachey, Jeff Tessin, and Johanna Wong made significant contributions to this report.
We gratefully acknowledge the substantial time and cooperation of the state and local election officials, and the stakeholders and experts whom we interviewed.

Why GAO Did This Study
Much of the voting equipment acquired with federal funds after the enactment of the Help America Vote Act in 2002 may now be reaching the end of its life span, and some states and local election jurisdictions—which number about 10,300 and generally have responsibility for conducting federal elections—have replaced or are considering whether to replace their equipment. GAO was asked to examine voting equipment use and replacement.
This report addresses (1) the types of voting equipment jurisdictions used for the 2016 general election and their perspectives on the equipment; (2) factors considered when deciding whether to replace equipment and replacement approaches in selected jurisdictions; and (3) stakeholder perspectives on how federal voting system guidelines affect replacing and developing equipment.
GAO surveyed officials from a nationwide generalizable sample of 800 local jurisdictions (68 percent weighted response rate) and all 50 states and the District of Columbia (46 responded) to obtain information on voting equipment use and replacement. GAO also interviewed officials from (1) five jurisdictions, selected based on population size and type of voting equipment used, among other things, to illustrate equipment replacement approaches; and (2) seven voting system vendors, selected based on prevalence of jurisdictions' use of equipment, type of equipment manufactured, and systems certified, to obtain views on federal voting system guidelines. These interviews are not generalizable, but provide insights into jurisdictions' and vendors' experiences.
What GAO Found
Local election jurisdictions primarily used optical scan and direct recording electronic (DRE), also known as touch screen, equipment during the 2016 general election and were generally satisfied with voting equipment performance. Specifically, on the basis of GAO's nationwide generalizable survey of local election jurisdictions, GAO estimated that jurisdictions with 63 percent (from 54 to 72 percent) of the population nationwide used optical or digital scan equipment as their predominant voting equipment during the election, while jurisdictions with 32 percent (from 23 to 41 percent) of the population nationwide used DREs. In addition, the survey results indicated that accurate vote counting and efficiency of operation were top benefits experienced by jurisdictions for both types of equipment, and storage and transportation costs were a top challenge. Further, GAO estimated that jurisdictions with 93 percent (from 88 to 96 percent) of the population nationwide did not experience equipment errors or malfunctions on a very or somewhat common basis and jurisdictions with 96 percent (from 94 to 98 percent) of the population were very or generally satisfied with the performance of their equipment during the 2016 general election.
GAO identified four key factors that jurisdictions and states consider when deciding whether to replace voting equipment—(1) need for equipment to meet federal, state, and local voting system standards and requirements; (2) cost to acquire new equipment and availability of funding; (3) ability to maintain equipment and receive timely vendor support; and (4) overall performance and features of equipment. When replacing equipment, the five jurisdictions GAO selected for interviews used varying approaches based on their specific needs and resources. For example, Los Angeles County, California, which has a large and diverse electorate, is self-designing its own voting equipment and, according to officials, has incorporated a user-centered approach that prioritizes the needs and expectations of its voters. Lafayette County, Florida, which has a small population, joined a consortium of other small counties to help obtain funding and pool purchasing power to replace its equipment.
The state election officials GAO surveyed and the seven selected voting system vendors GAO interviewed, among other stakeholders, had varying perspectives on how the current voluntary federal voting system guidelines affected the replacement and development of voting equipment. These guidelines can be used to test and certify equipment to verify that it meets baseline functionality, accessibility, and security requirements. The stakeholders GAO surveyed or interviewed generally indicated that the guidelines and their associated testing processes provide helpful guidance for equipment developers, cost savings for states that do not have to duplicate federal testing, and assurance that certified equipment meets certain requirements. However, some of these stakeholders stated that aspects of the guidelines could discourage the development of innovative equipment and limit the choices of voting equipment on the market. The Election Assistance Commission (EAC), which is responsible for developing the federal guidelines, is updating them with stakeholder input and plans to issue a new version in late summer 2018.
GAO incorporated technical comments provided by the EAC and election officials from the selected local jurisdictions and their respective states as appropriate. |
Background
Spending Authority and Permanent Appropriations
“Backdoor Authority” or “Backdoor Spending”

These are similar but not identical terms for spending authority and permanent appropriations. Backdoor authority and backdoor spending are colloquial phrases for budget authority that Congress has provided in laws other than appropriations acts. This includes contract authority and borrowing authority, as well as entitlement authority. Entitlement authority, a type of permanent appropriation, is the authority to make payments to any person or government for which budget authority is not provided in advance by appropriations acts, where the law containing the authority legally requires the U.S. government to make such payments. The terms backdoor authority and backdoor spending refer to the process by which federal money “goes out the door.” Annual appropriations are said to go out the “front door” because the annual appropriations cycle provides a regularly scheduled forum where Congress may exercise oversight over spending. Other appropriations are said to go out the “back door” because they do not go through the annual appropriations process.
For the purposes of this report, our definition of spending authority and permanent appropriations includes the five types of budget authority described in figure 1. We define spending authority as budget authority made available through laws other than annual appropriation acts. We define a permanent appropriation as budget authority to incur obligations and make payments that is available permanently by law without further legislative action. A permanent appropriation may have been made available through an annual appropriations act or through laws other than the annual appropriations acts. We include both in our inventory, consistent with the intent of the request underlying this work.
For some accounts, Congress provides spending authority and permanent appropriations to allow agencies the flexibility to spend fee revenue without further legislative action. Specifically, Congress has authorized some agencies to establish working capital funds—a type of intragovernmental revolving fund—in which an agency may deposit fees from federal, and sometimes nonfederal, customers for performing administrative services, or the sale of government products, within their statutory authority. For example, in addition to appropriations and reimbursements from federal agencies, the Department of Energy (DOE) has a working capital fund with the authority to collect funds. Those collections are then made available for DOE expenses necessary for the maintenance and operation of common administrative services for economy and efficiency, such as office space and communication services. This and other working capital funds operate as self-supporting entities conducting business-like activities for their agencies.
Spending authority and permanent appropriations may be subject to further restrictions from Congress. For example, in one or more annual appropriations acts, Congress could restrict the use of some or all of the budget authority, thereby using the annual appropriations process to control the use of spending authority and permanent appropriations. For instance, the U.S. Department of Agriculture (USDA) has a permanent appropriation which states that 10 percent of all receipts from the use and occupancy of national forest system lands during each fiscal year are available for maintaining roads and trails within the national forests. In past annual appropriations acts, Congress has limited that permanent appropriation by transferring all funds made available for that fiscal year to the General Fund of the Treasury. Those funds are then unavailable for obligation unless appropriated again.
Trends in Federal Spending
In fiscal year 2017, the federal government’s total outlays were almost $4 trillion of which about $2.5 trillion was in outlays for mandatory spending. Mandatory spending, also known as direct spending, refers to budget authority provided in laws other than appropriations acts and the outlays that result from such budget authority. Medicare is an example of a program that is funded by mandatory spending. Discretionary spending, on the other hand, refers to budget authority that is provided in and controlled by appropriations acts. During the annual appropriations process, Congress may choose to appropriate the amount in the President’s budget request, increase or decrease those levels, eliminate proposals, or add other programs. For example, most defense and education programs are funded with discretionary spending. As shown in figure 2, mandatory spending as a share of all federal spending grew from about 51 percent in fiscal year 1997 to about 63 percent in fiscal year 2017. Another form of federal spending is net interest, which is primarily interest paid on debt held by the public.
While the majority of the accounts in our inventory have mandatory budget authority, not all mandatory spending fits our definition of spending authority and permanent appropriations. For example, while annually appropriated entitlement programs—such as the Supplemental Nutrition Assistance Program—are provided for in annual appropriations acts, they are treated as mandatory spending because the authorizing legislation entitles beneficiaries to receive payment or otherwise obligates the government to make a payment. As annually appropriated entitlements are subject to the annual appropriations process, they did not meet our definition of spending authority and permanent appropriations. Conversely, not all spending authority and permanent appropriations are mandatory spending. For example, our inventory includes permanent appropriations made available in annual appropriations acts.
The increase in mandatory spending, and corresponding increase in spending authority and permanent appropriations, has long-term implications for the nation’s fiscal outlook overall, including the growing federal debt. The growth in mandatory spending drove federal spending that outpaced revenue growth in fiscal year 2017 and, absent policy change, is projected to continue to do so in the future given the aging population and rising health care costs and their relation to large federal budget accounts funding programs, such as Social Security and Medicare.
Spending Limits and Sequestration
Sequestration—cancellation of budgetary resources under a presidential order—was first established in the Balanced Budget and Emergency Deficit Control Act of 1985 (BBEDCA) to control the deficit. BBEDCA, as amended, requires OMB to calculate the reduction to budgetary resources required each year to reduce the deficit by at least an additional $1.2 trillion over a 10-year period. A percentage reduction, or sequestration rate (calculated by OMB), is applied to nonexempt (subject to sequestration) accounts to achieve the total reduction amount required for the fiscal year. The sequestration rate varies from year to year based on a formula outlined in BBEDCA. The annual reduction amount OMB calculates is split evenly between the defense and nondefense functions. The calculated amount is then allocated between discretionary appropriations and mandatory spending in each function in proportion to the share of total spending within the function.
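To make the allocation mechanics concrete, the following sketch splits a hypothetical annual reduction evenly between the defense and nondefense functions and then apportions each half between discretionary and mandatory spending in proportion to their shares of spending within the function. All dollar figures are illustrative, and the sketch omits BBEDCA's special rules and exemptions; it is not OMB's actual calculation.

```python
def allocate_reduction(total_reduction, spending_by_function):
    """Split a required reduction evenly across functions, then pro-rate
    each function's share between discretionary and mandatory spending."""
    per_function = total_reduction / len(spending_by_function)
    allocations = {}
    for function, spending in spending_by_function.items():
        total = sum(spending.values())
        allocations[function] = {
            category: per_function * amount / total
            for category, amount in spending.items()
        }
    return allocations

# Hypothetical spending levels, in billions of dollars.
spending = {
    "defense":    {"discretionary": 550, "mandatory": 50},
    "nondefense": {"discretionary": 500, "mandatory": 2500},
}
print(allocate_reduction(110, spending))
```

In this example each function absorbs $55 billion, with the nondefense share falling mostly on mandatory spending because it makes up most of that function's total.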
Prior to BBEDCA, the Congressional Budget and Impoundment Control Act of 1974 (CBA) attempted to limit the creation of new contract authority and authority to borrow. In 1990, Congress further sought to limit spending authority by establishing controls over discretionary spending and a system of controls over legislative changes in mandatory spending. The Budget Enforcement Act of 1990 (BEA) amended both CBA and BBEDCA. In addition to establishing dollar limits for total annual appropriations, BEA contained a “pay-as-you-go” provision requiring that any legislation that reduced taxes or expanded mandatory spending programs be offset by mandatory spending cuts or revenue increases. This provision was to be enforced through sequestration of nonexempt mandatory spending programs at the end of the congressional session. Both the discretionary limit and “pay-as-you-go” rules were extended through fiscal year 2002 and were not subsequently reauthorized. In 2010, the Statutory Pay-As-You-Go Act of 2010 reinstated a version of the “pay-as-you-go” requirement. The act provided that if the net effect of mandatory spending and revenue legislation enacted in a year increases the deficit, then a sequestration of nonexempt mandatory spending will occur to eliminate the increase.
The Budget Control Act of 2011 (BCA) further amended BBEDCA and revived sequestration as a budgetary enforcement mechanism to reduce the deficit. BCA established the Joint Select Committee on Deficit Reduction (Joint Committee). The Joint Committee was tasked with proposing legislation to reduce the deficit. Such legislation was not proposed or enacted, which triggered the sequestration process provided in section 251A of BBEDCA, known as the Joint Committee sequestration. BBEDCA currently requires a sequestration of mandatory spending in each year through fiscal year 2027 and a reduction of discretionary spending limits in fiscal years 2020 and 2021. A sequestration of discretionary spending could still occur in any year through fiscal year 2021 if Congress and the President enact appropriations that exceed discretionary spending limits established by BBEDCA. As of September 2018, the President has ordered the sequestration of mandatory spending in each year since fiscal year 2013, and the sequestration of discretionary appropriations in fiscal year 2013.
Reported Use of Spending Authority and Permanent Appropriations Has Increased Government-Wide, and Agencies Using the Authorities Have Changed

Reported Budget Authority Amount Was Higher for Three of the Five Authority Types in Fiscal Year 2015, as Compared to Fiscal Year 1994
The amount of spending authority and permanent appropriations reported government-wide grew 88 percent from fiscal years 1994 through 2015, adjusted for inflation. Specifically, in fiscal year 2015, approximately $3.2 trillion was reported, compared with approximately $1.2 trillion in fiscal year 1994 ($1.7 trillion in fiscal year 2015 dollars; see figure 3).
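The inflation adjustment behind these comparisons can be illustrated with a short calculation. The deflator below is implied by the figures in this paragraph and is used only for illustration; it is not the official price index underlying GAO's analysis.

```python
def real_growth_percent(base_amount, later_amount, deflator):
    """Percent change after restating the base-year amount in later-year dollars."""
    base_in_later_dollars = base_amount * deflator
    return 100 * (later_amount - base_in_later_dollars) / base_in_later_dollars

# Amounts in trillions of dollars. The deflator is implied by the figures
# above ($1.2 trillion nominal in fiscal year 1994, about $1.7 trillion in
# fiscal year 2015 dollars).
deflator_1994_to_2015 = 1.7 / 1.2
print(f"{real_growth_percent(1.2, 3.2, deflator_1994_to_2015):.0f} percent real growth")
```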
Although the total reported amount of spending authority and permanent appropriations increased over time, the changes for each authority type varied when comparing fiscal years 1994 to 2015 (see figure 4). Reported budget authority grew for three of the five authority types—permanent appropriations, offsetting collections, and contract authority—in fiscal year 2015, as compared to fiscal year 1994. For example, about $2.6 trillion in permanent appropriations was reported in fiscal year 2015, up from approximately $982.5 billion in fiscal year 1994 ($1.5 trillion adjusted for inflation to 2015 dollars). Generally, the reported amount of permanent appropriations increased gradually, with the biggest growth occurring in fiscal year 2008. Borrowing authority decreased, and agencies reported no use of monetary credits or bartering at any time during fiscal years 1995 through 2015.
Table 1 provides comparisons between reported budget authority in fiscal years 1994 and 2015 for all authority types.
The amount of budget authority is not necessarily indicative of the prevalence of spending authority and permanent appropriations since the amount of budget authority in different accounts can vary by billions of dollars. From fiscal years 1995 through 2015, agencies had 1,089 authorities in 902 budget accounts. We previously reported on the use of 670 authorities in 540 budget accounts in fiscal year 1994. In comparing fiscal years 1994 to 2015, we found that the number of accounts with permanent appropriations and offsetting collections increased while contract and borrowing authority decreased. Figure 5 summarizes the number of accounts with each type of authority over the years.
The overall growth in spending authority and permanent appropriations is driven primarily by permanent appropriations growth. Entitlement programs, such as Medicare and the Social Security Administration’s (SSA) Old-Age and Survivors Insurance and Disability Insurance programs, are funded through permanent appropriations, and are a significant proportion of budget authority in our inventory, as discussed below. Since many spending authorities and permanent appropriations provide agencies budget authority based on program use and eligibility, demographic and program demand changes can affect the amount of budget authority. For example, since the Old-Age and Survivors Insurance and Disability Insurance programs administer benefits based on eligibility requirements and statutory formulas, the amount of budget authority used for the programs increases as more people become entitled. Higher income levels result in higher average benefit amounts, and cost-of-living adjustments increase monthly benefit amounts for current beneficiaries.
Other factors affected the growth in the use of spending authority and permanent appropriations to a lesser extent.
Enactment of new authorities. From 1995 to 2015, 329 new authorities for spending authority and permanent appropriations were enacted. For example, the Housing and Economic Recovery Act of 2008 granted the Department of the Treasury (Treasury) the authority to purchase any obligations and other securities issued by government-sponsored enterprises, such as Fannie Mae. According to the act, Treasury was authorized to use this authority until December 31, 2009, with certain actions permitted after that date. This authority resulted in $200 billion in permanent appropriations reported in both fiscal years 2008 and 2009, and another $46 billion in fiscal year 2013.
Amendment of existing authorities. Some existing authorities were amended to allow for increased use—permanently or temporarily. Some authorities have amounts specified by statute—such as maximum amounts the agency can use or set amounts that the agency can charge users. For these authorities, increases in the use of an authority may be attributed to enacted increases in the specified amounts. For example, the National Flood Insurance Fund reported an increase of $878 million in offsetting collections in 2012 after legislation increased the annual limitation on premium increases for certain insurance premiums.
Increased use of spending authority and permanent appropriations at agencies’ discretion. Other authorities did not experience statutory changes, but agencies increased the use of the authorities at their discretion to meet program needs. When no maximum amount is specified as a limit on the agency’s authority, variation in use is due to agency discretion in response to circumstances. For example, the Federal Deposit Insurance Corporation’s (FDIC) Deposit Insurance Fund account began reporting increased offsetting collections for amounts assessed against depository institutions insured by FDIC in fiscal years 2009 and 2010. The reported collections increased to highs of $26.5 billion in fiscal year 2009 and $57.3 billion in fiscal year 2010, after a reported budget authority level of $2.2 billion in fiscal year 2006. According to an agency official and as stated in the FDIC’s 2009 Annual Report, this increase primarily resulted from its adoption of the Deposit Insurance Fund Restoration Plan and the prepayment of future risk-based deposit insurance assessments by depository institutions to provide FDIC with the necessary liquidity to resolve failed depository institutions during the financial crisis. An FDIC official stated that the Deposit Insurance Fund Restoration Plan addressed the need to return the Deposit Insurance Fund to its mandated minimum reserve ratio of 1.15 percent of estimated insured deposits.
Events other than legislative or agency actions. Programs may experience increased fee revenue, penalty payment, or use of the authority for circumstances that do not involve legislative or agency action. For example, the United States Coast Guard’s Maritime Oil Spill Programs account reported $743 million in permanent appropriations in fiscal year 2010 after receiving transfers from the Oil Spill Liability Trust Fund to assist with cleanup after the 2010 Deepwater Horizon oil spill. Amounts from the Oil Spill Liability Trust Fund are available to fund federal response activities in the event of an oil spill or imminent threat of an oil spill on navigable waters of the United States. In the case of the 2010 Deepwater Horizon oil spill, the Coast Guard was authorized to obtain one or more advances from the Oil Spill Liability Trust Fund, as needed to address costs associated with federal activities in response to the oil spill, with up to a maximum of $100 million for each advance.
As a result of the growth of spending authority and permanent appropriations from fiscal years 1994 through 2015, more budget authority is available to agencies that does not require them to await congressional action to incur obligations. For example, USDA has the authority to use its portion of the fee for Agricultural Quarantine Inspection without congressional action.
Agencies Reporting the Largest Amount of Spending Authority and Permanent Appropriations in Fiscal Year 2015 Have Changed, as Compared to Fiscal Year 1994
The majority of spending authority and permanent appropriations reported in fiscal year 2015 was concentrated in large agencies and budget accounts that fund entitlement programs such as Social Security and Medicare. The Department of Health and Human Services (HHS) reported the highest use of spending authority and permanent appropriations. HHS also had the most accounts in the list of top 10 accounts in fiscal year 2015. This is a change since fiscal year 1994, when SSA reported using the most spending authority and permanent appropriations. Together, in fiscal year 2015, the top three agencies—HHS, SSA, and Treasury—accounted for three-quarters of the total government-wide spending authority and permanent appropriations (see figure 6).
HHS reported the largest amount of spending authority and permanent appropriations in fiscal year 2015 with about $979 billion, or about 30 percent. (See appendix III for a list of budget authority use by agency for fiscal year 2015.) HHS’s largest three accounts in our inventory all fund Medicare. SSA, which oversees the Old-Age and Survivors Insurance program, the Disability Insurance and Supplemental Security Income programs, as well as the Special Benefits for Certain World War II Veterans program, reported about $920 billion or about 28 percent of total spending authority and permanent appropriations. Programs administered by HHS and SSA continue to show spending increases largely as a result of the aging of the population and increasing health care costs. Treasury reported the third highest amount of spending authority and permanent appropriations, about $542 billion, the majority of which is for interest on debt held by the public and intragovernmental debt.
These agency usage patterns are echoed when analyzing spending authority and permanent appropriations by account. The 10 largest accounts represented about 72 percent of spending authority and permanent appropriations in fiscal year 2015, as shown in table 2. All of these are permanent appropriations, except for the Postal Service’s Postal Service Fund account which is an offsetting collection that includes revenue for mail services. Seven of the 10 accounts fund entitlement programs.
Similar to the fiscal year 2015 data for all spending authority and permanent appropriations, HHS, SSA, and Treasury reported the greatest use of permanent appropriations in fiscal years 2015, 2005, and 1994 (see figure 7). HHS reported the highest dollar amount of permanent appropriations for the first time in fiscal year 2006, likely due to rising health care costs.
Permanent Appropriations

Budget authority to incur obligations and make payments that is available permanently by law without further legislative action.
Contract authority is concentrated, with only five agencies having this authority from fiscal years 1995 through 2015. Four of these agencies used the authority, while one agency—the Judicial Branch; Courts of Appeals, District Courts, and Other Judicial Services—has the authority but did not use it. The Department of Defense (DOD) and the Department of Transportation (DOT) were the two agencies that reported the largest percentages of dollar amounts of contract authority in fiscal years 1994 and 2015, as well as 2005 (see figure 8). This figure shows three of the four agencies that reported contract authority in our time frame. One other agency, the Department of Housing and Urban Development, used contract authority in fiscal year 2007.
Contract Authority

Authority to incur obligations in advance of appropriations, including collections sufficient to liquidate the obligation or receipts. Contract authority is unfunded, and a subsequent appropriation or offsetting collection is needed to liquidate the obligations.
From fiscal years 1994 through 2015, 15 agencies reported the use of borrowing authority of varying amounts and an additional two agencies had unused borrowing authority. Since 1995, seven accounts reported receiving new borrowing authority across five different agencies including the Department of Commerce and DOT. USDA reported the largest dollar amount of borrowing authority in most years, including fiscal years 1994, 2005, and 2015, which represented 73 percent, 82 percent, and 60 percent of each fiscal years’ total borrowing authority, respectively (see figure 9).
USDA’s large share of the total borrowing authority, and most of the overall variability in our borrowing authority data throughout our time frame, is for the Commodity Credit Corporation Fund. The fund reported about $7.8 billion, or 60 percent, of government-wide borrowing authority in fiscal year 2015. The Commodity Credit Corporation has authority to borrow funds to carry out its programs, which include providing income and price support to agricultural producers, payments for conservation practices on farms, assistance in the development of international agricultural markets, and international feeding programs. Some of the primary drivers of its borrowing authority variability are legislation, changes in commodity yields and prices, weather disasters, and market conditions, according to a USDA official.
The Railroad Retirement Board (RRB)—which administers a retirement benefit program similar to Social Security for railroad workers and their families—began reporting borrowing authority in fiscal year 1996 and reported the second largest borrowing authority amount in fiscal year 2015. The Railroad Social Security Equivalent Benefit account reported between about $3 billion and $4 billion per year through 2015. The Tennessee Valley Authority reported borrowing authority periodically during our time frame, reporting a high of $3.1 billion in fiscal year 2003. For more information on the top five accounts for the use of borrowing authority, see appendix IV. The text box below provides additional information on the Tennessee Valley Authority account and a Department of Commerce account.
Examples of Different Accounts with and Uses of Borrowing Authority

Tennessee Valley Authority, Tennessee Valley Authority Fund. The Tennessee Valley Authority is a corporate agency that provides electricity for business customers and local power companies in parts of seven southeastern states. The agency is authorized to issue and sell up to $30 billion of bonds, notes, and other debt instruments at any one time to assist in financing its power program. The proceeds from these bonds are authorized for the construction, acquisition, enlargement, improvement, or replacement of electrical power facilities and other purposes authorized by the Tennessee Valley Authority Act of 1933.

Department of Commerce, Public Safety Trust Fund. The Middle Class Tax Relief and Job Creation Act of 2012 (the Act) created the First Responder Network Authority (FirstNet), an independent authority within the Department of Commerce’s National Telecommunications and Information Administration (NTIA), and required it to establish a nationwide, interoperable public-safety broadband network. In order to provide initial funding for FirstNet, NTIA was authorized to borrow up to $2 billion from the Treasury to implement the program. The Act required NTIA to reimburse Treasury, without interest, from funds deposited into the Public Safety Trust Fund. (47 U.S.C. § 1427.)
Two accounts had temporary spikes in the use of borrowing authority from fiscal years 1994 through 2015. Specifically, the Department of Labor’s Unemployment Trust Fund had several years of increased use of borrowing authority, with a high of $26.2 billion in fiscal year 2010. As we have reported, the recession that occurred during 2007 through 2009 sharply increased the number and duration of claims for unemployment benefits. The National Credit Union Administration’s Central Liquidity Facility, which was created to improve the general financial stability of credit unions by serving as a liquidity lender to credit unions experiencing unusual or unexpected liquidity shortfalls, reported borrowing authority for the first time in our inventory in the amount of $19.4 billion in fiscal year 2009. The Central Liquidity Facility is authorized by statute to borrow, from any source, an amount not to exceed 12 times its subscribed capital stock and surplus.
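Expressed as a calculation, that statutory ceiling works as follows. The capital stock and surplus figures below are hypothetical, chosen only to show how a $19.4 billion request could fall within such a cap.

```python
def clf_borrowing_cap(subscribed_capital_stock, surplus, multiple=12):
    """Statutory ceiling: 12 times subscribed capital stock and surplus."""
    return multiple * (subscribed_capital_stock + surplus)

# Hypothetical figures, in billions of dollars.
cap = clf_borrowing_cap(subscribed_capital_stock=1.5, surplus=0.2)
print(f"cap = {cap:.1f} billion; $19.4 billion within cap: {19.4 <= cap}")
```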
Offsetting Collections Authority is Widespread among Agencies
The majority of agencies had offsetting collections authority. Offsetting collections authority generally authorizes agencies to collect fines, charge fees, or charge for permits among other uses. These functions have a number of applications across the government. Since fiscal year 1995, 129 accounts received new offsetting collections authority. We did not rank the top agencies that used offsetting collections because we, and the agencies when asked, were unable to reliably subtract collections from federal sources or refunds of prior paid obligations. The text box below provides examples of accounts with offsetting collections authority.
Examples of Different Accounts with and Uses of Offsetting Collections

Department of Transportation (DOT), Motor Carrier Safety Operations and Programs. The Unified Carrier Registration Act of 2005 tasked DOT with establishing and implementing the Unified Carrier Registration System to serve as a repository of information on, and identification of, all foreign and domestic motor carriers, motor private carriers, brokers, freight forwarders, and others required to register with DOT. DOT is authorized to collect fees associated with the system, including registration and filing fees, and may use collected funds for these activities without further appropriation.

Environmental Protection Agency, Damage Assessment and Restoration Revolving Fund. Under the Oil Pollution Act, responsible parties for a vessel or a facility from which oil is discharged are liable for, among other things, damages for injury to, destruction of, loss of, or loss of use of, natural resources. The Oil Pollution Act authorizes certain departments and agencies, such as the Department of the Interior, designated by executive order as a “trustee for natural resource damages” to recover such damages, and to retain and use the funds without further appropriation to reimburse or pay costs incurred by the trustee with respect to the damaged natural resources. For the limited purpose of the Deepwater Horizon Oil Spill, the Environmental Protection Agency was also designated a trustee. (33 U.S.C. § 2706(f); see also 33 U.S.C. § 2702; 40 C.F.R. § 300.600.)

Department of Defense (DOD), Working Capital Fund, Defense Wide. DOD’s working capital fund is used to charge for goods and services provided to the military services and other customers. In addition to any funds appropriated to the working capital fund, the working capital fund may also collect funds from providing services or procuring supplies, or through the sale and disposal of DOD property. Funds are available without further appropriation.
Six Agencies with Authority for Monetary Credits or Bartering Did Not Report Using It
Six agencies have the authority to use monetary credits or bartering, but none of these agencies reported using this authority from fiscal years 1995 through 2015. These are the same agencies that we reported in 1996—the Departments of the Interior and State, DOE, DOD, USDA, and the Tennessee Valley Authority. OMB staff said that monetary credits are used infrequently government-wide, and that agencies are not required to record this type of authority separately in the budget. When we asked, no other agencies reported having monetary credits or bartering authority. The text box below provides examples of accounts that are authorized to use monetary credits or bartering.
Monetary Credits or Bartering

Monetary credits or bartering are used by agencies having the authority to make purchases by giving the seller credits or something other than money in dollar amounts reflecting the purchase price. The holder of credits may apply them later to reduce an amount owed to the government in other transactions.
Examples of Different Accounts with Monetary Credits or Bartering Authority

Department of State, Embassy Security, Construction, and Maintenance. The Department of State is authorized to exchange property or property interests for the use of diplomatic and consular establishments in foreign countries or in the United States. The Department of State is also authorized to receive payment in any form, or in kind, to cover damage to or destruction of diplomatic or consular property abroad.

Department of Defense (DOD), Operations and Maintenance, Army National Guard. DOD is authorized to acquire logistic support, supplies, and services for the armed forces deployed outside the United States from specified governments and international organizations. Supplies or services of equal value may be exchanged to facilitate these transactions. (10 U.S.C. § 2344; see also 10 U.S.C. § 2341.)
The Percentage of Spending Authority and Permanent Appropriations Authorities Subject to Sequestration in Fiscal Year 2015 Decreased Compared to Fiscal Year 1994
The majority of spending authority and permanent appropriations authorities were exempt from sequestration in fiscal year 2015. This is a reversal from fiscal year 1994, when the majority of spending authority and permanent appropriations authorities were subject to sequestration. Congress first established exemptions to sequestration in the 1980s when BBEDCA was enacted and has amended them since then.
To determine the requisite percentage reduction to nonexempt budgetary resources pursuant to BBEDCA, OMB must define the sequestrable base, which is the total of nonexempt budgetary resources within each function. BBEDCA directs OMB to calculate a sequestration consistent with special rules and exemptions described by law. OMB provides guidance to agencies for implementing sequestration, and is also required under BBEDCA to report to Congress its calculations and other estimates at various stages. We worked with OMB to classify the agencies’ spending authority and permanent appropriations authorities in our inventory by OMB’s sequestration designations, as shown in table 3. Each authority in our inventory is assigned a designation, which defines how the authority is treated when sequestration is in effect.
As shown in table 4, in fiscal year 2015, 57 percent of spending authority and permanent appropriations authorities were exempt from sequestration, and therefore were not subject to this budgetary enforcement mechanism for helping to control the deficit. This is a 20 percentage point increase since fiscal year 1994. Correspondingly, the proportion of spending authority and permanent appropriations authorities that were subject to sequestration decreased 35 percentage points from fiscal year 1994 to fiscal year 2015. The proportion of authorities that was partially subject to sequestration increased from 4 percent in fiscal year 1994 to 11 percent in 2015. Designations were not available for 11 percent of the authorities in our inventory, due to methodological differences with OMB data explained in appendix I. None of the authorities in our inventory were classified as optionally sequestrable and two were classified as sequestrable/906.
The sequestration procedures established under BBEDCA were designed to serve as a budget enforcement mechanism and thereby reduce the federal budget deficit. Under current law, sequestration applies to mandatory spending through fiscal year 2027. Our finding that the majority of the agencies’ spending authority and permanent appropriations authorities in our inventory are exempt from sequestration is consistent with our prior work on mandatory sequestration. In 2016, we reported that the majority of mandatory spending authority was exempt from sequestration. Since spending authority and permanent appropriations permit agencies to obligate budget authority without further congressional action, when these authorities are exempt from sequestration, agencies can continue to use these authorities without reductions when sequestration is in effect.
Agency Comments
We provided a draft of this report and the online dataset to the Director of OMB for review and comment. OMB staff provided technical comments, which we incorporated as appropriate.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees, the Director of the Office of Management and Budget, the secretaries and agency heads of the departments and agencies in our review, and other interested parties. In addition, the report is available at no charge on GAO’s website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact Tranchau (Kris) T. Nguyen at (202) 512-6806 or [email protected], or Julia C. Matta at (202) 512-4023 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.
Appendix I: Objectives, Scope, and Methodology
Our objectives were to (1) identify and analyze federal budget accounts with spending authority and permanent appropriations, including the statutory references for the authorities, changes in the number of accounts and dollar amounts since fiscal year 1994, and other relevant information; and (2) describe whether the identified accounts are subject to or exempt from sequestration or subject to any special sequestration rules or limitations.
This report is an update to our previous report that covered spending authority and permanent appropriations using financial data from fiscal years 1985 through 1994. For this report, we analyzed data from fiscal years 1995 through 2015, the most recent year for which data were available when we began our work. We are also providing an online dataset of our inventory of accounts with spending authority and permanent appropriations on our public website at https://www.gao.gov/products/GAO-19-36.
For the purposes of this report, we define spending authority as budget authority made available through laws other than annual appropriations acts. We define a permanent appropriation as budget authority to incur obligations and make payments that is available permanently by law without further legislative action. A permanent appropriation may have been made available through an annual appropriations act or through laws other than the annual appropriations acts. A similar but not identical term for spending authority and permanent appropriations is “backdoor authority”—a colloquial phrase for budget authority that Congress provided in laws other than annual appropriations acts. This includes contract authority and borrowing authority, as well as entitlement authority and the outlays that result from that budget authority. The term “spending authority and permanent appropriations” indicates the authority to incur obligations and make expenditures without further action from Congress. For purposes of this report, spending authority and permanent appropriations include five types of budget authority: contract authority, authority to borrow, monetary credits or bartering, permanent appropriations, and offsetting collections. For more detail on the definitions and the inclusions and exclusions for our inventory of accounts and our reasoning, see appendix II.
Data Sources
To identify and analyze accounts that used spending authority and permanent appropriations during this time frame, we used the Office of Management and Budget’s (OMB) MAX A-11 Data Entry system (MAX). MAX is a computer system used to collect and process most of the information required for preparing the President’s budget for the federal government. Agencies develop their budget information and enter the data into MAX. The data undergo rigorous review by OMB. MAX contains numerous edit checks to help ensure data consistency. Thus, we found the data to be sufficiently reliable for our purposes of identifying our initial inventory of accounts.
We used the Program and Financing Schedule’s Budgetary Resources line number descriptions in OMB Circular A-11—OMB’s guidance to agencies for preparing and submitting budget information—to select line numbers in MAX that align with our definition of spending authority and permanent appropriations.
We reviewed each line number’s description to confirm it met the definition of spending authority and permanent appropriations. We also confirmed with OMB staff our understanding of changes to the line numbers over the years, as well as our approach to implementing exclusions. MAX does not have specific line numbers for monetary credits or bartering—agencies report use of monetary credits as cash equivalents in the budget. This is a broader category than just monetary credits. OMB staff said that agencies are not required to report monetary credits elsewhere. Therefore, we are unable to identify agencies’ use of monetary credits in MAX data.
The table below summarizes the line numbers we analyzed while building our inventory of accounts by authority type. To avoid double counting, we did not include lines that represent totals. For example, line 6300 was not in our scope for fiscal years 1995 through 1998 because it represented total appropriations. However, line 6300 is in our scope for years when it represented reappropriations, which is a form of permanent appropriations that would be included in our scope.
For offsetting collections, we included line numbers labeled in MAX as discretionary or mandatory. Although discretionary spending generally refers to outlays from budget authority that is provided in and controlled by appropriations acts—which would not be spending authority and permanent appropriations—OMB staff said this distinction does not always apply in MAX data. This is partly because, prior to fiscal year 1999, the Program and Financing Schedule did not distinguish between mandatory and discretionary offsetting collections. Although distinct line numbers for mandatory and discretionary collections were added, the designation in MAX is not always correct, according to OMB staff.
For fiscal years 1995 through 1998, lines 6800 to 6885 could represent discretionary or mandatory offsetting collections. Starting with fiscal year 1999, OMB reported discretionary and mandatory collections separately in the Program and Financing Schedule. Discretionary collections were reported on lines 6800 to 6885, and mandatory collections on lines 6900 to 6985. In later years, the numbers changed but the distinction between the two remained.
If any dollar amount was reported in MAX on any of the selected lines for any year from fiscal years 1995 through 2015, we included the account in our initial inventory. Many accounts reported budget authority amounts for more than one line number. In other words, they used different types of spending authority and permanent appropriations, or had multiple uses of the same authority. To the extent possible, we implemented the exclusions described in appendix II into the data, resulting in our initial inventory of accounts.
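As an illustration of this selection rule, the sketch below filters a hypothetical extract of MAX data to build an initial inventory. The column names and the line-number set are assumptions for illustration only; the actual in-scope lines varied by fiscal year, as described above.

```python
import pandas as pd

# Illustrative in-scope line numbers only. The real set varies by fiscal year;
# for example, beginning in fiscal year 1999, lines 6800-6885 carried
# discretionary and lines 6900-6985 carried mandatory offsetting collections.
IN_SCOPE_LINES = set(range(6800, 6886)) | set(range(6900, 6986))

def initial_inventory(max_rows: pd.DataFrame) -> set:
    """Return accounts reporting any amount on an in-scope line in any year.

    Assumes columns: account, fiscal_year, line_number, budget_authority.
    """
    in_scope = max_rows[
        max_rows["line_number"].isin(IN_SCOPE_LINES)
        & (max_rows["budget_authority"] != 0)
    ]
    return set(in_scope["account"])
```

An account enters the inventory if it reports any amount on any in-scope line in any year; exclusions such as federal-source collections would then be applied to this initial set.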
We compared the accounts and authorities identified in our 1996 report with our MAX data for our initial inventory of accounts. We found five accounts that were in the 1996 report but not the MAX data, which still had active budget authority reported in the fiscal years 2013, 2014, or 2015 budgets. We reviewed these for potential inclusion in our inventory and included three authorities.
Data Collection and Confirmation with Agencies
To learn more about the accounts in our initial inventory, we developed a data collection instrument (or worksheet) to send to the agencies. After our review and final agency verification, the results from the worksheets became our final inventory of accounts as shown in our online dataset. We took the following steps for collecting and reviewing agency information.
We asked that the agencies review the data for accounts for which they have responsibility. We asked them to confirm or correct account information that we obtained from MAX and, if applicable, from the 1996 report. If information was unavailable from the 1996 report, we asked agencies to provide it. We asked agencies to review the basic account descriptors (e.g., account names and numbers), MAX line number(s), budget authority type (which we determined based on the line number description), source of offsetting collections (if applicable), and a statutory reference and enactment year for each authority.
We used our 1996 report to identify accounts that may have authority to use monetary credits or bartering, and asked those agencies to confirm this information. We asked agencies to identify any accounts that have authority to use monetary credits or bartering, and to include the source of the monetary credits or bartered items, and identify their value in dollars.
We asked agencies to identify any additional accounts that have spending authority and permanent appropriations that were not presented in the worksheet because they were not identified through MAX.
We confirmed or corrected information in each agency’s completed worksheet and updated our inventory accordingly. We excluded accounts from our inventory if we determined that the authority for the account did not meet the definition of spending authority and permanent appropriations. When possible, we reviewed the President’s Budget Appendix to confirm corrections from the agencies. For some authorities, such as certain offsetting collections, we relied on the agency’s description of whether the account included nonfederal sources to make our decision about inclusion in our inventory.
We had discussions with agencies, as needed, to agree on the presentation of the account information and statutory references.
As described above, to compile our inventory and provide statutory references providing the authorities, we primarily relied on the MAX database and information agencies provided to us. While we made every attempt to confirm the information provided by agencies and provided agencies opportunities to review the information on their accounts, in some cases we included authorities for which neither we nor the agency could determine a statutory reference because we could not rule out the use of spending authority and permanent appropriations. We note that authorities and the statutes providing them can change over time. Our inventory of accounts should therefore not be used as a substitute for original legal research.
For some accounts, agencies identified errors in the MAX budget authority or other fields. We updated our inventory if the agency provided documentation, such as a SF-133, Report on Budget Execution and Budgetary Resources. We did not make changes to the budget authority classifications. For some authorities, we reported no budget authority in certain fiscal years, but MAX contained a budget authority amount. If an account had spending authority and permanent appropriations in only certain years, but reported other budget authority on the same lines that did not meet our definition, we only reported dollar amounts for the spending authority and permanent appropriations, when possible. The changes described above were only applied to our inventory data and not to the MAX database.
From our final inventory, we selected examples of accounts to highlight in the text boxes in our report. We made these selections based on the following criteria: variety of size of agencies, the authority was used sometime from fiscal years 2013 to 2015 (with exception of monetary credit or bartering authority), different examples of how the budget authority was used, and large or easy to understand programs.
Factors that Affect Our Totals for Spending Authority and Permanent Appropriations
There are several factors that affect our reported total spending authority and permanent appropriations. In working with agencies, we were unable to parse out the amounts of budget authority that do not meet our definition. Therefore, our reported budget authority amounts likely overstate the amount of spending authority and permanent appropriations used during the time period of our analysis.
Although some agencies informed us that certain offsetting collections contained collections from federal sources—which would not be considered spending authority and permanent appropriations—we could not reliably subtract the federal sources from all accounts covered in our inventory. While we excluded any account lines that the agencies reported consisted only of collections from federal sources, we did not exclude account lines for which we and the agencies could not reliably separate collections from federal sources from offsetting collections amounts that meet our definition of spending authority and permanent appropriations. As a result, our total budget authority amount for offsetting collections (and our overall totals) contains budget authority which does not meet our definition of spending authority and permanent appropriations.
The budget authority amounts for offsetting collections may also include some amounts that consist of refunds of prior paid obligations. While we excluded any account lines that the agencies reported consisted only of refunds of prior paid obligations, we did not exclude account lines for which we and the agencies could not reliably separate refund amounts from offsetting collections amounts that meet our definition of spending authority and permanent appropriations.
Additionally, our budget authority amounts include sequestered and rescinded amounts, which are usually negative in the MAX database. Sequestered and rescinded funds are not generally available to agencies and therefore do not represent spending authority and permanent appropriations. Some agencies may have reported sequestered amounts in various budget authority lines in MAX, which we cannot reliably identify. Therefore, to treat these amounts consistently, we retained all sequestration-related lines. As a result of this inclusion, our totals are decreased by the negative sequestered amounts. Further, when budget authority amounts were totaled for each agency, some agency totals were negative. These negative values were generally small enough that they did not affect the overall percentages, so we removed them from our rankings of top agency users of permanent appropriations and borrowing authority.
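The effect of these choices on the totals can be illustrated with a minimal sketch, assuming MAX-style account lines have already been classified by type. The column names and dollar figures below are hypothetical; this is not the actual analysis code.

```python
import pandas as pd

# Hypothetical account lines classified by type; sequestration-related
# lines carry negative amounts, as they typically do in MAX.
lines = pd.DataFrame({
    "agency": ["A", "A", "B", "B", "C"],
    "line_type": ["permanent", "sequestration", "offsetting",
                  "sequestration", "sequestration"],
    "amount": [500.0, -25.0, 300.0, -10.0, -5.0],
})

# Retaining all sequestration-related lines means the negative amounts
# reduce each agency's total.
totals = lines.groupby("agency", as_index=False)["amount"].sum()

# Agencies whose totals net to a negative value are dropped from the
# rankings of top users, since negatives do not represent usable authority.
rankings = totals[totals["amount"] > 0].sort_values("amount", ascending=False)
print(rankings)
```

In this sketch, agency C nets to a negative total and therefore drops out of the ranking, mirroring the treatment described above.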
Our inventory includes authorities that may have expired or been repealed during the time period of our analysis, even if the account is still active. We also did not examine whether Congress subsequently restricted or rescinded the agency’s ability to use all or a portion of its spending authority and permanent appropriations. We did not review annual appropriations acts or other legislation to identify the extent to which authorities in our inventory were restricted or rescinded.
Identification of Statutes Providing the Authorities
To note the statutes providing spending authority and permanent appropriations for the identified accounts, we used the worksheets described above, as provided or corrected by the agencies, to collect and review the statutory references and enactment years for each account and type of authority. We reported only the earliest identifiable year of enactment for the statute providing the authority. There are some instances where budget authority data are reported for years prior to the enactment year for an account's authority in our data. This may stem from a variety of factors, including repeal of earlier enacted authorities coupled with newly enacted authorities, and challenges identifying original enactment dates when sections of the U.S. Code were recodified. In other instances, neither we nor the agency could determine a statutory reference because of the age of the data, because the account or agency no longer exists, or for other reasons. These authorities are included in our inventory because we could not rule out the use of spending authority and permanent appropriations. These accounts are categorized in our online dataset in one of two ways: (1) the agency could not provide this information, and we identified a potentially applicable statutory reference; or (2) the statutory reference could not be determined.
Sequestration Designation
To determine whether the identified accounts are subject to or exempt from sequestration, or subject to any special sequestration rules or limitations, we used datasets provided by OMB to identify the sequestration designation for accounts in our final inventory. Sequestration designations include sequestrable, partially sequestrable, exempt, optionally sequestrable, and sequestrable/906.
OMB generates the data annually through a government-wide data collection exercise to calculate the sequestration percentage and reductions by account as part of a report required under the Joint Committee process. For authorities that did not have a sequestration designation in OMB’s data but did report actual budget authority in fiscal years 2013, 2014, or 2015, we asked OMB to provide additional sequestration designation information. Authorities that OMB did not classify, or for which it could not provide additional information, have “None” listed as the sequestration status in our final inventory. The primary dataset we used includes accounts with mandatory budget authority in fiscal year 2015 and the corresponding sequestration designation. However, to identify the sequestration designation for accounts in our final inventory with offsetting collections authority that OMB categorized as discretionary spending, we used the fiscal year 2013 sequestration dataset. The fiscal year 2013 dataset was the most recent available for which sequestration occurred for discretionary spending when we began our work, and we used fiscal year 2015 data to match the end year of our inventory data.
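The order in which these sources were consulted amounts to a simple lookup cascade. The following is a minimal sketch of that decision order, assuming the datasets are keyed by account identifier; the function, data structures, and identifiers are illustrative, not GAO's or OMB's actual systems.

```python
def sequestration_designation(account, fy2015_mandatory,
                              fy2013_discretionary, omb_supplemental):
    """Return a sequestration designation following the order of
    sources described above (illustrative only)."""
    # Primary source: FY2015 designations for accounts with mandatory
    # budget authority.
    if account in fy2015_mandatory:
        return fy2015_mandatory[account]
    # Fallback for offsetting collections OMB categorized as
    # discretionary: the FY2013 dataset, the most recent year in which
    # sequestration of discretionary spending occurred.
    if account in fy2013_discretionary:
        return fy2013_discretionary[account]
    # Accounts with reported budget authority but no designation:
    # additional information requested from OMB.
    if account in omb_supplemental:
        return omb_supplemental[account]
    # Otherwise the inventory lists "None."
    return "None"

# Example usage with a hypothetical account identifier.
print(sequestration_designation("12-3456", {"12-3456": "Exempt"}, {}, {}))
```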
We assessed the reliability of the sequestration datasets based on interviews with OMB staff. OMB staff told us that the data must pass a series of automated checks and are reviewed at several points by OMB staff. Thus, we found the data to be sufficiently reliable for the purpose of identifying the sequestration status of the accounts in our final inventory. We confirmed the definition of each sequestration designation with OMB. In some cases, because of differences in the scope of the data that OMB collected for Joint Committee reports, a sequestration designation was not available. For some of those authorities, OMB provided a designation based on information collected from agencies. We compared the sequestration designation data from our 1996 report to the designations in fiscal year 2015 or 2013, as applicable, to analyze changes over time.
Combining Data to Develop Our Inventory Dataset
We combined data from the worksheets confirmed by the agency into a single dataset to create our final inventory of accounts. We also added the dollar amounts from MAX to create the final dataset we used for analysis in the report. To combine our data, we had to make several decisions to help eliminate double-counting and to simplify the supplemental data that accompanies this report.
We have provided a final inventory dataset—which includes the agency accounts and related budget information, statutory references, enactment years, and sequestration designation—online as a supplement to this report. Table 6 lists the variables and definitions used in our online data.
We analyzed the final combined dataset for trends and compared it with the 1996 report. In some cases, we adjusted the fiscal year 1994 dollars for inflation, as sketched following this paragraph. Once the inventory of accounts was finalized and we completed our review, each agency received a statement of facts to review, which summarized the final inventory information for its agency. Agencies that have examples of accounts highlighted in our report received those excerpted examples specific to their agencies for review and comment. We provided OMB the draft report and online data for review and comment.
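The inflation adjustment referenced above follows the standard real-dollar conversion. A minimal sketch in LaTeX, assuming a single price index P_t is used (this appendix does not name the specific index):

\[ BA_{1994}^{2015\$} \;=\; BA_{1994} \times \frac{P_{2015}}{P_{1994}} \]

As a consistency check, the reported 88 percent real increase and the $3.2 trillion fiscal year 2015 total reported elsewhere in this report imply a fiscal year 1994 baseline of roughly $3.2 trillion / 1.88, or about $1.7 trillion, in fiscal year 2015 dollars.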
We conducted this performance audit from March 2016 to November 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Additional Description of Spending Authority and Permanent Appropriations
This appendix provides additional details on how we defined spending authority and permanent appropriations for the purpose of this report and how we applied the definition to decide which accounts to include or exclude from our inventory. We are defining spending authority as budget authority made available through laws other than annual appropriation acts. Also, we are defining a permanent appropriation as budget authority to incur obligations and make payments that is available permanently by law without further legislative action. A permanent appropriation may have been made available through an annual appropriations act or through laws other than the annual appropriations acts. We are including both in our inventory based on the intent of the request for developing our inventory. Spending authority and permanent appropriations permit obligation and expenditures without further action from Congress. These include permanent appropriations, contract authority, borrowing authority, offsetting collections, and monetary credits or bartering, all of which are defined in table 7.
In building our inventory of accounts with spending authority and permanent appropriations authority, we made categorical decisions on what to include and exclude. We included authorities that met our definition of spending authority and permanent appropriations, as described above in table 7. A particular type of offsetting collections—collections from nonfederal sources that were enacted for the first time in an appropriations act—does not meet our definition, but nonetheless permits obligation and expenditure without further action from Congress, and therefore falls within the purview of this request. We included these authorities in our inventory and included a variable to identify them in our online dataset. Certain types of budget authority do not meet our definition of spending authority and permanent appropriations, as described in table 8.
Appendix III: Spending Authority and Permanent Appropriations Use by Agency and Authority Type, Fiscal Year 2015
Table 9 lists total spending authority and permanent appropriations reported by agency in fiscal year 2015. We listed the 24 agencies covered by the Chief Financial Officers Act of 1990, as amended; Legislative Branch and Judicial Branch entities; and the Executive Office of the President and other entities. We did not include monetary credits or bartering in this table because no agency reported use of this authority in fiscal year 2015.
Appendix IV: Five Largest Permanent Appropriations, Contract, and Borrowing Authority Accounts, Fiscal Year 2015
The tables below list the accounts reporting the largest amounts of budget authority for permanent appropriations, contract authority, and borrowing authority in fiscal year 2015. We did not rank the agencies that reported the largest amounts of offsetting collections because we, and the agencies, when asked, were unable to reliably subtract collections from federal sources or refunds of prior paid obligations. Agencies did not report using monetary credits or bartering in fiscal year 2015.
Appendix V: GAO Contacts and Staff Acknowledgments
GAO Contacts
Staff Acknowledgments
In addition to the contacts named above, Janice Latimer (Assistant Director), Lisa Motley (Assistant General Counsel), Lindsay Swenson (Analyst-in-Charge), Michael Bechetti, Shari Brewster, Charles Culverwell, Ann Marie Cortez, Erika Huber, Susan J. Irving, John Mingus Jr., Katherine D. Morris, Cynthia Saunders, Albert Sim, and Stewart Small made key contributions to this report. | Why GAO Did This Study
Congress can provide budget authority to federal agencies and programs through the annual appropriations process. It can also provide budget authority through laws other than annual appropriations acts, or through permanent appropriations that permit the agency to obligate budget authority without further congressional action. Analysis of these authorities helps provide Congress with visibility into spending authority that is not considered during the annual appropriations process.
GAO was asked to update its 1996 report that had provided an inventory of accounts with spending authority and permanent appropriations for fiscal years 1985 through 1994. This report discusses (1) federal budget accounts with spending authority and permanent appropriations, including the statutory references for the authorities, changes in the number of accounts and dollar amounts since fiscal year 1994, and other relevant information; and (2) whether the identified accounts are subject to or exempt from sequestration, or subject to any special sequestration rules or limitations. GAO also is providing an online dataset of the inventory of accounts with spending authority and permanent appropriations on GAO's public website at https://www.gao.gov/products/GAO-19-36 .
GAO analyzed Office of Management and Budget (OMB) budget data to identify accounts with spending authority and permanent appropriations. GAO reviewed data through fiscal year 2015 because that was the most recent data available when GAO began its work. GAO reviewed agency information to confirm data and statutory authority. Agencies also reviewed and verified the final data for their accounts. For the sequestration designation, GAO analyzed OMB data for fiscal years 2013 and 2015—the most recently completed years for which sequestration occurred and OMB identified designations when GAO began its work.
GAO provided a draft of this report and the online dataset to the Director of OMB for review and comment. OMB staff provided technical comments, which GAO incorporated as appropriate.
What GAO Found
A total of $3.2 trillion in spending authority and permanent appropriations was reported in fiscal year 2015, an increase of 88 percent from fiscal year 1994, adjusted for inflation to fiscal year 2015 dollars. Fiscal year 1994 was the last year included in GAO's prior work. For the purposes of this report, spending authority and permanent appropriations is budget authority provided to agencies through laws other than annual appropriations acts or available permanently by law without further legislation. These authorities include permanent appropriations, contract authority, borrowing authority, offsetting collections, and monetary credits or bartering. Permanent appropriations were the primary driver of the increase in spending authority and permanent appropriations. Offsetting collections authority—which includes certain fees, fines, and penalties—also grew. Agencies reported no use of monetary credits or bartering.
Permanent appropriations fund federal entitlement programs, such as Medicare, administered by the Department of Health and Human Services (HHS), and the Social Security Administration's (SSA) Old-Age, Survivors, and Disability Insurance program. These programs account for a significant proportion of reported budget authority in GAO's inventory of accounts in fiscal year 2015. These programs continue to show spending increases largely as a result of the aging population and rising health care costs, and they are projected to continue to increase in the future. In fiscal year 2015, 7 of the 10 accounts reporting the largest dollar amounts of spending authority and permanent appropriations funded entitlement programs.
Three agencies accounted for three-quarters of the total government-wide spending authority and permanent appropriations in fiscal year 2015.
HHS reported the largest amount of spending authority and permanent appropriations with $979 billion, or about 30 percent—mainly from Medicare. HHS overtook SSA and reported the highest dollar amount of permanent appropriations for the first time in fiscal year 2006.
SSA reported $920 billion, or about 28 percent of total spending authority and permanent appropriations—mainly from its Old-Age and Survivors Insurance program and its Disability Insurance program.
The Department of the Treasury reported the third largest amount—$542 billion, or about 17 percent—the majority of which is for interest on debt held by the public and intragovernmental debt. This interest dropped as a percentage of permanent appropriations since fiscal year 1994, due to lower interest rates that allow the government to borrow money more cheaply. However, interest rates are predicted to rise in the long term, which would increase the net interest costs on the debt.
The second largest reported budget authority type was offsetting collections—a total of $421 billion in fiscal year 2015, more than double the fiscal year 1994 amount, adjusted for inflation. The Postal Service reported the largest use of offsetting collections authority in fiscal year 2015 in its Postal Service Fund, which includes revenue from mail services.
Sequestration—the cancellation of budgetary resources under a presidential order—is a process established in statute that helps enforce spending limits and thereby control the deficit. In fiscal year 2015, 57 percent of spending authority and permanent appropriations authorities were exempt from sequestration, up from 37 percent in fiscal year 1994. This means that fewer of these authorities were subject to this budgetary enforcement mechanism in fiscal year 2015.
Background
VA has faced growing demand by veterans for its health care services, due in part to service members returning from military operations in Afghanistan and Iraq and to the growing needs of an aging veteran population. As part of providing care to millions of veterans, VA is expected to provide a safe environment not only for the veterans, but also for staff and visitors, across a diverse array of VHA facilities.
Although many of these facilities face similar challenges, differences in facilities may require different levels and types of security. For example, medical centers with large numbers of staff, patients, and visitors may require more resources for securing the facility compared to smaller medical centers with fewer people frequenting the facility daily. Some medical centers are located in densely populated urban areas, while others are located in non-urban areas, and their security challenges may differ. For example, facilities in urban areas may be located near busy public roads, making it more difficult to implement physical security enhancements such as barriers or setbacks from the street. Furthermore, some VHA medical centers consist of a single hospital and others may include a campus with many buildings. According to VA officials, these differences can lead to unique security challenges. Medical centers offer different types of services, which can influence the types of security required. For example, officials from multiple medical centers we reviewed told us that emergency rooms and mental health areas experience high levels of security incidents, requiring additional security measures in these areas.
VA specifies various physical security requirements for its medical centers. These include physical access control systems, security cameras, silent alarm distress signaling, and perimeter fencing. Furthermore, each VHA facility has its own police department to help deter, detect, defend against, and respond to security threats. See appendix II for more information regarding the roles and responsibilities of VA police departments. See figure 1 for a depiction of a medical center that consists of a campus and a variety of buildings and examples of the physical security elements deployed.
To determine the specific countermeasures needed at each facility, VA has a two-part risk management process that begins with VA police assessing a facility’s security risk(s) by conducting “vulnerability assessments” biennially (see fig. 2). VA police at each of VHA’s medical centers report the findings, including recommended countermeasures, to medical center directors. These directors are responsible for developing an action plan in response to the assessments and making decisions about if and how recommended countermeasures will be addressed.
Across VA, numerous entities at the headquarters, regional, and local level have some role in carrying out physical security responsibilities. Figure 3 provides an overview of VA components with physical security roles and responsibilities at VHA facilities.
At the headquarters level, VA’s Office of Security and Law Enforcement (OSLE), located within VA’s Office of Operations, Security, and Preparedness, develops policies and standards for assessing physical security risks and providing physical security for facilities under VA’s custody and control, including VHA facilities used for providing healthcare services to veterans. VA organizes its system of care into regional networks called Veterans Integrated Service Networks (VISN). Each VISN is responsible for managing and overseeing medical centers within a defined geographic area. However, the primary operational responsibility for VA’s physical security program is at the medical centers themselves, where the medical center directors at each of VHA’s 170 medical centers are responsible for implementing OSLE’s policies and standards and overseeing VHA police activities. Police at each facility conduct the key activities involved in this program, including conducting risk assessments and identifying needed countermeasures. Beyond risk assessment, VA police have additional responsibilities for protecting the safety of medical centers. For information about their additional responsibilities and oversight of their operations, see appendix II.
The ISC was established via Executive Order 12977 in 1995 to enhance security at federal facilities. Its mission is to develop standards and best practices. ISC's Risk Management Process for Federal Facilities, among other things, includes standards for agencies' facility risk assessment methodologies. This process can help agencies effectively prioritize efforts to protect their facilities. ISC's process consists of six steps designed to help agencies identify the appropriate protective measures for their facilities, and to ensure their effectiveness (see fig. 4).
ISC’s Risk Management Process is applicable to all buildings and facilities in the United States occupied by federal employees for nonmilitary activities, including special-use facilities. Agencies may customize their implementation of elements of ISC’s standards, such as the countermeasures they determine are appropriate for their facilities or situations. Changes to these elements are to be made as a result of a risk-based analytical process. In December 2016, ISC issued its Agency and Facility Compliance Benchmarks to provide guidance to departments and agencies for ensuring compliance with ISC’s standards.
VA’s Risk Management Process Partially Reflects the ISC’s Standard
VA’s risk management process does not fully reflect the standards established by ISC shown in figure 4. Although structured differently, we found that VA’s process includes some elements of ISC’s process but is missing other elements, gaps that could result in risks’ not being fully assessed and appropriate countermeasures not being identified. See figure 5.
Determine facility security level: ISC’s standard requires that facility security levels (I-V) are to be based on an equal weighting of five factors (mission criticality, symbolism, facility population, facility size, and threats) and the consideration of “intangibles.” According to the ISC, each of these factors is important to quantifying a facility’s attractiveness as a target for adversarial acts and the severity of consequences should such an act occur.
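Equal weighting can be expressed compactly. In the sketch below, s_i is the score a facility receives on factor i; the uniform weights reflect the ISC requirement described above, while any particular point scale or level thresholds would be assumptions beyond what is described here:

\[ \text{FSL score} \;=\; \sum_{i=1}^{5} w_i \, s_i, \qquad w_i = \tfrac{1}{5} \text{ for all } i, \]

with the resulting score mapped to a level from I to V and then adjusted, where warranted, for the "intangible" considerations the standard allows.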
VA policy calls for three of the factors to be used in determining a facility’s risk level, which partially reflects the ISC Standard. VA policy indicates that VA police are to identify an “asset risk value” that reflects the expected effect a threat would have to the functioning of VHA facilities and the continued delivery of services. This score is used to calculate an “overall risk value.” The greater the threat a facility faces relative to its physical security posture and the greater the impact on VA operations, the higher the overall risk value. The determination of the overall risk value reflects the ISC’s prescribed use of facility security levels to identify a facility’s level of risk.
VA’s policy does not articulate that factors used to determine the overall risk value be equally weighted, nor does it include facility population and facility size as factors. As a result, VA may not be considering all the relevant risk factors that make a facility a more or less desirable target for threats.
Identify the facility’s baseline countermeasures: The ISC Standard calls for baseline countermeasures to vary based on facility’s risk level. For example, depending on a facility’s security level and the type of undesirable threat posed, the use of X-ray or magnetometers may be required to screen visitors. Alternatively, agencies are allowed to create templates by facility type. That is, an agency can identify the specific risks posed to particular facility types and customize different sets of countermeasures that can serve as the baseline for those facility types.
VA has created templates based on facility types rather than varying its baseline countermeasures relative to a facility's risk level, which is permissible under the ISC Standard. These templates outline the specific minimum countermeasures for different types of facilities or components of VHA facilities, such as medical center pharmacies. VA's minimum requirements for countermeasures in its facilities were designed to meet the needs of the medical center environment and clientele.
Identify and assess risk: ISC has established 33 specific undesirable events that agencies are to use when assessing risks to facilities. Additionally, the ISC requires that an agency’s risk assessment methodology consider three factors—threat, vulnerability, and consequence—in examining these events in order to be credible. Agencies may customize the threats they assess to their specific situations, after having considered the 33 undesirable events. According to ISC officials, agencies are expected to periodically review their list of undesirable events as updates to the standards occur and document determinations and justifications for excluding any undesirable event.
VA has identified 8 categories of threats that VA police are to review as part of vulnerability assessments, which includes consideration of threat, vulnerability, and consequence. These threat categories are: 1) assault, 2) physical threats of violence, 3) illegal weapons, 4) suicidal behavior, 5) theft/vandalism, 6) explosive devices, 7) mail-borne hazards, and 8) protection of hazardous materials and narcotics. This listing reflects the ISC Standard that agencies examine risks from undesirable events.
However, VA cannot demonstrate how its categories relate to ISC’s 33 undesirable events. According to VA officials, VA originally selected its threat categories in 2001 and updated them in 2009 to the current 8 categories. They told us that officials at the time considered the ISC’s full list of undesirable events and that these eight threat categories were and remain the most prevalent in the health care’s operating environment that represents the majority of VHA facilities. However, officials could not provide documentation of how their eight categories related to ISC’s defined undesirable events and why certain undesirable events appear to be included and others excluded within VA’s policies pertaining to risk management. By not reviewing all the undesirable events identified by the ISC, VA may be overlooking some potential threats present at its facilities.
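Both the ISC Standard and VA's vulnerability assessments rest on the same three inputs: threat, vulnerability, and consequence. A common generic way to combine them is a multiplicative score per undesirable event; the minimal sketch below illustrates that formulation only. The event names, 1-to-5 rating scales, and ranking logic are illustrative assumptions, not VA's or the ISC's actual scoring model.

```python
# Generic risk scoring across undesirable events: risk = threat x
# vulnerability x consequence, each rated on an assumed 1-5 scale.
events = {
    # event: (threat, vulnerability, consequence)
    "assault": (4, 3, 3),
    "illegal weapons": (3, 2, 4),
    "mail-borne hazards": (2, 2, 5),
}

def risk_score(threat: int, vulnerability: int, consequence: int) -> int:
    return threat * vulnerability * consequence

# Rank events so the highest-risk ones drive countermeasure recommendations.
ranked = sorted(events.items(), key=lambda kv: risk_score(*kv[1]), reverse=True)
for name, factors in ranked:
    print(f"{name}: {risk_score(*factors)}")
```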
Determine necessary countermeasures: ISC calls for agencies to determine if their baseline countermeasures or templates address a facility’s established risk level following an assessment. ISC has also clarified that its standards allow for countermeasures to be customized to specific facilities and situations. For instance, if the risks from undesirable events at a specific facility are found to be higher or lower than the level of protection afforded by the baseline set of countermeasures, the baseline countermeasures can be changed (up or down) to meet the level of assessed risk.
VA policy calls for police at each of VHA’s medical centers to conduct vulnerability assessments biennially. As a part of these assessments, VA police are to recommend countermeasures that represent the best value in terms of providing protection against multiple threats given the existing level of defense or security equipment. This procedure reflects the ISC Standard that necessary countermeasures be identified at the facility level by an agency’s security organization.
However, VA policy does not require recommended countermeasures to be related to the baselines established in the templates. This policy is inconsistent with the ISC Standard, which calls for countermeasures to be increased or decreased from the baseline to meet the level of assessed risk. This policy could leave staff, patients, and visitors, as well as property vulnerable to unmitigated risks.
Implement countermeasures or accept unmitigated risk: The ISC Standard requires agencies to document decisions, in particular, any decision to reject or defer implementation of countermeasures due to cost (or other factors). The ISC Standard also requires agencies to document the acceptance of risk in these instances and outline alternative strategies considered or implemented, and opportunities in the future to implement needed countermeasures. The ISC Standard notes, in particular, that risks accepted at the facility level may have a bearing on agency-wide risk management efforts and therefore documentation of risk acceptance shall be provided to the headquarters security office.
As previously discussed, medical center directors are to determine if and how to implement recommended countermeasures. This reflects the ISC Standard that information from assessments be forwarded to and used by decision makers. However, VA policy does not require the documentation of risk acceptance. That is, VA has no policy requiring its officials to document the rationale for rejected or deferred countermeasures, proposed alternative mitigations, and future planning. Without such a requirement, OSLE does not have full knowledge of the extent of risk acceptance that has occurred or what alternative countermeasures have been pursued.
Measure performance: According to the ISC Standard, agencies are to assess and document the effectiveness of their security program through performance measurement and testing. Measures should be based on agency mission goals and objectives. As examples of performance measures, the ISC Standard suggests that agencies could track the number of countermeasures in use or the percentage of facility assessments completed. Moreover, the ISC Standard states that agency-level leadership must communicate its priority and commitment to performance measurement and ensure that the physical security performance measures enhance accountability, prioritize security needs, and justify investment decisions to maximize available resources.
VA lacks documented policies or performance measures in place for assessing the effectiveness of its security program, which does not reflect the ISC Standard. VA policy outlines that local medical-facility directors at VHA facilities shall ensure that law enforcement activities (such as vulnerability assessments) are conducted in a legally and technically correct manner, but provides no guidance to ensure uniform measures and processes are being used to assess the performance of security programs. Without a policy that establishes uniform performance measures, VA cannot evaluate the effectiveness of physical security programs being locally implemented across its facilities.
According to VA officials, VA’s risk management process was developed before the ISC’s standard for risk management processes was originally issued in 2013. VA officials we spoke with said as a member of ISC they utilize it as a forum for exchanging ideas on best practices and interpreting the standards but it is then up to each agency to determine how best to apply ISC standards. VA officials said that they are currently reexamining their policies but have not reached out to the ISC for assistance. ISC officials told us they are available to act as resource for any agency requesting aid in developing or reviewing risk management processes.
VA cannot assure that the differences between its process and the ISC Standard are inconsequential to how it identifies and manages risk at local facilities and across its real property portfolio. According to the ISC Standard, not using an appropriate risk-management process can result in facilities that may either have (1) less protection than needed resulting in inadequate security or (2) more protection than needed resulting in an unnecessary use of resources. This situation might reduce the availability of resources that could be applied elsewhere. For example, although all VHA medical centers have the same mission, variations in location and physical configuration of a facility may create unique risks or risks that are relatively higher or lower in some cases than at other VHA facilities with the same mission.
VA Does Not Assess the Effectiveness of Its Risk Management Process
Agencies are expected to manage the effectiveness of program operations in achieving their missions. A range of federal standards and guidance assist agencies in improving the accountability and effectiveness of their programs by helping agencies adapt to shifting environments, evolving demands, changing risks, and new priorities. For example, in July 2016, OMB updated guidance to establish management's responsibilities for enterprise risk management (ERM). ERM is intended to yield an "enterprise-wide," strategically aligned portfolio view of organizational challenges that provides better insight about how to most effectively prioritize resource allocations to ensure successful mission delivery. More specifically, the guidance discusses both internal control and ERM and how these fit together to help manage agency risks. Additionally, Standards for Internal Control in the Federal Government describes internal control as a process put in place by an entity's oversight body, management, and other personnel, a process that provides reasonable assurance that objectives related to operations, compliance, and reporting will be achieved, and that serves as the first line of defense in safeguarding assets. Elements within these standards include: holding people accountable for their responsibilities, having effective operations that produce intended results in a manner that minimizes the waste of resources, and using quality information to achieve objectives.
However, according to OSLE officials, OSLE does not assess program effectiveness. Instead, officials said that OSLE's role in overseeing VHA's risk management process is limited to reviewing the activities of each VHA medical center's police department. Specifically, as it relates to the risk assessment process discussed earlier, the OSLE review focuses on whether (1) vulnerability assessments are completed within the required time frame (at least every 2 years); (2) annual physical security surveys that are used to inform the vulnerability assessments are completed and documented; and (3) intruder detection tests are completed. The OSLE inspectors may also spot-check specific areas to determine whether physical security measures that are in place meet VA's standards. The areas checked are at their discretion and not identified in policy. Findings from these inspections, including any deficiencies identified in physical security, are reported to the medical center director for action.
According to OSLE officials, they do not have any authority to ensure deficiencies are corrected and thus generally do not follow up on the status of their findings prior to the next inspection. Although the results of these inspections are stored by OSLE, we did not find that it uses them to identify trends in security deficiencies or track medical centers’ risk levels.
OSLE does not assess the medical center’s compliance with VA’s overall risk management process, the extent to which recommended security measures have been implemented, or decisions not to implement security recommendations. Furthermore, OSLE does not collect data that would allow it to know what security deficiencies have been identified across all VHA facilities and the status of recommended countermeasures. Because VHA lacks an oversight strategy that includes these elements, it cannot begin to assess the effectiveness of security at its facilities.
The lack of a system-wide oversight strategy is particularly troublesome given the authority and autonomy of medical center directors to determine the appropriate physical security measures needed for their facilities. At the nine medical centers, we found differences in how they implemented the risk management requirements and countermeasures and in how they collected security-related data. Without a strategy for system-wide oversight, VA cannot ensure that local physical-security decisions are based on actual risk, are appropriate to protect the facility, and are effective, or know whether these variations, or their security impact, are important.
Implementing VA’s risk management requirements: A key element of internal controls is having a process in place to hold people accountable and ensure that the agencies’ policies are being implemented as intended. While OSLE’s inspections assess whether the vulnerability assessments were completed, we found that they did not assess the quality of those assessments or whether they aligned with VA’s policy requirements. Specifically, we found differences in how the assessments were done at the nine medical centers we reviewed and that some were not consistently reviewing the full range of threats required by VA policy. For example, none of the vulnerability assessments we reviewed included documentation that all eight of VA’s threat categories were reviewed, and at three locations, no threat categories were documented as reviewed in the assessments. Additionally, in some instances, VA police assessed different threat categories than the required 8 categories. OSLE officials told us that local VHA police have the discretion to review any threats they perceive relevant to their facility; however, they reported that this should be done in addition to the eight threat categories identified in VA guidance. In a decentralized environment such as VA’s, there may be greater risk that VA police will inconsistently apply VA’s risk management process. Furthermore, as discussed earlier, VA has not established performance measures, in accordance with ISC standards, for its risk management process. This, according to the ISC, would help to ensure accountability, prioritize security needs, and justify investment decisions to maximize available resources.
Implementing countermeasures: Internal controls guidance speaks to having effective operations that produce intended results in a manner that minimizes the waste of resources. ERM also speaks to the effective and efficient use of resources. We found wide variation in the progress made in implementing countermeasures across the nine locations we reviewed. This variation happens, in part, because of competing priorities and lack of dedicated physical-security budgets. As a result, medical center directors make localized decisions about where they spend their resources. The police force is responsible for identifying appropriate countermeasures, but it is then up to the medical center directors and the managers in the areas for which deficiencies have been identified to implement the corrective actions. All of the medical center directors we interviewed reported weighing decisions to fund infrastructure deficiencies affecting healthcare delivery versus funding physical security projects. For example, one acting director told us that the center needs to repair a leaking roof in its hospice care unit. The director told us that this project, which uses funding from the same pool of money as physical security projects, will be prioritized because it directly impacts the quality of patient care.
Officials at the sites we reviewed described varying levels of commitment from medical center directors to prioritize physical security infrastructure projects. Officials at one site said that they currently have difficulty getting the resources they request to implement security countermeasures, but that the same had not been the case at previous medical centers where they worked. Specifically, one official noted that it can be difficult to convince a medical center director to fund security measures designed to protect the site from situations that have not yet occurred, such as countermeasures to improve perimeter security or increase standoff distance for critical areas, which are important parts of prevention for active-shooter type scenarios.
One of the key countermeasures medical centers use for physical security is the police force. We noted variations in police staffing at the nine locations we studied. VA policy sets a minimum level for the number of VA police officers who must be on patrol at any given time if certain conditions are met. Some local VHA officials we spoke with said they need to staff above this level because following the minimum staffing level can be problematic when officers are needed to respond to multiple incidents at the same time, such as escorting one patient while responding to a disruptive patient in a different wing of the hospital. Officials noted that incidents can be the driving factor for changes. One site we reviewed increased its police presence in the emergency room in response to a stabbing incident that occurred there.
The critical role that police play at these medical centers can be adversely affected, however, by challenges related to recruiting and retaining law enforcement personnel. All sites we reviewed reported hiring vacancies in their departments, and multiple sites discussed challenges in maintaining police staffing at the recommended level at their facilities, hindering the ability of the police to respond to multiple incidents. As further described in appendix II, each VHA medical center police force is managed locally, under the control of the medical center director.
We also found varying levels of security provided by VA medical centers for their community based outpatient clinics. VA policy does not require a permanent security presence at the community-based outpatient clinics, and medical centers may rely on local police to respond to security incidents. However, some sites we reviewed use contract guards to provide a security presence at outpatient clinic locations, and one site reported completing an effort to staff VA police officers at each of the outpatient clinics under the medical center director’s authority. In the absence of system-wide oversight strategy, VA does not know if these variations in countermeasures are resulting in different levels of security, which may leave some facilities at risk and not be the most strategic use of resources at other facilities.
Tracking security deficiencies: The availability of reliable data is essential for assessing the effectiveness of policies and programs and for allowing managers to make sound decisions. In the absence of a VHA-wide strategy and guidance about how to collect data or track deficiencies, individual sites have established their own processes for tracking the status of identified security deficiencies. For example, one of the medical centers in our review reported 15 deficiencies resulting from its assessment, whereas another medical center reported over 540 deficiencies. In reviewing the data further, we found that the numbers may be misleading as to the extent of security concerns, because of the different ways in which the findings were reported. For example, in reporting the results of inspections of information telecommunication and data closets, one location identified a recurring deficiency as one issue, whereas another location identified a similar deficiency in each closet it inspected, resulting in over 200 identified deficiencies. A system-wide oversight strategy could help VA identify what information is needed to assess the effectiveness of its security programs and the impact of varying practices at its facilities.
In the past, VA collected system-wide information and tracked physical security across medical centers. When VA first started conducting vulnerability assessments in 2010, the assessments were done by a central team directed by OSLE, and the findings were tracked in a central database. In addition, a work group tracked how facilities were meeting VA's standards and requirements and which countermeasures were getting prioritized and implemented. However, VA officials told us that this database crashed and that the information is no longer accessible. Moreover, the central team was dissolved, and medical center directors became fully responsible for ensuring that vulnerability assessments were conducted. The collection or assessment of data also became the responsibility of local medical centers.
Although OSLE has no current plans to re-establish a database, in 2015 the Acting Deputy Under Secretary for Health for Operations Management identified a need for information about the level of security at VHA facilities. He directed VISN management to identify gaps between its facilities and VA's 2015 physical-security design standards. This effort is separate from VA's risk management process but would be expected to identify some of the same security deficiencies. VISNs are expected to use these results to develop and prioritize projects to bring facilities in line with current VA physical security standards.
Conclusions
VA faces the challenge of providing secure, open, and welcoming medical facilities while providing medical care for nearly 9 million veterans annually. Having a process that incorporates ISC standards is critical to ensuring that VA is positioned to appropriately protect its facilities. However, until VA reviews its policies against the ISC standards to explore areas where it differs from those standards, it will not be able to ensure that its approach to risk management has yielded, and will yield, the appropriate security posture relative to the different risks faced by its diverse set of facilities. While not currently required, collaboration with the ISC would be helpful for VA as it reexamines its risk management process. Additionally, the decentralized nature of VA's organizational structure can help VHA tailor its programs to local situations. But without a system-wide oversight process, VA cannot assess the overall performance of its security program and whether medical centers are adequately protected. Thus, it may be missing opportunities to leverage resources nationally or make informed, proactive policy decisions.
Recommendations for Executive Action
We are making the following two recommendations to VA: The Secretary of VA should, in collaboration with ISC, review and revise VA’s risk management policies for VHA facilities to ensure VA incorporates ISC standards, as appropriate. (Recommendation 1)
The Secretary of VA should develop an oversight strategy that allows VA to assess the effectiveness of risk management programs at VHA facilities system-wide. (Recommendation 2)
Agency Comments
We provided a draft of this report to the Department of Veterans Affairs (VA) and Department of Homeland Security (DHS) for comment. In written comments, which are reproduced in appendix III, VA agreed with our conclusions and concurred with our recommendations. In its comments, VA stated that it is in the process of updating its vulnerability assessment program and will work with the ISC to ensure VA is in compliance with applicable standards. VA also stated that it will work with the ISC as VA updates its risk management process to ensure it reflects the applicable standards established by the ISC. VA also intends to evaluate its current roles and responsibilities for assessing internal controls for risk management. VA estimates that it will complete these actions by January 2019. VA also provided a technical comment, which we have clarified in the report. DHS provided only technical comments, which we incorporated as appropriate.
We are sending copies of this report to the appropriate congressional committees; the Secretary of the Department of Veterans Affairs; the Secretary of the Department of Homeland Security; and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
Appendix I: Objectives, Scope, and Methodology
The objectives of our report were to assess (1) the extent to which VA's policies for physical-security risk management reflect elements of federally established risk management standards and (2) VA's oversight of risk management of physical security at VHA facilities. To help inform our research, we reviewed reports and documentation on physical security. For example, we reviewed prior GAO reports on the security of federal government facilities and effective program management, as well as documentation from the Department of Homeland Security's Interagency Security Committee (ISC), including physical security standards developed by the ISC. Our review focused on security at medical facilities under the custody and control of VHA.
To determine how VA policies for physical security risk management reflect key elements of federally established risk management standards, we assessed how VA's methodologies reflect ISC's risk management standards. This included reviewing the Risk Management Process for Federal Facilities (the ISC Standard) for assessing physical security and providing recommended countermeasures at federal facilities. We obtained and analyzed VA's facility-security policies and procedures for a risk management methodology. According to the ISC Standard, agencies' risk management methodologies should: determine the facility security level (FSL); identify the facility's baseline countermeasures; identify and assess risk; determine necessary countermeasures; implement protective measures and/or accept risk; and measure performance. To assess VA's oversight of risk management of physical security at VHA facilities, we identified and examined oversight and management mechanisms at the national, regional, and local levels, including reporting mechanisms that prioritize or track facility risks or the implementation of countermeasures at VHA facilities. We also reviewed Standards for Internal Control in the Federal Government because internal controls play a significant role in helping agencies achieve their mission-related responsibilities using proper oversight mechanisms. To help determine if VA has established an environment in which it can ensure it is achieving its objectives, we reviewed agency documentation, such as vulnerability reports, police inspections, and tracking reports related to security countermeasure recommendations at a non-generalizable sample of 9 VA medical centers. At these locations, we also conducted semi-structured interviews with facility management, VA police, and union representatives to identify the officials' approach to physical security. Our findings from our review of the selected medical centers are not generalizable to all VHA facilities, but they provide insight and illustrative examples about risk-management and oversight methodologies at selected facilities.
We selected these sites based on a mix of criteria that included: (1) geographic location, including medical centers in various Veterans Integrated Service Networks (VISN) and in cities of different sizes; (2) patient volume, including medical centers with a mix of different levels of patient population; (3) reported security incidents, including locations with high and low levels of reported security incidents; and (4) patient-to-incident ratio, including medical centers with high and low ratios of incidents per patient, among other considerations. Based on these criteria, we selected the following nine medical center locations for our review:
1. Bedford, MA
2. Houston, TX
3. Greater Los Angeles
4. Bay Pines, FL
5. Sheridan, WY
6. Washington, D.C.
7. Puget Sound, WA
8. Orlando, FL
9. Louisville, KY
Considering the extent to which VA uses its police force in its risk management approach, we also reviewed the lines of authority and oversight for VA police personnel. For example, we identified VA's police-reporting structures and data-collecting efforts.
We conducted this performance audit from September 2016 to January 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions, based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Overview of VA Police Departments’ Roles and Responsibilities
The Department of Veterans Affairs (VA) police consist of over 4,000 uniformed police officers in 153 police units across the nation. Each VHA medical center has, in effect, its own police force.
Key Activities
Aside from VA’s role in assessing physical security risks, VA police’s day- to-day role at VHA medical centers largely revolves around their law enforcement functions. Specifically, police officers patrol medical center campuses in an effort to deter, detect, defend, and respond to threats to patients and staff. Officers can make arrests for violations of federal law, can confiscate drugs, alcohol or other contraband, and can conduct investigations and collect basic evidence to the extent necessary to determine whether a crime has been committed. In addition, VA police officers might respond to incidents involving disruptive patient behavior— a continual concern for staff at VHA facilities, according to officials from the sites we spoke with. Staff can alert VA police to such incidents through means such as duress alarm systems at their facilities, and at some locations we spoke with, police respond as part of multi-disciplinary teams that try to de-escalate incidents involving disruptive patients. For example, police can be on the Disruptive Behavior Committees at their facilities. These multi-disciplinary committees review incidents involving disruptive patients and can suggest mitigations for future incidents including placing a “flag” on a patient’s record. These flags alert staff to prior concerns with a patient’s behavior and may include instructions for preventative measures such as a requiring the patient to check in with VA police when arriving on campus or requiring the patient to have a police escort while at the facility.
VA police at some sites included in our review described challenges officers face when responding to incidents. For example, according to VA police officials, not all incidents involving disruptive patients constitute a violation of the law, limiting the ability of a police officer to intervene. Police officials spoke about trying to de-escalate situations first, before making arrests or physically intervening in an altercation. Furthermore, VA police officers are limited in their authority to engage in certain actions, such as pursuing non-federal offenses, investigating crimes off campus, and carrying service weapons off campus, officials told us. In addition, some VA police we spoke with stated that the Assistant U.S. Attorney's office is reluctant to prosecute veterans, so the VA police do not have much leeway or leverage in detaining, arresting, or pressing charges against patients or visitors. For example, according to VA police officials from one site we spoke with, the Assistant U.S. Attorney declined to prosecute a stabbing incident. As a result, the VA police had to work with local police to refile the charges and pursue prosecution through the state court.
As a part of the policing role, police have various reporting responsibilities. For example, police officers are expected to report their daily operational activity into a computerized database called the VA Police System that: (1) documents all criminal activity at the medical centers, (2) records daily incident reporting at each facility in a 24-hour period, and (3) lists all individuals who come into contact with VA police. VA police chiefs at each location use this data to generate a localized Unified Crime Report (UCR) for each campus. Each police chief maintains his or her own UCR, which can include all incidents reported by officers, from petty theft to homicide. VA police are to conduct predictive analysis of crime patterns and adjust patrols or investigative activities accordingly.
In addition to recording all activities into the database, VA police are required to report certain incidents, including those likely to result in national media or congressional attention, to VA’s Integrated Operations Center through a Serious Incident Report. Police officers are required to report serious incidents as soon as possible, but no later than 2 hours after becoming aware of the incident. Reportable incidents include, among others, sexual or aggravated assaults and VA police-involved shootings. Integrated Operations Center staff provide reports and real-time information on these incidents to the Secretary and the VA administrators for their awareness; however, the staff do not conduct their own investigations into incidents. Officials from the Office of Security and Law Enforcement told us that they have started pulling together internal, monthly rollups of law-enforcement-related serious incident reports. These reports are provided to VA police chiefs to inform them of serious incidents and provide situational awareness of law enforcement and criminal activity at VHA medical centers across the nation. The reports contain law-enforcement-sensitive information; they are intended for internal VA police use in crime analysis specific to VA law enforcement matters affecting VA campuses and are not to be released to the public or to individuals or organizations outside law enforcement.
Police Oversight and Management
The Office of Security and Law Enforcement (OSLE) develops and issues policies and procedures for physical security, law enforcement, and training activities for VA police. In addition, OSLE and VISN police chiefs share responsibility for the police inspection program described in this report. OSLE does not, however, provide centralized command over police chiefs or officers; that oversight and management of VHA police is carried out by the senior leadership at each local medical center. Police chiefs set the standard operating procedures for their departments and report to an associate or assistant medical director, who provides daily supervision and approves their performance management appraisals. Medical center directors are ultimately responsible for hiring VA police officers and funding their training through VA’s Law Enforcement Training Center.
If allegations of police misconduct arise, the local VA police departments, and specifically the police chiefs, are responsible for investigating these claims. According to officials we spoke with, police misconduct can be reported through multiple channels: directly through the medical center, to the VA Inspector General complaint hotline, or, in some instances, directly to OSLE within VA’s headquarters. OSLE’s Criminal Investigation Division will generally investigate criminal allegations and, if appropriate, will refer issues to the U.S. Attorney for action. OSLE does not have supervisory authority over the VA police departments, so any administrative actions must be taken by the local medical center officials.
Appendix III: Comments from the Department of Veterans Affairs
Appendix IV: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the individual named above, Maria Edelstein (Assistant Director); William Carpluk; Raymond Griffith; Geoffrey Hamilton; Joshua Ormond; Amy Rosewarne; Friendly Vang-Johnson; and Elizabeth Wood made key contributions to this report.
Why GAO Did This Study
The Veterans Health Administration (VHA) is responsible for providing a safe and secure, yet welcoming, environment for staff, patients, and visitors at nearly 170 medical centers. These facilities have been the target of violence, threats, and other security-related incidents. Assessing and managing risks is a critical element of ensuring adequate physical security at these facilities.
GAO was asked to review VA's physical security risk-management policies and practices. This report: (1) assesses how VA's policies for risk management reflect prevailing standards, and (2) evaluates VA's oversight of risk management at VHA medical facilities. GAO compared VA policies to ISC standards; reviewed VA documents; interviewed VA and ISC officials; and assessed risk assessment activities at nine medical centers selected based on factors such as patient and security-incident data and geographical diversity. While not generalizable, these nine locations provide illustrative examples of how VA's policies are carried out.
What GAO Found
The Department of Veterans Affairs' (VA) risk management policies include some but not all of the elements of standards set by the Interagency Security Committee (ISC). ISC was established via executive order to develop security standards and best practices that federal agencies are to follow when developing and conducting risk assessments. As part of this process, VA's policy identifies minimum countermeasures as called for in ISC's standards. In other areas, VA policy only partially adheres or does not adhere to ISC's standards, for example:
Of the five factors ISC calls for when calculating a facility's security level, VA considers three but does not consider a facility's population and size.
VA policy does not include performance measures, such as the number of countermeasures in use or the percentage of facility assessments completed; performance measurement is a key element of ISC's standards for assessing the effectiveness of an agency's security programs.
Officials at VA said that its risk management program was developed before the ISC standards were issued in 2013 and that it is up to each agency to determine how best to apply the standards. Nevertheless, VA officials said they are currently reexamining their policies. Until VA reviews its policies in accordance with ISC standards, its approach to risk management may not yield the appropriate security posture needed to adequately protect its medical centers.
VA's oversight activities for risk management do not encompass key aspects of the Standards for Internal Control in the Federal Government and Circular A-123 from the Office of Management and Budget that require agencies to conduct oversight activities to ensure the accountability and effectiveness of agency programs. VA has an oversight process to ensure that biennial assessments of individual facilities' security are completed. However, VA:
does not review the quality of medical centers' required risk assessments,
does not identify whether countermeasures were implemented appropriately by the medical centers, and
does not collect system-wide data to gain an understanding of physical security issues across medical centers.
In the absence of a comprehensive VA-wide strategy or guidance that reflects these internal control standards, individual sites have established their own approaches to carrying out VA's risk management policy. For example, the nine sites GAO reviewed conducted their security assessments differently, and none of the assessments indicated that all of the threat categories in VA's policy were reviewed. The lack of a system-wide oversight strategy means that the differences among medical center approaches, along with the security effects of those different approaches, are unknown. Accordingly, VA does not know if its medical centers are adequately protected, and it may be missing opportunities to leverage resources nationally and make better informed, proactive policy decisions.
What GAO Recommends
GAO recommends that the Department of Veterans Affairs review and revise its risk management policies to reflect prevailing standards, and develop an oversight strategy to assess the effectiveness of risk management programs at VHA facilities. VA agreed with GAO's recommendations and identified steps to implement them.
Background
BSA/AML Regulation and Enforcement for Banks and Money Transmitters
The BSA established reporting, recordkeeping, and other AML requirements for financial institutions. Several federal agencies share responsibility for regulation and enforcement under the BSA. FinCEN is responsible for administering the BSA and has authority for enforcing compliance with its requirements and implementing regulations, including through civil money penalties. FinCEN issues regulations under the BSA and has delegated BSA/AML examination authority for banks to the federal banking regulators. The federal banking regulators have issued their own BSA regulations that require banks to establish and maintain a BSA/AML compliance program. The federal banking regulators may take enforcement actions for violations of BSA/AML requirements. They may also assess civil money penalties against financial institutions and individuals independently, or concurrently with FinCEN.
Both federal and state agencies oversee money transmitters. FinCEN has delegated examination authority for BSA compliance for money transmitters to the Internal Revenue Service (IRS). Money transmitters must register with FinCEN and provide information on their structure and ownership. According to Treasury, in all states except one, money transmitters are required to obtain licenses from states in which they are incorporated or conducting business.
All banks and money transmitters are required to establish an AML compliance program that includes policies, procedures, and processes which, at a minimum, must provide for (1) a system of internal controls to ensure ongoing compliance, (2) a designated individual or individuals responsible for managing BSA compliance (BSA compliance officer), (3) training for appropriate personnel, and (4) independent testing for BSA/AML compliance. Additionally, as of May 11, 2018, banks and certain other financial institutions are required to implement appropriate risk-based procedures for conducting ongoing customer due diligence. Banks must also have policies and procedures for opening accounts and verifying the identity of each customer and monitoring transactions and reporting suspicious activity. Finally, banks and money transmitters must comply with certain reporting requirements, including the following:
CTR: A bank must electronically file a CTR for each transaction in currency—such as a deposit or withdrawal—of more than $10,000.
SAR: Banks are required to electronically file a SAR when a transaction involves or aggregates at least $5,000 in funds or other assets, and the institution knows, suspects, or has reason to suspect that the transaction meets certain criteria qualifying as suspicious.
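To make the two thresholds concrete, the sketch below applies them to a small batch of transactions. This is a simplified illustration rather than FinCEN's actual filing logic: the record fields are hypothetical, the same-day cash aggregation is a simplification of the CTR aggregation rules, and the suspicion flag stands in for criteria that in practice rest on bank judgment.

# Illustrative sketch of the two reporting thresholds described above.
# Simplified for illustration: real CTR aggregation and SAR determinations
# involve additional regulatory rules and bank judgment.
from collections import defaultdict
from dataclasses import dataclass

CTR_THRESHOLD = 10_000  # currency transactions of more than $10,000
SAR_THRESHOLD = 5_000   # suspicious transactions of at least $5,000

@dataclass
class Transaction:
    customer_id: str
    business_day: str       # e.g., "2016-03-14"
    amount: float
    is_currency: bool       # cash deposit/withdrawal vs. other transfer
    looks_suspicious: bool  # stand-in for judgment-based criteria

def reports_to_file(transactions):
    """Return (CTR keys, SAR transactions) for a batch of transactions."""
    # Aggregate same-day currency activity per customer before testing the
    # $10,000 threshold (a simplification of the BSA aggregation rules).
    daily_cash = defaultdict(float)
    for t in transactions:
        if t.is_currency:
            daily_cash[(t.customer_id, t.business_day)] += t.amount
    ctrs = [key for key, total in daily_cash.items() if total > CTR_THRESHOLD]
    sars = [t for t in transactions
            if t.looks_suspicious and t.amount >= SAR_THRESHOLD]
    return ctrs, sars

batch = [
    Transaction("A", "2016-03-14", 6_000, True, False),
    Transaction("A", "2016-03-14", 5_500, True, False),  # same-day cash totals $11,500: CTR
    Transaction("B", "2016-03-14", 7_200, False, True),  # suspicious transfer of at least $5,000: SAR
]
print(reports_to_file(batch))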
Remittance Transfer Methods
Remittances can be sent through money transmitters and banks, among other organizations. International remittances through money transmitters and banks may include cash-to-cash money transfers, international wire transfers, some prepaid money card transfers, and automated clearinghouse transactions. If a remittance sender’s bank does not have a direct relationship with the remittance recipient’s bank, the bank-to-bank transfer scenario becomes more complicated. In such cases, one or more financial institutions may rely upon correspondent banking relationships to complete the transaction. A typical remittance sent through a bank may be in the thousands of dollars, while the typical remittance sent by money transmitters is usually in the hundreds of dollars.
Historically, many consumers have chosen to send remittances through money transmitters due to convenience, cost, familiarity, or tradition. Money transmitters typically work through agents—separate business entities generally authorized to, among other things, send and receive money transfers. Money transmitters generally operate through their own retail storefronts, or through grocery stores, financial services outlets, convenience stores, and other retailers that serve as agents. Figure 1 shows a common type of money transmitter transaction known as a cash-to-cash transfer.
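As a rough sketch of the mechanics, the example below walks a single cash-to-cash transfer through a sending agent, the transmitter, and a receiving agent. The fee rate and settlement notes are illustrative assumptions; actual pricing and agent settlement terms vary by transmitter and corridor.

# Simplified sketch of a cash-to-cash remittance through a money transmitter.
# The fee rate and settlement notes are illustrative assumptions.
def cash_to_cash_transfer(amount_usd, fee_rate=0.05):
    """Sender hands cash to a sending agent; recipient collects cash abroad."""
    fee = round(amount_usd * fee_rate, 2)      # retained by transmitter/agents
    collected_from_sender = amount_usd + fee   # cash taken in by sending agent
    paid_to_recipient = amount_usd             # cash paid out by receiving agent
    # The transmitter later settles with both agents, netting collections
    # against payouts; that settlement has traditionally moved through bank
    # accounts, which is why account closures push some transmitters toward
    # cash couriers, as discussed later in this statement.
    return {"collected_from_sender": collected_from_sender,
            "fee": fee,
            "paid_to_recipient": paid_to_recipient}

# A typical transmitter remittance is in the hundreds of dollars.
print(cash_to_cash_transfer(200.00))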
Remittances to Case Study Countries
Remittances from the United States are an important source of funds for our case-study countries—Haiti, Liberia, Nepal, and Somalia. The Organisation for Economic Co-operation and Development identified these countries as fragile states because of weak capacity to carry out basic governance functions, among other things, and their vulnerability to internal and external shocks such as economic crises or natural disasters.
Risks Related to Money Laundering Appeared to Be a Factor in Reduced Access to Banking Services for Southwest Border Customers
In our February 2018 report, we found that money laundering risk is high in the Southwest border region because of the high volume of cash transactions, the number of cross-border transactions, and foreign account holders. Our nationally representative survey found that many Southwest border banks may be engaging in derisking. Nationally, our econometric analysis suggested that counties that were urban, younger, had higher income, or had higher money laundering-related risk were more likely to lose branches. Money laundering-related risks were likely to have been relatively more important drivers of branch closures in the Southwest border region.
Southwest Border Banks Reported Heightened BSA/AML Compliance Risks and Challenges Due to Volume of High-Risk Customers
In February 2018, we reported that money laundering risk is high in the Southwest border region because of the high volume of cash transactions, the number of cross-border transactions, and foreign account holders, according to bank representatives, federal banking regulators, and others we spoke with. Cash transactions increase the BSA/AML compliance risk for banks because the greater anonymity associated with using cash results in greater risk for money laundering or terrorist financing. Our review of data on banks’ CTR filings confirmed that bank branches that operate in Southwest border region counties handled more large cash transactions than bank branches elsewhere. Specifically, in 2016, bank branches in Southwest border region counties filed nearly 30 percent more CTRs, on average, than bank branches in comparable counties elsewhere in their same state, and about 60 percent more than those in other high-risk counties outside the region. Similar differences occurred in 2014 and 2015.
We also reported that cross-border transactions are at a higher risk for money laundering because international transfers can present an attractive method to disguise the source of funds derived from illegal activity. Southwest border banks cited foreign account holders as another type of high-risk customer for money laundering and terrorist financing. These types of customers are prevalent in the Southwest border region, examiners said, and can create challenges for banks to verify and authenticate their identification, source of funds, and source of wealth.
The volume of high-risk customers and cross-border transactions can lead to more intensive account monitoring and investigation of suspicious transactions, Southwest border bank representatives said. Performing effective due diligence and complying with customer identification requirements can be more challenging for higher-risk customers and transactions because banks might need more specialized processes than for those that are lower risk. Southwest border bank representatives we spoke with said addressing these compliance challenges can also require more resources for monitoring high-risk customers and investigating suspicious transactions. For example, in 2016, bank branches in the Southwest border region counties filed three times as many SARs, on average, as bank branches operating in other counties within Southwest border states and about 2.5 times as many SARs, on average, as bank branches in other high-risk financial crime or drug trafficking counties in nonborder states. These differences in SAR filings showed a similar pattern in 2014 and 2015.
Some Account Terminations and Limitations Were Consistent with BSA/AML Purposes
In February 2018, we found that most Southwest border banks reported terminating accounts for reasons related to BSA/AML risk. Based on our survey results, from January 1, 2014, through December 31, 2016, we estimated that almost 80 percent of Southwest border banks had terminated personal or business accounts for reasons related to BSA/AML risk. The most common reasons related to BSA/AML risk Southwest border banks reported for terminating accounts were the filing of SARs associated with the accounts, the failure of the customer to respond adequately to requests for information as part of customer due diligence processes, and the reputational risk associated with the customer type (an estimated 93 percent, 80 percent, and 68 percent, respectively). Of the high-risk businesses for money laundering and terrorist financing that we identified in our survey, cash-intensive small businesses (for example, retail stores, restaurants, and used car dealers) were the most common type of business account that Southwest border banks reported terminating for reasons related to BSA/AML risk; over 70 percent of Southwest border banks reported terminating these accounts.
A majority of Southwest border banks and banks that did not operate in the Southwest border region (non-Southwest border banks) reported limiting or not offering accounts to certain types of businesses considered high risk for money laundering and terrorist financing, particularly money services businesses and foreign businesses. The most common reason (cited by 88 percent of Southwest border banks) for limiting, or not offering, an account to these types of businesses was that the business type fell outside of the bank’s risk tolerance—the acceptable level of risk an organization is willing to accept around specific objectives. In addition, 69 percent of Southwest border banks cited the inability to manage the BSA/AML risk associated with the customer (for example, because of resource constraints) as a factor for limiting, or not offering, accounts. Similarly, the most common reason that non-Southwest border banks reported for limiting, or not offering, accounts to certain types of businesses considered high risk for money laundering and terrorist financing was that the customer type fell outside of the bank’s risk tolerance.
Other Account Terminations and Limitations Raised Concerns about Derisking
Further, in February 2018 we found that the second most common reason—cited by 80 percent of Southwest border banks—for limiting, or not offering, accounts to certain types of businesses considered high risk for money laundering and terrorist financing was that the customer type drew heightened BSA/AML regulatory oversight—behavior that could indicate derisking. For example, representatives from one Southwest border bank explained that they no longer offer accounts to money services businesses because they want to be viewed favorably by their regulator. They added that banking for these types of customers is very high risk for the bank with very little reward. Another bank that operates in the Southwest border region explained that rather than being able to focus on its own BSA/AML risk assessment and the performance of accounts, it feels pressured to make arbitrary decisions to close accounts based on specific concerns of its examiners.
Several Southwest border bank representatives also described how recent BSA/AML law enforcement and regulatory enforcement actions have caused them to become more conservative in the types of businesses for which they offer accounts. In addition, while banks may terminate accounts because of SAR filings as a method to manage money laundering and terrorist financing risk and to comply with BSA/AML requirements, some of these terminations may be related to derisking. For example, some Southwest border bank representatives we spoke with for our Southwest border report, as well as other banks and credit unions we spoke with in a February 2009 review, told us that they have filed SARs to avoid potential criticism during examinations, not because they thought the observed activity was suspicious. Non-Southwest border banks also commonly cited the inability to manage risk associated with the customer type and heightened regulatory oversight as reasons for limiting, or not offering, accounts.
Southwest Border Bank Branch Closures Have Been Concentrated in a Small Number of Communities
Counties in the Southwest border region have been losing bank branches since 2012, similar to national and regional trends, as well as trends in other high-risk financial crime or drug trafficking counties that are outside the region. In February 2018, we found that most of the 32 counties (18 counties or nearly 60 percent) comprising the Southwest border region did not lose bank branches from 2013 through 2016, but 5 counties lost 10 percent or more of their branches over this time period (see top panel of fig. 2). Those 5 counties are Cochise, Santa Cruz, and Yuma, Arizona; Imperial, California; and Luna, New Mexico.
Within those counties we identified as having the largest percentage loss of branches, sometimes those losses were concentrated in smaller communities within the county (see bottom panel of fig. 2). For example, Calexico in Imperial County, California, lost 5 of its 6 branches from 2013 through 2016. In Santa Cruz County in Arizona, one zip code in Nogales accounted for all of the branch losses in the county from 2013 through 2016, losing 3 of its 9 branches. More generally, branch losses varied substantially across different zip codes in a county (see, for example, the bottom panel of fig. 2). In other instances, counties that lost a relatively small share of their branches contained communities that lost a more substantial share—for example, San Ysidro in San Diego County lost 5 of its 12 branches (about 42 percent) while the county as a whole lost only 5 percent of its branches from 2013 through 2016.
Based on our analysis, counties losing branches in the Southwest border region tended to have substantially higher SAR filings, on average, than Southwest border region counties that did not lose branches. That is, counties that lost branches from 2013 through 2016 had about 600 SAR filings per billion dollars in deposits, on average, and counties that did not lose branches had about 60 SAR filings per billion dollars in deposits, on average (see fig. 3).
Empirical Evidence Suggested Demographic and Money Laundering-Related Risk Factors Are Drivers of Branch Closures
The econometric models we developed and estimated for our February 2018 report generally found that demographic and money laundering-related risk factors were important predictors of national bank branch closures. In general, our results suggested that counties were more likely to lose branches, all else equal, if they (1) were urban, had a higher per capita personal income, and had a younger population (proportion under 45); or (2) were designated as a HIFCA or HIDTA county, or had higher SAR filings. We termed the latter three characteristics (HIFCA, HIDTA, and SAR filings) “money laundering-related risk factors.”
The demographic characteristics associated with branch closures in our results were consistent with those associated with the adoption of mobile banking; as such, our results were consistent with the hypothesis that mobile banking is among the factors leading some banks to close branches. The most urban counties were about 22 percentage points more likely to lose one or more branches over the next year than the most rural counties. A county with 70 percent of the population under 45 was about 9 percentage points more likely to lose one or more branches over the next year than a county with half the population under 45. A county with per capita income of $50,000 was about 7 percentage points more likely to lose one or more branches over the next year than a county with per capita income of $20,000.
Money laundering-related characteristics of a county were also important predictors of branch closures in our models. HIDTA counties were about 11 percentage points more likely to lose one or more branches over the next year than non-HIDTA counties (the effect in HIFCA counties is less significant statistically and smaller in magnitude). A county with 200 SARs filed per billion dollars in bank deposits was about 8 percentage points more likely to lose one or more bank branches over the next year than a county where no bank branch had filed a SAR.
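A stylized version of this kind of county-level model is sketched below, with simulated data standing in for the actual county panel. The variable names, the logit specification, and the simulated effect sizes are assumptions for illustration; they are not GAO's actual model, which is documented in the February 2018 report. The average marginal effects printed at the end are the analogue of the percentage-point differences quoted above.

# Stylized sketch of a county-level branch-closure model. Variable names,
# the logit form, and the simulated data are illustrative assumptions; this
# is not GAO's actual specification.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 3_000  # roughly the number of U.S. counties

urban = rng.uniform(0, 1, n)               # 0 = most rural, 1 = most urban
income_k = rng.uniform(20, 50, n)          # per capita income, $ thousands
share_under_45 = rng.uniform(0.4, 0.7, n)  # proportion of population under 45
hidta = rng.binomial(1, 0.2, n)            # drug trafficking area designation
sars_per_bil = rng.exponential(100, n)     # SAR filings per $1B in deposits

# Simulate closures with effect directions matching the reported findings.
latent = (-2.0 + 1.0 * urban + 0.03 * income_k + 2.0 * share_under_45
          + 0.5 * hidta + 0.004 * sars_per_bil)
lost_branch = rng.binomial(1, 1 / (1 + np.exp(-latent)))

X = sm.add_constant(np.column_stack(
    [urban, income_k, share_under_45, hidta, sars_per_bil]))
fit = sm.Logit(lost_branch, X).fit(disp=False)

# Average marginal effects: the analogue of "percentage points more likely
# to lose one or more branches" quoted in the text.
print(fit.get_margeff().summary())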
Money laundering-related risk factors were likely to have been relatively more important drivers of branch closures in the Southwest border region because it had much higher SAR filings and a larger share of counties designated as HIDTAs than the rest of the country. More generally, given the characteristics of Southwest border counties and the rest of the United States, our models suggested that while demographic factors have been important drivers of branch closures in the United States overall, risks associated with money laundering were likely to have been relatively more important in the Southwest border region.
Southwest border bank representatives we interviewed told us they considered a range of factors when deciding whether or not to close a branch. Nearly half of the Southwest border bank representatives we spoke with (4 of 10) mentioned that BSA/AML compliance costs could be among the factors considered in determining whether or not to close a branch.
Money Transmitters Serving Selected Fragile Countries Noted Loss of Banking Access, Although Treasury Saw No Reduction in Remittance Flows
In March 2018, we found that money transmitters serving Haiti, Liberia, Nepal, and especially Somalia reported losing bank accounts or having restrictions placed on them, which some banks confirmed. As a result, some money transmitters relied on nonbanking channels, such as cash couriers, to transfer remittances. All of the 12 money transmitters we interviewed at the time reported losing some banking relationships in the last 10 years. Some money transmitters, including all 4 that served Somalia, said they relied on nonbanking channels, such as moving cash, to transfer funds, which increased their operational costs and exposure to risks. Further, in our interviews some banks reported that they had closed the accounts of money transmitters because of the high cost of due diligence actions they considered necessary to minimize the risk of fines under BSA/AML regulations. Treasury officials noted that despite information that some money transmitters have lost banking accounts, Treasury saw no evidence that the volume of remittances was falling or that costs of sending remittances were rising.
All Money Transmitters We Interviewed Reported They Lost Bank Accounts, Which for Many Resulted in Higher Costs and a Shift to Nonbanking Channels
All 12 money transmitters we interviewed for our March 2018 report stated that they or their agents had lost accounts with banks during the last 10 years. All 4 Somali money transmitters and many agents of the 2 Haitian money transmitters we spoke with reported they had lost some bank accounts, and 2 of the 4 Somali money transmitters reported losing all bank accounts. Additionally, all 4 large money transmitters that process transfers globally (including to our case-study countries of Haiti, Liberia, and Nepal) also reported that their agents had lost accounts. Almost all of the money transmitters said they also faced difficulties in getting new accounts. While some money transmitters said the banks that closed their accounts did not provide a reason, in other cases, money transmitters said the banks told them that they had received pressure from regulators to terminate money transmitter accounts.
As a result of losing access to bank accounts, several money transmitters, including all of the Somali money transmitters, reported that they were using nonbanking channels to transfer funds. In some cases the money transmitter was forced to conduct operations in cash, which increased the risk of theft and forfeitures and led to increased risk for agents and couriers. Nine of the money transmitters that we interviewed reported they rely on couriers or armored trucks to transport cash domestically (to the money transmitter’s main offices or bank) or, in the case of Somalia, internationally. Money transmitters reported they use cash couriers either because the money transmitter or their agents had lost bank accounts or because it was cheaper to use armored trucks than banks to move funds.
Money transmitters we interviewed reported increased costs associated with moving cash and bank fees. Two of the money transmitters we spoke to stated that they did not have options other than to pay any fees the bank required due to the difficulty in finding new bank accounts. Money transmitters with access to bank accounts reported that bank charges for services had in some cases doubled or tripled, or were so high that it was less expensive to use a cash courier. For example, some money transmitters stated that their banks charged a monthly fee for compliance-related costs that ranged from $100 a month to several thousand dollars a month.
Some Banks Reported Closing or Denying Accounts for Money Transmitters, Citing Insufficient Profit to Offset Risks and Costs
Most of the banks we interviewed for our March 2018 report expressed concerns about account holders who are money transmitters because they tended to be low-profit, high-risk clients. Most of the banks we interviewed that serve money transmitters stated that BSA/AML compliance costs have significantly increased in the last 10 years because they had to hire additional staff and upgrade information systems to conduct electronic monitoring of all transactions processed through their system. Some banks indicated in our survey and interviews that the revenue from money transmitter accounts was at times not sufficient to offset the costs of BSA/AML compliance, leading to terminations and restrictions on money transmitter accounts. A few banks we interviewed stated that they do not allow money transmitters to open accounts because of the BSA/AML compliance resources they require.
Banks also expressed concerns over the adequacy of money transmitters’ ability to conduct due diligence on the money transmitter’s customers. A few banks we interviewed expressed concern that they would be held responsible if, despite the bank carrying out due diligence, authorities detected an illicit transaction had been processed through the bank on behalf of a money transmitter.
Treasury Officials Said Remittance Flows to Fragile Countries Have Not Declined; Remittance Senders Reported No Major Difficulties
In our March 2018 report, we found that Treasury officials reported remittances continue to flow to fragile countries even though money transmitters faced challenges. Through engagement with money transmitters and banks, Treasury found some evidence of money transmitter bank account closures. However, according to Treasury officials, World Bank estimates of remittance flows show that the volume of international transfers from the United States has continued to increase. At the same time, World Bank data indicate that the global average cost of sending remittances has continued to decrease. Citing these trends, and anecdotal evidence from Treasury’s engagement with banks, the officials stated that there were no clear systemic impacts on the flow of remittances from closures of money transmitter bank accounts and correspondent banking relations.
Treasury officials acknowledged that such closures can be a significant challenge for money transmitters that serve certain regions or countries, including Somalia. Further, Treasury officials said they were aware that some Somali money transmitters resorted to nonbanking channels by carrying cash overseas. They noted that although physically moving cash is risky, it is not unlawful. Additionally, Treasury officials stated that the use of cash couriers to remit funds had not been a concern for regulators because this practice had not increased the remittance fees that money transmitters charge their consumers.
Remittance senders in the United States who remit to our case-study countries reported that they frequently used money transmitters and had not encountered major difficulties in sending remittances. Senders told us that they generally preferred using money transmitters over other methods because money transmitters were cheaper than banks and were quicker in delivering the funds than other methods. In addition, money transmitters were often more accessible for recipients collecting the remittances because the money transmitters had more locations than banks in recipient countries. However, some remittance senders told us that they were unable to send large amounts of money through money transmitters.
Regulators Have Not Evaluated All Factors Influencing Banks to Derisk and Treasury Lacks Data Needed to Assess Possible Effects on Remittance Flows
In February 2018 we reported that to address concerns about derisking, FinCEN and the federal banking regulators had taken actions including issuing guidance to banks and conducting some evaluations to assess the extent to which derisking is occurring. However, the actions regulators had taken to address concerns raised in their BSA/AML regulatory reviews were limited in scope (for example, they focused primarily on the burden resulting from the filing of CTRs and SARs) and had not evaluated all factors that may influence banks to derisk or close branches. Moreover, in March 2018 we found that Treasury could not assess the effects of money transmitters’ loss of banking access on remittance flows because existing data did not allow Treasury to identify remittances transferred through banking and nonbanking channels.
Regulators Issued Guidance and Took Some Actions Related to Derisking
In February 2018, we reported that FinCEN and the federal banking regulators responded to concerns about derisking on a national level by issuing guidance to banks and conducting some evaluations within their agencies to understand the extent to which derisking is occurring. The guidance issued by regulators was aimed at clarifying BSA/AML regulatory expectations and discouraging banks from terminating accounts without evaluating risk presented by individual customers or banks’ abilities to manage risks. The guidance generally encouraged banks to use a risk-based approach to evaluate individual customer risks and not to eliminate entire categories of customers. Some of the guidance issued by regulators attempted to clarify their expectations specifically for banks’ offering of services to money services businesses, including money transmitters. For example, in March 2005, the federal banking regulators and FinCEN issued a joint statement on providing banking services to money services businesses to clarify the BSA requirements and supervisory expectations as applied to accounts opened or maintained for this type of customer. The statement acknowledged that money services businesses were losing access to banking services as a result of concerns about regulatory scrutiny, the risks presented by these types of accounts, and the costs and burdens associated with maintaining such accounts.
The agencies issuing these guidance documents told us they took some steps to assess the effect of their guidance on bank behavior. For example, Treasury officials said that Treasury periodically engaged with banks and money transmitters on an ad hoc basis to learn their views and gain insight into their concerns. According to Federal Reserve officials, anecdotal information suggested that some money transmitters lost bank accounts after FinCEN and federal banking agencies issued the joint guidance in 2005, and that outcome was contrary to the regulators’ intent. To address concerns about the guidance, according to these officials, Treasury held several public discussions on money transmitter account terminations.
In addition to issuing guidance, FDIC and OCC took some steps aimed at trying to determine why banks may be terminating accounts because of perceived regulatory concerns. For example, in January 2015, FDIC issued a memorandum to examiners establishing a policy that examiners document and report instances in which they recommend or require banks to terminate accounts during examinations. From January 2015 through December 2017, FDIC officials stated that examiners had not documented any recommendations or requirements for account terminations. In 2016, OCC reviewed how the institutions it supervises develop and implement policies and procedures for evaluating customer risks as part of their BSA/AML programs and for making risk-based determinations to close customer accounts. OCC focused its review on certain large banks’ evaluation of risk for foreign correspondent bank accounts. This effort resulted in OCC issuing guidance to banks on periodic evaluation of the risks of foreign correspondent accounts. The federal banking regulators also met with residents and businesses in the Southwest border region to discuss concerns about derisking in the region.
Treasury and the federal banking regulators also participated in a number of international activities related to concerns about the decline in the number of correspondent banking and money services business accounts. For example, FDIC, OCC, and the Federal Reserve participate in the Basel Committee on Banking Supervision’s Anti-Money Laundering/Counter Financing of Terrorism Experts Group. Recent efforts of the group involved revising guidelines to update and clarify correspondent banking expectations. Treasury leads the U.S. engagement with the Financial Action Task Force—an intergovernmental body that sets standards for combating money laundering, financing of terrorism, and other related threats to the integrity of the international financial system—which has issued guidance on correspondent banking and money services businesses.
BSA/AML Regulatory Reviews Had Not Evaluated All Factors Influencing Banks to Derisk and Close Branches
Executive orders encourage, and legislation requires, FinCEN and the federal banking regulators to review existing regulations to determine whether they should be retained, amended, or rescinded, among other things. Retrospective reviews help agencies evaluate how existing regulations are working in practice. Recent presidents have directed agencies to evaluate or reconsider existing regulations. In addition to the executive orders, the Economic Growth and Regulatory Paperwork Reduction Act (EGRPRA) requires federal banking regulators to review the regulations they prescribe not less than once every 10 years and request comments to identify outdated, unnecessary, or unduly burdensome statutory or regulatory requirements.
In February 2018, we reported that FinCEN and the federal banking regulators had all participated in retrospective reviews of different parts of the BSA/AML regulations. For example, FinCEN officials told us that they review each new or significantly amended regulation to assess its clarity and effectiveness within 18 months of its effective date. As part of fulfilling their requirements under EGRPRA, the federal banking regulators—through the Federal Financial Institutions Examination Council (FFIEC)—have also participated in retrospective reviews of BSA/AML regulations.
As part of the 2017 EGRPRA review, FFIEC received several public comments on BSA/AML requirements, including comments on increasing the threshold for filing CTRs, the SAR threshold, and the overall increasing cost and burden of BSA compliance. FinCEN officials and the federal banking regulators stated that the agencies are working to address the BSA-related EGRPRA comments—particularly those related to CTR and SAR filing requirements—through the BSA Advisory Group (BSAAG).
However, the actions FinCEN and the federal banking regulators took related to derisking were not aimed at addressing and, if possible, ameliorating the full range of factors that influence banks to engage in derisking, in particular banks’ regulatory concerns and BSA/AML compliance efforts. Further, the actions regulators took to address concerns raised in BSA/AML retrospective reviews focused primarily on the burden resulting from the filing of CTRs and SARs; these actions did not evaluate how regulatory concerns may influence banks to engage in derisking or close branches. Federal internal control standards call for agencies to analyze and respond to risks to achieving their objectives. Further, guidance implementing executive orders states that agencies should consider conducting retrospective reviews of rules that have been overtaken by unanticipated circumstances. In February 2018, we concluded that without assessing the full range of BSA/AML factors that may be influencing banks to derisk or close branches, FinCEN, the federal banking regulators, and Congress would not have the information they need to determine if adjustments are needed to ensure that the BSA/AML regulations and their implementation are achieving their regulatory objectives in the most effective and least burdensome way.
U.S. Data on Remittances Did Not Allow Treasury to Assess the Effects of Money Transmitters’ Loss of Banking Access on Remittance Flows to Fragile Countries
In March 2018, we found that Treasury could not assess the effects of money transmitters’ loss of banking access on remittance flows because existing data did not allow Treasury to identify remittances transferred through banking and nonbanking channels.
Recent efforts to collect international remittance data from banks and credit unions did not include transfers these institutions make on behalf of money transmitters. Because these data collection efforts are designed to protect U.S. consumers, the remittance data that banks and credit unions report are limited to remittances individual consumers send directly through these institutions. Additionally, as of the first quarter of 2018, about half of the states (24) had adopted reports to collect remittance data from money transmitters, and of these, 12 states had made it mandatory to report remittance data by destination country. However, these data do not distinguish money transmitters’ use of banking and nonbanking channels to transfer funds.
Finally, we found that while Treasury has a long-standing effort to collect information on travelers transporting cash from U.S. ports of exit, this information did not identify cash transported for remittances. We concluded that without information on remittances sent through banking and nonbanking channels, Treasury could not assess the effects of money transmitter and foreign bank account closures on remittances, especially shifts in remittance transfers from banking to nonbanking channels for fragile countries. Nonbanking channels are generally less transparent than banking channels and thus more susceptible to the risk of money laundering and other illicit financial transactions. Additionally, while risks associated with shifts of remittances to nonbanking channels may vary by country, these risks are likely greater for fragile countries, such as Somalia, where the United States has concerns about terrorism financing.
Conclusions and Recommendations for Executive Action
The collective findings from our work indicate that BSA/AML regulatory concerns have played a role in banks’ decisions to terminate and limit accounts and close branches. However, the actions taken to address derisking by the federal banking regulators and FinCEN and the retrospective reviews conducted on BSA/AML regulations had not fully considered or addressed these effects. As a result, in our February 2018 report, we recommended that FinCEN and the three banking regulators in our review—FDIC, the Federal Reserve, and OCC—jointly conduct a retrospective review of BSA/AML regulations and their implementation for banks, focusing on how banks’ regulatory concerns may be influencing their willingness to provide services. In their written responses, the Federal Reserve, FDIC, and OCC agreed to leverage ongoing interagency work reviewing BSA/AML regulations and their implementation for banks to address our recommendation. GAO requested comments from Treasury, but none were provided.
A lack of data on remittances sent through banking and nonbanking channels limits the ability of Treasury to assess the effects of money transmitter and foreign bank account closures on remittances, in particular shifts of remittances to nonbanking channels for fragile countries. Therefore, in the March 2018 report we recommended that Treasury assess the extent to which shifts in remittance flows from banking to nonbanking channels for fragile countries may affect Treasury’s ability to monitor for money laundering and terrorist financing and, if necessary, identify corrective actions. GAO requested comments from Treasury, but none were provided.
Chairman Luetkemeyer, Ranking Member Clay, and members of the Subcommittee, this concludes my statement. I would be pleased to respond to any questions you may have.
GAO Contact and Staff Acknowledgments
If you or your staff have any questions about the issues related to access to banking services along the Southwest border in this testimony or the related report, please contact Michael E. Clements, Director, Financial Markets and Community Investment, at (202) 512-8678 or [email protected]. For questions about the issues related to remittance flows to fragile nations in this testimony or related report, please contact Thomas Melito, Managing Director, International Affairs and Trade, at (202) 512-9601, or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Lawrance Evans, Jr. (Managing Director), Stefanie Jonkman (Assistant Director), Mona Sehgal (Assistant Director), Christine McGinty (Analyst in Charge), Kyerion Printup, Madeline Messick, and David Dayton. Other staff who made key contributions to the reports cited in the testimony are identified in the source products.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study
In recent years, some Southwest border residents and businesses reported difficulty accessing banking services, including experiencing bank account terminations and bank branch closings in the region. In addition, the World Bank and others have reported that some money transmitters have been losing access to banking services with depository institutions.
This statement is based on findings from GAO's February 2018 report on access to banking services along the Southwest border (GAO-18-263) and March 2018 report on the effects of derisking on remittance flows to fragile countries (GAO-18-313). GAO discusses (1) the extent to which banks are terminating accounts and closing branches in the Southwest border region, (2) the extent to which money transmitters serving selected fragile countries are facing banking access challenges, and (3) actions relevant U.S. agencies have taken to respond to these challenges. For those reports, GAO surveyed more than 400 banks, developed an econometric model on the drivers of branch closures, and conducted case studies on four countries to assess the effects of derisking on remittance flows.
What GAO Found
“Derisking” is the practice of depository institutions limiting certain services or ending their relationships with customers to, among other things, avoid perceived regulatory concerns about facilitating money laundering or other criminal activity such as financing to terrorist groups. In its February 2018 report, GAO found that money laundering risk is high in the Southwest border region because of the high volume of cash transactions, the number of cross-border transactions, and foreign account holders. According to GAO's nationally representative survey of banks, an estimated 80 percent (+/- 11) of Southwest border banks limited or did not offer accounts to customers that are considered high risk for money laundering because the customers drew heightened Bank Secrecy Act/anti-money laundering (BSA/AML) oversight—behavior that could indicate derisking. Nationally, GAO's econometric analysis suggested that counties that were urban, younger, had higher income, or had higher money laundering-related risk were more likely to lose branches.
In March 2018, GAO found that money transmitters (businesses that facilitate global money transfers) serving Haiti, Liberia, Nepal, and especially Somalia— countries it identified as fragile—all reported losing bank accounts or having restrictions placed on them during the last 10 years. As a result, 9 of the 12 money transmitters GAO interviewed, including all 4 that served Somalia, reported using channels outside the banking system (hereafter referred to as nonbanking channels), such as transporting cash to transfer funds, and that this increased their operational costs and exposure to risks. Furthermore, some banks GAO interviewed reported that they closed the accounts of money transmitters because of the high cost of due diligence actions they considered necessary to minimize the risk of fines under BSA/AML regulations. Department of the Treasury (Treasury) officials noted that despite information that some money transmitters have lost bank accounts, Treasury saw no evidence that the volume of remittances was falling or that costs of sending remittances were rising.
To address concerns about derisking, Treasury and federal banking regulators (the Board of Governors of the Federal Reserve System, the Office of the Comptroller of the Currency, and the Federal Deposit Insurance Corporation), have taken actions including issuing guidance to banks and conducting some evaluations to assess the extent to which derisking is occurring. While agencies were engaged in BSA/AML regulatory reviews, these were limited in scope and had not evaluated how regulatory concerns may influence banks to engage in derisking or to close branches. Without assessing the full range of BSA/AML factors that may be influencing banks to derisk or close branches, Treasury, the federal banking regulators, and Congress do not have the information needed to determine if BSA/AML regulations and their implementation can be made more effective or less burdensome. Moreover, in March 2018 GAO reported that Treasury could not assess the effects of money transmitters' loss of banking access on remittance flows because existing data did not allow Treasury to identify remittances transferred through banking and nonbanking channels. Nonbanking channels are generally less transparent than banking channels and thus more susceptible to the risk of money laundering and terrorism financing.
What GAO Recommends
GAO made five recommendations in the two reports: to Treasury and the federal banking regulators to conduct a retrospective review of BSA/AML regulations and their implementation, and to Treasury to assess shifts in remittance flows to nonbanking channels. Banking regulators agreed with the recommendations. GAO requested comments from Treasury, but none were provided.
Background
Set-top boxes provide a variety of functions, including enabling consumers to access their video subscriptions. They also secure the video provider’s content to ensure that the subscriber can access only the channels subscribed to, and prevent unauthorized use, such as recording of content that subscribers do not have the right to record. Among other features, set-top boxes may also allow subscribers to:
view a channel guide and search for programming and record content for later viewing;
view linear programming—meaning video programming that appears on a given channel at a given time; and
view video on demand—meaning video programming available for consumers to access when they want to instead of at a specific time.
Traditionally, video content flows from content producers to households through various intermediaries (see fig. 1). Content producers negotiate and agree to a variety of terms and conditions with the networks or local television stations that carry the content, and those networks further negotiate and agree to terms and conditions with the MVPDs that distribute the content to subscribers. For example, a content producer may agree that in addition to its program showing on the linear cable channel at a specific time, its program is also available on demand, but only for a specific period of time. Furthermore, networks may negotiate for and agree to a range of terms with MVPDs regarding channel placement and other items. Protections programmed into the set-top box help ensure that such agreements are implemented.
For over two decades, federal statutes and regulations have sought to foster consumer choice for video services and devices to access such services. The Cable Television Consumer Protection and Competition Act of 1992, for example, requires FCC to report annually on the status of competition in the video marketplace. Furthermore, Section 629 of the Communications Act of 1934, as amended by the Telecommunications Act of 1996 (“the Act”), directed FCC to assure the commercial availability of devices that access MVPD service (which currently are typically set-top boxes) by making them available from third parties unaffiliated with MVPDs.
In response to the Act, FCC adopted regulations in October 2003 that allowed the direct connection of digital navigation devices (typically, set-top boxes) purchased from third parties to MVPD systems. To receive and display MVPD content, these devices require a CableCARD, a card provided by a subscriber’s MVPD and installed in the third-party set-top box or other device, which allows the subscriber to view the secure content they subscribe to through their MVPD. As a result, such third-party devices, which remain available today, are known as CableCARD devices.
Subsequent to the adoption of its CableCARD regulations, as noted earlier, FCC issued a Notice of Proposed Rulemaking in February 2016 that was intended to provide consumers additional choice for set-top boxes. In the proposed rule, FCC tentatively concluded that despite the availability of CableCARD devices, the market for navigation devices (such as set-top boxes) was not competitive, citing a previous analysis that found that approximately 99 percent of MVPD subscribers continued to lease a set-top box from their MVPD. Therefore, FCC stated in the proposed rule that it should adopt new regulations. Moreover, FCC stated that technological advances since the CableCARD regulations had been adopted enabled new solutions that, with certain ground rules, would make it easier to finally fulfill the purpose of the Act. One goal of the proposed rule was to allow third party manufacturers to create new devices and user interfaces—the means through which users interact with a set-top box such as the menus, remote control, and methods of searching for programming—to access MVPD services. For such devices to work, the proposal required MVPDs to transmit to third party devices video programming content and data about that programming, including channel listings and schedules and data on what programming subscribers are entitled to access. Such devices, as proposed, would not rely on a CableCARD and would be compatible with any MVPD’s service. As such, as envisioned by FCC, the proposed rule would enable a consumer to switch MVPDs without having to change the set-top box.
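The sketch below gives one possible shape for the data the proposed rule would have required MVPDs to pass to third-party devices: guide data (channel listings and schedules) and entitlement data (what the subscriber may view or record), with the protected video streams flowing separately. The class and field names are hypothetical, since the proposal did not prescribe a concrete data format.

# Hypothetical sketch of the information flows described above; the FCC
# proposal did not specify a concrete data format, so names are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Listing:          # channel guide data: what airs where and when
    channel: str
    program: str
    start_time: str     # e.g., "2016-02-18T20:00"

@dataclass
class Entitlement:      # what this subscriber may access or record
    channel: str
    can_view: bool
    can_record: bool

@dataclass
class MVPDFeed:         # data an MVPD would transmit to a third-party device
    listings: List[Listing] = field(default_factory=list)
    entitlements: List[Entitlement] = field(default_factory=list)
    # The programming itself would flow separately as protected video streams.

feed = MVPDFeed(
    listings=[Listing("ESPN", "College Football", "2016-02-18T20:00")],
    entitlements=[Entitlement("ESPN", can_view=True, can_record=False)],
)
print([e.channel for e in feed.entitlements if e.can_view])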
In September 2016, after receiving input from a wide range of stakeholders, the former FCC Chairman issued a three-page fact sheet providing an overview of a proposed final rule that the Chairman scheduled for Commission vote in September 2016. According to the fact sheet, MVPDs would have been required to offer consumers a free electronic application (commonly referred to as an “app”), controlled by the MVPD, that subscribers could download onto a variety of Internet-capable devices, such as tablets and smartphones, to access the programming they subscribe to. Under this scenario, control over the user interface would have been maintained by the MVPD, not the third-party device manufacturer, as the original proposed rule envisioned. However, the former Chairman ultimately deleted the proposed final rule from the list of items scheduled for consideration at the September 2016 meeting, and action on the proposed rule is no longer pending for consideration.
Set-Top Boxes Play a Significant but Diminishing Role in Delivering Programming in an Evolving Video Market
The Internet Provides Opportunities for Viewing Video Programming without the Need for a Set-Top Box
While over 75 percent of households still subscribe to MVPDs for video services and rely on a set-top box leased from their provider to access content, the Internet has created more opportunities for consumers to access video programming in ways that do not require a leased set-top box. Internet-based providers vary in the types of video services they offer:
Content aggregators (e.g., Netflix and Amazon): These providers offer video on-demand through a subscription. They aggregate content from multiple sources and may provide their own content (e.g., Netflix’s original series House of Cards) along with content from other programmers. There are also niche aggregators such as Indie Flix that provide specialized programming.
Direct to Consumer (e.g., CBS All Access, HBO Now, and Univision Now): Some programmers and networks that distribute their content through MVPDs are now separately providing live and on-demand content directly to consumers through the Internet for a monthly subscription. Consumers do not have to subscribe to an MVPD to subscribe to such content. For example, HBO provides its content on demand to its customers through the HBO Now app without requiring a customer to subscribe through an MVPD.
Virtual Service Providers (e.g., Sling TV, DIRECTV Now, and PlayStation Vue): These providers use a model similar to the MVPD model, providing live and on-demand programming from a variety of networks over the Internet in generally smaller channel lineups. Such services are targeted to households looking for a smaller channel lineup at a lower cost than from MVPDs.
According to Kagan, subscriptions to content aggregators and direct-to-consumer Internet-based services are expected to grow from 109 million in 2016 to 137 million in 2020. Many new Internet video services have launched since 2005, with particularly strong growth since 2014. (See fig. 2.)
Subscribers can access Internet-based video services using many different Internet-connected devices and do not need a set-top box. These devices include stand-alone devices such as video game consoles (e.g., Xbox One), laptops, tablets, smartphones, and smart TVs—which include an integrated computer with an Internet browser, operating system, and apps to stream Internet video subscriptions without a separate device. Third-party manufacturers have also developed streaming media devices (e.g., Roku) designed to allow viewers to watch Internet-provided content on their television set. Some of these devices, such as tablets, allow consumers to view video programming content in or out of the home with an Internet connection. Figure 3 below shows the variety of devices, including set-top boxes, households can use to access video programming.
These new Internet-based providers offer greater choice in video services, eliminating the need to lease a set-top box for households that subscribe to one or more of these providers in lieu of an MVPD subscription. According to Kagan, the percentage of households subscribing to MVPDs fell from a peak of approximately 91 percent of households wired for service in 2009 to 79 percent in 2016, and Kagan estimates that in 2016 there were 29 million households that had either cancelled their MVPD subscription or never had one. Additionally, Kagan projects a continued decline in MVPD video subscriptions through 2020, when 74 percent of households will subscribe to MVPDs, in part due to competition from Internet video programming. Eight of 11 industry experts and analysts we interviewed also stated that they believed MVPDs' market share is falling due in part to Internet video. Based on an online survey it conducted of households that never had an MVPD subscription, Kagan reports that many in this group are generally younger and have lower incomes than other households, and have in the past relied on over-the-air television because the cost of MVPD service is too high. One industry expert told us that it is unclear what will happen to these younger households' viewing habits as they age. According to this expert, in the past these younger non-subscribers would eventually subscribe to MVPDs as their incomes grew, but it is no longer clear that this will happen, due in part to Internet video options.
While consumers are increasingly subscribing to Internet programming that does not require a set-top box, the market for alternative devices to access programming is also growing. According to Kagan, sales of these alternative devices, such as streaming media devices and smart TVs, have risen. (See fig. 4.) For example, Kagan estimates that 70 percent of television shipments in 2016 were smart TVs.
MVPDs Generally Still Require a Set-Top Box but Have Offered Subscribers Additional Ways to Access Video in Response to the Changing Marketplace
Many subscribers to MVPDs are still reliant on at least one set-top box, usually leased from their provider, to access video programming. In the wake of FCC's 2003 CableCARD regulations, third-party providers developed CableCARD devices that consumers could purchase at retail outlets and use to access their MVPD subscription with a CableCARD. Such devices remain available. For example, one of the better-known of these options, the TiVo set-top box, was available on Amazon.com as of July 2017. However, in spite of the commercial availability of these devices, according to FCC in its 2016 proposed rule, about 99 percent of subscribers to MVPDs lease at least one set-top box from their MVPD. While all five of the large cable providers we interviewed said that their customers have the option of using a third-party device, they all added that very few customers do so and the majority lease their set-top box. All five of the large cable providers we interviewed cited limited customer interest as a key reason consumers did not adopt third-party CableCARD devices. Each also cited one or more of the following reasons: limited functionality, including limited ability to access on-demand content when devices were first available; high up-front costs to purchase a third-party device; and the ease of leasing a set-top box from a provider, which will replace the box if it breaks, compared to owning a third-party device that the consumer may have to replace at his or her own expense.
However, public interest organizations we interviewed stated they believe that the low rate of adoption of CableCARD devices was due to limited support from MVPDs. Specifically, representatives of one public interest group we interviewed stated that MVPDs have not been advocates of third-party devices and have not devoted customer service resources to this effort by, for example, training their technicians. They also stated that MVPDs have made it difficult for customers to use CableCARD devices by, for example, requiring technicians to install the CableCARD. Representatives of one public interest group also stated that because MVPDs charge their customers a monthly fee for using CableCARD devices, as they do for a set-top box, customers have little financial incentive to adopt these alternative devices. Another public interest group stated that MVPDs do not make their subscribers aware of their ability to purchase and use such devices.
Although subscribers to MVPDs generally need a set-top box to access the content they subscribe to, many MVPDs are also offering their video programming over the Internet and through alternative devices. For example, according to Kagan, MVPDs have started to allow consumers to access their subscription content via the Internet in and out of the home, on multiple devices, and when they want, for example:
Many cable networks allow subscribers to MVPDs that carry that network to access live or on-demand content through an app or website specific to that network. MVPDs do not develop or control these apps and websites. Such service is often referred to as “television everywhere.” Kagan forecasts that views of Internet-based television everywhere from MVPDs will increase from approximately 5.4 billion views in 2016 to 11 billion views in 2020. All nine of the larger MVPDs we interviewed told us that their customers can access some “television everywhere” content online.
Many MVPDs have also developed their own apps allowing their subscribers to access a range of content. Eight of the nine larger MVPDs told us they have developed apps for Internet-capable devices such as smartphones and tablets that allow their subscribers to access content in and out of the home. Such apps may allow for viewing both live and on-demand content. For example, consumers can use a Comcast application on their smartphone outside their home to view content. In addition, some MVPDs have developed apps for streaming devices such as Roku. In some, but not all, cases such apps can be used as a replacement for a set-top box; however, only three of the nine larger MVPDs we interviewed said that their subscribers may be able to use apps and alternative devices to access their subscriptions without the need for any set-top box. For example, one MVPD told us that customers can use an app on a Roku streaming device to access content without needing any set-top boxes.
These changes by MVPDs may be due to competition from new Internet-based services; 10 out of 11 industry experts and analysts we interviewed told us that MVPDs are providing access to their programming through alternative devices other than set-top boxes due to such competition.
Despite growth in alternative devices and services, a Kagan report and the MVPDs we interviewed indicated that set-top boxes will still play an important role in the near future for accessing video content from MVPDs, as the industry replaces many current set-top boxes with higher-end versions. For example, the set-top box for one MVPD we interviewed now provides advanced functions such as voice control, universal searching, and increased storage of programming. All nine larger MVPDs we interviewed told us that they foresee the set-top box still playing a role in their service in the near future, and only three said their customers may be able to access their subscriptions solely on alternative devices without the need for a set-top box. One MVPD told us that although it sees video providers moving to apps on their own in the future, there will still be an option for consumers to access content from their set-top box. This MVPD has made upgrades to its set-top box to provide more features and has incorporated Internet video applications such as Netflix directly into its set-top box. Additionally, eight out of the 11 experts and industry analysts we interviewed said that they expect the set-top box to continue to be needed for traditional provider services for households in the future. One expert stated that the set-top box is the most efficient way to access and deliver programming, and that it remains the best solution for consumers and an important component of video programming.
Some Consumers May Have Difficulty Taking Advantage of Internet Services That Do Not Require a Set-Top Box
While the Internet has provided consumers with more choice for accessing video programming without subscribing to an MVPD and using an associated set-top box, consumers must have broadband access to be able to use these alternative products. However, FCC, in a 2016 broadband progress report, estimated that 10 percent of the population does not have adequate access to in-home fixed broadband Internet and the lack of broadband access is particularly concentrated in rural and tribal areas.
Although subscriptions to broadband Internet service are rising as those to MVPD video services are declining, most households are dependent upon MVPDs to receive broadband Internet service. According to FCC, 97 percent of consumers are reliant on their MVPD for broadband service, and according to Kagan the 10 largest video providers account for 91 percent of broadband subscriptions. However, as we recently reported, continuing technological changes may provide new options for obtaining access to broadband: in the future, wireless Internet access may be able to serve as a substitute for in-home broadband for some consumers, and satellite-provided Internet service may also become an option for consumers who do not have access to in-home wired broadband. For example, Kagan expects wireless broadband to become an increasingly viable substitute for consumers as higher speeds become available.
Experts and Stakeholders Suggest Additional FCC Efforts on Choice in Set-Top Boxes Are Not Needed, but FCC Has Conducted Limited Analysis of This Issue
Generally, Selected Stakeholders and Experts Did Not See a Need for FCC Regulation to Increase Consumer Choice for Set-Top Boxes
Most selected stakeholders and industry experts we spoke to did not see a need for FCC to intervene in the set-top box market at this time, given the changes taking place that provide consumers with more choices for services and devices to access video programming. All 11 of the experts and analysts we interviewed said that the industry is moving away from set-top boxes on its own by providing content through other means, and 9 of those 11 added that, as a result, there is no need for FCC regulatory intervention. Furthermore, only 8 of the 35 industry stakeholders we interviewed stated that regulations are needed. Stakeholders who saw no need for regulation pointed to the development of apps and devices beyond set-top boxes that consumers can use to access video content. For example, one of the larger MVPDs said that competitive pressures have pushed the company to offer consumers new ways and devices with which to access the content they subscribe to.
However, representatives of all three public interest organizations we interviewed said that FCC regulations are still needed to promote consumer choice for devices. Specifically, representatives of one public interest organization we interviewed said that although the market has evolved to provide more device choices for consumers, the fact that almost all MVPD subscribers lease a set-top box shows that the intent of the Act has not yet been met. They added that while MVPDs have been increasing the development of apps for their subscribers to access content, these apps so far do not have all the functionality of leased set-top boxes, meaning that the apps are not an adequate substitute. As discussed earlier, despite the growth in apps, most larger MVPDs we interviewed still require their subscribers to have at least one set-top box.
Some Experts and Industry Stakeholders Raised Concerns about the Potential Effects of FCC’s Recent Proposal to Expand Consumers’ Choices for Devices
Some industry stakeholders, experts, and analysts we interviewed thought that FCC's proposed rule could have had negative effects on MVPDs as well as other industry participants, including content providers. As discussed earlier, the proposed rule would have required MVPDs to transmit information—including video programming itself—to third-party devices. According to representatives of one industry association we interviewed, this could have meant that MVPDs, and the programmers whose content they distribute, would lose control over content that they had created or purchased the distribution rights to. Programmers negotiate terms and conditions—such as channel lineup and other issues—with MVPDs that distribute their content. Some stakeholders expressed concern that under the proposed rule there would be no guarantee that third-party device and service companies would adhere to all the terms and conditions under which that content was provided to the MVPDs. Some MVPDs and programmers expressed concern that some third-party device companies might modify the stream of programming by, for example, changing channel placement or overlaying advertising.
Five of the 11 experts and analysts we interviewed thought that the proposed rule could have led to copyright violations. Almost all larger MVPDs, broadcast networks, and independent and diverse programmers and interest groups we interviewed expressed concerns that should there be copyright violations, content providers could also be negatively affected. For example, one industry association said that if a third-party device were to overlay advertising on a program, the value of advertising availability that is usually sold by broadcast or cable networks or by cable distributors would decrease, since there might be competing advertising displayed to viewers. This stakeholder added that any reduced ad revenues would, in turn, reduce the ability to invest in content. Seven of the 11 experts and analysts we interviewed reported that the proposed rule could negatively affect content providers. Furthermore, some stakeholders told us that they believed the possible negative effects of the proposed rule could have especially affected independent and diverse programmers such as Vme, a national Spanish-language network. According to one independent and diverse programmer we interviewed, its business is dependent upon agreements with MVPDs that distribute its programming. Those agreements include a range of terms, including advertising restrictions and channel placement. To the extent a third party could modify the content—such as by overlaying advertising—that programmer would have a harder time negotiating with MVPDs, potentially reducing the compensation received from MVPDs for carrying its channel, thus harming its business model. Furthermore, according to a letter written by the Copyright Office, the proposed rule could have interfered with the rights of copyright owners to license their works by requiring MVPDs to provide content to third parties that would not necessarily have a contractual relationship with the copyright owner.
However, some other stakeholders we interviewed stated that they believed there was little likelihood that the proposed rule would have led to violations of licensing terms, and they reported that the proposed rule may have provided public benefits. Specifically:
Two public interest groups we interviewed said that because there have been no copyright violations involving CableCARD devices, such violations would be unlikely on any new devices created under the rule.
Representatives with one industry association representing technology companies said that the proposed rule could have benefited independent and diverse programmers by increasing the number of devices available to consumers to access content, providing such programmers with increased opportunities for consumers to find their content.
Representatives with one public interest group said that consumers would benefit from the proposed rule as new devices created in response to the rule would increase access to programming on new devices, thus increasing programming options overall.
Representatives with a device manufacturer said that the proposal could have provided consumers with new and innovative ways to access video content.
FCC Has Conducted Limited Analysis to Support Response to Statutory Requirement Regarding Consumer Choice for Set-Top Boxes
FCC has conducted limited analysis to support its responses to the Act's requirement regarding consumer choice for set-top boxes. For example, in tentatively concluding in the 2016 proposed rule that the market for navigation devices was not competitive, FCC cited the high percentage of subscribers leasing set-top boxes but did not analyze why subscribers continued to lease them. As discussed earlier, the large cable providers we interviewed stated that consumers had limited interest in adopting such devices for a variety of reasons, such as the ease of leasing a set-top box from a provider, which will replace the box if it breaks, compared to owning a third-party device that the consumer may have to replace if it breaks.
The proposed rule also contained limited analysis of the potential effects of this rule on consumers, MVPDs, or others. For example, while FCC supported the proposed rule by stating that the average household pays over $230 a year in set-top box lease fees, the proposal did not estimate the extent to which any increased competition in the market for set-top boxes might lead to cost savings for consumers. More broadly, FCC has conducted some analysis of the evolving video market, which, as discussed earlier, is providing consumers with more choices for both video services and devices to access those services. For example, FCC's most recent congressionally mandated annual video competition report—published in January 2017—includes discussion of the increasing popularity of Internet-based video services and the competitive pressures they have placed on MVPDs, among other things.
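Returning to the lease-fee figure above, a back-of-the-envelope calculation (ours, not FCC's) suggests the scale of spending at issue; the 90 million figure is an assumption for illustration only, roughly in line with the MVPD subscription levels discussed earlier:

$$
90{,}000{,}000 \ \text{households} \times \$230 \ \text{per year} \approx \$20.7 \ \text{billion per year}
$$

Even a small percentage reduction in such fees from increased device competition would translate into substantial aggregate consumer savings, which is the kind of effect the proposed rule did not estimate.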
While the Act requires FCC to set regulations to assure the commercial availability of devices to access MVPD services, it also states that any regulations implemented under the statute shall cease to apply if FCC deems that: (1) the market for MVPDs is fully competitive, (2) the market for devices used to access MVPD services is fully competitive, and (3) the elimination of the regulations would promote competition and the public interest. While, as discussed above, FCC has conducted some analyses related to these issues, neither the proposed rule nor the recent video competition report reflects a comprehensive analysis of how these interrelated issues affect each other. In addition, May 2017 letters to Congress from the new FCC Chairman stating his intention not to move forward with this issue did not contain or cite any analysis supporting that decision. (In commenting on a draft of this report, FCC noted that the limited action of taking a not-yet-adopted proposal off circulation would not generally be an occasion for providing a regulatory impact analysis, since such an action would have no regulatory effect.) Specifically, FCC's analyses do not consider the effect that increasing consumer choice for video services has on the importance of consumer choice for devices to access MVPD services. Increased consumer choice for services may reduce the market power of MVPDs and may restrict what they can do and what they can charge for set-top boxes, as well as potentially spurring innovation in how they offer access to their MVPD services. While the 2017 video competition report touches on consumer choice for both services and for devices, it does not discuss the extent to which new choices for services have affected the importance of consumer choice for devices. Furthermore, this analysis does not consider what level of consumer choice for devices must exist for the market for devices to be "fully competitive."
While FCC’s former Chairman believed that new regulations were needed to fulfill the requirements of the Act, the current Chairman believes that the 2016 proposed rule did not further his goal of promoting a clear, consumer-focused, fair, and competitive regulatory path for video programming delivery. As stated earlier, the proposed rule contained limited analysis. In addition, the new Chairman’s letters to Congress noted that he had removed his predecessor’s proposal from circulation but were silent as to whether the Commission would take any future action in this proceeding. A future Commission may again determine that regulations are needed or decide not to take any further action on this issue.
Without a comprehensive analysis, FCC lacks information on the extent of consumer choice and on the extent to which increased options for video services affect the importance of consumer choice for devices to access MVPDs. Such an analysis, conducted as part of FCC's existing annual video competition reports—which, as discussed, already include relevant analyses—could help FCC determine if additional regulations are needed.
Conclusions
The market for video services and devices to access video services has evolved significantly in recent years so that consumers now have considerably more choices for video services and devices to access such services than when Congress passed the Telecommunications Act of 1996. Given the fast pace of change in the video market in recent years and the likelihood that it will continue to evolve to offer consumers more choices in how they access video content, it is important that FCC analyze the implications of these changes for its responsibilities under the Act to assure the commercial availability of devices that can access MVPD programming.
However, FCC has not conducted a comprehensive analysis to support an informed decision as to whether further action is needed. FCC's recently proposed rule and most recent annual video competition report contain limited analysis of the extent to which Internet-based providers affect consumer choice for video programming and what that change means for the importance of consumer choice for devices in the context of the Act. In contrast, a comprehensive analysis could inform FCC as to whether the market conditions of competition for both video services and devices have been reached under which, as stated in the Act, any regulations implemented under the statute shall cease to apply. Should such analysis show that those market conditions have not yet been reached, a clear articulation by FCC of what elements have and have not been met could serve as a benchmark in FCC's further consideration of this issue as the market likely continues to evolve. Without a more comprehensive analysis of the industry's evolution and its effects on consumer choice for devices to access MVPD services, FCC could potentially take regulatory action—or choose not to take action—in a way that is not beneficial to consumers and does not meet the goals of the Act.
Recommendation
To help ensure that any future decisions by FCC regarding its efforts under the Act are based on comprehensive analysis, we recommend that FCC, as part of its future annual video competition reports, analyze how the ongoing evolution in the video programming market affects competition in the related market for set-top boxes and devices, including how this evolution affects the extent to which consumer choice for devices to access MVPD content remains a relevant aspect of the competitive environment. (Recommendation 1)
Agency Comments
We provided a draft of this report to FCC and the Library of Congress for review and comment. FCC responded with a letter in which it agreed with our recommendation. This letter is reprinted in appendix II. FCC also provided technical comments that we incorporated as appropriate. The Library of Congress reviewed our report and did not provide any comments.
We are sending copies of this report to interested Congressional committees and the Chairman of the FCC. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix III.
Appendix I: Industry Stakeholders and Experts and Analysts Interviewed
The following tables list the industry stakeholders and industry analysts and experts GAO interviewed as part of this engagement.
Appendix II: Comments from the Federal Communications Commission
Appendix III: GAO Contact and Staff Acknowledgments
GAO Contact
Mark Goldstein, (202) 512-2834 or [email protected].
Staff Acknowledgments
In addition to the contact above, Alwynne Wilbur (Assistant Director); Matt Rosenberg (Analyst in Charge); Amy Abramowitz; West Coile; Leia Dickerson; Sharon Dyer; Camilo Flores; Joshua Ormond; Nitin Rao; Amy Rosewarne; and Elizabeth Wood made key contributions to this report. | Why GAO Did This Study
Millions of households subscribe to cable, satellite, and telephone companies—known as MVPDs—for television, which is generally delivered via a set-top box attached to a television. Congress directed FCC to adopt regulations to assure a commercial market for devices to access MVPDs, and in February 2016, FCC proposed a rule intended to do so. Many industry stakeholders raised concerns about the proposal's potential effects, and FCC did not issue the proposed rule. This report examines: (1) the role of set-top boxes in accessing video programming content and (2) views of selected stakeholders and experts on the need for FCC regulation regarding set-top boxes and FCC's analysis of such need.
GAO analyzed data from a media research group regarding the video market and interviewed 35 industry stakeholders including 12 MVPDs, 5 video content producers, 3 device manufacturers, 12 industry associations, and others; GAO selected stakeholders based on comments filed with FCC on its 2016 proposed rule. GAO also interviewed 11 industry analysts and experts selected based on industry coverage and publications.
What GAO Found
Set-top boxes play a significant but diminishing role in delivering video content in an evolving video market. Subscribers to multichannel video programming distributors (MVPD)—companies that provide pay television services via subscriptions such as cable and satellite companies—generally need a set-top box to access MVPD television services, and most subscribers lease a set-top box from their MVPD. However, consumers can now access video through a wide range of Internet-based services without a set-top box, using a variety of Internet-capable devices, such as tablets. Internet-based services include those providing on-demand video such as Netflix and some, such as Sling TV, providing live content similar to that from MVPDs. Some Internet-capable devices, such as Roku, allow people to watch Internet-based video on televisions. In recent years, subscriptions to MVPDs have fallen as more Internet-based services have become available. Partly in response to this competition, many MVPDs have begun offering content over the Internet to subscribers, accessible on many Internet-capable devices, including streaming devices that display it on televisions. While in most cases, MVPD subscribers still need a set-top box, a few MVPDs GAO interviewed now allow subscribers to access content they subscribe to solely over the Internet, without a set-top box.
The Federal Communications Commission (FCC) has conducted limited analysis of the need for regulations to assure a commercial market for devices, such as set-top boxes, to access MVPD services. Most stakeholders and experts GAO interviewed said that further regulations for this purpose were not needed, given recent changes in the video content market. FCC is directed by law to set regulations to assure a commercial market for devices to access MVPD services. However, the law also specifies that any such regulations may no longer apply if FCC determines that the markets for both MVPD services and devices to access MVPDs are fully competitive. Moreover, while it does not extend to independent agencies, Office of Management and Budget guidance says agencies could use analyses to evaluate the need for proposed actions. However, FCC proposed a new rule in 2016 to promote a commercial set-top box market without undertaking a comprehensive analysis of the competitiveness of the market to support the proposed rule. FCC did not enact a final rule. Stakeholders had differing views on the potential effects of the proposed rule, but some raised concerns that the rule could have had negative effects on MVPDs and content providers. As described above, widespread changes in the video market in recent years have expanded consumers' choices for video services as well as devices to access those services. Nineteen of the 35 industry stakeholders GAO interviewed said rules are not needed at this time, while 8 said rules are still needed. (The rest gave uncertain answers or did not comment on this issue.) Without a comprehensive analysis, FCC lacks information on the extent of consumer choice and, furthermore, the extent to which increased options for video services affect the relative importance of consumer choice for devices to access MVPDs. Such an analysis could help FCC determine if additional regulations are needed and, as the market likely continues its rapid evolution, could serve as a benchmark in FCC's further consideration of whether market conditions have been met such that regulations may no longer apply.
What GAO Recommends
GAO recommends that FCC conduct a comprehensive analysis of how recent industry changes related to video services affect consumer choice for devices to access video services.
FCC agreed with GAO's recommendation and provided technical comments that GAO incorporated as appropriate. |
Commercially Hosted Payloads
DOD defines a hosted payload as an instrument or package of equipment—a sensor or communications package, for example—integrated onto a host satellite, which operates on orbit making use of the host satellite's available resources, including size, weight, power, or communications. A commercially hosted DOD payload is a DOD payload on a commercial satellite. In general, hosted payloads may be either experimental or operational. Experimental payloads demonstrate new or existing technologies on orbit for potential use on future operational space systems. Operational payloads deliver required capabilities to end users. Hosted payload arrangements may be unsuitable for some missions. For example, some payloads may be too large or need too much power for a host satellite to feasibly accommodate, or may require unique satellite maneuvers that, if exercised, would negatively affect a host satellite's primary mission. Civil government agencies, like NASA and the National Oceanic and Atmospheric Administration (NOAA), have used or have plans to use commercially hosted payloads. For more information on the commercially hosted payloads that civil agencies have used or plan to use, see appendix I.
Potential Benefits of Using Commercially Hosted Payloads
We and others have identified potential benefits of using commercially hosted payloads to gain space-based capability, such as:
Cost savings—Commercially hosted payloads may increase affordability because the government payload owner pays for only a portion of the satellite development and shared launch and ground systems costs, rather than for the entire system. Also, smaller, lighter, and less complex systems may shorten procurement timelines, reduce research and development investment, and reduce risk in technology development. Some government agencies have reported saving hundreds of millions of dollars to date from using innovative arrangements such as hosted payloads.
Faster on-orbit capability—Because commercial satellites tend to take less time from concept development to launch than DOD systems do and have relatively frequent launches, hosting government payloads on commercial satellites may achieve on-orbit capability more quickly.
Increased deterrence and resilience—Distributing capabilities across more satellites increases the number and diversity of potential targets for an adversary and may make it more difficult for an adversary to decide which assets to attack, serving as a deterrent. Additionally, more frequent launches could increase DOD's ability to reconstitute its satellite groups—or constellations—more quickly in case of unexpected losses of on-orbit capabilities. Recent strategic and policy guidance government-wide and at DOD has stressed the need for U.S. space systems to be survivable, or resilient, against intentional and unintentional threats—both types of which have increased over the past 20 years. Intentional threats can include purposeful signal jamming, laser dazzling and blinding of satellite sensors, missiles intended to destroy satellites, and ground system attacks. Some unintentional threats to satellites are created by the harsh space environment itself, like extreme temperature fluctuations and radiation, and the growing number of satellites, used rocket parts, and other space debris on orbit, which could collide with orbiting satellites. (A simplified calculation following this list illustrates the potential resilience gain from distributing a capability.)
Continual technology upgrades and industrial base stability—New technologies may be continually incorporated into space systems using hosted payloads, which may be uniquely suited for higher rates of production and launches than traditional DOD satellites. Using commercial satellites for government payloads could help maintain the U.S. commercial space industry's ongoing technology developments by maintaining stable business and incentivizing new companies to enter the marketplace. Further, increased production may be distributed over multiple contractors—including traditionally lower-tier contractors—to foster more competition.
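The resilience argument above can be made concrete with a simplified calculation. It assumes that losses are independent and that any single surviving payload preserves the capability, assumptions that real threats (such as a coordinated ground-system attack) may violate:

$$
P(\text{capability lost}) = p^{n}
$$

where $p$ is the probability that any one satellite is lost and $n$ is the number of satellites across which the capability is distributed. For example, if $p = 0.2$, a capability carried on a single satellite is lost 20 percent of the time, whereas the same capability spread across three hosted payloads is lost only $0.2^{3} = 0.008$, or 0.8 percent, of the time.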
As we reported in October 2014, hosted payloads are among several avenues DOD is considering to increase the resilience of its satellites in the face of growing threats. DOD has been looking at ways to break up larger satellites into multiple smaller satellites or payloads after decades of building large, complex satellites to meet its space-based requirements. The broader concept of breaking up larger satellites into smaller ones is known as disaggregation. In 2014, we reported that DOD lacked critical knowledge about the concept of disaggregation, including how to quantify a broad range of potential effects. At the time, for example, DOD did not have common measures for resilience, which we found is a key consideration in making a choice as to whether to continue with a current system architecture or to change it. Recently, senior DOD officials have also made public statements that indicate a willingness to consider innovative acquisition approaches so that acquisition timelines can be reduced. For example, in a 2016 strategic intent document, the Commander of Air Force Space Command stated that the Air Force should seek innovative acquisition approaches that leverage DOD’s buying power across the industry. Additionally, the Secretary of the Air Force stated that the Air Force is exploring more affordable and innovative ways to acquire its satellite communication services through investments in commercial industry and international partnerships.
Matching Payloads with Commercial Host Satellites
Opportunities to match a DOD payload with a commercial host can arise in various ways. DOD may first develop a payload and seek to match it with a commercial host, DOD may work in tandem with a commercial company to develop a payload to be hosted, or commercial companies—likely the satellite owner, operator, or system integrator—can first identify upcoming satellite hosting opportunities to DOD. In each scenario, the DOD program (or payload owner) and the commercial host generally consider the basic properties of both the payload and host satellite in attempting to find a match. These properties—including the size, weight, area, power, and required orbital characteristics of the payload and host satellite—should be complementary to create an arrangement that is mutually compatible for each party, according to Aerospace Corporation recommendations and officials we spoke with. A first-order sketch of how such a screen might be implemented follows the list below. Specifically, these properties include:
The size of the payload when it is stowed and when it is deployed on orbit, including the available area on the host satellite;
The available weight and mass distribution the host satellite can accommodate;
The available power on the host satellite;
The thermal requirements of the payload and corresponding capability of the host satellite;
The requirements to limit electromagnetic interference—disturbances that affect electrical circuits on the payload and host satellite;
The command, telemetry, and mission data rate requirements of the payload and the corresponding capability of the host satellite;
The compatibility of interfaces between the payload and host satellite;
The pointing accuracy and stability of the host satellite; and
The necessary orbits, including altitude and inclination.
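The following is a first-order screening sketch, in Python, of how the properties above might be checked; all names, units, and margins are hypothetical, and an actual feasibility study, such as those the HPO conducts, would also evaluate thermal limits, electromagnetic interference, interfaces, pointing, and mass distribution.

```python
from dataclasses import dataclass

@dataclass
class Payload:
    mass_kg: float
    stowed_volume_m3: float
    power_w: float
    data_rate_kbps: float
    orbit: str               # e.g., "GEO" or "LEO"

@dataclass
class HostSatellite:
    spare_mass_kg: float
    spare_volume_m3: float
    spare_power_w: float
    spare_downlink_kbps: float
    orbit: str

def is_feasible_match(p: Payload, h: HostSatellite,
                      margin: float = 1.2) -> bool:
    """First-order screen only: the host must fly the required orbit
    and offer spare mass, volume, power, and downlink capacity with
    headroom (the margin) so the payload does not strain the host."""
    return (h.orbit == p.orbit
            and h.spare_mass_kg >= p.mass_kg * margin
            and h.spare_volume_m3 >= p.stowed_volume_m3 * margin
            and h.spare_power_w >= p.power_w * margin
            and h.spare_downlink_kbps >= p.data_rate_kbps)
```

Requiring a margin greater than 1.0 mirrors the "do no harm" principle discussed later in this section: reserving headroom protects the host's revenue-generating mission from the hosted payload.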
Other considerations when matching a DOD payload with a host satellite are the compatibility of radio frequency spectrum (spectrum) needs between the payload and host, and the satellite’s intended orbital location. Spectrum is a natural resource used to provide essential government functions and missions ranging from national defense, weather services, and aviation communication, to commercial services such as television broadcasting and mobile voice and data communications. The frequencies, or frequency bands, of spectrum have different characteristics that make them more or less suitable for specific purposes, such as the ability to carry data long distances or penetrate physical obstacles. Each frequency band has a limited capacity to carry information. This means that multiple users operating at approximately the same frequency, location, and time have the potential to interfere with one another. Harmful interference occurs when two communication signals are either at the same frequencies or close to the same frequencies in the same vicinity, a situation that can lead to degradation of a device’s operation or service. As such, a payload or satellite’s specific placement in any given orbit could potentially interfere with a neighboring payload or satellite in the same orbit.
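The interference condition just described reduces, to a first approximation, to an interval-overlap test on frequency bands. The sketch below uses hypothetical band values; real spectrum coordination also accounts for power levels, antenna patterns, geography, and timing.

```python
def bands_conflict(band_a, band_b, guard_mhz=0.0):
    """Return True if two frequency bands, given as (low, high) tuples
    in MHz, overlap or fall within guard_mhz of each other -- that is,
    they could interfere when used at the same time and place."""
    low_a, high_a = band_a
    low_b, high_b = band_b
    return low_a <= high_b + guard_mhz and low_b <= high_a + guard_mhz

# Bands separated by only 5 MHz conflict under a 10 MHz guard band:
print(bands_conflict((3700.0, 4200.0), (4205.0, 4400.0), guard_mhz=10.0))  # True
print(bands_conflict((3700.0, 4200.0), (4300.0, 4400.0)))                  # False
```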
In the United States, the National Telecommunications and Information Administration (NTIA) of the Department of Commerce is responsible for establishing policy on regulating federal government spectrum use and assigning spectrum bands to government agencies. The Federal Communications Commission (FCC) allocates spectrum and assigns licenses for various consumer and commercial purposes. Additionally, all government and commercial satellite programs must apply for approval to operate at a given orbital location using a given band of spectrum internationally through the International Telecommunication Union (ITU). The ITU is an agency of the United Nations and coordinates spectrum standards and regulations.
The Air Force’s Hosted Payload Office
In 2011, the Air Force created the Space and Missile Systems Center’s (SMC) Hosted Payload Office (HPO) to provide acquisition architectures that achieve on-orbit capability more quickly and affordably. The HPO uses various resources and capabilities to meet its objectives:
Hosted Payload Solutions Contract: In 2014, SMC established the Hosted Payload Solutions (HOPS) multiple award indefinite delivery indefinite quantity (IDIQ) vehicle. According to HPO documents, SMC established the contract—available to all DOD and civil agencies—to streamline commercially hosted payload arrangements by selecting a pool of commercial vendors that government payload owners can use to access space on commercial host satellites. Programs do not have to use HOPS, however, and may contract with commercial companies directly. The HOPS vehicle includes 14 vendors across the commercial satellite industry. SMC awarded task orders for studies to each of the vendors with a contract to gather information on potential host opportunities, orbits and launch schedules, cost estimates for hosting fees, and existing host satellite interfaces.
Feasibility Studies: Using the information it gathered from the 14 vendor studies, the HPO stated that it built a database to provide information on potential satellite hosts and the suitability of certain payloads for host opportunities, including cost estimates. The HPO stated that it can use this information to assess the feasibility of a hosted payload opportunity for interested SMC space programs. The HPO also conducts feasibility studies for interested programs based on publicly available information and on industry requests for information.
Hosted Payload Interface Design guidelines: The HPO published hosted payload interface design guidelines to provide technical recommendations for hosted payload developers. According to HPO officials, the intent of these guidelines is to reduce integration costs and improve the host-ability of all hosted payloads.
Hosted Payload Data Interface Unit: The HPO is developing a secure hosted payload data interface unit to protect payload data from unauthorized access by the host. Following its release of draft documentation to industry stakeholders in March 2018, the HPO is currently integrating National Security Agency requirements into its request for data interface unit prototype proposals. According to HPO officials, the office plans to issue a request for prototype proposals in May 2018, integrate a data interface unit and payload in 2020, and launch the integrated system in 2022.
Hosted Payload Expertise: The HPO provides general advice and expertise to programs in the form of hosted payload architectural studies, input on acquisition planning and strategy documents, and other research efforts, according to the office.
DOD Has Used Commercially Hosted Payloads Three Times and Three More Missions Are Planned or Underway
Since 2009, DOD has launched three experimental payloads on commercial host satellites and plans to conduct three more missions through 2022, as shown in figure 1. DOD estimates that it has achieved cost savings of several hundred million dollars from these experimental payloads. According to DOD officials, DOD expects to realize additional cost savings and be able to place capabilities on orbit more quickly from several hosted payload efforts that are planned or underway. Opportunities for additional hosted payload efforts may arise in the near term amid DOD planning for upcoming and follow-on space systems.
DOD Has Used Commercial Satellites to Host Three Experimental Payloads
Since 2009, DOD has placed experimental payloads—intended to test or demonstrate an on-orbit capability—for three programs on commercial host satellites. Several officials within DOD told us that experimental payloads tend to be smaller and less expensive, and their missions more risk-tolerant, than traditional operational DOD payloads. In these ways, they said, experimental payloads are better suited to hosting arrangements than operational DOD payloads. The Air Force has not yet used the HOPS multiple award IDIQ vehicle—which was awarded to facilitate commercially hosted payload arrangements—to match a government payload with a commercial host. The HPO told us that, in 2019, NASA and NOAA will be the first agencies to use the HOPS vehicle to find a host satellite for two of their payloads. Table 1 describes the three experimental payloads hosted on commercial satellites to date. For more information on civilian agencies that use or plan to use commercially hosted payloads, see appendix I.
Air Force officials told us that using commercial host satellites for their experimental payloads has saved several hundred million dollars across these programs and shortened timelines for launching payloads into space. For example, the HPO estimated that the Air Force saved nearly $300 million by using a commercial host satellite for its Commercially Hosted Infrared Payload (CHIRP), as compared to acquiring the same capability using a dedicated, free-flying satellite. In addition, Air Force officials estimated that using commercial host satellites for its Responsive Environmental Assessment Commercially Hosted (REACH) effort saved the Air Force approximately $230 million. The REACH effort consists of over 30 payloads hosted on multiple satellites. Further, because of the commercial host's launch schedule, the Air Force achieved its on-orbit capability sooner than if it had acquired free-flying satellites. In April 2013, we found that the Internet Protocol Routing in Space (IRIS) payload, launched in 2009, was a commercially hosted payload pilot mission that would provide internet routing onboard the satellite, eliminating the need for certain ground infrastructure and its associated costs.
DOD Has Three Commercially Hosted Payload Efforts Planned or Underway
DOD and Air Force officials told us they are planning to pursue commercially hosted payloads for three programs in the coming decade to achieve cost savings and on-orbit capability more quickly. In each case, officials said they have identified cost and schedule benefits for their respective programs. For example, the Missile Defense Agency (MDA) stated that it expects to save approximately $700 million compared to the cost of traditional, free-flying satellites by acquiring its Spacebased Kill Assessment capability as payloads on commercial host satellites, and expects to achieve on-orbit capability years earlier than if it had acquired dedicated satellites for these payloads. Additionally, a program official from the Defense Advanced Research Projects Agency (DARPA) told us DARPA plans to use a commercially hosted payload for the Phoenix Payload Orbital Delivery effort to test more affordable ways to access space. Moreover, Air Force officials told us they expect to save $900 million over free-flying satellites by using two Space Norway satellites to fly an Enhanced Polar System Recapitalization payload. Space Norway plans to launch its satellites in 2022, which the Air Force expects will allow it to meet its need for DOD’s required capability. See table 2 for additional details on DOD’s planned hosted payloads.
Additional opportunities for commercially hosted payloads may be forthcoming as DOD develops requirements and designs for new and follow-on space programs. DOD has been analyzing various alternatives to explore possible future space system designs and acquisition strategies for several of its upcoming follow-on programs. In these cases, the analysis of alternatives (AOA) study guidance, set forth by DOD’s Office of Cost Assessment and Program Evaluation, included direction for the studies to consider new approaches for acquiring space capabilities. For example, AOA guidance directed study teams to include hosted payloads or other disaggregated designs, and commercial innovations in technology and acquisition to meet some space mission requirements. Table 3 provides further details of recently completed and ongoing AOAs to study new designs—or architectures—for upcoming follow-on satellite systems.
Logistical and Data Challenges Contribute to Limited Use of Hosted Payloads
Two factors have contributed to DOD's limited use of commercially hosted payloads. First, DOD officials identified logistical challenges to matching government payloads with any given commercial host satellite. For example, most of the offices we spoke with cited size, weight, and power constraints, among others, as barriers to using hosted payloads. Second, while individual DOD offices have realized cost and schedule benefits, DOD as a whole has limited information on the costs and benefits of hosted payloads, and the knowledge it has gathered is fragmented across the agency, with multiple offices collecting piecemeal information on the use of hosted payloads. This limited, fragmented knowledge has contributed to resistance among space acquisition officials to adopting the approach.
DOD Officials Cite Logistical Challenges to Matching Payloads to Hosts
DOD acquisition officials within the Office of the Secretary of Defense told us that matching requirements between government payloads and commercial satellites is typically too difficult for programs to overcome. Specifically, they said the cumulative complexity of matching size, weight, power, and spectrum needs; aligning government and commercial timelines; and addressing concerns over payload control and cybersecurity amounts to too great a challenge.
DOD's Hosted Payload Office is developing tools designed to help address these challenges, and DOD offices that have used hosted payloads have also found ways to overcome them.
Matching Size, Weight, and Power
Officials from DOD acquisition and policy offices, as well as Air Force and industry officials we spoke with, cited matching size, weight, and power between DOD payloads and commercial host satellites as a challenge. We similarly found in April 2013 that ensuring compatibility between payloads and host satellites can pose challenges because not all commercial satellites are big enough or have enough power to support hosting a payload. Whether a host satellite can accommodate a payload can depend on the size of the payload. Additionally, according to industry representatives, the space taken up by the hosted payload affects the amount of revenue-generating payloads the host may place on its satellite, such as additional transponders—devices that emit and receive signals—for the communications services it provides to customers. The complexity of integrating a government payload onto a commercial host can also drive the overall cost of the arrangement.
However, officials said these challenges can be mitigated through available expertise and lessons learned. HPO officials and industry representatives have proposed several approaches to help match properties like size, weight, and power between a DOD payload and a commercial host satellite. The HPO is developing a hosted payload interface unit that could potentially provide a standard for payload developers and system integrators to develop and test their systems. One commercial company proposed an interface unit that would accommodate a "universal" DOD payload. Additionally, industry experts stated that with sufficient planning and time for system integration, nearly any payload can be accommodated on a host satellite.
The HPO issued guidelines in 2017 to assist DOD payload developers in working toward typical payload requirements and standards for host satellites in low Earth orbit and geostationary Earth orbit. These guidelines inform the payload's electrical power and mechanical designs. The principal guideline—echoed by the successful CHIRP demonstration in 2011—is that the hosted payload must "do no harm" to the mission performance of its host. Also, satellite interfaces can vary from company to company; some commercial companies have had experience with the task—and business opportunity—of integrating multiple customers' payloads onto satellites since at least the 1990s.
Matching Spectrum Needs
Air Force, HPO, and industry officials told us that, ideally, the payload should use the same spectrum allocation as the commercial host. They said that this is due in part to the lengthy satellite registration process that takes place in the United States and through the ITU that must be undertaken prior to placing a satellite on orbit. Some DOD officials added that the process for all new satellites from initial filing to ITU approval takes around 7 years. If a satellite owner registers for one frequency band of spectrum and later requires a different band, the owner has to begin the registration process from the beginning—restarting the 7-year timeline. This can be problematic for DOD payload owners seeking to match their military communications payload with an already-registered host satellite—particularly if the host satellite’s spectrum allocation is incompatible with the DOD payload. HPO and other DOD officials said that very different spectrum needs between payload and host would therefore preclude the match.
Moreover, a need for military—as opposed to commercial—spectrum for communications payloads can introduce additional complications. Although a process exists for a commercial satellite owner to license military spectrum for use by a hosted payload, representatives from DOD’s Chief Information Officer’s (CIO) office could cite only one instance where this has happened. One possible explanation stems from a 2012 memorandum from DOD’s CIO that outlines various preferred processes for a commercial host satellite to host military communications payloads. Several industry officials we spoke to said that the various processes outlined in the 2012 memorandum would add to the already-lengthy process of spectrum registration. Further, the memorandum instructs that contractual terms between the payload and host satellite owners should restrict all military spectrum use exclusively to the U.S. military. However, one industry official told us that international entities do not necessarily recognize U.S. military spectrum, and commercial companies that obtain licenses through other countries are permitted to use those frequencies. For example, a senior official of one commercial company we met with stated that the company licensed U.S. military spectrum through another North Atlantic Treaty Organization government after failing to successfully coordinate an FCC request with DOD and NTIA. DOD and industry representatives told us that from a business perspective, it makes little sense for a commercial company to seek hosting opportunities for DOD payloads that require U.S. military spectrum.
Matching Government and Commercial Development and Acquisition Timelines
Government and industry officials we spoke with said that aligning the development and acquisition timelines of a government payload and commercial host satellite is a challenge. The timeline associated with developing government sensors is generally much longer than that of commercial satellites, potentially creating difficulties in scheduling and funding commercially hosted payload arrangements. For example, DOD satellite systems take, on average, over 7 years to develop and launch a first vehicle, while commercial satellite programs typically take between 2 and 3 years. DOD payload owners may find it challenging to accelerate development and acquisition schedules to match those of the commercial satellite host. Additionally, DOD officials we spoke with said that their budget and planning processes require funding commitments up to 2 years in advance of actually receiving those funds. This can further complicate alignment with commercial timelines because the development of a government sensor would need to be underway well in advance of a decision to fund a commercially hosted payload approach. Furthermore, federal law generally prohibits agencies from paying in advance for a future service or from obligating future appropriations.
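The scheduling arithmetic described above can be illustrated with a simple calculation of when a funding commitment would have to be made for a payload to catch a given commercial host. The function and figures below are hypothetical and for illustration only; they are not an actual DOD planning tool.

```python
# Illustrative sketch (hypothetical numbers): why government development and
# budget lead times make it hard to align with a commercial host's schedule.

def funding_decision_deadline(host_launch_in_years: float,
                              payload_dev_years: float,
                              budget_lead_years: float = 2.0) -> float:
    """Years from now by which a funding commitment must be made.
    A negative result means the opportunity has effectively passed."""
    return host_launch_in_years - payload_dev_years - budget_lead_years

# A commercial host launching in 3 years vs. a payload needing 4 years of
# development, plus the roughly 2-year budget lead time described above:
print(funding_decision_deadline(3, 4))  # -3.0 -> infeasible without a head start

# The same host with a payload already in development (1 year remaining):
print(funding_decision_deadline(3, 1))  # 0.0 -> funding must be committed now
```

The negative result in the first case reflects the report's point: a government sensor generally must already be well into development before a commercial hosting opportunity can realistically be funded and used.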
However, several DOD and other government agency officials we spoke with said that it is possible to align government and commercial timelines. For example, MDA adopted the commercial host's schedule to ensure its Spacebased Kill Assessment payload was ready for integration and launch without delaying the host satellite or, worse, missing its own ride to space. DARPA officials told us they were also able to align DARPA acquisition and development schedules with the commercial host's schedule. Officials from the Air Force's Enhanced Polar System (EPS) Recapitalization program were able to leverage existing documents, such as requirements documents and acquisition strategies from the predecessor program, to speed up the acquisition process. According to Air Force officials, the EPS Recapitalization program had a unique opportunity to take advantage of the availability of a commercial host and had the support of a high-ranking Air Force official, which enabled the program to move forward using a commercially hosted payload approach.
Maintaining Payload Control and Cybersecurity
Some officials cited concerns with combining government and commercial space missions. For example, officials across DOD told us they were wary of losing control over a hosted payload should a commercial company's needs change. They said that, theoretically, a commercial provider could decide to turn off power to the government's payload if the host satellite needed extra power to perform a certain function. Additionally, DOD space program officials expressed concern that commercial practices for ensuring the mission success of the payload may not be up to government standards—that is, commercial testing and integration standards may be less robust than those used by traditional government programs to ensure success, adding risk to the government payload. Furthermore, officials in one DOD program office expressed a distrust of commercial hosts' motives in offering to support a government payload on their satellites, suggesting that a company could be intending to steal government technologies. However, industry officials we spoke with said that DOD can generally issue a solicitation that includes necessary stipulations. For example, including a condition to preserve the payload's priority of mission, along with other terms to protect the government's investment, may provide some assurance to those officials who perceive security risks.
Additionally, some officials we spoke with raised cybersecurity concerns, citing loss of control over data security as a challenge to using hosted payloads. Officials told us the data could be vulnerable to eavesdropping or manipulation as it travels between government ground systems and the commercially hosted government payload. However, according to HPO officials, the Air Force overcame this challenge on the CHIRP mission by procuring a secure interface that provided a data link between the payload and a dedicated transponder and ground terminal. As mentioned previously, the Hosted Payload Office is developing a hosted payload data interface unit to mitigate this challenge by securing payload data communications from the host satellite.
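The mitigation described above can be pictured with a generic end-to-end encryption sketch: the payload and its ground system share a key, and the host satellite merely relays ciphertext that it cannot read or undetectably alter. This is a minimal illustration using Python's cryptography library, not a description of the actual CHIRP interface or of the HPO's data interface unit.

```python
# Generic illustration of the mitigation described above (pip install
# cryptography). Fernet provides authenticated encryption, so the relayed
# data cannot be read or altered without detection by the commercial host.
# This is NOT the actual CHIRP or HPO interface design.

from cryptography.fernet import Fernet

# The key is shared out of band between the government payload and its
# ground system; the commercial host never holds it.
key = Fernet.generate_key()
payload_side = Fernet(key)
ground_side = Fernet(key)

telemetry = b"sensor frame 0142: nominal"
ciphertext = payload_side.encrypt(telemetry)  # what transits the host satellite

# The host sees only ciphertext; the ground system authenticates and decrypts.
assert ground_side.decrypt(ciphertext) == telemetry
```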
Department-wide Information on Commercially Hosted Payloads Is Limited and Fragmented Across Offices
DOD, at the department-wide level, has limited information on commercially hosted payloads—mostly due to a lack of experience in using hosted payloads and complexities associated with them. For example, acquisition officials in the Office of the Secretary of Defense told us that DOD needs more data and analysis of the potential costs and benefits. However, realistic cost modeling for commercially hosted DOD payloads is unavailable because costs can vary across potential hosts and DOD has minimal experience using commercial hosts. Similarly, the HPO performs market research and cost estimates based on data from commercial companies, but according to one official in the HPO, the costs tend to vary based on the supply and demand in the commercial satellite industry. Additionally, HPO officials said their cost savings analyses are based on only two real-world commercially hosted DOD payloads—CHIRP and REACH. HPO officials told us that with additional government data they could compare the costs of system architectures that include free-flier satellites with those that use commercially hosted payloads. Additionally, some potential benefits of using commercially hosted payloads, such as resilience, may be difficult to measure. In our 2014 report on disaggregation, we recommended that DOD define key measures related to disaggregation, including developing metrics to measure resilience. DOD is in the process of developing standard metrics for resilience.
DOD’s knowledge of commercially hosted payloads is also fragmented across the agency. Several DOD offices are independently conducting activities related to commercially hosted payloads, such as pursuing commercially hosted payload arrangements, developing lessons learned, and determining demand for commercial hosts. For example, MDA officials told us they have developed cost and technical data and lessons learned based on MDA’s Spacebased Kill Assessment payload— launched earlier this year—but have not shared it across the agency. On the other hand, the Space Test Program, also housed within the Air Force’s SMC develops lessons learned on its payloads, which are government payloads on government host satellites and officials there told us they provide lessons learned to the HPO. In October 2017, SMC’s Launch Office sent a request for data on hosted payloads to DOD agencies, research laboratories, and universities, but the HPO was not an active participant in this request. Independent efforts within DOD to collect and analyze cost, schedule, and performance results from hosted payloads can create fragmentation in DOD’s knowledge base and can increase the risk of duplicative efforts within DOD.
DOD does not collect or consolidate agency-wide knowledge on commercially hosted payloads and has no plans to do so. Agency officials stated that DOD does not require programs outside of SMC to consult the HPO when seeking commercially hosted payload arrangements. The Air Force established the HPO to facilitate commercially hosted payloads; however, the 2011 Program Management Directive that established the HPO states that the HPO will coordinate with SMC directorates for detailed implementation of hosted payloads but does not address coordination with agencies or directorates outside of SMC. According to an HPO official, programs are not required to use HPO expertise or tools as they pursue using hosted payloads. Further, this official stated that programs are not required to provide any data or lessons learned to the HPO, or any other central point within DOD, following the pursuit or completion of a hosted payload arrangement. The 2011 Program Management Directive directs the HPO to provide lessons learned to SMC directorates but does not direct SMC offices to share information—such as costs, technical data, and lessons learned on completed commercially hosted payload efforts—with the HPO. An HPO official indicated that the HPO obtains data through informal communication with those programs using hosted payloads that are willing to share data.
We found that limitations and fragmentation of data and knowledge are contributing to resistance within DOD to using hosted payloads. Several DOD acquisition and program officials we spoke with who did not have experience with hosted payloads generally stated that the potential risks of using hosted payloads outweighed the benefits, and that there was little evidence-based analysis to prove otherwise. They were not aware of existing tools that could assist them in making decisions, even though the HPO has been developing these tools and has made efforts to share them within SMC. DOD acquisition and program officials consistently cited a preference for maintaining the acquisition status quo over introducing any perceived added risk to their programs. At the same time, however, officials who have used hosted payloads were able to overcome logistical and technical challenges and realize cost savings. Yet according to an HPO official, there is currently no requirement in place to facilitate sharing their approaches to doing so. We have reported in the past that DOD's culture has generally been resistant to changes in space acquisition approaches and that fragmented responsibilities for acquisitions have made it very difficult to coordinate and deliver interdependent systems.
Moreover, our past studies of commercial strategic sourcing best practices have found that leading companies centralize procurement decisions by aligning, prioritizing, and integrating procurement functions within the organization. Establishing the Hosted Payload Office is one step in this direction, but the office is organized under the Advanced Systems and Development Directorate—a research and development organization—under SMC. Further, the 2011 directive that established the HPO does not address coordination or responsibilities for agencies or directorates beyond SMC. Consolidating knowledge is important because it allows organizations to share information and data upon which to develop consistent procurement tactics, such as ways to overcome challenges in matching a government payload with a commercial host. As we found in our work on commercial strategic sourcing best practices, organizations that struggled with fragmented information in the past overcame this challenge in part by consolidating their data on costs and spending. While hosted payload acquisitions are not typical service acquisitions, successful organizations have found that these techniques work for highly specialized technical services for which few suppliers exist.
Conclusions
As DOD considers new architectures and acquisition approaches, commercially hosted payloads have the potential to play a role in delivering needed capabilities on orbit more quickly and at a more affordable cost than traditional DOD space acquisitions. Placing DOD payloads on commercial satellites might also be an effective way to increase resiliency. However, DOD's experience and the data collected so far are limited in informing decisions on the use of these payloads. DOD would benefit from leveraging the knowledge and information gained from each hosted payload experience. Centralized collection and assessment of agency-wide data would help enable DOD to mitigate the logistical challenges inherent in matching payloads to hosts, and better position DOD to make reasoned, evidence-based decisions on whether a hosted payload would be a viable solution to meet warfighter needs. Without such knowledge, and a way for interested programs to leverage it, DOD may not be fully informed about using hosted payloads and may risk missing opportunities to rapidly and affordably address emerging threats in space.
Recommendation for Executive Action
The Secretary of Defense should require programs using hosted payloads to provide cost and technical data, and lessons learned, to a central office. In implementing this recommendation, DOD should consider whether the Hosted Payload Office is the most appropriate office to centralize agency-wide knowledge. (Recommendation 1)
Agency Comments
We provided a draft of this report to the Department of Commerce, NASA, and DOD for comment. The Department of Commerce provided technical comments, which we incorporated as appropriate. NASA did not have comments on our draft report. In its written comments, DOD concurred with our recommendation and stated that SMC had initiated a major reorganization since we drafted our report and that under the new organizational construct, the Hosted Payload Office had changed and may not be the appropriate office for centralizing DOD-wide hosted payload knowledge. DOD’s comments are reproduced in appendix II. DOD also provided technical comments which we incorporated as appropriate.
We are sending copies of this report to the appropriate congressional committees, the Secretary of Commerce, the Secretary of Defense, the Administrator of NASA, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-4841 or by email at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
Appendix I: Civil and Other Agency Commercially Hosted Payloads
As shown in table 4, civil and other government agencies use commercially hosted payloads to enhance navigation systems, monitor environmental pollution, conduct scientific missions, and improve search and rescue systems. Officials from all of the agencies we spoke with cited cost savings and the ability to leverage existing commercial schedules and technologies among the reasons they use commercial host satellites.
Appendix II: Comments from the Department of Defense
Appendix III: GAO Contact and Staff Acknowledgments
GAO Contact
Cristina T. Chaplain (202) 512-4841 or [email protected].
Staff Acknowledgments
In addition to the contact named above, Rich Horiuchi (Assistant Director), Erin Cohen (Analyst in Charge), Claire Buck, Jon Felbinger, Stephanie Gustafson, Matthew Metz, Sylvia Schatz, and Roxanna Sun made key contributions to this report.
Why GAO Did This Study
Each year, DOD spends billions of dollars to develop, produce, and field large, complex satellites. For such satellite systems, a single adversary attack or on-orbit failure can result in the loss of billions of dollars of investment and significant loss of vital capabilities. As DOD plans new space systems and addresses an increasingly contested space environment, it has the opportunity to consider different acquisition approaches. One such approach is to integrate a government sensor or payload onto a commercial host satellite.
House Armed Services Committee report 115-200, accompanying a bill for the Fiscal Year 2018 National Defense Authorization Act, included a provision for GAO to review DOD's use of commercially hosted payloads. This report (1) determines the extent to which DOD uses commercially hosted payloads and (2) describes and assesses factors that affect their use.
GAO reviewed DOD policies, documentation, and planning documents, and interviewed a wide range of DOD and civil government officials, and commercial stakeholders.
What GAO Found
GAO and others have found that using commercial satellites to host government sensors or communications packages—called payloads—may be one way DOD can achieve on-orbit capability faster and more affordably. Using hosted payloads may also help facilitate a proliferation of payloads on orbit, making it more difficult for an adversary to defeat a capability. Since 2009, DOD has used three commercially hosted payloads, with three more missions planned or underway through 2022 (see figure below).
DOD estimates that it has achieved cost savings of several hundred million dollars from using commercially hosted payloads to date, and expects to realize additional savings and deliver faster capabilities on orbit from planned missions. Cost savings can result from sharing development, launch, and ground system costs with the commercial host company.
Among the factors that affect DOD's use of hosted payloads are
a perception among some DOD officials that matching government payloads to commercial satellites is too difficult; and
limited, fragmented knowledge on how to mitigate various challenges.
GAO found that further opportunities to use hosted payloads may emerge as DOD plans new and follow-on space systems in the coming years. However, DOD's knowledge on using hosted payloads is fragmented, in part because programs are not required to share information. In 2011, the Air Force created a Hosted Payload Office to provide expertise and other tools to facilitate matching government payloads with commercial hosts. However, GAO found that DOD programs using hosted payloads are not required and generally do not provide cost and technical data, or lessons learned, to the Hosted Payload Office, or another central office for analysis. Requiring programs that use hosted payloads agency-wide to provide this information to a central location would better position DOD to make informed decisions when considering acquisition approaches for upcoming space system designs.
What GAO Recommends
GAO recommends that DOD require programs using commercially hosted payloads to contribute resulting data to a central location. In implementing this recommendation, DOD should assess whether the Air Force's Hosted Payload Office is the appropriate location to collect and analyze the data. DOD concurred with the recommendation.
Background
Internet Industry and Consumer Privacy
To varying extents, Internet content providers—also called "edge providers"—and Internet service providers collect, use, and share information from their customers to enable their services, to support advertising, and for other purposes. Many companies describe these and other privacy-related practices in privacy policies, to which consumers may be required to consent in order to use the service. Consumers access such services through a variety of devices, including mobile phones and tablets, computers, and other devices connected to the Internet by wired or wireless means.
A nationwide survey that the U.S. Census Bureau conducted for NTIA in 2017 found that 78 percent of Americans ages 3 and older used the Internet. Another nationwide survey that the Pew Research Center conducted in 2018 found that 69 percent of American adults reported that they use some kind of social media platform such as Facebook.
No comprehensive federal privacy law governs the collection, use, and sale or other disclosure of personal information by private-sector companies in the United States. Rather, the federal privacy framework for private-sector companies consists partly of a set of tailored laws that govern the use and protection of personal information for specific purposes, in certain situations, or by certain sectors or types of entities. These laws include the Fair Credit Reporting Act, which protects the security and confidentiality of personal information collected or used to help make decisions about individuals' eligibility for such products as credit or for insurance or employment; the Gramm-Leach-Bliley Act, which protects nonpublic personal information that individuals provide to financial institutions or that such institutions maintain; and the Health Insurance Portability and Accountability Act, which establishes a set of national standards for the protection of certain health information. In addition, as detailed in this report, FTC addresses consumer concerns about Internet privacy using its broad authority to protect consumers from unfair and deceptive trade practices.
We have reported on a variety of Internet privacy concerns in recent years, including concerns about the collection and use of data such as people's Internet browsing histories, purchases, locations, and travel routes:
Internet of things: In 2017, we found that as new and more devices become connected, they increase not only the opportunities for security and privacy breaches, but also the scale and scope of any resulting consequences.
Vehicle data privacy: We found in 2017 that most selected automakers reported limiting their data collection, use, and sharing, but their written notices did not clearly identify data sharing and use practices.
Information resellers: In a 2013 report on companies that collect and resell information on individuals, we found that no overarching federal privacy law governs the collection and sale of personal information among private-sector companies, including information resellers. We found that gaps exist in the federal privacy framework, which does not fully address changes in technology and the marketplace. Among the issues we noted were the potential need for changes to privacy controls for web tracking, mobile devices, and other technologies. We recommended that Congress consider strengthening the consumer privacy framework to reflect the effects of changes in technology and the marketplace. Such legislation has not been enacted to date.
Mobile device location data: In 2012, we found that, according to privacy advocates, consumers are generally unaware of how their location data are shared with and used by third parties. We recommended that FTC consider issuing guidance establishing FTC’s views regarding mobile companies’ appropriate actions to protect location data privacy. FTC implemented that recommendation in 2013.
To guide their privacy practices, many organizations and governments have used the Fair Information Practice Principles. As noted above, these principles—which are not limited to Internet privacy—address the collection and use of personal information, data quality and security, and transparency, among other things, and have served as the basis for many of the privacy recommendations federal agencies have made. The Organisation for Economic Co-Operation and Development developed a version of these principles in 1980 that has been widely adopted and was updated in 2013. In 2000, FTC recommended that Congress enact a consumer Internet privacy statute that would require companies to comply with broad and flexible definitions of the principles, and an FTC commissioner said in a 2014 speech that they are a solid framework and are flexible and effective. While they are principles, not legal requirements, they provide a possible approach for balancing the need for privacy with other interests. Table 1 provides more detailed information about the principles.
FTC and FCC Oversight of Internet Privacy
FTC is primarily a law enforcement agency that, among other responsibilities, currently has the lead in overseeing Internet privacy at the federal level. Specifically, it addresses consumer concerns about Internet privacy, both for Internet service providers and content providers, using its general authority under section 5 of the FTC Act. Section 5, as amended in 1938, prohibits “unfair or deceptive acts or practices in or affecting commerce.” Although the FTC Act generally empowers FTC to take enforcement action, it prohibits FTC from taking action against common carriers such as telecommunication services, airlines, and railroads under certain circumstances. FTC also does not have jurisdiction over banks, credit unions, or savings and loans institutions.
Even though the FTC Act does not speak in explicit terms about protecting consumer privacy, the Act authorizes such protection to the extent it involves practices FTC defines as unfair or deceptive. According to FTC, an act or practice is “unfair” if it causes, or is likely to cause, substantial injury not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or competition as a result of the practice. FTC has used this “unfairness” authority to address situations where a company has allegedly failed to properly protect consumers’ data. According to FTC, a representation or omission is “deceptive” if it is material and is likely to mislead consumers acting reasonably under the circumstances. For example, the omission of terms in an advertisement would need to be material and likely to mislead consumers in order to be deceptive. FTC applies this “deceptive” authority to address deceptions or violations of written privacy policies and representations concerning data security.
FTC’s Bureau of Consumer Protection investigates Internet privacy complaints from various sources, including consumers, other agencies, Congress, and industry, and also initiates investigations on its own. If the bureau has reason to believe that an entity is engaging in an unfair or deceptive practice, it may forward an enforcement recommendation to the commission. The commission then determines whether to pursue an enforcement action, which can include the following: litigating commission-filed administrative complaints before an FTC administrative law judge; filing and litigating complaints in federal district court seeking preliminary and permanent injunctions, monetary redress for consumers or other equitable relief; or referring complaints seeking civil penalties for violations of rules authorizing such penalties or for violations of administrative orders to the Department of Justice (DOJ) and assisting DOJ in litigating those cases (if DOJ does not take action, FTC can pursue the action on its own).
FTC’s Internet privacy enforcement cases may be settled without the imposition of civil penalties. Instead, FTC typically enters into settlement agreements requiring companies to take actions such as: implementing reasonable privacy and security programs; being subject to long-term monitoring of compliance with the settlements by outside entities; providing monetary redress to consumers; forfeiting any money gained from the unfair or deceptive conduct; deleting illegally obtained consumer information; and providing transparency and choice mechanisms to consumers.
If a company violates an FTC final consent order, the agency can then request civil monetary penalties in court for the violations. In addition, as discussed below, FTC can seek to impose civil monetary penalties directly for violations of certain privacy statutes and regulations such as the statute pertaining to the Internet privacy of children and its implementing regulations. Although FTC can levy civil penalties up to $41,484 per violation, per day, against an entity that violates a trade regulation rule under the FTC Act, it has not promulgated trade regulation rules under section 5 specific to privacy.
Although FTC has not implemented its section 5 authority by issuing regulations regarding Internet privacy, it has issued regulations to implement other statutory authorities. Likewise, other federal agencies use regulations to implement the statutes they are charged with administering. The process by which federal agencies typically develop and issue regulations is spelled out in the Administrative Procedure Act (APA). Section 553 of the APA establishes procedures and requirements for what is known as “informal” rulemaking, also known as notice-and- comment rulemaking. Among other things, section 553 generally requires agencies to publish a notice of proposed rulemaking in the Federal Register. After giving interested persons an opportunity to comment on the proposal by providing “data, views, or arguments,” the statute then requires the agency to publish the final rule in the Federal Register. Regulations may be enforced in various ways, for example, by seeking civil penalties for non-compliance. FTC has authority to seek civil penalties, for example, when a company knowingly violates a regulation or, as discussed below, a final consent order.
In contrast to the APA section 553 rulemaking process, the rulemaking process that FTC generally must follow to issue rules under the FTC Act is spelled out in the Magnuson-Moss Warranty Act amendments to the FTC Act (Magnuson-Moss). The Magnuson-Moss amendments—enacted in 1975 partly in response to industry opposition to FTC's trade regulations, and amended in 1980—require additional rulemaking steps beyond APA section 553. For example, Magnuson-Moss requires FTC to publish an advance notice of proposed rulemaking in addition to the notice of proposed rulemaking required by the APA, and to offer interested parties the opportunity for an informal hearing involving oral testimony. FTC has not promulgated any regulations using the Magnuson-Moss procedures since 1980; according to FTC staff, the additional steps required under Magnuson-Moss add time and complexity to the rulemaking process.
The Children's Online Privacy Protection Act (COPPA), enacted in 1998, governs the online collection of personal information from children under the age of 13 by operators of websites or online services, including mobile applications. COPPA required FTC to issue and enforce regulations concerning children's online privacy and directed FTC to promulgate these regulations using the APA section 553 notice-and-comment rulemaking process. COPPA contained a number of specific requirements that FTC was directed to implement by regulation, such as requiring websites to post a complete privacy policy, to notify parents directly about their information collection practices, and to obtain verifiable parental consent before collecting personal information from their children or sharing it with others. The commission's original COPPA regulations became effective on April 21, 2000, and amended COPPA regulations took effect on July 1, 2013. According to an FTC staff member, COPPA and FTC's implementing regulations reflect various principles that are similar to the Fair Information Practice Principles.
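As a simplified illustration of the consent gate that COPPA and its implementing regulations describe, consider the sketch below. It is not legal guidance or an actual compliance tool, and the names are hypothetical; a real operator must also satisfy the notice, privacy policy, and other requirements discussed above.

```python
# Simplified illustration of the COPPA consent gate described above: an
# online service may collect personal information from a user under 13 only
# after providing notice and obtaining verifiable parental consent.
# Hypothetical names; not legal guidance or an actual compliance tool.

COPPA_AGE_THRESHOLD = 13

def may_collect_personal_info(user_age: int,
                              notice_provided: bool,
                              parental_consent_verified: bool) -> bool:
    """Return True if collection may proceed under this simplified model."""
    if user_age >= COPPA_AGE_THRESHOLD:
        return True  # COPPA's child-specific requirements do not apply
    return notice_provided and parental_consent_verified

print(may_collect_personal_info(10, notice_provided=True,
                                parental_consent_verified=False))  # False
print(may_collect_personal_info(10, True, True))                   # True
```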
FCC regulates the telecommunications industry pursuant to the Communications Act of 1934, as amended (Communications Act). FCC follows the APA section 553 notice-and-comment rulemaking process to promulgate regulations implementing the Communications Act. FCC also has an enforcement bureau that pursues violations of its regulations and the Communications Act.
The Communications Act establishes separate definitions for "information services" and "telecommunications services" and treats these two types of services differently. Specifically, information services are subject to less regulation by FCC than telecommunications services under the Communications Act. However, FTC is prohibited from regulating telecommunications carriers (providers of telecommunications services) under the common carrier exemption. Prior to 2015, Internet services were considered information services under the Communications Act, and thus FTC was not prohibited from considering the privacy practices of Internet service providers under its FTC Act authority to protect consumers from unfair and deceptive practices. This changed in 2015 when FCC classified broadband as a telecommunications service, which meant that broadband Internet service providers were considered telecommunications carriers and FCC asserted primary oversight over them. As a result of the reclassification, FTC no longer had jurisdiction over Internet service providers. Once FCC had asserted primary oversight over Internet service providers, FCC promulgated privacy regulations specific to them. However, before the privacy regulations went into effect, Congress repealed them under the Congressional Review Act. In December 2017, FCC reclassified broadband as an information service—reverting Internet service providers' classification to what it had been prior to 2015. When that reclassification became effective in June 2018, jurisdiction of Internet privacy for Internet service providers was effectively transferred from FCC back to FTC. As a result, FCC currently has limited Internet privacy oversight responsibilities, as shown in figure 1.
Stakeholders’ Views Varied on the Benefits and Concerns with Collecting and Using Consumers’ Data from the Internet
Perspectives on the benefits of and concerns about the collection and use of consumers’ data from the Internet varied somewhat across stakeholder groups. Various stakeholders we interviewed—including those from academia, industry, and government—said that there should be a balance between the freedom of companies to collect and use consumers’ data needed to provide services and the necessity to protect consumers’ privacy. In general, industry stakeholders highlighted the benefits of data collection and use, such as facilitating innovation, while consumer advocacy groups and other stakeholders emphasized concerns about consumers’ loss of control over their data and their lack of understanding of how companies collect and use their information. Additionally, surveys and other literature that we reviewed on Internet privacy highlighted concerns among consumers. The key benefits of information collection were identified as:
Enables certain services. According to two industry stakeholders, the collection and use of consumer data from the Internet enable content providers to provide services. These stakeholders said that sometimes a content provider must collect and use information from consumers to provide the service. For example, a mapping service must collect and use consumers’ current location to provide them with up-to-date directions.
Provides low-cost or free services. A representative from a content provider said that revenue from targeted advertising helps allow some content providers' services to be offered to consumers at little or no charge. Instead of charging a subscription fee, a social media company may be able to provide free service because it uses information that it collects from consumers to target advertisements to users on a customized, user-by-user basis. These ads are targeted to users based on interests they express through their use of social media, among other things. According to a representative from an Internet search engine, using consumer data for targeted advertising may be relatively less important for some kinds of content providers, such as search engines. This company representative said that search engines may use keywords entered for a particular Internet search to provide advertisements relevant to the search. For example, a search for "car insurance" can offer the consumer advertisements from car insurance companies without any additional data from the consumer other than the search's keywords, as illustrated in the sketch following this list.
Supports innovation and customization. According to some stakeholders, the collection and use of data also benefit consumers through other means such as providing innovative products or customized services. According to a representative from a content provider, the collection of personal information, with consent, for commercial purposes can at times have benefits. The representative said, for example, that collection of images containing identifiable information, like faces, can help in the development of new technologies such as object and facial recognition. According to two content providers, consumers may also benefit from customized services and content. For example, according to a representative from a travel-related company, that company can collect information about a consumer to suggest travel itineraries and suggestions for activities. Additionally, representatives from a consumer advocacy group and a content provider stated that direct-marketing approaches are enabled through data collection. Such marketing approaches allow consumers to receive advertisements that are uniquely tailored to their interests. For example, a consumer that a content provider has identified as being a hiker may receive advertisements for hiking boots.
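The search-keyword example above can be illustrated with a minimal sketch of contextual ad selection, in which the ad is chosen from the words of the query alone rather than from any stored profile of the user. The advertiser names and keywords are hypothetical.

```python
# Minimal sketch of contextual ad selection as described above: the ad is
# chosen from the search keywords alone, with no stored user profile.
# All advertiser names and trigger keywords are hypothetical.

CONTEXTUAL_ADS = {
    "car insurance": ["Acme Auto Insurance", "SafeDrive Insurance"],
    "hiking boots": ["TrailCo Boots"],
}

def select_ads(query: str) -> list[str]:
    """Return ads whose trigger keywords appear in the search query."""
    q = query.lower()
    return [ad
            for keywords, ads in CONTEXTUAL_ADS.items()
            if keywords in q
            for ad in ads]

print(select_ads("cheap car insurance quotes"))
# ['Acme Auto Insurance', 'SafeDrive Insurance']
```

By contrast, the behavioral targeting described earlier in this list would key the ad to a stored history of the user's activity rather than to the query itself.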
Despite these benefits, public opinion surveys have shown concerns about the collection and use of consumers’ information on the Internet. For instance, recent analyses based on surveys by the Pew Research Center and NTIA showed that the public lacks trust in Internet privacy, a concern that may limit economic activities. NTIA’s survey results show that privacy concerns may lead to lower levels of economic productivity as people decline to make financial transactions on the Internet. According to the NTIA analysis, in 2017, 24 percent of American households surveyed avoided making financial transactions on the Internet due to privacy or security concerns. Consumers NTIA surveyed indicated that their specific concerns were identity theft, credit card or banking fraud, data collection by online services, loss of control over personal information, data collection by government, and threats to personal safety. Stakeholders we interviewed elaborated on some of these concerns:
Public disclosure and data breaches. Some stakeholders, including representatives from content providers, said that personal information from the Internet can be publicly disclosed, including through data breaches. An academic and a former FCC commissioner told us that such disclosures are becoming more frequent. Various consumer advocacy groups and state governments continue to report data breaches. This personal information can include financial information such as credit card information, the disclosure of which can result in financial harm to the consumer. It can also include other kinds of sensitive information such as political views or medical conditions, the disclosure of which can cause non-financial harms such as embarrassment or harassment. According to public reports, the 2017 breach of consumer information from Equifax, a credit-reporting agency, resulted in the disclosure of 143 million American consumers’ sensitive information. According to NTIA’s 2017 survey, 45 percent of households surveyed reported major concerns about credit card fraud. Regarding non-financial information, in a recent case FTC alleged that an Internet-based company publicly disclosed patients’ sensitive medical information without their knowledge after patients submitted what they thought were confidential reviews of physicians. According to FTC, these reviews were then publicly posted on the company’s website.
Financial and other harms. Stakeholders identified both potential financial and non-financial harms associated with misuse of personal information from the Internet. A former FTC acting chair has said that privacy and data-security incidents can cause injuries that do not only involve financial loss and that it may be difficult to measure this type of non-financial injury. In a February 2018 speech, this former acting FTC chair cited a case that the agency filed involving the misuse of personal information from the Internet that resulted in people losing jobs or job opportunities or being threatened, stalked, and harassed. The acting chair said that in another case, there was evidence that several people committed suicide after their names and other data were disclosed. The commission can, by bringing suit in district court, obtain an order compelling content providers to provide monetary relief to consumers if a data disclosure results in financial harm to a consumer. However, an academic noted that many data disclosures of sensitive information cannot be financially redressed; information can indefinitely persist on the Internet once it is disclosed.
Consumers' lack of understanding. A range of stakeholders we interviewed, including those from industry, said that consumers lack an understanding of how their data are collected and used. Some stakeholders said content providers are insufficiently transparent about how they collect and use data. For instance, content providers' privacy policies, according to various stakeholders, may contain technical language that is difficult for typical consumers to understand, may be located in a difficult-to-access or inconspicuous part of the content provider's website, or may be lengthy to the point where it becomes prohibitively difficult for a consumer to set aside enough time to read them. Furthermore, according to an academic, companies may have an incentive to intentionally obscure their privacy practices, since clarity could put the companies at a competitive disadvantage.
The academic also stated that different privacy policies may apply to different parts of a consumer's experience on a single website. For example, the academic described how a website may have contracts with third-party vendors for specific services included on the website that consumers use, such as an online shopping cart's features. The privacy policy for the website and the third-party shopping cart can be separate and unrelated to each other, and consumers may not be aware of this because these policies may never be presented to consumers or may be hard to obtain. A representative from a consumer advocacy group also mentioned that consumers may be unaware that companies track consumers' Internet activity in order to target those consumers with customized prices. An academic said that these practices may disproportionately affect people with low computer literacy, as they may not be aware of tracking or know of ways to counteract it. In 2015, we found that the lack of computer and Internet skills is one of the primary barriers people face in using the Internet and that this is a particular problem for certain demographic segments who may lack exposure to or knowledge about computers, such as those age 65 and older and those with low levels of income and education.
Consumers' lack of control. Some academics and consumer advocacy groups also identified a lack of control as a concern with respect to Internet privacy—consumers have little or no control over how their information is collected, used, and shared. In a 2015 survey conducted by Pew Research Center, 65 percent of respondents said it is very important to be in control of what information is collected about them. However, according to an academic and a consumer advocacy group we interviewed, privacy policies offer consumers little or no bargaining power, and consumers may be forced to either accept the terms of the policy as written or not use the application or service at all. Furthermore, we recently reported that sometimes consumers' information is used for purposes that are altogether separate from what those consumers originally anticipated. For example, FTC alleged in an enforcement action that in 2009 and 2010, a company told consumers that it would track the websites they visited in order to provide them with personalized offers, when in fact the company was also transmitting credit card information it collected through such tracking to third parties. The company settled with FTC. We also recently reported on how devices that comprise the Internet of Things pose privacy concerns for consumers, including that information collected by such Internet-connected devices can be used in ways that the consumer was given no option to opt out of.
As discussed above, stakeholders described various types of harm that could result from Internet privacy violations. Regardless of whether violations involve financial or other types of harm, a challenging factor in providing Internet privacy oversight is identifying the responsible parties. A former federal government official with experience in privacy issues said that it frequently is difficult to identify which Internet entity in the chain is ultimately responsible for a privacy-related harm. For example, if a consumer is harmed by the theft of his or her Social Security number, it can be difficult to determine which entity is responsible if multiple entities have suffered data breaches of information systems that contained the Social Security number. In addition to the challenges in identifying responsible parties, the federal government has faced challenges in providing Internet privacy oversight. Our prior work has found that such efforts lack clearly defined roles, goals, and performance measures, and that gaps exist in the current privacy framework.
FTC and FCC Have Used Different Approaches to Oversee Internet Privacy
FTC Primarily Uses Settlement Agreements with a Range of Companies to Address Internet Privacy Violations
We found that during the last decade, FTC filed 101 Internet privacy enforcement actions for practices that the agency alleged were unfair, deceptive, a violation of COPPA, a violation of a settlement agreement, or a combination of those reasons. Most of these actions pertained to first-time violations of the FTC Act, for which FTC does not have the authority to levy civil penalties. In those cases where a party violated an FTC regulation or settlement agreement, however, FTC does have the authority to impose civil penalties. The 101 cases—filed between July 1, 2008, and June 30, 2018—involved a variety of products, services, and industries that collect and use personal information from the Internet. During the years for which we examined full-year data, the number of enforcement actions taken per year ranged from 5 in 2010 and 2016 to 23 in 2015. For example, in recent years, FTC took enforcement action against the following entities for alleged conduct that the agency contended violated section 5 or COPPA:
a toy manufacturer for collecting personal information from children online without providing direct notice and obtaining their parents' consent;
a computer manufacturer for pre-loading laptops with software that compromised security protections in order to deliver ads to consumers;
a mobile ride-hailing business for misrepresenting the extent to which it monitored its employees' access to personal information about users;
a television manufacturer for installing software on its televisions to collect viewing data on 11 million consumers without their knowledge or consent and providing the viewing data to third parties; and
a mobile advertising network for deceptively tracking the locations of hundreds of millions of consumers, including children, without their knowledge or consent, to serve them geographically targeted advertising.
Of the 101 actions filed during the 10-year period, 51 involved Internet content providers, 21 involved software developers, 12 involved the sale of information or its use in advertising, 5 involved manufacturers, 1 involved an Internet service provider, and 11 involved a variety of different products, such as those provided by rent-to-own companies or certification services. In nearly all 101 cases, companies settled with FTC, which required the companies to make changes in their policies or practices as part of the settlement. FTC levied civil penalties against two of those companies for violating their settlement agreements. Also during this 10-year period, FTC levied civil penalties against 15 companies (a total of $12.7 million) for alleged violations of the COPPA regulations. The COPPA civil penalties ranged from $50,000 to $4 million and the average amount was $847,333. FTC can also seek to compel companies to provide monetary relief to those they have harmed. During this time period, FTC levied civil penalties against companies for violations of consent decrees or ordered monetary relief to consumers from companies for a total of $136.1 million. These payment orders ranged from $200,000 to $104.5 million and the average amount was $17 million.
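As a quick arithmetic check of the figures above, the short sketch below recomputes the COPPA average from the reported total and count, and shows what the payment-order figures imply; note that the resulting count of roughly eight payment orders is an inference from the two reported numbers, not a figure stated in this report.

```python
# Consistency check of the penalty figures cited above. The COPPA figures
# (15 companies, roughly $12.7 million total) are reported; the implied
# count of payment orders is inferred, not reported.

coppa_total = 12_710_000
coppa_count = 15
print(coppa_total / coppa_count)       # ~847,333 -> matches the reported average

orders_total = 136_100_000             # total penalties and monetary relief
orders_average = 17_000_000            # reported average payment order
print(orders_total / orders_average)   # ~8.0 -> implies about 8 payment orders
```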
In the majority of these 101 enforcement actions that FTC settled, FTC alleged that companies engaged in practices that were deceptive. Examples of the charges FTC brought include:
"Deceptive practices" cases (61 cases): In 2016, FTC alleged that Turn, Inc., an Internet advertising company, continued to track the Internet activities of consumers for targeted advertising purposes after the company had made representations that it would stop doing so. According to FTC, the company led consumers to believe they could turn off such tracking when in fact they were unable to do so.
"Unfair practices" cases (4 cases): In 2014, FTC alleged that LeapLab, a data broker, knowingly provided scammers with hundreds of thousands of consumers' sensitive personal information, including Social Security and bank account numbers.
"Unfair and deceptive practices" cases (19 cases): In 2015, FTC alleged that Equiliv Investments, a software developer, lured consumers into downloading its "rewards" application, saying it would be free of malware, when the application's main purpose was actually to load the consumers' mobile phones with malicious software to mine virtual currencies for the developer.
COPPA and COPPA regulations cases (6 cases): In 2011, FTC alleged that Broken Thumbs Apps, a software developer, had collected information from Internet applications that the developer specifically targeted toward children under the age of 13. FTC’s complaint stated that the company had, among other things, failed to provide notice of what information it collected and how it was used and also had failed to inform parents of these practices and receive their consent as COPPA required.
Violation of settlement agreement cases (2 cases): In 2012, Google agreed to pay a $22.5 million civil penalty to settle FTC charges that it misrepresented to users of Apple’s Safari Internet browser that Google would not place tracking cookies or provide targeted ads to those users, violating an earlier settlement agreement between the company and FTC.
In 14 of the 101 cases, FTC required companies to be audited by outside entities to monitor compliance with the terms of the settlement. The audit period ranged from 5 years to 20 years, with an average of 17.5 years.
As noted above, 2 of the 101 cases involved a violation of FTC settlement agreements. In addition, in March 2018, FTC announced that it is investigating whether Facebook’s privacy practices violate a 2012 Facebook settlement agreement with FTC. In the case that resulted in the 2012 settlement, FTC charged Facebook with deceiving consumers by telling them they could keep their information private, but then allowing it to be shared and made public.
Appendix II contains more detailed information about the 101 cases.
FCC Developed Internet Privacy Rule for Internet Service Providers That Was Later Repealed
As stated earlier, in 2015, FCC classified broadband Internet service as a telecommunications service, placing primary oversight of broadband Internet service providers' privacy practices under FCC's jurisdiction instead of FTC's jurisdiction. In 2016, FCC filed a privacy enforcement action against a mobile Internet service provider, alleging, in part, violation of section 222 of the Communications Act and FCC's Open Internet Transparency Rule. Section 222 requires telecommunications carriers to protect the confidentiality of customers' proprietary information. In that case, FCC fined Verizon Wireless $1.4 million for failing to disclose that it was inserting "unique identifier headers," also called "perma-cookies" or "super cookies" (mobile web tracking cookies that users cannot remove), into customers' Internet traffic over its wireless network. Although the settlement was finalized during the 2015-2017 period when FCC had asserted jurisdiction over the privacy practices of Internet providers, the Verizon Wireless practices occurred prior to the classification of Internet service providers as telecommunications carriers. The investigation therefore did not rely upon FCC's subsequent assertion of authority over Internet service providers' privacy practices.
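To show why such network-inserted identifiers were beyond users' control, unlike ordinary browser cookies that users can delete, the sketch below models a carrier middlebox appending a persistent identifier header to outbound, unencrypted HTTP requests. The header name and code are illustrative only and do not depict Verizon's actual implementation.

```python
# Illustrative model of network-level header injection (not the actual
# Verizon Wireless system). A carrier middlebox adds a persistent identifier
# to each outbound unencrypted HTTP request, so deleting browser cookies has
# no effect on the tracking. "X-UIDH" is the header name commonly reported
# in that case; everything else here is hypothetical.

import hashlib

def inject_uidh(headers: dict, subscriber_id: str) -> dict:
    """Return request headers with a carrier-added unique identifier header."""
    tagged = dict(headers)
    # Derived from the subscriber account, not from anything stored on the
    # user's device, which is why the user cannot remove or reset it.
    tagged["X-UIDH"] = hashlib.sha256(subscriber_id.encode()).hexdigest()[:24]
    return tagged

request_headers = {"Host": "example.com", "User-Agent": "ExampleBrowser/1.0"}
print(inject_uidh(request_headers, subscriber_id="555-0100"))
```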
In October 2016, after FCC had reclassified broadband as a telecommunications service, the commission issued Internet service provider privacy regulations, asserting its authority under section 222 of the Communications Act. In April 2017, however, Congress repealed these regulations under the Congressional Review Act before they took effect. In December 2017, FCC then reversed its 2015 classification of broadband, and oversight of broadband Internet service providers' privacy practices reverted to FTC once the decision took effect in June 2018. In explaining the December 2017 decision, FCC's new chair said that FTC's privacy oversight approach regarding Internet service providers—using its authority to protect consumers against unfair, deceptive, and anti-competitive practices—had worked well in the past and that this action would "put the nation's most experienced privacy cop back on the beat." Under FCC's new legal approach, it no longer asserts jurisdiction to take enforcement action against Internet service providers for privacy-related matters, including mobile Internet service providers. As part of FTC's resumption of Internet service provider oversight, FCC and FTC entered into a memorandum of understanding in December 2017 spelling out their roles and responsibilities regarding oversight of these companies. FTC staff said that they regularly communicate with FCC and have an agreement to share Internet privacy complaints.
Selected Stakeholders Provided Various Views on the Effectiveness of Current Internet Privacy Oversight and How It Could Be Enhanced
Industry Stakeholders View Current Enforcement Approach as Providing Flexibility, While Consumer Stakeholders See Limitations with This Approach
As previously discussed, no federal statute comprehensively and specifically governs Internet privacy across all sectors. FTC oversees some aspects of Internet privacy by using its FTC Act section 5 authority to protect consumers from unfair and deceptive practices. FTC also uses its specific COPPA authority to police the collection and use of personal information from children by online services. Some industry representatives said that FTC’s enforcement has been effective because the agency has expertise and experience in privacy issues and has the flexibility to take enforcement action on a case-by-case basis. In addition, a content provider said that FTC has taken enforcement actions against companies of various sizes in different sectors and has a powerful tool by being able to require companies to be audited by outside entities for up to 20 years.
Industry stakeholders we interviewed generally said that “direct enforcement” of a statute is preferable to promulgating and enforcing regulations implementing that statute (which constitutes enforcement of the statute as well). These stakeholders noted several key concerns they believe exist with regulatory versus statutory enforcement of Internet privacy:
Regulations can stifle innovation. Two industry stakeholders said that regulations can hinder companies’ ability to innovate. For example, representatives from an Internet service provider said that innovation can stop during the rulemaking process as the industry waits for the regulation to be finalized.
Regulations may create loopholes. Representatives from an Internet industry group and a content provider said that regulations can also contain loopholes that can be legally exploited because imprecise language in a regulation may allow a company to legally engage in an action that was originally unforeseen by the regulator.
Regulations can become obsolete. Several industry stakeholders said regulations also may become obsolete quickly because the Internet industry is rapidly changing. An Internet industry representative noted that there can be large shifts in the Internet industry from year to year, while it often takes an agency much longer than a year to adopt a rule. Industry stakeholders said the flexibility of FTC’s approach allows FTC to adapt continuously to changing market conditions.
Rulemakings can be lengthy. FCC officials said that in some cases, rulemakings can take a long time, especially when the issues are complex and there is no statutory deadline. Our previous work on rulemaking found that the length of time required for the development and issuance of final rules varied both within and among agencies.
Additionally, while some stakeholders suggested that regulations can clarify acceptable practices, other stakeholders, including those from industry and academia, said that enforcement actions can send a similar message. According to both a representative from a content provider and an academic, enforcement actions such as settlement agreements, for example, establish precedents that companies can follow, similar to the way that case law developed by courts provides guidance for companies.
Although some industry representatives we interviewed said that FTC’s use of settlement agreements provides companies with guidance, certain trade associations took a different position in a recent case brought before the U.S. Court of Appeals for the Third Circuit, FTC v. Wyndham Worldwide Corp., 799 F.3d 236 (3d Cir. 2015). The case involved an enforcement action against Wyndham Worldwide Corporation in which FTC alleged that data security failures led to three data breaches at the company in less than 2 years. The court considered whether FTC could bring an enforcement case involving cybersecurity using FTC’s section 5 “unfair practices” authority and, if so, whether Wyndham had “fair notice” that its specific cybersecurity practices could be deemed “unfair.” A group of companies and the U.S. Chamber of Commerce wrote a friend-of-the-court brief supporting Wyndham, criticizing FTC’s “regulation-through-settlements” approach. The companies argued this approach subjects businesses to “vague, unknowable, and constantly changing data-security standards” and that businesses often are unaware of the standards to which they are held until after they receive a notice of investigation from FTC, at which point they must settle or expend considerable resources fighting the agency. The court, however, did not agree with the associations’ arguments, concluding that a company has fair notice when it “can reasonably foresee that a court could construe its conduct as falling within the meaning of the statute.”
Potential Limits on Federal Trade Commission (FTC) Remedies

A recently decided federal appeals court case illustrates potential limits on the remedies that FTC can order in an “unfair practices” enforcement proceeding. In this 2018 case, LabMD, Inc. v. FTC, 891 F.3d 1286 (11th Cir. 2018), the U.S. Court of Appeals for the Eleventh Circuit found that FTC could not direct a medical laboratory to create and implement wholesale data-security protective measures as a remedy to the laboratory’s alleged unfair practices. FTC had filed a complaint against LabMD under section 5 of the FTC Act for allegedly committing an unfair act or practice by failing to provide reasonable and appropriate security for personal information on its computer networks. The commission found that LabMD’s inadequate security constituted an unfair act or practice and ordered LabMD to take various actions, including establishing and maintaining a reasonable and comprehensive information security program. On appeal, the Eleventh Circuit ruled that FTC’s order exceeded its authority because it did not prohibit a specific act or practice but instead mandated a complete overhaul of the company’s data-security program. FTC had argued that the FTC Act gives it broad discretion to prevent unfair or deceptive acts or practices that injure the general public and that FTC had spelled out standards for LabMD to craft a reasonable security program. The court ruled, however, that such a general approach would make it difficult for a reviewing court to determine whether LabMD had complied with the order in the event of a future FTC challenge.
A majority of non-industry stakeholders we interviewed identified limitations in the current Internet privacy oversight approach because they view regulations, used in conjunction with enforcement, as more effective. These stakeholders include all of the former FTC commissioners we interviewed, three of the four former FCC commissioners we interviewed, and representatives from consumer advocacy groups we interviewed. For example, a former FCC commissioner said the current approach is limited in part because regulations that apply equally to all players in the Internet ecosystem, combined with enforcement, would be more effective, and a representative from a consumer advocacy group said that regulations in conjunction with enforcement are essential for effective privacy protection. Some of these stakeholders noted key ways that they believe Internet privacy regulations can provide clarity to industry and consumers, as well as fairness and flexibility in enforcement:
Regulations can provide clarity. An Internet industry group representative said that various companies have favorable views of regulations because they can provide clear expectations about what actions are permissible. Similarly, a former congressional staff member with expertise on privacy issues said that some companies have favorable views of regulations because the regulations often provide clearer expectations about what the companies can do. FCC officials said that with respect to telephone privacy provisions of the Communications Act, the telephone industry wanted rules because it sought greater clarity about what it should be doing, what constituted a violation, how to comply, and what behaviors were acceptable.
Regulations may promote fairness. Some other stakeholders discussed the ability of regulations to provide fairness. For example, a former federal enforcement official described regulations as creating a fair and consistent oversight regime across the entire industry in a way that case-by-case enforcement actions do not. Another former federal enforcement official said that regulations give companies fair notice of what actions may be violations and thus help those companies avoid surprising or unexpected enforcement.
Regulations can be flexible. An academic said that by targeting behaviors and not specific technologies, regulations can be written in such a way that they do not become obsolete. An academic also said that regulations based on broad performance-standards principles can avoid being overly prescriptive. FCC officials also noted that regulations can be amended to adapt to changes in technology often faster than new laws can be enacted. Furthermore, regulations determined to be obsolete can be repealed. FTC staff told us that the agency systematically reviews all of its regulations every 10 years, even though it is only legally required to review its most significant ones, and that the number of FTC regulations has decreased because the agency determined prior ones were obsolete. The Regulatory Flexibility Act requires federal agencies to analyze the effect of their regulations on small entities.
Regulations can be a deterrent. FCC officials said that rules can have a deterrent effect on bad practices in the industry or have a role in mitigating the negative effects of bad practices after they occur. They said, for example, that the practice of pretexting (improperly obtaining people’s telephone records) was greatly curtailed by an FCC regulation prohibiting such practices. They also said that rules can foreclose arguments by companies claiming that because no rule was in place, they had no reasonable notice or awareness that they should behave in a particular way.
Consumer advocacy groups and other stakeholders, including some former FTC and FCC commissioners, had concerns about the efficacy of an enforcement approach such as FTC’s approach to Internet privacy oversight, which focuses on enforcing a statute rather than implementing regulations. They said that FTC’s enforcement approach limits the ability of the agency to affect companies’ behavior, and that any enforcement activity occurs after the violation, undesirable behavior, harm, or illegal action has already occurred. A former federal enforcement official also said that regulations can prevent companies from engaging in bad practices in the first instance and thus have a preventive effect. A former FCC commissioner said that by the nature of a direct statutory-enforcement approach (as opposed to rulemaking), an agency would only address a harm after it has occurred. As discussed above, for example, data often cannot be removed from the Internet because copies of the data can exist among many bad actors, and it can be difficult to identify the entity responsible for unwanted disclosures. Therefore, it may be more important to prevent such Internet privacy harms from occurring in the first place. Another former FCC commissioner told us that Internet privacy oversight should be returned to FCC because it has APA section 553 notice-and-comment rulemaking authority and considerable enforcement experience.
Representatives from consumer advocacy groups said that FTC’s enforcement action has been insufficient because it investigates only a small portion of actual Internet-privacy violations or takes action regarding only the most egregious or outrageous cases that it can win. FTC has also stated in its strategic plan that it focuses on investigating and litigating cases that cause or are likely to cause substantial injury to consumers and that by focusing on practices that are actually harming or likely to harm consumers, FTC can best use its limited resources. Representatives from an Internet association said that FTC’s Internet-privacy enforcement actions should focus on concrete harms. An FTC staff member from the Division of Privacy and Identity Protection said that the agency has been effective with the limited enforcement resources it has available. Furthermore, the staff member said the agency uses no formal written criteria or template to assess individual cases but considers the size and scale of a company’s effect on consumer privacy when deciding whether to take enforcement action. However, a former FTC commissioner told us that the agency needs more resources to effectively oversee Internet privacy.
We asked stakeholders whether it was clear under what circumstances FTC will take Internet privacy enforcement action. In response, some stakeholders said that FTC’s enforcement priorities are reflected in its settlement agreements, which provide information that is similar to a body of case law. Individual commissioners also may issue statements explaining their decisions. Two stakeholders also said that FTC’s closing letters, which the agency sends to companies and posts on its website when it closes an investigation without taking enforcement action, may explain its decisions. Other stakeholders said that more guidance would be helpful to provide additional clarity on how the agency uses its Internet privacy enforcement authority. FTC staff and other stakeholders also said that FTC has provided useful Internet privacy guidance. For example, in 2015, FTC published guidance for businesses on complying with COPPA.
Stakeholders and FTC Identified Potential Actions to Enhance Federal Oversight of Consumers’ Internet Privacy
Various stakeholders we interviewed said that opportunities exist for enhancing Internet privacy oversight. A key component of FTC’s mission, as specified by the FTC Act, is to protect consumers against unfair and deceptive practices. As discussed earlier, some stakeholders believe that FTC’s reliance on its unfair and deceptive practices authority to address Internet privacy issues has limitations. In addition, although the Fair Information Practice Principles provide internationally recognized principles for protecting the privacy and security of personal information, they are not legal requirements and FTC cannot rely on them to define what constitutes unfair and deceptive practices related to privacy and data security.
We stated in our 2013 information resellers report that the current U.S. privacy framework is not always aligned with the Fair Information Practice Principles and that these principles provide a framework for balancing the need for privacy with other interests. We found that there are limited privacy protections under federal law for consumer data used for marketing purposes. We said that although the Fair Information Practice Principles call for restraint in the collection and use of personal information, the scope of protections provided under current law has been narrow in relation to: (1) individuals’ ability to access, control, and correct their personal data; (2) collection methods and sources and types of consumer information collected; and (3) new technologies, such as tracking of web activity and the use of mobile devices. Although we recommended in that report that Congress consider strengthening the consumer privacy framework to reflect the effects of changes in technology and the marketplace, this matter for congressional consideration was not specific to Internet privacy or to the oversight authorities of any particular agency or agencies.
As noted above, various stakeholders expressed concern about the ability of consumers to control their data and understand how that data are used. These concerns suggest that companies are not always following the Fair Information Practice Principles, such as that companies’ data practices should be transparent, allow consumers the right to access and edit their data, and limit the collection of data to the extent feasible.
Those stakeholders who believe that FTC’s current authority and enforcement approach is unduly limited identified three main actions that could better protect Internet privacy: (1) enactment of an overarching federal privacy statute to establish general requirements governing Internet privacy practices of all sectors; (2) APA section 553 notice-and-comment rulemaking authority; and (3) civil penalty authority for any violation of a statutory or regulatory requirement, rather than allowing penalties only for violations of settlement agreements or consent decrees that themselves seek redress for a statutory or regulatory violation.
Privacy Statute
Stakeholders from a variety of perspectives—including from academia, industry, consumer advocacy groups, and former FTC and FCC commissioners—told us that a privacy statute could enhance Internet privacy oversight by, for example, clearly articulating to consumers, industry, and privacy enforcers what behaviors are prohibited, among other things. In addition, a former FCC commissioner said that a new privacy statute could enhance Internet privacy oversight by creating uniform standards for all players in the Internet ecosystem that are focused on the consumer rather than the regulatory legacy of the companies involved (regulations that apply to specific types of companies based on what they are or used to be, such as telecommunications carriers, cable companies, broadcasters, and mobile wireless providers). The former FCC commissioner said that as companies, technologies, and markets change, there is a question about whether existing law should be modernized. In 2015, FTC staff recommended that Congress enact broad-based legislation that is flexible and technology-neutral, while also providing clear rules of the road for companies about such issues as how to provide choices to consumers about data collection and use practices. Some stakeholders suggested that such a framework could either designate an existing agency as responsible for privacy oversight (such as FTC) or create a new privacy-oriented agency. A representative from a consumer advocacy group mentioned that the European Union, for example, has established the European Data Protection Supervisor, an independent data protection authority, to monitor and ensure the protection of personal data and privacy. Similarly, in Canada, the Office of the Privacy Commissioner, an independent body that reports directly to the Parliament, was established to protect and promote individuals’ privacy rights.
Some stakeholders also stated that the absence of a comprehensive Internet privacy statute affects FTC’s enforcement. For example, a former federal enforcement official said that FTC is limited in how it can use its authority to take action against companies’ unfair and deceptive trade practices for problematic Internet privacy practices. Similarly, another former federal enforcement official said that FTC is limited in how and against whom it can use its unfair and deceptive practices authority noting, for example, that it cannot pursue Internet privacy enforcement over exempted industries such as common carriers. In addition, a former FCC commissioner said that it is more difficult for FTC to take effective action because its enforcement comes only after a complaint and after an often lengthy review process. The former FCC commissioner also said that without “ex ante” rules (rules that define prohibited activity before it has occurred), there inevitably will be delay, confusion, and lack of knowledge about what is and is not acceptable behavior.
In addition, some stakeholders—including a representative from a consumer group, a former federal enforcement official, and a former FCC commissioner—said FTC’s section 5 “unfair or deceptive practices” authority may not enable it to fully protect consumers’ Internet privacy because it can be difficult for FTC to establish that Internet privacy practices are legally “unfair.” For example, under section 5, FTC has charged companies with committing a “deceptive” practice if their privacy policies said they would not collect or use consumers’ personal information but then did so. However, a former congressional staff member said that companies often write broad and vague policy statements, making it difficult for FTC to charge companies with committing deceptive practices. Instead, according to a representative from a consumer advocacy group, FTC would have to show the companies’ actions were “unfair,” which, according to the representative, is legally difficult to establish. We found in our 2017 report on vehicle data privacy that most automakers’ written privacy notices used vague language. Similarly, we found in our 2012 report on mobile device location data that although companies’ policies stated that they shared location data with third parties, they were sometimes vague about which types of companies these were and why they were sharing the data.
Some stakeholders said that FTC relies more heavily on its authority to take enforcement action against deceptive trade practices compared with the agency’s unfair trade practices authority. This was confirmed in our analysis of FTC’s Internet privacy enforcement actions discussed previously. However, a representative from a consumer advocacy group said that FTC’s ability to take such action is limited practically to instances where a company violates its own privacy policy—companies generally can collect and use data in any way they want if they include language in their policies asserting their intent to do so. According to a former FCC commissioner, a privacy statute could clarify the situations in which FTC could take enforcement action.
APA Notice-and-Comment Rulemaking
Various stakeholders said that there are advantages to overseeing Internet privacy with a statute that provides APA section 553 notice-and-comment rulemaking authority. As discussed above, that provision lays out the basic process by which so-called informal agency rulemaking shall be conducted, namely, publication of proposed regulations in the Federal Register; an opportunity for public comment (written and possibly oral submission of data and views); and publication of final regulations in the Federal Register with an explanation of the rules’ basis and purpose. Also as noted above, Congress imposed additional rulemaking steps on FTC in the Magnuson-Moss Act when FTC is promulgating rules under section 5 of the FTC Act. These additional steps include providing the public and certain congressional committees with advance notice of proposed rulemaking (in addition to notice of proposed rulemaking). FTC’s rulemaking under Magnuson-Moss also calls for, among other things, oral hearings, if requested, presided over by an independent hearing officer, and preparation of a staff report after the conclusion of public hearings, giving the public the opportunity to comment on the report. Finally, Congress made it easier for the public to appeal FTC’s Magnuson-Moss rules by making the agency meet a higher standard when the rules are challenged in court. FTC staff said that these additional steps add time and complexity to the rulemaking process.
In congressional testimony in 2010, the then-Director of FTC’s Bureau of Consumer Protection said that “if Congress enacts privacy legislation, the commission agrees that such legislation should provide APA rulemaking authority to the commission.” According to FTC, this testimony was voted on and approved by the commissioners and, therefore, constituted the commission’s official position at the time.
Moreover, according to stakeholders, in many cases regulations can be used to implement statutes. Officials from other consumer and worker protection agencies we interviewed described their enforcement authorities and approaches. For example, officials from the CFPB and the FDA, both of which use APA section 553 notice-and-comment rulemaking, said that their rulemaking authority assists in their oversight approaches and works together with enforcement actions. OSHA officials said that the standards that the agency promulgates under its authority specify what employers are required to do to reduce safety and health risks to workers. Such standards lay out the workplace conditions that must be maintained by employers and require that employers implement certain practices, operations, or processes that ensure worker protections. EEOC officials said that regulations are used to guide investigations that establish whether enforcement action is appropriate. CPSC officials said that the agency conducts consumer protection not only by establishing and enforcing mandatory regulations, but also through collaborative actions such as educating industry, developing consensus voluntary safety standards, removing defective products from the marketplace through voluntary corrective actions, and litigating when necessary. In addition, in contrast to FTC’s approach, FCC has APA section 553 notice-and-comment rulemaking authority and has issued regulations implementing section 222 of the Communications Act using that rulemaking authority to protect the privacy of telephone users.
Ability to Levy Civil Penalties for Initial Violations and to Impose Larger Civil Penalties
Some stakeholders suggested that FTC’s current ability to levy civil penalties could also be enhanced. Currently, FTC can levy civil penalties against companies for violating certain regulations, such as COPPA regulations, or if the company violates the terms of a settlement agreement already in place. According to most former FTC commissioners and some other stakeholders we interviewed, FTC should be able to levy fines for initial violations of section 5 of the FTC Act. An academic told us that the power of an agency to levy a fine is a tangible way to hold industries accountable. Another academic noted, however, that fines may be relatively less effective in industries where there is limited competition because the costs of those fines may be more effectively passed on to consumers in the form of higher prices for services. In addition, some stakeholders said that payments required by FTC orders are not large enough to act as a deterrent and that companies may consider them to be a cost of doing business.
There is a growing debate about the federal government’s role in overseeing Internet privacy. In a July 2018 congressional hearing, FTC’s new chair testified that the FTC Act cannot address all privacy and data-security concerns in the marketplace. The chair said, for example, that FTC’s lack of civil penalty authority for violations of the FTC Act reduces its deterrent capability. He also noted the agency lacks authority over non-profits and over common carrier activity, even though those entities and activities often have serious implications for consumer privacy and data security. In November 2018, FTC’s chair testified before Congress and urged Congress to consider enacting privacy legislation that would be enforced by FTC. A majority of the commission has indicated support for APA rulemaking and civil penalty authority for privacy. FTC also held hearings in September, November, and December 2018 to advance the discussion around privacy issues, among other topics, and FTC plans to hold an additional hearing on data security and consumer privacy in February 2019. In a Federal Register notice, FTC announced that it is interested in the benefits and costs of various state, federal and international privacy laws and regulations, including the potential conflicts among those standards. FTC also indicated that it is particularly interested in the efficacy of the commission’s use of its current authority and the identification of any additional tools or authorities the commission may need to adequately deter unfair and deceptive conduct related to privacy and data security. Also in July 2018, an NTIA official announced that NTIA, in coordination with the Commerce Department’s International Trade Administration and National Institute of Standards and Technology, had recently started holding stakeholder meetings to identify common ground and formulate core, high-level principles on data privacy.
Regarding the development of the Administration’s approach to consumer privacy, in September 2018, NTIA requested comments on ways to advance consumer privacy while protecting prosperity and innovation. Our 2009 report on a framework for assessing proposals for modernizing the financial regulatory system similarly found that regulators should have the authority to carry out and enforce their statutory missions. We further said that a regulatory system should be flexible and forward looking, allowing regulators to readily adapt to market innovations and changes, including identifying and acting on emerging risks in a timely way without hindering innovation. These factors are useful considerations as the federal government explores how it can better oversee privacy and data security. Having sufficient and appropriate authorities and providing flexibility to address a rapidly evolving Internet environment could better ensure that the federal government can protect consumers’ privacy.
Conclusions
Recent developments regarding Internet privacy suggest that this is an appropriate time for Congress to consider comprehensive Internet privacy legislation. Although FTC has been addressing Internet privacy through its unfair and deceptive practices authority, among other statutes, and other agencies have been addressing this issue using industry-specific statutes, there is no comprehensive federal privacy statute with specific standards. Debate over such a statute could provide a vehicle for consideration of the Fair Information Practice Principles, which are intended to balance privacy concerns with the need for using consumers’ data. Such a law could also empower a specific agency or agencies to provide oversight through means such as APA section 553 rulemaking, civil penalties for first-time violations of a statute, and other enforcement tools. Comprehensive legislation addressing Internet privacy that establishes specific standards and includes APA notice-and-comment rulemaking and first-time violation civil penalty authorities could help enhance the federal government’s ability to protect consumer privacy, provide more certainty in the marketplace as companies innovate and develop new products using consumer data, and provide better assurance to consumers that their privacy will be protected.
Matter for Congressional Consideration
Congress should consider developing comprehensive legislation on Internet privacy that would enhance consumer protections and provide flexibility to address a rapidly evolving Internet environment. Issues that should be considered include:

which agency or agencies should oversee Internet privacy;

what authorities an agency or agencies should have to oversee Internet privacy, including notice-and-comment rulemaking authority and first-time violation civil penalty authority; and

how to balance consumers’ need for Internet privacy with industry’s ability to provide services and innovate.
Agency Comments
We provided a draft of this report to FTC, FCC, and the Department of Commerce for their review and comment. FTC and FCC provided technical comments, which we incorporated as appropriate. The Department of Commerce indicated that it did not have comments.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the FTC chair, the FCC chair, the Secretary of Commerce, and interested congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or members of your staff have any questions about this report, please contact Alicia Puente Cackley at (202) 512-8678 or [email protected] or Mark Goldstein at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix III.
Appendix I: Interviewees
For this review, we interviewed staff from agencies with roles in Internet privacy; officials from other consumer- and worker-protection agencies; stakeholders from consumer advocacy groups, industry groups, Internet service providers, and Internet content providers; academics; and former government officials. To obtain a variety of perspectives, we selected Internet service providers that represented different industry sectors and Internet content providers that provide a variety of information and social media services. Academic stakeholders were selected because of their expertise in privacy, consumer protection, and regulatory issues. We also interviewed former Federal Trade Commission (FTC) and Federal Communications Commission (FCC) commissioners who served during the Barack Obama and George W. Bush administrations and are from different political parties.
Academics
Consumer advocacy groups
Federal government agencies
Consumer Financial Protection Bureau (CFPB)
Consumer Product Safety Commission (CPSC)
Department of Commerce, National Telecommunications and Information Administration (NTIA)
Equal Employment Opportunity Commission (EEOC)
Federal Communications Commission (FCC)
Federal Trade Commission (FTC)
Food and Drug Administration (FDA)
Occupational Safety and Health Administration (OSHA)
Former government officials
Industry groups
Internet content providers
Internet service providers
Appendix II: Federal Trade Commission Internet Privacy Enforcement Cases
The following table identifies 101 Federal Trade Commission (FTC) Internet privacy enforcement actions filed between July 1, 2008 and June 30, 2018 in which the agency alleged a violation of either the Federal Trade Commission Act (FTC Act) or the Children’s Online Privacy Protection Act (COPPA) and implementing COPPA regulations and subsequently entered into a settlement agreement with the target entity. Although some of these cases may involve both Internet data privacy and security issues, this table does not include cases that involved data security issues only.
Appendix III: GAO Contacts and Staff Acknowledgments
GAO Contacts
Staff Acknowledgments
In addition to the contact names above, Andrew Huddleston, Assistant Director; Kay Kuhlman, Assistant Director; Bob Homan, Analyst-in-Charge; Melissa Bodeau; John de Ferrari; Camilo Flores; Erica Miles; Josh Ormond; and Sean Standley made significant contributions to this report.
In April 2018, Facebook disclosed that a Cambridge University researcher may have improperly shared the data of up to 87 million of its users with a political consulting firm. This disclosure followed other recent incidents involving the misuse of consumers' personal information from the Internet, which is used by about three-quarters of Americans. GAO was asked to review federal oversight of Internet privacy. This report addresses, among other objectives: (1) how FTC and FCC have overseen consumers' Internet privacy and (2) selected stakeholders' views on the strengths and limitations of how Internet privacy currently is overseen and how, if at all, this approach could be enhanced.
GAO evaluated FTC and FCC Internet privacy enforcement actions and authorities and interviewed representatives from industry, consumer advocacy groups, and academia; FTC and FCC staff; former FTC and FCC commissioners; and officials from other federal oversight agencies. Industry stakeholders were selected to represent different sectors, and academics were selected because of their expertise in privacy, consumer protection, and regulatory issues.
What GAO Found
The United States does not have a comprehensive Internet privacy law governing the collection, use, and sale or other disclosure of consumers' personal information. At the federal level, the Federal Trade Commission (FTC) currently has the lead in overseeing Internet privacy, using its statutory authority under the FTC Act to protect consumers from unfair and deceptive trade practices. However, to date FTC has not issued regulations for Internet privacy other than those protecting financial privacy and the Internet privacy of children, which were required by law. For FTC Act violations, FTC may promulgate regulations but is required to use procedures that differ from traditional notice-and-comment processes and that FTC staff said add time and complexity.
In the last decade, FTC has filed 101 enforcement actions regarding Internet privacy; nearly all actions resulted in settlement agreements requiring action by the companies. In most of these cases, FTC did not levy civil penalties because it lacked such authority for those particular violations. The Federal Communications Commission (FCC) has had a limited role in overseeing Internet privacy. From 2015 to 2017, FCC asserted jurisdiction over the privacy practices of Internet service providers. In 2016, FCC promulgated privacy rules for Internet service providers that Congress later repealed. FTC resumed privacy oversight of Internet service providers in June 2018.
Stakeholders GAO interviewed had varied views on the current Internet privacy enforcement approach and how it could be enhanced. Most Internet industry stakeholders said they favored FTC's current approach—direct enforcement of its unfair and deceptive practices statutory authority, rather than promulgating and enforcing regulations implementing that authority. These stakeholders said that the current approach allows for flexibility and that regulations could hinder innovation. Other stakeholders, including consumer advocates and most former FTC and FCC commissioners GAO interviewed, favored having FTC issue and enforce regulations. Some stakeholders said a new data-protection agency was needed to oversee consumer privacy. Stakeholders identified three main areas in which Internet privacy oversight could be enhanced:
Statute . Some stakeholders told GAO that an overarching Internet privacy statute could enhance consumer protection by clearly articulating to consumers, industry, and agencies what behaviors are prohibited.
Rulemaking . Some stakeholders said that regulations can provide clarity, enforcement fairness, and flexibility. Officials from two other consumer protection agencies said their rulemaking authority assists in their oversight efforts and works together with enforcement actions.
Civil penalty authority. Some stakeholders said FTC's Internet privacy enforcement could be more effective with authority to levy civil penalties for first-time violations of the FTC Act.
Comprehensive Internet privacy legislation that establishes specific standards and includes traditional notice-and-comment rulemaking and broader civil penalty authority could enhance the federal government's ability to protect consumer privacy.
What GAO Recommends
Congress should consider developing comprehensive legislation on Internet privacy that would enhance consumer protections and provide flexibility to address a rapidly evolving Internet environment. Issues that should be considered include what authorities agencies should have in order to oversee Internet privacy, including appropriate rulemaking authority. |
How Reverse Auctions Work
In a traditional auction, multiple buyers bid against one another to purchase a good or service that is for sale. Generally speaking, the bidder offering the highest price receives the item for sale, and the seller benefits from the competition by receiving more money. In contrast, reverse auctions are intended to encourage multiple vendors to compete against one another to win a contract from the government by lowering the price for which the vendor is willing to sell a particular good or service. The buyer—typically a contracting official—then evaluates the technical proposals and bids, and selects a winning vendor—generally the bidder who submitted the lowest price bid with an acceptable proposal—to meet the government’s need. Figure 1 compares these two types of auctions.
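To make the award logic concrete, the following minimal Python sketch illustrates how a reverse auction platform might select a winner from a set of bids. The data layout, vendor names, prices, and function name here are hypothetical, and the logic is simplified; real platforms also enforce auction close times, minimum bid decrements, and vendor eligibility rules.

    # Hypothetical sketch of reverse auction award logic (simplified).
    # The award generally goes to the lowest-priced bid whose proposal
    # was evaluated as technically acceptable.
    def select_winner(bids):
        acceptable = [b for b in bids if b["acceptable"]]
        if not acceptable:
            return None  # no awardable bid; the buyer may cancel the auction
        return min(acceptable, key=lambda b: b["price"])

    bids = [
        {"vendor": "Vendor A", "price": 10500.00, "acceptable": True},
        {"vendor": "Vendor B", "price": 9800.00, "acceptable": True},
        {"vendor": "Vendor C", "price": 9200.00, "acceptable": False},
    ]
    print(select_winner(bids)["vendor"])  # prints "Vendor B"

In this invented example, Vendor C submitted the lowest price but its proposal was not acceptable, so the award goes to Vendor B at $9,800, consistent with the lowest-priced, technically acceptable selection described above.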
Reverse auctions can be opened to any vendor on the open market or can be limited to vendors that hold contracts on existing contract vehicles, such as indefinite-delivery vehicles under which the government has already determined that a specific group of vendors is qualified to sell specific goods or services. Existing vehicles provide a simplified way to procure commercial products and services. Agencies can use reverse auctions as a tool to further promote competition and lower prices, among other potential benefits. Agencies can use reverse auctions to order from various existing contract vehicles, including:
The Army’s CHESS program. CHESS is the Army’s primary source for commercial information technology hardware, software, and services.
DHS’s First Source II. First Source II is a 100 percent small business contract vehicle, specifically designed as a preferred source to acquire commercially available information technology commodities, solutions, and value-added vendor services to support DHS programs.
GSA’s Federal Supply Schedules program. The Federal Supply Schedules provide federal agencies a simplified method of purchasing commercial products and services from multiple schedules and numerous vendors, at prices associated with volume discount buying.
National Aeronautics and Space Administration’s Solutions for Enterprise-Wide Procurement (SEWP). SEWP allows federal agencies government-wide to purchase from over 140 vendors and offers a wide range of commercial advanced technology products and product-based services.
Reverse auction providers can be private companies or offices within federal agencies, and the providers may provide reverse auction services across the government or to specific agencies. Since we last reported on this issue in December 2013, two federal agencies developed platforms to facilitate reverse auctions through existing contract vehicles, by adapting existing electronic platforms. In July 2013, GSA’s Federal Acquisition Service launched its platform, GSA Reverse Auctions, which was built off its e-Buy tool and initially offered reverse auctions for a limited number of GSA and VA Federal Supply Schedule contracts, expanding to additional schedule contracts and agency-specific multiple award contracts over the following 2 years. In November 2015, GSA Reverse Auctions expanded further to offer open market auctions. In January 2016, Army’s CHESS program launched a capability using its IT e-mart to run reverse auctions on certain CHESS contracts. As with platforms built by the private sector, new government capabilities carry development and ongoing maintenance costs. According to GSA officials, development of the reverse auction capability cost approximately $2 million, and operations and maintenance costs are expected to total about $650,000 over the next 3 fiscal years. According to CHESS officials, its capability was developed at no additional financial cost under the fixed-price contract for the IT e-mart, although there were opportunity costs because other lower priority actions were delayed. Table 1 includes information about the reverse auction providers we identified in our review.
Reverse auction providers offer differing levels of service, ranging from simply providing a web-based reverse auction platform to a full-service model. Full-service providers may offer services such as creating draft auctions, soliciting vendors to participate, helping create a marketplace of vendors, and encouraging vendor participation for low-bid-count auctions. Agency buyers can select which additional services, if any, to use. FedBid is an example of a full-service provider, whereas Army CHESS provides a self-service web-based reverse auction platform, the IT e-mart.
While the government pays some reverse auction providers directly, other reverse auction providers, including FedBid and GSA, collect reverse auction fees through an indirect payment process. Generally, in the indirect payment process, the reverse auction provider adds a fee onto the winning vendor’s bid. Then, the agency pays the winning vendor this total amount. In turn, the reverse auction provider collects the fee from the winning vendor (see figure 2).
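As a rough illustration of the indirect payment process shown in figure 2, the short Python sketch below folds a provider fee, expressed as a percentage of the winning bid, into the total the agency pays. The 3 percent rate and the function name are assumptions chosen for illustration only and do not represent any provider's actual fee schedule.

    # Hypothetical sketch of the indirect fee process (illustrative rate only).
    def price_paid_by_agency(winning_bid, fee_rate=0.03):
        # The provider adds its fee to the winning vendor's bid; the agency
        # pays the vendor the total, and the vendor remits the fee.
        fee = winning_bid * fee_rate
        return winning_bid + fee, fee

    total, fee = price_paid_by_agency(100000.00)
    print(f"Agency pays ${total:,.2f}; vendor remits ${fee:,.2f} to the provider")
    # Agency pays $103,000.00; vendor remits $3,000.00 to the provider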
Prior GAO Work
In December 2013, we reviewed the use of reverse auctions at four agencies—Army, DHS, Interior, and VA—and found that these agencies steadily increased their use of reverse auctions (in number and dollar value) from fiscal years 2008 to 2012. For auctions in 2012 across the four agencies, we found:
Agencies awarded about 95 percent of reverse auctions for $150,000 or less.
Information technology goods and services were among the top categories purchased.
Products made up about 90 percent of total dollar value of awarded reverse auctions.
47 percent of reverse auctions were for orders from existing contracts.
80 percent of reverse auction dollars and about 86 percent of reverse auctions were awarded to small businesses.
In addition, we found that the four agencies in our review did not maximize the potential benefits of reverse auctions—competition and savings. We found that over one-third of reverse auctions in 2012 had no iterative bidding and that it was unclear whether savings calculated for reverse auctions were accurate because cost estimates developed before the auction may have been set too low or too high. In addition, we found that almost half of the reverse auctions were used to obtain items from existing contracts.
We further noted that there was a lack of comprehensive government-wide guidance and that the Federal Acquisition Regulation (FAR) did not specifically address reverse auctions, resulting in confusion about their use. We recommended the Director of the Office of Management and Budget (OMB) take steps to amend the FAR to address agencies’ use of reverse auctions and issue government-wide guidance to maximize competition and savings when using reverse auctions. OMB’s Office of Federal Procurement Policy (OFPP) subsequently issued guidance in June 2015 on reverse auctions, and the proposed FAR changes are currently being reviewed prior to being published for public comment.
Government-wide Regulations and Guidance
Prior to 1997, the FAR prohibited agencies from using auctioning techniques. In 1997, the FAR was revised to eliminate these prohibitions as part of an overall effort to make the source selection process more innovative, simplify the acquisition process, and facilitate a best value acquisition approach.
In June 2015, OFPP issued guidance to federal agencies on the effective use of reverse auctions. This memorandum reviewed the benefits of reverse auctions, offered a set of reminders to help contracting offices maximize the value of this tool, and asked agencies to work with OFPP in identifying and collecting data that can be used to evaluate and improve results. Specifically, the memorandum noted that some of the benefits of reverse auctions are price reductions, enhanced competition, and significant small business participation. In addition, the memorandum noted that reverse auctions are not a “one size fits all” solution and are likely to be most effective when:

requirements are steady and relatively simple and might otherwise be acquired using either a sealed bid or by achieving best value through “low price technically acceptable” source selection criteria; and

the acquisitions result in fixed-price agreements.
Typically, these circumstances exist in acquisitions for commercial items and simple services that often fall under the simplified acquisition threshold.
The memorandum reminds agencies that, as with any procurement, market research must be conducted to understand the marketplace and to determine if it is reasonable to assume that the potential benefits of a reverse auction can be achieved. It also notes that agencies should regularly evaluate their experiences with reverse auctions and the effectiveness of existing practices and policies as part of procurement management reviews so that refinements can be made as necessary. The issues addressed in the OFPP memorandum have not yet been incorporated into the FAR. While the FAR does not specifically address reverse auctions, several provisions facilitate agencies’ use of them, such as allowing the use of innovative strategies and electronic commerce.
Federal Agencies’ Use of Reverse Auctions Decreased between Fiscal Years 2013 and 2017
We found the value of awarded reverse auctions decreased approximately 22 percent across the government between 2013 and 2017, from about $1.9 billion to about $1.5 billion. Although the number of auctions consistently decreased each year from 2013 to 2017, the dollar value of auctions increased after 2015, indicating that some individual reverse auctions have been for larger dollar values in the past couple of years (see figure 3).
During this same period, the overall trend in federal contract obligations initially decreased from 2013 through 2015 and then increased overall through 2017—from about $490 billion in 2013 to $508 billion in 2017. Hence, since 2013, contracts awarded through reverse auctions have consistently represented less than 0.5 percent of federal contract spending. In addition, almost all auctions and the vast majority of the dollars agencies awarded between 2013 and 2017 resulted from the use of the FedBid reverse auction platform.
We also found that the dollar value of awarded reverse auctions varied from 2013 to 2017 across the six agencies we reviewed, with total reverse auction value greater in 2017 than in 2013 for half of the agencies (DHS, Navy, and State) (see figure 4).
Our analysis indicates that agencies’ and components’ policies may influence the use of reverse auctions. Specifically, two agencies that experienced substantial reductions in their use of reverse auctions changed their policies so that contracting officers would no longer be required to use reverse auctions. For example, Interior’s August 2015 policy rescinded a previous requirement to first consider using reverse auctions for commercial items using simplified procedures above the micro-purchase threshold and below the simplified acquisition threshold. The revised policy encouraged contracting officials to use procurement tools as appropriate, allowing for the use of reverse auctions at contracting officials’ discretion. VA’s Veterans Healthcare Administration—formerly one of the largest users of reverse auctions—revised its procurement manual in February 2014 to suspend the use of any reverse auction platform to conduct new reverse auctions. The Veterans Healthcare Administration amended its procurement manual again in October 2015 to lift the suspension of GSA Reverse Auctions, but kept in effect the suspension of all other reverse auction platforms. VA and Veterans Healthcare Administration officials stated that they revised their policies following investigations about the use of reverse auctions at the Veterans Healthcare Administration by the VA Office of Inspector General.
Other agencies and components we reviewed have policies that encourage the use of reverse auctions. For example:
State’s May 2015 policy memorandum established a requirement that contracting officials first consider using reverse auctions conducted through FedBid for all noncomplex commodities.
DHS’s Customs and Border Protection’s August 2014 standard operating procedure required that reverse auctions conducted through FedBid be given priority consideration when acquiring non-complex commodities.
A Naval Supply Systems Command’s November 2014 policy letter required use of reverse auctions for commercial off-the-shelf supply items valued from $25,000 to the simplified acquisition threshold.
The Army’s Mission Installation Contracting Command Desk Book has generally required use of reverse auctions for all acquisitions above the micro-purchase threshold for commercial supplies in certain categories.
Overall, of the almost 15,000 reverse auctions conducted and awarded in 2016 by the five agencies for which we reviewed detailed data, we found that about 94 percent were for contracts valued below $150,000. However, we found that nearly two-thirds of the dollar value of awarded reverse auctions was for purchases above $150,000 (see figure 5).
Further, we found that reverse auctions valued at more than $1 million in 2016 accounted for less than 1 percent of the number of auctions and 32 percent of the dollar value. Most (about 80 percent) of these higher-dollar-value auctions were for information technology-related products and services, while the remainder included hand tools, cabling equipment, radios, uniforms, air rifles, and vehicle trailers.
Our analysis also found that the selected agencies generally used reverse auctions with fixed-price contracts, commercial items, products, and to promote small business participation—a few of the effective uses outlined in the June 2015 OFPP memorandum. For example, in terms of award value, 87 percent was for products and 13 percent for services. In addition, 60 percent of auction award value was for information technology-related purchases. Further, 83 percent of auction value was for awards made to small businesses.
Agencies Obtained Benefits of Enhanced Competition and Reduced Administrative Burden, but Savings Estimates Should be Viewed with Caution
The agencies we reviewed obtained iterative bidding, indicating enhanced competition between multiple vendors, in nearly three-quarters of reverse auctions, and contracting officials cited reduced administrative burden as another key benefit. Determining the actual amount of savings, however, is challenging due to data issues. Overall, in fiscal year 2016, the agencies we reviewed achieved iterative bidding for 75 percent of reverse auctions, but in 20 percent of auctions only one bidder participated. Auctions representing nearly half of the value of State’s reverse auction awards had only one bidder, driven by large dollar value procurements, in part due to State’s requirement to use reverse auctions for all non-complex commodities without regard to expectations for competition. Contracting officials we spoke to cited reduced administrative burden, particularly at the end of the fiscal year, as a key factor in the decision to use reverse auctions. Based on data from reverse auction providers, reverse auctions that took place in 2016 resulted in contract awards that were an estimated $100 million below the government’s pre-auction estimates, though the extent to which this figure represents actual savings is difficult to determine.
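The savings figures discussed in this section are derived by comparing final award prices against the government's pre-auction estimates. The minimal Python sketch below, which uses invented numbers, shows why the estimate drives the result: the same award price yields very different apparent savings depending on whether the estimate was set high or low.

    # Hypothetical illustration of how reverse auction "savings" are derived.
    def estimated_savings(pre_auction_estimate, award_price):
        savings = pre_auction_estimate - award_price
        pct = savings / pre_auction_estimate * 100
        return savings, pct

    # The same $88,000 award measured against two different estimates:
    for estimate in (100000.00, 90000.00):
        savings, pct = estimated_savings(estimate, 88000.00)
        print(f"estimate ${estimate:,.0f}: savings ${savings:,.0f} ({pct:.1f}%)")
    # estimate $100,000: savings $12,000 (12.0%)
    # estimate $90,000: savings $2,000 (2.2%)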
Three-Quarters of 2016 Auctions Resulted in Enhanced Competition through Iterative Bidding, But Competition Results Varied by Agency and Other Factors
Reverse Auctions Generally Resulted in Iterative Bidding
We found the agencies we reviewed achieved iterative bidding on 75 percent of auctions in fiscal year 2016, accounting for 68 percent of dollars spent. However, in 20 percent of the auctions, only one bidder participated (see figure 6). OFPP’s June 2015 guidance states that reverse auctions are likely to be most effective in highly competitive marketplaces.
We found that auctions with iterative bidding resulted in award prices that were, on average, about 12 percent lower than pre-auction cost estimates, which generally reflect the government’s independent cost estimate. In contrast, this difference was about 6 percent among those auctions without iterative bidding. Of the 40 auctions we selected for in-depth review, 29 had iterative bidding, and the bid history for some of these auctions demonstrated the potential benefits of iterative bidding. For example:
State awarded an approximately $4.3 million contract for night vision goggles following an open market reverse auction that got 110 bids from 16 vendors. The winning vendor bid 17 times and lowered its price by roughly 30 percent over the course of the auction, not including the reverse auction provider’s indirect fee.
DHS’s Customs and Border Protection awarded an approximately $268,000 contract, including an option period, for tires following an open market reverse auction that got 35 bids from 13 vendors. The winning vendor bid three times and lowered its bid by roughly 25 percent over the course of the auction, not including the reverse auction provider’s indirect fee.
Army National Guard Bureau awarded an approximately $14,000 contract for ice climbing equipment following an open market reverse auction that got 20 bids from 7 vendors. The winning vendor bid six times and lowered its price by roughly 10 percent over the course of the auction, not including the reverse auction provider’s indirect fee.
About One-Fifth of Reverse Auctions Had Only One Bidder
Although three-quarters of 2016 auctions achieved iterative bidding for the agencies we reviewed, we found that in 20 percent of the awarded reverse auctions only one bidder participated, representing 27 percent of the dollars awarded. This share is higher than the government-wide share of obligations on 2016 competitive procurements for which only one offer was received (14 percent). However, this varied by agency. Four of the five agencies we reviewed had higher proportions of only one bidder participating on reverse auctions, by dollar value, than for their competitive procurements in general, particularly at State. The other agency, Interior, had a lower proportion of only one bidder participating in reverse auctions. Table 2 describes differences in competition for selected agencies in 2016.
Our analysis indicates that requiring the use of reverse auctions through agency or component-level guidance may contribute to agencies obligating more money through reverse auctions that attract only one bidder. Specifically, State’s percentage of dollar value for auctions with one bidder—almost 40 percent—was substantially higher than other agencies in our review and more than twice State’s percentage of dollars obligated on competitive procurements in general when only one offer was received. This was driven by the results of reverse auctions for larger dollar value contracts. In 2016, State awarded more auctions valued over $1 million than any of the other agencies we reviewed. Of 36 State auctions valued at more than $1 million, 13 had only one bidder—accounting for 27 percent of the total dollar value of State’s reverse auctions in 2016. State’s May 2015 guidance requires contracting officials to first consider using FedBid’s reverse auction platform for the acquisition of non-complex commodities, but does not mention competition or its benefits. While the policy allows contracting officers to seek waivers in certain circumstances, none of the potential exceptions listed in the policy include the expectation of a lack of robust competition. Some State contracting officials we spoke to said that the requirement encourages the use of reverse auctions even if there is not a reasonable expectation of competition.
We reviewed four State auctions valued at more than $1 million where there was only one bidder. Contracting officials responsible for three of the four auctions cited the guidance as a reason they used a reverse auction. For example, State awarded a $12 million contract for brand name computer and storage infrastructure equipment following a 2-day reverse auction at the end of the fiscal year open to National Aeronautics and Space Administration SEWP vendors. The contracting official responsible for this auction told us that market research indicated that two SEWP vendors could meet their needs, but only one vendor had responded to inquiries during market research. However, she said that she used a reverse auction because State policy required it for contracts of this type.
In the fourth instance, State officials acknowledged that other factors, including poor acquisition planning that resulted in tight timeframes, led them to use a reverse auction as a “crisis management tool.” State awarded a $19 million contract, including option periods, for construction support services in Afghanistan following a 17-hour reverse auction among Federal Supply Schedule vendors, although only one vendor had responded to market research inquiries. Officials said that they had sought to combine this contract with another set of services for which the same vendor was the only identified source likely to respond, but coordinating with the customers took too long, and they ultimately ran out of time before the predecessor contract expired and services would stop. Under tight timeframes that risked the program losing critical services, contracting officials said they used a reverse auction because it allowed them to make a contract award quickly while still opening the requirement to multiple vendors, even though there was little chance that multiple vendors would bid.
OFPP’s June 2015 reverse auctions guidance states that market research—the process used to collect and analyze data about the capabilities in the market to satisfy agency needs—must be conducted to understand the marketplace and to determine whether it is reasonable to assume that the potential benefits of reverse auctions can be achieved. State’s requirement to first consider using FedBid’s reverse auction platform for all non-complex supplies, even with exceptions, may contribute to State using and paying for reverse auctions when a different approach could garner more competition and potentially a better price.
Competition Rates Were Lower When Agencies Used Existing Contract Vehicles
For the almost 15,000 auctions the five selected agencies conducted in 2016, nearly $590 million—about 65 percent of total awarded reverse auction value—was for orders on existing contract vehicles. We found that, in comparison to open market auctions, reverse auctions using existing contract vehicles (1) had higher rates of only one bidder participating and (2) were less likely to achieve iterative bidding (see table 3).
The 40 auctions we reviewed in-depth included 24 that used existing contract vehicles, including 5 in which only one bidder participated—4 awarded by State and 1 by DHS’s Customs and Border Protection. However, our review of these examples did not identify clear reasons why auctions on existing contract vehicles have lower competition rates overall than open market auctions. Agency procurement officials told us that they are aware of variation in the competition obtained under particular existing vehicles generally, but not specifically when reverse auctions are used, and suggested that it would be useful to examine the competition dynamics for reverse auctions on a vehicle-by-vehicle basis.
None of the agency guidance we reviewed comprehensively addressed how to use reverse auctions effectively when ordering from existing contract vehicles. Further, none of the five agencies we reviewed has collected data on or assessed why the number of reverse auctions with only one bidder was significantly higher on existing contract vehicles than on the open market. OFPP’s June 2015 reverse auctions guidance states that agencies should be evaluating their experiences with reverse auctions and the effectiveness of existing practices and policies so that refinements can be made as necessary. Standards for internal control require management to periodically review policies and procedures for continued relevance and effectiveness in achieving the entity’s objectives. Without understanding what factors indicate that conducting reverse auctions using existing contract vehicles is appropriate and providing this information to contracting officials so that they can consider it when developing their acquisition strategies, agencies may be using and paying for reverse auctions when another approach might yield better competition and pricing.
Decreased Workload and Ease of Use Are Key Reasons Officials Use Reverse Auctions
Similar to what we found in December 2013, of the 35 contracting officials we interviewed, 29 cited ease of use and reduced administrative burden as key reasons why they chose to use reverse auctions, particularly at the end of the fiscal year. Officials noted that certain reverse auction providers, such as FedBid, offer acquisition support services in addition to the reverse auction platform itself that can decrease the workload for contracting officials. In particular, contracting officials noted the following as ways that reverse auctions assisted them in performing their responsibilities:
The reverse auction provider performed functions such as building complex auctions and following up with vendors to encourage participation. In some instances, such as at State or Customs and Border Protection, FedBid provides support personnel on-site at agencies. Contracting officials told us that this is helpful because they are able to obtain in-person support for troubleshooting and time-sensitive purchases. Officials said that they used these additional services for 7 of the 29 FedBid auctions about which we interviewed contracting officials.
Reverse auction platforms produced auction documentation that decreased the administrative burden of producing a contract file. For example, Army officials responsible for a $14,000 award for ice climbing equipment explained that the summary document produced by the FedBid platform includes much of the competition information, such as auction participants and bids, needed for the contract file.
The reverse auction platforms enabled contracting officials to replicate past auctions for similar items, then update auction-specific information. For example, a DHS Immigration and Customs Enforcement contracting official responsible for a $38,000 award for detention uniforms said that he makes frequent purchases of the same items, so the ability to clone past auctions and update the quantities, pre-auction cost estimates, clauses, and sources (open market or existing contracts) saves a lot of time. He said that with other procurement methods he must re-enter procurement information each time.
Reverse auctions enabled them to work on multiple procurements simultaneously, rather than sending emails or making phone calls to individual vendors to obtain quotes. For example, a DHS Customs and Border Protection contracting official responsible for two auctions we reviewed said that reverse auctions allow him to work on multiple contract awards at a time at the end of the fiscal year.
Data we collected from reverse auction providers show that contracting officials make greater use of reverse auctions at the end of the fiscal year. While the agencies we reviewed made a disproportionate share of new awards in the last fiscal quarter of 2016—42 percent—reverse auctions were used even more heavily, with agencies conducting 53 percent of reverse auctions in the last quarter (see figure 7).
Reverse Auction Data Indicate $100 Million in Savings in 2016, but Savings Estimates Should Be Viewed with Caution
Based on fiscal year 2016 data from reverse auction providers, Army, Navy, DHS, Interior, and State awarded contracts whose total value was more than $100 million below the agencies’ pre-auction cost estimates, after including any reverse auction provider fees (see table 4).
The agencies we reviewed generally rely on reverse auction providers to report savings estimates to them. FedBid—the largest provider used by our selected agencies—and GSA Reverse Auctions generally calculate savings as the difference between the pre-auction cost estimate—represented by the auction’s “target price” set by the buyer—and the award price, which is the winning vendor’s bid plus the reverse auction provider’s fee. In some cases, however, FedBid will modify this approach to account for potential shortcomings in the quality of pre-auction cost estimates. FedBid does this in two different scenarios.
First, when using the agency target price would result in abnormally high savings—generally defined by FedBid as savings of more than 50 percent of the target price—FedBid instead uses a target price based on the average of bids received during the auction. FedBid representatives explained that these adjustments help avoid overstating savings caused by outlier target prices.
Second, when the agency target price is lower than the winning bid and would result in calculated savings of less than $0, FedBid instead uses a target price equal to the winning bid, so that calculated savings equal $0. FedBid representatives explained that, in their view, a contracting official would not proceed with an award if the winning bid was higher than the target price unless the official believed that the pre-auction estimate was invalid.
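To make these adjustments concrete, the following sketch restates the reported savings logic in Python. It is an illustration based solely on the descriptions above (the 50 percent threshold, the bid-averaging substitution, and the $0 floor); the function and variable names are ours, and the sketch is not FedBid’s actual algorithm, the details of which were not available to us.

```python
def reported_savings(target_price, winning_bid, fee, all_bids):
    """Illustrative reconstruction of the savings adjustments described
    above; the thresholds and logic are drawn from this report's
    narrative, not from FedBid's actual implementation."""
    award_price = winning_bid + fee  # award price includes the provider's fee
    savings = target_price - award_price

    # Scenario 1: abnormally high savings, so substitute a target price
    # based on the average of bids received during the auction.
    if savings > 0.5 * target_price:
        target_price = sum(all_bids) / len(all_bids)
        savings = target_price - award_price

    # Scenario 2: the target price was below the winning bid, so treat the
    # target as equal to the award price and report savings of $0.
    return max(savings, 0.0)
```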
Overall, we found that in 4 of the 33 FedBid auctions we reviewed, the awarded reverse auction prices were collectively $900,000 higher than the pre-auction cost estimates (which were used as the target prices). Prior to reporting savings to the agencies, FedBid adjusted the target prices to match the award values and reported that these auctions resulted in no savings. FedBid representatives said that they have provided details about this data normalization process to the contracting officers responsible for their agency contracts.
We identified other approaches to calculating savings resulting from reverse auctions. For example, in December 2016, the Army negotiated a new contract with FedBid that established a different method for calculating savings in an attempt to isolate the savings attributable to the specific effects of reverse auctions. The Army calculates savings as the difference between the “initial leading bid”—usually the second bid placed—and the winning bid. GSA Reverse Auctions and Army CHESS have also calculated savings through other methods, including as the difference between the highest bid and the lowest bid, as well as between the winning vendor’s initial and lowest bids.
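For comparison with the sketch above, these alternative formulas can be summarized as follows; again, the function and variable names are ours, chosen only for exposition.

```python
def army_savings(initial_leading_bid, winning_bid):
    # Army method: the "initial leading bid" (usually the second bid
    # placed) minus the winning bid.
    return initial_leading_bid - winning_bid

def bid_spread_savings(highest_bid, lowest_bid):
    # One GSA Reverse Auctions / Army CHESS method: highest bid minus
    # lowest bid.
    return highest_bid - lowest_bid

def winner_improvement_savings(winner_initial_bid, winner_lowest_bid):
    # Another method: the winning vendor's initial bid minus its lowest bid.
    return winner_initial_bid - winner_lowest_bid
```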
Contracting officials acknowledged several challenges in using the pre-auction cost estimate as a baseline from which to calculate savings. For example:
Contracting officials at Interior’s U.S. Geological Survey stated that it is critical to ensure that the pre-auction cost estimates they set in the reverse auction system are based on good market research, and that the target price is set at the lowest price they could obtain outside of a reverse auction. For example, before conducting a reverse auction for water filters, these officials lowered the pre-auction cost estimate by about $450,000 from the program office’s initial cost estimate to reflect a lower price identified in subsequent market research. During the reverse auction, Interior obtained five bids from four vendors, resulting in an award valued at $1.4 million, including option periods. The auction’s savings were then calculated to be $670,000.
In another auction, resulting in a $430,000 contract awarded by the Army for laptops, the contracting official noted that the pre-auction cost estimate was developed by the customer based on historical pricing. The price obtained through the reverse auction reflected a calculated savings of $67,000, or about 13 percent below the pre-auction estimate. However, the contracting official said that this method is not a reliable way to calculate savings, as his customers typically use a high estimate to make sure they do not have to request additional funds. The contracting officer also noted that, in his experience, using historical pricing for technology products can be problematic, since pricing changes very quickly as new technology is developed and older products become obsolete.
We reported in December 2013 that it was unclear whether comparing auction award prices to the pre-auction cost estimate produced an accurate estimate of savings, as it depended on the quality of the pre-auction cost estimate, which is generally informed by market research. In our current review, contracting officials reiterated this perspective. Federal regulations provide flexibility in terms of the extent to which market research should be conducted, and how that research should be conducted, including for low dollar procurements. Because the FAR has not yet been amended to address any specific requirements for reverse auctions as we recommended in our previous report, we are not making additional recommendations on this issue.
Agency Guidance and Contracting Approaches Lack Sufficient Information to Ensure Good Business Decisions and Appropriate Contract Oversight
For reverse auctions conducted in 2016, the five agencies we reviewed indirectly paid more than $13 million in fees. Similar to our findings from our December 2013 review, we found that agency contracting officials we interviewed generally did not have a complete and accurate understanding of reverse auction fee structures. This hinders their ability to make informed decisions about when to use reverse auctions or which reverse auction platform to use for a specific procurement, potentially leading them to pay more in fees than necessary for the level of service required. Our analysis of agency- and component-level guidance found that none of the agency-level guidance we reviewed fully informed contracting officials about the availability of reverse auction providers and platforms and any applicable fee structures, nor did the guidance ensure that contracting officials would compare the options available to them when considering whether to use reverse auctions. In addition, agencies that used the services of FedBid, the largest reverse auction provider, did not always draft sufficiently detailed fee arrangements to ensure that the agencies were knowledgeable about and could conduct oversight of FedBid’s indirect fees.
Selected Agencies Paid over $13 Million for Reverse Auctions Conducted in 2016
The five agencies we reviewed indirectly paid about $13.4 million in fees to reverse auction providers in 2016. As discussed previously, in the indirect payment process the reverse auction provider generally adds a fee onto the winning vendor’s bid. The agency then pays the winning vendor this total amount, and the reverse auction provider in turn collects the fee from the winning vendor.
Agencies we reviewed primarily conducted reverse auctions using three reverse auction providers’ platforms in 2016. The agencies paid indirect fees to two of these reverse auction providers in 2016—FedBid and GSA—while the third provider, Army CHESS, did not charge a fee for its services. Indirect fees paid to FedBid and GSA generally varied from 0 to 3 percent of the value of the transaction, though both FedBid and GSA cap certain fees and will waive fees in certain circumstances. For example, GSA does not charge an indirect reverse auction fee for Federal Supply Schedule orders or agency contracts based on Federal Supply Schedule contracts. See table 5 for additional details on typical fee structures of reverse auction providers used by the agencies we reviewed.
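As a minimal sketch of these mechanics, the following illustrates how an indirect fee raises the price an agency pays. The 3 percent rate and $10,000 cap reflect the typical figures cited in this report; actual rates, caps, and waiver conditions vary by provider and contract.

```python
FEE_RATE = 0.03      # typical indirect fee rate cited in this report
FEE_CAP = 10_000.00  # typical per-transaction fee cap cited in this report

def total_price_to_agency(vendor_bid: float) -> float:
    """Price the agency sees and pays: the vendor's bid plus the
    provider's indirect fee (capped)."""
    fee = min(vendor_bid * FEE_RATE, FEE_CAP)
    return vendor_bid + fee

# Example: on a $100,000 winning bid the agency pays $103,000; the
# provider later collects the $3,000 fee from the winning vendor.
print(total_price_to_agency(100_000.00))
```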
Agency Guidance Does Not Provide Sufficient Information to Contracting Officials on Reverse Auction Fees to Help Ensure Good Business Decisions
We found that none of the guidance we reviewed from the five agencies included the information needed to help ensure that contracting officials understand reverse auction indirect fees and their roles in assessing those fees. OFPP’s June 2015 guidance states that contracting officers should consider the amount of fees paid when evaluating whether the price of a product or service in a reverse auction is fair and reasonable, including any additional fees for use of another agency’s existing contract. This expectation is further established in agency guidance at the Army, DHS, and Interior. Our review found, however, that contracting officers generally did not understand how fees would be applied or the amount they would actually pay to use a reverse auction. This finding is consistent with our observation from our December 2013 report that agency officials were uncertain about how reverse auction fees were paid. Understanding reverse auctions’ costs is essential to making informed business decisions about when to use reverse auctions or which reverse auction platform to use for a particular procurement. Without such understanding, the risk increases that agencies may be paying more in fees than necessary for the level of service required.
Agency officials we interviewed generally did not have an accurate understanding of reverse auction indirect fee structures. For example, acquisition policy officials at State told us that their contract with FedBid has no cost to the agency because the fees are paid by the companies that win the auctions, and that it is up to the companies whether or not to include the fee in their final price to the government. As discussed above, however, FedBid automatically adds fees onto all vendor bids. An official who was involved in developing policy related to reverse auction use at Interior told us that agency officials were not fully aware of the fee structure used by FedBid when they initially contracted for the company’s reverse auction services in October 2010. The official added that, in hindsight, the fee structure is something that should have been more closely considered.
Additionally, while the contracting officials we interviewed for the 30 reviewed auctions that incurred an indirect reverse auction fee were generally aware that a fee was being paid, officials responsible for 28 of these 30 auctions were uncertain about one or more elements of the reverse auction fee structure. For example:
Lack of understanding of fee amount charged: Contracting officials who conducted 18 of the 29 FedBid auctions in our review were not aware of the fee charged for the reverse auction. All but three of these officials told us that they generally do not see the fee amount because it is included in the vendors’ bids and is not broken out separately, so they evaluate the price inclusive of the fee. In response, FedBid representatives told us that since March 2014 they have offered functionality in the FedBid system that displays the fee separately. However, FedBid turns this functionality on only at the request of agency officials, and no such request had been made at the time of our review. We found that procurement officials at all five of the agencies we reviewed were unaware that this feature was available. According to FedBid representatives, they have since notified the contracting officers responsible for their agency contracts about this feature.
Confusion about circumstances for fee waivers or reductions: Although FedBid will waive or reduce its fee when the fee causes the auction to be above the pre-auction cost estimate or an established contract price, contracting officials responsible for 22 of the 29 FedBid reverse auctions did not accurately understand how this would work when we asked about it. For instance, some contracting officials at State and Customs and Border Protection told us in error that FedBid would waive its fee if there was only one bidder in an auction. Additionally, contracting officials for two auctions told us that they thought the fees associated with their auctions had been waived and expressed surprise when they learned the fee amount. For one auction, a State contracting officer told us that if she had been aware of the amount of the potential fee for an auction for construction services for which only one bid was received, she may have considered other alternatives for awarding the contract.
Uncertainty about how fee caps are applied: While FedBid generally caps its reverse auction fees at $10,000 per transaction, officials we interviewed who were responsible for 20 of 29 FedBid auctions told us they were not aware of this cap or did not know its dollar threshold.
Additionally, while increased competition is typically cited as a benefit of reverse auctions, we found that about 18 percent of fees paid to reverse auction providers in 2016—approximately $2.5 million—were for auctions in which there was only one bidder participating (see table 6 for detail by agency).
Further, we found that agencies in our review indirectly paid approximately $3.3 million in fees for reverse auctions conducted in 2016 even when an alternative no-fee reverse auction platform was likely available. The availability of an alternative platform does not necessarily mean that the no-fee platform is the most appropriate option, because different platforms provide different levels of service. We did not determine whether particular platforms were more appropriate or resulted in lower overall prices to the government. However, we found that agencies paid these fees to FedBid to conduct reverse auctions for orders on Federal Supply Schedule contracts or Army CHESS contracts when they might have used GSA Reverse Auctions or the Army CHESS IT e-mart without paying a fee. Our 40 case studies included 10 auctions for orders off GSA’s Federal Supply Schedules or Army CHESS contracts that used FedBid rather than using GSA Reverse Auctions or the Army CHESS IT e-mart. For five auctions at Army and State, contracting officials told us they were required or strongly encouraged by agency or component policy to use FedBid. For the other five auctions, contracting officials told us that they preferred FedBid because it was easier to use or they were more familiar with it than GSA Reverse Auctions. Without considering which provider best meets its needs in these cases, the agencies may have paid more in fees than necessary for the required level of service.
We found that none of the agency guidance we reviewed was sufficient to ensure that contracting officials understood reverse auction fees and their roles in assessing those fees. A clear understanding is necessary to make informed decisions about when to use reverse auctions or which reverse auction platform to use for a particular procurement (see table 7).
We found that agency guidance we reviewed at two of the five agencies—Navy and State—did not address the role of contracting officials in understanding and assessing reverse auction fees. Specifically:
Navy does not have agency-wide guidance that addresses the circumstances and processes for using reverse auctions. At the component level, the Naval Supply Systems Command’s November 2014 guidance states that contracting officials may use any available government or commercial reverse auction platform for reverse auctions, unless ordering off GSA’s Federal Supply Schedule or other contract vehicle posted at GSA’s eBuy site, but the guidance does not provide information about how contracting officers should consider reverse auction fees in deciding which platform to use.
State’s guidance on reverse auctions does not address the role of contracting officers in considering reverse auction fees. As noted previously, State’s May 2015 policy memorandum requires that contracting officers first consider using FedBid for acquisition of all non-complex commodities unless a waiver is obtained.
Guidance we reviewed at the other three agencies—Army, Interior, and DHS—did address the role of contracting officials in understanding and assessing reverse auction fees, although the level of detail varied among the three agencies. Specifically:
A June 2015 policy alert from the Army stated that contracting officials are required to be aware of reverse auction fees and consider them in evaluating whether the price of the product or service being acquired is fair and reasonable.
Similarly, Interior’s August 2015 guidance states that contracting officers need to evaluate the estimated amount of reverse auction fees that will be paid when assessing whether prices are fair or reasonable.
DHS’s May 2017 guidance states that contracting officers need to understand the fees charged by a provider, and determine and document that the fee structure represents a fair and reasonable cost and offers the best value to the government.
None of the agency-wide guidance we reviewed at the five agencies detailed the fee structure of each reverse auction platform used by the respective agency. As a result, contracting officials’ ability to understand and assess the fees—an existing requirement in OFPP guidance and at the Army, Interior, and DHS—is hindered. Neither State nor Interior had guidance that detailed the specific fee structures of reverse auction providers used by contracting officials at those agencies. While one Army command developed guidance on FedBid’s fee structure, the Army has not provided any agency-wide guidance on FedBid or GSA Reverse Auctions fee structures, even though the Army awarded reverse auctions valued at approximately $326 million using these two providers in 2016. Similarly, while the Navy’s May 2017 memorandum of understanding for using GSA Reverse Auctions informs contracting officials of GSA Reverse Auctions’ fee structure, the Navy does not have guidance that details FedBid’s fee structure. In 2016, the Navy conducted more than 10 times as many auctions using FedBid’s platform as it did using GSA’s platform.
Additionally, we found that none of the agencies had agency-wide guidance that required contracting officials to consider whether no-fee reverse auction alternatives, such as GSA Reverse Auctions for Federal Supply Schedule orders and the Army’s CHESS IT e-mart for Army CHESS orders, would meet their needs. State, DHS, and Interior guidance does not address this issue at all. Similarly, while neither the Army nor Navy have agency-wide guidance that does so, each agency has component or command-level guidance that addresses this issue to a limited extent. For example, Naval Supply Systems Command guidance issued in November 2014 requires that contracting officials use GSA Reverse Auctions for products or services off the Federal Supply Schedule. More recently, according to Army officials, as of July 2017, the Army’s CHESS program began recommending that reverse auctions for orders off Army CHESS contracts be conducted using the Army CHESS IT e-mart.
Standards for internal control in the federal government require agencies to develop policies that address operational processes and the responsibilities of individuals for carrying out those processes. Our review found that, while certain agencies or agency components had guidance that provided some information about reverse auction fees, none of the agency-level guidance we reviewed fully addresses contracting officials’ role in understanding and assessing reverse auction fees, details fee structures for reverse auction platforms used by the agency, or requires that contracting officers compare the options for reverse auction providers that are available to them, particularly regarding no-fee alternatives. Without such guidance, contracting officers are at risk of paying more in fees than necessary for the level of service they require.
Agencies’ Contracting Approaches Do Not Provide Sufficient Information on Reverse Auction Fees to Facilitate Oversight and Adherence to Internal Control Standards
We found that while nearly all reverse auction fees were paid to FedBid, since FedBid was by far the largest reverse auction provider used by the selected agencies, agencies’ approaches to contracting with FedBid did not result in sufficiently detailed fee arrangements to ensure that the agencies were knowledgeable about the fees they were paying and could conduct oversight of whether FedBid was applying indirect fees as expected. Of the five agencies we reviewed that conducted reverse auctions using FedBid in 2016, two did not have documented agency-level fee arrangements with FedBid, while the other three had contracts that did not fully address at least one element of FedBid’s fees, as shown in table 8.
Three of the five agencies we reviewed that used FedBid—Army, Navy, and State—had agency-wide contracts in place with FedBid, but we found that these contracts did not always document key aspects of the fee terms with FedBid. Specifically:
Lack of clarification on how the fee cap applies to contracts with option years: FedBid representatives stated that their standard practice is that the fee cap will apply separately to each option year awarded. The Navy’s January 2018 contract with FedBid is consistent with this practice and explains how the fee cap will apply to contracts’ option years. In contrast, Army’s and State’s December 2016 contracts with FedBid do not specify how the fee cap would apply to option years. Contracting officials who were responsible for managing the FedBid contract at the Army told us they believed that the fee cap was a total of $10,000 per contract awarded, including for the base and all option years.
Lack of detail on calculation of fee cap: Navy’s and State’s contracts with FedBid did not include full details on how the fee cap would be applied. As discussed above, FedBid generally caps its fee at $10,000. However, due to the way FedBid calculates fees, if the lowest bid is not selected, the fee on the selected bid may be over $10,000. We found that 19 reverse auctions in 2016 resulted in FedBid fees over $10,000. Neither the Navy’s January 2018 contract nor State’s December 2016 contract explains that the fee may exceed $10,000.
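This report does not detail the underlying mechanism, but one plausible way a fee could exceed the cap, offered purely as a hypothetical illustration, is if the fee percentage is set so that the fee on the lowest bid hits the cap and that same effective rate is embedded in every bid:

```python
# Hypothetical illustration only; not a confirmed description of FedBid's
# fee calculation.
lowest_bid = 400_000.00
effective_rate = min(0.03, 10_000.00 / lowest_bid)  # 2.5 percent here

# If a higher bid is selected for award, the same embedded rate yields a
# fee above the nominal $10,000 cap.
selected_bid = 450_000.00
fee_on_selected = selected_bid * effective_rate     # $11,250
print(fee_on_selected)
```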
According to agency officials, DHS and Interior did not have agency-wide contracts with FedBid for reverse auctions conducted in 2016. While three DHS components had their own contracts with FedBid that were active in 2016, four additional components plus DHS headquarters used FedBid in 2016 without either an agency- or component-level contract in place. At Interior, the contract with FedBid expired in September 2015 and was not renewed, although contracting officials at Interior components continued to conduct reverse auctions on FedBid. Contracting officials at these agencies used FedBid’s services by agreeing to its standard terms and conditions each time they accessed the FedBid platform. FedBid representatives told us they consider the terms of use to be the contract between FedBid and the government when there is no agency- or component-level contract in place, and that this is similar to how commercial e-commerce marketplaces operate with federal agencies for micro-purchases. FedBid’s standard terms and conditions, however, do not provide detailed information on fees, such as the precise fee percentage charged or the amount of the fee cap. FedBid representatives told us that they typically charge federal agencies a 3 percent fee, but that fee details are not included in the standard terms and conditions because commercial and government customers may pay different fees. At DHS and Interior, when there are not agency- or component-level contracts in place and contracting officials use FedBid by agreeing to the standard terms and conditions, there is a risk that they may agree to fees or other terms that have not been reviewed and approved by agency acquisition and legal offices.
Lastly, we found that only two of the agencies we reviewed—the Army (since December 2016) and the Navy (since May 2012)—required and received regular monthly reporting from FedBid on reverse auction fees paid indirectly by the agency. Both agencies also have contractual requirements for FedBid to provide this information annually, in addition to the monthly reporting. Army officials told us that requiring additional data in their December 2016 contract with FedBid was a result of lessons learned from their September 2012 contract, and was intended, in part, to improve oversight of fees paid. Army and Navy officials provided examples of FedBid reverse auction fee reports, and described how they used this information to oversee their contracts with FedBid. The Army and Navy also both requested and received monthly reports from GSA Reverse Auctions that included detailed information on fees.
In contrast, DHS, Interior, and State did not require or receive regular reporting on fees from FedBid or GSA Reverse Auctions. As previously discussed, according to officials, DHS and Interior do not have agency-wide contracts with FedBid and, therefore, do not have a mechanism in place to require agency-wide reporting. Interior officials told us they do not receive any reports on fees paid from FedBid. For the two DHS components we reviewed, Immigration and Customs Enforcement officials told us that they received ad hoc reporting on fees paid to FedBid and provided us with a sample report that included fee data. While Customs and Border Protection’s contract with FedBid requires reporting on costs incurred by the government, officials told us that they do not receive any reporting on fees. State neither requires nor receives reporting on fees from FedBid. State and Customs and Border Protection officials told us that they do not receive such reporting since fees are paid by winning vendors and therefore there is no direct cost to the government to use FedBid. We found, however, that these agencies indirectly paid almost $4.2 million in fees to FedBid in 2016.
Standards for internal control require agencies to appropriately document transactions and significant events to assist with oversight and help ensure that agency objectives are being achieved effectively and efficiently. Without a documented contract or arrangement in place between agencies or components and FedBid that provides a clear and common understanding of payment terms and fee structure, agencies lack sufficient information to conduct contract oversight to determine whether FedBid is applying its indirect fees as the agencies expect.
Further, internal control standards emphasize timely and reliable information and data so that agencies can effectively monitor their operations. Without requiring reporting on reverse auction fees, agencies may not have sufficient information to understand and oversee their use of reverse auction platforms and conduct contract oversight to ensure that the fees they are being charged are appropriate.
Conclusions
The landscape of reverse auctions has changed slightly since our last review in December 2013. There are more reverse auction providers, including government providers, in the marketplace, with the vast majority of auctions conducted through FedBid. The use of reverse auctions, however, continues to constitute a relatively small percentage of federal contract spending. For the most part, agencies are using reverse auctions to acquire low-cost, commercial products and benefiting from the ease of use and reduced administrative burden that reverse auctions can provide. Agencies are also achieving more robust competition in the form of iterative bidding on nearly three-quarters of reverse auctions. Despite this level of competition, however, precisely quantifying the amount of savings is inherently difficult. Given that the vast majority of auctions are small dollar procurements, which are by design intended to be simpler and to pose less administrative burden on the acquisition workforce, it may be counterproductive to expend more time and resources to produce a better estimate of savings. Nevertheless, there is room for improvement in the guidance agencies provide to their contracting personnel to ensure the appropriate use of reverse auctions, increase benefits, and reduce costs. Agencies could benefit from paying more attention to rates of one-bidder participation, provider fee structures, and contracts with reverse auction providers.
Across the agencies in our review, often only one bidder participates, particularly when agencies conduct a reverse auction using existing contract vehicles rather than opening the auction to all potential vendors. At State, the requirement for contracting officers to use reverse auctions for all non-complex acquisitions may result in reverse auction use in situations where it is not warranted; that is, without the type of highly competitive marketplace that can result in savings.
Our work also identified a need for agencies to provide contracting officers better information on fee structures so that they can make informed decisions as to whether to use a reverse auction and which reverse auction platform to use. Further, agencies are not requiring data on, or analyzing, the fees they are paying. The indirect nature of provider fees—combined with fee arrangements that are missing important details or are nonexistent and a lack of visibility into those fees—puts agencies at risk of paying more than necessary for the level of service needed. These issues are not new: we raised similar concerns in our report more than 4 years ago. Taken together, these issues put the government at risk of failing to maximize the benefits that the effective use of reverse auctions can provide and, worse, put agencies at risk of paying millions of dollars more in fees than necessary for the level of service needed.
Recommendations
We are making a total of 21 recommendations, including 3 to Army, 4 to Navy, 4 to DHS, 4 to Interior, and 6 to State.
We are making the following seven recommendations to heads of agencies within the Department of Defense:
The Secretary of the Army should: assess why reverse auctions that are conducted using existing contract vehicles have only one bidder at higher rates than reverse auctions conducted on the open market; determine what factors indicate that conducting reverse auctions is appropriate when using existing contract vehicles; and provide this information to contracting officials so that they can consider it when developing their acquisition strategies. (Recommendation 1)
The Secretary of the Army should: document and provide information to contracting officials that describes available reverse auction providers and platforms, and any associated fee structures; and provide guidance, as appropriate, to contracting officials to ensure that they compare the options that are available to them when considering whether to use reverse auctions. (Recommendation 2)
The Secretary of the Army should clarify with FedBid how fees apply when contract option years are exercised. (Recommendation 3)
The Secretary of the Navy should: assess why reverse auctions that are conducted using existing contract vehicles have only one bidder at higher rates than reverse auctions conducted on the open market; determine what factors indicate that conducting reverse auctions is appropriate when using existing contract vehicles; and provide this information to contracting officials so that they can consider it when developing their acquisition strategies. (Recommendation 4)
The Secretary of the Navy should review the agency’s current guidance to assess whether it adequately addresses contracting officer responsibilities to consider the cost of any fees associated with reverse auction options they may be considering when developing their acquisition strategies, and revise its guidance as appropriate. (Recommendation 5)
The Secretary of the Navy should: document and provide information to contracting officials that describes available reverse auction providers and platforms, and any associated fee structures; and provide guidance, as appropriate, to contracting officials to ensure that they compare the options that are available to them when considering whether to use reverse auctions. (Recommendation 6)
The Secretary of the Navy should clarify with FedBid how FedBid’s fee cap will be calculated. (Recommendation 7)
We are making the following four recommendations to DHS:
The Secretary of Homeland Security should: assess why reverse auctions that are conducted using existing contract vehicles have only one bidder at higher rates than reverse auctions conducted on the open market; determine what factors indicate that conducting reverse auctions is appropriate when using existing contract vehicles; and provide this information to contracting officials so that they can consider it when developing their acquisition strategies. (Recommendation 8)
The Secretary of Homeland Security should: document and provide information to contracting officials that describes available reverse auction providers and platforms, and any associated fee structures; and provide guidance, as appropriate, to contracting officials to ensure that they compare the options that are available to them when considering whether to use reverse auctions. (Recommendation 9)
The Secretary of Homeland Security should determine if it would be advantageous for the agency to enter into contracts with third-party reverse auction providers. (Recommendation 10)
The Secretary of Homeland Security should obtain timely information on how much the agency is paying for reverse auction services. (Recommendation 11)
We are making the following four recommendations to Interior:
The Secretary of the Interior should: assess why reverse auctions that are conducted using existing contract vehicles have only one bidder at higher rates than reverse auctions conducted on the open market; determine what factors indicate that conducting reverse auctions is appropriate when using existing contract vehicles; and provide this information to contracting officials so that they can consider it when developing their acquisition strategies. (Recommendation 12)
The Secretary of the Interior should: document and provide information to contracting officials that describes available reverse auction providers and platforms, and any associated fee structures; and provide guidance, as appropriate, to contracting officials to ensure that they compare the options that are available to them when considering whether to use reverse auctions. (Recommendation 13)
The Secretary of the Interior should determine if it would be advantageous for the agency to enter into contracts with third-party reverse auction providers. (Recommendation 14)
The Secretary of the Interior should obtain timely information on how much the agency is paying for reverse auction services. (Recommendation 15)
We are making the following six recommendations to State:
The Secretary of State should review the agency’s current guidance to assess whether it leads contracting officials to use reverse auctions in situations where there is not a highly competitive marketplace, and revise its guidance as appropriate. (Recommendation 16)
The Secretary of State should: assess why reverse auctions that are conducted using existing contract vehicles have only one bidder at higher rates than reverse auctions conducted on the open market; determine what factors indicate that conducting reverse auctions is appropriate when using existing contract vehicles; and provide this information to contracting officials so that they can consider it when developing their acquisition strategies. (Recommendation 17)
The Secretary of State should review the agency’s current guidance to assess whether it adequately addresses contracting officer responsibilities to consider the cost of any fees associated with reverse auction options they may be considering when developing their acquisition strategies, and revise its guidance as appropriate. (Recommendation 18)
The Secretary of State should: document and provide information to contracting officials that describes available reverse auction providers and platforms, and any associated fee structures; and provide guidance, as appropriate, to contracting officials to ensure that they compare the options that are available to them when considering whether to use reverse auctions. (Recommendation 19)
The Secretary of State should clarify with FedBid how FedBid’s fee cap will be calculated and how fees apply when contract option years are exercised. (Recommendation 20)
The Secretary of State should obtain timely information on how much the agency is paying for reverse auction services. (Recommendation 21)
Agency Comments and Our Evaluation
We provided a draft of this report to DOD, DHS, Interior, State, the Department of Housing and Urban Development, GSA, VA, and OMB. Collectively, the agencies concurred with 18 of the 21 recommendations we made and did not concur with 3.
In its written response, reproduced in appendix IV, DOD concurred with our seven recommendations—three to the Army and four to the Navy— and stated that the department expected to complete actions to address the recommendations by the end of calendar year 2018.
In its written response, reproduced in appendix V, DHS concurred with two recommendations and did not concur with two recommendations. DHS concurred with our recommendation that it assess why reverse auctions conducted using existing vehicles have higher one-bidder rates and provide information to contracting officials about factors that indicate conducting reverse auctions using existing vehicles is appropriate. However, DHS did not believe that it needed to conduct an assessment specific to reverse auctions. The department stated that the factors that contribute to one bidder participating in other procurements—such as inadequate market research and poorly defined requirements—would similarly affect reverse auctions. Nevertheless, DHS stated that the Office of the Chief Procurement Officer will communicate to its contracting officials that when market research for a planned reverse auction buy on an existing contract vehicle demonstrates that only one bid is expected, a reverse auction must not be used to conduct the procurement. DHS expects to complete actions in response to this recommendation by the end of November 2018.
DHS also concurred with our recommendation that it determine if it would be advantageous for the agency to enter into contracts with third-party reverse auction providers. DHS stated that an assessment should be done periodically to determine if there is a need to have a department-wide reverse auction provider. In that regard, DHS stated that an assessment was conducted in 2016 to evaluate providers and platforms and, based on this evaluation, DHS made the decision to continue to provide contracting offices the flexibility to choose their own reverse auction provider. DHS stated that it believes its past actions address our recommendation. However, the intent of our recommendation is not to suggest that DHS consider whether to mandate a certain provider be used agency-wide. Rather, we are recommending that DHS assess whether agency-level contracts with reverse auction providers—be it one or several different providers—are desirable to protect against the risk that individual contracting officials may be agreeing to fees or other terms that have not been reviewed and approved by agency acquisition and legal offices. It is unclear whether DHS’s 2016 assessment considered these issues.
DHS did not concur with our recommendation that it provide information to contracting officials regarding available reverse auction providers and fee structures and, as appropriate, provide guidance to contracting officials to ensure they compare available options for reverse auctions. In its response, DHS stated that there is limited value in centrally collecting and updating this information, and that it is the contracting officer’s responsibility, as a part of market research, to be knowledgeable about reverse auction providers and fee structures. DHS stated that its May 2017 reverse auctions policy requires contracting officers to understand the fees that will be charged and determine and document that the fee structure represents a fair and reasonable cost and offers best value to the government. DHS stated that the Office of the Chief Procurement Officer will issue an alert reminding contracting professionals of these responsibilities by the end of November 2018. Given the pervasive confusion we found among contracting officials about the fee structures of reverse auction providers, we continue to believe that DHS should document and provide information to contracting officials, which could help eliminate confusion and minimize the duplication of individual reverse auction users repeatedly collecting the same information.
DHS also did not concur with our recommendation that it obtain timely information on how much the agency is paying for reverse auction services, stating that aggregating fee data at the department level would require systems changes or manual collection that would not inform DHS as to whether reverse auctions were used correctly or if the fee was too high. In this case, however, our work found that reverse auction providers have this data available upon request. As such, in lieu of making changes to systems or attempting to have contracting officers manually collect this information, we believe DHS could obtain this information from its reverse auction provider and use this information to help DHS understand what it pays for reverse auction services. This approach would also better inform the department in its periodic assessments of contractual relationships with reverse auction providers.
In its written response, reproduced in appendix VI, State concurred with all six recommendations, and described actions the Office of Acquisitions Management intends to take to address them, including reviewing current guidance and revising it as appropriate; increasing contracting officer awareness through training and policy guidance; and engaging with its primary reverse auction provider to obtain a better understanding of the fee structure and timely reporting of fees. State did not provide information as to when it expected these actions to be completed.
In its written response, reproduced in appendix VII, Interior concurred with three recommendations and did not concur with one recommendation. Interior concurred with our recommendation that it assess why reverse auctions conducted using existing vehicles have higher one-bidder rates and provide information to contracting officials about factors that indicate conducting reverse auctions using existing vehicles is appropriate. The department stated that it will implement policy regarding the use of reverse auctions with existing contract vehicles. Interior also concurred with our recommendation that it provide information to contracting officials regarding available reverse auction providers and fee structures and, as appropriate, provide guidance to contracting officials to ensure they compare available options for reverse auctions. The department stated it would review and update guidance to provide contracting officials with current and relevant information on available reverse auction providers, platforms, and associated fee structures. Interior also concurred with our recommendation that it obtain timely information on how much the agency is paying for reverse auction services. Interior did not provide information as to when it expected the above actions to be completed.
Interior did not concur with our recommendation to determine if it would be advantageous for the agency to enter into contracts with third-party reverse auction providers, stating that it would be more efficient to provide guidance to contracting officials so that they can make the best business decision. Interior officials told us verbally that they have already considered whether or not to enter into contracts with reverse auction providers and determined that it is not to the department’s advantage to do so. Interior officials told us they would provide us information about the factors considered in making this decision, but we did not receive this information prior to issuing this report.
In its written response, reproduced in appendix VIII, VA provided information about its use of reverse auctions for energy purchases through GSA and its energy reverse auction provider, EnerNOC. The Department of Housing and Urban Development, GSA, and OMB informed us that they had no comments on this report.
We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of Homeland Security, the Secretary of Housing and Urban Development, the Administrator of General Services, the Secretary of the Interior, the Secretary of State, the Secretary of Veterans Affairs, and the Administrator of Federal Procurement Policy. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IX.
Appendix I: Defense Logistics Agency and Army Computer Hardware Enterprise Software and Solutions Reverse Auctions
Defense Logistics Agency’s Use of Reverse Auctions
The Defense Logistics Agency’s (DLA) use of reverse auctions declined by over 80 percent between fiscal years 2013 and 2017, from about $7 billion to about $1 billion in constant fiscal year 2017 dollars, according to data we obtained from DLA’s provider, Procurex, for all auctions conducted (which may or may not have resulted in an award). According to DLA officials, the agency’s declining use is largely due to a policy revision that no longer requires, but rather allows, contracting officers to consider using reverse auctions for all procurements over $150,000. DLA pays a flat fee to its reverse auction provider for use of the reverse auction platform. This payment mechanism differs from the fee arrangements in contracts between agencies and many other reverse auction providers, under which providers calculate fees on a per-transaction basis. In addition, DLA generally uses a reverse auction as a price negotiation tool among a group of selected vendors that the agency has determined to be technically acceptable based on the vendors’ initial responses to a solicitation. Because of these differences, DLA does not have a need to track awarded reverse auctions for its reporting and oversight purposes.
Army Computer Hardware Enterprise Software and Solutions (CHESS) Information Technology (IT) e-mart Reverse Auction
The Army Computer Hardware Enterprise Software and Solutions (CHESS) Information Technology (IT) e-mart program introduced its reverse auction capability in January 2016. It offers fee-free reverse auctions for a number of the CHESS contracts. According to Army officials, in July 2017 the CHESS program began recommending use of its reverse auction capability rather than other reverse auction platforms. According to data provided by the CHESS program office for all auctions conducted (which may or may not have resulted in an award), use of reverse auctions increased by over 225 percent between fiscal years 2016 and 2017, from about $28 million to about $91 million in constant fiscal year 2017 dollars. The CHESS IT e-mart does not track which auctions result in awards. According to officials, users capture award information in the agency’s contract writing system. While CHESS officials told us they are interested in that kind of information, CHESS does not charge a fee and does not have a need to track awarded reverse auctions for its oversight purposes.
Appendix II: Agency Policies and Guidance Reviewed
Appendix III: Objectives, Scope, and Methodology
This report examines (1) federal agencies’ use of reverse auctions between 2013 and 2017, (2) the extent to which selected agencies achieved benefits through reverse auctions, and (3) the extent to which selected agencies have insight into reverse auction fees.
For all objectives, we reviewed policies and guidance related to reverse auctions from the Office of Federal Procurement Policy (OFPP), the selected agencies, and relevant components of those agencies, as well as the Standards for Internal Control in the Federal Government and relevant work by agency Inspectors General. We also interviewed procurement policy officials from the selected agencies and representatives from reverse auction providers.
To examine federal agencies’ use of reverse auctions between 2013 and 2017, we collected data from reverse auction providers we identified by reviewing our past work in this area, reviewing federal procurement solicitation and award information, conducting interviews with agency officials, and conducting internet searches about federal use of reverse auctions. Through these efforts, we identified eight reverse auction providers that offered reverse auction services either government-wide or to specific agencies (see table 10 below).
While it is possible that our efforts did not identify all reverse auction providers that federal agencies use, we are reasonably confident we have included the largest reverse auction providers used by the selected agencies in our review. In addition to the identification efforts described above, for the selected agencies in our review, we asked component officials to identify reverse auction providers with which the agency has a contractual relationship and which reverse auction platforms the agency’s contracting officials use. We also asked numerous individual contracting officers about the various platforms the individual has used. No additional providers or platforms were identified as part of those efforts.
We collected fiscal year 2013 through 2017 data on reverse auction use from these reverse auction providers and analyzed them to identify the number of reverse auctions conducted annually across the government and the dollar value of those reverse auctions. For our analysis of the number and dollar value of the auctions, we analyzed auctions that resulted in a contract award between the agency and a vendor in a particular year, according to provider data. We describe these as awarded reverse auctions. The dollar value of an awarded auction is based on the dollar amount of the bid selected for award; however, that amount is not necessarily equivalent to the amount ultimately obligated on the resulting contract. We present the dollar value of agencies' awarded auctions from 2013 through 2017 in constant fiscal year 2017 dollars using the Congressional Budget Office's June 2017 Gross Domestic Product price index projection, the most recent projection available at the time of our analysis. We generally collected data from reverse auction providers because information about reverse auction use is not available in the Federal Procurement Data System-Next Generation, a government-wide source of contract data. In addition, the selected agencies we reviewed do not separately track use of reverse auctions. We collected data from the Department of Housing and Urban Development directly because the agency tracks its reverse auction use, including which auctions it awards. Two of the providers we identified, Procurex and the Army CHESS IT e-mart reverse auction platform, do not track the reverse auctions that agencies award to vendors. The agencies using these providers, the Defense Logistics Agency and the Department of the Army, do not require this information for their own reporting and oversight purposes or for paying for the reverse auction services.
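For illustration, the sketch below shows one way a constant-dollar conversion of this kind can be computed. The index values and the conversion function are hypothetical, intended only to show the method of restating nominal values on a fiscal year 2017 base; they are not CBO's actual projections or the code used in this analysis.

```python
# Minimal sketch of a constant-dollar conversion, using hypothetical
# GDP price index values (fiscal year 2017 = 1.00). In practice, CBO's
# June 2017 projections would supply the index.
gdp_price_index = {2013: 0.94, 2014: 0.96, 2015: 0.97, 2016: 0.98, 2017: 1.00}

def to_constant_fy2017_dollars(nominal_dollars, fiscal_year):
    """Restate a nominal dollar value in constant fiscal year 2017 dollars."""
    return nominal_dollars * gdp_price_index[2017] / gdp_price_index[fiscal_year]

# Example: a $1.9 billion nominal value from fiscal year 2013.
print(f"${to_constant_fy2017_dollars(1.9e9, 2013):,.0f}")
```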
For purposes of this report, all references to reverse auction use exclude auctions conducted with these two providers (Procurex and the CHESS IT e-mart). Therefore, our analysis includes only the value and number of known, awarded auctions from 2013 through 2017. As a result, we underestimate total federal reverse auction use. Using available data for the Department of the Army, we estimate that our analysis includes over 95 percent of the value and 99 percent of the number of Army auctions. For the Defense Logistics Agency, Procurex reported that over the five-year period the agency conducted approximately 7,100 auctions valued at about $19 billion. While we cannot say with certainty the number and value of awarded auctions, we can assume the agency awarded fewer auctions than it conducted. Based on information from other providers for which we have data on the number of auctions conducted and awarded, agencies using those providers awarded about 45 percent of the auctions conducted between 2013 and 2017. Of the six providers with awarded auction data, FedBid accounted for almost all auctions and the vast majority of dollars agencies awarded using reverse auctions from 2013 through 2017.
We also used these data to identify six of the largest users of reverse auctions for that period, by number of auctions and dollar value: the Departments of the Army, Homeland Security (DHS), the Interior, the Navy, State, and Veterans Affairs (VA). In determining the largest users of reverse auctions, we excluded energy-related auctions from our analysis. Energy-related auctions represented a sizable portion (10 percent) of reverse auction value, but less than 1 percent of auctions. We determined that conducting a detailed review of energy-related auctions was not likely to provide insight for other procurements because the unique characteristics of energy markets make such auctions difficult to compare to reverse auctions for the other goods and services included in our review.
For five of the six selected agencies (Army, Navy, DHS, Interior, and State), we collected additional data on auctions awarded in fiscal year 2016, the most recent year of detailed data available at the time that we began our review. We limited our analysis to auctions for which we identified a start, end, and contract award date in 2016, according to provider data. Our analysis of fiscal year 2016 auctions included almost 15,000 auctions with a total awarded value of approximately $910 million. We excluded reverse auctions for which the data indicated that they were awarded in 2016 but for which the auction dates indicated that the auctions were conducted in a prior year. At least some of these auctions represent options exercised on earlier auctions, rather than new auctions, and we wanted to ensure we could compare auction activity to policies and procedures in place for a specific period. Our analysis of awarded auctions excluded auctions identified as cancelled or with an auction start, end, or award date outside of 2016. The sixth agency (VA) conducted fewer than a dozen new auctions in 2016, so we excluded VA from our analysis of 2016 data, as well as from our analyses of the benefits and fees associated with reverse auctions.
We analyzed agencies’ use of reverse auctions, including but not limited to the number and dollar value of the awarded auctions, types of products and services purchased, level of competition achieved (number of participating vendors and bids received), savings from government pre-auction estimates, and fees associated with the auctions.
For our analysis of the number and dollar value of the awarded auctions, we included auctions that resulted in a contract award between the agency and a vendor, according to provider data. Actual award obligations may differ. For example, an agency may adjust the procurement (such as increasing or decreasing the number of items purchased) between the auction and the final award, which may not be reflected in the data we used. In addition, the number of awarded auctions may differ. While we took steps to exclude awarded auctions for which agencies had cancelled the resulting contracts, if the provider data did not identify an auction as cancelled we may have included it in our analysis. For the analysis of products and services, we examined auctions conducted and awarded in 2016 by two of the three reverse auction providers, both of which had product and service code data available for awarded reverse auctions. These two providers accounted for almost all contracts awarded via reverse auctions that year. Provider data included an overall product and service code for the auction. The auction may include goods and services outside that particular code. For our purposes, we used the code provided to categorize the auction as a product or service and the type of purchase. The third provider, GSA Reverse Auction, does not capture similar product and service code data.
Using other data provided by GSA Reverse Auctions, we were able to estimate that about 20 percent of dollars awarded using GSA's Reverse Auctions platform included information technology products and services.
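As a concrete illustration of the categorization step described above, the sketch below sorts auctions into products and services using the overall code recorded in provider data. It assumes the federal Product Service Code convention, under which codes beginning with a digit denote products and codes beginning with a letter denote services; the report does not describe the exact logic used in the analysis, and the sample codes are illustrative.

```python
# Minimal sketch, assuming the federal Product Service Code (PSC)
# convention: codes starting with a digit are products; codes starting
# with a letter are services.
def categorize_psc(code):
    code = code.strip().upper()
    if not code:
        return "unknown"
    return "product" if code[0].isdigit() else "service"

# Hypothetical overall codes attached to four auctions.
for psc in ["7030", "5820", "D399", "R499"]:
    print(psc, categorize_psc(psc))
```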
For our analysis of contract vehicles, we used provider data on whether the buyer selected to conduct the auction on the open marketplace or limit the auction to vendors qualified to bid on existing contract vehicles. For example, buyers may have conducted auctions on the open market, which is available to any vendor selling the good or service that is registered to bid via the reverse auction provider or conducted auctions that were limited to vendors with specific agency or government-wide contracts. For our analysis of competition, we included all vendors and associated bids submitted in provider data. During our interviews with contracting officials, we learned that in some auctions officials determined particular vendors were not technically acceptable following an auction. This information is not available in provider data and, as a result, our analysis includes vendors that contracting officials determined were not technically acceptable.
We also obtained contract-related information from the Federal Procurement Data System-Next Generation for awarded auctions with available contract or order numbers to identify if agencies used commercial acquisition procedures and firm-fixed-price contracts in accordance with the effective practices outlined in the June 2015 OFPP memorandum. Government auditing standards require that we assess the reliability of data we use in our products. As part of our assessment, we reviewed the reverse auction data collected for obvious issues, such as missing data elements, duplicates, and outliers. We also tested the relationships between variables. In addition, we interviewed agency and reverse auction provider officials to understand the data and collected information on the systems used to collect and store the data, as well as how those data are used. Further, we compared the data for a non-generalizable sample of 40 auctions to contract files. We assessed the reliability of the data used in this report and determined they were sufficiently reliable for describing the known number and value of awarded reverse auctions by federal agencies from 2013 through 2017 and identifying salient characteristics of selected agencies’ awarded auctions in 2016, including the number of participating vendors and bids, type of good or service purchased, and indirect fees associated with the auction.
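The reliability screens described above (missing data elements, duplicates, and outliers) can be expressed concretely. The sketch below, with hypothetical column names and an arbitrary three-standard-deviation outlier cutoff, illustrates the general approach; it is not the actual test code used in the assessment.

```python
# Minimal sketch of basic data reliability screens over auction records,
# assuming hypothetical column names.
import pandas as pd

def screen_auction_data(df):
    results = {}
    # Missing data elements in key fields.
    results["missing_by_field"] = df[["auction_id", "award_amount"]].isna().sum()
    # Duplicate auction records.
    results["duplicate_ids"] = int(df.duplicated(subset=["auction_id"]).sum())
    # Outliers: award amounts more than three standard deviations above the mean.
    amounts = df["award_amount"].dropna()
    cutoff = amounts.mean() + 3 * amounts.std()
    results["high_outliers"] = int((amounts > cutoff).sum())
    return results

# Hypothetical records: one duplicate ID and one missing award amount.
sample = pd.DataFrame({
    "auction_id": ["A1", "A2", "A2", "A3"],
    "award_amount": [52000.0, 18000.0, 18000.0, None],
})
print(screen_auction_data(sample))
```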
To identify the extent to which selected agencies achieved the benefits of reverse auctions, we analyzed the 2016 data we collected on reverse auction use at our five selected agencies to identify factors related to competition (e.g., the number of participating vendors, the number of bids received, and the frequency of iterative bidding, defined as occurring when there are multiple bidders and at least one bidder submits more than one bid during the auction) and savings (e.g., savings as calculated by the reverse auction providers). This analysis excludes auctions conducted using the Army CHESS IT e-mart because it does not track which auctions result in awards. However, the analysis still includes at least 93 percent of reverse auction award value and 98 percent of the awarded auctions in 2016. To obtain a more in-depth understanding of the benefits achieved by selected agencies, we selected and reviewed a non-generalizable selection of 40 contracts awarded from 2016 reverse auctions across the five agencies. These contracts were chosen to obtain variety across the following characteristics: buying agency and component; contract vehicle (open market or orders on existing contracts such as Federal Supply Schedules or agency indefinite-delivery/indefinite-quantity contracts); dollar value; fees charged by the reverse auction providers; and goods and services being purchased (see table 11).
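To make the iterative-bidding definition precise, the sketch below flags an auction as iterative when multiple vendors participated and at least one vendor submitted more than one bid. The bid records are hypothetical; this is our restatement of the definition above, not code from the analysis.

```python
# Minimal sketch: flag iterative bidding per the definition above.
from collections import Counter

def is_iterative(bidder_ids):
    """bidder_ids: one vendor identifier per bid submitted in the auction."""
    bids_per_vendor = Counter(bidder_ids)
    multiple_bidders = len(bids_per_vendor) > 1
    repeat_bidder = any(n > 1 for n in bids_per_vendor.values())
    return multiple_bidders and repeat_bidder

print(is_iterative(["V1", "V2", "V1"]))  # True: two vendors, V1 bid twice
print(is_iterative(["V1", "V2"]))        # False: no vendor bid more than once
print(is_iterative(["V1", "V1"]))        # False: only one vendor participated
```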
At DHS, we selected case studies from two components, Customs and Border Protection and Immigration and Customs Enforcement. Customs and Border Protection had an active contract with FedBid in 2016 and Immigration and Customs Enforcement did not, so we selected these two components in order to understand the difference in how components with and without an active contract used FedBid.
For each of the selected case studies, we reviewed contract documentation related to the reverse auction, such as documentation of market research, pre-auction cost estimates (e.g., independent cost estimates), price negotiation memoranda, and contract award documents. In addition, to obtain contracting officials' perspectives on the benefits of reverse auctions, we interviewed the contracting officials involved with 35 of these 40 auctions; for the remaining 5, knowledgeable officials were not available to interview. We conducted our interviews using a semi-structured interview process in which we asked contracting officials a standard set of questions about their experiences conducting reverse auctions. We did not compare reverse auctions to alternative acquisition methods to assess their relative costs and benefits.
To identify the extent to which selected agencies had insight into reverse auction fees, we analyzed provider data on fees paid indirectly to FedBid and GSA Reverse Auctions in 2016 for the five agencies selected for our review. Agencies paid fees to these two reverse auction providers indirectly, through the winning vendor. Our analysis included the total amount of fees paid by each agency in 2016 to each reverse auction provider and the amount of fees paid by each agency in 2016 for auctions with only one bidder.
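The fee tallies described above can be sketched as follows, using hypothetical column names for the provider data; the report does not specify how the totals were computed.

```python
# Minimal sketch: total indirect fees by agency and provider, plus the
# subset paid on single-bidder auctions. Column names are hypothetical.
import pandas as pd

def summarize_fees(df):
    total_fees = df.groupby(["agency", "provider"])["fee_amount"].sum()
    single_bidder = df[df["number_of_bidders"] == 1]
    single_bidder_fees = single_bidder.groupby(["agency", "provider"])["fee_amount"].sum()
    return total_fees, single_bidder_fees

# Hypothetical 2016 auction records.
records = pd.DataFrame({
    "agency": ["DHS", "DHS", "State"],
    "provider": ["FedBid", "FedBid", "GSA Reverse Auctions"],
    "fee_amount": [1200.0, 800.0, 0.0],
    "number_of_bidders": [3, 1, 2],
})
totals, single = summarize_fees(records)
print(totals, single, sep="\n")
```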
We also analyzed agency guidance to determine the extent of information provided to contracting officials on reverse auction fees. Specifically, we assessed whether agency guidance identified roles and responsibilities of contracting officials in understanding and assessing reverse auction fees and provided sufficient information to help ensure contracting officers understood how reverse auction fees are applied. Further, we interviewed contracting officials for 35 of our 40 selected auctions to develop an understanding of the officials’ knowledge of the fees related to the auctions they conducted. As noted above, officials for the other 5 auctions were not available to interview. The 40 selected auctions included 33 that incurred an indirect fee, 2 for which the provider waived the fee, and 5 for which no fee applied. We interviewed the contracting officials involved with 30 of the auctions that incurred a fee and 5 of the auctions for which the fee was waived or no fee applied. To determine whether contracting officials we interviewed had a complete and accurate understanding of reverse auction fee structures, we analyzed their responses to questions about reverse auction fee structures and the fee paid for the reverse auction we reviewed in detail, and compared their responses to fee structures documented in agency contracts and reverse auction provider terms and conditions.
Lastly, to determine whether agencies had sufficient insight into reverse auction fees to conduct appropriate oversight, we analyzed contracts between the selected agencies and FedBid as well as other fee arrangements, including provider terms of service and GSA's Federal Supply Schedule contract with FedBid. Our analysis included both contracts that were in place in fiscal year 2016, in order to understand the terms and conditions that covered the reverse auctions we reviewed in detail, and contracts agencies awarded subsequent to fiscal year 2016, so that we could understand whether and how agencies' fee arrangements with reverse auction providers had changed. We analyzed the contracts and other fee arrangements to determine the extent to which they explained details of how the fees were applied, such as what fee percentage would be charged, how the fees would apply to contract option years, and how fee caps were applied.
We also used a variety of investigative tools and techniques to determine whether reverse auction procurement officials and commercial and government providers had engaged in potential fraud, waste, abuse, or mismanagement associated with reverse auction use. We reviewed fraud alerts to learn about potential complaints, coordinated with agency inspector general offices regarding work related to reverse auctions, inquired about contracting officials' awareness of fraud incidents among the 35 case studies for which we interviewed contracting officials, and conducted a limited review for obvious financial relationships between agency officials responsible for drafting reverse auction policy and commercial reverse auction providers. While the steps we took did not uncover any obvious fraud, waste, abuse, or systemic mismanagement, we cannot definitively state that there is no fraud, waste, abuse, or mismanagement in federal use of reverse auctions.
We conducted this performance audit from January 2017 to July 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. We conducted our related investigative work from April 2017 to March 2018 in accordance with investigative standards prescribed by the Council of the Inspectors General on Integrity and Efficiency.
Appendix IV: Comments from the Department of Defense
Appendix V: Comments from the Department of Homeland Security
Appendix VI: Comments from the Department of the Interior
Appendix VII: Comments from the Department of State
Appendix VIII: Comments from the Department of Veterans Affairs
Appendix IX: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Janet McKelvey (Assistant Director), Luqman Abdullah, Cory Ahonen, Peter Anderson, Leslie Ashton, Matthew Crosby, Alexandra Dew Silva, Lorraine Ettaro, April H. Gamble, Anne McDonough, Miranda Riemer, Robin Wilson, and Helina Wong made key contributions to this report.
Why GAO Did This Study
Reverse auctions are intended to result in enhanced competition, lower prices, and reduced acquisition costs. GAO has previously found that agencies did not maximize these benefits.
GAO was asked to review federal agencies' use of reverse auctions. This report examines (1) the use of reverse auctions and the extent to which selected agencies achieved benefits, such as competition; and (2) the extent to which selected agencies had insight into reverse auction fees.
GAO collected and analyzed data on federal agencies' use of reverse auctions from fiscal years 2013 to 2017. For five of the largest users of reverse auctions—the Departments of the Army, Homeland Security, Interior, Navy, and State—GAO reviewed documentation for 40 auctions that resulted in contract awards in fiscal year 2016 (the most recent data available when the review began), and that were selected to obtain a mix of dollar values and levels of competition, among other factors. GAO also interviewed contracting officials and analyzed agency guidance.
What GAO Found
Federal agencies' use of reverse auctions—a process where vendors bid against each other with lower prices to win government contracts—declined between fiscal years 2013 and 2017, from about 34,000 to 19,000 auctions valued at about $1.9 billion and $1.5 billion, respectively. In fiscal year 2016, the year GAO studied in detail, nearly three-quarters of auctions at the agencies GAO reviewed resulted in iterative bidding—when there are multiple bidders and at least one bidder submits more than one bid during the auction (see figure).
Contracting officers said reverse auctions reduce administrative burden, especially during peak contracting times. Reverse auction data indicate that selected agencies may have saved more than $100 million in 2016.
The five agencies GAO reviewed indirectly paid about $13 million in fees to reverse auction providers through awardees in 2016. However, 28 of the 30 contracting officials GAO interviewed did not fully understand how fees were set. Further, in 2016, agencies GAO reviewed indirectly paid approximately $3 million in fees for reverse auctions for which a fee-free alternative was likely available. None of the guidance GAO reviewed provided sufficient information for contracting officers to assess the appropriateness of these fees (see table). Without better information, contracting officials may be offsetting potential savings by paying more in fees than necessary for the level of services required.
What GAO Recommends
GAO is making a total of 21 recommendations to the five agencies in GAO's review, including that agencies inform contracting officials about fees to better compare available provider options. Defense, State, and Interior concurred with this recommendation. DHS did not, stating that contracting officials should obtain this knowledge during market research. GAO believes managing this information centrally could eliminate confusion and minimize duplicate efforts.
Background
The Pell Grant Program
First authorized in 1972, the Pell Grant Program awards federally funded grants to low-income undergraduate and certain post-baccalaureate students who are enrolled in a degree or certificate program (which can include vocational programs) and have federally defined financial need. Education's Office of Federal Student Aid administers the Pell Grant program and other federal student aid programs (grants, loans, and work-study) authorized under Title IV of the Higher Education Act of 1965, as amended. Students are eligible to receive Pell grants for no more than 12 semesters (or the equivalent). The maximum allowable Pell grant for the 2018-2019 school year was $6,095.
The amount a student receives is based on a formula that compares the estimated cost to attend a particular school with a student's expected family contribution toward that cost. A student's expected family contribution is determined by considering his or her income and assets; for students who are dependent, or who are independent and married, the income and assets of their parents or spouses, respectively, are also considered. Students are eligible for federal need-based aid if their cost of attending a school is more than their expected family contribution. Students incarcerated in federal or state penal institutions have been ineligible for Pell grants since the enactment of the Violent Crime Control and Law Enforcement Act of 1994. Beginning in the 2016-2017 school year, the Second Chance Pell pilot has allowed a limited number of students to receive Pell grants despite their incarceration.
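Expressed as a simplified equation (our sketch of the relationship described above, not the full statutory formula, which includes additional adjustments):

```latex
\text{financial need} = \text{COA} - \text{EFC},
\qquad \text{with need-based eligibility requiring } \text{COA} > \text{EFC}
```

where COA denotes the estimated cost of attendance at a particular school and EFC denotes the expected family contribution.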
Federal Student Aid Eligibility
In general, to be eligible to receive federal student aid (including Pell grants), Department of Education guidance states that an applicant must:
be a citizen or eligible noncitizen of the United States;
have a valid Social Security Number;
have a high school diploma or a General Education Development certificate, or have completed homeschooling;
be enrolled in an eligible program as a regular student seeking a degree or certificate;
maintain satisfactory academic progress;
not owe a refund on a federal student grant or be in default on a federal student loan;
register (or already be registered) with the Selective Service System, if the person is a male and not currently on active duty in the U.S. Armed Forces; and
not have a conviction for the possession or sale of illegal drugs for an offense that occurred while the person was receiving federal student aid (such as grants, work-study, or loans).
For the Pell grant program, an applicant must also demonstrate financial need and not have obtained a bachelor’s degree or a first professional degree.
Applying for Financial Aid
In the 2016-2017 school year, more than 18.6 million prospective students applied for federal student aid by submitting the Free Application for Federal Student Aid (FAFSA). The FAFSA consists of more than 100 questions that collect information ranging from basic contact information to the current value of assets. Several questions ask for financial information, which could require applicants (and their parents and spouses, if they are dependent or married) to rely upon information located on tax returns, as well as information from bank, business, and investment records. Incarcerated individuals in the Second Chance Pell pilot are required to apply for financial aid using the same process as students in the non-incarcerated population.
After Education processes an applicant’s FAFSA, a report is sent to the applicant or made available online. This report includes the applicant’s expected family contribution, the types of federal aid for which the applicant qualifies, and information about any errors—such as questions the applicant did not complete—that Education identified during FAFSA processing. Schools send applicants award letters after admission, providing students with types and amounts of federal, state, and institutional aid, should the student decide to enroll.
Education’s FAFSA Verification Process
Education uses a process called “verification” to help identify and correct erroneous or missing information in students’ FAFSAs, which helps the department’s efforts to reduce improper payments of federal student aid.
Education selects approximately 30 percent of FAFSAs for verification each academic year and schools are required to work with the selected students to confirm the accuracy of the information provided on their FAFSAs. A student is responsible for gathering the necessary documentation—such as prior years’ tax returns or proof of having obtained a high school diploma—and providing it to the school financial aid office, which compares the information submitted in the FAFSA to the student’s supporting documentation. If there is a difference between the student’s documentation and what he or she submitted on the FAFSA, the FAFSA information may need to be corrected.
When selecting FAFSAs for verification, Education aims to select those FAFSAs with the highest statistical probability of error, taking into account the impact such error would have on award amounts. Education's specific criteria for selecting FAFSAs for verification are not public information; however, the department periodically refines its process for selecting FAFSAs to reduce the burden of verification on applicants, their families, and schools while maintaining the integrity of the federal student aid programs.
Education publishes a list of potential verification items for each award year in the Federal Register. The items that schools are required to verify for a given application are selected by Education from that list. For the 2018-2019 school year, the items for verification are shown below:
Adjusted gross income,
U.S. income tax paid,
Untaxed portions of Individual Retirement Arrangement distributions,
Untaxed portions of pensions,
Individual Retirement Arrangement deductions and payments,
Tax-exempt interest income,
Income earned from work,
Household size,
Number of household members in college,
High school completion status,
Education credits, and
Identity and statement of educational purpose.
Research on the Effects of Participating in Education while Incarcerated
The body of literature on prisoners' participation in educational programs while incarcerated suggests there may be benefits for participants, the facilities in which they are housed, and taxpayers. However, the positive outcomes attributed to postsecondary correctional education are not always clearly caused by the programs: students who choose to participate may be more willing or motivated than other prisoners, and might have fared better after release even without the program. See appendix II for a summary of selected research on correctional education. See appendix III for additional information on the educational attainment of the prison population.
Education, Participating Schools, and Other Stakeholders Took Several Actions to Implement the Second Chance Pell Pilot
Education Selected 64 Schools to Participate in the Second Chance Pell Pilot
In response to an August 2015 Federal Register notice announcing the pilot, Education officials reported receiving applications from over 200 schools seeking to participate. The officials said they selected schools for the pilot that varied along several characteristics, including location and size, while also ensuring that selected schools did not have a history of compliance issues or other problems delivering federal student aid. Education selected 64 schools to participate in the pilot and officially notified schools in June of 2016 that Pell-funded courses could begin as early as July 1 of that year. The 64 schools are located across 26 states and include public and private nonprofit 2- and 4-year schools. Figure 1 below shows the locations and numbers of the 64 schools selected to participate in the pilot, and figure 2 includes additional information on 3 schools participating in the pilot that were included in our sample. Appendix IV includes a complete list of the schools Education selected to participate in the pilot and select characteristics of those schools.
Education, Schools, Prisons, and Others Collaborated to Prepare for and Implement the Pilot
To prepare for the pilot, Education took a number of actions. For example, Education hosted four webinars for officials at schools selected to participate in the pilot. The first two webinars occurred in September 2015, during which Education officials discussed the pilot’s objectives and strategies for establishing effective partnerships between schools and prisons. The third webinar took place in July 2016 and covered how to navigate the federal financial aid application process and the information Education planned to collect from schools, among other topics. Education held the final webinar in August 2016 in collaboration with the Department of Justice. The webinar contained information on how schools and their prison partners could develop shared goals, roles, expectations, policies, and procedures, and how these might be incorporated into a memorandum of understanding. Education also developed a Frequently Asked Questions page on its website and responded to questions submitted by school officials via email. In addition, Education hosted breakout sessions for Second Chance Pell schools at its annual Federal Student Aid Training Conference in 2016, 2017, and 2018.
School officials reported working with a variety of stakeholders to prepare for and to implement the pilot. For instance, officials from 7 of 12 schools we interviewed said they collaborated with one or more additional stakeholders within the school, such as individuals working in academic departments, financial aid, the registrar, the bursar, and academic advising. For example, officials from one school said administrators partnered with the bursar and the registrar to ensure that incarcerated students were not unenrolled from classes if their Pell grants took longer to be disbursed than those for non-incarcerated students. Officials from 10 of 12 schools we interviewed talked about the importance of coordinating with staff at the prison, and officials from 9 schools said coordinating with their states’ departments of corrections was important for implementing the pilot. For example, officials from one school said their state Department of Corrections demolished a wall at one participating prison in order to provide more classroom and study space for the program.
Finally, schools described collaborating with organizations that help facilitate college courses in prisons. For example, officials from all 12 schools we interviewed said that Vera provided technical assistance, such as information-sharing and opportunities to network with other pilot schools. Officials from one school also noted that they partner with Hudson Link, an organization that recruits students for postsecondary correctional education programs and supports students’ reentry upon release, among other activities.
Almost 8,800 Incarcerated Students Received a Pell Grant in the Pilot’s First 2 Years
Across the pilot’s first 2 years, 59 Second Chance Pell schools disbursed approximately $35.6 million in Pell grants to a total of 8,769 individual students. See table 1 for a comparison between the first and second school years.
Not all of the 64 schools selected for the pilot began offering Pell-funded classes at the start of year one. Specifically, 11 of the 64 selected schools were unable to offer classes in the pilot’s first year and 5 of the 64 selected schools did not offer classes in the second year. Education officials told us that some schools needed additional time to stand up their programs, as the department allowed, for a number of reasons. For example, officials said:
Some schools with new correctional education programs faced delays obtaining accreditation for those programs.
Some schools needed additional time to work out operational details, such as obtaining credentials or security clearances in order for faculty and staff to enter the prison.
Some schools needed additional time to build relationships with correctional partners.
Figure 3 shows incarcerated students taking college classes inside two New York prisons.
Officials from Selected Schools Reported Experiencing Some Challenges Implementing the Pilot, but Developed New Approaches to Address these Challenges
School Officials Reported Challenges in a Few Areas Required to Establish Incarcerated Applicants’ Eligibility for Pell Grants
School officials we interviewed said that they experienced some challenges establishing incarcerated applicants' eligibility for aid, including establishing an applicant's citizenship or eligible non-citizenship and providing accurate Social Security Numbers or Alien Registration Numbers. For example, officials from 6 of the 12 schools we interviewed said that some of their incarcerated applicants did not know or have access to their Social Security Number. The two most commonly identified reasons applicants were initially ineligible for Pell grants were (1) some applicants had not registered for Selective Service, and (2) some had an existing federal student loan in default status. Schools and applicants faced challenges addressing these reasons.
Selective Service. Generally, to be eligible to receive Pell grants, applicable male students must have registered with the Selective Service. However, for male students who have not registered, institutions may determine that the student is not ineligible for a Pell grant if the student can demonstrate by submitting evidence to the institution that (1) he was unable to present himself for registration because of reasons beyond his control—such as hospitalization, incarceration, or institutionalization—or (2) he is over 26 and when he was between the ages of 18 to 26, he did not knowingly and willfully fail to register with the Selective Service.
Education data showed that about 15 percent of the FAFSAs submitted in the pilot's first year were from applicants who had not registered for Selective Service. In comparison, 2 percent of FAFSAs in the overall population were submitted by applicants who had not registered. School officials said that many applicants had been continuously incarcerated between the ages of 18 and 26, but that obtaining documentation to demonstrate this was difficult in some circumstances. For example, officials from one school reported that obtaining records from juvenile correctional facilities was challenging, and officials at another school said that applicants did not always know or have access to their exact dates of incarceration.
Men over age 26 who have not been continuously incarcerated but who wish to apply for federal financial aid must obtain an official response from the Selective Service System confirming that the individual did not register but should not be denied federal benefits. To obtain this official response, the student can write or call the Selective Service System with a detailed description of the circumstances he believes prevented him from registering at the required time. The individual would then provide the official written response from the Selective Service System to his school financial aid office, which would evaluate whether his failure to register was knowing or willful. Officials from 7 of the 12 schools we interviewed said the process to obtain documentation from the Selective Service System was difficult or time-consuming.
Student Loan Default. Applicants are generally ineligible for Pell grants if they have a prior federal student loan in default status. Education data showed that about 10 percent of FAFSAs in the first year of the pilot were submitted by applicants with an existing federal student loan in default status. In comparison, about 2 percent of FAFSAs in the overall population were submitted by applicants with an existing loan in default status. Officials from all 12 schools we interviewed said at least some of their incarcerated applicants had existing federal student loans in default status. There are options, however, for individuals to remove default status from their loans, potentially regaining eligibility for Pell grants. For example, borrowers may rehabilitate their student loans by entering into and completing a written agreement that requires the borrower to make nine on-time monthly payments within 10 consecutive months. These income-driven payments can be as low as $5 per month.
According to school officials, however, removing default status from loans can be challenging for incarcerated individuals. For example, officials from one school we interviewed said applicants generally cannot make phone calls to set up loan repayment plans and instead have to rely on postal mail for completing the necessary paperwork. Also, officials from another school we interviewed said that for applicants who must rely on family members outside the prison to make the required payments, there is no guarantee that the family will do so. Additionally, borrowers may rehabilitate a loan only once. Despite these challenges, officials from five schools said they had applicants who were working to rehabilitate their loans, such as by paying from wages earned through prison work or by having family members make payments on their behalf. Officials from two of those schools said they had one or more applicants who successfully rehabilitated their loans and were able to enroll in the pilot.
School Officials Reported Challenges Verifying Incarcerated Applicants’ Income and Assets
According to school officials we interviewed, verifying incarcerated applicants’ income and assets was challenging, in particular, because of circumstances unique to applicants being in prison. Communication between the applicant, the applicant’s family, and the school’s financial aid office is limited by virtue of the applicant’s confinement. For example, incarcerated applicants were typically unable to be reached via phone or email to answer questions, according to school officials we interviewed, and completing verification paperwork sometimes required multiple trips to the prison, which in some cases was more than an hour away. Further, incarcerated applicants sometimes did not have access to their personal files or records and faced difficulties obtaining documentation, such as copies of high school transcripts and tax records, which may be required for financial aid officers in the event the applicant is chosen for verification. Education guidance indicates that under certain circumstances, the school may accept alternate forms of documentation from the correctional facility if that documentation provides the information the school has requested. For example, the school may accept documentation from the correctional facility that shows an individual was incarcerated for the entire corresponding tax year, rather than requiring the applicant to obtain a letter of non-filing from the Internal Revenue Service.
School officials said that some dependent and married students had trouble providing the school financial aid office with income documentation for others, such as a parent or spouse. According to Education data, approximately 2 percent of incarcerated applicants in the first year of the pilot were dependent, and nearly 11 percent were married. If an applicant selected for verification is dependent or married, he or she is required to provide the school with documentation to verify household income. Officials from 7 of the 12 schools we interviewed said that sometimes an applicant had trouble securing required documents from a parent or spouse. If an applicant cannot provide the required documentation of the income and assets of his parent or spouse, the school cannot verify the individual’s FAFSA information and cannot award a Pell grant.
School officials indicated that these challenges were compounded by the selection of a high percentage of Second Chance Pell FAFSAs for verification. Education uses a number of criteria to select FAFSAs for verification, which the department does not share publicly. However, Education officials said that being eligible for a Pell grant and reporting no income are two such criteria. As a result, schools that serve more Pell-eligible applicants are likely to have more of their applicants' FAFSAs selected for verification than schools that serve fewer Pell-eligible applicants. Accordingly, 76 percent and 59 percent of pilot FAFSAs were selected for verification in the 2016-2017 and 2017-2018 school years, respectively. Education's verification selection rate for non-incarcerated, Pell-eligible applicants was 53 percent in the 2017-2018 school year. Figure 4 below shows Education's verification selection rates for non-incarcerated Pell-eligible applicants and incarcerated applicants in these first two school years.
Schools Hired Staff and Developed New Approaches to Address Challenges
Officials from 8 of the 12 schools we interviewed reported hiring additional staff or allocating more staff hours to help manage the increased administrative workload. For example,
Officials from one school said their school added six full-time employees to process financial aid for their pilot students.
A financial aid officer from another school stated that her workload has increased since the pilot began, and she has taken on additional tasks, such as training other staff to fill in when she could not travel to the prison.
Officials from another school said they have added positions in the academic, administrative, and financial aid departments to handle the additional administrative workload.
In addition, officials from 9 of 12 schools said they developed new approaches to address challenges related to processing FAFSAs submitted by incarcerated applicants. For example:
Start Early: Officials from one school reported collecting FAFSAs earlier in the second year than they had in the first year to allow for additional time to collect documentation for applicants who may be selected for verification. An incarcerated student we spoke with echoed this challenge when he spoke of difficulties locating prior years’ tax returns. See sidebar for additional experiences shared by incarcerated students we met with. Officials from two schools reported having applicants complete verification-related paperwork, such as requests for supporting documentation from federal entities like the Internal Revenue Service, at the same time they completed their FAFSA. The officials said this approach reduced the number of visits the officials had to make to the prison and helped school officials and incarcerated applicants keep track of the required paperwork.
Pre-screen Applicants: Officials from two schools reported pre-screening their incarcerated applicants for common issues that affect financial aid eligibility so that they could work with applicants to begin to correct these issues (such as helping applicants learn how to make payments to rehabilitate defaulted loans). Other schools used pre-screening to reduce the school's workload, since they were able to exclude ineligible applicants before they submitted a FAFSA.
Track and Report on Status: Officials from one school said their information technology department developed a system that generates a report on the documentation that incarcerated applicants have provided and the documentation that remains outstanding. The report also contains notes from staff members on their document requests with the Selective Service System, Internal Revenue Service, and other agencies.
Officials from Selected Schools Reported Logistical Challenges in Providing Prison-based Classes, but Many Schools Developed New Approaches to Address Them
School officials we interviewed reported that providing college courses in prisons required them to develop new processes and generate creative solutions to help overcome technology limitations, space limitations, and the transfer of students to other prisons, among other challenges. For example, officials from 9 of the 12 schools said that limited technology in prisons, especially limited access to the Internet, presented a challenge, although an official from one school said that classroom discussions were enhanced by the low-technology setting. To overcome technology limitations, officials from one school said that it partnered with the state libraries to develop a solution to deliver research materials to students. Specifically, an incarcerated student mails a research request to a state library. Once received, a librarian will locate the requested articles and electronically send the material to the prison's secure printer. A prison staff member will then deliver the material to the student.
Officials from 9 of the 12 schools we interviewed said that space and scheduling limitations in prisons also presented a challenge. School officials told us they must compete for classroom space with other programming that is offered to inmates or, in some cases, required by law, such as GED education. Officials from two schools said they hold night and weekend classes to address such limitations. Officials from one school also reported that prison staff changed incarcerated students' schedules (such as meal times and other scheduled activities) to accommodate their academic needs. Additionally, some prison officials reported relocating all the student inmates into the same housing unit to help create a positive learning environment.
Officials from 7 of the 12 schools we interviewed said at least one incarcerated student was either transferred to another prison or was released during the pilot. To address the issue of students being transferred to a different prison, officials from three schools said they developed an agreement with their state's department of corrections that students participating in the pilot would not be transferred to other facilities until the end of the academic term.
Sidebar: Incarcerated students we interviewed described their post-release plans. One, expecting release in December 2018, planned to work toward becoming a home inspector and to attend classes at the main campus, where he had applied for an academic scholarship. Another, also expecting release by the end of 2018, was proud to be leaving prison with a college degree and planned to start a business and mentor young men to pursue education. A third, serving a multiyear sentence, had developed a business plan as part of his studies; he planned to start a business upon release and to work with at-risk young men to steer them away from crime and toward education.
To monitor Pell dollars spent and other aspects of the pilot, Education systematically collects data from participating schools. Education requires schools to report data monthly, to complete an annual report, and to respond to a survey each academic year. Education officials said they use schools’ monthly reporting—which is limited to the participating students’ Social Security Numbers and last names—to monitor Pell grant disbursements. Education requires schools to report annually on the students who completed FAFSAs, including the number of credits that students attempted and earned and the dollar amount students were assessed for tuition and fees, for example. Education officials reported that they will follow up with schools that are not reporting data to determine if the school either has no data to report or needs further assistance from the department.
As part of its annual survey to schools, Education asks officials to describe how they implemented the pilot, such as the roles and responsibilities of schools and corrections partners for helping incarcerated applicants complete FAFSAs, as well as how academic programs were determined. In addition, Education asks schools to share examples of any challenges their schools faced when implementing the pilot. Education sent its first annual survey to Second Chance Pell schools in August 2018, in which it asked school officials to reflect on the pilot's first year (2016-2017 school year). Education officials reported that all schools had completed the required reporting for the first year of the pilot (2016-2017) and that as of November 2018, 47 schools had completed their reporting for the second year of the pilot (2017-2018). Specific data elements collected by Education for the pilot are presented in appendix V.
Education Has Not Yet Evaluated Pilot Results
A key component of the Experimental Sites Initiative—of which Second Chance Pell is a part—is rigorous evaluation of whether experiments achieve their stated objectives. Education is directed to review and evaluate the experiences of schools participating in its experimental sites and report biennially on the findings and conclusions reached regarding each of the experiments conducted. Further, the department is directed to make recommendations for amendments to improve and streamline the Higher Education Act, which includes the delivery of federal student financial aid, based on the results of the experiments. However, Education has not established how it intends to evaluate Second Chance Pell or measure the pilot’s performance against its objectives.
During the course of our review, Education officials provided us with several reasons as to why they were not planning to evaluate the pilot. First, officials said there was no dedicated funding set aside for an external evaluation of the pilot. Second, Education officials said they did not intend to make recommendations regarding changes to federal student financial aid eligibility based on the results of the pilot. Rather than conducting an evaluation, they explained, Education intends to report descriptive information on the pilot, such as the number of students served and the amount of aid disbursed, as it has done in prior reports on its experimental sites. In Education’s most recent report on the experimental sites (of the 2010-2011 school year), the department reported that it aggregated outcome measures (such as numbers of students in each experiment) and reviewed comments submitted by participating schools. However, the report noted that this type of anecdotal information could not be used to determine whether experiments were ultimately successful.
The purpose of a pilot is generally to inform a decision on whether and how to implement a new approach in a broader setting. In this context, leading practices for effective pilot design state that agencies should evaluate the final results of a pilot in order to draw conclusions on whether, how, and when to integrate pilot activities into overall efforts. As noted above, Education is required to review and evaluate experiments under the Experimental Sites Initiative and make subsequent recommendations, as appropriate, for amendments to improve and streamline the Higher Education Act, which includes the delivery of federal student financial aid. Accordingly, we inquired about steps Education could take now, should an evaluation of Second Chance Pell be pursued (including an evaluation limited to an internal effort using existing resources). Education officials agreed that even without funding for an external evaluation, they could use the data they are already collecting to internally evaluate the pilot. In its comments on the draft report, Education stated that it was now planning to evaluate the pilot, consistent with the objectives set out in the Federal Register, and described a number of steps it was taking to do so. We are pleased to see the Department taking these important steps toward determining the pilot's impact. An evaluation of Second Chance Pell can help provide policymakers with the information needed to make decisions about the future of Pell grants for incarcerated students.
Conclusions
Pell grants help open the door to a college education for millions of low- income students every school year. However, over the past 24 years, incarcerated students have been generally ineligible for Pell grants. Education’s Second Chance Pell pilot presents an opportunity for policy makers and others to see whether participation in postsecondary educational opportunities increases when Pell grants are again made available, and to determine what impacts a college education has on an incarcerated person’s academic and life outcomes. These impacts may be consistent with past research, which suggests possible benefits to formerly-incarcerated individuals, prisons, and local communities. Second Chance Pell, by the end of its second year of implementation, has allowed thousands of incarcerated students to receive financial aid for college. Evaluating the pilot can help assure Education and Congress have the information needed to make decisions about the future of Pell grants for incarcerated students.
Recommendation for Executive Action
We are making the following recommendation to Education:
The Secretary of Education should complete its evaluation of Second Chance Pell in order to report on the pilot’s findings and conclusions reached.
Agency Comments and Our Evaluation
We provided a copy of this report to Education and DOJ for review and comment. Education provided written comments, which are reproduced in full in appendix VI. DOJ did not provide written comments.
Regarding our recommendation to evaluate Second Chance Pell and report on its findings, Education concurred, with clarification. Education stated that it is already taking a number of actions to evaluate the pilot, including gathering information from participating schools and other sources. Education also stated that it will be analyzing the data it is collecting to report on the pilot’s objectives. Education, accordingly, suggested the recommendation should be worded that the Department “continue to” evaluate Second Chance Pell. We describe Education’s data collection efforts in our report; however, at the time of our review Education was not able to provide evidence that it was evaluating the pilot and stated on more than one occasion that it planned to report descriptive information about the pilot’s outcomes (such as the amount of Pell dollars disbursed), because it did not have funding for an evaluation. We are pleased to see that the Department is now planning to evaluate the pilot and report on the pilot’s objectives, and accordingly, we revised our report and recommendation to state that Education should complete its evaluation. An evaluation of Second Chance Pell that goes beyond summarizing descriptive information can help provide policymakers with the information needed to make decisions about the future of Pell grants for incarcerated students.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Education, Attorney General, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff members that made key contributions to this report are listed in appendix VII.
Appendix I: Scope and Methodology
To identify the actions Education and other stakeholders took to implement the Second Chance Pell pilot, we reviewed summary-level data from the Department of Education (Education) regarding the first two years of the pilot—school years 2016-2017 and 2017-2018—on the schools that participated in the pilot, the number of incarcerated individuals who applied for and received Pell grants, and other aspects of the pilot. To ensure the reliability of these data, we reviewed agency documentation about the data and the system that produced them and interviewed officials from Education responsible for collecting and validating the data. We found the data to be sufficiently reliable for our purposes. To further identify the actions taken, we reviewed Education’s published guidance on implementing the Second Chance Pell pilot, including the department’s webinars, action plans, and Frequently Asked Questions document. Additionally, we interviewed officials from the Department of Justice, as well as Education’s Office of Federal Student Aid, on the actions taken to prepare for the pilot and the guidance and support provided to participants, among other topics. We also interviewed representatives from three research groups—the Urban Institute, the Vera Institute of Justice (Vera), and New America—in order to gain additional insight on the effects of postsecondary correctional education as well as the design and implementation of the Second Chance Pell pilot.
To further identify what actions schools and correctional facilities took to implement the pilot, we interviewed officials from a non-generalizable sample of 12 schools participating in the pilot. We also interviewed officials from seven correctional facilities that partnered with the participating schools. We used a purposive sampling procedure in which we selected participating schools with particular characteristics to capture both common experiences and important variations among schools with differing characteristics. We selected schools to represent a range of characteristics, including public and private nonprofit schools; schools with existing postsecondary correctional education programs and those with programs launched for the pilot; and schools with a varying number of correctional institution partners (ranging from 1 to 18). We selected schools that offered bachelor's degrees to students participating in the pilot as well as those that offered certificates and associate's degrees. We included in our sample one school serving a women's prison, one school that is classified as a Historically Black College or University, and four schools that are classified as Hispanic Serving Institutions. Results from nonprobability samples cannot be used to make inferences about a population. Although our findings cannot be generalized to all schools that are participating in the pilot, they do provide useful insight into the experiences of pilot participants.
To describe the experiences that participating schools are having as they implement the Second Chance Pell pilot, we interviewed officials from the non-generalizable sample of schools (and correctional partners) described above. Additionally, we visited three prisons (Jessup Correctional Institution in Maryland, Mission Creek Corrections Center for Women in Washington State, and Sing Sing Correctional Facility in New York) and one school campus (City University of New York) in order to observe classrooms and student resources such as libraries and study spaces and to talk with selected individuals about their experiences participating in the pilot. Specifically, one of the men's prisons that we visited identified five Second Chance Pell students for us to interview. Each interview was conducted in a private classroom setting with one student and two of our staff members, lasted between 5 and 10 minutes, and covered the same set of questions about the student's experience applying for and participating in the Second Chance Pell Pilot Program. Although these interviews were conducted at only one site and are therefore not generalizable to all students participating in the pilot program, they provide insight about the students' experiences. We also observed a pilot-funded class in session at that prison. On one college campus, we interviewed a student who participated in the pilot while he was incarcerated and who had since been released and was continuing his education on campus. These sites were selected to capture variation in experience delivering college classes in prisons and in the number of students served, and to allow us to observe both men's and women's prison facilities. To further understand schools' experiences as they implement the pilot, in June 2018 we attended the third annual convening of Second Chance Pell partners, a 2-day conference for participating schools, their correctional partners, and other stakeholders, hosted by Vera.
To assess how Education is monitoring and evaluating the pilot, and what opportunities, if any, exist for improvement, we reviewed Education’s documentation on the pilot’s objectives (including any evaluation objectives), and analyzed the data collection instruments Education uses to monitor the pilot. We met with Office of Federal Student Aid officials to discuss the department’s plans for evaluating and reporting on the pilot’s results. We compared Education’s efforts to leading practices we identified for effective pilot design and evaluation. We interviewed officials knowledgeable in the area of evaluation and prison education, including officials from the Urban Institute, Vera, the Washington State Board of Community and Technical Colleges, and New America. Finally, we asked officials from our purposive sample of schools about their experiences with Education’s reporting requirements, perspectives on what additional information Education could collect to demonstrate the outcomes of the pilot, and how schools themselves were measuring the performance of their programs apart from what they were reporting to Education.
We conducted this performance audit from January 2018 to March 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Research on the Effects of Participating in Education while Incarcerated
Our Method to Select Research Literature
To determine what is known about the effects of participation in postsecondary correctional education, we conducted a literature search for studies that analyzed the relationship between inmate participation in postsecondary educational programs while incarcerated and outcomes both while in prison and after release. Our literature search identified 221 published studies for review using a three-stage process. First, we searched 16 authoritative bibliographic databases (such as SCOPUS, ERIC, PsycINFO, and ProQuest's Dissertations and Theses Professional) using relevant search terms, such as "postsecondary correctional education," "postsecondary education," and "prison." Second, we identified citations in those studies that appeared germane to our research interests and did not already appear in our list of studies. Third, we identified several organizations with subject matter expertise, based on mentions in those studies and on organizations identified in our prior work, and consulted the website of each organization for any studies on the effects of correctional education.
To assess each study's methodological rigor, we obtained information about each study's methodology and based our assessments on generally accepted social science standards. We eliminated studies that met any of the following criteria: (1) published prior to 2000; (2) considered the education level of inmates, rather than participation in education while incarcerated; (3) did not include postsecondary educational programs; (4) did not use appropriate statistical methods to adjust, or control, for group differences; or (5) involved a comparison group that was not applicable to our research interests, such as juveniles. In the first stage of the review, we examined the study abstracts; 42 studies remained after this stage. In the second stage, we read the full description of each study's methodology; 20 studies remained for our in-depth review.
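The screening described above is, in effect, a filter applied to each candidate study. As a minimal sketch (the field names and sample records below are hypothetical, not GAO's actual data), the five criteria could be applied programmatically as follows:

```python
# Hypothetical sketch of the five screening criteria described above.
# Field names and sample records are illustrative only.
studies = [
    {"id": 1, "year": 1998, "measures_participation": True,
     "includes_postsecondary": True, "controls_for_differences": True,
     "adult_comparison_group": True},
    {"id": 2, "year": 2012, "measures_participation": True,
     "includes_postsecondary": True, "controls_for_differences": True,
     "adult_comparison_group": True},
]

def passes_screen(study):
    """Return True only if a study survives all five exclusion criteria."""
    return (study["year"] >= 2000                    # criterion 1
            and study["measures_participation"]      # criterion 2
            and study["includes_postsecondary"]      # criterion 3
            and study["controls_for_differences"]    # criterion 4
            and study["adult_comparison_group"])     # criterion 5

retained = [s for s in studies if passes_screen(s)]
print(f"{len(retained)} of {len(studies)} studies retained")  # 1 of 2
```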
Research Findings
Based on our review of the literature described above, studies found that inmates who participated in a correctional education program while incarcerated generally achieved more positive outcomes after release (e.g., higher employment, lower recidivism) than inmates who did not participate. In 2013, the RAND Corporation published a meta-analysis of 58 studies and found that inmates who participated in correctional education had 43 percent lower odds of recidivating than non-participants and 13 percent higher odds of obtaining employment. Many studies we reviewed that tested impacts on one or more measures of recidivism also found that incarcerated students who participated in a postsecondary program or earned a postsecondary degree while in prison were less likely to be re-arrested or re-incarcerated than those who did not participate. Some research, however, has found that program completion may confer greater benefits than participation alone. For example, in one study, researchers found that completion of a postsecondary program while in prison was associated with significantly and substantively lower odds of returning to prison for either a new crime or a parole violation, but participation in a postsecondary program without completion offered no benefit relative to not having participated at all. Additionally, not all researchers have observed positive effects in all study settings. In one three-state study, researchers found that those who participated in a correctional education program were less likely to be re-arrested, re-convicted, and re-incarcerated in two states; in the third state, there were no significant differences between participants and non-participants.
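Because "lower odds" is easy to misread as "lower probability," the following minimal worked example restates the RAND figure; the 50 percent baseline recidivism rate used in the conversion is hypothetical, chosen only to illustrate the arithmetic.

```latex
% 43 percent lower odds corresponds to an odds ratio of
\mathrm{OR} = 1 - 0.43 = 0.57 .
% With a hypothetical 50% baseline recidivism rate:
\text{odds}_{\text{base}} = \frac{0.5}{1 - 0.5} = 1, \qquad
\text{odds}_{\text{part}} = \mathrm{OR} \times 1 = 0.57 .
% Converting participant odds back to a probability:
p_{\text{part}} = \frac{0.57}{1 + 0.57} \approx 0.36 .
```

Under that hypothetical baseline, participants' recidivism probability falls from 50 percent to about 36 percent, a 14 percentage-point reduction rather than a 43 percent reduction in probability.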
Additionally, some research suggests incarcerated students who participated in a postsecondary program while in prison were more likely to find employment after release, work more hours, or earn higher wages than those who did not participate, but this was not always found. For example, in one study, earning a postsecondary credential while incarcerated was associated with an increase in total hours worked and total wages earned in the first 2 years after release; however, it was not associated with an increase in the odds of finding employment. Additionally, one study of inmates in three states found no statistically significant difference in post-release employment in the 3-year follow-up among participants in a correctional education program compared to non-participants.
Several studies found that correctional education had positive outcomes for taxpayers due to lower re-incarceration costs. For example, the RAND Corporation estimated that for every dollar spent on correctional education, five dollars are saved on three-year re-incarceration costs. Another cost analysis in Washington State found that correctional education had a return-on-investment of $19.62 for participants and taxpayers for each dollar spent, and vocational education in prison had a return-on-investment of $13.21 for each dollar spent.
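The savings figures above are benefit-cost ratios, that is, dollars returned per dollar spent. As a minimal sketch of the arithmetic (the $1,600 program cost below is a hypothetical figure, not one drawn from the studies):

```latex
\mathrm{BCR} = \frac{\text{avoided re-incarceration costs}}{\text{program cost}}
\qquad \text{e.g.,} \qquad
\frac{\$8{,}000}{\$1{,}600} = 5 .
```

RAND's estimate of five dollars saved per dollar spent corresponds to a benefit-cost ratio of 5; the Washington State figures of $19.62 and $13.21 are ratios computed the same way.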
A few studies focused on outcomes for participants while they were still in prison, and these generally suggest positive effects. For example, one qualitative study found that participants in a postsecondary correctional education program reported experiencing increased self-esteem and motivation to reach their goals. A few other studies suggested that participation in education programming reduced misconduct. In one study, participants in college programs (but not other education programs) reported receiving fewer tickets for misconduct. A 2006 meta-analysis, however, found that participating in an educational or vocational program was not as effective at reducing misconduct as were other types of programming.
Research Limitations
The research we identified on correctional education has several limitations. First, the identified studies often measure dependent and independent variables in a variety of ways, which makes comparison of outcomes across studies difficult. For example, some studies define "recidivism" as re-arrest within 3 years, while others measure it as re-arrest or re-incarceration within 1 year. Similarly, many studies define "participation in education" as participation in a vocational, secondary, or postsecondary program, while others define it as participation specifically in a postsecondary program. Second, all but one of the studies we reviewed included a small, geographically limited, or otherwise non-generalizable sample. Third, many of the studies we reviewed do not examine whether and how characteristics of facilities or implementation procedures may have influenced—negatively or positively—outcomes among participants. We identified nine articles that specifically discuss implementation and facility characteristics; however, none employ robust methodologies to test whether and how these characteristics lead to better outcomes among participants. A fourth limitation is selection bias: the possibility that incarcerated students who choose to take classes are meaningfully different from those who choose not to enroll, and that this difference is the underlying cause of their positive outcomes. For example, it is possible that incarcerated people who take educational classes are already at the lowest risk of recidivating and have the highest motivation to succeed after release. If so, then lower rates of recidivism and higher rates of employment may be an effect of these characteristics rather than an effect of taking classes while incarcerated. While some of the studies we reviewed took methodological steps to reduce selection bias, not all did.
Appendix III: Select Characteristics and Educational Attainment Levels of the Incarcerated Population
The United States had an estimated 6.6 million people under the supervision of adult correctional systems (including those incarcerated in prison or jail and those on probation or parole) as of December 31, 2016 (year-end), according to the Bureau of Justice Statistics. According to an analysis of 2009 American Community Survey data, Black, Hispanic, and other non-white individuals make up about 32 percent of the total household population but are about 64 percent of the male prison population. Further, 23 percent of incarcerated men had received some postsecondary education, compared to about 56 percent of men in the household (non-incarcerated) population, as shown below in figure 5.
Among the incarcerated population, the analysis also found differences in educational attainment by race. Specifically, for men age 18-24, about 10 percent of Black men and about 11 percent of Hispanic men had completed at least some college, compared to about 17 percent of white (non-Hispanic) men.
The educational characteristics of incarcerated women were similar to those of men. Specifically, incarcerated women had lower levels of educational attainment compared to women living in households; however, incarcerated women had overall higher levels of educational attainment than incarcerated men. Fifty-eight percent of women in the household population had some postsecondary education, compared to about 31 percent of incarcerated women, as shown below in figure 6.
Appendix IV: Selected Characteristics of Schools Education Selected to Participate in the Second Chance Pell Pilot
Table 2. Selected Characteristics of Schools Education Selected to Participate in the Second Chance Pell Pilot
Appendix V: Data Items Collected by the Department of Education for the Second Chance Pell Pilot
Appendix VI: Comments from the Department of Education
Appendix VII: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact name above, Melissa Emrey-Arras (Director), Brett Fallavollita (Assistant Director), Charlotte Gamble (Analyst in Charge), Sarah Williamson, Marissa Jones Friedman, Billy Commons, Elizabeth Dretsch, Eric Hauswirth, Debra Prescott, Kevin Reeves, and Ben Sinoff made key contributions to this report.
Why GAO Did This Study
Incarcerated students are generally prohibited from receiving Pell grants, which provide need-based federal financial aid to low-income undergraduate students. However, Education has the authority to waive specific statutory or regulatory requirements for providing federal student aid at schools approved to participate in its experiments. Accordingly, the department initiated the multi-year Second Chance Pell pilot in 2015 to test whether allowing incarcerated individuals to receive Pell grants increases their participation in higher education programs and influences their academic and life outcomes, or creates any obstacles to schools' administration of federal financial aid programs.
GAO was asked to review the Second Chance Pell pilot. This report examines (1) actions Education, schools, and other stakeholders have taken to implement the pilot; (2) experiences participating schools are having as they implement the pilot; and (3) how Education is monitoring and evaluating the pilot and whether opportunities for improvement exist.
GAO analyzed summary-level Education data from the 2016-2017 and 2017-2018 school years and interviewed a non-generalizable sample of 12 schools (and associated prison partners) that were selected for variation in type of school (i.e., public and private nonprofit), type of prisons served, and other variables. GAO also interviewed Education officials.
What GAO Found
The Department of Education (Education) selected 64 schools across 26 states to participate in the Second Chance Pell pilot, and participating schools collaborated with prisons and other stakeholders to implement the pilot. Across the pilot's first 2 years, schools awarded approximately $35.6 million in Pell grants to about 8,800 incarcerated students.
Officials from the 12 schools GAO interviewed reported experiencing some challenges implementing the pilot. First, school officials said they experienced challenges establishing incarcerated applicants' eligibility for Pell grants, since some applicants had not registered for Selective Service and some had an existing federal student loan in default. However, many applicants were able to complete the necessary steps—such as making a set number of payments on their defaulted loans—to reestablish eligibility. Second, obtaining documents from incarcerated applicants to support verification—which helps the department's efforts to reduce improper payments of federal student aid—was another challenge officials reported. School officials also said that providing college classes in prisons required them to develop new processes and creative solutions to overcome technology limitations, space limitations, and the transfer of students to other prisons. Officials from 8 of 12 schools told GAO they hired additional staff or developed new approaches in response to their pilot efforts.
Incarcerated College Students inside New York's Sing Sing Correctional Facility
Education monitors the pilot by collecting data from participating schools, but had not established how it intended to evaluate Second Chance Pell or measure the pilot's performance against its objectives. Education is required to review and evaluate experiments under the Experimental Sites Initiative—of which Second Chance Pell is a part—and make recommendations, as appropriate, to improve the delivery of federal student aid. In its comments on the draft report, Education stated that it was planning to evaluate the pilot, consistent with the pilot's objectives, and described a number of steps it was taking to do so. Completing this evaluation can help ensure policymakers have the information needed to make decisions about the future of Pell grants for incarcerated students.
What GAO Recommends
GAO recommends that the Secretary of Education complete the Department's evaluation of the pilot and report on its findings and conclusions. Education concurred, with clarification, and stated that it had actions underway to evaluate the pilot.
Background
SSA’s mission is to deliver Social Security services that meet the changing needs of the public. The Social Security Act and amendments established three programs that the agency administers:
Old-Age and Survivors Insurance provides monthly retirement and survivors benefits to retired workers, their spouses and children, and the survivors of insured workers who have died. SSA estimates that, in fiscal year 2019, $892 billion in old-age and survivors insurance benefits will be paid to a monthly average of approximately 54 million beneficiaries.
Disability Insurance provides monthly benefits to disabled workers and their spouses and children. The agency estimates that, in fiscal year 2019, a total of approximately $149 billion in disability insurance benefits will be paid to a monthly average of about 10 million eligible workers.
Supplemental Security Income is a needs-based program financed from general tax revenues that provides benefits to aged adults, blind or disabled adults, and children with limited income and resources. For fiscal year 2019, SSA estimates that nearly $59 billion in federal benefits and state supplementary payments will be made to a monthly average of approximately 8 million recipients.
SSA Relies Extensively on IT
SSA relies heavily on its IT resources to support the administration of its programs and related activities. For example, its systems are used to handle millions of transactions on the agency’s website, maintain records for the millions of beneficiaries and recipients of its programs, and evaluate evidence and make determinations of eligibility for benefits. According to the agency’s most recent Information Resources Strategic Plan, its systems supported the processing of an average daily volume of about 185 million individual transactions in fiscal year 2015.
SSA’s Office of the Deputy Commissioner for Systems is responsible for developing, overseeing, and maintaining the agency’s IT systems.
Comprised of approximately 3,800 staff, the office is headed by the Deputy Commissioner, who also serves as the agency’s CIO.
SSA Has a History of Unsuccessful IT Management
SSA has long been challenged in its management of IT. As a result, we have previously issued a number of reports highlighting various weaknesses in the agency’s system development practices, governance, requirements management, and strategic planning, among other areas. Collectively, our reports stressed the need for the agency to strengthen its IT management controls.
In 2016, we reported that SSA’s acting commissioner had stated that the agency’s aging IT infrastructure was not sustainable because it was increasingly difficult and expensive to maintain. Accordingly, the agency requested $132 million in its fiscal year 2019 budget to modernize its IT environment. As reflected in the budget, these modernization efforts are expected to include projects such as updating database designs by converting them to relational databases, eliminating the use of outdated code, and upgrading infrastructure.
Among the agency’s priority IT spending initiatives in the budget is its Disability Case Processing System, which has been under development since December 2010. This system is intended to replace the 52 disparate Disability Determination Services’ component systems and associated processes with a modern, common case processing system. According to SSA, the new system is to modernize the entire claims process, including case processing, correspondence, and workload management.
However, SSA has reported substantial difficulty in carrying out this initiative, citing software quality and poor system performance as issues. Consequently, in June 2016, the Office of Management and Budget (OMB) placed the initiative on its government-wide list of 10 high-priority programs requiring attention.
Congress and the Administration Have Undertaken Efforts to Improve Federal IT
As previously mentioned, Congress enacted federal IT acquisition reform legislation (commonly referred to as FITARA) in December 2014. This legislation was intended to improve agencies’ acquisitions of IT and enable Congress to monitor agencies’ progress and hold them accountable for reducing duplication and achieving cost savings. It includes specific requirements related to seven areas: (1) agency CIO authority enhancements, (2) federal data center consolidation initiative, (3) enhanced transparency and improved risk management, (4) portfolio review, (5) IT acquisition cadres, (6) government-wide software purchasing program, and (7) the Federal Strategic Sourcing Initiative.
In June 2015, OMB released guidance describing how agencies are to implement FITARA. The guidance identifies a number of actions that agencies are to take to establish a basic set of roles and responsibilities (referred to as the common baseline) for CIOs and other senior agency officials and, thus, to implement the authorities described in the law.
More recently, on May 15, 2018, the President signed Executive Order 13833, Enhancing the Effectiveness of Agency Chief Information Officers. Among other things, this executive order is intended to better position agencies to modernize their technology, execute IT programs more efficiently, and reduce cybersecurity risks. The order pertains to 22 of the 24 Chief Financial Officers Act agencies; the Department of Defense and the Nuclear Regulatory Commission are exempt.
For the covered agencies, including SSA, the executive order strengthens the role of the CIO by, among other things, requiring the CIO to report directly to the agency head; to serve as the agency head’s primary IT strategic advisor; and to have a significant role in all management, governance, and oversight processes related to IT. In addition, one of the cybersecurity requirements directs agencies to ensure that the CIO works closely with an integrated team of senior executives, including those with expertise in IT, security, and privacy, to implement appropriate risk management measures.
In June 2018, we issued a report that examined the federal government's cybersecurity workforce. We noted that most of the 24 agencies we examined had developed baseline assessments to identify the cybersecurity personnel within their agencies who held certifications, but the results were potentially unreliable. However, SSA's baseline was found to be reliable because it addressed all of the reportable information, such as the extent to which personnel without professional certifications were ready to obtain them and strategies for mitigating any gaps. Further, we found that most of the 24 agencies, including SSA, had established procedures to assign cybersecurity codes to positions. We also have ongoing work at SSA, including reviews of its cybersecurity workforce; its standardized approach to security assessment, authorization, and continuous monitoring; its cybersecurity strategy; and its intrusion detection and prevention capabilities.
From July 2011 through January 2018, we issued a number of reports that addressed specific weaknesses in SSA’s management of IT acquisitions and operations and in the role of its CIO. These reports included 15 recommendations aimed at improving the agency’s efforts with regard to data center consolidation, incremental development, IT acquisitions, and software licenses. We also made a recommendation to SSA to address weaknesses related to the role of the CIO in key management areas.
SSA Has Improved the Management of Selected Areas of IT Acquisitions and Operations, but Has Not Fully Addressed the Role of Its CIO
SSA has taken steps to improve its management of IT acquisitions and operations by addressing 14 of the 15 recommendations that we previously directed to the agency regarding data center consolidation, incremental development, IT acquisitions, and software licenses.
Data center consolidation. OMB established the Federal Data Center Consolidation Initiative in February 2010 to improve the efficiency, performance, and environmental footprint of federal data center activities. The enactment of FITARA in 2014 codified and expanded the initiative. Subsequently, in August 2016, pursuant to FITARA, the Federal CIO issued guidance that established the Data Center Optimization Initiative as a successor to the Federal Data Center Consolidation Initiative and included instructions on how to implement the data center consolidation and optimization provisions of FITARA. Among other things, the guidance required agencies to consolidate inefficient infrastructure, optimize existing facilities, improve their security posture, and achieve cost savings.
In addition, the guidance directed agencies to develop a data center consolidation and optimization strategic plan that defines the agency’s data center strategy for fiscal years 2016, 2017, and 2018. This strategy is to include, among other things, a statement from the agency CIO indicating whether the agency has complied with all data center reporting requirements in FITARA. Further, the guidance indicates that OMB is to maintain a public dashboard to display consolidation-related cost savings and optimization performance information for the agencies.
In a series of reports that we issued from July 2011 through August 2017, we noted that, while data center consolidation could potentially save the federal government billions of dollars, weaknesses existed in agencies’ data center consolidation plans and data center optimization efforts. Specifically with regard to SSA, in 2011, we reported that the agency had an incomplete consolidation plan and inventory of IT assets. In 2016, we reported that SSA did not meet any of the seven applicable data center optimization targets, as required by OMB. In addition, in 2017, we reported that the agency had an incomplete data center optimization plan. We stressed that until SSA completed these required activities, it might not be able to consolidate data centers, as required, and realize expected savings.
We made a total of four recommendations to SSA in our 2011, 2016, and 2017 reports to help improve the agency’s reporting of data center-related cost savings and to achieve data center optimization targets. As of September 2018, SSA had implemented all four recommendations. Consequently, the agency is better positioned to improve the efficiency of its data centers and achieve cost savings.
In addition, we reported in May 2018 that the agencies participating in the Data Center Optimization Initiative had communicated mixed progress toward achieving OMB’s goals for closing data centers by September 2018. With regard to SSA, we noted that the agency had not yet achieved its planned savings but that its data centers were among the most optimized that we reviewed. In particular, while SSA reported that it planned to save $1.08 million on its data center initiative from 2016 through 2018, it had not achieved any of those savings. However, the agency reported having met the goal of closing 25 percent of its tiered data centers.
Further, SSA reported the most progress among the 22 applicable agencies in meeting OMB's data center optimization targets. Specifically, SSA reported that it had met four of the five targets. (One other agency reported meeting three targets, six agencies reported meeting one or two targets, and 14 agencies reported meeting none of the targets.) Consequently, we did not make any additional recommendations to SSA in our May 2018 report. We also have ongoing work involving SSA related to agencies' progress on closing data centers and achieving optimization targets.
Incremental development. OMB has emphasized the need to deliver investments in smaller parts, or increments, in order to reduce risk, deliver capabilities more quickly, and facilitate the adoption of emerging technologies. In 2010, it called for agencies’ major investments to deliver functionality every 12 months and, since 2012, every 6 months. Subsequently, FITARA codified a requirement that covered agency CIOs certify that IT investments are adequately implementing incremental development, as defined in the capital planning guidance issued by OMB. Further, subsequent OMB guidance on the law’s implementation, issued in June 2015, directed agency CIOs to define processes and policies for their agencies to ensure that they certify that IT resources are adequately implementing incremental development.
In November 2017, we reported that 21 agencies, including SSA, needed to improve their certification of incremental development. We pointed out that, as of August 2016, agencies had reported that 103 of 166 major IT software development investments (62 percent) were certified by the agency CIO for implementing adequate incremental development in fiscal year 2017, as required by FITARA.
With regard to SSA, we noted that only 3 of the agency’s 10 investments primarily in development had been certified by the agency CIO as using adequate incremental development, as required by FITARA. In addition, we noted that SSA’s incremental development certification policy did not describe the CIO’s role in the certification process or how CIO certification would be documented. However, accurate agency CIO certification of the use of adequate incremental development for major IT investments is critical to ensuring that agencies are making the best effort possible to create IT systems that add value while reducing the risks associated with low-value and wasteful investments.
As a result of these findings, we recommended that SSA ensure that its CIO (1) reports major IT investment information related to incremental development accurately, in accordance with OMB guidance; and (2) updates the agency’s policy and processes for the certification of incremental development and confirm that the policy includes a description of how the CIO certification will be documented. SSA agreed with our recommendations and implemented both of them. Thus, the agency should be better positioned to realize the benefits of incremental development practices, such as reducing investment risk, delivering capabilities more rapidly, and permitting easier adoption of emerging technologies.
IT acquisitions. FITARA includes a provision to enhance covered agency CIOs’ authority through, among other things, requiring agency heads to ensure that CIOs review and approve IT contracts. OMB’s FITARA implementation guidance expanded upon this aspect of the legislation in a number of ways. Specifically, according to the guidance, CIOs may review and approve IT acquisition strategies and plans, rather than individual IT contracts, and CIOs can designate other agency officials to act as their representatives.
In January 2018, we reported that most of the CIOs at 22 selected agencies, including SSA, were not adequately involved in reviewing and approving billions of dollars of IT acquisitions. In particular, we found that SSA’s process to identify IT acquisitions for CIO review did not involve the acquisition office, as required by OMB. In addition, we noted that SSA had a CIO review and approval process in place that fully satisfied the requirements set forth in OMB’s guidance. However, while SSA provided evidence of the CIO’s review of most of the IT contracts we examined, the agency had not ensured that the CIO or a designee reviewed and approved each IT acquisition plan or strategy. Specifically, of 10 randomly selected IT contracts that we examined at SSA, 7 acquisitions associated with those contracts had been reviewed and approved, as required by OMB.
We pointed out that, until SSA ensured that its CIO or designee reviewed and approved all IT acquisitions, the agency would have limited visibility and input into its planned IT expenditures and would not be effectively positioned to benefit from the increased authority that FITARA’s contract approval provision is intended to provide.
Further, the agency could miss an opportunity to strengthen the CIO’s authority and the oversight of IT acquisitions—thus, increasing the potential to award IT contracts that are duplicative, wasteful, or poorly conceived.
Accordingly, we made three recommendations to SSA to address these weaknesses. As of September 2018, the agency had made progress by implementing two of the recommendations: to ensure that (1) the acquisition office is involved in identifying IT acquisitions and (2) the CIO or designee reviews and approves IT acquisitions according to OMB guidance. By taking these actions, SSA should be better positioned to properly identify and provide oversight of IT acquisitions.
However, SSA has not yet implemented our third recommendation that it issue guidance to assist in the identification of IT acquisitions. SSA stated that, in September 2017, it updated its policy for acquisition plan approval to address this recommendation; however, upon review of this policy, we did not find guidance for identifying IT acquisitions. Without the proper identification of IT acquisitions, SSA’s CIO cannot effectively provide oversight of these acquisitions.
Software licenses. Federal agencies engage in thousands of software licensing agreements annually. The objective of software license management is to manage, control, and protect an organization’s software assets. Effective management of these licenses can help avoid purchasing too many licenses, which can result in unused software, as well as too few licenses, which can result in noncompliance with license terms and cause the imposition of additional fees.
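At its core, this reconciliation compares what an agency has purchased against what is actually deployed. A minimal sketch of that comparison appears below; the product names and counts are hypothetical.

```python
# Hypothetical license reconciliation: compare purchased entitlements
# to deployed installations to flag waste and compliance risk.
purchased = {"Product A": 500, "Product B": 120, "Product C": 75}
installed = {"Product A": 410, "Product B": 150, "Product C": 75}

for product, owned in sorted(purchased.items()):
    deployed = installed.get(product, 0)
    if deployed < owned:
        print(f"{product}: {owned - deployed} unused licenses (possible waste)")
    elif deployed > owned:
        print(f"{product}: {deployed - owned} over-deployed (compliance risk; may incur fees)")
    else:
        print(f"{product}: entitlements match deployments")
```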
As part of its PortfolioStat initiative, OMB has developed policy that addresses software licenses. This policy requires agencies to conduct an annual, agency-wide IT portfolio review to, among other things, reduce commodity IT spending. Such areas of spending could include software licenses.
In May 2014, we reported on federal agencies’ management of software licenses and determined that better management was needed to achieve significant savings government-wide. Of the 24 agencies we reviewed, SSA was 1 of 22 that lacked comprehensive policies that incorporated leading practices.
In particular, SSA’s policy partially met four of the leading practices and did not meet one. Further, we noted that SSA was among 22 of the 24 selected agencies that had not established comprehensive software license inventories—a leading practice that would help the agencies to adequately manage their software licenses.
As such, we made six recommendations to SSA to improve its policies and practices for managing software licenses. These included recommendations that the agency develop a comprehensive policy for the management of software licenses and establish a comprehensive inventory of software licenses. SSA agreed with the recommendations and, as of September 2018, had implemented all six of them. As a result, the agency should be better positioned to manage its software licenses and identify opportunities for reducing software license costs.
SSA Needs to Further Address the CIO’s Role in Its Policies
While SSA has taken steps that improved its IT management in the areas of data center consolidation, incremental development, IT acquisitions, and software licenses, we reported in August 2018 that the agency had not fully addressed the role of the CIO in its policies.
As previously mentioned, FITARA and Executive Order 13833, among other laws and guidance, outline the roles and responsibilities of agency CIOs in an effort to improve the government's performance in IT and related information management functions. Within these laws and guidance, we identified IT management responsibilities assigned to CIOs in six key IT areas:
Leadership and accountability. CIOs are responsible and accountable for the effective implementation of IT management responsibilities. For example, CIOs are to report directly to the agency head or that official’s deputy and designate a senior agency information security officer.
Strategic planning. CIOs are required to lead the strategic planning for all IT management functions. An example of a CIO requirement related to the strategic planning area is measuring how well IT supports agency programs and reporting annually on the progress in achieving the goals.
IT workforce. CIOs are to assess agency IT workforce needs and develop strategies and plans for meeting those needs. For example, CIOs are responsible for annually assessing the extent to which agency personnel meet IT management knowledge and skill requirements, developing strategies to address deficiencies, and reporting to the head of the agency on the progress made in improving these capabilities.
IT budgeting. CIOs are responsible for the processes for all annual and multi-year IT planning, programming, and budgeting decisions. For example, CIOs are to have a significant role in IT planning, programming, and budgeting decisions.
IT investment management. CIOs are to manage, evaluate, and assess how well the agency is managing its IT resources. In particular, CIOs are required to improve the management of the agency’s IT through portfolio review.
Information security. CIOs are to establish, implement, and ensure compliance with an agency-wide information security program. For example, CIOs are required to develop and maintain an agency-wide security program, policies, procedures, and control techniques.
In our August 2018 report, we noted that SSA, along with 23 other agencies, did not have policies that fully addressed the role of the CIO in these six key areas, consistent with the laws and guidance.
To its credit, SSA had fully addressed the role of the CIO in the IT leadership and accountability area. In particular, the agency’s policies addressed the requirements that the CIO report directly to the agency head, assume responsibility and accountability for IT investments, and designate a senior agency information security officer.
However, the policies did not fully address the role of the CIO in the other five areas (i.e., strategic planning, workforce, budgeting, investment management, and information security). For example, the agency’s policies did not address the IT workforce area at all, including the requirements that the CIO annually assess the extent to which agency personnel meet IT management knowledge and skill requirements, develop strategies to address deficiencies, and report to the head of the agency on the progress made in improving these capabilities.
Further, SSA’s policies minimally addressed the requirements for IT strategic planning. Specifically, while the agency’s policies required the CIO to establish goals for improving agency operations through IT, the policies did not require the CIO to measure how well IT supports agency programs and report annually on the progress in achieving the goals.
Table 1 summarizes the extent to which SSA’s policies addressed the role of its CIO, as reflected in our August 2018 report.
As a result of these findings, we made a recommendation to SSA to address the weaknesses in its policies with regard to the remaining five key management areas. In response, the agency agreed with our recommendation and, subsequently, stated that it planned to do so by the end of September 2018. Following through to ensure that the identified weaknesses are addressed in its policies will be essential to helping SSA overcome its longstanding IT management challenges.
In conclusion, effective IT management is critical to the performance of SSA’s mission. Toward this end, the agency has taken steps to improve its management of IT acquisitions and operations by implementing 14 of the 15 recommendations we made from 2011 through 2018 to improve its IT management. Nevertheless, SSA would be better positioned to effectively address longstanding IT management challenges by ensuring that it has policies in place that fully address the role and responsibilities of its CIO in the five key management areas, as we previously recommended.
Chairman Johnson, Ranking Member Larson, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have.
GAO Contact and Staff Acknowledgments
If you or your staffs have any questions about this testimony, please contact Carol C. Harris at (202) 512-4456 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony statement. GAO staff who made key contributions to this statement are Kevin Walsh (Assistant Director), Jessica Waselkow (Analyst in Charge), and Rebecca Eyler.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Why GAO Did This Study
SSA delivers services that touch the lives of almost every American, and it relies heavily on IT resources to do so. Its systems support a range of activities, from processing Disability Insurance payments to calculating and withholding Medicare premiums and issuing Social Security numbers and cards. For fiscal year 2018, the agency planned to spend approximately $1.6 billion on IT.
GAO has previously reported that federal IT projects have often failed, in part, due to a lack of oversight and governance. Given the challenges that federal agencies, including SSA, have encountered in managing IT acquisitions, Congress and the administration have taken steps to improve federal IT, including enacting federal IT acquisition reform legislation and issuing related guidance.
This statement summarizes GAO's previously reported findings regarding SSA's management of IT acquisitions and operations. In developing this testimony, GAO summarized findings from its reports issued in 2011 through 2018, and information on SSA's actions in response to GAO's recommendations.
What GAO Found
The Social Security Administration (SSA) has improved its management of information technology (IT) acquisitions and operations by addressing 14 of the 15 recommendations that GAO has made to the agency. For example,
Incremental development. The Office of Management and Budget (OMB) has emphasized the need for agencies to deliver IT investments in smaller increments to reduce risk and deliver capabilities more quickly. In November 2017, GAO reported that agencies, including SSA, needed to improve their certification of incremental development. As a result, GAO recommended that SSA's CIO (1) report incremental development information accurately, and (2) update its incremental development policy and processes. SSA implemented both recommendations.
Software licenses. Effective management of software licenses can help agencies avoid purchasing too many licenses, which results in unused software. In May 2014, GAO reported that most agencies, including SSA, lacked comprehensive software license policies. As a result, GAO made six recommendations to SSA, including developing a comprehensive software license policy and inventory. SSA implemented all six recommendations.
However, SSA's IT management policies have not fully addressed the role of its CIO. Various laws and related guidance assign IT management responsibilities to CIOs in six key areas. In August 2018, GAO reported that SSA had fully addressed the role of the CIO in one of the six areas (see table). Specifically, SSA's policies fully addressed the CIO's role in the IT leadership and accountability area by requiring the CIO to report directly to the agency head, among other things.
In contrast, SSA's policies did not address or minimally addressed the IT workforce and IT strategic planning areas. For example, SSA's policies did not include requirements for the CIO to annually assess the extent to which personnel meet IT management skill requirements or to measure how well IT supports agency programs. GAO recommended that SSA address the weaknesses in the remaining five key areas. SSA agreed with GAO's recommendation and stated that the agency plans to implement the recommendation by the end of this month.
What GAO Recommends
From 2011 through 2018, GAO made 15 recommendations to SSA to improve its management of IT acquisitions and operations, as well as 1 recommendation to improve its CIO policies. While SSA has implemented nearly all of them, it would be better positioned to overcome longstanding IT management challenges once it addresses the CIO's role in its policies.
Background
Federal Open Data Policy
Recognizing the federal government’s role as a major supplier of data, the 2018 President’s Management Agenda announced the creation of a Federal Data Strategy. According to the agenda, this strategy promises to leverage data as a strategic asset to grow the economy, increase the effectiveness of the federal government, facilitate oversight, and promote transparency. It proposes improving data dissemination by making data available more quickly and in more useful formats, maximizing nonsensitive data shared with the public, and enabling external users to access and use government data for commercial and other public purposes.
The Federal Data Strategy builds on existing policy governing the federal government's websites and data. In 2016, OMB memorandum M-17-06, Policies for Federal Agency Public Websites and Digital Services, established policy for the federal government's online information resources, including requirements to ensure that information is searchable and to inform users about information quality issues. In addition, in 2013, OMB memorandum M-13-13, Open Data Policy—Managing Information as an Asset, established an information management framework to promote interoperability and openness at each stage of the information life cycle.
These efforts are consistent with the international Open Government Partnership, which aims to make governments more inclusive, responsive, and accountable. Seventy-five countries have committed to the Open Government Partnership by endorsing the Open Government Declaration. In doing so, these countries have committed to increasing the availability of information about government activities, supporting public participation in government, and using new technologies for openness and accountability, among other things.
DATA Act and USAspending.gov
Enacted in 2006, the Federal Funding Accountability and Transparency Act (FFATA) requires agencies to report information on federal awards—such as contracts, grants, and loans. In 2014, the Digital Accountability and Transparency Act of 2014 (DATA Act) expanded on FFATA by establishing new requirements intended to allow policymakers and the public to more effectively track federal spending, including:
Reporting additional data. Agencies are required to report additional financial data from different points in the spending life cycle.
Setting government-wide standards. OMB and Treasury are responsible for establishing government-wide financial data standards for any federal funds made available to or expended by federal agencies. These standards define and describe the data elements that agencies must report.
Reporting consistently. Agencies reporting financial information are required to comply with the standards established by OMB and Treasury so that information can be compared across the government.
Improving data access. The data must be made available in machine-readable and open formats, to be downloaded in bulk, and— to the extent practicable—for automated processing.
The DATA Act required agencies to begin reporting data in accordance with the data standards issued by Treasury and OMB within three years of its enactment, and required that those data be displayed on USAspending.gov or a successor system. USAspending.gov has been the platform to provide federal spending information to the public since 2007 (see figure 1). In May 2017, Treasury released a new website, Beta.USAspending.gov, where it began to publish information submitted under the DATA Act. In March 2018, this new website assumed the USAspending.gov web address and Treasury retired the old version of USAspending.gov.
Data on USAspending.gov come from a variety of sources, including files that agencies began submitting quarterly for DATA Act reporting in May 2017. When agencies submit quarterly data, Treasury’s DATA Act Broker ingests the data and validates certain information before the data are published on the website. Agency Senior Accountable Officials certify that the agency’s submission is valid and reliable. In addition to agencies’ quarterly DATA Act reporting files, USAspending.gov includes data from government-wide systems. Government-wide procurement data on the website are updated daily, while government-wide financial assistance data are updated biweekly. The new USAspending.gov also includes older award data that had been available on the prior version of the website.
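Because the DATA Act requires that these data be machine-readable and available for automated processing, the public can retrieve them programmatically as well as through the website. The sketch below illustrates one such retrieval using USAspending's public API; the endpoint path and response field names are assumptions based on the API's public documentation and should be verified at api.usaspending.gov.

```python
# Hedged sketch: pulling machine-readable spending data from USAspending's
# public API. The endpoint path and field names below are assumptions drawn
# from the API's public documentation; verify them at api.usaspending.gov.
import requests

response = requests.get(
    "https://api.usaspending.gov/api/v2/references/toptier_agencies/",
    timeout=30,
)
response.raise_for_status()

# List each top-tier agency and its reported budget authority.
for agency in response.json().get("results", []):
    print(agency.get("agency_name"), agency.get("budget_authority_amount"))
```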
Our Prior Report Identified Data Quality and Website Issues
In November 2017, we issued our first report on data quality as required by the DATA Act. We found issues with the completeness and accuracy of the data that agencies submitted for the second quarter of fiscal year 2017 as well as the use of data elements. For example:
Of the 78 agencies that submitted data on time, 13 submitted the data file intended to link budgetary and award information without providing any data in the file.
Between 56 and 75 percent of the newly required budgetary records were fully consistent with agency sources, but only between 0 and 1 percent of award records (such as grants, contracts, and loans) were fully consistent.
Agencies differed in how they interpreted and applied OMB’s definitions for two data elements—Primary Place of Performance and Award Description—raising concerns regarding data consistency and comparability. These two award data elements are particularly important to achieving the transparency goals envisioned by FFATA because they provide the public with information on where the federal government spends money and what it spends it on, respectively.
We also found issues with the presentation of the data on Beta.USAspending.gov, including fragmented or incomplete search results and insufficient disclosure of data limitations. Among other things, we recommended that Treasury disclose known data quality issues and limitations on the new USAspending.gov website. We provide an update on actions Treasury has taken to address this recommendation later in this report.
Key Practices for Transparently Reporting Government Data
We identified five key practices that managers of open government data programs can consider to help ensure the transparent presentation of their data. We also identified key actions for implementing each key practice. We identified these key practices and key actions by systematically evaluating and synthesizing information from literature on open data as well as interviews with open data experts and good governance groups. These key practices and key actions are listed in table 1.
These key practices and actions are intended to be used in tandem with requirements for federal government websites and open data programs, such as relevant laws and OMB guidance. They are not intended to replace or supersede any applicable requirements. When considering an individual open government data program, some key practices and actions may be more relevant than others because the purpose and characteristics of open government data programs may vary. In addition, while this report focuses on the presentation of open government data, open data practitioners should also consider other elements—including data quality and data governance—to ensure that the public has access to high-quality information.
Provide Free and Unrestricted Data
To promote transparency, we found that open data should be freely and equally available to users without restrictions. As such, we identified two key actions for providing free and unrestricted data (see figure 2).
Make government data open by default, while protecting sensitive or restricted information. Making government data open by default ensures that the data are equally open to all types of users; in contrast, when government information is available by request, it may favor citizens with greater information about and access to government institutions. In addition, some open data practitioners we spoke with said that providing open data can minimize the burden of responding to information requests. For example, according to Connecticut officials, before the state’s open spending data website was launched, payroll data were the most frequently requested information under the state’s Freedom of Information Act (FOIA). Officials said that providing open payroll data on the website significantly reduced FOIA requests, which allowed state officials to spend time and resources on other activities.
However, not all government information is appropriate to publish. Some datasets may contain sensitive information such as personally identifiable information, information that is classified or similarly not subject to disclosure, or intellectual property. Other legal restrictions may also prohibit the disclosure of certain information. In some cases, the information in an individual dataset may not pose a risk of identifying sensitive information, but may pose such a risk when combined with other available information. For that reason, when considering whether or not information may be disclosed, OMB M-13-13 requires agencies to determine whether it may be combined with existing publicly available data to identify an individual or otherwise pose a security concern. In such situations, agencies must conduct a risk-based privacy analysis to determine whether the information can be made publicly available, accounting for the nature of the information, the availability of other information, and the technology in place that could facilitate the process of identification.
As an example of how open data practitioners can balance these types of considerations, Montgomery County, Maryland, applies safeguards such as a review by internal departments. Additionally, if department officials request a secondary fact-specific review, the Office of the County Attorney will review the information to further ensure that protected information is not published. In some cases, sensitive information is removed from a dataset prior to publication. For example, according to county officials, the names of housing assistance recipients are removed from spending data to protect resident privacy. Users can see nonsensitive aspects of these data, such as the amount spent, with identifying details removed.
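To illustrate one way this kind of redaction could be carried out in practice, the following minimal sketch removes sensitive fields from a dataset before publication. The column and file names are hypothetical assumptions and do not represent any particular government's systems.

```python
# Illustrative sketch only: column and file names are hypothetical.
import pandas as pd

# Fields a privacy review might flag as sensitive.
SENSITIVE_COLUMNS = ["recipient_name", "recipient_address"]

def redact_for_publication(source_csv: str, published_csv: str) -> None:
    """Drop sensitive columns from a spending dataset before publication."""
    df = pd.read_csv(source_csv)
    # Keep nonsensitive fields, such as the amount spent and payment date,
    # and remove identifying details flagged during the privacy review.
    public = df.drop(columns=[c for c in SENSITIVE_COLUMNS if c in df.columns])
    public.to_csv(published_csv, index=False)

redact_for_publication("housing_assistance_raw.csv", "housing_assistance_public.csv")
```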
Do not charge users for access to the data. Providing data for free can help ensure equitable access to users independent of their ability to pay. Lowering barriers, such as cost, increases the value of open data, as more users are able to access it.
Engage with Users
Open government data only create value to the extent that they are used. With that in mind, we identified three key actions for engaging with users (see figure 3).
Identify data users and their needs. By identifying who is using the data and what content or features are important to them, data providers can better prioritize their efforts to present information to data consumers. Open data experts we spoke with emphasized that data providers should engage with users both inside and outside of government, including groups that may typically have less access to government institutions. For example, to further New York City’s Open Data for All vision to provide open data for people from all walks of life and all five of the city’s boroughs, Columbia University students conducted user research on behalf of the city to better understand the extent to which community organizations use open data and what barriers they face, according to the capstone report for this project. By surveying and interviewing these organizations, the students learned that users found the city’s data portal interface difficult to use. In response, the city worked with users to design and test a new, more streamlined portal.
Solicit and be responsive to user feedback. Soliciting and being responsive to user feedback—both when the website is being developed and on an ongoing basis—can help ensure that the website meets users’ needs. Feedback can also surface issues with the functionality of the website and the quality of the data, thus enabling the data provider to make corrections when needed. User feedback mechanisms vary and can include online comment forms, forums, and discussion boards, as well as in-person public forums. Open data experts we spoke with said it is particularly helpful to list the contact information for a responsible official on the website in case users have questions about the data. In addition, timely response to feedback encourages engagement by assuring users that their voices have been heard.
Monitoring how the public is using the data can also help practitioners determine which content and features are most useful. Web analytics can show how the data are being used, such as by identifying commonly-used search terms and datasets, and showing trends over time.
Web Analytics
Web analytics is the collection, reporting, and analysis of website data, such as the number of users who visit the website.
According to a city official, Los Angeles uses web analytics to measure how frequently its datasets are accessed. Web analytics data showed that some datasets were often accessed on certain dates or in conjunction with current events, while other datasets were rarely used. City officials use this information to adjust how data are presented on the website, which has increased overall use of the city’s data. For example, the city created data visualizations and links to data—including its City Revenue and City Budget Expenditures datasets—to accompany the release of its Comprehensive Annual Financial Report.
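As a simplified illustration of this kind of analysis, the sketch below tallies the most common search terms from an analytics export. It assumes the analytics tool can export one search term per row to a CSV file; the file and column names are illustrative.

```python
# Minimal sketch; the export file and its "search_term" column are hypothetical.
import csv
from collections import Counter

def top_search_terms(analytics_csv, n=10):
    """Count the most frequently used search terms in an analytics export."""
    counts = Counter()
    with open(analytics_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            counts[row["search_term"].strip().lower()] += 1
    return counts.most_common(n)

# A practitioner could test each top term to verify that the associated
# data are discoverable on the website.
print(top_search_terms("site_search_export.csv"))
```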
Reach out to potential users to encourage data use. Actively engaging potential users can provide an opportunity to educate them on how the data can be appropriately used and encourage innovation. Data trainings can provide potential users with important context and information, which can include teaching users how the data can be used. Resources such as how-to guides can also encourage data use. For example, as shown on the website, New York City’s open data portal includes a “How To” page with a step-by-step guide to help users get started with open data, and directs them toward additional resources such as data dictionaries.
We previously found that open data collaboration and prize competitions or challenges are two strategies that agencies can use to harness the ideas, expertise, and resources of those outside of their organization.
Agencies engage in open data collaboration by mobilizing participants to use their open data in innovative ways, such as sharing, exploring, and analyzing publicly-available datasets; using the data to conduct research; designing data visualizations; or creating web and mobile applications and websites that help people access and use the data. In addition, agencies use prize competitions or challenges for help solving a problem or reaching a specific goal by asking members of the public to submit potential solutions. The agency then evaluates these proposals and provides a monetary or nonmonetary award to selected winners.
New York City encourages the use of its open data using these strategies by hosting data literacy trainings, hackathons, and contests. For example, in the spring of 2018 the city hosted a contest to recognize projects that effectively use its open data and showcase the diversity of potential uses, according to city officials and contest documentation. Winning projects were posted to a gallery on the city’s open data website. As shown on the city’s open data website, one winner—a project called “Plan(t)wise”— predicts various tree species’ likelihood of survival in locations throughout the city based on tree census data, and recommends which type of tree to plant at a given address.
Provide Data in Useful Formats
Data are most useful when they are provided in formats that allow them to be analyzed in a variety of ways. We identified four key actions for providing data in useful formats (see figure 4).
Provide users with detailed and disaggregated data. Data are most useful when they are provided in as much granularity as possible. For example, Ohio’s online checkbook allows users to view detailed, disaggregated data in a user-friendly checkbook format, as shown on the website (see figure 5). The representation of the expenditure is displayed as a check, and includes the vendor’s name and address, the amount paid, payment date, check number, and contact information for the appropriate state office.
Provide machine-readable data that can be downloaded in bulk and in selected subsets. Providing data in machine-readable formats makes them easier to process and analyze, which is particularly important for large datasets. For example, Kansas City officials said the city has been working to convert information from the PDF format to machine-readable formats such as CSV because PDF documents are challenging for the city to update and for users to navigate. In one instance, officials said that converting the city’s list of vehicles for sale in its tow lot from PDF to CSV format allowed the city to update the data more frequently so that users can see what vehicles are for sale at any given time.
Making data available to download in bulk allows users who need the full dataset to easily access it rather than retrieving information record-by- record. If the full dataset is large, allowing users to download selected subsets can make it easier for them to work with only the data they need. Data can also be provided to users through an Application Programming Interface (API), which allows users to connect directly with the dataset by enabling machine-to-machine communication. APIs can be particularly useful for large, frequently updated, or highly complex datasets because they offer users flexibility to obtain the data they need. In addition, developers can use APIs to build applications based on the data.
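To sketch what machine-to-machine access through an API can look like, the example below requests a subset of records from a hypothetical JSON endpoint. The URL and query parameters are illustrative assumptions, not the interface of any actual government API.

```python
# Minimal sketch; the endpoint and parameters are hypothetical.
import json
import urllib.parse
import urllib.request

BASE_URL = "https://data.example.gov/api/spending"

def fetch_subset(agency, fiscal_year):
    """Request only the records a user needs, rather than the full dataset."""
    query = urllib.parse.urlencode({"agency": agency, "fiscal_year": fiscal_year})
    with urllib.request.urlopen(f"{BASE_URL}?{query}") as response:
        return json.load(response)

# A developer could call this from an application to stay current with a
# large, frequently updated dataset without downloading it in bulk.
records = fetch_subset("Education", 2018)
print(len(records), "records retrieved")
```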
Non-Proprietary File Formats
File formats describe what type of information a file contains, as well as how that information is stored and structured. Some file formats are proprietary, meaning that they can only be opened by specific commercial software applications. In contrast, non-proprietary formats are publicly available and can be used by all software developers. Examples of non-proprietary file formats include:
CSV, which stores tabular data;
RDF, which stores metadata;
TXT, which stores unformatted text; and
XML, which stores both the format and content of data.
Provide data downloads in a non-proprietary format. To ensure broad and equitable access, data downloads should be available in formats that do not require specific commercial software to access, and therefore do not exclude users who do not have access to such software. Non-proprietary data formats include, but are not limited to, CSV, RDF, TXT, and XML. For example, Kansas City, Missouri's, open data portal allows users to export spatial data in an open format that does not require proprietary mapping software, according to city officials and the city's open data portal website. Open data experts we spoke with said that practitioners should consult stakeholders when determining which format is appropriate for a given program, and that the appropriate format may change over time as technology advances.
Make the data interoperable with other datasets. Making data interoperable with other datasets can make them more useful because users may want to create new opportunities for analysis by linking datasets together. This can be done by standardizing the way that the data are reported. For example, using standard definitions for the specific items included in a set of data—known as data elements—can promote consistency with other datasets. Additionally, documentation such as data dictionaries can help ensure that definitions are clear and avoid misunderstandings.
To promote interoperability between datasets that use geographic information, Kansas City uses standard land parcel identification numbers across departments. This allows different datasets that contain location information to be used in combination. For example, officials said that the city is linking different datasets that use those identification numbers— including building code violation data, 311 calls, and dangerous buildings data, among others—to build a model to prioritize code enforcement inspections.
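The sketch below illustrates, under assumed file and column names, how a shared identifier such as a standard parcel number allows two datasets to be joined for combined analysis.

```python
# Illustrative sketch; file and column names are hypothetical.
import pandas as pd

violations = pd.read_csv("code_violations.csv")  # contains a parcel_id column
calls_311 = pd.read_csv("311_calls.csv")         # contains a parcel_id column

# Because both datasets use the same standard identifier, they can be
# linked to support combined analysis, such as prioritizing inspections.
combined = violations.merge(calls_311, on="parcel_id", how="outer")
print(combined.head())
```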
Fully Describe the Data
Providing information about a dataset allows users to determine whether it is suitable for their intended purpose, and make informed decisions about whether and how to use it. With that in mind, we identified four key actions for fully describing the data (see figure 6).
Disclose known data quality issues and limitations. Disclosing data quality issues and limitations helps users make informed decisions about whether and how to use the data. Such disclosure can include descriptions of the completeness, timeliness, and accuracy of the data, such as an explanation of why certain data may not be disseminated. For example, we observed that Connecticut's "OpenCheckbook" website includes an "About" page explaining that some information is excluded to protect privacy, or because it is not processed through the state's financial system, such as spending by the state's Airport Authority, jury duty payments, and unclaimed property.
Disclose data sources and timeliness. Disclosing where the data come from and how frequently they are updated provides context that helps users judge their quality and determine whether they can be appropriately used for the intended purpose. Without this information, users may view, download, or use data without full knowledge of the extent to which they are timely, complete, or accurate, and therefore could inadvertently draw inaccurate conclusions from the data.
Metadata
Metadata provide descriptive information about a dataset in a structured, machine-readable format. They describe aspects of the dataset—such as the source of the data and when it was last updated—in clearly delineated fields.
Clearly label data and provide accompanying metadata. In addition, data should be clearly labeled and accompanied by structured metadata so that users can easily find information about the dataset. Metadata describe the characteristics of data in clearly defined, machine-readable fields, which can include attributes such as the date the data were created or modified, or the license used, among other things. Structuring metadata in clearly defined fields makes it possible for search tools to filter and match content pertaining to those fields. As shown in figure 7, Kansas City’s budget data are accompanied by metadata showing when they were last updated, the source of the data, and the name and contact link for the dataset owner, among other things.
Publish data under an open license and communicate licensing information to users. Documentation for a dataset should also specify what license applies to the data because a data license provides users with information about how they may use the data, including whether there are any restrictions, such as copyrights. An open license indicates that there are few to no restrictions on how the data may be used. An open license can encourage innovation, for example, by assuring users that they are permitted to use the data to develop commercial applications. To realize these benefits, licensing information should be clearly communicated to users, ideally in machine-readable and human-readable formats. As shown in figure 7, metadata can be used to communicate licensing information in a clear and structured way. Including licensing information in metadata can help ensure that it is machine readable—which makes it easier to process and analyze—as well as help users discover the licensing information and compare it across datasets.
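As a simplified sketch of what a structured, machine-readable metadata record might contain, the example below builds a record with field names in the spirit of the Project Open Data metadata schema. The values are hypothetical, and the record is not a complete or validated schema entry.

```python
# Simplified, hypothetical metadata record; not a complete schema entry.
import json

dataset_metadata = {
    "title": "City Budget Expenditures",
    "description": "Itemized expenditures from the adopted city budget.",
    "modified": "2018-10-01",  # when the data were last updated
    "publisher": {"name": "City Budget Office"},
    "contactPoint": {"fn": "Dataset Owner", "hasEmail": "mailto:[email protected]"},
    "license": "https://creativecommons.org/publicdomain/zero/1.0/",
    "keyword": ["budget", "spending"],
}

# Clearly delineated, machine-readable fields let search tools filter and
# match content, and let users compare licensing information across datasets.
print(json.dumps(dataset_metadata, indent=2))
```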
Facilitate Data Discovery for All Users
Data discovery is facilitated by presenting the data in a way that enables users to easily explore them. We identified five key actions for facilitating data discovery for all users (see figure 8).
Provide an interface that enables intuitive navigation and ensures that the most important information is made visible. To facilitate data discovery for all users, practitioners of open government data should ensure that the data are provided on a website that is simple and intuitive so that users can easily navigate it to find the information they need. Obtaining user feedback and conducting usability testing can help practitioners assess whether the website is easy to use, and identify any aspects that do not work well for users. In addition, websites designed to work on mobile devices, as well as mobile applications such as Ohio’s “OhioCheckbook” app (see figure 9), can allow users to access data on a variety of devices, according to the state’s website and our observations.
Provide users with appropriate interpretations of the data, such as visualizations or summaries. Summaries and visualizations can help users explore data. For example, our review of Montgomery County, Maryland’s, “spendingMontgomery” website found that the website provides summary data of the top five services, vendors, and expense categories with the greatest amount of spending, as well as a chart of annual spending along with historical averages, as shown in figure 10. This summary information provides a starting point for users, who can then navigate through the website to access more granular data.
Ensure that the website’s content is written in plain language. The content of an open data website should also be written in a way that is clear and direct, using plain language. Using commonly understood terms rather than technical jargon can help users understand the information provided. For example, to use well-understood terms when communicating budget information, Kansas City officials told us they participated in plain language training and applied that knowledge to the city’s open budget data website. In addition, we found that in cases where it is necessary to use technical language, providing a glossary that defines key terms can help make the information understandable to users.
Provide a search function that is optimized for easy and efficient use. Open data websites should also include a search function that is optimized for easy and efficient use so that users can find information that is relevant to them. For example, Connecticut officials said that the search function on the state’s open spending data website is designed so that users do not need to be familiar with the state government’s structure or terminology to find meaningful results. When a user enters a search term, the search bar will return a list of items that include this term and a description of what they are. For example, when we typed “Education” into the search bar, the website suggested Department of Education spending, bilingual education programs, and a vendor called Family Life Education.
In addition, Connecticut officials told us that they track the most commonly-used search terms—such as “housing” and “voter turnout”—on the state’s open data portal, and test them to verify that the information is discoverable. Similarly, Ohio’s online checkbook includes a “Popular Searches” tool that provides presaved searches that allow users to see expenditures for a variety of categories—such as travel, roads and highways, or parks—by clicking a single button. In addition, officials told us that if a user’s search returns a large volume of results, a pop-up appears prompting the user to narrow their search, which could help them focus on more relevant information.
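A search suggestion feature of the kind Connecticut officials describe could work along the lines of the following minimal sketch, which matches a typed term against a small, hypothetical catalog of items and descriptions.

```python
# Minimal sketch; the catalog entries are hypothetical examples.
CATALOG = {
    "Department of Education spending": "agency spending",
    "Bilingual education programs": "program spending",
    "Family Life Education": "vendor payments",
}

def suggest(term):
    """Return catalog items containing the term, with a short description."""
    term = term.lower()
    return [(name, kind) for name, kind in CATALOG.items() if term in name.lower()]

for name, kind in suggest("education"):
    print(f"{name} ({kind})")
```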
Use central data repositories and catalogues to help users easily find the data they are looking for. Central data repositories and catalogues—also known as data portals—are websites that provide a “one-stop shop” for users to access a variety of datasets. These websites host the data directly, link to other websites where users can access the data, or a combination of the two. They typically provide descriptions of the datasets, as well as structured metadata, to help users find data suitable for their purpose. New York City’s open data portal also includes a number of tools to help users find datasets, including a search function as well as lists of new datasets, popular datasets, and datasets by category, as shown in figure 11.
USAspending.gov Aligns with Several Key Practices, but Does Not Fully Meet Licensing, Search Functionality, and Other Requirements
USAspending.gov Provides Free and Unrestricted Spending Data
We found that USAspending.gov aligns with the key practices of providing free and unrestricted data and engaging with users. However, Treasury does not fully describe the data and two data elements required by law are not searchable. In addition, Treasury lacks a process to ensure all pages on the website are secure, consistent with federal requirements.
Spending data are open by default and sensitive information is protected. The Federal Funding Accountability and Transparency Act of 2006 (FFATA) requires the website displaying the data that agencies must provide to be accessible to the public at no cost. In response to requirements in FFATA, as amended in May 2014 by the Digital Accountability and Transparency Act of 2014 (DATA Act), OMB and Treasury developed standard definitions for data elements for agencies to report, and Treasury displays these open data on USAspending.gov. Agencies should not report classified or protected information, such as personally identifiable information (PII). However, they are required to aggregate some awards containing PII at the county or state level if they are unable to report spending at the individual level.
All data are available for free. Treasury has made all of the data on USAspending.gov available to users at no cost, as required by the DATA Act and FFATA. During the course of our work, however, we found that users could only download the complete database after registering for an account with the database host, Amazon Web Services, and that users would incur a charge when attempting to download the entire database. Treasury officials said they intended for the data to be available for free and were unaware that users were being charged to access the data. In response to our inquiries, in July 2018 Treasury resolved the issue and provided an option for users to download the entire database for free without creating an account.
Treasury Engages Users by Encouraging Feedback and Data Use
Treasury identifies data users and their needs through user research. Treasury researches users to understand their needs when working with USAspending data. Treasury has developed profiles for eight types of users ranging from data consumers like “Citizen” or “Journalist” (see figure 12) to budget analysts or chief financial officers. These profiles are part of Treasury’s user-centered design process in which officials told us they learn from users, make changes to the website, and test whether those changes make the website more useful and intuitive to users.
Treasury obtains and responds to user feedback. Treasury officials told us that they track user feedback, which informs improvements they make to the website. We previously found that Treasury has a variety of user feedback mechanisms, including a community forum, one-on-one interviews, and a “contact us” link that allows users to provide feedback by email. As of July 2017, Treasury officials said they had interviewed more than 130 users, such as citizens, funding recipients, and federal agency officials, regarding USAspending.gov website features. They have since conducted 20 additional interviews about the user experience and received feedback from another 130 users about the Data Lab, a related website that offers visual interpretations of the spending data. Treasury has also conducted “intercept” interviews where interviewers go to a location with large groups of people and request feedback about the website from random individuals. For example, figure 13 shows a Treasury contractor interviewing a visitor about an early version of USAspending.gov at the U.S. Capitol Visitor Center.
Treasury officials said they respond to user feedback about USAspending.gov on two websites. They respond directly to user comments on the USAspending.gov Community website, where users can share feedback and find answers to frequently asked questions. Treasury officials told us they also track users’ issues as “stories” on an open development platform called JIRA, which is their primary way of documenting website development decisions and tracking potential improvements to the website. For example, Treasury added new functionality to the Application Programming Interface (API) based on user feedback from agencies. Officials said this feature allowed some agencies to more efficiently manage their quarterly DATA Act submissions. Treasury announces updates to the API and other changes to the website via an email newsletter.
Treasury reaches out to potential users to encourage data use. Treasury educates the public about the use of the spending data on USAspending.gov and the Data Lab through how-to guides and outreach activities. For example, the website offers an “API Guide” for users seeking to utilize computer programs to request and receive the data, and the Data Lab features an “Analyst Guide” that answers questions about using the data.
Treasury officials told us that they have directly engaged with various audiences about USAspending.gov. For example, they have engaged with the Syracuse University Maxwell School of Government to map the use of federal funds in New York State. In April 2017, we observed Treasury’s participation in a hackathon where participants developed ways to use federal spending data, including using the spending data to evaluate block grant formulas and track the economic impact of stimulus money. Treasury officials said they have held information sessions with Congress, federal agencies, and nongovernment organizations.
USAspending.gov Data Are Detailed and in Standardized Formats, but Treasury Lacks Certain Security Processes
USAspending.gov provides users with detailed and disaggregated data. As shown in figure 14, an award summary page on the website displays information on specific awards, including the awarding agency, recipient, award amount, description, and location. These pages also include transaction histories so that spending can be tracked over time. As of October 2018, we found that USAspending.gov listed more than 53 million pages of prime awards representing more than $33 trillion in obligated funds between fiscal years 2008 and 2018.
Data are machine readable and can be downloaded in bulk and in selected subsets, but Treasury lacks a process to fully ensure security. As shown in figure 15, USAspending.gov provides six ways for users to download the data, including subsets of the data or the complete database. An API is also available, which enables machine-to-machine communication that allows real-time updates.
During the course of our review, we found that some of the data download web addresses did not point to a government domain and were unsecured. OMB guidance requires that federal web pages be hosted on a .gov domain and be encrypted by the secure HTTPS protocol. In response to our inquiries on this issue, Treasury updated USAspending.gov in October 2018 so that the web pages for the database download and agency submission files use the secure HTTPS protocol and are on a government domain. As a result, the users requesting this information from USAspending.gov now have better assurance of the integrity of the data requested, the privacy of their connection to USAspending.gov, and that the website they are using is a trusted government domain.
Treasury officials said they take steps to ensure they meet federal information security requirements, but had not noticed that the web pages were unsecured or on a nongovernment domain. According to Treasury officials, the agency has a process for the team developing a website to vet whether proposed pages are secured and hosted properly, but they acknowledged unintended gaps in how the process was applied in this case, which caused some pages to be omitted. Standards for Internal Control in the Federal Government states that management should design control activities to achieve objectives and respond to risks. Control activities are the policies, procedures, techniques, and mechanisms that enforce management’s directives to achieve the entity’s objectives and address related risks.
Until the gaps in Treasury’s information security process are addressed, the agency does not have assurance that any new pages that may be added to USAspending.gov will conform to federal information security requirements. In response to our inquiries, Treasury provided documentation showing that the agency is in the process of addressing the issue to prevent future unintended gaps. The agency has taken initial steps to revise its process to ensure that all pages on USAspending.gov are secured and hosted properly. We will continue to monitor Treasury’s efforts to develop and implement this new process.
Downloads are available in standard, non-proprietary formats. Downloads of search results, agency files, and subsets of the USAspending.gov database are available in file formats that can be opened using common office software, including CSV and XML files.
Spending data are potentially interoperable with other datasets. The data in the USAspending.gov database are organized according to a government-wide standard which can potentially support interoperability with related government datasets. The DATA Act Information Model Schema (DAIMS) provides standardized definitions for federal spending information, including 57 data standards that federal agencies are required to report for DATA Act implementation. These standards come with technical specifications describing the format and structure of each data element, which are intended to facilitate consistent data reporting across the federal government, and allow for interoperability between agencies’ data submissions.
According to Treasury’s DAIMS Architecture document, DAIMS could eventually support interoperability between USAspending.gov and related nonfederal datasets such as state, local, and international spending data. For example, state governments could make their data interoperable with the federal spending data on USAspending.gov by developing their own data standards and definitions aligned with DAIMS as appropriate. In addition, the DAIMS Architecture document specifies that future DAIMS content could include federal receipt and financing balances with accounts and sources, as well as performance measures and outcomes linked to federal grants, awards, or other financial assistance.
Treasury Improved Descriptions of Data, but USAspending.gov Lacks Metadata and Complete Licensing Information
Website still does not completely disclose data quality issues. Treasury has improved the disclosure of data quality issues and limitations, but other issues have not yet been described to users. In November 2017, we found that the website did not sufficiently disclose known limitations affecting data quality. We recommended that Treasury disclose known data quality issues and limitations. Treasury agreed with the recommendation and took the following steps to disclose limitations:
By May 2018, Treasury had added a “Learn More” box to the website with information about the data, including an explanation that the Department of Defense reports its data later than other agencies.
In June 2018, Treasury added information on unreported spending to the Spending Explorer tool that visualizes federal spending, clarifying that information reported on the website does not capture the totality of federal spending. Treasury explains to users that data might not be reported when an agency reports incomplete data, has a submission deadline extension, is not required to submit certain data elements, and for accounts that are not reported to Treasury.
While the steps Treasury has taken are positive, they do not fully address our recommendation. This is because one purpose of the DATA Act is to allow users to track federal spending more effectively by linking specific awards to financial budgetary information. However, we found that award data do not appear in the Spending Explorer for combinations of certain agencies and program activities. For example, as figure 16 shows, there are "no associated awards" for the program activity "Vaccines for Children" within the Department of Health and Human Services account for Medicaid grants to states. However, the account page for this program elsewhere on USAspending.gov shows approximately $3.6 billion in obligations and various associated awards for the first three quarters of fiscal year 2018. There is no context for a user to understand whether this information is required for this federal account, missing, or searchable elsewhere on the website.
Treasury officials informed us of a number of data limitations that could cause spending data and award data to be disconnected in the Spending Explorer, but these issues are not disclosed on the website. According to Treasury officials, agencies might not currently report certain data fields as some fields are optional, there are inconsistencies between several agency data systems, and some agencies have not been able to link financial and award data. As a result, the Spending Explorer does not consistently provide a clear and complete presentation of federal spending, and because Treasury does not disclose these limitations, it could limit the ability of taxpayers and policy makers to fully track federal spending with this tool.
More broadly, we have raised concerns that USAspending.gov does not sufficiently disclose other, broader government-wide data quality issues. For example, we found in November 2017 that only 0 to 1 percent of awards were fully consistent with agency records. While the consistency of individual data elements varied, our prior report found inconsistencies with agency records in at least 41 percent of the data for Award Description, Current Total Value of Award, and Primary Place of Performance Address from the second quarter of fiscal year 2017.
Website discloses data sources and timeliness. The "About" page and "Frequently Asked Questions" describe data sources, data quality, and legal requirements. The "About" page also includes a diagram showing how the data flow from agencies to the USAspending.gov database, as well as the frequency with which the data are updated, which is a useful way to visualize whether each type of award data is updated daily, bimonthly, or quarterly.
Website labels some data, but lacks structured metadata. Treasury labels some of the data on USAspending.gov in tables and data visualizations, and describes it in narrative form. The website also includes data dictionaries that provide definitions for the data elements. However, the website lacks structured, machine-readable metadata. OMB guidance requires agencies to use metadata to describe their datasets so that all users can understand and process open data. Agencies must consult with the best practices from Project Open Data, OMB’s online repository of tools and schema, to help agencies meet the requirements of its open data policy. According to Project Open Data, metadata are structured information that describe, explain, locate, or otherwise make it easier to retrieve, use, or manage datasets like that displayed on USAspending.gov. This guidance also indicates that making metadata machine readable greatly increases their utility.
Treasury officials said the types of information found in metadata are already available in a number of separate documents on Treasury’s Fiscal Service web page. Treasury officials told us that they decided not to provide structured metadata on USAspending.gov because it is more efficient to provide external links to other websites. Further, Treasury officials asserted that providing metadata on those websites is sufficient to comply with OMB guidance.
However, the information found on these various websites does not align with best practices outlined in Project Open Data, or the key action to clearly label data and provide accompanying metadata, because it is not provided in a single place on USAspending.gov as structured metadata in a machine-readable format. Without easy access to information that fully describes the data, it may be difficult or time consuming for users of USAspending.gov to find the information available on other websites, and determine whether or how to use the data for their purposes.
Website lacks complete licensing information. While the website describes restrictions on the use of proprietary contract data from Dun & Bradstreet Inc.’s Data Universal Numbering System (DUNS), it does not include general licensing information for the rest of the data. The website includes a link to a notice specifying the “Limitation on Permissible Use of Dun & Bradstreet, Inc. Data.” According to Treasury officials, most data on USAspending.gov are in the public domain, but we found that the website does not clearly indicate which data are openly available to use without restrictions.
OMB M-13-13 specifies that federal agencies “must apply open licenses, in consultation with the best practices found in Project Open Data, to information as it is collected or created so that if data are made public there are no restrictions on copying, publishing, distributing, transmitting, adapting, or otherwise using the information for non-commercial or for commercial purposes.” According to OMB staff, agencies should include licenses in metadata so that this information is machine readable. If data access is limited, this should also be prominently featured in the metadata. In addition, Project Open Data specifies that licensing information should be provided in metadata.
Treasury officials said that the agency is evaluating options and approaches for including open data licensing information on the website, consistent with OMB M-13-13. In addition, Treasury officials said they had only received one question from users about licensing.
However, not displaying licensing information for the majority of data elements on the website is not consistent with the key action to publish data under an open license and communicate licensing information to users. Without licensing information for all of the data, users will likely be unable to determine what license, if any, applies to USAspending.gov, and it will be unclear to the public whether there are any restrictions to reusing data that they can download from the website.
USAspending.gov Makes Data Easy to Discover, but Does Not Fully Meet Search Requirements
Website includes a user interface to assist navigation. USAspending.gov’s top menu gives users various ways to explore, search, download, and understand the most important information. The menu links to the Spending Explorer, Award Search, Profiles, Download Center, and Glossary. There is also “featured content” on the home page guiding users to the Data Lab, and other new features such as a download option for Federal Account data and recipient profile pages for any entity that has received federal money in the form of contracts, grants, loans, or other financial assistance.
Interactive visualizations enable exploration. Search results can be visualized by prime award or subaward aggregated in a table, in a chart showing awards over time, in a map showing the geographic distribution of awards (see figure 17 for an example of social security insurance results mapped by congressional district), or in a bar chart showing the top 10 awards by category. The visualizations show how spending has increased over time, the regional concentration of spending, and a list of the top recipients.
We found that the Spending Explorer provides a simple, graphical interface that allows users to navigate spending data by budget function, agency, and object class. It gives users the option to drill down from these three high-level categories to specific program activities, federal accounts, recipients, or awards. It displays the total amount obligated for the selected category, and a breakdown of the amounts in dollars and as a percentage of the total.
The Data Lab is a separate website linked to USAspending.gov that offers users visual interpretations of the spending data. Treasury officials said the “Contract Explorer Sunburst” is a popular Data Lab visualization. As shown in figure 18, it provides users an interactive overview of about $500 billion in federal contract data organized as a set of concentric circles starting from the funding agency (inner ring) to the recipients (outer ring). Treasury officials noted that analyses and visualizations in the Data Lab are updated with varying frequency because it can be a challenge to continually update some of the visualizations.
The website includes a glossary that provides plain language definitions of terms that describe the spending data. To help users understand the data, USAspending.gov provides a "Glossary" sidebar, available on every page of the website, that offers both "plain language" and official definitions of the financial terms used, as shown in figure 19. According to the key practices we identified, using commonly understood terms rather than technical jargon can help users understand the information provided.
A variety of search tools are available, but program source and city are not searchable. We found that the website features a variety of search tools to help users find and interpret the data. Users can search the data using generic keyword search and advanced search filters. These features allow users to explore and quickly obtain large volumes of award results. For example, we found that searching by funding agency returns all spending by that particular federal agency and by award.
However, we tested the search functionality of the website and found that two data elements required to be searchable by FFATA, as amended by the DATA Act, were not: (1) program source (Treasury Account Symbol (TAS)) and (2) city. Our search testing of a nongeneralizable, random sample of awards for data elements required by FFATA successfully found most of the data elements, but we were unable to search for program source (TAS) or city. TAS and city data can be downloaded and are displayed on award web pages, but we were not able to search for them using either the advanced or keyword search pages. Treasury officials said they did not include functionality to search for these two required data elements on the new website because users searching by these data elements on the Beta version of the website had received confusing results. This is due in part to the fact that agency submissions with these data elements used different standards before and after the DATA Act. Instead, according to our review of the website, users can access TAS and city information using the website’s navigation features, which officials said meets the spirit of the FFATA requirement.
However, simply displaying TAS or city information only on award or federal account pages does not meet the FFATA requirement that users be able to search for this information. Users currently have to click on a specific award or federal account page, and scroll through the web page to find the relevant section that shows city or TAS information. If federal agencies and Congress are not able to search for TAS, they cannot easily connect detailed information on financial transactions to federal accounts for management or oversight purposes. In addition, users looking for geographic information related to recipients or federal programs cannot easily search USAspending.gov by city.
Government data catalogues and repositories link to USAspending.gov. Treasury facilitates discovery of the DATA Act data by linking to USAspending.gov from centralized data repositories and catalogues. Information and links to USAspending.gov can be found on DATA.gov, which is a data catalogue for a variety of U.S. government datasets.
Treasury maintains a web page on GitHub, a public online collaboration website, designed to share information about its process in meeting the requirements of the DATA Act, including information and links to USAspending.gov. Associated pages on this GitHub site serve as a data repository for the computer code behind the central data submission platform for the DATA Act, called the DATA Act Broker, as well as the API for USAspending.gov and the USAspending.gov website itself.
Conclusions
USAspending.gov is a major open government data program with the potential to be a model for transparently reporting government data—if Treasury takes additional steps to further align it with the five key practices and associated key actions for open data in addition to DATA Act requirements. USAspending.gov has already followed several key actions such as providing the data on the website for free, engaging the public online and in person, providing detailed and disaggregated data for download, and making interactive tools so users can interpret and visualize the data. Treasury has also made progress in disclosing limitations of the data, although it has not fully addressed our prior recommendation to do more to make users fully aware of issues that affect its quality.
However, Treasury has not fully aligned USAspending.gov with some key practices or federal website standards, and has not fully implemented the search functionality required by FFATA, as amended by the DATA Act. As a result, users may not be able to find the information they need, and may not have confidence in the integrity of the data. Treasury updated USAspending.gov in October 2018 so that the web pages for the database download and agency submission files available at that time used the secure HTTPS protocol and a government domain. However, without an effective control process in place, Treasury does not have assurance that any new pages that may be added to USAspending.gov will conform to certain federal information security requirements. Furthermore, without easy access to structured metadata, it may be difficult or time consuming for users of USAspending.gov to find the information they need to determine whether or how to use the data. Similarly, the lack of an explicit open license might discourage some users from using the data to develop innovative commercial products. Users are also not able to easily search award information by program source or city, as required by FFATA, which could limit their ability to find and use these data to inform future decision making.
Recommendations for Executive Action
We are making a total of five recommendations to Treasury. Specifically:
The Secretary of the Treasury should establish a process to ensure all pages on the USAspending.gov website use the secure HTTPS protocol, consistent with OMB requirements. (Recommendation 1)
The Secretary of the Treasury should establish a process to ensure all content on USAspending.gov is available from a government domain, consistent with OMB requirements. (Recommendation 2)
The Secretary of the Treasury should fully comply with OMB’s requirements by providing metadata in a single location that are easy to find on the USAspending.gov website. (Recommendation 3)
The Secretary of the Treasury should fully comply with OMB’s requirements by communicating licensing information on USAspending.gov. (Recommendation 4)
The Secretary of the Treasury should ensure that users can easily search for awards by city and program source (TAS), consistent with FFATA requirements. (Recommendation 5)
Agency Comments, Third-Party Views, and Our Evaluation
We provided a draft of this report to the Secretary of the Treasury, the Director of OMB, and the Administrator of GSA for review and comment. Treasury provided written responses, which are summarized below and reproduced in appendix II. Treasury and OMB also provided technical comments, which we incorporated as appropriate. GSA responded that the agency had no comments on the report.
In its written response, Treasury highlighted areas where USAspending.gov aligned with the key practices that we identified for transparently reporting government data, such as engaging users and providing the data in useful formats. Treasury agreed with our recommendations. Treasury stated that the agency has already taken steps to address our first two recommendations, to establish processes to ensure that all pages on USAspending.gov use the secure HTTPS protocol and to ensure that all content on the website is available from a government domain. Treasury provided us with documentation of a revised process that is intended to address these issues. We revised the report to acknowledge that Treasury has taken these steps. We will continue to monitor Treasury’s efforts to develop and implement this new process and update the status of our recommendations accordingly.
We also provided excerpts of the draft report to Connecticut; Kansas City, Missouri; Los Angeles, California; Montgomery County, Maryland; New York City, New York; and Ohio. Los Angeles, Montgomery County, New York City, and Ohio provided technical comments, which we incorporated as appropriate. Connecticut and Kansas City officials responded that they had no comments.
We are sending copies of this report to the Secretary of the Treasury, the Director of OMB, and the Administrator of the General Services Administration, as well as interested congressional committees and other interested parties. This report will be available at no charge on our website at https://www.gao.gov.
If you or your staff have any questions about this report, please contact Triana McNeil at 202-512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of our report. Key contributors to this report are listed in appendix III.
Appendix I: Objectives, Scope, and Methodology
The Digital Accountability and Transparency Act of 2014 (DATA Act) includes a provision for us to review implementation of the act. Over the last 4 years, we have issued 13 reports assessing various aspects of DATA Act implementation. This report builds on our body of work on the DATA Act and (1) identifies key practices for transparently reporting government data on a centralized website, and (2) evaluates the extent to which the new USAspending.gov is consistent with these key practices, as well as existing standards for federal websites.
Identifying Key Practices
To identify key practices for transparently reporting open government data on a centralized website, we conducted a literature review and interviewed experts on open data and representatives of good governance groups. We also identified illustrative examples by interviewing open data practitioners from state and local governments.
Literature review. To conduct the literature review, we first identified relevant publications using a number of bibliographic databases, including ProQuest, the Organisation for Economic Co-Operation and Development’s (OECD) iLibrary, the National Technical Information Service, and the Public Affairs Information Service. We reviewed articles that focused on open data programs and practices in OECD countries, including scholarly peer-reviewed articles, working papers, conference papers, and reports by policy research organizations, nonprofit organizations, and associations. We conducted our search in March 2017 and subsequently added relevant articles identified during our background research. To systematically review these articles, one analyst reviewed each article to identify relevant themes. A second analyst then reviewed the documentation to verify categorization decisions. Then, both analysts met to resolve any discrepancies. We evaluated and synthesized the categorized information to identify commonly-reported key practices for transparently reporting open government data.
Interviews with experts. We selected open data and good governance experts based on recommendations made by other experts, frequent citations in others’ work, and recent contributions to the field. We also selected experts that represent a variety of sectors and backgrounds (such as government, academia, and nonprofit organizations). We obtained the views of the following individuals and organizations:
Andrew Stott, former United Kingdom Director for Transparency and Digital Engagement,
Center for Open Data Enterprise,
Code for America,
Dr. Anneke Zuiderwijk-van Eijk, Delft University of Technology,
General Services Administration (GSA),
Global Initiative for Fiscal Transparency,
Governance Laboratory of New York University,
IBM Center for the Business of Government,
Johns Hopkins University Center for Government Excellence,
Project on Government Oversight,
Results for America,
U.S. Public Interest Research Group,
What Works Cities, and
World Bank.
We first had open-ended conversations with experts to obtain their views on what key practices exist for transparently reporting open government data. After developing an initial list of key practices, we then conducted a second round of interviews with experts to finalize the list. We shared a draft of the key practices with the Department of the Treasury (Treasury), the Office of Management and Budget (OMB), and GSA.
Illustrative examples. To obtain illustrative examples showing how those key practices can be implemented, we selected open data practitioners from six state and local governments:
Connecticut;
Kansas City, Missouri;
Los Angeles, California;
Montgomery County, Maryland;
New York City, New York; and
Ohio.
We selected these practitioners because they were identified in our literature search and by the experts we spoke with as having well-regarded open data websites. We also selected practitioners whose websites include both a general open data portal and visualizations showing budget or spending data, and who represent different locations and levels of government, including cities, counties, and states. We reviewed these practitioners' open data websites and related documentation, and interviewed cognizant state and local government officials.
Evaluating USAspending.gov
To assess the extent to which USAspending.gov is consistent with the key practices and selected standards for federal websites, we reviewed the website, reviewed agency documents, observed Treasury's participation in a hackathon, and interviewed OMB staff and Treasury officials. Specifically, we analyzed the USAspending.gov website to determine how it aligned with the key practices and the extent to which data elements were searchable as required by the Federal Funding Accountability and Transparency Act of 2006 (FFATA). We also assessed USAspending.gov against criteria for federal websites and open data programs, including OMB M-17-06, Policies for Federal Agency Public Websites and Digital Services, and OMB M-13-13, Open Data Policy—Managing Information as an Asset.
To evaluate the extent to which the USAspending.gov search functionality complies with FFATA requirements, as amended by the DATA Act, we randomly selected a nongeneralizable sample of 30 awards (consisting of 15 contracts and 15 financial assistance awards) downloaded from USAspending.gov for fiscal year 2017. We identified the required FFATA data elements from these awards, searched for these elements on USAspending.gov in July 2018, recorded whether each search successfully resulted in a matching award, and observed any other issues that occurred during testing. Finally, we interviewed Treasury officials to corroborate our observations on search functionality and other aspects of the website, and discussed any planned improvements to the website. We also interviewed GSA officials and OMB staff to clarify policies and procedures for federal websites.
We conducted this performance audit from February 2017 to December 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Comments from the Department of the Treasury
Appendix III: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Thomas J. McCabe, Assistant Director, and Laurel Plume, Analyst-in-Charge, supervised the development of this report. Colenn Berracasa, Samuel Gaffigan, and Parke Nicholson made major contributions to this report. Also contributing to this report in their areas of expertise were Michael Bechetti, Steven Campbell, Mark Canter, Jenny Chanley, Jacqueline Chapin, Peter Del Toro, Nancy Donovan, Kathleen Drennan, Sarah Gilliland, Sarah Kaczmarek, Michael LaForge, Paula M. Rascona, Andrew J. Stephens, and James Sweetman, Jr.
Related GAO Products
DATA Act: Reported Quality of Agencies’ Spending Data Reviewed by OIGs Varied Because of Government-wide and Agency Issues. GAO-18-546. Washington, D.C.: July 23, 2018.
DATA Act: OMB, Treasury, and Agencies Need to Improve Completeness and Accuracy of Spending Data and Disclose Limitations. GAO-18-138. Washington, D.C.: November 8, 2017.
Open Innovation: Executive Branch Developed Resources to Support Implementation, but Guidance Could Better Reflect Leading Practices. GAO-17-507. Washington, D.C.: June 8, 2017.
DATA Act: As Reporting Deadline Nears, Challenges Remain That Will Affect Data Quality. GAO-17-496. Washington, D.C.: April 28, 2017.
DATA Act: Office of Inspector General Reports Help Identify Agencies’ Implementation Challenges. GAO-17-460. Washington, D.C.: April 26, 2017.
DATA Act: Implementation Progresses but Challenges Remain. GAO-17-282T. Washington, D.C.: December 8, 2016.
DATA Act: OMB and Treasury Have Issued Additional Guidance and Have Improved Pilot Design but Implementation Challenges Remain. GAO-17-156. Washington, D.C.: December 8, 2016.
Open Innovation: Practices to Engage Citizens and Effectively Implement Federal Initiatives. GAO-17-14. Washington, D.C.: October 13, 2016.
DATA Act: Initial Observations on Technical Implementation. GAO-16-824R. Washington, D.C.: August 3, 2016.
DATA Act: Improvements Needed in Reviewing Agency Implementation Plans and Monitoring Progress. GAO-16-698. Washington, D.C.: July 29, 2016.
DATA Act: Section 5 Pilot Design Issues Need to Be Addressed to Meet Goal of Reducing Recipient Reporting Burden. GAO-16-438. Washington, D.C.: April 19, 2016.
DATA Act: Progress Made but Significant Challenges Must Be Addressed to Ensure Full and Effective Implementation. GAO-16-556T. Washington, D.C.: April 19, 2016.
DATA Act: Data Standards Established, but More Complete and Timely Guidance Is Needed to Ensure Effective Implementation. GAO-16-261. Washington, D.C.: January 29, 2016.
DATA Act: Progress Made in Initial Implementation but Challenges Must be Addressed as Efforts Proceed. GAO-15-752T. Washington, D.C.: July 29, 2015.
Federal Data Transparency: Effective Implementation of the DATA Act Would Help Address Government-wide Management Challenges and Improve Oversight. GAO-15-241T. Washington, D.C.: December 3, 2014.
Government Efficiency and Effectiveness: Inconsistent Definitions and Information Limit the Usefulness of Federal Program Inventories. GAO-15-83. Washington, D.C.: October 31, 2014.
Data Transparency: Oversight Needed to Address Underreporting and Inconsistencies on Federal Award Website. GAO-14-476. Washington, D.C.: June 30, 2014.
Federal Data Transparency: Opportunities Remain to Incorporate Lessons Learned as Availability of Spending Data Increases. GAO-13-758. Washington, D.C.: September 12, 2013.
Government Transparency: Efforts to Improve Information on Federal Spending. GAO-12-913T. Washington, D.C.: July 18, 2012.
Electronic Government: Implementation of the Federal Funding Accountability and Transparency Act of 2006. GAO-10-365. Washington, D.C.: March 12, 2010.

Why GAO Did This Study
Open data can foster accountability and public trust by providing citizens with information on government activities and their outcomes. It can also promote private sector innovation. The DATA Act requires that the federal government collect and present open data on roughly $4 trillion in annual federal spending. The DATA Act also includes a provision requiring GAO to review its implementation.
This report (1) identifies key practices for transparently reporting government data; and (2) evaluates the extent to which USAspending.gov is consistent with those key practices and other requirements. GAO developed the key practices by systematically evaluating and synthesizing information from literature on open data, as well as interviews with open data experts and good governance groups. GAO used these key practices as well as existing federal website standards and applicable laws to evaluate USAspending.gov.
What GAO Found
GAO identified five key practices for transparently reporting government data, as well as actions to implement each practice. These key practices and actions can assist managers of open government data programs in the transparent presentation of their data. Open data are information that can be freely used, modified, or shared by anyone for any purpose.
USAspending.gov aligns with several key practices. However, the Department of the Treasury (Treasury) has not fully aligned the website with all of the key practices, the requirements of the Federal Funding Accountability and Transparency Act of 2006 (FFATA), and Office of Management and Budget (OMB) guidance (see table). FFATA, as amended by the Digital Accountability and Transparency Act of 2014 (DATA Act), directed Treasury to develop and manage USAspending.gov to provide detailed information on federal spending.
What GAO Recommends
GAO is making five recommendations, including that Treasury (1) establish a process to ensure that additions to USAspending.gov meet security requirements, (2) provide structured metadata and licensing information on the website, and (3) ensure that users can search for awards by city and program source as required by law. Treasury agreed with GAO's recommendations.
Background
Categories of Document Services
Document services at DOD are generally encompassed by three broad categories, shown in figure 1.
Printing and reproduction includes the high-speed, high-volume reproduction of printed documents, as well as the distribution of those products. Documents are printed internally by DOD components, which include the military services, or printing is procured through an organization such as DLA Document Services, the Government Publishing Office (GPO), or a commercial vendor.

Device procurement covers the acquisition of all office-level and production-level equipment. Office-level equipment includes printers; copiers; multi-function devices (MFDs), which perform multiple functions—printing, copying, scanning, and faxing—in one device; and all other devices that produce documents on-site and in low volume. Production-level equipment can include offset printers, digital presses, and other devices that are capable of high-speed, high-volume production of documents.

Electronic content management is the digitization of printed documents and the creation and management of electronic content management systems, such as databases and automation services.
Roles and Responsibilities for Document Services
The Under Secretary of Defense for Acquisition and Sustainment is the principal staff assistant and advisor to the Secretary of Defense on document services policies and programs and provides policy guidance regarding the operation and management of document services. DOD’s Instruction on document services also designates DLA Document Services as DOD’s single manager for printing and high-speed, high-volume duplication. This includes both the operation of DOD’s in-house print facilities and the procurement of such services from outside DOD. It also establishes DLA Document Services as the preferred provider of document conversion and automation services within DOD. DOD is in the process of revising its instruction on document services and is considering changes to DLA’s single manager role. DLA Document Services’ customer service network comprises a headquarters located in New Cumberland, Pennsylvania, and 132 production facilities worldwide.
Each military service also provides internally some document services of the type assigned to DLA. Service-level implementing guidance governs how each military service will provide document service-related activities to its components, commands, and organizations, such as through the Army Publishing Directorate, the Navy’s Chief Information Officer, and the Marine Corps Publishing and Logistics Systems Management Section. The Air Force’s major commands operate their own printing operations, according to a service official.
Funding of Document Services
DLA funds document services through the Defense-wide Working Capital Fund, which covers DLA’s costs for purchasing various commodities and providing services. DOD components and other customers, such as other federal agencies, reimburse the Defense-wide Working Capital Fund through the purchase of these commodities and services. In obtaining document services from DLA, DOD components—including the military services—use annual appropriations and their own working capital funds to reimburse the Defense-wide Working Capital Fund. DLA Document Services’ primary customers, by sales, are shown in table 1. DOD components can also fund document services outside of DLA Document Services with annual appropriations.
Efforts to Increase Efficiencies in Providing Document Services
Beginning in 2011, Congress, the federal government, and DOD initiated efforts to increase efficiencies in various areas involving document services. For example, Executive Order 13589 directed agencies to pursue steps to reduce administrative costs across the federal government by setting reduction goals for certain areas, such as printing and employee use of IT devices. According to DOD, it set—and achieved—a goal of a 20 percent reduction in fiscal year 2013 spending in these areas. Following this effort, in 2015, the Senate Committee on Appropriations recommended that DOD work with the Office of Management and Budget to reduce costs for printing and reproduction by 34 percent. DOD issued a report in December 2016 that identified the reductions it would make to achieve this goal. The plan focused on two main areas: emphasizing electronic content management over a reliance on printed materials and reducing the number of print devices. Starting in fiscal year 2015, DLA Document Services undertook a separate but complementary effort to further increase efficiencies and better accomplish its mission of providing document services to DOD and the military services. Figure 2 provides a time line of efficiency initiatives related to DOD’s document services. We discuss the status of these efforts later in this report.
DOD Has Made Progress toward Achieving Efficiencies in its Document Services, but Opportunities May Exist for Further Gains
DOD has taken steps toward achieving efficiencies in its document services, including implementing a transformation plan for DLA Document Services, taking steps to reduce the cost and number of office print devices, and increasing its use of electronic content management.
However, we identified four areas where further gains may be possible: better managing fragmentation in printing and reproduction services, reducing overlap in procuring print devices, meeting goals to reduce the number of print devices, and consolidating locations that provide mission specialty printing.
DOD Has Taken Steps toward Achieving Efficiencies
Implementing DLA Document Services’ Transformation Plan
In fiscal year 2015, DLA Document Services developed and, starting in fiscal year 2017, began implementing a transformation plan to further increase efficiencies and better accomplish its mission of providing document services to DOD and the military services. The objective of this transformation plan is to transition DOD from on-site printing to digital, online services by transforming the way customers, the workforce, and in-house facilities operate. Based on the plan, DLA Document Services is closing or consolidating 74 of its 112 brick-and-mortar facilities in the continental United States over the course of fiscal years 2018 and 2019, bringing its footprint to 38 facilities. An internal analysis of the transformation plan, conducted by DLA, estimates annual savings of 20 percent compared to DLA Document Services’ fiscal year 2017 operating costs once the plan is fully implemented in fiscal year 2019. Figure 3 shows DLA Document Services’ facility footprint prior to the implementation of its transformation plan and the locations it intends to retain following completion of the plan in fiscal year 2019.
The transformation plan also calls for DLA Document Services to adjust the size and composition of its workforce by the plan’s completion in fiscal year 2019. For example, DLA Document Services intends to reduce its total number of full-time equivalent positions from about 600 to about 400, mainly through Voluntary Early Retirement Agreements and Voluntary Separation Incentive Payments. According to officials, DLA Document Services is also in the process of converting existing positions and hiring staff as customer relations specialists at each of the consolidated facilities. These officials noted that these positions are intended to help customers learn about and access the full range of services offered by DLA Document Services, including printing and reproduction services, office print devices, and electronic content management services. The goal of establishing these positions, officials stated, is to help facilitate the increased use of technology to meet customers’ needs, because DLA Document Services intends to transition customers to using an online portal to fulfill their printing needs. According to DLA, it is hiring many of the customer relations specialists from current DLA Document Services locations, and the planned reduction in its total full-time equivalent positions is a net reduction that accounts for the hiring of, and conversion of existing positions to, these customer relations specialists.
DLA Document Services also plans to use and expand its existing public and private sector partnerships to support an increased emphasis on online services as it implements its transformation plan. For example, DLA Document Services currently works in partnership with GPO’s GPOExpress, an online portal for fulfilling printing and reproduction services in cooperation with FedEx Office. For those customer orders that DLA Document Services is unable to fulfill in-house, whether due to workload or lack of capability, GPO and GPOExpress meet these needs. According to a GPO official, GPOExpress will also serve customers located in areas where DLA Document Services has closed or consolidated 74 of its 112 U.S. facilities.
We found that DLA Document Services’ transformation plan generally reflects leading practices for initiatives to consolidate physical infrastructure or management functions. For example, DLA Document Services identified goals for its transformation plan, ensured top leadership engagement, dedicated an implementation team, and established metrics that it is using to track progress toward the plan’s goals. As of June 2018, DLA Document Services is ahead of its goals for overall personnel reductions and for hiring customer service representatives but is behind on its goal for closing facilities, as shown in table 2.
According to DLA Document Services officials, delays in reducing facilities have been due to a variety of factors, including earlier delays in hiring customer service representatives, equipment removal, and administrative delays at installations. There have also been delays as DLA Document Services has sought to minimize the effect of the consolidations on affected employees by offering buyout packages or transfers. DLA Document Services officials told us they anticipate that their efforts to consolidate facilities and reduce the overall number of employees will begin to achieve savings by fiscal year 2020.
Reducing the Cost and Number of Print Devices
DOD, including the military services, has also taken steps to reduce the cost and number of office-level print devices, including identifying goals for reducing the number of print devices and plans for each military service to establish a mandatory source (e.g., one particular contract or organization) for obtaining print devices. The Army and Air Force have each established their own service-wide contracts for obtaining print devices and have mandated their use, while the Department of the Navy has mandated that the Navy and Marine Corps use DLA Document Services to obtain these devices. Military service officials told us that consolidating purchases with a single service-wide source reduces the cost of these devices by taking advantage of economies of scale, because vendors can offer better pricing for larger numbers of customer orders. Our previous work on strategic sourcing—a process that moves agencies away from numerous individual purchases to an aggregate approach—shows that such practices can allow agencies to better manage acquisitions and reduce costs.
In addition, DOD and the military services have identified reducing the number of print devices as an opportunity for significant savings and have established guidance on reducing the number of these devices. DOD’s Chief Information Officer (CIO) issued a memorandum in 2012 on, among other things, reducing the number of print devices to one per office space of 12 or fewer users and assessing the ratio of printers to employees in larger spaces. In response to this memorandum and to Army Audit Agency findings of excessive user-to-printer ratios, the Secretary of the Army issued guidance in fiscal year 2013, requiring all Army commands, organizations, and activities to assess print capacity and plan for reductions, if necessary, based on the results of those assessments, which the Army last completed in fiscal year 2014. The Department of the Navy, in adopting DLA Document Services as the exclusive source for acquiring and sustaining print devices for the Navy and Marine Corps, also directed Department of the Navy officials to work with DLA Document Services to conduct assessments and develop a phased execution plan regarding the number and type of print devices Navy and Marine Corps organizations require. DLA began conducting these assessments for the Navy and Marine Corps in fiscal year 2014. In conducting these assessments, DLA Document Services reviews the inventory, cost, and use of output devices within an organization and then conducts an analysis that results in recommendations. According to DLA Document Services, its recommendations are designed to optimize an organization’s equipment to meet the organization’s needs, while reducing cost by shifting from single-function, or standalone devices, to shared, multifunction devices.
Increasing the Use of Electronic Content Management
Led by DLA Document Services, DOD has also made greater use of electronic content management, with the objective of reducing the volume and cost of printed materials. DLA Document Services is using a number of electronic content management systems, including its Document Automation and Content Services, and has deployed those systems for a number of DOD customers, such as DLA Distribution and U.S. Transportation Command. According to DLA Document Services officials, because Document Automation and Content Services functions as one large system with separate libraries for individual customers, and costs for the system are shared, increasing adoption of the system will reduce costs for each organization using the system.
Opportunities May Exist to Achieve Further Efficiency Gains
DOD’s document services initiatives have gained efficiencies, but we identified four areas where further gains may be possible, including (1) managing fragmentation in printing and reproduction services, (2) reducing overlap in procuring print devices, (3) meeting goals to reduce the number of print devices, and (4) consolidating locations that provide mission specialty printing.
Managing Fragmentation in Printing and Reproduction Services
Our review found that DOD components, including the military services, use multiple approaches to obtain printing and reproduction services. These approaches include (1) using DLA Document Services to obtain printing and reproduction services, which, in turn, can outsource the work to GPO; (2) obtaining these services directly from GPO and its network of private sector vendors without first involving DLA Document Services; and (3) providing these services at in-house print locations, as shown in figure 4.
For example, according to DLA Document Services officials, the Army Publishing Directorate, which is responsible for obtaining print services for the Department of the Army and local commands in the Washington, D.C. region, has been given authority by DLA Document Services to obtain printing and reproduction services directly from GPO under a contract that DLA Document Services established for that purpose. In contrast, the Army Marketing and Research Group (AMRG), which is responsible for developing and distributing printed materials for recruitment, obtains services directly from GPO without the involvement of DLA Document Services. Finally, some DOD components, such as the Navy, Marine Corps, and National Guard Bureau, also operate their own in-house print facilities.
In our interviews with military service officials, they stated that they obtained services outside of DLA Document Services because of concerns regarding the cost, quality, and timeliness of its work, including inefficiencies that can result from using DLA Document Services to obtain printing services that are ultimately outsourced to GPO. For example, an analysis by the Army Publishing Directorate found that ordering directly through GPO results in savings of 35 percent, compared to fulfilling the same orders in house through DLA Document Services. In addition, headquarters officials with the Army and Navy stated that there have been significant delays in obtaining services through DLA Document Services, including cases where GPO ultimately fulfilled the orders. Navy officials also said that there were issues with the quality of DLA Document Services’ work, including orders they had to return repeatedly because of quality issues. Further, Army officials—as well as DLA Document Services—acknowledged that certain print jobs, including some bulk printing or magazine- and advertising-quality printing, are beyond DLA Document Services’ capabilities to provide in house.
According to DLA Document Services officials, DLA Document Services offers value as a single manager for printing and reproduction services, including when GPO fulfills printing and reproduction orders. For example, DLA Document Services may be able to identify different options that allow customers to reduce costs, such as different contract options that GPO may not identify. Officials also said that DLA provides administrative support, such as centralized billing and record keeping, that the military services would have to replicate in their absence. These officials also stated that they were unaware of any persistent problems with the quality or timeliness of DLA Document Services’ work, and that they work with customers to resolve such issues when they arise.
As noted above, DOD is in the process of revising DOD Instruction 5330.03, and a draft of the revision continues to assign DLA as the single manager for printing and reproduction services within DOD. However, despite the concerns expressed by some military service officials, DOD has not assessed the extent to which DLA Document Services is fulfilling its duties in accordance with DOD Instruction 5330.03 when considering any revisions to the instruction. Specifically, DOD has not assessed whether the products and services DLA Document Services provides are based on “best value,” as determined by quality, price, and delivery time, in accordance with the instruction.
According to both DLA Document Services officials and the official at the office of the Under Secretary of Defense for Acquisition and Sustainment who is responsible for document services policy, the office of Acquisition and Sustainment has had minimal involvement in ensuring that DLA Document Services is fulfilling its duties in accordance with the instruction. For example, DOD’s last formal report on defense agencies and DOD field activities, including DLA Document Services, was completed in 2013, before DLA Document Services began implementing its transformation plan. Because it has not assessed DLA Document Services’ provision of document services since 2013, DOD has not ensured that DLA Document Services is providing the best value in an efficient and effective way.
In light of changes such as DLA Document Services’ transformation plan, DOD has also not determined whether DLA’s single manager role as it is currently constituted is the most effective and efficient model for providing printing and reproduction services, or whether additional efficiencies may be possible. For instance, as a part of its transformation plan, DLA Document Services is increasing its use of GPO to fulfill customer orders, in lieu of using its in-house print facilities. As previously discussed, DLA Document Services can provide certain arrangements—such as establishing term contracts with GPO for certain customers while still providing administrative support for those customers—which may allow for greater efficiencies in printing and reproduction services. However, the draft revision to DOD Instruction 5330.03 does not address how DLA Document Services might use or expand these more flexible arrangements in light of its transformation plan. DOD Instruction 5025.01 requires that, when revising DOD issuances—such as DOD Instructions—the relevant Office of the Secretary of Defense component head will ensure that each assignment of authority or responsibility is verified to be a current requirement and is appropriately assigned. Without assessing whether DLA’s single manager role as it is currently constituted is the most effective and efficient model for providing printing and reproduction services in light of the current transformation plan, DOD may miss opportunities to gain additional efficiencies and better manage fragmentation when obtaining these services.
Reducing Overlap in Procuring Print Devices
Our review found that DOD has not implemented a department-wide approach for acquiring print devices, and DOD components use at least four different sources to acquire them, with costs that vary widely for similar devices. For example, as one of its services, DLA Document Services provides print devices, as well as associated maintenance and supplies, to DOD components. The Department of the Navy has adopted DLA Document Services as the exclusive source for acquiring and sustaining print devices for the Navy and Marine Corps. In addition, both the Army and Air Force have established their own contracts for print devices. Further, the Defense Information Systems Agency’s Joint Service Provider delivers print devices to organizations in the Pentagon and the national capital region, including the headquarters organizations of some of the military services, and officials noted that they use a government-wide contract managed by the National Aeronautics and Space Administration.
Based on DLA Document Services’ assessments of customers’ print device requirements, its print device procurement service resulted in savings of between 33 and 45 percent compared to the customers’ prior costs for devices, primarily because of reductions in unnecessary devices and efficiencies that are gained through the economies of scale of a single organization procuring these devices. More specifically, DLA Document Services, as a part of its print device procurement service, assesses customers’ device requirements, which officials told us generally results in reducing the number of devices and the associated costs. In addition, DLA Document Services is pursuing, with the support of the General Services Administration, a “best-in-class” designation for its print device procurement service as a part of an effort to reduce costs by using multi-agency and government-wide acquisition vehicles.
Army and Air Force officials told us that they had established their own print device procurement sources primarily because they believed that these sources are less expensive than using DLA Document Services. This is primarily because DLA Document Services charges administrative and overhead costs to support its operations, such as facility and maintenance costs, whereas the services’ own contracts do not require any additional fees, according to these officials. However, service officials were unable to provide any analyses or other documentation to support these determinations, and some service officials have been reassessing their approach to obtaining devices. For example, Air Force officials told us they recognize that print procurement services like those provided by DLA Document Services can result in savings, and these officials plan to issue guidance instructing commands to use either DLA Document Services or a similar service offered through the General Services Administration. Conversely, the Marine Corps official responsible for implementing the Department of the Navy’s policy on print devices told us that two installations had reported that the mandated use of DLA Document Services for print device procurement had not yielded savings. That official told us that the office plans to survey additional Marine Corps installations and may make recommendations on the current policy as a result.
Our analysis found differences in cost among the contracts for similar devices and associated services (see fig. 5). However, we were unable to determine which sources provided the greatest value, because of differences in device specifications (such as handling different paper sizes or the capability to be used on classified networks), approaches to obtaining devices, and whether associated maintenance services and supplies were included. We analyzed DLA Document Services’ standard pricing for customers, contractor quotes for the Army’s mandatory source, and standard pricing for the Air Force’s mandatory source for devices with similar capabilities offered by two or more of the sources, and we found that prices varied widely. For example, we found that DLA Document Services offered customers high-capacity color multifunction devices for between $280 and $315 a month, including maintenance and supplies. Vendor quotes we reviewed for similar devices through the Army’s mandatory source were for between $185 and $479 a month, not including maintenance and supplies, while the cost under the Air Force’s mandatory source was between $92 and $145, including maintenance but excluding supplies.
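One way to make such quotes more comparable is to normalize them to a common basis that includes maintenance and supplies. The sketch below illustrates that normalization using the ranges cited above; the assumed add-on costs are hypothetical placeholders, not DOD or vendor figures, so the normalized totals are illustrative only.

```python
# Hypothetical normalization of monthly device costs to a common basis
# (device + maintenance + supplies). The add-on amounts below are
# invented placeholders, not actual DOD or vendor figures.
ASSUMED_MAINTENANCE = 40.0  # hypothetical monthly maintenance cost
ASSUMED_SUPPLIES = 55.0     # hypothetical monthly toner/supply cost

quotes = [
    # (source, monthly device cost, includes maintenance, includes supplies)
    ("DLA low",        280.0, True,  True),
    ("DLA high",       315.0, True,  True),
    ("Army low",       185.0, False, False),
    ("Army high",      479.0, False, False),
    ("Air Force low",   92.0, True,  False),
    ("Air Force high", 145.0, True,  False),
]

for source, cost, has_maint, has_supplies in quotes:
    total = cost
    if not has_maint:
        total += ASSUMED_MAINTENANCE  # add maintenance where excluded
    if not has_supplies:
        total += ASSUMED_SUPPLIES     # add supplies where excluded
    print(f"{source:15s} normalized monthly cost: ${total:,.2f}")
```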
Our prior work on strategic sourcing—an approach to procurement that moves away from numerous individual procurements to a broader aggregate approach—has found that this approach can result in considerable savings. OMB has also promoted category management—an approach that includes strategic sourcing as well as improving data analysis and more frequently using private sector (as well as government) best practices. OMB also encourages the use of multi-agency and government-wide approaches to acquiring goods and services. Our work has further found that collecting and using transactional data—information generated when the government purchases goods or services from a vendor, including specific details such as descriptions, part numbers, quantities, and prices paid for the items purchased—can help ensure that the benefits of strategic sourcing are maintained.
The proposed revisions to DOD Instruction 5330.03 would designate the DLA Director as DOD’s single manager for procuring print devices. The current version of the Instruction designates DLA Document Services as the preferred provider for document conversion and automation services, which includes print device procurement services. Further consolidation of print device procurement, such as under DLA Document Services, might reduce costs. However, it is unclear what approach represents the best value to the government. This is because DOD has not conducted an analysis to establish which approach—or approaches—to obtaining print devices would be most cost effective, according to officials from DOD, DLA, and the military services. By assessing which approach to acquiring print devices represents the best value to the department, DOD would be better positioned, as it revises DOD Instruction 5330.03, to establish a policy that consolidates print device procurements and further reduces its costs.
Meeting Goals to Reduce the Number of Print Devices
Beginning in fiscal year 2012, the DOD CIO and some of the military services established goals for reducing the number of print devices, which—according to internal DOD analyses—would save millions of dollars annually. The DOD CIO’s 2012 memorandum instructed DOD components, including the military services, to issue guidance to, among other things, reduce the number of print devices to one per office space of 12 or fewer users and assess the ratio of printers to employees in larger spaces. However, the services have not demonstrated that they have achieved their goals for print device reductions. Specifically, we found the following (a simple sketch of the device-count arithmetic implied by these ratio goals appears after the list):
Army: The Secretary of the Army issued guidance in 2013, requiring all Army commands, organizations, and activities to assess print device capacity and plan for reductions if necessary based on the results of those assessments. The guidance noted that those reductions could save millions of dollars annually. The guidance also included a requirement for biannual reporting by all Army commands, organizations, and activities on their print device inventory, number of printing devices required, and annual costs for printing device acquisitions. In June 2014, Army commands reported an average of 5 users for each single-function printer, compared to an industry standard of 7 users per device and the DOD goal of one print device per office space of 12 or fewer users (with an assessment of the printer-to-employee ratio in larger spaces). According to Headquarters, Department of the Army officials, however, Army commands objected to the workload associated with this reporting requirement and discontinued issuing the reports. As a result, the Army did not follow through with enforcing the reporting, which limited the ability of Army officials to ensure that Army commands achieved the planned reductions.
Navy and Marine Corps: The Department of the Navy established guidance in 2013, directing Department of the Navy officials to work with DLA Document Services to conduct assessments and develop a phased execution plan for the number and type of print devices Navy and Marine Corps organizations require. The guidance also directed Department of the Navy officials to develop policy requiring that the acquisition of new devices be exclusively through DLA Document Services. DLA subsequently conducted these assessments and found that the Navy and Marine Corps had an average of one device for every seven users. DLA Document Services recommended further reductions in the number of print devices across the Navy and Marine Corps, which it estimated could save over $63 million annually. However, Department of the Navy officials were unable to provide us with data on the total number of Navy and Marine Corps print devices that would indicate whether these device reductions and savings had occurred.
Air Force: The Air Force did not issue any guidance based on the CIO memorandum. In response to our review, the Air Force developed draft guidance on print device management, which includes a goal of increasing the ratio of users to devices from 4 users per device to 12 users per device. The draft guidance also includes requirements for quarterly reporting by the Air Force Information Technology Business Analytics Office on the number of devices and related metrics to monitor progress. According to an Air Force analysis, doing so would achieve savings of over $67 million as it replaces or retires devices. As of July 2018, the Air Force had not fully implemented this guidance.
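The device-count arithmetic implied by these ratio goals is straightforward: for a fixed user population, the number of devices needed scales with the inverse of the users-per-device ratio. The sketch below works through that arithmetic using a hypothetical user population of 100,000, which is an invented figure, not an actual service headcount.

```python
# Illustrative arithmetic on users-per-device ratio goals; the user
# population is a hypothetical placeholder, not a service headcount.
import math

USERS = 100_000  # hypothetical user population

def devices_needed(users, users_per_device):
    """Devices required to serve a user population at a given ratio."""
    return math.ceil(users / users_per_device)

for label, before, after in [
    ("Army (reported ratio vs. DOD goal)", 5, 12),
    ("Air Force (draft guidance goal)",    4, 12),
]:
    d0 = devices_needed(USERS, before)
    d1 = devices_needed(USERS, after)
    pct = (d0 - d1) / d0 * 100
    print(f"{label}: {d0:,} -> {d1:,} devices ({pct:.0f}% reduction)")
```

Under these assumptions, meeting a 12-users-per-device goal would cut the device count by roughly 58 percent for the Army's reported ratio and 67 percent for the Air Force's, which is consistent with the services' estimates of savings in the tens of millions of dollars.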
Efforts by the military services to demonstrate that they have achieved print device reduction goals have been limited because they have not monitored the actions they have taken to reduce the number of print devices. Military service officials we interviewed said they were unaware of any efforts by the DOD CIO to ensure that device reductions occurred and that DOD components achieved their planned savings, such as providing information to the CIO on the status of their efforts to implement the guidance in the memorandum or data on reductions in the number of devices. Standards for internal control state that management should implement control activities through policies that use quality information to achieve an entity’s objectives, monitor the internal control system, and evaluate the results of the system.
Efforts to implement the memorandum to achieve print device reduction goals have also been limited because responsibility for implementation was not clearly assigned. According to a DOD CIO official, the responsibility for the memorandum is not clearly assigned to a member of the CIO staff. This official also stated that because of the consolidation of information technology services in the Pentagon and the national capital region, the Defense Information Systems Agency’s Joint Service Provider assumed responsibility for implementing the memorandum. According to Joint Service Provider officials, however, they were only responsible for implementing the memorandum for the customers they serve in the Pentagon and the national capital region, and not for other DOD components outside those areas, such as military services. Standards for internal control state that management should ensure that key roles in operating the internal control system are clearly assigned. In the absence of these controls, such as reporting procedures to monitor actions to reduce the number of print devices and establishing clear responsibility for implementing the CIO memorandum, DOD has been unable to ensure that it is achieving any estimated savings, which could represent tens of millions of dollars annually.
Consolidating Locations That Provide Mission Specialty Printing
DLA Document Services may be able to realize additional savings from further consolidating facilities beyond those already identified, but it does not currently plan to do so, and it does not have the complete data it would need to make those determinations. As a part of its transformation plan, DLA Document Services identified 38 of its 112 facilities in the continental United States that it would retain. DLA Document Services officials stated that they considered a number of factors in determining whether to consolidate or retain facilities, including the number of staff and customers and the facilities’ workloads, but that they generally consolidated or retained facilities based on whether the facility provided “mission specialty” services. These mission specialties are services that DLA Document Services officials believe cannot be easily outsourced, such as printing and reproduction of classified and sensitive documents and on-demand printing and distribution of certain technical materials.
However, our analysis of DLA Document Services data found that some facilities retained for certain mission specialties were responsible for a relatively small share of business for those specialties in fiscal year 2016 (the last full year for which data were provided), which suggests that further consolidations are possible. For example, for each of the four mission specialties for which DLA Document Services provided us with revenue data, the bottom quartile (25 percent) of the facilities retained for each specialty were responsible for less than 5 percent of the total revenue for that specialty, as shown in figure 6. We also found some cases in which DLA Document Services retained facilities that reported less revenue for a given specialty than facilities that it did not retain. According to officials, DLA Document Services took a number of factors into consideration in deciding on consolidations, including the complexity of the work at a facility and whether nearby sites could fulfill the orders. According to these officials, this allowed them to consolidate some facilities even if those facilities had greater revenue from a given mission specialty than other facilities.
DOD Instruction 5330.03 requires DLA Document Services to provide effective and efficient document services support to DOD components. Our key practices for efficiency initiatives also note the importance of targeting both short-term and long-term efficiency initiatives. DLA Document Services officials stated that they would consider additional consolidations of facilities, but they have not conducted any analysis or planning to gain further efficiencies and do not currently have plans to do so. These officials stated they are committed to implementing the current transformation plan as announced. Officials also stated that they want to have a better sense of the results from the current transformation, including how workloads may change among facilities as consolidations occur, before considering additional consolidations. DLA Document Services’ current transformation plan includes the possible consolidation of facilities outside the continental United States following the implementation of its current plan (which only addressed facilities inside the continental United States); it does not have any plans for further consolidations within the continental United States.
We also found that DLA Document Services did not have revenue data on all of its mission specialties to inform any future decisions on facility consolidations. Standards for internal control state that entities’ management should use quality information to achieve the entities’ objectives. However, DLA Document Services could not provide revenue data on three specific mission specialties—sensitive, classified, and Naval Nuclear Propulsion Information—for which it retained 30 of its facilities, including some that it retained exclusively for those specialties. According to DLA Document Services officials, they did not collect revenue data for these mission specialties because the facilities responsible for processing this type of information were generally retained, regardless of the revenue they produced, due to the sensitive nature of this work. As noted above, our analysis of available mission specialty data found that some facilities that DLA retained for certain mission specialties did a relatively small share of business for those specialties, indicating that there may be opportunities for additional facility consolidations. DLA Document Services officials told us that they had consulted with managers at the facilities about the amount of sensitive and classified work they conducted. Because of these consultations, DLA Document Services is closing some facilities that handled sensitive and classified information. However, DLA Document Services does not routinely collect these data as it does for other mission specialties. By collecting and analyzing more complete revenue data on its mission specialties and using those data to evaluate opportunities for further consolidations, DLA Document Services would be better positioned to determine if opportunities exist to achieve additional cost savings.
DOD Does Not Report Accurate Financial Information about Its Document Services
DOD reports some financial information regarding its document services, but this information does not accurately capture the scope of its document services mission. We reviewed the O&M obligations for printing and reproduction in fiscal years 2012 through 2016 that were reported to Congress by the military services. The total obligations ranged from about $534 million to about $736 million annually for the 5-year period (see fig. 7).
Our analysis found that DOD’s O&M budget materials for printing and reproduction are inaccurate in two ways. First, the budget materials include obligations that are primarily for non-printing activities, such as the purchase of advertising and radio and television time. DOD and military service financial management officials prepare budget justification materials for their O&M funding requests on an annual basis. DOD and the services report printing and reproduction costs in the Summary of Price and Program Changes budget exhibit (the “OP-32”). It contains information by line item, detailing, among other items, printing and reproduction and related operations performed by the military services, DLA, or GPO. It also contains elements of expenses for purchases related to document services that are provided by DLA. The OP-32 exhibits are provided to Congress with the budget justification materials accompanying the President’s annual budget request.
Officials from AMRG told us that, in accordance with Army guidance, printing and reproduction obligations are coupled with other obligations, including the purchase of advertising space and radio and television time for recruiting activities. Data provided by these officials show that in fiscal year 2016, AMRG’s obligations for printing and reproduction accounted for only about $2 million, or 2 percent, of the Army’s total fiscal year 2016 obligations included in the printing and reproduction line of the OP-32. Obligations for the publication of notices, advertising, and radio and television time accounted for about $78 million, or 63 percent, of the obligations reported for printing and reproduction. According to officials, the Navy, Air Force, and Marine Corps also follow their respective guidance on reporting printing and reproduction obligations together with these other obligations.
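A rough consistency check of these rounded figures, shown below, illustrates how small the printing share of the reported line is; because the cited values are rounded, the results are approximate.

```python
# Back-of-the-envelope check using the rounded figures cited above.
printing = 2e6        # AMRG printing and reproduction obligations, FY 2016
advertising = 78e6    # notices, advertising, and radio and television time

implied_total = advertising / 0.63  # $78M is stated to be ~63% of the line
print(f"Implied OP-32 line total: ${implied_total / 1e6:.0f}M")      # ~$124M
print(f"Printing share of the line: {printing / implied_total:.1%}") # ~1.6%, i.e., "about 2 percent"
```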
Second, the budget justification information does not represent the full scope of the military services’ document services mission. Specifically, we found that the military services’ annual budget requests do not provide distinct information on two areas of their document services mission—print device procurement and electronic content management. Data we reviewed indicate that the military services obligate a considerable amount of resources in these areas. For example, according to DLA Document Services, sales to DOD and the military services for its print device services are comparable to sales for its printing and reproduction services. According to DLA data, in fiscal year 2017, it received in revenue about $108 million for print device services and about $105 million for printing and reproduction services. Officials from the military services told us that obligations for these activities are included within the budget requests for various IT procurement categories. For example, Army Budget Office officials noted that the budget request for IT procurement and office supplies would include estimates associated with the purchase and sustainment of devices, but those line items would include other, non-printing obligations as well. According to these officials, the Army has made efforts to standardize the procurement of information technology, including collecting better data on spending for these types of devices. They told us that these efforts will result in shifts in how those obligations are reported in budget justification materials.
The accuracy and completeness of DOD’s financial information about its document services can affect the allocation of budgetary resources, and inaccurate or incomplete information can hamper initiatives to gain further efficiencies. The Handbook of Federal Accounting Standards states that its managerial cost accounting concepts and standards are aimed at agencies providing reliable and timely information on the full costs of their federal programs that congressional and executive decision makers can use in making decisions about allocating federal resources and program managers can use in making decisions to improve operating economy and efficiency. DOD’s Financial Management Regulation lays out the structure of the budget exhibits that the military services develop during the department’s budget process. According to a DOD Comptroller official, DOD has historically reported its budget requests following the format prescribed by the Financial Management Regulation, and it follows this format in its reporting of printing and reproduction costs that are coupled with non-printing costs.
Although the department has followed this format, the House Armed Services Committee has expressed concern about the military services’ printing budgets, noting that they were excessive and that portions of the budgets should be realigned to address unfunded readiness priorities. Further, as we discussed earlier in this report, DOD has outlined specific steps it intends to take to achieve a recommended goal of 34 percent reduction in spending on its printing and related activities. Without quality information on the scope of its document services mission, DOD will lack the information it needs to assess whether it is achieving this goal. To assess its progress toward achieving this goal, it will be critical for decision makers to have accurate financial information. According to a DOD Comptroller official, the Financial Management Regulation provides flexibility in how obligations are categorized and reported internally and to Congress, but DOD has not evaluated options to report more accurate funding information on its document services. Unless DOD evaluates options to report more accurate funding information and takes steps to improve the accuracy of its budgetary and financial information reporting, DOD and Congress will not have the full visibility over these costs that they need to make informed decisions.
Conclusions
DOD is taking important steps to address congressional concerns about its spending on document service activities. Most notably, DOD is implementing its plan to transform its DLA Document Services mission and has taken certain steps to reduce the number and cost of print devices. These efforts have begun to produce results, but DOD can do more to build on these gains. By better managing fragmentation in printing and reproduction services, DOD could ensure that DLA Document Services is providing the best value in obtaining document services. DOD could further reduce overlap in print device procurement by assessing the various approaches employed by DLA and the military services to determine what constitutes the most cost-effective approach for the department.
DOD has set goals intended to reduce the number of print devices and realize tens of millions of dollars in savings each year, but it has not demonstrated that it has achieved these savings, because of limitations in internal controls. Additional efforts aimed at collecting and analyzing information to examine areas for further consolidation of DLA Document Services’ mission specialty locations might provide DOD with additional cost savings. DOD’s O&M budget materials for printing and reproduction activities include information on non-printing activities that make up a much larger portion of its reported spending than printing does. In addition, these O&M budget materials omit information that would capture the full scope of DOD’s document services mission, such as device procurement and electronic content management, which are included with information technology budget materials. By providing more accurate costs for its document services activities, DOD would ensure that Congress and departmental leaders have the insight needed to make informed decisions.
Recommendations for Executive Action
We are making a total of six recommendations to DOD.
The Secretary of Defense should ensure that the Under Secretary of Defense for Acquisition and Sustainment assesses whether DLA Document Services’ single manager role for printing and reproduction provides the best value to the government—as determined by quality, price, and delivery time and in light of DLA Document Services’ transformation plan—and whether any additional efficiencies are possible, and use the results of that assessment to inform the revision of DOD Instruction 5330.03. (Recommendation 1)
The Secretary of Defense should ensure that the Under Secretary of Defense for Acquisition and Sustainment assesses whether DOD’s current approach to obtaining print devices represents the best value to the government or whether other approaches, such as further consolidations under DLA Document Services as a proposed single manager for print device procurement, would be more cost effective. (Recommendation 2)
The Secretary of Defense should ensure that the DOD CIO implements controls, such as reporting procedures, to routinely monitor actions to reduce the number of print devices, consistent with department-wide goals for reducing the number of print devices that are included in the CIO’s 2012 memorandum. (Recommendation 3)
The Secretary of Defense should ensure that the DOD CIO assigns responsibility for implementing the CIO’s 2012 memorandum on optimizing the use of employee information technology devices. (Recommendation 4)
The Secretary of Defense should ensure that the Director, DLA, in coordination with the Director, DLA Document Services, and following implementation of the current transformation plan, gathers data on workload revenue at retained facilities for all mission specialties and evaluates whether additional opportunities for consolidation exist based on those data. (Recommendation 5)
The Secretary of Defense should ensure that the Under Secretary of Defense (Comptroller), in consultation with the military services and DLA, evaluates options to report more accurate funding information and takes steps to improve the accuracy of its budgetary and financial information reporting on document services internally and to Congress, including making distinctions between printing and non-printing-related costs and information on device procurement and electronic content management. This information could be provided as part of DOD’s annual O&M budget justification materials. (Recommendation 6)
Agency Comments and Our Evaluation
We provided a draft of this report to DOD for review and comment. In its written comments, DOD concurred with five recommendations and identified specific actions and time frames for addressing them, and it partially concurred with the remaining recommendation. DOD’s written comments are reprinted in their entirety in appendix II. DOD also provided technical comments, which we incorporated into the report, where appropriate.
DOD partially concurred with our recommendation that the Under Secretary of Defense (Comptroller), in consultation with the military services and DLA, evaluate options to report more accurate funding information and take steps to improve the accuracy of budgetary and financial information reporting on document services internally and to Congress, including making distinctions between printing and non-printing-related costs and information on device procurement and electronic content management. Our recommendation noted that this information could be provided as part of DOD’s annual O&M budget justification materials. DOD stated that the budget materials it submits to Congress are in compliance with OMB Circular A-11’s definitions of printing and reproduction and equipment. It further noted that Working Capital Fund exhibits provided with each annual budget include a breakout, by service, of the appropriated and Working Capital Fund activities and a detailed accounting of unit cost and pricing for all sub-activities of DLA Document Services.
As we noted in our report, a DOD Comptroller official told us that the Financial Management Regulation provides DOD with flexibility in categorizing and reporting obligations internally and to Congress. However, we found that, based on this flexibility, DOD’s O&M budget materials reported obligations for printing and reproduction that were primarily for non-printing activities, such as the purchase of advertising and radio and television time. This budget information did not represent the full scope of DOD’s document services mission, since it omitted obligations for print device procurement and electronic content management. We also reported that DOD had not evaluated options to report more accurate funding information on its document services. DOD’s comments did not include plans to address this recommendation. We continue to believe that by providing more accurate costs for its document services activities, DOD would ensure that Congress and departmental leaders have the insight needed to make more informed decisions.
We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the DOD Chief Information Officer, the Under Secretary of Defense (Comptroller), the Under Secretary of Defense for Acquisition and Sustainment, the Director, Defense Logistics Agency, the Secretaries of the Army, Navy, and Air Force, and the Commandant of the Marine Corps. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-2775 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
Appendix I: Objectives, Scope, and Methodology
Our objectives were to evaluate (1) the progress the Department of Defense (DOD) has made in achieving efficiencies in its document services and opportunities, if any, for further efficiencies, and (2) the extent to which DOD reports accurate financial information about its document services to key stakeholders.
For our first objective, we reviewed DOD documents and interviewed DOD officials in order to understand how each military service obtains document services and identify department-wide and military service efficiency initiatives for these services. We also reviewed the Defense Logistics Agency’s (DLA) and the military services’ document services activities and compared them with a DOD statutory periodic review; DOD Instructions and other guidance; Office of Management and Budget (OMB) guidance; internal control standards; and best practices for consolidation initiatives, efficiency initiatives, and strategic sourcing to identify any potentially unnecessary duplication, overlap, or fragmentation and any opportunities for greater efficiencies. For specific efficiency initiatives identified by DOD officials or in DOD documents, we interviewed DOD officials regarding their progress in implementing and meeting the goals of these initiatives.
To evaluate DLA Document Services' transformation plan, we interviewed DLA Document Services officials, reviewed DLA Document Services documents regarding the plan, and assessed that plan based on leading practices for consolidation and efficiency initiatives. To assess the plan against these practices, one analyst reviewed the testimony and documents provided and compared them to our key questions to consider when evaluating proposals to consolidate physical infrastructure and management functions. A second analyst reviewed and concurred with the first analyst's assessments. Where the analysts disagreed, they discussed the discrepancies; if a discrepancy was not resolved, a third analyst reviewed the assessments.
To assess the extent to which there may be additional opportunities for facility consolidations, we obtained DLA Document Services data on revenue reported by each facility, which DLA Document Services officials told us they used in determining which facilities to consolidate as a part of their transformation plan. We analyzed the share of mission specialty revenue reported by facilities that (1) were retained by DLA Document Services for a given mission specialty, (2) were retained but not for a given specialty, and (3) were not retained. We further divided those facilities retained for a given specialty into quartiles to better understand the concentration of revenue in those facilities. To assess the reliability of these data, we interviewed DLA Document Services officials regarding how the data were gathered, analyzed, reported, and used. We found that these data were reliable for the purpose of analyzing the shares of mission specialty revenue represented by each facility or group of facilities.
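To make the revenue-share analysis described above concrete, the following minimal Python sketch illustrates the same computation under stated assumptions: the facility names and revenue figures are hypothetical stand-ins, not DLA Document Services data, and the quartile grouping is a simplified version of the approach described in this appendix.

# Minimal sketch of the revenue-share analysis described above.
# Facility names and revenue figures are hypothetical, not DLA data.

# (facility, retained for this mission specialty?, specialty revenue)
facilities = [
    ("Facility A", True, 1_200_000),
    ("Facility B", True, 450_000),
    ("Facility C", False, 90_000),
    ("Facility D", True, 60_000),
    ("Facility E", False, 30_000),
]

total_revenue = sum(revenue for _, _, revenue in facilities)

# Share of specialty revenue held by facilities retained for the specialty
retained = [(name, revenue) for name, kept, revenue in facilities if kept]
retained_share = sum(revenue for _, revenue in retained) / total_revenue
print(f"Retained facilities hold {retained_share:.1%} of specialty revenue")

# Divide retained facilities into quartiles by revenue to gauge concentration
retained.sort(key=lambda item: item[1], reverse=True)
quartile_size = max(1, len(retained) // 4)
top_share = sum(revenue for _, revenue in retained[:quartile_size]) / total_revenue
print(f"Top quartile of retained facilities holds {top_share:.1%}")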
To compare the cost of print devices offered by DLA Document Services, the Army, and the Air Force, we gathered and analyzed data on the monthly cost of multifunction devices with comparable specifications. We compared costs for similar devices based on device specifications including print speeds, monthly volumes, and paper capacities. Because Army and Air Force costs are estimated and there might be other differences in device specifications, approaches to obtaining devices, and which associated services were included, this analysis does not allow us to conclude which sources provide the greatest value. However, it illustrates differences in the cost of print devices across sources.
For DLA Document Services, we used DLA Document Services’ standard monthly pricing for 2018 for various categories of multifunction devices.
For the Army, Army officials were unable to provide data on the cost of multifunction devices purchased by Army customers. Instead, they provided us with documentation of vendor responses to requests for quotes from the Army's mandatory source for print devices from April 2017 through January 2018. We reviewed those documents and assigned each device to a DLA Document Services category, based on the device's specifications as identified in the documentation. We then estimated the monthly cost for each device. For leased devices, we used the monthly cost of the lease. For purchased devices, we used the total cost of the device divided by an estimated service life for the device. We estimated this service life using indications available in the documentation, such as the length of time a maintenance agreement or extended warranty was provided for the device. Army officials provided 183 quotes for devices. Of those, we were able to include 24 in our analysis. We excluded the other 159 because either we could not determine the cost for individual devices in a quote, there was not enough information on a device's specifications, there was no DLA Document Services equivalent for the device, or we were unable to estimate a service life based on the information provided. Because the information included all vendor quotes provided and not just those that were selected by a customer, the costs may not represent the actual costs of devices to the customer.
For the Air Force, we used an estimated average monthly cost based on the standard pricing included in the Air Force's 2018 catalog for print devices. We reviewed the catalog and assigned each multifunction device offered to a DLA Document Services category, based on the devices' specifications. The Air Force's catalog contained 32 devices; we were able to determine the equivalent DLA Document Services category for 13 of those devices. All devices in the Air Force's catalog are available for purchase and include a 4-year maintenance agreement; therefore, we estimated the average monthly cost as the purchase price divided by 48.
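Because the Army and Air Force estimates described above reduce to simple arithmetic, the following brief Python sketch illustrates the monthly-cost calculations; the device prices and service life shown are hypothetical examples, not figures from our analysis.

# Sketch of the monthly-cost estimates described above; the prices and
# service life below are hypothetical, not drawn from the report's data.

def monthly_cost_army(quote):
    """Estimate monthly cost from an Army vendor quote."""
    if quote["type"] == "lease":
        return quote["monthly_lease"]
    # Purchased device: total cost spread over a service life estimated
    # from indications such as the length of the maintenance agreement.
    return quote["purchase_price"] / quote["service_life_months"]

def monthly_cost_air_force(purchase_price):
    # Air Force catalog devices include a 4-year maintenance agreement,
    # so the average monthly cost is the purchase price divided by 48.
    return purchase_price / 48

army_quote = {"type": "purchase", "purchase_price": 9_600, "service_life_months": 60}
print(f"Army device: ${monthly_cost_army(army_quote):,.2f} per month")       # $160.00
print(f"Air Force device: ${monthly_cost_air_force(7_200):,.2f} per month")  # $150.00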
To evaluate the extent to which DOD reports accurate and complete financial information to key stakeholders to manage its document services, we analyzed DOD’s operation and maintenance (O&M) budget justification materials for fiscal years 2012 through 2016 and Defense Logistics Agency data on its document services mission. We focused our review on O&M obligations reported by DLA and the military services, which accounted for an average of about 92 percent of DOD’s total document service costs reported by DLA Document Services in fiscal years 2012 through 2016. We interviewed officials, including officials from the Office of the Under Secretary of Defense (Comptroller), DLA Document Services, and the military services to determine how they reported costs for document services. We assessed the information we collected against federal accounting standards on how information should be recorded and communicated to management and others. To determine the reliability of the O&M budget justification data provided to us by DOD, we obtained information on how the data were collected, managed, and used through interviews with relevant officials. We determined that the data were sufficiently reliable to represent the military services’ total O&M obligations for document services for fiscal years 2012 through 2016.
We interviewed officials and, where appropriate, obtained documentation, from the following organizations:
Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics
Office of the Under Secretary of Defense (Comptroller)
Department of Defense Chief Information Officer
Defense Logistics Agency – Chief Information Officer
Defense Logistics Agency – Document Services
Defense Information Systems Agency – Joint Service Provider
Army Chief Information Officer
Army Publishing Directorate
Army Marketing Research Group
Army 7th Signal Command
Headquarters Air Force – Chief Information Officer
Department of the Navy – Chief Information Officer
Headquarters Marine Corps Command, Control, Communications, and Computers
Headquarters Marine Corps Publishing and Logistics
Headquarters Marine Corps Budget and Execution
Marine Corps Combat Camera
Marine Corps Reprographic Equipment Management Program

We conducted this performance audit from August 2017 to October 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Comments from the Department of Defense
Appendix III: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Matthew Ullengren (Assistant Director), Adam Hatton (Analyst in Charge), Adam Brooks, Joanne Landesman, Amie Lesser, Daniel Ramsey, Carter Stevens, and Walter Vance made key contributions to this report.

Why GAO Did This Study
DOD has reported printing costs that totaled about $608 million, on average, during fiscal years 2010 through 2015. DLA Document Services has key DOD-wide responsibilities for (1) printing and reproduction, (2) print device procurement, and (3) electronic content management (e.g., digital document repositories). Other DOD components, including the military services, also maintain some document services capabilities at various locations.
House Report 115-200 accompanying a bill for the National Defense Authorization Act for fiscal year 2018 included a provision for GAO to examine DOD's document services. This report evaluates (1) the progress DOD has made in achieving efficiencies in its document services and opportunities, if any, to achieve further efficiencies, and (2) the extent to which DOD reports accurate financial information about its document services to key stakeholders. GAO reviewed documents and interviewed officials regarding DOD's efficiency initiatives, including DLA Document Services' transformation plan; reviewed print device procurement contracts and pricing information; and analyzed DOD budget data for fiscal years 2012 through 2016.
What GAO Found
The Department of Defense (DOD) has taken steps to achieve efficiencies in its document services, including implementing a transformation plan to consolidate existing Defense Logistics Agency (DLA) Document Services facilities. However, GAO identified four areas where further gains may be possible:
Managing fragmentation in printing and reproduction services. DOD has designated DLA Document Services as the single manager for printing and reproduction services, but DOD customers, citing concerns with DLA's services, have also obtained these services directly from the Government Publishing Office and via in-house print facilities (see fig.). DOD has not assessed DLA's performance in this role or whether additional efficiencies may be possible in light of DLA's transformation plan.
Reducing overlap in procuring print devices. GAO found that DOD components used at least four different contract sources to acquire print devices. DOD has not assessed which acquisition approach represents the best value; doing so might better position DOD to further reduce its costs.
Meeting goals to reduce the number of print devices. DOD and the military services have not demonstrated that they achieved established goals for reducing the number of print devices. Additional controls and assignment of oversight responsibilities to monitor progress could better enable DOD to achieve its cost savings goals, estimated to be millions of dollars annually.
Consolidating DLA facilities. DLA is closing or consolidating 74 of its 112 facilities in the United States. However, GAO found that for four of seven types of specialty services, DLA plans to retain facilities that are responsible for less than 5 percent of the total revenue for each of those specialties, which suggests that further consolidations are possible.
DOD includes the cost of non-printing activities, such as the purchase of advertising time for recruiting, within its budget materials for printing and reproduction. It does not include costs to acquire print devices and for electronic content management. As a result, DOD and the Congress lack the oversight into total document services costs needed to make informed decisions.
What GAO Recommends
GAO is making six recommendations, including that DOD evaluate options to achieve additional cost savings and other efficiencies in its document services and report more accurate budget data. DOD generally agreed with the recommendations.
Background
Personal Identity Verification Cards
Developed in response to HSPD-12, Personal Identity Verification (PIV) cards are a common authentication mechanism used across the federal government, and are a component of physical access control systems. PIV cards are used to securely identify federal government employees and contractor personnel seeking access to valuable and sensitive federal resources, including facilities and information systems. Also known as a “smart card,” a PIV card is similar in size to a credit card and contains information that is either printed on the outside or stored on the card’s integrated circuit chip (see fig. 1 below). PIV cards are required to be interoperable with all GSA-approved physical access control system equipment included on the Approved Products List, regardless of that equipment’s manufacturer. Likewise, GSA-approved physical access control system equipment is required to be interoperable with all PIV cards.
Access to Controlled Areas
Physical access control systems are used to manage access to controlled areas, such as a building or a room in a building. Physical access control products include devices such as card readers and the ID cards used to validate an individual's authorization to enter a building (see fig. 2 below). This report focuses on physical access and does not address logical (computer network) access.
A physical access control system works as follows. When an employee or contractor who is a PIV cardholder attempts to enter a controlled area managed by a physical access control system, they will encounter the physical access control system at the "front end." At this point, depending on the controlled area's level of security, the cardholder will scan their PIV card via a card reader; insert their PIV card and enter a personal identification number (PIN) via an input device such as a keypad; or insert their PIV card, enter a PIN, and provide biometric identification (such as a fingerprint) via an input device.
After the cardholder presents this identification information, the information from the PIV card's integrated circuit chip is transmitted to the physical access control system's "back end," which consists of physical and logical access control systems and authorization data. At the back end, the physical access control system determines the validity of the cardholder's access authorization. The cardholder will be able to access the area only if the authorization is valid.
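The front-end/back-end flow just described can be modeled in a few lines of code. The Python sketch below is an illustrative simplification under stated assumptions: the card records, field names, and tiered security logic are hypothetical, and real systems implement the FIPS 201 specifications rather than this toy logic.

# Illustrative model of the access decision described above; all names
# and records are hypothetical, and real systems follow FIPS 201.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Credential:                      # presented at the "front end"
    card_id: str
    pin: Optional[str] = None
    fingerprint: Optional[str] = None

# "Back end" authorization data, keyed by card identifier
AUTHORIZATION_DATA = {
    "PIV-0001": {"valid": True, "pin": "123456", "fingerprint": "fp-a"},
}

def grant_access(credential: Credential, security_level: int) -> bool:
    record = AUTHORIZATION_DATA.get(credential.card_id)
    if record is None or not record["valid"]:
        return False                   # unknown or revoked credential
    if security_level >= 2 and credential.pin != record["pin"]:
        return False                   # PIN required at this level
    if security_level >= 3 and credential.fingerprint != record["fingerprint"]:
        return False                   # biometric required at this level
    return True

print(grant_access(Credential("PIV-0001", pin="123456"), security_level=2))  # True
print(grant_access(Credential("PIV-9999"), security_level=1))                # False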
When deciding which access control mechanisms to implement, agencies must first understand the level of risk associated with the facility. The higher the risk level, the greater the need there is for agencies to implement a high-assurance-level access control system. Physical access control systems can be electronically connected in different ways, including within a given building or across an agency or department. The level of interoperability determines the level at which PIV cards and access authorization will be accepted. For example, a PIV card and corresponding access authorization may be accepted within a single building, across an agency, or potentially across the federal government. In this report, we describe a system in which a PIV card works in multiple physical access control systems as "interoperable." In order to realize the full security benefit of PIV cards, physical access control systems must have a network connection that enables them to validate a given cardholder's access credentials.
Federal Requirements
Homeland Security Protection Directive-12: HSPD-12 is a 2004 presidential directive establishing the requirement for a mandatory, government-wide standard for secure and reliable forms of identification (PIV cards) issued by the federal government to its employees and contractors. It specified that the standard must include criteria authenticating employees’ identities and permissions at graduated levels of security, depending on the agency environment and the sensitivity of facilities and data accessed.
Federal Information Processing Standards (FIPS) 201: The Department of Commerce's NIST initially published the Federal Information Processing Standards (FIPS) 201 in 2005 to support HSPD-12. The FIPS 201 standard established the PIV card as a common authentication mechanism across the federal government. FIPS 201 set standards for PIV systems in three areas: (1) identity proofing and registration, (2) card issuance and maintenance, and (3) protection of card applicants' privacy. In addition, the standard provides technical specifications for the implementation and use of interoperable smart cards in physical access control systems. An update to FIPS 201 (called FIPS 201-2) was released in August 2013. Among other things, it made the collection of a facial image mandatory for PIV cards and changed the maximum lifespan of a card from 5 to 6 years.
Approved Products List – The Approved Products List is a list of all physical access control system equipment that is compliant with FIPS identification standards. Agencies must acquire federally-approved products and services from this list in order to help ensure government-wide interoperability of physical access control systems. All products on the Approved Products List have gone through end-to-end testing and evaluation, as part of a complete physical access control system. Federal agencies are required to use the Approved Products List when purchasing physical access control system equipment. The Approved Products List is intended to provide assurance to federal agencies that listed vendors' products comply with the various federal standards and requirements.
Government-wide Roles and Responsibilities
Office of Management and Budget (OMB)
HSPD-12 designates OMB as the lead entity with responsibility for ensuring that federal government departments and agencies implement this directive in a manner consistent with ongoing government-wide activities and existing OMB policies and guidance.
General Services Administration (GSA)
GSA supports OMB by administering product testing through a contractor, managing the Approved Products List, and making physical access control system products and services available to federal agencies via GSA's Federal Acquisition Service. The Federal Acquisition Service manages a large portion of GSA's Federal Supply Schedules program (GSA Schedule), which establishes long-term government-wide contracts with commercial firms to provide federal agencies access to millions of commercial products and services at volume discount pricing. Further, GSA's Office of Government-wide Policy (OGP) provides tools and support for identity, credential, and access management activities across the federal government, including for physical access control systems. GSA also has a government-wide landlord role through its Public Buildings Service and installs physical access control systems in many GSA-owned and leased buildings that it manages.
Interagency Security Committee
The ISC is chaired by DHS and consists of 60 federal departments and agencies. ISC's mission is to develop security standards, best practices, and guidelines for nonmilitary federal facilities in the United States. Each of the five selected agencies included in our report, or their home departments, is a member of the ISC. The ISC has the authority to convene task-based working groups from its member agencies to produce documents; these groups provide ISC's members with a forum for information sharing to address a wide range of issues related to physical security at federal buildings. ISC also produces standards and best practices guidance for agencies to use when addressing security issues. For example, in December 2015 ISC released Best Practices for Planning and Managing Physical Security Resources: An Interagency Security Committee Guide. This document is intended to identify practices most beneficial for physical security programs, determine the extent to which federal agencies currently use these practices, and compile and circulate best practices agencies can use as a supplement to the ISC's existing security standards.
OMB and GSA Have Taken Steps to Fulfill Their Responsibilities to Implement Physical Access Control Systems, but Oversight Is Limited
OMB and GSA Have Supported Implementing Physical Access Control Systems
OMB and GSA have taken steps to help agencies procure and implement secure and interoperable GSA-approved physical access control systems across the federal government. For example, OMB has issued three guidance memorandums to clarify agency responsibility to use GSA's Approved Products List.

1. In 2005, OMB designated GSA as the "executive agent for government-wide acquisitions of information technology" for the products and services required for physical access control and delineated agency responsibilities with regard to implementing HSPD-12. Also, to ensure government-wide interoperability, all agencies must acquire products and services that are compliant with standards and included on the Approved Products List.

2. In 2006, OMB reiterated that agencies must purchase physical access control systems from GSA's Approved Products List and that GSA will make approved products and services available through acquisition vehicles (Schedules) that are available to federal agencies.

3. In 2011, OMB issued a memo that cited DHS guidance stating that, effective in fiscal year 2012, agencies must upgrade existing physical and logical access control systems to use PIV credentials prior to using relevant funds for other activities. The memorandum further stated that the upgrades must be in accordance with NIST standards.
In addition, GSA, as the lead agency for government-wide acquisition of information technology, has undertaken a number of efforts to promote the implementation of GSA-approved physical access control systems:

1) Testing and evaluation: GSA administers and conducts testing and evaluation to develop an Approved Products List. Testing is performed by either third-party accredited testing labs or GSA-managed testing labs. GSA tests a variety of products and services, including smart cards; physical access control systems, which include card readers and infrastructure, for example; and integrators, which provide or install access control services. According to GSA officials, GSA has fully tested all physical access control system equipment included on the Approved Products List and evaluated and approved the suitability of vendors and system integrators. GSA shares information about vendors, system integrators, and Approved Products List equipment with federal agencies.

2) Guidance and support: GSA has taken several actions to improve guidance and facilitate the implementation of physical access control systems. First, GSA manages IDManagement.gov, which guides federal agencies through the process of identifying Approved Products List-compliant physical access control system equipment. Second, GSA established the U.S. Access program to enable federal civilian agencies to issue common HSPD-12 approved credentials to their employees and contractors. Finally, GSA developed a list of system integrators that can be used to install physical access control systems that have been approved for the Approved Products List. These integrators (there are 25 as of November 2018) are listed on GSA's IDManagement.gov website.

3) Information sharing: According to GSA officials, GSA responds to email questions from agencies about the Approved Products List, and GSA makes subject matter experts available to any agency representatives with questions.

4) Procurement support: According to GSA officials, GSA provides standard procurement language for agencies to include in statements of work before their requests for proposal go out for physical access control systems. However, according to officials, GSA has no control over whether agencies decide to include the language that it provides.
Stakeholders including agencies and manufacturers that we interviewed generally considered the Approved Products List to have achieved its intent. For example, government and industry officials said that they believe the list provides assurance to government agencies that physical access control systems will work as intended and will help facilitate a more interoperable system government-wide, thereby enhancing security. Moreover, stakeholders we interviewed said they generally thought the associated costs and burdens of going through GSA’s testing and evaluation have been worth the effort. Without the Approved Products List, these stakeholders believe that the quality and interoperability of products would diminish. According to some stakeholders, prior to the current end-to-end testing of products, companies submitted products to the Approved Products List that either did not work as intended or were not compatible with other products. Stakeholders also commented on the improvements to the Approved Products List since GSA took over the certification testing, noting that use of manufacturer self-testing prior to 2012 was not successful. In addition, the cost to industry to do self-testing was high, according to vendors, and some companies did not do it well, according to GSA, EPA, and TSA officials.
OMB Lacks Necessary Information to Conduct Oversight
We found that neither OMB nor GSA currently collects data on agency efforts to implement physical access control system requirements, including use of the Approved Products List. This is significant because our interviews with physical access control systems' manufacturers, integrators, and selected agencies indicate that government-wide implementation of physical access control systems may be limited, raising questions about government-wide progress. Officials from four of the five selected agencies we reviewed told us that, since 2013, when physical access control system end-to-end testing requirements began, they had only purchased GSA-approved physical access control system equipment for a limited number of their facilities. Moreover, they said that where purchasing occurred, it was sometimes for physical access control systems that required replacement because they were nearing the end of their useful life.
For the five selected agencies, we found the following:
General Services Administration: According to GSA officials, a limited number of GSA facilities have physical access control systems that fully adhere to the latest requirements. GSA has met federal physical access control system requirements for 70 of its approximately 340 non-courthouse buildings, with another 90 partially in line with requirements (e.g., PIV access credentials are used). The remaining facilities do not yet meet federal physical access control system requirements. GSA staff also told us that GSA administers the public spaces in approximately 360 courthouse buildings and is developing a security implementation plan for these spaces. GSA officials told us that GSA also administers about 8,000 leased buildings where the tenants in these spaces are generally responsible for setting up physical access control systems, and GSA does not track this information.
Environmental Protection Agency: According to EPA officials, none of EPA’s 72 facilities (including, for example, its headquarters building in the District of Columbia and 10 regional headquarters buildings) currently adhere to the latest physical access control system requirements. Specifically, EPA officials told us that the agency used GSA’s Approved Products List to purchase physical access control system equipment in the past. However, because requirements have changed over time, the 72 buildings where EPA is responsible for physical access control need to be upgraded to the latest requirements. To do so, EPA officials said they will procure these systems using the Approved Products List and prioritize implementation in the future to those facilities with the highest assessed risk. EPA officials said that in August 2013, changes to physical access control systems’ standards required the agency to purchase and install complete physical access control systems that GSA has tested end-to-end and that adhere to the latest requirements. EPA officials said they expect the end-to-end tested physical access control systems to lead to systems that are more secure and interoperable.
Bureau of Prisons: The Bureau of Prisons has implemented Approved Products List-compliant physical access control system equipment in regional and central offices according to agency officials we interviewed. According to officials, the Bureau of Prisons purchased physical access control systems using the Approved Products List for its headquarters complex (three buildings) and six regional offices beginning in 2009 and made upgrades to this equipment in 2015 to adhere to federal physical access control system requirements at the time. However, Bureau of Prisons officials told us that the agency has not implemented physical access control systems at its institutions (prisons). Bureau of Prisons officials told us that physical security and screening procedures at prisons are more stringent than those that occur with typical building-access procedures as persons and belongings are scanned and searched. Physical access control system equipment at these prisons may in fact be problematic because, according to Bureau of Prisons officials, doors should not automatically be opened based on a PIV card without manual checks to ensure staff are not under duress or fraudulent access is being attempted. Bureau of Prisons officials said that at the prisons, identification credentials are first visually examined by prison personnel before access is granted, and all gates and points of entry are controlled by prison personnel.
Transportation Security Administration: According to TSA officials, since 2013, 64 TSA facilities have implemented some physical access control system upgrades using products from the Approved Products List, while an additional 75 leased facilities have been upgraded by GSA. While the 139 facilities are not fully compliant, the only item missing to make these facilities compliant, according to TSA officials, is the capability for interoperable, secure identification checks among federal agencies. This would allow TSA's physical access control systems to recognize revoked PIVs from any federal agency. TSA told us that it plans to roll out this capability in fiscal year 2019. Our review of TSA's 2015 plan to meet the latest physical access control system requirements indicates that the agency is taking steps toward full compliance. TSA's implementation plan was developed in response to DHS's 2012 Modernization Strategy for Physical Access Control Systems, which provides guidance to DHS for implementing secure and compliant end-to-end physical access control systems from GSA's Approved Products List. Over the next 5 years, TSA plans to spend about $73 million on physical access control system implementation, with the bulk of these funds ($51 million) going toward the acquisition of new systems from the Approved Products List.
United States Coast Guard: Coast Guard officials told us that none of the agency's 1,400 facilities where it has security responsibilities fully adhere to the latest federal physical access control system requirements. However, 53 of these facilities have been prioritized for physical access control system implementation. In addition, since 2013, four Coast Guard locations have begun to implement GSA-approved physical access control systems using the Approved Products List. These locations are Jacksonville, FL; New York, NY; Corpus Christi, TX; and the Coast Guard's Security Center in Chesapeake, VA. Decisions about physical access control system equipment are made on a facility-by-facility basis, according to Coast Guard officials. These officials said that due to the decentralized nature of Coast Guard's decision-making process for physical access control systems, it is difficult to say where purchases have been made, and there is no systematic tracking. The Coast Guard does not have a formal plan for upgrading its physical access control systems, but Coast Guard officials told us that they continue to pursue opportunities to upgrade facilities with physical access control system equipment using the Approved Products List. For example, Coast Guard officials told us that they currently emphasize system upgrades for those systems that reach the end of their useful life or otherwise necessitate replacement.
These five selected agencies are illustrative of the oversight difficulties that OMB faces because it does not have baseline information about agencies’ efforts to implement physical access control systems, including implementation of GSA-approved systems. This lack of information hampers OMB’s efforts to (1) meaningfully track and monitor agencies’ adherence to physical access control system requirements, or (2) provide an incentive for agencies to be more accountable with regard to where their physical access control systems stand in terms of their ability to prevent security breaches. Federal internal-control standards state that establishing a baseline is an internal control that can be used to perform monitoring activities. Baseline data allow organizations to identify and address performance issues and deficiencies. Establishing a baseline to understand the current status of physical access control system implementation could improve efforts to evaluate progress federal agencies are making and could also provide an incentive to agencies to further improve. Moreover, federal internal-control standards also direct agencies to hold organizations accountable for their assigned responsibilities.
OMB staff said that OMB oversees physical access control systems’ requirements as part of its normal process of reviewing agencies’ budget submissions but does not conduct oversight outside of this process. This approach, however, does not allow OMB to identify or monitor the extent to which agencies are purchasing physical access control systems that meet the latest requirements or take action if agencies lag in this area.
Selected Agencies Have Faced Various Challenges in Meeting Physical Access Control Systems’ Requirements and May Benefit from Additional Government-wide Support
Selected federal agencies face cross-cutting, as well as agency-specific, challenges to acquiring and integrating physical access control system equipment, according to agency representatives and industry stakeholders we spoke to. These challenges include cost, confusion regarding the use of GSA Schedules, lack of trained agency officials, adapting legacy systems, and security concerns about integrating physical access control systems.
Cost: Officials from most of the five selected agencies, from physical access control system manufacturers, and from integrators we interviewed told us that the cost of buying GSA-approved physical access control systems using the Approved Products List and installing them in adherence to federal physical access control system requirements is a challenge in the current budget environment. Agency representatives also told us they view the regulatory and OMB requirement to upgrade physical access control systems as a costly unfunded mandate that these agencies have difficulty meeting. For example, TSA officials estimate that TSA will need over $14 million per year to continue implementing GSA-approved physical access control systems using the Approved Products List in its 625 facilities, an expense for which the agency receives no additional funds. However, OMB staff told us that agencies have had 13 years in which to replace physical access control systems' technology with products that meet federal requirements, and that the issue may be agencies' training and planning, rather than cost. OMB staff told us that the expectation was that, over time, agencies would implement physical access control systems that used equipment that was exclusively from the Approved Products List and compliant with FIPS.
Confusion regarding GSA Schedules: Officials from some of the five selected agencies and some stakeholders told us that there is some uncertainty in government and industry about which GSA contracting Schedule should be used to acquire GSA-approved physical access control system equipment and services. For example, some stakeholders are unsure which GSA Schedule they should use to provide their services. GSA Schedule 70 is generally used for information technology purchases.
GSA Schedule 84 is generally used for physical security equipment purchases, including products such as security alarms and surveillance equipment. However, some stakeholders told us they found federal guidance unclear as to whether Schedule 70 or 84 should be used for GSA-approved physical access control system purchases. For example, some integrators told us that it was not always clear for what Schedule they should seek approval to be on to sell their services. Federal regulations and an OMB memo both mention Schedule 70 as being the appropriate Schedule for purchasing physical access control systems, but do not explicitly exclude the use of Schedule 84.
Complicating matters, some stakeholders told us some companies are only approved for Schedule 84 because getting approved for both Schedules was time-consuming and costly, and not worth the effort given the lack of clarity regarding which Schedule is required. According to OMB staff, guidance is clear that Schedule 70 should be used to purchase physical access control equipment because this equipment is considered to be information technology. OMB staff explained that their memo on this subject was not intended to introduce ambiguity on the issue of which Schedule is appropriate for use, but to accommodate practices at the Department of Defense, which performs some of its own product testing separate from GSA's testing program.

According to GSA's Office of Government-wide Policy (OGP), GSA is aware of the confusion among GSA's federal customers regarding GSA Schedule use. To address this situation, GSA convened a "reverse industry" training event in September 2018, at which industry representatives provided feedback to GSA on the acquisition process and ways that it could be improved, including issues pertaining to acquisitions related to physical access control systems. According to federal officials, one point of emphasis by industry was that purchasing physical access control equipment from the Approved Products List was not sufficient for having a functioning physical access control system; system integration was also necessary.

During this event, GSA officials took the position that both Schedule 70 and Schedule 84 could be used to purchase physical access control systems, but OMB staff maintain that Schedule 70 is preferred. OMB staff explained that Schedule 84 does not have the testing and evaluation requirements for physical access control system equipment that Schedule 70 does. According to OMB, this frustrates industry vendors that follow the Schedule 70 approval process because these vendors are spending time and money to get approved for Schedule 70, while others are still selling their equipment on Schedule 84 and skirting this process because GSA allows the sale of physical access control system equipment on both Schedules. Schedule 84 has historically been used for security hardware, while Schedule 70 is used for information technology. Since physical access control systems are essentially information technology systems today, OMB believes that Schedule 70 should be used exclusively for physical access control system equipment.
Adapting legacy systems: According to officials at most of the five selected agencies, most manufacturers, and all integrators we spoke to, integrating new physical access control systems' equipment with existing legacy systems can be challenging. Some stakeholders told us that integrating new physical access control systems with old equipment is often more difficult and more costly than starting from scratch. As an illustration of this difficulty, TSA officials told us that integrating new physical access control system equipment with legacy systems has contributed to delays in the integration of TSA's newly installed physical access control system equipment. Partly as a result, only one TSA region is currently integrated into DHS's agency-wide network.
Security concerns about integrating physical access control systems: Officials at two of the selected agencies and one system integrator we spoke to told us that some agency officials are reluctant to more fully integrate their physical access control systems. This reluctance is due to concern about a perceived increase in security risks resulting from more broadly networking physical access control systems’ equipment and access credentials like PIV cards. However, other federal officials told us that this concern is unfounded. According to these officials, integrating agencies’ physical access control systems will enhance security, increase government efficiency, reduce identity fraud, and protect personal privacy by electronically authenticating the validity of access credentials.
Lack of trained agency officials: Stakeholders told us they believe that some federal agency officials have limited knowledge of physical access control system requirements. According to most physical access control systems' manufacturers and integrators we spoke to, federal agencies' contracting officers commonly lack sufficient understanding of federal physical access control system requirements. This insufficient understanding may lead contracting officers to award contracts for the installation of physical access control systems to under-qualified integrators, which can lead to systems being improperly deployed or integrated. These experts said that this situation could lead to security vulnerabilities at these agencies and higher future costs. OMB staff told us that it may be desirable to raise agencies' awareness of federal physical access control system requirements, and a DHS official told us that this issue could be addressed by the training of program staff by GSA who support contracting officers.
OMB staff and officials from ISC and GSA indicated that they are aware of some of the challenges described above, as well as the possibility that some may be more broadly present across the federal government. Staff said that OMB and GSA are working with ISC to develop a consolidated guidance document concerning federal identification credentials. However, OMB staff told us that this guidance is primarily intended to consolidate and replace existing guidance documents and does not contain new information related to the challenges identified by the selected agencies or other stakeholders we spoke to. Best practices that we have previously identified indicate that an interagency mechanism, such as an interagency group led by component or program-level staff, can help federal agencies address policy and program challenges. The guidance of such an interagency group could help agencies address the challenges we identified related to implementing physical access control systems.
ISC, with its unique role in addressing interagency security issues, is well-positioned to assess how the physical security community can help to address the government-wide challenges with physical access control system implementation. For example, ISC is well-positioned to determine through its membership the extent to which the challenges we identified are present across the federal government. In addition, ISC may be able to harness recent interagency efforts, such as the interagency information sharing and collaboration that produced ISC's guidance on planning and managing security resources, to develop guidance addressing agencies' cost issues through the mechanisms that we have previously identified, such as leveraging resources. Further, working with GSA, ISC could help to resolve confusion about which Schedule is the appropriate contracting vehicle, to the extent that this lack of clarity persists. ISC may also be positioned to provide a venue for information sharing to allow agencies to address training needs, such as those related to technical challenges associated with legacy equipment, and to establish compatible policies to address this challenge. Finally, ISC's experience with interagency communication and collaboration could also facilitate agencies' response to concerns about the benefits of interoperable physical access control systems and could help agencies reach consensus on the matter. According to a senior ISC official, the ISC has updated its countermeasures standard to assist the physical security community to better understand the references and policies associated with procuring and installing physical access control systems. Additionally, an ISC official told GAO that the ISC has approved commissioning a working group to assess what additional guidance related to physical access control would be beneficial to the federal physical security community. However, without a government-wide review of the challenges we have identified, those challenges will be difficult to overcome. If these issues are not addressed, the fully interoperable physical access control system network envisioned after September 11, 2001, and the increased security and efficiency that it would entail, will be difficult to attain.
Conclusions
OMB and GSA have taken various actions to help federal agencies implement GSA-approved physical access control systems. However, selected agencies have made limited progress, and have faced challenges that impede their progress. Lacking a baseline level of information on adherence to physical access control system requirements prevents OMB from gauging the level of progress being made by agencies. Likewise, an increased understanding of the extent and nature of the challenges that federal agencies may face as they implement physical access control systems may help enhance adherence to physical access control system requirements. This two-pronged approach, the establishment of a baseline and a better understanding of the challenges agencies face as they implement physical access control systems, could prove beneficial in achieving the vision of secure, interoperable systems across departments and agencies.
Recommendations for Executive Action
We are making one recommendation to OMB, and one recommendation to DHS.
The Director of OMB should determine a government-wide baseline level of progress in meeting physical access control system requirements, including implementation of GSA-approved systems, and should monitor progress in meeting these requirements. (Recommendation 1)
The Secretary of Homeland Security should direct the ISC, in collaboration with member agencies, to assess the extent of, and develop strategies to address, government-wide challenges to implementing physical access control systems. (Recommendation 2)
Agency Comments
We provided a draft of this report to the Departments of Commerce, Justice, and Homeland Security, EPA, GSA, and OMB for their review and comment. DHS, GSA, and OMB provided technical comments, which we incorporated as appropriate. DHS provided written comments and concurred with our recommendation. DHS’s comments are reprinted in appendix II. OMB staff told us that they did not have a comment on our recommendation. The Departments of Commerce and Justice and EPA did not have any comments on our report.
We will send copies of this report to the Ranking Member, Subcommittee on Oversight and Management Efficiency, Committee on Homeland Security, House of Representatives and the Secretaries of Commerce and Homeland Security, the Assistant Attorney General for the Department of Justice, the Administrator of the General Services Administration, the Director of the Office of Management and Budget, and the Acting Administrator of the Environmental Protection Agency. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
Appendix I: Objectives, Scope, and Methodology
Our objectives were: (1) to assess the steps the Office of Management and Budget (OMB) and the General Services Administration (GSA) have taken to fulfill their government-wide responsibilities related to physical access control system implementation requirements and (2) to identify challenges selected federal agencies face in adhering to federal physical access control system requirements.
To assess the steps OMB and GSA have taken to fulfill their government-wide responsibilities to implement Homeland Security Presidential Directive 12 (HSPD-12) requirements, and to assess progress in these efforts, we interviewed OMB staff and GSA officials about their efforts to ensure that agencies meet the requirement to use GSA's Approved Products List. We also asked them to provide data, if available, on agencies' Approved Products List usage. We interviewed seven physical access control system manufacturers (AMAG, Gallagher Group, HID Global, Identiv, Lenel, Software House, and XTec); five integrators, that is, contractors that install the equipment and connect it to agency networks with software (Convergint Technologies, Chenega Corporation, MC Dean, Parsons, and Systems Engineering, Inc.); as well as other industry organizations (GSA Schedules Inc., the Secure Technology Alliance, and CertiPath), based on multiple recommendations from previous interviews.
To identify illustrative examples of the progress that individual agencies have made in using the Approved Products List and implementing other HSPD-12 requirements, as well as the challenges that they have faced in doing so, we selected five executive branch agencies. These included (1) U.S. Coast Guard in the Department of Homeland Security (DHS); (2) Bureau of Prisons in the Department of Justice; (3) Transportation Security Administration in DHS; (4) Environmental Protection Agency (EPA); and (5) GSA. We interviewed officials from these agencies about the Approved Products List and collected data on agencies' purchases of GSA-approved physical access control system equipment using the Approved Products List since 2013. Our criteria for agency selection included agencies with facilities (1) held by non-defense executive branch agencies; (2) located in the United States; (3) totaling 200 or more buildings; and (4) that are geographically dispersed (having buildings in 10 or more states). We also gave consideration to agencies with large numbers of buildings (choosing four larger, one smaller) and selected at least two agencies with homeland security responsibilities. We limited our scope to non-defense agencies because we have ongoing work related to these issues at the Department of Defense. We also requested and reviewed documents concerning Approved Products List usage and physical access control systems' deployment from each of these five selected agencies. Our use of the term stakeholders may include agencies, physical access control manufacturers, integrators, and knowledgeable organizations or officials. Results from our interviews with the selected agencies cannot be generalized. To identify the challenges most frequently cited by agencies, manufacturers, integrators, and other stakeholders, we conducted an analysis of our interviews, reviewed documents provided by agencies, and performed a literature review. In addition to considering the range of federal requirements related to physical access control, we considered relevant federal internal control standards in the areas of monitoring, enforcement, planning, and training, as well as collaboration best practices identified in prior GAO work. Further, we reviewed other relevant documents including GAO reports, GSA documentation, OMB memorandums, National Institute of Standards and Technology standards, Interagency Security Committee guidance, a report from the DHS Office of the Inspector General, and additional federal guidance related to physical access control systems.
We conducted this performance audit from October 2017 to December 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Comments from the Department of Homeland Security
Appendix III: GAO Contact and Staff Acknowledgments
GAO Contacts
Staff Acknowledgments
In addition to the individual named above, Dave Sausville (Assistant Director); Kieran McCarthy (Analyst in Charge); Adam Gomez; Cam Flores; Elizabeth Wood; Josh Ormond; and Melissa Bodeau made key contributions to this report.

Why GAO Did This Study
A 2004 federal directive and the related standard set forth a vision for using information technology to verify the identity of individuals accessing federal buildings. The vision calls for secure and reliable forms of identification that work in conjunction with access control systems. Interoperability of these systems across departments and agencies is part of the vision. OMB and GSA have government-wide responsibilities related to this effort. ISC provides guidance to non-military executive branch agencies on physical security issues. GAO was asked to examine PACS implementation efforts.
This report discusses (1) steps OMB and GSA have taken to fulfill their government-wide responsibilities related to PACS and (2) challenges selected federal agencies face in meeting current requirements. For review, GAO analyzed documents from Commerce, GSA, ISC, and OMB. GAO selected five non-military agencies based on factors including number of buildings and geographic location. GAO reviewed relevant requirements and key practices. GAO also interviewed federal agency officials, PACS vendors, and knowledgeable industry officials.
What GAO Found
The Office of Management and Budget (OMB) and the General Services Administration (GSA) have taken steps to help agencies procure and implement secure, interoperable, GSA-approved “physical access control systems” (PACS) for federal buildings. PACS are systems for managing access to controlled areas within buildings. PACS include identification cards, card readers, and other technology that electronically confirm employees' and contractors' identities and validate their access to facilities (see figure). Steps taken include the following:
OMB issued several memos to clarify agencies' responsibilities. For example, OMB issued a 2011 memo citing Department of Homeland Security (DHS) guidance that agencies must upgrade existing PACS to use identity credentials before using relevant funds for other activities. But, GAO found OMB's oversight efforts are hampered because it lacks baseline data on agencies' implementation of PACS. Without such data, OMB cannot meet its responsibility to ensure agencies adhere to PACS requirements or track progress in implementing federal PACS requirements and achieving the vision of secure, interoperable systems across agencies.
GSA developed an Approved Products List that identifies products that meet federal requirements through a testing and evaluation program. Federal agencies are required to use the Approved Products List to procure PACS equipment. GSA also has provided procurement guidance to agencies through its identity management website.
Officials from the five selected agencies that GAO reviewed identified a number of challenges relating to PACS implementation including cost, lack of clarity on how to procure equipment, and difficulty adding new PACS equipment to legacy systems. Officials from OMB, GSA, and industry not only confirmed that these challenges exist but also told GAO that they were most likely present across the federal government. The Interagency Security Committee (ISC), chaired by the DHS and consisting of 60 federal departments and agencies, has a mission to develop security standards for non-military agencies. In this capacity the ISC is well-positioned to determine the extent that PACS implementation challenges exist across its membership and to develop strategies to address them. An ISC official told GAO that the ISC has taken steps to do so including setting up a working group to assess what additional PACS guidance would be beneficial.
What GAO Recommends
GAO recommends (1) that OMB determine and regularly monitor a baseline level of progress on PACS implementation and (2) that ISC assess the extent of, and develop strategies to address, government-wide challenges to implementing PACS. OMB had no comment on the recommendation. DHS concurred with the recommendation to ISC.
Background
BOP’s Organization and Workforce
Justice Management Division (JMD)

JMD provides the Federal Bureau of Prisons' senior management with guidance on Department of Justice (DOJ) policy for all matters pertaining to organization, management, and administration, including the use of human capital flexibilities such as retention incentives.
BOP is responsible for incarcerating all federal offenders sentenced to prison. To carry out its mission, BOP, under the oversight of DOJ’s JMD, manages the human resource operations of its institutions, including the use of retention incentives. BOP administers, monitors, and oversees retention incentives through its Central Office, regional offices, and institutions.
Central Office. The Central Office serves as BOP's headquarters and provides oversight of BOP operations and program areas. Within the Central Office is BOP's Human Resource Management Division (HRMD), which is responsible for developing, implementing, and administering human resource policies and programs, including the use of retention incentives that meet OPM and DOJ requirements. In addition, the Central Office's Program Review Division (PRD) is responsible for assessing BOP programs, including human resources, to ensure that they are managed and operated effectively.
Regional offices. BOP has six regional offices that cover the Mid-Atlantic, North Central, Northeast, South Central, Southeast, and Western regions of the United States. These offices, each led by a regional director, oversee the operations of the 122 federal institutions within their respective geographic regions of the country. According to BOP officials, regional office staff also provide local-level oversight of institutions' human capital programs, such as retention incentives, among other things.
Institutions. BOP institutions are managed by a warden and other officials, including an executive assistant and associate warden who generally provide overall direction and, in part, administer the institution’s human capital policies, including policies on retention incentives. Correctional services staff represent the largest segment of each institution’s workforce and are responsible for the correctional treatment, custody, and supervision of inmates. Non-correctional services staff include, among others, those employees assigned to non-correctional services management, facility operations, and the health services unit. Workers in health services and psychology services are responsible for providing inmates with medical, dental, and mental health services and include, for example, dentists, pharmacists, physicians, nurses, psychologists, and drug treatment specialists.
Federal Laws and Regulations Related to Retention Incentives
The Federal Employees Pay Comparability Act of 1990 first authorized OPM to allow federal agencies to give incentives, including retention incentives, to employees. The Federal Workforce Flexibility Act of 2004 provided federal agencies increased flexibilities regarding these incentives. For example, individual retention incentives that were capped at 25 percent of an employee's basic pay rate could be increased up to 50 percent in cases of critical agency need with OPM's approval. Generally, under OPM regulations, an agency is authorized to pay a retention incentive when it determines that the unusually high or unique qualifications of the employee, or a special need of the agency for the employee's services, makes it essential to retain the employee and that the employee would be likely to leave federal service in the absence of an incentive. In addition, OPM requires agencies to develop plans for using retention incentives outlining, in part, the required documentation for justifying the retention incentive and any criteria for determining the amount of the incentive and the length of the service period. Generally, agencies must require that employees sign a written service agreement that outlines the terms of the service, such as the employee's agreement to remain with the agency for a certain length of time. Additionally, according to OPM regulations, to qualify for a retention incentive, each employee must have a performance rating of at least "fully successful" or an agency's equivalent performance rating.
BOP’s Retention Incentive Program
BOP funds the majority of its retention incentives through its Salaries and Expenses appropriation account, which represented almost 93 percent of BOP's budget in fiscal year 2016. According to BOP officials, BOP's Central Office allocates funding from the Salaries and Expenses account to the regional offices. These regional offices then determine how to allocate their budget among various salary and expense activities, including retention incentives. HRMD delegates retention incentive determinations to each institution. In accordance with OPM requirements and BOP's October 2016 Program Statement on Compensation, wardens make retention incentive requests based on documented evidence that the employee possesses unusually high or unique qualifications or meets a special need of the agency and has a performance rating of at least "successful or its equivalent." These incentives are calculated as a percentage of the employee's basic pay and are disbursed in installments to the employee each pay period.
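To make the payment mechanics concrete, the sketch below works through how an incentive computed this way might be paid out. It is a minimal illustration, not BOP's actual payroll logic: the 10 percent rate, the salary figure, and the 26-pay-period assumption are hypothetical, while the 25 and 50 percent caps reflect the OPM limits described above.

```python
# Illustrative sketch of a retention incentive computed as a percentage of
# basic pay and disbursed in per-pay-period installments. All inputs are
# hypothetical; the caps reflect the OPM limits described above.

BASIC_CAP = 0.25      # standard cap on an individual retention incentive
CRITICAL_CAP = 0.50   # cap in cases of critical agency need, with OPM approval
PAY_PERIODS = 26      # assumed number of federal pay periods per year

def annual_incentive(basic_pay: float, rate: float, opm_approved: bool = False) -> float:
    """Return the annual retention incentive for a given rate of basic pay."""
    cap = CRITICAL_CAP if opm_approved else BASIC_CAP
    if rate > cap:
        raise ValueError(f"rate {rate:.0%} exceeds the {cap:.0%} cap")
    return basic_pay * rate

incentive = annual_incentive(basic_pay=50_859, rate=0.10)  # hypothetical 10% rate
print(f"Annual incentive: ${incentive:,.2f}")                          # -> $5,085.90
print(f"Installment per pay period: ${incentive / PAY_PERIODS:,.2f}")  # -> $195.61
```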
Other Compensation-Based Human Capital Flexibilities
In addition to retention incentives, BOP has authority to provide other compensation-based human capital flexibilities to employees, in certain circumstances. The following summarizes some of the compensation-based human capital flexibilities that BOP uses, in addition to retention incentives, to retain and recruit staff:
Recruitment and relocation incentives. BOP pays recruitment incentives to new hires and relocation incentives to current employees who elect to move to a different geographic area, when a position is likely to be difficult to fill in the absence of an incentive.
Student loan repayments. Using this authority, BOP may repay federally-insured student loans to attract job candidates or retain current employees.
Special salary rates. With OPM approval, BOP may establish higher rates of pay for an occupation or group of occupations nationwide or in a local area when it finds the government’s recruitment or retention efforts are, or would likely become, significantly handicapped without those higher rates.
Physician and dental comparability allowances. Comparability allowances may be paid to certain eligible physicians or dental professionals who enter into service agreements. These allowances are paid only to categories of physicians and dentists for which the agency is experiencing recruitment and retention problems and are fixed at the minimum amounts necessary to deal with such problems.
BOP Increased Its Use of Retention Incentives and Used Them Primarily to Retain Staff in California and for Medical Professionals Nationwide
BOP’s Total Retention Incentive Expenditures and the Number of Employees Receiving Retention Incentives Generally Increased from Fiscal Year 2012 through Fiscal Year 2016
BOP retention incentive expenditures generally increased from $10.7 million in fiscal year 2012 to $14.0 million in fiscal year 2016. Additionally, as illustrated in table 1, the number of employees who received retention incentives increased each year from 2,024 employees in fiscal year 2012 to 2,460 employees in fiscal year 2016.
In general, BOP employees who received retention incentives received the incentive for more than one year. For example, from fiscal year 2012 through fiscal year 2016, a total of 3,382 BOP employees received retention incentive payments. Of those, 82 percent (2,766 of 3,382) received retention incentive payments for at least 2 years and 39 percent received retention incentives all 5 years, as shown in figure 1.
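These percentages can be checked directly from the counts cited above; the short calculation below uses only figures stated in this report (the 22 and 31 percent growth figures are derived here, not published values).

```python
# Quick arithmetic check of the retention incentive figures cited above.
total_recipients = 3_382        # employees receiving incentives, FY2012-FY2016
at_least_two_years = 2_766
print(f"{at_least_two_years / total_recipients:.0%}")     # -> 82%

# Growth from FY2012 to FY2016, derived from the stated totals
print(f"{(2_460 - 2_024) / 2_024:.0%} more recipients")   # -> 22%
print(f"{(14.0 - 10.7) / 10.7:.0%} higher expenditures")  # -> 31%
```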
BOP Used Retention Incentives Primarily at Four California Institutions and for Medical Professionals Nationwide
From fiscal years 2012 through 2016, BOP spent more than 97 percent of its total retention incentive expenditures on employees at four California institutions and for medical professionals nationwide. BOP’s total retention incentive expenditures for the four California institutions and medical professionals nationwide in fiscal year 2016 are provided in figure 2.
Four California Institutions. The California institutions—United States Penitentiary (USP) Atwater, Federal Correctional Institution (FCI) Herlong, FCI Mendota, and Federal Correctional Complex (FCC) Victorville—constituted the largest portion of BOP's total retention incentive expenditures, and the level of their expenditures remained relatively steady from fiscal year 2012 through 2016. BOP provides group retention incentives for staff at General Schedule (GS) grade levels 12 and below and those in the Federal Wage System at three institutions—USP Atwater, FCI Herlong, and FCC Victorville. BOP also provides individual retention incentives to its employees at GS grade levels 12 and below and in the Federal Wage System at FCI Mendota. As shown in figure 3, our analysis of BOP data found that from fiscal years 2012 through 2016, these four California institutions had the largest percentage of retention incentive expenditures across institutions as well as the largest percentage of employees who received retention incentives.
Additionally, the four California institutions' retention incentive expenditures remained relatively steady—around $8.1 to $8.2 million during the 5-year period—even though the overall number of employees who received the incentives generally increased. BOP officials told us that these California institutions' retention incentive expenditures remained relatively steady in spite of an overall increase in the number of employees receiving incentives, in part, because in fiscal year 2013 BOP reduced the retention incentive rate—the percentage of an employee's basic pay that determines the employee's retention incentive—by 3 percent at the four California institutions.
BOP officials reported using retention incentives primarily at these four institutions to supplement correctional officers’ salaries and compensate for the gap between BOP’s and other institutions’ salaries. Specifically, officials told us that these four California institutions were consistently understaffed as a result of their lower salaries in comparison to salaries offered at California state and local prisons and at other BOP institutions in California metropolitan areas. The Department of Labor’s Bureau of Labor Statistics reports that the average salary for correctional officers in California in 2016 was $70,020. For the same year, the annual average salary for BOP correctional officers at these four institutions was $50,859. To bring these four California institutions’ salaries in line with those offered by state, local, and other BOP institutions in California metropolitan areas, BOP officials told us that they first use recruitment incentives to attract and hire staff and then provide retention incentives to employees with a performance rating of at least “successful.”
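Put in arithmetic terms, using only the two 2016 averages cited above, the gap these incentives are intended to narrow looks like this:

```python
# The 2016 salary gap described above, computed from the cited averages.
state_avg = 70_020   # BLS average for California correctional officers, 2016
bop_avg = 50_859     # average at the four BOP California institutions, 2016

gap = state_avg - bop_avg
print(f"Gap: ${gap:,}")                                                 # -> Gap: $19,161
print(f"BOP average is {gap / state_avg:.0%} below the state average")  # -> 27%
```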
Medical Professionals. From fiscal years 2012 through 2016, BOP retention incentive expenditures for medical professionals increased by an average of approximately 21 percent per year. Our analysis showed that most recently—for fiscal years 2015 and 2016—BOP retention incentive expenditures for medical professionals accounted for the largest portion of BOP's total retention incentive expenditures across the various occupation groups and were primarily responsible for the overall increase in BOP's total retention incentive expenditures from fiscal year 2012 through fiscal year 2016. For example, in fiscal year 2016, BOP spent approximately 42 percent of total retention incentive expenditures on medical professionals ($5.8 million), 27 percent on correctional officers ($3.8 million), and the remaining 31 percent on employees in other occupations. In total, BOP retention incentive expenditures for medical professionals increased from approximately $2.7 million in fiscal year 2012 to $5.8 million in fiscal year 2016, as shown in figure 4. The increase accounted for 92 percent of BOP's total increase in retention incentive expenditures during the five-year period. In comparison, BOP's retention incentive expenditures for correctional officers and all other occupations remained relatively steady from fiscal year 2012 through fiscal year 2016, increasing by an average of approximately 1 percent per year.
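The growth figures above can be approximated from the rounded dollar amounts in this report; the sketch below shows the arithmetic. Note that with the rounded figures the share of the total increase comes to about 94 percent, while GAO's published 92 percent reflects the unrounded underlying data.

```python
# Reproducing the medical-professional growth figures from the stated
# amounts (in millions of dollars). The ~21 percent figure is an average
# annual growth rate over the four intervals from FY2012 to FY2016.
medical_2012, medical_2016 = 2.7, 5.8
total_2012, total_2016 = 10.7, 14.0

avg_annual_growth = (medical_2016 / medical_2012) ** (1 / 4) - 1
print(f"Average annual growth, medical: {avg_annual_growth:.0%}")  # -> 21%

share_of_increase = (medical_2016 - medical_2012) / (total_2016 - total_2012)
print(f"Share of total increase: {share_of_increase:.0%}")         # -> 94% (rounded inputs)
```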
According to our analysis, the increase in retention incentive expenditures for medical professionals during the five years is partially explained by the increase in the number of institutions providing retention incentives to medical professionals. Specifically, from fiscal years 2012 through 2016, the number of institutions providing these incentives increased from 53, with 341 employees in medical occupations receiving retention incentives, to 84, with a total of 646 such employees receiving them.
According to BOP officials, BOP primarily uses retention incentives for medical professionals in an effort to retain these employees by supplementing BOP salaries, which are generally lower than salaries offered to medical professionals in the private sector. Officials told us that BOP has designated medical professional positions as hard-to-fill and that, therefore, retaining these professionals in a correctional setting requires the use of a variety of incentives, including retention incentives, to increase pay.
BOP Has a Variety of Internal Controls in Place throughout the Retention Incentive Process
BOP’s Internal Controls Are Intended to Ensure That Retention Incentive Applications and Approvals Meet Requirements
BOP has a number of internal controls in place to ensure that retention incentive applications meet BOP and other requirements. BOP officials told us that these controls are part of a multilayered application and review process that begins at the institution and culminates at BOP's Central Office. Our review of a random sample of 40 application packet case files for retention incentives awarded from fiscal year 2014 through fiscal year 2016 found that they all generally incorporated the internal controls described by officials. The key controls in this process include:

Application review at the institution and regional levels. According to BOP officials, the retention incentive application process begins with an institution's human resources office, whose staff complete a retention incentive application on behalf of an employee. The institution's human resources office verifies that the information in the application justifies a retention incentive and that funds are available to pay the incentive. Although it is not required, BOP officials said that they use a retention incentive application checklist to help institutions ensure that retention incentive applications are complete. The institution's human resources office then submits the completed application packet, which includes supporting documentation, to the warden for review. Next, the application packet is forwarded to the respective BOP regional director, who also reviews it for accuracy and completeness. The regional director then adds an approval statement and forwards the packet to the Central Office for final review and approval. Of the 40 randomly selected application packet case files that we reviewed, 36 included a retention incentive checklist used by the institutions, and all contained information to justify the retention incentive as well as a statement of the regional director's approval.
Central Office’s final application approval. BOP policy requires that all retention incentive applications undergo two levels of review in BOP’s Central Office: first by the Human Resource Management Division’s (HRMD) Staffing and Employee Relations Section (SERS) and next by HRMD’s Personnel Director, for final review and approval. According to BOP officials, during the review process there is ongoing communication between the various entities to ensure that applications are complete and accurate; for example, if SERS finds an error in the application or requests additional information, SERS returns the application to the regional or institutional level for correction and re-review. All of the 40 BOP application packet case files that we reviewed included approvals by HRMD’s Personnel Director or an authorized official, as required by BOP policy.
Annual review and re-certification to continue retention incentives. According to BOP policy, on an annual basis, institutions’ human resources offices are required to review employees’ retention incentives to determine whether the incentive is still warranted. Payment of a retention incentive may be recertified and continued as long as the conditions giving rise to the original determination to pay the incentive still exist and funds are available. For each retention incentive, an institution’s human resources office must determine whether to continue, adjust, or terminate the incentive within one year of the initial or most recent approval. If the human resources office decides to continue the retention incentive, the institution’s warden must again submit a retention incentive application. Applications to continue the retention incentive proceed through the same review and approval process as initial applications. Of the 40 application files that we reviewed, 29 were continuations and 8 were initial requests for a retention incentive.
BOP Institutions Use Internal Controls to Help Monitor the Expiration, Continuation, or Termination of Retention Incentives
According to BOP officials, after the initial approval of a retention incentive, an institution's human resources office has primary responsibility for the monitoring of retention incentive payments. According to officials, institutions use a variety of internal controls to monitor the expiration, continuation, or termination of retention incentives, for example:

Monitoring expiration dates. BOP officials stated that institutions' human resources offices monitor retention incentives in order to identify incentives that are approaching their expiration date and need to be terminated or renewed. For example, according to BOP officials from USP Atwater, FCC Butner, and FCI Phoenix, staff from their institutions' human resources offices may generate a retention incentive activity report and cross reference this report with their locally generated tracking sheets. This process helps identify retention incentives approaching their expiration dates so that the human resources offices can submit a request for continuation before the incentive expires.
Using automated reminders to prompt file review. BOP officials stated that institutions use automated reminders to alert human resources staff to check the records of retention incentive recipients for human resources-related events such as promotions or relocations that could affect the continuation of a retention incentive.
Following a checklist of steps for relocation processes. BOP officials told us that in April 2016 they instituted a checklist that outlines steps that an institution’s human resources staff must take when employees relocate to a different institution. Based on our review of this checklist, one step on the sheet prompts human resources staff to review the employee’s retention incentive. According to BOP policy, when an employee receiving a retention incentive transfers to another location, the human resources office where the employee was receiving the retention incentive is responsible for submitting a request to terminate the incentive. The termination must be effective the last day of the pay period that the employee occupies the position.
Submitting forgiveness waivers. BOP officials told us that institutions submit forgiveness waivers if a request to continue a retention incentive is not submitted and approved prior to the retention incentive expiring. BOP officials said that a forgiveness waiver is considered an acknowledgement of an administrative error and is a late submission of a retention incentive renewal that was still warranted. The waiver is not a request to forgive an overpayment since the employee was still considered to be eligible for the retention incentive. Of the 40 retention incentive applications that we reviewed, 5 applications included forgiveness waivers to excuse the tardiness of the filing and request continuations of the retention incentive.
BOP and DOJ Conduct Periodic Reviews of Retention Incentive Controls
According to BOP officials, BOP conducts periodic audits and reviews of its human capital activities and related internal controls to ensure that retention incentives are being used appropriately. The following offices conduct various audits and reviews involving BOP's retention incentives:

BOP's Program Review Division (PRD) audits regional and institutional human resources functions. PRD audits BOP's regional and institutional human resources offices to ensure that they are in compliance with BOP policies and procedures. According to BOP officials, as part of the audit process, PRD audits retention incentives to ensure that they have the proper approvals and are justified. PRD audits each institution's human resources office at least every three years. During these audits, PRD generates retention incentive activity reports (the same reports that institutions run when monitoring for expiration dates) to check the accuracy of retention incentive programs under review. Following each audit, PRD issues a final report with findings to the institution and to the staff operating the program area under audit. Institutions respond to the report with corrective actions that the institution will take to address the findings. When the institution has resolved all corrective actions from the audit, the audit is closed. Additionally, each quarter, PRD provides HRMD with a report that summarizes its quarterly audit findings. According to BOP officials, HRMD uses these reports to identify any agency-wide trends that need to be addressed.
Our review of BOP data showed that between fiscal years 2012 and 2016, PRD conducted nearly 200 audits. For example, in the fourth quarter of fiscal year 2016, PRD audited five institutions’ and regional offices’ human resource management functions. During these audits, PRD identified nine deficiencies, one of which pertained to retention incentives. Specifically, it found that one audited institution did not terminate an employee’s retention incentive after the employee had relocated to another institution. To correct the deficiency, the institution cancelled the retention incentive which discontinued future disbursements. According to BOP officials, a bill was generated to recoup the overpayment from the employee.
BOP institutions conduct annual operation reviews of internal functions, such as human resources. BOP officials told us that each institution conducts annual operational reviews of various internal functions, such as human resources. According to BOP’s Program Review Guidelines for Human Resource Servicing Offices, during these reviews, institutions are required to review supporting documentation for staff currently receiving an incentive to determine if the incentives are still warranted. If the initial request for the retention incentive was made over the preceding 12 months, institutions are also required to ensure that it was approved. According to BOP officials, the results of these reviews are reported to PRD through the Central Office.
DOJ’s Justice Management Division (JMD) audits BOP’s human resources programs. According to BOP officials, JMD conducts audits of component-level human resources programs to determine whether BOP’s systems are compliant with DOJ policy and aligned with DOJ’s Human Capital Strategic Plan. JMD’s most recent audit of BOP’s human resources programs that included a review of BOP’s retention incentives occurred in September 2010 at BOP’s Human Resource Service Center in Grand Prairie, Texas. JMD found that in some cases BOP granted retention incentives prior to the signing of service agreements. JMD also found that BOP lacked documentation to authorize a group retention incentive for employees at its Victorville, California institution. BOP’s written response to the findings stated that JMD incorrectly applied the service agreement requirement, as service agreements were not warranted in the specific case that it identified. Additionally, BOP stated that the documents JMD identified as missing from the case files in question were kept in separate files and not required to be part of the retention incentive application. JMD agreed with BOP’s responses and in January 2013, JMD closed out the audit’s findings noting that these responses satisfied all required corrective actions.
BOP Conducts Limited Planning and Evaluation of the Effectiveness of Retention Incentives
BOP’s Planning for the Use of Retention Incentives is Limited
While BOP takes a number of steps to determine current workforce needs and how to fill those needs, BOP does not strategically plan for how retention incentives can be used to meet long-term human capital goals. BOP officials stated that planning for human capital needs is conducted at institutions during quarterly workforce utilization meetings or manpower salary meetings. During these meetings, executive staff at the institution discuss the current state of the institution's workforce. According to the BOP officials, while considering attrition, hiring, and turnover rates, the executive staff decide on the strategies they will employ to attract and retain employees to meet their current needs.
While officials we spoke with at four institutions have discussed retention incentives at their workforce utilization meetings, the content of these discussions varied. According to these officials and our review of meeting minutes from the four institutions, discussions about retention incentives respond to each institution's short-term staffing situation rather than address future staffing needs based on an overall strategic human capital plan. For example:
USP Atwater officials told us that they review the current turnover rate, budget, projected vacancies, and use of retention incentives at annual budget development meetings. Meeting minutes reflected the following on retention incentives: “retention … still necessary to retain staff and hard-to-fill positions.”
FCC Butner is a medical facility that offers retention incentives to all medical officers (all types of doctors) and nurses (practitioners, registered, etc.) at the institution. According to Butner officials, during workforce utilization meetings, Butner officials discuss recruitment and staffing trends for the institution and plans for how to address any staffing challenges. Meeting minutes we reviewed did not indicate specific discussions about the use of retention incentives.
FCC Pollock executive staff discuss current institutional salary expenditures and projections and the status of vacant positions at workforce utilization meetings. While meeting minutes we reviewed indicated discussions about projected expenditures for incentive awards, the minutes did not differentiate between retention incentive awards, and other incentive awards such as recruitment or relocation incentive awards.
FCI Phoenix officials stated that in their workforce utilization meetings, executive staff discuss salary projections and vacancy statuses. Meeting minutes we reviewed did not indicate specific discussions about the use of retention incentives.
BOP decisions about retention incentives are currently not tied to any strategic human capital plan for how to use human capital flexibilities—such as retention incentives—to address its ongoing challenge of retaining staff in hard-to-fill positions. According to officials, retention incentives are awarded on an as-needed basis, determined by an institution's warden, if funds are available.
According to key principles for effective strategic human capital planning, such planning is an important component of an agency’s effort to develop long-term strategies for acquiring, developing, and retaining staff needed for an agency to achieve its goals. Specifically, senior leaders should be involved in developing, communicating, and implementing strategic human capital plans. Within an agency’s strategic human capital plan, the human capital policies, practices, and programs—for example, an agency’s retention incentive program—should clearly link to the human capital and program goals of the organization. By not having a strategic human capital plan that clearly establishes strategies that will be used to achieve specific human capital goals, BOP cannot ensure that its institutions are strategically managing their workforces in a manner that meets the agency’s human capital needs.
In August 2017, BOP officials told us that they began drafting a strategic human capital operating plan that will include strategic objectives, action plans, performance objectives and measures, and evaluation/reporting requirements. Officials stated that the plan will also include planning regarding the use of human capital flexibilities, such as retention incentives. BOP officials told us that they anticipate that the strategic human capital operating plan will be a supplement to their workforce utilization meetings and that an agency-wide plan will provide a set of strategies for all institutions to consider. However, BOP could not provide documentation that the project had begun or that the plan would include a strategic approach specific to retention incentives. Including retention incentives in BOP's strategic human capital operating plan would create a roadmap for the agency and the institutions to use to move from being reactive to their current workforce needs—for example, awarding retention incentives on an ad hoc basis when funds are available—to being strategic in how retention incentives are used and to ensure that these and other flexibilities help the agency achieve its long-term workforce goals.
BOP Does Not Evaluate the Effectiveness of Retention Incentives
From fiscal year 2012 through fiscal year 2016, BOP spent more than $59 million on retention incentives but has not established any measures to evaluate their effectiveness. According to officials, BOP has not evaluated the effectiveness of its use of retention incentives because BOP officials consider a retention incentive successful if an employee does not leave the agency. However, BOP also uses other human capital flexibilities along with retention incentives to help retain staff. For example, BOP uses physician and dental comparability allowances—additional pay to a physician or dentist who enters into an agreement for a specified period of service—to help retain these medical personnel. According to officials, it would be difficult to compete with private sector salaries without the use of all available incentives. However, BOP has not studied whether or how retention incentives have contributed to employees' retention in relation to other incentives, such as physician and dental comparability allowances.
According to our work on strategic human capital management and OPM’s guidance, it is crucial for organizations to evaluate the success of their human capital strategies, such as the use of retention incentives. In measuring the performance of these strategies and their contribution to key programmatic results, agencies can make adjustments, if necessary. For example, agencies can use evaluation results to make targeted investments in certain human capital strategies—such as the use of retention incentives—creating a cycle of strategic workforce management, where evaluation informs planning, planning dictates strategies, and strategies are evaluated for effectiveness. While BOP uses retention incentives to address critical skills gaps—such as with medical professionals—evaluating the effectiveness of retention incentives would help BOP determine whether and how retention incentives, as well as other human capital flexibilities, contribute to an employee’s continued employment at BOP or if adjustments to BOP retention strategies must be made for improved results.
BOP officials agreed that evaluating the effectiveness of retention incentives would help them be more strategic about their human capital needs and spending on incentives. By including and implementing such an evaluation in its upcoming strategic human capital operating plan, BOP could better determine if it is making maximum use of its funds to retain the necessary qualified personnel or if changes must be made to most effectively retain its staff.
Conclusions
As the largest employer within DOJ with some staff working in remote locations and undesirable conditions, BOP relies on a number of available flexibilities, including retention incentives, to help retain its employees. However, BOP currently lacks a strategic approach for using and evaluating retention incentives to address human capital goals. Given BOP’s ongoing staffing challenges, for example, retaining staff in hard-to- fill medical positions, developing a plan that includes a thoughtful blueprint for using retention incentives could help BOP better anticipate and address staffing needs. Moreover, evaluating its use of retention incentives could help BOP determine whether these incentives are effective or whether adjustments are needed to better retain its employees. By using evaluation results to inform planning, and planning to inform how retention incentives are used, BOP would be better positioned to achieve its long-term human capital goals and address its critical staffing needs.
Recommendations for Executive Action
We are making two recommendations to BOP:

1. The Director of BOP should include in the forthcoming strategic human capital operating plan (1) human capital goals and (2) strategies on how human capital flexibilities—including retention incentives—will be used to meet these goals. (Recommendation 1)

2. The Director of BOP should evaluate the effectiveness of BOP's use of retention incentives to help determine whether the incentives have helped BOP achieve its human capital goals or if adjustments in retention incentives are needed. (Recommendation 2)
Agency Comments
We requested comments on a draft of this report from DOJ. In an email received November 15, 2017, the DOJ liaison stated that DOJ concurred with our recommendations. The Department did not provide official written comments to include in our report, but did provide written technical comments, which we incorporated as appropriate.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Attorney General and the Director of BOP. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-9627 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in Appendix III.
Appendix I: Objectives, Scope, and Methodology
This report examines (1) how BOP has used its authority to pay retention incentives; (2) what internal controls are in place for the use of retention incentives; and (3) the extent to which BOP plans for and evaluates the use of retention incentives.
To determine how BOP has used its authority to pay retention incentives, we reviewed BOP’s July 2012 report on its use of recruitment, relocation, and retention (3R) incentives. We then obtained underlying retention incentive expenditure data from DOJ’s Justice Management Division because it serves as the focal point for performance and financial information for all Department of Justice components and employees, including BOP. In particular, we obtained employee-level retention incentive payroll data for fiscal years 2012 through 2016. We selected this time period because it includes the most recent five complete fiscal years for which data were available and because we believe five years is sufficient time to identify trends in BOP’s retention incentive expenditures. We analyzed and aggregated the employee-level data by institution, occupation, and employee grade level. To identify trends, we compared per fiscal year expenditures across the various categories of occupations and locations across the five years. Additionally, we categorized institutions by BOP region, institutions that use group retention incentives, and institutions that use individual retention incentives. We also categorized occupations as medical professionals, correctional officers, and all other occupations and compared aggregate retention incentive expenditures for the different groups. Using information from BOP’s website and testimonial evidence from BOP officials on its health care system, for the purposes of this report, we defined medical professionals as BOP employees in occupations that provide medical, dental, and mental health care services and who do not solely provide these services in an administrative function. For the purposes of our analyses, medical professionals are dentists, dental assistants and hygienists, diagnostic radiological technologists, health aid and technicians, medical doctors (including psychiatrists), medical technologists, nurses, pharmacists, pharmacy technicians, physician assistants, and practical nurses and psychologists. To assess the employee-level retention incentive payroll data’s reliability, we obtained and analyzed documentation on systems’ capabilities and data control, interviewed data users and managers responsible for maintaining data, conducted checks for completeness and logical consistency, and compared the employee-level data to aggregated institution-level retention incentive expenditure data from BOP’s Financial Management Information System. We found the employee-level data to be sufficiently reliable for the purpose of this report.
Additionally, for this objective, we reviewed documents such as DOJ's Financial Management Information System Sub-Object Classification Code Guide and the Office of Personnel Management (OPM) Handbook of Occupational Groups and Families to identify, respectively, the system codes used to track retention incentive expenditures and the names for each occupational series code in the datasets. We also interviewed BOP Human Resource Management headquarters officials to obtain information on the primary purposes for BOP's use of retention incentives and their views on identified retention incentive expenditure trends. We also interviewed U.S. Department of Health and Human Services' (HHS) Public Health Service (PHS) officials to better understand how BOP and PHS manage costs, including retention incentive expenditures, for PHS staff assigned to BOP. BOP partners with PHS to acquire medical staff to provide medical care for BOP's inmate population. BOP reimburses PHS for the costs of compensation and benefits—including retention incentive payments, if applicable—for PHS staff assigned to BOP. PHS has final approval authority for retention incentives paid to PHS staff assigned to BOP facilities. Furthermore, we obtained aggregated retention incentive expenditure data from PHS on the total amount of funds BOP reimbursed PHS for fiscal years 2012 through 2016. To assess the reliability of PHS's data, we reviewed the system's data fields to check that the appropriate fields were used to provide data and interviewed data users and managers to discuss how expenditures are recorded and maintained. We found the PHS data to be sufficiently reliable for the purpose of this report.
To identify and describe the internal controls that BOP has in place related to retention incentives, we obtained and analyzed documentation regarding BOP requirements and guidance for the use of retention incentives. We also interviewed officials from BOP's Central Office who are responsible for the administration, management, and oversight of BOP's human capital management systems, including retention incentives. We focused on the management and administrative controls used by BOP to review, approve, re-certify, and monitor retention incentives. Additionally, we interviewed the warden and human capital officers at 4 of the 122 institutions to obtain illustrative examples regarding the internal controls in place at these institutions to ensure the proper disbursement of retention incentives. We interviewed BOP officials at Federal Correctional Complex Pollock in Pollock, LA; Federal Correctional Complex Butner in Butner, NC; United States Penitentiary Atwater in Atwater, CA; and Federal Correctional Institution Phoenix in Phoenix, AZ. These institutions were selected to ensure variation in the number and types of employees receiving retention incentives, BOP region, and security level. Although the information we obtained from the interviews with officials at these four institutions cannot be generalized to other BOP institutions, these interviews provided important insights and perspectives about the use of retention incentives at BOP institutions. We also reviewed a non-generalizable random sample of 40 retention incentive application packet case files to determine the extent to which these files contained documentation on the internal control activities in place to monitor the application, approval, and funds disbursement processes of BOP's retention incentive program. To identify our sample, we used employee-level expenditure data to randomly select 40 application files from the universe of BOP employees who received retention incentives from fiscal years 2014 through 2016. Each application file was reviewed by two GAO analysts who each assessed the extent to which the application contained the appropriate justification, approval signatures, and other documentation, such as an application checklist, and whether the application was an initial or continuation application.
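A minimal sketch of the file-review sampling just described appears below; the universe size, seed, and case identifiers are hypothetical, and it is meant only to illustrate a simple random draw with dual review, not GAO's actual selection procedure.

```python
import random

# Hypothetical sketch of drawing a simple random sample of 40 case files,
# each independently assessed by two reviewers against the same checklist.
universe = [f"case_{i:04d}" for i in range(1, 2501)]  # universe size is invented
random.seed(18147)                                    # arbitrary seed, for a reproducible draw
sample = random.sample(universe, k=40)

checklist = ("justification", "approval_signatures",
             "application_checklist", "initial_or_continuation")
reviews = {case: {"analyst_1": dict.fromkeys(checklist),
                  "analyst_2": dict.fromkeys(checklist)}
           for case in sample}
print(len(reviews))  # -> 40
```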
To determine the extent to which BOP plans for and evaluates the use of retention incentives, we interviewed BOP officials regarding their experiences with retention incentives, how they use retention incentives to strategically manage their workforce needs, how the agency evaluates the effectiveness of retention incentives, and how retention incentives contribute to BOP’s broader human capital goals. We then compared these efforts to our work on strategic human capital planning, specifically in terms of planning for and evaluating the use of human capital flexibilities. Additionally, we interviewed the warden and human capital officers at four BOP institutions mentioned above to obtain illustrative examples of how workforce planning occurs at these institutions. We also reviewed the DOJ’s Office of Inspector General Report 16-02 “Review of the Federal Bureau of Prisons’ Medical Staffing Challenges” (March 2016) and our past work to better understand the challenges that BOP faces in retaining medical professionals and other staff.
Appendix II: Bureau of Prisons’ Use of Retention Incentives by Occupations in Fiscal Year 2016
Table 2 provides the Bureau of Prisons' (BOP) fiscal year 2016 retention incentive expenditures by various occupations and groups of occupations, such as medical professionals, correctional officers, and other occupations. A range of occupations is reflected in the table, primarily as a result of four California institutions—United States Penitentiary (USP) Atwater, Federal Correctional Institution (FCI) Herlong, FCI Mendota, and Federal Correctional Complex Victorville—providing retention incentives to all employees at General Schedule grade levels 12 and below and those in the Federal Wage System.
Appendix III: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Dawn Locke (Assistant Director) and Meghan Squires (Analyst-in-Charge) managed the work. Also, David Alexander, Renee Caputo, Willie Commons III, Jamarla Edwards, Robert Goldenkoff, Chelsa Gurkin, Eric Hauswirth, Janice Latimer, Lerone Reid, Rachel Stoiko, and Adam Vogt made significant contributions to this report.

Why GAO Did This Study
BOP is the largest employer within DOJ and is responsible for the care and custody of an inmate population of about 186,000. BOP has faced challenges retaining staff at correctional facilities, although it has used retention incentives, along with other human capital flexibilities. GAO was asked to review BOP's use of retention incentives.
This report addresses: (1) how BOP used its authority to pay retention incentives; (2) internal controls BOP has in place for the use of retention incentives; and (3) the extent to which BOP plans for and evaluates the use of retention incentives. GAO obtained employee-level retention incentive expenditure data from DOJ's Justice Management Division for fiscal years 2012 through 2016. GAO also reviewed agency documentation, such as policy statements and 40 randomly selected retention incentive application packet case files from fiscal years 2014 through 2016. GAO also interviewed officials from BOP's Central Office and four correctional facilities that use retention incentives, selected to reflect variation in the number and types of employees receiving retention incentives, BOP regions, and BOP institution security levels.
What GAO Found
From fiscal years 2012 to 2016, the Department of Justice's (DOJ) Federal Bureau of Prisons' (BOP) total retention incentive expenditures generally increased from $10.7 million to $14.0 million, and the number of employees receiving retention incentives increased from 2,024 to 2,460. During those five years, BOP spent more than 97 percent of its total retention incentive expenditures on employees at four BOP institutions in California and for medical professionals nationwide. Further, total retention incentive expenditures for medical professionals increased by an average of 21 percent per year (see figure). According to BOP officials, BOP uses retention incentives, for example, to supplement BOP's medical professionals' salaries, which are generally lower than private sector salaries.
BOP has a variety of internal controls in place throughout the retention incentive process that help ensure retention incentive applications and approvals meet requirements. For example, each application goes through multiple levels of review to verify its accuracy and completeness.
BOP takes steps to determine workforce needs and how to fill those needs, but has not strategically planned for and evaluated its use of retention incentives. According to BOP, planning for human capital needs is conducted at institutions during quarterly meetings, but discussions about these incentives respond to short-term staffing situations rather than proactively addressing future staffing needs. Including human capital goals and strategies in BOP's human capital plan would create a roadmap so the agency could move from being reactive to its current workforce needs to being strategic in trying to achieve its long-term workforce goals. Additionally, BOP has not evaluated the effectiveness of its use of retention incentives in retaining staff. As a result, BOP does not know whether retention incentives have contributed to employees' retention in relation to other incentives used by BOP. Consistent with key principles for strategic human capital planning, planning for and evaluating the use of retention incentives could help BOP better determine if these incentives are an efficient and effective means by which to retain staff.
What GAO Recommends
GAO recommends that BOP (1) include human capital goals and how retention incentives will be used to achieve these goals in its human capital plan; and (2) evaluate the use of retention incentives. BOP concurred with GAO's recommendations.
Background
This section discusses DOE’s use of management and operating (M&O) and non-M&O contracts, DOE’s contracting structure, and federal and DOE requirements for oversight of contractors’ subcontract management.
DOE’s Use of M&O and Non-M&O Contracts
Since World War II, DOE and its predecessor agencies have depended on the expertise of private firms, universities, and others to carry out federal research and development work and to manage and operate government-owned facilities. DOE relies on contracts to accomplish most of its work. DOE mainly uses M&O contracts, which are agreements under which the government contracts for the operation, maintenance, or support, on its behalf, of a government-owned or government-controlled research, development, special production, or testing establishment wholly or principally devoted to one or more of the major programs of the contracting federal agency.
DOE and other agencies with sufficient statutory authority and the need for contracts to manage and operate their facilities may use the M&O form of contract; however, according to DOE, it is the only agency using such contracts. According to the DOE Acquisition Guide, DOE generally requires that the M&O contractors be subsidiaries of their corporate parents, dedicated to performance at the specific location, and supported by performance guarantees from their corporate parents. According to DOE officials, in fiscal year 2016, DOE obligated nearly $21 billion on 22 M&O prime contracts—about three-quarters of its total contract obligations for that year.
DOE also used non-M&O contracts for some contracts that were active in fiscal year 2016. For example, DOE used non-M&O contracts for the Mixed Oxide Fuel Fabrication Facility (MOX) construction project at the Savannah River Site in South Carolina, for construction and cleanup at the Hanford Site in Washington State, and for cleanup at the Oak Ridge Reservation in Tennessee. Figure 1 shows the site or project, and the contract type, for the 24 largest DOE prime contracts in our selection as of fiscal year 2016.
DOE uses a variety of contract types for its M&O and non-M&O contracts, including cost-reimbursement contracts, time-and-materials contracts, and fixed-price contracts. Under cost-reimbursement contracts, the government reimburses a contractor for allowable costs incurred, to the extent prescribed by the contract. Cost-reimbursement contracts are considered high risk for the government because the government agrees to reimburse the contractor's allowable costs, regardless of whether the work is completed. The DEAR states that cost-reimbursement plus award fee contracts are generally the appropriate contract type for M&O contracts, but agencies can choose among a number of different contract types for M&O contracts. A time-and-materials contract provides for acquiring supplies or services on the basis of direct labor hours at specified fixed hourly rates that include wages, overhead, general and administrative expenses, and profit, plus the actual cost of materials. According to DOE's General Guide to Contract Types for Requirements Officials, this type of contract can fulfill a special need that no other contract type can serve, but it places a heavy burden on technical personnel to perform surveillance to preclude inefficiency or waste, and there is no positive profit incentive for a contractor to control costs. Under fixed-price contracts, the government and contractor agree on a firm pricing arrangement that is subject to adjustment only according to the terms of the contract, and the contractor generally must deliver the product or service for that price.
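To illustrate how the payment mechanics differ across these three contract types, here is a hypothetical sketch; all dollar figures and rates are invented and do not model any actual DOE contract.

```python
# Hypothetical comparison of amounts payable under the contract types above.
# All figures are invented for illustration.

def cost_reimbursement(allowable_costs: float, fee: float) -> float:
    # Government reimburses allowable costs incurred (plus any fee earned),
    # regardless of whether the work is completed.
    return allowable_costs + fee

def time_and_materials(hours: float, fixed_hourly_rate: float, materials_cost: float) -> float:
    # The fixed hourly rate already includes wages, overhead, G&A, and
    # profit; materials are reimbursed at actual cost.
    return hours * fixed_hourly_rate + materials_cost

def fixed_price(agreed_price: float) -> float:
    # Contractor generally must deliver for the agreed price; overruns are
    # the contractor's risk unless the contract provides for adjustment.
    return agreed_price

print(cost_reimbursement(allowable_costs=9_500_000, fee=400_000))  # -> 9,900,000
print(time_and_materials(hours=20_000, fixed_hourly_rate=175.0,
                         materials_cost=1_200_000))                # -> 4,700,000
print(fixed_price(agreed_price=10_000_000))                        # -> 10,000,000
```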
DOE Contracting Structure
A contractor, for purposes of this report, is a party that has signed a contract with DOE (known as a prime contract), while a subcontractor is a party that has signed a contract with a DOE contractor (or another subcontractor). For example, a contractor may enter into a subcontract to obtain access to a specific set of skills or services that it may not possess, such as construction expertise, equipment services, or technology support. According to the FAR and the DEAR, contractors may subcontract with affiliates or parties to their prime contract under certain circumstances. Subcontracts with M&O contractor affiliates for performance of contract work itself—as distinguished from the purchase of supplies and services needed in connection with the performance of work—require DOE authorization and may involve an adjustment of the contractor’s fee. If the contractor seeks authorization to have some part of the contract work performed by a party to the contract, and the party’s performance of the work was a factor in the negotiated fee, DOE would normally require (1) that the party perform such work without fee or profit; or (2) an equitable downward adjustment to the M&O contractor’s fee, if any.
Requirements for DOE’s Oversight of Contractors’ Subcontract Management
DOE’s oversight of contractors’ subcontract management generally falls into three broad categories: (1) reviewing subcontract costs, including conducting certain subcontract audits, to ensure that subcontract costs are appropriately charged to prime contracts; (2) reviewing and approving contractor business systems, including contractor accounting and purchasing systems, to ensure validity of data and sufficiency of subcontract oversight policies and procedures; and (3) performing subcontract consent reviews to consider, among other things, whether the contractor is complying with contract provisions and assuring against conflicts of interest, such as close working relationships or ownership affiliations between the contractor and subcontractor, which may preclude free competition or result in higher prices.
Audits and Cost Oversight
The DOE OIG and other federal agencies or external audit organizations conduct periodic incurred cost audits and assessments of DOE’s prime contracts. The purpose of incurred cost audits is to determine whether such incurred costs are reasonable; applicable to the contract; determined under generally accepted accounting principles and cost accounting standards applicable in the circumstances; and not prohibited by the contract, statute, or regulation.
For its M&O contracts, the contractors’ own internal audit staff performs incurred cost audits under a process known as the “cooperative audit strategy.” Under this strategy, each M&O contractor’s internal audit organization is responsible for performing periodic operational and financial audits, assessing the adequacy of management control systems, and conducting an audit of its own incurred cost statements. Each year, the DOE OIG performs an assessment of incurred costs for the 10 M&O contractors that incurred and claimed the most costs that year, according to the DOE OIG’s audit manual. For the remaining M&O contractors, the OIG performs assessments based on risk. These assessments do not follow standards for independent third-party audits; rather, they follow standards for review-level engagements, which are substantially narrower in scope than an audit. These assessments consist of determining whether the contractor’s internal audits complied with professional standards and could be relied upon; the contractor conducted or arranged for audits of its subcontractors when costs incurred were a factor in determining the amount payable to a subcontractor; and the contractor adequately resolved any questioned costs and internal control weaknesses affecting allowable costs that had been identified in prior reports and reviews.
For non-M&O prime contracts, DOE has generally relied on the Defense Contract Audit Agency (DCAA), an independent third party, to audit the incurred costs that contractors invoiced to DOE. However, resource issues at DCAA have delayed audits and led to a backlog of prime contract audits. Further, the National Defense Authorization Act for Fiscal Year 2016 prohibited DCAA from providing nondefense audit support until DCAA addressed its backlog of incurred cost audits at the Department of Defense. To address the audit backlog that accumulated as a result of DCAA's delays, DOE has used independent public accounting firms, expanded internal audit functions, and relied more heavily on invoice reviews and OIG audits and assessments. However, in February 2015, DOE's OIG reported that, at the time of that report, these methods were not completely effective and did not meet audit standards in some cases. DCAA has since resumed performing audits for civilian agencies. However, while DCAA has made some progress in reducing its backlog of audits, it did not meet its initial goal of eliminating the backlog by fiscal year 2016, and as we found in September 2017, DCAA officials stated that they were unlikely to meet the agency's revised goal by the end of fiscal year 2018.
According to the DEAR, each of DOE’s M&O contracts should include a clause that requires the contractor to conduct or arrange for audits of its subcontractors’ incurred costs, when costs incurred are a factor in determining the amount payable to the subcontractor, to ensure that subcontract costs are allowable. This subcontract audit requirement covers cost-reimbursement and time-and-materials subcontracts. The requirement is also included in some of DOE’s large non-M&O contracts, including the seven non-M&O prime contracts in our selection; according to DOE headquarters officials, they included the requirement in the non-M&O contracts because of the large dollar amount of the prime contracts. The DOE OIG, DCAA, or other entities generally include information about the status of required subcontract audits in their audits and assessments of the prime contracts.
In March 2017, we found that DOE generally completed audits or assessments of contractors’ incurred costs after DOE had reimbursed the contractors for the costs for DOE’s M&O and non-M&O contracts, including those contractors’ subcontract costs. If, as a result of these audits or assessments, DOE detects fraud or other improper payments— such as reimbursements for costs determined to be unallowable under the contract—DOE will question these costs and work with the contractor to resolve them. Sometimes, this can result in DOE recovering funds.
Contractor Business System Reviews
DOE’s oversight of business systems includes oversight of accounting systems and purchasing systems. With regard to accounting systems, under the FAR, agency contracting officers are required to obtain information concerning the adequacy of the contractors’ accounting systems prior to determining whether a prospective contractor is responsible with respect to the contract. Under the FAR, the adequacy and suitability of these systems affects the quality and validity of the contractor data, including subcontract data, on which the government relies to oversee the contractors’ performance. DOE grants approval of the accounting system through headquarters-level reviews, local office reviews, or external audits of the system.
With regard to purchasing systems, under the FAR, DOE should review and approve contractors’ purchasing systems, including their procurement policies and procedures. If the contractor does not have an approved purchasing system, the contracting officer is required to approve all cost-reimbursement, time-and-materials, and labor-hour subcontracts (among other types) above the simplified acquisition threshold. According to DOE headquarters officials, an approved purchasing system signifies that the contractor’s purchasing policies and practices are efficient and provide adequate protection of the government’s interests, including the contractor’s ability to award some subcontracts without the need to seek review and consent by the local DOE contracting officers. Local contracting officials use a formal contractor purchasing system review or a combination of other monitoring techniques to grant or extend approval of the contractor’s purchasing system.
Subcontract Consent Reviews
DOE monitors contractors’ compliance with subcontracting requirements primarily by providing consent to the contractors to award certain subcontracts. DOE determines which subcontracts require consent prior to award using criteria the agency develops for each prime contract, such as subcontract dollar value and type of contract. Under the FAR, agencies should consider whether a proposed subcontract is appropriate to the risks involved and consistent with current policy when conducting a consent review. DOE officials told us that they generally use these reviews to ensure that the contractor’s accounting and purchasing systems are continuing to operate as intended and that the contractor is following its policies and procedures, including policies to safeguard against conflicts of interest, such as issues precipitated by shared ownership interests. Under the FAR, where consent is required, the consenting official must give particularly careful and thorough consideration to potential conflicts of interest, such as where close working relationships or ownership affiliations between the contractor and subcontractor may preclude free competition or result in higher prices. For subcontracts that are subject to a consent review, the contractor submits a package of information to the local DOE contracting officer. The contracting officer either provides consent or raises issues that the contractor must address before awarding the subcontract. According to DOE documents we reviewed, the package typically includes summary information such as:
what the contractor is buying;
the type of contract to be used (e.g., cost-reimbursement, fixed-price);
who the subcontract will be awarded to;
a general description of the scope of work;
a summary of the basis for making the award;
documentation showing that the contractor conducted a cost and price analysis prior to award and adhered to its internal policies and procedures; and
conflict of interest determinations and mitigations.
Eleven Entities Participated in Multiple DOE Prime Contracts, with Complex Ownership Relationships among the Contractors and Subcontractors
In fiscal year 2016, 28 entities were party to DOE’s 24 largest prime contracts. Specifically, DOE awarded 15 prime contracts to contractors composed of groups of two to five entities and awarded the remaining nine of the 24 prime contracts in our selection to contractors composed of a single entity. Our review found that 11 of the 28 participating entities were parties to multiple prime contracts. The prime contracts in which these 11 entities participated represented about 69 percent, or $19.3 billion, of DOE’s total prime contract obligations in fiscal year 2016. Figure 2 shows the relationships among the 11 entities that are parties to multiple prime contracts included in our selection. For example, Battelle Memorial Institute and Bechtel National, Inc. each were party to six prime contracts, based on ownership information DOE provided.
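The ownership relationships described above amount to a many-to-many mapping between entities and prime contracts. The short Python sketch below illustrates one way to invert such a mapping to surface entities that are party to multiple contracts; the contract names and party pairings shown are hypothetical placeholders, not DOE’s actual fiscal year 2016 data.

    from collections import defaultdict

    # Hypothetical mapping of prime contracts to the entities that are
    # parties to them (placeholder data, not DOE's actual records).
    parties_by_contract = {
        "Prime Contract A": ["Battelle Memorial Institute", "University X"],
        "Prime Contract B": ["Battelle Memorial Institute", "Bechtel National, Inc."],
        "Prime Contract C": ["Bechtel National, Inc."],
    }

    # Invert the mapping: entity -> list of prime contracts it participates in.
    contracts_by_entity = defaultdict(list)
    for contract, parties in parties_by_contract.items():
        for entity in parties:
            contracts_by_entity[entity].append(contract)

    # Report entities that are party to more than one prime contract.
    for entity, contracts in sorted(contracts_by_entity.items()):
        if len(contracts) > 1:
            print(f"{entity} is party to {len(contracts)} prime contracts: {contracts}")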
It can be difficult to track changes in the ownership of entities that are parties to the prime contracts to understand the entities’ relationships, if any. Our review found that changes in ownership of the parties to six of the 24 prime contracts in our selection occurred prior to fiscal year 2016 but were not reflected in the information DOE provided to us. Therefore, our analyses do not reflect the modified ownership information. The fact that one entity could be party to multiple prime contracts and could acquire other entities that are parties to prime contracts complicated our ability to understand the relationships among them.
AECOM—which was identified as a party to three prime contracts in our selection—acquired URS Corporation in 2014, and URS had previously acquired Washington Group International in 2007. This resulted in AECOM becoming a party to the Lawrence Livermore National Security, LLC; Washington River Protection Solutions, LLC; Los Alamos National Security, LLC; and Battelle Energy Alliance, LLC, prime contracts, making AECOM a party to seven of the contracts in our selection. However, the documents DOE provided show it as a party to three of the contracts in our selection.
Our review of contractor Lawrence Livermore National Security, LLC’s website showed that BWX Technologies, Inc. split from the Babcock and Wilcox Company in 2015 and replaced it as a party to the contract, making BWX Technologies party to four of the prime contracts in our selection rather than the three reported in DOE’s documents. These changes in ownership occurred prior to fiscal year 2016, the time period we reviewed, but the changes were not reflected in the ownership information DOE provided to us for these prime contracts.
Such acquisitions can also complicate DOE’s review of contract proposals. For example, in August 2016, NNSA awarded the contract for the management and operation of the Nevada National Security Site to Nevada Site Science Support and Technologies Corporation. The contractor identified itself as a wholly owned subsidiary of Lockheed Martin. However, after awarding the contract, the NNSA contracting officer was notified that the awardee had been acquired in its entirety by Leidos Innovations Corporation prior to the award. According to NNSA, the request for proposals required offerors to disclose ownership changes that occur during the proposal process, but NNSA was not notified about the ownership change until after the proposal had been awarded. Once the Nevada Site Science Support and Technologies Corporation’s ownership changed from Lockheed Martin to Leidos, its proposal was not compliant with the requirements and NNSA rescinded the award.
The 24 contractors in our selection reported obligating funds to more than 169,000 subcontracts to about 23,000 different entities in fiscal year 2016. Contractors subcontracted more than $6.9 billion, an amount equivalent to nearly 30 percent of DOE’s obligations to its prime contracts in fiscal year 2016. The extent to which contractors obligated funds to subcontracts in fiscal year 2016 varied widely, from 13 percent of prime contract obligations to 83 percent, as shown in table 1.
The contractors in our selection reported that they awarded about 54 percent, or about $3.7 billion, of their subcontract obligations in fiscal year 2016 as fixed-price contracts and 46 percent, or about $3.2 billion, as cost-reimbursement contracts, cost-reimbursement contracts with no fee earned, or time-and-materials contracts. See figure 3 for the distribution of subcontract obligations by type.
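As a quick arithmetic check, the reported shares follow directly from the obligation amounts; the rounded figures below reproduce the percentages cited above.

    # Rounded fiscal year 2016 subcontract obligation figures from the text.
    total = 6.9e9        # total subcontract obligations, in dollars
    fixed_price = 3.7e9  # obligations awarded as fixed-price subcontracts
    cost_type = 3.2e9    # cost-reimbursement and time-and-materials subcontracts

    print(f"fixed-price share: {fixed_price / total:.0%}")  # about 54 percent
    print(f"cost-type share: {cost_type / total:.0%}")      # about 46 percent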
We found that in fiscal year 2016, at least 24 of the 28 entities that were parties to the prime contracts were also subcontractors to the prime contracts in our selection. Specifically, these 24 entities held nearly 3,000 subcontracts with fiscal year 2016 subcontract obligations totaling about $927 million. Table 2 shows the parties to prime contracts that also held subcontracts in fiscal year 2016.
Further, we found that, in some cases, entities held subcontracts on the specific prime contracts to which they were a party. As discussed previously, subcontracting to an entity that is also a party to the prime contract is allowable under the FAR and DOE regulations. Figure 4 shows the 15 contractors that obligated funds in fiscal year 2016 to subcontracts with parties to their prime contracts. For example, UT Battelle, LLC—the contractor for the Oak Ridge National Laboratory prime contract in fiscal year 2016—had 416 active subcontracts with two parties to that prime contract (University of Tennessee and Battelle Memorial Institute). UT Battelle, LLC obligated more than $34 million for subcontracts to these two entities in fiscal year 2016. In another example, Savannah River Remediation, LLC, the liquid waste contractor for the Savannah River Site, had 30 active subcontracts with three parties to that prime contract (AECOM, Inc.; Bechtel National, Inc.; and CH2M Hill Constructors, Inc.). The contractor obligated about $12 million for subcontracts to these three entities in fiscal year 2016. For more information about the relationships among DOE’s prime contracts, parties to the prime contracts, and subcontractors, see an interactive graphic at https://www.gao.gov/products/GAO-19-107.
DOE Did Not Always Ensure That Contractors Conducted Required Subcontract Audits, and Some Unallowable Subcontract Costs May Be Unrecoverable Because Audits Are Not Timely
Each of the 24 prime contracts in our selection required contractors to conduct or arrange for audits of their subcontractors’ incurred costs for certain subcontract types, including cost-reimbursement and time-and-materials contracts, among others. Contracting officers at DOE’s local offices are responsible for overseeing contractors and for ensuring, among other things, that both DOE and the contractor comply with the terms of the prime contract. However, officials from DOE’s local offices have not always ensured that contractors completed the required subcontract audits.
DOE relies on the contractors’ subcontract audits to identify unallowable subcontract costs. As previously discussed, the DOE OIG, DCAA, or third parties complete incurred cost audits or assessments of DOE’s prime contracts, which generally report on the extent to which the contractor has completed required audits of subcontract costs. We requested the reports for the two most recent incurred cost audits or assessments that the DOE OIG or third parties conducted, as of February 2018, for the prime contracts in our selection to determine whether contractors had conducted required subcontract audits for the period covered by the reports. In response to our request, the 24 contractors provided a total of 43 reports, 11 of which were audit reports and 32 of which were assessment reports:
Twenty contractors provided both requested reports.
Three contractors provided only one report each that had been completed.
One contractor did not provide the two requested reports because of pending litigation.
Of the 43 incurred cost assessment and audit reports we reviewed, 21 reports indicated that contractors had not audited more than $3.4 billion in costs incurred by subcontractors over the 10-year period covered by the reports. These reports documented various reasons that the subcontracts had not been audited, including that a contractor did not appropriately recognize that time-and-materials subcontracts needed to be audited, or that a contractor relied on internal controls or a non-audit procedure to meet subcontract audit requirements. For example, an April 2013 assessment by the DOE OIG found that subcontractor costs of more than $12 million incurred over a 4-year period for two multi-year time-and-materials contracts had not been audited by the contractor, as required by its prime contract, because the local DOE office did not submit a request to DCAA to perform the audits due to the DCAA backlog. In another example, a March 2014 DOE OIG assessment found that a contractor did not conduct required audits of $155 million in subcontract costs incurred during 1 fiscal year because the contractor believed its internal controls met the intent of the requirement to conduct the subcontract audits.
Some audit or assessment reports we reviewed included questioned subcontract costs. For example, in an assessment for fiscal year 2013, the DOE OIG reported that an M&O contractor’s internal audit department performed audits of 78 subcontracts for 30 different subcontractors and questioned nearly $900,000 in subcontractor costs incurred from fiscal year 2009 through fiscal year 2013. As of June 2016, most of the questioned amount had been resolved, and the remaining amount—about $7,900—was deemed unallowable and applied against an invoice from the contractor. In another assessment of an M&O contractor for fiscal year 2013, the DOE OIG questioned subcontract costs identified by the contractor of more than $725,000, with about $8,000 ultimately deemed unallowable. We have previously found that DOE sometimes negotiates questioned costs with its contractors to settle on an amount, potentially lower than the amount initially questioned, that is ultimately deemed unallowable. Although the amounts of unallowable costs in these examples are small, DOE does not know the full extent of unallowable subcontractor costs that it has reimbursed because required subcontract audits were not always conducted.
For some contractors, the issue of unaudited subcontract costs is long-standing and extensive. For example, DOE documents show that, at the time of our review, one contractor had never completed an adequate audit of its subcontractors’ incurred costs over the 16 years of the prime contract period, although its prime contract with DOE requires such audits. In June 2016, the contractor placed the value of its unaudited subcontracts at more than $1.3 billion. This amount included some subcontracts that were closed without being audited, meaning the work had been completed and the final costs under the prime contract had been paid. DOE has been working with the contractor since 2013 to implement corrective actions to resolve the issue; in October 2018, DOE officials told us they reached an agreement with the contractor to complete current audits and address the backlog.
We identified three key differences in how contractors and DOE’s headquarters and local office officials interpreted the subcontract audit requirements included in the prime contracts we reviewed that contributed to DOE not always ensuring that contractors audited their subcontractors’ incurred costs. Specifically:
Extent of subcontracts that must be audited. We identified differing interpretations of whether the prime contract required contractors to audit all cost-reimbursement and time-and-materials contracts. Specifically, some contractors told us that they had developed risk-based approaches to selecting subcontracts for audit based on thresholds, such as the amount of the subcontract. However, using such a strategy could exclude significant subcontract costs from audit. For example, according to an April 2012 DOE OIG audit, one contractor increased its subcontract audit threshold from $1 million to $15 million in annual incurred costs, thereby excluding from audit nearly $343 million in subcontract costs incurred during fiscal years 2008 and 2009. In its report, the DOE OIG questioned whether the contractor’s subcontract audit strategy provided sufficient audit coverage to ensure that DOE did not pay unallowable costs. In that case, the DOE OIG found that the audit strategy, which was supposed to be based on DCAA requirements, did not meet a key DCAA requirement to audit incurred costs of at least one-third of all subcontracts under $15 million at least once every 3 years (a simple illustration of this coverage rule follows this list).
Definition of an audit. Some contractors used invoice reviews in place of audits to meet the requirement. As discussed previously, DOE documents showed that one contractor had never completed an adequate audit of its subcontractors’ incurred costs over the 16 years of the contract. According to contractor representatives, the term “audit” was not defined in their contract, and therefore they performed detailed subcontractor invoice reviews instead of conducting subcontract audits to meet the requirement. DOE found that these invoice reviews did not meet generally accepted government auditing standards.
Responsibility for arranging for audits if DCAA is unable to conduct audits. Some contractor representatives we interviewed reported that their subcontracts remained unaudited as a result of the DCAA backlog. Representatives from one contractor told us that they believed they were not responsible for conducting the audits if DCAA was unable to do so, and representatives from another contractor said that they tried to engage a third-party auditor to conduct the audits themselves, but their subcontractor would not allow the third-party auditor to access its records, despite contract language establishing the contractor’s responsibilities for audits.
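As referenced in the first item above, the DCAA coverage requirement cited by the DOE OIG can be expressed as a simple check. The sketch below is a minimal illustration of that rule as described in the OIG report, under a straightforward reading of it; the subcontract amounts and audit history are hypothetical.

    # Minimal sketch of the coverage rule cited above: incurred costs for at
    # least one-third of all subcontracts under $15 million must be audited
    # at least once every 3 years. All data here are hypothetical.
    THRESHOLD = 15_000_000

    annual_incurred_costs = {   # subcontract id -> annual incurred costs
        "S-001": 2_400_000,
        "S-002": 900_000,
        "S-003": 14_500_000,
        "S-004": 40_000_000,    # at or above $15 million; outside this rule
    }
    audited_in_last_3_years = {"S-002"}

    under_threshold = {s for s, cost in annual_incurred_costs.items() if cost < THRESHOLD}
    coverage = len(audited_in_last_3_years & under_threshold) / len(under_threshold)
    print(f"3-year audit coverage of subcontracts under $15M: {coverage:.0%}")
    print("meets one-third requirement" if coverage >= 1 / 3
          else "falls short of one-third requirement")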
Differences in the interpretation of the subcontract audit requirements have continued to occur because DOE has not clearly defined—in guidance or other documents—how these contract requirements should be met, which could eliminate confusion about which subcontracts should be audited, how an audit is defined, and how to meet subcontract audit requirements if DCAA is unable to conduct the audit. Federal internal control standards state that management should externally communicate the necessary quality information to achieve the entity’s objectives so that external parties can help the entity achieve its objectives and address related risks. Until DOE clearly defines how contractors should meet subcontract audit requirements, contractors may not perform subcontract audits as intended and unallowable costs may not be identified or recouped.
In addition, we found that audits or assessments of a contractor are usually not conducted immediately after the fiscal year in which funds are spent, partly because of the availability of DCAA staff or third-party auditors to complete the work. Our review of the 43 audit and assessment reports identified reports covering 7 fiscal years that were audited or assessed 6 or more years after the fiscal year in which the costs were incurred; more than $557 million in subcontract costs in those fiscal years had not been audited as required by the prime contracts. The Contract Disputes Act of 1978 imposes a 6-year statute of limitations for the government to seek recovery of unallowable costs that could be identified through subcontract audits, so it is important for audits to be completed in a timely manner.
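To illustrate why timing matters, the sketch below flags unaudited cost years that have aged past the 6-year limitation period. It assumes, as a simplification, that the period runs from the fiscal year in which the costs were incurred; the actual accrual date is a legal question, and the amounts and years shown are hypothetical.

    # Flag fiscal years whose unaudited costs have aged beyond the 6-year
    # Contract Disputes Act limitation period (simplified: measured from the
    # fiscal year the costs were incurred). Hypothetical data.
    CURRENT_FY = 2018
    LIMITATION_YEARS = 6

    unaudited_costs_by_fy = {2009: 80e6, 2011: 120e6, 2015: 45e6}

    for fy, amount in sorted(unaudited_costs_by_fy.items()):
        age = CURRENT_FY - fy
        status = ("recovery may be time-barred" if age > LIMITATION_YEARS
                  else "still within the window")
        print(f"FY{fy}: ${amount / 1e6:.0f} million unaudited, {age} years old -> {status}")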
We also found that local offices’ efforts to monitor contractors’ completion of subcontract audits have not ensured that contractors have completed required subcontract audits and that those audits are completed in a timely manner. Officials from the local offices said their approaches for overseeing whether contractors performed required subcontract audits included reviewing and approving the contractors’ internal audit plans, reviewing monthly or quarterly reports from the contractors’ internal audit departments, or reviewing the contractors’ internal audits and reviews of subcontractors’ costs. Additionally, several DOE officials from the local offices said they relied on the DOE OIG and external auditors’ assessments and audits of the contractor to monitor the status of subcontract audits, even though these assessments and audits may be infrequent.
Federal internal control standards state that management should implement control activities through policies, such as by documenting such policies in the appropriate level of detail, to allow management to effectively monitor the control activity. These standards state that policies may be further defined through procedures, including the timing of when a control activity occurs, to help personnel implement the control activities for their assigned responsibilities. However, we found that DOE headquarters has not issued documented procedures or guidance that requires local offices to monitor the contractors’ progress in completing the required audits or to specify the time period during which an audit must be completed. Without such procedures or guidance, unallowable costs may go unidentified beyond the 6-year period set by the Contract Disputes Act, preventing DOE from identifying and recovering unallowable costs.
DOE Did Not Always Ensure That Contractors Met Other Subcontract Oversight Requirements and Does Not Assess Subcontractor Management in Performance Evaluations
In addition to ensuring that contractors conduct required audits of subcontract costs, DOE must meet other requirements to ensure its contractors are effectively overseeing subcontracts, specifically by approving contractors’ accounting and purchasing systems and performing consent reviews to monitor subcontracting actions. With respect to approval of contractors’ accounting and purchasing systems, DOE generally ensures that reviews and approvals of these systems occur, but the frequency of some accounting system reviews varies. With respect to consent reviews, most subcontracts are not reviewed by DOE, and we found that DOE’s local officials generally do not independently review available ownership information that could help them assess contractors’ identification of potential conflicts of interest during the consent review process. Further, DOE’s thresholds for conducting consent reviews are inconsistent, and there is no requirement to reevaluate the thresholds. In addition, DOE’s annual contractor performance evaluations do not explicitly measure its contractors’ performance in managing or overseeing subcontracts.
DOE Generally Approves Contractors’ Accounting Systems, but the Frequency of Some Reviews Varies
Under the FAR, federal agencies are to determine the adequacy and suitability of contractors’ accounting systems. The adequacy and suitability of these accounting systems affects the quality and validity of the contractor and subcontractor data upon which the government must rely for its management and oversight of the contractor and contract performance. DOE local contracting officers responsible for the prime contracts in our selection stated that they rely on contractor accounting system approvals to help them determine the contractor’s suitability to appropriately place and manage subcontracts. The FAR provides that the contractor’s accounting system should be adequate during the entire period of contract performance, but does not specify a minimum frequency for performing accounting system reviews.
According to interviews with local DOE officials and our review of documentation they provided, DOE may grant accounting system approval through headquarters-level reviews, local office reviews, or external audits of the accounting system. Headquarters-level reviews occur at a level above the local office, such as through NNSA’s Office of Management and Budget. In addition, the contracting officers or other subject matter experts at DOE’s local offices can conduct the reviews of the accounting systems themselves or employ an external audit organization, such as DCAA, to conduct the reviews. DOE conducted at least one review of the accounting systems used for each of the 24 prime contracts in our selection: eight accounting systems were reviewed through headquarters-level organization reviews, nine were reviewed by local offices, and seven were reviewed through external audits. DOE headquarters officials said that no method for review is considered more rigorous or preferred over another, and it is left to the discretion of the contracting officers at DOE’s local offices to determine which method to use.
According to our review of documents from DOE’s local offices and interviews with DOE officials from the local offices, 22 of the 24 prime contracts in our selection had approved accounting systems in fiscal year 2016. Contracting officers from the local DOE offices responsible for oversight of the two prime contracts for which there was no approved accounting system for fiscal year 2016 told us that they maintained oversight of the contractors’ accounting systems through mechanisms other than the traditional review and approval process. Specifically:
Local DOE officials responsible for oversight of one prime contract, which was awarded in December 2000, told us that they did not have to review or approve the contractor’s accounting system at the local level after the contract was awarded because the contractor’s corporate office was required to have an approved accounting system to enter into its contract with DOE. The officials were not sure whether an approval of the corporate accounting system had been performed since 2000, but DCAA was scheduled to perform a review of the system in late 2018. In a 2017 letter to the DOE local office, DCAA stated that its review of the accounting system was delayed due to staffing issues, and it was the agency’s opinion that the contractor’s internal audits and reviews demonstrated that the contractor was adhering to the criteria of an adequate accounting system.
Local officials responsible for oversight of another prime contract stated that they had not approved the contractor’s accounting system because it was adopted from the site’s former contractor. The officials told us the former contractor’s accounting system had already been approved and no additional review or approval was necessary. Officials at DOE headquarters agreed that the use or transfer of an existing DOE-approved accounting system satisfies the review requirement. According to the officials responsible for overseeing this prime contract, the local office annually reviews and approves the contractor’s Financial Management System Plan, which would identify any major planned enhancements and upgrades to the current financial management systems and subsystems, including the accounting system.
In addition to differences in how accounting system approvals were conducted, local DOE officials said there are differences in the frequency of the contractor accounting system reviews and approvals across local offices. Some accounting systems are approved only at the time the prime contract is awarded, while others are approved annually, on a 3-year cycle, or only if there are major changes to the accounting system. DOE headquarters officials we interviewed said that the frequency of reviews and approvals was determined on a contract-by-contract basis, and for the prime contracts for which the accounting system was approved at the time of contract award, the officials were unaware of what might necessitate an additional review. Figure 5 shows the frequency of accounting system approvals for the 24 prime contracts in our selection as of fiscal year 2016.
The DOE Acquisition Guide states that the creation and maintenance of rigorous business, financial, and accounting systems by the contractor is crucial to ensuring the integrity and reliability of the cost data used by DOE officials. Further, the FAR provides that the contractor’s accounting system should be adequate during the entire period of contractor performance. In addition, DOE headquarters officials said that periodic reviews and approvals of the accounting systems are important to ensuring these requirements are met. However, there is wide variation in the frequency of these reviews, in part because DOE has not reviewed the differences in the frequency of its accounting system approvals and whether the basis for these differences is appropriate.
Prime contracts can last for decades, so many years may pass without further review of the adequacy of the accounting systems. For example, local officials responsible for overseeing a prime contract with an accounting system that was approved at the time the contract was awarded said that the approval occurred 12 years ago, and they had questions about the adequacy of the system.
DOE officials said that they do not have guidance to help contracting officers at local offices determine the appropriate frequency for reviewing accounting systems’ adequacy. Instead, local DOE contracting officers that oversee each prime contract have discretion to determine the manner and frequency of reviews based on their knowledge of the contractor. Under federal standards for internal control, management should design control activities to achieve objectives and respond to risks, including by clearly documenting internal control in management directives, administrative policies, or operating manuals. When reviews are infrequent, or it is unclear when a review should be conducted, subsequent changes to the accounting system may not be promptly evaluated and DOE may not have adequate assurance that contractors’ accounting systems can be relied upon. By reviewing the differences in the frequency of its accounting system reviews and approvals and developing guidance that provides criteria to determine the appropriate frequency of such reviews, DOE could better ensure that adequate accounting systems are in place during the entire period of the contract.
DOE Generally Reviews and Approves Contractors’ Purchasing Systems and Plans More Consistent Reviews Going Forward
Under the FAR, the federal agency should maintain a sufficient level of surveillance to ensure that the contractor is effectively managing its purchasing program. Each of the contractors for the 24 prime contracts in our selection had an approved purchasing system in fiscal year 2016. If a local DOE contracting officer determines that a contractor does not have an approved purchasing system, under the FAR, the office should review and decide whether to approve (i.e., consent to) all cost-reimbursement type subcontracts and unpriced actions for fixed-price subcontracts that exceed the simplified acquisition threshold of $150,000 prior to award. Under the FAR, the contractor is to continue to seek approval for every proposed subcontract that meets these criteria until the issues with the purchasing system that led to the withdrawal of approval are resolved and the system is again approved. Our review of subcontract information provided by DOE’s contractors indicates that, without an approved purchasing system, more than 6,600 of the subcontracts that were active in fiscal year 2016 would have required review and approval prior to award, according to the existing simplified acquisition threshold.
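The FAR fallback rule described above amounts to a filter over the contractor’s subcontract portfolio. The sketch below shows, for the cost-reimbursement case only, how such a count could be derived; the subcontract records are hypothetical, and the $150,000 figure is the simplified acquisition threshold cited above.

    # Count subcontracts that would require consent prior to award absent an
    # approved purchasing system (cost-reimbursement case only, for brevity).
    # The records below are hypothetical.
    SIMPLIFIED_ACQUISITION_THRESHOLD = 150_000

    subcontracts = [
        {"id": "S-101", "type": "cost-reimbursement", "value": 2_000_000},
        {"id": "S-102", "type": "fixed-price", "value": 500_000},
        {"id": "S-103", "type": "cost-reimbursement", "value": 90_000},
    ]

    needs_consent = [
        s for s in subcontracts
        if s["type"] == "cost-reimbursement"
        and s["value"] > SIMPLIFIED_ACQUISITION_THRESHOLD
    ]
    print(f"{len(needs_consent)} subcontract(s) would require consent prior to award")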
According to DOE officials at local offices and headquarters, DOE contracting officers may use a formal contractor purchasing system review or a combination of surveillance and other monitoring techniques to grant or extend approval of a contractor’s purchasing system. DOE headquarters officials told us that the variation in the source and method of purchasing system reviews is intentional to allow the local offices to meet the requirement in a way that works best for their location and contractor, and that the most important aspect of the purchasing system review is the ongoing surveillance of the system.
Contracting officers from DOE’s local offices told us they had approved the purchasing system for each of the 24 prime contracts in our selection in a variety of ways:
Seven local offices approved contractors’ purchasing systems based on the local contracting officer’s knowledge of the contractor’s work;
Six local offices relied on the results of a peer review program;
Five local offices considered the results from a combination of internal and external audits and reviews (including peer reviews);
Four local offices performed a formal purchasing system review but did not provide specifics as to the source of information, such as internal or external audits or peer reviews; and
Two local offices relied on the results of external audits.
One of the contractors in our selection of 24 prime contracts, Bechtel National, Inc., had a DOE-approved purchasing system for the construction of the Hanford Waste Treatment and Immobilization Plant at the Hanford Site in Washington State, which was subsequently withdrawn for a 3-month period in 2018. Specifically, in fiscal year 2018, the Defense Contract Management Agency (DCMA) performed a review of the contractor’s corporate purchasing system and identified a number of significant deficiencies, including inadequate advance notice of subcontract awards, missing subcontractor debarment disclosures, and general documentation issues with the contractor’s procurement files. As a result, Bechtel National, Inc.’s corporate purchasing system was disapproved until the identified deficiencies could be resolved. DOE officials said they lifted the restrictions on the contractor in October 2018 following DCMA’s validation that Bechtel National, Inc. implemented the required updates to its purchasing system and procedures.
In June 2018, DOE headquarters officials told us they encouraged the local offices to focus on the use of a peer review program to review and approve purchasing systems. NNSA officials further explained that they expected the peer reviews would encourage contractors to remain diligent in the administration of their systems. As a part of this new approach, DOE headquarters officials told us that local officials will be required to assess the need for a purchasing system review every 3 years, and if the local office did not conduct a review, then a peer review would be required at least every 6 years.
According to DOE’s November 2018 updated peer review handbook and officials responsible for the handbook, the peer review program is DOE’s preferred method for conducting purchasing system reviews and is now mandatory for DOE’s M&O contracts at least every 6 years and for non-M&O contracts, with a contract length of 5 years or less, at the 3-year mark. NNSA headquarters officials stated that they expect all of their local offices to use the peer review program to assess contractors’ purchasing systems going forward, regardless of the type of contract. According to documents provided by DOE headquarters and local offices, as of July 2018, contractors for 18 of the 24 prime contracts in our selection participated in the peer review program, and six did not participate, including two NNSA contractors. Figure 6 shows the date of the most recent peer review for the 24 prime contracts in our selection, as of July 2018.
DOE Uses Consent Reviews to Monitor Some Contractors’ Subcontract Actions but Does Not Independently Assess Potential Conflicts of Interest
According to contracting officers and headquarters officials we interviewed, DOE’s local offices use subcontract consent reviews to monitor contractors’ compliance with subcontracting requirements. In addition, local officials told us that they use these reviews to review and assess any reported potential conflicts of interest on the part of the contractor and subcontractors. However, we found that local DOE officials generally do not request additional information on ownership to independently ensure contractors are mitigating these conflicts, nor do they routinely make use of various databases available to government employees that report ownership information for many government contractors. In addition, local offices conduct a limited number of consent reviews for subcontracts, based on a dollar threshold that varies among local offices, which makes it difficult for DOE to ensure that local offices have sufficient visibility into contractors’ subcontracting actions.
DOE Uses Consent Reviews to Monitor Contractor Compliance with Subcontracting Requirements
According to local DOE officials we interviewed, subcontract consent reviews are the primary control method used to monitor contractors’ compliance with subcontracting requirements. Under the FAR, in conducting a consent review, agencies should consider whether a proposed subcontract is appropriate to the risks involved and consistent with current policy. Specifically, local DOE officials told us that they use the consent reviews to monitor contractors’ accounting and purchasing systems between formal reviews of these systems, as well as to monitor contractors’ compliance with policies and procedures for subcontracting, including ensuring that subcontracts are awarded competitively and are of appropriate types and that the contractor adheres to requirements to safeguard against conflicts of interest.
According to officials we interviewed, local DOE contracting officers often receive a notice from the contractor of its intention to solicit subcontracted work and, if the proposed subcontract value exceeds an agreed-upon dollar threshold, contracting officers typically will review a consent package from the contractor before the final award of the subcontract. The contractor is to obtain DOE’s consent to the proposed action before proceeding.
NNSA’s local offices have a standard consent checklist that directs the contracting officer to consider certain factors before granting consent for the contractor to issue a particular subcontract. These factors include the contractor’s past performance, whether the solicitation for subcontracted work was appropriately competed, the type of subcontract selected, and whether the proposed prices are reasonable for the intended work, among other things. In comparison, individual DOE local offices generally use consent checklists they develop. These checklists have similar review topics to the NNSA checklist, but the specific items and formats vary.
According to DOE officials we interviewed, subcontract consent reviews are DOE’s only opportunity to review subcontract pricing and to ensure best value for the government before the contractor awards the subcontract. Furthermore, because fixed-price subcontracts do not have the same audit requirements as cost-reimbursement subcontracts, these consent reviews may be the only opportunity for DOE to review the cost and pricing of fixed-price subcontracts to be awarded by the contractor. As mentioned previously, the contractors for the 24 prime contracts in our selection awarded 54 percent of their fiscal year 2016 subcontract obligations as fixed-price subcontracts, and these contracts may be awarded to parties to the prime contract, subject to certain conditions.
DOE contracting officials we interviewed noted a number of ways in which consent reviews have helped them oversee contractors’ compliance with subcontracting requirements. For example, an official described one case in which the contractor was proposing a cost-reimbursement subcontract for items that could have been purchased more favorably under a fixed-price contract. The consent package did not justify why the contractor chose the more costly contract type, so the contracting officer denied consent and asked the contractor to review and reissue the solicitation. In another example, the contractor had to renegotiate a subcontract before award, after the contracting officer identified inherent safety concerns in the description of the proposed work upon review of the consent package.
DOE Uses Consent Reviews to Ensure Contractors Mitigate Potential Conflicts of Interest Contractors Identify, but DOE Does Not Independently Assess Ownership Conflicts
DOE requires certain provisions to be included in the prime contracts that require both DOE and the contractor to safeguard against personal and organizational conflicts of interest. Among other things, these contract provisions include requirements from the FAR that prohibit former officials of a federal agency from accepting compensation from a contractor within a year of awarding a contract to that contractor; prohibit contractors from soliciting, accepting, or attempting to accept any kickbacks; and generally prohibit federal agencies from subcontracting with debarred entities. All of the local DOE officials we interviewed said they rely on the contractor to identify and mitigate potential conflicts by including these requirements in contract clauses in their subcontracts and in the contractor’s internal policies and procedures. Headquarters and local DOE officials said they rely on the consent review process to ensure that contractors are following these policies and procedures, and that contractors identify and mitigate subcontract ownership conflicts, such as those that may occur in connection with subcontracts to related parties.
If the contractor has identified a conflict of interest in connection with a proposed subcontract, the consent package checklists we reviewed request the contractor to also include in the package either a simple conflict of interest disclosure statement, which would include steps the contractor claims to have taken to mitigate the conflict, or a conflict of interest analysis conducted by the contractor. In both cases, the contracting officer is expected to check that the information is included in the package, but no additional action or assessment by local DOE contracting officers is required. Local DOE officials performing consent reviews told us that subcontracting with related parties is their main concern when assessing conflicts of interest; however, they generally did not independently assess information on subcontractor ownership during their reviews, beyond the information that the contractor reported. Information on subcontractor ownership could alert local contracting officers to potential conflicts of interest, such as preferential treatment in the awarding of subcontracts to parties of the prime contract, and could help DOE to determine if the mitigation plan included in the consent package is adequate to address the potential conflict of interest. However, local DOE officials told us that they generally do not request or review subcontractor ownership information in available databases when reviewing proposed subcontracts because there is no requirement to do so. (See appendix III for a description of data systems available to DOE officials that may contain relevant ownership information about existing contractors or entities.)
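One form the independent review discussed above could take is a simple cross-check of a proposed subcontractor’s owners against the parties to the prime contract. The sketch below is a hypothetical illustration of that idea; the entity names are placeholders, and real ownership data would come from the government databases or contractor records mentioned in the text.

    # Cross-check a proposed subcontractor's owners against the parties to
    # the prime contract to flag potential conflicts of interest.
    # Entity names are hypothetical placeholders.
    prime_contract_parties = {"Entity A", "Entity B"}

    proposed_subcontractor = {
        "name": "Subco LLC",
        "owners": {"Entity B", "Unrelated Holdings, Inc."},
    }

    shared_ownership = prime_contract_parties & proposed_subcontractor["owners"]
    if shared_ownership:
        print(f"potential conflict of interest: {proposed_subcontractor['name']} "
              f"shares ownership with prime contract parties {sorted(shared_ownership)}")
    else:
        print("no shared ownership identified in the consent package review")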
Local DOE officials told us they have identified instances, through their consent reviews, in which the contractors’ reporting of potential conflicts of interest was inadequate. For example, DOE officials reviewing consent packages at a local office noticed that a number of subcontracts were awarded to a single company. The officials subsequently determined that the contractor’s former president was currently sitting on the board of the subcontracting company, but the contractor had not disclosed this information during the consent review process. According to DOE officials, this case is currently under review.
In addition, according to a Department of Justice press release, an employee of one contractor created an entity and then, on behalf of the contractor, ensured that a multimillion-dollar subcontract was awarded to the new entity; this employee received payments under the subcontract from May 2011 to April 2016. The employee did not disclose this conflict of interest while working for the contractor.
As previously discussed, contractor ownership can be complicated, with complex relationships between and among entities. Further, contractor ownership may change over time through various mergers and acquisitions. These relationships and changes can make it difficult for DOE to monitor contractors’ ownership, such as in the previously discussed example in which an awardee did not notify NNSA of an ownership change prior to contract award as required by the request for proposals. In this case, NNSA would have been unable to identify or mitigate potential conflicts of interest in connection with the owner, had the contracting officer not been notified separately of the change in ownership.
Nevertheless, according to officials from DOE’s local offices, because DOE is not a party to the subcontracts, agency officials generally do not maintain or request subcontractor ownership information beyond the information that contractors provide during consent reviews. Although DOE has the right to access information about the subcontractors’ costs and performance—through contract clauses that generally allow DOE to request and review information relevant to costs and performance under the prime contract, including the costs and performance of subcontractors as well as through multiple databases available to government employees—officials stated there is no requirement for contracting officers to request or search such information during reviews. According to DOE headquarters officials, depending on the type of prime contract, the government may request direct access to subcontractor records as required. For example, DOE officials from one local office told us that they have access to the contractor’s subcontract information through a direct link to the contractor’s internal restricted network, but they do not routinely access the network to review ownership information. Like data available through other databases, these internal data maintained by the contractors have the potential to be useful to local officials during consent reviews for identifying the risks imposed by potential conflicts of interest between parties to the prime contract and potential subcontractors.
Federal internal control standards state that management should identify, analyze, and respond to risks related to achieving the defined objectives, such as analyzing identified risks to estimate their significance, which provides a basis for responding to the risks. As noted above, local officials said that their main concern when assessing conflicts of interest is the contractor subcontracting with related parties. However, local DOE officials told us that they generally do not request or review subcontractor ownership information because there is no requirement to do so. By requiring contracting officers to independently review subcontractor ownership information as part of consent reviews and assess potential conflicts of interest, DOE would have better assurance that contractors are adequately identifying and mitigating organizational conflicts of interest.
DOE Does Not Periodically Reevaluate Consent Thresholds to Ensure Sufficient Visibility into Contractors’ Subcontracting Actions
Although consent reviews have the potential to provide contracting officers with important information on the contractor’s compliance with requirements, the number of reviews conducted by local offices each year varies due to different thresholds at each location. DOE headquarters and local officials told us the numbers of consent reviews conducted by local offices are based on dollar-amount thresholds or other criteria established by the local DOE offices, and these criteria vary among DOE locations. According to DOE officials, consent review thresholds vary for a number of different reasons. For example, a senior agency official and some local DOE officials said that small staff sizes and other oversight responsibilities may limit the number of consent reviews that contracting officers conduct. DOE guidance recommends that when establishing the threshold for consent reviews, the contracting officer should aim to review enough subcontracts annually to provide the local office with sufficient visibility into subcontracting actions without being overly burdensome on either the contractor or the federal staff.
The consent review thresholds for the 24 prime contracts in our selection varied widely, and contracting officers performed few reviews for some prime contracts. For example, as shown in table 3, one local office set its subcontract consent threshold at $250,000, which led the local contracting officer to review about 175 consent packages in a year, and another set the threshold at $25 million, which led the local office to review 1 consent package in a year. Local DOE officials told us that most subcontracts are not subject to consent reviews because they fall below the consent threshold. One of the prime contracts with a $25 million consent threshold is held by Bechtel National, Inc., the contractor constructing the Hanford Waste Treatment and Immobilization Plant. As previously discussed, Bechtel National, Inc.’s purchasing system was disapproved for a 3-month period in fiscal year 2018 and, during that time, the contracting officer was required to review and consent to all subcontracts above $250,000. A DOE official told us that the local office reviewed 48 subcontract consent packages during this time period, and the office would not have reviewed any if the purchasing system had not been disapproved.
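The relationship between a consent threshold and review workload can be made concrete with a prior year’s subcontract values: raising or lowering the threshold directly changes how many proposed awards a contracting officer would see. The sketch below uses hypothetical subcontract values to show this sensitivity.

    # How the consent threshold drives the number of consent reviews.
    # Subcontract values are hypothetical.
    subcontract_values = [75_000, 250_000, 300_000, 600_000, 1_200_000,
                          4_000_000, 9_500_000, 30_000_000]

    def reviews_at(threshold):
        """Number of proposed subcontracts at or above the consent threshold."""
        return sum(1 for value in subcontract_values if value >= threshold)

    for threshold in (250_000, 5_000_000, 25_000_000):
        print(f"threshold ${threshold:,}: {reviews_at(threshold)} consent review(s)")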
In some cases, DOE contracting officers have adjusted the consent review thresholds during the contract period based on concerns they have identified with subcontracts that the contractor awarded. For example, one local office had concerns that the subcontractor was not disclosing potential conflicts of interest to the contractor and, therefore, the contractor did not mitigate these conflicts of interest. As a result, the contracting officers reduced the consent threshold to increase the number of consent packages they reviewed until they could be certain the contractor was managing subcontracting risks adequately. According to the local DOE officials, part of the reason they did not identify the deficiencies sooner was that high thresholds resulted in the local officials conducting few consent reviews. In another example, a DOE contracting officer from a different local office lowered the consent review threshold in 2017 due to documentation issues—such as files with inadequate documentation to explain or justify proposed prices—as well as the contractor not sending a subcontract to the local office for approval, as required. Local DOE officials told us they requested a peer review of the contractor to see if this was a systemic issue, and they reduced the consent threshold to send a message to the contractor that DOE expected the contractor to improve its subcontracting practices.
For more than half of the contracts in our selection, thresholds for required consent reviews have not been reevaluated since the contracts were awarded because, according to DOE officials, there has not been a requirement to do so. Federal internal control standards state that management should design control activities to achieve objectives and respond to risks. As discussed in the examples above, without an appropriate number of subcontract reviews, deficiencies, such as inadequate documentation, have persisted. By requiring contracting officers to periodically reevaluate the thresholds for consent reviews, DOE may be able to better ensure that local offices have sufficient visibility into contractors’ subcontracting actions to ensure that proposed subcontracts are appropriate to the risks involved and consistent with current policy and sound business judgment.
After we provided our preliminary results from our review of the consent review process to DOE headquarters officials, the officials told us they planned to reevaluate consent thresholds as part of the peer review process described above, with respect to purchasing system reviews. NNSA headquarters officials stated that they would implement a similar change to its process, based on DOE’s guidance, once DOE implements its changes in the November 2018 update. However, we reviewed the November 2018 update and found that it did not include a requirement to reevaluate consent thresholds as part of the peer review process.
DOE Does Not Explicitly Evaluate Its Contractors’ Performance on Subcontract Management
According to local DOE officials and documents provided, DOE develops Performance Evaluation and Measurement Plans at the beginning of each fiscal year to establish expectations for contractor performance and to describe how the local office will evaluate the contractors’ performance against those expectations. According to DOE guidance, the plans provide a standard to assess whether the contractors are meeting the mission requirements and performance expectations for goals stipulated within the contracts. In addition, according to DOE guidance, the plans should describe the incentives available, such as award fees, and the methodology for determining the amount of incentives earned by the contractor for the year, based on the evaluation of the contractor’s performance. In general, Performance Evaluation and Measurement Plans we reviewed included goals and performance criteria. Goals are the broad, high-level categories and benchmarks that local DOE officials use to assess the contractor’s annual performance and reflect what local officials consider most important in the contractor’s performance. Performance criteria, also included in the plans we reviewed, refer to the elements officials should consider when reviewing to determine whether the contractor has met the goals. Not all performance criteria need to be met for a contractor to show adequate performance toward a goal.
None of the fiscal year 2016 Performance Evaluation and Measurement Plans for the 24 prime contracts we reviewed included goals explicitly related to subcontractor management, and only 3 of the 24 plans included performance criteria that were related to the contractor’s management of subcontractors. According to DOE officials, there is no requirement to include specific goals or performance criteria related to subcontractor management in these plans because the contractor is responsible for completing the scope of work in the prime contract, regardless of whether it was performed by the contractor or a subcontractor. The fiscal year 2016 Performance Evaluation and Measurement Plans we reviewed for 18 of the 24 prime contracts in our selection included a goal for effective and efficient business operations, which includes the contractor’s accounting and purchasing systems. DOE headquarters officials stated that they would expect any subcontract management issues that affected the scope, schedule, or cost of the contract to be identified and addressed within this goal. However, of the three plans that included performance criteria on subcontract management, none of the criteria were included under the business operations goal, as DOE officials said they would have expected. Rather, these performance criteria were included under goals such as “project performance and technical issue resolution” or a “special emphasis area.”
The fiscal year 2016 Performance Evaluation and Measurement Plans we reviewed did not reflect DOE headquarters officials' stated expectation that subcontract management would be captured under the business operations goal of contractor evaluations. Nor do the plans acknowledge the importance of subcontract management and oversight, particularly in light of the high percentage of contract obligations—frequently for cost-reimbursement contracts—that subcontractors ultimately execute. As we mentioned above, contractors in our selection subcontracted out nearly 30 percent of their fiscal year 2016 obligated funds, making subcontract management a key part of the contractors' work.
According to DOE guidance, DOE should use performance-based management as a strategic contract management tool to plan for, manage, and evaluate contractor performance under the prime contract and to align performance with costs. A March 2018 study of NNSA's M&O contractors and a February 2019 GAO report on DOE performance measures found that performance evaluations tend to be subjective and do not focus on potentially important areas, such as the contractors' cost performance. The Deputy Secretary of Energy also issued a statement in September 2018 noting the importance of properly incentivizing performance as part of contract management to ensure that the most important performance measures are identified and that incentives are appropriately aligned to those measures. However, the plans we reviewed do not reflect the importance of subcontract management because there is no requirement to include assessments of the contractors' management of their subcontractors in the plans. By requiring that the annual Performance Evaluation and Measurement Plans include explicit performance criteria assessing the contractors' management of subcontractors, DOE would have more reasonable assurance that the agency is emphasizing the importance of subcontract management and providing contractors an additional incentive to properly manage their subcontractors.
Conclusions
Contracting officers at DOE's local offices are responsible for, among other things, ensuring that contractors complete required subcontract audits. DOE's headquarters and local offices have taken some steps to ensure that contractors comply with their subcontracting requirements. However, differences in how contractors, local DOE offices, and DOE headquarters offices interpret subcontract audit requirements and perform subcontract audits persist because DOE has not clearly defined—in guidance or other documents—how these requirements should be met. Until DOE clarifies which subcontracts should be audited, how an audit is defined, and how to meet subcontract audit requirements if DCAA is unable to conduct the audit, contractors may not perform subcontract audits as intended and unallowable costs may not be identified or recouped. Additionally, DOE's local offices did not always ensure that contractors audited their subcontractors' incurred costs for cost-reimbursement and time-and-materials subcontracts as required because DOE headquarters has not issued documented procedures or guidance that requires local offices to monitor contractors' progress in completing the required subcontract audits in a timely manner. Without such procedures or guidance, unallowable costs may go unidentified beyond the 6-year limitation period of the Contract Disputes Act, preventing DOE from recovering those costs.
In addition, the timing of contractor accounting system reviews differs among DOE’s local offices. DOE has not reviewed the differences in the frequency of the reviews and whether the basis for these differences is appropriate, nor provided guidance that includes criteria to determine the frequency of reviews. By reviewing the differences in the frequency of its accounting system reviews and approvals and developing guidance that includes criteria to determine the appropriate frequency of such reviews, DOE acquisition officials could better ensure that adequate accounting systems are in place during the entire period of the contract.
DOE uses consent reviews to ensure that other subcontracting requirements are met, including that subcontracts are appropriate to the risks involved and that there are appropriate safeguards related to personal and organizational conflicts of interest. Nevertheless, DOE generally does not independently request or review subcontractor ownership information or assess potential conflicts of interest related to ownership between contractors and subcontractors as part of its consent reviews—beyond information disclosed by the contractor—because there is no requirement to do so. Recent criminal investigations into conflicts of interest, local offices' own findings of unreported conflicts, and the complex ownership relationships among contractors and subcontractors that we identified emphasize the need for oversight in this area. By establishing such a requirement, DOE would have better assurance that contractors are adequately identifying and mitigating conflicts of interest.
DOE's local offices set thresholds to determine which subcontracts to review. The thresholds often are set at the beginning of the contract and are not reevaluated because there is no requirement to do so. We observed a small number of instances in which DOE local offices decreased thresholds after identifying concerns during consent reviews. We were encouraged that DOE intended to incorporate evaluation of consent review thresholds in its peer review process as part of its planned update to its guidance, but upon subsequent review, the updated guidance did not contain the requirement. By requiring local offices to periodically reevaluate consent review thresholds, DOE and NNSA acquisition officials may be able to better ensure that local offices have sufficient visibility into contractors' subcontracting actions to ensure that proposed subcontracts are appropriate and consistent with current policy.
Finally, DOE uses Performance Evaluation and Measurement Plans to establish expectations for contractor performance, including performance criteria, used to evaluate contractor performance. However, few of the plans we reviewed included explicit goals or performance criteria related to subcontract management because there is no requirement to do so. By requiring inclusion of explicit performance criteria for assessing the contractors’ management of subcontractors in these plans, DOE and NNSA acquisition officials would have more reasonable assurance that the agency is emphasizing the importance of subcontract management and providing contractors an additional incentive to properly manage their subcontractors.
Recommendations for Executive Action
We are making the following six recommendations to DOE: The Director of the DOE Office of Acquisition Management should clearly define—in guidance or other documents—which subcontracts should be audited, how an audit is defined, and how to meet subcontract audit requirements if DCAA is unable to conduct the audit. (Recommendation 1)
The Director of the DOE Office of Acquisition Management should develop documented procedures or guidance that requires DOE’s local offices to monitor the contractors’ progress in completing required subcontract audits in a manner that ensures unallowable costs can be recovered within the 6-year limitation period in the Contract Disputes Act. (Recommendation 2)
The Director of the DOE Office of Acquisition Management should review the differences in the frequency of DOE’s accounting system reviews and approvals and develop guidance that includes criteria to determine the appropriate frequency of such reviews for prime contracts. (Recommendation 3)
The Director of the DOE Office of Acquisition Management should require local officials to independently review subcontractor ownership information as part of DOE consent reviews and assess potential conflicts of interest to ensure contractors are mitigating them. (Recommendation 4)
The Director of the DOE Office of Acquisition Management should require local offices to periodically reevaluate consent review thresholds. (Recommendation 5)
The Director of the DOE Office of Acquisition Management should require contracting officers to include assessments of the contractors’ management of subcontractors as part of annual Performance Evaluation and Measurement Plans, as appropriate. (Recommendation 6)
Agency Comments and Our Evaluation
We provided a draft of this report to DOE for comment. In our draft report, we made twelve recommendations—each of our six current recommendations was made to both DOE and NNSA. In response to DOE's comments, we consolidated our original twelve recommendations into six recommendations addressed to DOE. We did so with the understanding that NNSA follows DOE guidance and would develop supplemental guidance, as needed, to implement these recommendations. With regard to the six consolidated recommendations, DOE partially concurred with five and did not concur with one. DOE's written response is reproduced in appendix IV. In addition, DOE provided technical comments, which we incorporated as appropriate.
DOE did not concur with our fourth recommendation to require local officials to independently review subcontractor ownership information as part of DOE consent reviews and assess potential conflicts of interest to ensure contractors are mitigating them. In response to the recommendation, DOE said that it plans to issue guidance emphasizing the importance of contracting officers' reviewing contractors' disclosure and mitigation of issues created by potential conflicts of interest or ownership affiliations between contractors and subcontractors, and NNSA plans to evaluate the need for additional action upon issuance of the guidance. DOE officials said they rely on the consent review process to ensure that contractors identify and mitigate subcontract ownership conflicts as required, such as those that may occur in connection with subcontracts to related parties. Local DOE officials told us they have identified instances, through their consent reviews, in which the contractors' reporting of potential conflicts of interest was inadequate. Furthermore, we have identified several recent high-profile incidents that have involved fraudulent activity by subcontractors related to conflicts of interest that were not disclosed to DOE. DOE officials—including those in local offices—have access to several databases and other sources of information that would allow them to independently verify ownership information and identify potential conflicts of interest that were not disclosed. We continue to believe that requiring local officials to independently review subcontractor ownership information as part of consent reviews and assess potential conflicts of interest could provide DOE with greater assurance that the contractors are identifying and mitigating conflicts of interest.
In response to our other five recommendations, DOE stated that it partially concurred with each. For each recommendation, DOE said that it would review existing regulations, procedures, guidance, or contract provisions and assess the need for supplemental guidance. We believe that DOE's plans to further examine the issues raised in our report are a positive step toward resolving the issues; however, the actions called for in our recommendations remain valid, and DOE could more efficiently resolve the issues by proceeding to implement those actions.
We are sending copies of this report to the appropriate congressional committees, the Secretary of Energy, the Administrator of the National Nuclear Security Administration, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix V.
Appendix I: Scope and Methodology
To address our objectives, we reviewed relevant laws, regulations, and guidance, including the Federal Acquisition Regulation (FAR); the Department of Energy Acquisition Regulation (DEAR); Department of Energy (DOE) policies and guidance on contract management and subcontract oversight; and individual prime contracts to identify requirements that explicitly apply to subcontracting, including DOE’s roles and responsibilities and requirements for the contractor. We also reviewed relevant documentation and interviewed officials from DOE and the National Nuclear Security Administration (NNSA), as well as representatives of DOE’s largest prime contracts and officials from the local DOE offices that oversee these prime contracts.
To identify the entities that participated in DOE's largest prime contracts, the extent to which they subcontracted their work, and the entities that participated in those subcontracts during fiscal year 2016, we reviewed a list of all DOE prime contracts active in that year provided by DOE headquarters officials. That list included information about prime contract type, total prime contract value, fiscal year 2016 obligations, and DOE's local offices responsible for overseeing the contractors. We selected fiscal year 2016 for review because it was the most recent fiscal year for which complete data were available at the start of our review. DOE's total prime contract obligations for fiscal year 2016 were $28.2 billion. We determined that an appropriate threshold for establishing our selection would be all single prime contracts for which DOE obligated at least $300 million (about 1 percent of all contract obligations) in fiscal year 2016; this resulted in a list of 24 prime contracts that represented about $23.6 billion in obligations, or about 84 percent of DOE's fiscal year 2016 prime contract obligations. The resulting selection of 24 prime contracts consisted of both management and operating (M&O) and non-M&O prime contracts from the three major program offices within DOE: NNSA, Office of Science, and Office of Environmental Management. We took several steps to determine the reliability of the prime contract data provided by DOE, including interviewing agency officials and reviewing individual prime contract documents, as well as verifying, through contractor and local office interviews, the amount of funds obligated to the prime contract in fiscal year 2016. We determined that the data provided by DOE on the prime contracts, in terms of prime contract obligations in fiscal year 2016, were sufficiently reliable for identifying DOE's largest prime contracts.
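The following minimal Python sketch illustrates the threshold-based selection approach described above; the contract records and dollar amounts are hypothetical, not DOE data.

```python
# Minimal sketch of the threshold-based selection described above.
# Contract names and dollar amounts are hypothetical, not DOE data.
THRESHOLD = 300_000_000  # $300 million, about 1 percent of total obligations

contracts = [
    {"name": "Contract A", "fy2016_obligations": 2_400_000_000},
    {"name": "Contract B", "fy2016_obligations": 310_000_000},
    {"name": "Contract C", "fy2016_obligations": 45_000_000},
]

selected = [c for c in contracts if c["fy2016_obligations"] >= THRESHOLD]
total = sum(c["fy2016_obligations"] for c in contracts)
covered = sum(c["fy2016_obligations"] for c in selected)

print(f"Selected {len(selected)} of {len(contracts)} contracts, covering "
      f"{covered / total:.0%} of total obligations")
```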
To identify the parties to DOE's largest prime contracts, we reviewed documents and statements the DOE local offices provided about the parties to each of the 24 prime contracts in our selection. For consistency, we used as our source only the information local DOE officials provided about prime contract ownership—either their direct statements or the prime contract documents they provided—although we observed that in some cases more recent ownership information was available through the contractors' websites. In addition to the documents and statements officials from DOE's local offices provided, we also reviewed contractors' websites and information from the parties' websites about acquisitions and mergers to better understand the complicated relationships among all of the contractors and the parties to the prime contracts. Because of changes in entity ownership or the structure of these prime contracts, more entities than we identified in our analysis may be parties to these prime contracts.
To identify the subcontractors to the 24 prime contracts in our selection, we requested and reviewed data from the 24 contractors about their active subcontracts in fiscal year 2016. Each contractor provided data on their subcontracts that were $10,000 or more and that were active in fiscal year 2016, including the subcontractor's name, Dun & Bradstreet's Data Universal Numbering System (DUNS) number, location of the subcontractor's office, total award amount, total obligated amount for fiscal year 2016, type of subcontract, contract award date, and contract term. There were some cases in which the contractors did not provide all of the requested subcontract data, or the data provided were unclear—for example, the meaning of the subcontract type. To resolve these issues, we conducted contractor-specific follow-up requests to collect the missing information, identify the reasons that information was not available, or clarify the data provided. We were able to collect missing information and clarify the data with two exceptions. First, many contractors did not have DUNS numbers for all of their subcontractors and therefore we did not use this identifier in our analyses. Second, contractor Brookhaven Science Associates, LLC did not track the obligated dollar amount for fiscal year 2016 for its active subcontracts. As a result, we were not able to include it in our analysis of the dollar amount of subcontracted funds, and we indicated that this analysis was therefore based on 23 of the 24 prime contracts in our selection.
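For illustration only, the data elements listed above could be represented as a record like the one sketched below in Python; the field names are ours, and the two optional fields reflect the data gaps noted above (missing DUNS numbers and untracked fiscal year 2016 obligations).

```python
# Hypothetical record layout for the subcontract data elements requested
# from each contractor; the field names are ours, for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Subcontract:
    subcontractor_name: str
    duns: Optional[str]                # many contractors lacked DUNS numbers
    office_location: str
    total_award: float
    fy2016_obligated: Optional[float]  # one contractor could not provide this
    subcontract_type: str              # e.g., "fixed-price", "cost-reimbursement"
    award_date: str
    term: str
```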
We took several steps to determine the reliability of the subcontract data provided by the contractors, including requesting and reviewing information from each of the contractors about the systems used to capture the data, and we determined that the information was sufficiently reliable to use in analyses of subcontract information from these 24 contractors in fiscal year 2016. We identified the amount of funds subcontracted, the number of subcontracts, and the number of unique entities subcontracted to during fiscal year 2016. We also identified the amount subcontracted for each contractor by type of subcontract, as defined in the FAR: (1) fixed-price; (2) cost-reimbursement; (3) cost-reimbursement, no-fee; and (4) time-and-materials. In addition, we used the names of the subcontractors to identify any cases in which a party to the prime contract was also a subcontractor to any of the prime contracts in our selection. We used shortened versions of the parties' names to perform the matching between parties to the prime contract and subcontractors. For example, the party to the Battelle Energy Alliance, LLC prime contract—Battelle Memorial Institute—was shortened to "Battelle," and we included any subcontract that included the word "Battelle" in its name in our match list. This allowed us to identify a conservative estimate of the number of parties who were also subcontractors in fiscal year 2016. However, this analysis would not have identified any cases in which the subcontractor was a party to the prime contract but had a different name.
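The following minimal Python sketch illustrates the shortened-name matching approach described above; aside from the Battelle example drawn from the text, the names and keywords are hypothetical.

```python
# Minimal sketch of the shortened-name matching described above.
# Aside from "Battelle," the names and keywords are hypothetical examples.
party_keywords = {
    "Battelle Memorial Institute": "battelle",
    "Example Engineering Group, Inc.": "example engineering",
}

subcontractors = [
    "Battelle Savannah River Alliance, LLC",
    "Acme Fabrication Co.",
    "Example Engineering Group Services, LLC",
]

# A subcontract matches if the shortened party name appears anywhere in the
# subcontractor's name (case-insensitive). This yields a conservative count:
# a party operating under a different name would be missed.
matches = {
    party: [s for s in subcontractors if keyword in s.lower()]
    for party, keyword in party_keywords.items()
}

for party, subs in matches.items():
    print(party, "->", subs)
```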
To develop graphical representations of (1) figure 2, Entities That Were Party to More than One of the 24 Largest Department of Energy Prime Contracts, Fiscal Year 2016 (which explores ownership relationships between parties and prime contracts) and (2) figure 4, Selected Department of Energy Contractors That Awarded Subcontracts to Parties to Their Prime Contract, Fiscal Year 2016 (which explores contracting relationships between prime contracts and subcontractors that were also parties), we performed the name-matching exercise described in the previous paragraph to first structure the data and then develop graphical prototypes using the UCINet network analysis tool, including its NetDraw graphics tool, which were then further refined for GAO publication. For each of the static representations, the graphics juxtaposed two sets of entities in columnar format: (1) for the party-prime contract graphic, we arrayed parties to two or more prime contracts in the first column of entities and the prime contracts in which these parties had ownership in a second column, and (2) for the prime contract-party as subcontractor representation, we arrayed the prime contracts in the first column and the subcontractors who were also parties to their prime contract in the second column. Lines between parties and prime contracts in the first graphic represented the presence of an ownership relationship. The parties were sized according to the number of contracts that the entity was a party to, and the contracts were sized according to the number of parties to that contract. Lines between prime contracts and parties as subcontractors in the second graphic represented the value of subcontracts between the two, with the lines taking on one of four weights corresponding to dollar value ranges.
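We produced the graphics with UCINet and NetDraw. As a rough stand-in for that workflow, the following Python sketch shows how the same bipartite structure, binned line weights, and degree-based node sizes could be assembled with the open-source networkx library; the node names, dollar values, and bin cutoffs are hypothetical.

```python
# Rough stand-in for the UCINet/NetDraw workflow described above, using
# the open-source networkx library. Names, dollar values, and the four
# weight bins are hypothetical.
import networkx as nx

G = nx.Graph()
# (prime contract, party that was also a subcontractor, subcontract dollars)
edges = [
    ("Prime contract X", "Party A", 12_500_000),
    ("Prime contract X", "Party B", 600_000),
    ("Prime contract Y", "Party A", 98_000_000),
]
bins = [1_000_000, 10_000_000, 100_000_000]  # illustrative dollar cutoffs

for prime, party, dollars in edges:
    G.add_node(prime, column="prime contract")
    G.add_node(party, column="party as subcontractor")
    # Bin the subcontract value into one of four line weights.
    weight = 1 + sum(dollars >= cutoff for cutoff in bins)
    G.add_edge(prime, party, weight=weight)

# Size each node by its number of connections, as in the published graphic.
sizes = {node: 100 * G.degree(node) for node in G.nodes}
print(sizes)
```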
To examine the extent to which DOE ensured that the 24 contractors in our selection audited subcontractors’ incurred costs and met other requirements for subcontract oversight, we developed a structured interview and a request for data and documents, which we administered to representatives of the 24 prime contracts in our selection and to DOE officials at local offices who were responsible for the oversight of the contractors. To develop the list of requested documents and structured interview questions, we reviewed the FAR, DEAR, DOE policies and guidance, and individual prime contracts to identify both DOE’s roles and responsibilities and requirements for the contractor regarding subcontracting. From these sources, we confirmed that the review of subcontract costs, including subcontract audits and DOE access to subcontractor records, was a key requirement and identified two other broad categories that covered the requirements we identified for DOE and the contractor related to subcontracting: (1) the review and approval of contractor business systems, including the accounting and purchasing systems; and (2) DOE’s approval of subcontracts through consent reviews, which are intended to assess the contractors’ adherence to subcontracting requirements and provide assurance against conflicts of interest, including personal and organizational conflicts, and issues with kickbacks, foreign influence, and disbarment.
We designed the structured interview questions and document requests to identify how DOE officials met subcontract oversight requirements. We pretested the structured interview questions and document requests at three of DOE’s local offices that included both M&O and non-M&O prime contracts from three major program offices—the Hanford Site in Washington State, Lawrence Livermore National Laboratory, and Pacific Northwest National Laboratory—and made changes to the request for documents and the interview guide as appropriate. We then conducted the structured interviews with DOE’s local officials responsible for oversight of the 24 contractors in our selection, including contracting officers, and with representatives from the 24 contractors during February, March, and April 2018. We also collected documents that addressed DOE’s oversight of the contractors’ management of subcontracts, including, as of February 2018, the two most recent incurred cost audits or assessments of the prime contract—which spanned the 10-year period from 2007 to 2016—the contract management plans, annual contractor performance reviews, peer reviews, and information about the subcontractors and entities that were parties to the prime contracts. We conducted a content analysis of DOE and contractor officials’ responses provided through the structured interview process and on the data and documentation we received, and we summarized the extent to which DOE ensures that contractors were auditing subcontractors’ incurred costs and meeting other requirements for subcontract oversight.
We conducted this performance audit from May 2017 to March 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: The Department of Energy’s 24 Largest Prime Contracts in Fiscal Year 2016
Table 4 provides information on the Department of Energy's (DOE) 24 largest prime contracts in fiscal year 2016, including the name of the site or project, the name of the contractor, entities that were party to the prime contract, and the amount obligated on the contract in fiscal year 2016. Local DOE officials provided information on parties to the prime contract, either from direct statements or from the prime contract documents. We used the information DOE provided as the source for the table, although we observed that in some cases more recent information was available through the contractors' websites or other sources.
Appendix III: Summary of Key Data Systems Used to Collect Data on Department of Energy Contractors
There are several key federal data systems that include information on Department of Energy (DOE) contractors. Additionally, DOE has internal systems that include information on contractors. These data systems are available to federal employees and can be used to differing extents to identify information about contractor ownership.
Appendix IV: Comments from the Department of Energy
Appendix V: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Hilary Benedict (Assistant Director), Kathy Pedalino (Analyst in Charge), Caitlin Dardenne, and Jeffrey (Chris) Wickham made key contributions to this report. Also contributing to this report were Enyinnaya David Aja, David Dornisch, Farrah Graham, Richard P. Johnson, Cynthia Norris, Dan Royer, and Tatiana Winger.

Why GAO Did This Study
DOE, including NNSA, is the largest federal civilian contracting agency, spending about 90 percent of its appropriations on contracts with companies, universities, and others for federal research and development, engineering, and production. DOE headquarters and local offices oversee contractors’ activities, including their management of subcontracts.
GAO was asked to review contracting at DOE, including the use of subcontractors. This report examines, for fiscal year 2016, (1) the parties that participated in DOE’s largest prime contracts and the extent to which they subcontracted their work; (2) the extent to which DOE ensured that those contractors audited subcontractors’ costs, as required; and (3) the extent to which DOE ensured that contractors met other subcontract oversight requirements. GAO reviewed DOE’s fiscal year 2016 data and documents, analyzed regulations, and interviewed federal officials and contractor representatives for DOE’s 24 largest fiscal year 2016 prime contracts.
What GAO Found
In fiscal year 2016, 28 entities participated in the Department of Energy’s (DOE) and its National Nuclear Security Administration’s (NNSA) 24 largest prime contracts, which totaled $23.6 billion of DOE’s fiscal year 2016 obligations. The contractors awarded about $6.9 billion (nearly 30 percent) of those obligations to thousands of subcontractors. Further, multiple companies, universities, and other entities can join together to bid on a contract (i.e., become a “party to” a contract). GAO’s review of data about these contracts and subcontracts identified complex ownership relationships among the contractors and subcontractors. For example, GAO found that almost all of the 28 parties to the prime contracts in its review were also subcontractors to some prime contracts, holding a total of nearly 3,000 subcontracts with fiscal year 2016 obligations totaling about $927 million (see figure). GAO found that it can be difficult to track changes in the ownership of parties to the contracts and to understand the relationships between parties.
DOE and NNSA did not always ensure that contractors audited subcontractors’ incurred costs as required in their contracts. GAO’s review of 43 incurred-cost assessment and audit reports identified more than $3.4 billion in subcontract costs incurred over a 10-year period that had not been audited as required, and some subcontracts remained unaudited or unassessed for more than 6 years. Completing audits in a timely manner is important because of a 6-year statute of limitations to recover unallowable costs that could be identified through such audits. DOE headquarters has not issued procedures or guidance that requires local offices to monitor contractors to ensure that required subcontract audits are completed in a timely manner, consistent with federal standards for internal control. Without such procedures or guidance, unallowable costs may go unidentified beyond the 6-year limitation period of the Contract Disputes Act, preventing DOE from recovering those costs.
DOE and NNSA perform several reviews to ensure that contractors meet other subcontract oversight requirements. For example, DOE’s local offices review proposed subcontracts to ensure they are awarded consistent with policies related to potential conflicts of interest. However, local officials do not independently review information on subcontractor ownership because doing so is not required, although such information could alert officials to potential conflicts of interest. By requiring contracting officers to independently review subcontractor ownership information, DOE and NNSA would have better assurance that contractors are adequately identifying and mitigating organizational conflicts of interest.
What GAO Recommends
GAO is making six recommendations, including that DOE develop procedures that require local offices to monitor contractors to ensure timely completion of required subcontract audits, and require local DOE officials to independently review subcontractor ownership information to identify potential conflicts of interest. DOE partially concurred with five of GAO's six recommendations but did not agree to independently review subcontractor ownership information. GAO maintains that the recommended actions are valid.
Background
DOD Personnel Misconduct
The Inspector General Act of 1978, as amended, provides that the IG may receive and investigate complaints or information from an employee concerning the possible existence of an activity constituting a violation of law, rules or regulations; gross mismanagement; gross waste of funds; abuse of authority; or a substantial and specific danger to public health or safety. Violation of the law may also include a violation of a provision of criminal law, including the Uniform Code of Military Justice, which is codified in Title 10 of the United States Code.
Whistleblower Protections for DOD Personnel
Whistleblowers are protected from reprisal as a result of making a protected disclosure through various statutes, regulations, and presidential policy covering different DOD personnel groups. Table 1 summarizes the statutory and policy authorities covering DOD personnel, along with selected protected disclosures and prohibited personnel actions—which are two required elements of the test for determining whether there was reprisal against a complainant for whistleblowing. A protected disclosure is a disclosure of wrongdoing by a whistleblower to a party that is an eligible recipient of that disclosure, while prohibited personnel actions include those actions that are taken or threatened in response to a protected disclosure, such as termination, reassignment, or a significant change in duties, responsibilities, or working conditions.
DODIG and Military Service IG Roles and Responsibilities for Investigating Whistleblower Reprisal Complaints
DODIG and the military service IGs share responsibility for investigating misconduct and whistleblower reprisal complaints. Allegations of misconduct and other whistleblower complaints, including those involving senior officials, may be investigated by DODIG or a military service IG depending on the nature of the allegation or the DOD employees involved. Responsibilities for investigating whistleblower reprisal complaints differ according to DOD personnel type. Specifically, DODIG is responsible for investigating and overseeing DOD component investigations of complaints alleging reprisal against certain DOD civilian employees, and for investigating complaints alleging reprisal against DOD contractor, subcontractor, grantee, and subgrantee employees. For complaints alleging reprisal against a military servicemember, DODIG has the authority to either investigate the complaint or refer it to a military service IG for action. Most reprisal cases involving military servicemembers are investigated by the military services IGs, with DODIG oversight.
In order to carry out its responsibilities, DODIG has established several directorates to facilitate the handling and investigation of misconduct and reprisal complaints. Figure 1 provides a high-level depiction of the DODIG and military service IG processes for handling reprisal, senior official misconduct, and internal DODIG employee complaints, along with the basic roles of the DODIG directorates.
Protecting Whistleblower Confidentiality
Whistleblower confidentiality protections are codified in federal law. The Inspector General Act of 1978, as amended, prohibits DODIG and the military service IGs from disclosing a whistleblower's identity without the whistleblower's consent unless the IG determines that such disclosure is unavoidable during the course of the investigation. For example, if a complaint includes information that poses a personal or public safety concern, disclosing the identity of the complainant may be unavoidable. Additionally, the Privacy Act of 1974 prohibits the disclosure of records on any person to another agency without the consent of the person the record relates to, but allows for the disclosure of an employee's identity if the purpose is for routine use—that is, a use that is disclosed for a purpose compatible with the purpose for which it was collected. For example, referring an allegation from an IG hotline to an appropriate investigative unit would be considered routine use.
Federal Law and Standards Establish Information Security Requirements to Protect Federal Systems
The Federal Information Security Modernization Act of 2014 is intended to provide a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations. The law requires each agency to develop, document, and implement an agency-wide information security program to provide risk-based protections for the information and information systems that support the operations and assets of the agency. The law also requires agencies to comply with NIST standards and the Office of Management and Budget requires agencies to comply with NIST guidelines for protecting federal IT systems.
Among other things, NIST defines how agencies should determine the security category of their information and information systems based on the potential impact or magnitude of harm that could occur should there be a loss in the confidentiality, integrity, or availability of the information or information system. NIST also prescribes an array of activities associated with the selection, implementation, and assessment of IT security controls—and the authorization to operate federal IT systems and other products.
DOD Instruction 8510.01, Risk Management Framework for DOD Information Technology, established a risk management framework for DOD information technology that is consistent with the principles established in NIST Special Publication 800-37. This framework includes requirements and procedures for identifying, implementing, assessing, and managing security controls.
Council of the Inspectors General on Integrity and Efficiency (CIGIE) Standards
CIGIE’s Quality Standards for Investigations and Quality Standards for Federal Offices of Inspectors General collectively provide a set of overarching principles that IGs should adhere to in conducting their operations. They also provide a framework for conducting high-quality investigations through the definition of general and qualitative standards. General standards, among other things, address the qualifications of investigators, independence, and the concept of due professional care and confidentiality protections throughout the course of an investigation. Qualitative standards focus on the establishment of policies, procedures, and instructions for confidentially handling and processing complaints, along with investigative planning, execution, reporting, and information management.
The CIGIE Integrity Committee receives, reviews, and refers for investigation allegations of wrongdoing made against Inspectors General, designated staff members of an IG, and the Special Counsel and Deputy Special Counsel of the Office of Special Counsel. Each Inspector General, including the DODIG, is required to submit a list of designated staff members to the CIGIE Integrity Committee Chairperson annually.
IGs Met Some Timeliness and Many Quality Goals, but More Actions Could Improve Performance against Unmet Goals
DODIG Met Some, but Not All, Fiscal Year 2018 Timeliness Goals
DODIG met some but not all internal timeliness goals for fiscal year 2018 related to the intake and referral of whistleblower allegations, as well as the oversight of DOD component investigations. DODIG also did not meet internal goals related to the timeliness of senior official misconduct investigations or internal and statutory goals related to the timeliness of reprisal investigations. Intake is the initial process to determine whether a complaint contains a prima facie allegation of whistleblower reprisal or a credible allegation of misconduct by senior officials. Oversight reviews are conducted by the DODIG whistleblower reprisal and senior official investigations directorates to ensure the quality of DOD component investigations.
DODIG officials cited several reasons for not meeting timeliness goals, including a backlog of cases and a lengthy report review process. Further, DODIG officials noted that the number of whistleblower reprisal cases increased from 1,013 to 2,002 (98 percent) over the past 5 years, while an internal DODIG fiscal year 2018 performance report cited other reasons for not meeting timeliness goals, including the assumption of responsibility for all sexual assault victim reprisal cases by the whistleblower reprisal investigations unit, the number of high-priority senior official cases concurrently open, and the increasing scope and complexity of investigations.
Timeliness of DODIG Intake and Oversight Reviews
DODIG met its fiscal year 2018 timeliness goals for civilian and contractor case intakes and for senior official misconduct oversight reviews, but did not meet goals related to the average days of senior official misconduct and military reprisal intakes or the average days for reprisal oversight reviews (see figure 2). In fiscal year 2018, DODIG resolved and closed 631 senior official misconduct cases during the intake review process, and it performed intake reviews for 1,032 whistleblower reprisal cases. It also conducted oversight reviews for 157 senior official misconduct cases and 995 reprisal cases.
By comparison, DODIG met its fiscal year 2017 targets related to the percentage of intakes and oversight reviews meeting timeliness goals, but it did not meet its goals for the average days of reprisal and senior official misconduct intakes.
Timeliness of DODIG Senior Official Misconduct and Reprisal Investigations
DODIG did not meet internal or statutory timeliness goals related to the percentage or average days for senior official or reprisal investigations (see figure 3). DODIG closed 73 investigations in fiscal year 2018, including 13 senior official misconduct cases and 60 military, contractor, and civilian reprisal cases. Overall, about 85 percent of all investigations did not meet the timeliness goal.
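The timeliness measures reported in this section are of two kinds: the average number of days cases were open and the percentage of cases meeting a goal. The following Python sketch illustrates how such measures can be computed from case durations; the goal and durations shown are hypothetical values, not DODIG data.

```python
# Illustrative computation of the two timeliness measures used in this
# section: average days to close and the percentage of cases meeting a
# goal. The goal and case durations below are hypothetical, not DODIG data.
from statistics import mean

GOAL_DAYS = 180  # hypothetical timeliness goal for one investigation type
case_durations = [145, 160, 210, 390, 820, 95, 300]  # days open at closure

average_days = mean(case_durations)
share_meeting_goal = sum(d <= GOAL_DAYS for d in case_durations) / len(case_durations)

print(f"Average days: {average_days:.0f}")
print(f"Percent meeting {GOAL_DAYS}-day goal: {share_meeting_goal:.0%}")
```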
DODIG similarly did not meet its investigation timeliness goals for senior official misconduct and reprisal investigations in fiscal year 2017. However, DODIG officials noted that the record closure of 60 reprisal investigations in fiscal year 2018 was a significant improvement over the 37 closed in fiscal year 2017, and DODIG data showed that the average age of closed and open investigations peaked in April 2018 and June 2018, respectively, and that both were lower as of January 1, 2019. Additionally, DODIG officials stated that they expected to eliminate the case backlog and reach a sustainable state of timeliness during fiscal year 2019.
Timeliness of DOD Hotline Referrals and Completion Report Reviews
In fiscal year 2018, the DOD Hotline referred 3,872 cases to other entities for inquiry, and it performed oversight of 945 completion reports from DOD components. As shown in figure 4, the DOD Hotline met its timeliness goals, except for the percentage of referrals meeting the goal for priority 1 complaints.
Comparatively, in fiscal year 2017, the DOD Hotline did not meet timeliness goals for the average days or percentage of referrals, but did meet its goal for completion reports.
DODIG Generally Met Fiscal Year 2018 Internal Quality Goals
Quality goals can enhance the ability of organizations to provide reasonable assurance that they are exercising appropriate safeguards for federal programs, as demonstrated by our prior work. DODIG generally met its fiscal year 2018 internal quality goals related to the thoroughness and completeness of senior official misconduct and whistleblower reprisal investigations, as well as the completeness and accuracy of information in DOD Hotline referrals. DODIG’s internal quality goals for senior official misconduct and reprisal investigations pertain to the thoroughness of required case-file documentation and the integrity and completeness of data in its case management system. Criteria for assessing these goals include whether or not key documentation of the investigation—such as the incoming complaints and required notifications—are present in the proper folders in the case file, and whether start, end, or milestone dates have been recorded in the case management system. Criteria for assessing the completeness and accuracy of information in DOD Hotline referrals include checks on whether whistleblower consent is accurately documented and whether correspondence is addressed to the correct recipient. According to DOD Hotline officials, a weighted checklist was created in June 2018 that has greater focus on those criteria associated with protecting confidentiality.
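Although we did not evaluate the design of the Hotline's weighted checklist, the following Python sketch illustrates how such a checklist can weight confidentiality-related criteria more heavily; the criteria and weights are our own illustrative assumptions, not the Hotline's actual checklist.

```python
# Minimal sketch of a weighted quality checklist in which confidentiality-
# related criteria carry more weight. The criteria and weights are our own
# illustrative assumptions, not the DOD Hotline's actual checklist.
criteria_weights = {
    "whistleblower consent accurately documented": 3,
    "correspondence addressed to correct recipient": 3,
    "milestone dates recorded in case management system": 1,
    "incoming complaint filed in proper folder": 1,
}

# Review results for one referral: True means the criterion was met.
review = {
    "whistleblower consent accurately documented": True,
    "correspondence addressed to correct recipient": True,
    "milestone dates recorded in case management system": False,
    "incoming complaint filed in proper folder": True,
}

score = sum(w for criterion, w in criteria_weights.items() if review[criterion])
max_score = sum(criteria_weights.values())
print(f"Referral quality score: {score}/{max_score} ({score / max_score:.0%})")
```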
In fiscal year 2018, DODIG reported that it conducted quality reviews for 59 whistleblower reprisal cases and 13 senior official misconduct cases. DODIG further reported conducting reviews related to the quality of DOD component investigations for 80 whistleblower reprisal cases and 80 senior official misconduct cases, while the Hotline reviewed the thoroughness of 1,954 referrals. As shown in table 2, DODIG either met or partially met its quality goals except for the data integrity and completeness goal for senior official investigations and the documentation goal for senior official oversight reviews.
While we have reported DODIG's performance against its quality measures, we recommended in September 2017 that DODIG develop quality performance measures and enhance then-existing timeliness measures to reflect key attributes of successful performance measures, and DODIG concurred. In November 2018, DODIG officials stated that DODIG is currently using the quality measures it had in place prior to fiscal year 2017, and noted that DODIG had developed DOD-wide quality performance measures for 2018 that measure the thoroughness of military service investigations. As a result, we continue to believe that our 2017 recommendation is valid: DODIG's performance measures should reflect the key attributes of successful performance measures.
Military Service IGs Generally Did Not Meet Fiscal Year 2018 Timeliness Goals
Military service IGs generally did not meet internal and statutory timeliness goals related to the notification of receipt of allegations of reprisal and misconduct, intake reviews, or senior official misconduct and reprisal investigations.
Military service IG officials provided several reasons for not meeting the internal and statutory timeliness goals for notifications, intake reviews, and investigations. Specifically, officials cited an increasing number of complaints; the increasing complexity of complaints, such as those that include multiple allegations and subjects; staffing challenges, such as training related to the rotation of military staff; and the use of reservists, who only work part-time. In addition, a senior official from one military service IG noted that service IGs should be provided greater latitude in dismissing complaints without DODIG review and approval, such as for reprisal complaints where there is no protected communication or personnel action.
Timeliness of Military Service IG Notifications and Intake Reviews
The military service IGs did not meet fiscal year 2018 timeliness goals for notifying DODIG of allegation receipts, or conducting intake reviews for reprisal cases (see figure 5). In fiscal year 2018, the military service IGs sent 141 senior official misconduct notifications and 876 reprisal notifications to DODIG, and performed 618 reprisal intake reviews.
Timeliness of Military Service IG Senior Official Misconduct and Reprisal Investigations
The military service IGs did not meet statutory or internal timeliness goals for senior official misconduct and whistleblower reprisal investigations, with the exception of the Marine Corps IG, which met its goal for senior official misconduct investigations (see figure 6). In fiscal year 2018, the military service IGs closed 424 investigations, including 347 reprisal investigations and 77 senior official misconduct investigations.
Military Service IGs Met DODIG and Internal Quality Goals for Investigations
Military service IGs met fiscal year 2018 quality goals established by DODIG related to the thoroughness of investigations conducted by the service IGs. Specifically, 89 percent of DODIG’s thoroughness criteria were met in the 93 senior official misconduct investigations conducted by the military service IGs and other DOD components, exceeding the 81 percent goal established by DODIG. Similarly, 85 percent of DODIG’s thoroughness criteria were met in the 310 whistleblower reprisal investigations conducted by the military service IGs and other DOD components, exceeding the 81 percent goal established by DODIG. DODIG has established six criteria for assessing the thoroughness of senior official investigations, including whether all allegations were addressed, whether the complainant and subject were interviewed, and whether relevant documents were obtained. DODIG has seven criteria for assessing the thoroughness of reprisal investigations, including whether protected communications and personnel actions were identified, and whether the report of investigation was approved.
The Army, the Air Force, and the Marine Corps IGs also met internal quality goals for fiscal year 2018 related to the percentage of cases returned by DODIG for rework due to quality issues. Specifically, Army IG officials stated that they met their goal of having no more than 5 percent of the investigations they submitted to DODIG for review returned by DODIG due to quality issues, and Air Force IG officials stated that they met their goals of obtaining DODIG concurrence on all of the senior official investigations they submitted for review, and having no more than 5 percent of reprisal investigations returned for rework. Similarly, the Marine Corps IG achieved its goal of having no investigations returned for rework, according to a senior Marine Corps IG official. The Naval IG did not provide us with any internal quality goals.
Aside from the quality goals, DODIG also conducted quality assurance reviews for the Air Force (2017), Army (2018), and Naval (2016) IGs, in which the quality of a sample of case files was examined. The reviews concluded that the military service IGs reviewed were generally complying with internal regulations and CIGIE standards for quality. In addition, in accordance with recommendations made in the quality assurance reviews, each of the service IGs reviewed by DODIG has developed or plans to develop checklists to help ensure that all required documentation is present in their case files, according to service IG officials and documentation.
IGs Have Implemented and Planned Initiatives to Improve Timeliness, but Initiatives Do Not Target All Aspects of Timeliness
DODIG and the military service IGs have implemented and planned various initiatives to improve the timeliness of their processing of senior official misconduct and reprisal complaints. Table 3 shows examples of recent DODIG and military service IG initiatives.
While these initiatives are positive steps, given that the performance of some measures is far below the goals, additional efforts could be made to improve performance against unmet timeliness goals—including those pertaining to senior official misconduct investigations conducted by the military service IGs, military service IG notifications made to DODIG, and military service IG intake reviews for reprisal cases. Additionally, DODIG and some of the military service IGs do not agree on the timeframes prescribed by DOD policy for military service IGs to notify DODIG of the receipt of a complaint, thereby complicating achievement of these goals. For example, officials from the Air Force IG stated that they notify DODIG of the receipt of misconduct allegations only after making a credibility determination, instead of within the five working days of receipt prescribed by DOD policy for senior official allegations. Similarly, Marine Corps IG officials stated that senior official allegations should be reported to DODIG within five days of a credibility determination.
Standards for Internal Control in the Federal Government state that management should complete and document corrective actions to remediate internal control deficiencies in a timely manner. Expanding initiatives to target unmet goals related to military service senior official investigations, notifications, and intakes could provide DODIG and the military service IGs a more comprehensive approach to improving timeliness and better position the IGs to improve upon the timeliness goals prescribed by DOD policy. In addition, resolving disagreements related to notification timeliness could improve the military service IGs’ ability to achieve those goals. Further, additional initiatives could provide greater assurance to potential whistleblowers that their cases will be handled expeditiously.
IGs Have Processes to Protect Whistleblower Confidentiality, but Some Gaps Exist
DODIG Has Policies and Procedures to Protect DOD Whistleblower Confidentiality
DODIG has established policies and procedures to implement key statutory requirements and CIGIE standards for protecting the confidentiality of whistleblowers from the receipt of a whistleblower complaint through its investigation. The Inspector General Act of 1978, as amended, states that the Inspectors General shall not, without consent from the employee, disclose the identity of an employee who reports misconduct or provides information, unless the Inspector General determines that such disclosure is unavoidable during the course of the investigation. Further, CIGIE’s Quality Standards for Investigations states that policies, procedures and instructions for handling and processing complaints should be in place to ensure that basic information is recorded, held confidential, and tracked to final resolution. Table 4 shows examples of key confidentiality protections included in DOD Hotline and senior official misconduct and whistleblower reprisal investigation policies.
DODIG officials stated that they routinely emphasize the importance of protecting whistleblower confidentiality and that confidentiality policies and procedures are addressed through internal training, staff meetings, and on-the-job instruction. Further, 69 of 86 (80 percent) DODIG respondents to our survey reported believing that the guidance they received on protecting confidentiality is sufficient to maintain the confidentiality of individuals involved in IG investigations, citing many of the processes identified in table 4 above as examples of guidance they have received.
DODIG Guidance for Protecting the Confidentiality of Whistleblowers Who Report Internal DODIG Misconduct Lacks Sufficient Detail
The DODIG Office of Professional Responsibility’s investigations manual on handling misconduct complaints against internal DODIG employees requires that complainant information be strictly controlled in order to protect the integrity of the investigative process and to avoid potential harm to the privacy and reputation of the employee. This guidance also includes some steps to protect whistleblower information such as redacting substantiated reports of investigation to be provided to investigation subjects. As previously noted, DOD Hotline guidance also includes steps to protect the confidentiality of internal DODIG whistleblowers. However, the Office of Professional Responsibility guidance does not include several key steps and procedures that some DODIG officials reported taking to protect whistleblower confidentiality, such as excluding complainant information from notifications sent to subjects and not identifying complainants during interviews with case subjects. In addition, DODIG’s Office of General Counsel does not have documented procedures for controlling access to cases involving designated DODIG staff members subject to review by the CIGIE Integrity Committee. DODIG designated staff members include the Principal Deputy Inspector General, Deputy Inspectors General, General Counsel, and Senior Advisor to the Inspector General, among other staff members.
Guidance on handling complaints alleging internal DODIG misconduct is also outdated and does not reflect recent organizational changes. In particular, the Office of Professional Responsibility’s investigations manual does not reflect its updated roles and responsibilities since splitting from the Quality Assurance and Standards directorate in October 2016, and certain chapters do not recognize that it now reports directly to the Inspector General. Further, sections of the manual have been revised at different points in time and do not align with the office’s current functions. For example, the section covering the office’s organization, mission, and authorities has not been updated since July 2009. Similarly, the section detailing investigation policies and procedures has not been updated since November 2012.
Some of the DODIG employees we surveyed reported concern that DODIG’s process for reporting employee misconduct and resolving internal complaints may not protect whistleblower confidentiality. For example, 14 (16 percent) survey respondents reported believing that DODIG’s internal process for reporting misconduct did not protect DODIG employee confidentiality or only protected it slightly. Also, 36 (42 percent) survey respondents reported not knowing whether or not DODIG’s internal process for reporting misconduct protects confidentiality, and 36 (42 percent) reported believing that it protects confidentiality somewhat or very well. Additionally, 14 of 86 (16 percent) and 9 of 86 (10 percent) employees surveyed reported having considered but ultimately choosing not to resolve an issue through the Office of the Ombuds—which may receive some internal misconduct complaints—or report misconduct through DODIG’s internal process on or after October 1, 2016, respectively, because they feared that their confidentiality could be compromised. Table 5 shows the distribution of these responses.
Survey respondents identified some concerns related to the confidentiality, objectivity, and independence of DODIG’s internal process for reporting misconduct and suggested some related improvements. For example, although it has separated from the Quality Assurance and Standards directorate, the Office of Professional Responsibility continues to share office space with the directorate and hold complainant and witness interviews in the shared space. Also, it was suggested that an online form could be used so that internal complaints are routed directly to the Office of Professional Responsibility instead of through the DOD Hotline. DODIG officials told us that there are record-keeping and performance measure-related bases for continuing to use the DOD Hotline to receive complaints of internal misconduct, but that they would carefully evaluate the suggestion.
CIGIE Quality Standards for Federal Offices of Inspector General state that IGs should establish and follow policies and procedures for receiving and reviewing allegations and ensure that whistleblower identities are not disclosed without consent, unless the IG determines that such disclosure is unavoidable during the course of the investigation. CIGIE Quality Standards for Investigations also state that policies and procedures should be revised regularly to align with current laws and regulations. DODIG officials told us in November 2018 that the Office of Professional Responsibility investigations manual is in the process of being updated but were unable to provide a timetable for the completion of these updates, and stated that all of the provisions—including the confidentiality protections—are subject to changes and updates. In addition, in January 2019 DODIG officials noted, after discussion with GAO, that they intended to implement guidance for making referrals to the CIGIE Integrity Committee. Until DODIG develops guidance that incorporates procedures to protect confidentiality and documents how to maintain whistleblower confidentiality throughout the CIGIE referral process, it will lack reasonable assurance that its process for investigating internal misconduct allegations can fully protect the confidentiality of whistleblowers.
Military Service IGs Have Guidance for Protecting Whistleblower Confidentiality, but It Is Not Comprehensive
Military service IG guidance identifies confidentiality as a core tenet of handling and investigating whistleblower complaints. For example, military service IG guidance states that consent should generally be obtained from complainants before each military service IG can share a complainant's identity with officials who will investigate the allegations, and provides that complaints may be redacted or summarized to omit personally identifiable information—such as when consent is not given or for other purposes. In addition, military service IG guidance states that a complainant's identity may only be disclosed without consent when an authorized official has determined that such disclosure is unavoidable in order to investigate an allegation.
Aside from these shared provisions, each military service IG's guidance includes additional precautions aimed at protecting whistleblower confidentiality. For example, Air Force Instruction 90-301 instructs Hotline personnel to coordinate communication between the complainant and the investigator if a complainant does not consent to the disclosure of his or her identity. In addition, Army and Marine Corps IG guidance stipulates that whistleblowers will be notified if it becomes necessary to disclose their identity without their consent, and Naval IG guidance requires investigators to inform complainants that, although their testimony may be needed under administrative action procedures, they will be identified as witnesses rather than as complainants in order to safeguard their identities.
While all military service IGs acknowledge the need to preserve confidentiality, we found gaps in confidentiality protections in Air Force, Naval, and Marine Corps IG guidance, but not in Army IG guidance. For example, we found that Air Force, Naval, and Marine Corps IG guidance did not include requirements outlined in DOD Instruction 7050.01 related to the specific conditions under which information disclosures may be made without complainant consent. According to DOD Instruction 7050.01, these include circumstances when a complainant has made it known outside IG channels that he or she submitted the complaint, when there is an emergency situation or a health or safety issue, or when the allegation is being transferred outside of DOD to another IG. Air Force, Naval, and Marine Corps IG guidance predates DOD Instruction 7050.01, which was updated in October 2017, and references an older version of the instruction that omits this disclosure guidance.
Additionally, DODIG’s 2016 and 2017 quality assurance reviews of the Naval IG and Air Force IG concluded that confidentiality protections could be improved. Specifically, DODIG found that the Air Force IG did not have written procedures for handling and restricting IG employee access to complaints against individuals with access to the Air Force IG’s whistleblower database, including both IG employees and contractors that support the database. In addition, DODIG found that the Naval IG Hotline program instruction needed to be updated and that it did not have a hotline standard operating procedure with guidance to redact complainant identities before releasing investigation reports to installation commanders or other military officials.
Air Force, Naval, and Marine Corps IG officials stated that they are currently in the process of updating their guidance to better incorporate confidentiality protections. For example, Naval IG officials told us that the Naval IG is updating its Hotline instruction, which will provide guidance to obtain consent from complainants prior to releasing investigation reports to installation commanders or other military officials, or to redact the complainant's name. According to Naval IG officials, the updated instruction should be finalized in the first quarter of fiscal year 2019.
CIGIE Quality Standards for Federal Offices of Inspector General state that IGs should establish and follow policies and procedures for receiving and reviewing allegations and ensure that whistleblower identities are not disclosed without consent, unless the IG determines that such disclosure is unavoidable during the course of the investigation. Further, CIGIE Quality Standards for Investigations state that policies and procedures should be revised regularly to align with current laws and regulations, and that confidentiality should be considered throughout an investigation, to include drafting reports, validating contents, and submitting the final report. Without updated policies and procedures that fully implement confidentiality standards for complaint handling and investigation, the Air Force IG, the Naval IG, and the Marine Corps IG may not be able to ensure the consistent implementation of confidentiality protections within their offices.
IGs Are Able to Access Whistleblower Information to Perform Their Duties and Have Taken Some, but Not All, Required Steps to Safeguard It
IGs Are Able to Access Information Needed to Handle Whistleblower Complaints, and Have Taken Steps to Safeguard Classified Information
DODIG and military service IGs do not experience significant challenges in accessing sensitive or classified information necessary to handle whistleblower complaints, according to cognizant IG officials. Such information includes documentary evidence or witness statements. Similarly, 79 of 86 (92 percent) DODIG respondents to our survey reported that they are generally able to access all types of unclassified information necessary to perform the duties of their position, while 82 of 86 (95 percent) respondents stated that they are either able to access classified information as necessary or do not require access to classified information.
DODIG and the military service IGs have also taken steps to safeguard physical and electronic classified whistleblower information in accordance with DOD policy, which requires that DOD components establish a system of technical, physical, and personnel controls to ensure access to classified information is limited to authorized persons. Cases including classified information constituted a small percentage of cases closed by DODIG and the military service IGs in fiscal year 2017, with the percentage of those closed by DODIG directorates—including the DOD Hotline and the whistleblower reprisal and senior official investigations—ranging from 0.2 percent to 0.5 percent, according to DODIG officials. Officials from each of the military service IGs reported closing no classified cases in fiscal year 2017. In addition, DODIG and military service IG officials reported having an adequate number of staff with clearances at the requisite levels (e.g., SECRET) to handle classified case information, along with processes for physically and electronically storing and accessing information at different classification levels.
Most IGs Are Following DOD's IT Risk Management Process
DODIG and most military service IGs are following DOD's IT risk management process, which involves the assessment of, and authorization to operate, IT used to manage DOD information—including sensitive but unclassified whistleblower information. The Naval IG, however, has not authorized its case management system in accordance with DOD policy—which implements NIST and Office of Management and Budget federal IT security guidelines for IT systems and applications, including those used by the IGs—although it is taking steps to do so. DODIG and the Naval IG use IT systems to manage sensitive whistleblower information, while the Air Force, Army, and Marine Corps IGs use IT applications, which are not subject to the full IT risk management authorization process, as discussed below.
DODIG and Naval IG IT Systems
DODIG has followed the DOD IT risk management process by authorizing the Defense Case Activity Tracking System (D-CATS)—its whistleblower case management system—to operate in accordance with DOD policy and federal IT security guidelines. DOD’s risk management process requires that IT systems be authorized to operate using a multistep process that entails the identification, implementation, and assessment of system security controls, along with the corresponding development and approval of a system security plan, security assessment report, and plan of action and milestones. The process requires systems to be reassessed and reauthorized every 3 years in order to ensure the continued effectiveness of security controls, and allows for ongoing authorizations through a system-level strategy for the continuous monitoring of security controls employed within or inherited by the system. The strategy should include a plan for annually assessing a subset of system security controls. DOD policy states that component heads may only operate systems with a current authorization to operate, and that authorization termination dates must be enforced.
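While the risk management process itself is a matter of DOD policy rather than software, the basic authorization-lifecycle check it implies can be illustrated in a few lines. The following minimal Python sketch models an authorization record and flags systems whose 3-year authorization has lapsed; the class, field names, and example systems are hypothetical and are not drawn from DOD's actual tooling.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical record of a system's authorization to operate (ATO).
# Field and system names are illustrative, not drawn from DOD tooling.
@dataclass
class SystemAuthorization:
    system_name: str
    authorized_on: date          # date the authorizing official signed the ATO
    ongoing_authorization: bool  # ongoing ATO via a continuous monitoring strategy

    def is_current(self, today: date, term_years: int = 3) -> bool:
        """An ATO lapses after the reauthorization term unless the system
        holds an ongoing authorization under continuous monitoring."""
        if self.ongoing_authorization:
            return True
        return today <= self.authorized_on + timedelta(days=365 * term_years)

systems = [
    SystemAuthorization("case_management_system", date(2017, 5, 1), False),
    SystemAuthorization("legacy_tracking_system", date(2015, 3, 1), False),
]

# Enforce authorization termination dates: flag any system still operating
# without a current authorization for escalation to the component head.
for s in systems:
    if not s.is_current(date(2019, 2, 1)):
        print(f"{s.system_name}: authorization lapsed; reauthorize or cease operation")
```

In this sketch, only the system authorized in 2015 is flagged, mirroring the policy point that termination dates must be enforced rather than allowed to lapse silently.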
DODIG last authorized D-CATS to operate in May 2017, determining that overall system security risk was acceptable based on a review of the system security plan, security assessment report, and plan of action and milestones. Our review of DODIG’s system authorization documents also found that they addressed key, required content elements. For example, the system security plan specified the security controls intended to be in place based on the system’s risk classification, and the security assessment report documented findings of compliance and the methods used by the assessor to evaluate security controls when implementing DODIG’s continuous monitoring strategy. Additionally, the plan of action and milestones identified tasks needed to mitigate identified vulnerabilities along with resources and milestones to accomplish the tasks.
However, as of December 2018, the Naval IG had not authorized its case management system in accordance with the DOD risk management process, and the system remained in operation. The Naval IG was issued an interim authorization to operate its case management system in March 2017 by the Commander, U.S. Fleet Cyber Command. The interim authorization—which expired in January 2018—required the Naval IG to transition from the department’s prior IT risk management process to the current process by the time of its expiration, noting that the overall risk of the system was high due to incomplete testing. Subsequently, in June 2018, the Naval IG requested and was eventually granted, in September 2018, a conditional authorization to continue operating the case management system through October 2018.
In early December 2018, the Naval IG requested another conditional authorization to operate the case management system until September 2019. According to Naval IG officials, the conditional authorization is needed because the whistleblower case management system’s host environment is not expected to attain its authorization until September 2019. As a result, the Naval IG was taking steps beyond the conditional authorization request to manage IT security risks as it works towards compliance with the new DOD risk management process. For example, Naval IG officials stated that new leadership was put in place to oversee the case management system; that a senior system administrator would be hired to help maintain IT security; and that the case management system was undergoing regular scans to assess security risks, with any resultant issues being remediated.
NIST guidelines state that organizations should design and prioritize activities to mitigate security risks, and that alternative strategies may be needed when an organization cannot apply controls to adequately reduce or mitigate risk. As noted, the Naval IG’s case management system was not authorized as of December 2018 and it was not yet able to transition to the current DOD risk management process. However, if completed, the actions planned and underway—including the conditional authorization and security scans—should help to mitigate system security risks and provide greater assurance that existing system security controls safeguard sensitive whistleblower information.
Air Force, Army, and Marine Corps IG IT Applications
The IGs of the Air Force, the Army, and the Marine Corps are following DOD’s IT risk management procedures for their primary case management applications, which are not subject to the full IT risk management authorization process. According to DOD Instruction 8510.01, Risk Management Framework (RMF) for DOD Information Technology (IT), DOD IT such as applications must be securely configured in accordance with applicable DOD policies, and application security controls must undergo special assessment of their functional and security-related capabilities and deficiencies. The results of such assessments are to be documented within an application-level security assessment report and reviewed by a security manager to ensure that the product does not introduce vulnerabilities into its host system.
We found that while the Army, Air Force, and Marine Corps IGs have not produced the required application-level security assessment reports for their primary applications, they have met the intent of these requirements through other actions. Specifically, we noted that the Air Force and Army IGs’ primary case management applications reside in host systems that were authorized to operate under the risk management process within the last 3 years, and that the assessments associated with the host system authorizations included a review of application-level security controls, according to IG officials. Similarly, the Marine Corps IG’s case management application was exempted from assessment by its authorizing official because it was determined that the application did not introduce additional risk into its authorized host system.
DODIG Does Not Fully Restrict Employee Access to Sensitive Whistleblower Information
DODIG’s Case Management System Does Not Include Some Controls to Restrict Internal Employee Access
As previously discussed, DODIG has taken steps to restrict employee access to whistleblower information, such as by restricting access to cases in which a complainant has not consented to releasing his or her identity. The DOD Hotline also applies additional restrictions to all cases involving internal misconduct referrals to the Office of Professional Responsibility and the CIGIE Integrity Committee, and it has the capability to further restrict records, according to DODIG officials. Beyond restricting records, the case management system also includes user roles, which govern users' view of information. However, employees at the three DODIG directorates principally responsible for handling whistleblower information are generally able to access sensitive whistleblower information belonging to other directorates—information that is not necessary to accomplish their assigned tasks—in both D-CATS and an associated document repository. NIST Special Publication 800-53, Security and Privacy Controls for Federal Information Systems and Organizations, states that organizations should employ the core security principle of least privilege, under which users are allowed only the authorized access necessary to accomplish assigned tasks in accordance with organizational missions and business functions.
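As a rough illustration of the least-privilege principle described in NIST Special Publication 800-53, the sketch below scopes each user's access to cases owned by his or her own directorate unless an explicit, documented grant exists. The case identifiers, directorate names, and grant mechanism are assumptions made for illustration only.

```python
# Minimal sketch of a directorate-scoped, least-privilege access check.
# Directorate names, case IDs, and the grant mechanism are hypothetical.
CASE_OWNERS = {
    "case-001": "hotline",
    "case-002": "senior_official_investigations",
    "case-003": "whistleblower_reprisal_investigations",
}

# Explicit grants for users with a documented need to know outside their
# home directorate; empty by default, consistent with least privilege.
EXPLICIT_GRANTS = {}

def can_access(user, user_directorate, case_id):
    """Allow access only to cases owned by the user's own directorate,
    or when an explicit cross-directorate grant has been recorded."""
    owner = CASE_OWNERS.get(case_id)
    if owner is None:
        return False
    if owner == user_directorate:
        return True
    return case_id in EXPLICIT_GRANTS.get(user, set())

assert can_access("analyst1", "hotline", "case-001")
assert not can_access("analyst1", "hotline", "case-002")  # cross-directorate denied
```

The design point is that access across directorates is denied by default and permitted only by a recorded exception, which is the posture the D-CATS deficiency described below does not yet achieve.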
DODIG employees in the DOD Hotline, senior official investigations directorate, and whistleblower reprisal investigations directorate are able to access whistleblower information belonging to other DODIG directorates in both D-CATS and its associated document management repository because DODIG has not developed sufficient system controls needed to restrict access across the three directorates. For example, a DODIG employee in either the senior officials or reprisal investigations directorates can access Hotline records in D-CATS that the employee does not have a need to access, with the exception of cases specifically restricted by the DOD Hotline to prevent unauthorized access. According to an August 2018 internal DODIG memo, the lack of controls to restrict access to information across the three directorates has been known since the system was established in 2012.
DODIG plans to establish controls to restrict access among the DODIG directorates in a new enterprise system (D-CATSe), which will eventually replace D-CATS and the case management systems used by the military service IGs. D-CATSe is intended to provide a common case activity tracking system capable of supporting mandatory reporting requirements and collecting, storing, and exchanging IG records related to complaints and administrative investigations throughout a complaint’s lifecycle. According to DODIG officials, D-CATSe will restrict access both within and among user IGs, including the DODIG directorates and military service IGs, each of which may have unique access requirements based on their different types of user groups. According to DODIG officials, this will be accomplished through the establishment of unique business units at different organizational levels, teams, and user roles, which will collectively determine what information a user can access. However, as shown in figure 7 below, the incremental release schedule for D-CATSe has been delayed, and the IGs are not expected to fully transition to the new system until fiscal year 2021.
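The layered design that DODIG officials describe, in which business units, teams, and user roles collectively determine access, might be modeled along the following lines. The layers, names, and role levels shown are assumptions for illustration and do not represent the actual D-CATSe implementation.

```python
from dataclasses import dataclass

# Hypothetical model in which business unit, team, and user role must all
# align before a record is visible; names and levels are illustrative and
# do not reflect the actual D-CATSe design.
@dataclass
class CaseRecord:
    business_unit: str   # e.g., the owning IG organization
    team: str            # e.g., an investigative team within that unit
    min_role_level: int  # minimum role level required to view the record

@dataclass
class SystemUser:
    business_unit: str
    teams: set
    role_level: int

def can_view(user: SystemUser, record: CaseRecord) -> bool:
    """A record is visible only when all three layers permit it."""
    return (user.business_unit == record.business_unit
            and record.team in user.teams
            and user.role_level >= record.min_role_level)

investigator = SystemUser("service_ig_a", {"reprisal_team"}, role_level=2)
case = CaseRecord("service_ig_a", "reprisal_team", min_role_level=2)
print(can_view(investigator, case))  # True only when unit, team, and role align
```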
NIST guidelines state that organizations should design and prioritize activities to mitigate security risks, and that alternative strategies (such as plans) may be needed when an organization cannot apply controls to adequately reduce or mitigate risk. Further, NIST guidelines state that addressing assurance-related controls during system development can help organizations obtain sufficiently trustworthy information systems and components that are more reliable and less likely to fail. However, DODIG does not plan to take other actions to address the lack of cross-directorate controls before the advent of the enterprise system. Additionally, while DODIG is designing such controls and plans for each system release to provide a requirements basis for subsequent releases, it has not developed an assurance plan for testing controls, according to DODIG officials, or fully defined the system requirements needed to implement these controls and ensure it has achieved least privilege both within and across the user IGs. Without considering interim actions to address the lack of D-CATS cross-directorate access controls, DODIG may be unable to sufficiently mitigate security risks while D-CATSe is developed. Also, without developing a plan with assurance controls for achieving least privilege in D-CATSe, DODIG may be unable to ensure the confidentiality and integrity of sensitive whistleblower information during its implementation.
DODIG Has Identified Instances Involving Improper Employee IT Access Rights to Whistleblower Information
Separate from the lack of cross-directorate controls, DODIG has identified multiple instances in which sensitive but unclassified whistleblower information in the DODIG Administrative Investigations directorate whistleblower case management system and document repository was accessible to DODIG personnel who did not have a need to know this information. These instances involve DOD Hotline records that are specifically restricted to protect complainants requesting confidentiality, along with records belonging to DODIG’s Office of Professional Responsibility—which handles internal DODIG misconduct complaints.
Table 6 shows examples of recent instances in which DODIG determined that sensitive whistleblower records were accessible to DODIG personnel without a need to know. According to DODIG officials, as of January 2019, there were no known instances of anyone without a need to know actually accessing these records. These officials also stated that corrective action had been taken for each instance in table 6, including by blocking access to information while the underlying issues were resolved; that at no time was information available to the public; and that the instances did not result in any disclosure outside of DODIG.
NIST guidelines state that the need for certain user privileges may change over time, necessitating the periodic review of assigned user privileges in order to determine if the rationale for assigning such privileges remains valid. DODIG has determined that its user access issues are broadly attributable to system administration and application problems, including permission changes resulting from system updates. To address such issues, DODIG has taken several remedial actions and identified additional recommended steps, including: reconciling user accounts and validating permissions related to restricted records; reviewing policies related to protecting complainant confidentiality and conducting awareness training with personnel, as appropriate; and developing enhanced user management procedures and internal controls related to establishing user accounts, reconciling current user permissions, and controlling access to restricted records.
In addition, in October 2018, DODIG instituted a process whereby user privileges associated with its case management system and document repository will be reviewed, validated, and corrected, if necessary, on a quarterly basis. If fully implemented, this process, along with the proposed actions, should help ensure that assigned user privileges are periodically validated and aligned with business needs. However, DODIG’s process does not include steps to test document repository permissions after case management system updates, which were determined by DODIG to be the cause of some permission issues. Without including such steps in its process, DODIG lacks assurance that system permissions will align with business needs on an ongoing basis, and therefore may not be able to appropriately control user accounts to prevent unauthorized access by system users.
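One way to make such a review repeatable, including re-testing repository permissions after a system update, is to reconcile actual permissions against an approved baseline. The sketch below illustrates the idea; the folder names, permission model, and data sources are assumptions, not DODIG's actual procedures.

```python
# Minimal sketch of a recurring permission reconciliation: compare actual
# repository permissions against an approved baseline and report drift.
# The folder names and data sources are hypothetical placeholders.
def load_expected_permissions():
    # In practice, read from an approved access-control baseline.
    return {"restricted_case_folder": {"hotline_staff"}}

def load_actual_permissions():
    # In practice, query the document repository for current permissions.
    return {"restricted_case_folder": {"hotline_staff", "all_employees"}}

def reconcile():
    expected = load_expected_permissions()
    actual = load_actual_permissions()
    findings = []
    for folder, allowed in expected.items():
        extra = actual.get(folder, set()) - allowed
        if extra:
            findings.append((folder, sorted(extra)))
    return findings

# Run quarterly and again after each case management system update, since
# updates were found to alter document repository permissions.
for folder, extra_groups in reconcile():
    print(f"{folder}: unexpected access for {extra_groups}; restrict and investigate")
```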
Sensitive Whistleblower Information Has Been Accessible to Military Service IG Employees without a Need to Know
The military service IGs' case management systems and applications incorporate IT controls, such as authenticated user accounts and unique permissions, to protect certain whistleblower information. However, service IG systems and applications do not fully restrict employee access to only the sensitive whistleblower information that is necessary to accomplish assigned tasks. As previously discussed, NIST guidelines state that organizations should provide users only the authorized access that is necessary to accomplish assigned tasks in accordance with organizational missions and business functions. As shown in table 7, DODIG's quality assurance reviews and our work identified issues related to IG employee access restrictions.
At the time of our review, the military service IGs had not taken steps to fully address the identified access issues. Specifically, Air Force officials stated that they did not plan to address the application access issues because they did not have funding to continue developing their existing application prior to transitioning to D-CATSe, although they would explore whether solutions were possible within current fiscal constraints during the next system maintenance evaluation. Similarly, Army IG officials stated that while the Army IG had resources to further develop its existing case management application, they had elected not to use those resources to remedy the identified access issue in light of the future arrival of D-CATSe. In addition, Naval IG officials reported taking action to restrict senior official investigations, but did not provide information to us on actions taken to address DODIG's recommendation to restrict cases involving internal Naval IG personnel. Finally, Marine Corps IG officials stated that access restrictions would be implemented as part of an application redesign scheduled to be complete by the end of 2018. However, these officials also noted that they have not identified the root cause of the access problem or developed a plan to ensure that needed access restrictions are implemented and functioning properly, raising questions as to whether the redesign will fully restrict access on a continuing basis. As mentioned previously, the Marine Corps' case management application is also exempt from testing under the DOD IT risk management process, and therefore is not subject to routine security assurance testing.
Federal Standards for Internal Control state that management should analyze and respond to risks, and evaluate and remediate internal control deficiencies on a timely basis, including those related to audit findings. Further, NIST guidelines state that organizations should design and prioritize activities to mitigate security risks, and that alternative strategies, such as plans, may be needed when an organization cannot apply controls to adequately reduce or mitigate risk. These guidelines also encourage organizations to obtain assurance-related evidence on an ongoing basis in order to maintain the trustworthiness of information systems. As previously discussed, D-CATSe is being implemented incrementally, with releases for the Naval IG and the Air Force and Army IGs not scheduled to occur until fiscal years 2020 and 2021, respectively. By considering actions prior to the advent of D-CATSe, the Air Force, Army, and Naval IGs could mitigate existing risks to whistleblower confidentiality by reducing the potential for unauthorized employee access of whistleblower records. Also, by developing a plan to ensure that access restrictions function properly, the Marine Corps IG could better ensure the confidentiality and integrity of sensitive whistleblower information in its redesigned case management application on a continuing basis.
IGs Report Few Instances of Confidentiality Violations but IT Access Issues Create This Potential
Potential violations of whistleblower confidentiality may be reported to DODIG, the service IGs, the Office of Special Counsel, or CIGIE. IGs identified some substantiated violations of whistleblower confidentiality between fiscal years 2013 and 2018. Specifically, DODIG identified 8 substantiated violations of whistleblower confidentiality between fiscal years 2013 and 2018, representing approximately 0.01 percent of the 95,613 contacts handled by DODIG during that timeframe, according to DODIG officials. The Army IG identified 6 substantiated violations of whistleblower confidentiality between these years. These violations include the improper release of IG information, disclosures made to individuals who do not have a need to know, and unauthorized access to whistleblower records by IG personnel. DODIG officials noted that in some instances, violations were determined not to result from employee misconduct because the complainant's identity was disclosed unwittingly. According to DODIG and Army IG officials, disciplinary or corrective action was taken in all but one of the 14 substantiated violations; in the remaining case, the DODIG employee involved resigned before action could be taken. Officials from the Air Force, Naval, and Marine Corps IGs stated that they were unaware of any substantiated incidents of confidentiality violations between fiscal years 2013 and 2018 and that they were unable to specifically track such incidents in their case management systems. Similarly, CIGIE Integrity Committee and Office of Special Counsel officials stated that they were unaware of and do not specifically track confidentiality violations, and we did not identify any confidentiality violations involving DODIG employees in the fiscal year 2013-2018 data they provided to us.
Respondents to our survey of DODIG employees separately reported potential violations of whistleblower confidentiality. Specifically, 15 of the 86 respondents (about 17 percent) reported being aware of at least one instance since June 1, 2017, where the identity of a complainant or source was avoidably disclosed by a DODIG employee to an organization or individual without a need to know, and nine of these 15 were aware of more than one instance. These responses are not intended to be a count of separate instances because respondents may have recalled the same instance(s), including one or more of the 8 substantiated violations reported to us by DODIG. The most common avoidable disclosure described by survey respondents involved distributing whistleblower materials to the wrong official or agency. Survey respondents reported that in such instances corrective action included recalling the complaint and deleting the erroneously sent record, or, in some cases, sending a complaint to DODIG’s Office of Professional Responsibility for the investigation of possible misconduct.
While the number of known violations is small, IT access issues related to the case management systems and applications used by DODIG and the military service IGs create the potential for additional violations of whistleblower confidentiality. As previously discussed, issues such as the absence of cross-directorate access controls within DODIG’s case management system and the ability for non-Air Force IG users of the Air Force IG case management system to view IG case information allow for the improper access of sensitive whistleblower information. Recognizing this potential, a senior DODIG official noted concern regarding the possible extent of confidentiality violations stemming from these and the other access issues previously discussed in this report. Additionally, DODIG requested that the Defense Criminal Investigative Service investigate the April 2018 incident involving 946 case folders to determine who accessed the identified records. Without steps to address these ongoing IT access issues, the potential for additional violations of whistleblower confidentiality will persist.
DODIG Generally Met Documentation Requirements in Senior Official Cases that GAO Reviewed and Reported Most Credible Allegations
DODIG Dismissed Most Cases Involving Civilian DOD Presidential Appointees with Senate Confirmation and Generally Included Required Data and Documentation
DODIG closed 129 misconduct and reprisal cases in fiscal years 2013 through 2017 with complaints involving a civilian DOD Presidential appointee with Senate confirmation (PAS) as the subject. Of the 129 cases closed, DODIG dismissed 125 cases without investigation and investigated the remaining four. Figure 8 shows the number of cases closed in each fiscal year, by case disposition.
Our review of the 125 case files for dismissed misconduct and reprisal cases found that key documentation and data needed to demonstrate compliance with significant aspects of the case-handling process were generally present. Key documentation and data for dismissed cases include the case open and close dates, the incoming complaint, disposition of the case, and the dismissal approval and rationale. CIGIE standards state that the degree to which an organization efficiently achieves its goals is affected by the quality and relevance of information that is collected, stored, retrieved, and analyzed, and that the results of investigative activities should be accurately and completely documented in the case file.
Examples of data and documentation consistently present. Our review of 125 case files for dismissed cases closed in fiscal years 2013 through 2017 found that key documentation and data were generally present. For example:
100 percent of the cases we reviewed included the incoming complaint.
Approximately 99 percent of the dismissed misconduct cases included a dismissal rationale that aligned with dismissal criteria in DODIG policy.
100 percent of the dismissed reprisal cases that involved a closure letter informing the complainant of case dismissal listed a rationale for dismissal in the closure letter.
100 percent of the dismissed reprisal cases that did not involve a closure letter to the complainant had a rationale for dismissal elsewhere in the case file.
Approximately 99 percent of dismissed misconduct cases included a required entry recording the intake disposition.
Documents or data that were not material. Our review of case files for dismissed cases closed in fiscal years 2013 through 2017 found that some other documentation or data that are needed to demonstrate compliance with DODIG policy were missing. The deficiencies we found were not material to case outcomes. For example, approximately 77 percent of dismissed misconduct cases did not include a recording of case dismissal approval by IG supervisory staff. However, DODIG officials told us that the presence of the required entry recording the intake disposition indicated that the case dismissal had been approved by the appropriate authority. Similarly, approximately 55 percent of dismissed misconduct cases did not include a notification letter to the appropriate military service IG in the case file. DODIG officials stated that while there is guidance to send these letters, it is not a required practice.
DODIG Has Reported Most Credible Misconduct Allegations to the Secretary of Defense and Some Investigation Results to Congress
DODIG reported most credible allegations concerning civilian DOD PAS officials to the Secretary of Defense as required. DODIG also reported some investigation results involving these officials to Congress prior to the enactment of the Inspector General Empowerment Act of 2016, which required the reporting of results of substantiated investigations involving DOD senior officials. DODIG investigated four of the 129 cases closed in fiscal years 2013 through 2017, with two of those investigations leading to substantiated allegations of misconduct.
DODIG generally met DOD requirements to report credible allegations of misconduct against civilian DOD PAS officials to the Secretary of Defense. DOD Directive 5505.06 requires that DODIG notify the Secretary of Defense of all credible allegations or investigations involving presidential appointees and others of significance, including Senate-confirmed civilian officials. We found documentary evidence that DODIG notified the Secretary of credible allegations in three of the four misconduct and reprisal investigations closed from fiscal years 2013 through 2017, and the secretary of a military service was notified in the fourth case. In addition, DODIG officials stated that the Principal Deputy IG provides the Secretary of Defense periodic updates on current investigations and other periodic updates of incoming allegations, as necessary and appropriate.
Separately, the Inspector General Empowerment Act of 2016 requires that DODIG report in its semiannual reports to Congress on all substantiated allegations of misconduct involving senior officials. Prior to 2016, there was no requirement to notify Congress of substantiated allegations of misconduct involving senior officials. We found evidence that DODIG communicated investigation results to Congress in two of the four civilian DOD PAS official investigations closed between fiscal years 2013 and 2017, but not in the other two because it was not required. For one investigated case, a report of investigation was provided to Congress upon request, and for another investigation, which had a substantiated allegation, the results of the investigation were published in narrative detail in a semiannual report to Congress. DODIG now reports in its semiannual reports to Congress summary results of substantiated and unsubstantiated cases closed during the corresponding period, but it has not closed any civilian DOD PAS official allegations since the statutory requirement to report to Congress on all substantiated cases was established.
Conclusions
Maintaining a program that instills trust and confidence for potential whistleblowers to come forward is critical to minimizing fraud, waste, abuse, and personnel misconduct in the federal government. Important components of a credible whistleblower program are timeliness of case processing and safeguarding confidentiality to the maximum extent possible. It is encouraging that DODIG and the service IGs have met some key goals and have policies that address whistleblower confidentiality. In addition, DODIG generally met key documentation and data requirements for the 125 cases dismissed by DODIG involving civilian DOD PAS officials, and reported most credible allegations, as required.
However, the IGs face challenges in addressing unmet timeliness goals and updating guidance to ensure full alignment with current confidentiality requirements. By pursuing more targeted, collective efforts with additional initiatives aimed at improving performance against unmet timeliness goals, the IGs can better assure current and potential whistleblowers that their complaints will be processed expeditiously. Additionally, without formal guidance documenting procedures for protecting the confidentiality of whistleblowers reporting potential internal DODIG employee misconduct, those employees lack assurance that DODIG can fully protect their identities. Similarly, without updated policies and procedures, the Air Force, Naval, and Marine Corps IGs may not be able to fully ensure whistleblower confidentiality in their organizations.
The integrity of a whistleblower program also extends to ensuring that sensitive information in IT systems remains secure and inaccessible to employees without a need to know. The IGs have existing controls for safeguarding whistleblower information, but additional efforts are warranted. Specifically, without further steps—such as considering interim actions to mitigate the lack of cross-directorate access controls, developing a plan, along with the military service IGs, for achieving least privilege in the future enterprise case management system, and enhancing the process for periodically validating user privileges—DODIG may not be able to ensure that access controls in its existing and future case management systems align with business needs on an ongoing basis. Similarly, without considering actions to further restrict IG employee access in existing IT, the Air Force, Army, and Naval IGs may be unable to mitigate ongoing risks to whistleblower confidentiality. Finally, without a plan for ensuring that access restrictions in its redesigned case management system function properly, the Marine Corps IG may be unable to fully ensure whistleblower confidentiality.
Recommendations for Executive Action
We are making a total of 12 recommendations to DOD. Specifically:

The DOD Inspector General should coordinate with the IGs of the military services to take additional actions to improve performance against unmet timeliness goals. This includes steps to improve the performance of senior official misconduct investigations and military service reprisal intakes, and to resolve disagreement on notifications. (Recommendation 1)
The DOD Inspector General should issue formal guidance documenting procedures for protecting the confidentiality of whistleblowers throughout its internal misconduct investigation process. (Recommendation 2)
The Air Force Inspector General should establish procedures to fully reflect and implement DOD policy on the protection of whistleblower confidentiality. (Recommendation 3)
The Marine Corps Inspector General should establish procedures to fully reflect and implement DOD policy on the protection of whistleblower confidentiality. (Recommendation 4)
The Naval Inspector General should establish procedures to fully reflect and implement DOD policy on the protection of whistleblower confidentiality. (Recommendation 5)
The DOD Inspector General should consider interim actions as the whistleblower enterprise case management system is being developed to help ensure that access to sensitive whistleblower information in the current case management system and associated document repository is limited to information that is necessary to accomplish assigned tasks. (Recommendation 6)
The DOD Inspector General should coordinate with the IGs of the military services to develop a plan to fully restrict case access in the future whistleblower enterprise case management system so that user access is limited to information necessary to accomplish assigned tasks in accordance with organizational missions and business functions. (Recommendation 7)
The DOD Inspector General should enhance its process for periodically reviewing whistleblower case management system and document repository user privileges by including steps to ensure that such privileges remain valid after system updates, as appropriate. (Recommendation 8)
The Air Force Inspector General should consider interim actions as the whistleblower enterprise case management system is being developed to help ensure that access for users of existing applications is limited to information that is necessary to accomplish assigned tasks in accordance with organizational missions and business functions. (Recommendation 9)
The Army Inspector General should consider interim actions as the whistleblower enterprise case management system is being developed to help ensure that access for users of existing applications is limited to information that is necessary to accomplish assigned tasks in accordance with organizational missions and business functions. (Recommendation 10)
The Marine Corps Inspector General should develop a plan to ensure that its redesigned whistleblower case management application restricts user access to information based on what is needed to accomplish assigned tasks in accordance with organizational missions and business functions. (Recommendation 11)
The Naval Inspector General should consider interim actions as the whistleblower enterprise case management system is being developed to help ensure that access for users of existing applications is limited to information that is necessary to accomplish assigned tasks in accordance with organizational missions and business functions. (Recommendation 12)
Agency Comments and Our Evaluation
We provided a draft of this report to DODIG and the military service IGs for review and comment. In written comments, DODIG and the military service IGs concurred with each of our 12 recommendations. Comments from DODIG and the Air Force, Army, and Marine Corps IGs are reproduced in appendix V; the Naval IG concurred in an email. These IGs also provided technical comments, which we have incorporated as appropriate.
In its comments, DODIG stated that it will seek to implement the recommendations. In addition to highlighting recent and planned improvements, DODIG provided additional comments on some of the report’s findings and statements. In particular, DODIG noted that the report understated its improvements in timeliness, such as by stating that DODIG did not meet timeliness goals related to average days of senior official and military reprisal intakes, and average days for reprisal oversight reviews. Citing figure 2, DODIG further stated that it met its timeliness goals in more than 60 percent of all senior official and reprisal intake cases, including 87 percent of senior official oversight review cases, and that it met its 15-day goal in more than 70 percent of senior official intakes. We agree that DODIG achieved these percentages and present the associated data in figure 2. However, as described in the report, and shown in figure 2, DODIG did not meet its goals for the average days of senior official misconduct and military reprisal intakes, and the average days for reprisal oversight reviews. Nonetheless, it is encouraging that DODIG has taken and planned actions to improve timeliness as its caseload has increased, including by increasing its staff by about 29 percent since fiscal year 2016, during which time it reported that its caseload similarly increased by about 26 percent.
DODIG also noted that the report presented some information in a manner that could create an incomplete impression of the agency's commitment to protecting whistleblower confidentiality. Specifically, DODIG stated that the report's presentation of survey data related to DODIG employee concerns about internal DODIG processes may give a misleading impression because of the focus on the small number of respondents who had a negative impression. In particular, DODIG noted that more than 80 percent of respondents either believed that DODIG's internal process for reporting misconduct protected confidentiality somewhat or very well, or did not know if it did so. However, a positive perspective cannot be inferred from the respondents who reported not knowing whether DODIG's internal process protects confidentiality (42 percent). Also, the respondents who held negative views on DODIG's process for reporting internal misconduct (16 percent) accounted for a substantial proportion (28 percent) of the respondents who held either positive or negative views on this issue. Importantly, this and other survey information presented in the report provide valuable insight into the degree to which DODIG employees have confidence in the integrity of these important internal processes, and, as mentioned, align with other information obtained during our review. As such, this information may help to inform DODIG's efforts to address our recommendation to issue formal guidance documenting procedures for protecting the confidentiality of whistleblowers throughout its internal misconduct process, along with any future efforts to instill employee confidence in internal misconduct reporting mechanisms.
DODIG also noted that portions of the report addressing restrictions on DODIG employee access to sensitive whistleblower records need further context, stating specifically that no DODIG employees outside of the Administrative Investigations directorate, Office of Professional Responsibility, and Office of General Counsel had access to any of the records, and that there was no evidence that any person without a need to know accessed any such records. However, information provided to us by DODIG does not show that accessibility was limited in all instances to employees within one of those DODIG offices. Also, the ability of any employee to access records that were specifically restricted to protect complainant identities, or internal records belonging to the Office of Professional Responsibility, is problematic given the increased sensitivity of such records. Further, while DODIG did not identify instances in which anyone without a need to know accessed the records, DODIG did not provide evidence that all cases of improper access were thoroughly investigated, as we state in our report, and the instances included in the report are examples and are not inclusive of all instances of improper access identified by DODIG. Nevertheless, it is positive that DODIG has reported taking corrective action to address instances of improper accessibility. It is also encouraging that DODIG plans to implement our recommendations, as the potential for unauthorized access will persist until it establishes cross-directorate controls in the case management system and enhances its processes for periodically reviewing user privileges for its whistleblower case management system and document repository.
All of the military service IGs concurred with the recommendations directed to them. The Air Force and the Army IGs also provided comments on some of the report findings. In particular, the Air Force IG noted in relation to our third recommendation that language in Air Force Instruction 90-301, updated in December 2018, is essentially the same as 5 U.S.C. Appendix § 7, and that this language precludes Air Force officials at any level from waiving the requirement to inform complainants and employees that their identities will not be disclosed without their consent, unless the Inspector General determines such disclosure to be unavoidable. However, as stated in our report, Air Force guidance did not include requirements outlined in DOD Instruction 7050.01 related to the specific conditions under which information disclosures may be made without complainant consent. These include circumstances when a complainant has made it known outside IG channels that he or she submitted the complaint, when there is an emergency situation or a health or safety issue, or when the allegation is being transferred outside of DOD to another IG. As a result, we continue to believe that without updated policies and procedures that fully implement confidentiality standards, the Air Force IG may not be able to ensure the consistent implementation of confidentiality protections.
Separately, in relation to IG employee access of information, the Army IG stated that the processes it has in place provide judicious access and control of whistleblower information to achieve an appropriate balance between efficient operations and minimized risk. As stated in our report, DODIG’s 2018 quality assurance review of the Army IG found that the Army IG’s application did not restrict personnel without a need to know from accessing allegations involving Army IG personnel, contrasting with NIST guidelines, which predicate user access on the need to accomplish assigned tasks. Army IG officials acknowledged this issue, but stated that the Army IG had elected to not use existing resources to further develop its case management application in light of the enterprise system being developed by DODIG. As a result, we continue to believe that by considering actions prior to the advent of the enterprise system—which is not expected to be released to the Army IG until fiscal year 2021—the Army IG could mitigate risks to whistleblower confidentiality by reducing the potential for unauthorized IG employee access of whistleblower records.
We are sending copies of this report to congressional committees; the Acting Secretary of Defense; the Department of Defense Principal Deputy Inspector General performing the duties of the Inspector General; the Inspectors General of the Air Force, the Army, the Navy, and the Marine Corps; the Office of Special Counsel; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-3604 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.
Appendix I: Scope and Methodology
To determine the extent to which the Department of Defense Office of Inspector General (DODIG) and the military service offices of inspector general (IG) met and took steps to achieve key fiscal year 2018 timeliness and quality goals related to the handling of whistleblower complaints, we reviewed performance documentation and interviewed officials on DODIG and military service IG timeliness and quality goals, performance measures, and associated performance data for fiscal year 2018, along with ongoing and planned efforts to improve performance. We also reviewed fiscal year 2017 performance data for comparison purposes. We selected data from this period because they constituted the most complete and recent performance data available. Using the data, we assessed the extent to which DODIG and the military service IGs met timeliness and quality goals defined by statute and internal IG policy. Specifically, we assessed the timeliness of DOD Hotline referrals and completion reports against its internal goals, along with DODIG senior official misconduct and whistleblower reprisal intakes, investigations, and oversight reviews against internal and statutory goals. We also assessed the timeliness of military service IG senior official and reprisal notifications, intakes, and investigations against DOD and statutory goals, and reviewed the results of DODIG quality assessments for DOD Hotline referrals, military service investigations, and DODIG senior official and whistleblower reprisal investigations.
We assessed the reliability of DODIG and military service IG data by administering questionnaires, interviewing cognizant officials, and reviewing case management system documentation and quality assurance procedures. We also compared select electronic data to fiscal years 2013 through 2017 case file documentation associated with our review of case files to determine whether dates had been properly recorded in the system. We determined that these data were sufficiently reliable for our purposes.
To identify factors affecting timeliness and quality, we interviewed IG officials and reviewed relevant documentation including strategic plans, briefing materials, and semiannual reports to Congress. We also compared DODIG and military service IG completed and planned efforts to improve timeliness and quality against Council of the Inspectors General on Integrity and Efficiency (CIGIE) standards for federal IGs related to establishing performance plans with goals and performance measures, and Standards for Internal Control in the Federal Government related to assessing performance and improving performance.
To determine the extent to which DODIG and the military service IGs have established processes to protect the confidentiality of whistleblowers, we assessed DOD and military service IG policies and procedures for handling whistleblower allegations against DOD policy, CIGIE standards for federal IGs, and statutory protections related to safeguarding whistleblower confidentiality. We also reviewed the results of DODIG’s quality assurance reviews of the Air Force (2017), Army (2018), and Naval (2016) IGs. We performed a web-based survey of the entire population of 108 DODIG Administrative Investigations directorate employees directly involved with the handling of whistleblower cases to ascertain whether, in their view, confidentiality processes are being implemented in accordance with guidance and standards, identify potential confidentiality violations, and to gather perceptions on the integrity of the internal process for reporting misconduct, among other things. We removed four employees from our initial population of 112 employees because two employees left DODIG prior to the initiation of our survey and two employees were new to the organization and therefore likely not familiar with the issues covered by the survey.
To conduct the survey, we developed 27 questions covering (1) access to and protection of sensitive and classified whistleblower information; (2) confidentiality guidance, safeguards and identity disclosures; (3) resolving internal conflict through DODIG’s Office of the Ombuds; and (4) reporting misconduct through the internal DODIG process for DODIG employees to report misconduct. A survey specialist helped to develop these questions, and another survey specialist provided independent feedback on the questions to ensure that content necessary to understand the questions was included and that the questions could be answered accurately and completely. To minimize errors that might occur from respondents interpreting our questions differently than we intended, we pretested our survey with seven DODIG employees to ensure the clarity and reasonableness of the questions. During the pretests, conducted in person and by phone, DODIG employees read the instructions and each question out loud and told us whether (1) the instructions and questions were clear and unambiguous, (2) the terms we used were accurate, and (3) they could offer a potential solution to any problems identified. We also asked them for a mock answer to ensure that the questions were understood as intended. We noted any potential problems identified by the reviewers and through the pretests and modified the questionnaire based on the feedback received. A full listing of survey questions is provided in appendix IV.
We conducted the survey between June 14, 2018, and July 6, 2018. To maximize our response rate, we sent reminder emails and contacted non-respondents by telephone to encourage them to complete the survey. In total, we received responses from 86 DODIG employees, achieving a response rate of 80 percent. Although not required, we assessed the potential for non-response bias by analyzing differences in the percent of DODIG employees per directorate and job position (e.g., investigator) that responded to our survey and the percent of potential DODIG respondents in each directorate and position. We found no meaningful differences between respondents and our population of potential respondents, indicating no evidence for non-response bias. Also, we took steps in the development of the survey, data collection, and data analysis to minimize nonsampling errors and help ensure the accuracy of the answers that were obtained. For example, a social-science survey specialist helped to design the questionnaire, in collaboration with analysts having subject-matter expertise. Then, as noted earlier, the draft questionnaire was pretested to ensure that questions were relevant, clearly stated, and easy to comprehend, and it was also reviewed by another specialist with expertise in survey development.
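As a rough arithmetic check of the reported response rate, assuming the counts above are exact:

\[
\frac{86}{108} \approx 0.796 \approx 80\ \text{percent}
\]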
We calculated the frequency of responses to our closed-ended survey questions and performed content analysis on the open-ended questions to identify common themes from across the responses and to determine their frequencies. The quantitative analysis was performed by one analyst and independently reviewed by another analyst. For the qualitative analysis, a standard coding scheme was developed to identify common themes and determine their frequencies. We also used professional judgment to identify other themes that were determined to be important based on our review of case files, discussions with DODIG management, and review of guidance and relevant standards.
To determine the extent to which DODIG and the military service IGs are able to access and safeguard classified and sensitive information necessary to handle whistleblower complaints, we reviewed documentation and interviewed officials on the extent to which DODIG and the military service IGs have developed, implemented, and assessed key information technology (IT) security controls, and authorized the systems and applications used to process, store, and transmit sensitive whistleblower information per requirements and standards prescribed by DOD, the Office of Management and Budget, and the National Institute of Standards and Technology. Collectively, these documents delineate an array of documentary and procedural requirements related to the assessment of IT security controls and the authorization to operate IT systems and applications. We also reviewed plans and interviewed cognizant officials on the development and implementation of the Defense Case Activity Tracking System enterprise (D-CATSe), DOD’s future system for managing whistleblower information across DODIG and the military service IGs, and we reviewed DODIG’s quality assurance reviews of the Air Force (2017), Army (2018), and Naval (2016) IGs. Separately, we reviewed data and information on the number and percentage of DODIG and military service IG classified cases closed in fiscal year 2017, the number and allocation of DODIG and military service IG staff possessing security clearances, and the processes and procedures for storing and accessing classified information within DODIG and the military service IGs against DOD policy related to establishing controls to ensure access to classified information is limited to authorized persons. We assessed the reliability of classified case data by administering questionnaires to cognizant officials, and determined the data were sufficiently reliable for the purpose of reporting the number of classified cases closed in fiscal year 2017.
To determine the extent of substantiated and potential confidentiality violations and retaliatory investigations involving DODIG employees, we obtained and analyzed available fiscal year 2013 through 2018 data on known or perceived violations of confidentiality standards and retaliatory investigations from DODIG and the military service IGs. We selected data covering this period of time because they constituted the most recent and reliable data available, and because DODIG officials told us that data prior to fiscal year 2013 were unreliable. We also reviewed fiscal year 2013–2018 complaint data from the Office of Special Counsel and the CIGIE Integrity Committee in order to identify possible violations of confidentiality standards or retaliatory investigations. We assessed the reliability of DODIG and service IG data by administering questionnaires, interviewing cognizant officials, and reviewing the methods used to query IG case management systems for this information. We determined the data to be sufficiently reliable for the limited purpose of identifying potential confidentiality violations and retaliatory investigations.
To evaluate the extent to which select misconduct and reprisal cases involving civilian DOD Presidential appointee with Senate confirmation (PAS) officials met key documentation and reporting requirements, we reviewed all 125 administrative misconduct and reprisal cases involving Senate-confirmed civilian official subjects that were dismissed by DODIG in fiscal years 2013 through 2017. We chose to review cases from this period because they constituted the most recent and complete data in DODIG’s case management system and would therefore most accurately reflect the extent to which the majority of DODIG’s cases included required documentation. Also, DODIG officials informed us that information on cases prior to the implementation of the current case management system in fiscal year 2013 was both incomplete and unreliable. During our review, we removed five out-of-scope cases from the original population of 130, reducing our population to 125 cases. Four cases were removed because the related allegations had been investigated, and one case was removed because it was a record used to track an investigation occurring at a military service IG. Table 8 shows the distribution per fiscal year of closed misconduct and reprisal cases involving civilian DOD PAS subjects by the result of the case.
To conduct the case-file review, we developed and used a data collection instrument to guide our review regarding general case characteristics and the presence of information and documentation required by DOD policies and CIGIE best practices. Core elements of this instrument, which represented individual documents and data elements, were shared with DODIG officials to ensure the instrument aligned with the policies and practices in place when the cases were dismissed. We incorporated DODIG’s feedback into our instrument before commencing the file review. Examples of elements in our review that represent key data in DODIG’s database or constitute documentation of key steps of the case-handling process include the case open date, case close date, disposition of the matter at intake, and dismissal rationale.
To validate the data collection instrument and ensure consistency in its application, we developed and followed standard procedures to review a test sample of 11 case files that were selected from each stratum of cases (e.g., misconduct) to ensure that each case type was tested at least once. In reviewing the sample, we adjusted the relevant case file elements for each case based on its type and circumstances and captured responses in our data collection instrument accordingly. To help ensure the accuracy of the information we collected, one analyst reviewed each case file and coded for the presence of required information using the data collection instrument, and another analyst reviewed the first analyst’s work. In the event of disagreement, the two analysts discussed and resolved the issue by identifying and reviewing supporting database information or documentation, and by obtaining the input of a third analyst, if necessary, until a final resolution was reached. We reviewed all cases dismissed during this period; for this reason, the results of this analysis do not have a sampling error.
To identify other characteristics of DODIG cases involving civilian DOD PAS officials, we also analyzed fiscal years 2013-2017 case data to determine the number of cases closed by fiscal year, case types, case dispositions, source organizations, and the frequency and type of alleged misconduct. Separately, we also reviewed documentation from DODIG on civilian DOD PAS official allegations and investigation results reported to the Secretary of Defense and Congress since fiscal year 2013.
In addressing our objectives, we met with officials from the organizations identified in table 9.
We conducted this performance audit from October 2016 to March 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
While this audit was initiated in October 2016, work was suspended from December 2016 until September 2017 due to other engagement work.
Appendix II: Additional Examples of DODIG Initiatives to Improve Timeliness
This appendix provides additional examples of Department of Defense Office of Inspector General (DODIG) timeliness improvement initiatives. According to DODIG officials, recent steps to improve the timeliness of whistleblower reprisal and senior official misconduct intakes, investigations, and oversight reviews include:
Transferring the intake of most military reprisal complaints to the DODIG oversight branch for increased consistency.
Changing the intake metric from 30 to 45 days for non-military reprisal cases to allow for more robust intakes.
Not requiring a clarification interview when a written reprisal complaint is clear.
Requesting documents from the employer at the intake stage in contractor reprisal cases.
Interviewing subjects early in the investigation, when appropriate.
Conducting investigative travel only when doing so would save time or for other compelling reasons. Otherwise, most interviews are conducted by phone or video teleconference and information is requested in opening letters for investigations to facilitate early receipt of documentary evidence.
Using summary reports of investigation to facilitate timelier report-writing and review. DODIG issued 24 summary reports in fiscal year 2018, starting in May 2018, for simple, non-substantiated investigations.
Eliminating the requirement to conduct peer reviews of the reprisal reports of investigation, except at supervisors’ discretion.
Using standardized complaint notification and determination forms across DOD to formalize the processing of complaints received by component and service IGs.
Implementing a more robust intake process for senior official misconduct investigations, which includes complaint clarifications and more investigative work. According to DODIG officials, most of the complaints reviewed during this new process would have otherwise been investigated by DODIG or the military service IGs, with a negative impact on the overall timeliness of investigations.
Authorizing the military service IGs to close and simultaneously notify the DODIG reprisal investigations directorate of actions taken for complaints relating to uncooperative complainants, untimely complaints, and withdrawn complaints. This has increased notification rates and decreased processing time, according to DODIG officials.
Appendix III: Characteristics of Closed Misconduct and Reprisal Cases Involving Civilian DOD Presidential Appointees with Senate Confirmation
This appendix provides information on the characteristics of closed and dismissed misconduct and reprisal cases involving civilian DOD Presidential appointee with Senate confirmation (PAS) officials, based on our analysis of fiscal year 2013 through fiscal year 2017 case data from the Department of Defense Office of Inspector General (DODIG) case management system and our review of dismissed cases. DODIG closed 129 cases from October 1, 2012, through September 30, 2017, of which 125 were dismissed. Of the 125 dismissed cases, 117 were misconduct cases and eight were reprisal cases.
Organizational Source of Complaints Dismissed in Fiscal Years 2013-2017
DODIG dismissed 125 civilian DOD PAS official misconduct and reprisal cases. The largest number of cases—40 (32 percent)—were submitted by defense agency employees. Employees from the Navy submitted the next highest number of complaints, with 31 (25 percent), followed by the Army, which accounted for 26 (21 percent) of the complaints. Figure 9 shows the percentage of dismissed cases closed from fiscal years 2013 through 2017, by organizational source.
DODIG Number of Days to Close Dismissed Cases, Fiscal Years 2013-2017
Our review of the 125 dismissed civilian DOD PAS official cases closed by DODIG from fiscal years 2013 through 2017 showed that the majority of cases were closed in 30 days or less. Specifically, approximately 81 percent of the cases were closed in 30 days or less, and 58 percent of the cases were closed in 10 days or less. Table 10 groups the cases dismissed in each fiscal year from fiscal years 2013 through 2017 by the number of days to close.
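Applied to the population of 125 dismissed cases, these rounded percentages imply approximate case counts as follows (an illustrative back-of-envelope calculation, not figures reported by DODIG):

\[
0.81 \times 125 \approx 101\ \text{cases closed in 30 days or less}, \qquad 0.58 \times 125 \approx 72\ \text{cases closed in 10 days or less}
\]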
DODIG Closed Misconduct Case Allegations, Fiscal Years 2013-2017
We reviewed data on the number and type of allegations made against civilian DOD PAS officials in the 117 closed misconduct cases from fiscal years 2013 through 2017. In total, there were 152 allegations across the 117 closed cases. Allegations are grouped into 13 broad categories and 38 sub-allegation categories. From fiscal years 2013 through 2017, we found that the greatest proportion of allegations, at 47 percent, involved personal misconduct and ethical violations. Personnel matters (14 percent) and “other,” an indeterminate category (12 percent), were the next two largest allegation categories. Figure 10 provides the percentages of allegations in closed misconduct cases from fiscal years 2013 through 2017.
Appendix IV: Survey of Select DODIG Employees
GAO administered the survey questions shown in this appendix to learn more about DODIG processes related to the access and protection of whistleblower records and the avenues available to DODIG employees to resolve conflict and report alleged misconduct themselves. The survey was divided into four sections: information access and protection, confidentiality, resolving internal conflict, and reporting misconduct. Survey questions without response options were open-ended. This appendix accurately shows the content of the web-based survey, but the format of the questions and response options has been changed for readability in this report. For more information about our methodology for designing and administering the survey, see appendix I.

1. How long have you worked in Administrative Investigations (AI)?
Please consider your full tenure across all AI directorates (DOD Hotline, Investigations of Senior Officials, and Whistleblower Reprisal Investigations) if you have worked in more than one directorate. (Response options provided: radio buttons labeled “Less than 1 year,” “1 year or more but less than 5 years,” “5 years or more but less than 10 years,” and “10 years or more.”)
SECTION I: Information Access and Protection

2.
Formal training (in-person/web-based)
Informal training (staff meetings/briefings)
Please describe any other guidance you have received.
ii. Do you believe the guidance identified above is sufficient or insufficient in specifying requirements for properly securing whistleblower records in your directorate? Select only one: Sufficient (SKIP to Question 4); Insufficient (Continue to 1 below); Not sure (SKIP to Question 4).
1. Why do you believe the guidance is insufficient? (SKIP to iii)
iii. Would guidance that specifies access restrictions and security controls for handling whistleblower records be helpful? (Response options provided: radio buttons labeled “yes” and “no.”)
1. Please explain why guidance would or would not be helpful.
4. Are you aware of any controls in place to restrict access to D-CATS records to only DODIG employees (either within or outside your directorate) with a need to know? Select only one: Yes (Continue to i); No (SKIP to Question 5); I’m not sure (SKIP to Question 5).
i. Please describe the control(s) in place to restrict access to D-CATS records.
5. During your tenure at DODIG, have you or other DODIG employees (either within or outside your directorate) been able to access records in D-CATS without a need to know? This applies to potential access to records, regardless of whether anyone actually accessed records or not. Yes (Continue to i); No (SKIP to Question 6); I don’t know (SKIP to Question 6).
i. Which DODIG directorate’s records have you or other DODIG employees been able to access without a need to know? (Response options provided: checkboxes labeled “DOD Hotline,” “Investigations of Senior Officials,” “Whistleblower Reprisal Investigations,” and “Office of Professional Responsibility.”)
ii. Are you aware of any actions taken to address the ability of DODIG employees to access records without a need to know? Examples of actions taken include a policy or procedure change, additional guidance, or other actions taken. Yes (Continue to 1 below); No (SKIP to 2 below).
1. Please describe the action(s) taken.
2. What improvements, if any, could be made to address the ability of DODIG employees to access records without a need to know?
6. Do you believe protections are sufficient or insufficient to ensure only DODIG employees with a need to know can access records in D-CATS? Select only one: Sufficient (Continue to i); Insufficient (Continue to i); Not sure (SKIP to Question 7).
i. Why do you believe the protections are sufficient or insufficient?
7. Are you able to access classified information when needed to perform the duties required of your position? Select only one: Yes (SKIP to Question 8); No (Continue to i); I do not require access to classified information to perform the duties of my position (SKIP to Question 8).
i.
Formal training (in-person/web-based)
Informal training (staff meetings/briefings)
Please describe any other guidance you have received.
Formal training (in-person/web-based)
Informal training (staff meetings/briefings)
Please describe any other guidance you have received.
ii. Do you believe the guidance identified above is sufficient or insufficient in specifying how to determine whether disclosing the identity of a complainant or source (e.g., witness) is unavoidable? Sufficient (SKIP to 2 below); Insufficient (Continue to 1 below); Not sure (SKIP to 2 below).
1. Why do you believe the guidance is insufficient?
2. What improvements, if any, do you think could be made to guidance specifying how to determine whether disclosing the identity of a complainant or source (e.g., witness) is unavoidable? (After answering, SKIP to Question 12)
iii. Would guidance that specifies how to determine whether disclosing the identity of a complainant or source (e.g., witness) is unavoidable be helpful? (Response options provided: radio buttons labeled “yes” and “no.”)
1. Please explain why guidance would or would not be helpful.
12. To your knowledge, is there one or more official(s) who is responsible for determining whether disclosing the identity of a complainant or source (e.g., witness) is unavoidable? Yes (Continue to i); No (SKIP to Question 13); I don’t know (SKIP to Question 13).
i. Who is responsible for determining whether disclosing the identity of a complainant or source (e.g., witness) is unavoidable?
13. While working in AI, have you ever encountered a situation where disclosing the identity of a complainant or source (e.g., witness) was unavoidable? Yes (Continue to i); No (SKIP to Question 14).
i. Please describe the general circumstance(s) and the steps you took to verify that the circumstance(s) required disclosing the identity of a complainant or source (e.g., witness). Please do not provide individual names related to the actors involved.
14. Between June 1, 2017, and today, are you aware (either by experiencing firsthand or directly observing actions of another person) of an instance where the identity of a complainant or source (e.g., witness) was disclosed by a DODIG employee to an organization or individual without a need to know (i.e., an avoidable disclosure)? Please check only one below. No, I am not aware of any avoidable disclosures (SKIP to Question 15); Yes, I am aware of one or more avoidable disclosure(s) (Continue to i).
i. How many avoidable disclosures are you aware of between June 1, 2017, and today? For example, if the identity of a complainant was revealed to one person who did not have a need to know, please consider that event as one instance. Similarly, if the identity of a source was revealed separately to two different people who did not have a need to know, please consider those events as two instances.
ii. Please describe any actions taken in response to the avoidable disclosure(s) you are aware of between June 1, 2017, and today. Examples of actions taken include but may not be limited to retracting/recalling a referred complaint, a change to policy, procedure or guidance, and notifying the complainant or source, among other actions.
15. What improvements, if any, could be made to prevent avoidable disclosures from happening in the future?
16. Please describe any best practices that you follow to help prevent avoidable disclosures.
SECTION III: Resolving Internal Conflict

17. Have you ever contacted the DODIG Office of the Ombuds or participated in a DODIG Office of the Ombuds activity in order to address conflict among DODIG employees? Examples of DODIG Office of the Ombuds activities include but are not limited to providing confidential advice for resolving conflict among peers and supervisors and participating in an Ombuds-led mediation among DODIG employees. Yes (Continue to i); No, but I know about the DODIG Office of the Ombuds; I do not know about the DODIG Office of the Ombuds (SKIP to the next section).
i. Do you believe the DODIG Office of the Ombuds provided or is providing sufficient or insufficient assistance to address the conflict(s) for which you contacted the Ombuds or participated in an Ombuds activity? Sufficient (Continue to 1); Insufficient (Continue to 1); Too soon to tell (Continue to 1).
1. Please describe, in general terms, your latest experience working with the DODIG Office of the Ombuds. Please do not provide the names of individuals involved with your experience.
18. Have you ever considered reaching out to the DODIG Office of the Ombuds, but ultimately chose not to? Yes (Continue to i); No (SKIP to the next section).
i. How much, if at all, did each of the following contribute to your decision not to utilize DODIG Office of the Ombuds services? Select one in each row.
Resolved the issue through another avenue
Not sure how to initiate contact with the Ombuds
Concern about length of process
Concern about objectivity or conflict of interest within the Office of the Ombuds
Fear that confidentiality would be compromised
Fear of retaliation or reprisal from within DODIG
Please describe any other factor(s) that contributed to your decision not to utilize DODIG Office of the Ombuds services.
SECTION IV: Reporting Misconduct

19. As a DODIG employee, have you ever personally reported misconduct against another DODIG employee through DODIG’s internal process for investigating alleged misconduct? For the purposes of this survey, “misconduct” refers to (1) a violation of a provision of criminal law, (2) a violation of a recognized standard, such as a federal or DOD regulation, or (3) a matter of concern involving DOD leadership that could reasonably be expected to be of significance to DODIG. Yes (Continue to i); No (SKIP to iii).
i. Did you report misconduct on or before September 30, 2016? Yes (Continue to 1 below); No (SKIP to ii).
1. Do you believe your report(s) of misconduct on or before September 30, 2016 were investigated in a fair and objective manner? (Response options provided: radio buttons labeled “yes” and “no.”)
a. Please describe your general experience(s) in reporting misconduct against a DODIG employee on or before September 30, 2016, including why you do or do not believe your report(s) of misconduct were investigated in a fair and objective manner. Please do not provide the names of individuals related to the misconduct you reported.
ii. Did you report misconduct on or after October 1, 2016? Yes (Continue to 1 below); No (SKIP to Question 20).
1. Do you believe your report(s) of misconduct on or after October 1, 2016 were investigated in a fair and objective manner? (Response options provided: radio buttons labeled “yes,” “no,” and “too early to have an opinion.”)
a. Please describe your general experience(s) in reporting misconduct against a DODIG employee on or after October 1, 2016, including why you do or do not believe your report(s) of misconduct were investigated in a fair and objective manner. Please do not provide the names of individuals related to the misconduct you reported.
iii. Do you know how to report misconduct against another DODIG employee through DODIG’s internal process? (Response options provided: radio buttons labeled “yes” and “no.”)
20. Thinking about the time period on or before September 30, 2016, did you ever consider reporting misconduct against a DODIG employee through DODIG’s internal process, but ultimately choose not to? Yes (Continue to i); No (SKIP to Question 21).
i. How much, if at all, did each of the following contribute to your decision not to report incident(s) of misconduct on or before September 30, 2016? Select one in each row.
Resolved the issue through another avenue
Not sure how to report misconduct
Concern about length of process
Concern about objectivity or conflict of interest within DODIG’s internal process to report misconduct
Fear that confidentiality would be compromised
Fear of retaliation or reprisal from within DODIG
Please describe any other factor(s) that contributed to your decision not to report incidents of misconduct on or before September 30, 2016.
21. Thinking about the time period on or after October 1, 2016, did you ever consider reporting misconduct against a DODIG employee through DODIG’s internal process, but ultimately choose not to? Yes (Continue to i below); No (SKIP to Question 22).
i. How much, if at all, did each of the following contribute to your decision not to report incident(s) of misconduct on or after October 1, 2016? Select one in each row.
Resolved the issue through another avenue
Not sure how to report misconduct
Concern about length of process
Concern about objectivity or conflict of interest within DODIG’s internal process to report misconduct
Fear that confidentiality would be compromised
Fear of retaliation or reprisal from within DODIG
Please describe any other factor(s) that contributed to your decision not to report incidents of misconduct on or after October 1, 2016.
22. How well, if at all, do you believe DODIG’s internal process for reporting misconduct protects the confidentiality of DODIG employees? (Response options provided: radio buttons labeled “Not at all,” “Slightly,” “Somewhat,” “Very well,” and “I don’t know.”)
23. What improvements, if any, do you think could be made to DODIG’s internal process for reporting misconduct to protect the confidentiality of DODIG employees?
24. How well, if at all, do you believe DODIG’s internal process handles misconduct allegations against DODIG employees? This includes activities associated with both assessing incoming complaints and subsequently investigating them, as appropriate. (Response options provided: radio buttons labeled “Not at all,” “Slightly,” “Somewhat,” “Very well,” and “I don’t know.”)
25. What factors contribute to your opinion about DODIG’s internal process for handling misconduct allegations against DODIG employees?
26. What improvements, if any, do you think could be made to DODIG’s internal process to improve the handling of misconduct allegations?
27. If you would like to comment on any of the topics covered by this survey, or anything else that you feel might be relevant to our review on the DOD whistleblower program, please do so below.
Appendix V: Comments from the Department of Defense
Appendix VI: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Alissa Czyz (Assistant Director), Tracy Barnes, Amy Bush, Nicole Collier, Ryan D’Amore, Chad Hinsch, Linda Keefer, Kevin Keith, Amie Lesser, Serena Lo, Michael Silver, and Lillian Yob made key contributions to this report.
Related GAO Products
Office of Special Counsel: Actions Needed to Improve Processing of Prohibited Personnel Practice and Whistleblower Disclosure Cases. GAO-18-400. Washington, D.C.: June 14, 2018.
NASA Contractor Whistleblowers: Steps Taken to Implement Program but Improvements to Timeliness and Guidance Needed. GAO-18-262. Washington, D.C.: March 8, 2018.
Whistleblower Protection: Opportunities Exist for DOD to Improve the Timeliness and Quality of Civilian and Contractor Reprisal Investigations. GAO-17-506. Washington, D.C.: September 29, 2017.
Contractor Whistleblower Protections Pilot Program: Improvements Needed to Ensure Effective Implementation. GAO-17-227. Washington, D.C.: March 2, 2017.
Whistleblower Protection: Additional Actions Would Improve Recording and Reporting of Appeals Data. GAO-17-110. Washington, D.C.: November 28, 2016.
Whistleblower Protection: DOD Has Improved Oversight for Reprisal Investigations but Can Take Additional Actions to Standardize Process and Reporting. GAO-16-860T. Washington, D.C.: September 7, 2016.
Department of Energy: Whistleblower Protections Need Strengthening. GAO-16-618. Washington, D.C.: July 11, 2016.
Whistleblower Protection: DOD Needs to Enhance Oversight of Military Whistleblower Reprisal Investigations. GAO-15-477. Washington, D.C.: May 7, 2015.
Whistleblower Protection: Additional Actions Needed to Improve DOJ’s Handling of FBI Retaliation Complaints. GAO-15-112. Washington, D.C.: January 23, 2015.
Whistleblower Protection Program: Opportunities Exist for OSHA and DOT to Strengthen Collaborative Mechanisms. GAO-14-286. Washington, D.C.: March 19, 2014.
Whistleblower Protection: Actions Needed to Improve DOD’s Military Whistleblower Reprisal Program. GAO-12-362. Washington, D.C.: February 22, 2012. | Why GAO Did This Study
Safeguarding confidentiality to the maximum extent possible is essential for encouraging whistleblowers to report wrongdoing without fear of reprisal. In fiscal year 2018, DODIG received over 12,000 contacts from potential whistleblowers related to fraud, waste, abuse, employee misconduct, or other violations. The National Defense Authorization Act for Fiscal Year 2017 included a provision for GAO to review the integrity of DOD's whistleblower program. This report assesses the extent to which DODIG and the military service IGs (1) met and took steps to achieve key fiscal year 2018 timeliness and quality goals, (2) established processes to protect whistleblower confidentiality, and (3) are able to safeguard sensitive information necessary to handle whistleblower complaints. It also evaluates (4) the extent to which select cases involving certain senior DOD civilian officials met key requirements.
GAO assessed fiscal year 2018 IG performance data, surveyed all 108 DODIG employees who directly handle whistleblower complaints, reviewed IT security controls, and analyzed all 125 cases involving civilian DOD Presidential appointees with Senate confirmation dismissed by DODIG in fiscal years 2013-2017.
What GAO Found
The Department of Defense Office of Inspector General (DODIG) and military service offices of inspector general (IG) met some but not all fiscal year 2018 timeliness and quality goals for handling whistleblower complaints. For example, DODIG met its goals related to referring complaints to the appropriate agency within a certain number of days. All IGs also generally met goals related to the quality of investigations. However, about 85 percent of DODIG reprisal and senior official misconduct investigations exceeded statutory and internal timeliness goals. Further, military service IGs did not meet most goals for handling cases within prescribed timeframes. For example, the service IGs averaged between 17 and 84 days to notify DODIG of their receipt of whistleblower reprisal allegations, exceeding the 10-day goal. The IGs have various initiatives underway to improve timeliness, such as a Naval IG program to reduce timeframes for initial credibility determinations. However, additional actions could provide a more targeted approach to improving performance against unmet timeliness goals—such as for senior official misconduct investigations—and better assure whistleblowers that their cases will be handled expeditiously.
DODIG and the military service IGs have policies to protect whistleblower confidentiality, but some gaps exist. For example, DODIG guidance for protecting whistleblowers who report internal DODIG misconduct does not specify key steps investigators should take to protect confidentiality, such as not identifying complainants during interviews with case subjects. Also, Air Force, Naval, and Marine Corps IG guidance does not specify when whistleblower identities can be disclosed without consent. Without updated guidance, the IGs cannot ensure the consistent implementation of confidentiality protections.
The IGs have taken steps to safeguard whistleblower information in their information technology (IT) systems and applications, such as by restricting access to case information through unique user permissions and by taking actions to follow DOD's IT risk management process. However, between 2016 and 2018, employees in all of the IGs have been able to access sensitive whistleblower information without a need to know. For example, DODIG determined that numerous restricted whistleblower records in its document repository were accessible to DODIG personnel without a need to know. Similarly, the Air Force IG's application did not restrict users from other DOD components from viewing Air Force IG case descriptions and complainant identities, while the Army IG's application and the Naval IG's system did not restrict personnel within those IGs from viewing allegations or investigations involving other personnel within those IGs. Additionally, employees in Marine Corps IG offices were able to see whistleblower cases assigned to other IG offices without a need to know. While some actions have been taken to address these issues, additional steps are needed to restrict access to case information in order to mitigate ongoing risks to whistleblower confidentiality.
DODIG generally met key documentation requirements for the 125 cases it dismissed without investigation involving civilian DOD Presidential appointees with Senate confirmation.
What GAO Recommends
GAO is making 12 recommendations, including that the IGs take additional actions to improve timeliness, develop additional procedures to protect whistleblower confidentiality, and take steps to further limit IG employee access to sensitive whistleblower information. DOD concurred with all of the recommendations. |
Overview of UN Peacekeeping Operations since 1948
As of June 30, 2017, the UN had carried out 71 peacekeeping operations since 1948, and had 16 active UN peacekeeping operations worldwide. Eight of these UN peacekeeping operations were in sub-Saharan Africa (see fig. 1).
In their earliest days, UN peacekeeping operations were primarily military in nature and limited to monitoring cease-fire agreements and stabilizing situations on the ground while political efforts were being made to resolve conflicts. Today, in response to increasingly complex situations in which conflicts may be internal, involve many parties, and include civilians as deliberate targets, UN peacekeeping operations are more commonly “multidimensional”—deploying civilian and police personnel in addition to military personnel. Multidimensional peacekeeping operations seek to create a secure and stable environment while working with national authorities and actors to ensure that the peace agreement or political process is implemented. According to the UN, multidimensional peacekeeping operations are designed to protect civilians and often assist in the disarmament, demobilization and reintegration of former combatants; support the organization of elections; protect and promote human rights; and assist in restoring the rule of law. Figure 2 shows examples of UN peacekeepers serving in different capacities as part of MINUSCA in Bangui, CAR.
Each UN peacekeeping operation, including its mandated size and tasks, is authorized through a UN Security Council resolution. The operation’s budget and resources are subject to General Assembly approval. The UN’s approved budget for global peacekeeping operations in UN fiscal year 2017 was about $7.9 billion. Individual operation budgets ranged from about $36 million for the peacekeeping operation in Kosovo to more than $1.2 billion for the peacekeeping operation in the Democratic Republic of the Congo (see table 1).
The UN reported in June 2017 that it maintained 95,544 uniformed peacekeepers, 5,004 international civilians, 10,149 local civilians, and 1,597 UN volunteers in support of its operations around the world. According to UN documents, civilian peacekeeping personnel are generally recruited to peacekeeping operations as individuals, while police and military personnel are volunteered by member states to participate as part of their country’s contribution to UN peacekeeping operations.
U.S. Contributions to UN Peacekeeping Operations
The United States is the largest financial contributor to UN peacekeeping operations. From fiscal years 2014 to 2017, the United States contributed an average of about $2.1 billion per year to these operations. The UN General Assembly sets the assessment levels for UN member contributions to peacekeeping operations every 3 years. The United States’ assessment has averaged about 28.5 percent of the UN peacekeeping budget; however, Congress has authorized payment with appropriated funds at about 27 percent for U.S. fiscal years 2014 through 2016, and 25 percent for U.S. fiscal year 2017.
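As an illustrative consistency check only (a rough sketch: the UN and U.S. fiscal years do not align, and both figures are rounded), an assessed share of about 27 percent applied to the approved UN fiscal year 2017 peacekeeping budget approximates the reported average annual U.S. contribution:

\[
0.27 \times \$7.9\ \text{billion} \approx \$2.1\ \text{billion}
\]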
Overview of MINUSCA
In April 2014, UN Security Council Resolution 2149 established MINUSCA following escalating sectarian violence in CAR that resulted in the destruction of state institutions, thousands of deaths, and 2.5 million people—more than half of CAR’s entire population—in need of humanitarian aid, according to a UN report. The conflict also resulted in 174,000 people being internally displaced and over 400,000 fleeing to neighboring countries, according to the UN report. MINUSCA’s tasks include protecting civilians, given the security, humanitarian, human rights, and political crisis in CAR; supporting the implementation of the transition process, including efforts to extend state authority and preserve territorial integrity; facilitating the delivery of humanitarian assistance and promoting and protecting human rights; supporting justice and the rule of law; and facilitating the disarmament, demobilization, reintegration, and repatriation processes.
On November 15, 2017, UN Security Council Resolution 2387 (2017) extended MINUSCA’s mandate for a fourth time, through November 15, 2018.
MINUSCA’s approved personnel levels for UN fiscal year 2017 comprised 10,750 military personnel, 400 individual police officers, and 1,680 formed police unit personnel, as well as 790 international civilian and 696 national civilian personnel, 238 UN volunteers, and 40 government-provided personnel, according to a UN Secretary-General report. Table 2 shows average annual personnel deployment for MINUSCA and the number of authorized positions for the first 3 full fiscal years of the operation.
Cost Estimate for a Hypothetical U.S. Operation in CAR Exceeds Actual Costs for a Comparable Ongoing UN Operation in CAR, as Well as U.S. Contributions to That UN Operation
Based on data and other input from the UN, DOD, and State, we estimate that it would cost the United States more than twice as much as it would cost the UN to implement a hypothetical operation comparable to MINUSCA, the ongoing UN operation in the Central African Republic (CAR). In addition, the estimated cost of a U.S. operation in CAR far exceeds U.S. contributions to MINUSCA.
Estimated Cost of a Hypothetical U.S. Operation in CAR Is More than Twice the Cost of a Comparable Ongoing UN Operation in CAR
Based on an estimate we developed in conjunction with DOD and State officials, a hypothetical, comparable U.S. operation would likely cost nearly $5.7 billion, more than twice as much as MINUSCA, the ongoing UN operation in CAR. Our comparison covers the time from which MINUSCA was established in April 2014 through the end of UN fiscal year 2017, which ended on June 30, 2017—a total of 3 years and 3 months, the first 39 months of MINUSCA. Over this time period, UN costs for MINUSCA totaled approximately $2.4 billion. Using roughly the same basic parameters for MINUSCA, including similar deployment levels of military and civilian personnel over the same time period, in consultation with DOD and State officials, we estimate that a comparable, hypothetical U.S. operation would likely cost nearly $5.7 billion, more than twice the UN cost for MINUSCA (see table 3 for a detailed comparison of estimated U.S. costs and actual UN costs). This estimate does not include, among other things, the cost for State diplomatic security and office space for civilian staff, the inclusion of which could further increase the total estimated U.S. cost for such an operation.
Estimated Cost of a Hypothetical U.S. Operation in CAR Far Exceeds U.S. Contributions to MINUSCA
During the same time period, from April 10, 2014 through June 30, 2017, the United States contributed approximately $700 million to the UN to support MINUSCA. Therefore, the estimated cost of a U.S. operation (nearly $5.7 billion) would be almost eight times greater than the United States’ contribution to MINUSCA. See figure 3 for a comparison of these costs with the U.S. estimate.
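As a rough check using the rounded totals above (the underlying amounts are approximate, so the ratios are indicative only):

\[
\frac{\$5.7\ \text{billion}}{\$2.4\ \text{billion}} \approx 2.4, \qquad \frac{\$5.7\ \text{billion}}{\$0.7\ \text{billion}} \approx 8.1
\]

These ratios are consistent with the estimated U.S. cost being more than twice the UN’s actual cost for MINUSCA and roughly eight times the U.S. contribution to the operation.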
Various Factors Affect Differences between the Actual Cost of MINUSCA and the Estimated Cost of a Hypothetical, Comparable U.S. Operation
Various factors contribute to the differences in costs between actual UN expenditures for MINUSCA from April 10, 2014 through June 30, 2017— the first 39 months of MINUSCA—and a hypothetical, comparable U.S. operation over the same time period, including disparities in the cost of sourcing and transporting equipment and supplies, staffing and compensating military and police personnel, and maintaining facilities and communications and intelligence systems. These disparities reflect operational, structural, and doctrinal differences in the way the United States likely would undertake a hypothetical, comparable operation, should such an operation be deemed in the U.S. national interest.
Different Methods for Sourcing and Transporting Equipment and Other Supplies Contribute to Higher Estimated U.S. Costs
High U.S. costs to source and transport supplies and equipment to the Central African Republic (CAR) contribute to the difference between our cost estimate for the hypothetical U.S. peacekeeping operation and the UN’s actual costs for MINUSCA. In the hypothetical U.S. operation, based on input from DOD and IDA officials and the output of the IDA cost estimating tool, the United States would fly in most of its consumable supplies from outside CAR. Specifically, materials such as water, ice, food, and other subsistence items would be airlifted into CAR from Italy, a supply location validated as reasonable by DOD and IDA officials given its proximity to the operation and because MINUSCA relies on a UN global service center there, one of two such UN centers in Europe. The estimated U.S. cost of airlifting water alone over the 39-month time period for the hypothetical operation would total nearly $700 million. The United States would still deploy its equipment and personnel to CAR from the United States, at a cost of nearly $600 million. Transportation of equipment and supplies within CAR would cost an additional estimated $316 million.
In contrast, the UN does not fly in water or consumables on the same scale as the United States would in the hypothetical operation. Instead, the UN relies on some in-country or local infrastructure and consumables. Military and formed police unit equipment is provided by the troop- and police-contributing countries. The UN reimburses these countries for equipment at set rates. The UN cost of reimbursing countries for deploying their equipment to CAR likely would be less than the amount the United States would spend on airlifting the equipment to CAR alone. For example, the UN cost of freight, deployment, and country reimbursements for military and formed police equipment was approximately $229 million over a 2-year period (July 2014 through June 2016), while in the hypothetical operation the U.S. cost of deploying equipment alone would be over $382 million, which is about $154 million more than the UN cost over a similar 2-year period (September 2014 through August 2016).
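The roughly $154 million difference cited above can be verified from the rounded figures (the $1 million gap presumably reflects rounding of the underlying amounts):

\[
\$382\ \text{million} - \$229\ \text{million} = \$153\ \text{million} \approx \$154\ \text{million}
\]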
Differing Staffing and Compensation Practices for Military and Police Personnel Contribute to Higher U.S. Costs
The United States would staff and compensate its military and police personnel differently than the UN, leading to differences between the estimated U.S. costs and actual UN costs. While neither the hypothetical U.S. cost estimate nor UN expenditures include the cost of salaries for active duty personnel or troops contributed by other countries, respectively, the United States would bear the additional cost of salaries for the share of personnel drawn from military reserves. According to DOD officials, 10 percent of infantry unit personnel would have been reservist personnel in a hypothetical, comparable U.S. operation, based on the average ratio of active to reserve personnel deployed by the United States in fiscal years 2015 through 2017, roughly the same time period as the first 39 months of MINUSCA. As a result, the total estimated cost of the hypothetical U.S. operation reflects the additional U.S. expense of paying full salaries and hardship duty pay for U.S. reservist military personnel. The estimate also includes the incremental costs the United States would incur for deploying active duty military personnel, including hardship duty pay that is not incurred when those personnel are in the United States. For military troops deployed to MINUSCA, the UN pays a standard troop cost reimbursement to the troop-contributing countries, which is intended also to cover incremental expenses but not the cost of troops’ salaries.
U.S. costs for civilian police also are significantly higher than UN costs. The United States would pay over $167 million for U.S. civilian police for the duration of the hypothetical operation, while the UN spent $41 million on its individual police officers over the same time frame. The U.S. estimate includes the cost of police salaries and the additional costs of deployment, whereas UN costs for deploying individual police officers do not include salaries, which are borne by the police-contributing countries.
U.S. Standards for Facilities, Communications and Intelligence Systems, and Medical Capability Contribute to Higher Estimated U.S. Cost
Higher U.S. standards for certain aspects of the hypothetical peacekeeping operation in CAR would contribute to costs that exceed those of MINUSCA.
Facilities. The higher estimated U.S. costs reflect higher U.S. standards for facilities, according to State officials. The U.S. cost estimate includes more than $1.1 billion for facilities and related costs, which include facility maintenance, food service, laundry, management and administration, and residential leases for civilian personnel. In contrast, the actual UN cost for facilities as part of MINUSCA totaled $292 million over the same time period.
Communications and intelligence systems. The United States incurs costs associated with meeting U.S. intelligence standards that are not part of UN operations, which lack comparable intelligence capabilities. The U.S. cost estimate includes $140 million for the cost of Command, Control, Communications, Computers, and Intelligence Systems, which represents additional operational costs to meet higher U.S. standards for U.S. communications and intelligence capabilities.
Medical capability. Higher U.S. standards for medical care and medical evacuation capability as compared to the UN are another factor that would contribute to higher U.S. medical costs for a hypothetical operation, according to DOD and State officials. Some UN hospitals may not meet U.S. minimum standards for medical care, according to DOD officials. Although medical costs do not constitute a significant portion of the U.S. cost estimate, estimated U.S. medical costs ($132 million) greatly exceed actual UN medical costs ($8 million) over the same time period.
Officials Cited Relative Strengths of UN and U.S. Peacekeeping Operations
UN and U.S. peacekeeping operations have various relative strengths, according to U.S. and UN officials we met with. Relative strengths of UN peacekeeping operations include international and local acceptance, access to global expertise, and the ability to leverage assistance from multilateral donors and development banks, according to these officials. Relative strengths of U.S. peacekeeping operations would include faster deployment and superior command and control, logistics, intelligence, and counterterrorism capabilities, according to U.S. and UN officials.
Relative Strengths of UN Peacekeeping Operations Include Acceptance, Global Expertise, and Ability to Leverage Multilateral Assistance
According to U.S. and UN officials, UN peacekeeping operations benefit from greater international and local acceptance, access to global expertise, and the ability to leverage assistance from multilateral donors and development banks. UN peacekeeping operations also provide indirect benefits to the military capacity of participating countries.
International and local acceptance. As a multilateral organization, the UN benefits from greater international and local acceptance for its peacekeeping operations, according to State, DOD, and UN officials. These officials noted that the UN’s multinational character contributes to a reputation for local impartiality. Conversely, the United States acting alone may not be viewed as impartial and could face challenges gaining or maintaining international or local support for peacekeeping operations, according to State and DOD officials.
Global expertise. UN officials noted that the UN has unmatched convening power and access to expertise and experience from across the globe to implement the objectives of multidimensional peacekeeping operations. The UN is able to bring in people with subject matter expertise, native language skills, and knowledge of local customs to work for these operations, according to U.S. and UN officials.
Leveraging multilateral assistance. U.S. officials told us that the UN is better able to leverage assistance from multilateral donors and multilateral development banks to expand the scope of assistance provided in support of the goals of peacekeeping operations. For example, according to a UN report, MINUSCA is partnering with the UN Development Fund to provide capacity building related to elections, police, courts, and prisons. The report also noted that the UN, European Union, and World Bank supported the Central African Republic government in developing a “National Recovery and Peacebuilding Plan” while harmonizing humanitarian and development funding to ensure complementarity with the UN peacekeeping operation.
Developing international military capacity. U.S. officials told us that UN peacekeeping operations provide an indirect benefit of helping to professionalize the military units from many developing countries that contribute troops to the UN. We have previously reported that building military capacity of foreign partners to address security-related threats is an important goal of U.S. national security strategy and foreign policy.
Relative Strengths of U.S. Peacekeeping Operations Would Include Faster Deployment and Superior Command and Control, Logistics, Intelligence, and Counterterrorism Capabilities
According to U.S. and UN officials, the relative strengths of U.S. peacekeeping operations would be faster deployment and superior command and control, logistics, intelligence, and counterterrorism capabilities.
Deployment speed. State, DOD, and UN officials highlighted the United States’ ability to deploy troops and police to peacekeeping operations more quickly than the UN. Unlike the U.S. military, which can draw from a ready pool of military personnel, the UN must seek troops from UN member states, which takes time. UN officials told us that the UN faces a shortage of both troops and UN police, which slows deployment. Further, a 2015 report by the UN High-level Panel on Peacekeeping stated that the UN “has struggled to get sufficient forces on the ground quickly enough and relies on under-resourced uniformed capabilities.” The report also stated that aviation, medical, and engineering specialists, among others, are difficult to mobilize in advance of infantry units.
Command and control. State, DOD, and UN officials told us that U.S. operations would enable the U.S. military to have direct command and control, whereas UN operations, which are inherently multinational, face challenges with command and control over troops from several different countries. The UN High-level Panel report noted that UN peacekeeping operations’ weak command and control is a well-known constraint that limits the type of operations the UN can undertake.
Logistics support. U.S. and UN officials told us that U.S. operations have superior logistics systems. U.S. procurement likely would be faster than UN procurement, which lacks a standing supply chain and, therefore, relies on third-party vendors, according to UN officials. In addition, the UN High-level Panel report stated that UN peacekeeping operations’ logistics systems and structures in the field are under severe strain, which can limit the mobility of these operations.
Intelligence capability. U.S. and UN officials agreed that U.S. operations would involve superior intelligence capability. The UN only recently established an intelligence policy—in May 2017—having recognized that some peacekeeping operations had been deployed in fragile political and security environments with asymmetrical and complex threats. However, UN officials acknowledged that the scope of UN intelligence capability remains limited.
Counterterrorism capability. DOD officials told us that a U.S. peacekeeping operation would have the capability to include a counterterrorism component and would not be constrained in the use of force, if needed, in response to terrorist threats. UN peacekeeping operations, on the other hand, lack the capabilities and specialized military preparation to engage in counterterrorism operations, according to the UN High-level Panel report. The UN report stated that counterterrorism should be undertaken by the host government, a capable regional force, or an ad hoc coalition authorized by the UN Security Council. According to the UN report, UN peacekeeping operations may engage in proactive and preemptive use of force to protect civilians and UN personnel from threats; however, offensive force to degrade, neutralize or defeat an opponent is a fundamentally different type of posture that should be authorized by the Security Council only under limited and exceptional circumstances.
Agency Comments and Our Evaluation
We provided a draft of this report to DOD, State, and the UN for review and comment. DOD provided a letter, reproduced in appendix II, which stated that it had no comments. State did not provide comments. The UN provided technical comments, which we incorporated into our report as appropriate.
We are sending copies of this report to the appropriate congressional committees, the Secretaries of Defense and State, and the Secretary-General of the United Nations. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-9601 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made significant contributions to this report are listed in appendix III.
Appendix I: Objectives, Scope, and Methodology
The objectives of this report were to (1) compare the reported costs of a specific United Nations (UN) peacekeeping operation to the estimated costs of a hypothetical, comparable operation implemented by the United States; (2) identify factors that affect cost differences; and (3) identify stakeholder views on the relative strengths of UN and U.S. peacekeeping operations.
To compare the reported costs of a specific UN peacekeeping operation to the estimated costs of a hypothetical, comparable operation implemented by the United States, we selected the UN Multidimensional Integrated Stabilization Mission in the Central African Republic (MINUSCA) as a case study. We compared the reported UN expenditures for MINUSCA, which included both military and civilian components, with estimated costs for a hypothetical U.S. operation with a similar level of military and civilian personnel. Our comparison covers a total of 3 years and 3 months—from MINUSCA’s establishment in April 2014 through June 30, 2017, the end of UN fiscal year 2017. We selected MINUSCA because it is in sub-Saharan Africa, where most UN peacekeeping operations established since 2003 have taken place, and has a typical scope and budget compared to other UN peacekeeping operations in sub-Saharan Africa, according to U.S. and UN officials. In addition, MINUSCA is one of the most recent UN peacekeeping operations; thus, initial expenditures for the operation are relatively current. Because the results of our cost comparison are based on a single case study, they cannot be generalized to all UN peacekeeping operations.
To determine the UN’s costs for MINUSCA, we analyzed UN budget and expenditure data covering the initial start-up period (April 2014 to June 2014) and the first 3 UN fiscal years (July 1, 2014 to June 30, 2017). We spoke with officials of the UN Departments of Peacekeeping Operations, Field Support, and Management at UN Headquarters in New York, New York, to better understand the characteristics of MINUSCA and the different costs affecting MINUSCA’s budget and expenditures. We assessed UN expenditure data through discussions with cognizant UN officials and a review of external audits of UN budgetary information and found them sufficiently reliable for our purposes. We also analyzed data on U.S. contributions to UN peacekeeping operations for fiscal years 2014 through 2017 from the Department of State’s (State) Bureau of International Organization Affairs to determine total U.S. contributions to MINUSCA, and UN peacekeeping operations overall.
To estimate the costs of a hypothetical, comparable operation implemented by the United States, we developed a hypothetical scenario for a U.S. operation based on the MINUSCA budget and supporting documents, assuming deployment of the same number of military, civilian, and police personnel in the Central African Republic (CAR) over the same time period (April 2014 through June 2017). To estimate the military portion of the operation, we interviewed Department of Defense (DOD) officials and staff at the Institute for Defense Analyses (IDA), a DOD-sponsored non-profit corporation involved in developing cost estimates for U.S. contingency operations. The Office of the Undersecretary of Defense-Comptroller and IDA generated a cost estimate for the military components included in the hypothetical operation using the Contingency Operations Support Tool (cost estimating tool). DOD uses this tool to develop cost estimates for all military contingency operations. The cost estimate included only the incremental costs of the operation—those directly attributable to the operation that would not be incurred if the operation did not take place. For example, the estimate produced by the cost estimating tool did not include the direct salaries of active duty personnel as those costs would be incurred by the United States regardless of a possible decision to undertake the hypothetical operation. We assessed the cost estimating tool’s applicability to developing a hypothetical cost estimate for the purposes of this report through discussions with DOD and IDA officials, and compared the tool to the accurate and comprehensive characteristics of a high-quality cost estimate, as described in the GAO Cost Estimating and Assessment Guide. While we found the DOD cost estimating tool generated a sufficiently reliable cost estimate for a hypothetical U.S. peacekeeping operation, we did not assess the overall reliability of the tool or its capability to generate accurate or comprehensive estimates for future U.S. operations.
To generate our estimate of U.S. military costs using the DOD’s estimating tool, we used UN military deployment numbers as a baseline for the scale of a hypothetical, comparable U.S. peacekeeping operation, while using unit sizes and rotations in deployment that were considered appropriate for the U.S. military, according to DOD and IDA officials. We based the hypothetical U.S. operation, and hence the cost estimate, on the following assumptions, which correspond approximately with MINUSCA’s actual UN personnel deployments:
Theater of operation: Central African Republic (CAR)
Type of operation: military contingency
Operation time frame: April 10, 2014 through June 30, 2017
Military contingents: as of June 30, 2017, 11,495 total personnel
Infantry: 10 units of 630-785 personnel per unit, approximately 90 percent active duty / 10 percent reserves
Communication / signals: 1 unit, 124 personnel per unit
Engineering: 4 units, 200 personnel per unit
Military police: 1 unit, 120 personnel per unit
Formed police units (military police): 12 units, 140 personnel per unit
Hospital / medical: 1 level III hospital, 248 beds, 495 personnel
Helicopter units: 2 UH-60 C3 units, 1 MH-60M Assault attack helicopter unit, 100 personnel per unit
Quick reaction force: 1 unit, 160 personnel per unit
Special forces: 1 tactical civilian affairs unit, 1 Marine special operations intelligence unit, 160 personnel per unit
Unmanned aerial vehicle: 1 unit, 84 personnel
Transportation: 1 heavy transport unit, 120 personnel per unit
Operational tempo: 1.0 for all phases of operation and units, except aviation units (set at 1.5)
Deployment schedule and phasing: phased deployment, including 14 days for predeployment (e.g., training), 5 days for deployment, 180 days for active duty unit sustainment and 270 days for reserve unit sustainment, 5 days for redeployment, and 0 days for reconstitution
Housing: contractor-provided semi-permanent housing
Transportation: personnel and equipment transported by airlift from the United States (primarily Fort Hood, Texas), material (such as water, food, and other consumables) transported by airlift from Italy
We obtained input on the operational design for the military portion of the cost estimate from DOD officials in the Joint Chiefs of Staff, the Office of the Undersecretary of Defense-Policy, and the Office of the Undersecretary of Defense-Comptroller, and IDA officials. However, the military portions of the scenario and their corresponding cost estimate have some limitations. As a result of rounding for some units, U.S. military personnel numbers do not exactly match the MINUSCA deployment levels. In addition, based on input from DOD officials, we attempted to select military units that would provide an essential function per U.S. common practices while keeping the overall personnel deployment level as close as possible to MINUSCA's deployment level. An actual U.S. military plan may differ significantly from the UN plan as a result of differences between U.S. and UN military operations, structure, doctrine, and circumstances at the time of the operation.
To estimate U.S. civilian costs, we matched the number of U.S. civilian police and personnel to the number serving in MINUSCA. We then estimated the costs of deploying these U.S. civilian personnel in CAR for the same time period as MINUSCA. We did not attempt to determine how the U.S. government would actually implement civilian components of a peacekeeping operation in CAR.
To estimate U.S. civilian police costs, we met with State’s Bureau of International Narcotics and Law Enforcement Affairs (INL) to identify State’s costs for civilian police contractors providing police training and technical assistance in sub-Saharan Africa. Based on INL’s input we assumed that the base salary of civilian police would be grade 13, step 5 on the Office of Personnel Management’s general schedule salary tables for federal employees. In addition to the average base salary, we identified other costs—with input from INL—including, among others, personal equipment, travel from the United States, and State’s published allowances specific to CAR for local cost of living, post hardship differential, danger pay, and living quarters. We applied the average cost per officer to the average number of UN civilian police officers deployed in MINUSCA.
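For illustration, the per-officer aggregation described above reduces to a sum of salary, allowances, and other costs, multiplied by the average number of officers deployed. The following is a minimal sketch; every dollar amount and the officer count are hypothetical placeholders, not the INL figures or State allowance rates used in our estimate.

```python
# Sketch of the civilian police cost aggregation described above.
# All dollar amounts and the officer count are hypothetical placeholders.

def annual_cost_per_officer(base_salary, allowances, other_costs):
    """Sum the base salary, post-specific allowances, and other per-officer costs."""
    return base_salary + sum(allowances.values()) + sum(other_costs.values())

allowances = {                       # CAR-specific allowances (placeholder values)
    "cost_of_living": 15_000,
    "post_hardship_differential": 30_000,
    "danger_pay": 30_000,
    "living_quarters": 25_000,
}
other_costs = {                      # placeholder values
    "personal_equipment": 8_000,
    "travel_from_united_states": 6_000,
}

per_officer = annual_cost_per_officer(120_000, allowances, other_costs)
average_officers = 300               # hypothetical average deployed strength
print(f"${per_officer * average_officers:,.0f} per year")
```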
To estimate U.S. civilian personnel costs, we met with State’s Bureau of Budget and Planning to identify the costs of State Foreign Service officers and locally employed staff, based on the number of UN international and national civilian staff deployed to MINUSCA, respectively. We matched the number of State Foreign Service officers for the U.S. cost estimate to the number of UN international staff in MINUSCA, with input from State to align the grade levels. The estimated costs for Foreign Service officers include average salaries based on State’s Foreign Service salary tables and State’s allowances specific to CAR, including local cost of living, post hardship differential, and danger pay. We also met with State Bureau of Budget and Planning officials to estimate other costs for Foreign Service officers, which we included in our cost estimate, including post assignment travel, administrative support costs, residential furnishings, and residential guards, among others, but we did not assess the reliability of these additional costs provided by State. In addition, State’s Bureau of Overseas Buildings Operations provided the actual costs of residential leases for Foreign Service officers in CAR in fiscal year 2017, which we used to estimate the cost of housing Foreign Service officers in CAR. We also matched the number of State locally employed staff to the number of UN national staff deployed to MINUSCA and added their average salaries and other costs in CAR based on data provided by State’s Bureau of the Comptroller and Global Financial Services.
While MINUSCA’s expenditures also included costs for sending an annual average of up to about 200 UN volunteers to CAR, State officials told us that the United States generally would not send volunteers through its assistance efforts to a high-risk post, such as CAR. Therefore, we did not include any costs related to volunteers in the cost estimate. We also did not include costs related to host-government-provided personnel serving in MINUSCA. In addition, UN expenditures included about $7 million for “quick-impact projects” to support local government infrastructure and civil society initiatives. We did not include comparable costs for quick-impact projects in our U.S. cost estimate because we did not have a basis for matching these costs.
To identify factors that affect cost differences between MINUSCA and a hypothetical, comparable operation implemented by the United States, we reviewed the U.S. cost estimate generated in conjunction with DOD, IDA, and State officials, and identified significant areas of cost for the United States and the assumptions incorporated in the estimate or factors specified by U.S. officials that drive those costs. We compared the U.S. cost estimate, including these significant areas of cost, to UN costs to identify differences. We interviewed U.S. and UN officials regarding U.S. and UN standards and policies that explain differences between MINUSCA costs and the estimated costs of a U.S. operation.
To identify stakeholder views on the relative strengths of UN and U.S. peacekeeping operations, we reviewed UN reports on peacekeeping operations and interviewed UN, DOD, and State officials. In addition, we reviewed GAO’s 2006 report comparing the costs as well as the strengths of a UN peacekeeping operation in Haiti with those of a hypothetical U.S. operation.
We conducted our review from February 2017 through February 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Comments from the Department of Defense
Appendix III: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the individual named above, Drew Lindsey (Assistant Director), Howard Cott, Juan Pablo Avila-Tournut, Debbie Chung, Martin de Alteriis, Neil Doherty, Jennifer Leotta, Caitlin Mitchell, Elizabeth Repko, and Alex Welsh made significant contributions to this report.
Why GAO Did This Study
To promote international peace and security, the UN had 16 ongoing peacekeeping operations worldwide as of June 30, 2017, with a total budget of almost $8 billion in UN fiscal year 2017 and contributions of over 100,000 military, police, and civilian personnel from more than 120 countries. The United States is the largest financial contributor to UN peacekeeping operations, providing an average of about 28 percent of total funding annually.
The Department of State Authorities Act, Fiscal Year 2017, includes a provision for GAO to compare the costs, strengths, and limitations of UN and U.S. peacekeeping operations. This report (1) compares the reported costs of a specific UN operation to the estimated costs of a hypothetical, comparable operation implemented by the United States; (2) identifies factors that affect cost differences; and (3) identifies stakeholder views on the relative strengths of UN and U.S. peacekeeping operations.
GAO worked with the UN, DOD, and State to generate a cost estimate of a hypothetical U.S.-led operation in the Central African Republic comparable to MINUSCA. GAO developed this estimate using DOD's cost estimating tool for contingency operations and State data on civilian costs, assuming a U.S. operation using roughly the same levels of military and civilian personnel as MINUSCA. The cost estimate should not be construed as suggesting that the United States would likely implement such an operation in the Central African Republic or that it would implement such an operation in the same way.
GAO is making no recommendations.
What GAO Found
Based on United Nations (UN) and Departments of Defense (DOD) and State (State) data, GAO estimates that it would cost the United States more than twice as much as it would cost the UN to implement a hypothetical operation comparable to the UN Multidimensional Integrated Stabilization Mission in the Central African Republic (MINUSCA). MINUSCA cost the UN approximately $2.4 billion for the first 39 months of the operation. GAO estimates that a hypothetical U.S. peacekeeping operation in the Central African Republic of roughly the same size and duration would cost nearly $5.7 billion—almost eight times more than the $700 million the United States contributed to MINUSCA over the same time period.
Various factors affect differences between the actual cost of MINUSCA and the estimated cost of a hypothetical, comparable U.S. operation in the Central African Republic. The United States and the UN would source and transport some supplies and equipment differently, affecting the cost of both operations; for example, the United States would airlift water into the Central African Republic, while the UN does not do so to the same extent. The United States also would incur the cost of civilian police and military reservist salaries, while the UN does not pay any troop or police salaries. Finally, some higher standards for facilities, intelligence, and medical services increase the U.S. cost estimate relative to UN costs for similar operational elements.
UN and U.S. peacekeeping operations have various relative strengths, according to U.S. and UN officials. These officials said that, because the UN is a multilateral organization, UN peacekeeping operations have international acceptance and are more likely to be viewed as impartial. Officials also said that the UN enjoys global access to expertise and experience, and can leverage assistance from multilateral donors and development banks. Relative strengths of a U.S. peacekeeping operation would include faster deployment and superior command and control, logistics, intelligence, and counterterrorism capability, according to U.S. and UN officials.
Background
Federal Budget Process and Relevant ICE Entities
The federal budget process provides the means for the President and Congress to make informed decisions among competing national needs and policies, to allocate resources among federal agencies, and to ensure that laws are executed according to established priorities. OMB, as part of the Executive Office of the President, is to guide the annual budget process, make decisions on executive agencies' budgets, aggregate submissions for agencies, and submit the consolidated document for the executive branch as the President's Budget Request to Congress. In support of the President's budget request, departments are to submit budget justifications to the congressional appropriations committees, typically to explain the key changes between the current appropriation and the amounts requested for the next fiscal year. During the process, OMB is to ensure that budget requests are consistent with presidential objectives and issue guidance to federal agencies through OMB Circular A-11, which provides instructions for submitting budget data and materials, as well as for developing budget justifications.
Various offices within ICE are involved in developing ICE’s annual budget request for immigration detention (see fig. 1). Two ICE entities integral to the budget request formulation are the Office of Budget and Program Performance (OBPP) and Enforcement and Removal Operations (ERO). Within ICE’s Office of the Chief Financial Officer, OBPP is responsible for guiding ICE’s annual budget request process, including analyzing and validating budget projections for all of ICE’s directorates, including ERO. ERO is responsible for estimating the total amount of funding to cover costs of immigration detention. For the upcoming budget year, ERO determines the projected ADP, while OBPP determines the projected bed rate. ERO then utilizes the two variables of bed rate and ADP in its estimate of future detention costs. Other offices within ICE, such as Custody Management, Field Operations, Operations Support, Management and Administration, and the Office of Policy are involved in the formulation of other aspects of ICE’s budget or in supervisory roles. Figure 1 is an organizational chart of ICE offices that are involved in the annual budget request for immigration detention resources.
ICE Formulates Its Budget Request According to DHS Guidance, But Does Not Have a Documented Review Process to Ensure Accuracy of Budget Calculations
ICE Follows DHS Guidance and Uses Key Variables to Formulate its Budget Request
ICE follows budget formulation guidance from DHS, and uses two key variables—the bed rate and ADP—when formulating its budget request. Approximately 20 months before the start of a particular fiscal year, the Secretary of Homeland Security provides Resource Planning Guidance to all DHS components. This document works to align the department's planning, programming, budgeting, and execution activities over a five-year period, and sets forth the resource planning priorities of the department as they relate to its mission. The department planning priorities are to guide the DHS components as they develop their respective Resource Allocation Plans (RAP). After the Secretary issues the Resource Planning Guidance, DHS's Office of the Chief Financial Officer provides fiscal guidance to ICE that identifies an estimated allocation amount, which ICE is to budget to in its RAP submission.
In developing its RAP, each of ICE’s program offices determines its current budget needs and then submits Program Decision Options (PDO) to ICE leadership for any changes from the prior year’s budget. Every ICE program and activity submits, in the form of a PDO, any changes that are to occur, including all programmatic increases, initiatives, reductions, or eliminations. Once all of the program offices submit their PDOs to ICE leadership, a council of leadership representatives from across ICE convenes to approve and prioritize the selected PDOs moving forward to DHS.
ICE submits its RAP to DHS for a final decision with all pertinent information attached, such as the prioritized PDOs based on mission and department needs, fiscal changes to programs, and potential capital investments. During the Resource Allocation Decision (RAD) process, DHS leadership reviews all of the RAP submissions from across the department and approves or rejects the PDOs. Individual program offices work out any changes that may have occurred during the RAD process prior to the completion of the budget request and submission to OMB.
DHS then submits a budget proposal on behalf of the entire department, inclusive of ICE, to OMB. OMB is to prepare a budget request for all of the executive departments and agencies, which is submitted to Congress as the President’s budget. Following OMB decisions on agency budget requests, DHS submits a budget justification, inclusive of ICE, with more details to the congressional appropriations committees. Key steps in the overall process are shown in figure 2.
When preparing the budget submission, ICE uses two key variables, the bed rate and ADP (see sidebar), to calculate a cost estimate for the resources needed for managing the immigration detention system. In order to determine the amount necessary to operate the detention system for adult detainees, ICE multiplies the projected ADP by the projected bed rate by the number of days in the year (see fig. 3). ICE then includes these costs as part of its Custody Operations account.
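To make the calculation concrete, the following minimal sketch applies the formula in figure 3. The ADP and bed rate inputs are illustrative assumptions, not figures from an actual ICE budget request.

```python
# Sketch of the detention cost formula in figure 3:
# cost = projected ADP x projected bed rate x days in the year.
projected_adp = 34_000        # average daily population (illustrative)
projected_bed_rate = 120.00   # dollars per bed per day (illustrative)
days_in_year = 365

custody_operations_estimate = projected_adp * projected_bed_rate * days_in_year
print(f"${custody_operations_estimate:,.0f}")  # about $1.49 billion
```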
ICE Does Not Have a Documented Review Process to Ensure the Accuracy of Budget Calculations
ICE does not have a documented review process to ensure the accuracy of the budget calculations presented in its yearly congressional budget justifications (CBJ). Based on our review of CBJs from fiscal year 2014 through fiscal year 2018, there are a number of inconsistencies and errors in the numerical calculations pertaining to immigration detention costs. During our review of ICE's fiscal year 2014 and fiscal year 2015 budget requests, we calculated the total amounts needed for ICE's immigration detention costs using its formula (see fig. 3) and the ADP and bed rate figures provided in the budget request, and compared the results with ICE's requested amounts. Based on our calculations, the amounts ICE requested are not consistent (by a difference of $34.7 million for fiscal year 2014 and $129 million for fiscal year 2015) with the figures used to develop its estimate. ICE officials acknowledged the errors.
Additionally, ICE’s fiscal year 2017 budget request erroneously applied $2 million in costs from detention beds to transportation and removal, resulting in a request for $2 million less for detention beds and $2 million more for transportation and removal, a total of $4 million in errors in the agency’s estimate. In response to the misapplication of $2 million, ICE officials stated that the CBJ still provided for the same net total because the two mistakes offset each other. Officials also stated that the final appropriation ultimately was not based on its budget request numbers and ICE’s detention activities were funded at an amount that was greater than what they requested. The fiscal year 2018 request also contains a multiplication error that resulted in ICE requesting $4,000 less than the correct calculation would have produced.
ICE officials told us that there are multiple reviews of the budget documents prior to submission to ensure that the numbers presented are accurate and supportable. However, ICE could not provide us with any documentation that the reviews were conducted. ICE officials stated that reviews were typically completed using hard copies and then approval was verbal and not documented formally. Standards for Internal Control in the Federal Government states that management should design control activities to achieve objectives and respond to risks. Such activities include review processes to ensure the accuracy of budget calculations prior to official submission and appropriate documentation of the reviews.
While the final appropriations that Congress determines for ICE may ultimately be higher or lower than what ICE requested, generating and presenting an accurate picture of ICE’s funding needs is necessary to provide Congress the information needed to make informed decisions. By developing and implementing a documented review process, it is more likely that relevant ICE officials are accountable for ensuring the accuracy of the budget requests and underlying calculations. Without a documented review process, ICE is not positioned to demonstrate the credibility of its budget requests. Furthermore, Congress may not have reliable information to make informed decisions about funding immigration detention needs.
ICE Has Models for Developing Bed Rates and ADP But Could Improve Projections
ICE Uses Historical Costs to Develop its Projected Bed Rates But Underestimated Actual Bed Rates from Fiscal Years 2014 through 2017
Bed Rate
ICE’s bed rate is based on four cost categories.
Bed/guard costs: The contract costs of beds and guards at U.S. Immigration and Customs Enforcement’s (ICE) various detention facilities.
Health care: Medical expenses of the detainee population.
Other direct costs: All costs that directly concern detainees, including payments to detainees for work programs, provisions and supplies for detainees, and telecommunications billed to individual facilities.
Service-wide or indirect costs: Overhead expenses for ICE’s management of the detention system, including rent, security, office equipment, and liability insurance.
Although ICE bases its projected adult bed rate on historical costs, from fiscal year 2014 through fiscal year 2017, ICE underestimated the actual rate. ICE calculates the adult bed rate by tracking obligations and expenditures in four categories—bed/guard costs, health care, other direct costs, and service-wide costs, also known as indirect costs. (See sidebar for more information.) We found that ICE has improved its process for collecting this information from its financial management system since 2014, when we previously reported that limitations in its data system required ICE personnel to manually enter codes to categorize relevant data. In fiscal year 2014, ICE introduced a new financial coding process that allows staff to pull costs—the obligations and expenditures—directly from its financial management system. This system is an improvement over the manual workarounds that ICE previously used and allows staff to pull the necessary data more easily for the purposes of calculating the projected bed rate.
To estimate what ICE’s projected adult bed rate will be two years into the future, ICE calculates and averages the year-over-year percentage change in costs since fiscal year 2009 and applies this average rate of change to the current bed rate twice, following the formula outlined in figure 4.
ICE calculates the year-over-year percentage change for each cost category—bed/guard costs, health care, other direct costs, and service- wide costs—and then applies the average of these changes to the current cost of the category. The final projected bed rate is the sum of the four cost categories. According to ICE, the average of the year-over-year percentage change serves as its inflation rate and more accurately reflects the annual escalation of its detention costs. Given that ICE must determine the projected bed rate almost two years into the future, ICE applies its inflation rate twice to the current costs.
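The following is a minimal sketch of this projection method. Only the first two bed/guard values ($77.50 and $81.59) come from this report; all other historical rates are placeholders assumed for illustration.

```python
# Sketch of the bed rate projection in figure 4: for each cost category,
# average the year-over-year percentage changes since fiscal year 2009,
# then apply that average twice to the current rate. The final projected
# bed rate is the sum of the four projected category rates.

def project_category_rate(history):
    """Project a category rate two years out from its historical series."""
    pct_changes = [(b - a) / a for a, b in zip(history, history[1:])]
    inflation = sum(pct_changes) / len(pct_changes)   # average year-over-year change
    return history[-1] * (1 + inflation) ** 2         # inflation applied twice

categories = {                        # fiscal year 2009 onward (mostly placeholders)
    "bed_guard":    [77.50, 81.59, 84.00, 86.50, 88.75, 91.20, 94.00, 96.50],
    "health_care":  [18.00, 18.50, 19.10, 19.40, 19.20, 19.00, 19.60, 20.10],
    "other_direct": [5.00, 5.10, 5.25, 5.30, 5.40, 5.55, 5.60, 5.70],
    "service_wide": [9.00, 9.20, 9.35, 9.50, 9.65, 9.70, 9.90, 10.05],
}

projected_bed_rate = sum(project_category_rate(h) for h in categories.values())
print(f"${projected_bed_rate:.2f} per bed per day")
```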
Although the formula outlined in figure 4 summarizes ICE’s adult bed rate methodology, ICE’s guidance notes that situations may occur in which it is advisable to adjust national bed rate projections to account for new trends or other changes. For example, in response to concerns from Congress about ICE’s application of indirect costs, and the opportunity to revise the fiscal year 2017 bed rate, ICE officials told us they changed some of the methodology for the projected 2017 and 2018 bed rates.
Although ICE’s bed rate model is based on historical costs, from fiscal year 2014 through fiscal year 2017 ICE’s adult bed rate projections underestimated the actual bed rate. Specifically, ICE underestimated the bed rate by $2.16 in fiscal year 2014, by $8.08 in fiscal year 2015, by $5.42 in fiscal year 2016, and by $0.31 in fiscal year 2017 (see fig. 5). For illustrative purposes, underestimating the bed rate by $5 per day, assuming an ADP of 34,000, yields a more than $62 million underestimation in the detention budget request.
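A worked check of that illustration confirms the arithmetic:

```python
# A $5 per day underestimate in the bed rate, at an assumed ADP of 34,000,
# compounds over a 365-day year to more than $62 million.
underestimate_per_day = 5.00
assumed_adp = 34_000
print(f"${underestimate_per_day * assumed_adp * 365:,.0f}")  # $62,050,000
```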
The bed rate model assumes that operations in the immigration detention system will continue without drastic changes and that past trends will continue since it bases its projections on historical costs. According to ICE officials, the bed rate model cannot anticipate a need to increase the capacity of the entire system, or anticipate a policy decision to close or continue operation of a facility. Either of these situations may cause the bed rate to change.
Although certain situations may lead to unanticipated changes in the bed rate, we identified a number of factors in ICE’s current bed rate model that have led to inaccuracies, including using incorrect inflation factors and mixing costs for family and adult facilities.
ICE calculates the projected bed rate by using its own inflation rate based on the escalation of detention costs instead of a standard inflation rate provided by OMB or DHS, but did not provide documentation of its rationale. As described previously, ICE’s inflation factor is based on an average of the year-over-year changes in costs since fiscal year 2009. OMB guidance states that it will provide agencies with economic assumptions to be used for budget requests, including inflation rates, and that agencies can consider price changes, such as bed/guard costs, as a factor in developing estimates. ICE officials told us that historical costs more accurately reflect potential increases, but did not provide us with documentation to support that rationale. According to ICE officials, by accepting the inflation factor used in ICE’s budget request, OMB has given tacit, if not direct, approval for its usage.
Based on our review of ICE’s adult bed rate projections, relying on historical costs may not be the best method for predicting future costs because it assumes that past trends, including negative inflation rates, will continue. Because the bed rate model accounts for changes on a per person basis, negative inflation factors could be due to decreasing costs or an increasing detainee population, both of which may change in the following year. For example, ICE’s fiscal year 2018 bed rate model incorporates a negative inflation factor for health care costs even though in its budget justification ICE attributes part of the bed rate increase over the prior year to rising health care costs. Relying on historical costs may lead to inaccuracies if a deflationary trend does not continue as the model assumes.
In our examination of the bed rate model, we also found that ICE did not calculate the percentage change correctly. Year-over-year percentage change compares the difference in costs in percentage terms and can be calculated by dividing the difference in costs by the starting costs. Instead of following this formula, ICE’s bed rate model calculated the actual monetary difference between the two years and represented it as a percentage change. For example, from fiscal year 2009 to fiscal year 2010, the bed/guard rate increased from $77.50 to $81.59. Whereas the percentage change in the rate is 5.28 percent, ICE calculated the percentage change by subtracting one rate from the other ($4.09) and adding a percent sign (4.09%), thereby treating the dollar difference as a percentage change. (See table 1.)
ICE officials stated that they decided to use the actual monetary difference as a way to account for inflation for the fiscal year 2018 adult bed rate. However, using the actual monetary difference in costs does not provide a percentage change; it misrepresents a difference in price as a percentage. Further, we found that because ICE did not appropriately calculate the percentage change for each year, the average of year-over-year changes, which ICE uses as its inflation factor, is not correct. For example, ICE’s inflation factor for the bed/guard rate is 2.74 percent, while the appropriate calculation is 3.28 percent. (See table 1.) (See Appendix I for more information and calculations.)
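The following sketch reconstructs the error using the bed/guard rates cited above, contrasting the correct year-over-year percentage change with ICE's treatment of the dollar difference as a percentage.

```python
# Correct calculation: divide the dollar difference by the starting rate.
# ICE's calculation: treat the raw dollar difference as if it were a percent.
old_rate, new_rate = 77.50, 81.59     # fiscal years 2009 and 2010 bed/guard rates

correct_pct_change = (new_rate - old_rate) / old_rate * 100
incorrect_pct_change = new_rate - old_rate   # a dollar figure, mislabeled as a percent

print(f"correct:   {correct_pct_change:.2f} percent")     # 5.28 percent
print(f"incorrect: {incorrect_pct_change:.2f} 'percent'") # 4.09 dollars, read as a percent
```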
In addition, when calculating the fiscal year 2018 projected bed rate, rather than following formulas contained in the bed rate model, ICE manually entered a different inflation factor for two cost categories—other direct costs and service-wide costs—instead of relying on the historical data. ICE added together the inflation factors indicated by the model for other direct costs and service-wide costs and then applied the combined inflation factor to both categories. By combining and manually entering the factors, ICE mistakenly introduced an additional error. Officials did not provide an explanation or documentation of why they manually entered these numbers or combined the two inflation factors except to state that it stemmed from the Congressional request to separate the costs.
ICE’s adult bed rate model includes information for family facilities, even though family facilities are budgeted separately and in a different manner from adult facilities. For its adult facilities, ICE contracts with the individual facilities to provide beds and the cost is dependent on the number of adults detained. ICE’s family detention facilities, however, are operated by local governments or private companies and are funded through fixed price contracts that are not dependent on the number of people detained. (See sidebar for more information.)
While ICE budgeted $291.4 million for its family facilities in fiscal year 2018, our analysis showed that ICE also included the population in its family facilities in the calculations of the adult bed rate. For example, in fiscal year 2018, ICE divided the obligations and expenditures for health care, other direct costs, and service-wide costs across the entire detainee population of adults and families, resulting in an adult bed rate that was lower than if the costs were divided by the adult population alone. Using this underestimated bed rate has resulted in a lower cost estimate than what ICE may need to sustain its adult population.
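The following minimal sketch illustrates this dilution effect; the cost and population figures are hypothetical, not actual ICE data.

```python
# Dividing shared costs across the combined adult and family population
# yields a lower adult bed rate than dividing by the adult population alone.
shared_daily_costs = 1_200_000.0    # health care, other direct, and service-wide costs per day
adult_adp, family_adp = 34_000, 3_000

diluted_rate = shared_daily_costs / (adult_adp + family_adp)   # adults plus families
adult_only_rate = shared_daily_costs / adult_adp               # adult population only
print(f"${diluted_rate:.2f} vs ${adult_only_rate:.2f} per adult bed per day")
```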
Additionally, ICE double-counted some costs by budgeting for family facilities in both the adult bed rate and the total cost for family facilities. Specifically, we found that ICE included “other direct costs” associated with its family facilities when calculating its adult bed rate. Given that ICE already budgeted for these family facilities’ costs as a line item within its budget for family facilities, calculating the adult bed rate in this way double-counts the costs for family facilities in the budget. ICE officials did not provide documentation or their rationale for including the family facilities in their adult bed rate model. (See Appendix I for more information and calculations.)
Standards for Internal Control in the Federal Government states that management should use quality information to achieve objectives, defining quality information as appropriate, current, complete, accessible, and provided on a timely basis. Quality information is based on relevant data from reliable sources and relatively free from error. According to GAO’s Cost Estimating and Assessment Guide, having a realistic estimate of projected costs facilitates effective resource allocation. Because information requirements should consider the expectations of external users, by basing its detention cost estimates on quality information, ICE would help ensure they are useful to Congress for making resource allocation decisions. Additionally, GAO’s cost estimating guide states that applying correct inflation rates is an important step to ensure accurate cost estimates and that inflation assumptions should be well documented.
According to ICE officials, ICE’s most substantial change to the bed rate model since its creation in 2009 was a revision in 2014 to account for the costs of family facilities. In our review, we found that ICE includes information for family facilities in the adult bed rate model. By reviewing its bed rate model and methodology and correcting identified inaccuracies and other potential issues, ICE could improve its adult bed rate projections and better ensure its funding requests are credible and reliable.
ICE Reported Using ADP Numbers Based on Policy Decisions to Calculate Budget Needs, But It Is Unclear How the ADP Figures Were Developed
To calculate its budget needs, ICE reported using ADP figures that are based on policy decisions, but it is unclear if the ADP figures were based on statistical analysis. Further, ICE did not provide documentation on how it calculated the final ADP numbers used in its budget requests. For example, the fiscal year 2018 budget justification includes a projected ADP of 48,879 adults, a 63 percent increase over the fiscal year 2017 projected adult ADP (29,953) and a 49 percent increase over the fiscal year 2016 actual adult ADP (32,770). Although ICE provided a general explanation of various factors that influence ADP, including policy changes such as executive orders regarding immigration enforcement, the agency did not provide documentation quantifying the effect of these factors nor the calculations or methodology used to arrive at the 48,879 figure.
In the absence of documentation, we reviewed ICE’s CBJs from fiscal year 2014 through fiscal year 2018 and we could not identify a clear methodology that ICE used across the years for developing the ADP and using it to calculate its detention-related budget needs. For example, in the fiscal year 2018 CBJ, ICE did not independently determine the projected ADP for use as an input into its cost estimate. Rather, officials started with the prior year’s funding level for detention costs, which officials told us they were directed to do by OMB, and calculated the ADP it could house with that amount. In the fiscal year 2017 budget justification, ICE used its projected ADP numbers from the previous year as starting points to calculate changes in its budget request. Additionally, while the appropriations act for fiscal year 2014 included a proviso that ICE’s funding support at least 34,000 detention beds during the fiscal year, ICE included a lower number of detention beds (30,539) in its 2015 budget request.
According to ICE officials, the ADP figures used in its budget requests are initially projected by ERO, but may be changed by ICE leadership, DHS leadership, or OMB. Officials said the final ADP figure is based on policy decisions that account for factors that could affect the detainee population—for example, delays in immigration courts or the number of asylum officers on staff. According to officials, ICE prepares the budget request two years in advance of the year of execution with the best knowledge they have available at that time, including ADP projections. Officials stated that ADP is difficult to estimate given the unpredictable nature of events such as natural disasters, gang activity, or political upheaval in another part of the world, which may lead to an unanticipated increase in migration. Additionally, officials told us that various policy developments across the administration, DHS, or other agencies may affect immigration trends or enforcement. ICE officials also stated that because immigration detention facilities may receive detainees from other parts of the immigration system, ADP can be affected by actions taken by other actors involved in immigration enforcement, such as the Executive Office for Immigration Review, U.S. Customs and Border Protection, and U.S. Citizenship and Immigration Services. Such events could include, for example, delays in immigration court cases or an increase in the number of asylum cases, which could increase ADP.
When asked to provide documentation for the fiscal year 2018 ADP projection of 51,379, ICE provided us a document containing tables and justification that explained the factors that impact ADP, but did not provide us the calculations or methodology used to arrive at the projected ADP. While the ADP used in its budget requests may be developed based on policy decisions, documenting the calculations and rationale by which the figure was developed would help to demonstrate how the number was determined and that it was based on sound decisions.
Although ICE officials stated that ADP is difficult to forecast, the agency has developed a statistical model that may help predict the ADP. ERO’s Law Enforcement Systems and Analysis (LESA) Office has developed a statistical model that uses population data directly pulled from ICE’s Enforcement Information Database to forecast the ADP in upcoming years. (See sidebar for more information.) ERO began using the model in 2014, and according to officials, ICE currently uses it to estimate how much funding the agency will need for detention costs for the remainder of the fiscal year. The model describes historical trends, seasonal fluctuations, and random movement in the ADP, and then uses these historical patterns to make forecasts. Based on our evaluation, we found that this type of model was a reasonable method to forecast ADP, and that LESA’s particular modeling choices were generally consistent with accepted statistical practices and appropriate for the data and application.
Using LESA’s model, ICE can produce a range of ADP forecasts under different scenarios, as well as confidence intervals for any particular forecast. Confidence intervals indicate the level of certainty around the model’s forecast, depending on how wide the range is for the ADP forecast. Confidence in the model’s forecasts decreases when the ADP range is smaller and when forecasting for later time periods. Because the model relies on historical data in making ADP forecasts, LESA is able to incorporate separate analysis of external or unexpected events to help inform the effects of similar events on ADP in the future. For example, according to ICE officials, LESA can conduct ad hoc analysis outside of the model of how potential policy decisions, such as a change in the number of field officers, may affect future ADP, if a similar event occurred in the past. Although new policies, processes, or political or economic events may cause the dynamics of ICE’s detainee population to change in ways that historical data would not predict, incorporating this type of model into ICE’s process to project ADP could potentially help provide useful and accurate forecasts in instances where ICE does have relevant historical data. ICE officials stated that ICE has used the LESA model in the past to inform the budget during the year of execution, but has only recently used it to provide confidence intervals for the ADP inputs into the budget projections when revising the projected fiscal year 2017 bed rate.
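For illustration, the following is a minimal sketch of this kind of time-series forecast with confidence intervals, written in Python using the statsmodels library. The synthetic monthly ADP data and the seasonal model order are assumptions for demonstration, not LESA's actual data or model specification.

```python
# Sketch of a seasonal time-series ADP forecast with confidence intervals,
# in the spirit of the LESA model described above.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic monthly ADP history: trend + seasonal fluctuation + noise.
rng = np.random.default_rng(7)
months = pd.date_range("2012-10-01", periods=60, freq="MS")
trend = 28_000 + 60 * np.arange(60)
season = 1_500 * np.sin(2 * np.pi * np.arange(60) / 12)
history = pd.Series(trend + season + rng.normal(0, 400, 60), index=months)

# Fit a seasonal ARIMA model; the (1,1,1)x(1,0,1,12) order is an assumption.
fitted = SARIMAX(history, order=(1, 1, 1), seasonal_order=(1, 0, 1, 12)).fit(disp=False)

# Forecast 24 months ahead; the last 12 months approximate a budget year.
forecast = fitted.get_forecast(steps=24)
budget_year_adp = forecast.predicted_mean.tail(12).mean()
lower, upper = forecast.conf_int(alpha=0.05).tail(12).mean(axis=0)

print(f"projected ADP: {budget_year_adp:,.0f} "
      f"(95 percent interval roughly {lower:,.0f} to {upper:,.0f})")
```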
According to GAO’s Cost Estimating and Assessment Guide, having a realistic estimate of projected costs facilitates effective resource allocation. In addition, federal standards for internal control state that management should design control activities to achieve objectives, and as part of those control activities, management should clearly document significant events in a manner that allows the documentation to be readily available for examination. Without documenting the methodology or rationale behind the ADP numbers ICE uses to develop its budget request for immigration detention, Congress and other stakeholders do not have clear visibility into the number upon which ICE is basing its budget request. Additionally, by considering how or whether the LESA model could be incorporated into ICE’s process for projecting ADP, ICE could leverage an existing model and identify potential improvements in the accuracy of its ADP projections based on historical data.
ICE Does Not Fully Meet GAO Best Practices For Estimating Detention Costs
ICE’s cost estimate for immigration detention resources does not fully meet best practices outlined in GAO’s Cost Estimating and Assessment Guide. As described earlier, the characteristics of a reliable cost estimate are comprehensive, well documented, accurate, and credible. As noted in table 2, ICE’s cost estimate for fiscal year 2018 substantially met the comprehensive characteristic, partially met the well documented and accurate characteristics, and minimally met the credible characteristic. By not sufficiently meeting the best practices in all of the characteristics, the cost estimate for the immigration detention cannot be considered reliable.
Based on our analysis, ICE substantially met the comprehensive characteristic by including all costs, but has double-counted certain costs, as described earlier, and has not clearly documented all ground rules and assumptions. Based on our analysis, ICE’s cost estimate appears to include all government and contractor labor costs as well as material, equipment, facilities, and services to fund immigration detention, accounting for both the salary and expenses categories of the budget. ICE also adheres to DHS’s Common Appropriations Structure, and follows the OMB Object Class structure for planning and tracking costs at a more granular level. Officials stated that they use past execution reports, historical data, and spend plans to help inform the necessary distribution of funding for immigration detention by project and object code.
While ICE accounted for all costs, ICE did not directly address how the agency prevents omissions or double-counting in its cost estimate, and double-counted costs by including other direct costs for family facilities when estimating the cost to house adult detainees. Additionally, ICE did not identify ground rules and assumptions influencing the estimate. Officials said that several documents list ground rules and assumptions; however, the ground rules cited are very broad or have not been followed. For example, ICE guidance states that ICE shall fund sufficient detention beds to support current enforcement and removal priorities and mandatory detention requirements, but it does not provide a basis for determining a sufficient number of detention beds. Another important factor in determining the bed/guard rate for adult beds is tier utilization. Tier utilization refers to the use of bed space in detention centers. For example, at a given detention center, ICE may pay a lower rate if it houses more detainees. When determining the bed rate based on tier utilization, ICE did not provide documentation of the ground rules or assumptions behind the tier utilization percentage used to calculate the fiscal year 2018 bed rate. Finally, as noted earlier in this report, ICE has not documented its rationale for not following DHS or OMB guidance for applying inflation rates to the estimate.
According to GAO’s guide, given that cost estimates are based on limited information, defining ground rules and assumptions is important because they help identify the risks associated with these assumptions, including how changes in the assumptions could influence cost. Without clear documentation and rationale behind ground rules and assumptions, the estimate will not be able to be reconstructed when the budget staff and information used to develop the estimate are no longer available.
Based on our analysis, ICE partially met the well documented characteristic by showing that its cost estimate had been reviewed by management and providing documentation that described its methodology in general. However, ICE did not show the formulas used to develop the cost estimate in sufficient detail to enable an outside party to fully follow its calculations or to re-create the fiscal year 2018 bed rate. Although the agency provided the bed rate model and showed what numbers were used as inputs into the model to project the fiscal year 2018 bed rate, it did not provide documentation that described the formulas used to calculate the projected bed rate. During our review of the bed rate model, we had to reconstruct the calculations step-by-step to identify the formulas and variables used to create the fiscal year 2018 bed rate.
Additionally, ICE officials provided conflicting explanations regarding how they applied inflation to develop the projected fiscal year 2018 adult bed rate. In one instance, ICE officials said that they applied a 2.66 percent inflation factor to develop the fiscal year 2017 adult bed rate and then calculated and applied a cost adjustment to add more than 8,800 new beds, to produce the fiscal year 2018 bed rate. In another instance, ICE officials stated that the inflation factor was adjusted to 3.73 percent overall to develop the fiscal year 2017 bed rate and then they applied the cost adjustment to develop the fiscal year 2018 projected bed rate. These two explanations also differ from how the bed rate model applies inflation as described earlier in this report. ICE also did not document how the cost adjustment was calculated or the actual costs that the adjustment is based upon.
When asked about documentation, ICE officials stated that the budget justification was not the appropriate document to cite detailed methodologies, but did not provide any additional supporting documentation. Documentation is essential for validating a cost estimate, including demonstrating that it is a reliable estimate of future costs. Consistent with GAO’s guide, without a well documented cost estimate, ICE is not positioned to present the estimate’s validity or answer questions about its basis. According to GAO’s Cost Estimating and Assessment Guide, estimates that lack sufficient documentation are not useful for updates or information sharing and can hinder understanding and proper use.
Based on our analysis, ICE partially met the accurate characteristic by basing the cost estimate on historical cost data and tracking the differences between the projected and actual bed rate and ADP. ICE officials stated that they utilized historical cost data for bed/guard contract costs, health care costs, overhead expenses, detainee wages and supplies, and detainee headcount and capacity utilization, among other categories to estimate detention costs. However, ICE did not provide evidence that it analyzes the reasons behind the variances between the cost estimate and actual numbers for each year, and as mentioned previously, we identified issues with the inflation rates used to project the bed rate and the inclusion of family facilities in the adult bed rate.
While ICE tracks differences between the projected bed rate used in the cost estimate and the actual numbers for each fiscal year, officials did not provide evidence that they analyze the reasons for these variances nor that they use this information to reassess its assumptions or models and improve them. ICE officials said that variances between the projected and actual bed rates are documented in a quarterly report that is publicly available. While these reports track the bed rate in the execution year, they do not demonstrate that ICE tracks explanations for variances between that bed rate and the original cost estimate figures presented in the budget request. ICE provided a document that showed the bed rate projection and the year-end result for fiscal years 2013 through 2016 and quarter-end results for fiscal year 2017, but the document did not explain most of the changes from the projected and actual numbers. ICE officials also said that they conduct ad hoc analyses to identify and communicate sources of variance, but did not provide any related documentation.
Without a comparison and analysis of the reasons behind the differences between the actual figures and the original estimates, ICE is not positioned to assess the quality of its projections and use that information to improve cost estimates. Tracking the forecast rate against the actual rate and tracking budget justification assumptions against actual conditions could offer insight into the quality of the forecasts, according to GAO’s cost estimating guide.
Based on our analysis, ICE minimally met the credible characteristic, and in particular did not conduct sensitivity or risk and uncertainty analyses to capture the cumulative effects if variables change. ICE also did not conduct any cross checks on the major cost elements using alternate methods to estimate cost. A sensitivity analysis reveals how a change in a single assumption, or variable, affects the cost estimate. A risk and uncertainty analysis would provide ICE a clear level of confidence about the estimate. ICE did not conduct a risk and uncertainty analysis for either the fiscal year 2018 cost estimate or the fiscal year 2018 bed rate model. Additionally, ICE’s description of the LESA model to project ADP discussed forecast confidence levels, but ICE did not quantify the uncertainty around the ADP projection of 51,379 detainees used in the fiscal year 2018 budget justification. ICE also did not discuss the range of potential costs due to uncertainty in the ADP and bed rate projections. Having a range of costs around a point estimate is useful to decision makers because it conveys the level of confidence in achieving the most likely cost.
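For illustration, the following minimal sketch shows the kind of risk and uncertainty analysis GAO's guide describes: propagating assumed uncertainty in the ADP and bed rate through the cost formula with a Monte Carlo simulation to produce a range around the point estimate. The distributions are illustrative assumptions, not ICE data.

```python
# Monte Carlo sketch: sample uncertain ADP and bed rate values, compute the
# annual cost for each draw, and summarize the resulting cost distribution.
import numpy as np

rng = np.random.default_rng(42)
trials = 100_000

adp = rng.normal(loc=34_000, scale=2_500, size=trials)     # uncertain ADP
bed_rate = rng.normal(loc=120.0, scale=6.0, size=trials)   # uncertain dollars per bed per day
annual_cost = adp * bed_rate * 365

low, median, high = np.percentile(annual_cost, [10, 50, 90])
print(f"median estimate ${median / 1e9:.2f} billion; "
      f"80 percent of trials between ${low / 1e9:.2f} and ${high / 1e9:.2f} billion")
```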
Additionally, ICE did not provide any documentation showing that major cost elements were cross checked using a different method for calculating the cost estimate to see if results were similar. According to GAO’s cost estimating guide, one way to reinforce the credibility of the cost estimate is to determine whether applying a different method produces similar results. If so, then confidence in the estimate increases, leading to greater credibility. ICE officials stated that internal and external auditors vetted the bed rate model and determined it to be credible, but this does not constitute an estimate cross check and using an alternate cost estimating method to cross check its estimate would provide greater assurance of its credibility. As noted previously, we found ICE’s bed rate model underestimated the actual bed rates over several years.
Unless all characteristics are met or substantially met, the cost estimate cannot be considered reliable. Additionally, a poor cost estimate can negatively affect a program by eventually requiring a transfer or reprogramming of funds. In recent years, ICE has consistently transferred and reprogrammed millions of dollars of funds to account for budgeting too little or too much for immigration detention costs. By improving the budget estimation to better reflect cost estimating best practices, ICE could ensure a more reliable budget request.
Conclusions
As an agency, ICE operates the immigration detention system on a budget of nearly $3 billion. Although estimating immigration detention costs may be difficult, taking steps to improve ICE’s cost estimating and budget request processes could help provide Congress with a more accurate picture of ICE’s funding needs.
Developing and implementing a documented review process for its annual budget request calculations could help ICE better ensure that its budget requests are consistently credible and reliable. Additionally, assessing its bed rate model and addressing the identified inaccuracies in its methodology could help ICE more accurately project the bed rate in upcoming years. As we noted, a difference of just five dollars in the bed rate amounts to a difference of tens of millions of dollars in the final budget calculation. Documenting the methodology or rationale behind the ADP projections would better position ICE to support the basis for its budget requests each year, and incorporating the use of a statistical model may help decision makers by providing more information about the numbers that ICE presents. Furthermore, taking steps to ensure that ICE fully addresses cost estimating best practices could ensure a more reliable overall estimate.
Recommendations for Executive Action
We are making the following five recommendations to ICE:
The Director of ICE should take steps to document and implement its review process to ensure accuracy in its budget documents.
The Director of ICE should take steps to assess ICE’s adult bed rate methodology to determine the most appropriate way to project the adult bed rate, including any inflation rates used.
The Director of ICE should take steps to update ICE’s adult bed rate methodology by incorporating necessary changes based on its assessment, and ensure the use of appropriate inflation rates and the removal of family beds from all calculations.
The Director of ICE should take steps to determine the most appropriate way to project the ADP for use in the congressional budget justification and document the methodology and rationale behind its ADP projection. As part of that determination, ICE should consider the extent to which a statistical model could be used to accurately forecast ADP.
The Director of ICE should take steps to ensure that ICE’s budget estimating process more fully addresses cost estimating best practices.
Agency Comments and Our Evaluation
We provided a draft of this report to DHS for the department’s review and comment. DHS provided written comments, which are noted below and reproduced in full in appendix II, and technical comments, which we incorporated as appropriate. DHS concurred with our recommendations and described actions underway or the actions it plans to take in response.
To our first recommendation, DHS stated that ICE recently implemented a more stringent process for the fiscal year 2020 budget cycle, and will work to more effectively document its review process and decisions during the budget formulation process.

To our second recommendation, DHS stated that ICE has completed multiple third-party assessments of its bed rate methodology. We will evaluate any assessments provided and determine the extent to which those assessments meet the intent of the recommendation.

To our third recommendation, DHS stated that ICE will provide GAO with documentation demonstrating updates to the adult bed rate methodology, including the use of an appropriate inflation rate and the removal of family beds from the calculation. We will evaluate any documentation provided and determine the extent to which ICE's actions meet the intent of the recommendation.

To our fourth recommendation, DHS stated that ICE ERO developed a statistical modeling capability and provided that documentation and methodology to GAO. As previously noted in this report, we found that this type of model was a reasonable method to forecast ADP, and that the particular modeling choices were generally consistent with accepted statistical practices and appropriate for the data and application. DHS began leveraging the model for its fiscal year 2019 budget cycle, and it will be important to see how the model is used in future budget justifications.

To our fifth recommendation, DHS stated that ICE will implement the best practices for cost estimating to the degree possible, specifically performing sensitivity and cost risk and uncertainty analyses to strengthen the credibility of its estimates. Implementing these best practices should help position ICE to produce a more reliable cost estimate.

If implemented effectively, these actions should address the intent of our recommendations.
We are sending copies of this report to the appropriate congressional committees and the Secretary of the Department of Homeland Security. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
Appendix I: GAO Analysis of U.S. Immigration and Customs Enforcement’s Fiscal Year 2018 Bed Rate Model
U.S. Immigration and Customs Enforcement (ICE) calculated a bed rate for fiscal year 2018 using a bed rate model built in Excel with data from its Federal Financial Management System and Enforcement Information Database. To project the fiscal year 2018 bed rate, ICE officials told us they used a different inflation factor from those set forth in guidance from the Office of Management and Budget (OMB) or the Department of Homeland Security (DHS). Specifically, ICE used an inflation factor based on historical service costs. ICE did not provide a documented rationale for not using OMB's inflation rate, written descriptions of the calculations within the bed rate model, or detailed ground rules and assumptions for the bed rate model.
In examining the adult bed rate model used by ICE to project the fiscal year 2018 bed rate, we identified a number of inaccuracies and errors in the formulas used. Specifically:

Instead of using the average of the percentage change in year-over-year costs, ICE used the average of the actual monetary difference in year-over-year costs and then applied that figure as a percentage.

ICE added the inflation factors for two cost categories and then applied the combined rate to each category, which led to additional negative inflation.

ICE included information for family facilities, which were already budgeted as fixed-price contracts, in the calculation of the adult bed rate.
ICE calculates a projected bed rate for two years into the future based on actual obligations and expenditures for four cost categories—bed/guard costs, health care, other direct costs, and service-wide or indirect costs. Table 3 shows ICE’s historical costs since fiscal year 2009 for these categories.
Table 4 shows ICE’s calculations to determine the projected fiscal year 2018 bed rate. To calculate the projected fiscal year 2018 bed rate, ICE applied its inflation factors twice to the fiscal year 2016 costs and then added a cost adjustment to account for the cost of adding new beds.
ICE notes that the initial projected rate is for fiscal year 2017; however, this figure follows the formula that ICE would use to determine the fiscal year 2018 bed rate. With the change in administration during fiscal year 2017, ICE had the opportunity to revise its projected bed rate. ICE officials told us that they applied their inflation factors to fiscal year 2016 costs once to project the bed rate one year into the future and then applied their inflation factors a second time in order to account for an operational adjustment, which they estimated to be approximately 3 percent. ICE officials did not provide us with documentation of their calculations or analysis showing that compounding the inflation factors over two years was equivalent to one year’s inflation plus an operational adjustment. In addition, because the inflation factors used in the bed rate model are based on historical costs, any operational costs should already have been accounted for in the model itself.
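To make the compounding question concrete, the sketch below contrasts the two calculations. The base cost and inflation factor are placeholders; only the approximately 3 percent operational adjustment comes from the officials' description.

```python
base_cost = 100.00   # placeholder fiscal year 2016 cost (illustrative only)
inflation = 0.0274   # illustrative inflation factor
operational = 0.03   # officials' estimated operational adjustment (~3 percent)

# What the model did: apply the inflation factor twice, i.e., compound it.
applied_twice = base_cost * (1 + inflation) ** 2

# What officials described: one year of inflation plus an operational adjustment.
as_described = base_cost * (1 + inflation) * (1 + operational)

print(f"Inflation applied twice:               {applied_twice:.2f}")
print(f"One year of inflation + 3% adjustment: {as_described:.2f}")
# The two results match only if the operational adjustment happens to equal
# the inflation factor, which is why documentation of the claimed
# equivalence would be needed.
```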
Inflation Factors
Using Actual Monetary Difference in Costs Instead of Percentage Change

ICE's bed rate model is designed to use the average of year-over-year percentage change as its inflation rate. However, for the revised fiscal year 2017 and the projected fiscal year 2018 bed rates, ICE did not calculate the inflation rate based on year-over-year percentage changes, but based it on the actual monetary difference in yearly costs. ICE officials told us that in response to Congress's concerns about service-wide costs, ICE began separating service-wide costs from other direct costs in fiscal year 2017. Previously, the two cost categories had been combined as an "other costs, miscellaneous" cost category. ICE officials told us that when other direct costs were separated from service-wide costs, they discovered that the average of year-over-year percentage changes showed a large decrease (negative 20 percent) for other direct costs, which was not reflected in a separate analysis conducted by ICE. Therefore, officials decided to use the average of the actual monetary difference in year-over-year costs instead. ICE officials did not provide documentation of this separate analysis. According to ICE officials, for consistency they decided to use the average of the actual monetary difference in year-over-year costs for all of the cost categories, including bed/guard, health care, and service-wide costs. The bed rate model then applied these figures as inflation factors.
Table 5 shows the results from ICE's calculation of yearly cost changes as percentages. In this table, ICE uses the formula (Year 2 - Year 1) / 100 and displays the result as a percentage. For example, as noted in table 3, the fiscal year 2010 bed/guard rate was $81.59 and the fiscal year 2009 rate was $77.50. ICE calculated the change in the bed/guard rate for fiscal year 2010 as $81.59 - $77.50 = $4.09, and then replaced the dollar sign with a percent sign, thereby treating the dollar difference as a percentage change.
Table 6 shows the results if the year-over-year change were calculated by comparing the actual percentage difference in costs. In this table, we use the formula (Year 2 - Year 1) / Year 1 and display the result as a percentage. For example, for fiscal year 2010, the percentage change in the bed/guard rate is 5.28 percent (or ($81.59 - $77.50) / $77.50), not 4.09 percent as calculated by ICE.
Because of how ICE calculated the percentage change for each year, the average of year-over-year changes, which ICE uses as its inflation factors, is not correct. For example, ICE's inflation factor for the bed/guard rate is 2.74 percent (see table 5), while the appropriate calculation yields 3.28 percent (see table 6).
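As a minimal illustration, the following sketch recomputes the fiscal year 2010 bed/guard figures from the text under both formulas:

```python
fy2009_rate = 77.50  # bed/guard rate, dollars per day
fy2010_rate = 81.59

# ICE's approach: treat the dollar difference itself as a percentage.
ice_change = (fy2010_rate - fy2009_rate) / 100             # 4.09 "percent"

# Standard approach: divide the change by the prior-year value.
actual_change = (fy2010_rate - fy2009_rate) / fy2009_rate  # 5.28 percent

print(f"ICE's figure:  {ice_change:.2%}")
print(f"Actual change: {actual_change:.2%}")
```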
Applying Combined Inflation Factor Twice

In developing its fiscal year 2018 projected adult bed rate, ICE combined the inflation factors for two cost categories—other direct costs and service-wide costs—and applied the combined rate to each category. By using this combined rate, the bed rate model applies an additional -0.54 percent factor to the categories, which it otherwise would not have done if ICE applied the individual inflation factors for the categories.
As noted in table 7, ICE's year-over-year average change for other direct costs was -1.33 percent when ICE calculated it individually for the category, and was 0.78 percent for service-wide costs.
Instead of applying these inflation factors (-1.33 and 0.78 percent) to the fiscal year 2016 costs for these categories, ICE added the two inflation factors for a total of -0.54 percent, based on the following calculation: -1.3267 + 0.7833 = -0.5433. ICE then applied this combined inflation factor to both categories (see table 4). Officials did not provide us with a rationale or documentation for why they manually entered these numbers or combined the two rates, except to say that the change stemmed from the congressional request to separate the costs. By applying the combined inflation factor to both categories, ICE introduced an additional error for these two cost categories.
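The sketch below illustrates the effect using the two inflation factors from the text; the fiscal year 2016 daily rates are placeholders rather than ICE's figures:

```python
odc_rate = -0.013267  # other direct costs factor from the text (-1.3267 percent)
sw_rate = 0.007833    # service-wide costs factor from the text (0.7833 percent)
combined = odc_rate + sw_rate  # -0.005433, i.e., -0.5433 percent

odc_cost = 1.78   # placeholder fiscal year 2016 daily rate, other direct costs
sw_cost = 16.00   # placeholder fiscal year 2016 daily rate, service-wide costs

# Applying each category's own inflation factor:
individual = odc_cost * (1 + odc_rate) + sw_cost * (1 + sw_rate)

# What the model did: apply the combined factor to both categories.
as_modeled = (odc_cost + sw_cost) * (1 + combined)

print(f"Each category's own factor: {individual:.4f}")
print(f"Combined factor applied:    {as_modeled:.4f}")
# With these placeholder rates, the combined factor understates the total.
```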
Family Facility Information in the Adult Bed Rate
Counting Families in the Adult Bed Rate

ICE's bed rate model divides the obligations and expenditures for health care, other direct costs, and service-wide costs by the entire detainee population of adults and families, resulting in an adult bed rate that is lower than if the costs were divided by the adult population alone. ICE's bed rate model is used to calculate a bed rate to estimate detention costs for the adult population. Family facilities operate on firm fixed price contracts and all cost categories for the family facilities—bed/guard costs, health care costs, other direct costs, and service-wide costs—are budgeted for separately from costs for adult detention in ICE's budget request. By dividing adult bed costs across its entire detainee population, ICE may be underestimating the total detention costs.
To calculate the daily per person cost of health care, other direct costs, and service-wide or indirect costs, the bed rate model divides the total obligations and expenditures for each category by the number of mandays. Table 8 shows ICE's calculations using the formula:

Obligations and Expenditures / Mandays for Adults and Families = Daily Per Person Rate

By spreading these costs across the entire population, the bed rate model derives a lower daily per person cost than by considering only the adult detainee population. For example, ICE calculated the daily per person cost of health care in fiscal year 2016 as: $148,186,091 / 9,096,014 = $16.29.
Table 9 shows what the daily per person cost of health care would be if the family population were removed from the calculation. Specifically, the daily per person health care cost would be $148,186,091 / 8,696,453 = $17.04. The result of a $0.75 underestimate in health care costs is an overall underestimation of approximately $13.4 million for the fiscal year 2018 immigration detention system cost estimate, based on the calculation: $0.75 x 48,879 x 365 = $13,380,626.
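The following sketch reproduces this allocation arithmetic using the figures from the text; as in the text, the per-day difference is rounded to $0.75 before being scaled up:

```python
health_care_cost = 148_186_091  # fiscal year 2016 health care obligations and expenditures
total_mandays = 9_096_014       # mandays for adults and families combined
adult_mandays = 8_696_453       # mandays for adults only
adult_adp = 48_879              # projected adult average daily population
DAYS_PER_YEAR = 365

rate_spread_over_all = health_care_cost / total_mandays  # $16.29 per person per day
rate_adults_only = health_care_cost / adult_mandays      # $17.04 per person per day

# Round the daily difference to $0.75, as the text does, then scale it up.
daily_underestimate = round(rate_adults_only - rate_spread_over_all, 2)
annual_impact = daily_underestimate * adult_adp * DAYS_PER_YEAR

print(f"Spread over all detainees: ${rate_spread_over_all:.2f} per day")
print(f"Adults only:               ${rate_adults_only:.2f} per day")
print(f"Annual understatement:     ${annual_impact:,.0f}")  # ~$13,380,626
```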
Including Family Facilities in Cost Data

In addition to spreading total costs across the entire population, rather than just the adult population, ICE's bed rate model includes obligations and expenditures for family facilities. In examining ICE's data for other direct costs, we found that data from the three family facilities (Berks, Karnes, and South Texas) were included in the facility cost data. These three facilities' other direct costs totaled $222,425. Because these facilities operate on firm fixed price contracts that include other direct costs, and these costs were already budgeted at $5.5 million in the $291.4 million allotted for family facilities, these costs were double-counted in the model and the costs were added to the adult bed rate. It is unclear if cost data for family facilities are also included in the health care and in the service-wide costs used to calculate the adult bed rate. ICE officials did not provide documentation or their rationale for including the family facilities in their adult bed rate model.
Table 10 demonstrates the effect of removing information for family facilities from the other direct cost data and then dividing by the adult population alone. This calculation results in a daily per adult rate for other direct costs of $1.75 for fiscal year 2016, which is 3 cents lower than the rate if the other direct costs for family facilities are included (and the costs are divided by the adult population alone).
Appendix II: Comments from the Department of Homeland Security
Appendix III: GAO Contacts and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Kirk Kiester (Assistant Director), Brian Bothwell, Pamela Davidson, Eric Hauswirth, Susan Hsu, Heather Keister, Sasan J. “Jon” Najmi, Leah Q. Nash, Karen Richey, Daniela Rudstein, Jack Sheehan, and Jeff Tessin made significant contributions to this report.

Why GAO Did This Study
In fiscal year 2017, ICE operated on a budget of nearly $3 billion to manage the U.S. immigration detention system, which houses foreign nationals whose immigration cases are pending or who have been ordered removed from the country. In recent years, ICE has consistently had to reprogram and transfer millions of dollars into, out of, and within its account used to fund its detention system. The explanatory statement accompanying the DHS Appropriations Act, 2017, includes a provision for GAO to review ICE's methodologies for determining detention resource requirements. This report examines (1) how ICE formulates its budget request for detention resources, (2) how ICE develops bed rates and determines ADP for use in its budget process, and (3) to what extent ICE's methods for estimating detention costs follow best practices. GAO analyzed ICE's budget documents, including CBJs, for fiscal years 2014 to 2018, examined ICE's models for projecting ADP and bed rates, and evaluated ICE's cost estimating process against best practices.
What GAO Found
U.S. Immigration and Customs Enforcement (ICE) formulates its budget request for detention resources based on guidance from the Office of Management and Budget and the Department of Homeland Security (DHS). To project its detention costs, ICE primarily relies on two variables—the average dollar amount to house one adult detainee for one day (bed rate) and the average daily population (ADP) of detainees.
U.S. Immigration and Customs Enforcement's (ICE) Formula to Calculate Detention Costs
GAO found a number of inconsistencies and errors in ICE's calculations for its congressional budget justifications (CBJs). For example, in its fiscal year 2015 budget request, ICE made an error that resulted in an underestimation of $129 million for immigration detention expenses. While ICE officials stated their budget documents undergo multiple reviews to ensure accuracy, ICE was not able to provide documentation of such reviews. Without a documented review process for reviewing the accuracy of its budget request, ICE is not positioned to ensure the credibility of its budget requests.
ICE has models to project the adult bed rate and ADP for purposes of determining its budget requests. However, ICE consistently underestimated the actual bed rate due to inaccuracies in the model, and it is unclear if the ADP used in the budget justification is based on statistical analysis. GAO identified factors in ICE's bed rate model—such as how it accounts for inflation and double-counts certain costs—that may lead to its inaccurate bed rate projections. For example, in fiscal year 2016, ICE's projections underestimated the actual bed rate by $5.42 per day. For illustrative purposes, underestimating the bed rate by $5 per day, assuming an ADP of 34,000, yields a more than $62 million underestimation in the detention budget request. By assessing its methodology and addressing identified inaccuracies, ICE could ensure a more accurate estimate of its actual bed rate cost. Additionally, ICE reported that the ADP projections in its CBJs are based on policy decisions that account, for example, for anticipated policies that could affect the number of ICE's detainees. While ICE's projected ADP may account for policy decisions, documenting the methodology and rationale by which it determined the projected ADP would help demonstrate how the number was determined and that it was based on sound assumptions.
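For reference, the arithmetic behind this illustration is: $5 per day x 34,000 detainees x 365 days = $62,050,000.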
ICE's methods for estimating detention costs do not fully meet the four characteristics of a reliable cost estimate, as outlined in GAO's Cost Estimating and Assessment Guide. For example, while ICE's fiscal year 2018 detention cost estimate substantially met the comprehensive characteristic, it partially met the well-documented and accurate characteristics, and minimally met the credible characteristic. By taking steps to fully reflect cost estimating best practices, ICE could better ensure a more reliable budget request.
What GAO Recommends
GAO recommends that the Director of ICE: (1) document and implement its review process to ensure accuracy in its budget documents; (2) assess ICE's adult bed rate methodology; (3) update ICE's adult bed rate methodology; (4) document the methodology and rationale behind the ADP projection used in budget requests; and (5) take steps to ensure that ICE's detention cost estimate more fully addresses best practices. DHS concurred with the recommendations. |
Background
Colleges are a unique and diverse sector, varying from small, private schools in rural environments to large public schools in major cities. As of the 2015-2016 school year (the most recent available data), there were approximately 4,000 degree-granting colleges in the United States. In addition to educating students in classrooms, many colleges also manage a number of related business operations, such as dormitories, scientific research facilities, hospitals, performing arts centers, athletic venues, child care facilities, transportation systems, and agricultural facilities. These various roles and responsibilities increase the complexity of emergency preparedness efforts.
DHS has developed a national approach to emergency preparedness by setting a national preparedness goal and outlining activities for achieving it. This approach is designed to apply across all levels of government and sectors of the economy—including colleges, as well as local, state, and federal governments—and to prioritize collaboration among these entities. The National Preparedness Goal identifies activities to prevent, protect against, mitigate, respond to, and recover from threats and hazards and recognizes that preparedness is a shared responsibility of the whole community. The National Incident Management System (NIMS), which was developed by DHS’ Federal Emergency Management Agency (FEMA), operationalizes the goal by providing a guide with advice for government and nongovernmental entities for managing emergencies, including identifying a common vocabulary and processes for responding to emergencies. For example, NIMS establishes a standardized approach for communicating information during emergencies and outlines a leadership structure for managing emergencies, called an “Incident Command System,” so that the various entities responding to an emergency can operate seamlessly.
DHS, DOJ, and Education all develop and disseminate emergency preparedness resources in line with their respective missions. Other agencies, such as the Department of Health and Human Services and the National Weather Service, also produce information that can help with colleges’ emergency preparedness efforts.
Selected Colleges Prepare for Emergencies by Involving the Campus Community, Developing and Publicizing Plans, and Partnering with Local and State Agencies
Selected Colleges Varied in Organizational Structures for Emergency Preparedness and Involvement of Campus Community
The offices responsible for emergency preparedness efforts and the number of staff assigned to such efforts varied among the 18 selected colleges we interviewed; these offices generally received some input from other members of the campus community. According to guidance for emergency planning from DHS' Federal Emergency Management Agency (FEMA), emergency preparedness staff are generally responsible for tasks such as developing emergency plans, communicating and updating those plans, and taking a lead role during an actual event. College officials we spoke with said that their schools generally designated a lead office for emergency preparedness efforts. This lead office ranged from a dedicated emergency preparedness office at some colleges to offices that had non-emergency preparedness responsibilities as well, such as offices of public safety, student affairs, or facilities. About half of the officials responsible for emergency preparedness efforts at the 18 selected colleges we interviewed also spent time on other types of responsibilities that were not specific to emergency preparedness, such as health and safety issues. State agency officials and representatives from a college emergency preparedness association we spoke with also noted that emergency managers at colleges often “wear many hats,” or have limited time to devote to emergency planning, which makes their jobs more difficult. College officials often said balancing competing priorities was challenging. For example, an official at one college told us that if his school had more staff it could expand outreach efforts to students and faculty and design specific actions for a wider range of emergencies.
In addition to having a lead office, most of the 18 colleges reported convening advisory committees or teams from the campus community to help develop or revise emergency preparedness plans. For example, one official at a large public university with over 36,000 students told us emergency plans are reviewed by an emergency response committee comprised of representatives from the business office, student housing, faculty, and the provost, among others. An official from another college reported that, while some campus community members played a less active part in developing the emergency plan, they were still responsible for understanding their roles and responsibilities in the event of an emergency. According to FEMA's guidance for emergency plans, there are benefits to using a team approach. For example, the campus community is more likely to follow a plan if members have been involved in developing it because of a sense of shared ownership (see text box).
Two College Emergency Managers’ Descriptions of Emergency Preparedness Efforts

On the day a campus police officer was shot and killed, several of the members of the campus leadership, including myself (the emergency manager) and chief of police, were off campus. Fortunately, many people on campus have been trained to manage a significant event because college leadership had placed a strong emphasis on emergency preparedness, including succession planning. When something occurs it is important to have a team that has practiced together and can provide leadership even if some key individuals are not on campus at the time.
Hurricane Irma was 340 miles across, wider than the states of Florida and Georgia in some places. We were on the “dirty side” of the hurricane, just to the east of the eye. We were relieved that the damage on our campuses was not worse. Because of our actions before the storm—such as removing loose items like traffic cones and signage and tying down large equipment— we minimized the damage.
Selected Colleges Reported Preparing for a Range of Emergencies and Varied in How They Communicated and Practiced their Plans
College officials we interviewed described preparations for a range of emergencies and used a variety of tools to communicate and practice their plans (see text box). Officials we interviewed at all 18 colleges said their school developed “all hazards” emergency plans, which means the plans are designed to address a range of emergencies while prioritizing those that are most likely to affect their campus. This “all hazards” approach is supported by federal emergency preparedness principles as outlined in NIMS. Most college officials we spoke with said they prioritize at least one type of natural disaster that could occur in their geographical area, as well as manmade threats like active shooters. Most of the college officials reported talking with state or local partners or using some type of risk assessment tool or similar analysis to prioritize specific types of emergencies. College officials sometimes described this process as prioritizing emergencies that either occur more frequently, or are likely to have a significant effect on the college if they were to occur. For example, several officials at selected colleges said their schools prioritized active shooter events—even though they occur relatively rarely—because of incidents at other colleges or the potential effects on the community if such an event were to occur. A college’s specific characteristics can also inform its emergency plan. For example, officials from two colleges said their schools serve as research institutions and may need to take extra steps to secure scientific infrastructure in an emergency. Two officials described emergency preparedness efforts related to the physical location of their campus, such as bordering a body of water or being adjacent to an airport.
Two College Emergency Managers’ Descriptions of Responding to Emergencies

We knew that the hurricane was likely to hit other parts of our state badly, but we were not overly concerned that the hurricane would hit us directly. I came to work that morning and there were 20 buses on campus by our football field. We are an evacuation center but someone had forgotten to tell us that they were sending us 1,100 evacuees. Where were we going to put 1,100 people? These are the types of events that you plan for and hopefully you never have to implement those plans, but that day we had to do it. It took us about 4 or 5 hours between the time the buses showed up to when we had prepared the gymnasium with cots that were provided by the American Red Cross and food for the evacuees. The evacuees were here for 3 days. Our administrative staff slept on cots in our offices so that we were on campus the whole time the evacuees were here.
Prior to the rally, we set up cameras in the area and arranged for additional security through mutual aid agreements with other police departments. We also convened in a nearby meeting room to monitor the situation. The situation turned violent very suddenly. At first, a couple hundred students and other individuals were in the area peacefully. Then a more rowdy group convened and within 15 minutes of their arrival, bottles were flying through the air and windows were being broken. I looked down for just a moment, then looked up again and a generator was on fire. We tried very hard to continue with the event because we believe in free speech, but safety became a concern and we had to cancel. It was very stressful and hard to watch. We were worried about the safety of our students.
College officials we interviewed also outlined a variety of methods to communicate with the community in the event of an emergency and to conduct emergency drills.
Officials we interviewed at the 18 selected colleges most commonly described using college websites, text messages, or mass email alerts to communicate emergency preparedness information to the campus community (see text box). Officials at several colleges also said they developed more detailed applications that students and faculty could download to their electronic devices for up-to-date emergency preparedness information.
Two College Emergency Managers’ Descriptions of Emergency Communications

Within minutes of the shooting, an alert was sent utilizing multiple channels including texts, email, message boards, web, desktop and voice messages. This serves two functions; it provides redundancy of delivery and also considers the different information receiving preferences of the community. Emergency messages, at a minimum, provide what happened, where it happened, and what action needs to be taken. Updates are sent when there is new information. It is recommended that during an emergency you communicate at least every 30 minutes. It is also important to ensure that correct up to date information is available, since inaccurate rumors can spread quickly through social media.
Twitter helped us amplify our messages. We wrote these messages quickly, while doing many other things, so that the community could have information as soon as possible including about areas to avoid for safety reasons. After the fact, the messages also provided a time- stamped record of the events and the campus response to those events.
About half of the colleges also told us that they offer training to communicate emergency preparedness information to specific groups such as students, faculty, and administrators. For example, an official at one college told us the college has targeted outreach to faculty by developing specific trainings that cover specific issues, such as what to do when classes are disrupted or a building is no longer accessible, for example, as the result of a weather event. Officials from several colleges also said they communicate emergency preparedness information during new student orientation. Several college officials acknowledged that engaging students can be challenging, and some officials said they address this challenge by making presentations or printed and online materials as engaging as possible.
Emergency Preparedness Drills and Exercises

College officials we interviewed also said their colleges practice and test emergency preparedness plans by conducting drills and exercises at least once a year. Most officials from the 18 selected colleges said they conducted evacuation drills, such as fire drills; a few officials said they conducted more time-intensive activities such as “tabletop exercises” (i.e., sessions in which officials meet to discuss their roles during a specific type of emergency). For example, a large public college conducted a tabletop exercise to simulate a hypothetical weather event that damaged a dormitory. One official at a large university also described how the college uses emergency preparedness principles to manage non-emergency events such as sports events in order to practice their plans.
College emergency managers said that buy-in from a college's top leadership was very important for promoting emergency preparedness efforts and increasing campus involvement. For example, one official described top leadership buy-in as the “guiding light” for the campus community. Another official said the president of his college made it mandatory for all executive staff to attend emergency preparedness trainings, which demonstrated his commitment to emergency planning and preparedness. When such support is lacking, officials said, it is often difficult to engage students and faculty. For example, one college official told us that his college's previous president viewed emergency preparedness as bothersome and a burden, and that this lack of support limited the types of drills that could be conducted on campus. Another official at a private 4-year school explained that his college could not participate in “The Great ShakeOut” program because the drill fell outside the hours during which drills were permitted to occur, a restriction intended to avoid conflicts with classroom instruction time.
Selected Colleges Often Coordinated with Local Partners or State Agencies
Officials at most of the 18 selected colleges stated that they relied on either their local or state partners, or both, for advice, questions, or to obtain resources for emergency preparedness. These partners were also the first responders for colleges experiencing emergencies and may include local and state police and fire departments, hospitals, and emergency management offices. Coordinating with partners is a key component of the federal emergency preparedness principles, as outlined in the National Preparedness Goal and NIMS.
Most of the officials we spoke with at our selected colleges said they work with partners in their local community, such as police, fire, and emergency management departments or local public health agencies, in preparing for emergencies. For example, one official at a large public university described a mutual aid agreement with its local emergency management department, which allows his school access to the county’s radio communication system in the event of an emergency. The specific nature of local partnerships often varied based on factors such as the size of the college and the surrounding community. For example, we heard from some state, college, and association representatives that some smaller colleges did not have very extensive police or security departments, and therefore, relied heavily on local police departments when emergencies occurred. While coordination often involved planning for how a community could help a college in the event of an emergency, college and emergency preparedness association officials also described instances in which large universities in small towns had more emergency preparedness resources than the town and were therefore the ones offering help. For example, one large university in a part of the country prone to tornadoes offers shelter to town residents and employs emergency response coordinators to help individuals quickly find shelter.
Officials also said interpersonal relationships play a big part in deciding to whom they reach out. Most of the college officials with whom we spoke highlighted the importance of their interpersonal relationships with local and/or state law enforcement or emergency management officials and in some cases, attributed these relationships to having previously worked in local or state law enforcement or emergency management. For example, one college official told us that his former role as a local police chief has made it easy to identify and maintain contacts with local police, fire, and emergency medical services and to include them in all campus drills and exercises.
College officials also described partnering with state agencies to develop their emergency plan and identify roles in the event of an emergency, adhere to state requirements, or obtain resources (see text box). Officials at about half of the 18 selected colleges described working with state law enforcement entities to, for example, obtain information about emerging threats, or involve state officials in drills and exercises to practice their colleges’ emergency plans. About half of the college officials also described cases in which they were required by state law or regulation to complete certain college-specific emergency preparedness activities, such as developing an emergency operations plan, although officials from a college emergency preparedness association noted that state requirements related to college emergency preparedness vary widely. In addition to describing requirements from state emergency management agencies, officials from several public colleges described emergency preparedness requirements from the head office of their state’s college system. Other officials said that their state did not have any requirements specific to emergency preparedness at colleges.
States sometimes also provided resources for colleges’ preparedness efforts. Officials at most of the 18 colleges we contacted said that they received some state written guidance, training, or technical assistance that was either specifically tailored to colleges, or was designed for various entities including colleges. For example, Colorado has an online school safety center that disseminates emergency preparedness resources and offers technical assistance. An official from the Kansas Board of Regents told us the Board’s staff helps to facilitate a new emergency preparedness community of practice led by colleges, and an official from the state’s Division of Emergency Management said they hold general emergency preparedness trainings in which colleges may participate.
In addition to supports from local and state government, officials at most of the selected colleges reported that they received support or assistance from college emergency preparedness associations. For example, these associations host conferences and conduct studies on emergency preparedness.
Three College Emergency Managers’ Descriptions of Working with Community Partners

Informal networks were essential. People who know each other will help each other. I have a friend in the state police department and requested his assistance with security for the evacuation center. The state police provided approximately 10 troopers to assist the campus police officers. Some evacuees brought their pets with them, so the county office of emergency management activated its animal shelter resources and positioned an animal shelter on campus. Someone brought a 4-foot iguana. What do you do with an iguana?
The group that was being destructive moved back and forth between campus and the city, so we communicated and coordinated a lot with community partners. We work together on a daily basis, so the communication that night was seamless. We also had a member of the local police department in our emergency management headquarters during the event, which was very helpful.
In the days leading up to Hurricane Irma, statewide briefings were held twice a day with a variety of emergency personnel in the room, including local police and fire chiefs, mayors, power companies, communications personnel, and the state emergency management department. Everyone had already discussed how we would work together in the event of an emergency, so the conversation focused on coordinating specific actions. For example, we are a state system of technical colleges with many tractor-trailer drivers on campus. We were asked to deploy those drivers to deliver supplies to various state and FEMA locations around the state.
In addition to managing emergencies for the college, I am also the mayor of one of the local towns and those responsibilities dovetail nicely. Responding to emergencies never becomes second nature, but it’s nice to know that when something natural or manmade strikes, there are systems, people, and assets in place. One of the reasons that the system works so well now is because frameworks like NIMS were put in place after Hurricane Katrina.
Several Federal Agencies Offer Emergency Preparedness Resources Although Selected Colleges Reported Mixed Awareness
Federal Agencies Provide Guidance, Training, Technical Assistance, and Other Resources to Help Colleges Prepare for Emergencies
Various sub-agencies within DHS, DOJ, and Education are involved in developing and providing emergency preparedness resources for colleges (see fig. 1).
These three agencies use a variety of methods to provide resources, such as written guidance, webinars, and individual technical assistance (see fig. 2). The content of these resources ranges from general emergency management information to guidance specifically tailored to schools (see text box). Agency officials we interviewed said federal agencies have specific areas of expertise as it relates to college emergency preparedness. For example, DHS’ FEMA provides broad emergency preparedness information and tools and DOJ approaches emergency preparedness through a law enforcement and public safety perspective. Education’s role includes the work of its Federal Student Aid office, which approaches emergency preparedness by issuing relevant guidance, providing technical assistance, and enforcing compliance with the Clery Act.
Federal officials noted that colleges can have differing needs when it comes to emergency preparedness, based on their size, funding, and current threats. As a result, agency officials said they strive to provide tailored resources when possible. For example, DHS officials said that the Campus Resilience Program is building a website portal that will include a menu of FEMA resources tailored to colleges’ needs, including a downloadable self-assessment of risk and vulnerability. This new program is meant to expand on a similar pilot program that operated from 2013 to 2016; officials expect it to be accessible to schools midway through fiscal year 2018. Education and DOJ officials said that college officials have recently been requesting information and assistance with demonstrations and large events on campus. Specifically, the DOJ-funded National Center for Campus Public Safety (NCCPS) publicized a “For Official Use Only” report on maintaining safety and order on campuses during protests and demonstrations, which was produced by DHS and DOJ. According to NCCPS tracking records, 325 colleges and other parties requested this guidance from January through August 2017. Additionally, agencies have developed resources based on current events, including webinars in response to a series of severe hurricanes in fall 2017.
Examples of Federal Resources for Colleges’ Emergency Preparedness Efforts
National Incident Management System: The Department of Homeland Security’s (DHS) Federal Emergency Management Agency (FEMA) provides general emergency management resources through its National Incident Management System (NIMS) and Incident Command System (ICS). FEMA officials have also helped produce some college-specific resources within NIMS and ICS, such as a guide for NIMS implementation for colleges, and courses tailored to college officials, including a course titled “Multi-Hazard Emergency Management for Higher Education.”
National Center for Campus Public Safety (NCCPS): Funded by the Department of Justice (DOJ), NCCPS maintains a website with a library of resources and training for colleges, and distributes a weekly electronic newsletter to officials who request to be on the distribution list. NCCPS also staffs research associates who answer email requests from college officials.
Readiness and Emergency Management for Schools (REMS) Technical Assistance Center: Administered by the Department of Education (Education), the center includes a community of practice and links to federal resources and training. The REMS Center addresses emergency preparedness for both K-12 schools and colleges; according to officials, the center devotes approximately 20 percent of its resources to emergency preparedness for colleges.
2013 Guide for Developing High-Quality Emergency Operations Plans for Institutions of Higher Education: Developed by Education, DOJ, DHS, and other agencies, this is an overall guide for colleges as they develop their emergency plans.
Assistance related to Clery Act components on emergency preparedness: Offices within Education provide guidance (such as the Handbook for Campus Safety and Security Reporting) and assistance with calls to the Campus Safety and Security Help Desk.
Campus Resilience Program: As part of this program, the Office of Academic Engagement, within DHS, leads the National Seminar and Tabletop Exercise Series for Institutions of Higher Education, a series of campus-based events where college officials discuss their roles during a simulated emergency situation. DHS officials collaborate with officials from DOJ and other agencies to conduct these events. In 2016, the tabletop exercise focused on responding to campus violence.
Campus Liaison Program: Federal Bureau of Investigation (FBI) Campus Liaison Agents, comprised of both Special Agents and Task Force Officers on the Joint Terrorism Task Forces in FBI field offices, provide information, training, exercises, and response capabilities to campus public safety officials.
Research and reports on manmade threats: Agencies have published reports on manmade threats applicable to higher education settings, such as the 2010 report “Campus Attacks: Targeted Violence Affecting Institutions of Higher Education,” which was a collaboration among the FBI, Education, and the Secret Service.
Most of the federal agency officials we interviewed said they were generally aware of resources produced by other federal agencies and reported that collaboration is based on relationships formed through prior collaborative efforts, such as the White House-initiated effort to produce emergency preparedness guidance for colleges in 2013. For example, Education officials described being contacted by their colleagues at other agencies with questions or requests, and DHS and DOJ officials said they frequently cross-promote each other’s resources. Further, various agencies have advisory boards and committees to inform their agency- specific initiatives, such as the DHS Homeland Security Academic Advisory Council, which includes officials from other agencies. However, some agency officials shared potential issues with information sharing. For example, one official said he continues to encounter federal offices that have emergency preparedness resources of which he was unaware, indicating there are continued opportunities for increased collaboration.
There is currently no systemic way for federal agencies to share information about resources for college emergency preparedness. Federal officials have established an interagency working group, “Federal Partners in School Emergency Management and Preparedness” that currently focuses on resources for K-12 schools, and Education officials said it plans to expand its focus to include colleges, perhaps by fall 2018. Most federal agency officials we spoke with said having an interagency working group focused on colleges would be useful, for example, to ensure that officials are aware of all available resources across the federal government.
Selected Colleges and Stakeholders Cited Schools’ Mixed Awareness of Federal Resources Despite Agency Efforts to Publicize Them
Officials from the selected 18 colleges cited mixed levels of awareness regarding federal resources on emergency preparedness developed specifically for them. For example, officials at all 18 colleges said they were aware of FEMA resources focused on general emergency preparedness, such as NIMS. However, we found that college emergency managers were less frequently aware of college-specific resources produced or funded by Education, DOJ, and others. Specifically, college emergency managers at almost half of the selected schools said that they were unaware of each of the following key resources: the 2013 Guide for Developing High-Quality Emergency Operations Plans, the NCCPS website, or Education’s Readiness and Emergency Management for Schools (REMS) Technical Assistance Center website. In addition, the college officials with whom we spoke sometimes requested the federal government develop specific resources without realizing these resources already exist. For example, one college official described wanting resources on how to manage active shooter and weather-related emergencies, although several agencies currently fund or provide such resources. Additionally, another college official who generally accessed federal resources through DHS suggested that the agency develop tailored guidance for colleges beyond NIMS, without realizing that a NIMS guide for colleges exists on Education’s REMS website.
Federal officials and representatives from college emergency preparedness associations have also observed gaps in awareness of federal resources among college officials and have acknowledged it as a challenge. For example, one agency official said that every time she goes to a conference, she finds more college officials who have not heard of key federal resources, signaling a continued need for more outreach. A needs assessment funded by DOJ also found that awareness of federal resources may be an issue. Further, NCCPS staff conducted a survey among colleges to assess the level of engagement these schools have with entities such as FBI Campus Liaison Agents, and told us they found about half of colleges—especially private colleges—are unaware of the federal entities included in the survey.
This limited awareness among some schools is occurring despite federal agencies’ efforts to disseminate resources and engage with the higher education community. Agencies publicize resources through electronic mailing lists (i.e., listservs), social media, conferences, websites, direct outreach, and college emergency preparedness associations. For example, Education’s Office of Safe and Healthy Students, which publishes its resources on its REMS website, publicizes these resources through social media. DHS publicizes its Campus Resilience Program at conferences. Other agency officials we spoke with said they also use conferences as opportunities to increase school officials’ awareness of federal resources, and they partner with college emergency preparedness associations to publicize their resources. NCCPS includes information on various resources in its weekly e-newsletter. Additionally, following up on the results of the NCCPS survey on colleges’ engagement with FBI Campus Liaison Agents discussed above, NCCPS staff have discussed the results with the FBI Program Manager of the Campus Liaison Program so the FBI can improve engagement with colleges.
Officials from colleges, college emergency preparedness associations, and federal agencies we interviewed identified several factors, such as colleges’ staffing resources dedicated to emergency preparedness and the nature of the professional networks used by their emergency managers, that may lead officials to be less familiar with college-specific federal resources on emergency preparedness:
Without full-time emergency preparedness staff, colleges, particularly small colleges, must prioritize the most urgent tasks, and thus, officials reported not having enough time to research available federal resources. Representatives from college emergency preparedness associations also said that, in their experience, larger schools were more likely to be aware of federal resources than private and smaller colleges.
College emergency managers we spoke with often have backgrounds in local or state emergency preparedness or law enforcement or have networks comprised of local or state officials. These managers often said they learned about federal resources through their more general local and state emergency preparedness networks. As a result, they were more frequently aware of general FEMA resources applicable to these localities versus resources specifically designed for colleges. In particular, college officials we contacted were more likely to report seeking information from DHS than from Education or DOJ.
Some college officials may be uninterested in learning about additional resources provided by the federal government, especially if they receive resources from states, localities, or college emergency preparedness associations or potentially in cases where campus leadership does not prioritize emergency preparedness.
While agency officials and representatives from college emergency preparedness associations said that federal agencies have made strides in publicizing their resources to a population of college officials that can be challenging to reach, and expressed desire to increase awareness, we identified potential gaps or missed opportunities in their dissemination approaches, including:
Agencies commonly publicize new resources through their existing listservs and social media accounts. While these dissemination strategies are effective for alerting colleges already connected to federal agencies, they are less likely to reach additional colleges not already subscribed to these distribution lists. For example, a REMS official reported that the REMS listserv includes approximately 1,000 officials from colleges and related associations. Given that approximately 4,000 colleges were operating in the 2015-2016 school year, according to Education data, most colleges do not receive these electronic communications. In addition, DHS officials told us that one of their college emergency preparedness distribution lists includes representatives from college emergency preparedness associations and state college and university systems, but is not designed to include individual colleges unless they request to be included.
Agencies also often publicize their efforts at conferences, but these conferences may miss some colleges, especially some smaller colleges with fewer resources with which to send college officials, according to several agency and college emergency preparedness association officials. As a result, colleges that can afford to send officials to these conferences may already be more informed than colleges not in attendance.
In reviewing various federal websites, we found some lists of resources that did not include key federal resources, or included web links that directed visitors to other agencies’ resources that were out of date. For example, one federal website included a list of resources related to emergency planning for colleges, but did not list the NCCPS website among these resources, even though it is a key resource focused on the topic. Another federal website for college emergency preparedness did not include a link to Education’s REMS website, which was specifically developed for school emergency preparedness. Further, this same resource included a link to another Education webpage that was empty of content and had not been updated since 2015. When federal emergency preparedness websites are out of date or incomplete, federal agencies miss opportunities to provide accurate, up-to-date information about their resources and initiatives and those of their partner agencies, and may contribute to colleges’ gaps in awareness about these resources.
We heard from several college officials that they would like direct outreach from the federal government. Agencies do not generally distribute information directly to all colleges, especially those not previously signed up for listservs or other distribution services. However, Education has email contact information for the official at every college who reports campus crime statistics to the agency, which may be a natural entry point for federal agencies to disseminate information on emergency preparedness to all colleges.
As discussed above, agency officials do not have a systematic method for notifying each other about their resources for colleges. This could limit officials’ ability to cross-publicize each other’s resources, an important activity given that some colleges we contacted sought information from only one agency or website and were unaware of resources from others.
According to federal standards for internal control, in communicating information to achieve their objectives, agencies should consider appropriate methods of communication with their external audience (in this case, college emergency managers). Relatedly, these standards also state that agencies should communicate with each other on necessary information for achieving their objectives. Limited awareness of federal resources may result in colleges unnecessarily focusing their limited time and resources on developing strategies or information that federal agencies have already addressed, or advancing preparedness efforts that are not fully informed by federal agencies’ expertise.
Conclusions
Emergency preparedness is a vital and challenging task for the higher education community. Various sub-agencies within three key federal agencies—DHS, DOJ, and Education—provide a number of resources for colleges, but over the course of our review, we found that colleges were sometimes unaware of key federal resources that could assist them in meeting their important emergency preparedness needs. The breadth of many colleges’ responsibilities beyond education—such as housing students, running research facilities, and operating hospitals—increases their exposure to risks. Being underprepared in the face of an emergency could dramatically increase both human and economic consequences, not only for the colleges themselves, but also for the larger communities to which they are connected.
Emergency preparedness is a shared responsibility and colleges bear some responsibility for learning about federal resources that can assist them in protecting their students and staff. However, striking an appropriate balance between meeting colleges’ main mission—educating students—and other equally important responsibilities, such as emergency preparedness, can be difficult, especially given resource constraints. While federal agencies also face resource constraints, supporting the safety of college community members is an important part of the missions of DHS, DOJ, and Education. These agencies have developed a variety of resources intended to support colleges in their emergency preparedness efforts, but colleges are not always aware of these resources. This problem is exacerbated by federal agencies’ choice of dissemination methods, which could miss a large portion of college emergency managers, and because federal agencies have missed opportunities to cross-promote each other’s resources. Unless federal agencies address these issues, they will continue to miss opportunities to more effectively communicate important information to colleges, particularly those that may be harder to reach, such as smaller schools. The planned interagency working group on emergency preparedness for colleges may offer an opportunity to systematically explore areas in which communication and connection between colleges and federal agencies can be improved, while leveraging and improving existing agency relationships.
Recommendations
We are making a total of three recommendations—one to each of the three agencies in our review—to improve awareness of federal resources for emergency preparedness among colleges. Specifically:

The Secretary of Education, in collaboration with other agencies through the planned interagency working group or another mechanism, should identify further opportunities to more effectively publicize resources to reach additional colleges. (Recommendation 1)
The Secretary of Homeland Security, in collaboration with other agencies through the planned interagency working group or another mechanism, should identify further opportunities to more effectively publicize resources to reach additional colleges. (Recommendation 2)
The Attorney General, in collaboration with other agencies through the planned interagency working group or another mechanism, should identify further opportunities to more effectively publicize resources to reach additional colleges. (Recommendation 3)
Agency Comments and Our Evaluation
We provided a draft of this report to Education, DHS, and DOJ for each agency’s review and comment. All three agencies agreed with our recommendations or described steps they would take to implement them. Education’s written comments are reproduced in appendix I and DHS’ written comments are reproduced in appendix II. DOJ did not provide written comments. DHS and DOJ provided technical comments. We incorporated changes based on their comments into the report, as appropriate.
Education stated that the agency is always interested in increasing utilization by colleges of the emergency management resources that the Department and other federal agencies develop. It also stated that the planned interagency working group would be a very appropriate and effective vehicle for increasing utilization of these resources, and that it will consider that group or other mechanisms to identify further opportunities to publicize resources to colleges.
DHS concurred with our recommendation to the agency and said that it would continue to collaborate with its partners to further publicize resources available to colleges. It also highlighted several of the Department’s current and planned resources for its related Campus Resilience Program.
DOJ did not provide written comments, but stated that it agreed with our recommendation to the agency. Officials stated that they would outline steps for addressing the recommendation in future communications.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Attorney General of the United States, the Secretary of Education, and the Secretary of Homeland Security. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (617) 788-0580 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.
Appendix I: Comments from the Department of Education
Appendix II: Comments from the Department of Homeland Security
Appendix III: Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the individual named above, Janet Mascia (Assistant Director), Barbara Steel-Lowney (Analyst-in-Charge), Shilpa Grover, and Vernette Shaw made key contributions to this report. Also contributing to this report were: Susan Aschoff, Rachael Chamberlin, Jessica Moscovitch, Jessica Orr, Mimi Nguyen, Deborah Bland, Benjamin Sinoff, Sheila McCoy, Jean McSween, Lori Rectanus, and Sarah Veale. | Why GAO Did This Study
Colleges and other postsecondary schools must plan for various potential emergencies, ranging from natural disasters to violence. A number of federal agencies, including DHS, DOJ, and Education, offer resources to support these efforts. GAO was asked to review colleges' awareness of these resources.
This report examines how (1) selected colleges prepare for emergencies, and (2) federal agencies support college emergency preparedness efforts, including the extent to which selected colleges reported awareness of federal resources.
To answer these questions, GAO interviewed officials from a non-generalizable sample of 18 colleges selected for diversity in size, type, and location. GAO also interviewed officials from three states (Colorado, Kansas, and Virginia) in which some of these schools operated. The states were selected to represent varied approaches to supporting colleges' emergency preparedness efforts. GAO also reviewed federal emergency preparedness resources, agency written responses, applicable federal laws, and federal internal control standards, and interviewed federal officials and representatives from several associations recommended by agency officials.
What GAO Found
Emergency managers at 18 colleges across the country told GAO that their efforts to prepare for emergencies involved working with the campus community to develop, communicate, and practice plans, as well as working with state and local partners. Campus community members who are involved often include personnel from offices such as public safety, student affairs, or facilities. Officials at all 18 colleges reported developing emergency plans addressing a range of potential events—an approach consistent with federal emergency management principles. To publicize plans, officials often reported using websites, text messages, or presentations to the campus community. Colleges also reported practicing plans through drills. College officials noted that buy-in from the college president and other top campus leaders was critical to their efforts; several officials reported struggling to obtain such support. Most officials also said they coordinate with local or state partners such as police and relied on these partners for advice or to obtain emergency preparedness resources.
The Departments of Homeland Security (DHS), Justice (DOJ), and Education (Education) offer a variety of emergency preparedness resources to colleges (see figure). However, officials GAO interviewed at 18 colleges described mixed awareness of federal resources, especially those specifically tailored to colleges, despite federal efforts to publicize these resources in a variety of ways. Federal officials and other stakeholders acknowledged this mixed awareness and identified potential causes, such as college emergency managers having networks composed of local officials who are more likely to know about federal resources for local agencies than about those for colleges, or some college officials devoting limited time to researching federal resources for various reasons.
DHS, DOJ, and Education all publicize their resources through electronic mailing lists, websites, or other methods, but GAO identified missed opportunities in their dissemination approaches. For example, the electronic mailing list for one key resource may reach the approximately 1,000 college and association officials who subscribe to it, but may miss at least 3,000 additional schools. GAO also found two federal agency websites that did not include key resources from other federal agencies. Federal internal control standards state that agencies should consider the most appropriate methods for communicating with their external audiences. By identifying opportunities to improve dissemination, federal agencies may increase their ability to effectively communicate important information to colleges.
What GAO Recommends
GAO recommends that DHS, DOJ, and Education work together to identify opportunities to more effectively publicize emergency preparedness resources to colleges. All three agencies concurred with the recommendations or described actions to implement them. |
Background
NRC is an independent agency established by the Energy Reorganization Act of 1974 to license and regulate civilian uses of nuclear materials in the United States for commercial, industrial, medical, and academic purposes. Under the Atomic Energy Act of 1954, as amended, NRC is responsible for issuing licenses for civilian uses of radioactive material and conducting oversight activities under such licenses to protect the health and safety of the public, among other things. NRC regulates commercial nuclear power plants; research, test, and training reactors; nuclear fuel cycle facilities; the transport, storage, and disposal of radioactive materials and waste; and the use of radioactive materials in medical, academic, and industrial settings. NRC is authorized to conduct inspections and investigations; enforce regulatory requirements by, among other things, issuing orders and imposing civil (monetary) penalties; and revoke licenses. NRC is headed by a five-member Commission, with members appointed by the President and confirmed by the Senate; one commissioner is designated by the President to serve as the Chair and official spokesperson of the Commission. NRC staff from headquarters and the four regional offices implement the agency’s programs for developing regulations, licensing, inspection, enforcement, and emergency response, among other responsibilities. NRC’s Office of the Chief Financial Officer establishes, maintains, and oversees the implementation and interpretation of the agency’s regulatory user fee policies and regulations, among other responsibilities.
The Office of the Chief Financial Officer is responsible for assessing service fees to licensees for each license they hold and sending licensees invoices quarterly. The quarterly invoices for service fees may include costs in the following three categories:
NRC staff work. NRC staff record their time related to services, such as licensing, inspections, special projects, and license reviews, which is then billed to licensees to recover the full cost of these services. To calculate the cost of work performed by NRC staff, NRC applies an hourly rate—as established during the agency’s annual rulemaking process—to the number of staff hours spent on work that is directly attributable to a specific licensee.
Overhead costs for project managers and resident inspectors. Some licensees work with an NRC project manager or resident inspector, and NRC allocates the overhead costs for these NRC staff to the licensees. Overhead costs cover the costs of these staff doing tasks that are not assigned to a specific licensee, but that benefit licensees, such as training, according to NRC staff. Project manager and resident inspector overhead costs are calculated for each relevant licensee as 6 percent of the licensee’s total NRC staff time charges for the quarter (a calculation illustrated in the sketch following this list).
Contractor charges. NRC sometimes hires a commercial contractor or other federal agency, such as the Department of Energy (referred to collectively as contractors), to perform services that are directly attributable to a licensee, such as reviewing license applications. In these cases, NRC pays the contractor for the work and then bills the licensee for reimbursement of the contractor’s charges.
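Taken together, these three categories determine a licensee’s quarterly total. The sketch below shows the arithmetic in minimal form, assuming a hypothetical hourly rate and hypothetical charge amounts; it illustrates the fee structure described above, not NRC’s actual billing system.

```python
# Illustrative calculation of one licensee's quarterly service fees from
# the three cost categories described above. The hourly rate and all
# input figures are hypothetical placeholders, not actual NRC values.

OVERHEAD_RATE = 0.06  # project manager/resident inspector overhead share

def quarterly_service_fees(staff_hours, hourly_rate, contractor_charges,
                           has_pm_or_resident_inspector):
    """Return an itemized quarterly fee total for one licensee."""
    staff_charges = staff_hours * hourly_rate
    # Overhead applies only to licensees served by a project manager or
    # resident inspector: 6 percent of the quarter's staff time charges.
    overhead = staff_charges * OVERHEAD_RATE if has_pm_or_resident_inspector else 0.0
    contractor_total = sum(contractor_charges)
    return {
        "staff": staff_charges,
        "overhead": overhead,
        "contractors": contractor_total,
        "total": staff_charges + overhead + contractor_total,
    }

# Example: 120 staff hours at a hypothetical $275/hour rate, plus two
# contractor reimbursements, for a licensee with a resident inspector.
print(quarterly_service_fees(120, 275.0, [8_500.00, 1_200.00], True))
```

As the report notes later, NRC began billing this overhead as a separate 6 percent fee in fiscal year 2016 and plans to change the method again in fiscal year 2019, so the percentage shown here reflects the structure of the charge, not a permanent rule.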
NRC’s billing process for service fees begins by identifying work that can be billed to a specific licensee and ends when the licensee pays the quarterly invoice. Once NRC determines that billable work needs to be done, the agency follows the steps in the billing process shown in figure 1.
The steps in NRC’s billing process are described in more detail below.
NRC assigns activity code: After billable work is identified, NRC assigns an activity code, which is a project code to which NRC employees charge time for billable work performed.
NRC performs work: NRC staff perform work that is billable to a licensee and record their time biweekly in electronic time cards in NRC’s time and labor management system. If NRC staff discover that they have recorded time incorrectly in a previous pay period, such as by charging time to an incorrect activity code, they can correct the error by making a prior-period adjustment. Adjustments that are made within 6 weeks of the date of the error can be made directly in the time and labor management system; adjustments made 6 weeks or more after the date of the error require a memo with justification from the employee’s office director to NRC’s Controller (a routing rule illustrated in the sketch following these steps).
Supervisors review hours: At the end of each 2-week pay period, NRC supervisors review and approve the time cards for the staff they supervise, including the hours charged to activity codes.
Contractor performs work: If work is done by a contractor, the contractor submits a status report and invoice to NRC each month. Each monthly status report includes a description of the work done, the planned completion date, the total charges for the current invoice, the cumulative charges to date, and an estimate of future charges.
NRC reviews charges: NRC staff responsible for managing the agency’s contracts review the monthly status reports and invoices and must approve invoices before paying the contractor. After paying a contractor’s invoice, NRC bills the licensee for reimbursement of the amount NRC paid to the contractor. Contractor charges are included on a licensee’s quarterly invoice, and NRC may bill a licensee for contractor charges after the quarter in which the work was performed.
NRC aggregates charges: NRC’s financial management system aggregates all NRC staff hours and charges from contractors biweekly for each licensee. The financial management system obtains data on staff hours from NRC’s time and labor management system. Contractor charges are entered manually into the financial management system.
NRC validates charges: NRC regional and program offices review and certify all charges to licensees after the end of each quarter. To accomplish this, the Office of the Chief Financial Officer produces quarterly validation reports—one for staff charges and one for contractor charges—from NRC’s financial management system.
NRC invoices licensees: NRC creates invoices each quarter and sends them to licensees via the U.S. Postal Service. Licensees’ payments are due to NRC within 30 days of the invoice date to avoid paying interest on the charges.
Licensee reviews and pays invoice: Licensees review the invoice and may pay the invoice, request that NRC review the fees assessed, or dispute the fees. These billing disputes generally start informally with the licensee contacting NRC. According to NRC staff, most disputes are handled informally and generally entail explanations of the agency’s billing or licensing policies. If NRC staff are unable to resolve a licensee’s concern informally, the licensee can write a letter to the Chief Financial Officer, which begins a formal dispute process. According to NRC staff, to address a licensee’s concerns with the charges, the Office of the Chief Financial Officer reviews the charges on the invoice and may involve the relevant regional or program offices to determine whether the charges are valid for the work performed. Additionally, NRC’s Office of the General Counsel may be included in disputes regarding NRC’s fee policy. After the dispute is resolved, the licensee pays the invoice.
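One detail in the “NRC performs work” step lends itself to a short illustration: the routing of a prior-period time card adjustment depends on how much time has passed since the error. The sketch below encodes that rule under the 6-week threshold described above; the function name and message wording are illustrative, not part of NRC’s actual system.

```python
# A minimal sketch of the prior-period adjustment rule: corrections made
# within 6 weeks of the error go directly into the time and labor
# management system; older corrections require a justification memo from
# the office director to NRC's Controller.
from datetime import date, timedelta

ADJUSTMENT_WINDOW = timedelta(weeks=6)

def adjustment_path(error_date: date, correction_date: date) -> str:
    """Return how a time card correction must be routed."""
    if correction_date - error_date < ADJUSTMENT_WINDOW:
        return "direct adjustment in the time and labor management system"
    return "memo with justification from the office director to the Controller"

print(adjustment_path(date(2018, 1, 8), date(2018, 2, 5)))   # within 6 weeks
print(adjustment_path(date(2018, 1, 8), date(2018, 3, 12)))  # 6 weeks or more
```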
NRC’s OIG and internal reviews conducted by NRC in the last 5 years identified problems with the agency’s billing process. In 2012 and 2015, for example, OIG audits identified problems with NRC’s management and review of billable charges and recommended changes to the agency’s internal processes and procedures—called internal controls—to improve the accuracy of invoices. In 2013, NRC launched the Business Process Improvement Project to determine the root causes of billing errors, many of which were discovered during the quarterly validation step of the billing process. The project was completed in 2014 and made recommendations focused on strengthening internal controls and improving efficiency and effectiveness of the billing process. Additionally, in 2016 NRC requested feedback from the public, including licensees and other stakeholders, on the general communications the agency provides about its fees, intending to use the feedback to improve the transparency of its fees development and invoicing practices. Following this effort, NRC launched its Fees Transformation initiative to improve transparency of its fee-setting and billing processes.
NRC Has Taken Action in Four Main Areas to Improve Its Billing Process
NRC has recently implemented or plans to implement changes in four main areas of its billing process to address problems identified by NRC’s OIG and NRC internal reviews: controls over activity codes, guidance and training for NRC staff, quarterly validation of charges, and charging licensees for billable overhead costs.
Controls over Activity Codes
NRC’s OIG and internal reviews found problems with NRC’s internal controls over activity codes, which affected the quality of data used for billing and other agency processes. The management of activity codes was decentralized, meaning that staff in NRC’s offices generated the codes and each office followed its own policies and procedures regarding the setup and use of these codes. NRC also did not have a standardized set of activity codes to be used across the agency. Activity codes were instead linked to specific licensees, meaning that identical work activities for two licensees would require two different activity codes. These conditions resulted in an excessive number of activity codes in the agency’s time and labor management system. According to an internal NRC review, the decentralized management and absence of standardized activity codes weakened internal controls and put NRC at risk for incomplete or inaccurate billing. Further, there was no consistent naming convention, and activity code titles often lacked the specificity necessary for NRC staff to readily identify the correct code for the work activity performed, according to the OIG. NRC staff could also search and access the entire inventory of activity codes, including those unrelated to their work. According to the OIG, these conditions increased risk for staff to inadvertently select the wrong activity codes when recording their time; in such cases, the wrong licensee could be billed for the work.
Starting in fiscal year 2016, the Office of the Chief Financial Officer began taking responsibility for overseeing and managing activity codes, including establishing, maintaining, and closing activity codes available in the agency’s time and labor management system. Further, NRC developed a set of standardized activity codes with titles related to the specific work activities completed. The transition to centralized activity code management and standardized activity codes was completed in October 2017, according to NRC staff. Also in October 2017, the agency implemented controls that prevent a staff member from charging time to an activity code unless a project manager has granted that staff member access to the code.
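The new access restriction can be pictured as a simple authorization check. The sketch below, with hypothetical activity codes and employee identifiers, shows the kind of rule NRC describes: time may be charged to a code only if a project manager has granted the employee access to it.

```python
# Minimal sketch of the October 2017 control described above. Activity
# codes, employee identifiers, and the data structure are hypothetical.

access_grants = {
    "LIC-INSP-2041": {"jdoe", "asmith"},   # hypothetical standardized codes
    "LIC-AMEND-1108": {"jdoe"},
}

def can_charge(employee: str, activity_code: str) -> bool:
    """Return True only if a project manager has granted access to the code."""
    return employee in access_grants.get(activity_code, set())

print(can_charge("asmith", "LIC-INSP-2041"))   # True: access granted
print(can_charge("asmith", "LIC-AMEND-1108"))  # False: no access, cannot charge
```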
Guidance and Training
NRC’s OIG and internal reviews found problems with staff’s understanding of their roles and responsibilities for accurate time and labor reporting and management of billable contracts, which, according to NRC documents, contributed to avoidable time card errors and billing errors. To address these problems, NRC provided training and updated guidance for staff covering the following two areas:
Time and labor reporting. According to an internal NRC review, staff were making avoidable data entry errors in time cards that supervisors who approved the timecards were not identifying, meaning incorrect time cards were sent to the Office of the Chief Financial Officer for billing. In late fiscal year 2015, NRC provided training to all agency staff to emphasize the importance of accurate time reporting, the process for selecting correct activity codes, and the relationship of time card entry to billing. According to NRC officials, the agency also provided specialized training to staff in offices where errors were common. Additionally, the agency updated its time and labor reporting guidance and provided supplemental guidance to staff related to time and labor reporting. Furthermore, in preparation for changes to activity codes that were implemented in October 2017, NRC provided additional training to staff on the new activity code structure and making corrections to their time cards.
Management of contracts. According to an internal NRC review, approximately one-third of the billing errors identified during the quarterly validation step of the billing process resulted from administrative errors in managing contracts. NRC’s OIG also found that agency guidance related to the invoice review process was outdated and did not provide staff with sufficient criteria for verifying information contained in contractor invoices. Without such criteria, NRC could not ensure that it was evaluating contractor charges consistently and appropriately before billing those charges to licensees. In 2015, NRC provided training to staff who manage contracts, which, according to NRC officials, resulted in an immediate decrease in associated billing errors. NRC also revised its guidance to clarify responsibilities, procedures, and instructions for reviewing and approving contractor invoices.
Quarterly Validation
NRC’s OIG and internal reviews identified conditions that made the quarterly validation step in the billing process challenging for staff to perform and that led to inconsistent validation procedures among program and regional offices. NRC has taken or plans to take the following two actions to address these problems:

Improving validation reports. According to NRC documents, the quarterly validation reports contained billing data for all program and regional offices—sometimes amounting to more than 4,000 pages of data—and the reports did not have the sorting functionality or querying capability that would allow NRC staff to extract relevant information. Staff in program and regional offices instead relied on manually generated reports to compile information they needed. Additionally, according to the OIG, the quarterly validation reports did not include sufficient detail on contractor charges for NRC’s staff to properly review them. To address these problems, in 2014 NRC started providing the quarterly validation report in electronic spreadsheet format, which gave staff the sorting and filtering capabilities needed to extract data relevant to their respective reviews and eliminated the need for manually generated reports, according to NRC staff. Further, NRC began providing validation information for contractor charges in a separate report. The new validation report for contractor charges has more detailed information and specific instructions for NRC staff for verifying the accuracy of the charges.
Standardizing the quarterly validation process. According to NRC, the current quarterly validation process is not standardized across the regional and program offices and there is no agency guidance to ensure that staff in different offices conduct the process consistently. Further, there is currently no way to ensure that an adequately trained person in each program or regional office is conducting the validation, according to NRC staff. To address these problems, NRC is planning to standardize the quarterly validation process and to establish clear roles and responsibilities for staff participating in the process. One key change NRC is planning is to have the individual leading the work validate the accuracy of the charges. According to NRC’s planning documents—dated August 2017—NRC expects to pilot the new validation process in June 2018 and to implement it agency-wide by October 2018.
Charging for Billable Overhead Costs
At the end of fiscal year 2012, an internal NRC audit identified approximately $24 million in unbilled overhead hours. NRC staff explained that the hours went unbilled because project managers and resident inspectors charged billable overhead time to nonbillable activity codes, rather than to the billable activity codes associated with licensees.
According to an internal NRC review, these errors accounted for approximately two-thirds of the billing errors identified during the quarterly validation process. At the beginning of fiscal year 2016, NRC started billing this overhead time as a separate fee on invoices that is calculated as 6 percent of all NRC billable hours on an invoice, which eliminated the billing errors related to overhead. However, NRC analyzed this billing method again in fiscal year 2017 and determined that eliminating the percentage charge and having staff charge their billable overhead time to billable activity codes would be more equitable. NRC intends to implement a new process for charging billable overhead time at the start of fiscal year 2019. According to NRC staff, the agency has made administrative changes to address the factors that contributed to project managers and resident inspectors incorrectly charging overhead time in the past.
Licensees We Interviewed Identified Several Challenges with NRC’s Billing Process, and NRC’s Recent and Planned Changes May Not Fully Address Them
Licensees we interviewed identified challenges with the amount of information available about NRC’s billable work, and NRC’s recent changes have made more information available, but some licensees are not aware of the information. Licensees also identified challenges with NRC’s method of delivering paper invoices by mail, and although NRC’s recent and planned changes may help address these challenges, NRC’s plans are incomplete.
Licensees Identified Challenges with the Information Available about NRC’s Billable Work, and NRC Has Made More Information Available, but Some Licensees Are Not Aware of It
Licensees we interviewed identified challenges with the amount of information available about NRC’s billable work, including challenges related to planning and budgeting for NRC work and verifying charges on invoices. NRC has recently implemented changes that may address some of the challenges.
Planning and Budgeting for NRC Work
Licensees we interviewed identified challenges with planning for future work and budgeting to pay future costs because NRC does not provide certain information about the agency’s billable work. Specifically, NRC does not formally provide information on timeframes for completing billable work, customized cost estimates for projects, or the status of ongoing work. Eleven of the 13 licensees we interviewed indicated that having timeframes, cost estimates, status reports, or a combination of these would be useful. One licensee explained that when it receives an invoice for work that NRC staff have performed, the licensee does not know how much work remains and cannot budget for future expenses.
This challenge may be addressed, in part, by NRC’s Fees Transformation initiative. Under this initiative, NRC began reporting on its public website in September 2017 resource estimates for various licensing actions, such as site permitting, design certifications, inspections, license amendments, and license renewals, among others. These resource estimates include the low, high, and average number of NRC staff hours billed for each action, as well as some estimates for contractor charges for certain tasks. These resource estimates are based on historical expenses and were calculated using a sample of licensing and oversight actions, though they may still be useful to licensees to help plan and budget for future NRC costs. According to NRC’s website, the agency will update most of the resource estimates every 2 years.
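A licensee could use these published low/high/average staff-hour estimates to bracket a budget for an upcoming licensing action. The sketch below shows that arithmetic with hypothetical figures; the hourly rate and hour counts are placeholders, not NRC’s actual published estimates.

```python
# Illustrative budgeting from published resource estimates: convert the
# low/average/high staff-hour figures for a licensing action into a
# dollar range. All numbers are hypothetical placeholders.

def budget_range(low_hours, avg_hours, high_hours, hourly_rate,
                 contractor_estimate=0.0):
    """Return (low, average, high) cost estimates for one licensing action."""
    return tuple(hours * hourly_rate + contractor_estimate
                 for hours in (low_hours, avg_hours, high_hours))

low, avg, high = budget_range(200, 350, 600, 275.0, contractor_estimate=10_000.0)
print(f"Estimated cost: ${low:,.0f} (low) / ${avg:,.0f} (average) / ${high:,.0f} (high)")
```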
Verifying Charges on Invoices
Licensees we interviewed said that they have challenges verifying charges on their invoices because NRC’s invoices do not provide enough information on work that NRC staff or contractors perform. For NRC staff work, invoices include the total hours charged by NRC staff for each activity code. However, activity codes often cover broad topics rather than specific work activities. Also, activity codes have a 120-character limit, according to NRC staff, and NRC uses some of those characters to list each licensee’s name and other identifying information, which means that there is limited remaining space to identify the specific work activity. Nine of the 13 licensees we interviewed explained that more descriptive activity codes on invoices would be helpful. One licensee said that it is difficult to know what project it is being billed for because the activity code descriptions are cryptic and sometimes nondescript. In addition, all 13 licensees we interviewed indicated that having NRC staff names or their positions would be helpful in verifying the accuracy of charges. For example, 2 of these licensees explained that they are familiar with the NRC staff who consistently work on their projects, so they could consider questioning charges if the invoice showed a new person working on a project.
Additionally, licensees said that it is difficult to verify charges on their invoices because NRC’s invoices also do not contain detailed information on contractor charges. Invoices indicate that work was done by a contractor and provide the total cost of the work, but they do not include the contractor’s name or describe the work performed. Five of the licensees we interviewed said that invoices do not have enough information about the contractor and the work performed. Additionally, 4 of these licensees stated that they cannot determine whether the amounts charged were accurate or reasonable without more information.
Challenges related to verifying charges may be addressed by some of NRC’s recent changes to its billing process, which include updating invoices. NRC is updating its invoices to include (1) standardized activity codes that have titles describing the specific work activity completed, (2) the names of the NRC staff charging time to the licensee, and (3) the name of the contractor that performed the work for which the licensee is being billed. NRC staff expected to issue the updated invoices to licensees in January 2018, after we completed our audit work. Therefore, we could not assess licensees’ satisfaction with the updated invoices. According to a planning document for some of NRC’s recent changes, the agency intends to solicit feedback from licensees in fiscal year 2018 on whether the updated invoices have addressed licensees’ challenges. However, NRC staff told us that they are not certain when the agency will solicit feedback.
In addition to updating invoices, NRC can provide supplemental information to licensees to help them verify the accuracy of the following charges:
NRC staff charges: NRC created biweekly reports on staff charges that it sends to licensees, when requested. These biweekly reports provide more frequent cost data and include a level of detail that is not provided on the quarterly invoices. For example, the biweekly reports include NRC staff names and the charges, by employee, for that 2-week period. Three of the 13 licensees we interviewed that receive the biweekly reports said that they use the reports to check the quarterly invoice for accuracy by adding up the costs from the biweekly reports and comparing them to the quarterly invoice (a comparison illustrated in the sketch following this list). For example, a licensee told us that if the biweekly reports and quarterly invoice have similar totals, it does not raise any questions about the charges.
Contractor charges: NRC has supplemental contractor information that it can provide to licensees. NRC receives monthly status reports from contractors on charges that are ultimately billed to licensees on their quarterly invoices. These monthly status reports include current work performed and associated charges, as well as remaining work to be performed and an estimate of future charges. In 2015, NRC developed a process to review and provide to licensees certain information from the monthly status reports when licensees request it.
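The check that licensees described—summing the biweekly reports and comparing the result to the quarterly invoice—is simple arithmetic. The sketch below illustrates it with hypothetical figures and an arbitrary rounding tolerance.

```python
# Illustrative reconciliation of biweekly staff-charge reports against
# the staff total on a quarterly invoice. Figures and the tolerance are
# hypothetical.

def reconcile(biweekly_totals, invoice_staff_total, tolerance=0.01):
    """Compare summed biweekly charges to the invoiced staff total."""
    reported = sum(biweekly_totals)
    difference = invoice_staff_total - reported
    if abs(difference) <= tolerance:
        return f"Matches: ${reported:,.2f} reported vs. ${invoice_staff_total:,.2f} invoiced"
    return f"Discrepancy of ${difference:,.2f}; the licensee may question the charges"

# Roughly six 2-week reports cover one quarter.
print(reconcile([5_200.0, 4_800.0, 6_100.0, 5_500.0, 4_950.0, 5_450.0], 32_000.0))
```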
Although this supplemental information on staff charges and contractor charges is available, not all licensees know it is available. Specifically, 6 of the 13 licensees we interviewed told us that the biweekly reports would be useful, but did not know the reports are available and can be requested. Also, 10 of the 13 licensees we interviewed told us that detailed information on contract work would be useful, but 9 of them did not know this information is available and can be requested.
Not all licensees know the supplemental information is available because, according to NRC staff in the Office of the Chief Financial Officer, the agency has not instituted a formal process to inform all licensees of its availability. These staff added that the agency has announced the availability of this supplemental information at industry conferences or has told individual licensees about it. NRC staff explained that the agency is meeting statutory requirements for issuing invoices and provides the supplemental information as a courtesy to licensees, but is not required to do so. According to NRC staff in the Office of the Chief Financial Officer, the agency has not formally notified all licensees about the availability of this supplemental information because it is time-consuming to provide it to licensees. These staff also said that they have found that not all licensees may need this information. This is consistent with information from the licensees we interviewed. For example, 2 licensees told us that they do not need biweekly reports; one said that it operates on a fixed annual budget, so additional information on biweekly costs would not be useful. In contrast, NRC staff noted that licensees with more complex invoices—such as multiple sites and multiple inspections and licensing actions—may find the supplemental information useful. Standards for Internal Control in the Federal Government explains that management should communicate quality information externally so that external parties can help the entity achieve its objectives and address related risks. Furthermore, being open and transparent in communications is part of NRC’s Organizational Values, which guide every action it takes, how it performs administrative tasks, and how it interacts with stakeholders. Communicating to licensees about what information is available could help improve the transparency of NRC’s invoices, in accordance with the agency’s values.
Additionally, 2 licensees told us that they requested information on work being done by a contractor but NRC staff told them that the information could not be provided. NRC staff in the Office of the Chief Financial Officer acknowledged that some NRC project managers may not be aware that licensees can request contract information because there is no policy or guidance to instruct NRC staff on what information they can provide or how to do so. Standards for Internal Control in the Federal Government states that agency management should clearly document—in management directives, administrative policies, or operating manuals—the processes it uses to ensure that it is achieving its objectives. By developing a policy and guidance for NRC staff, the agency could help ensure that staff are aware of the agency’s processes and provide quality information consistently.
Licensees Identified Challenges with NRC’s Method for Delivering Invoices, and NRC’s Recent and Planned Changes May Address These Challenges, but Its Plans Are Incomplete
Licensees we interviewed said that NRC’s method for delivering paper invoices by mail created challenges related to the format and timeliness of the invoice, though NRC’s recent and planned changes may help address these challenges. For example, one licensee told us that, without the sorting and filtering capabilities of an electronic spreadsheet, this licensee is not able to verify the accuracy of charges for specific components of the work that NRC is doing. Another licensee told us that it is difficult to track costs of projects to completion without an electronic spreadsheet of charges. NRC now provides biweekly reports in an electronic spreadsheet format, which may help address the challenges these licensees cited. However, as discussed above, NRC does not provide these biweekly reports unless they are requested, and some licensees do not know that they are available.
Licensees also cited challenges with the timeliness of invoices they receive via mail. For example, 2 licensees stated that, with invoices taking up to 10 days to arrive in the mail, licensees sometimes do not have sufficient time to conduct a proper review of charges and remit payment to NRC within the 30-day deadline. According to one licensee we interviewed, delays in receiving the invoices have resulted in late fees. To address the challenge of timeliness, NRC will, upon request from a licensee, e-mail a copy of the invoice to the licensee after the hardcopy invoice has been mailed. This practice allows the licensee to begin reviewing its charges while waiting for the mailed copy to arrive. However, of the 11 licensees that told us an e-mailed copy of the invoice would be useful, 4 of them did not know this option was available.
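The timing problem the licensees described follows directly from the arithmetic of the 30-day deadline: each day an invoice spends in the mail shortens the window left to review charges. The sketch below makes that explicit with hypothetical dates.

```python
# Sketch of how mail transit time compresses the review window before the
# 30-day payment deadline. Dates and transit times are hypothetical.
from datetime import date, timedelta

PAYMENT_DEADLINE = timedelta(days=30)

def review_window_days(invoice_date: date, mail_transit_days: int) -> int:
    """Days left to review and pay once the mailed invoice arrives."""
    received = invoice_date + timedelta(days=mail_transit_days)
    due = invoice_date + PAYMENT_DEADLINE
    return (due - received).days

print(review_window_days(date(2018, 1, 2), 3))   # 27 days to review and pay
print(review_window_days(date(2018, 1, 2), 10))  # only 20 days remain
```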
NRC staff said the agency intends to transition to electronic billing—that is, sending invoices in electronic format via e-mail or providing licensees with web access to review and pay invoices. According to NRC staff, the agency’s transition to electronic billing is being done to improve efficiency and internal controls in NRC’s billing process. However, doing so may also help address challenges that some licensees experience with the format and timeliness of invoices. For example, 11 of the 13 licensees we interviewed affirmed that receiving electronic invoices or periodic statements of charges electronically would be beneficial.
In October 2016, NRC’s Commission directed NRC staff to examine opportunities to accelerate the transition to an electronic billing system. The agency had indicated its intent to complete the planning phase by October 2017 and fully implement a new system by October 2019. However, according to NRC staff, the planning phase was not completed because the agency needed to fully implement the recent changes to its billing process before planning for the transition to electronic billing. As a result, the agency has not yet developed any planning documents to help ensure that it meets its deadlines, achieves its goal of increasing efficiency, or addresses licensees’ challenges. NRC staff in the Office of the Chief Financial Officer said that they recognize there could be delays in planning, but still expect to implement electronic billing by October 2019.
We have previously found that federal information technology projects too frequently incur cost overruns and schedule slippages, but that proper planning—including incorporating best practices for project planning and scheduling—may help mitigate these effects. The Project Management Institute’s A Guide to the Project Management Body of Knowledge identifies standards related to project management processes, including project planning. In particular, the guide explains that the project plan is a comprehensive document that defines the basis of all project work and describes how the project will be executed, monitored, and controlled. The project plan integrates and consolidates plans for project components, such as plans for managing the project’s scope, schedule, cost, quality, and risk, among others. Among other things, the project management plan may also include requirements and techniques for communication among stakeholders and key reviews by management. By developing a project management plan that is consistent with best practices, NRC would have more reasonable assurance that it is better managing its transition to electronic billing.
Furthermore, Standards for Internal Control in the Federal Government also states that, in deciding what information is required to achieve objectives, management should consider the needs of both internal and external users. Additionally, we have previously identified common factors critical to successful information technology acquisitions. Among these factors are (1) involving end users and stakeholders in developing requirements and (2) including end users in testing of system functionality prior to formal end user acceptance testing. As NRC develops a project management plan, by involving licensees in developing system capabilities for electronic billing, which includes soliciting and considering licensees’ information needs, the agency would have better assurance of a successful transition to electronic billing.
Additionally, Standards for Internal Control in the Federal Government states that management should design control activities to achieve objectives. Control activities may include establishing performance measures and indicators and management reviews that compare actual performance to planned or expected results and analyze significant differences. As NRC develops a project management plan, including steps to assess the results of implementing electronic billing, which includes comparing actual performance of the new electronic billing system to planned results, would provide the agency more reasonable assurance that the project meets desired outcomes.
Conclusions
NRC has recently implemented or plans to implement a number of changes to its billing process that—if implemented as intended—could address challenges that licensees identified in our interviews. However, additional steps could enhance NRC’s efforts to improve its billing process. Licensees told us that they could use more detailed information, more timely information, and information in an electronic format. NRC has made more detailed information on staff charges available in biweekly reports and has developed a process to provide detailed information on contractor work, upon request from a licensee. NRC is also providing invoices in electronic format to some licensees, when requested. However, some licensees that would find the information on staff and contractor charges useful do not know that it is available, and some NRC staff are not aware that they can provide it or how to do so. Until NRC communicates to all licensees about what information is available and develops a policy and guidance for agency staff, the agency cannot ensure that it is providing quality information consistently.
Further, NRC intends to take additional action toward improving its billing process and invoices by transitioning to electronic billing. As NRC moves forward with this project, developing a project management plan that is consistent with best practices, to include establishing plans for the project’s schedule and cost, as well as involving licensees in developing the plan and assessing the results of implementation, will give the agency more reasonable assurance that it is better managing its transition to electronic billing and could help ensure that the project meets desired outcomes.
Recommendations for Executive Action
We are making the following five recommendations to NRC:

The Chief Financial Officer of NRC should formally communicate to all licensees that supplemental billing information—including biweekly reports and monthly status reports on contractor charges—is available and how to request it. Formal communication that would reach all licensees could include adding information to their quarterly invoices. (Recommendation 1)
The Chief Financial Officer of NRC should develop agency policy and guidance for staff on what billing information related to contractor charges NRC staff can provide to licensees and how it should be provided. (Recommendation 2)
As NRC plans its transition to electronic billing, the Chief Financial Officer of NRC should develop a project plan that incorporates standards for project management, which includes establishing plans for schedule and cost. (Recommendation 3)
In developing the project plan for electronic billing, the Chief Financial Officer of NRC should include steps to involve licensees in developing system capabilities, which includes soliciting and considering licensees’ information needs. (Recommendation 4)
In developing the project plan for electronic billing, the Chief Financial Officer of NRC should include steps to assess the results of implementing electronic billing, which includes comparing the actual performance to intended outcomes. (Recommendation 5)
Agency Comments
We provided a draft of this report to NRC for review and comment. NRC provided written comments, which are reproduced in appendix I. In its written comments, NRC agreed with our findings and recommendations.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Chairman of the Nuclear Regulatory Commission, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.
Appendix I: Comments from the Nuclear Regulatory Commission
Appendix II: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Hilary Benedict (Assistant Director), Wyatt R. Hundrup (Analyst in Charge), and Breanna Trexler made key contributions to this report. Also contributing to this report were Ellen Fried, Cindy Gilbert, Heather Keister, Benjamin Licht, Laurel Plume, Dan C. Royer, and Barbara Timmerman. | Why GAO Did This Study
NRC is responsible for regulating the commercial nuclear industry, including nuclear power plants. NRC provides services, such as inspections, for regulated entities that hold licenses—that is, licensees. NRC recovers the costs for these services by assessing fees and billing licensees quarterly. In fiscal year 2016, NRC collected about $321 million in service fees. From 2006 to 2016, audits of NRC's fees identified problems with NRC's billing process. For example, a 2012 audit identified about $24 million in unbilled fees from fiscal years 2011 and 2012.
GAO was asked to review NRC's billing process for service fees. This report examines (1) the actions NRC is taking to address problems with its billing process identified by internal reviews and (2) the challenges selected licensees identified with NRC's billing process and the extent to which NRC's actions are addressing them.
GAO reviewed audits of NRC's billing process and other documents related to this process. GAO also interviewed NRC staff and a nongeneralizable sample of 13 licensees, selected based on the amount of service fees charged from October 2015 through July 2017, and compared NRC's actions against criteria on internal controls and project planning.
What GAO Found
The Office of the Inspector General for the Nuclear Regulatory Commission (NRC) and internal reviews conducted by NRC identified several problems with the agency's billing process, and NRC has implemented or plans to implement several changes to address the recommendations. For example, the codes that NRC staff use to record their work hours on time cards—referred to as activity codes—did not describe the work and did not have a consistent naming convention, which increased the risk of staff charging their time to the wrong activity codes. This could lead, in some cases, to billing errors. To address these problems, NRC created a standard naming convention for activity codes that provides more information about the activity. See the figure below for the steps in NRC's billing process for work that NRC or contractor staff performed.
Some of the 13 licensees that GAO interviewed identified challenges with NRC's billing process, including its method for delivering paper invoices by mail. For example, two of these licensees stated that with invoices taking up to 10 days to arrive in the mail, they sometimes do not have sufficient time to properly review charges and remit payment to NRC within the 30-day deadline for paying the invoice. One licensee said that delays in receiving an invoice resulted in late fees. NRC is undertaking an initiative to transition to electronic billing, which may address the challenges the licensees identified and, according to NRC staff, improve the agency's billing process. However, NRC has not developed planning documents for this initiative and, according to staff, the planning phase is already past its original deadline of October 2017. The Project Management Institute has identified standards related to project management processes, including project planning. By developing a project management plan that is consistent with best practices and includes steps for involving licensees in system development and assessing results of the project, NRC would have reasonable assurance that it can better manage its electronic billing initiative.
What GAO Recommends
GAO is making five recommendations, including that NRC develop a project management plan for its electronic billing initiative that follows project management standards and includes steps for involving licensees and assessing results. NRC agreed with these recommendations. |
Background
Federal agencies conduct a variety of procurements that are reserved for small business participation through small business set-asides. The set- asides can be for small businesses in general, or they can be specific to small businesses that meet additional eligibility requirements in the Service-Disabled Veteran-Owned Small Business (SDVOSB), Historically Underutilized Business Zone (HUBZone), 8(a) Business Development (8(a)), and WOSB programs.
The WOSB program enables federal contracting officers to identify and establish a sheltered market, or set-aside, for competition among WOSBs and EDWOSBs in certain industries. To determine the industries eligible under the WOSB program, SBA is required to conduct a study to determine which NAICS codes are eligible under the program and to report on such studies every 5 years. WOSBs can receive set-asides in industries in which SBA has determined that women-owned small businesses are substantially underrepresented. EDWOSBs can receive set-asides in WOSB-eligible industries as well as in an additional set of industries in which SBA has determined that women-owned small businesses are underrepresented but not substantially so. As of February 2019, there were a total of 113 four-digit NAICS codes (representing NAICS industry groups) eligible under the WOSB program—92 eligible NAICS codes for WOSBs and 21 for EDWOSBs.
Additionally, businesses must be at least 51 percent owned and controlled by one or more women who are U.S. citizens to participate in the WOSB program. The owner must provide documents demonstrating that the business meets program requirements, including a document in which the owner attests to the business’s status as a WOSB or EDWOSB. EDWOSBs are WOSBs that are controlled by one or more women who are citizens and who are economically disadvantaged in accordance with SBA regulations. According to SBA, as of early October 2018, there were 13,224 WOSBs and 4,488 EDWOSBs registered in SBA’s online certification database.
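The ownership and industry rules described above can be summarized as a screening check. The sketch below is a highly simplified illustration: the NAICS industry groups, firm data, and function are hypothetical, and an actual eligibility determination involves SBA’s regulations and supporting documentation rather than a simple lookup.

```python
# Highly simplified WOSB/EDWOSB set-aside screen. The NAICS code sets and
# all inputs are hypothetical placeholders, not SBA's actual lists.

WOSB_NAICS = {"2361", "5413", "5415"}   # industries open to all WOSBs
EDWOSB_NAICS = {"3111", "4841"}         # additional EDWOSB-only industries

def set_aside_eligibility(pct_woman_owned, owners_are_us_citizens,
                          economically_disadvantaged, naics_group):
    """Screen a firm against the basic ownership and industry rules."""
    # At least 51 percent ownership and control by women who are U.S. citizens.
    if pct_woman_owned < 51 or not owners_are_us_citizens:
        return "not eligible for the WOSB program"
    if naics_group in WOSB_NAICS:
        return "eligible for WOSB set-asides in this industry group"
    if economically_disadvantaged and naics_group in EDWOSB_NAICS:
        return "eligible for EDWOSB set-asides in this industry group"
    return "eligible firm, but no WOSB set-aside in this industry group"

print(set_aside_eligibility(60, True, False, "5413"))
print(set_aside_eligibility(60, True, True, "4841"))
```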
SBA’s Office of Government Contracting administers the WOSB program by promulgating regulations, conducting eligibility examinations of businesses that receive contracts under a WOSB or EDWOSB set-aside, deciding protests related to eligibility for a WOSB set-aside, conducting studies to determine eligible industries, and working with other federal agencies in assisting WOSBs and EDWOSBs. According to SBA officials, the Office of Government Contracting also works at the regional and local levels with SBA’s Small Business Development Centers and district offices, and with other organizations (such as Procurement Technical Assistance Centers), to help WOSBs and EDWOSBs obtain contracts with federal agencies. The services SBA coordinates include training, counseling, mentoring, facilitating access to information about federal contracting opportunities, and business financing. According to SBA, as of October 2018, there were two full-time staff within the Office of Government Contracting whose primary responsibility was the WOSB program.
Initially, the program’s statutory authority allowed WOSBs to be self-certified by the business owner or certified by an approved third-party national certifying entity as eligible for the program. Self-certification is free, but some third-party certification options require businesses to pay a fee. Each certification process requires businesses to provide signed representations attesting to their WOSB or EDWOSB eligibility. Businesses must provide documents supporting their status before submitting an offer to perform the requirements of a WOSB set-aside contract. In August 2016, SBA launched certify.sba.gov, which is an online portal that allows firms to upload required documents and track their submission and also enables contracting officers to review firms’ eligibility documentation. According to the Federal Acquisition Regulation (FAR), contracting officers are required to verify that all required documentation is present in the online portal when selecting a business for an award. In addition, businesses must register and attest to being a WOSB in the System for Award Management, the primary database of vendors doing business with the federal government.
In 2011, SBA approved four organizations to act as third-party certifiers:
El Paso Hispanic Chamber of Commerce,
NWBOC (previously known as the National Women Business Owners Corporation),
U.S. Women’s Chamber of Commerce, and
Women’s Business Enterprise National Council.
These organizations have been the WOSB program's third-party certifiers since 2011. According to SBA data, the Women's Business Enterprise National Council was the most active third-party certifier in fiscal year 2017—performing 2,638 WOSB certification examinations. The U.S. Women's Chamber of Commerce, NWBOC, and El Paso Hispanic Chamber of Commerce completed 644, 105, and 12 certifications, respectively.
As discussed previously, in 2014 we reviewed the WOSB program and found a number of deficiencies in SBA's oversight of the four SBA-approved third-party certifiers and in SBA's eligibility examination processes, and we made related recommendations for SBA. In addition, in 2015 and 2018 the SBA OIG reviewed the WOSB program and also found oversight deficiencies, including evidence of WOSB contracts set aside for ineligible firms. In both reports, the SBA OIG also made recommendations for SBA. Further, in July 2015, we issued GAO's fraud risk framework, which provides a comprehensive set of key components and leading practices that serve as a guide for agency managers to use when developing efforts to combat fraud in a strategic, risk-based way. In July 2016, the Office of Management and Budget issued guidelines requiring executive agencies to create controls to identify and respond to fraud risks. These guidelines also affirm that managers should adhere to the leading practices identified in GAO's fraud risk framework.
SBA Has Implemented One of the Three Changes Made by the 2015 NDAA
As of February 2019, SBA had implemented one of the three changes that the 2015 NDAA made to the WOSB program—sole-source authority. The two other changes—authorizing SBA to implement its own certification process for WOSBs and requiring SBA to eliminate the WOSB self-certification option—have not been implemented. The 2015 NDAA did not require a specific time frame for SBA to update its regulations. SBA officials have stated that they will not eliminate self-certification until the new certification process for the WOSB program is in place, which they expect to be completed by January 1, 2020.
Sole-Source Authority Has Been Implemented
In September 2015, SBA published a final rule to implement sole-source authority for the WOSB program (effective October 2015). Among other things, the rule authorized contracting officers to award a contract to a WOSB or EDWOSB without competition, provided that the contracting officer’s market research cannot identify two or more WOSBs or EDWOSBs in eligible industries that can perform the requirements of the contract at a fair and reasonable price. In the final rule, SBA explained that it promulgated the sole-source rule before the WOSB certification requirements for two reasons. First, the sole-source rule could be accomplished by simply incorporating the statutory language into the regulations, whereas the WOSB certification requirements would instead require a prolonged rulemaking process. Second, SBA said that addressing all three regulatory changes at the same time would delay the implementation of sole-source authority. SBA described the sole-source mechanism as an additional tool for federal agencies to ensure that women-owned small businesses have an equal opportunity to participate in federal contracting and to ensure consistency among SBA’s socioeconomic small business procurement programs.
According to SBA, most of the 495 comments submitted about the sole-source rule supported the agency's decision to implement the authority quickly. However, the SBA OIG's June 2018 audit report cautioned that allowing sole-source contracting authority while firms can still self-certify exposes the WOSB program to unnecessary risk of fraud and abuse, and the report recommended that SBA implement a new certification process for the WOSB program per the 2015 NDAA. In addition, our previous report identified risks of program participation by ineligible firms associated with deficiencies in SBA's oversight structure. As we discuss in detail later, SBA has still not addressed these risks, which may be exacerbated by the implementation of sole-source authority without addressing the other changes made by the 2015 NDAA, including eliminating the self-certification option.
A New WOSB Program Certification Process Has Not Been Implemented
As of February 2019, SBA had not published a proposed rule for public comment to establish a new certification process for the WOSB program. Previously, in October 2017, an SBA official stated that SBA was about 1–2 months away from publishing a proposed rule. However, in June 2018, SBA officials stated that a cost analysis would be necessary before the draft could be sent to the Office of Management and Budget for review. Certain stages of the rulemaking process have mandated time periods, such as the required interagency review process for certain rules. In June 2017, we reported that SBA officials said that an increase in the number of statutorily mandated rules in recent years had contributed to delays in the agency’s ability to promulgate rules in a more timely fashion. As of February 2019, SBA had not provided documentation or time frames for issuing a proposed rule or completing the rulemaking process. However, in response to the SBA OIG recommendation that SBA implement the new certification process, SBA stated that it would fulfill the recommendation (meaning implement a new certification process) by January 1, 2020.
In December 2015, SBA published an advance notice of proposed rulemaking to solicit public comments to assist the agency with drafting a proposed rule to implement a new WOSB certification program. In the notice, SBA stated that it intends to address the 2015 NDAA changes, including eliminating the self-certification option, through drafting regulations to implement a new certification process. Previously, in its September 2015 final rule implementing sole-source authority, SBA stated that there was no evidence that Congress intended that the existing WOSB program, including self-certification, be halted before establishing the infrastructure and new regulations for a new certification program. The advance notice requested comments on various topics, such as how well the current certification processes were working, which of the certification options were feasible and should be pursued, whether there should be a grace period for self-certified WOSB firms to complete the new certification process, and what documentation should be required.
In the advance notice, SBA asked for comments on alternative certification options, such as SBA acting as a certifier or limiting WOSB program certifications to the 8(a) program and otherwise relying on state or third-party certifiers. Three third-party certifiers submitted comments in response, and none supported the option of SBA acting as a WOSB certifier. One third-party certifier commented that such an arrangement is a conflict of interest given that SBA is also responsible for oversight of the WOSB program, and two certifiers commented that SBA lacked the required resources. The three third-party certifiers also asserted in their comments that no other federal agency should be allowed to become an authorized WOSB certifier, with one commenting that federal agencies should instead focus on providing contracting opportunities for women-owned businesses. All three certifiers also proposed ways to improve the current system of third-party certification—for example, by strengthening oversight of certifiers or expanding their number. The three certifiers also suggested that SBA move to a process that better leverages existing programs with certification requirements similar to those of the WOSB program, such as the 8(a) program. Further, in June 2018, SBA officials told us that they were evaluating the potential costs of a new certification program as part of their development of the new certification rule.
SBA Has Not Fully Addressed Deficiencies in Oversight and Program Implementation
SBA Has Not Implemented Procedures to Regularly Monitor and Assess the Performance of Third-Party Certifiers
SBA has not fully addressed deficiencies in its oversight of third-party certifiers that we identified in our October 2014 report. We reported that SBA did not have formal policies for reviewing the performance of its four approved third-party certifiers, including their compliance with their agreements with SBA. Further, we found that SBA had not developed formal policies and procedures for, among other things, reviewing the monthly reports that certifiers submit to SBA. As a result, we recommended that SBA establish comprehensive procedures to monitor and assess the performance of the third-party certifiers in accordance with their agreements with SBA and program regulations. While SBA has taken some steps to address the recommendation, as of February 2019 it remained open.
In response to our October 2014 recommendation, in 2016 SBA conducted compliance reviews of the four SBA-approved third-party certifiers. According to SBA, the purpose of the compliance reviews was to ensure the certifiers' compliance with regulations, their signed third-party certifier certification form (or agreement) with SBA, and other program requirements. The compliance reviews included an assessment of the third-party certifiers' internal certification procedures and processes, an examination of a sample of applications from businesses that the certifiers deemed eligible and ineligible for certification, and an interview with management staff.
SBA officials said that SBA's review team did not identify significant deficiencies in any of the four certifiers' processes and found that all were generally complying with their agreements. However, one compliance review report described "grave concerns" that a third-party certifier had arbitrarily established eligibility requirements that did not align with WOSB program regulations and used them to decline firms' applications. SBA noted in the report that if the third-party certifier failed to correct this practice, SBA could terminate the agreement. As directed by SBA, the third-party certifier submitted a letter to SBA outlining actions it had taken to address this issue, among others. The final compliance review reports for the other third-party certifiers also recommended areas for improvement, including providing staff with additional training on how to conduct eligibility examinations and reviewing certification files to ensure they contain complete documentation. In addition, two of the three compliance review reports with recommendations (including the report for the certifier discussed above) required the certifier to provide a written response within 30 days outlining plans to address those areas. SBA officials said that they reviewed the written responses and determined that no further action was required.
In January 2017, SBA’s Office of Government Contracting updated its written Standard Operating Procedures (SOP) to include policies and procedures for the WOSB program, in part to address our October 2014 recommendation. The 2017 SOP discusses what a third-party-certifier compliance review entails, how often the reviews are to be conducted, and how findings are to be reported. The 2017 SOP notes that SBA may initiate a compliance review “at any time and as frequently as the agency determines is necessary.” In September 2018, SBA officials told us that they were again updating the SOP, in part to address deficiencies we identified in our prior work and during this review. However, as of February 2019, SBA had not provided an updated SOP.
In addition, in April 2018, SBA finalized a WOSB Program Desk Guide that, according to SBA, is designed to provide program staff with detailed guidance for conducting oversight procedures, including compliance reviews of third-party certifiers. For example, the Desk Guide discusses how staff should prepare for a compliance review of a third-party certifier, review certification documents, and prepare a final report. However, the Desk Guide does not describe specific activities designed to oversee third-party certifiers on an ongoing basis. In November 2017, SBA officials told us that they planned to conduct additional compliance reviews of the third-party certifiers. However, in June 2018, officials said there were no plans to conduct further compliance reviews until the final rule implementing the new certification process was completed. Further, SBA officials said that the 2016 certifier compliance reviews did not result in significant deficiencies. However, as noted previously, one of the compliance review reports described a potential violation of the third-party certifier’s agreement with SBA.
Per written agreements with SBA, third-party certifiers are required to submit monthly reports that include the number of WOSB and EDWOSB applications received, approved, and denied; identifying information for each certified business, such as the business name; concerns about fraud, waste, and abuse; and a description of any changes to the procedures the organizations used to certify businesses as WOSBs or EDWOSBs.
In our October 2014 report, we noted that SBA had not followed up on issues raised in the monthly reports and had not developed written procedures for reviewing them. At that time, SBA officials said that they were unaware of the issues identified in the certifiers’ reports and that the agency was developing procedures for reviewing the monthly reports but could not estimate a completion date.
In our interviews for this report, SBA officials stated that SBA still does not use the third-party certifiers’ monthly reports to regularly monitor the program. Specifically, SBA does not review the reports to identify any trends in certification deficiencies that could inform program oversight. Officials said the reports generally do not contain information that SBA considers helpful for overseeing the WOSB program, although staff sometimes use the reports to obtain firms’ contact information. SBA officials also said that staff very rarely receive information about potentially fraudulent WOSB firms from the third-party certifiers—maybe three firms per year—and that this information is generally received via email and not as part of the monthly reports. SBA officials said that when they receive information about potentially fraudulent firms, WOSB program staff conduct an examination to determine the firm’s eligibility and report the results back to the certifier. However, a third-party certifier told us it has regularly reported firms it suspected of submitting potentially fraudulent applications in its monthly reports and that SBA has not followed up with them. In addition, two third-party certifiers said that if SBA is not cross-checking the list of firms included in their monthly reports, a firm deemed ineligible by one certifier may submit an application to another certifier and obtain approval.
The three third-party certifiers we spoke with said that SBA generally had not communicated with them about their implementation of the program since the 2016 compliance reviews. However, SBA officials noted that three of the four third-party certifiers attended an SBA roundtable in March 2017 to discuss comments on the proposed rulemaking. In addition, SBA officials said that the third-party certifiers may contact them with questions about implementing the WOSB program, but SBA generally does not reach out to them.
Although SBA has taken steps to enhance its written policies and procedures for oversight of third-party certifiers, it does not have plans to conduct further compliance reviews of the certifiers and does not intend to review certifiers’ monthly reports on a regular basis. SBA officials said that third-party certifier oversight procedures would be updated, if necessary, after certification options have been clarified in the final WOSB certification rule. However, ongoing oversight activities, such as regular compliance reviews, could help SBA better understand the steps certifiers have taken in response to previous compliance review findings and whether those steps have been effective. In addition, leading fraud risk management practices include identifying specific tools, methods, and sources for gathering information about fraud risks, including data on fraud schemes and trends from monitoring and detection activities, as well as involving relevant stakeholders in the risk assessment process. Without procedures to regularly monitor and oversee third-party certifiers, SBA cannot provide reasonable assurance that certifiers are complying with program requirements and cannot improve its efforts to identify ineligible firms or potential fraud. Further, it is unclear when SBA’s final rule will be implemented. As a result, we maintain that our previous recommendation should be addressed—that is, that the Administrator of SBA should establish and implement comprehensive procedures to monitor and assess the performance of certifiers in accordance with the requirements of the third-party certifier agreement and program regulations.
SBA Has Not Implemented Procedures to Improve Its Eligibility Examinations of WOSB Program Participants
SBA also has not fully addressed deficiencies found in our 2014 review related specifically to eligibility examinations. We found that SBA lacked formalized guidance for its eligibility examination processes and that the examinations continued to identify high rates of potentially ineligible businesses. As a result, we recommended that SBA enhance its examination of businesses that register for the WOSB program to ensure that only eligible businesses obtain WOSB set-asides. Specifically, we suggested that SBA consider (1) completing the development of procedures to conduct annual eligibility examinations and implementing such procedures; (2) analyzing examination results and individual businesses found to be ineligible to better understand the cause of the high rate of ineligibility in annual reviews and determine what actions are needed to address the causes; and (3) implementing ongoing reviews of a sample of all businesses that have represented their eligibility to participate in the program.
SBA has taken some steps to implement our recommendation—such as by completing its 2017 SOP and its Desk Guide, both of which include written policies and procedures for WOSB program eligibility examinations. The 2017 SOP includes a brief description of the activities entailed in the examinations, the staff responsible for conducting them, and how firms are selected. In addition, as noted previously, SBA officials told us in September 2018 that a forthcoming update to the SOP would address deficiencies we identified regarding WOSB eligibility examinations. However, as of February 2019, SBA had not provided an updated SOP. The Desk Guide contains more detailed information on eligibility examinations. It notes that a sample of firms is to be examined annually, and it provides selection criteria, which can include whether the agency has received information challenging the firm's eligibility for the program. The Desk Guide also provides specific instructions on how to determine whether a firm meets the WOSB program's ownership, control, and financial requirements and what documentation should be consulted or requested.
SBA does not collect reliable information on the results of its annual eligibility examinations. According to SBA officials, SBA has conducted eligibility examinations of a sample of businesses that received WOSB program set-aside contracts each year since fiscal year 2012. However, SBA officials told us that the results of annual eligibility examinations—such as the number of businesses found eligible or ineligible—are generally not documented. As a result, we obtained conflicting data from SBA on the number of examinations completed and the percentage of businesses found to be ineligible in fiscal years 2012 through 2018. For example, based on previous information provided by SBA, we reported in October 2014 that in fiscal year 2012, 113 eligibility examinations were conducted and 42 percent of businesses were found to be ineligible for the WOSB program. However, during this review, we received information from SBA that 78 eligibility examinations were conducted and 37 percent of businesses were found ineligible in fiscal year 2012. We found similar disparities when we compared fiscal year 2016 data provided by SBA for this report with a performance memorandum summarizing that fiscal year's statistics. Regardless of the disparity between the data sources, the rate of ineligible businesses has remained significant. For example, according to documentation SBA provided during this review, in fiscal year 2017, SBA found that about 40 percent of the businesses in its sample were not eligible.
In addition, SBA continues to have no mechanism for evaluating examination results in aggregate to inform the WOSB program. In 2014, we reported that SBA officials told us that most businesses that were deemed ineligible did not understand the documentation requirements for establishing eligibility. However, we also reported that SBA officials could not explain how they knew a lack of understanding was the cause of ineligibility among businesses and had not made efforts to confirm that this was the cause. In June 2018, SBA officials told us they did not analyze the annual examinations in aggregate for common eligibility issues because the examination results are unique to each WOSB firm. They noted that this was not necessary as WOSB program staff are familiar with common eligibility issues through the annual eligibility examinations. As we noted in 2014, by not analyzing aggregate examination results, the agency is missing opportunities to obtain meaningful insights into the program, such as the reasons many businesses are deemed ineligible.
Also, SBA still conducts eligibility examinations only of firms that have already received a WOSB award. In 2014, we concluded that this sampling practice restricts SBA's ability to identify potentially ineligible businesses prior to a contract award. Similarly, during this review, SBA officials said that while some aspects of the sample characteristics have changed since 2012, the samples still generally consist only of firms that have been awarded a WOSB set-aside. In addition, officials said that the sample size of the eligibility examinations has varied over time and is largely based on the workload of WOSB program staff. Restricting the samples in this way limits SBA's ability to better understand the eligibility of businesses before they apply for and are awarded contracts, as well as its ability to detect and prevent potential fraud.
SBA officials said that their other means of reducing participation by ineligible firms and mitigating potential fraud is through WOSB or EDWOSB status protests—that is, allegations that a business receiving an award does not meet program eligibility requirements. A federal contractor can file a status protest against any firm receiving an award that represents itself as a WOSB in the System for Award Management on grounds that include failure to provide all required supporting documentation. The penalties for misrepresenting a firm's status, per regulation, include debarment or suspension. However, one third-party certifier expressed in its comments to the advance notice of proposed rulemaking on certification that status protests alone are not a viable option for protecting the integrity of the WOSB program. The certifier questioned how a firm could have sufficient information about a competitor firm to raise questions about its eligibility. According to SBA officials, 11 status protests were filed under the WOSB program in fiscal year 2018. Of these, four firms were deemed ineligible for the WOSB program, four were deemed eligible, and three status protests were dismissed. In fiscal year 2017, nine status protests were filed; of these, three firms were found ineligible, two were found eligible, and four status protests were dismissed.
We recognize that SBA has made some effort to address our previous recommendation by documenting procedures for conducting annual eligibility examinations of WOSB firms. However, leading fraud risk management practices state that federal program managers should design control activities that focus on fraud prevention over detection and response, to the extent possible. Without maintaining reliable information on the results of eligibility examinations, developing procedures for analyzing results, and expanding the sample of businesses to be examined to include those that did not receive contracts, SBA limits the value of its eligibility examinations and its ability to reduce ineligibility among businesses registered to participate in the WOSB program. These deficiencies also limit SBA’s ability to identify potential fraud risks and develop any additional control activities needed to address these risks. As a result, the program may continue to be exposed to the risk of ineligible businesses receiving set-aside contracts. In addition, in light of these continued oversight deficiencies, the implementation of sole-source authority without addressing the other changes made by the 2015 NDAA could increase program risk. For these reasons, we maintain that our previous recommendation that SBA enhance its WOSB eligibility examination procedures should be addressed.
SBA Has Not Addressed Previously Identified Issues with WOSB Set-Asides Awarded Under Ineligible Industry Codes
In 2015 and 2018, the SBA OIG reported instances in which WOSB set-asides were awarded using NAICS codes that were not eligible under the WOSB program, and our analysis indicates that this problem persists. In 2015, the SBA OIG reported on its analysis of a sample of 34 WOSB set-aside awards and found that 10 awards were set aside using an ineligible NAICS code. The SBA OIG concluded that this may have been due to contracting officers' uncertainty about NAICS code requirements under the program and recommended that SBA provide additional, updated training and outreach to federal agencies' contracting officers on the program's NAICS code requirements. In response, SBA updated WOSB program training and outreach documents in March 2016 to include information about the program's NAICS code requirements.
In 2018, the SBA OIG issued another report evaluating the WOSB program, with a focus on the use of the program's sole-source contract authority. Here, the SBA OIG identified additional instances of contracting officers using inaccurate NAICS codes to set aside WOSB contracts. Specifically, the SBA OIG reviewed a sample of 56 awards and found that 4 were awarded under ineligible NAICS codes. The report included two recommendations for SBA aimed at preventing and correcting improper NAICS code data in FPDS-NG: (1) conduct quarterly reviews of FPDS-NG data to ensure contracting officers used the appropriate NAICS codes and (2) in coordination with the Office of Federal Procurement Policy and GSA, strengthen controls in FPDS-NG to prevent contracting officers from using ineligible NAICS codes.
SBA disagreed with both of these recommendations. In its response to the first recommendation, SBA stated that it is not responsible for the oversight of other agencies’ contracting officers and therefore is not in a position to implement the corrective actions. With respect to the second recommendation, SBA stated that adding such controls to FPDS-NG would further complicate the WOSB program and increase contracting officers’ reluctance to use it. SBA also stated its preference for focusing its efforts on ensuring that contracting officers select the appropriate NAICS code at the beginning of the award process.
In our review, we also found several issues with WOSB program set-asides being awarded under ineligible NAICS codes. Our analysis of FPDS-NG data on all obligations to WOSB program set-asides from the third quarter of fiscal year 2011 through the third quarter of fiscal year 2018 found the following:
3.5 percent (or about $76 million) of WOSB program obligations were awarded under NAICS codes that were never eligible for the WOSB program;
10.5 percent (or about $232 million) of WOSB program obligations made under an EDWOSB NAICS code went to women-owned businesses that were not eligible to receive awards in EDWOSB-eligible industries; and
17 of the 47 federal agencies that obligated dollars to WOSB program set-asides during the period used inaccurate NAICS codes in at least 5 percent of their WOSB set-asides (representing about $25 million).
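Figures like these can be derived from FPDS-NG obligation records with a straightforward aggregation. The sketch below, written in Python with pandas, illustrates one way such a check might be run; the file name, column names (naics_code, agency, obligated_amount), and the eligibility sets are hypothetical placeholders for illustration, not actual FPDS-NG field names or SBA's published code lists.

```python
import pandas as pd

# Hypothetical extract of WOSB program set-aside obligations from FPDS-NG.
# Column names are illustrative, not the actual FPDS-NG field names.
df = pd.read_csv("wosb_set_aside_obligations.csv", dtype={"naics_code": str})

# Placeholder sets of four-digit NAICS industry groups; the real lists come
# from SBA's published WOSB and EDWOSB eligibility determinations.
WOSB_ELIGIBLE = {"2361", "5413"}    # example codes only
EDWOSB_ELIGIBLE = {"2371", "3331"}  # example codes only
EVER_ELIGIBLE = WOSB_ELIGIBLE | EDWOSB_ELIGIBLE

industry_group = df["naics_code"].str[:4]
never_eligible = ~industry_group.isin(EVER_ELIGIBLE)

# Share of total WOSB program obligations awarded under never-eligible codes.
share = (df.loc[never_eligible, "obligated_amount"].sum()
         / df["obligated_amount"].sum())
print(f"Obligations under never-eligible NAICS codes: {share:.1%}")

# Agencies whose set-asides used inaccurate codes at least 5 percent of the time.
bad_rate_by_agency = df.assign(bad=never_eligible).groupby("agency")["bad"].mean()
print(bad_rate_by_agency[bad_rate_by_agency >= 0.05])
```

In substance, a recurring check of this kind is what the recommendation later in this report contemplates: a periodic review that flags ineligible codes and identifies the agencies that may need targeted outreach or training.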
According to SBA officials we spoke with during this review, WOSB program set-asides may be awarded under ineligible NAICS codes because of human error when contracting officers are inputting data in FPDS-NG or because a small business contract was misclassified as a WOSB program set-aside. They characterized the extent of the issue as “small” relative to the size of the FPDS-NG database and said that such issues do not affect the program’s purpose. Rather than review FPDS-NG data that are inputted after the contract is awarded, SBA officials said that they have discussed options for working with GSA to add controls defining eligible NAICS codes for WOSB program set-aside opportunities on FedBizOpps.gov—the website that contracting officers use to post announcements about available federal contracting opportunities. Adding controls to this system, officials said, would help contracting officers realize as they are writing the contract requirements that they should not set aside contracts under the WOSB program without reviewing the proper NAICS codes. However, SBA officials said that the feasibility of this option was still being discussed and that the issue was not a high priority. For these reasons, according to officials, SBA’s updated oversight procedures described in the 2017 SOP and the Desk Guide do not include a process for reviewing WOSB program set-aside data in FPDS-NG to determine whether they were awarded under the appropriate NAICS codes.
Further, as of November 2018, the WOSB program did not have targeted outreach or training that focused on specific agencies’ use of NAICS codes. As noted previously, in March 2016, SBA updated its WOSB program training materials to address NAICS code requirements in response to a 2015 SBA OIG recommendation. In fiscal year 2018, SBA conducted three WOSB program training sessions for federal contracting officers, including (1) a virtual learning session, (2) a session conducted during WOSB Industry Day at the Department of Housing and Urban Development, and (3) a session conducted during a Department of Defense Small Business Training Conference. However, with the exception of the virtual learning session, these training sessions were requested by the agencies. SBA officials did not identify any targeted outreach or training provided to specific agencies to improve understanding of WOSB NAICS code requirements (or other issues related to the WOSB program).
Congress authorized SBA to develop a contract set-aside program specifically for WOSBs and EDWOSBs to address the underrepresentation of such businesses in specific industries. In addition, federal standards for internal control state that management should design control activities to achieve objectives and respond to risks and to establish and operate monitoring activities to monitor and evaluate the results. Because SBA does not review whether contracts are being awarded under the appropriate NAICS codes, it cannot provide reasonable assurance that WOSB program requirements are being met or identify agencies that may require targeted outreach or additional training on eligible NAICS codes. As a result, WOSB contracts may continue to be awarded to groups other than those intended, which can undermine the goals of and confidence in the program.
Federal Contracts to WOSB Set-Asides Remain Relatively Small, and Stakeholders Discussed Various Aspects of Program Use
The Percentage of Obligations to Women-Owned Small Businesses under the WOSB Program Increased Slightly since 2012
Federal dollars obligated for contracts to all women-owned small businesses increased from $18.2 billion in fiscal year 2012 to $21.4 billion in fiscal year 2017. These figures include contracts for any type of good or service awarded under the WOSB program, under other federal programs, or through full and open competition. Contracts awarded to all women-owned small businesses within WOSB-program-eligible industries also increased during this period—from about $15 billion to $18.8 billion, as shown in figure 1. However, obligations under the WOSB program represented only a small share of this increase. In fiscal year 2012, WOSB program contract obligations were 0.5 percent of contract obligations to all women-owned small businesses for WOSB-program-eligible goods or services (about $73.5 million), and in fiscal year 2017 this percentage had grown to 3.8 percent (about $713.3 million) (see fig. 1).
From fiscal years 2012 through 2017, 98 percent of total dollars obligated for contracts to all women-owned small businesses in WOSB-program-eligible industries were not awarded under the WOSB program. Instead, these contracts were awarded without a set-aside or under other, longer-established socioeconomic contracting programs, such as the HUBZone, SDVOSB, and 8(a) programs. For example, during this period, dollars obligated to contracts awarded to women-owned small businesses without a set-aside represented about 34 percent of dollars obligated for contracts to all women-owned small businesses in these industries (see fig. 2).
As shown in table 1, six federal agencies—DOD, DHS, Department of Commerce, Department of Agriculture, Department of Health and Human Services, and GSA—collectively accounted for nearly 83 percent of the obligations awarded under the WOSB program from the third quarter of fiscal year 2011 through the third quarter of fiscal year 2018, with DOD accounting for about 49 percent of the total.
Contracting officers' use of sole-source authority was relatively limited, representing about 12 percent of WOSB program obligations from January 2016 through June 2018. In fiscal year 2017—the only full fiscal year for which we have data on sole-source authority—about $77 million was obligated using sole-source authority. The share of sole-source awards as a percentage of total WOSB program set-asides also varied considerably by quarter—from as low as 5 percent in the third quarter of 2016 to as high as 21 percent in the first quarter of 2017 (see fig. 3).
Stakeholders Discussed Various Issues Related to WOSB Program Usage
We spoke with 14 stakeholder groups to obtain their views on usage of the WOSB program. These groups consisted of staff within three federal agencies (DHS, DOD, and GSA), eight contracting offices within these agencies, and three third-party certifiers. Issues stakeholders discussed included the impact of sole-source authority and program-specific NAICS codes on program usage. Stakeholders also noted the potential effect of other program requirements on contracting officers’ willingness to use the program, and some suggested that SBA provide additional guidance and training to contracting officers.
Sole-source authority. Participants in 12 of the 14 stakeholder groups commented on the effect of sole-source authority on WOSB program usage. Staff from 4 of the 12 stakeholder groups—including three contracting offices—said that sole-source authority generally had no effect on the use of the WOSB program. One of these stakeholders believed contracting officers seldom use the authority because they lack an understanding of how and when to use it; therefore, in this stakeholder’s opinion, use of the WOSB program has not generally changed since the authority was implemented. However, staff from two contracting offices and one third-party certifier said that sole-source authority was a positive addition because, for example, it can significantly reduce the lead time before a contracting officer can offer a contract award to a firm. Staff from one of these two contracting offices stated that the award process can take between 60 to 90 days using sole-source authority, compared to 6 to 12 months using a competitive WOSB program set-aside. These staff also said that negotiating the terms of a sole-source contract is easier, from a contracting officer’s perspective, because they can communicate directly with the firm. As discussed previously, SBA officials we interviewed said that adding sole-source authority to the WOSB program made the program more consistent with other existing socioeconomic set-aside programs, such as 8(a) and HUBZone.
The remaining five stakeholder groups that discussed the effects of WOSB sole-source authority described difficulties with implementing it. Specifically, representatives from DHS, DOD, and one third-party certifier said that executing sole-source authority under the WOSB program is difficult for contracting officers because rules for sole-source authority under WOSB are different from those under other SBA programs, such as 8(a) and HUBZone. For example, the FAR's requirement that contracting officers justify, in writing, why they do not expect other WOSBs or EDWOSBs to submit offers on a contract is stricter under the WOSB program than it is for the 8(a) program. Further, staff from one contracting office noted that these sole-source justifications must then be published on a federal website. In contrast, contracting officers generally do not need to prepare and publish a justification under the 8(a) program. According to staff from another contracting office, it may be difficult to find more than one firm qualified to do the work under some WOSB-eligible NAICS codes, but contracting officers would still have to conduct market research and explain why they do not expect additional offers in order to set the contract aside for a WOSB.
Program-specific NAICS codes. Participants in 13 of the 14 stakeholder groups we interviewed commented on the requirement that WOSB program set-asides be awarded within certain industries, represented by NAICS codes. For example, two third-party certifiers we interviewed recommended that the NAICS codes be expanded or eliminated to provide greater opportunities for WOSBs to win contracts under the program. Another third-party certifier said that some of its members focus their businesses’ marketing efforts on industries specific to the WOSB program to help them compete for such contracts.
Representatives from GSA and DHS made comments about limitations with respect to the WOSB program’s NAICS code requirement. Staff we interviewed from three contracting offices made similar statements, adding that the NAICS codes limit opportunities to award a contract to a WOSB or EDWOSB because they are sufficient in some industry areas but not others. All five of these stakeholder groups suggested that NAICS codes be removed from the program’s requirements to increase opportunities for WOSBs.
Conversely, staff from five other contracting offices we interviewed generally expressed positive views about the WOSB program's NAICS code requirements and stated that eligible codes line up well with the services for which they generally contract. Finally, SBA officials noted that there are no plans to reassess the NAICS codes until about 2020. However, SBA officials also stated that the NAICS code requirements complicate the WOSB program and add confusion for contracting officers who use the program, as compared to other socioeconomic programs that do not have such requirements, such as HUBZone or 8(a).
Requirement to verify eligibility documentation. Staff from 7 of the 14 stakeholder groups we interviewed discussed the requirement for the contracting officer to review program eligibility documentation and how this requirement affects their decision to use the program. For example, staff from one contracting office said that using the 8(a) or HUBZone programs is easier because 8(a) and HUBZone applicants are already certified by SBA; therefore, the additional step to verify documentation for eligibility is not needed. GSA officials noted that eliminating the need for contracting officers to take additional steps to review eligibility documentation for WOSB-program set-asides—in addition to checking the System for Award Management—could create more opportunities for WOSBs by reducing burden on contracting officers. However, staff from two contracting offices said it is not more difficult to award contracts under the WOSB program versus other socioeconomic programs.
WOSB program guidance. Staff from 13 of the 14 stakeholder groups we interviewed discussed guidance available to contracting officers under the WOSB program. Most generally said that the program requirements outlined in the FAR are fairly detailed and help contracting officers implement the program. According to SBA officials, SBA provides training on WOSB program requirements to contracting officers in federal agencies by request, through outreach events, and through an annual webinar. SBA officials also said that the training materials include all the regulatory issues that contracting officers must address.
However, representatives from two third-party certifiers described feedback received from their members about the need to provide additional training and guidance for contracting officers to better understand and implement the WOSB program. Staff from two contracting offices also expressed the need for SBA to provide additional training and guidance. Staff from one of these contracting offices said that the last time they received training on the WOSB program was in 2011, when the program was first implemented. Staff in the other contracting office added that the most recent version of a WOSB compliance guide they could locate online was at least 6 years old. SBA officials estimated that the WOSB compliance guide was removed from their public website in March 2016 because it was difficult to keep the document current and officials did not want to risk publishing a guide that was out of date. SBA officials also said that there are no plans to issue an updated guide as the FAR is sufficient.
The stakeholder groups also identified positive aspects of the WOSB program. Specifically, staff from seven stakeholder groups believed that the program provided greater opportunities for women-owned small businesses to obtain contracts in industries in which they are underrepresented. In addition, staff from three stakeholder groups mentioned that SBA-led initiatives, such as the Small Business Procurement Advisory Council and SBA’s co-sponsorship of the ChallengeHER program, help improve collaboration between federal agencies and the small business community and overall government contracting opportunities for women-owned small businesses.
Conclusions
The WOSB program aims to enhance federal contracting opportunities for women-owned small businesses. However, weaknesses in SBA's management of the program continue to hinder its effectiveness. As of February 2019, SBA had not fully implemented comprehensive procedures to monitor the performance of the WOSB program's third-party certifiers and had not taken steps to provide reasonable assurance that only eligible businesses obtain WOSB set-aside contracts, as recommended in our 2014 report. Without ongoing monitoring and reviews of third-party certifier reports, SBA cannot ensure that the certifiers are fulfilling the requirements of their agreements with SBA, and it is missing opportunities to gain information that could help improve the program's processes. Further, limitations in SBA's procedures for conducting, documenting, and analyzing eligibility examinations inhibit its ability to better understand the eligibility of businesses before they apply for and potentially receive contracts, which exposes the program to unnecessary risk of fraud. In addition, given that SBA does not expect to finish implementing the changes in the 2015 NDAA until January 1, 2020, these continued oversight deficiencies increase program risk. As a result, we maintain that our previous recommendations should be addressed.
In addition, SBA has not addressed deficiencies that the SBA OIG identified previously—and that we also identified during this review—related to WOSB set-asides being awarded under ineligible industry codes. Although SBA has updated its training and outreach materials for the WOSB program to address NAICS code requirements, it has not developed plans to review FPDS-NG data or provide targeted outreach or training to agencies that may be using ineligible codes. As a result, SBA is not aware of the extent to which individual agencies are following program requirements and which agencies may require targeted outreach or additional training. Reviewing FPDS-NG data would allow SBA to identify those agencies (and contracting offices within them) that could benefit from such training. Without taking these additional steps, SBA cannot provide reasonable assurance that WOSB program requirements are being met.
Recommendation for Executive Action
The SBA Administrator or her designee should (1) develop a process for periodically reviewing FPDS-NG data to determine the extent to which agencies are awarding WOSB program set-asides under ineligible NAICS codes and (2) take steps to address any issues identified, such as providing targeted outreach or training to agencies making awards under ineligible codes. (Recommendation 1)
Agency Comments
We provided a draft of this report to DHS, DOD, GSA, and SBA for review and comment. DHS, DOD, and GSA indicated that they did not have comments. SBA provided a written response, reproduced in appendix II, in which it agreed with our recommendation. SBA stated that it will implement a process to review WOSB program data extracted from FPDS-NG and certified by each agency. Specifically, through the government-wide Small Business Procurement Advisory Council, SBA plans to provide quarterly presentations to contracting agencies’ staff that would include training and an analysis and review of the data. The response also reiterated that SBA has contacted GSA to implement a system change to FedBizOpps.gov that would prevent contracting officers from entering an invalid NAICS code for a WOSB program set-aside.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time we will send copies of this report to appropriate congressional committees and members, the Acting Secretary of DOD, the Secretary of DHS, the Administrator of GSA, the Administrator of SBA, and other interested parties. This report will also be available at no charge on our website at http://www.gao.gov.
If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.
Appendix I: Objectives, Scope, and Methodology
This report examines (1) the extent to which the Small Business Administration (SBA) has implemented changes to the Women-Owned Small Business Program (WOSB program) made by the 2015 National Defense Authorization Act (2015 NDAA); (2) the extent to which SBA has implemented changes to address previously identified oversight deficiencies; and (3) changes in WOSB program use since 2011 and stakeholder views on its use, including since the 2015 implementation of sole-source authority.
To describe the extent to which SBA has implemented changes to the WOSB program made by the 2015 NDAA, we reviewed relevant legislation, including the 2015 NDAA; related proposed regulations; and SBA documentation. We reviewed comment letters on the advance notice of proposed rulemaking for the new WOSB program certification process from three of the four SBA-approved third-party certifiers: the El Paso Hispanic Chamber of Commerce, the U.S. Women’s Chamber of Commerce, and the Women’s Business Enterprise National Council. To ensure the accuracy of our characterization of the comment letters, one staff member independently summarized the third-party certifiers’ comments on the advance notice, and a second staff member then reviewed the results. We also interviewed SBA officials, including officials from SBA’s Office of Government Contracting and Business Development.
To respond to the second and third objectives, we conducted interviews on SBA's implementation and oversight of the WOSB program and its use with SBA officials, three of the WOSB program's four third-party certifiers, three selected agencies (and three agency components within two of the agencies), and a total of eight selected contracting offices within six selected agencies or components. Using data from the Federal Procurement Data System-Next Generation (FPDS-NG), we judgmentally selected the three federal agencies and three components (for a total of six federal agencies and components) because their WOSB program dollar obligations (including competed and sole-source) were among the largest or because we had interviewed them for our prior work. Specifically, we selected the following six agencies or agency components: the Department of Homeland Security (DHS) and, within DHS, the Coast Guard; the Department of Defense (DOD) and, within DOD, the U.S. Army and U.S. Navy; and the General Services Administration (GSA). Within the components and GSA, we judgmentally selected eight contracting offices (two each from the Coast Guard, U.S. Army, U.S. Navy, and GSA) based on whether they had a relatively large amount of obligations and had used multiple types of WOSB program set-asides (competed or sole-source) to WOSBs or economically disadvantaged women-owned small businesses (EDWOSBs).
To address our second objective, we reviewed the findings and recommendations in our October 2014 report and in audit reports issued by the SBA Office of Inspector General (OIG) in May 2015 and June 2018. We also reviewed SBA documentation on the WOSB program, including SBA’s 2017 Standard Operating Procedures and 2018 WOSB Program Desk Guide, results from 2016 compliance reviews of the four third-party certifiers, and SBA eligibility examinations from fiscal years 2012 through 2018. In addition, we analyzed FPDS-NG data on contract obligations to WOSB program set-asides from the third quarter of fiscal year 2011 through the third quarter of fiscal year 2018 to determine whether set-asides were made using eligible program-specific North American Industry Classification System (NAICS) codes. To conduct this analysis, we compared contract obligations in FPDS-NG with the NAICS codes eligible under the WOSB program at the time of the award for the time frame under review. The WOSB program’s eligible NAICS codes have changed three times since the program was implemented in 2011, but the eligible industries have changed once. SBA commissioned the RAND Corporation to conduct the first study to assist SBA in determining eligible NAICS codes under the WOSB program. Based on the results of the RAND study, SBA identified 45 four-digit WOSB NAICS codes and 38 four-digit EDWOSB NAICS codes, for a total of 83 four-digit NAICS codes. WOSB and EDWOSB NAICS codes are different and do not overlap. In December 2015, the Department of Commerce issued the next study, which increased the total NAICS codes under the program to 113 four-digit codes, with 92 WOSB NAICS codes and 21 EDWOSB NAICS codes (which became effective March 2016). Often, there is a time lag between the effective date of NAICS codes and when they are entered in FPDS-NG. Therefore, we did not classify a contract as having an ineligible NAICS code if the code eventually became eligible under the WOSB program. We also excluded actions in FPDS-NG coded other than as a small business. These actions represented a small amount of contract obligations—approximately $125,000. We compared SBA information on its oversight activities and responses to previously identified deficiencies, federal internal control standards, and GAO’s fraud risk framework.
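Because the eligible codes changed during the period reviewed, the matching described above has to be date-aware. The sketch below, again in Python, illustrates one hedged way to implement the rule that a code is not flagged as ineligible if it later became eligible, which also absorbs the lag between a list's effective date and its appearance in FPDS-NG. The placeholder code sets, file name, and column names (action_date, naics_code) are assumptions for illustration, not SBA's actual lists or the FPDS-NG schema.

```python
import pandas as pd

# Placeholder four-digit NAICS industry groups; the actual lists are SBA's
# 2011 determination (83 codes) and the larger list effective March 2016
# (113 codes). Codes shown are examples only.
ELIGIBLE_2011 = {"2361", "5413", "4841"}
ELIGIBLE_2016 = ELIGIBLE_2011 | {"3331", "5415"}
EVER_ELIGIBLE = ELIGIBLE_2011 | ELIGIBLE_2016
CUTOVER = pd.Timestamp("2016-03-01")  # effective date of the 2016 list

def classify(action_date: pd.Timestamp, naics: str) -> str:
    """Classify a WOSB set-aside action by NAICS eligibility at award time."""
    group = naics[:4]
    current = ELIGIBLE_2016 if action_date >= CUTOVER else ELIGIBLE_2011
    if group in current:
        return "eligible"
    if group in EVER_ELIGIBLE:
        # Not counted as ineligible because the code eventually became eligible.
        return "eventually eligible"
    return "never eligible"

df = pd.read_csv("wosb_actions.csv", parse_dates=["action_date"],
                 dtype={"naics_code": str})
df["status"] = [classify(d, n) for d, n in zip(df["action_date"], df["naics_code"])]
print(df["status"].value_counts())
```

Excluding actions coded other than as a small business, as the methodology describes, would be an additional filter applied before this classification.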
We assessed the reliability of FPDS-NG data by considering their known strengths and weaknesses, based on our past work and through electronic testing for missing data, outliers, and inconsistent coding in the data elements we used for our analysis. We also reviewed FPDS-NG documentation, including the FPDS-NG data dictionary, FPDS-NG data validation rules, FPDS-NG user manual, prior GAO reliability assessments, and relevant SBA OIG audit reports. Based on these steps, we concluded that the data were sufficiently reliable for the purposes of reporting on trends in the WOSB program and the use of sole-source authority under the program.
To describe how participation in the WOSB program has changed since 2011, including since the 2015 implementation of sole-source authority, we analyzed FPDS-NG data from the third quarter of fiscal year 2011 through the third quarter of fiscal year 2018. We identified any trends in WOSB program participation using total obligation dollars set aside for competitive and sole-source contracts awarded to WOSBs and EDWOSBs under the program. We also compared data on obligations for set-asides under the WOSB program with federal contract obligations for WOSB-program-eligible goods and services to all women-owned small businesses, including those made under different set-aside programs or with no set-asides, to determine the relative usage of the WOSB program. In our analysis, we excluded from WOSB program set-aside data actions in FPDS-NG coded other than as a small business (representing approximately $125,000) or coded under ineligible NAICS codes that were never eligible under the WOSB program (representing approximately $76.3 million).
To describe stakeholder views on WOSB program use, we conducted semistructured interviews to gather responses from 14 stakeholder groups. These groups consisted of staff within three federal agencies (DHS, DOD, and GSA), eight contracting offices within these agencies, and three third-party certifiers (selection criteria described above). One person summarized the results of the interviews, and another person reviewed the summary of the interviews to ensure an accurate depiction of the comments. In addition, a third person then reviewed the summarized results.
We conducted this performance audit from October 2017 to March 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Comments from the U.S. Small Business Administration
Appendix III: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the contact named above, Allison Abrams (Assistant Director), Tiffani Humble (Analyst-in-Charge), Pamela Davidson, Jonathan Harmatz, Julia Kennon, Jennifer Schwartz, Rebecca Shea, Jena Sinkfield, Tyler Spunaugle, and Tatiana Winger made key contributions to this report. | Why GAO Did This Study
In 2000, Congress authorized the WOSB program, allowing contracting officers to set aside procurements to women-owned small businesses in industries in which they are substantially underrepresented. To be eligible to participate in the WOSB program, firms have the option to self-certify or be certified by a third-party certifier. However, the 2015 NDAA changed the WOSB program by (1) authorizing SBA to implement sole-source authority, (2) eliminating the option for firms to self-certify as being eligible for the program, and (3) allowing SBA to implement a new certification process.
GAO was asked to review the WOSB program. This report discusses (1) the extent to which SBA has addressed the 2015 NDAA changes, (2) SBA's efforts to address previously identified deficiencies, and (3) use of the WOSB program. GAO reviewed relevant laws, regulations, and program documents; analyzed federal contracting data from April 2011 through June 2018; and interviewed SBA officials, officials from contracting agencies selected to obtain a range of experience with the WOSB program, and three of the four private third-party certifiers.
What GAO Found
The Small Business Administration (SBA) has implemented one of the three changes to the Women-Owned Small Business (WOSB) program authorized in the National Defense Authorization Act of 2015 (2015 NDAA). Specifically, in September 2015 SBA published a final rule to implement sole-source authority, effective October 2015. As of February 2019, SBA had not eliminated the option for program participants to self-certify that they are eligible to participate, as required by the 2015 NDAA. SBA officials stated that this requirement would be addressed as part of the new certification process for the WOSB program, which they expect to implement by January 1, 2020.
SBA has not addressed WOSB program oversight deficiencies identified in GAO's 2014 review (GAO-15-54). For example, GAO previously recommended that SBA establish procedures to assess the performance of four third-party certifiers—private entities approved by SBA to certify the eligibility of WOSB firms. While SBA conducted a compliance review of the certifiers in 2016, it has no plans to regularly monitor them. By not improving its oversight of the WOSB program, SBA is limiting its ability to ensure third-party certifiers are following program requirements. In addition, the implementation of sole-source authority in light of these continued oversight deficiencies can increase program risk. Consequently, GAO maintains that its prior recommendations should be addressed. In addition, similar to previous findings from SBA's Office of Inspector General, GAO found that about 3.5 percent of contracts using a WOSB set-aside were awarded for ineligible goods or services from April 2011 through June 2018. SBA does not review contracting data that could identify this problem and indicate which awarding agencies may need targeted outreach or training. As a result, SBA cannot provide reasonable assurance that WOSB program requirements are being met and that the program is meeting its goals.
While federal contract obligations to all women-owned small businesses and WOSB program set-asides have increased since fiscal year 2012, WOSB program set-asides remain a small percentage of those obligations (see figure).
What GAO Recommends
GAO recommends that SBA develop a process for periodically reviewing the extent to which WOSB program set-asides are awarded for ineligible goods or services and use the results to address identified issues, such as through targeted outreach or training on the WOSB program. SBA agreed with the recommendation.
Background
Roles and Responsibilities
Under the Brady Handgun Violence Prevention Act of 1993 (referred to hereafter as the “Brady Act”) and implementing regulations, the FBI and designated state and local criminal justice agencies use NICS to conduct background checks on individuals seeking to purchase firearms from an FFL or obtain permits to possess, acquire, or carry firearms. The mission of the FBI’s NICS Section is to enhance national security and public safety by providing the timely and accurate determination of a person’s eligibility to possess firearms in accordance with federal law. Figure 1 shows the states where the FBI performs background checks for all transactions, as well as POC and partial POC states.
ATF—one of several Department of Justice law enforcement components—is responsible for investigating criminals and criminal organizations that use firearms, arson, or explosives in violent criminal activity, among other things. ATF is also responsible for investigating criminal and regulatory violations of federal firearms, explosives, arson, and alcohol and tobacco-smuggling laws subject to the direction of the Attorney General, as well as any other function related to the investigation of violent crime or domestic terrorism that is delegated to ATF by the Attorney General.
U.S. Attorneys prosecute criminal cases brought forward by the federal government, prosecute and defend civil cases in which the United States is a party, and collect debts owed to the federal government that are administratively uncollectible. U.S. Attorneys investigate and prosecute a wide range of criminal activities—including, but not limited to, international and domestic terrorism, corporate fraud, public corruption, violent crime, and drug trafficking. Each U.S. Attorney exercises wide discretion in the use of his or her resources to further the priorities of the local jurisdictions and the needs of their communities. The Executive Office for United States Attorneys (EOUSA) represents the 93 U.S. Attorneys that prosecute federal cases. Among other things, EOUSA provides guidance, management direction, and oversight to USAOs.
Firearms Purchase Background Check Process
During a NICS check, the FBI and POC states use descriptive data provided by an individual—such as name and date of birth—to search various databases containing criminal history and other relevant records. These databases include the Interstate Identification Index, the National Crime Information Center, and the NICS Indices, each described below; a simplified sketch of the multi-index search follows the list.
The Interstate Identification Index includes, among other things, information on persons who are indicted for, or have been convicted of, a crime punishable by imprisonment for a term exceeding 1 year or have been convicted of a misdemeanor crime of domestic violence.
The National Crime Information Center includes criminal justice- related records pertaining to wanted persons (fugitives) and persons subject to protection orders, among other things.
The NICS Indices were created for use in connection with NICS background checks and contain information on persons determined to be prohibited from possessing or receiving a firearm.
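For illustration, the multi-database search described above can be sketched as follows. This is a minimal sketch, assuming a hypothetical search interface; the type and method names are illustrative placeholders, not actual FBI system APIs.

from typing import List, Protocol

class RecordIndex(Protocol):
    # Stand-in for any searchable index, such as the three databases above.
    def search(self, full_name: str, date_of_birth: str) -> List[str]: ...

def nics_search(indices: List[RecordIndex], full_name: str, date_of_birth: str) -> List[str]:
    # Query each index with the purchaser's descriptive data and pool any
    # potentially prohibiting records for review; an empty result supports
    # proceeding, while any hits require further examination.
    hits: List[str] = []
    for index in indices:
        hits.extend(index.search(full_name, date_of_birth))
    return hits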
NICS checks determine whether or not an individual is disqualified by federal or state law from possessing firearms. As shown in figure 2:
Federal NICS transactions increased from about 6.5 million in fiscal year 2011 to about 8.6 million in fiscal year 2017. Federal NICS denials increased from about 77,000 in fiscal year 2011 to about 112,000 in fiscal year 2017.
POC state transactions—which include both full and partial POC states—increased from about 9.3 million in fiscal year 2011 to about 17 million in fiscal year 2017. POC state denials increased from about 45,000 in fiscal year 2011 to about 69,000 in fiscal year 2017.
If the FBI or state agency completes a background check within 3 business days and determines that a person should be denied, such denials are referred to as “standard denials” and do not involve the potential transfer of a firearm. If the FBI or state agency cannot complete a background check within 3 business days, the FFL may transfer the firearm pursuant to federal law, unless state law provides otherwise. When the FBI makes a denial determination after 3 business days—called a “delayed denial”—the FBI determines if the FFL transferred the firearm to the individual, and if so, refers these cases to ATF for retrieval of the firearm if the individual is confirmed to be prohibited from possessing a firearm.
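The standard- versus delayed-denial distinction described above can be expressed as a small decision function. This is a minimal sketch, assuming simplified inputs; the type names and return strings are illustrative, not FBI or ATF system identifiers.

from dataclasses import dataclass

@dataclass
class DenialDetermination:
    business_days_elapsed: int  # business days until the deny determination
    firearm_transferred: bool   # the FFL may transfer after 3 business days

def classify_denial(d: DenialDetermination) -> str:
    if d.business_days_elapsed <= 3:
        # Denied within 3 business days: no firearm transfer is possible.
        return "standard denial"
    # Denied after 3 business days: the FFL may already have transferred
    # the firearm; if so, the case is referred to ATF for possible
    # retrieval once the prohibition is confirmed.
    if d.firearm_transferred:
        return "delayed denial (refer to ATF for firearm retrieval)"
    return "delayed denial (no transfer occurred)"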
States may establish requirements regarding background check processing times, including waiting periods, beyond the federal requirement. States also may include state databases in addition to NICS indices when conducting background checks. In POC states, FFLs initiate a NICS check by contacting one or more state organizations, such as a state or local law enforcement agency, to query NICS databases and related state files. If necessary, the state organization then conducts any required follow-up research.
States may use different methods to conduct background checks. Examples of these varying methods include the following:
Instant Check: Requires an FFL to transmit a buyer’s application to a checking agency by telephone or computer. The agency is required to respond immediately or as soon as possible.
Purchase Permit: Requires a buyer to obtain, after a background check, a government-issued document (such as a permit, license, or identification card) that must be presented to an FFL before the buyer can receive a firearm.
Exempt Carry Permit: State concealed weapons permits, issued after a background check, exempt the holder from a new check at the time of purchase under an ATF ruling or state law.
Other: Requires an FFL to transmit an application to a checking agency, which delays transfer until a waiting period expires or the agency completes a check.
Federal Process After a Firearm Denial
After a federal NICS denial, ATF can take enforcement actions through criminal investigation and referral for prosecution to a USAO, as making false written statements on the ATF Form 4473 is a crime punishable as a felony under federal law by up to 10 years in prison and up to a $250,000 fine. Any fines that result from a firearm denial are criminal fines assessed through prosecution as part of a plea agreement or sentencing. ATF does not have the statutory authority to issue fines or take any civil action against individuals whose firearm applications are denied and are suspected of providing false information during the attempted purchase.
Investigations
For federal denied transactions, the FBI’s NICS Section sends information about each denial to ATF’s DENI Branch. The DENI Branch is responsible for researching each transaction to determine whether the case should be referred to one of ATF’s 25 field divisions for possible investigation. The DENI Branch is to refer all delayed denial cases—which may require recovery of a firearm—and standard denial cases that meet USAO investigative referral criteria for each corresponding judicial district. An ATF NICS coordinator in each field division is to distribute the referred denial cases to the appropriate field office within each field division.
In addition to recovery of a firearm for delayed denial cases, all firearms denial investigations may involve verifying the purchaser’s prohibited status, gathering relevant supporting documentation such as mental health or court files, and communicating with prosecutors regarding the prosecutorial merit of the case, according to ATF officials. Figure 3 shows the general NICS background check process when purchasing a firearm from an FFL in either a NICS or POC state.
Among the denials that ATF investigates (delayed and standard), each field office also determines which cases should be referred to a USAO for possible prosecution. If the ATF field office determines that the subject is a prohibited person and local prosecutorial guidelines are met, the field office may refer the case for prosecution. ATF agents may discuss potential referrals with prosecutors to try to obtain USAO acceptance before ATF formally refers a case for possible prosecution. A case that is not deemed appropriate for federal prosecution may be referred to a state prosecutor. If the U.S. Attorney decides to prosecute, an arrest is made or a warrant is issued. Figure 4 shows the general process for the investigation and prosecution of standard firearms denials.
State Processes After a Firearm Denial
POC states vary in their procedures and standards for investigating and prosecuting persons denied firearms transactions. For example, these states may or may not investigate and prosecute prohibited persons who violate state gun control laws. In some states, the agency conducting background checks notifies the state or local police, depending on which has jurisdiction, where the transaction occurred. The local agency is then responsible for investigating and assisting in the prosecution of the case by state or local prosecutors. Other states have units with statewide jurisdiction that screen cases before deciding whether a referral should be made to a state police trooper or local law enforcement agency for investigation. A POC state may also refer denials for further investigation to the nearest ATF field office. In POC states, a firearm retrieval associated with a delayed denial may be handled by local law enforcement, a statewide firearms unit, or ATF. State and local prosecutors, whether the district attorney, county or city prosecutor, or the state Attorney General’s office, represent the state for cases arising under state law. Occasionally, federal and state law may prohibit similar types of criminal conduct, allowing both federal and state prosecutors to pursue the case.
Federal and Selected State Law Enforcement Agencies Collectively Investigate and Prosecute a Small Percentage of Firearms Denials
In fiscal year 2017, ATF referred about 13,000 firearms denials to its field divisions for investigation, of which USAOs had prosecuted 12 cases as of June 2018. In March 2018, the Attorney General issued a memo that directed all United States Attorneys to enhance prosecution of cases involving individuals who make false statements on the ATF Form 4473. Officials from 10 of our 13 selected POC states said that they do not investigate or prosecute NICS denials.
ATF Referred about 13,000 Firearms Denials to Its Field Divisions for Investigation in Fiscal Year 2017, of Which USAOs Have Prosecuted 12 Cases
At the federal level, the FBI’s NICS Section referred 112,090 denied transactions to ATF’s DENI Branch in fiscal year 2017, of which ATF referred 12,710 (about 11 percent) to its field divisions for further investigation. The 12,710 referred cases consisted of 3,993 delayed denials and 8,717 standard denials. According to ATF headquarters officials, the DENI Branch refers all delayed denials to ATF field divisions for additional investigation because these cases could potentially require the recovery of a firearm that was transferred to a prohibited person. The DENI Branch uses investigative guidelines established by USAOs that cover 94 judicial districts to determine if standard denials should be referred to the respective ATF field division for investigation. USAO criteria may include individuals who are violent felons, have an active protection order, or have made multiple attempts to purchase a firearm in the past after being denied, among other factors. Based on our analysis of ATF data, the number of firearms denials the DENI Branch referred to ATF field divisions for investigation increased from 5,208 in fiscal year 2011 to 12,710 in fiscal year 2017—an increase of 144 percent. We discuss the reported impact of this increase in referrals on ATF staff later in the report.
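The growth figure cited above can be verified with a quick worked calculation (shown here for clarity; this is not GAO's analysis code):

def percent_change(old: float, new: float) -> float:
    # Percent increase relative to the fiscal year 2011 baseline.
    return (new - old) / old * 100

print(round(percent_change(5208, 12710)))  # prints 144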
Of the 12,710 referrals ATF sent to its field divisions in fiscal year 2017 for investigation, USAOs considered 50 cases for prosecution, and prosecuted a total of 12 cases (9 delayed denial and 3 standard denial) as of June 2018, according to ATF data (see table 1). An additional 10 cases were pending or awaiting prosecution as of June 2018. Overall, USAOs filed about 54,000 criminal cases in fiscal year 2016, of which about 9,200 involved firearm-related matters. According to Department of Justice officials, in fiscal year 2017, USAOs also filed about 54,000 criminal cases, of which about 10,400 involved firearm-related matters.
We also asked state officials from states within our six selected ATF field divisions whether they investigated and prosecuted these denials. Officials from four of these six states said that ATF has not been referring firearms denials to them, so investigation and prosecution of firearms denials was not being done in their state. State officials from two of the six states said that they either occasionally receive referrals from ATF, which are investigated and submitted for local prosecution, or they are not aware whether they receive referrals from ATF because they do not have a dedicated team to investigate these cases. Officials from all six states said they have laws that prohibit persons from purchasing and/or possessing firearms based on prohibitions, such as prior felony or misdemeanor convictions, but do not have laws that prohibit persons from falsifying information on the ATF Form 4473 during the NICS background check. These states also cited some limitations for investigating these referrals, such as lack of statutory authority within their state agency and resource constraints.
Attorney General Memo to Enhance Prosecution of Persons Denied Firearms Purchases
On March 12, 2018, the Attorney General issued a memo that directed all United States Attorneys to enhance prosecution of cases involving false statements on the ATF Form 4473, which the memo refers to as “lie-and-try” cases. The memo specifically stated that every United States Attorney must coordinate with the ATF Special Agent-in-Charge in the local district to review and revise, as necessary, local prosecution and referral guidelines to ensure vigorous and appropriate prosecution of these cases. The memo also stated that these guidelines should place particular emphasis on cases against violent persons, including—but not limited to—denials involving individuals convicted of violent felonies, misdemeanor crimes of domestic violence, or subject to protective orders, and denials involving fugitives where the underlying offense is a violent felony or misdemeanor crime of domestic violence. Further, the memo stated that the review and any resulting revisions should ensure that district-specific prosecution and referral guidelines reflect the Department of Justice’s renewed commitment to reducing violent crime. The memo required that all United States Attorneys certify that the review has been completed and all necessary adjustments made within 45 days.
According to EOUSA officials, as of early May 2018, about 90 percent of USAOs had coordinated with their respective ATF field divisions to discuss revisions in USAO referral guidelines for standard denial cases. The officials added that in response to the Attorney General’s memo, some USAOs narrowed criteria to focus resources on particular denials, such as those involving an attempted purchaser with a history of violent crime, or prioritized denials with recent prohibitions, such as a domestic violence conviction in the past year. In other cases, ATF officials said that USAOs broadened criteria, which may result in more potential cases from which to select for investigation and referral for prosecution. ATF officials also said that some USAOs added investigation referral criteria (for individuals prohibited from possessing firearms) to include elements outside the list of federal prohibitors, such as denied individuals with ties to gang activity or terrorism. These attributes outside of NICS prohibiting categories would require further investigation at the local level by ATF, according to officials. While ATF officials have the expectation that the revised criteria would increase the overall workload on ATF field divisions, ATF officials said that it is too early to discern how these changes will impact ATF and the number of denial cases prosecuted by USAOs. EOUSA officials suggested that firearm-related prosecutions may well increase in the future, but added that any increase that results does not necessarily mean that firearms denial prosecutions would increase.
Ten of 13 POC States Do Not Investigate Firearms Denials, But the Remaining 3 Investigate a High Percentage of Denials
Officials from 10 of our 13 selected POC states said that they do not investigate or prosecute any NICS denials, sometimes citing resource availability or the lack of state statutes as the reason. Officials from these 10 states said that while their state does not investigate or prosecute firearms denials, their state may take other actions following a denial. These possible actions may include informing local jurisdictions of the denial for possible investigation, and possible arrest if the denied individual has an active warrant. Other actions cited include revoking a state firearms owner identification card and possibly seizing any firearms; informing ATF of a delayed denial so ATF can retrieve the firearm; and providing the information on each denial to the FBI for input into FBI databases used to perform NICS checks.
Officials and data from the remaining three POC states—Oregon, Pennsylvania, and Virginia—indicate that these states investigate a high proportion of firearms denials. These states have statutes that prohibit providing falsified information on a state or federal firearms form as well as statutes that penalize the attempt to purchase firearms by individuals prohibited from such purchases.
Oregon: Prior to 2014, the state generally did not investigate firearms denials, according to state officials. In 2014, the state changed its policy based on concerns about firearm-related crimes. Specifically, beginning in late 2014, Oregon began investigating all firearms denials, which resulted in more than 2,500 firearms denial investigations in both 2016 and 2017. According to state data, there were between 2,000 and 2,400 firearms denials annually from 2011 to 2013. According to the two Oregon county prosecutors we interviewed, from late 2014 through 2017, their offices accepted about 141 of the more than 700 firearms denial investigations referred to their offices, with most prosecuted successfully.
Pennsylvania: Prior to 2014, the state investigated a relatively small percentage of firearms denials per year using risk-based criteria, according to state police officials. In 2014, the state changed its policy to investigate all firearms denials. According to state police reports, in 2016, approximately 6,500 denial cases were referred for investigation, of which about 1,600 were referred for prosecution and 356 resulted in convictions. For 2017, the state reported that approximately 5,500 denial cases were referred for investigations, of which 1,907 investigations were referred for prosecution, resulting in 472 convictions.
Virginia: Virginia has investigated firearms denials since 1989, according to state officials. Virginia does not refer all firearms denials for investigation, but instead uses risk-based criteria to refer a subset of prohibited categories for investigation, according to these officials. The number of referrals for investigation in Virginia has increased from about 770 in 2011 to around 1,700 in 2016 and 2017. Virginia prosecutors we interviewed in three jurisdictions from localities where a high volume of firearms denial referrals occur said they tend to work with Virginia state troopers who specialize in denial investigations and reported high prosecution rates for the cases they accept. The prosecutors noted that most convictions do not go to trial and are reduced to less severe violations and most of the penalties imposed tend to be probation, but there is the occasional jail term. For example, one Virginia prosecutor said that jail sentences are rare, but for a felon with a record of violence, sentences of 7 to more than 24 months in jail have been imposed.
Unlike federal denial investigation referrals where about 30 percent of the total is for delayed denials, the vast majority of investigations and prosecutions within these three states are related to standard denials. Officials within these states explained that background checks that result in delayed denials are fairly uncommon. According to Pennsylvania officials, in Pennsylvania this is because of state background check policies that provide additional time, 15 days, to complete background checks if a denial is possible, but not clear initially. If the 15-day period expires without an approved transfer, the transaction is not denied, but the firearm is not transferred. According to officials in all three states, FFLs generally will not transfer a firearm until the background checks are completed.
ATF and Selected States Cited Challenges Investigating and Prosecuting Firearms Denials; ATF Has Not Assessed Field Divisions’ Use of Warning Notices
ATF officials from our six selected field divisions said that investigating firearms denials can be challenging because of the high volume and require use of their limited resources. ATF has not assessed field divisions’ use of warning notices in lieu of prosecution, which could provide greater awareness of their deterrence value. EOUSA officials said that denial cases are difficult to prosecute and offer less value for public safety than other prosecutions. State officials said that denial investigations compete with other investigations and can be difficult to successfully prosecute.
ATF and EOUSA Officials Described Denial Investigations and Prosecutions as High Volume and Requiring Use of Their Limited Resources
Federal Denial Investigations
ATF officials from our six selected field divisions—which combined received approximately 60 percent of the total standard denials that ATF referred to field divisions from fiscal years 2011 through 2017—said that investigating firearms denials can be challenging for various reasons. ATF field divisions have taken some steps to help mitigate these challenges, but ATF headquarters could benefit from enhancing its oversight of some aspects of the investigations process.
According to officials from our six selected field divisions, one challenge to investigating and prosecuting firearms denials is the high volume of firearms denial referrals for investigation that ATF sends to field divisions. According to ATF headquarters officials, the DENI Branch has agreed to send these referrals to field divisions based on criteria each ATF field division has established with USAOs within their division. In fiscal year 2017, ATF’s DENI Branch referred 1,889 delayed denial cases to our six selected field divisions—which field divisions are required to investigate—and 5,435 standard denial cases, which they are to consider for further investigation. In the six field divisions, the number of standard denial referrals more than tripled from fiscal years 2011 to 2017, and in two field divisions, the number of standard referrals in 2017 was more than five times the number in 2011. For example, in one field division, the number of standard referrals was 166 in 2011 and increased to 1,064 in 2017. ATF officials did not know why the number of standard and delayed denials had increased during this period.
Officials from all six of our selected ATF field divisions also said that investigating denial cases can be time-intensive and require use of their limited resources. The officials said that delayed denials can be particularly time-intensive because they are required to be investigated and the investigation involves a defined set of actions, including the possible retrieval of the firearm. For example, these investigations typically involve steps to verify the prohibition of the individual, including obtaining court records; contacting the individual and FFL that sold the firearm; and arranging to retrieve the firearm for those individuals found to be prohibited. A fiscal year 2016 ATF funding request through the annual congressional budget justification submission noted the drain on investigative resources because of the requirement for ATF to follow up on delayed denials. While the investigation of standard denials also can take time, officials from our six selected field divisions said they have greater discretion over whether or not to investigate these denials. For example, each field division has discretion to screen all or some of the standard denials, which can include confirming the person was correctly denied and contacting the denied individual and the firearms dealer.
Officials from all of the six selected field divisions said that, in light of the high volume and time-intensiveness of denial cases, they have taken various steps to prioritize the types of cases to investigate. For example, per ATF policy, field divisions prioritize delayed denials over standard denials because a prohibited person may be in possession of a firearm.
Officials from three of the six field divisions said that after verifying that the applicant is prohibited by reviewing the criminal history attached to the case file, they generally close standard denials without further investigation. The officials added that while these cases may meet USAO criteria and be referred to a field division for investigation, they ultimately do not have prosecutive merit based on coordination with prosecutors who have experience in prosecuting these cases. Officials from one field division said that they typically do not devote resources to verifying the prohibited status, and instead triage standard denials based on certain criteria, such as a recent violent felony or domestic violence conviction. Accordingly, officials in that field division only refer to a criminal investigator for further review what they consider the greatest threats to public safety.
Federal Prosecutions of Firearms Denials
EOUSA officials said that USAOs generally do not accept and prosecute denial cases that do not involve aggravating circumstances, as these cases can require significant effort for prosecutors relative to the short length of punishment and may offer little value to public safety because the offender does not obtain the firearm, compared to other cases involving gun violence. The officials added that USAOs filed about 9,200 firearm-related cases in fiscal year 2016 and about 10,400 in fiscal year 2017, but that cases involving falsifying information when attempting to purchase a firearm generally are only a small fraction of USAO efforts. Instead, USAOs primarily focus on cases where persons obtain firearms and are prohibited persons or use the firearms in connection with a criminal offense. According to ATF DENI Branch data, the majority of the 25 firearms denial cases (standard and delayed) that USAOs prosecuted in fiscal years 2016 and 2017 resulted in plea agreements with the defendants.
Federal law provides that it is unlawful “for any person in connection with the acquisition or attempted acquisition of any firearm or ammunition from a licensed importer, licensed manufacturer, licensed dealer, or licensed collector, knowingly to make any false or fictitious oral or written statement or to furnish or exhibit any false, fictitious, or misrepresented identification, intended or likely to deceive such importer, manufacturer, dealer, or collector with respect to any fact material to the lawfulness of the sale or other disposition of such firearm or ammunition ….” Generally, to convict someone for making a false statement on the ATF Form 4473, the prosecutor must establish beyond a reasonable doubt that the seller was an FFL; the defendant made a false statement or used a false identification while acquiring or attempting to acquire a firearm; the defendant knew the statement or identification was false; and the false statement or identification was intended to, or likely to, deceive an FFL about the lawfulness of the firearm sale. EOUSA officials said that prosecutions for falsifying information are very challenging because of the requirement to prove intent, and can become further complicated because the purchaser may not know that he or she is prohibited and was not intentionally trying to deceive an FFL. The officials added that these cases are not appealing to judges and juries from a public safety standpoint. They also said that juries question why the case is being prosecuted in instances when the individual did not get the gun, sometimes resulting in juries refusing to convict these individuals (jury nullification).
EOUSA officials said that the number of prosecutions of firearms denials can be low, particularly in standard denial cases where the system worked and the subject did not obtain a firearm, and because of the priority often given to other cases involving gun violence. EOUSA officials said that delayed denial cases can require less effort to prosecute than standard denials, since USAOs do not need to prove an individual’s intent in making a false statement in purchasing the firearm, only that the prohibited individual is intentionally in possession of a firearm. For instance, generally, to obtain a conviction for a felon in possession of a firearm, the prosecution must establish beyond a reasonable doubt that the defendant had previously been convicted of a crime punishable by imprisonment for a term of more than 1 year; the defendant knowingly possessed a firearm; and the firearm previously passed in interstate commerce. However, officials from our six selected field divisions said that as long as a firearm is recovered from the prohibited person and the person is cooperative, ATF is unlikely to refer delayed denial investigations to USAOs for prosecution.
ATF Has Not Assessed Field Divisions’ Use of Warning Notices in Lieu of Prosecution
While officials from all six selected ATF field divisions said that investigating the increasing number of denial cases can be time-intensive and require use of their limited resources, ATF headquarters has not assessed the extent to which field divisions use warning notices in lieu of prosecution or whether any policy changes could enhance their use as a deterrence tool.
Increase in Denial Investigations
Standard denial cases ATF referred to field divisions for investigation grew by more than 200 percent ATF-wide from fiscal years 2011 through 2017, and by more than 300 percent within our six selected field divisions. Moreover, delayed denial referrals grew by about 70 percent both ATF-wide and within our six selected field divisions during this period. Figure 5 shows the increase in standard and delayed denial cases ATF referred to its field divisions for investigation from fiscal years 2011 through 2017.
At the same time, ATF data show that special agent staffing across our six selected field divisions collectively only increased by one special agent from fiscal years 2011 through 2017. Officials from five of our six selected field divisions said that the increasing number of NICS denial cases received from ATF headquarters for investigation has posed a burden on staff resources.
Field divisions are required to investigate all delayed denial referrals, but have discretion as to how thoroughly they investigate standard denial referrals. Officials from all six selected field divisions said that, to date, one of the ways they have been able to adjust to the increasing volume of standard denial referrals has been by closing them with limited investigation or sending warning notices to the prohibited individuals. However, based on trends over the last 7 years, the number of standard and delayed denial referrals for investigation could continue to increase. In addition, the Attorney General’s March 2018 memo to USAOs directing that the prosecution of false statements on the ATF Form 4473 be enhanced may impact how, and how many, denial investigations ATF performs.
Use of Warning Notices
For all delayed denials, ATF policy requires field divisions to contact prohibited persons within three days of being assigned the case to advise the person of their prohibition. According to ATF headquarters officials, warning notices are intended to inform the individual that he or she is prohibited from purchasing a firearm, should not attempt to purchase a firearm again, and may be subject to prosecution. For delayed denials, ATF policy also requires field divisions to send a written warning notice in all instances where the special agent is unable to make contact with the prohibited person within 3 business days, or when other circumstances exist, such as extraordinary distance or inclement weather. Officials from our six selected field divisions said that while warning notices for delayed denials are not always delivered in writing, all individuals involved in delayed denials receive a warning in some form—e.g., written, oral, or via text message—from the ATF special agent investigating the denial. Officials from one field division said that they send text messages to denied purchasers in lieu of warning letters because they are less intimidating to prohibited persons, the texts save time and money, and are more effective in helping retrieve firearms.
For standard denials, warning notifications are not required. Specifically, ATF policy provides that field divisions may send warning notices to denied persons “where appropriate and in lieu of prosecution.” However, in instances where aggravating circumstances exist, such as if the prohibited person committed a violent felony or made multiple attempts to purchase firearms, ATF policy provides that consideration should be given to hand-deliver the notice to the prohibited person. The six selected field divisions varied in the extent to which they sent warning notices related to standard denials. Specifically, three of the six divisions had established a practice to send notices to all prohibited persons. Officials from these three divisions said that such letters are intended to (1) educate the denied person that he or she is prohibited from purchasing firearms, (2) deter the individual from attempting future purchases, and (3) serve as evidence during any subsequent investigation or prosecution that the individual knew that he or she was prohibited from purchasing a firearm. Officials from one of these field divisions also said that the practice of addressing standard denials by sending warning notices is a good use of limited resources while addressing a public safety concern.
Of the three field divisions that routinely send warning notices for all standard denials, two send them via certified mail, while the other sends letters via standard mail due to limited resources. According to officials from these three field divisions, the costs associated with mailing warning notices also include staff time to locate recipient information and mail the letters, in addition to supervisory review, as is done in at least one field division. A group supervisor in one of these field division’s sub-offices said that while their field division primarily uses certified mail, the sub-office hand delivers these notices for all standard and delayed denials. Officials from one of these three field divisions said that they confirm the prohibited status of individuals before sending the warning notices, while officials at another field division said they do not confirm the prohibited status prior to mailing but that the notice includes information on how to appeal the denial. These three divisions received an average of about 800 standard denials in fiscal year 2017.
Officials from the three divisions that do not routinely send warning notices for standard denials said that notices are only sent for standard denials in rare cases, such as when there are aggravating circumstances—for example, criminal activity or not cooperating with ATF after the attempted purchase.
Officials from one field division stated that warning notices were used for standard denials by individual agents in the past, but there was no field division policy to do so routinely. Officials from another field division said that due to limited resources, the decision was made to not send these notices, though they said the notices could be an effective deterrent for prohibited individuals from trying to possess a firearm or attempting to purchase from an FFL. ATF headquarters officials said that under ATF policy, the decision whether to send warning notices for all standard denials is made by individual field divisions. Therefore, they did not know the extent to which each of the 25 divisions used this practice.
Standards for Internal Control in the Federal Government call on federal managers to design control activities to achieve an agency’s objectives. These controls can include using quality information to make informed decisions, such as how best to achieve ATF’s objectives given limited resources; evaluating ATF’s performance in achieving key objectives; and addressing risks, including its limited resources to investigate or prosecute denial cases. While ATF policy provides that individual field divisions determine their use of warning notices, ATF headquarters is uniquely positioned to assess use of the notices across all field divisions. Assessing the extent to which ATF field divisions use warning notices for standard denials would provide ATF headquarters with greater awareness regarding agency-wide efforts to use the notices as a deterrence tool in lieu of prosecution. An assessment could also better inform ATF as to whether applying certain practices across all field divisions would be a feasible and effective use of limited investigative resources, given the small number of standard denial cases prosecuted each year, and whether to revise related policies.
Selected States Reported That Denial Investigations Compete with Other Investigations and Cases Can Be Difficult to Successfully Prosecute
State Denial Investigations
State police supervisors in all three states (Oregon, Pennsylvania, and Virginia) that investigate denials said investigators are generally assigned to denial investigations as their time permits. Supervisors also said these investigations are generally considered time consuming and can sometimes impact other duties. State police supervisors said that these investigations can be disruptive to operations by taking troopers away from their core duties, such as traffic enforcement and response, except where troopers are dedicated to conducting these investigations. State troopers echoed this point, adding that denial investigations are difficult to conduct given the amount of documentation needed for prosecution when they have other duties. Local law enforcement officials in Oregon and Pennsylvania also said that denial investigations are disruptive, as they are usually forwarded to officers when they are on patrol, sometimes many weeks or months after the firearms background check was initiated.
Investigators in all three states also said they face challenges assisting with prosecutions of denied persons, including gathering the necessary documentation to prove the individual knew they were prohibited. For example, Virginia troopers said that obtaining records on out-of-state convictions and mental health prohibitions, and locating documentation on older convictions, can be especially difficult. Troopers in Oregon and Virginia commented that in their experience, there can be some degree of inaccuracy in the criminal records in their state. For example, they said that arrests and prosecution results may not be accurately reflected in the criminal history of the denied person. When the trooper checks the actual record, it is sometimes discovered that the person is not prohibited. A Virginia trooper said this is especially common for juvenile convictions.
Oregon and Virginia officials said they have been able to mitigate these challenges by utilizing specialized troopers to conduct denial investigations. These troopers are taken off line and generally perform denial investigations almost exclusively. In both states, these specialized troopers conduct a large portion of the denial investigations in these states or in designated locations within the state. Virginia State Police officials told us that some areas within police divisions that receive a high volume of denials for investigation use specialized troopers that spend all or most of their time investigating firearms denials. These Virginia troopers reported that they have become more efficient than troopers that do not specialize because the repetition of performing multiple investigations improves the learning curve and the quality of their investigations. Virginia State Police officials said that while any area may assign troopers to work exclusively on denial investigations, most areas either cannot afford to remove a trooper from road coverage or do not investigate enough cases involving persons denied firearms to make it an effective use of resources. According to Oregon officials, five specialized troopers in the state investigated more than 1,100 of the almost 2,600 firearms denials referred for investigation in 2016. These troopers covered the denials for several metropolitan areas in Oregon and cited efficiencies in conducting and referring investigations for prosecution.
State Denial Prosecutions
State prosecutors we interviewed in the three states that conducted denial investigations said the primary challenge in prosecuting denial cases is in gathering the evidence needed to prove that the individuals knew they were prohibited. They added that the difficulty in gathering evidence for certain prohibited categories also makes those prosecutions more difficult. For example, obtaining records related to old convictions, out-of-state convictions, and mental health prohibitions is a common challenge. There are also challenges due to record retention policies for specific prohibitions. For example, a Virginia prosecutor said that prosecuting denials for misdemeanor crimes of domestic violence convictions in Virginia that are more than 10 years old is difficult because these records may be destroyed under state law after 10 years. Oregon state investigators we interviewed said that, under state statutes, successfully prosecuting someone for falsifying information on firearms purchase forms requires proving that the person “knowingly and willingly” falsified information on the form, which can be difficult to prove. One Pennsylvania investigator also said that denied individuals may not understand the questions on the forms and genuinely believe they are not prohibited.
Prosecutors we interviewed who worked with specialized investigators reported that they have worked closely with these troopers to facilitate successful prosecutions. For example, an Oregon prosecutor we spoke to utilizes a case reporting process where the trooper advises the prosecutor of the strong cases to be considered for prosecution. This allows prosecutors to focus their attention on the cases more likely to be successfully prosecuted. In one Virginia county, the prosecutor’s office provides troopers a checklist of important points the trooper should address to make a strong case for prosecution. Virginia prosecutors in jurisdictions served by a specialized trooper said that they confer with the troopers regularly and are able to successfully prosecute a high percentage of the denial investigations these troopers conduct.
Firearms Denial Investigations and Prosecutions Are Generally Based on Aggravating Circumstances in Addition to Criminal Records
While individuals are denied firearms purchases because they are prohibited from possessing firearms under federal or state law, federal denial investigations and prosecutions are generally based on additional aggravating circumstances. The three states that investigate denial cases have established priorities for investigating and prosecuting such cases.
ATF Investigations Most Frequently Involve Convicted Felons, but Aggravating Circumstances Are Generally Needed for Prosecution Referrals
The types of standard and delayed denial cases investigated by ATF field divisions and referred to USAOs for prosecution are determined by multiple factors, including the prohibiting category (e.g., felony conviction), criminal history of the denied individual, USAO investigative referral criteria, and the nature of the ATF investigation itself.
Of the almost 21,000 delayed denials the ATF DENI Branch referred to ATF field divisions for investigation from fiscal years 2011 through 2017, 32 percent were denied for being convicted felons, 23 percent for a qualifying misdemeanor crime of domestic violence, and 19 percent for being an unlawful user of, or addicted to, a controlled substance. As discussed earlier, all delayed denials are referred to the appropriate field division for investigation.
Of the almost 36,000 standard denials the ATF DENI Branch referred to field divisions for investigation during this time period, 36 percent were denied for being convicted felons, 30 percent for a qualifying protective order, and 16 percent for a conviction for a qualifying misdemeanor crime of domestic violence. For standard denials, USAO investigative referral criteria, not the prohibiting category itself, determines which cases are referred for investigation.
From fiscal years 2015 through 2017, the number of delayed denials referred to ATF field divisions for investigation increased by 46 percent (from 2,742 to 3,993). This increase was driven by cases in which the prohibiting category was drug-related, which increased by about 300 referrals (60 percent increase); involved misdemeanor crimes of domestic violence, which increased by about 250 referrals (34 percent increase); and involved felony convictions, which increased by about 280 referrals (34 percent increase). Also during this period, the number of standard denials referred to ATF field divisions for investigation increased by 30 percent (from 6,715 to 8,717). This increase was driven by misdemeanor crimes of domestic violence, which increased by about 626 referrals (62 percent increase), and felony convictions, which increased by about 659 referrals (25 percent increase). Cases in which the prohibiting category was related to mental health or protection orders also increased by 42 percent (about 200 referrals) and 21 percent (about 300 referrals), respectively. Figure 6 shows the breakdown of investigation referrals by prohibiting category from fiscal years 2011 through 2017.
The types of denial cases that ATF’s DENI Branch refers to field divisions for investigation are determined by the USAO referral criteria established in the district in which the purchase took place. Based on our analysis of the standard denial referral criteria for the 34 USAO districts that cover the six selected ATF field divisions as of February 2017, there are similarities in the criteria used across these USAO districts. For example, most of the 34 districts direct ATF to refer standard denials for investigation if the cases involved recent convictions for violent crimes or convictions for misdemeanor crimes of domestic violence. Also, about two-thirds of the 34 USAO districts direct ATF to refer cases in which prohibited persons have made two or more attempts to buy firearms while prohibited. In addition to the 10 prohibitions listed under federal law, other referral criteria used by USAO districts include prohibited individuals who are also suspected terrorists or associates of suspected terrorists; known gang members or members of criminal organizations; or suspected of gun trafficking.
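As a rough illustration of how district referral criteria translate into case screening, the commonly cited criteria above can be expressed as a predicate. The field names and threshold below are hypothetical placeholders and do not reflect any actual USAO district's guidelines.

from dataclasses import dataclass

@dataclass
class StandardDenial:
    recent_violent_felony: bool
    domestic_violence_misdemeanor: bool
    attempts_while_prohibited: int
    gang_or_terrorism_ties: bool

def meets_referral_criteria(d: StandardDenial) -> bool:
    # Mirrors the criteria described above: recent violent crime or domestic
    # violence convictions, repeat purchase attempts while prohibited, or
    # district-specific factors such as suspected gang or terrorism ties.
    return (d.recent_violent_felony
            or d.domestic_violence_misdemeanor
            or d.attempts_while_prohibited >= 2
            or d.gang_or_terrorism_ties)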
Aggravating Circumstances Resulting in a Prosecuted Firearms Denial Case
An individual attempted to purchase a firearm while under indictment for first degree robbery, in which the subject used a woman to set up an exchange of sex for marijuana. During the exchange, the subject robbed and shot the victim. The subject was charged with two felonies—falsifying information on the background check form and illegal possession of a firearm while under indictment. The subject pled guilty to both charges and was sentenced to 24 months in prison and 3 years of supervised release.
The denial cases ATF field divisions refer to USAOs for prosecution generally include aggravating circumstances in addition to the factors discussed above related to an individual’s criminal history. According to ATF officials in one field division, these aggravating circumstances could include violent felonies or multiple serious offenses in a short period of time, especially if these occurred in close proximity to the timing of the attempted firearms purchase. For example, a prohibited person with multiple armed robberies or actively involved in gang activity could be considered to have aggravating circumstances. The officials described a recent incident where an individual was found in possession of PCP three times in a span of a couple months, then bought a firearm and fired it at an occupied dwelling. This was considered a clear example of aggravating circumstances, and the individual was prosecuted for making a false statement as well as illegal possession of a firearm stemming from the delayed denial. Additional examples provided by ATF officials from our six selected field divisions of recent cases ATF referred for prosecution include:
An individual purchased a firearm from an FFL and sold that firearm to a prohibited person. The original purchaser was later denied (delayed denial) due to a prior drug conviction. The purchaser was charged with illegally possessing a firearm, making a false statement in the purchase of a firearm, and making a “straw purchase,” which is when an individual illegally purchases a firearm on behalf of another person. According to ATF, this individual was sentenced to 1 year in federal custody and 3 years of supervised release.
An individual was charged with making false statements in the attempted purchase of a firearm. The individual did not receive the firearm as a result of a standard denial. During the investigation, the subject was not cooperative, and had an extensive criminal history in multiple states dating back 35 years, including several contacts with law enforcement on domestic violence and protective orders. The subject was charged with falsifying a background check form, to which he pled guilty and was sentenced to 12 months in prison.
An individual under indictment for armed criminal action committed first-degree robbery in which he used a woman to set up an exchange of sex for marijuana. During the exchange, the subject robbed and shot the victim. The subject later attempted to purchase a firearm and was able to obtain the firearm as a result of a delayed denial. Later, a completed NICS check revealed that he was a prohibited person for being under indictment, and he was arrested later that week. The subject was perceived as a threat to the community and charged with two felonies: falsifying the background check form and illegal possession of a firearm while under indictment. He pled guilty to both charges and was sentenced to 24 months in prison and 3 years of supervised release.
Of the 12 examples provided by our six selected field divisions, 9 involved delayed denials and 3 involved standard denials. Eleven of the 12 cases had been completed as of May 2018. Of the 9 cases charged in federal court, 1 case was declined by prosecutors, and the other 8 resulted in guilty pleas. These guilty pleas resulted in penalties ranging from time served to 33 months in prison, along with additional punishments such as probation, fines, and mandated treatment programs. Of the 3 cases charged in state court, 2 resulted in guilty pleas and 1 had not been resolved as of May 2018. Of the 10 cases pursued by federal and state prosecutors that resulted in guilty pleas, 7 cases involved a subject with a history of drug crimes, 6 involved violent crimes, and 4 involved domestic violence. Additional information on these case examples can be found in appendix VI.
According to officials from our six selected ATF field divisions, standard denial referrals may meet USAO criteria and be referred to a field division for investigation, but almost always do not have prosecutive merit based on coordination with prosecutors. The officials noted that USAOs generally do not accept standard denials that only involve a violation related to falsified information. The officials also said that minor crimes, such as burglary, from decades ago would likely not be a high enough threat for prosecution. For delayed denial cases, officials from the six field divisions said that if a firearm is retrieved or otherwise recovered from the prohibited person—and the person is cooperative—ATF is unlikely to refer these investigations to USAOs for prosecution unless there are aggravating circumstances.
Oregon, Pennsylvania, and Virginia Investigate a Large Proportion of Firearms Denials and Prioritize Certain Prohibitions, but a Small Number Are Prosecuted
State Investigations
The types of denial cases that are referred for investigation in Oregon, Pennsylvania, and Virginia are determined in part by the priorities the states have set for such referrals. For Oregon and Pennsylvania, which investigate all firearms denials, these priorities include cases involving stolen guns, purchasers with active warrants, active protection orders, and prior felony convictions. In these states, conviction of a crime punishable by more than one year of imprisonment (i.e., a felony conviction) is the most common reason for denial. Virginia investigates a subset of all denials based on risk, and has a policy to prioritize denials that is similar to Oregon’s and Pennsylvania’s—active warrants, active protection orders, and mental health issues.
According to Virginia state police officials, denials can be referred for investigation if they involve one or more of a set of prohibiting categories; in 2017, these amounted to about 50 percent of the almost 3,600 denials recorded. Virginia state police officials said that investigations tend to be handled in the order they arrive, regardless of prohibiting category. Two troopers said that Virginia residents whose criminal histories are exclusively in Virginia move to the top of their lists because records for these individuals are easiest to obtain. The investigators in these Virginia jurisdictions said they tend to refer most of their investigations for prosecution, regardless of the prohibiting category, if there is evidence to support the falsified information charge. Pennsylvania investigators and supervisors generally said that no priority is given to the denial investigation referrals they receive; investigations tend to be handled on a first-in, first-out basis, regardless of the prohibiting category of the denied person. One supervisory trooper said that because these investigations are usually sent to the field 2 to 3 months after the transaction has occurred, they are generally considered low priority compared to assaults, robberies, and other crimes a trooper investigates. Oregon state police management and troopers told us they prioritize cases involving stolen guns, purchasers with active warrants, active protection orders, and prior felony convictions. Local law enforcement agencies that investigate denial cases in Oregon told us they do not prioritize any cases—except for active warrants—handling them in the order they are received.
Investigators in all three states said that the criminal histories of those investigated tend to be minor. For example, outside of the prohibiting offenses that led to persons being denied, most of these individuals' criminal histories tend to consist of old prohibiting offenses, such as nonviolent felonies or drug possession, with few gun violations noted. Investigators in these three states said that this may be because individuals with the most severe criminal histories do not attempt to purchase firearms through FFLs. However, one investigator said that individuals who were denied based on misdemeanor crimes of domestic violence tend to have multiple charges in their backgrounds.
State Prosecutions
State investigators said prosecutors' interest or willingness to prosecute is a key determinant of whether a case is referred for prosecution. One investigator also said he may check with prosecutors early in an investigation to determine the likelihood of prosecution. According to Oregon troopers, denial investigations that are recommended for prosecution often involve convictions for felonies, misdemeanor crimes of domestic violence, and restraining orders. The troopers said that the strength of the case—including the adequacy and availability of proof that the individual knew he or she was prohibited and falsified information—determines which cases are referred to prosecutors.
Prosecutors from all three states said that they generally pursue cases against individuals who have indications of violence, including protection orders, domestic violence, and felony convictions. Individual prosecutors also identified specific prohibiting categories, based on public safety concerns, as their priorities for prosecution. An Oregon prosecutor said there is a good public safety argument for prosecuting denials based on domestic violence, mental health, and felony prohibitions when there is probable cause. However, for other prohibiting categories, such as being on probation or being a drug user, the prosecutor said that prosecuting these denial cases is not very useful given the amount of effort required to prosecute. A Virginia prosecutor cited domestic violence and protection orders as being prosecuted most often. A Pennsylvania prosecutor said that his county prosecutes most of the referrals it receives, with denials for multiple instances of driving under the influence, mental health, and domestic violence being the most common.
State prosecutors we interviewed also said the cases they accept for prosecution may be influenced by the fact that certain types of cases are harder to prove. For example, they said that denials involving mental health, drug use, and misdemeanor crimes of domestic violence are often harder to prove, due in part to the difficulty of obtaining related records. The officials added that cases involving out-of-state and older convictions are also not prosecuted as often as other cases because of the difficulty of obtaining records. State prosecutors also said that there are certain circumstances in which prosecutors are reluctant to pursue prosecution—such as cases in which the prohibiting offense occurred when the individual was a juvenile—because a firearms denial conviction would establish an adult criminal record where no criminal record had previously existed.
According to the prosecutors we contacted, the criminal histories of denied individuals generally involved minor violations other than the prohibiting offense. Prosecutors said the criminal history of the individual can play a role in whether felony charges are filed, as opposed to misdemeanor charges, and in sentencing. For example, one Virginia prosecutor said that in his county he will file felony charges in a denial case only when an individual was denied based on an active protection order or a serious felony. Another Virginia prosecutor said that he considers both criminal history—especially convictions for violent felonies or misdemeanors—and multiple arrests that resulted in no conviction when deciding whether to charge the denied person with a felony or misdemeanor. The prosecutor noted, however, that individuals in denial cases tend not to be violent felons or hardened criminals. According to a Pennsylvania prosecutor, almost all defendants are ultimately charged with misdemeanors. The prosecutor noted, however, that the state recently brought multiple felony charges against a person who was denied a firearms purchase based on a murder conviction in 1973.
These prosecutors also stated that they often try to resolve denial cases through plea agreements whenever possible, as these cases often do not result in convictions when they go to trial. For example, a prosecutor in Pennsylvania told us about one denial case that went to trial in which the jury found the denied person not guilty. The defendant was prohibited from purchasing a firearm based on convictions for repeatedly driving while under the influence, a misdemeanor with a potential prison term of 5 years in that state. The attorney said the jury believed that it was a pointless prosecution for a firearms denial offense. In Virginia, one prosecutor also described a case in which a person was denied because of a mental health prohibition and found not guilty of falsifying information when attempting to purchase the firearm. He attributed this to a sympathetic defendant and jury reluctance to impose a criminal conviction on an individual without a criminal record.
Further, state officials said that the penalties handed down when denied individuals are convicted tend to be minor. The Oregon prosecutors said that common penalties are fines (usually in the hundreds of dollars) and probation of up to 1 year, depending on the criminal background of the denied person. According to a Pennsylvania state police official, in some instances the charges are pled down to a lesser violation, such as disorderly conduct, which results in an approximately $300 fine. The two Pennsylvania prosecutors we interviewed said that most denial prosecutions in their jurisdictions are pled down to misdemeanors, eliminating the need for a trial. According to the prosecutors, common penalties for misdemeanor convictions include probation and the requirement to pay court costs (upwards of $1,000 in one county). The prosecutors added that denied felons with substantial criminal records occasionally receive jail sentences ranging from about 1 year to almost 2 years. Prosecutors across the states said that they try to plead cases—thus avoiding trial—whenever possible. One Pennsylvania prosecutor said that cases that lack strong evidence and cannot be pled are sometimes dropped because conviction would be difficult. Another Pennsylvania prosecutor said that jury apathy in one strong case led his office to pursue fewer denial cases. Virginia prosecutors said that most convictions are for misdemeanor charges and result in probation, fines, and court costs. They did say, however, that denied individuals with violent felony convictions have received jail time.
Conclusions
At the federal level, the number of firearms denial cases ATF has referred to its field divisions for investigation has increased substantially in recent years, placing a burden on field divisions whose resources have not increased, while the number of USAO prosecutions remains low—totaling 12 in fiscal year 2017. Assessing the extent to which ATF field divisions use warning notices for standard denials in lieu of prosecution would provide ATF headquarters greater awareness of agency-wide deterrence efforts and better inform the agency as to whether any policy changes are needed.
Recommendation for Executive Action
We recommend that the Deputy Director, Head of the Bureau of Alcohol, Tobacco, Firearms and Explosives assess the extent to which ATF field divisions use warning notices for standard denials in lieu of prosecution and determine whether any policy changes are needed. (Recommendation 1)
Agency Comments
We provided a draft of this report to DOJ for review and comment. DOJ concurred with our recommendation to ATF and provided technical comments, which we incorporated in this report where appropriate.
We are sending copies of this report to the appropriate congressional committees, the Attorney General, the Deputy Director, Head of the Bureau of Alcohol, Tobacco, Firearms and Explosives, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions are listed in appendix VII.
Appendix I: Objectives, Scope and Methodology
Our objectives in this report were to (1) describe the extent to which federal and selected state law enforcement agencies investigate and prosecute firearms denial cases; (2) examine the challenges, if any, that federal and selected state law enforcement agencies face in investigating and prosecuting firearms denial cases; and (3) describe the circumstances that lead to the investigation and prosecution of persons denied firearms.
To describe the extent to which federal and selected state law enforcement agencies investigate and prosecute firearms denials, we reviewed published reports regarding federal and state law enforcement efforts to investigate and prosecute firearms denials. For federal efforts, we requested data from the Federal Bureau of Investigation's (FBI) National Instant Criminal Background Check System (NICS) regarding firearms denials provided to the Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF), by state and prohibiting category, for fiscal years 2011 through 2017. We reviewed the internal controls in place for these data and determined that the data were reliable for our purposes. We requested and received data from the ATF Automated National Instant Criminal Background Check System Referral Application and the NForce Case Management System that showed how many of these denials, both standard and delayed, were forwarded from ATF's Denial Enforcement NICS Intelligence (DENI) Branch to ATF field divisions, broken out by the prohibiting category of the denials. This provided us the total count of denials that ATF may investigate nationwide. To assess the reliability of these data, we reviewed ATF's internal controls and data quality assurance program. We determined that these data were reliable for the purpose of our reporting objectives.
To examine federal prosecutions of denied persons, we requested information from ATF's case management system that identified the NICS cases that were prosecuted, including those instances in which a conviction was recorded. For state investigations and prosecutions, we selected the 13 states that perform their own background checks for all firearms transactions, searched their state police and state agency websites to identify the states' background check units or staff associated with this function, and inquired about their policies regarding the investigation of persons denied firearms purchases. From these contacts we determined that 10 of these selected states did not perform investigations, while 3 point-of-contact (POC) states did investigate these denials.
We analyzed data from the state police in Oregon, Pennsylvania, and Virginia that identified the number of firearms denials recorded, the prohibiting category of the denials, and the number of these denials referred to state or local law enforcement for investigation. To assess the reliability of these data, we interviewed knowledgeable individuals about the procedures for creating the data and reviewed the internal controls in place within these systems. We determined that these data were reliable for the purpose of our reporting objectives. We spoke to state and local investigators and prosecutors in these states to discuss the investigative processes followed and the frequency of prosecution. Though these prosecutors tended to lack hard data on the number of these cases prosecuted and their outcomes, they were able to share their experiences prosecuting these cases and to estimate the number of such cases that their offices have addressed. We believe their experiences provide an understanding of the demands these prosecutions place on prosecutors' offices and the value these prosecutions have for the jurisdiction in question.
To describe the challenges, if any, that federal and selected state law enforcement agencies face in investigating and prosecuting firearms denials, we used the denial referral data provided by ATF to identify the field divisions that received the most denial referrals for investigation. We found that 6 field divisions received about 60 percent of the total ATF standard denial referrals over the fiscal year 2011 through 2017 period. These six field divisions also received more than half of the delayed denial referrals distributed to the 25 ATF field divisions over that time period. To assess the reliability of the referral data and the case data, we discussed the internal controls in place with knowledgeable officials and received a copy of the ATF quality assurance plan for review. We determined that the data were reliable for the purposes of our reporting objectives.
We contacted officials in these six field divisions and discussed the investigative process for standard and delayed denial investigations, as well as the challenges these investigations posed to the ATF staff in these field divisions. We also evaluated ATF's investigative procedures and internal controls against Standards for Internal Control in the Federal Government. We also discussed the types of cases that each field division referred to the appropriate USAO for prosecution, and ATF headquarters provided detailed examples of these denial cases for each of the six field divisions. We spoke to EOUSA officials to discuss the circumstances that would lead a USAO to prosecute a firearms denial and the challenges faced in these prosecutions. For state denial investigation challenges, we spoke to state troopers and local law enforcement to learn about the procedures for conducting these investigations, the challenges that investigators face, and how and when these firearms denial investigations are referred to prosecutors. We also spoke with multiple prosecutors from each of these states and discussed their offices' policies for accepting these denial cases, how often these cases were prosecuted in these localities, and the general outcomes of the cases. Though we did not speak to a representative sample of prosecutors across our selected states, we believe their views provide insights into the types of challenges faced by prosecutors in those states.
To identify the circumstances that lead to investigations and prosecutions of firearms denials, we reviewed federal denial investigations by visiting the ATF DENI Branch, the office that uses USAO criteria to screen federal NICS denials for referral to ATF field divisions. There, we observed how denials are screened and discussed internal controls. We also requested USAO referral criteria from the 34 USAO districts that comprise the six ATF field divisions that received the most denial referrals from 2011 through 2017. We also analyzed standard and delayed denial referral data that captured the prohibiting categories of the referrals to those field divisions. Further, we analyzed standard and delayed denial case data for the investigations that were referred for prosecution for fiscal years 2015 through 2017, and those that were ultimately prosecuted. To assess the reliability of the data, we discussed the internal controls in place for entering the data and the quality assurance plan in place after data were entered. We determined that the data were reliable for the purposes of our reporting objectives.
Officials from our six selected ATF field divisions also provided examples of denial cases investigated and referred for prosecution. These case examples included the specific circumstances that convinced the field division to investigate and refer the case for prosecution. For these federal denial prosecutions, we identified firearms denial cases in PACER and LEXIS for the years 2015, 2016, and 2017 to identify the specific circumstances of the cases prosecuted, the statutes used to charge the defendants, and the outcomes of the cases. We also spoke to EOUSA officials and discussed the reasons that certain denial cases were prosecuted while thousands of others were not. For state denial investigation circumstances, we spoke with state and local investigators from the three selected states that investigate and prosecute denials and discussed the circumstances—including state priorities and the prohibiting category and criminal history of those investigated—that resulted in state firearms denials being referred for prosecution. We also spoke with multiple prosecutors from the same states and asked them to describe the characteristics of cases they are more likely to prosecute, as well as those they are less likely to prosecute. While we did not speak to a representative sample of investigators and prosecutors from these states, we believe their experiences and viewpoints provide insights into how these investigations and prosecutions are conducted and prioritized in these states.
We conducted this performance audit from March 2017 through September 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Bureau of Alcohol, Tobacco, Firearms and Explosives Form 4473
Appendix III: Investigation and Prosecution of Firearms Denials in Oregon
This appendix includes information on the investigation and prosecution of individuals denied firearms purchases in the state of Oregon.
Firearms Background Checks
In the state of Oregon, the Oregon State Police (OSP) Firearms Unit serves as the point of contact responsible for conducting background checks for firearms transactions. OSP's Firearms Instant Check System (FICS) unit conducts criminal background checks to determine the eligibility of individuals attempting to transfer or purchase a firearm. Oregon law requires that gun dealers request that OSP conduct a criminal history record check on the purchaser before a firearm is delivered to the purchaser. Dealers may submit these requests either by telephone or online. The FICS unit determines from criminal records and other available information whether the purchaser is disqualified under state or federal law from completing the transfer or is otherwise prohibited by state or federal law from possessing a firearm.
Generally, for gun shows, Oregon law prohibits a transferor who is not a gun dealer from transferring a firearm unless the transferor requests a criminal background check prior to completing the transfer, receives a unique approval number from OSP indicating that the recipient is qualified to complete the transfer, and has the recipient complete the form for transfer of a firearm at a gun show, or completes the transfer through a gun dealer. Generally, for private firearms sales, Oregon law requires a transferor to complete the transfer of a firearm to a transferee through a gun dealer. Prior to the transfer of the firearm, both the transferor and the transferee must appear in person before a gun dealer, with certain exceptions, with the firearm and request that the gun dealer perform a criminal background check on the transferee.
Process for Conducting a Background Check

When a FICS background check is requested, Oregon law requires the seller to provide information about the firearm—so OSP can ensure it has not been reported stolen—and about the purchaser in order to conduct a criminal history check. If the purchaser is qualified, a unique approval number is provided to complete the transaction. The dealer then enters this number on the Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF) background check form (Form 4473) and on a thumbprint form, which is attached to the Form 4473 and retained for 5 years. By statute, if OSP is unable to determine whether the purchaser is approved or denied within 30 minutes, OSP is required to notify the dealer and provide an estimate of when the check will be completed. These checks are placed in a pended/delayed status until sufficient record information can be obtained to complete the request.
Federal law provides that if the FBI or state agency cannot complete a background check and make a final determination (i.e., proceed or deny) within 3 business days, the Federal Firearms Licensee (FFL) may transfer the firearm pursuant to federal law, unless state law provides otherwise. Regardless of the FFL's decision to transfer or not transfer the firearm, OSP will continue to research missing information in order to complete the background check request and provide either an approval number or notice that the person is denied for the FFL's records. Typically, a case is placed in "pend" status because the record is missing information necessary to make a final determination. For example, domestic violence charges may not include details about the relationship needed to make a determination; state, local, or federal agencies may not have the resources to respond in a timely manner to requests for missing information; or it may be unclear whether prior charges were a felony or misdemeanor.
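This 3-business-day rule drives the distinction between standard and delayed denials discussed throughout this report. The short Python sketch below illustrates the timing logic; it is a simplified illustration we constructed (the function and variable names are ours), not code used by the FBI, OSP, or any state system, and it omits details such as federal holidays.

```python
from datetime import date, timedelta
from typing import Optional

def may_default_proceed(check_started: date, today: date,
                        final_status: Optional[str]) -> bool:
    """Simplified illustration of the federal default-proceed rule: if no
    final determination (proceed or deny) is returned within 3 business
    days, the FFL may transfer the firearm, unless state law provides
    otherwise. Federal holidays are ignored here for brevity."""
    if final_status is not None:
        return final_status == "proceed"
    business_days = 0
    day = check_started
    while day < today:
        day += timedelta(days=1)
        if day.weekday() < 5:  # Monday=0 through Friday=4
            business_days += 1
    return business_days >= 3

# Example: a check opened on Monday, May 7, with no determination by Friday.
print(may_default_proceed(date(2018, 5, 7), date(2018, 5, 11), None))  # True
```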
When a transaction is denied, it is either labeled a Priority FICS Call, and is dispatched to the first available trooper or local law enforcement officer, or it is labeled a Cold FICS Call, and dispatched to the appropriate OSP office and next available trooper or local law enforcement officer. Priority calls are those that involve a convicted felon, a serviceable warrant, a stolen gun, or a restraining/stalking order. Oregon Executive Order 16-12 requires notification of certain officials after a transaction is denied if the prohibited person is on probation, on parole or post-prison supervision, subject to a court-issued release agreement or protective order, or subject to supervision by a Psychiatric Security Review Board. Figure 7 shows the process for purchasing a firearm from a dealer in Oregon.
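To make the dispatch rule described above concrete, the following minimal Python sketch classifies a denial as a Priority or Cold FICS Call based on the categories OSP officials described. The data structure and names are our own illustration, not OSP's actual system.

```python
# Prohibiting circumstances that OSP treats as priority calls, per the
# description above.
PRIORITY_REASONS = {
    "convicted felon",
    "serviceable warrant",
    "stolen gun",
    "restraining/stalking order",
}

def classify_fics_call(denial_reasons: set) -> str:
    """Return the dispatch label for a denied transaction."""
    if denial_reasons & PRIORITY_REASONS:
        return "Priority FICS Call"  # dispatched to first available officer
    return "Cold FICS Call"          # routed to the appropriate OSP office

print(classify_fics_call({"stolen gun"}))       # Priority FICS Call
print(classify_fics_call({"drug possession"}))  # Cold FICS Call
```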
According to OSP officials, 95 to 97 percent of background checks are approved and less than 1 percent are denied within minutes of initiation, while roughly 3 to 5 percent are placed in pend/delay status. According to FICS officials, about 95 percent of pend/delay transactions are ultimately approved.
A challenge phone line is available for individuals who have been denied or pended and wish to find out the reason or to challenge a denial determination. The gun dealer may be asked to fax the ATF Form 4473 and thumbprint form to the FICS Unit to assist in the challenge process. The purchaser is provided a reference number upon request to be used to appeal the determination through the Federal Bureau of Investigation's (FBI) National Instant Criminal Background Check System (NICS) program.
Denials and Prohibited Categories
Oregon law prohibits individuals who have been convicted of certain offenses from possessing firearms. For example, Oregon prohibits the possession of a firearm by any person found to have a mental illness and subject to a court order for treatment or commitment that prohibits the purchase or possession of a firearm as a result of mental illness. Finally, an individual is prohibited if, while a minor, he or she was found to be within the jurisdiction of the juvenile court for having committed an act that, if committed by an adult, would constitute a felony or a misdemeanor involving violence, and was discharged from the jurisdiction of the juvenile court within the last 4 years. Table 2 shows Oregon firearms denials by prohibiting category.
From 2011 through 2017, persons convicted of a felony made up the most common category among firearms denials, followed by individuals on probation, individuals convicted of a violent misdemeanor in the previous 4 years, and wanted persons. The two largest prohibiting categories, convicted felons and individuals on probation, made up 32 percent and 24 percent, respectively, of all denials in 2016. In 2017, convicted felons fell to 29 percent and individuals on probation increased to 28 percent. Wanted persons, the fourth largest group in 2016, made up 10 percent of all denials that year but fell to less than 5 percent of all denials in 2017. Total firearms denials fluctuated during that span, from more than 2,400 denials in 2012 to 1,050 in 2017. From 2015 to 2017, denials declined each year. Total firearms transactions also fluctuated but generally increased during that span, from less than 200,000 in 2011 to over 287,000 in 2017. From 2015 to 2017, denials fell by 45 percent while total transactions increased by 9 percent.
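The percentage changes cited above follow from the standard percent-change formula; the brief Python sketch below works through the arithmetic. The 2012 and 2017 denial counts come from the text, but the 2015 baseline shown is back-calculated for illustration only, since the report states only that denials fell about 45 percent from 2015 to 2017.

```python
def pct_change(old: float, new: float) -> float:
    """Percent change from an old value to a new value."""
    return (new - old) / old * 100

denials_2012, denials_2017 = 2400, 1050
print(round(pct_change(denials_2012, denials_2017)))  # -56 (2012-2017 decline)

# Illustrative 2015 baseline consistent with the reported 45 percent decline;
# this figure is not from the report.
denials_2015 = 1900
print(round(pct_change(denials_2015, denials_2017)))  # -45
```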
Investigations of Denials
Since 2014, Oregon has had a policy of investigating all persons denied a firearms purchase. Prior to 2014, OSP investigated only a small percentage of persons denied firearms purchases, with a priority placed on denied persons with an active warrant.
According to OSP, the FICS unit provides the initial source of information in a denial investigation packet, which generally includes but is not limited to:
FICS Transaction Report, which includes information regarding the denied transfer, the subject firearm, the point of sale location, the denied transferee, and the specific reason for denial;
Oregon Criminal History data;

Interstate Identification Index information;
FBI’s NICS information; and
Court records, police reports, or other records specific to the individual transferee and the denial in question.
Before an investigation is started, OSP must determine whether the investigation should be conducted by OSP or local law enforcement. If the jurisdiction where the transaction took place has an agreement with OSP to receive training on firearms investigations, then the local law enforcement agency will conduct the investigation; otherwise, OSP will conduct it. In 2016, 26 percent of denial investigations were conducted by local law enforcement, up from 22 percent in 2015. The percentage covered by local law enforcement rose to 28 percent in 2017. In September 2017, three large local jurisdictions agreed to receive firearms denial referrals from OSP, and for the last 3 months of 2017 the proportion of denials referred to local law enforcement was about 33 percent.
OSP has five troopers dedicated full-time to FICS denial investigations in specific locations across the state. These troopers have essentially been pulled off regular patrol duties and dedicated full-time to firearms denial investigations, according to OSP officials. These troopers cover the denials for most of the major metropolitan areas in Oregon. Except for the highest priority cases, denial cases are tasked to the dedicated FICS troopers if the case falls within their geographic area of responsibility. According to Oregon officials, the five specialized troopers investigated more than 1,100 of the almost 2,600 firearms denials referred for investigation in 2016.
OSP troopers are required through OSP executive leadership directives to investigate each FICS case and submit the case, with all available facts and evidence, to the appropriate District Attorney's Office for review, regardless of findings. With this information, the prosecutor makes an independent charging decision. When there is a recommendation included with the investigator's report, it is most often to not file charges, either because the evidence indicates no crime was committed or because there are specific mitigating circumstances involved in the case. Finally, OSP generates a report tracking denial investigations and the dispositions of any new criminal cases initiated after the investigation is completed. However, there is no current mechanism for reporting actions taken following an investigation, and therefore OSP has no data regarding the total number of prosecutions accepted and convictions obtained.
Statutes Used
According to OSP officials, potential state-level criminal conduct associated with denied firearm transfers is established in Oregon Revised Statutes Chapters 162 and 166. These crimes include but are not limited to:
Or. Rev. Stat. § 162.075 False swearing.
Or. Rev. Stat. § 166.250 Unlawful possession of firearms.
Or. Rev. Stat. § 166.270 Possession of weapons by certain felons.
Or. Rev. Stat. § 166.416 Providing false information in connection with a firearm transfer.
Or. Rev. Stat. § 166.418 Improperly transferring a firearm.
Or. Rev. Stat. § 166.425 Unlawfully purchasing a firearm.
Or. Rev. Stat. § 166.435 Firearm transfers by unlicensed persons; requirements; exceptions; penalties.
Or. Rev. Stat. § 166.470 Limitations and conditions for sales of firearms.
Prosecution of Firearm Denials
Generally, Oregon’s Constitution requires the election by districts of a sufficient number of prosecuting attorneys (District Attorneys), who are the law officers of the state, and of the counties within their respective districts, and are to perform duties pertaining to the administration of law. District Attorney responsibilities may include, but are not limited to, representing the district in felony prosecutions, misdemeanor prosecutions, grand jury proceedings, mental commitment hearings, family abuse prevention hearings, and juvenile delinquency hearings.
After a trooper completes an investigation, the trooper submits a report to the District Attorney's office. A prosecuting attorney then reviews the case and decides whether to charge an individual or individuals with a crime. When a case is not prosecuted, a rejection memo is provided to the trooper who submitted the report. According to two Oregon county prosecutors we interviewed, from late 2014 through 2017, their offices accepted about 140 of the more than 700 firearms denial investigations referred to their offices, with most prosecuted successfully. According to OSP officials, the most common types of cases resulting in convictions are related to misdemeanor domestic violence convictions, followed closely by prior felony convictions. The officials said that a new working group was created in 2016 to review gun relinquishment protocols in domestic violence cases, review outcomes, and make recommendations to improve the safety of domestic violence survivors. With regard to sentencing, these prosecutors said common penalties in firearms denial cases include fines (usually in the hundreds of dollars) and probation of up to 1 year, depending on the criminal background of the denied individual.
According to OSP, data are not collected on the prosecutions and convictions that result from investigations, by prohibiting category. However, anecdotally, investigators and prosecutors said convicted felons make up the most common prohibiting category among persons prosecuted for FICS denials. Prosecution outcomes are not automatically reported back to OSP; each county's District Attorney must be contacted to obtain that office's case outcome data. Reporting the disposition of firearms denial cases back to FICS is voluntary and can be done via an online form. The participating local agencies are requested to report back to OSP on the findings of their investigations; however, this reporting is voluntary, and according to FICS officials, many agencies do not consistently submit this information.
Appendix IV: Investigation and Prosecution of Firearms Denials in Pennsylvania
This appendix includes information on the investigation and prosecution of individuals denied firearms purchases in the state of Pennsylvania.
State Firearms Background Checks
Since 1998, Pennsylvania has served as a Point-of-Contact (POC) state for the National Instant Criminal Background Check System (NICS) operated by the Federal Bureau of Investigation (FBI). The Pennsylvania State Police (PSP) acts as the state point of contact for NICS for determining an individual’s eligibility to acquire, possess, transfer, and carry firearms. PSP conducts instant records checks using the Pennsylvania Instant Check System (PICS). PICS uses a voice response component and a web-based application that allows users to initiate firearm and license to carry (also known as concealed carry) background check requests.
In Pennsylvania, a licensed importer, manufacturer or dealer is required to request by means of a telephone call that the PSP conduct a criminal history, juvenile delinquency history and a mental health check prior to selling or delivering any firearm to another unlicensed person. In addition, the firearm may not be transferred until the licensed importer, manufacturer or dealer has received a unique approval number for that inquiry from the PSP and recorded the date and number on the application or record of sale form. Generally, for any person that is not a licensed importer, manufacturer or dealer who wants to sell or transfer a firearm to an unlicensed person, the person must do so at the place of business of a licensed importer, manufacturer, dealer or county sheriff’s office and follow the procedures related to the transfer of a firearm for a licensed importer, manufacturer or dealer.
Process for Conducting a Background Check

At the point of purchase, once the Bureau of Alcohol, Tobacco, Firearms, and Explosives (ATF) Form 4473 background check form is submitted, a PICS automated firearms check is initiated. The licensed firearms dealer contacts the PICS unit to determine if the applicant is eligible to purchase a firearm. The initial PICS check, which takes about 10 to 15 minutes, searches the state's repositories and NICS to identify any criminal history records or prohibitions. State databases searched as part of the check include but are not limited to:
Pennsylvania criminal history records;

Juvenile records, contained within the criminal history record file;
Mental Health File, containing involuntary commitment information and adjudication of incompetence;
Pennsylvania Protection From Abuse File;
Pennsylvania Wanted/Missing Persons File; and
Bureau of Motor Vehicle records.
If there is no record in the system for the applicant, the transaction can be approved automatically without any manual evaluation. The gun dealer is provided a unique approval number, which is required to authorize the transfer of the firearm.
Any firearm purchase check that hits on a record is transferred to a PICS operator. According to PSP officials, if a PICS operator cannot immediately approve or deny a firearm purchase on the phone, the firearm purchase application is put in “research” status, and the PICS unit has 15 days to determine if the firearm purchase can proceed. During this period, the PICS staff attempts to obtain clarifying information from the state’s repositories. In many of these instances, the PICS staff needs to obtain the final disposition to an arrest, according to PICS officials. If after 15 days, PICS staff cannot make a determination, the applicant’s status becomes “undetermined” and the applicant is not allowed to purchase the firearm.
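To summarize the flow described above, the following Python sketch maps a PICS transaction to its status. The function and status labels are our simplified rendering of the process PSP officials described, not PSP code.

```python
from typing import Optional

def pics_status(has_record_hit: bool,
                operator_decision: Optional[str],
                days_in_research: int) -> str:
    """Simplified rendering of the PICS decision flow described above."""
    if not has_record_hit:
        return "approved"          # automatic approval, no manual evaluation
    if operator_decision in ("approved", "denied"):
        return operator_decision   # resolved by the PICS operator on the phone
    if days_in_research <= 15:
        return "research"          # staff seek clarifying record information
    return "undetermined"          # applicant may not purchase the firearm

print(pics_status(False, None, 0))   # approved
print(pics_status(True, None, 16))   # undetermined
```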
If the automated check comes back with a red flag, the applicant is denied the purchase, and the information is sent to the PICS Challenge Unit, according to PSP officials. Generally, any person who is denied the right to receive, sell, transfer, possess or carry a firearm as a result of the procedures may challenge the accuracy of that person’s criminal history, juvenile delinquency history or mental health record pursuant to a denial by the instant records check by submitting a challenge to PSP within 30 days from the date of the denial. If challenged, PSP is required to conduct a review of the accuracy of the information forming the basis for the denial and has the burden of proving the accuracy of the record. Within 20 days after receiving the challenge, PSP is required to notify the challenger of the basis for the denial and provide the challenger an opportunity to provide additional information for the purposes of the review. PSP is to communicate its final decision to the challenger within 60 days of the receipt of the challenge with the decision containing all of the information which formed a basis for the decision. If after the challenge period the denial is upheld, the PICS Section sends the denied firearm application to the local police department or state police field station to investigate for falsification of the background check form and potentially refer the case for prosecution, according to PSP officials.
In addition to handling firearms denial appeals, the Challenge Unit prepares case files for appeals through the Office of the Attorney General, testifies at appeal hearings when required, and attends and testifies at relief hearings for restoration of firearms rights, which are conducted in the various county courts of common pleas throughout the state. Finally, the Challenge Unit handles enforcement investigations involving individuals who knowingly and intentionally provide false information in the attempt to acquire a firearm in violation of Pennsylvania law. Figure 8 shows the process for purchasing a firearm from a dealer in Pennsylvania.
According to PSP officials, in 2017, PICS conducted about 1.1 million background checks for licensed firearms dealers, sheriffs, and law enforcement throughout the state. Of these requests, 56 percent were approved within minutes by the system, while an additional 41 percent were approved during the initial check with operator assistance. The remaining 3 percent were placed in research status to obtain additional information. In 2017, the Challenge Unit reversed 32 percent of all state background check denials that were challenged, which include licenses to carry.
According to Pennsylvania officials, the state of Pennsylvania does not have delayed denials, in which a firearm is transferred to an individual before determining whether the individual is prohibited from purchasing or possessing a firearm under state or federal law, and the purchase is subsequently denied. Generally, under Pennsylvania law, a licensed importer, manufacturer or dealer may not sell or deliver any firearm to an unlicensed person until having received a unique approval number from PSP.
State History of Denials and Prohibited Categories
Pennsylvania law prohibits individuals who have been convicted of certain offenses from possessing firearms. For example, under Pennsylvania law, an individual who has been convicted of driving under the influence of alcohol or a controlled substance on three or more separate occasions within a 5-year period is prohibited from possessing a firearm. One prosecutor told us that most of the denials in his county stemmed from second- and third-offense DUI convictions. Table 3 shows Pennsylvania firearms denials by prohibiting category.
According to PSP officials, from 2014—when Pennsylvania began investigating denials—through 2017, the most common category was “persons convicted of a crime punishable by more than one year or a misdemeanor punishable by more than two years,” which comprised 42 percent of all denials. The second most common prohibiting category was mental health-related denials, at 16 percent. During this span, the number of denials increased from 2014 to 2016, only to decline in 2017.
State Investigation of Denials
Since 2014, PICS policy has been to investigate all firearm denials, according to PSP officials. Prior to 2013, Pennsylvania used risk-based criteria to investigate a much smaller percentage of denials. Criteria used included violent felonies, drug trafficking, domestic violence, involuntary mental health commitment, active warrants, and straw purchases, among others. After PSP began investigating all firearms denials in 2014, according to PSP officials, the number of denials remained largely the same, but the number of investigations rose from 620 to 4,154. PSP officials told us they believe that the policy to investigate all denials acts as a deterrent, and that as prohibited individuals learn that investigations follow a denial these individuals will not attempt to purchase a firearm.
According to PSP officials, as PICS refers all confirmed firearms denials for investigation, PICS does not use screening criteria to make determinations about whether firearms denials should be referred for investigation, or which denials are more likely to be accepted for prosecution. However, PICS does prioritize and determine which denials involve more serious criminal violations. According to PSP’s Firearms Unit staff, many referrals are not pursued based on the investigator’s assessment of the case or a prosecutor’s declination of the case when the referral was received.
The PSP partners with local law enforcement to investigate firearms denials. Investigations are split up between the PSP and municipal police departments based on the jurisdiction of where the applicant submitted the firearms purchase. In 2016, 68 percent of cases referred for investigation were referred to state police, while 32 percent were referred to local law enforcement. In 2017, cases referred to local law enforcement increased to 62 percent, while 38 percent were referred to state police. If the subject is federally prohibited, a case may be referred to ATF for investigation, though based on our analysis this is relatively uncommon. In 2015 and 2016, 16 and 5 cases, respectively, were referred to ATF for investigation, while in 2017 one case was referred to ATF.
Firearms denials are automatically funneled into a state investigative database, where an investigation file is created, according to PSP officials. When a denial is referred to a PSP troop for investigation, it is assigned to a state investigator if the state police has jurisdiction. If local law enforcement has jurisdiction, the PSP troop or PSP investigation staff will pass the referral to local law enforcement, according to PSP officials. Though some PSP units have investigators who specialize in firearms denial cases, denial investigations are generally assigned to the next available investigator, according to PSP officials.
After an investigation is assigned, the investigator will review all provided documentation and verify that the subject is actually prohibited, according to PSP officials. The investigator will then pull an incident number and take steps to obtain necessary documentation. The investigator will then respond to the location of the violation, review the ATF Form 4473, and attempt to interview the employee who handled the attempted transaction. Finally, the investigator will locate and interview the subject of the denial. Cases are not prioritized for investigation because all firearms denials are investigated and are immediately assigned to an investigator upon receipt from PICS, according to PSP officials. While no denial categories are designated as priority, protective orders may be investigated more vigorously when there is an indication of violence, according to PSP officials. PSP does not track the length of time or resources required for conducting investigations of firearms purchase denials, according to PSP officials.
Some jurisdictions may send the subject a letter to notify them that they are prohibited and under investigation, according to PSP officials. Other jurisdictions may send a letter only when prosecutors decide not to press charges, explaining to the recipient why they were denied, that they are not eligible to purchase a firearm, and that they could have been prosecuted for that reason.
If the case is considered for prosecution, the investigator may meet with the District Attorney’s office and review the case for prosecutorial merit, according to PSP officials. If prosecution is sought, the investigator will type up the charges, process the subject, and arraign. If prosecution is approved, the investigator will notify the Firearms Unit and attend all court proceedings. The investigating unit is to inform PSP’s Firearms Unit of the outcome of the prosecution.
Statutes Used
According to prosecutors and PSP officials, denials are primarily referred for prosecution on the basis of the violations under:
18 Pa. Cons. Stat. § 4904 - Unsworn falsification to authorities.
18 Pa. Cons. Stat. § 6111(g)(4) - Sale or transfer of firearms.
18 Pa. Cons. Stat. § 6105 - Persons not to possess, use, manufacture, control, sell or transfer firearms.
Prosecution of Firearms Denials
According to PSP officials, in Pennsylvania, the District Attorney is the chief law enforcement officer for each county, and in most instances, cases are accepted for prosecution based on their discretion. As such, discretionary decisions vary by county, and there are no internal criteria. District Attorneys may also refer cases for prosecution to the State Attorney General due to lack of resources or a conflict of interest. Trials for firearms denials are extremely rare in Pennsylvania, according to prosecutors that we spoke with. Only a small percentage of referred denials are ultimately prosecuted, mostly due to the difficulty proving the suspect “knowingly and willingly” provided false information on the background check application, according to PSP officials.
According to PSP officials, the conviction rate for firearms denial cases is about 10 percent of all denials referred for investigation. Based on our discussions with Pennsylvania prosecutors and PSP Firearms Division staff, most cases that are prosecuted result in misdemeanor pleas rather than felony convictions, and common penalties are probation and fines. One county prosecutor told us that most convictions reduced to a misdemeanor are for "statement under penalty," a third-degree misdemeanor. Other cases might be pled down to misdemeanor disorderly conduct, which carries a $300 fine, according to PSP officials. According to county prosecutors we spoke with, denied felons occasionally receive prison sentences of about 12 months, and some sentences have approached 2 years.
One prosecutor told us the most frequent firearms prohibitor among convictions is a crime punishable by greater than 1 year in prison, such as a second or subsequent DUI conviction within 10 years, as many of those are graded as misdemeanors of the first degree, punishable by up to 5 years in prison. Typically, when asked, these individuals are unaware of the maximum penalty. Another state prosecutor we spoke with stated that the most prosecuted prohibiting categories also involved felony DUIs, as well as matters related to mental health and domestic violence. He added that, typically, more recent crimes are treated with more severity. One county prosecutor told us they prioritize prosecution of persons with a history of violent behavior.
According to state police officials, upon conclusion of a prosecuted case, the investigator will document the disposition of the court. The entire investigative process is documented in a PSP incident report, which includes all interviews, queries made, investigative steps taken, and consultation with the District Attorney. The result of the investigation is then forwarded to the PSP investigation staff. Finally, an email summarizing the entire investigative process is sent to the Troop Crime Commander, Troop Administrative Manager, and the PSP Firearms Unit. Table 4 shows the disposition of firearms denial cases in Pennsylvania.
While no annual statistics are recorded at the unit level, according to PSP officials, the state of Pennsylvania does track prosecutions resulting from firearms denials. In 2016, there were convictions in about half of the approximately 730 arrests made, out of about 6,500 denials referred for investigation. This represents a 39 percent increase in referrals over 2015, but a 67 percent decline in convictions and a 68 percent decline in arrests. In 2017, the number of cases referred for investigation declined by 16 percent, to about 5,500. Numbers for 2016 and 2017, including arrests, convictions, and prosecutions, returned to levels more representative of a typical year, according to PSP officials. Neither PSP nor the municipal departments track enforcement actions associated with investigations, or the specific sentencing results of investigations referred for prosecution, beyond whether the investigation resulted in a conviction or declination.
Appendix V: Investigation and Prosecution of Firearms Denials in Virginia
This appendix includes information on the investigation and prosecution of individuals denied firearms purchases in the state of Virginia.
State Firearms Background Checks
The Virginia Firearms Transaction Center (FTC), established in 1989, performs background checks at the point of sale by accessing state and federal databases. The FTC is the federally designated point of contact for the National Instant Criminal Background Check System (NICS) and is responsible for any investigations of firearms denials. The Virginia State Police (VSP) is responsible for conducting background checks using VCheck, Virginia's Internet-based instant background check program, and for enforcing state and federal laws related to firearms purchases in Virginia. Under Virginia law, generally, a licensed dealer is required to obtain written consent and other identifying information—including but not limited to the name, date of birth, gender, race, citizenship, and Social Security number of a potential unlicensed purchaser—and provide the Department of State Police with this information and request criminal history record information, by a telephone call to or other communication authorized by the State Police, prior to selling, renting, trading, or transferring any firearm from the dealer's inventory.
The FTC provides personnel to conduct transactions onsite at anticipated high-volume gun shows. Pursuant to Virginia law, the Department of State Police is to be available at every firearms show held in Virginia to make determinations, in accordance with the procedures set out for background checks required for the transfer of certain firearms, of whether a prospective purchaser or transferee is prohibited under state or federal law from possessing a firearm. One prosecutor we spoke with estimated that 25 percent of his illegal possession cases are from private sales at gun shows. At a gun show, when an individual attempts to purchase a firearm from a licensed dealer, the individual has to complete the state background check form (SP-65B) and the federal form (ATF Form 4473), and the FTC will conduct a full NICS check. Should the transaction be denied, the trooper may arrest the applicant depending on the reason for the denial. A Virginia prosecutor explained that in his jurisdiction, when two private parties, neither of whom is an FFL, initiate a sale outside of the state transaction system, troopers may approach the purchaser and ask questions related to his or her eligibility to purchase a firearm. If the purchaser appears to be prohibited based on his or her responses, the purchaser may be subject to arrest as well.
Process for Conducting a Background Check

For transactions conducted through an FFL, the gun dealer submits a background check request to VSP via a toll-free number or through an online application. Upon receipt of the request, VSP reviews the applicant's criminal record information to determine if the applicant is prohibited from possessing or transporting a firearm by state or federal law. This check includes a review of an applicant's entire criminal history, with no exclusion based on when the prohibiting offense occurred, according to FTC officials. For example, a recent prohibiting felony conviction is treated the same as the same conviction from decades ago.
The applicant’s information is submitted to the FTC, where it is checked against databases at the federal and state level. Information is screened through NICS, National Crime Information Center, and the Virginia Criminal Information Network. The FTC provides an instant response to approve the transaction or place it in delayed, research status. Databases maintained by VSP and accessible by the Virginia Criminal Information Network include:
Virginia’s wanted and missing persons files and protective orders;
Virginia’s criminal history record files; and
Virginia’s database of adjudications of legal incompetence and incapacity, and involuntary commitments to mental institutions.
If the instant VCheck search indicates that the purchaser is approved, a unique computer-generated approval number that is required to transfer the firearm is provided to the dealer to complete the transaction. If a possible identification is made in the state or federal databases, the instant check produces a “delayed” status and a review is conducted to determine identification and eligibility of the purchaser. If a background check enters delayed status, the dealer will be requested to provide additional information about the purchaser. The dealer is to be notified immediately upon a final determination of eligibility. Pursuant to federal law, if the dealer has not been notified of a final determination by the end of the third business day, the dealer may complete the sale and transfer of the firearm. If a firearm is transferred prior to a final determination of eligibility, the dealer is requested to notify VSP immediately. When a delayed transaction is ultimately approved or denied, the FTC updates the dealer on the status of the transaction by telephone or online depending on how it was entered. When research efforts have been exhausted, if no clear reason to deny is identified, the transaction is approved. More than 99 percent of delayed applications are resolved before 30 days, according to FTC officials.
All transactions that are not immediately approved and enter delayed status are assigned a priority level, based on the possible prohibiting category. Virginia investigates a subset of all denials based on risk, but prioritizes denials with active warrants, active protection orders, mental health issues, and certain felony convictions. According to VSP officials, a “priority 1” transaction is a possible hit for mental health reasons, a protective order, or a possible wanted subject. A “priority 2” transaction is any possible hit in NICS, such as convicted felons and out-of-state mental health cases. A “priority 3” transaction is any hit in the Interstate Identification Index, or Virginia’s Computerized Criminal History. According to VSP officials, convicted felons are normally a priority 3 unless they appear in the NICS database. A “priority 4” transaction is a hit from U.S. Immigration and Customs Enforcement, namely an alien or immigrant attempting to purchase a firearm, or a possible request for information, such as a Be On The Lookout or Alert notice, from a police agency or the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF). According to VSP officials, while a transaction may be given an initial priority level, VSP moves some priority 3 and 4 hits to the front of the list, such as those involving recent felony indictments or a misdemeanor crime of domestic violence. Denial decisions undergo supervisory review to verify that the denial is correct and accurate, including a review of the police report to document findings, and to ensure that the prohibited person’s rights have not been restored, according to VSP officials.
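The priority scheme VSP officials described can be summarized as a simple mapping from the source of the hit to a priority level. The Python sketch below is our illustration of that mapping; the category labels are simplified, and as noted above, VSP may manually move some priority 3 and 4 hits to the front of the queue.

```python
def vcheck_priority(hit_source: str) -> int:
    """Illustrative mapping of delayed VCheck transactions to priority
    levels, per the categories VSP officials described; labels are ours."""
    priorities = {
        "mental health": 1,         # priority 1 hits
        "protective order": 1,
        "wanted subject": 1,
        "nics hit": 2,              # e.g., convicted felons in NICS,
                                    # out-of-state mental health cases
        "criminal history hit": 3,  # Interstate Identification Index or
                                    # Virginia Computerized Criminal History
        "agency request": 4,        # e.g., ICE hits, Be On The Lookout notices
    }
    return priorities[hit_source]

print(vcheck_priority("protective order"))  # 1
```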
According to a VSP official, in practice, there are rarely any transactions in Virginia in which a firearm is transferred before the purchaser is determined to be ineligible, known as a delayed denial. According to an FTC official, there were no delayed denials in the previous 2 years. When a background check is still pending after 3 business days, at which point dealers may lawfully transfer the firearm, dealers typically contact the FTC to report the possible transfer and ask whether to hold the gun for a few more days, according to VSP officials. If the FTC believes the purchaser will ultimately be denied, it will suggest that the firearm be held, but the decision rests with the dealer. The FTC will also ask to speak with the purchaser to explain that if the purchaser accepts the firearm and is later denied, VSP would have to send an officer to retrieve the firearm and charges could be filed against the purchaser for illegal possession of the firearm. VSP will then advise that an applicant who is unsure of his or her prohibited status should wait until the background check is complete.
According to a VSP official, there are advantages to being a point-of-contact state, such as the ability to provide better service to citizens and to build relationships with FFLs that would not be possible as a NICS state. For example, VSP conducts training sessions and regular outreach to firearms dealers. VSP officials estimate that in 80 percent of cases involving firearms purchases on behalf of a prohibited person, sometimes referred to as “straw purchases,” the lead comes from a dealer notifying VSP of something suspicious. According to VSP officials, straw purchases are treated very seriously and can result in prison sentences of 5 to 10 years. Figure 9 shows the process for purchasing a firearm from a dealer in Virginia.
Individuals denied the right to purchase a firearm may exercise a right of access, review, and correction of criminal history record information or institute a civil action within 30 days of the denial. Typically, after a denial, individuals are provided a Virginia Firearms Transaction Program brochure or referred to the VSP website for appeal procedures if they believe that they are not prohibited by state or federal law from purchasing or possessing a firearm. These individuals may contact the FTC via phone or e-mail to discuss the determination and provide additional information, provide fingerprinting to facilitate future transactions, request a correction of record, or institute a civil action. Denied persons may also challenge the accuracy of the record in writing to the FBI.
State History of Denials and Prohibited Categories
Generally, individuals prohibited from either purchasing or possessing a firearm under Virginia law include, but are not limited to:
any person who has been convicted of a felony; who was adjudicated delinquent as a juvenile 14 years of age or older at the time of certain offenses (including murder, kidnapping, robbery by threat or presentation of firearms, or rape); or who is under the age of 29 and was adjudicated delinquent as a juvenile 14 years of age or older at the time of a delinquent act that would be a felony if committed by an adult;
any person who has been acquitted by reason of insanity and committed to the custody of the Commissioner of Behavioral Health and Developmental Services on a charge of treason, any felony, certain offenses punishable as a misdemeanor, or certain ordinances of any county, city, or town similar to other outlined offenses;
any person who is subject to certain protective orders; or
any person who, within a 36-consecutive-month period, has been convicted under Virginia law of two misdemeanor offenses for possession of a controlled substance or marijuana without a valid prescription or order of a practitioner acting in the course of his professional practice; this prohibition runs for 5 years from the date of the second conviction.
The top prohibiting categories for individuals denied firearms purchases are felony convictions, which comprise 21 percent of all denials from 2011 through 2017, followed by drug-related prohibitions (19 percent) and mental health-related prohibitions (13 percent). One prosecutor we spoke with said that denials tend not to involve violent career criminals; they typically involve non-violent felonies, such as grand larceny, or drugs, and most occurred 20 years ago or more. From 2011 to 2017, the total number of denials increased from about 2,000 to about 3,600, an increase of almost 80 percent, while the total number of transactions increased from about 320,000 to about 500,000, an increase of more than 50 percent. Table 5 shows Virginia firearms denials by prohibiting category.
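For readers who want to verify the percentage changes, a quick check using the rounded figures cited above:

```python
# A quick check of the percentage increases cited in the text,
# using the rounded totals.
denials_2011, denials_2017 = 2_000, 3_600
transactions_2011, transactions_2017 = 320_000, 500_000

def pct_increase(old, new):
    return (new - old) / old * 100

print(f"denials: +{pct_increase(denials_2011, denials_2017):.0f}%")                  # +80%
print(f"transactions: +{pct_increase(transactions_2011, transactions_2017):.0f}%")   # +56%
```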
State Investigation of Denials
Virginia has investigated firearms denials since its instant check system was introduced in 1989. Virginia does not refer all firearms denials for investigation, instead using risk-based criteria to refer a subset of prohibited categories for investigation. The following conditions trigger an automatic investigation of a firearms denial:
felony conviction, including juvenile felony conviction, or felony indictment;
misdemeanor crime of domestic violence;
involuntary mental health treatment;
nonimmigrant or illegal alien status; and
dishonorable discharge from the military.
All Virginia denial investigations are handled by VSP, with the exception of some fugitive and warrant-related, protective order, and mental health cases, as well as purchases at gun shows, which may involve municipal or local police, according to VSP officials. When FTC’s background check unit refers a case for investigation involving mental health or protective orders (both of which are priority 1), the package is sent to both the VSP division and the local police department.
According to VSP officials, to initiate a denial investigation, FTC sends a request for investigation to the VSP division headquarters, where it is referred to the appropriate section in which the gun transaction took place, and then to a state trooper to conduct the investigation. A file with a copy of both the federal background form, ATF Form 4473, and the state background check form, SP-65, is sent to the investigating trooper. The trooper then collects necessary information, such as information about the denial from VCheck, the criminal history of the purchaser, and court records. As necessary, the investigator verifies the information in the FTC file at the FFL and interviews the subject. Part of the investigation involves trying to prove the purchaser “willingly and knowingly” answered falsely on the state and federal forms.
Some VSP sections, typically those in more densely populated areas, have troopers dedicated exclusively to firearms denial investigations due to the higher volume of denials in those areas.
According to VSP officials, every area may assign troopers to work exclusively on firearms denial investigations. However, most areas either cannot afford to remove a trooper from road coverage availability or do not investigate enough firearms denial cases to make doing so an effective use of resources. These sections assign denial investigations to troopers on a case-by-case basis.
Prosecutors are often consulted as to whether a case will be prosecuted; in these consultations, the prosecutor comments on the strength of the case based on the available evidence, according to investigators and prosecutors we spoke with. Investigators told us that prosecutors are generally more willing to take on firearms denial cases involving recent felony convictions.
They also said that if the case is accepted for prosecution, the trooper will obtain warrants to make an arrest. If the Commonwealth’s Attorney finds that the case does not have prosecutorial merit, the case is closed and the name of the Commonwealth’s Attorney consulted is entered in the case management system report, according to a VSP official. Table 6 shows Virginia denial investigations from fiscal years 2011 through 2017.
According to VSP officials, the time spent on denial investigations depends on the type of denial, the location, and the information needed to bring charges or close the case; on average, a case may involve about 4 hours of investigation. Officials in another division stated that in-state convictions can require from 4 to 6 hours of investigative work, while out-of-state convictions can take significantly more time, from 4 to 15 hours. Obtaining records from out of state can be difficult and can take weeks or months. For example, one state requires a fee per conviction copy, which requires a check to be mailed and processed and the files to be mailed back to the investigator. VSP officials told us that cases involving straw purchases can take 50 hours or more; however, these cases can result in longer prison sentences of 5 to 10 years. They added that additional time may be spent on search warrants, examining video from firearms stores, reviewing phone records, and conducting interviews. Further, denial investigations involving dishonorable discharges and mental health denials from out of state typically take the longest to investigate, in part because some states will not release these records for the purpose of prosecution. Locating old felony documentation is also a challenge for investigators, according to VSP officials.
Statutes Used
According to investigators and prosecutors, the most common state statutes used for attempted firearm purchases include:
Va. Code Ann. § 18.2-308.2:2(K) Willfully and intentionally making a materially false statement on the consent form;
Va. Code Ann. § 18.2-308.1:3 (Usually prosecuted as an attempt) Prohibition against purchase or possession of a firearm by someone involuntarily admitted or ordered to outpatient mental health treatment; and
Va. Code Ann. § 18.2-308.1:4 (Usually prosecuted as an attempt) Prohibition against purchase or transport of a firearm by someone subject to a protective order.
State Prosecution of Firearms Denials
Virginia’s chief prosecutors, the Commonwealth’s Attorneys, are elected at-large for a 4-year term. They are responsible for prosecuting all felonies and some misdemeanors, in addition to handling certain civil matters. According to a prosecutor we spoke with, Commonwealth’s Attorneys offices receive referrals for prosecution directly from state troopers.
We interviewed Virginia investigators and prosecutors from four counties, including from localities where a high volume of firearms denial referrals occur. These prosecutors said they tend to work with Virginia troopers who specialize in denial investigations and report high prosecution rates for the cases they accept. One investigator with a high referral rate to prosecutors told us he benefits from operating in a high-volume, relatively compact jurisdiction, while in other parts of the state, investigators may have to cross several counties to gather the paperwork needed to establish a denial case, interview the purchaser, and make an arrest. According to a county prosecutor, a key component of successful prosecutions is a willing Commonwealth’s Attorney because charging decisions are at the attorney’s discretion. An investigator and prosecutor who work together stated that in some jurisdictions, attorneys may not welcome firearms denial cases, while in other jurisdictions, specialized investigators working with an attorney willing to prosecute these cases for their public safety and deterrence value can yield a high prosecution rate.
Two county prosecutors we spoke with said approximately 90 percent of firearms denial convictions are pled down to misdemeanors, and the penalties imposed tend to include probation or community service, but there is an occasional prison sentence. According to investigators and prosecutors we spoke with, some prosecutors prefer to avoid the use of fines while others may use them occasionally.
Of the few cases that go to trial, according to prosecutors, most go before a judge rather than a jury, and typically involve a felon in possession of a firearm, resulting in a felony conviction and likely probation. Judges have discretion to reduce sentences, while juries are constrained to issuing more severe sentences if they find the defendant guilty, and typically hand down more prison time, according to prosecutors we spoke with.
The severity of penalties handed down for firearms denials depends on the prohibited category, according to one county prosecutor. Another prosecutor said protective order violations tend to be easier to prosecute because the records are available and indicate a clear violation. By contrast, cases in which accurate records are difficult to obtain, such as juvenile denials, mental health denials, and out-of-state cases, are difficult to prosecute, according to investigators and prosecutors.
One prosecutor told us that a subject’s criminal history also makes a big difference in whether the subject receives a harsher or more lenient sentence. Several prosecutors we spoke with said that while prison sentences are rare, for a felon with a history of violence, sentences of 7 months to more than 24 months in prison have been imposed. One prosecutor told us they typically agree to no prison time on a felony conviction unless there are indicators of violence on the record, such as destruction of property or assault and battery. If a person has no record, the prosecutor would be far more willing to forego a felony, and sometimes even a misdemeanor, and propose community service instead. One prosecutor questioned whether it makes sense to make a person a felon over a firearms denial; however, if a person has a consistent misdemeanor history of getting into trouble, then the prosecutor would be less convinced that this particular offense is out of character and may not make any non-felony offers. Prosecutors also may reduce the charges to disorderly conduct or providing false information to police during a plea in these cases to try to obtain a conviction, according to one prosecutor.
Data on prosecutions, dismissals, and convictions resulting from investigations are not collected at the state level, and are only accessible at the VSP divisions that conduct investigations and the courts where they are prosecuted, according to Virginia officials.
Appendix VI: Examples of Firearms Denial Cases Referred for Prosecution
Table 7 shows examples of firearms denial cases that our six selected ATF field divisions referred to U.S. Attorney’s Offices for prosecution during fiscal years 2014 through 2017, including the types of circumstances that could lead to referral for prosecution, the range of charges filed, and the severity of sentences that resulted. All the cases involved 18 U.S.C. § 922(a)(6), falsifying a background check form. While not all were ultimately charged under that statute, they were selected for investigation by ATF for that reason. Occasionally, federal and state law may prohibit similar types of criminal conduct, allowing both federal and state prosecutors to pursue the case. U.S. Attorney’s Offices may also refer a case that is not deemed appropriate for federal prosecution to a state prosecutor.
Appendix VII: GAO Contacts and Staff Acknowledgements
GAO Contact
Staff Acknowledgements
In addition to the contact named above, Eric Erdman (Assistant Director) and Anthony DeFrank (Analyst-in-Charge) managed this assignment. Daniel Kuhn, James Lawson, Billy Commons, Susan Hsu, Michele C. Fejfar, and Eric D. Hauswirth made significant contributions to the work.
In 2017, approximately 25.6 million firearm-related background checks were processed through NICS, and about 181,000 of the attempted purchases at the federal and state levels combined were denied because the individual was prohibited from possessing a firearm under federal or state law. Individuals who certify that they are not prohibited from purchasing or receiving a firearm and are subsequently determined to be prohibited could be subject to investigation, and if prosecuted, a fine, imprisonment, or both.
GAO was asked to examine firearms denials. This report (1) describes the extent to which federal and selected state law enforcement agencies investigate and prosecute firearms denial cases; (2) examines related challenges faced by these agencies; and (3) describes the circumstances that lead to investigations and prosecutions. GAO reviewed laws and regulations; analyzed federal and state data from 2011 through 2017; and interviewed officials from ATF headquarters, 6 of 25 ATF field divisions (the 6 that investigated the most cases), and the 13 states that process all NICS checks within their state. Results from state interviews are not generalizable but provide insights on state practices.
What GAO Found
Investigations and prosecutions. Federal and selected state law enforcement agencies that process firearm-related background checks through the National Instant Criminal Background Check System (NICS) collectively investigate and prosecute a small percentage of individuals who falsify information on a firearms form (e.g., do not disclose a felony conviction) and are denied a purchase. Federal NICS checks resulted in about 112,000 denied transactions in fiscal year 2017, of which the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) referred about 12,700 to its field divisions for further investigation. U.S. Attorney's Offices (USAO) had prosecuted 12 of these cases as of June 2018.
At the state level, officials from 10 of 13 selected states said they did not investigate or prosecute firearms denials, with some citing competing resource demands and the lack of state statutes under which to prosecute as reasons. The remaining 3 states investigated a high proportion of firearms denials. One of the 3 states reported about 1,900 referrals for prosecution in 2017 and about 470 convictions.
Challenges. ATF and selected states reported challenges in investigating and prosecuting firearms denials. Officials from six selected ATF field divisions said that investigating the increasing number of denial cases referred to field divisions—which increased from about 5,200 in fiscal year 2011 to about 12,700 in fiscal year 2017—has been time intensive and required use of their limited resources. ATF policy provides that field divisions may send “warning notices” to denied persons in lieu of prosecution, but ATF has not assessed field divisions' use of these notices, which could provide greater awareness of their deterrence value and inform whether any policy changes are needed. Officials from the Executive Office for United States Attorneys said that prosecuting denial cases can require significant effort and may offer little value to public safety compared to other cases involving gun violence. Selected state officials said that denial investigations can take law enforcement officials away from their core duties. State prosecutors said gathering evidence to prove individuals knew they were prohibited was a challenge.
Types of cases. ATF field divisions investigate denial cases based on USAO criteria and generally only refer cases to USAOs for prosecution when aggravating circumstances exist, such as violent felonies or multiple serious offenses over a short period of time. Officials from two of three selected states refer all denial cases for investigation, while one state uses risk-based criteria for selecting cases that include conditions such as felony convictions and misdemeanor crimes of domestic violence. Prosecutors from these three states said they generally pursue cases that involve indications of violence, though individual prosecutors had differing priorities based on public safety concerns.
What GAO Recommends
GAO recommends that ATF assess the extent to which ATF field divisions use warning notifications as an enforcement tool, which would inform whether changes to policy are needed. DOJ concurred with GAO's recommendation. |
FCC’s Data Overstate Broadband Access on Tribal Lands
In our September 2018 report on broadband access on tribal lands, we found that FCC collects broadband availability data from broadband providers, but its method for collecting the data does not accurately or completely capture broadband access—the ability to obtain service—on tribal lands. Specifically, FCC directs fixed broadband providers to submit a list of census blocks where service is available on their Form 477 filings. In the Form 477 instructions, FCC defines “available” as whether the provider does—or could, within a typical service interval or without an extraordinary commitment of resources—provide service to at least one end-user premises in a census block. Thus, in its annual reports and maps of fixed broadband service, FCC considers an entire block to be served if a provider reports that it does, or could offer, service to at least one household in the census block. As shown in figure 1, FCC’s definition of availability leads to overstatements of fixed broadband availability on tribal lands by: (1) counting an entire census block as served if only one location has broadband, and (2) allowing providers to report availability in blocks where they do not have any infrastructure connecting homes to their networks if the providers determine they could offer service to at least one household. FCC has noted that overstatements of availability can be particularly problematic in rural areas, where census blocks cover larger areas.
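To make the overstatement mechanism concrete, the following is a minimal sketch in Python. The household-level data are hypothetical, since FCC’s Form 477 collects only the provider’s block-level claim; the sketch shows how one served (or servable) location causes an entire block to be reported as served.

```python
# Hypothetical household-level service status within one census block.
# (FCC's Form 477 collects only the block-level claim, not these rows.)
households_in_block = {
    "location_1": True,   # provider serves, or could serve, this location
    "location_2": False,
    "location_3": False,
    "location_4": False,
}

# FCC's rule: the entire census block counts as served if the provider
# does, or could, serve at least one location in it.
block_reported_served = any(households_in_block.values())

share_actually_served = sum(households_in_block.values()) / len(households_in_block)
print(f"block reported as served: {block_reported_served}")      # True
print(f"share of locations served: {share_actually_served:.0%}")  # 25%
```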
According to FCC officials, FCC requires providers to report fixed broadband availability where they could provide service to: (1) ensure that it captures instances in which a provider has a network nearby but has not installed the last connection to the homes, and (2) identify where service is connected to homes, but homes have not subscribed. FCC officials also told us that FCC measures availability at the census block level because sub-census block data may be costly to collect. However, FCC acknowledged that by requiring a provider to report where it could provide service, it is not possible to tell whether the provider would be unable or unwilling to take on additional subscribers in a census block it lists as served. In addition, when reporting on broadband access in tribal lands, FCC uses the broadband availability data described above, and does not collect information on factors that FCC and tribal stakeholders have stated can affect broadband access. These factors include affordability, service quality, and service denials.
By developing and implementing methods for collecting and reporting accurate and complete data on broadband access specific to tribal lands, FCC would be better able to target federal broadband funding to the tribal areas that need it the most. Accordingly, we recommended that FCC develop and implement such methods. FCC agreed with this recommendation and stated that it is exploring methods to collect more granular broadband deployment data.
FCC Does Not Have a Formal Process to Obtain Tribal Input on its Broadband Data
As we reported in September 2018, FCC does not have a formal process to obtain input from tribes on the accuracy of the data, and tribal stakeholders can face difficulties obtaining information from providers. FCC’s 2010 National Broadband Plan noted the need for the federal government to improve the quality of data regarding broadband on tribal lands and recommended that FCC work with tribes to ensure that any information collected is accurate and useful. Although the Plan also noted that tribal representatives should have the opportunity to review mapping data and offer supplemental data or corrections, FCC lacks a formal process to obtain tribal input on its broadband data. FCC officials told us that they address questions and concerns regarding providers’ coverage claims that are submitted to FCC’s Office of Native Affairs and Policy. However, about half of the tribal representatives we spoke to stated that they were not aware of the Form 477 data or corresponding maps, or raised concerns about a lack of outreach from FCC to inform tribes about the data. Most of the tribal stakeholders we interviewed told us that FCC should work more directly with tribes to obtain information from them to improve the accuracy of FCC’s broadband deployment data for tribal lands. These stakeholders identified several ways in which FCC could work with tribes on this issue, including onsite visits, increased outreach and technical training, and opportunities for tribes to collect their own data or submit feedback regarding the accuracy of FCC’s data.
FCC’s National Broadband Plan also noted the importance of supporting tribal efforts to build technical expertise with respect to broadband issues. A few of the stakeholders we interviewed noted that tribes have faced difficulties when they attempt to challenge FCC’s broadband availability data. For example, in 2013, all of the tribal entities that challenged FCC’s data on mobile service availability were unsuccessful in increasing the number of eligible areas. A few tribal stakeholders provided varying reasons for this, one of which was the need for more technical expertise to help the tribes meet FCC’s requirements regarding the information needed to support a challenge. Because FCC lacks a formal process to obtain tribal input on its broadband data, FCC is missing an important source of information regarding areas in which the data may overstate broadband service on tribal lands.
By establishing a process to obtain input from tribal governments on the accuracy of provider-submitted broadband data as recommended in the National Broadband Plan, FCC could help tribes develop and share locally-specific information on broadband access and improve FCC’s data for tribal lands. However, the success of such an effort may rely on the tribes’ knowledge of, and technical ability to participate in, the process. Thus, we recommended FCC develop a formal process to obtain tribal input on the accuracy of provider-submitted broadband data that includes outreach and technical assistance to help tribes participate in the process. FCC agreed with this recommendation and stated that it will work with stakeholders to explore options for implementing such a process.
Finally, some tribes face challenges accessing data from providers. In 2011, FCC required that providers receiving funds to serve tribal lands meaningfully engage with the tribes and discuss broadband deployment planning. In 2012, FCC issued guidance on meeting this requirement and stated that the guidance would evolve over time based on the feedback of both tribal governments and broadband providers. However, FCC has taken limited steps to obtain such feedback and has not updated the guidance. About half of the tribal stakeholders we interviewed raised concerns about difficulties accessing information from providers regarding broadband deployment on their tribe’s lands (which providers may consider proprietary), and some providers told us that they attempt to engage with tribes, but the level of responsiveness they receive from tribes varies. Thus, we recommended, and FCC agreed, that FCC obtain feedback from tribal stakeholders and providers to determine whether it needs to clarify its tribal engagement guidance.
Few Tribal Broadband Partnerships Exist
In our September 2018 report on tribal partnerships, we found that partnership arrangements between tribes and other entities to increase broadband deployment on tribal lands are not widespread. Because of the greater costs associated with deploying broadband on unserved tribal lands that are generally rural, with possibly rugged terrain, there may be little to no private sector incentive to deploy broadband or enter into a partnership arrangement to do so. The partnership examples we identified were ones that obtained federal funding under past programs funded by the Recovery Act. Among these examples, tribes partnered with several different types of entities, including private providers, a community access network provider, an electric cooperative, a regional consortium, and tribally owned providers.
Tribes Face Barriers to Obtain Federal Funding for Broadband Deployment
We also reported in September 2018 that FCC and RUS are the primary sources of federal funding to deploy broadband infrastructure in rural and remote areas where the cost of providing service is high, including tribal lands. Based on our review of the funding provided by four federal programs targeted to increase deployment in unserved areas, very little has gone directly to tribes or to tribally owned broadband providers. Specifically, we found that from 2010 to 2017, less than 1 percent of FCC funding and about 14 percent of RUS funding went directly to tribes and tribally owned providers. Combined, FCC and RUS funding totaled $34.6 billion during that time period and tribes and tribally owned providers received $235 million, or about 0.7 percent.
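The 0.7 percent figure follows directly from the totals cited above, as a quick check shows:

```python
# A quick check of the tribal share of combined FCC and RUS funding,
# using the rounded totals from the text.
combined_funding_millions = 34_600   # $34.6 billion, FCC and RUS, 2010-2017
tribal_funding_millions = 235        # to tribes and tribally owned providers
share = tribal_funding_millions / combined_funding_millions
print(f"{share:.1%}")  # 0.7%
```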
FCC’s 2010 National Broadband Plan stated that tribes needed substantially greater financial support than was available to them at the time and that accelerating tribal broadband deployment would require increased funding. Furthermore, the National Congress of American Indians expressed concerns that the needs for federally funded broadband projects are greater on tribal lands but tribes do not receive the appropriate share of federal funding aimed at increasing broadband deployment. Several of the tribes we visited told us they were trying to deploy broadband infrastructure or offer service because the private providers were not building out on their lands.
Through our analysis, we found that from 2010 to 2017, 14 tribal entities received federal funding from FCC and RUS to increase broadband deployment (see fig. 2).
The tribal officials, tribal associations, and tribally owned broadband providers we interviewed cited several barriers that tribes may face when seeking federal funding for broadband deployment. The two primary barriers these interviewees cited were (1) the statutory requirement for the eligible telecommunications carrier (ETC) designation and (2) grant application requirements. Regarding the statutory requirement for ETC designation, FCC officials told us there were 11 tribes that have providers designated as ETCs and therefore would be eligible to receive support from FCC’s Connect America Fund (CAF)—the largest source of federal funding for broadband deployment in unserved and underserved areas. Although FCC adopted rules in 2011 to create CAF and modernize the program so that it could support broadband capable networks, FCC officials told us that most ETCs are the telephone companies that were in existence when the Telecommunications Act of 1996 was enacted into law. According to FCC officials, FCC has explored whether it has authority to allow non-ETC providers to receive CAF support payments but determined that the statute is clear that only ETCs can receive program support. FCC officials said that, between 2012 and 2017, FCC received nine ETC applications, four of which were from tribally owned providers. Of those four, only one tribally owned provider was designated as an ETC.
According to representatives from a tribal association we contacted, FCC has provided ETCs with billions of dollars to deploy service to unserved areas, but FCC’s efforts have not always been successful in the hardest to reach areas, particularly tribal lands. The representatives stated that FCC’s competitive market approach does not work where competition cannot be supported and that there needs to be a different approach. Similarly, tribal officials from Idaho told us that although the provider in their area has received millions of dollars in CAF subsidies, it has not deployed broadband on the tribal lands. Other tribal officials from Idaho told us that although private providers received CAF subsidies to deploy broadband service to their reservation, the private providers told the tribe it would be years before they offer service on tribal lands.
Additionally, the tribal officials, tribal associations, and tribally owned broadband providers we interviewed said tribes may face barriers completing federal grant applications to obtain funding for broadband deployment. For example, they said tribes face regulatory barriers in applying for RUS’s grant funding, including preparing existing and proposed network design, demonstrating financial sustainability of the broadband project within 5 years, and obtaining matching funds.
The National Broadband Plan recommended that federal agencies facilitate tribal access to broadband funding opportunities. Furthermore, recognizing the need to reduce barriers to expand broadband deployment, the Broadband Opportunity Council, established in March 2015, issued a memorandum stating that federal agencies should use all available and appropriate authorities to identify and address regulatory barriers that may unduly impede either broadband deployment or the infrastructure to augment broadband deployment. However, according to RUS officials, RUS has not taken steps to identify or address the barriers tribes face when applying for RUS grant funding due to limited resources and multiple competing priorities for those resources. We recommended that RUS identify any regulatory barriers that may unduly impede efforts by tribes to obtain RUS grant funds for broadband deployment on tribal lands and implement any steps necessary to address the identified barriers. By doing so, RUS could help tribes obtain funding to expand broadband deployment on tribal lands. RUS neither agreed nor disagreed with this recommendation.
Chairman Hoeven, Vice Chairman Udall, and Members of the Committee, this completes my prepared statement. I would be pleased to respond to any questions that you may have.
GAO Contact and Staff Acknowledgments
If you or your staff have any questions about this testimony, please contact Mark Goldstein, Director, Physical Infrastructure Issues at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this testimony are Rose Almoguera, Katherine Blair, Keith Cunningham, Crystal Huggins, Sally Moino, and Tina Paek. Other staff who made contributions to the reports cited in this testimony are identified in the source product.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. | Why GAO Did This Study
This testimony summarizes the information contained in two GAO reports: Broadband Internet: FCC’s Data Overstate Access on Tribal Lands (GAO-18-630) and Tribal Broadband: Few Partnerships Exist and the Rural Utilities Service Needs to Identify and Address Any Funding Barriers Tribes Face (GAO-18-682). Specifically, it addresses (1) the extent to which FCC’s approach to collecting broadband availability data accurately captures broadband access on tribal lands, (2) the extent to which FCC obtains tribal input on the data, (3) partnerships tribes have formed with entities to deploy broadband infrastructure on tribal lands, and (4) barriers tribes face in obtaining federal funding. For these reports, GAO analyzed FCC and RUS data, and interviewed agency officials as well as a non-generalizable sample of stakeholders representing tribes and broadband providers.
What GAO Found
The Federal Communications Commission’s (FCC) approach to collecting data on broadband availability causes it to overstate broadband access—the ability to obtain service—on tribal lands. In FCC’s approach, broadband is considered to be “available” for an entire census block if the provider could serve at least one location in the census block. FCC, tribal stakeholders, and providers have noted that this approach leads to overstatements of broadband availability. Because FCC uses these data to measure broadband access, it also overstates broadband access on tribal lands. By developing and implementing methods for collecting and reporting accurate and complete data on broadband access specific to tribal lands, FCC would be better able to target federal broadband funding to tribal areas that need it the most.
FCC does not have a formal process to obtain tribal input on the accuracy of provider-submitted broadband data. Most of the tribal stakeholders GAO interviewed stated FCC should work more directly with tribes to improve the accuracy of FCC’s data. Establishing a formal process to obtain input from tribal governments could help improve the accuracy of FCC’s broadband data for tribal lands.
Tribes have formed partnerships with different types of entities to deploy broadband infrastructure on tribal lands, but such partnerships are not widespread. The partnerships GAO identified included private providers, a community access network provider, an electric cooperative, a regional consortium, and tribally owned broadband providers.
GAO reviewed four federal programs to deploy broadband services and found that from 2010 to 2017, less than 1 percent of funding went directly to tribes or tribally owned providers. The tribal entities GAO contacted cited barriers to obtaining Rural Utilities Service (RUS) grant funding, such as preparing network design, demonstrating financial sustainability of the broadband project within 5 years, and obtaining the matching funds required to apply for federal grants. However, according to RUS officials, RUS has not taken steps to identify or address the barriers tribes face when applying for RUS grant funding due to limited resources and multiple competing priorities for those resources. By identifying and addressing regulatory barriers that may impede tribal entities’ access to RUS funding, RUS could help tribes obtain funding to expand broadband deployment on tribal lands.
What GAO Recommends
In GAO-18-630 , GAO made three recommendations to FCC, two of which related to improving its collection of broadband data. In GAO-18-682 , GAO made one recommendation to RUS to address regulatory barriers. FCC agreed and RUS neither agreed nor disagreed and both agencies described actions planned to address the recommendations. |
Background
Investments in federal IT have the potential to make agencies more efficient in fulfilling their missions by reducing costs and improving operational efficiencies. Each year, the federal government invests approximately $90 billion in IT, with about 75 percent reportedly spent on operating and maintaining existing systems. However, as we have previously testified, federal IT investments have too frequently failed or incurred cost overruns and schedule slippages while contributing little to mission-related outcomes. As a result, the federal government has spent billions of dollars on failed and poorly performing IT investments. These investments have often suffered from ineffective management of project planning, requirements definition, and program oversight and governance tasks.
Accordingly, in February 2015, we added improving the management of IT acquisitions and operations to our high-risk list—a list of agencies and program areas that have a higher potential for fraud, waste, abuse, and mismanagement, or are in need of transformation. In introducing this high risk area, we specifically noted that agencies spend a significant portion of their budgets on the operations and maintenance of IT systems and need to effectively manage these investments in order to ensure they continue to meet agencies’ needs and deliver value. We issued an update to our high-risk report in February 2017 and noted that, while progress has been made in addressing the IT acquisitions and operations high-risk area, significant work remains to be completed, including establishing action plans to modernize or replace obsolete investments.
In addition, over the last 3 decades, Congress has enacted several laws to assist agencies and the federal government in managing IT investments. For example, Congress enacted the Clinger-Cohen Act of 1996 to assist agencies in managing their investments. This act requires OMB to establish processes to analyze, track, and evaluate the risks and results of major capital investments in information systems made by federal agencies and report to Congress on the net program performance benefits achieved as a result of these investments.
Further, in December 2014, Congress enacted Federal Information Technology Acquisition Reform provisions (commonly referred to as FITARA) as a part of the Carl Levin and Howard P. ‘Buck’ McKeon National Defense Authorization Act for Fiscal Year 2015. The act requires OMB, among other things, to develop standardized performance metrics, including for cost savings and cost avoidances, and to submit quarterly reports to Congress on cost savings and reductions in duplicative information technology investments.
More recently, recognizing the challenges in modernizing government IT systems, in December 2017, Congress enacted the Modernizing Government Technology Act as part of the National Defense Authorization Act for Fiscal Year 2018. This law authorizes all covered agencies to establish an IT system modernization and working capital fund to, among other things, transition legacy systems to commercial cloud computing and other innovative commercial platforms and technologies using agency reprogrammed funds. The act also establishes a Technology Modernization Fund administered by the Administrator of General Services, in consultation with the CIO Council, which will provide funds to federal agencies for modernization efforts.
As of March 2019, the board that oversees the Technology Modernization Fund had awarded $60.85 million to four projects that plan to migrate or deploy systems to cloud services (a quick arithmetic check follows the list below). Specifically,
GSA’s project, which received an award of $20.65 million, is intended to expedite the completion of a new software as a service solution for the agency’s payroll and work schedule and leave management within 2 years.
The Department of Housing and Urban Development’s project, which received an award of $20 million, is expected to accelerate the migration of five of the agency’s most critical business systems from an on-premise mainframe database to the cloud within the next 2 years.
Energy’s project, which received an award of $15.2 million, is intended to help the agency move 45 separate on-premise email systems to the cloud within the next 3 years.
Agriculture’s project, which received an award of $5 million, is intended to help the agency migrate 10 applications to a shared services cloud platform model.
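Summing the four awards confirms the total cited before the list:

```python
# A quick check that the four awards sum to the total cited above.
awards_millions = {
    "GSA": 20.65,
    "HUD": 20.00,
    "Energy": 15.20,
    "Agriculture": 5.00,
}
print(f"${sum(awards_millions.values()):.2f} million")  # $60.85 million
```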
Overview of Cloud Services
One approach to improving the government’s management of IT services is through cloud computing. As mentioned previously, cloud computing is a means for enabling on-demand access to shared pools of configurable computing resources (e.g., networks, servers, storage applications, and services) that can be rapidly provisioned. More specifically, purchasing IT services through a cloud service provider enables agencies to avoid paying for all the computing resources that would typically be needed to provide such services. This approach offers federal agencies a means to buy services more quickly and possibly at a lower cost than building, operating, and maintaining these computing resources themselves.
According to NIST, cloud computing offers federal agencies a number of benefits:
On-demand self-service. Agencies can, as needed, provision computing capabilities, such as server time and network storage, from the service provider automatically and without human interaction.
Broad network access. Agencies can access needed capabilities over the network through workstations, laptops, or other mobile devices.
Resource pooling. Agencies can use pooled resources from the cloud provider, including storage, processing, memory, and network bandwidth.
Rapid elasticity. Agencies can provision the resources that are allocated to match what actual resources are needed according to demand. This is done by scaling resources up or down by adding or removing processing or memory capacity, or both, according to demand.
Measured service. Agencies can pay for services based on usage. This allows agencies to monitor, control, and generate reports, providing greater transparency into the agency’s use of cloud services.
As noted in NIST guidance, cloud service providers have established three types of service models that are offered to consumers:
Infrastructure as a service. The service provider delivers and manages the basic computing infrastructure of servers, software, storage, and network equipment. The consumer provides the operating system, programming tools and services, and applications.
Platform as a service. The service provider delivers and manages the infrastructure, operating system and programming tools and services, which the consumer can use to create applications.
Software as a service. The service provider delivers one or more applications and all the resources (operating system and programming tools) and underlying infrastructure to run them for use on demand.
NIST has also defined four types of cloud deployment models, including:
Private cloud. Service is set up specifically for one organization, although there may be multiple customers within that organization and the cloud may exist on or off the customer’s premises.
Community cloud. Service is set up for organizations with similar requirements. The cloud may be managed by the organizations or a third party and may exist on or off the organization’s premises.
Public cloud. Service is available to the general public and is owned and operated by the service provider.
Hybrid cloud. Service is a composite of two or more of the three deployment models (private, community, or public) that are bound together by technology that enables data and application portability.
According to NIST guidance, these deployment models impact the number of consumers and the nature of other consumers’ data that may be present in the cloud environment. A public cloud should not allow a consumer to know or control other consumers of a cloud service provider’s environment. However, a private cloud can allow for ultimate control in selecting who has access to a cloud environment. Community clouds and hybrid clouds allow for a mixed degree of control and knowledge of other consumers. Additionally, the cost for cloud services typically increases as control over other consumers and knowledge of these consumers increase.
OMB and Past and Current Administrations Have Undertaken Efforts to Increase Use of Cloud Services
In December 2010, OMB made cloud computing an integral part of its 25 Point Implementation Plan to Reform Federal Information Technology Management. The plan called for the development of a government-wide strategy to hasten the adoption of cloud services. To accelerate the shift, OMB required agencies to identify three systems to migrate to cloud services, create a project plan for migration, and migrate all three systems by June 2012.
In February 2011, OMB issued the Federal Cloud Computing Strategy, as called for in its 25-point plan. The strategy provided definitions of cloud services; benefits of cloud services, such as accelerating data center consolidations; a decision framework for migrating services to a cloud environment; case studies to support agencies’ migration to cloud services; and roles and responsibilities for federal agencies. For example, the strategy states that NIST’s role is to lead and collaborate with federal, state, and local government agency CIOs, private sector experts, and international bodies to identify standards and guidance and prioritize the adoption of cloud services.
Subsequently, in December 2011, OMB established the Federal Risk and Authorization Management Program (FedRAMP), a government-wide program to provide joint authorizations and continuous security monitoring services for cloud services for all federal agencies. GSA initiated FedRAMP operations, which the agency referred to as initial operational capabilities, in June 2012.
In 2012, OMB began requiring agencies to evaluate each investment, or components or systems within the investment, for cloud services, regardless of the overall life-cycle stage of the investment. Agencies were required to report the status of each investment’s evaluation as part of the annual budget submission, as noted in OMB’s annual capital planning guidance. Specifically, OMB required agencies to select an option regarding whether they had evaluated a cloud alternative and chosen a cloud alternative with a particular cloud deployment model or indicate that they had not yet evaluated the investment for cloud services. Starting in fiscal year 2018, OMB revised the options that agencies were to select from and required agencies to select an option regarding whether the investment, or a portion of the investment, was leveraging cloud computing, or indicate that cloud computing had not been considered for the investment.
In 2012, OMB began requiring agencies to report associated cloud spending, as called for in its annual capital planning guidance. For fiscal years 2015 through 2018, OMB’s capital planning guidance required agencies to report their total cloud spending at the agency level based on the cloud deployment model, rather than by individual investment.
Starting in fiscal year 2019, OMB will require agencies to report total cloud spending by investment and use the Technology Business Management Framework. The Framework provides a cost taxonomy for agencies to use to manage the cost, quality, and value of their IT services. Specifically, agencies will be required to use a standard set of cost categories to group IT spending, including cloud-related spending. This new model is intended to increase the granularity in reporting of agency IT budget and spending data.
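As a rough illustration of the kind of grouping the Framework calls for, consider the following sketch in Python; the cost categories and line items are hypothetical assumptions for illustration, not the Framework’s actual taxonomy.

```python
# An illustrative sketch of grouping IT spending into standard cost
# categories, in the spirit of the Framework; category names and line
# items are hypothetical.
from collections import defaultdict

line_items = [
    ("IaaS compute subscription", "cloud", 1.2),           # $ millions
    ("SaaS email licenses",       "cloud", 0.5),
    ("Data center power",         "infrastructure", 0.8),
    ("Help desk contract",        "end_user_support", 0.3),
]

totals = defaultdict(float)
for name, category, cost in line_items:
    totals[category] += cost

for category in sorted(totals):
    print(f"{category}: ${totals[category]:.1f} million")
```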
In addition, in May 2017, the administration established the American Technology Council to help transform and modernize federal IT and how the government uses and delivers digital services. The President is the chairman of this council, and the Federal CIO and the United States Digital Service Administrator are among its members.
Subsequently, in December 2017, the American Technology Council issued a Report to the President on Federal IT Modernization and made eight cloud computing-related recommendations that are relevant to the focus of our review. For example, the report recommended that OMB issue two data calls to agencies in order to: (1) obtain a list of agency in-progress and pending projects for cloud migration; and (2) have agencies identify systems that have not yet migrated due to perceived or encountered difficulties. Based on the information provided, OMB would then assist agencies in making transition plans and work to remove obstacles in order to accelerate cloud adoption. In addition, the report recommended that OMB take action to update its guidance related to cloud computing and revise the Federal Cloud Computing Strategy that was previously issued in 2011.
According to staff in OMB’s Office of E-Government and Information Technology, OMB has taken action to address these recommendations. For example, the staff reported that the two data calls were issued in December 2017 and staff are currently reviewing the information provided by agencies in response. In addition, OMB issued its draft strategy revision, the 2018 Federal Cloud Computing Strategy, for comment on September 24, 2018. This proposed Cloud Smart policy outlines a strategy for agencies to adopt cloud solutions that streamline transformation and embrace modern capabilities.
According to the draft strategy, Cloud Smart focuses on equipping agencies with the tools needed to make informative technology decisions in accordance with their mission needs. In addition, the draft strategy indicates that OMB intends to leverage private-sector solutions to provide the best services to the American people. The strategy also notes that the CIO Council and Chief Financial Officer Council are to work with OMB, GSA, DHS, and other federal entities to develop a work plan of actions and targeted policy updates that are to be delivered over the next 18 months. For more information about the current status of each of these eight cloud recommendations, as reported by OMB, please see appendix II.
Prior GAO Reports on Efforts to Implement Cloud Services
During the past several years, we reported on federal agencies’ efforts to implement cloud services, and on the progress that oversight agencies have made to help federal agencies in those efforts. For example, in July 2012, we reported that the seven federal agencies we reviewed had made progress in meeting OMB’s requirement to implement three cloud services by June 2012. Specifically, the seven agencies had implemented 21 cloud services and spent a total of $307 million for cloud computing in fiscal year 2012—about 1 percent of their total IT budgets. In addition, while all seven agencies had submitted plans to OMB for implementing cloud solutions, all but one plan were missing key required elements. We made 14 recommendations to the seven agencies to develop planning information, such as estimated costs and legacy IT systems’ retirement plans for existing and planned services. The agencies generally agreed with, and implemented, 13 of our 14 recommendations.
In September 2014, we reviewed the efforts of the same seven federal agencies again and found that each of them had implemented additional cloud services subsequent to our July 2012 report. In particular, the total number of cloud services implemented by the seven agencies had increased by 80 services, from 21 to 101. The seven agencies’ reported spending on cloud services had also increased by $222 million, from $307 million in 2012 to $529 million in 2014. However, this relatively small increase in cloud spending was attributed, in part, to the fact that these agencies had not considered cloud services for 67 percent of their investments. Accordingly, we recommended that the seven agencies assess their IT investments for suitability for cloud services. The agencies generally agreed with our recommendations and 6 of the agencies (Agriculture, DHS, GSA, HHS, SBA, and State) implemented all of our recommendations.
Further, in April 2016, we identified 10 key practices that federal and private-sector guidance noted should be included in service-level agreements in a contract when acquiring IT services though a cloud services provider. However, our review of five agencies’ (Defense, DHS, HHS, Treasury, and VA) cloud service contracts found that not all 10 key practices were included in these contracts. We therefore made recommendations to OMB to include all 10 key practices in future guidance to agencies. We also recommended that the five agencies incorporate these key practices as their contract and service level agreements expire. The agencies generally agreed with our recommendations and, to date, Defense and DHS have taken action to implement the recommendations.
More recently, in April 2017, we highlighted the results of a forum, convened by the Comptroller General on September 14, 2016, to explore challenges and opportunities for CIOs to improve federal IT acquisitions and operations—with the goal of better informing policymakers and government leadership. Thirteen current and former federal agency CIOs, members of Congress, and private-sector IT executives who participated in the forum noted challenges with agency operations that could be addressed by migrating more services to the cloud. In their view, this approach would offer agencies a means to buy the services faster and possibly at a lower cost than through the traditional methods of building and maintaining systems.
In addition, forum participants noted the importance of federal agencies’ IT procurement offices and processes evolving to align with new technologies, as agencies are not always set up to take advantage of cloud services. Lastly, forum participants said that, as the federal government is expected to increase its purchase of IT as a service with the move toward cloud computing, more oversight is needed to ensure that appropriate contracts are in place and appropriate oversight of performance occurs.
Most of the Selected Agencies Reported Making Progress in Implementing Cloud Services
The 16 selected agencies reported making progress in implementing cloud services—namely, they established guidance for assessing investments for cloud services, performed those assessments, and implemented cloud services for their investments. However, the extent of these agencies' progress varied. Specifically, 10 of the 16 agencies established guidance for assessing all new and existing investments for cloud services, while six agencies did not. In addition, while these agencies had assessed the majority of their investments for cloud services planned for fiscal year 2019, 12 agencies had not completed an assessment of 10 or more IT investments for cloud services. Lastly, 10 of the agencies reported a percentage increase in the use of cloud services from fiscal year 2016 through fiscal year 2019, while two agencies reported no percentage change and four agencies reported a decrease during this 4-year period.
About Two-thirds of the Selected Agencies Had Established Guidance for Assessing Investments for Cloud Services Suitability
OMB’s Cloud First policy, issued in February 2011, requires each agency’s CIO to implement a cloud service whenever there is a secure, reliable, cost-effective option to do so. Further, subsequent OMB capital planning guidance, issued in 2014, requires agencies to evaluate each investment, or components or systems within the investment, for cloud services, regardless of the overall life-cycle stage of the investment.
While OMB’s guidance is not specific on how agencies should conduct these evaluations, GAO’s Information Technology Investment Management framework notes that organizations should have documented policies and procedures for the management oversight of IT investments, including the selection of investments and the evaluation of information technologies that have the potential to improve the organization’s business.
Ten of the 16 agencies we reviewed had established guidance in accordance with OMB’s requirement to assess new and existing IT investments for suitability for cloud services, as of August 2018. In particular, all 10 agencies’ guidance required assessments of cloud service suitability for both new and existing IT investments.
However, the remaining six agencies did not have such comprehensive guidance in place. Rather, their guidance either required assessments of new or existing systems for cloud services, but not both, or had not yet been established. Specifically, Labor's and SSA's guidance required assessments of new investments for cloud services but did not address assessments of existing investments. In addition, Energy's guidance required assessments of existing systems but not new acquisitions. Further, three agencies (Education, HHS, and Transportation) had not established guidance for assessing investments for cloud services.
The results of our analysis of agencies’ guidance on assessing IT investments for cloud services are shown in figure 1.
Agency officials in the Office of the CIO at the six agencies provided a variety of reasons why they did not have guidance for assessing all investments for cloud services. Specifically, Labor officials reported that they had not included legacy applications in their guidance because not all applications should or could be migrated to the cloud. In addition, SSA officials reported that they were planning to assess all existing systems for cloud services, but had not determined a time frame for this review. Further, Energy officials reported that, while the agency was following OMB's Cloud First policy, it would need to establish guidance for assessing new investments for cloud services; however, a date for doing so had not been determined.
Transportation officials reported that they believed their guidance on managing cloud computing efforts was consistent with OMB’s Cloud First policy and stated that they had no plans to develop additional guidance. However, our review of the agency’s guidance found that it did not include any information regarding the assessment of investments for cloud services. Instead, the guidance only required that investments intending to use cloud services provide procurement and other cost information as part of the business case and use specified language in contracts with cloud service providers. Therefore, we believe that the guidance is not consistent with OMB’s guidance requiring agencies to assess investments for cloud services.
Education officials reported that they were in the process of finalizing a policy and hoped to have it completed by the end of the year. In addition, HHS officials reported that they had explored developing some guidance regarding cloud services, but had not established any plans to do so.
As previously discussed, assessing all new and existing IT investments to determine whether they are suitable for cloud services is an important component of OMB’s Cloud First policy. Until the six identified agencies update or establish guidance for assessing both new and existing investments for cloud services, they will not be positioned to ensure adequate implementation of OMB’s Cloud First policy. Further, these agencies increase the risk that they will not be able to take advantage of cloud services to improve operational efficiencies and minimize costs.
Selected Agencies Assessed the Majority of Their IT Investments for Cloud Services
As noted previously, OMB’s fiscal year 2016 IT capital planning guidance requires agencies to evaluate each investment for cloud services and report the status of this evaluation as part of the annual budget submission. Specifically, agencies were to respond to a question regarding whether they had selected cloud services for the investment, or components or systems within the investment, or, for example, report that the investment had not yet been assessed for cloud services. OMB publicly reports agencies’ responses to this question on the IT Dashboard.
As of October 9, 2018, the 16 agencies in our review reported on the IT Dashboard that they had completed cloud assessments for 84 percent of their IT investments (5,180 out of a total of 6,157) planned for fiscal year 2019. Of these, two agencies (GSA and State) had completed an assessment of all investments. However, 12 agencies had not completed an assessment of 10 or more IT investments for cloud services.
Table 1 lists the number of IT investments at the 16 selected agencies for fiscal year 2019 that had been assessed for cloud services. The table also shows the number and percentage of investments that remained to be assessed.
Officials in the Office of the CIO at the 12 agencies provided a variety of reasons why they had not assessed all investments for cloud services. For example, Agriculture officials reported that 21 of their 53 investments did not need assessments because the investments were not suitable for cloud services. The officials said they intended to update the IT Dashboard to reflect this change. Further, these officials stated that they planned to assess the remaining 32 investments by April 30, 2019.
Defense officials reported that the agency was in the process of adjusting its cloud strategy that was issued in January 2018 and intends to address investments that have not yet been evaluated. However, the officials stated that they had not established time frames for the evaluations. In addition, DHS officials reported that they were in the process of implementing their guidance and putting in place a new process for identifying planned acquisitions based on the phase in the acquisition life cycle. However, the officials had not identified a time frame for when the new process would be finalized or when all assessments of the investments would be completed.
Justice officials reported that, as they began the budget process, the agency planned to look at performing additional assessments of investments for cloud services. However, the officials provided no time frames for when these assessments would be completed. In addition, SSA officials stated that the agency planned to perform an assessment of current investments for cloud services. However, the officials reported that they had not established a time frame for completing these assessments.
Further, Treasury officials reported that, while the agency had established a process for assessing investments for cloud services, it did not set specific dates for when the assessments were to be conducted. These officials reported that they only conducted a cloud assessment if the agency determined that it would replace, redevelop, or retire an investment. However, Treasury’s guidance is not consistent with OMB’s requirement that agencies conduct an annual assessment of all investments, regardless of the overall life-cycle stage of the investment.
Many of the 16 agencies in our review have made progress in implementing cloud services by establishing guidance for assessing investments for cloud services and performing assessments. Even agencies that lacked formal assessment guidance made progress in increasing their use of cloud services once assessments were completed. Nevertheless, 12 agencies still need to assess a large number of their investments. Until these agencies assess the investments that have yet to be evaluated for cloud services, they may not know which investments are likely candidates for migration to cloud services. Moreover, these agencies will not be positioned to take advantage of operational efficiencies, cost savings, and other benefits from the use of cloud services.
Selected Agencies Have Increased Their Use of Cloud Services
As of October 9, 2018, the 16 agencies in our review reported on the IT Dashboard that 11 percent of their IT investments were projected to use cloud services for fiscal year 2019—an increase of 3 percentage points from fiscal year 2016. In addition, 13 of the 16 agencies reported that they planned to increase their use of cloud services, in some cases by as much as 20 percentage points or more, between fiscal years 2018 and 2019.
Table 2 lists the percentage of the selected agency IT investments that used cloud services for fiscal years 2016 through 2018 and are projected for 2019. (For additional details on the number of cloud investments and the total investments reported by each of the selected agencies for fiscal years 2016 through 2019, see appendix III.)
In addition, while the majority of agencies made progress in implementing cloud services between fiscal years 2016 and 2019, the extent of that progress varied. Specifically, 10 of the 16 agencies reported an increase in the use of cloud services, with increases ranging from less than 10 percentage points to 20 percentage points or more.
For the remaining six agencies, two reported no change in the percentage of investments using cloud services and four reported a decrease in the overall percentage of cloud usage. Figure 2 shows the breakdown in the range of percentage point changes in the use of cloud services for agency investments for fiscal years 2016 through 2019, as reported on the IT Dashboard.
Officials in the Offices of the CIO and Office of Information Technology at the six agencies that reported no change or a decrease in their cloud investment percentages during this 4-year period either provided reasons for the trend or had no comment. Specifically, Energy officials reported that the agency had not shown an increase in the percentage of its cloud investments due to an IT portfolio optimization effort designed to consolidate the agency's cloud investments. According to the officials, this optimization effort was designed to reduce the total number of these investments during the 4-year period and, as a result, affected the overall percentage of cloud investments. As for Labor, its officials did not offer any comments regarding the lack of an increase in cloud use during this period.
In addition, Defense, DHS, and Education officials reported that staff in their agencies had inconsistently applied the definition of cloud computing, which had led to differences in identifying and reporting the number of cloud investments within their agencies during this period. Further, DHS officials noted that the ongoing addition, combination, completion, and cancellation of investments had contributed to the fluctuation in the number of cloud investments within their agency. Finally, VA officials reported that their cloud identification processes were maturing during this period and, as such, had resulted in different cloud investment counts.
Some of the inconsistencies that agencies reported in identifying cloud investments may also be a result of OMB's changes to its guidance during this 4-year period. Specifically, OMB changed how agencies were required to report their use of cloud services in fiscal year 2018, and revised the options that agencies were to select from in order to identify and report the use of cloud services for each investment.
Going forward, several agencies reported that they intended to continue making progress in their implementation of cloud services beyond fiscal year 2019. For example:
Education officials reported that the agency expected to increase cloud use significantly in 2019 and beyond due to an IT services contract award that is to support the migration of the agency’s primary hosting infrastructure to the cloud.
DHS officials reported that the agency had set aggressive goals for acquiring cloud services. Toward this end, the agency had initiated a cloud steering group and created a team with staff from all components. In addition, the officials reported that they planned to consolidate space in one data center and eliminate another data center, which would allow the agency to accelerate its migration to the cloud.
Justice officials reported that they expected to increase spending on cloud services as the agency completed ongoing initiatives in 2019. In addition, the officials reported that they anticipated migrating the majority of the agency’s unclassified data to the cloud in the next few years.
VA officials reported that the agency planned to migrate at least 350 applications to the cloud by 2024.
Agencies’ efforts to acquire additional cloud services and take advantage of improved efficiencies and cost savings should help to further improve their management of IT acquisitions and operations.
Agencies Have Increased Spending and Realized Savings from Using Cloud Services, but Spending and Savings Figures Are Underreported
The 16 agencies in our review made progress in implementing cloud services. Specifically, the 16 agencies reported that their spending on cloud investments had increased by over $1 billion between fiscal years 2015 and 2018 for investments with total life-cycle costs of $1 million or more. Nevertheless, the agencies reported that factors such as inconsistent tracking of spending data, along with confusion in interpreting OMB guidance, affected the accuracy of their reported cloud spending data. In addition, 13 of the 16 agencies provided savings data indicating that they had saved hundreds of millions of dollars on cloud services, but agencies reported that they had problems with tracking these data. Further, six agencies reported that they had reinvested cloud savings into other IT modernization efforts or other improvements to IT services.
Selected Agencies Are Spending More on Cloud Services, but Do Not Have Complete Spending Data
OMB requires agencies to report spending on cloud services. Specifically, OMB’s annual capital planning guidance for fiscal years 2015 through 2018 required agencies to report their total cloud spending on the IT Dashboard, although it did not require the information to be reported by investment.
The selected agencies' reporting on the IT Dashboard indicated that their percentage of total spending on cloud services generally remained constant during fiscal years 2015 through 2017. Specifically, the 16 agencies reported on the IT Dashboard that approximately 3 percent of their total IT spending each fiscal year during this period was spent on cloud services. For fiscal year 2018, agency-reported cloud spending through March 2018 was at 2 percent. Table 3 identifies the percentage of the selected agency spending on cloud services for fiscal years 2015 through 2018, as reported on the IT Dashboard. (For additional details on total agency cloud spending and total IT spending reported by each of the selected agencies for fiscal years 2015 through 2018, see appendix IV.)
However, the breakdown in spending by investment that the 16 agencies provided to us for cloud services with $1 million or more in life-cycle costs showed that their spending on cloud investments had increased during fiscal years 2015 through 2018 and beyond (agencies generally submitted data on planned spending for one or more fiscal years beyond 2018). Specifically, the agencies' data showed that total cloud spending for these investments was approximately $1.38 billion in fiscal year 2017—an increase of over $1 billion since fiscal year 2015. In addition, the 16 agencies' data indicated that they plan to spend over $3.2 billion on cloud services in fiscal year 2018 and beyond for these investments.
Table 4 summarizes the 16 selected agencies' reported total spending for investments with $1 million or more in life-cycle costs for cloud services, from fiscal years 2015 through 2018 and beyond through fiscal year 2024. (For a list of the investments with $1 million or more in life-cycle costs for cloud services, as reported by each of the 16 selected agencies for fiscal year 2018, see appendix V.)
Officials in the Office of the CIO at all of the agencies in our review identified three factors that could affect the completeness of the cloud spending data provided to us and reported on the IT Dashboard: (1) spending data were not consistently tracked; (2) different methods were used to calculate cloud spending costs; and (3) interpreting changes in OMB and related guidance created confusion regarding what spending data should be tracked.
Spending data were not consistently tracked. Defense officials reported, for example, that the agency had only begun tracking cloud spending in fiscal year 2016 and, therefore, spending data were not available for fiscal year 2015. In addition, VA officials reported that they were in the process of maturing their tracking of cloud spending data and, therefore, the agency did not have spending data available for the majority of their investments prior to fiscal year 2019. Further, Justice officials reported that the agency had been challenged to track cloud costs because the costs are based on fluctuating usage, rather than a flat rate.
Different methods were used to calculate cloud spending costs. Some agencies reported that the data they provided to us included costs for items such as power usage and staff full-time equivalents, while other agencies told us that they only provided contract costs. In addition, some agency officials noted that their spending figures included the additional costs of migrating applications to cloud services, while other officials said that these costs had not been included in their spending totals.
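To illustrate how these differing methods can drive reported totals apart, the brief sketch below compares a contract-only figure with a fully loaded figure for the same hypothetical investment; every dollar amount is an assumption for illustration only, not actual agency data.

```python
# Minimal sketch with hypothetical figures: two agencies report the same
# cloud investment, but include different cost elements in the total.
contract_cost = 2_000_000      # annual cloud service contract (hypothetical)
migration_cost = 450_000       # one-time migration effort (hypothetical)
power_and_fte_cost = 300_000   # power usage and staff time (hypothetical)

# Agency A reports only the contract cost.
agency_a_total = contract_cost

# Agency B reports a fully loaded figure.
agency_b_total = contract_cost + migration_cost + power_and_fte_cost

print(f"Agency A reports: ${agency_a_total:,}")   # $2,000,000
print(f"Agency B reports: ${agency_b_total:,}")   # $2,750,000
```

Under these assumptions, two agencies with identical cloud usage would report totals differing by more than a third, which is one way the inconsistencies described above can arise.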
Interpreting changes in OMB and related guidance created confusion regarding what spending data should be tracked. Agencies noted that OMB had made changes to its guidance since 2015, including clarifications to the definition of cloud computing, changes to the definition and scope of cloud services and cloud spending, and changes to the guidance regarding what applicable costs should be included in spending totals—all of which created confusion regarding which investments and costs should be tracked for cloud services. Defense officials also reported that the agency had misinterpreted the NIST definition of cloud computing and, as a result, had misreported that certain IT investments were using cloud services when they were not. The officials reported that the agency had corrected this issue, but it affected the total cloud spending reported during this period and led to the decrease in reported spending noted above.
Based on our review of these factors reported by the 16 selected agencies, we identified issues with the completeness of the reported cloud spending data. Specifically, these factors make it likely that the 16 selected agencies did not capture all costs associated with their spending on cloud services. As a result, the total cloud spending that agencies reported on the IT Dashboard, and the data they provided to us, are likely underreported.
Staff in OMB’s Office of E-Government and Information Technology stated that agencies have previously reported challenges in breaking out cloud costs, particularly when the cloud acquisition is part of a larger contract. Given these challenges, the staff acknowledged that agency- reported cloud spending data are underreported and stated that the IT Dashboard reflects only a fraction of actual federal spending on cloud services. However, the staff stated that OMB’s changes to its guidance, beginning in fiscal year 2019, should help to improve the reporting of cloud spending data. Specifically, beginning in fiscal year 2019, agencies will be required to report total cloud costs by investment, per OMB’s IT capital planning guidance, and use the Technology Business Management framework.
Having complete data on spending for cloud services is critical in order to ensure that agencies can provide effective management and oversight of their cloud use, and that OMB and lawmakers can hold CIOs accountable for the performance of these cloud investments. The changes to OMB's guidance for fiscal year 2019 provide a key improvement for ensuring that agencies establish more consistent processes for reporting on cloud spending and should help agencies improve the completeness of the cloud spending data that they report to OMB.
Selected Agencies Report Saving Approximately $291 Million from the Use of Cloud Services but Acknowledge Data Are Incomplete
Since 2013, OMB has required agencies to report quarterly on their total savings and cost avoidances from implementing OMB's IT reform initiatives, including savings realized from the migration to cloud services. Specifically, agencies are required to report actual and planned savings from implementing these initiatives in a quarterly submission and identify which OMB initiative resulted in the reported savings. However, the reporting mechanism allows agencies to associate savings only with certain OMB initiatives, and this list does not include the migration to cloud services. Standards for Internal Control in the Federal Government emphasizes that management should track the actual performance of key initiatives in order to ensure that these activities are meeting plans, goals, and objectives, and in doing so, management should use quality information.
Thirteen of the 16 agencies in our review provided savings data to us for at least one cloud investment with life-cycle costs of $1 million or more for cloud services during fiscal years 2014 through 2018. In total, the 13 agencies’ provided data showed that they had accrued approximately $291 million in savings or cost avoidances using cloud services since 2014. In addition, the agencies’ data indicated that they planned to save at least $150 million in fiscal year 2018 and beyond (agencies generally submitted data on planned savings for one or more fiscal years beyond 2018).
However, agency officials from the 13 agencies stated that, while they were able to provide some savings data, these data are only tracked on an ad hoc basis for certain cloud investments. In addition, officials from three agencies (Defense, State, and SSA) stated that they could not provide savings data for any of their cloud investments. As a result, the 16 agencies were unable to provide savings or avoidance data for 411 out of 488 investments (84 percent) that we reviewed.
Table 5 shows, for the selected agencies in our review, the breakdown in total agency savings and cost avoidances for fiscal years 2014 through 2018 and beyond for investments with $1 million or more in life-cycle costs for cloud services.
Officials in the Office of the CIO at the 16 agencies identified three factors that impacted their efforts to provide data on savings or cost avoidances for cloud computing investments: (1) savings data were not systematically tracked or were hard to track; (2) deploying or migrating systems to the cloud had resulted in no cost savings; and (3) OMB does not require agencies to identify savings associated with cloud services as part of reported savings.
Savings data were not systematically tracked or were hard to track. Defense, Treasury, and VA officials reported that their investment management systems did not have the capability to track cloud savings or avoidance data. In addition, GSA officials reported that, while their system had the capability to track cost savings data, the agency did not capture and track realized cloud savings in a consistent format. GSA officials stated that they were in the process of implementing the Technology Business Management Framework, which they expected would improve the collection of these data. However, the officials did not identify a time frame for when this framework was to be implemented.
Education officials reported that the agency did not provide cost savings data for those investments where cost savings targets had not been established or anticipated. SBA officials reported that investments with two cloud providers had only been recently made so the agency could not yet make a reasonable assessment of savings. Further, Agriculture officials reported that the agency had a hard time tracking the savings from certain investments because the process for formulating the overall agency budget was different than the process for determining savings from cloud implementation.
State officials reported that they were in the process of developing the capability to collect and track savings data from using cloud services but did not have any reliable data to provide during our review. In addition, Energy officials reported that their agency intended to establish review processes in the coming year to ensure that costs, cost savings, and cost avoidances were tracked for all cloud investments. As part of this process, the agency intended to work closely with its components to ensure that there was a consistent application of definitions for cloud spending and savings. However, the officials did not identify a specific time frame for when the agency expected the new process to be completed.
Lastly, HHS officials reported that the agency did not expect to track cost savings beyond the FITARA requirements. FITARA requires the reporting of savings associated with two OMB initiatives—data center consolidation and PortfolioStat. However, per M-13-09, OMB requires agencies to report savings associated with all of its initiatives. As such, HHS’s tracking of savings is not consistent with OMB’s guidance.
Deploying or migrating systems to the cloud resulted in no cost savings. Treasury officials reported that their agency had not realized any cost savings from the migration of certain investments because the acquisition of cloud services either had allowed the agency to purchase additional capabilities that the previous system did not have, or the agency had continued to operate the previous system at the same time as the new cloud system for a period of time. In addition, Commerce officials reported that their agency had not realized any cost savings for some investments because acquiring cloud services required that new business and performance requirements be put in place, which resulted in no overall savings for these investments. Further, DHS and SSA officials reported that a number of their investments were new applications that were developed and deployed in the cloud. As such, there were no costs from a prior system that could be compared with the costs to maintain the new system using cloud services; thus, there were no associated cost savings or avoidances.
OMB does not require agencies to identify savings associated with cloud services as part of reported savings. Officials from Agriculture, Justice, and Transportation noted that, while OMB requires agencies to report savings, the current reporting instructions do not specifically require the identification and reporting of cloud savings as a separate category of cost savings and avoidance. In this regard, OMB's guidance requires agencies to identify which OMB initiative resulted in the reported savings, but the available options for agencies to choose from do not include cloud services. Accordingly, officials from these agencies stated that, to provide this information to us, they either reached out to their components to collect it or had to review their investments to determine whether there were any cloud savings.
Based on our review of the factors that impacted the selected agencies' efforts to provide savings data, we identified issues with the completeness of the savings data. Specifically, the challenges the selected agencies identified in systematically tracking savings data, and the lack of a specific OMB requirement to report savings associated with cloud services, make it likely that the agencies that provided these data did not capture all savings associated with cloud services. As a result, the cloud savings that agencies reported on the IT Dashboard, and the data they provided to us, are likely underreported.
Staff in the Office of E-Government and Information Technology stated that, while agencies are required to report total savings related to OMB initiatives, the format is left to agency discretion. In addition, OMB staff confirmed that agencies do not have to specifically identify savings related to cloud computing unless they choose to do so. OMB staff further said that they do not require a specific format for reporting savings in order to minimize the burden on agencies in reporting this information.
While OMB’s effort to minimize the reporting burden on agencies is appropriate, the lack of an explicit requirement to identify reported savings associated with cloud services has contributed to agencies not consistently tracking these savings. In addition, while OMB has taken steps to ensure more accuracy and granularity in agency reporting of cloud investment spending data in fiscal year 2019, there has not been a corresponding effort to ensure better reporting of cloud savings data. As a result, OMB and Congress may not have sufficient data to see the results of key initiatives, like Cloud Smart, and understand whether agencies are achieving savings using cloud services.
Since 2013, OMB has required agencies to report on the savings resulting from implementation of its key IT reform initiatives. Although OMB does not provide the means for agencies to explicitly identify cloud-related savings, it is nevertheless important for agencies to take steps to fully track savings and cost avoidances from cloud computing acquisitions in order to ensure effective oversight and management of these initiatives. However, until OMB establishes a specific cloud savings reporting requirement, and until these agencies establish a consistent and repeatable mechanism to track these savings and cost avoidances, the agencies may lack sufficient information on the results of cloud acquisitions to date and the data necessary to make decisions regarding future cloud acquisitions.
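As an illustration of what such a consistent, repeatable mechanism could look like, the minimal sketch below records savings and cost avoidances the same way for every investment in each reporting period; the record fields, investment names, and dollar amounts are our own assumptions for illustration, not an OMB or agency schema.

```python
from dataclasses import dataclass

@dataclass
class CloudSavingsRecord:
    """One reporting period's entry for a single cloud investment."""
    investment_name: str
    fiscal_year: int
    quarter: int
    savings: float          # realized reductions in spending, in dollars
    cost_avoidance: float   # costs avoided relative to the legacy baseline

# Hypothetical entries recorded the same way every quarter.
ledger = [
    CloudSavingsRecord("Email-as-a-Service", 2018, 1, 120_000.0, 40_000.0),
    CloudSavingsRecord("Email-as-a-Service", 2018, 2, 125_000.0, 40_000.0),
    CloudSavingsRecord("Case Management SaaS", 2018, 1, 0.0, 75_000.0),
]

# Quarterly rollup suitable for reporting to OMB.
total = sum(r.savings + r.cost_avoidance for r in ledger)
print(f"Total savings and cost avoidance: ${total:,.0f}")
```

Recording every investment in the same structure each quarter would make the rollup repeatable and auditable, which is the property that the agencies' ad hoc tracking currently lacks.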
Agencies Reported Reinvesting Cloud Implementation Savings into IT Modernization or Other Improvement Efforts
In 2017, Congress enacted what is known as the Modernizing Government Technology Act, which authorized covered agencies to establish an IT system modernization and working capital fund. This fund was to be used to, among other things, transition legacy IT systems to commercial cloud computing and other innovative commercial platforms and technologies using agency reprogrammed funds or amounts made available to the IT working capital fund through discretionary appropriations.
Regardless of the extent of agencies’ processes for tracking savings obtained from using cloud services, officials in the Office of the CIO at six agencies in our review (Education, GSA, Labor, SBA, SSA, and Treasury) reported that they have reinvested these savings into other IT modernization efforts or other improvements to IT services. For example:
Education officials reported that $498,000 in fiscal year 2018 cloud savings was used to modernize the agency’s network infrastructure in order to provide increased multipath bandwidth and software that automatically routes traffic if network issues occur.
GSA officials reported that the agency used the savings from replacing the agency’s legacy on-premises email program with a cloud-based email system to implement a modern enterprise collaboration platform, email, and document storage system. According to the officials, the move to the cloud helped improve the agency’s flexibility (the new system is accessible from any device, at any time, and from any location), productivity, and cost-effectiveness. As part of this effort, the officials reported that the savings were managed using the agency’s working capital fund.
Labor officials reported that their agency is using savings and cost avoidances to partially fund an initiative to consolidate cloud services within the agency in order to provide future secure cloud services and establish an enterprise contract vehicle to obtain cloud services. The officials noted that this investment is intended to allow the agency’s components to leverage a cloud authority to operate, obtain competitive pricing for, and establish communications to cloud service providers. In addition, Labor officials reported that the agency has established a working capital fund that is to be used to manage the savings from cloud and shared services.
SBA’s CIO reported that the agency reinvested $7.8 million in savings from efforts to consolidate data centers to the cloud toward the implementation of other enterprise-wide modernization efforts. In addition, the agency had used the savings for the deployment and migration of additional applications to cloud services. Specifically, the CIO reported that the savings were used to design and architect cloud services, roll out the agency’s update of key operating system and office applications, decommission obsolete data center assets, reduce overlapping technologies, and enhance security and compliance capabilities with new enterprise tools and network monitoring. In addition, the CIO stated that, by using the savings from the data center consolidation, SBA has been able to undertake all of the agency’s cloud modernization efforts with no additional budgeted funding.
We have previously reported that significant work remains to ensure that agencies improve their management of IT acquisitions and operations, including modernizing or replacing obsolete IT investments. It is encouraging that several agencies have reinvested savings from cloud initiatives into other IT modernization efforts and, in some cases, have taken advantage of working capital funds authorized by Congress to do so. Having complete information on the savings or avoidances that result from cloud initiatives and using those savings to further IT modernization efforts is critical to ensuring the transformation of IT services across the federal government in the future.
Selected Agencies Have Realized Benefits from Cloud Services
Officials from 15 of the 16 agencies in our review reported that they had realized several significant benefits from the adoption of cloud services, ranging from improvements in the delivery of IT services to increases in the efficiency of operations and systems. In addition, the 15 agencies noted that certain key practices enabled them to realize these benefits through the successful implementation of cloud services. These practices included establishing new governance planning activities and policies, reorganizing the management of agency IT resources, and having executive leadership involved to help drive acquisition efforts.
Cloud Services Aid with IT Efficiency, Cost Savings, and System Modernization
Officials in charge of cloud services at 15 of the 16 agencies in our review reported that they had identified five significant or notable benefits as a result of acquiring cloud services. Specifically, 13 of these agencies reported that they had improved customer experiences through better design and performance of business systems and customer websites. In addition, all 15 agencies reported that they were able to procure more flexible and scalable IT resources, and reduce the cost of provisioning infrastructure and managing services. Table 6 lists the five significant or notable benefits reported by the 15 agencies and the number of agencies that reported each benefit. The discussion that follows the table provides examples of each of the five agency-reported benefits from the acquisition of cloud services.
Officials in the Office of the CIO at the 15 agencies reported that acquiring cloud services had allowed them to procure IT resources that were more flexible and scalable than the prior legacy infrastructure. For example, officials in Labor’s Office of the CIO reported that they had acquired cloud services to address seasonal demands for system processing. By eliminating the need to purchase additional servers and other equipment that would go unused during the rest of the year, Labor officials reported that cloud services allow the agency to scale resources up during these periods of increased processing and then scale the resources back down when the excess capacity is no longer needed.
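As a rough illustration of the elasticity Labor described, the sketch below shows the kind of threshold-based scaling logic that cloud providers' autoscaling services implement; the thresholds, capacities, and workloads are hypothetical assumptions, not Labor's actual configuration.

```python
def target_server_count(current_load_pct: float, current_servers: int,
                        min_servers: int = 2, max_servers: int = 20) -> int:
    """Scale capacity up during peak processing and back down afterward."""
    if current_load_pct > 75.0:          # peak season: add capacity
        desired = current_servers + 2
    elif current_load_pct < 25.0:        # off season: release unused capacity
        desired = current_servers - 1
    else:
        desired = current_servers
    return max(min_servers, min(max_servers, desired))

# Example: a seasonal spike drives capacity up, then releases it afterward.
print(target_server_count(90.0, 4))   # 6 during the processing surge
print(target_server_count(10.0, 6))   # 5 as demand subsides
```

The point of the sketch is the shape of the decision: capacity follows measured load, so an agency pays for extra servers only while the seasonal surge lasts.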
In addition, officials in DHS’s Office of the CIO reported that, in 2012, they had acquired software as a service for the agency’s virtual desktop solution. This new service provided six agency components access to virtual secure desktop operating systems and applications. By eliminating the need for users to be physically present in a specified location in order to perform work activities, DHS officials reported that cloud services had improved the ability to quickly respond to the agency’s mission needs and provided teleworking capabilities. In addition, the officials reported that the solution streamlined the process of provisioning network access between agency components and other external agencies.
Acquiring Cloud Services Helped Agencies Reduce the Cost of IT Services
Officials in the Office of the CIO at the 15 agencies reported that acquiring cloud services had allowed them to procure more cost-effective options for provisioning IT infrastructure and managing IT services. For example, officials in Education’s Office of the CIO reported that, by migrating the Institute of Education Sciences’ data center to the cloud in 2014, the agency had saved approximately $3.3 million in cost avoidances annually for the last 3 years from not having to pay prior data center hosting charges. In addition, Education officials reported that the agency had saved $11.6 million between fiscal years 2013 and 2018 by eliminating contractor website hosting.
In addition, officials in Energy's Office of the CIO reported that the agency saved $900,000 in fiscal years 2013 to 2014 by transitioning to a cloud-based platform for managing IT services, such as asset management. Acquiring the software and platform as a service reduced or eliminated the costs of administering the agency's on-premises legacy infrastructure and associated software licensing fees.
Using Cloud Services Increased the Efficiency of Agency Operations and Systems
Officials in the 15 agencies’ Offices of the CIO reported that acquiring cloud services had allowed them to streamline or improve systems and automate business processes and other functions. For example, officials in State’s Office of the CIO reported that the agency had previously relied on paper-based and manual processes for completing employee requests for, among other things, leave, training, personal identification cards, and other general services. By acquiring software and platform as a service, State implemented an electronic application that replaced over 800 paper forms used to make these requests, without the time and cost of developing an application themselves. As a result, the officials reported that they estimate the application has saved more than 50,000 hours of staff time since its deployment by streamlining the request process, automatically populating common data fields, and improving support options.
In addition, officials in Treasury’s Community Development Financial Institutions Fund reported that their office acquired software as a service, which will enable them to reduce the number of legacy systems related to awards management from 17 to 2. These legacy systems had required staff to enter the same data in different systems and manually complete certification work tasks. By automating many of the manual review and compliance processes, the officials reported that the office saved approximately 650 staff hours in 2017.
Cloud Services Helped Agencies Enhance Their Customer Service
Officials in the Office of the CIO at 13 agencies reported that acquiring cloud services had allowed their agencies to improve system design and usability, which helped to enhance their customer service. For example, VA officials reported that they had deployed a website in the cloud, Access to Care, which included detailed data on the wait times and quality-care metrics at local hospitals. Doing so enabled veterans to be able to make better decisions about their health care options. By acquiring cloud services, VA officials reported that they had developed and deployed the new Access to Care website in approximately 30 days, incorporating information from 130 components of VA’s electronic health records system that were previously available on disparate legacy websites into one website. Further, the officials reported that the new website increased the transparency of health care information for the veteran community, empowered veterans, and promoted competition for health care services.
In addition, a Defense official from the Army’s Total Ammunition Management Information System reported that the office had acquired infrastructure as a service in order to improve the processing and reporting of ammunition requests that the Army receives from users worldwide. Defense staff reported that, previously, they had received complaints from customers regarding system pauses and delays when entering requests for ammunition and generating reports due to legacy infrastructure. Defense officials stated that using infrastructure as a service improved system processing and reporting times—from minutes to seconds—by providing scalable technology resources that can meet worldwide performance demands. As a result, customers can more quickly enter their orders into the system.
Acquiring Cloud Services Strengthened Mission Assurance
Officials in the Office of the CIO at nine agencies reported that acquiring cloud services had allowed them to achieve greater levels of mission assurance by streamlining security resources and improving backup capabilities that were not available previously. For example, officials at Defense’s North American Aerospace Defense Command and U.S. Northern Command reported that they had acquired cloud services for the Situational Awareness Geospatial Enterprise system in order to improve mission assurance and address Defense cybersecurity requirements.
According to the officials, cloud services improved mission assurance by allowing them to more quickly correct problems such as malware and the loss of network connectivity in order to ensure the near continuous availability of data from different access points. In addition, the system required extensive storage and backup capabilities due to the need to ensure the system’s data were available continuously from different access points. The officials reported that the acquisition of cloud services has reduced the costs required to maintain continuous backup and storage capabilities. They added that the system also complies with Defense requirements that investments use an approved cloud service provider. In addition, the officials said acquiring cloud services has provided the capability to scale resources, as needed, to meet demands during special events, such as the State of the Union address and the World Series, which require additional security.
Further, Federal Transit Administration officials reported that migrating two systems to the cloud had allowed the agency to enhance system security by managing access to the systems through a single portal rather than managing access to each system individually. As a result, the officials reported that they were able to shift some systems security management responsibilities to the cloud vendor, which reduced the number of security risks and consolidated the number of security tools used. Further, according to the officials, an additional benefit is that users need to remember only a single password rather than a different password for each system.
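To make the single-portal idea concrete, the minimal sketch below gates access to several back-end systems behind one authentication decision; the user, system names, and access rules are hypothetical, for illustration only.

```python
# Minimal sketch of a single access portal: one sign-on check gates access
# to multiple back-end systems, rather than each system managing its own
# credentials. The names and access rules here are illustrative assumptions.
AUTHORIZED_SYSTEMS = {
    "alice": {"grants-tracking", "transit-reporting"},
}

def portal_access(user: str, authenticated: bool, system: str) -> bool:
    """One authentication decision covers every system behind the portal."""
    if not authenticated:          # single credential check at the portal
        return False
    return system in AUTHORIZED_SYSTEMS.get(user, set())

print(portal_access("alice", True, "grants-tracking"))   # True
print(portal_access("alice", True, "payroll"))           # False
```

Because every system sits behind the same check, there is one credential to manage and one place to revoke access, which is the consolidation the officials described.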
Separately, from information provided by the 15 selected agencies, we identified nine cloud computing investments that illustrate the variety of benefits these agencies realized from the acquisition of cloud services. Table 7 identifies these investments; additional details regarding the nature and sources of the benefits achieved from them are profiled in appendix VI.
Selected Agencies Identified Key Practices That Enabled Cloud Services
In addition to the examples of significant benefits reported from acquiring cloud services, officials at the 15 agencies reported that six key practices had enabled them to realize these benefits through the successful implementation of cloud services. For example, 12 agencies reported that they implemented new governance planning activities, policies, or processes in order to help ensure that cloud acquisition efforts were managed enterprisewide. In addition, 12 agencies reported that they had reorganized the management of agency IT resources to help increase operational efficiency. Further, six agencies reported that having executive leadership involved in driving the acquisition or sponsoring efforts to use cloud services was critical for the successful adoption of cloud services across the agency. Table 8 lists these key practices, ranked by the number of agencies that reported each practice.
In addition, many of the six key practice areas identified by agencies are consistent with requirements outlined in FITARA and with recommendations from our prior work to address long-standing issues with the management of IT acquisitions and operations. Specifically, we have previously noted the importance of strengthening the authority of CIOs, improving the portfolio review process and the transparency of major investment data, ensuring the use of incremental development methodologies, and updating human capital plans.
Selected Agencies Implemented New Policies and Processes to Guide Governance of Cloud Acquisition
Officials in the Office of the CIO at 12 agencies reported that they had implemented new governance activities or drafted new policies and processes to help ensure the successful implementation of cloud services. For example, SSA officials reported that they had drafted several new policies to simplify the management of cloud resources and provide better oversight for new cloud service acquisitions. Specifically, the officials reported that the new policies established a request-and-approval governance process to address which staff can initiate cloud solutions and what types of projects can receive funding.
In addition, Energy officials reported that they had formalized policies and governance processes on how to perform cloud migrations, including establishing a documented, repeatable process to help offices migrate to cloud services more efficiently. Further, Treasury officials reported that they had focused on strengthening cloud-governance activities, including planning and identifying requirements, because changes to enterprise cloud systems may impact multiple programs. As a result, these officials reported that they had implemented a cross-cutting steering committee to help better plan and assess the impact of changes to enterprise cloud systems that support multiple programs.
Selected Agencies Modified Procurement and Contract Oversight Practices to Strengthen Cloud Acquisition Processes
Officials in the Office of the CIO at 12 agencies reported that they had modified their procurement and contract oversight practices in order to accommodate the differences in how cloud services are acquired from traditional acquisitions. For example, Commerce officials emphasized the importance of developing standardized requirements to ensure that when bureaus award contracts, they use standardized language. The officials stated that these requirements help to ensure that contracts with cloud service providers are comprehensive, legally adequate, and include specific details regarding all of the activities the agency will need the contractor to perform.
In addition, officials of the U.S. Trustee Program at Justice reported that they had used existing project and financial management resources to monitor the use of cloud services and associated spending to help control costs and ensure the accuracy of cloud vendor charges. For example, the officials reported that the program used the cloud vendor's administrative and business intelligence tools to create reports to verify cloud charges. Also, Labor officials reported that they had worked with the agency's acquisition team to ensure the agency is billed only for its actual cloud usage. This required the agency to transition from a fixed-price contract model to a time-and-materials contract model, which included a clause that limited the maximum costs the agency would have to pay for cloud services.
Selected Agencies Addressed Changes in Workforce Needs for Managing Cloud Services
Officials in the Office of the CIO at 12 agencies reported that they had taken several steps to address changes in workforce needs for managing cloud services. Specifically, these officials reported that they had conducted inventories of staff skills, transitioned staff into new roles, and ensured that staff acquired training. For example, VA officials reported that they had conducted a staff skills inventory to identify future IT workforce training needs and transition staff from managing legacy technologies to managing cloud services. In addition, Energy officials reported that they were preparing staff to transition from managing data center resources to managing the agency's service level agreements with cloud providers. The officials reported that moving to cloud services allowed staff to spend more time improving existing applications and identifying other efforts to innovate IT services rather than managing on-premises infrastructure. Further, the Defense official who leads cloud computing in the Navy's Office of the CIO reported that the Navy had developed an enterprise cloud working group consisting of key members from major offices and security groups to help determine the appropriate training and certification needs for staff and to conduct training seminars. In addition, SBA officials said that the agency took advantage of a contract option offered by the cloud vendor to acquire free cloud classes and training, thereby avoiding the need to spend approximately $380,000 on training.
Selected Agencies Reorganized the Management of IT Resources to Increase Operational Efficiency
Office of the CIO officials at 12 agencies reported that acquiring cloud services had led them to change how they organized and managed the agency's IT resources. For example, GSA officials reported that they had transitioned from letting individual components within the agency acquire their own applications to using an enterprise approach in which software as a service applications are acquired and made available to the entire agency. As a result, the officials reported that this approach allows the agency to further optimize its software purchases and improve its monitoring and tracking of software application usage enterprise-wide.
In addition, officials at the Agricultural Research Service reported that acquiring software as a service had promoted opportunities to share customizations of the acquired software between Agriculture’s components rather than having each component develop a separate customization. Specifically, these officials reported that they were able to take a software feature developed by another Agriculture component and implement it for their customer service portal, rather than having to develop it themselves.
Lastly, Education officials reported that they were in the process of beginning an assessment to consolidate the agency's existing cloud services across federal and commercial environments. The officials said that they hoped to reduce the number of commercial cloud providers used from 25 to 8, and to consolidate two of the agency's cloud environments into a single environment within the next 3 years.
Selected Agencies Streamlined Cloud Services to Address Security Needs in a More Efficient Manner
Office of the CIO officials at eight agencies reported that they were able to streamline the management of IT security by leveraging cloud services. For example, SBA officials reported that they used security tools from their cloud vendor in order to meet DHS’s requirements for continuous diagnostics and mitigation and improve the agency’s security posture. The officials reported that they had performed a requirements analysis and found that, compared to acquiring costly hardware solutions to manage this capability internally, their existing cloud vendor provided security capabilities that actually exceeded DHS’s recommended continuous diagnostics and mitigation requirements. As a result, SBA adopted the cloud vendor’s security tools and avoided $300,000 in initial hardware purchases, as well as subsequent hardware technology refreshes every 3 years.
In addition, GSA officials reported that choosing a FedRAMP-approved cloud service provider had expedited the agency’s adoption of cloud services. Specifically, the agency did not have to visit and review each vendor’s facility as part of the vendor approval process, which shortened the time frame needed to approve a system for use. The officials also reported that using cloud services streamlined the deployment process for new systems because using a cloud platform that had previously been granted the authority to operate allowed the agency to avoid undertaking a separate authorization process, which saved time and resources.
Selected Agencies Engaged Executive Leadership Support during Cloud Acquisition to Help Ensure Successful Implementation
Officials in the Office of the CIO at six agencies reported that having executive leadership involved in driving acquisitions or sponsoring efforts to use cloud services was critical to the successful adoption of cloud services across the agency. For example, SBA officials reported that their agency CIO’s commitment to acquiring cloud services and the deputy CIO’s attendance at daily cloud meetings were critical for the successful adoption of cloud services at the agency. Similarly, Energy officials reported that the agency had established a team with representatives from offices of the CIO, chief financial officer, and chief acquisition officer, to coordinate IT expenditures, including cloud investments, across the agency.
Further, Defense officials from the U.S. Transportation Command reported that establishing a cloud center of excellence team that reported directly to the Commander of U.S. Transportation Command had empowered the team to engage directly with users to help break down barriers that might impact the migration to cloud services. In addition, the officials said that the team helped streamline the processes—specifically, the design, contracting, funding, transition planning, and implementation processes—necessary for the successful migration of all of the command’s systems to the cloud.
Conclusions
Since 2011, when OMB began requiring agencies to adopt a Cloud First strategy, agencies have made progress in implementing cloud services and, in doing so, have saved hundreds of millions of dollars and realized notable benefits. However, six agencies still lack guidance for assessing IT investments for cloud services and 12 agencies still have not performed assessments for a number of their IT investments. In addition, none of the agencies in our review has sufficient mechanisms or approaches to track and report the savings data associated with these cloud initiatives.
Although agencies have reported spending $1 billion or more on cloud services in just the past 2 years, and identified hundreds of millions of dollars in related savings, these figures are not consistently reported. To its credit, beginning in fiscal year 2019, OMB will require more accuracy and granularity in how agencies report cloud investment spending data. However, there has not been a corresponding effort to improve the reporting of cloud savings data. An important aspect of the success of key OMB cloud initiatives, such as Cloud Smart and the associated drive for greater agency adoption of cloud services, will be the ability of key stakeholders to access complete information on the savings that agencies are achieving under these efforts.
Recommendations for Executive Action
We are making a total of 35 recommendations—1 recommendation to OMB and 34 recommendations to the 16 agencies in our review.
The Director of the Office of Management and Budget should require agencies to explicitly report, at least on a quarterly basis, the savings and cost avoidance associated with cloud computing investments. (Recommendation 1)
The Secretary of Agriculture should ensure that the CIO of Agriculture completes an assessment of all IT investments for suitability for migration to a cloud computing service, in accordance with OMB guidance. (Recommendation 2)
The Secretary of Agriculture should ensure that the CIO of Agriculture establishes a consistent and repeatable mechanism to track savings and cost avoidances from the migration and deployment of cloud services. (Recommendation 3)
The Secretary of Commerce should ensure that the CIO of Commerce establishes a consistent and repeatable mechanism to track savings and cost avoidances from the migration and deployment of cloud services. (Recommendation 4)
The Secretary of Defense should ensure that the CIO of Defense completes an assessment of all IT investments for suitability for migration to a cloud computing service, in accordance with OMB guidance. (Recommendation 5)
The Secretary of Defense should ensure that the CIO of Defense establishes a consistent and repeatable mechanism to track savings and cost avoidances from the migration and deployment of cloud services. (Recommendation 6)
The Secretary of Education should ensure that the CIO of Education establishes guidance on assessing new and existing IT investments for suitability for cloud computing services, in accordance with OMB guidance. (Recommendation 7)
The Secretary of Education should ensure that the CIO of Education completes an assessment of all IT investments for suitability for migration to a cloud computing service, in accordance with OMB guidance. (Recommendation 8)
The Secretary of Education should ensure that the CIO of Education establishes a consistent and repeatable mechanism to track savings and cost avoidances from the migration and deployment of cloud services. (Recommendation 9)
The Secretary of Energy should ensure that the CIO of Energy updates the agency’s guidance on assessing IT investments for suitability for cloud computing services to include a requirement to assess new acquisitions for these services. (Recommendation 10)
The Secretary of Energy should ensure that the CIO of Energy completes an assessment of all IT investments for suitability for migration to a cloud computing service, in accordance with OMB guidance. (Recommendation 11)
The Secretary of Energy should ensure that the CIO of Energy establishes a consistent and repeatable mechanism to track savings and cost avoidances from the migration and deployment of cloud services. (Recommendation 12)
The Secretary of Health and Human Services should ensure that the CIO of HHS establishes guidance on assessing new and existing IT investments for suitability for cloud computing services, in accordance with OMB guidance. (Recommendation 13)
The Secretary of Health and Human Services should ensure that the CIO of HHS completes an assessment of all IT investments for suitability for migration to a cloud computing service, in accordance with OMB guidance. (Recommendation 14)
The Secretary of Health and Human Services should ensure that the CIO of HHS establishes a consistent and repeatable mechanism to track savings and cost avoidances from the migration and deployment of cloud services. (Recommendation 15)
The Secretary of Homeland Security should ensure that the CIO of DHS completes an assessment of all IT investments for suitability for migration to a cloud computing service, in accordance with OMB guidance. (Recommendation 16)
The Secretary of Homeland Security should ensure that the CIO of DHS establishes a consistent and repeatable mechanism to track savings and cost avoidances from the migration and deployment of cloud services. (Recommendation 17)
The Attorney General of the United States should ensure that the CIO of Justice completes an assessment of all IT investments for suitability for migration to a cloud computing service, in accordance with OMB guidance. (Recommendation 18)
The Attorney General of the United States should ensure that the CIO of Justice establishes a consistent and repeatable mechanism to track savings and cost avoidances from the migration and deployment of cloud services. (Recommendation 19)
The Secretary of Labor should ensure that the CIO of Labor updates the agency’s guidance on assessing IT investments for suitability for cloud computing services to include a requirement to assess existing investments for these services. (Recommendation 20)
The Secretary of Labor should ensure that the CIO of Labor completes an assessment of all IT investments for suitability for migration to a cloud computing service, in accordance with OMB guidance. (Recommendation 21)
The Secretary of Labor should ensure that the CIO of Labor establishes a consistent and repeatable mechanism to track savings and cost avoidances from the migration and deployment of cloud services. (Recommendation 22)
The Secretary of State should ensure that the CIO of State establishes a consistent and repeatable mechanism to track savings and cost avoidances from the migration and deployment of cloud services. (Recommendation 23)
The Secretary of the Treasury should ensure that the CIO of Treasury completes an assessment of all IT investments for suitability for migration to a cloud computing service, in accordance with OMB guidance. (Recommendation 24)
The Secretary of the Treasury should ensure that the CIO of Treasury establishes a consistent and repeatable mechanism to track savings and cost avoidances from the migration and deployment of cloud services. (Recommendation 25)
The Secretary of Transportation should ensure that the CIO of Transportation establishes guidance on assessing new and existing IT investments for suitability for cloud computing services. (Recommendation 26)
The Secretary of Transportation should ensure that the CIO of Transportation completes an assessment of all IT investments for suitability for migration to a cloud computing service, in accordance with OMB guidance. (Recommendation 27)
The Secretary of Transportation should ensure that the CIO of Transportation establishes a consistent and repeatable mechanism to track savings and cost avoidances from the migration and deployment of cloud services. (Recommendation 28)
The Secretary of Veterans Affairs should ensure that the CIO of VA completes an assessment of all IT investments for suitability for migration to a cloud computing service, in accordance with OMB guidance. (Recommendation 29)
The Secretary of Veterans Affairs should ensure that the CIO of VA establishes a consistent and repeatable mechanism to track savings and cost avoidances from the migration and deployment of cloud services. (Recommendation 30)
The Administrator of General Services should ensure that the CIO of GSA establishes a consistent and repeatable mechanism to track savings and cost avoidances from the migration and deployment of cloud services. (Recommendation 31)
The Administrator of the Small Business Administration should ensure that the CIO of SBA establishes a consistent and repeatable mechanism to track savings and cost avoidances from the migration and deployment of cloud services. (Recommendation 32)
The Commissioner of the Social Security Administration should ensure that the CIO of SSA updates the agency’s guidance on assessing IT investments for suitability for cloud computing services to include a requirement to assess existing investments for these services. (Recommendation 33)
The Commissioner of the Social Security Administration should ensure that the CIO of SSA completes an assessment of all IT investments for suitability for migration to a cloud computing service, in accordance with OMB guidance. (Recommendation 34)
The Commissioner of the Social Security Administration should ensure that the CIO of SSA establishes a consistent and repeatable mechanism to track savings and cost avoidances from the migration and deployment of cloud services. (Recommendation 35)
Agency Comments and Our Evaluation
We provided a draft of this report to OMB and the 16 agencies for their review and comment. In response, 14 agencies provided comments stating that they agreed with our recommendations; one agency stated that it agreed with one recommendation but disagreed with another; and two agencies did not state whether they agreed or disagreed with the recommendations. In addition, multiple agencies provided technical comments, which we incorporated into the report, as appropriate.
The following 14 agencies agreed with our recommendations. In written comments, Commerce, Education, Energy, HHS, DHS, State, Transportation, VA, and GSA stated that they agreed with the recommendations directed to them. In addition, each of these agencies indicated that it planned, or already had begun taking actions, to address the recommendations. The agencies' comments are reprinted in appendixes VII through XV, respectively.
In emails received from Agriculture’s Director of Strategic Planning, Policy, Egovernment and Audits in the Office of the CIO on February 11, 2019, and from Justice’s Audit Liaison Specialist in the Internal Review and Evaluation Office on February 15, 2019, both of these departments stated that they agreed with our recommendations.
In written comments from Labor, the department stated that it agreed with our recommendations. The department also described actions taken to address our recommendation that it update its guidance on assessing IT investments for suitability for cloud computing services to include a requirement to assess existing investments for these services. Specifically, Labor stated that it had taken steps to ensure that its agencies included an assessment of cloud computing suitability as they moved forward with their investments and that this process had been integrated into Labor’s budget process. We followed up with the department and obtained a copy of Labor’s guidance.
However, in examining this guidance, we found it to be the same as what Labor had previously provided to us during the course of our audit. Further, as we mentioned earlier regarding our analysis of the department's guidance for assessing investments for suitability for cloud services, Labor had required existing investments that were already using cloud services to migrate to the department's new consolidated cloud environment; however, it did not require existing systems not using cloud services to be assessed for these services. Because we did not receive any additional information from the department supporting its actions to address our recommendation prior to this report's issuance, we believe our recommendation to Labor is still appropriate. The department's comments are reprinted in appendix XVI.
In written comments from SBA, the agency agreed with our recommendation. Also, in additional comments sent via email on March 11, 2019, by the agency's GAO liaison in the Office of Congressional and Legislative Affairs, SBA provided updated information regarding the benefits that the agency had realized from using cloud services for its system that was profiled in appendix VI of the draft report. Specifically, SBA officials in charge of the system provided a revised list of realized benefits from the cloud services. However, the officials did not provide any supporting documentation regarding the revised benefits; therefore, we were not able to validate the revised list of benefits prior to the issuance of this report. As a result, we removed the profile from the report in order to be consistent with our methodology for reporting examples of systems that had realized benefits from the acquisition of cloud services, and notified SBA of this decision. SBA's comments are reprinted in appendix XVII.
In written comments from SSA, the agency agreed with our recommendations and provided updated information regarding the benefits realized from using cloud services for the agency's system that was profiled in appendix VI of the draft report. However, during further discussion with SSA officials in charge of the system on January 17, 2019, the officials confirmed that the agency had not yet identified all of the potential benefits related to the use of cloud services as a result of a change in their vendor solution. Thus, we removed the profile from our report in order to be consistent with our methodology for reporting examples of systems that had realized benefits from the acquisition of cloud services, and notified SSA of this decision. SSA's comments are reprinted in appendix XVIII.
One agency agreed with one recommendation and disagreed with a second recommendation:
Defense provided written comments in which it agreed with our recommendation to complete an assessment of all IT investments for suitability for migration to a cloud computing service. However, the agency did not agree with our recommendation that it establish a mechanism to track savings and cost avoidances from the migration and deployment of cloud services. Specifically, Defense stated that it did not agree with our recommendation because there was no standard, consistent way to capture such savings or cost avoidance. The department stated that it would work with OMB on whether or how to collect such information, and, if practical, report such information in accordance with OMB guidance.
However, as we noted in our report, for the past 6 years, OMB has required agencies to report on savings and cost avoidances from implementing IT reform initiatives, including savings realized from the migration to cloud services. Tracking savings and cost avoidances for cloud initiatives is important in order to ensure that Defense is effectively managing and overseeing its cloud initiatives. In addition, it is essential that OMB and Congress have sufficient data to see the results of Defense’s cloud initiatives and understand whether the department is achieving savings using cloud services. Consequently, we believe our recommendation to track savings and cost avoidances from the migration and deployment of cloud services is still warranted. The department’s comments are reprinted in appendix XIX.
Finally, we received comments via email from Treasury's Supervisory IT Specialist in the Office of the CIO on February 22, 2019, and from OMB's liaison to GAO on February 25, 2019. In these comments, the two agencies did not state whether they agreed or disagreed with the recommendations that we directed to them.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Director of the Office of Management and Budget, the Secretaries and agency heads of the departments and agencies in this report, and other interested parties. This report will also be available at no charge on our website at http://www.gao.gov.
If you or your staffs have any questions on matters discussed in this report, please contact me at (202) 512-4456 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix XX.
Appendix I: Objectives, Scope, and Methodology
Our objectives for this engagement were to (1) evaluate selected agencies’ progress in implementing cloud services, (2) review the extent to which selected agencies have increased spending on cloud services and achieved cost savings or avoidances, and (3) describe examples of cloud investments with significant or notable benefits that have been identified by selected agencies.
For this review, we selected a sample of agencies based on the size of their total information technology (IT) budget for fiscal year 2017. Specifically, we categorized each of the 24 Chief Financial Officers Act agencies by the size of its IT budget: large (more than $3 billion), medium ($1 billion to $3 billion), and small (less than $1 billion), as reported on the Office of Management and Budget's (OMB) IT Dashboard. From these categories, we selected 16 agencies for our review. These agencies were the Department of Agriculture (Agriculture), Department of Commerce (Commerce), Department of Defense (Defense), Department of Education (Education), Department of Energy (Energy), Department of Health and Human Services (HHS), Department of Homeland Security (DHS), Department of Justice (Justice), Department of Labor (Labor), Department of State (State), Department of Transportation (Transportation), Department of the Treasury (Treasury), Department of Veterans Affairs (VA), General Services Administration (GSA), Small Business Administration (SBA), and Social Security Administration (SSA).
OMB, IT Dashboard, 2017 (https://itdashboard.gov).
To address the first objective, we obtained and analyzed IT Dashboard data related to the 16 selected agencies' use of cloud services for fiscal years 2016 through 2018, and their projected use in fiscal year 2019. We chose to begin with fiscal year 2016 because we had previously reported on federal agencies' use of cloud services through fiscal year 2014, and fiscal year 2015 data was not available. Specifically, the Dashboard includes agency responses to a cloud-related question from OMB's capital planning guidance. The question asks whether a cloud alternative was evaluated for the investment, or for components or systems within the investment. We reviewed agency responses that were submitted for fiscal years 2016 through 2019 as part of the annual budget submission process in order to determine whether a specific investment was using cloud services.
During this 4-year period, OMB made changes to the response options that agencies were required to choose from to indicate whether an investment was using cloud services. For OMB's capital planning guidance for fiscal years 2016 and 2017, we selected responses that indicated that the agency "had evaluated a cloud alternative and chose a cloud alternative" with a particular cloud deployment model. In addition, for OMB's capital planning guidance for fiscal years 2018 and 2019, we selected responses that indicated the "investment or a portion of the investment is leveraging cloud computing." We then determined the total number of investments using cloud services and calculated the percentage of investments using these services based on the total number of reported investments by each agency for each fiscal year.
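For illustration only, the following minimal sketch (in Python) shows this counting and percentage calculation. The file name, column names, and response strings are hypothetical stand-ins, since the Dashboard's export format and OMB's response wording varied by fiscal year.

    import csv
    from collections import defaultdict

    # Hypothetical response values indicating that an investment uses cloud
    # services; the actual wording differed between OMB's fiscal year
    # 2016-2017 and 2018-2019 capital planning guidance.
    CLOUD_RESPONSES = {
        "Evaluated and chose a cloud alternative",
        "Investment or a portion of the investment is leveraging cloud computing",
    }

    totals = defaultdict(int)  # (agency, fiscal year) -> all reported investments
    cloud = defaultdict(int)   # (agency, fiscal year) -> investments using cloud

    with open("it_dashboard_export.csv", newline="") as f:
        for row in csv.DictReader(f):
            key = (row["agency"], row["fiscal_year"])
            totals[key] += 1
            if row["cloud_response"] in CLOUD_RESPONSES:
                cloud[key] += 1

    for agency, year in sorted(totals):
        used, reported = cloud[(agency, year)], totals[(agency, year)]
        print(f"{agency} FY{year}: {used} of {reported} investments "
              f"({100 * used / reported:.0f} percent) using cloud services")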
To ensure the accuracy and completeness of the selected agencies' data on the use of cloud services, we downloaded these data from the IT Dashboard on October 3, 2017, March 7, 2018, and October 9, 2018. We took this step because agencies may update their data on a quarterly basis throughout the fiscal year. In addition, we presented the results of our analysis to officials in charge of cloud services within the Office of the Chief Information Officer (CIO) at each selected agency. We asked these officials to verify the completeness and accuracy of these data and provide any updates as appropriate. Officials at all 16 agencies confirmed the total number of investments using cloud services for fiscal years 2016 through 2018 and their projected use for fiscal year 2019. Based on these steps, we determined that these data were sufficiently reliable to report on agencies' progress in using cloud services.
In addition, we compared each selected agency’s cloud guidance to OMB’s Cloud First guidance. We interviewed Office of the CIO officials in charge of cloud services at each agency regarding their guidance. In addition, we interviewed OMB staff from the Office of E-Government and Information Technology regarding its guidance. Because of the wide variety of responses and documents we received from the agencies related to their guidance for assessing investments for cloud computing services, we conducted a content analysis of the information in order to determine compliance with OMB’s guidance. In doing so, team members individually reviewed agencies’ responses and documents and assigned them to various categories and subcategories. Team members then compared their categorization schemes, discussed the differences, and reached agreement on the final characterization of compliance with OMB guidance. In cases where agencies provided multiple policies or documents, we followed up to clarify which portions were considered by the agency to support the requirement to assess all investments for cloud services.
In analyzing whether the agencies’ guidance on assessing investments for cloud services met OMB criteria, we assessed whether the guidance clearly identified a requirement for evaluating both new and existing investments for cloud services. Agencies found to not have guidance which clearly defined the assessment process were evaluated as such for one of two reasons: either the agency’s formal guidance did not completely address our assessment criteria or the agency’s guidance had not yet been established or finalized.
In analyzing whether agencies had met OMB’s requirement to evaluate each investment for cloud services, we assessed the number of investments that had completed assessments based on the fiscal year 2019 budget submission. Agencies found not to have met the requirement were evaluated as such if the agency had 10 or more investments that had not yet been evaluated for cloud services. We set this threshold based on a reasonable interpretation of the intent of OMB’s guidance requiring assessments of all investments.
For our second objective, we obtained and analyzed IT Dashboard data related to the 16 agencies' spending on cloud services for fiscal years 2015 through 2018. We chose to begin with fiscal year 2015 because we had previously reported on federal agencies' spending on cloud services through fiscal year 2014. Agencies report actual spending by fiscal year on the IT Dashboard as part of the next fiscal year's reporting. To determine actual cloud spending for each fiscal year, we used agency spending data reported in each subsequent fiscal year (from fiscal years 2017 through 2018), as of October 5, 2018.
In addition, we administered a data collection instrument to each of the 16 selected agencies to obtain and analyze their spending and savings data for fiscal years 2014 through 2018, as well as planned future costs. We requested that these agencies provide spending and savings data broken down by investment, as OMB only requires federal agencies to report total spending by cloud deployment model on the IT Dashboard, and agencies were not required to identify whether any reported savings were cloud-related. This instrument was administered from November 2017 to January 2018.
In the data collection instrument, we asked the selected agencies to complete information on each of their cloud investments, including the title of the application or system leveraging cloud services, the cloud deployment and service models, and the associated cloud spending and net cloud savings or avoidances from fiscal year 2014 through fiscal year 2018 and beyond, as agencies generally submitted data on planned spending for one or more fiscal years beyond 2018. Due to the varied scale of cloud implementation efforts ongoing at these agencies, we asked agencies to provide only those applications, systems, or investments leveraging cloud services with total life-cycle costs of $1 million or more. We also asked agencies to provide spending and savings or cost avoidance figures in whole numbers in order to avoid rounding errors when we calculated the reported figures in millions.
We took the following steps to help ensure the reliability of the data we collected. First, to minimize errors that might occur from respondents interpreting our instrument differently from our intended purpose, we reviewed the data collection instrument with agency officials who would be completing the instrument during meetings in October and November 2017. Second, we reviewed the completed spreadsheets to identify missing data or other errors, and consulted with our data quality expert about these issues as appropriate.
All agencies completed the data collection instrument by May 2018. For those agencies that provided rounded (rather than exact) spending or savings figures, we recalculated the data into whole numbers and confirmed our calculations with the agencies. In addition, one agency broke down its savings data into savings and cost avoidances; we combined these reported figures for each investment and, after consultation with a GAO data subject matter expert, confirmed with all the other agencies in our review that their information on savings also included cost avoidances.
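As a simple illustration of this normalization step (not the actual instrument or data, and with hypothetical investment names and figures), the following sketch converts rounded entries such as "1.2 million" into whole-dollar figures and combines separately reported savings and cost avoidance amounts into a single figure per investment.

    def to_whole_dollars(entry: str) -> int:
        """Convert a rounded entry such as '1.2 million' to whole dollars."""
        value, _, unit = entry.partition(" ")
        scale = {"million": 1_000_000, "thousand": 1_000}.get(unit, 1)
        return round(float(value) * scale)

    # Hypothetical investment records; one agency reported savings and cost
    # avoidance separately, so the two figures are combined per investment.
    investments = [
        {"name": "Investment A", "savings": "1.2 million", "cost_avoidance": "300 thousand"},
        {"name": "Investment B", "savings": "450 thousand", "cost_avoidance": "0"},
    ]

    for inv in investments:
        combined = to_whole_dollars(inv["savings"]) + to_whole_dollars(inv["cost_avoidance"])
        print(f"{inv['name']}: ${combined:,} in combined savings and cost avoidance")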
We also reviewed the associated notes regarding agencies' qualifications of the provided data and followed up with agency officials to clarify the responses as appropriate. These notes included information on whether certain spending or savings data were unavailable, whether certain costs were excluded from the spending information provided to us, or whether there were other qualifications of the provided data.
Lastly, we presented the results of our analysis of IT Dashboard data and the data obtained from the data collection instrument to each of the selected agencies between June and August 2018. We asked the agencies to verify the completeness and accuracy of these data and provide any updates as appropriate. All 16 agencies provided updated information regarding the list of investments using cloud services with life-cycle costs of $1 million or more, and six agencies (Agriculture, Commerce, Justice, Transportation, Treasury, and VA) provided updated information related to spending and savings for these investments, which we have incorporated as appropriate. Based on the measures we took to ensure the reliability of the data provided by the agencies and reported on the IT Dashboard, we determined that the data were sufficiently reliable for the purpose of this report.
For the third objective, we obtained and reviewed available documentation discussing examples of cloud computing investments reported by the selected agencies as having produced notable benefits and key practices that ensure the effort was successful. We also interviewed officials from the Office of the CIO and other components in charge of cloud services regarding these benefits.
In order to develop our list of questions for these meetings, we first conducted research to identify the range of benefits that could be achieved from acquiring cloud services. We reviewed OMB, GSA, and CIO Council cloud guidance; our prior work; and key leading cloud practices from GSA’s Federal Cloud Computing Center of Excellence. Based on this work, we developed a list of seven key areas of benefits: (1) improving efficiency and operations; (2) promoting agility and responsiveness; (3) achieving business growth; (4) reducing cost; (5) meeting regulatory requirements; (6) enhancing customer experience; and (7) ensuring mission outcomes. During meetings with agency officials in the Office of the CIO and other components in charge of cloud services, we asked officials whether they had identified any significant or notable benefits in these seven areas. As these seven areas might not represent all potential benefits, we also asked officials to describe any additional benefits not included in these areas.
In addition, as part of these meetings, we asked officials from the Office of the CIO at each selected agency to identify up to three examples of investments that benefited from the acquisition of cloud services. We asked agencies to exclude examples of email deployments to the cloud to ensure a wider variety of examples of investments with benefits. Fifteen of the 16 agencies in our review identified at least one or more examples of cloud investments that had produced significant or notable benefits, while one agency—HHS—reported that it did not have any such examples because it did not have any completed migration efforts. Because of the open-ended nature of the 15 agencies’ responses to our questions, we conducted a content analysis of the information we received in order to identify and summarize the benefits and key practices that were identified by the 15 agencies. We reviewed the benefits and key practices reported by the agencies and grouped them using the seven key benefit areas that our prior research had identified. We discussed the groupings of the reported benefits, and reached agreement on these categories. We grouped the benefit categories together based on commonalities such as purpose, impact, and capabilities, and summarized the benefits reported. Based on discussion, we confirmed a list of benefits and key practices and totaled the number of agencies that reported each of these.
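A minimal sketch of this tally follows, for illustration only; it assumes a hypothetical coding of each agency's reported benefits against the seven benefit areas (the actual analysis was a manual content analysis rather than an automated one, and the agency names and codings shown are invented).

    from collections import Counter

    # Hypothetical coding results: each agency is mapped to the benefit areas
    # its officials reported (abbreviated labels for the seven areas).
    coded_responses = {
        "Agency A": ["efficiency", "agility", "cost"],
        "Agency B": ["cost", "regulatory requirements"],
        "Agency C": ["customer experience", "agility", "mission outcomes"],
        # ...remaining agencies would follow the same pattern
    }

    tally = Counter(area for areas in coded_responses.values() for area in areas)
    for area, count in tally.most_common():
        print(f"{area}: reported by {count} agencies")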
In addition, to select systems or investments to profile, we reviewed the 34 examples provided by the 15 agencies and narrowed the list to 11 examples. We selected these examples using the following factors: the type of system, whether the system supported the mission or business operations of the agency or component, and the availability of information related to the benefits achieved from acquiring cloud services. In doing so, we sought to have a mix of systems that provided mission critical services to the agency or the public, illustrated a range of cloud computing benefits, and included detailed information on the benefits achieved from using cloud services.
In technical comments received on a draft of this report, two agencies provided new information regarding the use of cloud services for their systems that were profiled in appendix VI of the draft report. Based on the additional information provided by the two agencies, we determined there was no longer sufficient detail regarding what benefits were realized for these systems. Therefore, we removed the two agencies’ profiled examples from the report in order to be consistent with our methodology for reporting examples of systems that had realized benefits from the acquisition of cloud services. We then notified both agencies of this decision.
We conducted this performance audit from September 2017 to April 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Status of Cloud-related Recommendations in the 2017 American Technology Council Report
In May 2017, the Administration established the American Technology Council to help transform and modernize federal IT and how the government uses and delivers digital services. Subsequently, in December 2017, the American Technology Council issued a Report to the President on Federal IT Modernization and made eight cloud computing-related recommendations that are relevant to the focus of our review. Table 9 outlines the cloud-related recommendations contained in the report and the current status of these recommendations as of July 2018, according to Office of Management and Budget (OMB) staff from the Office of E-Government and Information Technology.
Appendix III: Selected Agency Cloud Investments for Fiscal Years 2016 through 2019
The Office of Management and Budget (OMB) requires federal agencies to evaluate each investment, or components or systems within the investment, for cloud services, regardless of the overall life cycle stage of the investment. Agencies are required to report the status of each investment’s evaluation as part of the annual budget submission, as noted in OMB’s annual capital planning guidance. Table 10 lists the total number of investments using cloud services and the total number of all IT investments for fiscal years 2016 through 2019 for 16 selected agencies, as reported on the IT Dashboard as of October 9, 2018.
Appendix IV: Selected Agency Cloud Spending for Fiscal Years 2015 through 2018
The Office of Management and Budget (OMB) requires federal agencies to report total cloud spending based on the cloud deployment model as part of the annual budget submission, as noted in OMB’s annual capital planning guidance for fiscal years 2015 through 2018. Table 11 lists the total agency-reported cloud spending and total IT spending for fiscal years 2015 through 2018 for the 16 agencies in our review, as reported on the IT Dashboard as of October 5, 2018.
Appendix V: Description of Cloud Computing Investments Provided by Selected Agencies for Fiscal Year 2018
Sixteen selected agencies provided us with information on their investments related to spending on cloud services of $1 million or more in life-cycle costs. Table 12 identifies the investments for fiscal year 2018 that these agencies submitted to GAO as of October 2018. This list includes the name of the investment, the cloud deployment model, and the cloud service model.
Appendix VI: Profiles of Selected Cloud Computing Acquisitions
The following nine cloud investment profiles illustrate the variety of benefits that the selected agencies in our review had realized from the acquisition of cloud services. These profiles describe the cloud investment, costs, key benefits, and the savings or avoidances associated with implementation of cloud services. In addition, the profiles detail how, among other things, the acquisition of cloud services enabled the agency to overcome previous challenges with legacy systems and acquire more cost-effective, efficient, and responsive IT resources in order to meet mission needs.
Treasury’s Cloud Acquisition Supports Enterprise Resource Planning
Each year, the Department of the Treasury’s (Treasury) Bureau of Engraving and Printing prints billions of dollars—referred to as Federal Reserve notes—for delivery to the Federal Reserve System. According to the Chief of the Bureau of Engraving and Printing’s Office of Enterprise Solutions, in October 2012, the bureau’s CIO deployed an enterprise resource planning system to the cloud to improve efficiency and operations, and enhance the availability, security, and performance of its systems that manage the daily business activities of the bureau.
Previously, the bureau had used 16 separate IT legacy systems that were facing technological obsolescence and required heavy customization using old programming languages. Officials in the bureau reported that they spent considerable time updating and maintaining the hardware and software for these systems to minimize downtime, which led staff to work overtime and decreased customer satisfaction with the systems. In addition, the legacy systems were not integrated, and certain tasks were performed manually, including data entry, aggregation, and product quality checks. Further, the legacy systems lacked robust data validation features; thus, database administrators had to spend time making corrections to data submitted by users.
By moving to the cloud, the bureau replaced the 16 systems with software as a service that required no customization. As a result, officials in the bureau reported that the bureau was able to significantly improve its operations and decision making. Further, officials stated that the bureau's use of software as a service enabled it to procure cloud services that helped ensure the new system had increased availability, performance, and security to prevent delays and downtime in operations.
The bureau also implemented features to improve data entry to reduce user errors. Figure 3 provides a summary of Treasury’s cloud acquisition.
Acquiring software as a service improved operations and production decisions. According to Bureau of Engraving and Printing officials, by consolidating the bureau's 16 legacy systems into a single system in the cloud, the bureau made significant improvements to its operations through automation, improved data quality, and increased availability, performance, and security. Specifically, acquiring software as a service enabled the bureau to purchase capabilities that its previous systems did not have. Doing so eliminated some manual data entry and improved the bureau's ability to, among other things, automate production decisions, which allowed users to focus on more critical tasks.
For example, bureau staff reported that now they enter data from monthly Federal Reserve Bank orders into the Manufacturing Support Suite to determine the denominations and quantities of currency to produce, along with what banks will receive it. In addition, the new system uses current production times and data to determine when to replenish existing inventory and supplies, such as ink and paper, in order to prevent operational delays and downtime. Furthermore, staff stated that they now make compliance decisions using automated alerts and triggers, which help to prevent the release of products that do not meet bureau standards.
According to bureau officials, the bureau has also improved data quality and reduced the amount of time that database administrators have to spend correcting errors. For example, when users enter data into the system, real-time data validation checks prevent common errors and prompt users to make corrections before submission. In addition, staff stated that the bureau implemented bureau-specific data checks, including which accounts could be associated with item categories and cost centers, which has reduced user errors and improved the reliability of the data.
Finally, the bureau has improved system availability, security, and performance by acquiring software as a service. For example, bureau officials stated that they selected a FedRAMP-approved provider and established service level agreements with the provider to help ensure system availability, security, and performance, including disaster recovery capabilities that were not available for the legacy systems. In addition, bureau staff said that they no longer need to update and maintain IT software and hardware, which has saved time and resources, and decreased system downtime.
Transportation’s Cloud Acquisition Stores Public Transit Data
The Department of Transportation’s (Transportation) Federal Transit Administration provides financial and technical assistance to local public transit systems, including buses, subways, light rail, commuter rail, trolleys and ferries. According to the Federal Transit Administration IT Director, in October 2014, the National Transit Database was migrated to the cloud in order to improve customer experience, mission assurance, agility and responsiveness. Previously, the legacy database had several challenges, including the use of obsolete technology, poor usability, and problems with data accuracy. In addition, developing new functionality for the legacy system was a lengthy process, which decreased the ability of developers to respond to other user needs.
By transitioning to the cloud, the Federal Transit Administration established a centralized access portal for users, which consolidated systems, eliminated the need for external users to remember multiple passwords, and added a single sign-on feature for internal users. Staff in the Federal Transit Administration also reported that they improved the database's user interface by implementing improved system validation functionality for transit data. In addition, the cloud provided software developers with tools to develop functionality more quickly to help improve the database's responsiveness to user needs. Figure 4 provides a summary of Transportation's cloud acquisition.
Automating validation features improved customer experience. According to Federal Transit Administration officials, by moving to the cloud, developers established automated validation features which use historical data to identify outliers and prevent potential user data entry errors. Officials reported that analysts previously performed manual data validation to ensure the accuracy of customer-entered data. The new cloud version of the National Transit Database uses historical data to identify errors and leverages cross-form data validation for the current reporting year, which has reduced the time it takes to validate the data.
Faster development methods improved the responsiveness to user needs. According to Federal Transit Administration officials, with the deployment to the cloud, the agency adopted faster development processes, which led to more frequent releases of functionality. For example, the database’s developers regularly receive requests from transit customers for enhancements that would traditionally take longer to implement in the prior legacy environment. By leveraging the cloud framework and improved Agile development procedures, officials reported that developers can now engage users earlier to make adjustments based on their feedback, thereby focusing more directly on meeting business needs.
GSA’s Cloud Acquisition Enhances Federal Data Analytics
The General Services Administration (GSA) helps federal agencies build and acquire office space, products, and other workspace services, and oversees the preservation of historic federal properties. According to GSA's Chief Data Officer, in 2015, GSA began developing an enterprise platform pilot program, Data to Decisions, in order to improve the agility, responsiveness, efficiency, and operations of the agency's data analytics capabilities. Previously, GSA's data analytics operations had redundancy and overlap, including similar contracts and data sources, which negatively affected data sharing efforts across the agency.
Subsequently, in October 2015, GSA’s CIO, Chief Data Officer, and Chief Technology Officer moved the program to the cloud while consolidating existing contracts to create a centralized web portal. GSA officials reported that the new portal provides new data analytics capabilities for staff to use in generating analyses to advise decision makers at the agency and across other federal agencies, while also saving staff time in producing these analyses. For example, the centralized web portal allows the agency’s 400 data practitioners to, among other things, build data models, understand business operations through analytics and visualizations, and publish dashboards and reporting. Figure 5 provides a summary of GSA’s cloud acquisition.
Providing analytical capabilities and tools to the federal government improved the management of resources. According to GSA officials in the Office of the CIO, cloud deployment has enabled the sharing of data and other analytical tools across the federal government to help agencies better manage their resources and create efficiencies in data management. For example, previously, agencies did not have access to detailed data regarding agency-owned and GSA-managed property in their asset portfolios. By moving to the cloud, GSA officials reported that they developed two tools called the Real Property Management Tool and the Asset Consolidation Tool. These tools were deployed to between 30 and 40 federal agencies, which enabled these agencies to identify potential opportunities to consolidate their building properties or co-locate office spaces to help save resources. Specifically, the tools provided dashboards that showed expiring leases and occupancy agreements, as well as excess and underutilized space. Further, the program provided data to federal agencies that were not previously available. By doing so, officials said that smaller agencies did not have to invest in their own data analytics capabilities or acquire additional staff resources for data analytics.
Flexible and scalable technology addressed an increased demand for data services. According to GSA officials in the Office of the CIO, as the program has expanded its data analytics capabilities, program usage has grown over time. In particular, in 2018, of the program’s 7,200 users, more than 80 percent were from federal agencies other than GSA. Cloud deployment has allowed GSA to easily scale resources to manage changes in user traffic and enabled agency personnel to focus on the mission rather than managing a data center to respond to these changes in demand. For example, in 2017, GSA officials said that they sent out a notice to approximately 1 million federal employees who had completed its annual tenant satisfaction survey notifying them that the survey’s results were available. As a result, several thousand users tried to access the report on the program’s portal, affecting the program’s operations. Officials said that the agency was able to scale up the portal’s resources and capabilities to handle the demand and then scale the resources back once user traffic returned to normal levels.
VA’s Cloud Acquisition Improves Veteran Benefits and Services
The Department of Veterans Affairs (VA), among other duties, administers a variety of benefits and services that provide financial and other forms of assistance to veterans, their dependents, and survivors. According to the Deputy Assistant Secretary for Enterprise Program Management, in March 2016, the VA CIO deployed the Vets.gov web portal to the cloud in order to improve veterans' customer experience and scale resources to meet demand. Previously, VA officials reported that the agency had experienced challenges with its legacy websites. Specifically, the websites were not designed using federal government web standards, including browser compatibility and accommodations for the needs of individuals with disabilities. In addition, the websites required users to remember several sets of login information to access many features on approximately 500 websites.
By moving to the cloud, VA officials stated that the agency has been able to better address veterans' needs by consolidating access to over 500 of the agency's websites for benefits and services. The new, easier-to-use, mobile-friendly web portal requires only one login for all 500 websites, and incorporates features for users with disabilities, such as blind veterans.
Further, the program was able to scale up the portal’s resources to meet the increased demand for online benefits and services, while adopting a design approach that better incorporated the needs of veterans and delivered functionality more quickly. In November 2018, the Vets.gov cloud platform became the building block for the agency’s new homepage at VA.gov. Figure 6 provides a summary of VA’s cloud acquisition.
Consolidating website access to benefits and services and incorporating veterans' feedback improved customer service to veterans and reduced costs. According to VA officials in the Office of the CIO, by moving to the cloud, VA has worked to improve veterans' access to benefits and services through its websites in several key areas. For example, Vets.gov is intended to be mobile-friendly and work on any computing device with a compliant web browser, avoiding the need to install separate software to apply for benefits. In addition, officials stated that the agency intends the portal to make it easier for veterans to search for services. For instance, VA had previously developed an application to help veterans schedule medical appointments, but VA officials reported that veterans could not easily locate the application after searching across multiple VA websites.
In addition, VA officials stated that the agency was also able to reduce costs because, in moving to the cloud, Vets.gov cost the agency 85 percent less than it would have cost to build a traditionally hosted service with the same features. VA also retired a legacy application, which saved an estimated $1 million in annual contract costs.
Scalable technology and a faster veteran-centered development approach increased agility and responsiveness. According to VA officials in the Office of the CIO, moving to the cloud allowed VA to acquire more flexible and scalable technologies in order to scale resources up and down to meet demand, while incorporating a faster, more user-friendly design approach. For example, after its launch, officials said that Vets.gov received a spike in the number of veterans that chose to submit online applications for healthcare, which the agency was able to handle by scaling up resources to meet the spike in demand.
In addition, VA officials reported that the agency adopted a design approach in the cloud that, among other things, allowed it to adopt Agile methods to more quickly deliver releases. For example, based on feedback, VA incorporated mobile friendly design features—40 percent of Vets.gov users access benefits and services through a mobile device. Officials said that the agency has made efforts to focus on the needs of veterans first by using an iterative design approach that incorporates user feedback into the design process so that no features in the portal are deployed without a final usability test with a veteran. VA officials also reported that using the cloud has allowed the agency to deploy new features as soon as they are ready, in small incremental daily releases. Further, the officials noted that VA developers have worked with veterans on the portal’s healthcare claims status tracker. Specifically, veterans can access the status of their healthcare claims that may be experiencing a backlog in processing, along with an estimated decision date. Lastly, officials reported that by incorporating an online application, Vets.gov reduced the number of paper-based healthcare applications submitted by veterans. In fiscal year 2018, users submitted over 750,000 digital forms for benefits through Vets.gov.
Justice’s Cloud Acquisition Hosts Data Center
The Department of Justice’s (Justice) U.S. Trustee Program (USTP) is responsible for overseeing the administration of bankruptcy cases and private trustees within the United States. According to USTP’s Chief Technology Officer, in June 2016, executives in the Program, including the CIO, decided to migrate USTP’s operations to the cloud to meet regulatory requirements, reduce costs, and improve agility, efficiency, and responsiveness. Officials said that their office had conducted an evaluation and determined that, in order to fulfill OMB’s mandate to consolidate agency data centers, USTP would have to spend at least $1 million for an on-premise consolidation. Officials reported that USTP also faced challenges with having adequate backup capabilities and implementing new technological solutions due to its legacy computing environment and the time it took to purchase and install new hardware and software. Subsequently, in March 2017, the Program moved its operations to the cloud and avoided the cost of consolidating its data centers. In addition, officials in USTP said that the move to the cloud helped them address backup issues, and speed up the development and testing of new applications. Figure 7 provides a summary of Justice’s cloud acquisition.
Avoiding an on-premise data center consolidation and streamlining IT operations reduced costs. According to USTP officials, by moving to the cloud, their office avoided at least $1 million in costs, while resolving internal performance issues and streamlining the management of its contracts. Specifically, officials said that USTP shut down 1 of its 2 data centers, reduced its server inventory from 140 to 75, and reduced the number of vendors from 21 to 9. In addition, the office eliminated an estimated 50 to 70 monthly IT staff hours dedicated to resolving backup issues.
Flexible technology resources sped up the development of functionality. According to USTP officials, acquisitions of new technology previously took several months because of the time needed to estimate requirements and wait for officials to purchase and install hardware and software. By moving to the cloud, USTP officials stated that the intention is to be able to develop and test new applications faster, and determine their viability, with minimal time and costs. Specifically, officials reported that they have set up a cloud test lab to better understand system requirements by scaling up and down resources as needed and experimenting with new capabilities. In addition, while USTP’s legacy monitoring solution required consulting assistance and took months to implement, officials noted that they were able to set up a similar solution in the cloud within 1 week.
DHS’s Cloud Acquisition Supports Information Sharing and Collaboration
The Department of Homeland Security (DHS) collaborates with a variety of agencies and organizations to share information related to homeland security. According to the program’s Service Operations Manager, DHS’s CIO migrated the Homeland Security Information Network to the cloud in July 2017 in order to improve the system’s availability and operational efficiency, while reducing costs. Officials stated that, previously, the agency had faced challenges in ensuring the system’s redundancy and deploying new network enhancements quickly. This was due to the costs and time frames associated with acquiring new infrastructure and maintaining and upgrading current infrastructure. In addition, the agency was not able to quickly develop and deploy new capabilities to meet user needs.
By moving to the cloud, officials stated that the agency was able to implement a disaster response capability and improve the system's operational efficiency, while also establishing more efficient environments for software development and testing. In addition, the agency was able to shut down an existing data center, achieving cost savings of at least 30 percent compared with hosting the network in the data center. Figure 8 provides a summary of DHS's cloud acquisition.
Acquiring infrastructure as a service improved system availability and operational efficiency. According to DHS officials in the Office of the CIO, migrating to the cloud has improved the system's availability and operational efficiency at a lower cost than the prior hosting solution. For example, acquiring infrastructure as a service provided increased redundancy over the old solution and has helped to ensure the network remains continuously available for daily operations and emergency response. The officials stated that, previously, the agency had not been able to implement a disaster recovery capability because it would have cost over $1.5 million to build and maintain a second active network environment. Moving to the cloud enabled the agency to implement this capability at significantly less cost.
In addition, officials in the CIO’s office said that the acquisition of infrastructure as a service has enabled the agency to improve the operational efficiency of the system. For example, network managers can easily stand up new virtual hardware, networking, and storage capabilities, or make changes to existing infrastructure, in less than a day. Officials said that, previously, it used to take staff several months to make these changes manually. This allows network managers to respond very rapidly to changes in user demand, particularly if there are emergencies or natural disasters, and then scale down resources during non-use periods. For example, officials said that managers scaled up resources to support first responders from federal, state, and local governments to share weather, response and recovery information during Hurricanes Harvey, Irma, and Jose, and the West Coast wildfires. In addition, the officials noted that network managers now have access to the latest virtual hardware and the agency does not have to pay for hardware refreshments.
Flexible technology resources strengthened the development of functionality. According to DHS officials in the Office of the CIO, moving to the cloud enabled the agency to inexpensively build multiple environments in the system for software development, testing, and production, which has improved the development and deployment of new services. Software developers now have consistent and standardized environments, which helps to reduce the risk of errors and security vulnerabilities, as well as configuration issues. DHS officials stated that all of these issues previously required staff time and funding to resolve. The developers can also now use automation tools to deploy new code from development into production more quickly to help meet user needs for new functionality. In addition, officials noted that cloud providers are constantly adding new services that users can leverage to do their work more efficiently, without the time and cost of the agency having to develop or procure such capabilities separately.
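One way to picture the consistent, standardized environments officials describe is a single baseline configuration from which each environment is derived, so that development, test, and production differ only in explicitly declared parameters. The sketch below is purely illustrative; the environment names and settings are hypothetical.

```python
# Illustrative sketch: derive dev, test, and production environments from
# one shared template so they stay consistent. All values are hypothetical.
BASE_TEMPLATE = {
    "os_image": "hardened-linux-v12",
    "logging": "centralized",
    "encryption_at_rest": True,
}

ENV_OVERRIDES = {
    "dev":  {"instance_count": 1, "instance_size": "small"},
    "test": {"instance_count": 2, "instance_size": "medium"},
    "prod": {"instance_count": 8, "instance_size": "large"},
}

def build_environment(name: str) -> dict:
    """Merge the shared baseline with per-environment overrides."""
    config = dict(BASE_TEMPLATE)        # every environment starts identical
    config.update(ENV_OVERRIDES[name])  # only declared differences vary
    return config

for env in ("dev", "test", "prod"):
    print(env, build_environment(env))
```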
Agriculture’s Cloud Acquisition Improves Enterprise Content and Electronic Records Management
The Department of Agriculture’s (Agriculture) U.S. Forest Service manages 193 million acres of federal land in order to sustain the health, diversity, and productivity of the nation’s forests and grasslands for present and future generations. According to the Acting Assistant Forest Service CIO for Natural Resources and Environment, in August 2017, the Forest Service began deploying a new enterprise content management and electronic records management system, called Pinyon, to the cloud to help improve operations and the management of electronic records. The move also addressed federal requirements related to electronic records management. Officials stated that, previously, the Forest Service relied on a shared storage drive for enterprise content management. Officials reported that this drive was highly proprietary, slow, and unreliable, and posed a security vulnerability because it could not be easily maintained. In addition, officials reported that the shared storage drive was on the verge of failure because the vendor no longer supported and upgraded the system.
By acquiring two software as a service solutions for enterprise content and electronic records management, officials said that the Forest Service was able to quickly deploy a new system with only some limited software customization for the integration of the two solutions. The Forest Service completed this in two phases; officials deployed the enterprise content management solution in August 2017 and the electronic records management solution began deployment in August 2018. Officials reported that they plan to fully deploy the system by December 2018. Figure 9 provides a summary of Agriculture’s cloud acquisition.
Acquisition of software as a service improved operations. According to Forest Service officials in the Office of the CIO, by acquiring software as a service, the Forest Service was able to implement new enterprise content management capabilities and collaboration tools quickly without the costs and risks associated with software development. Officials said that, previously, users did not have capabilities for managing their own content such as setting permissions, granting access privileges to documents, or easily managing different document versions. In addition, officials noted that users relied heavily on email to collaborate on daily work activities as other collaboration tools were not available. By acquiring software as a service, officials said that the Forest Service was able to quickly implement enhanced workflow and document management capabilities and add new tools for collaboration, which has increased staff productivity. Furthermore, acquiring software as a service allowed the Forest Service to integrate its new system with Agriculture’s electronic authentication system, which the agency could not previously accomplish with the legacy system. By integrating these systems, Forest Service officials said that the agency has increased the accessibility of the Forest Service’s information by allowing staff to securely access files regardless of physical location.
Going forward, officials in the Forest Service said that they are exploring other features and capabilities offered by the cloud vendor to help better meet mission needs. For example, the Forest Service regularly collaborates with a variety of other agencies, state and local governments, educational institutions and other organizations on issues related to managing federal lands and responding to natural disasters, such as wildfires. Officials noted that the Forest Service hopes to use shared virtual workspaces and other collaboration tools to engage these partners.
In addition, by acquiring software as a service, Forest Service officials said that they have ensured that there is a system in place that the vendor will automatically upgrade with new enhancements, capabilities, and the latest technology. For example, Forest Service officials said that they have been able to work with the cloud vendor to ensure the vendor incorporates software changes needed to meet new federal cybersecurity requirements.
Flexible and scalable technology enhanced the management and storage of electronic records. According to Forest Service officials in the Office of the CIO, by moving to the cloud, the Forest Service was able to acquire new storage capabilities that are easily scalable as its volume of electronic records grows over time. Officials said that, previously, the Forest Service used both paper-based records and a shared storage drive for storing work documents and other operational records. Paper-based records were stored in file cabinets and warehouses, while the shared storage drive maintained approximately 320 million files and 250 terabytes of data. In addition, the agency previously used tape backups for the shared storage drive. By moving to the cloud, officials in the Forest Service said that they gained unlimited storage and electronic backup capabilities. Further, Forest Service officials said the new system is intended to scale up storage resources easily as needed for the digitization of its paper-based records and to handle future volumes of electronic records.
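The reported shared-drive figures give a sense of the scale involved. A back-of-the-envelope calculation using the numbers above (roughly 320 million files totaling about 250 terabytes) puts the average file at under a megabyte:

```python
# Back-of-the-envelope scale check using the figures cited above.
files = 320_000_000
terabytes = 250
bytes_total = terabytes * 1024**4

avg_bytes = bytes_total / files
print(f"Average file size: ~{avg_bytes / 1024**2:.2f} MB")  # ~0.82 MB
```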
In addition, by acquiring software as a service, officials in the CIO’s office reported that the Forest Service was able to meet the federal requirement for electronic records management more than a year before the December 2019 deadline, which the prior shared drive could not meet.
Commerce’s Cloud Acquisition Enhances Access to Weather Data
The Department of Commerce’s (Commerce) National Oceanic and Atmospheric Administration works to understand and predict changes in climate, weather, oceans, and coasts, and shares that knowledge and information with others. According to Commerce’s Acting Chief Information Officer, in September 2017, the National Oceanic and Atmospheric Administration’s CIO and National Weather Service leadership decided to deploy its public weather websites to the cloud in order to improve the agility and responsiveness of these websites in a cost-effective manner. Officials stated that in 2016, as a result of Hurricane Matthew, hundreds of millions of web requests led to failures of the program’s on-premises infrastructure, causing websites to become unavailable to the public for a period of time. Subsequently, in September 2017, prior to the landfall of Hurricane Irma, officials stated that the agency launched its weather cloud content delivery network. This new network is intended to ensure the availability of weather-related information, while avoiding additional expenses for infrastructure that would likely go unused during normal business operations. Figure 10 provides a summary of Commerce’s cloud acquisition.
Increased service availability ensured the public’s timely access to extreme weather-related information. According to National Oceanic and Atmospheric Administration officials, the deployment of the weather cloud content delivery network in September 2017 helped websites handle the web requests for data on Hurricanes Irma and Maria by scaling up the resources needed to handle the increased requests. Normally, the weather websites receive approximately 26 million daily web requests from the public. However, officials noted that the number of requests increases dramatically during adverse weather events, such as hurricanes. For example, officials said that in August 2017, the websites began experiencing delays because of the high volume of hurricane-related requests from Hurricane Harvey—including approximately 218 million web requests on August 31, 2017, alone. After deployment to the cloud in September 2017, officials reported that over the course of two days, the weather cloud content delivery network successfully scaled up its resources and handled approximately two billion web requests received through the administration’s websites.
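The figures officials cite give a rough sense of the surge the network absorbed. Using only the numbers reported above (approximately 26 million requests on a normal day versus roughly two billion over two days), a quick calculation shows the content delivery network handling close to 38 times its everyday load:

```python
# Back-of-the-envelope check using the figures cited in the report.
normal_daily_requests = 26_000_000    # typical day
surge_requests = 2_000_000_000        # handled over two days
surge_days = 2

surge_daily = surge_requests / surge_days
multiple = surge_daily / normal_daily_requests
print(f"Surge load: {surge_daily:,.0f} requests/day "
      f"(~{multiple:.0f}x the normal daily volume)")
# -> Surge load: 1,000,000,000 requests/day (~38x the normal daily volume)
```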
On-demand capabilities decreased costs. According to National Oceanic and Atmospheric Administration officials, by acquiring software as a service, the administration avoided the cost of expanding existing on-premises infrastructure to handle sudden surges in demand that last only a short period of time, as well as the associated maintenance costs. Officials said that the program can now scale up the resources supporting the weather cloud content delivery network whenever it anticipates an adverse weather event that would lead to greater demand for website information.
Defense’s Cloud Acquisition Enhances Transportation Command Systems
The Department of Defense’s (Defense) U.S. Transportation Command (USTRANSCOM) provides common user and commercial air, land, and sea transportation, as well as terminal management and air refueling, in support of the military’s deployment, employment, sustainment, and redeployment efforts. USTRANSCOM’s Chief of Cyber Operations and Readiness Division reported that in January 2017, the USTRANSCOM Commander made the decision to migrate all of the command’s systems to the cloud in order to improve mission assurance, agility, responsiveness, efficiency, and operations. Officials reported that, previously, the command had experienced a massive power outage affecting the availability of approximately 25 legacy systems that lacked the capability to quickly recover from network failures. In addition, officials noted that the command’s system, used to manage worldwide moves of Defense personnel property, was not user-friendly and was difficult to maintain because the agency built the system using waterfall software development methods. Lastly, officials said that the command had largely relied on manual reporting activities, which took numerous staff hours to produce, to inform financial, operational, planning, and support decisions.
USTRANSCOM officials said that, by beginning to transition to the cloud in January 2018, the command is in the process of ensuring its systems are secure and continuously available, and is developing capabilities to improve the usability of its legacy systems. In addition, officials reported that the command is streamlining its tracking and reporting mechanisms to allow users to automatically generate key reports, which will give decision makers access to more current and accurate information to help improve program operations. USTRANSCOM officials said that executive sponsorship is critical for migrating to the cloud because it helps overcome the associated culture change by bringing together people throughout the enterprise. In addition, the command’s cloud center of excellence team facilitates the command’s adoption of cloud services by, among other things, training users and addressing governance issues. Figure 11 provides a summary of Defense’s cloud acquisition.
Incorporating automated recovery from network failures and streamlining security increased mission assurance. According to USTRANSCOM officials, by moving the command’s network to the cloud, the command has been able to design and build its new network with higher levels of availability. For example, officials said that if a network segment becomes unavailable, the cloud technology can automatically reroute traffic to help reduce the delay that users experience. In addition, officials reported that developers have been working to automate several hundred security checks that are part of Defense’s security technical implementation guides by implementing a repeatable, automated process instead of performing manual checks. Officials noted that, previously, manually checking the status of configurations took hundreds of staff hours to complete; the command anticipates that automation will eventually eliminate these hours of manual checks. The command plans to implement the new capability in May 2019.
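The automation officials describe, replacing manual configuration checks with a repeatable process, follows a common pattern: encode each check as a small function and run the full suite on demand. The sketch below is a generic illustration rather than USTRANSCOM’s actual tooling, and the check names and rules are hypothetical.

```python
# Generic sketch of an automated configuration-check runner, in the spirit
# of automating STIG-style checks. The checks shown are hypothetical.
def check_password_min_length(config: dict) -> bool:
    return config.get("password_min_length", 0) >= 15

def check_audit_logging_enabled(config: dict) -> bool:
    return config.get("audit_logging") is True

CHECKS = {
    "password minimum length": check_password_min_length,
    "audit logging enabled": check_audit_logging_enabled,
}

def run_checks(config: dict) -> None:
    """Run every registered check and report pass/fail for each."""
    for name, check in CHECKS.items():
        status = "PASS" if check(config) else "FAIL"
        print(f"[{status}] {name}")

run_checks({"password_min_length": 15, "audit_logging": False})
```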
Replacing a legacy system with a cloud-based system developed using Agile software development methodologies will enhance the shipment of personnel property. According to USTRANSCOM officials, moving to the cloud has assisted the command in replacing the legacy system that manages moves of Defense personnel property, such as household goods, with a mobile prototype built in the cloud. Officials reported that the legacy system currently uses a variety of commercial products that are difficult to maintain and do not efficiently address the command’s complex business processes for personnel property moves, all of which affects the usability of the system. Currently, the command is using Agile software development methodologies to reengineer its business processes and develop a solution that is mobile and user-friendly. The new mobile prototype is intended to allow personnel to request access in order to manage the moves of certain household goods. Officials reported that the command initially planned to deploy the prototype in June 2018, but deployment was delayed and a new date had not yet been identified.
Automating reporting and tracking mechanisms will help eliminate manual processes. According to USTRANSCOM officials, the command is in the process of automating its processes for reporting and tracking cargo shipments using cloud technologies. Currently, the command employs manual processes to track and monitor a variety of its cargo shipments. For example, officials reported that five analysts typically spend one day compiling a status report that details delays with food shipments for Defense military exercises and operations. In addition, analysts currently have to query up to 11 Defense and commercial carrier systems to compile a report on high-priority shipments across the combatant commands. Officials noted that these analysts often experience delays in getting access to timely information and must also resolve conflicting information across the various transportation systems. However, with the transition to cloud services, officials in USTRANSCOM reported that analysts will have the capability to automatically generate reports based on defined criteria, such as shipment method or destination, and use data feeds that officials can continuously update. Officials reported that, by developing phase one of the system in the cloud in fiscal year 2018, the command will be able to monitor delays in a shipment and immediately take action to change the mode of transportation or source shipments from alternate suppliers. The command plans to release the full operational capability in fiscal year 2020, which, officials noted, will give authorized users near real-time access to shipment information, including estimates of whether a shipment will arrive on time.
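The report generation officials describe, filtering shipment data by defined criteria such as method or destination, can be sketched as a simple query over shipment records. The records and field names below are hypothetical and for illustration only.

```python
# Illustrative sketch of generating a shipment-delay report from defined
# criteria such as method or destination. All records are hypothetical.
shipments = [
    {"id": "S-101", "method": "sea", "destination": "PACOM", "delayed": True},
    {"id": "S-102", "method": "air", "destination": "EUCOM", "delayed": False},
    {"id": "S-103", "method": "sea", "destination": "PACOM", "delayed": True},
]

def delay_report(records, method=None, destination=None):
    """Return delayed shipments matching the given criteria."""
    return [
        r for r in records
        if r["delayed"]
        and (method is None or r["method"] == method)
        and (destination is None or r["destination"] == destination)
    ]

for shipment in delay_report(shipments, method="sea"):
    print(f"Delayed: {shipment['id']} -> {shipment['destination']}")
```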
Appendix VII: Comments from the Department of Commerce
Appendix VIII: Comments from the Department of Education
Appendix IX: Comments from the Department of Energy
Appendix X: Comments from the Department of Health and Human Services
Appendix XI: Comments from the Department of Homeland Security
Appendix XII: Comments from the Department of State
Appendix XIII: Comments from the Department of Transportation
Appendix XIV: Comments from the Department of Veterans Affairs
Appendix XV: Comments from the General Services Administration
Appendix XVI: Comments from the Department of Labor
Appendix XVII: Comments from the Small Business Administration
Appendix XVIII: Comments from the Social Security Administration
Appendix XIX: Comments from the Department of Defense
Appendix XX: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the individual named above, the following staff made key contributions to this report: Dave Powner (Director), Dave Hinchman (Assistant Director), Chris Businsky, Nancy Glover, Valerie Hopkins (Analyst-in-Charge), Sandra Kerr, James MacAulay, Jamelyn Payan, and Priscilla Smith.
Why GAO Did This Study
Cloud computing enables on-demand access to shared computing resources, providing services more quickly and at a lower cost than having agencies maintain these resources themselves. In 2012, OMB began requiring agencies to assess all IT investments for cloud services.
GAO was asked to review agencies' reported use of cloud services. This report discusses selected agencies' progress in implementing cloud services, the extent to which those agencies increased cloud service spending and achieved savings or cost avoidances, and examples of agency-reported cloud investments with notable benefits. GAO selected 16 agencies to review based on their fiscal year 2017 IT budgets and analyzed their use of cloud services, associated spending and savings data, and guidance for assessing investments for these services. GAO interviewed agency officials in charge of cloud services and reviewed pertinent documents to identify acquisitions with notable benefits. GAO also interviewed OMB staff about their agency's role in federal cloud computing and related OMB guidance.
What GAO Found
The 16 agencies GAO reviewed made progress in implementing cloud computing services (cloud services)—namely, they established assessment guidance, performed assessments, and implemented these services—but the extent of their progress varied. To encourage cloud service acquisition, the Office of Management and Budget (OMB) began requiring agencies to assess all information technology (IT) investments for cloud services. However, only 10 of the 16 agencies reviewed had established assessment guidance. In addition, while the agencies assessed the majority of their planned fiscal year 2019 IT investments for cloud services, 12 agencies had not completed an assessment of 10 or more investments. Nevertheless, 10 of the agencies reported increasing their use of cloud services from fiscal year 2016 through fiscal year 2019 (see figure). Six agencies noted that inconsistent reporting of cloud investments and investment consolidation impacted their reported percentage.
Further, the 16 agencies reported that they had increased their cloud service spending since 2015 and 13 of the 16 agencies had saved $291 million to date from these services. However, these agencies identified issues in tracking and reporting cloud spending and savings data, including not having consistent processes in place to do so. Agencies also noted that OMB guidance did not require them to explicitly report savings from cloud implementations and, therefore, they had to specifically collect this data to meet GAO's request. As a result of these identified issues, it is likely that agency-reported cloud spending and savings figures were underreported.
Officials from 15 of the 16 agencies reported that they had identified significant benefits from acquiring cloud services, including improved customer service and the acquisition of more cost-effective options for managing IT services. In addition, these agencies identified nine cloud investments that, among other things, enhanced the availability of weather-related information, facilitated collaboration and information sharing among federal, state, and local agencies related to homeland security, and provided benefits information to veterans, as examples of systems that realized these benefits. One agency reported that it had not realized benefits because it did not have any completed migration efforts.
What GAO Recommends
GAO is making one recommendation to OMB on cloud savings reporting, and 34 recommendations to the 16 agencies on cloud assessments and savings. Fourteen agencies agreed with all recommendations, OMB and one agency neither agreed nor disagreed, and one (Defense) agreed with one recommendation but not the other. GAO continues to believe its recommendation to the department is appropriate.
Background
DHS leads the federal government’s efforts to secure our nation’s public and private critical infrastructure information systems against cyber threats. As part of these efforts, cybersecurity professionals can help to prevent or mitigate the vulnerabilities that could allow malicious individuals and groups access to federal information technology (IT) systems. The ability to secure federal systems depends on the knowledge, skills, and abilities of the federal and contractor workforce that designs, develops, implements, secures, maintains, and uses these systems.
The Office of Management and Budget has noted that the federal government and private industry face a persistent shortage of cybersecurity and IT talent to implement and oversee information security protections. This shortage may leave federal IT systems vulnerable to malicious attacks. Experienced and qualified cybersecurity professionals are essential in performing DHS’s work to mitigate vulnerabilities in its own and other agencies’ computer systems and to defend against cyber threats.
Since 1997, we have identified the protection of federal information systems as a governmentwide high-risk area. In addition, in 2001, we introduced strategic governmentwide human capital management as another area of high risk. We have also identified a number of challenges federal agencies are facing to ensure that they have a sufficient cybersecurity workforce with the skills necessary to protect their information and networks from cyber threats. These challenges pertain to identifying and closing skill gaps as part of a comprehensive workforce planning process, recruiting and retaining qualified staff, and navigating the federal hiring process.
Federal Initiative and Guidance Are Intended to Improve Cybersecurity Workforces
In recent years, the federal government has taken various steps aimed at improving the cybersecurity workforce. These include establishing a national initiative to promote cybersecurity training and skills and developing guidance to address cybersecurity workforce challenges.
Founded in 2010, the National Initiative for Cybersecurity Education (NICE) is a partnership among government, academia, and the private sector, and is coordinated by the National Institute of Standards and Technology (NIST). The NICE mission promotes cybersecurity education, training, and workforce development in coordination with its partners. The initiative’s goal is to increase the number of skilled cybersecurity professionals in order to boost national IT security.
In 2013, NICE published the National Cybersecurity Workforce Framework to provide a consistent way to define and describe cybersecurity work at any public or private organization, including federal agencies. In 2014, OPM developed guidance for assigning 2-digit employment codes for each cybersecurity work category and specialty area identified in the 2013 NICE framework. Federal agencies can use the codes to identify cybersecurity positions in personnel and payroll systems, such as the system of the National Finance Center.
To further enhance efforts to strengthen the cybersecurity workforce, NICE subsequently revised the framework in 2017 to include 33 cybersecurity-related specialty areas organized into 7 categories—securely provision, operate and maintain, protect and defend, investigate, collect and operate, analyze, and oversee and govern. The revision defined work roles in specialty areas and cybersecurity tasks for each work role, as well as the knowledge, skills, and abilities that a person should have in order to perform each work role. Also, in 2017, OPM issued guidance creating a unique 3-digit employment code for each cybersecurity work role. In October 2017, NIST issued guidance that reflected the finalized 2017 NICE framework and included a crosswalk of OPM’s 2-digit employment codes to the 3-digit codes.
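Mechanically, a crosswalk of this kind is a mapping from each legacy 2-digit code to its successor 3-digit code, applied across personnel records. The sketch below illustrates the idea; the codes shown are made-up placeholders, as the actual values are defined in OPM’s published guidance.

```python
# Illustrative sketch of applying a 2-digit -> 3-digit employment-code
# crosswalk to personnel records. Codes shown are hypothetical placeholders;
# the real values are defined in OPM's guidance.
CROSSWALK = {
    "11": "511",  # hypothetical: old specialty-area code -> new work-role code
    "22": "722",
}

records = [
    {"position": "IT Specialist (INFOSEC)", "cyber_code": "11"},
    {"position": "Program Analyst", "cyber_code": "22"},
]

for record in records:
    old = record["cyber_code"]
    record["cyber_code"] = CROSSWALK.get(old, old)  # leave unmapped codes as-is
    print(record["position"], old, "->", record["cyber_code"])
```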
DHS’s Cybersecurity Workforce Performs a Wide Range of Critical Missions
DHS is the third largest department in the federal government, employing approximately 240,000 people, and operating with an annual budget of about $60 billion, of which about $6.4 billion was reportedly spent on IT in fiscal year 2017. In leading the federal government’s efforts to secure our nation’s public and private critical infrastructure information systems, the department, among other things, collects and shares information related to cyber threats and cybersecurity risks and incidents with other federal partners to enable real-time actions to address these risks and incidents.
The department is made up of 15 operational and support components that perform its critical mission functions. Table 1 describes the 6 components that we included in our review.
DHS Is Required to Assess Its Cybersecurity Workforce
The Homeland Security Cybersecurity Workforce Assessment Act of 2014 required DHS to perform workforce assessment-related activities to identify and assign employment codes to its cybersecurity positions. Specifically, the act called for DHS to:
1. Establish procedures for identifying and categorizing cybersecurity positions and assigning codes to positions (within 90 days of the law’s enactment).
2. Identify all filled and vacant positions with cybersecurity functions and determine the work category and specialty area of each.
3. Assign OPM 2-digit employment codes to all filled and vacant cybersecurity positions based on the position’s primary cybersecurity work category and specialty areas, as set forth in OPM’s Guide to Data Standards.
In addition, after completing the aforementioned activities, the act called for the department to take steps to identify and report its cybersecurity workforce areas of critical need. Specifically, DHS was to:
4. Identify the cybersecurity work categories and specialty areas of critical need in the department’s cybersecurity workforce and report to Congress.
5. Submit to OPM an annual report through 2021 that describes work categories and specialty areas of critical need and substantiates the critical need designations.
The act required DHS to complete the majority of these activities by specific due dates between March 2015 and September 2016.
Within DHS, OCHCO is responsible for carrying out these provisions, including the coordination of the department’s overall efforts to identify, categorize, code, and report its cybersecurity workforce assessment progress to OPM and Congress.
DHS Has Not Fully Identified Cybersecurity Positions or Assigned Employment Codes in a Complete and Reliable Manner
The act required DHS to establish procedures to identify and assign the appropriate employment code, in accordance with OPM’s Guide to Data Standards, to all filled and vacant positions with cybersecurity functions by March 2015. In addition, DHS’s April 2016 Cybersecurity Workforce Coding guidance states that components should ensure procedures are in place to monitor and to update the employment codes as positions change over time.
Further, the Standards for Internal Control in the Federal Government recommends that management assign responsibility and delegate authority to key roles and that each component develop individual procedures to implement objectives. The standards also recommend that management periodically review such procedures to see that they are developed, relevant, and effective.
DHS OCHCO developed departmental procedures in May 2014 and recommended implementation steps for coding positions with cybersecurity functions for the department’s components. However, OCHCO did not update its procedures to include information on identifying positions and assigning codes until April 2016—13 months after the due date specified by the act.
In addition, the procedures were not complete because they did not include information related to identifying and coding vacant positions, as the act required. Moreover, the departmental procedures did not identify the individual within each DHS component who was responsible for leading and overseeing the identification and coding of the component’s cybersecurity positions.
Further, although components were able to supplement the departmental procedures by developing their own component-specific procedures for identifying and coding their cybersecurity positions, OCHCO did not review those procedures for consistency with departmental guidance. The department could not provide documentation that OCHCO had verified or reviewed component-developed procedures. In addition, OCHCO officials acknowledged that they had not reviewed the components’ procedures and had not developed a process for conducting such reviews.
OCHCO officials stated that several factors had limited their ability to develop the procedures and to review component-developed procedures in a timely and complete manner. These factors were (1) a delayed departmental decision, not made until April 2016, as to whether certain positions should be considered cybersecurity positions; (2) a belief that each component had the best understanding of its human capital systems, so procedure development was best left up to each component; (3) the fact that each of the six selected DHS components recorded and tracked vacant positions differently; and (4) the fact that cybersecurity specialty areas for vacant positions were not known until a position description was developed or verified and a hiring action was imminent. Without assurance that procedures are timely, complete, and reviewed, DHS cannot be certain that its components have the procedures to identify and code all positions with cybersecurity functions, as required by the act.
Accordingly, our February 2018 report included recommendations that DHS (1) develop procedures on how to identify and code vacant cybersecurity positions, (2) identify the individual in each component who is responsible for leading that component’s efforts in identifying and coding cybersecurity positions, and (3) establish and implement a process to periodically review each component’s procedures for identifying component cybersecurity positions and maintaining accurate coding. DHS concurred with the recommendations and stated that it would implement them by April 30, 2018.
DHS Has Not Yet Completed Required Identification Activities
The act required DHS to identify all of its cybersecurity positions, including vacant positions, by September 2015. Further, the act called for the department to use OPM’s Guide to Data Standards to categorize the identified positions and determine the work category or specialty area of each position.
As of December 2016, the department reported that it had identified 10,725 cybersecurity positions, including 6,734 federal civilian positions, 584 military positions, and 3,407 contractor positions. Nevertheless, as of November 2017, the department had not completed identifying all of its cybersecurity positions and it had not determined the work categories or specialty areas of the positions. In explaining why the department had not identified all its positions, OCHCO officials stated that components varied in reporting their identified vacant positions because the department did not have a system to track vacancies.
Of the 7 work categories and 33 specialty areas in the NICE framework, DHS reported that its 3 most common work categories were “protect and defend,” “securely provision,” and “oversight and development,” and its 2 most common specialty areas were “security program management” and “vulnerability assessment and management.” However, DHS could not provide data to show the actual numbers of positions in each of these categories and specialty areas.
According to OCHCO officials, the department was still in the process of identifying positions for the 2-digit codes and would continue this effort until the 3-digit codes were available in the National Finance Center personnel and payroll system in December 2017. At that time, OCHCO officials stated that the department intends to start developing procedures for identifying and coding positions using the 3-digit codes.
DHS Has Not Completely and Accurately Assigned Employment Codes
The act also required DHS to assign 2-digit employment codes to all of its identified cybersecurity positions. This action was to be completed by September 2015.
However, as of August 2017—23 months after the due date—the department had not completed the coding assignment process. Although, in August 2017, OPM provided a progress report to Congress containing DHS data stating that 95 percent of DHS-identified cybersecurity positions had been coded, our analysis determined that the department had assigned cybersecurity position codes to approximately 79 percent of its identified federal civilian cybersecurity positions. The primary reason for this discrepancy was that DHS did not include the coding of vacant positions, as required by the act. Further, OCHCO officials stated they did not verify the accuracy of the components’ cybersecurity workforce data. Without coding cybersecurity positions in a complete and accurate manner, DHS will not be able to effectively examine its cybersecurity workforce, identify skill gaps, and improve workforce planning.
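The discrepancy has a simple arithmetic shape: a coding rate computed over filled positions only will exceed the rate computed over filled plus vacant positions whenever vacancies are largely uncoded. The sketch below illustrates the effect with hypothetical counts, not DHS’s actual figures:

```python
# Hypothetical illustration of how excluding vacant positions inflates a
# reported coding rate. These counts are made up for illustration.
filled_coded, filled_total = 950, 1_000
vacant_coded, vacant_total = 30, 240

rate_filled_only = filled_coded / filled_total
rate_all = (filled_coded + vacant_coded) / (filled_total + vacant_total)

print(f"Filled positions only: {rate_filled_only:.0%}")  # 95%
print(f"Filled + vacant:       {rate_all:.0%}")          # 79%
```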
Thus, in our recently issued report, we recommended that OCHCO collect complete and accurate data on all filled and vacant cybersecurity positions when it conducts its cybersecurity identification and coding efforts. DHS concurred with the recommendation and stated that, by June 29, 2018, it intends to issue memorandums to its components that provide instructions for the components to periodically review compliance and cybersecurity workforce data concerns to ensure data accuracy.
DHS Has Not Identified or Reported Its Cybersecurity Workforce Areas of Critical Need
According to the act, DHS was to identify its cybersecurity work categories and specialty areas of critical need in alignment with the NICE framework and to report this information to the appropriate congressional committees by June 2016. In addition, a DHS directive required the DHS Chief Human Capital Officer to provide guidance to the department’s components on human resources procedures, including identifying workforce needs.
As of February 2018, the department had not fulfilled its requirements to identify and report its critical needs. Although DHS identified workforce skills gaps in a report that it submitted to congressional committees in March 2017, the department did not align the skills gaps to the NICE framework’s defined work categories and specialty areas of critical need.
In September 2017, OCHCO developed a draft document that attempted to crosswalk identified department-wide cybersecurity skills gaps to one or more specialty areas in the NICE framework. However, the document did not adequately help components identify their critical needs by aligning their gaps with the NICE framework because it did not provide clear guidance to help components determine a critical need in cases in which a skills gap is mapped to multiple work categories.
According to OCHCO officials, DHS had not identified department-wide cybersecurity critical needs that aligned with the framework partly because OPM did not provide DHS with guidance for identifying cybersecurity critical needs. In addition, OCHCO officials stated that the components did not generally view critical skills gaps in terms of the categories or specialty areas as defined in the NICE framework, but instead, described their skills gaps using position titles that are familiar to them. In the absence of relevant guidance to help components identify their critical needs, DHS and the components are hindered from effectively identifying and prioritizing workforce efforts to recruit, hire, train, develop, and retain cybersecurity personnel.
DHS also did not report cybersecurity critical needs to OPM in September 2016 or September 2017, as required. Instead, the department first reported its cybersecurity coding progress and skills gaps in a March 2017 report that it sent to OPM and Congress to address several of the act’s requirements. However, the report did not describe or substantiate critical need designations because DHS has not yet identified them.
Additionally, DHS had not developed plans or time frames to complete priority actions—developing a DHS cybersecurity workforce strategy and completing its initial cybersecurity workforce research—that OCHCO officials said must be completed before it can report its cybersecurity critical needs to OPM. According to OCHCO officials, the report that the department submitted to Congress in March 2017 had contained plans and schedules. However, we found that the March 2017 report did not capture and sequence all of the activities that DHS officials said must be completed in order to report critical needs. Until DHS develops plans and schedules with time frames for reporting its cybersecurity critical needs, DHS may not have insight into its needs for ensuring that it has the workforce necessary to carry out its critical role of helping to secure the nation’s cyberspace.
In our report, we recommended that DHS (1) develop guidance to assist DHS components in identifying their cybersecurity work categories and specialty areas of critical need that align to the NICE framework and (2) develop plans with time frames to identify priority actions to report on specialty areas of critical need. DHS concurred with the recommendations and stated that it plans to implement them by June 2018.
In summary, DHS needs to act now to completely and accurately identify, categorize, and assign codes to all of its cybersecurity positions, and to identify and report on its cybersecurity workforce areas of critical need. Implementing the six recommendations we made in our February 2018 report should better position the department to meet the requirements of the 2014 act. Further, doing so will help DHS understand its needs for recruiting, hiring, developing, and retaining a cybersecurity workforce with the skills necessary to accomplish the department’s varied and essential cybersecurity mission. Until DHS implements our recommendations, it will not be able to ensure that it has the necessary cybersecurity personnel to help protect the department’s and federal networks and the nation’s critical infrastructure from cyber threats.
Chairmen Ratcliffe and Perry, Ranking Members Richmond and Correa, and Members of the Subcommittees, this concludes my statement. I would be pleased to respond to your questions.
GAO Contact and Staff Acknowledgments
If you or your staffs have any questions about this testimony, please contact Gregory C. Wilshusen at (202) 512-6244 or [email protected], or Chris P. Currie at (404) 679-1875 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement.
GAO staff who made key contributions to this testimony are Alexander Anderegg, Ben Atwater, David Blanding, Jr., Chris Businsky, Wayne Emilien, Jr., Nancy Glover, David Hong, Tammi Kalugdan, David Plocher, Luis E. Rodriguez, and Priscilla Smith.
Related GAO Products
GAO, Cybersecurity: Federal Efforts Are Under Way That May Address Workforce Challenges, GAO-17-533T (Washington, D.C.: Apr. 4, 2017).
GAO, Information Security: DHS Needs to Continue to Advance Initiatives to Protect Federal Systems, GAO-17-518T (Washington, D.C.: Mar. 28, 2017).
GAO, High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others, GAO-17-317 (Washington, D.C.: Feb. 15, 2017).
GAO, Cybersecurity: Actions Needed to Strengthen U.S. Capabilities, GAO-17-440T (Washington, D.C.: Feb. 14, 2017).
GAO, IT Workforce: Key Practices Help Ensure Strong Integrated Program Teams; Selected Departments Need to Assess Skill Gaps, GAO-17-8 (Washington, D.C.: Nov. 30, 2016).
GAO, Federal Chief Information Security Officers: Opportunities Exist to Improve Roles and Address Challenges to Authority, GAO-16-686 (Washington, D.C.: Aug. 26, 2016).
GAO, Federal Hiring: OPM Needs to Improve Management and Oversight of Hiring Authorities, GAO-16-521 (Washington, D.C.: Aug. 2, 2016).
GAO, Information Security: DHS Needs to Enhance Capabilities, Improve Planning, and Support Greater Adoption of Its National Cybersecurity Protection System, GAO-16-294 (Washington, D.C.: Jan. 28, 2016).
GAO, Federal Workforce: OPM and Agencies Need to Strengthen Efforts to Identify and Close Mission-Critical Skills Gaps, GAO-15-223 (Washington, D.C.: Jan. 30, 2015).
GAO, Cybersecurity Human Capital: Initiatives Need Better Planning and Coordination, GAO-12-8 (Washington, D.C.: Nov. 29, 2011).
Why GAO Did This Study
DHS is the lead agency tasked with protecting the nation's critical infrastructure from cyber threats. The Homeland Security Cybersecurity Workforce Assessment Act of 2014 required DHS to identify, categorize, and assign employment codes to all of the department's cybersecurity workforce positions. These codes define work roles and tasks for cybersecurity specialty areas such as program management and system administration. Further, the act required DHS to identify and report its cybersecurity workforce critical needs.
GAO was asked to testify on the extent to which DHS has (1) identified, categorized, and assigned employment codes to its cybersecurity positions and (2) identified its cybersecurity workforce areas of critical need. To do so, GAO summarized the findings discussed in its February 2018 report on DHS's cybersecurity workforce ( GAO-18-175 ).
What GAO Found
The Department of Homeland Security (DHS) has taken actions to identify, categorize, and assign employment codes to its cybersecurity positions, as required by the Homeland Security Cybersecurity Workforce Assessment Act of 2014 ; however, its actions have not been timely and complete. For example, DHS did not establish timely and complete procedures to identify, categorize, and code its cybersecurity position vacancies and responsibilities. Further, DHS did not complete efforts to identify all of the department's cybersecurity positions and accurately assign codes to all filled and vacant cybersecurity positions. In August 2017, DHS reported to Congress that it had coded 95 percent of the department's identified cybersecurity positions. However, the department had, at that time, coded approximately 79 percent of the positions. DHS's 95 percent estimate was overstated primarily because it excluded vacant positions, even though the act required DHS to report these positions.
In addition, although DHS has taken steps to identify its workforce capability gaps, it has not identified or reported to Congress on its departmentwide cybersecurity critical needs that align with specialty areas. The department also has not reported its cybersecurity critical needs annually to the Office of Personnel Management (OPM), as required, and has not developed plans with clearly defined time frames for doing so. (See table.)
Without ensuring that its procedures are complete and that its progress in identifying and assigning codes to its cybersecurity positions is accurately reported, DHS will not be positioned to effectively examine its cybersecurity workforce, identify critical skill gaps, or improve its workforce planning. Further, until DHS establishes plans and time frames for reporting on its critical needs, the department may not be able to ensure that it has the necessary cybersecurity personnel to help protect the department's and the nation's federal networks and critical infrastructure from cyber threats. The commitment of DHS's leadership to addressing these matters is essential to helping the department fulfill the act's requirements.
What GAO Recommends
In its February 2018 report, GAO recommended that DHS take six actions, including ensuring that its cybersecurity workforce procedures identify position vacancies and responsibilities; reported workforce data are complete and accurate; and plans for reporting on critical needs are developed. DHS concurred with the six recommendations and described actions the department plans to take to address them.
Congress and Executive Branch Agencies Continue to Address Actions That Span the Federal Government
We monitor the progress that Congress and executive branch agencies have made in addressing the issues we identified in each of our last seven annual reports. As shown in table 4, Congress and executive branch agencies have made consistent progress in addressing many of the actions we identified from 2011 to 2017. As of March 2018, 376 (52 percent) of the actions we identified from 2011 to 2017 have been fully addressed. See our online Action Tracker for the status of all actions.
Billions in Financial Benefits Due to Actions Taken by Congress and Executive Branch Agencies
The progress Congress and executive branch agencies have made in addressing our open actions has resulted in $178 billion in financial benefits, including roughly $125 billion realized from 2010 through 2017, with at least an additional $53 billion in estimated benefits projected to accrue in 2018 or later. Table 5 highlights examples of these results.
These financial benefits continue to grow as we identify and document additional agency actions that respond to our recommendations. For example, in recent months CMS has formalized changes to its oversight of spending allowed for large Medicaid demonstrations, which allow states to test new ways to deliver or pay for care. These demonstrations, by HHS policy, should not raise federal costs over what the program would have cost without the demonstration—that is, they should be budget neutral. But our past work has shown that the spending HHS had authorized for these demonstrations was much higher than what was justified, as HHS had allowed states to use questionable methods when proposing spending for their demonstrations.
CMS’s new policy partially responds to a longstanding recommendation we have made to better ensure that valid methods are used to demonstrate budget neutrality. We anticipate that CMS’s recent actions could potentially reduce the federal government’s liability for Medicaid by billions, or tens of billions, annually.
While not all actions result in financial benefits to taxpayers, all of our suggested actions, when implemented, can result in other benefits—for instance, they make government more efficient or eliminate, reduce, or improve management of fragmented, overlapping, or duplicative programs. For example, such benefits can be seen in the results of our work on the government’s acquisition of space programs. For over two decades, we and others have reported on problems caused by fragmented leadership and a lack of a single authority in the oversight of these multibillion-dollar programs.
In 2012, we made a recommendation aimed at strengthening leadership and authority of space systems acquisitions. In response, in 2017 the President revived the National Space Council to provide a coordinated process for developing and monitoring the implementation of national space policy and strategy. Separately, in the National Defense Authorization Act for Fiscal Year 2018, Congress made changes to certain DOD space leadership positions and required the department to conduct a review and identify a recommended organizational and management structure for its national security space components, and submit related reports. The act also required DOD to contract with a federally funded research and development center not closely affiliated with the Air Force to develop a plan to establish a separate military department responsible for DOD national security space activities. These actions could reduce fragmentation and speed decision making in the development of a substantial investment in space systems.
Action on Remaining and New Areas Could Yield Significant Additional Benefits
While Congress and executive branch agencies have made progress toward addressing the 798 total actions we have identified since 2011, further steps are needed to fully address the 365 actions that are partially addressed or not addressed. We estimate that tens of billions of dollars in additional financial benefits could be realized should Congress and executive branch agencies fully address open actions. In addition to producing financial benefits, these actions make government more efficient; improve major government programs or agencies; reduce the risk of mismanagement, fraud, waste, and abuse; and increase assurance that programs comply with laws and funds are legally spent.
Significant Open Actions Directed to Congress
Congress has used our work to identify legislative solutions to achieve cost savings, address emerging problems, and find efficiencies in federal agencies and programs. Our work has contributed to a number of key authorizations and appropriations. In addition, congressional oversight of agencies’ efforts has been critical in realizing the full benefits of our suggested actions addressed to the executive branch, and it will continue to be critical in the future.
In our 2011 to 2018 annual reports, we directed 100 actions to Congress, including the 3 new congressional actions we identified in 2018. Of the 100 actions, 58 remain open (11 of which were partially addressed and 47 were not addressed or new) as of March 2018. Table 6 highlights areas with significant open actions directed to Congress. Appendix I has a full list of all open congressional actions.
Significant Open Actions Directed to Executive Branch Agencies
In our 2011 to 2018 annual reports, we directed 698 actions to executive branch agencies, including 65 new actions identified in 2018. Of the 698 actions, 307 remained open as of March 2018. Of these open actions, 164 were partially addressed and 143 were not addressed or new. While these open actions span the government, a substantial number of them are directed to seven agencies that made up 83 percent—$3.7 trillion—of federal outlays in fiscal year 2017 and have the largest number of open actions (see figures 2 and 3).
As shown in figure 3, seven agencies have at least 25 open actions.
The following sections highlight examples of open actions across those seven major agencies.
More Efficiently Targeting Defense Resources
In our 2011 to 2018 reports, we directed 176 actions to DOD in areas that center on DOD’s effectiveness in providing the military forces needed to deter war and to protect the security of the United States. As of March 2018, 74 of these 176 actions remained open. DOD represented about 14 percent of federal spending in fiscal year 2017, with outlays totaling about $635.5 billion. Our work suggests that effectively implementing these open actions, including those related to areas listed in table 7, could yield substantial financial benefits and improve DOD’s effectiveness.
Improving the Efficiency of Health Care Programs
In our 2011 to 2018 reports, we directed 111 actions to HHS in areas that contribute to HHS’s mission to enhance the health and well-being of Americans. HHS provides health coverage for over 145 million Americans through three principal programs—Medicare, Medicaid, and the Children’s Health Insurance Program—as well as the health-insurance marketplaces. HHS also operates other public health-related agencies such as the Food and Drug Administration, the Centers for Disease Control and Prevention, and the National Institutes of Health.
HHS represented about 27 percent of the fiscal year 2017 federal budget, with outlays totaling about $1.2 trillion. As of March 2018, 56 of HHS’s 111 actions remained open. Our work suggests that effectively implementing these actions, including those related to areas listed in table 8, could reduce costs, provide services more efficiently, and yield substantial financial benefits.
Enhancing Federal Revenues
In our 2011 to 2018 reports, we directed 91 actions to the Internal Revenue Service (IRS) in areas that contribute to effectively and efficiently providing high-quality service to taxpayers and enforcing the law with integrity and fairness to all. As of March 2018, 38 of these 91 actions remained open. The funding of the federal government depends largely upon IRS’s ability to collect taxes legally owed. Our work suggests that effective implementation of our open actions, including those related to areas listed in table 9, could increase revenues through better compliance or reduce costs.
Improving the Efficiency and Effectiveness of Homeland Security Operations
In our 2011 to 2018 reports, we directed 79 actions to the Department of Homeland Security (DHS) in areas that contribute to the effective implementation of its mission. In fiscal year 2017, DHS spent about $63.6 billion, about 1.4 percent of federal outlays. As of March 2018, 31 of the 79 actions to DHS remained open. Fully implementing these actions, including those related to areas listed in table 10, could result in financial benefits and substantial improvements in agency operations.
Advancing the Implementation of Government-Wide Policies and Performance
Many of the results the federal government seeks to achieve require the coordinated effort of more than one federal agency, level of government, or sector. OMB manages and coordinates many government-wide efforts. In our 2011 to 2018 reports, we directed 66 actions to OMB in areas to improve the efficiency and effectiveness of government-wide programs and activities. As of March 2018, 30 of the 66 actions to OMB remained open. Fully implementing these actions, including those related to areas listed in table 11, could yield significant financial benefits and substantial program improvements across government.
More Efficiently Administering Services to Retirees and Citizens with Disabilities
In our 2011 to 2018 reports, we directed 32 actions to the Social Security Administration (SSA) in areas that contribute to SSA providing financial assistance to eligible individuals through Social Security retirement and disability benefits and Supplemental Security Income (SSI) payments. As of March 2018, 27 of these 32 actions remained open.
In fiscal year 2017, SSA spent about $1 trillion, roughly 22 percent of federal outlays. While most of SSA’s funding is used to pay Social Security retirement, survivors, and disability benefits from the Old-Age and Survivors Insurance Trust Fund and the Disability Insurance Trust Fund, our work suggests that effective implementation of these actions, including the examples listed in table 12, could result in significant benefits.
Improving Support and Services for Veterans
In our 2011 to 2018 reports, we directed 54 actions to the Department of Veterans Affairs (VA) in areas that contribute to VA effectively and efficiently achieving its mission to promote the health, welfare, and dignity of all veterans by ensuring that they receive medical care, benefits, and social support. As of March 2018, 25 of these 54 actions remained open. In fiscal year 2017, VA spent about $183.0 billion—about 4 percent of federal outlays—for veterans’ benefits and services. Our work suggests that effective implementation of these actions, including those related to areas listed in table 13, could yield cost savings and efficiencies that would improve the delivery of services to the nation’s veterans and their families.
We will continue to look for additional or emerging instances of fragmentation, overlap, and duplication and opportunities for cost savings or revenue enhancement. Likewise, we will continue to monitor developments in the areas we have already identified. We stand ready to assist this and other committees in further analyzing the issues we have identified and evaluating potential solutions.
Thank you, Chairman Enzi, Ranking Member Sanders, and Members of the Committee; this concludes my prepared statement. I would be pleased to answer questions.
GAO Contacts
For further information on this testimony or our April 26, 2018 report, please contact J. Christopher Mihm, Managing Director, Strategic Issues, at (202) 512-6806 or [email protected], and Jessica Lucas-Judy, Director, Strategic Issues, at (202) 512-9110 or [email protected]. Contact points for the individual areas listed in our 2018 annual report can be found at the end of each area in GAO-18-371SP. Contact points for our Congressional Relations and Public Affairs offices may be found on the last page of this statement.
Appendix I: Open Congressional Actions, by Mission
In our 2011 to 2018 annual reports, we directed 100 actions to Congress, of which 58 remain open. Of the 58 open congressional actions, 11 are partially addressed and 47 are not addressed or new, as of March 2018. See table 14.

Why GAO Did This Study
The federal government faces a long-term, unsustainable fiscal path based on an imbalance between federal revenues and spending. While addressing this imbalance will require fiscal policy changes, in the near term opportunities exist in a number of areas to improve this situation, including where federal programs or activities are fragmented, overlapping, or duplicative.
To call attention to these opportunities, Congress included a provision in statute for GAO to identify and report on federal programs, agencies, offices, and initiatives—either within departments or government-wide—that have duplicative goals or activities.
GAO also identifies areas that are fragmented or overlapping and additional opportunities to achieve cost savings or enhance revenue collection. GAO's 2018 annual report is its eighth in this series (GAO-18-371SP).
This statement discusses
new areas identified in GAO's 2018 annual report;
the progress made in addressing actions GAO identified in its 2011 to 2017 reports; and
examples of open actions directed to Congress or executive branch agencies.
To identify what actions exist to address these issues, GAO reviewed and updated prior work, including recommendations for executive action and matters for congressional consideration.
What GAO Found
GAO's 2018 annual report identifies 68 new actions that Congress or executive branch agencies can take to improve the efficiency and effectiveness of government in 23 new program areas. For example:
The Department of Defense (DOD) could potentially save approximately $527 million over 5 years by minimizing unnecessary overlap and duplication in its U.S. distribution centers for troop support goods.
The Department of Energy may be able to reduce certain risks and save tens of billions of dollars by adopting alternative approaches to treat a portion of its low-activity radioactive waste at its Hanford Site.
The Department of Veterans Affairs could potentially save tens of millions of dollars when acquiring medical and surgical supplies by better adhering to supply chain practices of leading hospitals.
The Coast Guard should close its boat stations that provide unnecessarily duplicative search and rescue coverage to improve operations and potentially save millions of dollars.
Significant progress has been made in addressing many of the 724 actions that GAO identified from 2011 to 2017. As of March 2018, Congress and executive branch agencies have fully or partially addressed 551 (76 percent) of these actions. This has resulted in about $178 billion in financial benefits, of which $125 billion has been realized and at least an additional $53 billion is estimated to accrue. These estimates are based on a variety of sources that considered different time periods, assumptions, and methodologies. GAO estimates that tens of billions of additional dollars could be saved should Congress and executive branch agencies fully address the remaining 365 open actions, including the 68 new ones identified in 2018.
Further steps are needed to fully address these remaining actions. For example:
Congress and the Internal Revenue Service could realize hundreds of millions of dollars in savings and increased revenues by enhancing online services and improving efforts to prevent identity theft refund fraud.
Medicare could save $1 billion to $2 billion annually if Congress equalized the rates paid for certain health care services, which often vary depending on where the service is performed.
DOD could achieve billions of dollars in savings over the next several years by continuing to employ best management practices on its weapon systems acquisition programs.
Congress could consider modifying how Medicare pays certain cancer hospitals to achieve almost $500 million annually in program savings.
The Social Security Administration could avoid the loss of billions of dollars by preventing overpayments to beneficiaries of the Disability Insurance program and improper waivers of beneficiaries' overpayment debt.
Congress could consider modifying tobacco tax rates to eliminate significant tax differentials between similar products to address future revenue losses caused by manufacturers and consumers substituting tobacco products. Federal losses ranged from $2.6 billion to $3.7 billion between April 2009 and February 2014.
Background
Potential Impacts of Climate Change on Migration
According to international and U.S. government sources, climate change poses serious risks to many of the physical and ecological systems upon which society depends, although the exact details of these impacts are uncertain. Climate change may intensify slow-onset disasters, such as drought, crop failure, and sea level rise. Climate change is also increasing the frequency and intensity of extreme weather events, including sudden-onset disasters, such as floods, according to key scientific assessments. These effects of climate change may alter existing migration trends across the globe, according to the International Organization for Migration (IOM). (See appendix II for further discussion of climate change as a driver of migration in seven geographic regions.) For example, sea level rise, a slow-onset disaster, may result in the salinization of soil and drinking water, thereby undermining a country or community's ability to sustain livelihoods and maintain critical services, which could cause some people to migrate. Sudden-onset disasters may also contribute to migration as people flee natural disasters, in most cases leading to temporary displacement. For example, people may either voluntarily migrate, or be forced to migrate, to earn money needed to rebuild damaged homes after flooding, especially as extreme weather events increase in intensity and number. If unable or unwilling to migrate, people may find themselves trapped or choosing to stay in deteriorating conditions. Sources agree that the effects of climate change generally impact internal migration, while migration across international borders due to climate change is less common.
In deciding whether to migrate, people weigh multiple factors including economic and political factors, social or personal motives, or demographic pressures. The effects of climate change add another layer of complexity to this decision, but there is debate about the role climate change plays in migration. Figure 1 depicts how climate change may influence other factors that drive the decision to migrate or stay.
There are limitations to reliably estimating the number of people displaced by climate change because there are no reliable global estimates for those migrating due to slow-onset disasters, and estimates for those migrating due to sudden-onset disasters are based on limited data, according to IOM. The lack of reliable data is due in part to the multi-causal nature of migration. Further, IOM notes that forecasts for the number of environmental migrants by 2050 vary from 25 million to 1 billion. IOM and others have questioned the methodologies used to arrive at even these broad estimates.
Climate Change Impacts on Migration that May Affect National Security
Migration, potentially driven by climate change, may contribute to instability and result in national security challenges, according to some international organizations and national governments. For example, an influx of migrants to a city may put pressure on existing resources, resulting in tensions between new migrants and residents, or between the population and its government. The U.S. Global Change Research Program has also stated that migration, such as displacement resulting from extreme weather events, is a potential national security issue. The United Nations General Assembly has at different times deemed climate change a threat multiplier, as did DOD in 2014, because the effects of climate change could increase competition for resources, reduce government capacity, and threaten livelihoods, thereby causing instability and migration. Further, the U.S. intelligence community considers climate change to increase the risks of humanitarian disasters, conflict, and migration.
Identifying the cause of a conflict, however, is complicated, and experts debate the connections linking climate, migration, and national security. For example, IOM has reported that existing evidence on climate migration and instability must be considered with caution. Further, some studies stress that other factors can mitigate the effects of climate change on migration and stability, including governance and community resilience, as the World Bank has reported.
U.S. Government Agency Roles Related to Climate Change
State, USAID, and DOD are among the U.S. government agencies with a role in responding to issues related to climate change, including as a driver of migration.
State interacts with foreign governments and international organizations focused on climate change and migration primarily through the Bureau of Oceans and International Environmental and Scientific Affairs (State/OES) and the Bureau of Population, Refugees, and Migration (State/PRM).
USAID supports a range of development programs that help to mitigate the effects of climate change through the Bureaus for Economic Growth, Education and Environment; Democracy, Conflict and Humanitarian Assistance; Food Security; Asia; and Africa; and individual USAID missions. Additionally, USAID’s Offices of U.S. Foreign Disaster Assistance (USAID/OFDA) and Food for Peace (USAID/FFP) lead and coordinate the U.S. government’s emergency responses to sudden- and slow-onset disasters, and complex emergencies overseas.
DOD assists in the United States’ humanitarian response to sudden- onset disasters abroad through its six geographic combatant commands, with support from the Assistant Secretary of Defense for Special Operations and Low Intensity Conflict and the Joint Staff’s Office of Humanitarian Engagement.
Executive Branch Actions Related to Climate Change and Migration from Fiscal Years 2014 through 2018
Climate change as a driver of migration was not a focus of the policy documents we reviewed for either the current or previous administrations during fiscal years 2014 through 2018. Our review of executive actions, budget requests, and executive branch strategies that affected State, USAID, and DOD found only brief mentions of climate change as a driver of migration. None of the documents we reviewed reflected a priority for assessing or addressing climate change as a driver of migration, although these documents reflect a shift in administrations’ climate change priorities more generally.
Executive Actions
The previous administration issued two executive orders and a presidential memorandum related to climate change. These executive actions had a policy of improving climate preparedness and resilience, factoring climate-resilience considerations into agencies’ international development decisions, and creating forums for interagency coordination. In March 2017, the current administration issued a subsequent executive order revoking some of the previous executive actions related to climate change. See figure 2 for a timeline of these executive actions.
The previous administration issued three executive actions related to climate change, which included requirements focused on agencies’ considerations of the impacts of climate change and established forums for interagency coordination. The current administration issued an executive action related to energy independence and climate change.
Executive Order 13653: Preparing the United States for the Impacts of Climate Change. Executive Order 13653 stated that agencies—including State, USAID, and DOD—shall, among other things, develop, implement, and update comprehensive Agency Adaptation Plans that integrate consideration of climate change into agency operations and overall mission objectives. Executive Order 13653 also established the Council on Climate Preparedness and Resilience.
Executive Order 13677: Climate-Resilient International Development. Executive Order 13677 requires State, USAID, and other U.S. government agencies with direct international development programs and investments to incorporate climate-resilience considerations into decision making by assessing climate-related risks to agency strategies, and to adjust relevant strategies as appropriate, among other things. Executive Order 13677 also established the Working Group on Climate-Resilient International Development as part of the Council on Climate Preparedness and Resilience.
2016 Presidential Memorandum on Climate Change and National Security. The 2016 presidential memorandum required, among other things, that agencies, including State, USAID, and DOD, develop an agency-specific approach to address climate-related threats to national security. It also required agencies to develop implementation plans that would describe how they would identify the potential impact of climate change on human mobility, including migration and displacement, and the resulting impacts on national security, among other requirements, and stated that the effects of climate change can lead to population migration within and across international borders, spur crises, and amplify or accelerate conflict in countries or regions already facing instability. The 2016 memorandum also established the Climate and National Security Working Group.
Executive Order 13783, Promoting Energy Independence and Economic Growth. Executive Order 13783 revoked Executive Order 13653 and the 2016 presidential memorandum, among other things, as seen in figure 2.
Presidential Budget Requests for Fiscal Years 2017 and 2018
Priorities related to climate change shifted between the past two administrations as reflected in a recent budget request that reduced some climate change funding affecting U.S. foreign assistance.
2017 Presidential Budget Request. The previous administration stated in its fiscal year 2017 budget request that “the challenge of climate change will define the contours of this century more dramatically than any other” and that “it is imperative for the United States to couple action on climate change at home with leadership internationally.” The fiscal year 2017 budget request sought $1.3 billion in discretionary funding to advance the goals of the Global Climate Change Initiative, which was established in 2010 and aimed to promote resilient, low-emission development, and integrate climate change considerations into U.S. foreign assistance. The $1.3 billion in requested funding included $750 million in U.S. funding for the Green Climate Fund, a multilateral trust fund designed to foster resilient low-emission development in developing countries.
2018 Presidential Budget Request. The current administration, in its fiscal year 2018 budget request, did not include any funding for the Global Climate Change Initiative. In addition, the budget request proposed to "Eliminate the Global Climate Change Initiative and fulfill the President's pledge to cease payments to United Nations' (UN) climate change programs by eliminating U.S. funding related to the Green Climate Fund..."
Strategy Documents Affecting State, USAID, and DOD
Some strategies from the current and previous administrations that affect State, USAID, and DOD, among other agencies, reflect a shift in priorities related to climate change. For example, the previous administration cited climate change as a “top strategic risk” in its 2015 National Security Strategy and stated that climate change is an urgent and growing threat to U.S. national security, contributing to increased natural disasters, refugee flows, and conflicts over basic resources like food and water. The current administration does not discuss climate change in its 2017 National Security Strategy. Additionally, State and USAID have a Joint Strategic Plan to help the agencies achieve the objectives of the National Security Strategy. The previous State-USAID Joint Strategic Plan included a strategic goal on “promoting the transition to a low-emission, climate-resilient world” that proposed leading international actions to combat climate change. The current State-USAID Joint Strategic Plan does not have a climate change goal.
State, USAID, and DOD Have Discussed the Potential Effects of Climate Change on Global Migration, but State Does Not Provide Clear Risk Assessment Guidance
State, USAID, and DOD were required by executive orders to assess climate change-related risks to their missions and, for State and USAID, to their strategies, among other things. In response to Executive Order 13653, which has since been revoked, the agencies completed adaptation plans that integrated considerations of climate change into agency operations and overall mission objectives. In response to Executive Order 13677, which has not been revoked, State and USAID developed processes for climate change risk assessments for their country and regional planning documents. Although these executive orders did not require a specific assessment of climate change as a driver of migration, all three agencies have discussed the effects of climate change on migration in their adaptation plans and risk assessments. However, State lacks clear guidance on its process for assessing climate change-related risks to its integrated country strategies.
Agencies Discussed the Effects of Climate Change on Migration in Their 2014 Adaptation Plans
State, USAID, and DOD each completed adaptation plans in 2014 that included limited discussions of migration as one potential effect of climate change. Executive Order 13653 directed the agencies to develop or continue to develop, implement, and update comprehensive Agency Adaptation Plans that integrate consideration of climate change into agency operations and overall mission objectives. Each adaptation plan was to include, among other things, a description of how the agency would consider the need to improve climate adaptation and resilience.
State. In its 2014 adaptation plan, State included a brief discussion of climate change as one of multiple factors that potentially will drive migration and impact its mission. State reported that the specific impacts of climate change on the ability of the department to promote peace and stability in regions of vital interest to the United States were unknown. For example, according to the plan, an increase in heavy precipitation events around the world could damage the electric grid and transportation and energy water infrastructure, upon which State depends, making it difficult to maintain operations and diplomatic relations. In its plan, State reported that climate change impacts may threaten international peace, civil stability, and economic growth through aggravating existing problems related to poverty and environmental degradation. Further, environmental and poverty- related issues and regional instability could stress relationships with some foreign governments. However, the plan noted that specific impacts of climate change on conflict, migration, terrorism, and complex disasters were still unknown.
USAID. In its 2014 adaptation plan, USAID included a brief discussion of migration as one potential effect of climate change that could also impact security. USAID stated that the impact of climate change on its programs and operations, if left unaddressed, could compromise the agency’s ability to achieve its mission. Further, USAID’s plan referred to increased migration as a potential risk of climate change. Flooding and other extreme climate events can result in increased migration, among other impacts, that could affect existing and planned USAID programming. In particular, programs in areas like agriculture and food security, global health, water and sanitation, infrastructure, and disaster readiness and humanitarian response are vulnerable to climate change, according to USAID. In the infrastructure area, climate change may necessitate new protective measures for coastal homes and infrastructure, and in some cases even mass evacuations or permanent migration. USAID stated that climate change could further reduce or alter the distribution of already limited resources like food and water, or force temporary or permanent migration of communities. According to the plan, in areas with high risk factors for conflict, climate change stresses can aggravate tensions and contribute to conflict.
DOD. In its 2014 adaptation roadmap, DOD included a brief discussion of migration as one of multiple potential effects of climate change that could impact national security. DOD referred to climate change as a threat multiplier that can aggravate other risks around the world, with migration being one effect that could increase requests for DOD to provide assistance. The roadmap stated that as climate change affects the availability of food and water, human migration, and competition for natural resources, the department’s unique capability to provide logistical, material, and security assistance on a massive scale or in rapid fashion may be called upon with increasing frequency. Furthermore, DOD stated that the impacts of climate change may cause instability in other countries by, among other things, impairing access to food and water, damaging infrastructure, uprooting and displacing large numbers of people, and compelling mass migration. These developments, according to the department, could undermine already fragile governments that are unable to respond effectively, or challenge currently stable governments, as well as increase competition and tension between countries vying for limited resources.
Few of the State and USAID Risk Assessments We Reviewed Identified the Nexus of Climate Change and Migration as a Risk
In response to Executive Order 13677, State and USAID developed processes for climate change risk assessments for their country and regional planning documents. Though these assessments are not specific to migration, a few of the assessments identified the nexus of climate change and migration.
State. State required climate change risk assessments for all new integrated country strategies drafted in 2016 or later. We reviewed 10 integrated country strategies from the two regions that were the first to implement the climate change risk assessment requirement—Africa, and East Asia and the Pacific. All 10 of the strategies included climate change risk assessments, one of which—Cambodia—identified migration as a risk for the country. The Cambodia strategy states that internal migration due to climate change hinders access to health care and the prevention of infectious diseases like malaria. We also reviewed 10 strategies from State's functional and regional bureaus for assessments of climate-related risks, including 3 functional bureau strategies (State/PRM, State/OES, and State's Bureau of International Organization Affairs) and 7 regional bureau strategies. All of the functional bureau strategies we reviewed identified climate change as a risk, and State/PRM cited the impact of climate change on migration. Of the regional bureau strategies we reviewed, we found that one, the Bureau for East Asian and Pacific Affairs, identified climate change as a driver of migration as a challenge or risk in its region. For example, the strategy states that climate change is becoming increasingly disruptive, potentially increasing migration due to rising sea levels. None of the other six regional bureau strategies we reviewed identified the nexus of climate change and migration as a risk or challenge. However, five regional bureaus identified climate change as a risk or challenge and one identified migration as a risk or challenge.
USAID. USAID also requires the integration of climate risk management into all country or regional development cooperation strategies drafted since October 1, 2015. Missions must document in a climate change appendix to the strategy any climate risks they identified and how they considered climate change in their strategy. As of August 2018, USAID had completed five country or regional development cooperation strategy updates initiated since October 1, 2015—Uganda, Tunisia, East Africa, Sri Lanka, and Zimbabwe—and all five included the required appendix. Of the five updated strategies, three—Uganda, Tunisia, and East Africa—discuss the indirect effect of climate change on migration, among other issues. For example, Uganda’s 2016-2021 country strategy states that increased frequency and duration of droughts is likely to be the most significant climate‐related change in Uganda. The strategy also notes that droughts have affected, and will continue to affect, water resources, hydroelectricity production, and agriculture, among other sectors. As agriculture, forestry, and fisheries decline in Uganda, the strategy asserts that people will migrate to urban areas, leading to the formation of slums. We also reviewed USAID’s nine regional development cooperation strategies, one of which—East Africa—had been updated since the requirement to include climate risk management. Of the other eight strategies that have yet to be updated, seven identified climate change as a challenge or risk and three identified climate change as a driver of migration as a challenge or risk. For example, the Southern Africa regional development cooperation strategy states that water scarcity, natural disasters, and other climate change related events will most likely increase migration throughout the region. Additionally, the Asia regional development cooperation strategy discusses the risks of climate change in urban areas. In Asia, the number of migrants seeking economic opportunities in urban centers is likely to increase. According to the strategy, migrants are moving into hazard-prone areas located along coastlines, flood plains, and other low-lying areas in many Asian primary and secondary cities—areas that experts predict will experience more frequent and intense storm surges, floods, and coastal erosion as a result of climate change.
State Lacks Clear Guidance on its Process for Assessing Climate Change-Related Risks
The requirement in Executive Order 13677 to assess climate change- related risks to agency strategies remains unchanged; however, State now lacks clear guidance on its process for assessing climate change- related risks to its integrated country strategies. Specifically, State’s 2016 guidance for developing integrated country strategies stated that all missions should assess the risk of climate change on their strategies’ goals and objectives and included reference to the climate risk screening tool—a method that missions could use to assess climate change risks. State issued new guidance to its missions in 2018, but this guidance does not include information on the process for assessing climate change-related risks to agency strategies. According to State officials, the 2018 guidance for integrated country strategies does not reference climate change risk assessments because, in September 2017, State decided that the strategies should not single out climate change risks in a separate appendix. State officials said this decision resulted, in part, from the new administration’s shift in priorities on climate change. Officials also said that this decision reflects a new approach to risk management by State and that the missions could choose to include climate change and other potential risks in the general risk discussion section of their strategies. Officials from State’s Office of U.S. Foreign Assistance Resources said that it is now up to each mission to decide whether a strategic objective may have a climate challenge. However, those missions that choose to include an assessment of climate change risks are not provided guidance on the process for doing so and there is no reference to the climate risk screening tool—or to climate change at all—in the 2018 guidance.
Executive Order 13677 directed State to incorporate climate-resilience considerations into decision making by assessing climate-related risks to agency strategies, among other things. Subsequently, a State cable from September 2016 further explained that State would implement the executive order’s requirement by screening for climate risks as part of the process for drafting all new integrated country strategies. Additionally, the Standards for Internal Control in the Federal Government state that documentation is a necessary part of an effective internal control system. If management determines that a principle is not relevant, management must support that determination with documentation that includes the rationale of how, in the absence of that principle, the associated component could be designed, implemented, and operated effectively.
Because State lacks clear guidance on its process for assessing climate change-related risks to its integrated country strategies, it is less likely that the current round of strategies will include the assessment of climate- related risks. It is also possible that those missions that choose to conduct climate change risk assessments will not do so in a consistent manner. Such assessments might identify climate change as a driver of migration, as at least one previous assessment did under the 2016 guidance. Thus, without clear guidance, missions may not examine climate change as a risk to their strategic objectives and could miss opportunities to improve the climate resilience of foreign assistance activities.
State, USAID, and DOD Have Been Involved in Various Climate Change-Related Activities, but None Were Focused Specifically on Migration, and Their Participation Has Declined
For fiscal years 2014 through 2017, State, USAID, and DOD had some activities that could potentially address climate change as a driver of migration, although none of these activities specifically focused on the issue. For example, USAID has climate change adaptation activities, but to date migration has not been a focus of this programming. With the shift in priorities related to climate change in fiscal year 2017, agencies have reduced some of these activities.
State Activities
State’s offices that are focused on the issues of climate change (State/OES) and migration (State/PRM) have participated in multilateral activities related to climate change as a driver of migration and funded adaptation and other activities related to the issue. State officials said that the agency does not, however, have any activities that specifically address migration due to climate change or environmental factors.
Multilateral Efforts
State has participated in multilateral activities related to climate change and migration. With the shift in priorities related to climate change in fiscal year 2017, the United States has disengaged from some of these multilateral activities (see table 1).
In addition to State’s participation in the multilateral activities described in table 2, State has provided funding for activities related to climate change and capacity building that address natural disasters. These activities may involve efforts potentially related to migration. For example, according to State:
State provided about $2 million per year, between fiscal years 2014 and 2016, to the Intergovernmental Panel on Climate Change, which analyzed the impacts of climate change on migration in its most recent assessment report.
State/PRM provided about $4 million between fiscal years 2014 and 2018 for IOM's Migrants in Countries in Crisis Initiative, which provides guidelines to protect migrants in countries experiencing conflict or natural disasters. IOM provides training to countries on these guidelines. State/PRM officials said that this initiative is not specifically related to climate change and does not focus on specific types of disasters but does mention sudden-onset disasters. Officials also said that IOM tries to promote a climate change perspective in its trainings.
State/OES provided about $78 million in adaptation funding from the Global Climate Change Initiative to eight projects during fiscal years 2014 through 2017. (See appendix III for a description of all eight projects.) State/OES officials said that these projects help countries prepare for the impacts of climate change, potentially reducing the pressure to migrate. However, to these officials’ knowledge, none of these projects directly supported activities related to migration. For example, State/OES provided a $4 million grant to the National Adaptation Plans Global Network. This network focuses on increasing the capacity of governments to identify and assess climate risks, integrate these risks in planning, develop a pipeline of projects to address these risks, identify and secure funding for projects, and track progress toward resilience targets. Adaptation activities occurred in over 35 countries.
With the shift in priorities related to climate change in fiscal year 2017, State discontinued some of these efforts. For example, funding for the Global Climate Change Initiative was not included in the President’s budget request for fiscal year 2018. State/OES officials said that the agency does not plan to fund additional adaptation activities and has not requested additional funding for the activities. According to a State official, PRM had been in discussion with IOM to develop a project proposal that would have assisted the governments of Small Island Developing States in adapting their migration policies to account for challenges and opportunities associated with environmental degradation, ecosystem loss, climate change impacts, and natural disasters. State/PRM stopped further development of the proposal following the change in administrations. Additionally, according to a State official, the department made some efforts at the end of the previous administration to develop a formal position on the topic of climate change as a driver of migration. For example, State drafted an internal document to help clarify its role in responding to the humanitarian aspects of sudden-onset and slow-onset climate events. This initial work stopped under the current administration.
USAID Activities
USAID officials said that, with respect to the agency’s climate-related programming, its climate change adaptation programming was the most likely to include activities related to migration or displacement, although a broad swath of USAID development programming has the potential to build host country resilience. Officials stated that, to date, migration has not been a primary motivation for the agency’s climate-related or disaster assistance programming. However, officials said that, in a humanitarian crisis or under some economic conditions, development programming can reduce displacement or the pressure to migrate—such as by fostering greater resilience to drought or other adverse conditions—and that this is also true of climate-related programming. USAID also provides humanitarian assistance in response to natural disasters that displace people. Officials said that USAID recognizes the links between displacement and natural disasters, but that the agency does not have specific programs linking disaster assistance, migration, and climate change.
Adaptation Efforts
USAID identified about 250 activities that received adaptation funding from the Global Climate Change Initiative during fiscal years 2014 through 2016. Our analysis of the descriptions of these activities determined that none directly mentioned any efforts specifically related to migration. Officials emphasized that the connection between climate change and migration tends to be indirect and shaped by other more immediate factors. USAID’s data on activities that received adaptation funding identified 38 beneficiary countries, as well as activities described generally as implemented at the regional or global level. For activities where USAID’s data identified a specific region, most activities were located in Africa followed by Asia and Latin America and the Caribbean.
Examples of the types of activities that received adaptation funding from the Global Climate Change Initiative during fiscal years 2014 through 2016 include:
The Mali Climate Change Adaptation Activity, which aims to build resilience to current climate variability and increase resilience to longer-term climate change effects. This activity is also working to strengthen the capacity of Mali’s meteorological agency to provide improved climate information as well as to incorporate climate considerations into local-level planning. The total estimated cost is about $13 million over 5 years.
The activity for Climate-Resilient Ecosystems and Livelihoods, which ended in September 2018, aimed to increase Bangladesh’s resilience to natural hazards by working with community-based organizations, government ministries, and technical agencies. This activity provided technical assistance to the Government of Bangladesh and local communities to improve ecosystem conservation and resilience capacity. The total estimated cost was about $33 million in funding over 6 years.
The activity for Pastoralist Areas Resilience Improvement through Market Expansion, which aims to support pastoralists in Ethiopia via expansion of markets and long-term behavior change (see fig. 3). USAID officials cited this activity as an example of adaptation efforts that indirectly address the issue of climate change as a driver of migration. The activity has three interrelated objectives: increasing household incomes, enhancing resilience, and bolstering adaptive capacity to climate change among pastoral people in Ethiopia. An evaluation of the activity found that migration is a coping strategy for dealing with climate shocks, although participants said that drought is becoming more frequent, placing a severe strain on traditional coping mechanisms, such as migration and selling cattle, and that permanent migration is not a preferred strategy. The total estimated cost is about $60 million in funding over 6 years.
With the shift in priorities related to climate change, funding for USAID’s climate change adaptation activities has decreased. Missions may continue to fund their adaptation activities with discretionary funds or other earmarked, sector funding, provided the activities further the funding source’s objective, according to USAID. For example, in some cases, missions are using Water sector funding to continue some of their adaptation work. USAID also said that among the agency’s goals are to increase the resilience of USAID partner countries to recurrent crises, including climate variability and change.
Humanitarian Aid and Disaster Assistance Efforts
In addition to USAID’s climate change adaptation programming, USAID/OFDA and USAID/FFP provide emergency humanitarian assistance to people affected by sudden-onset disasters—such as hurricanes and floods—and slow-onset and extended disasters, including droughts and conflicts. Some of this assistance helps people who have been displaced by disaster. USAID officials stated that although disasters cause mainly temporary displacement, the relationship among humanitarian assistance, climate change, and migration is very complex and depends on both climatic and non-climatic factors. USAID/OFDA responded to 267 disasters from fiscal year 2014 through June 2018, according to agency data. For example, USAID/OFDA responded to the effects of Hurricane Matthew in Haiti in October 2016, as seen in figure 4, including helping temporarily displaced people.
DOD Activities
DOD assists in the U.S. government response to overseas disasters, including helping people displaced by such disasters, regardless of the cause of the disaster. These efforts are not specific to climate change as a driver of migration. For example, officials from DOD’s geographic combatant commands said that, to the extent they address climate change, migration is not a focus of those efforts and they view migration as caused by security and economic issues.
Between fiscal years 2014 and 2018, Congress appropriated to DOD between $103 million and $130 million per year for Overseas Humanitarian, Disaster, and Civic Aid. Officials said that the geographic combatant commands use most of this funding for steady state humanitarian assistance related to health, education, basic infrastructure, and disaster preparedness, with a smaller amount set aside for immediate disaster assistance, although that amount varies based on emergency requirements. DOD officials said that they have not seen any changes to this funding or associated activities with the change of administrations in fiscal year 2017. DOD officials we spoke with also emphasized that USAID/OFDA is the lead agency for the U.S. government's response to disasters overseas. USAID/OFDA formally requested DOD support for about 10 percent of the foreign disaster assistance it provided, according to USAID data for fiscal year 2014 through June 2018 and DOD officials. DOD assistance is typically provided for the largest, most complex disasters, according to agency officials.
According to a July 2015 assessment conducted by the geographic combatant commands, while their activities vary, each command works with partner nations to increase their abilities to reduce the risks and effects from environmental impacts and climate-related events, including severe weather and other hazards. For example, in the report, U.S. Southern Command stated that it had requested funding to pre-position assets for when a severe storm threatens Haiti to be able to respond immediately to a potential disaster. U.S. Southern Command officials said that they work with partner nations to encourage residents experiencing extreme weather to remain where they are because it is easier to provide help to people who stay in one place. Officials from U.S. Southern Command and U.S. Africa Command also said that the major factors driving migration in their regions are security and economic issues.
Interagency Forums
State, USAID, and DOD have participated in interagency forums regarding climate change, which may have addressed its effects on migration. With changes to priorities regarding climate change in fiscal year 2017, these forums have been disbanded or are not meeting.
The Council on Climate Preparedness and Resilience. The Council on Climate Preparedness and Resilience, of which State, USAID, and DOD were members, was established to facilitate the integration of climate science in policies and planning of government agencies, including by promoting the development of climate change related information, data, and tools, among other things. Additionally, the council was to develop, recommend, and coordinate interagency efforts on priority federal government actions related to climate preparedness and resilience. According to State officials, the council began working with the National Security Council and other agencies to facilitate greater interagency cooperation on adaptation. In addition, a task force on the council was discussing the federal role in addressing displacement related to climate change. The council was disbanded when Executive Order 13783 revoked Executive Order 13653, which had established the council.
The Working Group on Climate-Resilient International Development. The Working Group on Climate-Resilient International Development, of which State and USAID were members, was established by Executive Order 13677 and placed under the Council on Climate Preparedness and Resilience. The working group's mission includes developing guidelines for integrating considerations of climate-change risks and climate resilience into agency strategies, plans, programs, projects, investments, and related funding decisions, among other things. Additionally, the working group was tasked with facilitating the exchange of knowledge and lessons learned in assessing climate risks to agency strategies. USAID officials said that the working group had not discussed climate change as a driver of migration. While the working group has not been formally disbanded, it has not met since at least November 2017, according to USAID.
The Climate and National Security Working Group. The Climate and National Security Working Group, of which State, USAID, and DOD were members, was established by the 2016 presidential memorandum. The chairs of the working group were to coordinate the development of a strategic approach to identify, assess, and share information on current and projected climate-related impacts on national security interests and to inform the development of national security doctrine, policies, and plans, among other things. According to the memorandum, the working group was to provide a venue for enhancing the understanding of the links between climate change- related impacts and national security interests and for discussing opportunities for climate mitigation and adaptation activities to address national security issues. This working group was disbanded when Executive Order 13783 revoked the 2016 presidential memorandum, which had established the working group.
Conclusions
State, USAID, and DOD assessments and activities have not focused specifically on the nexus of climate change and migration. State did identify migration as a risk of climate change in at least one of its climate change risk assessments for the department’s country strategies. However, State now lacks clear guidance on its process for assessing climate change-related risks to its integrated country strategies. State’s current guidance for these country strategies no longer mentions a climate change risk assessment and does not provide missions with information about the climate risk screening tool that can be used to conduct such an assessment. As such, missions are less likely to examine climate change as a risk to their strategic objectives, or to do so in a consistent manner, and thus may not have the information they would need to identify migration as a risk of climate change. By clearly documenting and providing guidance on how to assess the risk of climate change, State would ensure that the department examines the potential risks of climate change on its foreign assistance activities.
Recommendation for Executive Action
We are making the following recommendation to State: The Secretary of State should ensure that the Director of the Office of U.S. Foreign Assistance Resources provides missions with guidance that clearly documents the department’s process for climate change risk assessments for integrated country strategies. (Recommendation 1)
Agency Comments
We provided a draft of this product to State, USAID, and DOD for review and comment. State provided written comments, which we have reprinted in appendix IV. In its comments, State did not oppose the recommendation and noted that the agency will update its integrated country strategy guidance by June 30, 2019, to inform missions that they have the option to include an annex on climate resilience, as well as other topics. However, State also indicated that the agency will begin working with stakeholders to consider whether to recommend that the Secretary of State ask the President to rescind Executive Order 13677: Climate-Resilient International Development.
USAID also provided written comments, which we have reprinted in appendix V. In its letter, USAID provided some additional information about its programs and its proposed transformation effort. USAID and DOD provided technical comments, which we incorporated as appropriate.
We are sending copies of this report to the appropriate congressional requesters, the Secretary of State, the Administrator of USAID, and the Secretary of Defense. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact David Gootnick at (202) 512-3149 or [email protected], or Brian J. Lepore at (202) 512-4523 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.
Appendix I: Objectives, Scope, and Methodology
This report (1) describes executive branch actions related to climate change and migration from fiscal years 2014 through 2018; (2) examines the extent to which the Department of State (State), the U.S. Agency for International Development (USAID), and the Department of Defense (DOD) have discussed the potential effects of climate change on migration in their plans and risk assessments; and (3) describes State, USAID, and DOD activities, if any, that are related to climate change and global migration. We chose fiscal years 2014 through 2018 as our time frame based on our review of recent executive orders related to climate change. We selected State, USAID, and DOD because the agencies’ missions of diplomacy, development, and defense provide the foundation for promoting and protecting U.S. interests abroad.
To describe executive branch actions related to climate change and migration from fiscal years 2014 through 2018, we reviewed documents that reflect priorities of the previous and current administrations. Specifically, we reviewed budget requests and enacted appropriations for fiscal years 2014 through 2018 for funding priorities related to climate change and U.S. foreign assistance. In addition, we reviewed executive actions and executive branch strategies that applied to State, USAID, and DOD during fiscal years 2014 through 2018 for executive and national security priorities related to climate change. For example, we reviewed the current and previous national security strategies.

To examine the extent to which the agencies discussed the potential effects of climate change on migration in their plans and risk assessments, we reviewed the agencies' 2014 adaptation plans and selected State and USAID strategies. For State, we reviewed 10 integrated country strategies from the two regions that were the first to implement the climate change risk assessment requirement, as well as three functional bureau strategies and seven regional bureau strategies. For USAID, we examined the five country and regional strategies that were required to include a climate risk assessment at the time of our review: Uganda, Tunisia, East Africa, Sri Lanka, and Zimbabwe. We also reviewed all nine USAID regional strategies. For both State and USAID, we reviewed the selected strategies by searching for information related to migration and climate change. To determine whether State clearly documents the department's current climate risk assessment process for integrated country strategies, we compared State's 2018 guidance for developing integrated country strategies with standards related to documentation in Standards for Internal Control in the Federal Government and previous State guidance issued in 2016, which was created in response to Executive Order 13677's requirements to assess climate change risks to strategies, among other things.

To describe USAID activities related to climate change and migration, we asked the agency to identify its activities related to these issues. The agency then provided us with data for about 250 activities from its annual operational plans for fiscal years 2014 through 2016, the 3 years during the period we reviewed in which it received adaptation funding. USAID identified these activities based on whether the agency had tagged them in its plans as having an "adaptation key issue." USAID excluded projects that had planned attributions to the adaptation key issue of less than $250,000 in a given fiscal year, as well as certain other activities such as those that focused on project support. We then conducted an automated review of the activity description fields provided by USAID for terms related to migration and other descriptive information such as locations of activities. Because no USAID adaptation activities specifically mentioned migration, for the purposes of this report we chose illustrative examples to provide context for the types of activities the agency has funded.
DOD officials we met with did not identify any specific activities related to climate change as a driver of migration. DOD officials from the Assistant Secretary of Defense for Special Operations and Low Intensity Conflict and the geographic combatant commands generally discussed DOD activities related to humanitarian assistance and disaster response as most relevant to our inquiry. Because DOD works in coordination with USAID’s Office of U.S. Foreign Disaster Assistance on disaster assistance we also reviewed USAID data on its disaster response activities during this period.
We determined that the USAID and State adaptation project data and USAID disaster assistance data were sufficiently reliable for the purposes of describing these efforts.
In addition, we interviewed officials from State, USAID, and DOD to obtain information on whether changes in government priorities related to climate change affected their activities.
We conducted this performance audit from October 2017 to January 2019 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Regional Focus on Climate Change as a Driver of Global Migration
This appendix provides a review by region of observed and projected climate change effects, migration trends, and challenges in stability and security. Multiple sources we used for this overview make a connection between climate change and such events as rising sea levels, higher temperatures, and an increase in the number and severity of extreme weather events. The following regions are discussed: Asia, South America, the Arctic, Sub-Saharan Africa, the Middle East and North Africa, Oceania, and Central America and the Caribbean. We have provided an overview for each region and a focus on one country or territory in the region. To develop these overviews, we reviewed documents from international and regional organizations, including a variety of organizations within the United Nations, the World Bank, regional development banks, the European Union, and others. We also reviewed relevant public documents from U.S. government agencies, including the Department of Defense, the U.S. Agency for International Development (USAID), and the United States Institute of Peace (USIP), as well as academic sources, research institutions, and documents from the relevant country's national government.

For each featured country, we present selected indicators. Economic conditions may be a factor for people deciding whether to migrate or stay in their country of origin.
Remittances as Percent of GDP: The money international migrants transfer to recipients in their country of origin, expressed as a percentage of the origin country’s GDP. Sources agree that remittances support resilience in origin countries.
Agriculture, Fishing, Forestry as Percent of GDP: A measure of the value added to an economy from the agricultural sector, which includes forestry, hunting, fishing, and the cultivation of crops and livestock, expressed as a percentage of the country’s GDP. Countries that depend on the agricultural sector may be vulnerable to the effects of climate change, according to the World Bank.
Percent of Population in Cities: The population living in areas classified as urban according to criteria each country uses. Today, more than half of the global population lives in cities. Migration, in some cases due to climate change, is an important driver of urban growth, according to IOM. Cities are also expected to face increasing risks from rising sea levels, flooding, storms, and other climate change effects.
Net Migration Rate: A measure of the number of people leaving a country compared to the number of people entering a country, expressed as a number per 1,000 people.
The effects of climate change in Asia may impact migration and stability, according to the Intergovernmental Panel on Climate Change (IPCC) and the Asian Development Bank (ADB). In coastal areas, effects of climate change include rising sea levels, storm surges, and other hazards. Receding glaciers in mountainous areas may also cause flooding, and monsoons in a warmer climate may be more severe. Heat extremes and more rainfall are a particular concern in Southeast Asia. Changes in precipitation and drought in Asia may exacerbate food security challenges and contribute to people deciding to migrate. Increases in migration, partly stemming from the effects of climate change in surrounding rural areas, may put pressure on existing urban infrastructure. Rural migrants may settle in informal communities on the outskirts of cities, areas that have little resilience to natural disasters. Although the World Bank and others agree that climate change largely causes internal migration, some evidence shows that the impact of climate change contributes to cross-border migration in Asia. Large numbers of migrants, along with other destabilizing factors, may contribute to instability and conflict, according to the IPCC. The effects of climate change on livelihoods, for example, could increase migration, strain governance, and contribute to conflict as a result. Bangladesh is one example where decreased yields from agriculture and fisheries have contributed to migration to the country's coastal cities, which face their own climate change challenges.
Bangladesh: Climate Change, Migration, Stability and Security
Total Population: 164,700,000
Observed and Projected Effects of Climate Change

Bangladesh's high population density and geography make the country susceptible to the effects of climate change, according to the World Bank and others. Bangladesh's coasts and river banks are vulnerable to sudden-onset events such as tropical cyclones and flooding. Cyclone Aila in 2009, for example, caused widespread flooding in the southern coastal areas of Bangladesh and impacted millions of people. The storm washed away embankments that protected coastlines and caused severe damage to crops and livelihoods. Tropical Cyclone Mora in 2017 damaged thousands of homes and displaced an estimated 200,000 people. Increases in the number and intensity of tropical cyclones, which some predict will occur in a warmer climate, could have severe impacts on homes, livelihoods, and food security. Bangladesh also experiences many slow-onset climate change events, such as rising sea levels and increasingly severe droughts, which are projected to intensify with climate change. Bangladesh would lose an estimated 17.5 percent of its land if the sea level rose 1 meter, as the International Organization for Migration (IOM) has reported. Projected changes in precipitation levels could cause drought and food insecurity in the northwest, and salt-water intrusion could reduce crop yields in the southwest.
Net Migration Rate per 1,000 people: -3.2
Map of Bangladesh
Migration Trends

Migration is a common adaptation strategy to climate change in Bangladesh, according to the ADB. For example, some farmers have adapted to salt water intrusion and destroyed crops by switching to salt-tolerant rice production or shrimp cultivation. Others have migrated, often to Bangladesh's cities, to find work less dependent on agriculture. Many new migrants to Bangladesh's cities live in informal settlements that lack the resilience to withstand sudden-onset climate events. The capital city, Dhaka, is a common destination for migrants displaced by salt-water intrusion, flooding, and river erosion, according to IOM. Dhaka, like many coastal cities in South Asia, is located on a low-lying riverbank and faces increasing risks of extreme flooding. For example, past floods in Dhaka have destroyed homes and contaminated drinking water, creating significant health hazards. In some cases, individuals migrate to cities temporarily for work and return home after the agricultural off season ends. Bangladeshis also provide a significant number of labor migrants to the Gulf States and Malaysia. Remittances from international migrants represent 5.4 percent of the country's GDP and may help to support resilience to climate change, according to IOM and others. These migration trends may intensify in the future. One study estimates 9.6 million people will migrate from 2011 to 2050 due to the effects of climate change.
Challenges in Stability and Security

Migration due to climate change is cited as a potential destabilizing factor in Bangladesh by ADB and others. The low-income population in Bangladesh is dependent on agriculture, making the effects of climate change—including impacts on food security—a particular concern. By 2030, these effects on livelihoods and food security could increase the poverty rate in Bangladesh by 15 percent, as the IPCC has reported. Given the proximity of Bangladesh to India, some individuals may also choose to cross the border. Increased migration to India is a potential concern, according to some sources, as India may not have the resources to absorb large numbers of Bangladeshi migrants.
The CNA Corporation, National Security and the Threat of Climate Change (Alexandria, VA: 2007); and Population Council, “Effects of Future Climate Change on Cross-Border Migration in North Africa and India,” Population and Development Review, Vol. 36, No. 2 (2010).
The effects of climate change in South America vary by region and may impact migration and stability, according to the Intergovernmental Panel on Climate Change (IPCC) and the International Organization for Migration (IOM). On the coast, risks include sea level rise, depletion of fisheries, and coral reef bleaching, according to IOM. Coastal cities with growing populations are particularly vulnerable. Melting glaciers in the Andean mountain region and increased rainfall are expected to change the distribution of water resources and impact food production as global demand for food grows. Desertification and land degradation, complicated by the effects of climate change, are contributing to migration from rural areas to cities in South America, as IOM has reported. An estimated 77 percent of people living in high-risk areas in South America are located in cities, according to IOM. IOM predicts that as these people feel the effects of sea level rise and water scarcity, they will migrate from the large coastal cities to smaller urban areas. While South America has experienced economic growth in the last decade, poverty rates remain high, and the effects of climate change, including possible migration, may exacerbate inequalities, putting further pressure on cities to meet the needs of their populations. Water insecurity in particular is expected to disproportionately impact low-income communities, according to the IPCC. For example, in Brazil, drought in the northeast may increase migration to southern cities that are facing rising sea levels and landslides, with consequences for food, water, and energy security.
Brazil: Climate Change, Migration, Stability and Security
Human Development Index: High
Gross Domestic Product (GDP) per Capita: $14,103
Remittances as % of GDP: 0.1
Agriculture, Fishing, Forestry as % of GDP: 4.6
% Population in Cities: 86.3
Net Migration Rate per 1,000 people: 0.0
Observed and Projected Effects of Climate Change

Brazil's cities and rural regions may encounter a range of climate change effects, according to the IPCC and IOM. Rural areas, particularly in the northeast, could experience significant impacts from climate change, partly due to poverty rates and historical vulnerability to drought. Higher temperatures are expected to affect crop yields and household incomes, especially for low-income communities. In northeastern Brazil, temperatures are expected to increase and rainfall to decrease. The northeast could see a 22 percent reduction in precipitation by 2100, according to IPCC projections. Brazil's coastal areas, including cities, are also vulnerable to rising sea levels, heavy precipitation, flooding, and landslides. The vast majority of Brazil's population, about 86 percent, lives in cities, many in coastal areas, according to the United Nations Development Program. As their populations have grown, urban areas have expanded outward. This urban growth in Brazil's megacities has caused further increases in temperature, rainfall, and landslides. For example, current levels of urbanization in the metropolitan area of Sao Paulo may already be responsible for the 2°C warming observed in the city over the last 50 years, as well as the rise in extreme rainfall, according to the IPCC. The metropolitan area is expected to expand its area by 38 percent by 2030. Multiple studies of the effects of urbanization on Sao Paulo's climate suggest higher temperatures affect convective rainfall, which occurs when warm air rises, condenses to form clouds, and produces extreme rain. Other concerns are the depletion of coral reefs and mangrove forests on Brazil's coastlines and decreases in biodiversity.
Migration Trends

Migration from drought in northeastern Brazil to cities has increased urban populations, putting more people at risk of displacement from flooding and landslides. Migration from the northeast is a historical trend in Brazil, as economic migrants have sought seasonal jobs in more productive agricultural regions or moved permanently to southern cities. Projected declines in rainfall have led some to predict further increases in migration in northeastern Brazil, as the IPCC has reported. However, remittances from family members who leave Brazil's northeast support resilience for those who remain and may help to reduce migration. Environmental factors already contribute to migration to cities, including to favelas, informal settlements often constructed in hilly areas and floodplains outside of Brazilian cities. A significant number of the favela residents in Rio de Janeiro are migrants from northeastern Brazil, according to IOM. These new migrants may be at risk of further displacement if heavy rainfall, flooding, and other climate change effects destroy their vulnerable homes. For example, heavy rainfall in April 2010 resulted in landslides across Rio de Janeiro, displacing an estimated 5,000 people, according to a report from the World Bank. Brazil is also a destination for migrants from other countries in the region. Migrants from Venezuela searching for jobs and improved food security have come in growing numbers in recent years, as have migrants from Haiti fleeing a series of natural disasters, as IOM has reported.
Challenges in Stability and Security

Although Brazil ranks 106th out of 178 countries on the Fragile States Index, the effects of climate change may contribute to challenges with water, food, and energy access, according to the IPCC. Decreased rainfall could decrease agricultural productivity, with potential health impacts for poor populations. These conditions are of particular concern in northeastern Brazil, as extreme weather and low crop yields are associated with more violence, according to the IPCC. Brazil also receives about 70 percent of its electricity from hydroelectric power, according to the United Nations Environment Programme, and recent droughts caused power cuts across many major cities. Although not linked to the effects of climate change, absorbing a growing number of migrants fleeing political and economic instability in Venezuela may impact the broader region, according to the U.S. Department of Defense and the National Intelligence Council. Neighboring countries, including Brazil, may struggle to absorb the influx of migrants. On average, 800 Venezuelans are crossing the border into Brazil every day in need of urgent humanitarian assistance, according to the UNHCR, the UN Refugee Agency.
The effects of climate change in the Arctic, including higher temperatures and melting ice, have contributed to shifts in migration across the Arctic and may have security implications. Increasing temperatures may have a variety of impacts in the Arctic, according to the Intergovernmental Panel on Climate Change (IPCC). The effects of rising temperatures are disrupting livelihoods and food security, especially for indigenous communities, and opening up untapped natural resources to extraction. Both trends have impacted migration flows in the Arctic. Rising temperatures and melting ice have opened up previously inaccessible waterways in the Arctic, with implications for national security, according to the Department of Defense and others. Greenland, located in the Arctic and considered part of the Kingdom of Denmark, exhibits many of these trends.
Greenland: Climate Change, Migration, Stability and Security
Total Population: 56,000
Observed and Projected Effects of Climate Change

Greenland is experiencing the effects of climate change, including glacial and ice melt, shifts in wildlife distribution, and newly available oil and mineral deposits, among others. The Greenland Ice Sheet covers approximately 80 percent of Greenland's land mass. The ice sheet's melting rate is slow but uncertain. Increases in temperature greater than 1°C may result in the near-complete loss of the ice sheet over a millennium and significant sea level rise, according to the IPCC. In the short term, predicting the ice sheet's melting rate is a challenge, as predictions vary in the scientific community. Accurate predictions would support mitigation and adaptation efforts in vulnerable areas. Rising temperatures and shrinking ice cover have shifted the distribution and migration patterns of marine mammals and fish and impacted food security, according to the IPCC and the Arctic Council, an intergovernmental forum for Arctic states. For example, the economy in Paamiut, Greenland, depended primarily on cod fisheries until changing climate conditions caused cod to disappear, and the town was slow to adapt to newly available shrimp. Similarly, fisheries in Disko Bay, Greenland, have struggled to adapt to new conditions. Rising temperatures and the resulting reduction in ice cover have required a shift to fishing from boats in open water instead of hunting and fishing over ice cover. Lastly, warming and ice melt may make significant oil and mineral deposits accessible for extraction in the future. The potential expansion of extraction industries makes environmental sustainability another possible concern. For example, an estimated 31 billion barrels of oil and gas may exist off the coast of Northeast Greenland, according to the Kingdom of Denmark's 2011-2020 Arctic Strategy. The strategy stresses the importance of assessing and reducing risks to the environment resulting from the exploration and extraction of oil and gas.
Migration Trends

The effects of climate change are predicted to contribute to internal and external migration in Greenland. For example, young people are increasingly leaving indigenous communities in rural areas for cities in Greenland in search of work, as traditional livelihoods become unsustainable. Greenland is home to a majority indigenous population, primarily Inuit, whose traditional hunting and fishing practices require travel across ice. In the past, people adapted to seasonal changes to support livelihoods by migrating, and the practice was embedded into indigenous social structures. With reduced ice cover, however, migrating to hunt, fish, and maintain connections to community is more dangerous or restricted. Government policies promoting centralized services, such as health care and education, have also played a role in the shift away from migration as a way of life. As a result, indigenous livelihoods are more difficult to maintain, and young people often migrate to towns and cities in Greenland, or to Denmark, for education. At the same time, warmer temperatures have made mineral extraction feasible. As the extraction industry grows, new jobs may draw migrants from outside the Arctic region. In 2011, companies spent $100 million on the exploration of minerals in the Arctic, and the estimated number of new mines is expected to require more workers than currently live in the region. Today, more people leave Greenland than migrate to it.
The local Inuit population in Uummannaq, Greenland, relies heavily on ice coverage for fishing and travel by traditional dog-sled.
Brookings-LSE Project on Internal Displacement, A Complex Constellation: Displacement, Climate Change and Arctic Peoples (January 30, 2013).
Brookings-LSE Project on Internal Displacement.
The effects of climate change on Sub-Saharan Africa vary depending on the region and have impacts on migration and security, according to the International Organization for Migration (IOM). Coastal areas in West and East Africa, for example, are at risk from sea level rise that could affect major cities. Drought and the risk of desertification in the Sahel are cited as concerns, as is increased rainfall in parts of Central Africa accompanied by lower agricultural yields. As desertification threatens the livelihoods of farmers and herders, and drought makes fishing more challenging, rural dwellers may be more likely to migrate to cities, according to the United Nations Environment Programme (UNEP). Urbanization and population growth across Sub-Saharan Africa are already making densely populated cities vulnerable to flooding, storms, and erosion, increasing the number of people at risk of displacement by sudden-onset disasters. Climate change effects and changing migration flows across Sub-Saharan Africa may impact access to natural resources and contribute to existing tensions and conflicts, according to UNEP and the Intergovernmental Panel on Climate Change (IPCC). In Nigeria, the effects of climate change may affect a variety of livelihoods and increase migration south, while also exacerbating existing conflicts.
Nigeria: Climate Change, Migration, Stability and Security
Total Population: 190,900,000
Fragile States Index: #14 out of 178
Observed and Projected Effects of Climate Change

The effects of climate change on Nigeria may impact the country's agriculture and economy, according to the United States Institute of Peace (USIP). Higher temperatures and decreased rainfall have contributed to drought in northern Nigeria. Desertification is also a concern. Some regions in northern Nigeria have less than 10 inches of rain a year, an amount that has decreased by 25 percent since the 1980s, according to USIP. In other areas across Nigeria, flooding has resulted in major crop losses, according to UNEP. Rising sea level, water inundation, and erosion are concerns in Nigeria's coastal areas. Rising sea level is predicted to pose medium to very high risks to Africa's coastal areas by 2100, according to the IPCC. Future sea level rise could result in the inundation of over 70 percent of the Nigerian coast. A rise of 0.2 meters in sea level could put billions of dollars in assets at risk, including oil wells near the coast. Even without a rapid rise in sea level, Nigeria's coastal areas could experience erosion and significant land loss by 2100, as the IPCC has reported.
Migration Trends

The effects of climate change on livelihoods in northern Nigeria may contribute to migration to the south, according to UNEP, while conflict in the north drives separate migration trends. As the effects of climate change make farming and fishing more challenging elsewhere in Nigeria, migration to southern coastal cities may increase. Traditionally, farmers, herders, and fishery workers migrated for temporary employment during the off season, including migration to Nigeria's cities to work in the oil industry. Permanent migration south, as well as to cities, may become more common if land suitable for farming decreases. As fish habitats like Lake Chad dry up, fishery workers may also migrate. Larger urban populations on the coast will put more people at risk of sea level rise, water inundation, and erosion, according to the IPCC. A rise in sea level of 1 meter could put over 3 million people at risk of displacement, as the IPCC has reported. Herders have also moved further south due to increased drought in northern Nigeria, as UNEP and USIP have reported. A 2010 survey of herdsmen in Nigeria, for example, found that nearly one-third of them had migrated southeast as a result of changes in the natural environment, according to UNEP. The ongoing conflict with Boko Haram, while not caused by climate change, has further resulted in millions of displaced people across the Lake Chad region, including many Nigerians who have fled to Cameroon, Chad, and Niger.
Nigerian refugees at the Minawao camp in Cameroon.
Challenges in Stability and Security

The effects of climate change, migration, and conflict are interconnected in Nigeria, as USIP has reported. The country is ranked 14th of 178 countries on the Fragile States Index. Events in northwest Africa, including Boko Haram's attacks in Nigeria, have underscored concerns about the region's vulnerability to the spread of violent extremism. The effects of climate change may exacerbate these concerns, according to USIP. Nigerians fleeing attacks from Boko Haram in the north have gone to communities in neighboring Chad, Cameroon, and Niger that are already experiencing food shortages due in part to climate change. These neighboring countries as a result have fewer resources to support both their own residents and the newer refugees. Non-state actors may also take advantage of government inaction on the effects of climate change. Boko Haram, for example, has justified its acts of violence by pointing to government failures, according to USIP. Separately, increased drought in the north may aggravate historic tensions over land and water use between farmers in the south and herders migrating from the north, according to UNEP. Nigeria's oil fields on the coast, which represent a significant part of the economy, are also at risk from sea level rise. Potential losses in oil revenue could impact Nigeria's ability to respond to humanitarian crises and conflict at home. Increased violence within its borders could also affect Nigeria's ability to support regional peacekeeping missions, such as the United Nations Mission in Liberia from 2003 to 2018, where Nigerian troops worked to restore security after a civil war.
The effects of climate change in the Middle East and North Africa, including on its desert regions, may impact water access and compound migration and stability challenges, according to the United Nations Environment Programme (UNEP). Over 60 percent of the population already experiences high or very high water stress, according to the World Bank. Coupled with unsustainable water use, climate change may further exacerbate challenges with water security. The region continues to experience rising temperatures and declining annual rainfall, trends that contribute to the severity and length of drought, land degradation, and desertification. Decreased water security affects the livelihoods and quality of life of farmers in the region, contributing to an increase in their migration to cities and more urbanization, according to the World Bank. In contrast, many people are expected to migrate away from coastal cities as a result of sea level rise, according to UNEP. These potential migrations would be taking place in a region that already hosts large numbers of migrants, such as those displaced by conflict and violence, including 18 percent of the world's refugees, according to the International Organization for Migration. Challenges in water security may put greater pressure on unstable governments in the region by intensifying existing tensions and conflicts between populations and their governments, as well as between countries that share sources of water. The conflict in Syria illustrates the complex nature of climate change, migration, and conflict in the region, and the challenges to accurately assessing the links among the three, as noted in a technical paper commissioned by the U.S. Agency for International Development (USAID).
Syria: Climate Change, Migration, Stability and Security
Total Population: 18,300,000
Observed and Projected Effects of Climate Change

Rising temperatures and declining rainfall have contributed to recent droughts in Syria, a trend that may continue. The country underwent an extended drought from about 2006 until 2011. During the drought, an estimated 60 percent of Syria experienced severe crop failure, with accompanying impacts on food security. Some studies have linked the length and severity of the drought in Syria to climate change, as USAID has reported. Others, however, have pointed to government land and water use policies, combined with the effects of climate change, as responsible for the severity of the drought. Agricultural policies, for example, encouraged farmers to grow water-intensive crops like wheat and supported inefficient irrigation practices—policies that further depleted groundwater and made the region more vulnerable to decreases in rainfall linked to climate change. Across the Middle East, the rising temperatures and declining rainfall of recent decades may worsen, according to the World Bank. If these trends continue, countries in the Middle East, including Syria, could continue to experience periods of severe drought and reduced crop yields.
Net Migration Rate per 1,000 people: -41.8
Map of Syria
Migration Trends

The ongoing conflict in Syria, in which migration due to climate change may have been a contributing factor, has caused large-scale migration to neighboring countries in the Middle East and to Europe. Leading up to the civil war, prolonged drought, among other factors, had increased migration to Syrian cities. Because of the drought, in 2009, over 800,000 Syrians lost their livelihoods in the agricultural sector, while nearly 1 million experienced food insecurity. In 2010, an estimated 200,000 people migrated from farms in rural areas to cities, according to a UN report. The conflict in Syria, which began in 2011, has further displaced large numbers of people within the country and across the Middle East, as we have previously reported. At the beginning of the conflict, Syrians, as well as Iraqi and Palestinian refugees who had been residing in Syria, fled mainly to Jordan, Lebanon, and Turkey. As the conflict persisted, refugees fled in larger numbers to Turkey, with the UNHCR reporting that nearly 1 million Syrians sought protection in that country in 2015. Starting that year, a growing number of Syrians risked dangerous sea voyages to reach countries in Europe, such as Greece, Germany, and Sweden. As of June 2017, more than 5 million registered Syrian refugees were living in neighboring countries, including more than 3 million in Turkey and more than 1 million in Lebanon.
Challenges in Stability and Security

Sources agree that the Syrian conflict is a significant security challenge that has resulted in large-scale migration across the Middle East and to Europe. Yet the link between prolonged drought, rural-to-urban migration, and the current conflict in Syria is uncertain. Some academic sources argue that the increased strain on urban infrastructure and resources due to rural-to-urban migration played a role in Syria's growing instability. Others highlight the complex nature of the Syrian conflict, pointing to broader political factors that exacerbated resource scarcity and inequality. For example, as the drought intensified, the Syrian government downplayed the severity of the humanitarian crisis, as described in research cited in a technical report commissioned by USAID. As a result, appeals to the international community for emergency aid received minimal support. Combined with existing sectarian divisions, ongoing revolutions across the Middle East, and other factors, the government's response to the drought may have contributed to the current conflict. Migration and displacement are a concern in the region, according to the Department of Defense and others. The U.S. government has provided significant humanitarian assistance for Syrian refugees in the Middle East, including in Lebanon and Jordan, as we have previously reported. However, a technical report commissioned by USAID has cautioned that the ongoing conflict in Syria makes it difficult to conduct research and draw conclusions related to climate, migration, and conflict.
The effects of climate change on Oceania, particularly rising seas, may significantly impact coastal populations and increase migration in the future, as the Asian Development Bank (ADB) and the Intergovernmental Panel on Climate Change (IPCC) have reported. Rising temperatures and declining rainfall may also contribute to lower yields from fisheries and agriculture and a significant decrease in coral reef cover. Extreme weather events, including higher temperatures, wind, and rainfall, have already increased in number and intensity across the region. In the majority of Pacific island nations, more people emigrate than immigrate, according to the African, Caribbean, and Pacific Observatory on Migration. The majority of migration in the region is economically driven. In the future, climate change may further impact these migration patterns across the region, according to the IPCC. Climate change has already exacerbated challenges that aid-dependent nations in the region face, restricting livelihoods and resources and contributing to pressures to migrate. The costs of climate change, including a decline in crop yields, a rise in energy demands, and a loss of coastal land, are predicted to be significant. The ADB estimates these costs will reach 12.7 percent of the Pacific region's GDP by 2100. Increased migration may also impact political stability and play a role in geopolitical rivalries within the region, according to the IPCC. The effects of climate change, especially rising sea levels, may result in forced migration from the Republic of the Marshall Islands (the Marshall Islands) and have additional impacts on the U.S. defense infrastructure on the islands.
Marshall Islands: Climate Change, Migration, Stability and Security
Total Population: 100,000
Fragile States Index: Not Available
Human Development Index: High
Gross Domestic Product (GDP) per Capita: $3,819
Remittances as % of GDP: 14.8
Agriculture, Fishing, Forestry as % of GDP: 15.9
% Population in Cities: 76.7
Net Migration Rate per 1,000 people: Not Available
Observed and Projected Effects of Climate Change

Rising sea levels are a grave threat to the Marshall Islands. The country consists of islands and low-lying atolls—coral caps sitting on top of submerged volcanoes—making it particularly vulnerable to rising sea levels. On average, the Marshall Islands are 2 meters above sea level. In Majuro, the country's most populous atoll, observed rates of sea level rise are already twice as fast as the global average. Population centers experience significant flooding, with damage to roads, houses, and infrastructure, especially during La Niña years, which are significantly wetter and more prone to extreme rainfall. Flooding is expected to worsen with rising sea levels, with consequences for the availability of drinking water. On Roi-Namur island, for example, a 0.4 meter rise in sea level combined with wave-driven flooding is predicted to make groundwater undrinkable year round as early as 2055. This salt water inundation may contaminate already limited groundwater across the Marshall Islands. Lastly, during the 1940s and 1950s, the Marshall Islands was the site of 67 U.S. nuclear weapons tests on or near Bikini and Enewetak Atolls. Projected increases in the frequency of flooding may negatively impact efforts to contain radioactive material stored on Runit Island.
Migration Trends

A number of factors have increased migration from the Marshall Islands, including to the United States. In 1986, the United States entered into a compact of free association with the country that allowed its citizens to migrate to the United States, as we have previously reported. As a result, more than 20,000 Marshallese now live in the United States. People are more likely to migrate abroad as the effects of climate change on the Marshall Islands—including rising sea levels—increasingly impact livelihoods. The threat of mass displacement and forced migration is also a concern, as the International Organization for Migration has reported. However, Marshallese culture has a strong connection to the land, which means that many view migration as a last resort. People still living in the Marshall Islands face overpopulation in urban centers and displacement by sudden-onset disasters like cyclones and flooding. Factors influencing people deciding to move abroad include displacement, lack of economic opportunity—sometimes exacerbated by climate change—and limited access to health care. Climate change is likely to increase risks to public health in the country. Increased rainfall, for instance, may expand mosquito breeding grounds, raising the risk of diseases like dengue fever. The country's limited health care system may further contribute to migration from the islands.
Challenges in Stability and Security

In the future, the Marshall Islands may become uninhabitable. This prospect threatens the existence of the Marshall Islands as a sovereign state, as well as the United States defense facilities located on the islands. The total loss of land raises problems of migration, resettlement, cultural survival, and sovereignty. Relocation of the population of the Marshall Islands, and of other Pacific Island nations at risk from rising seas, could cause significant geopolitical challenges. The Marshall Islands are also of strategic importance for the United States. Under the Compact of Free Association, the United States has permission to use several islands—including Kwajalein Atoll, the location of the Ronald Reagan Ballistic Missile Defense Test Range—until 2066. The country's proximity to the equator makes the Marshall Islands ideal for missile defense and space work. Yet the islands' defense infrastructure and operations are at significant risk due to rising sea levels, flooding, and diminishing supplies of potable water. As the Department of Defense has noted, climate change will have serious implications for the department's ability to maintain its infrastructure and ensure military readiness in the future.
DOD, 2014 Climate Change Adaptation Roadmap (Alexandria, VA: June 2014).
The effects of climate change on Central America and the Caribbean may increase migration and exacerbate poverty rates, as the National Intelligence Council has reported. The climate in Central America and the Caribbean is predicted to become warmer and drier. The Caribbean's extensive coastlines and low-lying areas are vulnerable to sea level rise and an increase in sudden-onset disasters, including hurricanes and storm surges. Drought is a particular concern in Central America, where declines in rainfall have reduced crop yields and threatened livelihoods in recent years. Some evidence shows that drought in parts of Central America has contributed to migration north, including to the United States. Population growth, especially in coastal cities, has increased the number of people at risk during hurricane season, and the number and intensity of hurricanes have grown in recent years. Some attribute the increase in intensity to higher sea surface temperatures caused by climate change; however, there remains debate about long-term hurricane trends. Recent hurricanes have caused displacement and significant losses and damages—including to infrastructure—across the region. The depletion of coral reefs and mangrove trees, natural barriers to coastal erosion and flooding, has exacerbated vulnerability to storms in coastal areas. Climate change is likely to have negative impacts on tourism in the Caribbean, where the industry is an important part of the economy, according to the Inter-American Development Bank. Climate change impacts on the economy may make it increasingly difficult for governments to reduce poverty and move toward environmental sustainability. Haiti's geography, location, and high poverty rates make the country especially vulnerable.
Haiti: Climate Change, Migration, Stability and Security
Total Population
Observed and Projected Effects of Climate Change

Haiti is highly vulnerable to climate change effects, partly due to its long coastline. Hurricanes routinely make landfall in the country, and increases in rainfall and wind speeds associated with hurricanes are likely. Severe hurricanes, including Hurricane Matthew in September 2016, have hit Haiti in recent years. Hurricane Matthew was the first category 4 storm in Haiti since 1964. Damage from severe flooding and severe winds during the hurricane affected over 2 million people and created significant food security and public health challenges. Significant deforestation has further exacerbated Haiti's vulnerability to hurricanes, as trees previously provided a natural barrier to the erosion that strong winds and more rainfall can cause. Rising temperatures and highly variable rainfall have led to extreme drought and flash flooding, according to the U.S. Agency for International Development (USAID). These trends decrease crop yields, affecting the livelihoods of farmers, and threaten water access. Projected increases in temperature and decreases in rainfall are likely to intensify drought in Haiti's interior.
USAID, Haiti: Environment and Climate Change Fact Sheet (January 2016).
Migration Trends

Slow-onset climate events, such as drought and rising sea levels, and sudden-onset events, including earthquakes, affect Haiti, according to the International Organization for Migration (IOM). Haiti is also particularly exposed to extreme weather events, such as hurricanes, which can lead to displacement. In January 2010, a catastrophic earthquake in Haiti killed an estimated 230,000 people and left close to 1.5 million people homeless. According to IOM, the recurrence of environmental disruptions increases risks and vulnerabilities. When Hurricane Sandy struck Haiti in October 2012, the country had still not recovered from the 2010 earthquake. The worsening of climate change effects around the world, particularly in low-income countries, may increase the number of people wanting to immigrate to the United States, where approximately 700,000 Haitians live today. Remittances from family members living outside Haiti make up a significant portion of the economy, at 24.7 percent of GDP. The majority of these remittances come from the United States, as we have previously reported. Remittances may support resilience to climate change effects as migrants send money home for disaster recovery and adaptation.
Challenges in Stability and Security

Haiti, the poorest country in the western hemisphere, has experienced political instability for most of its history and ranks 12th of 178 on the Fragile States Index. The government has a low capacity to respond to additional challenges like those related to climate change, according to USAID. The Ministry of Environment, for example, is a relatively new organization within the Haitian government, and local and regional governments have a limited ability to enforce environmental laws and regulations. The United States has provided substantial aid to Haiti, both in disaster response and broader development projects. Official development assistance for Haiti in 2015, for instance, totaled slightly more than $1 billion. According to a January 2018 UN report, 2.8 million people were still in need of humanitarian assistance.
GAO, Remittances To Fragile Countries: Treasury Should Assess Risks from Shifts to Non-Banking Channels, GAO-18-313 (Washington, D.C., March 8, 2018).
Appendix III: Department of State Global Climate Change Initiative Adaptation Activities Funded in Fiscal Years 2014 through 2017
The Department of State’s Bureau of Oceans and International Environmental and Scientific Affairs (State/OES) provided about $78 million in adaptation funding from the Global Climate Change Initiative for eight projects for fiscal years 2014 through 2017 (see table 2).
The Global Climate Change Initiative was established in 2010 to promote resilient, low-emission development and integrate climate change considerations into U.S. foreign assistance, and was divided into three main programmatic initiatives: (1) Adaptation assistance, (2) Clean Energy assistance, and (3) Sustainable Landscapes assistance.
The primary purpose of these contributions to the Least Developed Countries Fund (LDCF) was to address the adaptation needs of the least developed countries, which are especially vulnerable to the adverse impacts of climate change. The LDCF financed the preparation and implementation of National Adaptation Programs of Action, which identify a country's priorities for adaptation actions.
Initial grant to the National Adaptation Plans Global Network. The network is focused on increasing the capacity of national and subnational governments to identify and assess climate risks, integrate these risk considerations in sector planning, develop a pipeline of projects to address risks, identify and secure funding for projects, and track progress toward resilience targets.
Colombia, East Caribbean (Guyana, Saint Lucia, Saint Vincent and the Grenadines), Ethiopia, Peru, South Africa, Uganda, West Africa (Côte d’Ivoire, Ghana, Guinea, Sierra Leone, Togo) and, under current consideration, East Caribbean (Dominica, Suriname), and Pacific (Fiji, Kiribati, Tuvalu)
The cost amendment intensified the technical support on National Adaptation Plans for select countries, depending on specific country adaptation needs. In addition, the cost amendment continued the learning and progress from the initial grant.
Implemented through the Department of the Treasury, this funding supported a Treasury grant to the Pacific Catastrophe Risk Assessment and Financing Initiative Multi Donor Trust Fund at the World Bank. This activity established the Pacific Catastrophe Risk Insurance Foundation and the Pacific Catastrophe Risk Insurance Company, among other things.
The goal of PIER is to increase private sector investment in resilience to climate change in eight developing countries. The first phase of the project will assess and identify opportunities for private investment in resilience, as well as build public and private capacity for climate risk assessment in all the countries. In the second phase, public and private sector partners will develop and pilot climate risk-reduction investment models in four of the countries. The third phase will publicize the piloted investment models and lessons learned among the eight countries.
Implemented through the National Oceanic and Atmospheric Administration, this activity aims to implement a capacity-building partnership with India to promote effective climate resilient decision making at national, state, and local levels.
Appendix IV: Comments from the Department of State
Appendix V: Comments from the U.S. Agency for International Development
Appendix VI: GAO Contact and Staff Acknowledgments
Acknowledgments
In addition to the contacts named above, the following individuals made key contributions to this report: Miriam Carroll Fenton (Assistant Director), Kristy Williams (Assistant Director), Rachel Girshick (Analyst-in-Charge), Nancy Santucci, Miranda Cohen, Aldo Salerno, Neil Doherty, and Judith Williams. Alexander Welsh, Justin Fisher, and Joseph Thompson provided technical and other support.
Related GAO Products
Climate Change Adaptation: DOD Needs to Better Incorporate Adaptation into Planning and Collaboration at Overseas Installations. GAO-18-206. Washington, D.C.: November 13, 2017.
Compacts of Free Association: Actions Needed to Prepare for the Transition of Micronesia and the Marshall Islands to Trust Fund Income. GAO-18-415. Washington, D.C.: May 17, 2018.
Remittances to Fragile Countries: Treasury Should Assess Risks from Shifts to Non-Banking Channels. GAO-18-313. Washington, D.C.: March 8, 2018.
Syrian Refugees: U.S. Agencies Conduct Financial Oversight Activities for Humanitarian Assistance but Should Strengthen Monitoring. GAO-18-58. Washington, D.C.: October 31, 2017.
International Food Assistance: Agencies Should Ensure Timely Documentation of Required Market Analyses and Assess Local Markets for Program Effects. GAO-17-640. Washington, D.C.: July 13, 2017.
High-Risk Series: Progress on Many High-Risk Areas, While Substantial Efforts Needed on Others. GAO-17-317. Washington, D.C.: February 15, 2017.
Federal Disaster Assistance: Federal Departments and Agencies Obligated at Least $277.6 Billion during Fiscal Years 2005 through 2014. GAO-16-797. Washington, D.C.: September 22, 2016.
Coast Guard: Arctic Strategy Is Underway, but Agency Could Better Assess How Its Actions Mitigate Known Arctic Capability Gaps. GAO-16-453. Washington, D.C.: July 12, 2016.
Climate Information: A National System Could Help Federal, State, Local, and Private Sector Decision Makers Use Climate Information. GAO-16-37. Washington, D.C.: November 23, 2015.
Hurricane Sandy: An Investment Strategy Could Help the Federal Government Enhance National Resilience for Future Disasters. GAO-15-515. Washington, D.C.: July 30, 2015.
High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015.
Standards for Internal Control in the Federal Government. GAO-14-704G. Washington, D.C.: September 10, 2014.
Combating Terrorism: U.S. Efforts in Northwest Africa Would Be Strengthened by Enhanced Program Management. GAO-14-518. Washington, D.C.: June 24, 2014.
Climate Change Adaptation: DOD Can Improve Infrastructure Planning and Processes to Better Account for Potential Impacts. GAO-14-446. Washington, D.C.: May 30, 2014.
Extreme Weather Events: Limiting Federal Fiscal Exposure and Increasing the Nation’s Resilience. GAO-14-364T. Washington, D.C.: February 12, 2014.
Climate Change: State Should Further Improve Its Reporting on Financial Support to Developing Countries to Meet Future Requirements and Guidelines. GAO-13-829. Washington, D.C.: September 19, 2013.
High-Risk Series: An Update. GAO-13-283. Washington, D.C.: February 14, 2013.
International Climate Change Assessments: Federal Agencies Should Improve Reporting and Oversight of U.S. Funding. GAO-12-43. Washington, D.C.: November 17, 2011.
Climate Change Adaptation: Federal Efforts to Provide Information Could Help Government Decision Making. GAO-12-238T. Washington, D.C.: November 16, 2011.
Foreign Relations: Kwajalein Atoll Is the Key U.S. Defense Interest in Two Micronesian Nations. GAO-02-119. Washington, D.C.: January 22, 2002.

Why GAO Did This Study
The effects of climate change, combined with other factors, may alter human migration trends across the globe, according to the International Organization for Migration. For example, climate change can increase the frequency and intensity of natural disasters, causing populations to move from an area. Climate change can also intensify slow-onset disasters, such as drought, crop failure, or sea level rise, potentially altering longer-term migration trends.
GAO was asked to review how U.S. agencies address climate change as a potential driver of global migration. For State, USAID, and DOD, this report (1) describes executive branch actions related to climate change and migration from fiscal years 2014 through 2018; (2) examines the extent to which the agencies discussed the potential effects of climate change on migration in their plans and risk assessments; and (3) describes agency activities on the issue. GAO analyzed documents on administration priorities; reviewed agency plans, risk assessments, and documentation of agency activities; and interviewed agency officials.
What GAO Found
From fiscal years 2014 through 2018, a variety of executive branch actions related to climate change—such as executive orders and strategies—affected the Department of State (State), the U.S. Agency for International Development (USAID), and the Department of Defense (DOD), including their activities that could potentially address the nexus of climate change and migration. For example, a fiscal year 2016 presidential memorandum—rescinded in 2017—required agencies to develop implementation plans to identify the potential impact of climate change on human mobility, among other things. In general, however, climate change as a driver of migration was not a focus of the executive branch actions. For example, a fiscal year 2014 executive order—also rescinded in 2017—requiring agencies to prepare for the impacts of climate change did not highlight migration as a particular concern.
State, USAID, and DOD have discussed the potential effects of climate change on migration in agency plans and risk assessments. For example, State and USAID required climate change risk assessments when developing country and regional strategies, and a few of the strategies reviewed by GAO identified the nexus of climate change and migration as a risk. However, State changed its approach in 2017, no longer providing missions with guidance on whether and how to include climate change risks in their integrated country strategies. In doing so, State did not include in its 2018 guidance to the missions any information on how to include climate change risks, should the missions choose to do so. Without clear guidance, State may miss opportunities to identify and address issues related to climate change as a potential driver of migration.
The three agencies have been involved in climate change-related activities, but none were specifically focused on the nexus with global migration. For example, USAID officials said that the agency's adaptation efforts, such as its Pastoralist Areas Resilience Improvement through Market Expansion project in Ethiopia, were the most likely to include activities, such as enhancing resilience, that can indirectly address the issue of climate change as a driver of migration.
What GAO Recommends
GAO recommends that State provide missions with guidance that clearly documents its process for climate change risk assessments for country strategies. In commenting on a draft of this report, State indicated that it would update its integrated country strategy guidance and will specifically note that missions have the option to provide additional information on climate resilience and related topics.
Background
Spinal Cord Injury
Spinal cord injuries are complex, lifelong injuries that typically result from acute traumatic damage to the spinal cord or nerves within the spinal column. In spinal cord injury patients, certain nervous system functions may be impaired temporarily or permanently lost, depending on the level and severity of the patient's injury. In addition to decreased nervous system functioning, spinal cord injury patients may develop secondary medical complications that can further decrease functional independence and quality of life, including, but not limited to, the following:
Autonomic dysreflexia: a condition that may result in life threatening hypertension—high blood pressure—due to impaired nervous system response, below the level of spinal cord injury.
Depression: a medical mood disorder—commonly affecting about one in five spinal cord injury patients—that can cause physical and psychological symptoms (including changes in sleep and appetite, and thoughts of death or suicide).
Impaired bowel and bladder functioning: potential inability to move waste through the colon and to control (stop or release) urine—which can lead to other life-threatening illnesses (such as autonomic dysreflexia) and/or infections.
Pressure ulcers: a common complication affecting up to 80 percent of spinal cord injury patients that results from an area of the skin or underlying tissue that is damaged due to decreased blood flow, which can occur after extended periods of inactive sitting or lying, among other ways. Pressure ulcers—also known as pressure sores or wounds—can occur years after initial injury and may also result in life-threatening infections or amputation.
Spasticity: a common condition that affects 65 to 78 percent of spinal cord injury patients and can result in symptoms ranging from mild muscle stiffness to severe, uncontrollable leg movements.
Syringomyelia: a rare disorder that occurs when cerebrospinal fluid— normally found outside of the spinal cord and brain—enters the interior of the spinal cord to form a cyst known as a syrinx. This cyst expands and elongates over time, destroying the center of the spinal cord. Symptoms can develop slowly and can include numbness, pain, effects on bowel and bladder function, or paralysis. While this condition can occur as a result of a trauma, such as a spinal cord injury, the majority of cases are associated with a complex brain abnormality.
Brain Injury
Acquired brain injuries occur after birth and are not hereditary, congenital, degenerative, or a result of birth trauma. Acquired brain injuries result in changes to the brain's neuronal activity, which can affect the physical integrity, metabolic activity, or functional ability of nerve cells in the brain. Acquired brain injuries can be either non-traumatic or traumatic in nature: non-traumatic brain injuries are caused by an internal force—such as in the case of stroke, tumors, or drowning—and traumatic brain injuries are caused by an external force—such as in the case of car accidents, gunshot wounds, or falls. The severity of brain injury can often result in changes to physical, behavioral, and/or cognitive functioning. For example, according to one source, nearly 50 percent of all people with a traumatic brain injury experience depression within the first year after injury, and nearly two-thirds experience depression within 7 years post-injury. Depression can develop as a result of physical changes in the brain, emotional response to the injury, and other unrelated factors—such as family history. Due to impaired cognitive functioning, traumatic brain injury patients may also experience difficulty communicating, concentrating, and processing and understanding information.
Medicare Payment in LTCHs
Acute care hospitals and LTCHs are paid under different Medicare payment systems by law. Acute care hospitals are paid under the inpatient prospective payment system (IPPS). LTCHs are paid under the LTCH PPS. Under both systems, Medicare classifies patients based on Medicare diagnosis groups, which organize patients based on their conditions and the care they receive. Medicare payments for LTCHs are typically higher than payments for acute care hospitals, to reflect the average resources required to treat Medicare beneficiaries who need long-term care.
Traditionally, all LTCH discharges were paid at the LTCH PPS standard federal payment rate. The Pathway for SGR Reform Act of 2013 modified the LTCH PPS by establishing a two-tiered payment system—such that certain LTCH discharges continue to be paid at the standard rate and others are paid at a generally lower, site-neutral rate. In its March 2013 report, MedPAC described concerns regarding growth in the number of LTCHs and the extent to which some of their patients may otherwise be treated appropriately in less costly settings. To continue to be eligible for the standard rate, the discharge must generally have a preceding acute care hospital stay with either an intensive care unit stay of at least 3 days or an assigned diagnosis group based on the receipt of at least 96 hours of mechanical ventilation services in the LTCH, unless an exception applies. Discharges that do not qualify for the standard rate are to receive a blended site-neutral rate—equal to 50 percent of the site-neutral rate and 50 percent of the standard rate—for discharges in cost reporting periods beginning in fiscal years 2016 through 2019, and the full site-neutral rate for discharges in cost reporting periods beginning in fiscal year 2020.
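To make the blended-rate arithmetic concrete, the following minimal sketch computes the payment for a discharge that does not qualify for the standard rate. The dollar amounts are hypothetical placeholders for illustration only, not actual Medicare rates.

    # Hypothetical per-discharge rates (illustrative only, not actual Medicare rates).
    standard_rate = 40_000.00      # LTCH PPS standard federal payment rate
    site_neutral_rate = 12_000.00  # rate comparable to an acute care hospital payment

    # For cost reporting periods beginning in fiscal years 2016 through 2019,
    # non-qualifying discharges receive a 50/50 blend of the two rates.
    blended_rate = 0.5 * site_neutral_rate + 0.5 * standard_rate
    print(f"Blended site-neutral payment: ${blended_rate:,.2f}")    # $26,000.00

    # For cost reporting periods beginning in fiscal year 2020, those discharges
    # receive the full site-neutral rate instead.
    print(f"Full site-neutral payment: ${site_neutral_rate:,.2f}")  # $12,000.00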
Beginning with cost reporting periods in fiscal year 2020, if fewer than half of an LTCH’s discharges meet the statutory requirements to be paid at the standard rate, the LTCH will no longer receive any payments at that rate for discharges in future cost reporting periods until eligibility for receiving payments under that rate is reinstated. Under this scenario, all discharges in succeeding cost reporting periods would be paid at the generally lower rate that an acute care hospital would receive for providing comparable care until eligibility for receiving payments at the standard rate is reinstated. According to officials from HHS, the department intends to establish a process for how hospitals would have their eligibility for receiving payments at the standard rate reinstated as part of the fiscal year 2020 rule-making cycle. Since the two qualifying hospitals are currently only excepted from the statutory two-tiered payment structure for cost reporting periods beginning during fiscal years 2018 and 2019, these two hospitals must also meet the statutory 50 percent threshold in fiscal year 2020 and beyond in order to receive the standard rate for any future discharges until reinstated. See table 1 for more information on Medicare’s LTCH PPS payment policies.
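The 50 percent threshold described above amounts to a simple share test applied to each cost reporting period. The sketch below, using made-up discharge counts, illustrates the logic; the function name and numbers are illustrative assumptions, not part of the statute or CMS's systems.

    def meets_discharge_threshold(standard_rate_discharges: int, total_discharges: int) -> bool:
        """Return True if at least half of an LTCH's discharges in the period
        met the statutory requirements for payment at the standard rate."""
        return standard_rate_discharges >= 0.5 * total_discharges

    # Hypothetical example: 40 of 90 discharges qualified for the standard rate.
    if meets_discharge_threshold(40, 90):
        print("LTCH remains eligible for standard-rate payments.")
    else:
        # Fewer than half qualified: discharges in succeeding cost reporting
        # periods are paid at the generally lower acute-care-comparable rate
        # until eligibility is reinstated.
        print("Standard-rate payments suspended pending reinstatement.")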
The Two Qualifying Hospitals: Craig Hospital and Shepherd Center
Two LTCHs have qualified for the temporary exception to site-neutral payments, according to CMS officials. Craig Hospital is a private, not-for-profit facility that has specialized in medical treatment, research, and rehabilitation for patients with spinal cord and brain injury since 1956. Craig Hospital is classified as an LTCH for the purposes of Medicare payment, and is licensed as a general hospital by the state of Colorado—which does not have separate designations for LTCHs. Craig Hospital has been selected as one of 14 NIDILRR Spinal Cord Injury Model Systems and one of 16 Traumatic Brain Injury Model Systems and is accredited by the Joint Commission.
Shepherd Center is a private, not-for-profit facility that specializes in medical treatment, research, and rehabilitation for people with traumatic spinal cord injury and brain injury—as well as neuromuscular disorders, including multiple sclerosis. Shepherd Center is classified as an LTCH for the purposes of Medicare payment, and as a specialty hospital—which includes LTCHs—by the state of Georgia. Shepherd Center is also currently designated as a NIDILRR Spinal Cord Injury Model System and is accredited by the Joint Commission. Shepherd Center also has several CARF International accredited specialty programs. Specifically, it has CARF-accredited inpatient rehabilitation specialty programs in spinal cord injury and brain injury—for adults, children, and adolescents; and interdisciplinary outpatient medical rehabilitation specialty programs in spinal cord injury and brain injury—for adults, children, and adolescents, among others.
More than half of the Medicare discharges in fiscal year 2013 at the two qualifying hospitals—43 of 75 at Craig Hospital and 47 of 88 at Shepherd Center—were within the diagnosis groups designated in section 15009(a) of the 21st Century Cures Act. (See table 2 below for more information.) Patients treated for these diagnosis groups may receive treatment for spinal disorders and injuries; medical back problems; degenerative nervous system disorders; skin grafts for skin ulcers; acquired brain injuries, such as traumatic brain injuries; or other significant traumas with major complicating and comorbid (simultaneous) conditions.
Both qualifying hospitals have a variety of specialized inpatient and outpatient programs to help treat the complex health care needs of their patients, including those covered by Medicare. For example, both hospitals have wheelchair positioning clinics that can help prevent skin complications, such as pressure ulcers, that can occur in spinal cord patients. Both hospitals also have programs for those patients who need ventilator support such as diaphragmatic pacing—support for patients with respiratory problems whose diaphragm, lungs, and nerves have limited function—and ventilator weaning programs. In addition to clinical programs, both qualifying hospitals also provide transitional support, such as providing counseling and education to families of patients with these injuries.
Most Medicare Beneficiaries Who Receive Services at the Two Qualifying Hospitals Need Specialized Follow-Up Care to Manage Long-Term Effects of Catastrophic Injury
We found that most Medicare beneficiaries at the two qualifying hospitals need specialized services to manage the chronic, long-term effects of a catastrophic spinal cord or brain injury. Most of these patients are younger than 65 and ineligible for Medicare at the time of their initial injury, according to officials from the qualifying hospitals. Instead, according to officials, these patients typically become eligible for Medicare 2 years or more after their initial injury due to disability. Medicare beneficiaries at the two qualifying hospitals typically need care to manage comorbidities or the associated long-term complications of their injury. Officials from Craig Hospital said a significant number of their Medicare beneficiaries have comorbid conditions—such as diabetes or cardiac problems—upon admission that can be further complicated by their injury. The officials said managing these comorbidities is as much of a medical challenge as managing the spinal or brain injury. Officials from both qualifying hospitals noted their Medicare beneficiaries who have a spinal cord or brain injury also frequently seek care after initial injury to address secondary complications resulting from their injury, including urinary tract infections, respiratory problems, and pressure ulcers.
While the qualifying hospitals primarily treated traumatic spinal cord or brain injuries, we found that their Medicare populations differed from each other during the period from fiscal year 2013 to 2016. Specifically,
Craig Hospital. Our review of Medicare claims data indicates more than 50 percent of the 246 Medicare discharges during this time were associated with Medicare diagnosis groups for spinal cord conditions. Specifically, during this time, Craig Hospital’s Medicare discharges were commonly assigned to three diagnosis groups covering spinal procedures and spinal disorders and injuries. For example, officials from Craig Hospital told us that about 60 percent of Medicare beneficiaries in fiscal year 2016 required surgical care for a spinal cord injury. According to officials, most of these patients received surgery for syringomyelia—a complication in spinal cord patients that generally develops years after their initial injury. These officials told us that Craig Hospital provided the pre- and post-operative care for those patients in fiscal year 2016; however, currently, Craig Hospital is only responsible for pre-operative assessments. The remaining 40 percent of their Medicare beneficiaries in fiscal year 2016 received care for new spinal cord injuries.
Shepherd Center. Our review of Medicare claims data indicates the most common diagnosis group of the 365 Medicare discharges during this time—fiscal year 2013 to fiscal year 2016—related to treatment for skin grafts that can be associated with pressure ulcers, among other things. Shepherd Center officials confirmed that most of their Medicare beneficiaries received treatment for a pressure ulcer that occurred after initial injury which, as previously noted, can be so severe as to result in life-threatening infections. According to officials, most of their post-injury Medicare beneficiaries receive post-operative care and other wound management services following surgery to treat pressure ulcers, to ensure that the site will not tear again and to avoid reoccurrence. Other diagnosis groups for Medicare patients at Shepherd Center included those for spinal disorders and injuries and extensive operating room procedures unrelated to principal diagnosis. According to officials, beneficiaries in these diagnosis groups received treatment for a range of conditions, including traumatic injuries, urinary tract infections, neurogenic bladder and bowel or respiratory complications. Officials told us the hospital also served Medicare beneficiaries recovering from other acquired brain injuries, such as stroke, and paralyzing neuromuscular conditions, such as multiple sclerosis.
Stakeholders we interviewed—including providers at other facilities— noted that traumatic spinal cord and brain injury patients—including those covered by Medicare—require significant levels of care due to the complexity of their injuries as well as the immediate and long-term complications that can occur from the injuries. For example, most stakeholders told us these patients often require lifelong care due to the complexity and reoccurrence of comorbidities or secondary complications. Some of these stakeholders noted, for example, spinal cord and brain injury patients often face mental health or psychosocial conditions, such as depression or anxiety. Some stakeholders also emphasized that many spinal cord injury patients risk secondary complications that may not occur until years after injury, such as pneumonia, pressure ulcers, and other infections. A few stakeholders told us spinal cord and brain injury patients are often among the most complex patients they treat. As such, patients with spinal cord or brain injuries often require interdisciplinary care that covers a wide range of specialties—including physiatry (rehabilitation medicine), neurology, cardiology, and pulmonology—as well as specialized equipment or technology, such as eye glance tools to control call systems or the television.
Medicare Policies May Have Modest Effects on Payments to the Two Qualifying Hospitals Depending on the Types of Patients Treated and Other Factors
Simulations of Medicare Payments to Qualifying Hospitals Illustrate Potential Effects of Payment Policies
Simulations of Medicare payments illustrate the potential effects of Medicare’s site-neutral payment policies, which were required by law, on the qualifying hospitals. Specifically, our simulations calculated what the qualifying hospitals would have been paid for Medicare patient discharges that occurred in two baseline years—fiscal year 2013 (baseline year 1) and fiscal year 2016 (baseline year 2)—if applicable payment policies from future years (2017 through 2021) were applied to those discharges. We selected two baseline years to account for differences in data, such as the number of discharges, between fiscal year 2016—the most recent year of complete data available at the time we began our analysis—and fiscal year 2013. Table 3 below provides a summary of Medicare discharges and payments to the qualifying hospitals during these two baseline years. Variation in utilization and patient mix across the baseline years allows the simulations to cover a range of possible changes in payments for the two hospitals.
Our simulations indicated how Medicare’s payment policies could have affected these baseline payments to each qualifying hospital:
Fiscal Year 2017 Blended Site-Neutral Rate Policy: Discharges that do not meet criteria to receive the standard rate are to receive a blended site-neutral rate—equal to 50 percent of the site-neutral rate and 50 percent of the standard rate. We found that while some of the baseline discharges would qualify for the standard rate, most discharges would have been paid at the blended site-neutral rate. Specifically, 8 to 20 percent of Craig Hospital's baseline Medicare discharges would have qualified for the standard rate, resulting in simulated payments of about $3.86 million (baseline year 1) and $3.22 million (baseline year 2) under the blended site-neutral rate policy.
For Shepherd Center, between 23 percent and 40 percent of baseline Medicare discharges would have qualified for the standard rate, resulting in simulated payments of about $5.16 million (baseline year 1) and $5.31 million (baseline year 2). Each of these simulated payments is an increase compared to actual payments made in the baseline years.
Fiscal Years 2018 and 2019 Temporary Exception: The qualifying hospitals are receiving the standard rate for all discharges, due to the temporary exception. As a result, simulated payments under the temporary exception are about $3.74 million (baseline year 1) and $3.18 million (baseline year 2) for Craig Hospital and about $5.64 million (baseline year 1) and $5.75 million (baseline year 2) for Shepherd Center, which is an increase compared to actual payments made in the baseline years.
Fiscal Year 2020 Two-Tiered Payment Rate: The temporary exception for the qualifying hospitals no longer applies; therefore, the site-neutral rate will apply to discharges not qualifying for the standard rate. We found that both qualifying hospitals would receive some payments at the standard rate, but that most of their discharges would be paid at the lower, site-neutral rate—assuming similar caseloads (e.g., patient mix). As a result, simulated baseline year payments at Craig Hospital are about $3.47 million (baseline year 1) and $3.03 million (baseline year 2), and simulated baseline payments to Shepherd Center are about $4.42 million (baseline year 1) and $4.55 million (baseline year 2). The simulated payments therefore decrease compared to those in fiscal year 2019, and also generally decrease compared to actual payments made in the baseline years.
Future Years Under 50 Percent Threshold: Under statute, unless 50 percent or more of the hospital’s discharges in cost reporting periods beginning during or after fiscal year 2020 qualify for the standard rate, no subsequent payments will be made to a hospital at that rate in each succeeding cost reporting period. Most of the baseline year discharges did not qualify for the standard rate, and therefore simulated payments are based on the generally lower comparable acute care rate. However, simulated payments stayed about the same between fiscal year 2020 and 2021, in part due to differences in calculations for high-cost outlier payments. A high-cost outlier payment is made to hospitals for those cases that are extraordinarily costly, which can occur because of the severity of the case and/or a particularly long length of stay. Specifically, simulated payments were about $3.49 million (baseline year 1) and $3.02 million (baseline year 2) for Craig Hospital and about $4.24 million (baseline year 1) and $4.16 million (baseline year 2) for Shepherd Center. Without the high-cost outlier payments, the simulated payments would have decreased by at least $2 million. If the mix of patients at Craig Hospital and Shepherd Center changes so that they meet the 50 percent threshold in fiscal year 2020, then simulated payments for fiscal year 2021 could be higher. As of September 2018, Craig Hospital officials told us that they expect to meet the 50 percent threshold with their current patient mix. Shepherd Center officials told us they do not expect to meet the 50 percent threshold.
See figures 1 and 2 below for the results of our simulations.
Our simulations of payments assume the number and type of Medicare discharges at the two qualifying hospitals remain the same as those in fiscal years 2013 and 2016. However, the full effect of payment policy on future Medicare payments to the qualifying hospitals will depend on three key factors that are subject to change:

1. Severity of patient conditions: Medicare payment is typically higher for more severe injuries, such as a traumatic injury with major comorbidities or complications, relative to less severe injuries. In the two baseline years we used for our simulations—fiscal year 2013 and fiscal year 2016—more than half of the Medicare discharges at the qualifying hospitals were associated with conditions with multiple comorbidities and complications, as indicated by the diagnosis groups, and this level of severity is reflected in the simulation results. Future payments to qualifying hospitals will depend on the extent to which the severity of patient conditions changes over time.

2. Volume of discharges meeting criteria for the standard rate: As previously noted, for a hospital to receive the standard rate for a discharge, the discharge must meet certain criteria, such as having a preceding acute care hospital stay with either an intensive care unit stay of at least 3 days or an assigned diagnosis group based on the receipt of at least 96 hours of mechanical ventilation services in the LTCH. Our simulations reflect that in the two baseline years, about 23 percent of the fiscal year 2013 discharges and about 40 percent of the fiscal year 2016 discharges met the criteria to receive the standard rate for Shepherd Center, and about 8 percent of the fiscal year 2013 discharges and about 20 percent of the fiscal year 2016 discharges met the criteria for Craig Hospital. Changes to these amounts could affect future payments to the qualifying hospitals. In particular, if 50 percent or more of either hospital's discharges beginning in fiscal year 2020 meet the standard rate criteria, then the hospitals would be eligible for payments at the standard rate in fiscal year 2021, which may result in higher payments compared to our simulations.

3. Payment adjustments: LTCHs may receive a payment adjustment for certain types of discharges, such as short-stay outliers, interrupted stays, or high-cost outliers. In particular, most discharges at Craig Hospital received high-cost outlier payments (additional payments for extraordinarily costly cases) during the two baseline years—76 percent in fiscal year 2013 and 85 percent in fiscal year 2016. At Shepherd Center, at least 40 percent of discharges during the two baseline years received high-cost outlier payments—about 42 percent in fiscal year 2013 and about 58 percent in fiscal year 2016. The amount of future payments to qualifying hospitals will depend on the extent to which they continue to have a high proportion of discharges with high-cost outlier payments.
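To make the mechanics of these policy scenarios concrete, the following is a minimal sketch in Python of the scenario logic described above. The rates and discharge counts are hypothetical placeholders; the actual simulations also applied DRG relative weights, wage indexes, and outlier formulas from the final rule tables (see appendix I).

```python
# Minimal sketch of the policy-year logic in the simulations. Rates and
# discharge counts are hypothetical placeholders; the actual simulations
# also applied DRG relative weights, wage indexes, and outlier formulas.

def simulated_total(n_standard: int, n_other: int, policy_year: int,
                    standard_rate: float, site_neutral_rate: float) -> float:
    """Total simulated payment for a baseline caseload in a policy year."""
    n_total = n_standard + n_other
    if policy_year in (2018, 2019):
        # Temporary exception: every discharge paid at the standard rate.
        return standard_rate * n_total
    if policy_year == 2017:
        # Transition: non-qualifying discharges paid the 50/50 blend.
        blended = 0.5 * standard_rate + 0.5 * site_neutral_rate
        return standard_rate * n_standard + blended * n_other
    if policy_year >= 2021 and n_standard / n_total < 0.5:
        # 50 percent threshold not met: all discharges paid at the
        # generally lower comparable acute care rate.
        return site_neutral_rate * n_total
    # Fiscal year 2020 two-tiered rates (and 2021 if the threshold is met).
    return standard_rate * n_standard + site_neutral_rate * n_other

# Example with hypothetical rates: 20 qualifying and 80 other discharges.
for year in (2017, 2018, 2020, 2021):
    print(year, simulated_total(20, 80, year, 40_000.0, 25_000.0))
```

With these placeholder values, simulated totals rise under the temporary exception and fall once the two-tiered rates and then the 50 percent threshold take effect, which is the same directional pattern the report describes.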
Qualifying Hospitals and Some Stakeholders Reported that Payment Policies May Result in Fewer Services Provided and Fewer Patients Served by LTCHs
In addition to the effect on payments, officials from both qualifying hospitals and some stakeholders we interviewed noted that the LTCH site-neutral payment policies may result in fewer services provided and fewer patients served by the qualifying hospitals and other LTCHs. For example, officials from Craig Hospital told us they stopped providing post-operative care to patients requiring spinal surgery, such as patients with syringomyelia, in 2016—instead referring them to other facilities—in part because these discharges do not meet the criteria for the standard rate. As of September 2018, they told us they do not plan to provide this care in the future unless the temporary exception is extended. Officials from Shepherd Center told us that while they have not yet made changes to the services they offer to Medicare patients, they may limit which Medicare beneficiaries they serve in the future. For example, they told us that most of their Medicare beneficiaries were admitted from home or sought care in their outpatient clinic. Hospital officials expect that, when the temporary exception expires after fiscal year 2019, these patients will not qualify for the standard rate. Shepherd Center officials said they may not be able to serve similar patients in future years.
MedPAC officials and some stakeholders—a specialty association and health care providers with experience treating patients with similar conditions at other LTCHs—told us that some LTCHs have changed the services they offer and the patients they treat to increase the proportion of discharges that qualify for the standard rate. For example,
MedPAC officials cited reports that indicate how some LTCHs have adjusted to the site-neutral policies. For example, a 2018 MedPAC report indicated that LTCHs in one large for-profit chain were able to make adjustments so that, as of September 30, 2016, close to 100 percent of their Medicare discharges met the criteria to receive the standard rate.
A representative from an LTCH association told us that many LTCHs have adjusted their patient mix by increasing the number of discharges that meet criteria for the standard rate and turning away some Medicare beneficiaries to reduce the number of discharges subject to the site-neutral rate. The representative noted that certain LTCHs have already been able to adjust their patient mix because they have existing programs in place that focus on chronic, critically ill patients who would have a preceding acute care hospital stay. The representative told us that some LTCHs specialize in care for patients who do not meet the criteria to receive the standard rate and would generally be paid at the site-neutral rate; therefore, changing their patient mix is not a viable strategy for these LTCHs. According to the representative, as of February 2018, about two-thirds of all LTCHs were above the 50 percent threshold.
Providers from another LTCH told us that before the site-neutral payment policy went into effect, only about 40 to 45 percent of its discharges met criteria for the standard rate. However, they worked to ensure most patients referred to the LTCH would qualify for the standard rate. Officials told us patients who do not meet the criteria for that rate typically either stay longer in the acute care hospital or are transferred to a different post-acute care setting, such as a skilled nursing facility. Officials noted that, in both cases, the patient may not receive the specialized services often required for their injuries, including those patients with spinal cord or brain injuries.
A provider we interviewed from another LTCH said that, historically, the LTCH has accepted patients who acquire pressure ulcers at home following discharge, but they may choose not to continue this practice because the patients’ discharges would not meet the criteria to receive the standard rate.
A few of these stakeholders told us some LTCHs are in markets that do not have alternative providers of care, such as skilled nursing facilities, for patients who do not meet the criteria. These LTCHs may have difficulty adjusting their patient mix to avoid site-neutral payments. For example, a provider from one LTCH said his facility continues to take “site-neutral patients” because those patients often do not have another option to receive the specialized services they need. The provider emphasized concerns about the long-term viability of caring for those patients at the facility, because their care is paid at lower rates.
Similarities and Differences May Exist Between the Two Qualifying Hospitals and Other Facilities that Treat Medicare Patients with Spinal Cord and Brain Injuries
The Two Qualifying Hospitals Treat Patients with Conditions Different Than Those at Most Other LTCHs, and Treat Fewer Medicare Patients
Our review of Medicare claims data, other information, and interviews with stakeholders indicated the two qualifying hospitals treated Medicare beneficiaries with different conditions than most of those treated at other LTCHs. Our analysis of Medicare claims data indicates Craig Hospital and Shepherd Center treat very few patients in the Medicare diagnosis groups that are most common to other LTCHs. Specifically, for several years, MedPAC has reported that LTCH patient discharges are concentrated in a relatively small number of diagnosis groups. For example, in March 2018, MedPAC reported that 20 diagnosis groups accounted for over 61 percent of LTCH discharges at both for-profit and not-for-profit facilities, in fiscal year 2016. However, in fiscal year 2016, these diagnosis groups accounted for approximately 30 percent of Medicare discharges—26 out of 88—at Shepherd Center, and most of these discharges fell within a single diagnosis group which covers a range of conditions. Craig Hospital did not discharge any Medicare beneficiaries assigned to these 20 diagnosis groups, in fiscal year 2016. The seven diagnosis groups that were used in the statutory criteria to except Craig Hospital and Shepherd Center from site-neutral payments were also not among these 20 diagnosis groups. For more information on the 20 diagnosis groups common to LTCHs in fiscal year 2016, see Appendix III, table 5.
Our review of Medicare claims data and other information indicates the two qualifying hospitals also treat a relatively small number of Medicare beneficiaries, a key distinguishing factor from most other LTCHs. In March 2018, MedPAC reported that, on average, Medicare beneficiaries account for about two-thirds of LTCH discharges. However, Medicare claims data and other information provided by the two qualifying hospitals indicate Medicare beneficiaries accounted for a significantly smaller proportion (about 8 percent) of patients discharged from Craig Hospital and Shepherd Center in 2016. Specifically, 40 of the 486 patients discharged from Craig Hospital in fiscal year 2016 and 75 of the 912 patients discharged from Shepherd Center in calendar year 2016 were Medicare beneficiaries. Officials from the qualifying hospitals told us they treat few Medicare patients primarily because of the younger average age of persons with spinal cord injuries and acquired brain injuries.
While patients with spinal cord and brain injuries may receive care in other LTCHs, most stakeholders we interviewed also suggested the two qualifying hospitals treat patients that are different from those treated at most other LTCHs, and can offer specialized care. Officials from the two qualifying hospitals told us that, relative to most other facilities—including most traditional LTCHs—they offer a more complete continuum of care to meet the needs of patients at different stages of spinal cord and brain injury treatment, without the need to transfer to different facilities. Officials also stated that, unlike most traditional LTCHs, they are able to offer more specialized care for patients with spinal cord and brain injuries, including more comprehensive rehabilitation services. Stakeholders we interviewed generally agreed that the two qualifying hospitals have developed expertise in treating spinal cord and brain injury patients and offer intensive rehabilitation services that are not provided in most other LTCHs. In addition, officials from the Colorado Department of Health Care Policy & Financing noted that Craig Hospital treats a patient population that is different from most other LTCHs in the state of Colorado. Specifically, according to officials, in comparison to other LTCHs in the state, Craig Hospital treats: (1) a higher percentage of patients with more severe conditions, (2) more patients from outside the state of Colorado, (3) fewer patients requiring ventilator weaning or requiring wound care—conditions typically characteristic of LTCH patients—and (4) patients that are, on average, younger than those at most other LTCHs in the state of Colorado. In addition, a 2014 study of LTCHs conducted for the Georgia Department of Community Health found Shepherd Center was "distinctly different" from other LTCHs in the state of Georgia, and most LTCHs nationwide.
Patients with Conditions Treated at Qualifying Hospitals Could Also Receive Care in IRFs, But Differences in Payment Systems and Data Limitations Make a Direct Comparison Difficult
Most stakeholders we interviewed suggested some IRFs provide specialty care to patients with catastrophic spinal cord injuries, acquired brain injuries, or other paralyzing neuromuscular conditions. Most of the stakeholders we interviewed noted that—like the two qualifying hospitals—some IRFs have the expertise to treat patients with catastrophic spinal cord injuries, acquired brain injuries, or other paralyzing neuromuscular conditions and thus may also treat patients with similar conditions. According to CMS officials, IRFs are specifically designed to provide post-acute rehabilitation services to patients with spinal cord injuries, brain injuries, and other neuromuscular conditions. CMS officials noted that patients with these conditions typically respond well to intensive rehabilitation therapy provided in a resource-intensive inpatient hospital environment and to the specific interdisciplinary approach to care that is provided in the IRF setting. Stakeholders also noted that patients with spinal cord injuries, brain injuries, and other neuromuscular conditions may receive care in other settings. However, some stakeholders noted that some of these providers—such as skilled nursing facilities—generally do not offer the specialized care these patients typically require.
Differences in payment systems and data limitations make it difficult to directly compare the attributes of Medicare beneficiaries discharged from the two qualifying hospitals and IRFs, including the costs of care they receive. Medicare uses separate payment systems to pay LTCHs and IRFs, for care provided to beneficiaries. LTCHs are paid pre-determined fixed amounts for care provided to Medicare beneficiaries, under the LTCH PPS. Medicare beneficiaries treated in LTCHs are assigned to diagnosis groups (MS-LTC-DRGs) for each stay—based on the patient’s primary and secondary diagnoses, age, gender, discharge status, and procedures performed. IRFs are also paid pre-determined fixed amounts for care provided to Medicare beneficiaries, but under a separate system—IRF PPS. Medicare beneficiaries treated in IRFs are assigned to case-mix groups—based on age, and level of motor and cognitive function—and then further assigned to one of four tiers (within these groups) based on the presence of specific comorbidities that may increase their cost of care. According to CMS officials, because the payment groups and assignments to those groups are different, it is difficult to directly compare LTCH patients, classified in diagnosis groups, with IRF patients, classified in case-mix groups. See Appendix II for more information on these payment systems.
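As a rough illustration of why the two classification systems resist direct comparison, the grouping inputs can be sketched as two record types with little overlap. The field lists below are simplified from the descriptions above and are not the actual CMS grouper specifications.

```python
from dataclasses import dataclass
from typing import List

# Simplified sketches of the inputs each payment system uses to group a
# stay. Field lists are illustrative, not the actual CMS grouper logic.

@dataclass
class LtchStay:  # assigned to an MS-LTC-DRG
    principal_diagnosis: str
    secondary_diagnoses: List[str]
    age: int
    gender: str
    discharge_status: str
    procedures: List[str]

@dataclass
class IrfStay:  # assigned to a case-mix group, then one of four tiers
    age: int
    motor_function_score: float
    cognitive_function_score: float
    comorbidity_tier: int

# Only age appears in both record types, so a stay grouped in one system
# cannot be mapped one-for-one onto the other without shared assessment data.
```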
MedPAC has previously reported that the differences in patient assessment tools used by post-acute care providers undermine Medicare's ability to compare, on a risk-adjusted basis, the patients admitted, the costs of care, and the outcomes beneficiaries achieve in these settings. MedPAC has also reported that while similar beneficiaries can receive care in each setting, payments can differ considerably for comparable conditions, due to differences in payment systems. It has made recommendations to address these issues. The Improving Medicare Post-Acute Care Transformation Act of 2014 also requires the Secretary of HHS to collect and analyze common patient assessment information and, in consultation with MedPAC, submit a report to Congress recommending a post-acute care PPS. Such efforts may make future comparisons of beneficiaries, costs of services, and outcomes of care across these settings possible.
Some Information Suggests Similarities and Differences Between Qualifying Hospitals and IRFs that Specialize in Spinal Cord and Brain Injuries
While data limitations make a direct comparison difficult, based on our review of other data and information, and interviews with stakeholders, we identified similarities and differences between the qualifying hospitals and certain IRFs that provide specialty treatment for catastrophic spinal cord injuries, acquired brain injuries, or other paralyzing neuromuscular conditions. Key similarities and differences include the following:

Volume of services. Our review of Medicare claims data, other information, and interviews with stakeholders indicate that—similar to the two qualifying hospitals—some IRFs treat a high volume (at least 100) of patients with complex spinal cord injury, brain injury, and other related conditions. Officials from the two qualifying hospitals, as well as some other stakeholders we interviewed—including officials from the Christopher & Dana Reeve Foundation and the Brain Injury Association of America—emphasized the importance of facilities treating a high volume of patients with these specialized conditions, which can be an indicator of expertise in treating these patients. Our review of Medicare claims data for 1,148 IRFs in fiscal year 2016 identified 21 IRFs that treated at least 100 Medicare beneficiaries with non-traumatic and traumatic spinal cord injuries and 109 IRFs that treated at least 100 Medicare beneficiaries with non-traumatic and traumatic brain injuries.
Our review of Medicare claims data indicated that—similar to the two qualifying hospitals—some IRFs also treat a high volume of patients with "catastrophic" injuries—traumatic brain injury, traumatic spinal cord injury, and major multiple traumas with brain or spinal cord injuries. Specifically, we identified 25 IRFs that treated a high volume (at least 100) of Medicare beneficiaries with catastrophic injuries in fiscal year 2016. In the absence of patient assessment data from the facilities, we did not independently evaluate the level and severity of these patients' injuries, which can vary due to the presence of other comorbid conditions. The Medicare case mix indexes we reviewed for these 25 IRFs indicated that, relative to other IRFs, most of these facilities treat patients who are more resource intensive.
Specialty accreditation and designation as model systems. Like Shepherd Center, some IRFs receive CARF accreditation for specialty programs to treat spinal cord and brain injuries. According to most stakeholders, this accreditation indicates expertise in treating these patients, as CARF International has established standards using evidence-based practices, among other factors. Officials from the two qualifying hospitals also noted CARF International has a specific focus on quality and outcomes. However, officials from Shepherd Center noted that similarities in care and services offered at CARF-accredited facilities would depend on the specialties for which they are certified.
Most of the stakeholders we interviewed also noted that designation as a NIDILRR model system is an indicator of similar expertise in treating patients with spinal cord and brain injuries. According to the Model Systems Knowledge Translation Center, spinal cord injury and brain injury model systems are recognized as national leaders in medical research and patient care and provide the highest level of comprehensive specialty services from the point of injury through eventual re-entry into full community life. While stakeholders we interviewed from NIDILRR model systems indicated the model system designation is focused primarily on research, rather than clinical care, most noted that model systems’ research often complements the facilities’ clinical efforts to address the unique needs of these patients. Officials from HHS’s Administration for Community Living also noted that all model system grantees must provide a continuum of care—emergency care, acute medical care, acute medical rehabilitation, and post-acute care—and that can happen in various provider types. According to officials from the qualifying hospitals and stakeholders from one other NIDILRR model system we interviewed, Craig Hospital and Shepherd Center are the only two LTCHs currently classified as spinal cord injury model systems; 12 of 14 spinal cord injury model systems are IRFs.
Specialized programs and services. Similar to the two qualifying hospitals, some IRFs may also offer specialized programs and services for patients with brain and spinal cord injuries, but the availability of these programs and services may vary by facility. Officials from some of the IRFs that responded to our information request—which included both NIDILRR facilities and IRFs with CARF-accredited programs—told us they provide specialized programs and services for patients with conditions similar to those treated at the two qualifying hospitals, and sometimes compete with the two qualifying hospitals for the same patients. For example, each IRF reported having interdisciplinary treatment teams; the capacity to provide medical management of medically complex and high-acuity patients with spinal cord injury, traumatic brain injury, or other major multiple traumas associated with a brain or spinal cord injury; family education and training; and skin and wound programs or services, among other services. However, the availability of certain services—including but not limited to ventilator-dependent weaning programs, diaphragmatic pacing, and outpatient programs for spinal cord and traumatic brain injury patients—varied by facility.
Staff with specialized training and clinical expertise. Similar to the two qualifying hospitals, most facilities that responded to our information request also reported having physicians, nurses, and physical and occupational therapists with specialty training in medical rehabilitation, spinal cord injury, and/or brain injury. However, the number of staff with this specialty training varied by facility. In comparison to the other facilities that responded to our information request, the numbers of nurses and physical and occupational therapists with this specialty training were generally higher at Craig Hospital and Shepherd Center. According to an American Spinal Injury Association consumer guideline that the Christopher & Dana Reeve Foundation typically provides to spinal cord injury patients and families, programs should regularly admit persons with spinal cord injury each year, to develop and maintain the necessary skills to manage a person with spinal cord injury, and a substantial portion of those admitted should have traumatic injuries.
Out-of-state admissions. Officials from the two qualifying hospitals emphasized they admit a significant number of patients from out-of-state, and our review of information provided by the qualifying hospitals and a select group of IRFs indicated the qualifying hospitals admit a higher percentage of patients from out-of-state. Specifically, information provided by these IRFs indicates that less than a quarter of patients admitted to these facilities in 2016 were from out-of-state. Information provided by Craig Hospital and Shepherd Center indicates that about half of their patients were admitted from out-of-state in 2016. Officials from the Colorado Department of Health Care Policy & Financing also noted Craig Hospital treats a higher percentage of out-of-state patients, compared to IRFs in the state.
Ability to treat medically complex patients. Officials from the two qualifying hospitals told us they treat more medically complex patients and provide a more complete range of medical services to spinal cord and brain injury patients than most IRFs provide. Specifically, officials from the two qualifying hospitals both noted they are able to treat patients much sooner in their recovery process than most IRFs, due to their LTCH status. Officials from Shepherd Center noted that they have a 10-bed intensive care unit, which allows them to take patients with certain injuries that some IRFs may not be equipped to admit—such as patients requiring advanced medical management and advanced-level procedural services and monitoring. Information provided by Shepherd Center indicated that, in calendar year 2017, approximately 20 percent of all inpatients were admitted to this unit and 13 percent of all inpatients were internally transferred to this unit after developing medical complications. According to officials, Craig Hospital does not have an intensive care unit, but they noted their ability to similarly care for medically complex patients—including telemetry (e.g., specialized heart monitoring) and one-to-one nursing care, if necessary. Most stakeholders we interviewed agreed that both qualifying hospitals' LTCH status provides certain advantages over IRFs, such as the ability to admit some medically complex patients earlier in the recovery process and longer lengths of stay. Stakeholders from most of the IRFs we interviewed also reported having the flexibility to admit some medically complex patients requiring more advanced-level monitoring and resources earlier in the recovery process—such as patients with disorders of consciousness.
Officials from the two qualifying hospitals also said they offer a continuum of care that can meet patients' changing needs, without the need to transfer them to different facilities. Information provided by Craig Hospital indicated that 83 percent of patients treated at its facility in 2016 were discharged to home, 13 percent were discharged to another post-acute care facility, and 3 percent were discharged to an acute care hospital. In 2016, approximately 91 percent of patients treated at Shepherd Center were discharged to home, 7 percent were discharged to another post-acute care facility, and 2 percent were discharged to an acute care hospital. Information provided by the IRFs that responded to our written request varied by facility, but—similar to the two qualifying hospitals—each facility discharged more than 65 percent of patients to home.
IRF payment criteria. CMS and most other stakeholders we interviewed noted that two Medicare payment policies applicable to IRFs, but not LTCHs, may contribute to their different patient populations. Specifically, to be classified for payment under Medicare's IRF PPS, at least 60 percent of the IRF's total inpatient population must require intensive rehabilitative treatment for one or more of 13 conditions—which include both spinal cord and brain injury. To be admitted to an IRF, Medicare beneficiaries must reasonably be expected to actively participate in and benefit from the intensive rehabilitation therapy program typically provided in IRFs. According to HHS, per industry standard, the intensive rehabilitation therapy program is often demonstrated by providing 3 hours of rehabilitation services per day for at least 5 days per week, but this is not the only way such intensity can be demonstrated. Officials from the two qualifying hospitals told us they generally use Medicare's intensive rehabilitation requirement as a minimum standard for their rehabilitation patients—even though they are not held to this requirement for the purposes of Medicare payment—but noted that some of their patients may not meet this requirement, due to their medical complexity.
Length of stay and site-neutral payment requirements for LTCHs. As previously noted, LTCHs—including the two qualifying hospitals—must have an average length of stay of greater than 25 days; IRFs are not subject to this requirement. The average length of stay for patients discharged from Craig Hospital was about 60 days in fiscal year 2016, and the average length of stay for patients discharged from Shepherd Center was about 53 days in calendar year 2016.
Stakeholders from the IRFs that responded to our information request reported average lengths of stay ranging from 14 to 31 days for patients discharged in fiscal year 2016; the ranges of lengths of stay were slightly higher for spinal cord injury and traumatic brain injury inpatients at these IRFs during the same period. LTCHs are also generally subject to a site-neutral payment policy that is not applicable to IRFs and may decrease LTCH payments for certain discharges under Medicare.
Other services provided. In addition to these Medicare-specific differences, a few stakeholders we interviewed also noted the two qualifying hospitals receive additional funding from their strong philanthropic donor bases that may allow them to provide other services and resources not covered by Medicare or offered at some IRFs. For example, while a few IRFs that responded to our information request reported offering housing for families of injured patients, the two qualifying hospitals offer up to 30 days of free housing to families of newly injured rehabilitation patients, if both the family and patient live more than 60 miles from the hospital. Officials from Shepherd Center told us their revenues are supplemented by investment income and donor funds. Craig Hospital has also established a foundation that supports the hospital in achieving its goals through philanthropy.
Agency Comments
We provided a draft of this report to HHS. HHS provided technical comments, which we incorporated as appropriate. We also provided the two qualifying hospitals summaries of information we collected from them, to confirm the accuracy of statements included in our draft report. We incorporated their comments, as appropriate.
We are sending copies of this report to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on GAO’s website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at [email protected]. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix IV.
Appendix I: Methodology for Simulating Payments to Qualifying Hospitals
This appendix describes our methodology for conducting simulations of payments for the two qualifying hospitals.
Simulations of Payments
We used Medicare claims data to conduct simulations of payments for the two qualifying hospitals. We first identified discharges at each hospital in two baseline years—federal fiscal years 2013 and 2016. We selected fiscal year 2016 because it was the year with the most recent data available at the time of our analysis, and we selected a second baseline year because the data for 2016 differed from data for other recent years. For example, the number of discharges for one qualifying hospital declined by nearly half between fiscal years 2013 and 2016. We chose fiscal year 2013 because data from that year were used to help determine which hospitals are subject to the temporary exception.
To identify how to appropriately calculate the long-term care hospital (LTCH) payment for each of these discharges in future payment years, we reviewed applicable federal regulation and documents from the Centers for Medicare & Medicaid Services (CMS) and the Medicare Payment Advisory Commission (MedPAC), and interviewed officials from both organizations. See table 4 for the relevant components in the formulas, such as Medicare severity long-term care diagnosis related group (MS-LTC-DRG) weights, identified from final rule tables.
When conducting these simulations, we made the following assumptions:
For simulated payments for payment policies in effect for fiscal years 2017 and 2018, we used the base rates, relative weights (e.g., the MS-LTC-DRG weights), geometric mean length of stay, wage index, geographic adjustment factor, fixed-loss amounts, and outlier thresholds that were published in the final rule tables for LTCH and inpatient prospective payment system (IPPS) hospitals—also known as acute care hospitals—for each respective year. At the time we began our analysis, this information was not known for fiscal years 2019 through 2021. We chose to use the fiscal year 2018 rates when conducting simulations for payment policies in those years because historical trends showed that annual changes were minimal—about 1 percent. Therefore, to the extent that these values continue to change over time, our findings may understate or overstate the amount that the qualifying hospitals would have been paid in our baseline years based on these future payment policies.
The site-neutral payment policy did not apply to discharges in the fiscal year 2013 baseline year. Therefore, we examined Medicare claims data to determine whether each discharge would have met the criteria to receive the LTCH standard rate in that year. Specifically, we determined whether each discharge had an acute care hospital stay that immediately preceded the LTCH stay. We then determined whether the time at the acute care hospital included 3 or more days in the intensive care unit or whether there was a code on the LTCH claim that indicated at least 96 hours of mechanical ventilation services were provided. Per Medicare's payment policy, we assumed any discharge that met these criteria would qualify for the full LTCH payment rate, unless the case was a psychiatric or rehabilitation stay, as identified by the following MS-LTC-DRG codes: 876, 880, 881, 882, 883, 884, 885, 886, 887, 894, 895, 896, 897, 945, or 946.
Under statute, unless 50 percent or more of the hospital's discharges in cost reporting periods beginning during or after fiscal year 2020 qualify for the standard rate, no subsequent payments will be made to a hospital at that rate. Therefore, when calculating simulated payments for fiscal year 2021, we applied the 50 percent threshold. At the time of our analysis, CMS had not yet finalized this policy through rule-making. As of November 2018, CMS officials told us that it is unlikely that any payment adjustment under this provision would apply until 2022 because the percentage cannot be determined until after an LTCH's cost reporting period has ended and data have been submitted.
Shepherd Center’s fiscal year is different than the federal fiscal year.
Therefore, the variables used to determine whether discharges in federal fiscal year 2016 met criteria to receive the standard rate were not available to use for some of the discharges that year. Of those discharges, we assumed that the same percentage of discharges that met the criteria to receive the standard rate in Shepherd’s fiscal year—30 percent—met the criteria in federal fiscal year 2016.
When calculating site-neutral payments, we assumed that each discharge would be paid at a rate comparable to that for acute care hospitals—the IPPS comparable amount rate. Site-neutral payments may also be based on the estimated cost of care, if it is lower than the IPPS comparable amount rate. However, over 90 percent of discharges at the qualifying hospitals were paid at the IPPS comparable amount rate in fiscal year 2016.
Per CMS’s recommendation, we applied the cost-to-charge ratio that was effective October 1, 2017, for each qualifying hospital, regardless of discharge date. For Craig Hospital this value was 0.442 and for Shepherd Center this value was 0.464. According to CMS officials, in general, these values do not change significantly when they are updated during the fiscal year. Therefore, they believe that using the values effective at the start of the fiscal year is a reasonable assumption.
We excluded indirect medical education adjustments and disproportionate share hospital payments that are part of the IPPS comparable amount rate because, according to CMS, they were unlikely to have much impact for these hospitals.
CMS reviewed each of these assumptions and agreed they were reasonable for purposes of our analysis. CMS also verified that we were correctly applying the formulas for calculating these payments and using the appropriate values from the final rules.
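Expressed as code, the per-discharge logic implied by these assumptions might look like the following sketch in Python. Field names and dollar amounts are hypothetical stand-ins for the claim elements we reviewed; actual determinations involve additional claim codes and adjustments.

```python
# Sketch of the per-discharge logic implied by the assumptions above.
# Field names and dollar amounts are illustrative, not actual claim data.

# Psychiatric or rehabilitation stays, identified by MS-LTC-DRG code.
PSYCH_REHAB_DRGS = {876, 880, 881, 882, 883, 884, 885, 886, 887,
                    894, 895, 896, 897, 945, 946}

# Cost-to-charge ratios effective October 1, 2017.
CCR = {"craig": 0.442, "shepherd": 0.464}

def meets_standard_rate_criteria(drg: int, preceded_by_acute_stay: bool,
                                 icu_days: int, vent_hours: int) -> bool:
    """Apply the qualifying criteria, excluding psych/rehab stays."""
    if drg in PSYCH_REHAB_DRGS:
        return False
    return preceded_by_acute_stay and (icu_days >= 3 or vent_hours >= 96)

def site_neutral_amount(hospital: str, covered_charges: float,
                        ipps_comparable: float) -> float:
    """Lesser of the IPPS comparable amount and the estimated cost of care
    (covered charges times the hospital's cost-to-charge ratio)."""
    estimated_cost = covered_charges * CCR[hospital]
    return min(ipps_comparable, estimated_cost)

# Example with hypothetical values: a discharge with at least 96 hours of
# ventilation qualifies; a non-qualifying discharge at Craig Hospital with
# $80,000 in charges (estimated cost $35,360) and a $30,000 IPPS comparable
# amount is paid the lower IPPS-based amount, as most discharges were.
print(meets_standard_rate_criteria(207, True, 1, 100))  # True
print(site_neutral_amount("craig", 80_000, 30_000))     # 30000
```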
Appendix II: Calculating Medicare Payments for Long-Term Care Hospitals and Inpatient Rehabilitation Facilities
Figures 3 and 4 illustrate the methodology for calculating Medicare payments under the long-term care hospital (LTCH) prospective payment system (PPS) and the inpatient rehabilitation facility (IRF) PPS, respectively, as reported by the Medicare Payment Advisory Commission (MedPAC).
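In simplified terms, and setting aside adjustments such as short-stay outliers and budget-neutrality factors, the LTCH PPS standard-rate calculation those figures depict can be approximated as:

payment per discharge ≈ standard federal rate × (labor-related share × area wage index + nonlabor share) × MS-LTC-DRG relative weight, plus a high-cost outlier payment for qualifying cases.

This is a sketch based on MedPAC's published payment-basics descriptions, not the complete formula; the IRF PPS follows a parallel structure, with the case-mix group and comorbidity tier determining the relative weight.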
Appendix III: List of Common Diagnosis Groups for Long-Term Care Hospitals (LTCH)
In its March 2018 annual report to the Congress, the Medicare Payment Advisory Commission (MedPAC) reported that 20 diagnosis groups accounted for over 61 percent of LTCH discharges at both for-profit and not-for-profit facilities, in fiscal year 2016. Table 5 provides a list of these 20 diagnosis groups.
Appendix IV: GAO Contact and Staff Acknowledgments
GAO Contacts
Acknowledgments
In addition to the contact named above, Will Simerl, Assistant Director; Kathy King; Amy Leone, Analyst-in-Charge; Todd Anderson; Sam Amrhein; LaKendra Beard; Rich Lipinski; Jennifer Rudisill; and Eric Wedum made key contributions to this report. Also contributing were Leia Dickerson, Diona Martyn, Vikki Porter, and Lisa Rogers.

Why GAO Did This Study
The Centers for Medicare & Medicaid Services pays LTCHs for care provided to Medicare beneficiaries. There were about 400 LTCHs across the nation in 2016.
The 21st Century Cures Act included a provision for GAO to examine certain issues pertaining to LTCHs. This report examines (1) the health care needs of Medicare beneficiaries who receive services from the two qualifying hospitals; (2) how Medicare LTCH payment polices could affect the two qualifying hospitals; and (3) how the two qualifying hospitals compare with other LTCHs and other facilities that may treat Medicare patients with similar conditions.
GAO analyzed the most recently available Medicare claims and other data for the two qualifying hospitals and other facilities that treat patients with spinal cord injuries. GAO also interviewed HHS officials and stakeholders from the qualifying hospitals, other facilities that treat spinal cord patients, specialty associations, and others.
GAO provided a draft of this report to HHS. HHS provided technical comments, which were incorporated as appropriate. GAO also provided the two qualifying hospitals summaries of information collected from them, to confirm the accuracy of statements included in the draft report, and incorporated their comments as appropriate.
What GAO Found
Spinal cord injuries may result in secondary complications that often lead to decreased functional independence and quality of life. The 21st Century Cures Act changed how Medicare pays certain long-term care hospitals (LTCH) that provide spinal cord specialty treatment. For these hospitals, the act included a temporary exception from how Medicare pays other LTCHs. Two LTCHs—Craig Hospital in Englewood, Colorado and Shepherd Center in Atlanta, Georgia—have qualified for this exception. GAO found that most Medicare beneficiaries treated at these two hospitals typically receive specialized care for multiple chronic conditions and other long-term complications that develop after initial injuries, such as pressure ulcers that can result in life-threatening infection. The two hospitals also provide specialty care for acquired brain injuries, such as traumatic brain injuries.
GAO's simulations of Medicare payments to these two hospitals using claims data from two baseline years—fiscal years 2013 and 2016—illustrate potential effects of payment policies. LTCHs are paid under a two-tiered system for care provided to beneficiaries: they receive the LTCH standard federal payment rate—or standard rate—for certain patients discharged from the LTCH, and a generally lower rate—known as a “site-neutral” rate—for all other discharges. Under the temporary exception, Craig Hospital and Shepherd Center receive the standard rate for all discharges during fiscal years 2018 and 2019. Assuming their types of discharges remain the same as in fiscal years 2013 and 2016, GAO's simulations of Medicare payments in the baseline years indicate:
Most of the discharges GAO examined would not qualify for the standard rate if the exception did not apply.
Medicare payments would generally decrease under fiscal year 2020 payment policy, once the exception expires.
However, the actual effects of Medicare's payment policies on these two hospitals could vary based on factors, including the severity of patient conditions (e.g., Medicare payment is typically higher for more severe injuries), and whether hospitals' discharges meet criteria for the standard rate.
Similarities and differences may exist between the two qualifying hospitals and other facilities that treat Medicare patients with spinal cord and brain injuries. Patients with spinal cord and brain injuries may receive care in other LTCHs, but GAO found that most Medicare beneficiaries at these other LTCHs are treated for conditions other than spinal cord and brain injuries. Certain inpatient rehabilitation facilities (IRF) also provide post-acute rehabilitation services to patients with spinal cord and brain injuries. While data limitations make a direct comparison between these facilities and the two qualifying hospitals difficult, GAO identified some similarities and differences. For example, officials from some IRFs GAO interviewed reported providing several of the same programs and services as the two qualifying hospitals to medically complex patients, but the availability of services and complexity of patients varied. Among other reasons, the different Medicare payment requirements that apply to LTCHs and IRFs affect the types of services they provide and the patients they treat.
NASA’s mission is to drive advances in science, technology, aeronautics, and space exploration, and contribute to education, innovation, our country’s economic vitality, and the stewardship of the Earth. To accomplish this mission, NASA establishes programs and projects that rely on complex instruments and spacecraft. NASA’s portfolio of major projects ranges from space satellites equipped with advanced sensors to study the Earth to a telescope intended to explore the universe to spacecraft to transport humans and cargo to and beyond low-Earth orbit. Some of NASA’s projects are expected to incorporate new and sophisticated technologies that must operate in harsh, distant environments.
The life cycle for NASA space flight projects consists of two phases— formulation, which takes a project from concept to preliminary design, and implementation, which includes building, launching, and operating the system, among other activities. NASA further divides formulation and implementation into phase A through phase F. Major projects must get approval from senior NASA officials at key decision points before they can enter each new phase. Figure 1 depicts NASA’s life cycle for space flight projects.
Formulation culminates in a review at key decision point C, known as project confirmation, where cost and schedule baselines are established and documented in a decision memorandum. To inform those baselines, each project with a life-cycle cost estimated to be greater than $250 million must also develop a joint cost and schedule confidence level (JCL). The JCL initiative, adopted in January 2009, is a point-in-time estimate that, among other things, includes all cost and schedule elements, incorporates and quantifies known risks, assesses the impacts of cost and schedule to date, and addresses available annual resources. NASA policy requires that projects be baselined and budgeted at the 70 percent confidence level.
The agency baseline commitment established at key decision point C includes cost and schedule reserves held at the project—those within the project manager’s control—and NASA headquarters level. Cost reserves are for costs that are expected to be incurred—for instance, to address project risks—but are not yet allocated to a specific part of the project. Schedule reserves are extra time in project schedules that can be allocated to specific activities, elements, and major subsystems to mitigate delays or address unforeseen risks.
Status of NASA’s Major Telescope Projects
NASA’s current portfolio of major space telescopes includes three projects—WFIRST, TESS, and JWST—that vary in cost, complexity, and phase of the acquisition life cycle. WFIRST, a project that entered the concept and technology development phase and established preliminary cost and schedule estimates in February 2016, is in the earliest stages of the acquisition life cycle. With preliminary cost estimates ranging from $3.2 billion to $3.8 billion, this project is an observatory designed to perform wide-field imaging and survey of the sky at near-infrared wavelengths to answer questions about the structure and evolution of the universe and to expand our knowledge of planets beyond our solar system. The current design includes a 2.4 meter telescope that was built and qualified for another federal agency over 10 years ago; the project is evaluating which components to reuse and which to modify, refurbish, or build new. TESS—a smaller project whose latest cost estimate is approximately $337 million—is targeted to launch in March 2018 and will be used to conduct the first extensive survey of the sky from space for transiting exoplanets.
And finally, JWST, with a life-cycle cost estimate of $8.835 billion, is one of NASA’s most complex projects and top priorities. The telescope is designed to help understand the origin and destiny of the universe, the creation and evolution of the first stars and galaxies, and the formation of stars and planetary systems. With a 6.5-meter primary mirror, JWST is expected to operate at about 100 times the sensitivity of the Hubble Space Telescope. JWST’s science instruments are to detect very faint infrared sources and, as such, are required to operate at extremely cold temperatures. To help keep these instruments cold, a multi-layered tennis-court-sized sunshield is being developed to protect the mirrors and instruments from the sun’s heat.
We have reported for several years on the JWST project, which has experienced significant cost increases and schedule delays. Prior to being approved for development, cost estimates for JWST ranged from $1 billion to $3.5 billion, with expected launch dates ranging from 2007 to 2011. Before 2011, early technical and management challenges, contractor performance issues, low levels of cost reserves, and poorly phased funding levels caused JWST to delay work after confirmation, which contributed to significant cost and schedule overruns, including launch delays. In June 2010, the Chair of the Senate Subcommittee on Commerce, Justice, Science, and Related Agencies asked NASA for an independent review of JWST. In response, NASA commissioned the Independent Comprehensive Review Panel, which issued its report in October 2010. The panel concluded that JWST was executing well from a technical standpoint, but that the baseline cost estimate did not reflect the most probable cost with adequate reserves in each year of project execution, resulting in an unexecutable project.
Following this review, Congress in November 2011 placed an $8 billion cap on the formulation and development costs for the project, and NASA rebaselined JWST with a life-cycle cost estimate of $8.835 billion that included additional money for operations and a planned launch in October 2018. The new baseline represented a 78 percent increase in the project’s life-cycle cost from the original baseline and a 52-month delay in the launch date. The revised life-cycle cost estimate included a total of 13 months of funded schedule reserve.
Our ongoing work indicates that these three projects are each making progress in line with their phase of the acquisition cycle, but also face challenges in execution. Some of these challenges are unique to the projects themselves and some are common among the projects in NASA’s portfolio. For example, when projects enter the integration and test phase, unforeseen challenges can arise and affect the cost and schedule for the project. Table 1 provides more details about the current acquisition phase, cost, and schedule status of NASA’s major space telescope projects based on our ongoing work.
WFIRST. NASA’s preliminary cost and schedule estimates for the WFIRST project are currently under review as the project responds to findings in the WFIRST Independent External Technical/Management/Cost Review. This independent review was conducted to ensure the mission’s scope and required resources are well understood and executable. NASA initiated this review in April 2017 to address the National Academies’ concerns that WFIRST cost growth could endanger the balance of NASA’s astrophysics program and negatively affect other scientific priorities. The review found that the mission scope is understood but not aligned with the resources provided, and concluded that the mission is not executable without adjustments, additional resources, or both. For example, the study team found that NASA’s current forecasted funding profile for the WFIRST project would require the project to slow down activities starting in fiscal year 2020, which would result in an increase in development cost and schedule. NASA agreed with the study team’s results and directed the project to reduce the cost and complexity of the design in order to maintain costs within the $3.2 billion cost target.
The project is currently identifying potential ways to reduce the scope of planned activities (called “descopes”), assessing the science impact of those descopes, and then developing recommendations for the Astrophysics Division leadership. One descope that may be considered is the requirement for WFIRST to be “star-shade ready,” meaning the design must be compatible with a star-shade device positioned between the telescope and the star being observed to block out starlight while allowing through the light emitted by the planet.
TESS. The TESS project is currently holding cost and schedule reserves consistent with NASA center requirements, but there are no longer headquarters-held cost reserves to cover a delay if the project cannot launch as planned in March 2018. According to a project official, the project is holding 16 days of schedule reserve to its target March 2018 launch readiness date, which includes 6 days for the completion of integration and test, and 10 days for launch operations. The project previously used schedule reserves to accommodate the delayed delivery of its Ka-band transmitter, which is essential for TESS because it transmits the mission data back to Earth; the delivery was delayed by continued performance and manufacturing issues. The two main risks to the March 2018 launch date are (1) that SpaceX may require additional time past December 2017 for NASA’s Launch Services Program to certify that TESS can fly on its upgraded launch vehicle—certification is necessary because this will be the first time that NASA uses this version of the vehicle—and (2) that issues may be identified during the remainder of environmental testing.
The project is also conducting additional testing on its spare camera at temperatures seen in space to better understand expected camera performance on orbit. TESS will use four identical, wide field-of-view cameras to conduct the first extensive survey of the sky from space for transiting exoplanets. However, during thermal testing, the project found that the substance attaching the lenses to the camera barrel places pressure on the lenses and causes the cameras to be slightly out of focus. In June 2017, NASA directed the project to proceed with integrating the cameras because they are expected to meet TESS’s top-level science requirements even with the anomaly. At its most recent key decision review in August 2017, NASA reallocated $15 million of TESS’s headquarters-held reserves to the WFIRST project. While this had the effect of decreasing life-cycle costs for TESS, it also increased risk because the project no longer has any additional headquarters-held cost reserves to cover a launch delay past March 2018.
JWST. The JWST project continues to make progress toward launch, but the program is encountering technical challenges that require both time and money to fix and may lead to additional delays beyond the recently announced delay. While the project has made much progress on hardware integration and testing over the past several months, it also used all of its remaining schedule reserves to address various technical issues, particularly on the spacecraft element. In September 2017, the JWST project requested from the European Space Agency—which will contribute the Ariane V launch vehicle—a launch window from March to June 2019, or 5 to 8 months later than the planned October 2018 launch readiness date established in 2011. The project based this request on the results of a schedule risk assessment that incorporated inputs from the contractor on expected durations of ongoing spacecraft element integration work and other challenges that were expected to extend the schedule.
With the later launch window to June 2019, the project expected to have up to 4 months of new schedule reserves. However, shortly after requesting the revised launch window, the project learned from its contractor that up to another 3 months of schedule reserve use is likely, due to lessons learned from conducting deployment exercises of the sunshield, such as reach and access limitations on the flight hardware. As a result, and pending further examination of the schedule, the project now has approximately one month of schedule reserve to complete environmental testing of the spacecraft element and the final integration phase, in which the instruments and telescope will be integrated with the spacecraft and sunshield to form the completed observatory. As I previously noted, our work has shown that integration and test is the riskiest phase of development, where problems are most likely to be found and schedules slip. Given the risks associated with the integration and test work ahead, coupled with a level of schedule reserves that is currently well below the level stated in the procedural requirements issued by the NASA center responsible for managing JWST, additional delays to the project’s revised launch readiness date of June 2019 are likely. As a result, the funding available under the Congressional cost cap of $8 billion may be inadequate, because the contractor will need to retain higher workforce levels for longer than expected to prepare the mission for a delayed launch.
Lessons Learned from NASA Acquisitions
As Congress, NASA, and the science community consider future telescope efforts, it will be exceedingly important to shape and manage new programs in a manner that minimizes cost overruns and schedule delays. This is particularly important for the largest programs as even small cost increases can have reverberating effects. NASA’s telescope and other science projects will always have inherent technical, design, and integration risks because they are complex, specialized, and often push the state of the art in space technology. But too often, our reports find that management and oversight problems—which can include poor planning, optimistic cost estimating, funding gaps, lax oversight, and poor contractor performance, among other issues—are the real drivers behind cost and schedule growth.
To its credit, NASA has taken significant steps, partly in response to our past recommendations, to reduce acquisition risk from both a technical and management standpoint, including actions to enhance cost and schedule estimating, provide adequate levels of reserves to projects, establish better processes and metrics to monitor projects, and expand the use of earned value management to better monitor contractor performance. For example, in November 2012, we found that NASA employee skill sets available to analyze and implement earned value management vary widely from center to center, and we recommended that NASA conduct an earned value management skills gap analysis to identify areas requiring augmented capability across the agency, and, based on the results of the assessment, develop a workforce training plan to address any deficiencies. NASA concurred with this recommendation and developed an earned value management training plan in 2014 based on the results of an earned value management skills gap analysis that was conducted in 2013. Moreover, in recent years, we have found that many of the projects within the agency’s major project portfolio have improved their cost and schedule performance. Nevertheless, the extent to which NASA has adopted some of the following lessons learned within its portfolio of major projects is mixed, and NASA has an opportunity to strengthen its program management of major acquisitions, including its space telescopes, by doing so.
Manage Cost and Schedule Performance for Large Projects to Limit Implications for Entire Portfolio. In 2013, following JWST’s cost increases and schedule growth, we found that though cost and schedule growth can occur on any project, increases associated with NASA’s most costly and complex missions can have cascading effects on the rest of the portfolio. For example, we found that the JWST cost growth would have reverberating effects on the portfolio for years to come and required the agency to identify $1.4 billion in additional resources over fiscal years 2012 through 2017, according to Science Mission Directorate officials. NASA identified approximately half of this required funding from the four science divisions within the Science Mission Directorate account. The majority of the cuts were related to future high priority missions, missions in the operations and sustainment phase, and research and analysis.
In essence, NASA had to mortgage future high priority missions and research to address JWST’s additional resource needs. Similarly, the National Academy of Sciences has concluded in the past that it is important for NASA to have a clearly articulated and consistently applied method for prioritizing why and how its scarce fiscal resources are apportioned with respect to the science program in general and on a more granular level among component scientific disciplines. The academy noted that failure to do so could result in a loss of capacity, capability, and human resources in a number of scientific disciplines and technological areas that may take a generation or more to reconstitute once eliminated. NASA’s establishment of the WFIRST Independent External Technical/Management/Cost Review that I previously discussed is a step in the right direction to help ensure the Astrophysics Division incorporates this lesson learned.
Establish Adequate Cost and Schedule Reserves to Address Risks. Twice in the history of the JWST program, independent reviewers found that the program’s planned cost reserves were inadequate. First, in April 2006, an Independent Review Team confirmed that the project’s technical content was complete and sound, but expressed concern over the project’s reserve funding, reporting that it was too low and phased in too late in the development life cycle. The review team reported that for a project as complex as JWST, 25 to 30 percent total reserve funding was appropriate. The team cautioned that low reserve funding compromised the project’s ability to resolve issues, address risk areas, and accommodate unknown problems. As I previously mentioned, following additional cost increases and schedule threats, NASA commissioned the Independent Comprehensive Review Panel. In 2010, the panel again concluded that JWST was executing well from a technical standpoint, but that the baseline cost estimate did not reflect the most probable cost with adequate reserves in each year of project execution, resulting in an unexecutable project.
NASA heeded these lessons when it established a new baseline for JWST in 2011. For example, the revised schedule included more reserves than required by the procedural requirements issued by the NASA center responsible for managing JWST. We have found, however, that NASA has not applied this lesson learned to all of its large projects— most notably with its human spaceflight projects, including the Space Launch System, Orion Crew Capsule, and associated ground systems— and similar outcomes to the JWST project have started to emerge with these projects. We previously reported that all three of these programs were operating with limited cost reserves, which limited each program’s ability to address risks and unforeseen technical challenges.
For example, we found in July 2016 that the Orion program planned to maintain very low levels of annual cost reserves until 2018. The lack of available cost reserves in the near term led the program to defer work to address technical issues to stay within budget, and put the program’s future cost reserves at risk of being overwhelmed by deferred work. In April 2017, we also found that all three programs faced development challenges in completing work, and each had little to no schedule reserve remaining to the launch date—meaning they would have to complete all remaining work with minimal delay during the most challenging stage of development. We found that it was unlikely that the programs would achieve the planned launch readiness date and recommended that NASA reassess the date. NASA agreed with this recommendation and stated that it would establish a new launch readiness date. In November 2017, NASA announced that a review of the possible manufacturing and production schedule risks indicated a launch date of June 2020—a delay of 19 months—but that the agency will manage to a December 2019 launch date because, according to NASA, it has put mitigation strategies in place for those risks. We will follow up on those mitigation strategies as part of future work on the human space exploration programs.
Regularly and Consistently Update Project JCLs to Provide Realistic Estimates to Decision Makers. In 2009, NASA began requiring that programs and projects with estimated life-cycle costs greater than $250 million develop a JCL prior to project confirmation. This was a positive step for NASA to help ensure that cost and schedule estimates are realistic and projects are thoroughly planning for anticipated risks. This is because a JCL assigns a confidence level, or likelihood, of a project meeting its cost and schedule estimates. Our cost estimating best practices recommend that cost estimates should be updated to reflect changes to a program or be kept current as a program moves through milestones. As new risks emerge on a project, an updated cost and schedule risk analysis can provide realistic estimates to decision-makers, including the Congress. This is especially true for NASA’s largest projects as updated estimates may require the Congress to consider a variety of actions.
However, there is no requirement for NASA projects to update their JCLs, and our prior work has found that projects—including JWST—do not regularly update cost risk analyses to take into account newly emerged risks. Our ongoing work indicates that of the 16 major projects currently in NASA’s portfolio that have developed JCL estimates, only 2 have reported updating their JCLs (other than as required due to a rebaseline). For example, the Interior Exploration using Seismic Investigations, Geodesy, and Heat Transport Project (InSight), a Mars lander, updated its JCL after the project missed its committed launch date. As a result, the project was able to provide additional information to decision makers about the probability that it will meet its revised cost and schedule estimates. As a project reaches the later stages of development, especially integration and testing, the types of risks the project will face may change. An updated project JCL would provide both project and agency management with data on relevant risks that can guide project decisions. For example, in December 2012, we recommended that the JWST project update its JCL. NASA concurred with this recommendation; however, we recently closed the recommendation because NASA had not taken steps to implement it and the amount of time remaining before launch would not have allowed the benefit of implementing the recommendation to be realized. An updated JCL might have provided early warning of the current schedule delays, which the project could then have addressed proactively.
Enhance Oversight of Contractors to Improve Project Outcomes. In December 2012, we found that the JWST project had taken steps to enhance communications with and oversight of its contractors. According to project officials, the increased communication allowed them to better identify and manage project risks by having more visibility into contractors’ activities. The project reported that a great deal of communication existed across the project prior to the Independent Comprehensive Review Panel; however, additional improvements were made. For example, the project increased its presence at contractor facilities as necessary to provide assistance; this included assigning two engineers on a recurring basis at a Lockheed Martin facility to assist in solving problems with an instrument. The JWST project also assumed full responsibility for the mission system engineering functions from Northrop Grumman in March 2011. NASA and Northrop Grumman officials both said that NASA is better suited to perform these tasks.
We continue to see instances in our ongoing work that highlight the importance of implementing this lesson learned from JWST. For example, we found in 2017 that the Space Network Ground Segment Sustainment project—a project that plans to develop and deliver a new ground system for one Space Network site that provides essential communications tracking services to NASA and non-NASA missions—exceeded its original cost baseline by at least $401.7 million and had been delayed by 27 months. The project has attributed some of the cost overruns and schedule delays to the contractor’s incomplete understanding of its requirements, which led to poor contractor plans and late design changes. The project also took steps to assign a new NASA project manager, increase physical presence at the contractor facility, and have more staff focused on validation and verification activities.
In summary, NASA continues to make progress developing its space telescopes to help understand the universe and our place in it. But as with other major projects that NASA is developing, there continues to be an opportunity for NASA to learn from JWST and other projects that have suffered cost overruns and schedule delays. The key project management tools and prior GAO recommendations that I have highlighted here today could help to better position these large, complex, and technically challenging efforts for a successful outcome. We look forward to continuing to work with NASA and this subcommittee in addressing these issues.
Chairman Babin, Ranking Member Bera, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time.
GAO Contact and Staff Acknowledgments
If you or your staff have any questions about this testimony, please contact Cristina T. Chaplain, Director, Acquisition and Sourcing Management at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. GAO staff who made key contributions to this statement include Molly Traci, Assistant Director; Richard Cederholm, Assistant Director; Carrie Rogers; Lisa Fisher; Laura Greifner; Erin Kennedy; and Jose Ramos.
This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.

Why GAO Did This Study
Acquisition management has been a long-standing challenge at NASA, although GAO has reported on improvements the agency has made in recent years. Three space telescope projects are the key enablers for NASA to achieve its astrophysics science goals, which include seeking to understand the universe. In its fiscal year 2018 budget request, NASA asked for about $697 million for these three projects, which represents over 50 percent of NASA's budget for its major astrophysics projects. In total, these projects represent an expected investment of at least $12.4 billion.
This statement reflects preliminary observations on (1) the current status and cost of NASA's major telescope projects and (2) lessons learned that can be applied to NASA's management of its telescope projects. This statement is based on GAO's ongoing work on JWST and on the status of NASA's major projects; reports from both efforts are planned for publication in spring 2018. This statement is also based on past GAO reports on JWST and NASA's acquisitions of major projects, as well as information provided by NASA.
What GAO Found
The National Aeronautics and Space Administration's (NASA) current portfolio of major space telescopes includes three projects that vary in cost, complexity, and phase of the acquisition life cycle.
GAO's ongoing work indicates that these projects are each making progress in line with their phase of the acquisition cycle but also face some challenges. For example, the current launch date for the James Webb Space Telescope (JWST) project reflects a 57- to 60-month delay from the project's original schedule. GAO's preliminary observations indicate this project still has significant integration and testing to complete, with very little schedule reserve remaining to account for delays. Therefore, additional delays beyond the recently announced delay of up to 8 months are likely, and funding available under the $8 billion Congressional cost cap for formulation and development may be inadequate.
There are a number of lessons learned from its acquisitions that NASA could consider to increase the likelihood of successful outcomes for its telescope projects, as well as for its larger portfolio of projects, such as its human spaceflight projects. For example, twice in the history of the JWST program, independent reviews found that the program was not holding adequate cost and schedule reserves. GAO has found that NASA has not applied this lesson learned to all of its large projects, and similar outcomes to JWST have started to emerge. For example, NASA did not incorporate this lesson with its human spaceflight programs. In July 2016 and April 2017, GAO found that these programs were holding inadequate levels of cost and schedule reserves to cover unexpected cost increases or delays. In April 2017, GAO recommended that NASA reassess the date of the programs' first test flight. NASA concurred and, in November 2017, announced a launch delay of up to 19 months.
What GAO Recommends
GAO is not making any recommendations in this statement, but has made recommendations in prior reports to strengthen NASA's acquisition management of its major projects. NASA has generally agreed with GAO's recommendations and taken steps to implement them. |
Background
CFIUS Overview
In 1975, an executive order established CFIUS to monitor the impact of and coordinate U.S. policy on foreign investment in the United States. In 1988, Congress enacted the Exon-Florio amendment adding section 721 to the Defense Production Act of 1950. The amendment authorized the President to investigate the impact of certain foreign acquisitions of U.S. companies on national security and to suspend or prohibit acquisitions that might threaten to impair national security. FINSA further amended the Defense Production Act, established CFIUS as it currently exists, and guides the committee. One of the purposes of FINSA’s enactment was to ensure national security while promoting foreign investment and the creation of U.S. jobs.
CFIUS reviews transactions involving a large variety of countries and industry sectors to determine if such transactions pose a threat to national security and whether the transactions should be allowed to proceed (for more information on the characteristics of transactions reviewed by CFIUS, see app. II). FINSA does not formally define national security, but provides a number of factors for CFIUS and the President to consider in determining whether a transaction poses a risk. These factors include the potential national security-related effects on U.S. critical technologies and whether the transaction could result in the control of a U.S. business by a foreign government. CFIUS also may consider other factors that it finds appropriate in determining whether a transaction poses a national security risk (for a full list of factors, see app. III).
Under FINSA, CFIUS is chaired by the Secretary of the Treasury and includes voting members from the Departments of Commerce, Defense, Energy, Homeland Security, Justice, and State; and the Offices of the U.S. Trade Representative and Science and Technology Policy. In addition, the Office of the Director of National Intelligence (ODNI) and the Department of Labor (DOL) are nonvoting ex officio members. Various other White House offices also observe and, as appropriate, participate in CFIUS activities (see fig. 1). CFIUS may also solicit perspectives and expertise from nonmember agencies, such as the Department of Agriculture, designating them as members for purposes of reviewing particular transactions, as appropriate; such participation can include negotiating or imposing mitigation measures or referring the transaction to the President for decision. The committee, which meets weekly at a staff level, generally has three core functions: (1) review transactions that have been submitted to the committee and take action as necessary to address any national security concerns; (2) monitor and enforce compliance with mitigation measures; and (3) identify transactions of concern that have not been notified to CFIUS for review.
The Secretary of the Treasury, as the chair of CFIUS, is responsible for a number of tasks. According to Department of the Treasury (Treasury) officials, these tasks, including coordinating operations of the committee, facilitating information collection from parties to a transaction, reviewing and sharing data on mergers and acquisitions with member agencies, and managing CFIUS timeframes, are carried out by Treasury employees specifically staffed to support CFIUS. Treasury also communicates on the committee’s behalf with the parties, members of Congress, and the general public. When necessary, Treasury is responsible for delivering the committee’s recommendation that the President should suspend or prohibit a transaction.
Selecting Transactions for CFIUS Review
In examining covered transactions, CFIUS members seek to identify and address, as appropriate, any national security concerns that arise as a result of the transaction. According to the FINSA amendment, a “covered” transaction is defined as any merger, acquisition, or takeover by or with any foreign person that could result in foreign control of any person engaged in interstate commerce in the United States. CFIUS reviews “notices” that have been submitted—or notified—to the committee by parties to transactions. Notices to CFIUS contain information about the nature of the transaction and the parties involved. According to guidance on the Treasury website, with limited exceptions, a transaction receives “safe harbor” when CFIUS has completed its review and determines that the transaction may proceed. According to Treasury officials, safe harbor provides the parties to the transaction some certainty that CFIUS and the President will not subject the transaction to review again.
FINSA does not require that parties notify CFIUS of a transaction; however, CFIUS may choose to initiate a review of any covered transaction. Transactions that have not been notified to CFIUS for review are known as “non-notified transactions.” According to member agency officials, Treasury and several other member agencies have processes for identifying non-notified transactions for CFIUS to potentially review. For instance, Treasury staff compile data on mergers and acquisitions and distribute information about potential non-notified transactions to member agencies for review. In addition, according to member agency officials, in 2010 the Federal Bureau of Investigation (FBI) established a working group, now called Project Iceberg, which is responsible for identifying and understanding counterintelligence threats posed by foreign investments that have not been notified to CFIUS. The working group holds monthly meetings that intelligence agencies as well as CFIUS member agencies are invited to attend. In the absence of voluntary reporting by the parties involved or independent discovery of the transaction, it is possible that CFIUS may not review a covered transaction that could pose a risk to national security.
CFIUS Process for Reviewing Notified Transactions
Based on information including FINSA and regulations, the CFIUS process for reviewing transactions that have been notified to the committee comprises up to four stages: pre-notice consultation, national security review, national security investigation, and presidential action. CFIUS reviews each transaction individually, with a focus on the aspects of the transaction that could pose a risk. For each transaction reviewed, the committee identifies agencies with relevant expertise to act as co-lead with Treasury to guide the transaction through the CFIUS process. CFIUS reviews are confidential and protected from public disclosure. A CFIUS review could be concluded when CFIUS members reach consensus about whether the transaction should be allowed to proceed, including on the basis of mitigation, if necessary, or when the parties withdraw their notice, whether for commercial reasons or in light of CFIUS’s national security concerns. Absent one of these conclusions to a CFIUS review, the committee may send the transaction to the President, with a recommendation that the President suspend or prohibit it. See figure 2 for an overview of the steps that comprise the CFIUS process for reviewing selected transactions.
Before a transaction is reviewed by CFIUS, Treasury may conduct a pre-notice consultation with parties to a notified transaction. Upon request, Treasury and other agencies meet with the parties, provide informal guidance on the CFIUS review process, and may review early drafts of the notice.
Once the parties have developed the final draft, they submit it to the committee for review. When Treasury, with input from member agencies, determines that the notice of the transaction is complete, the official CFIUS review of the transaction commences.
CFIUS conducts a national security review of each notified transaction, which includes determining whether it is a covered transaction and developing a national security threat assessment. The national security review lasts up to 30 days and begins the day after Treasury determines the filing is complete and circulates the filing to CFIUS member agencies. At the beginning of the national security review, CFIUS identifies co-lead agencies. According to Treasury officials, typically within the first 10 to 12 days of the national security review, CFIUS develops a “covered transaction analysis,” which determines whether the transaction is a covered transaction according to FINSA. According to Treasury officials, there typically is consensus among voting members on whether the transaction is a covered transaction. During the national security review, CFIUS also assesses whether there is credible evidence that the foreign party in control of that U.S. business might take action to impair the national security of the United States as well as whether the covered transaction is a foreign government-controlled transaction. Concurrently, ODNI develops a national security threat assessment, with input and support from the intelligence community, to be completed during the first 20 days of the national security review.
If CFIUS finds that the covered transaction does not present national security risks or that other provisions of law provide adequate and appropriate authority to address the risks, CFIUS may end its review. If CFIUS chooses to conclude its review at this point, CFIUS is to advise the parties in writing that the transaction has been cleared and allowed to proceed. According to information provided by Treasury, CFIUS has historically concluded action on the majority of transactions during or at the end of the 30-day national security review. The committee’s determination must be certified to specified members of Congress after the review is completed. However, if at the end of the national security review, CFIUS has not yet determined that there are no unresolved national security concerns and the committee requires additional time, CFIUS may proceed to a national security investigation, which must be completed within 45 days.
If, during the 45-day national security investigation, CFIUS identifies an unresolved national security concern, it works with the parties to mitigate, if appropriate, any national security risks that may exist. If an agency identifies an unresolved national security concern, the agency develops an analysis of the potential risks posed by the covered transaction, includes a recommendation for action, such as mitigation measures or referral to the President, and shares this analysis with other members of the committee. Mitigation measures may include ensuring that only authorized persons have access to certain technologies, information, or facilities, or providing the U.S. government the right to review certain business decisions and to object if the decisions raise national security concerns. According to Treasury officials, CFIUS member agencies aim for mitigation that would be effective, can be monitored, and would be enforceable. If there is a difference of opinion among CFIUS member agencies about the level or type of mitigation that should be utilized, CFIUS agencies discuss the matters to reach consensus.
In some cases, parties may choose to withdraw and resubmit the notice. If CFIUS has determined that national security concerns cannot be mitigated, according to Treasury officials, CFIUS typically advises the parties that the committee will refer the matter to the President for decision. According to Treasury officials, parties have the opportunity to withdraw and resubmit the notice if they need additional time to discuss CFIUS’s concerns or to present additional information or mitigation proposals for CFIUS’s consideration. Sometimes parties choose to withdraw and abandon the transaction if, for instance, CFIUS proposes mitigation measures that the parties choose not to accept. Parties may also abandon the transaction for commercial reasons unrelated to the CFIUS review. If parties choose to withdraw and resubmit a transaction, the national security review begins again, and the committee has another 75 days to complete the review of the transaction.
If CFIUS obtains consensus from committee members that there are no unresolved national security concerns or the national security concerns have been mitigated, the national security investigation ends, and the covered transaction receives safe harbor. Treasury and the co-lead agency send written certification to specified members of Congress that there are no unresolved national security concerns. However, if the committee concludes that a proposed foreign investment threatens to impair the U.S. national security and the threat cannot be mitigated, CFIUS will elevate the notice to the President for determination and CFIUS may recommend that the President suspend or prohibit the transaction. According to Treasury officials, parties may also withdraw their notice at this point rather than have the President decide whether to block the transaction.
If, at the end of the national security investigation, CFIUS elevates a transaction to the President for determination, the President has 15 days from the completed investigation to decide to prohibit or suspend the acquisition, or to take no action. Only four transactions reviewed by CFIUS have been the subject of a presidential prohibition since the committee was established in 1975.
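To make the statutory clock described above concrete, the following minimal sketch computes the latest-possible milestone dates for a single filing. It assumes simple calendar-day counting from the date Treasury deems the filing complete; the statute's actual day-counting rules may differ, and a withdrawal and resubmission restarts the review, so the dates shown are illustrative only.

```python
from datetime import date, timedelta

def cfius_outer_dates(filing_complete: date):
    """Latest-possible milestones for one CFIUS filing (calendar days assumed)."""
    review_start = filing_complete + timedelta(days=1)   # review begins the day after
    review_end = review_start + timedelta(days=29)       # 30-day national security review
    investigation_end = review_end + timedelta(days=45)  # optional 45-day investigation
    decision_deadline = investigation_end + timedelta(days=15)  # 15 days for the President
    return review_end, investigation_end, decision_deadline

review, investigation, decision = cfius_outer_dates(date(2018, 1, 2))
print("National security review ends by:", review)
print("National security investigation ends by:", investigation)
print("Presidential decision due by:", decision)
```

In practice, as noted above, the committee concludes action on most transactions during or at the end of the 30-day review, so the later deadlines apply only to the minority of cases that raise unresolved national security concerns.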
Stakeholders Have Concerns about the Increased CFIUS Workload but Treasury Has Not Coordinated Staffing Level Assessments
CFIUS has experienced an increase in workload in recent years, but Treasury, as CFIUS lead, has not coordinated member agency efforts to better understand staffing levels needed to complete core committee functions. According to CFIUS member agency officials, the volume of transactions notified to the committee and the complexity of CFIUS reviews in terms of technology, transaction structure, and national security concerns have increased substantially from 2011 through 2016, while CFIUS staffing levels have experienced a modest increase during the same time period. Member agency officials stated that CFIUS is able to review all transactions that have been voluntarily notified to the committee. However, many stakeholders, including most member agency officials and several external experts, expressed concerns that CFIUS member agencies were limited in their ability to complete other CFIUS functions, such as identifying non-notified transactions. In addition, agency officials were unsure if they would have sufficient staff if the CFIUS workload were to continue to increase. Standards for Internal Control in the Federal Government states that management should establish the organizational structure necessary to achieve its objectives and periodically evaluate this structure. Treasury has not coordinated member agency efforts to better understand the staffing levels needed to complete the current and future workload associated with core functions of the committee.
The Volume of Covered Transactions CFIUS Reviewed Increased between 2011 and 2016
Although the number declined in one year during this period, overall the number of covered transactions that CFIUS reported it reviewed increased from 111 transactions in 2011 to 172 transactions in 2016, or almost 55 percent (see table 1). In 2017, CFIUS reviewed 238 transactions, according to Treasury officials. According to member agency officials, the increased volume of covered transactions resulted in increased work for all CFIUS members, no matter which agency is the co-lead, because each member agency must review each transaction notified to the committee.
The number of reported covered transactions requiring national security investigations almost doubled during this same period, increasing from 40 transactions in 2011 to 79 transactions in 2016. Treasury officials told us that they estimated that the total number of transactions that proceeded to national security investigations was greater in 2017 than it was in 2016. They said that the increase in the number of covered transactions that require a national security investigation is another indication that the committee’s workload has increased. One Treasury official noted that the number of times that parties withdraw and resubmit transactions can increase the workload of the committee as it must review the transaction each time it is submitted.
Additionally, the number of reported covered transactions that include mitigation measures has increased. Each year, CFIUS places mitigation measures on a relatively small number of covered transactions. For example, according to Treasury officials, 18 (roughly 10 percent) of 172 transactions the committee reviewed in 2016 resulted in mitigation measures. According to member agency officials, mitigation measures rarely expire; thus, the number of these measures increases over time, as does the accompanying workload for co-lead agencies tasked with overseeing the measures.
The Increased Complexity of CFIUS Reviews Has Increased CFIUS Workload
Officials from CFIUS member agencies stated that the complexity of CFIUS reviews in terms of technology, transaction structure, and national security concerns has increased in recent years. They said that additional time and staff have been required to address this rise in complexity and to complete these reviews. For instance, one member agency official told us that reviews of transactions from parties whose companies use new and emerging technologies, such as artificial intelligence and robotics, typically require input from agency subject matter experts to help the committee understand how, if at all, the acquisition of these technologies by foreign parties could create national security concerns.
According to member agency officials, the amount of time and number of staff needed to review a transaction can fluctuate greatly based on, among other things, the technology involved. One agency official said that 6 of their employees, on average, are involved in reviewing a less complex transaction, but up to 15 employees may be necessary to complete the review if the technology involved is more complicated. The number of agency staff involved can increase further if senior level management is required to participate in the review. This official also stated that most of the transactions reviewed in the past were from sectors that agency officials were familiar with and involved more predictable issues, but recently, transactions more frequently involved complex technology, which required additional expertise. Officials from another member agency stated that a majority of their staff involved in reviewing transactions do not have CFIUS as a primary duty and that their agency has reallocated resources to address the increased case load. One Treasury official stated that one case was so complex that it required one staff member to dedicate all of their time to its review, and the other responsibilities of this employee had to be shifted to other members of the staff.
Additionally, according to member agency officials, reviews of transactions involving technologies the government frequently uses have increased, requiring additional time and staff to understand how this technology affects various agencies. For instance, member agency officials said that reviewing transactions involving semiconductors, which are commonly used in an array of products used by the government, typically requires additional time and staff because CFIUS member agencies must understand, among other things, how the approval of a transaction could affect systems across government agencies.
According to CFIUS member agency officials, the structures of the transactions the committee reviews have also become more complex, requiring more time and staff to assess. For example, business arrangements—such as complex corporate arrangements, joint ventures, loan arrangements, nondisclosure agreements, and memoranda of understanding—may require the work of additional staff. Treasury officials also stated that these arrangements can make it more difficult to determine whether the transaction is covered by CFIUS authorities, as there may be commercial relationships that affect the parties’ decision-making. According to Treasury officials, such arrangements can also increase the complexity of the national security review, as they may create additional “indirect threats” that must also be analyzed.
Member agency officials explained that it has become more challenging to identify the ultimate beneficial owners—the persons who ultimately own and control a company—due to the structure of the transaction. According to Treasury officials, in certain countries, it can be difficult to distinguish between control by a private entity and control by a state entity due to the various relationships created by the transaction structure. In these cases, CFIUS often requires additional information from the parties in such transactions before the national security review can begin. Member agency officials stated that they had been encountering these arrangements more frequently, and additional time and staff had been required to examine the national security implications of these transactions.
Finally, the nature of the national security concerns the committee considers has expanded beyond the traditional threats, requiring more time and staff to assess them, according to member agency officials. National security concerns include traditional ones, such as threats to U.S. critical infrastructure. Emerging concerns include the possibility of a foreign entity obtaining access to personally identifiable information that, if disclosed, could be exploited for purposes that have national security consequences or the proximity of property to areas considered sensitive by the U.S. government.
Treasury Has Not Coordinated Assessments of Staffing Levels Needed to Complete CFIUS Core Functions
According to agency officials, the number of staff assigned to CFIUS activities has not kept pace with the increase in covered transactions reviewed by CFIUS. According to one Treasury official, the more an agency is required to act as co-lead, the more time and staff are needed of the agency. After Treasury, which acts as co-lead on every review, the Departments of Defense (DOD), Energy (DOE), and Homeland Security (DHS) acted as co-lead on the largest number of CFIUS reviews in 2016 (see table 2).
According to information provided by member agency officials, CFIUS saw a modest increase in staff assigned to CFIUS activities since 2011, with Treasury, DOD, DOE, DHS, and State adding a few staff, while staffing levels did not rise at the other member agencies. The total number of staff assigned to CFIUS activities increased from 82 in 2011 to 91 in 2016, an increase of 11 percent. During that same period, covered transactions reviewed by CFIUS increased from 111 transactions in 2011 to 172 transactions in 2016, an increase of almost 55 percent (see fig. 3).
Member agency officials stated that the number of staff assigned to work on CFIUS activities may fluctuate throughout the year based on the committee’s work. For example, as previously discussed, CFIUS member agencies may rely on experts with other responsibilities throughout each agency to provide assistance with the review as the need arises. For instance, in fiscal year 2016, DOE had four staff dedicated to CFIUS, but one DOE official said he reaches out to relevant subject matter experts, who have other responsibilities, to provide input on transactions within their area of expertise.
Treasury officials stated that staff have been able to review the number of transactions that have been voluntarily notified to CFIUS to date. One Treasury official said that, despite the increase in the number of transactions reviewed by CFIUS, the committee has almost always provided a determination to the parties within the timeframes required as to whether the covered transaction should be allowed to proceed or blocked by the President. Further, Treasury officials stated that despite staff constraints, CFIUS has, as needed, appropriately mitigated the national security concerns for the transactions the committee has approved.
However, several member agency officials and external experts expressed concerns that, due to staff constraints, CFIUS member agencies were limited in their ability to complete other CFIUS functions, such as monitoring mitigation measures and identifying non-notified transactions. First, the time and staff necessary to monitor mitigation measures varies. For instance, according to one member agency official, some mitigation measures require daily monitoring from officials, while other mitigation measures require only the review of an annual report submitted by parties to the transaction. Several member agency officials acknowledged that they have fewer staff than they would like to devote to monitoring mitigation measures.
Second, these member agency officials also said that they are not able to devote the amount of time they would like to the task of identifying non-notified transactions. CFIUS member agencies review data on mergers and acquisitions to identify non-notified transactions of concern, those that have not been notified to CFIUS for review. In recent years, according to agency officials, CFIUS has seen an increase in the number of non-notified transactions CFIUS could potentially review. One official indicated that in 2016, their agency examined 2,683 potential non-notified transactions, an increase of roughly 38 percent from 2014. Member agency officials stated that because non-notified transactions are frequently reviewed after the acquisition has been completed, the process of mitigating potential national security concerns of non-notified transactions can be difficult. Several member agency officials suggested that they would like to devote more time to examining non-notified transactions, but staff constraints limit the amount of time agencies can spend conducting this task.
Several member agency officials said that they do not know if current staffing levels would be able to address a further increase in CFIUS workload. Treasury officials noted that the volume of transactions reviewed by CFIUS will likely continue to increase. Moreover, congressional bills have been introduced that, if enacted, would alter the CFIUS process. As discussed later in this report, agency officials stated that some of these potential changes would likely further increase CFIUS workload. According to several CFIUS member agency officials, if the CFIUS workload were to increase, additional staff would likely be necessary to complete committee functions, such as identifying non-notified transactions and monitoring mitigation measures. Officials from two member agencies also expressed concerns about their ability to review transactions that have been notified to the committee if the volume of CFIUS notices increased.
According to Treasury officials, CFIUS does not have a centralized budget, and Treasury does not have authority to determine CFIUS staffing levels at committee member agencies. Treasury officials stated that they have taken steps, in coordination with the Office of Management and Budget (OMB), to collect data from the member agencies on current staffing levels expended on CFIUS core functions but have not established timeframes for this data collection. Standards for Internal Control in the Federal Government states that management should establish the organizational structure necessary to achieve its objectives and periodically evaluate this structure. Treasury officials stated that they have conducted an assessment of Treasury’s staffing needs and have encouraged other agencies to do the same. However, Treasury, as CFIUS lead, has not coordinated member agencies’ efforts to better understand the staffing levels needed to address the current and future CFIUS workload associated with core committee functions, such as identifying and reviewing non-notified transactions. Without this information, CFIUS may be limited in its ability to fulfill its objectives and address threats to the national security of the United States.
CFIUS Member Agencies and External Experts Provided Views on Benefits and Drawbacks of Possible Changes to CFIUS
Officials from CFIUS member agencies (voting and nonvoting) and selected nonmember participant agencies, as well as external experts, expressed a range of views on the potential benefits and drawbacks to possible changes to CFIUS. In our interviews with them, these stakeholders discussed a variety of possible changes to CFIUS that we organized into three categories: (1) altering the structure of CFIUS, (2) redefining which merger and acquisition transactions should be considered for CFIUS review, and (3) expanding the list of factors CFIUS considers as it evaluates the impacts of a foreign transaction on national security. For the most part, CFIUS member agencies and nonmember participant agencies stated that the existing structure is working effectively and described drawbacks to potential changes, such as changing membership or voting rights. Perspectives among agency officials and external experts varied on the potential effects of redefining which transactions should be considered for review, such as requiring CFIUS to review all covered transactions. Agency officials and external experts described a range of potential effects of expanding the list of factors CFIUS considers. They generally stated that including a net economic benefit test in the review, for example, would not be beneficial. Many officials and external experts agreed that one potential drawback of many of the possible changes is a likely increase to the CFIUS workload, generating concerns about the committee’s capacity to complete its core functions.
Agencies Participating in CFIUS Were Generally Satisfied with the Structure of the Committee
In general, officials from member and nonmember agencies participating in CFIUS were satisfied with the structure of the committee. Possible changes, which would affect the way CFIUS is organized and does its work, include changes to the chairmanship of CFIUS, changes to the voting membership of CFIUS (adding new voting members and giving voting rights to current nonvoting members), and changes to the timeframes under which CFIUS works. However, for the most part, CFIUS member agencies and nonmember participant agencies reported that the existing structure works effectively. See tables 3, 4, 5, and 6 for details on the perspectives expressed on these changes.
Perspectives Varied on the Effects of Changes to the Types of Transactions to Be Considered for CFIUS Review
Member agency officials and external experts offered a range of views about the effects of changes to the types of transactions reviewed by CFIUS. Possible changes, which would affect which merger and acquisition transactions are considered for CFIUS review, include redefining what constitutes a covered transaction and making review of all or some covered transactions mandatory rather than voluntary. Stakeholders we spoke with identified benefits and drawbacks to each of these changes, and many agreed that one potential drawback is a likely increase in the CFIUS workload. See tables 7, 8, and 9 for details on the perspectives expressed.
Most Stakeholders Were Satisfied with the Factors CFIUS Currently Considers in Reviewing Foreign Transactions
Member agency officials and external experts were generally satisfied with the list of factors CFIUS currently considers when it reviews foreign transactions and offered a variety of opinions on the effects of changing them. Possible changes include expanding the statutory national security factors to be considered and introducing an economic impact assessment. Stakeholders we spoke with identified benefits and drawbacks to each of these changes. See tables 10 and 11 for details on the perspectives expressed.
Conclusions
The United States maintains an open investment climate that recognizes the benefits of foreign investment to its economy. CFIUS reviews certain foreign acquisitions, mergers, or takeovers of U.S. businesses to determine the effect of the transaction on the national security of the United States. According to member agency officials, the increased number and complexity of covered transactions notified to CFIUS, coupled with only a modest increase in the number of people assigned to review them, have taxed the staff of CFIUS member agencies. Member agency officials and external experts expressed particular concern that CFIUS member agencies were limited in their ability to complete core functions, such as identifying non-notified transactions and monitoring mitigation measures. At the same time, congressional bills have been introduced proposing changes to FINSA that could increase the committee's workload. Officials from Treasury and other member agencies are aware of the pressures on their CFIUS staff given the current workload and have expressed concerns about possible workload increases. Treasury and OMB have begun to collect information from agencies on their current CFIUS staffing levels. This is a crucial first step that could give both the committee and Congress a better understanding of current staffing levels across the committee's organizational structure. However, Treasury, as CFIUS lead, has not coordinated member agency efforts to assess the current and future staffing levels needed to complete the committee's core functions. Without an understanding of the staffing levels needed to address the current and future CFIUS workload, particularly if legislative changes to CFIUS's authorities further expand that workload, CFIUS may be limited in its ability to fulfill its objectives and address threats to the national security of the United States.
Recommendation for Executive Action
The Secretary of the Treasury, as the chair of CFIUS, and working with member agencies, should coordinate member agencies’ efforts to better understand the staffing levels needed to address the current and projected CFIUS workload associated with core committee functions. (Recommendation 1)
Agency Comments and Our Evaluation
We provided a draft of this report for review and comment to the Departments of Agriculture, Commerce, Defense, Energy, Health and Human Services, Homeland Security, Justice, Labor, State, and the Treasury as well as the Offices of the U.S. Trade Representative, Science and Technology Policy, and the Director of National Intelligence, and the Federal Communications Commission. We also provided a draft to the Office of Management and Budget.
Treasury provided written comments, which are reproduced in appendix V. In its comments, Treasury stated that it is working with OMB to determine current resource levels across the CFIUS member agencies and has encouraged agencies to assess their staffing needs. Treasury also stated that it generally concurred with the draft report's recommendation to "conduct an assessment to better understand staffing levels needed to address the current and projected CFIUS workload." However, Treasury noted that CFIUS does not have a centralized budget and that Treasury does not have authority over CFIUS staffing levels at member agencies. We acknowledge Treasury's points and therefore modified the report and clarified the recommendation to focus on Treasury's coordination role, since, as we note in the report, Treasury is responsible for coordinating the operations of the committee and communicating on the committee's behalf with the parties, members of Congress, and the general public. Treasury stated in an email that the clarifications to the recommendation address the point raised in its comment letter.
USDA also provided written comments, which are reproduced in appendix VI. In its comments, USDA stated that it generally agreed with the findings in GAO's draft report. The letter further noted that USDA is satisfied with Treasury's willingness to include USDA in cases related to food and agriculture and is comfortable continuing to work as a nonvoting member of CFIUS.
The Departments of Commerce, Homeland Security, State, Treasury, and the Offices of the U.S. Trade Representative and Science and Technology Policy provided written technical comments, which we incorporated as appropriate.
The Departments of Defense, Energy, Health and Human Services, Justice, Labor, the Office of the Director of National Intelligence, and the Federal Communications Commission indicated via email that they did not have comments.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Departments of Agriculture, Commerce, Defense, Energy, Health and Human Services, Homeland Security, Justice, Labor, State, and the Treasury as well as the Offices of the U.S. Trade Representative, Science and Technology Policy, and the Director of National Intelligence, and the Federal Communications Commission. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have questions about this report, please contact Kimberly Gianopoulos at (202) 512-8612 or [email protected] or Marie A. Mak at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII.
Appendix I: Objectives, Scope, and Methodology
This report (1) examines changes in the Committee on Foreign Investment in the United States’ (CFIUS) workload and staffing from 2011 through 2016, and (2) provides information on stakeholder views on potential changes to CFIUS.
To address these objectives, we reviewed relevant laws, executive orders, and regulations. We interviewed officials from each CFIUS voting member agency, including the Departments of Commerce, Defense, Energy, Homeland Security, Justice, State, and the Treasury as well as the Offices of the U.S. Trade Representative and Science and Technology Policy. We also interviewed officials from the two nonvoting ex officio members, the Office of the Director of National Intelligence and the Department of Labor. In addition, we interviewed officials from nonmember agencies that have CFIUS case-related expertise, including the Departments of Agriculture and Health and Human Services, and the Federal Communications Commission.
To examine the changes in CFIUS workload and staffing levels over the past 5 years, we analyzed information on workload and staffing levels at the voting member agencies from 2011 through 2016, the most recent information available at the time of our review. We also reviewed the 2014 and 2015 CFIUS annual reports. In addition, we interviewed officials from the nine CFIUS voting member agencies about their workload and staffing levels; any changes in the volume, types, and complexity of transactions reviewed by CFIUS; and their ability to complete the core functions of the committee. We also requested information from the nine voting member agencies on the number of staff who spend more than 50 percent of their time on CFIUS activities.
To collect information on stakeholder views on potential changes to CFIUS, we conducted individual semi-structured interviews with selected stakeholders, consisting of officials from the nine CFIUS voting member agencies, the two ex officio nonvoting member agencies, and three selected nonmember agencies that have CFIUS case-related expertise, as well as external experts. To identify external experts, we asked stakeholders to recommend others we should speak with (i.e., snowball sampling). From our list of potential stakeholders, we selected 16 external experts, including former government officials, lawyers who represent parties with transactions notified to CFIUS, and representatives from industry associations and think tanks. In our interviews, we collected views and information on the challenges that CFIUS faces, options for addressing those challenges, and the possible benefits and drawbacks of these options. The information obtained from these stakeholders cannot be generalized to all stakeholders; however, it provides insights into the possible effects of implementing certain changes to CFIUS.
We conducted this performance audit from December 2016 to February 2018 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Characteristics of Transactions Reviewed by the Committee on Foreign Investment in the United States (CFIUS)
CFIUS reviews covered transactions from a wide variety of industries, but the largest number of transactions reviewed come from the manufacturing sector. In 2016, the manufacturing sector represented approximately 42 percent of the 172 covered transactions reviewed by CFIUS, and the number of transactions from that sector has increased in recent years, from 49 in 2011 to approximately 72 in 2016. Computer and electronic transactions, such as those by companies that produce semiconductor technology, accounted for approximately 32 of the 72 covered manufacturing transactions that CFIUS reviewed in 2016. For instance, in 2016, CFIUS reviewed the potential acquisition of Aixtron, a Germany-based semiconductor firm with assets in the United States, by the Chinese firm Fujian Grand Chip Investment Fund. That year, the President prohibited the acquisition of Aixtron's U.S. business upon determining that, in exercising control of that business, the foreign purchasers might take action that threatens to impair the national security of the United States. Treasury, as the chair of CFIUS, stated in a press release that the national security risks posed by the transaction related to, among other things, a Chinese firm obtaining the company's body of knowledge and experience.
Transactions from the manufacturing sector involve a variety of other industries, including textiles, chemicals, and food manufacturing. For example, in 2013, according to a report from the U.S.-China Economic and Security Review Commission, CFIUS reviewed the $7.1 billion acquisition of Smithfield Foods Inc. by China's Shuanghui International Holdings Ltd. A letter submitted by members of the Senate Agriculture Committee raised concerns that the transaction posed a threat to the nation's food security; however, according to Securities and Exchange Commission filings, CFIUS ultimately completed its investigation and cleared the transaction to proceed.
Acquisitions by Chinese-owned companies accounted for the largest number of covered transactions reviewed by CFIUS from 2014 through 2016. According to CFIUS, the number of covered transactions the committee reviewed from China has increased substantially in recent years, from 10 transactions in 2011 to 67 in 2016. In previous years, companies from the United Kingdom were party to the largest share of covered transactions submitted for CFIUS review; however, from 2013 through 2015, parties from the United Kingdom and Canada submitted the second and third largest number of notices. Forty-four percent of all covered transactions reviewed by the committee during this time period involved companies from China, the United Kingdom, or Canada.
Appendix III: Factors to Determine Whether Submitted Transactions Pose a National Security Risk
The potential effects of the transaction on the domestic production needed for projected national defense requirements.
The potential effects of the transaction on the capability and capacity of domestic industries to meet national defense requirements, including the availability of human resources, products, technology, materials, and other supplies.
The potential effects of a foreign person's control of domestic industries and commercial activity on the capability and capacity of the United States to meet the requirements of national security.
The potential effects of the transaction on U.S. international technological leadership in areas affecting U.S. national security.
The potential national security-related effects on U.S. critical technologies.
The potential effects on the long-term projection of U.S. requirements for sources of energy and other critical resources and material.
The potential national security-related effects of the transaction on U.S. critical infrastructure, including critical physical infrastructure such as major energy assets.
The potential effects of the transaction on the sales of military goods, equipment, or technology to countries that present concerns related to terrorism; missile proliferation; chemical, biological, or nuclear weapons proliferation; or regional military threats.
The potential that the transaction presents for transshipment or diversion of technologies with military applications, including the relevant country's export control system.
Whether the transaction could result in the control of a U.S. business by a foreign government or by an entity controlled by or acting on behalf of a foreign government.
The relevant foreign country's record of adherence to nonproliferation control regimes and record of cooperating with U.S. counterterrorism efforts.
Other factors that the President or the committee may determine to be appropriate, generally or in connection with a specific review or investigation.
Appendix IV: Reported Number of Agency Staff Assigned to Committee Activities
Appendix V: Comments from the Department of the Treasury
Appendix VI: Comments from the Department of Agriculture
Appendix VII: GAO Contacts and Staff Acknowledgments
GAO Contacts
Kimberly M. Gianopoulos, (202) 512-8612 or [email protected]. Marie A. Mak, (202) 512-4841 or [email protected].
Staff Acknowledgments
In addition to the contacts named above, Christine Broderick (Assistant Director), Christina Werth (Analyst-in-Charge), Anthony Costulas, Scott Purdy, Kendal Robinson, Lynn Cothern, Grace Lui, Justin Fisher, and Neil Doherty contributed to this report.

Why GAO Did This Study
The United States economy has historically been the largest recipient of foreign direct investment in the world—receiving $373 billion in 2016, according to U.S. government statistics. Ensuring that these foreign investments do not harm national security can be a challenge. CFIUS is an interagency group that reviews transactions under its authority—certain foreign acquisitions or mergers of U.S. businesses—to determine their effects on U.S. national security, while maintaining an open investment climate. If CFIUS identifies concerns, it may work with parties to the transaction to mitigate them. In rare cases, CFIUS may recommend that the President block or suspend a transaction.
GAO was asked to review the CFIUS process and possible changes to that process. This report (1) examines changes in CFIUS's workload and staffing from 2011 through 2016, and (2) provides information on stakeholder views on potential changes to CFIUS. GAO analyzed CFIUS information on staffing levels and transactions reviewed, and interviewed officials from member agencies, selected nonmember agencies that have CFIUS-related expertise, and knowledgeable external experts, such as representatives of think tanks.
What GAO Found
Standards for Internal Control in the Federal Government states that management should establish the organizational structure necessary to achieve its objectives and periodically evaluate this structure. Treasury—the agency that leads CFIUS—has not coordinated member agencies' efforts to better understand the staffing levels needed to address the current and future workload associated with core functions of the committee. Without this information, CFIUS may be limited in its ability to fulfill its objectives and address national security concerns.
Officials from CFIUS member agencies and selected nonmember agencies, as well as external experts, expressed a range of views on the potential benefits and drawbacks of possible changes to CFIUS. GAO organized the possible changes into three categories: (1) altering the structure of CFIUS, (2) redefining which transactions should be considered for CFIUS review, and (3) expanding the factors CFIUS considers when evaluating the impacts of a foreign transaction on national security. Agency officials were generally satisfied with CFIUS's structure, such as the committee's chair and membership. Views among officials and experts varied on redefining which transactions should be considered for review, such as requiring CFIUS to review all transactions covered by its authority regardless of notification. Officials and experts generally did not support expanding the list of national security factors CFIUS considers, such as by adding a net economic benefit test. Agency officials and experts agreed that one trade-off related to some possible changes is a likely increase in the CFIUS workload, which they noted is already straining agencies' staff resources.
What GAO Recommends
Treasury, as CFIUS lead, should coordinate member agencies' efforts to better understand the staffing levels needed to address the current and projected CFIUS workload associated with core committee functions. Treasury concurred. |