Nuclear Plant Risk Studies: Then and Now

September 29, 2017 | 6:00 am
Dave Lochbaum
Former Contributor

Nuclear plant risk studies (also called probabilistic risk assessments) examine postulated events (earthquakes, pipe ruptures, power losses, fires, and so on) and the array of safety components installed to prevent reactor core damage. Results from nuclear plant risk studies are used to prioritize inspection and testing resources: components with greater risk significance get more attention.

Nuclear plant risk studies are veritable forests of event trees and fault trees. Figure 1 illustrates a simple event tree. The initiating event (A) in this case could be something that reduces the amount of reactor cooling water like the rupture of a pipe connected to the reactor vessel. The reactor protection system (B) is designed to detect this situation and immediately shut down the reactor.

Fig. 1. (Source: Nuclear Regulatory Commission)

The event tree branches upward based on the odds of the reactor protection system successfully performing this action and downward for its failure to do so. Two emergency coolant pumps (C and D) can each provide makeup cooling water to the reactor vessel to replenish the lost inventory. Again, the event tree branches upward for the chances of the pumps successfully fulfilling this function and downward for failure.

Finally, post-accident heat removal examines the chances that reactor core cooling can be sustained following the initial response. The column on the right describes the various paths that could be taken for the initiating event. It is assumed that the initiating event happens, so each path starts with A. Paths AE, ACE, and ACD result in reactor core damage. The letters added to the initiating event letter define what additional failure(s) led to reactor core damage. Path AB leads to another event tree, the Anticipated Transient Without Scram (ATWS) event tree, because the reactor protection system failed to shut down the reactor immediately and additional mitigating systems come into play.

The overall risk is determined by summing the odds of the pathways leading to core damage. The overall risk is typically expressed as something like 3.8×10⁻⁵ per reactor-year (3.8E-05 per reactor-year in scientific notation). I tend to take the reciprocal of these risk values. The 3.8E-05 per reactor-year risk, for example, becomes one reactor accident every 26,316 years; the bigger the number, the lower the risk.
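The arithmetic behind these figures can be sketched in a few lines. The branch probabilities below are purely hypothetical (Figure 1 supplies none), chosen only to show how frequencies are multiplied along each core damage path and then summed:

```python
# Minimal sketch of quantifying the Figure 1 event tree.
# All numbers are hypothetical illustrations, not from any plant study.

FREQ_A = 1.0e-3  # initiating event frequency, per reactor-year (hypothetical)

P_FAIL = {
    "B": 1.0e-5,  # reactor protection system fails to shut down the reactor
    "C": 2.0e-3,  # emergency coolant pump C fails
    "D": 2.0e-3,  # emergency coolant pump D fails
    "E": 1.0e-4,  # post-accident heat removal fails
}
P_OK = {k: 1.0 - v for k, v in P_FAIL.items()}

# Core damage paths from Figure 1: multiply the initiating event
# frequency by the branch probabilities along each path.
freq_AE = FREQ_A * P_OK["B"] * P_OK["C"] * P_FAIL["E"]
freq_ACE = FREQ_A * P_OK["B"] * P_FAIL["C"] * P_OK["D"] * P_FAIL["E"]
freq_ACD = FREQA if False else FREQ_A * P_OK["B"] * P_FAIL["C"] * P_FAIL["D"]

# Overall risk: the sum over the core damage paths
core_damage_freq = freq_AE + freq_ACE + freq_ACD

# Reciprocal view: mean years between core damage events
years_between = 1.0 / core_damage_freq

# The article's example: 3.8E-05 per reactor-year is one accident
# every ~26,316 years
assert round(1.0 / 3.8e-5) == 26316
```

With these made-up inputs the total is dominated by path AE, which is typical: the paths requiring the fewest additional failures usually carry most of the risk.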

Fault trees examine reasons for components like the emergency coolant pumps failing to function. The reasons might include a faulty control switch, inadequate power supply, failure of a valve in the pump’s suction pipe to open, and so on. The fault trees establish the chances of safety components successfully fulfilling their needed functions. Fault trees enable event trees to determine the likelihoods of paths moving upward for success or downward for failure.
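As a sketch of the fault-tree side, suppose the three failure causes named above are independent basic events feeding an OR gate (the pump fails if any one of them occurs); the probabilities are hypothetical:

```python
# Hypothetical basic-event probabilities for one emergency coolant pump.
BASIC_EVENTS = {
    "faulty control switch": 1.0e-3,
    "inadequate power supply": 5.0e-4,
    "suction valve fails to open": 2.0e-3,
}

# Top event (pump fails) through an OR gate. Assuming independence,
# P(fail) = 1 - product of (1 - p_i) over the basic events.
p_all_ok = 1.0
for p in BASIC_EVENTS.values():
    p_all_ok *= 1.0 - p
p_pump_fails = 1.0 - p_all_ok
```

This top-event probability is what the event tree consumes as the chance of the pump branch going downward (failure) rather than upward (success).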

Nuclear plant risk studies have been around a long time. For example, the Atomic Energy Commission (forerunner of today's Nuclear Regulatory Commission and Department of Energy) completed WASH-740 in March 1957 (Fig. 2). I get a kick out of the "Theoretically Possible but Highly Improbable" phrase in its subtitle. Despite major accidents being labeled "Highly Improbable," the AEC did not release this report publicly until after it was leaked to UCS in 1973, which then made it available. One of the first acts of the newly created Nuclear Regulatory Commission (NRC) in January 1975 was to publicly issue an update to WASH-740. WASH-1400, also called NUREG-75/014 and the Rasmussen Report, was benignly titled "Reactor Safety Study: An Assessment of Accident Risks in U.S. Commercial Nuclear Power Plants."

Fig. 2. (Source: Atomic Energy Commission)

Nuclear plant risk studies can also be used to evaluate the significance of actual events and conditions. For example, if emergency coolant pump A were discovered to have been broken for six months, analysts can change the chances of this pump successfully fulfilling its safety function to zero and calculate how much the broken component increased the risk of reactor core damage. The risk studies would determine the chances of initiating events occurring during the six months emergency coolant pump A was disabled and the chances that backups or alternates to emergency coolant pump A stepped in to perform that safety function. The NRC uses nuclear plant risk studies to determine when to send a special inspection team to a site following an event or discovery and to characterize the severity level (i.e., green, white, yellow, or red) of violations identified by its inspectors.
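A rough sketch of such a conditional calculation, with entirely hypothetical baseline and conditional core damage frequencies (these are illustrative values, not NRC figures):

```python
# Hypothetical sketch: quantifying the risk significance of emergency
# coolant pump A having been broken for six months.

BASELINE_CDF = 2.0e-5     # per reactor-year, all equipment available (made up)
CONDITIONAL_CDF = 9.0e-5  # per reactor-year, recomputed with pump A's
                          # failure probability set to 1.0 (made up)

EXPOSURE_YEARS = 0.5      # the six months the pump was actually broken

# Incremental core damage probability accumulated over the degraded period
delta_cdp = (CONDITIONAL_CDF - BASELINE_CDF) * EXPOSURE_YEARS

# The NRC compares increments like this against significance thresholds
# when assigning a green, white, yellow, or red finding.
```

The key point is that the increment scales with both the degraded-condition frequency and how long the condition persisted, which is why a long-undetected failure can be highly risk significant even if no initiating event actually occurred.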

Nuclear Plant Risk Studies: Then

In June 1982, the NRC released NUREG/CR-2497, "Precursors to Potential Severe Core Damage Accidents: 1969-1979, A Status Report," which reported on the core damage risk from 52 significant events during that 11-year period. The events included the March 1979 meltdown of Three Mile Island Unit 2 (TMI-2), which had a core damage risk of 100%. The effort screened 19,400 licensee event reports submitted to the AEC/NRC over that period, culled out 529 events for detailed review, identified 169 accident precursors, and found 52 of them to be significant from a risk perspective. The TMI-2 event topped the list, with the March 1975 fire at Browns Ferry placing second.

The nuclear industry independently evaluated the 52 significant events reported in NUREG/CR-2497. The industry's analyses also found the TMI-2 event to have a 100% risk of core damage, but disagreed with all of the NRC's other risk calculations. Of the top ten significant events, the industry's calculated risk averaged only 11.8% of the risk calculated by the NRC. In fact, if the TMI-2 meltdown is excluded, the "closest" match was for the 1974 loss of offsite power event at Haddam Neck (CT), where the industry's calculated risk was less than 7% of the NRC's. It goes without saying (but not without typing) that the industry never, ever calculated a risk to be greater than the NRC's calculation. The industry calculated the risk from the Browns Ferry fire to be less than 1 percent of the risk determined by the NRC; in other words, the NRC's risk was "only" about 100 times higher than the industry's risk for this event.

Fig. 3. Based on figures from June 1982 NRC report. (Source: Union of Concerned Scientists)

Bridging the Risk Gap?

The risk gap from that era can be readily attributed to the immaturity of the risk models and the paucity of data. In the decades since these early risk studies, the risk models have become more sophisticated and the volume of operating experience has grown exponentially.

For example, the NRC issued Generic Letter 88-20, "Individual Plant Examination for Severe Accident Vulnerabilities." In response, owners developed plant-specific risk studies. The NRC issued documents like NUREG/CR-2815, "Probabilistic Safety Analysis Procedures Guide," to convey its expectations for risk models. And the NRC issued a suite of guidance documents like Regulatory Guide 1.174, "An Approach for Using Probabilistic Risk Assessment in Risk-Informed Decisions on Plant-Specific Changes to the Licensing Basis." This is but a tiny sampling of the many documents issued by the NRC about how to conduct nuclear plant risk studies—guidance that simply was not available when the early risk studies were performed.

Complementing the maturation of nuclear plant risk studies is the massive expansion of available data on component performance and human reliability. Event trees begin with initiating events—the NRC has extensively sliced and diced initiating event frequencies. Fault trees focus on performance at the component and system level, so the NRC has collected and published extensive operating experience on component performance and system reliability. And the NRC compiled data on reactor operating times to be able to develop failure rates from the component and system data.

Given the sophistication of current risk models compared to the first-generation risk studies and the fuller libraries of operating reactor information, you would probably think that the gap between the risks calculated by industry and the NRC has narrowed significantly.

Except for being absolutely wrong, you would be entirely right.

Nuclear Plant Risk Studies: Now

Since 2000, the NRC has used nuclear plant risk studies to establish the significance of violations of regulatory requirements, with the results determining whether a green, white, yellow, or red finding gets issued. UCS examined ten of the yellow and red findings issued by the NRC since 2000. The closest match between the NRC and industry risk assessments was for the 2005 violation at Palo Verde (AZ), where workers routinely emptied water from the suction pipes for emergency core cooling pumps. The industry's calculated risk for that event was 50% (half) of the NRC's calculated risk; in other words, the NRC viewed the risk as double the industry's estimate. And that was the closest the two risk viewpoints came. Of these ten significant violations, the industry's calculated risk averaged only 12.7% of the risk calculated by the NRC. In other words, the risk gap has narrowed only a smidgen over the decades.

Fig. 4. Ratios for events after 2000. (Source: Union of Concerned Scientists)

Risk-Deformed Regulation?

For decades, the NRC has consistently calculated nuclear plant risks to be about 10 times greater than the risks calculated by the industry. Nuclear plant risk studies are analytical tools whose results inform safety decision-making. Speedometers, thermometers, and scales are also analytical tools whose results inform safety decision-making. But a speedometer reading one-tenth of the speed recorded by a traffic cop's radar gun, a thermometer showing a child's temperature to be one-tenth of its actual value, or a scale measuring one-tenth of the actual amount of chemical to be mixed into a prescription pill would be unreliable tools that could not responsibly continue to be used for safety decisions.

Yet the NRC and the nuclear industry continue to use risk studies that clearly have significantly different scales.

On May 6, 1975, NRC Technical Advisor Stephen H. Hanauer wrote a memo to Guy A. Arlotto, the NRC’s Assistant Director for Safety and Materials Protection Standards. The second paragraph of this two-paragraph memo expressed Dr. Hanauer’s candid view of nuclear plant risk studies: “You can make probabilistic numbers prove anything, by which I mean that probabilistic numbers ‘prove’ nothing.”

Oddly enough, the chronic risk gap has proven the late Dr. Hanauer totally correct in his assessment of the value of nuclear plant risk studies. When risk models permit users to derive results that don't reside in the same zip code, let alone the same ballpark, the results prove nothing.

The NRC must close the risk gap, or jettison the process that proves nothing about risks.