Fault tree analysis explained

Fault tree analysis (FTA) is a type of failure analysis in which an undesired state of a system is examined. This analysis method is mainly used in safety engineering and reliability engineering to understand how systems can fail, to identify the best ways to reduce risk, and to determine (or get a feeling for) event rates of a safety accident or a particular system-level (functional) failure. FTA is used in the aerospace,[1] nuclear power, chemical and process,[2] [3] [4] pharmaceutical,[5] petrochemical and other high-hazard industries, but is also used in fields as diverse as risk factor identification relating to social service system failure.[6] FTA is also used in software engineering for debugging purposes and is closely related to the cause-elimination technique used to detect bugs.

In aerospace, the more general term "system failure condition" is used for the "undesired state" / top event of the fault tree. These conditions are classified by the severity of their effects. The most severe conditions require the most extensive fault tree analysis. These system failure conditions and their classification are often previously determined in the functional hazard analysis.

Usage

Fault tree analysis can be used to:[7] [8]

History

Fault tree analysis (FTA) was originally developed in 1962 at Bell Laboratories by H.A. Watson, under a U.S. Air Force Ballistics Systems Division contract to evaluate the Minuteman I Intercontinental Ballistic Missile (ICBM) Launch Control System.[9] [10] [11] [12] The use of fault trees has since gained widespread support and is often used as a failure analysis tool by reliability experts.[13] Following the first published use of FTA in the 1962 Minuteman I Launch Control Safety Study, Boeing and AVCO expanded use of FTA to the entire Minuteman II system in 1963–1964. FTA received extensive coverage at a 1965 System Safety Symposium in Seattle sponsored by Boeing and the University of Washington.[14] Boeing began using FTA for civil aircraft design around 1966.[15] [16]

Subsequently, within the U.S. military, application of FTA for use with fuzes was explored by Picatinny Arsenal in the 1960s and 1970s.[17] In 1976 the U.S. Army Materiel Command incorporated FTA into an Engineering Design Handbook on Design for Reliability.[18] The Reliability Analysis Center at Rome Laboratory and its successor organizations, now with the Defense Technical Information Center (the Reliability Information Analysis Center and now the Defense Systems Information Analysis Center[19]), have published documents on FTA and reliability block diagrams since the 1960s.[20] [21] [22] MIL-HDBK-338B provides a more recent reference.[23]

In 1970, the U.S. Federal Aviation Administration (FAA) published a change to 14 CFR 25.1309 airworthiness regulations for transport category aircraft in the Federal Register at 35 FR 5665 (1970-04-08). This change adopted failure probability criteria for aircraft systems and equipment and led to widespread use of FTA in civil aviation. In 1998, the FAA published Order 8040.4,[24] establishing risk management policy including hazard analysis in a range of critical activities beyond aircraft certification, including air traffic control and modernization of the U.S. National Airspace System. This led to the publication of the FAA System Safety Handbook, which describes the use of FTA in various types of formal hazard analysis.[25]

Early in the Apollo program, the question was asked about the probability of successfully sending astronauts to the Moon and returning them safely to Earth. A risk, or reliability, calculation of some sort was performed, and the result was a mission success probability that was unacceptably low. This result discouraged NASA from further quantitative risk or reliability analysis until after the Challenger accident in 1986. Instead, NASA decided to rely on the use of failure modes and effects analysis (FMEA) and other qualitative methods for system safety assessments. After the Challenger accident, the importance of probabilistic risk assessment (PRA) and FTA in systems risk and reliability analysis was realized, their use at NASA began to grow, and FTA is now considered one of the most important system reliability and safety analysis techniques.[26]

Within the nuclear power industry, the U.S. Nuclear Regulatory Commission began using PRA methods, including FTA, in 1975, and significantly expanded PRA research following the 1979 incident at Three Mile Island.[27] This eventually led to the 1981 publication of the NRC Fault Tree Handbook NUREG-0492,[28] and mandatory use of PRA under the NRC's regulatory authority.

Following process industry disasters such as the 1984 Bhopal disaster and the 1988 Piper Alpha explosion, in 1992 the United States Department of Labor Occupational Safety and Health Administration (OSHA) published in the Federal Register at 57 FR 6356 (1992-02-24) its Process Safety Management (PSM) standard in 29 CFR 1910.119. OSHA PSM recognizes FTA as an acceptable method for process hazard analysis (PHA).

Today FTA is widely used in system safety and reliability engineering, and in all major fields of engineering.

Methodology

FTA methodology is described in several industry and government standards, including NRC NUREG-0492 for the nuclear power industry, an aerospace-oriented revision to NUREG-0492 for use by NASA, SAE ARP4761 for civil aerospace, MIL-HDBK-338 for military systems, and IEC 61025,[29] which is intended for cross-industry use and has been adopted as European Norm EN 61025.

Any sufficiently complex system is subject to failure as a result of one or more subsystems failing. The likelihood of failure, however, can often be reduced through improved system design. Fault tree analysis maps the relationship between faults, subsystems, and redundant safety design elements by creating a logic diagram of the overall system.

The undesired outcome is taken as the root ('top event') of a tree of logic. For instance, the undesired outcome of a metal stamping press operation being considered might be a human appendage being stamped. Working backward from this top event it might be determined that there are two ways this could happen: during normal operation or during maintenance operation. This condition is a logical OR. Considering the branch of the hazard occurring during normal operation, perhaps it is determined that there are two ways this could happen: the press cycles and harms the operator, or the press cycles and harms another person. This is another logical OR. A design improvement can be made by requiring the operator to press two separate buttons to cycle the machine—this is a safety feature in the form of a logical AND. The button may have an intrinsic failure rate—this becomes a fault stimulus that can be analyzed.
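
As an illustration of how such a tree is evaluated, the following Python sketch models the stamping press example; the gate structure follows the description above, while the event names and probabilities are hypothetical values chosen only for demonstration.

    # Sketch of the stamping press fault tree described above.
    # All event names and probabilities are hypothetical, for illustration only.

    def gate_or(*probs):
        """Probability that at least one independent input event occurs."""
        p_none = 1.0
        for p in probs:
            p_none *= (1.0 - p)
        return 1.0 - p_none

    def gate_and(*probs):
        """Probability that all independent input events occur."""
        p_all = 1.0
        for p in probs:
            p_all *= p
        return p_all

    # Basic events (assumed per-demand probabilities).
    button_1_fails = 1e-4     # first palm button fails in the actuated state
    button_2_fails = 1e-4     # second palm button fails in the actuated state
    maintenance_cycle = 1e-3  # press cycles while a person is exposed during maintenance

    # The two-button interlock is defeated only if both buttons fail (AND gate).
    uncommanded_cycle = gate_and(button_1_fails, button_2_fails)

    # Top event: harm during normal operation OR during maintenance (OR gate).
    top_event = gate_or(uncommanded_cycle, maintenance_cycle)
    print(f"P(top event) = {top_event:.3e}")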

When fault trees are labeled with actual numbers for failure probabilities, computer programs can calculate failure probabilities from fault trees. When a specific event is found to have more than one effect event, i.e. it has an impact on several subsystems, it is called a common cause or common mode. Graphically speaking, it means this event will appear at several locations in the tree. Common causes introduce dependency relations between events. The probability computations for a tree that contains common causes are much more complicated than for regular trees in which all events are considered independent. Not all software tools available on the market provide such capability.
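
The following small numeric sketch (structure and numbers invented for illustration) shows why a common cause complicates the computation: if basic event C feeds two branches of the tree, the tree must be reduced with Boolean algebra before probabilities are combined, and treating the two appearances of C as independent gives a different, incorrect result.

    # Illustrative only: top event T = (A AND C) OR (B AND C), with C a common cause.
    p_a, p_b, p_c = 0.1, 0.2, 0.5

    # Incorrect: treat the two appearances of C as if they were independent events.
    naive = 1 - (1 - p_a * p_c) * (1 - p_b * p_c)

    # Correct: reduce the tree first, T = C AND (A OR B), then combine probabilities.
    exact = p_c * (p_a + p_b - p_a * p_b)

    print(naive, exact)  # 0.145 vs 0.14 -- the naive treatment overstates the risk here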

The tree is usually written out using conventional logic gate symbols. A cut set is a combination of events, typically component failures, causing the top event. If no event can be removed from a cut set without failing to cause the top event, then it is called a minimal cut set.
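
A brute-force way to find minimal cut sets is to enumerate combinations of basic events, keep those combinations that cause the top event, and discard any that contain a smaller causing combination. The sketch below does this for an assumed toy structure (top event = (A AND B) OR C), purely for illustration; practical tools use far more efficient algorithms.

    from itertools import combinations

    # Assumed toy structure function: the top event occurs if (A and B) or C.
    BASIC_EVENTS = ["A", "B", "C"]

    def top_occurs(failed):
        failed = set(failed)
        return ("A" in failed and "B" in failed) or ("C" in failed)

    # All cut sets: combinations of basic events whose failure causes the top event.
    cut_sets = [set(c)
                for r in range(1, len(BASIC_EVENTS) + 1)
                for c in combinations(BASIC_EVENTS, r)
                if top_occurs(c)]

    # A cut set is minimal if no proper subset of it is also a cut set.
    minimal_cut_sets = [c for c in cut_sets
                        if not any(other < c for other in cut_sets)]
    print(minimal_cut_sets)  # [{'C'}, {'A', 'B'}]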

Some industries use both fault trees and event trees (see Probabilistic Risk Assessment). An event tree starts from an undesired initiator (loss of critical supply, component failure etc.) and follows possible further system events through to a series of final consequences. As each new event is considered, a new node on the tree is added with a split of probabilities of taking either branch. The probabilities of a range of 'top events' arising from the initial event can then be seen.

Classic programs include the Electric Power Research Institute's (EPRI) CAFTA software, which is used by many of the US nuclear power plants and by a majority of US and international aerospace manufacturers, and the Idaho National Laboratory's SAPHIRE, which is used by the U.S. Government to evaluate the safety and reliability of nuclear reactors, the Space Shuttle, and the International Space Station. Outside the US, the software RiskSpectrum is a popular tool for fault tree and event tree analysis, and is licensed for use at more than 60% of the world's nuclear power plants for probabilistic safety assessment. Professional-grade free software is also widely available; SCRAM[30] is an open-source tool that implements the Open-PSA Model Exchange Format[31] open standard for probabilistic safety assessment applications.

Graphic symbols

The basic symbols used in FTA are grouped as events, gates, and transfer symbols. Minor variations may be used in FTA software.

Event symbols

Event symbols are used for primary events and intermediate events. Primary events are not further developed on the fault tree; intermediate events are found at the output of a gate. The primary event symbols are the basic event, external event, undeveloped event, conditioning event, and house event.

An intermediate event gate can be used immediately above a primary event to provide more room to type the event description.

FTA is a top-down approach.

Gate symbols

Gate symbols describe the relationship between input and output events and are derived from Boolean logic symbols. The commonly used gates include the OR, AND, exclusive OR, priority AND, and inhibit gates.

Transfer symbols

Transfer symbols are used to connect the inputs and outputs of related fault trees, such as the fault tree of a subsystem to its system. NASA has prepared a comprehensive document about FTA, illustrated with practical examples.

Basic mathematical foundation

Events in a fault tree are associated with statistical probabilities or Poisson-Exponentially distributed constant rates. For example, component failures may typically occur at some constant failure rate λ (a constant hazard function). In this simplest case, failure probability depends on the rate λ and the exposure time t:

P = 1 − e^(−λt)

where P is the probability of the event occurring during the exposure time t, and λ is the constant failure rate. For rare events this is well approximated by

P ≈ λt, if λt < 0.001

A fault tree is often normalized to a given time interval, such as a flight hour or an average mission time. Event probabilities depend on the relationship of the event hazard function to this interval.
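
As a minimal numeric sketch of this relationship (the rate and exposure time are assumed values), the probability of a basic event over a given interval follows directly from its constant rate:

    import math

    def event_probability(rate_per_hour, exposure_hours):
        """P = 1 - e^(-λt) for a constant failure rate λ and exposure time t."""
        return 1.0 - math.exp(-rate_per_hour * exposure_hours)

    lam, t = 1.0e-6, 10.0        # assumed: λ = 1e-6 failures per hour, 10-hour exposure
    p_exact = event_probability(lam, t)
    p_approx = lam * t           # rare-event approximation, valid here since λt < 0.001
    print(p_exact, p_approx)     # both ≈ 1.0e-05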

Unlike conventional logic gate diagrams in which inputs and outputs hold the binary values of TRUE (1) or FALSE (0), the gates in a fault tree output probabilities related to the set operations of Boolean logic. The probability of a gate's output event depends on the input event probabilities.

An AND gate represents a combination of independent events. That is, the probability of any input event to an AND gate is unaffected by any other input event to the same gate. In set theoretic terms, this is equivalent to the intersection of the input event sets, and the probability of the AND gate output is given by:

P (A and B) = P (A ∩ B) = P(A) P(B)

An OR gate, on the other hand, corresponds to set union:

P (A or B) = P (A ∪ B) = P(A) + P(B) - P (A ∩ B)

Since failure probabilities on fault trees tend to be small (less than 0.01), P (A ∩ B) usually becomes a very small error term, and the output of an OR gate may be conservatively approximated by assuming the inputs are mutually exclusive events:

P (A or B) ≈ P(A) + P(B), P (A ∩ B) ≈ 0

An exclusive OR gate with two inputs represents the probability that one or the other input, but not both, occurs:

P (A xor B) = P(A) + P(B) - 2P (A ∩ B)

Again, since P (A ∩ B) usually becomes a very small error term, the exclusive OR gate has limited value in a fault tree.
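
The gate formulas above can be collected in a short sketch (independent input events assumed; the input probabilities are illustrative):

    def p_and(p_a, p_b):
        """AND gate: intersection of two independent events."""
        return p_a * p_b

    def p_or(p_a, p_b):
        """OR gate: union of two independent events (inclusion-exclusion)."""
        return p_a + p_b - p_a * p_b

    def p_xor(p_a, p_b):
        """Exclusive OR gate: exactly one of two independent events occurs."""
        return p_a + p_b - 2 * p_a * p_b

    p_a, p_b = 1e-3, 2e-3       # assumed small basic event probabilities
    print(p_and(p_a, p_b))      # 2e-06
    print(p_or(p_a, p_b))       # ≈ p_a + p_b, since the intersection term is negligible
    print(p_xor(p_a, p_b))      # nearly identical to the OR output for rare events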

Quite often, Poisson-Exponentially distributed rates[32] are used to quantify a fault tree instead of probabilities. Rates are often modeled as constant in time, while probability is a function of time. Poisson-Exponential events are modeled as infinitely short, so no two events can overlap. An OR gate is the superposition (addition of rates) of the two input failure frequencies or failure rates, which are modeled as Poisson point processes. The output of an AND gate is calculated using the unavailability (Q₁) of one event thinning the Poisson point process of the other event (λ₂). The unavailability (Q₂) of the other event then thins the Poisson point process of the first event (λ₁). The two resulting Poisson point processes are superimposed according to the following equations.

The output of an AND gate is the combination of independent input events 1 and 2 to the AND gate:

Failure Frequency = λ₁Q₂ + λ₂Q₁, where Qᵢ = 1 − e^(−λᵢtᵢ) ≈ λᵢtᵢ if λᵢtᵢ < 0.001

Failure Frequency ≈ λ₁λ₂t₂ + λ₂λ₁t₁, if λ₁t₁ < 0.001 and λ₂t₂ < 0.001

In a fault tree, unavailability (Q) may be defined as the unavailability of safe operation and may not refer to the unavailability of the system operation depending on how the fault tree was structured. The input terms to the fault tree must be carefully defined.
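
A minimal sketch of the rate-based AND gate calculation described above (the rates and exposure times are assumed values chosen only for illustration):

    import math

    def unavailability(rate, exposure):
        """Q = 1 - e^(-λt): probability the event is present over its exposure time."""
        return 1.0 - math.exp(-rate * exposure)

    def and_gate_frequency(rate1, exposure1, rate2, exposure2):
        """Failure frequency of an AND gate fed by two independent Poisson processes:
        λ_out = λ1*Q2 + λ2*Q1 (each rate thinned by the other event's unavailability)."""
        q1 = unavailability(rate1, exposure1)
        q2 = unavailability(rate2, exposure2)
        return rate1 * q2 + rate2 * q1

    # Assumed values for illustration (rates per hour, exposure times in hours).
    lam1, t1 = 1.0e-5, 100.0
    lam2, t2 = 2.0e-5, 50.0
    print(and_gate_frequency(lam1, t1, lam2, t2))  # ≈ λ1*λ2*t2 + λ2*λ1*t1 ≈ 3.0e-08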

Analysis

Many different approaches can be used to model an FTA, but the most common and popular way can be summarized in a few steps. A single fault tree is used to analyze one and only one undesired event, which may be subsequently fed into another fault tree as a basic event. Though the nature of the undesired event may vary dramatically, an FTA follows the same procedure for any undesired event, be it a delay of 0.25 ms for the generation of electrical power, an undetected cargo bay fire, or the random, unintended launch of an ICBM.

FTA analysis involves five steps:

  1. Define the undesired event to study.
    • The undesired event can be very hard to define, although some events are very easy and obvious to observe. An engineer with a wide knowledge of the design of the system is the best person to help define and number the undesired events. Undesired events are then used to make FTAs. Each FTA is limited to one undesired event.
  2. Obtain an understanding of the system.
    • Once the undesired event is selected, all causes that could contribute to it are studied and analyzed. Getting exact numbers for the probabilities leading to the event is usually impossible, because doing so may be very costly and time-consuming. Computer software is used to study probabilities; this may lead to less costly system analysis.
      System analysts can help with understanding the overall system. System designers have full knowledge of the system, and this knowledge is very important for not missing any cause affecting the undesired event. For the selected event, all causes are then numbered and sequenced in the order of occurrence and are used for the next step, which is drawing or constructing the fault tree.
  3. Construct the fault tree.
    • After selecting the undesired event and analyzing the system so that all the contributing causes (and, if possible, their probabilities) are known, the fault tree can be constructed. The fault tree is based on AND and OR gates, which define its major characteristics.
  4. Evaluate the fault tree.
    • After the fault tree has been assembled for a specific undesired event, it is evaluated and analyzed for any possible improvement; in other words, the risk is studied and ways are found to improve the system. A wide range of qualitative and quantitative analysis methods can be applied.[33] This step serves as an introduction to the final step, which is to control the hazards identified. In short, in this step all possible hazards affecting the system directly or indirectly are identified.
  5. Control the hazards identified.
    • This step is very specific and differs largely from one system to another, but the main point will always be that, after the hazards are identified, all possible methods are pursued to decrease their probability of occurrence.

Comparison with other analytical methods

FTA is a deductive, top-down method aimed at analyzing the effects of initiating faults and events on a complex system. This contrasts with failure mode and effects analysis (FMEA), which is an inductive, bottom-up analysis method aimed at analyzing the effects of single component or function failures on equipment or subsystems. FTA is very good at showing how resistant a system is to single or multiple initiating faults. It is not good at finding all possible initiating faults. FMEA is good at exhaustively cataloging initiating faults and identifying their local effects. It is not good at examining multiple failures or their effects at a system level. FTA considers external events, while FMEA does not. In civil aerospace the usual practice is to perform both FTA and FMEA, with a failure mode effects summary (FMES) as the interface between FMEA and FTA.

Alternatives to FTA include the dependence diagram (DD), also known as the reliability block diagram (RBD), and Markov analysis. A dependence diagram is equivalent to a success tree analysis (STA), the logical inverse of an FTA, and depicts the system using paths instead of gates. DD and STA produce the probability of success (i.e., avoiding a top event) rather than the probability of a top event.

Notes and References

  1. Book: Goldberg. B. E.. Everhart. K.. Stevens. R.. Babbitt. N.. Clemens. P.. Stout. L.. System engineering toolbox for design-oriented engineers. 1994. Marshall Space Flight Center. 3–35 to 3–48. https://ntrs.nasa.gov/search.jsp?R=19950012517. en. 3.
  2. Book: Center for Chemical Process Safety . Guidelines for Hazard Evaluation Procedures. 3rd. April 2008. Wiley. 978-0-471-97815-2.
  3. Book: Center for Chemical Process Safety . Guidelines for Chemical Process Quantitative Risk Analysis. 2nd. October 1999. American Institute of Chemical Engineers. 978-0-8169-0720-5.
  4. Book: U.S. Department of Labor Occupational Safety and Health Administration . Process Safety Management Guidelines for Compliance. 1994. OSHA 3133. U.S. Government Printing Office.
  5. ICH Harmonised Tripartite Guidelines. Quality Guidelines (January 2006). Q9 Quality Risk Management.
  6. Lacey . Peter . An Application of Fault Tree Analysis to the Identification and Management of Risks in Government Funded Human Service Delivery . Proceedings of the 2nd International Conference on Public Policy and Social Sciences . 2011 . 2171117 .
  7. Web site: Fault Tree Explanation . 2024-05-31 . ftvisualisations . en.
  8. Web site: Projects . 2024-05-31 . ftvisualisations . en.
  9. Ericson . Clifton . Fault Tree Analysis - A History . Proceedings of the 17th International Systems Safety Conference . 1999 . 2010-01-17 . dead . https://web.archive.org/web/20110723124816/http://www.fault-tree.net/papers/ericson-fta-history.pdf . 2011-07-23 .
  10. Rechard . Robert P. . Historical Relationship Between Performance Assessment for Radioactive Waste Disposal and Other Types of Risk Assessment in the United States . pdf . Risk Analysis . 19 . 5 . 763–807 . 1999 . 10.1023/A:1007058325258 . 10765434 . 704496 . SAND99-1147J . 2010-01-22. subscription .
  11. Winter . Mathias . Software Fault Tree Analysis of an Automated Control System Device Written in ADA . pdf . Master's Thesis . 1995 . https://web.archive.org/web/20120515221443/http://handle.dtic.mil/100.2/ADA303377 . dead . May 15, 2012 . ADA303377 . 2010-01-17.
  12. Benner . Ludwig . Accident Theory and Accident Investigation . Proceedings of the Society of Air Safety Investigators Annual Seminar . 1975 . 2010-01-17.
  13. Martensen, Anna L. . Butler, Ricky W. . The Fault-Tree Compiler. Langely Research Center. January 1987 . NTRS. June 17, 2011.
  14. DeLong . Thomas . A Fault Tree Manual . Master's Thesis . pdf . 1970 . https://web.archive.org/web/20160304031008/http://www.dtic.mil/get-tr-doc/pdf?AD=AD0739001 . dead . March 4, 2016 . AD739001 . 2014-05-18.
  15. Book: Eckberg , C. R. . WS-133B Fault Tree Analysis Program Plan . Rev B . The Boeing Company . Seattle, WA . 1964 . https://web.archive.org/web/20160303225811/http://www.dtic.mil/get-tr-doc/pdf?AD=AD0299561 . dead . March 3, 2016 . D2-30207-1 . 2014-05-18.
  16. Book: Hixenbaugh , A. F. . Fault Tree for Safety . The Boeing Company . Seattle, WA . 1968 . https://web.archive.org/web/20160303224602/http://www.dtic.mil/get-tr-doc/pdf?AD=AD0847015 . dead . March 3, 2016 . D6-53604 . 2014-05-18.
  17. Book: Larsen, Waldemar. Fault Tree Analysis. January 1974. Picatinny Arsenal. https://web.archive.org/web/20140518022301/http://www.dtic.mil/get-tr-doc/pdf?AD=AD0774843. dead. May 18, 2014. 2014-05-17. Technical Report 4556.
  18. Book: Evans, Ralph A.. Engineering Design Handbook Design for Reliability. January 5, 1976. US Army Materiel Command. https://web.archive.org/web/20140518022549/http://www.dtic.mil/dtic/tr/fulltext/u2/a027370.pdf. live. May 18, 2014. 2014-05-17. AMCP-706-196.
  19. Web site: DSIAC – Defense Systems Information Analysis Center . 2023-03-25 . en-US.
  20. Book: Begley . T. F. . Cummings. Fault Tree for Safety . RAC . 1968 . ADD874448 . 2010-01-17.
  21. Book: Anderson, R. T.. Reliability Design Handbook. March 1976. Reliability Analysis Center. https://web.archive.org/web/20140518020425/http://www.dtic.mil/get-tr-doc/pdf?AD=ADA024601. live. May 18, 2014. 2014-05-17. RDH 376.
  22. Book: Mahar, David J.. Fault Tree Analysis Application Guide. 1990. Reliability Analysis Center. James W. Wilbur .
  23. Book: Electronic Reliability Design Handbook . 7.9 Fault Tree Analysis . B . . 1998 . pdf . MIL - HDBK - 338B . 2010-01-17 .
  24. Book: ASY-300. Safety Risk Management. June 26, 1998. Federal Aviation Administration. 8040.4.
  25. Book: FAA. System Safety Handbook. December 30, 2000. Federal Aviation Administration.
  26. Book: Vesely , William . Fault Tree Handbook with Aerospace Applications . . 2002 . https://web.archive.org/web/20161228133244/https://elibrary.gsfc.nasa.gov/_assets/doclibBidder/tech_docs/25.%20NASA_Fault_Tree_Handbook_with_Aerospace_Applications%20-%20Copy.pdf . dead . 2016-12-28 . 2018-07-16 . etal.
  27. Book: Acharya , Sarbes . Severe Accident Risks: An Assessment for Five U.S. Nuclear Power Plants . . Washington, DC . 1990 . NUREG-1150 . 2010-01-17. etal.
  28. Book: Vesely , W. E. . Fault Tree Handbook . . 1981 . NUREG - 0492 . 2010-01-17 . etal.
  29. Book: Fault Tree Analysis . Edition 2.0 . . 2006 . IEC 61025 . 978-2-8318-8918-4 .
  30. Web site: SCRAM 0.11.4 — SCRAM 0.11.4 documentation . scram-pra.org . 13 January 2022 . https://web.archive.org/web/20161123011255/https://scram-pra.org/ . 23 November 2016 . dead.
  31. Web site: The Open-PSA Model Exchange Format — The Open-PSA Model Exchange Format 2.0. open-psa.github.io.
  32. Olofsson and Andersson, Probability, Statistics and Stochastic Processes, John Wiley and Sons, 2011.
  33. Ruijters . Enno . Stoelinga . Mariëlle I. A. . February–May 2015 . Fault tree analysis: A survey of the state-of-the-art in modeling, analysis and tools . Computer Science Review . 15–16 . 29–62 . 10.1016/j.cosrev.2015.03.001.