Physics of failure is a technique within the practice of reliability design that uses knowledge and understanding of the processes and mechanisms that induce failure to predict reliability and improve product performance.
The concept of Physics of Failure, also known as Reliability Physics, involves the use of degradation algorithms that describe how physical, chemical, mechanical, thermal, or electrical mechanisms evolve over time and eventually induce failure. While the concept of Physics of Failure is common in many structural fields,[2] the specific branding evolved from an attempt to better predict the reliability of early generation electronic parts and systems.
Within the electronics industry, the major driver for the implementation of Physics of Failure was the poor performance of military weapon systems during World War II.[3] During the subsequent decade, the United States Department of Defense funded extensive efforts to improve the reliability of electronics in particular,[4] with the initial efforts focused on after-the-fact or statistical methodology.[5] Unfortunately, the rapid evolution of electronics, with new designs, new materials, and new manufacturing processes, tended to quickly negate approaches and predictions derived from older technology. In addition, the statistical approach tended to lead to expensive and time-consuming testing. The need for different approaches led to the birth of Physics of Failure at the Rome Air Development Center (RADC).[6] Under the auspices of the RADC, the first Physics of Failure in Electronics Symposium was held in September 1962.[7] The goal of the program was to relate the fundamental physical and chemical behavior of materials to reliability parameters.[8]
The initial focus of physics of failure techniques tended to be limited to degradation mechanisms in integrated circuits. This was primarily because the rapid evolution of the technology created a need to capture and predict performance several generations ahead of existing product.
One of the first major successes under predictive physics of failure was a formula[9] developed by James Black of Motorola to describe the behavior of electromigration. Electromigration occurs when collisions of electrons cause metal atoms in a conductor to dislodge and be transported in the direction of electron flow, at a rate proportional to current density. Black used this knowledge, in combination with experimental findings, to describe the failure rate due to electromigration as
\mathrm{MTTF} = A\, J^{-n} \exp\left(\frac{E_a}{kT}\right)
Physics of failure is typically designed to predict wearout, or an increasing failure rate, but this initial success by Black focused on predicting behavior during operational life, or a constant failure rate. This is because electromigration in traces can be designed out by following design rules, while electromigration at vias is primarily an interfacial effect, which tends to be defect- or process-driven.
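As an illustration, Black's equation is simple to evaluate once the empirical constants have been fitted from accelerated test data. The Python sketch below uses placeholder values for the prefactor A, the current-density exponent n, and the activation energy Ea; these are illustrative assumptions, not values from Black's work, and would normally come from technology-specific characterization.

import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def black_mttf(current_density, temp_k, a_const, n_exp, ea_ev):
    """Black's equation: MTTF = A * J**(-n) * exp(Ea / (k*T)).
    current_density: J, in the units used when A was fitted (e.g., A/cm^2)
    temp_k: absolute temperature in kelvin
    a_const, n_exp, ea_ev: empirical, technology-specific fitting parameters
    """
    return a_const * current_density ** (-n_exp) * math.exp(ea_ev / (K_BOLTZMANN_EV * temp_k))

# Placeholder parameters, for illustration only.
mttf_105c = black_mttf(1e5, 378.0, a_const=1e7, n_exp=2.0, ea_ev=0.7)
mttf_125c = black_mttf(1e5, 398.0, a_const=1e7, n_exp=2.0, ea_ev=0.7)
print(mttf_105c / mttf_125c)  # thermal acceleration factor between the two temperatures

Because the model is multiplicative, ratios such as the printed acceleration factor are often more robust than absolute MTTF values, since the empirical prefactor cancels.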
Leveraging this success, additional physics-of-failure based algorithms have been derived for the three other major degradation mechanisms (time dependent dielectric breakdown [TDDB], hot carrier injection [HCI], and negative bias temperature instability [NBTI]) in modern integrated circuits (equations shown below). More recent work has attempted to aggregate these discrete algorithms into a system-level prediction.[10]
TDDB: τ = τo(T) exp[G(T)/εox],[11] where τo(T) = exp(−Ea / kT), G(T) = 120 + 5.8/kT, and εox is the electric field across the gate oxide.
HCI: λHCI = A3 exp(−β/VD) exp(−Ea / kT) [12] where λHCI is the failure rate of HCI, A3 is an empirical fitting parameter, β is an empirical fitting parameter, VD is the drain voltage, Ea is the activation energy of HCI, typically −0.2 to −0.1 eV, k is the Boltzmann constant, and T is absolute temperature.
NBTI: λ = A εoxm VTμp exp(−Ea / kT)[13] where A is determined empirically by normalizing the above equation, m = 2.9, VT is the thermal voltage, μp is the surface mobility constant, Ea is the activation energy of NBTI, k is the Boltzmann constant, and T is the absolute temperature.
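These three die-level models share the same multiplicative, Arrhenius-style structure and translate directly into simple functions. The sketch below merely restates the equations above; every fitting constant, voltage, and activation energy is an assumption to be supplied from the user's own test data, and the oxide field must be given in the same units in which G(T) was fitted.

import math

K_EV = 8.617e-5  # Boltzmann constant, eV/K

def tddb_lifetime(temp_k, e_ox, ea_ev):
    """TDDB: tau = tau0(T) * exp(G(T)/e_ox), with tau0(T) = exp(-Ea/kT)
    and G(T) = 120 + 5.8/kT, as given above; e_ox is the oxide electric field."""
    tau0 = math.exp(-ea_ev / (K_EV * temp_k))
    g_t = 120.0 + 5.8 / (K_EV * temp_k)
    return tau0 * math.exp(g_t / e_ox)

def hci_rate(v_drain, temp_k, a3, beta, ea_ev):
    """HCI: lambda = A3 * exp(-beta/VD) * exp(-Ea/kT).
    Ea is negative (typically -0.2 to -0.1 eV), so the rate rises at low temperature."""
    return a3 * math.exp(-beta / v_drain) * math.exp(-ea_ev / (K_EV * temp_k))

def nbti_rate(e_ox, v_t, mu_p, temp_k, a_norm, ea_ev, m=2.9):
    """NBTI: lambda = A * e_ox**m * VT * mu_p * exp(-Ea/kT), with m = 2.9 as above."""
    return a_norm * e_ox ** m * v_t * mu_p * math.exp(-ea_ev / (K_EV * temp_k))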
The resources and successes with integrated circuits, and a review of some of the drivers of field failures, subsequently motivated the reliability physics community to initiate physics of failure investigations into package-level degradation mechanisms. An extensive amount of work was performed to develop algorithms that could accurately predict the reliability of interconnects. Specific interconnects of interest resided at 1st level (wire bonds, solder bumps, die attach), 2nd level (solder joints), and 3rd level (plated through holes).
Just as the integrated circuit community had four major successes with physics of failure at the die level, the component packaging community saw four major successes arise from its work in the 1970s and 1980s (a brief computational sketch of these four models follows the list). These were:
Peck:[14] Predicts time to failure of wire bond / bond pad connections when exposed to elevated temperature / humidity
\mathrm{TTF} = A_0\,(RH)^{-2.7} f(V) \exp\left(\frac{E_a}{k_B T}\right)
Engelmaier:[15] Predicts time to failure of solder joints exposed to temperature cycling
N_f = \frac{1}{2}\left[\frac{\Delta D}{2\epsilon'_f}\right]^{\frac{1}{c}}
\Delta D(\mathrm{leadless}) = \frac{F\, L_D\, \Delta(\alpha\,\Delta T)}{h}
Steinberg:[16] Predicts time to failure of solder joints exposed to vibration
Z = \frac{9.8\, G}{f_n^{2}} \qquad Z_c = \frac{0.00022\, B}{C\, h\, r\, \sqrt{L}}
IPC-TR-579:[17] Predicts time to failure of plated through holes exposed to temperature cycling
\sigma = \frac{(\alpha_E - \alpha_{Cu})\,\Delta T\, A_E E_E E_{Cu}}{A_E E_E + A_{Cu} E_{Cu}}, \quad \text{for } \sigma \le S_Y
N_f^{-0.6} D_f^{0.75} + 0.9\,\frac{S_u}{E}\left[\frac{\exp(D_f)}{0.36}\right]^{0.1785\,\log\left(\frac{10^5}{N_f}\right)} - \Delta\epsilon = 0
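A first-order computational sketch of these four package-level models follows; it simply restates the equations above as functions. Every material property, geometric parameter, correction factor, and fatigue constant is a user-supplied assumption (the solder fatigue defaults shown are commonly quoted placeholder values, not authoritative), Steinberg's customary units of inches and g are assumed, and the implicit IPC-TR-579 relation is solved numerically for Nf.

import math

K_EV = 8.617e-5  # Boltzmann constant, eV/K

def peck_ttf(rh, temp_k, a0, ea_ev, f_v=1.0):
    """Peck: TTF = A0 * RH**-2.7 * f(V) * exp(Ea/(kB*T)), RH in percent."""
    return a0 * rh ** -2.7 * f_v * math.exp(ea_ev / (K_EV * temp_k))

def engelmaier_cycles(delta_d, eps_f=0.325, c=-0.442):
    """Engelmaier: Nf = 0.5 * (delta_D/(2*eps_f'))**(1/c).
    eps_f' and c are solder fatigue constants; the defaults are placeholder
    values often quoted for SnPb solder."""
    return 0.5 * (delta_d / (2.0 * eps_f)) ** (1.0 / c)

def leadless_delta_d(f_corr, l_d, delta_alpha, delta_t, h):
    """delta_D (leadless) = F * L_D * (delta_alpha * delta_T) / h, where L_D is the
    distance from the neutral point and h is the solder joint height."""
    return f_corr * l_d * delta_alpha * delta_t / h

def steinberg_displacement(g_level, f_n):
    """Dynamic board displacement: Z = 9.8 * G / fn**2 (inches, with G in g)."""
    return 9.8 * g_level / f_n ** 2

def steinberg_critical(b, c_comp, h, r, l_comp):
    """Critical displacement: Zc = 0.00022 * B / (C * h * r * sqrt(L));
    solder fatigue is expected when Z repeatedly exceeds Zc."""
    return 0.00022 * b / (c_comp * h * r * math.sqrt(l_comp))

def pth_barrel_stress(alpha_e, alpha_cu, delta_t, a_e, e_e, a_cu, e_cu):
    """IPC-TR-579 barrel stress (valid while sigma <= Sy):
    sigma = (alpha_E - alpha_Cu) * dT * A_E*E_E*E_Cu / (A_E*E_E + A_Cu*E_Cu)."""
    return (alpha_e - alpha_cu) * delta_t * a_e * e_e * e_cu / (a_e * e_e + a_cu * e_cu)

def pth_cycles_to_failure(delta_eps, d_f, s_u, e_cu):
    """Solve the implicit IPC-TR-579 fatigue relation for Nf by bisection:
    Nf**-0.6 * Df**0.75 + 0.9*(Su/E)*(exp(Df)/0.36)**(0.1785*log10(1e5/Nf)) = delta_eps.
    Assumes the root lies between 1 and 1e9 cycles."""
    def capacity_minus_demand(nf):
        plastic = nf ** -0.6 * d_f ** 0.75
        elastic = 0.9 * (s_u / e_cu) * (math.exp(d_f) / 0.36) ** (0.1785 * math.log10(1e5 / nf))
        return plastic + elastic - delta_eps
    lo, hi = 1.0, 1e9
    for _ in range(100):              # bisection on a logarithmic scale
        mid = math.sqrt(lo * hi)
        if capacity_minus_demand(mid) > 0:
            lo = mid                  # strain capacity still exceeds demand, so life is longer
        else:
            hi = mid
    return math.sqrt(lo * hi)

The fatigue relation cannot be inverted in closed form, which is why the sketch falls back to a simple bisection search for Nf.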
Each of the equations above uses a combination of knowledge of the degradation mechanisms and test experience to develop first-order equations that allow the design or reliability engineer to predict time-to-failure behavior based on information about the design architecture, materials, and environment.
More recent work in the area of physics of failure has been focused on predicting the time to failure of new materials (e.g., lead-free solder,[18] [19] high-K dielectric[20]), software programs,[21] using the algorithms for prognostic purposes,[22] and integrating physics of failure predictions into system-level reliability calculations.[23]
There are some limitations with the use of physics of failure in design assessments and reliability prediction. The first is that physics of failure algorithms typically assume a 'perfect design'. Attempting to understand the influence of defects can be challenging and often leads to Physics of Failure (PoF) predictions limited to end-of-life behavior (as opposed to infant mortality or useful operating life). In addition, some companies have so many use environments (personal computers, for example) that performing a PoF assessment for each potential combination of temperature, vibration, humidity, power cycling, etc. would be onerous and potentially of limited value.