Probabilistic design

Probabilistic design is a discipline within engineering design. It deals primarily with the consideration and minimization of the effects of random variability on the performance of an engineering system during the design phase. Typically, the effects studied and optimized are those related to quality and reliability. Probabilistic design differs from the classical approach to design by designing to a small, acceptable probability of failure instead of applying a safety factor.[1] Probabilistic design is used in a variety of applications to assess the likelihood of failure. Disciplines that make extensive use of probabilistic design principles include product design, quality control, systems engineering, machine design, civil engineering (particularly in limit state design) and manufacturing.

Objective and motivations

When using a probabilistic approach to design, the designer no longer thinks of each variable as a single value or number. Instead, each variable is viewed as a continuous random variable with a probability distribution. From this perspective, probabilistic design predicts the flow of variability (or distributions) through a system.[2]

Because there are so many sources of random and systemic variability when designing materials and structures, it is greatly beneficial for the designer to model the factors studied as random variables. By considering this model, a designer can make adjustments to reduce the flow of random variability, thereby improving engineering quality. Proponents of the probabilistic design approach contend that many quality problems can be predicted and rectified during the early design stages and at a much reduced cost.[3]

Typically, the goal of probabilistic design is to identify the design that will exhibit the smallest effects of random variability. Minimizing random variability is essential to probabilistic design because it limits uncontrollable factors, while also providing a much more precise determination of failure probability. This could be the one design option out of several that is found to be most robust. Alternatively, it could be the only design option available, but with the optimum combination of input variables and parameters. This second approach is sometimes referred to as robustification, parameter design or design for six sigma.
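As a concrete illustration of this way of thinking, the following minimal Python sketch treats the inputs of a simple stress calculation as random variables rather than single numbers and propagates their distributions through the model by sampling. The choice of normal distributions and all numerical values are illustrative assumptions, not values drawn from the sources cited here.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 100_000  # number of samples

# Inputs as random variables (illustrative assumptions):
# applied force in newtons, cross-sectional area in square metres.
force = rng.normal(loc=50_000.0, scale=2_000.0, size=n)  # N
area = rng.normal(loc=1.0e-3, scale=2.0e-5, size=n)      # m^2

# Propagate the input distributions through the model sigma = F / A.
stress = force / area  # Pa

# The output is itself a distribution, not a single number.
print(f"mean stress:     {stress.mean():.3e} Pa")
print(f"std of stress:   {stress.std():.3e} Pa")
print(f"95th percentile: {np.percentile(stress, 95):.3e} Pa")
```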

Sources of variability

Though the laws of physics dictate the relationships between variables and measurable quantities such as force, stress, strain, and deflection, there are still three primary sources of variability when considering these relationships.

The first source of variability is statistical, arising from the limitation of having only a finite sample size with which to estimate parameters such as yield stress, Young's modulus, and true strain.[4] This measurement uncertainty is the most easily minimized of the three sources, since the variance of an estimated mean is inversely proportional to the sample size.

We can represent variance due to measurement uncertainty as a corrective factor $B$, which is multiplied by the true mean $X$ to yield the measured mean $\bar{X}$. Equivalently,

$$\bar{X} = \bar{B}X.$$

This yields the result $\bar{B} = \bar{X}/X$, and the variance of the corrective factor $B$ is given as:

$$\mathrm{Var}[B] = \frac{\mathrm{Var}[\bar{X}]}{X^2} = \frac{\mathrm{Var}[X]}{nX^2}$$

where $B$ is the correction factor, $X$ is the true mean, $\bar{X}$ is the measured mean, and $n$ is the number of measurements made.
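As a numerical check of this relation, the sketch below simulates many repeated experiments, each consisting of $n$ measurements, and compares the empirical variance of $B$ with $\mathrm{Var}[X]/(nX^2)$. The true mean is fixed here purely so the formula can be verified; all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

X = 250.0         # true mean (assumed known here only for the check)
sigma = 10.0      # true standard deviation of a single measurement
n = 25            # measurements per experiment
trials = 200_000  # repeated experiments used to estimate Var[B]

# Each experiment: n measurements, measured mean Xbar, factor B = Xbar / X.
measurements = rng.normal(loc=X, scale=sigma, size=(trials, n))
Xbar = measurements.mean(axis=1)
B = Xbar / X

# Compare the empirical variance of B with Var[X] / (n * X^2).
print(f"empirical Var[B]:   {B.var():.3e}")
print(f"theoretical Var[B]: {sigma**2 / (n * X**2):.3e}")
```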

The second source of variability stems from the inaccuracies and uncertainties of the models used to calculate such parameters, including the physical models used to describe loading and its effects on materials. The uncertainty contributed by a model of a physical measurable can be determined when both the model's theoretical predictions and experimental results are available.

The measured value $\hat{H}(\omega)$ is equivalent to the theoretical model prediction $H(\omega)$ multiplied by a model error $\phi(\omega)$, plus the experimental error $\varepsilon(\omega)$.[5] Equivalently,

$$\hat{H}(\omega) = H(\omega)\,\phi(\omega) + \varepsilon(\omega)$$

and the model error takes the general form:

$$\phi(\omega) = \sum_{i=0}^{n} a_i \omega^i$$

where the $a_i$ are regression coefficients determined from experimental data.
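The following sketch shows one plausible way to estimate the coefficients $a_i$: collect paired theoretical predictions and measurements, form the pointwise ratio $\hat{H}(\omega)/H(\omega)$ as an estimate of $\phi(\omega)$, and fit a polynomial to it by least squares. The model, the hidden error polynomial, and the noise level are all synthetic, illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

# Synthetic data: omega is the independent variable, H the theoretical
# prediction, H_hat the corresponding experimental measurement.
omega = np.linspace(0.1, 2.0, 50)
H = 100.0 / (1.0 + omega**2)                     # hypothetical model
true_phi = 1.0 + 0.05 * omega - 0.02 * omega**2  # hidden model error
H_hat = H * true_phi + rng.normal(scale=0.5, size=omega.size)

# Estimate phi(omega) pointwise as the measurement-to-prediction ratio,
# then fit the coefficients a_i by least-squares polynomial regression.
phi_estimate = H_hat / H
a = np.polyfit(omega, phi_estimate, deg=2)  # highest-degree term first

print("fitted coefficients a_i (highest degree first):", a)
```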

Finally, the last source of variability is the intrinsic variability of any physical measurable. A fundamental random uncertainty is associated with all physical phenomena, and it is comparatively the most difficult to minimize. Thus, each physical variable and measurable quantity can be represented as a random variable with a mean and a variance.

Comparison to classical design principles

Consider the classical approach to tensile testing in materials. The stress experienced by a material is given as a single value (i.e., the applied force divided by the cross-sectional area perpendicular to the loading axis). The yield stress, the maximum stress a material can support before plastic deformation, is also given as a single value. Under this approach, there is a 0% chance of material failure below the yield stress and a 100% chance of failure above it. However, these assumptions break down in the real world. The yield stress of a material is often known only to a certain precision, meaning that there is an uncertainty, and therefore a probability distribution, associated with the known value. Let the probability distribution function of the yield strength be $f(R)$.

Similarly, the applied or predicted load can also be known only to a certain precision, so the stress the material will undergo is uncertain as well. Let this probability distribution be $f(S)$.

The probability of failure is equivalent to the interference (overlap) between these two distribution functions; mathematically:

$$P_f = P(R < S) = \int_{-\infty}^{\infty} \int_{R}^{\infty} f(R)\,f(S)\,dS\,dR$$

or equivalently, if we let the difference between yield stress and applied load equal a third random variable $Q = R - S$, then:

$$P_f = P(Q < 0) = \int_{-\infty}^{0} f(Q)\,dQ$$

where the standard deviation of the difference $Q$ is given by

$$\sigma_Q = \sqrt{\sigma_R^2 + \sigma_S^2}.$$

Probabilistic design principles thus allow a precise determination of failure probability, whereas the classical model assumes no failure at all before the yield strength is reached.[6] The classical applied-load-versus-yield-stress comparison clearly has limitations, and modeling these variables with probability distributions is the more precise approach: it yields a quantitative probability of material failure under any loading condition in place of a definitive yes or no.
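If, purely for illustration, both the strength $R$ and the stress $S$ are taken to be independent and normally distributed (an assumption of this sketch, not a requirement of the formulas above), then $Q = R - S$ is also normal and $P_f = P(Q < 0)$ can be evaluated in closed form:

```python
from scipy.stats import norm

# Illustrative assumption: R (strength) and S (stress) are independent
# normal random variables, in MPa.
mu_R, sigma_R = 300.0, 20.0  # yield strength
mu_S, sigma_S = 240.0, 25.0  # applied stress

# Q = R - S is then normal with these parameters:
mu_Q = mu_R - mu_S
sigma_Q = (sigma_R**2 + sigma_S**2) ** 0.5

# P_f = P(Q < 0), i.e. the normal CDF of Q evaluated at zero.
P_f = norm.cdf(0.0, loc=mu_Q, scale=sigma_Q)
print(f"failure probability: {P_f:.4%}")
```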

Methods used to determine variability

In essence, probabilistic design focuses on predicting the effects of variability. In order to predict and calculate the variability associated with model uncertainty, many methods have been devised and used across different disciplines to determine theoretical values for parameters such as stress and strain, drawing on the standard models of engineering mechanics.

Additionally, there are many statistical methods used to quantify and predict the random variability in the desired measurable; Monte Carlo simulation, sketched below, is a common example.
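The sketch below estimates the failure probability of the strength-versus-stress problem from the previous section by direct Monte Carlo simulation, sampling both distributions and counting interference events. The normal distributions and parameters repeat the illustrative assumptions of the closed-form example, so the two results can be compared.

```python
import numpy as np

rng = np.random.default_rng(seed=3)
n = 1_000_000  # Monte Carlo sample size

# Same illustrative distributions as the closed-form example above (MPa).
R = rng.normal(loc=300.0, scale=20.0, size=n)  # yield strength
S = rng.normal(loc=240.0, scale=25.0, size=n)  # applied stress

# Estimate P_f as the fraction of samples in which stress exceeds strength.
P_f = np.mean(R < S)
print(f"Monte Carlo failure probability: {P_f:.4%}")
```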

Footnotes

  1. Sundararajan, S. (1995). Probabilistic Structural Mechanics Handbook. Springer. ISBN 978-0412054815.
  2. Ang, Alfredo H-S.; Tang, Wilson H. (2006). Probability Concepts in Engineering: Emphasis on Applications to Civil and Environmental Engineering (2nd ed.). John Wiley & Sons. ISBN 978-0471720645.
  3. Doorn, Neelke; Hansson, Sven Ove (2011). "Should Probabilistic Design Replace Safety Factors?". Philosophy & Technology 24 (2): 151–168. doi:10.1007/s13347-010-0003-6.
  4. Soares, C. Guedes (1997). Probabilistic Methods for Structural Design. Solid Mechanics and Its Applications. Springer. doi:10.1007/978-94-011-5614-1. ISSN 0925-0042.
  5. Ditlevsen, Ove (1982). "Model uncertainty in structural reliability". Structural Safety 1 (1): 73–86. doi:10.1016/0167-4730(82)90016-9. ISSN 0167-4730.
  6. Haugen, Edward B. (1980). Probabilistic Mechanical Design. New York: Wiley. ISBN 978-0-471-05847-2.
  7. Kong, Depeng; Lu, Shouxiang; Frantzich, Hakan; Lo, S. M. (2013). "A method for linking safety factor to the target probability of failure in fire safety engineering". Journal of Civil Engineering and Management 19 (S1): S212. doi:10.3846/13923730.2013.802718.
