2.6. Uncertainties in the prediction results
2.6.1. Uncertainties of reliability estimates based on Part III models
Unless otherwise stated, the prediction models provided in the EEE, MEC and MIS chapters aim at providing realistic estimates, similar to the average values observed in orbit. Nevertheless, the predictions will never perfectly match reality. The purpose of the present chapter is to discuss the various sources of uncertainty affecting the prediction results and how these are considered in the methodology. Section 2.6.3.2 at the end of this chapter provides a practical approach for identifying the most important uncertainties associated with a prediction, which should be communicated together with the prediction result.
Obviously, aiming at realistic estimates also requires realistic model input. Assumptions made during model usage should be clearly marked and justified by the user. Model usage uncertainties are also discussed in the following.
2.6.2. Classification and relevance of different prediction uncertainties
Apart from the fact that Reliability Prediction is always dealing with the modelling of uncertainties – a system or component may fail or not – reliability in itself can never be predicted with 100% accuracy. The models used are just a mathematical representation of reality, and the resulting estimates are generally associated with (prediction) uncertainties.
Very broadly, we may thus distinguish two different types of uncertainties:
Aleatory uncertainties, resulting from the natural variability of the underlying processes
Epistemic uncertainties, resulting from our poor understanding of the reliability problem
In contrast to the aleatory uncertainties, which can be modelled but not reduced, epistemic uncertainties can be reduced by collecting more information.
The modelling of aleatory uncertainties is the core of reliability prediction, assuming e.g. that failures may occur at random points in time, which can be represented by suitable probabilistic models. Based on these models, the reliability (probability of success) is estimated in relation to a certain period of time and the associated operational and environmental conditions. However, the reliability estimates themselves are generally uncertain, representing a second layer of uncertainty. The consideration of these epistemic uncertainties is discussed in the following.
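The two layers can be separated in a small numerical sketch. The numbers below are purely hypothetical: a constant failure rate is assumed for the aleatory layer, and a Gamma distribution with mean equal to the point estimate is assumed for the epistemic layer.

```python
import math
import random

random.seed(0)

# Hypothetical point estimate of a constant failure rate (failures per hour)
lam_hat = 2.0e-6
t = 15 * 8760  # 15-year mission duration in hours

# Aleatory layer: reliability under the assumed exponential failure model
R_point = math.exp(-lam_hat * t)

# Epistemic layer: the estimate itself is uncertain; here modelled as a
# Gamma distribution with mean lam_hat (shape 4 chosen for illustration only)
samples = sorted(
    math.exp(-random.gammavariate(4.0, lam_hat / 4.0) * t) for _ in range(5000)
)
R_lo, R_up = samples[int(0.05 * 5000)], samples[int(0.95 * 5000)]
```

The point estimate answers the aleatory question ("how likely is survival under this model?"), while the interval [R_lo; R_up] reflects the epistemic scatter around that answer.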
2.6.2.1. Epistemic uncertainties classification
Epistemic uncertainties may be further classified as follows:
Statistical uncertainties, resulting from limited data samples used to derive a model
Model uncertainties, resulting from modelling assumptions and simplifications made
Another way of classification is by the modelling step or person introducing the uncertainties:
Model development uncertainties are introduced by the model developer
Model usage uncertainties are introduced by the model user (e.g. model input assumptions)
Statistical uncertainties are generally introduced during model development, when a limited data sample is used e.g. to estimate a reliability metric such as a failure rate. Model uncertainties can be introduced both during model development and during model usage.
In the following, the focus will first be on the modelling and propagation of statistical uncertainties, showing their limited relevance for the uncertainty of system level reliability estimates (Section 2.6.2.2). After this, a broader perspective on epistemic uncertainties is taken, including also the more general model uncertainties (Section 2.6.2.3).
2.6.2.2. The role of statistical uncertainties in system level reliability prediction
Statistical uncertainties arise whenever a reliability metric (e.g. a failure rate) is estimated from statistical data. The uncertainty resulting from the inherent randomness in the data sample is largest when the sample size is very small and decreases as more data becomes available; in the theoretical limit (sample size n→∞) the estimate converges to the true value.
Quantification of statistical uncertainties
The quantification of statistical uncertainties during model development is in principle straightforward, building on the mathematical theory of statistical estimation. The concept of interval estimation is illustrated in Fig. 2.6.1, using the example of a Gamma distributed failure rate λ. Different methods from frequentist or Bayesian statistics can be used to derive an estimation interval [\(λ_{\text{lo}}\); \(λ_{\text{up}}\)] which will contain the true value with a probability – or confidence – of β = 1 − α. The width of the interval depends on the value chosen for β, but also on the size of the sample used for the estimation (e.g. number of observed failures, cumulated hours).
The following methods are worth mentioning in this context, see Section 2.4.4.3 for discussion:
Confidence intervals, e.g. the Chi-square estimator for constant failure rates
Maximum Likelihood Estimation, using large-sample theory and the Fisher information matrix to derive the variance around the mean (defined as the Maximum Likelihood estimate)
Bayesian credibility intervals, combining Likelihood information from the data sample with a prior distribution, e.g. a conjugate Gamma prior for constant failure rate estimation.
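As a sketch of how such intervals can be computed, the snippet below derives a classical Chi-square confidence interval (time-terminated test) and a Bayesian credibility interval with a conjugate Gamma prior for a hypothetical sample of 3 failures over 2×10⁶ cumulated hours. SciPy availability and the prior parameters are assumptions, not part of the methodology above.

```python
from scipy.stats import chi2, gamma

r, T = 3, 2.0e6   # hypothetical sample: 3 failures over 2e6 cumulated hours
alpha = 0.10      # two-sided interval with confidence beta = 1 - alpha = 90%

# Classical Chi-square confidence bounds for a constant failure rate
lam_lo = chi2.ppf(alpha / 2, 2 * r) / (2 * T)
lam_up = chi2.ppf(1 - alpha / 2, 2 * (r + 1)) / (2 * T)

# Bayesian credibility interval: with a conjugate Gamma(a0, b0) prior,
# the posterior for the failure rate is Gamma(a0 + r, b0 + T)
a0, b0 = 0.5, 1.0e5  # hypothetical weakly informative prior
posterior = gamma(a=a0 + r, scale=1.0 / (b0 + T))
cred_lo, cred_up = posterior.ppf([alpha / 2, 1 - alpha / 2])
```

Both intervals shrink as r and T grow, reflecting the reduction of statistical uncertainty with increasing sample size.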
Uncertainty propagation in system level reliability prediction
Statistical uncertainties are introduced during the development of elementary reliability models with statistical methods, which are used to predict the reliability of parts and assemblies, or in some cases of higher-level items such as standardized equipment. In system level reliability prediction, many of these estimates are combined using system level modelling techniques such as Reliability Block Diagrams or Fault Tree Analysis. While it is in principle possible to propagate the statistical uncertainties using e.g. Monte Carlo simulation, it is generally not meaningful to do so: assuming independent and unbiased estimates, the statistical uncertainty rapidly decreases at system level due to the Law of Large Numbers. Only uncertainties associated with reliability estimates of items having a very large contribution to the system level (un)reliability may have a relevant impact and should be highlighted when discussing the uncertainties associated with the prediction.
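The averaging-out effect can be illustrated with a small Monte Carlo sketch (hypothetical numbers): the relative scatter of a series-system failure rate, obtained by summing independent Gamma-distributed item estimates, shrinks roughly with the square root of the number of items.

```python
import random
import statistics

random.seed(1)

def system_rate_cv(n_items, shape=2.0, n_mc=2000):
    """Coefficient of variation of a series-system failure rate obtained by
    summing n_items independent Gamma-distributed item estimates."""
    totals = [
        sum(random.gammavariate(shape, 1.0) for _ in range(n_items))
        for _ in range(n_mc)
    ]
    return statistics.stdev(totals) / statistics.mean(totals)

cv_single = system_rate_cv(1)    # roughly 1/sqrt(2), about 0.71
cv_system = system_rate_cv(25)   # roughly 0.71/sqrt(25), about 0.14
```

This is the Law of Large Numbers argument in miniature: with 25 independent item estimates, the relative statistical uncertainty at system level is about five times smaller than for a single item.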
From a system perspective this implies that statistical uncertainties associated with elementary reliability models should be addressed (only) under the following conditions:
The components or spacecraft elements considered have a large (>5–10%) contribution to the overall system unreliability, and
The sample size used for statistical modelling was very small (<10 failures).
When considering epistemic uncertainties more generally (as in Section 2.6.2.3 below), it may be sufficient to simply highlight the elementary reliability models fulfilling these conditions, indicating that the effect of statistical uncertainties may not be negligible from a system level perspective.
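The two screening conditions above can be captured in a short helper. The model names, contributions and sample sizes below are hypothetical placeholders; the thresholds follow the conditions stated above.

```python
def flag_relevant_statistical_uncertainty(models, contrib_threshold=0.05,
                                          min_failures=10):
    """Return elementary models whose statistical uncertainty may still matter
    at system level: large unreliability contribution AND very small sample."""
    return [
        name
        for name, (contribution, n_failures) in models.items()
        if contribution > contrib_threshold and n_failures < min_failures
    ]

# Hypothetical models: (contribution to system unreliability, observed failures)
models = {
    "TWTA":           (0.20, 4),   # large contribution, tiny sample -> flag
    "reaction wheel": (0.15, 60),  # large contribution, large sample -> ok
    "EEE part lot":   (0.01, 2),   # tiny contribution -> ok
}
flagged = flag_relevant_statistical_uncertainty(models)
```

Only models meeting both conditions are highlighted; all others can be treated as statistically unproblematic from a system level perspective.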
2.6.2.3. A more general view on epistemic uncertainties
As has been discussed in Section 2.6.2.2 above, statistical uncertainties due to limited data samples underlying the different elementary reliability estimates are of limited relevance from a system level point of view. The main reason for this conclusion is that statistical uncertainties associated with different elementary reliability models are statistically independent and therefore uncertainty reduces at system level due to the Law of Large Numbers.
When considering model uncertainties instead, arising from assumptions and simplifications made during model development and usage, the assumption of statistical independence across elementary reliability estimates need not hold true. To give an example, uncertainties associated with the mission profile definition at spacecraft level are generally flowed down to all lower levels, and thus in principle affect all elementary models in the same (or at least a similar) way. As a result, the uncertainties associated with lower level estimates are not independent anymore and the Law of Large Numbers does not apply.
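This effect can be shown numerically with a hypothetical sketch: when a common multiplicative factor (standing in for a shared mission profile assumption) affects all item estimates, the system level scatter stays large no matter how many items are combined, whereas independent scatter averages out.

```python
import random
import statistics

random.seed(2)
N_ITEMS = 50

def system_cv(shared, n_mc=2000):
    """CV of a summed system failure rate; each item rate is a Gamma draw
    scaled by a lognormal factor that is either shared or independent."""
    totals = []
    for _ in range(n_mc):
        common = random.lognormvariate(0.0, 0.3)
        total = 0.0
        for _ in range(N_ITEMS):
            factor = common if shared else random.lognormvariate(0.0, 0.3)
            total += factor * random.gammavariate(5.0, 1.0)
        totals.append(total)
    return statistics.stdev(totals) / statistics.mean(totals)

cv_independent = system_cv(shared=False)  # item scatter averages out
cv_shared = system_cv(shared=True)        # common-cause scatter dominates
```

With 50 items, the independent case shows only a few percent relative scatter, while the shared factor keeps the system level scatter near the scatter of the common assumption itself.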
In conclusion, when considering epistemic uncertainties more generally, model uncertainties are potentially much more relevant from a system level reliability prediction point of view than statistical uncertainties, whose effect is decreased by statistical independence and the Law of Large Numbers. Thus, to assess the uncertainties associated with a reliability prediction, it is important to consider all kinds of epistemic uncertainties introduced during the development or usage of the different reliability prediction models.
In such a general setting, it makes sense to switch from a quantitative treatment to a qualitative assessment of epistemic uncertainties. Nevertheless, a quantitative way of thinking is still relevant to decide which uncertainties are the most relevant ones for a system level prediction. From the considerations discussed above, we may conclude that the “system level relevance” ranking of various uncertainties associated with a prediction should be driven by the following considerations:
What is the contribution of each model to the overall system level (un)reliability?
The models with the largest contribution are also likely to have a large contribution to the overall epistemic uncertainty associated with the prediction.
At which level have the various uncertainties been introduced?
Uncertainty sources at higher levels (e.g. mission profile definition at system level) have a larger contribution to the overall epistemic uncertainties than uncertainties associated with part level reliability predictions.
What is the dependency structure between various uncertainty sources?
Uncertainties that are statistically dependent (e.g. the same assumption is used for various elementary reliability models) have a stronger effect than independent uncertainty sources.
What is the order of magnitude of each relevant epistemic uncertainty?
From the uncertainty sources judged to be most relevant from a system level point of view (based on the previous criteria), those leading to the largest scatter will have the largest impact on the system level reliability prediction uncertainty.
Section 2.6.3 contains a questionnaire approach which can be used to identify different sources of epistemic uncertainties associated with a reliability prediction model or method and its usage. For practical purposes, a full discussion and consideration of all identified uncertainties is not necessarily relevant and may even be considered as excessive information if the goal is to understand the main uncertainties associated with a prediction at system level. Instead, the focus should be on a few uncertainties having the largest impact. A rough qualitative ranking to identify e.g. the Top 5 uncertainty drivers can be established based on the criteria listed above.
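A rough qualitative ranking along the four criteria can be sketched with ordinal scores. The uncertainty sources and scores below are hypothetical placeholders, not part of the methodology; they merely show how the criteria combine into a Top-N list.

```python
# Hypothetical ordinal scores (1 = low, 3 = high) against the four criteria:
# contribution to system unreliability, level of introduction,
# dependency across models, and order of magnitude of the uncertainty.
sources = {
    "mission profile assumptions": {"contribution": 3, "level": 3,
                                    "dependency": 3, "magnitude": 2},
    "EEE part failure rates":      {"contribution": 2, "level": 1,
                                    "dependency": 2, "magnitude": 2},
    "deployment mechanism model":  {"contribution": 1, "level": 1,
                                    "dependency": 1, "magnitude": 3},
}

# Rank by total score and keep the top drivers (here trivially all three)
ranked = sorted(sources, key=lambda s: sum(sources[s].values()), reverse=True)
top_drivers = ranked[:5]
```

In this toy example the spacecraft-level, strongly dependent source ranks first, consistent with the "system level relevance" considerations above.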
2.6.3. Identification of relevant uncertainties associated with a prediction
To assess the uncertainties associated with a prediction model or method and its usage, as a first step the relevant sources of uncertainties need to be identified. This can be achieved with the aid of the questionnaires given in the following.
Based on the discussion in Section 2.6.2, the identification of uncertainty sources is not limited to statistical uncertainties, but considers epistemic uncertainties in a more general sense. Statistical uncertainties (focussing on those still relevant at system level) are considered only as one potential uncertainty source, and the distinction between statistical and model uncertainties will not be stressed any further. Instead, the questionnaires distinguish between model development uncertainties (including statistical uncertainties) and model usage uncertainties.
Answers to the questionnaires assessing the model development uncertainties associated with the models used in the EEE, MEC and MIS chapters are collected in Annex B Prediction uncertainties for Part III models.
2.6.3.1. Model development uncertainties
Model development uncertainties are introduced by the model developer and are thus inherent in the models. At least the most important ones should be communicated together with the model; the questionnaires given below can be used for this purpose. The model user has generally no influence on the magnitude of the model development uncertainties unless there is the possibility to update the model, e.g. with the aid of new data (Section 2.4.7).
The questionnaires are given for two different methods or modelling approaches based on which reliability prediction models can be developed: Statistical methods based on failure data collected in the field or in tests (Table 2.6.1) and Physics of Failure approaches based on a dedicated modelling of the underlying failure mechanisms (Table 2.6.2).
The classification of methods provided in Section 2.4.3 includes a third category, combining Physics of Failure methods with statistical data. The uncertainties associated with these approaches should be assessed with the aid of a single questionnaire (for statistical or Physics of Failure methods) if the modelling is dominated by one of the two methods. Otherwise, both questionnaires should be filled out. The answers given to each question should indicate how potential uncertainties and weaknesses are alleviated with the aid of the combined approach.
Models from handbooks and published data sources are assessed using the same approach. Questions that cannot be clearly answered based on the published information should be classified as “Unknown”.


2.6.3.2. Model usage uncertainties
Model usage uncertainties are introduced by the model user, e.g. when making assumptions for the model input. Also in this case the relevant sources of uncertainties should be identified and the most important ones (having the largest impact on the prediction) should be communicated together with the prediction. The questionnaire given in Table 2.6.3 can be used for this purpose.

The questionnaire should be used at the level of the system for which the prediction is made, e.g. for a single equipment, a payload, a platform or a whole spacecraft. The first part of the questionnaire relates to system level modelling uncertainties, including questions to identify major uncertainties associated with the model input from lower levels. The latter can be assessed by applying the same questionnaire to each component (if it is a system in itself), or by using the second part of Table 2.6.3 if the lower level input is based on elementary reliability prediction models.
For an efficient usage of the questionnaire in Table 2.6.3, consistent with the criteria discussed in Section 2.6.2.3, a stepwise iterative procedure may be followed for the uncertainty assessment:
Apply the first part of the questionnaire at the highest level of assembly considered
Relevant uncertainty sources introduced at system level are likely to have a larger impact on the prediction than uncertainties introduced at lower levels.
The last question in Section 1.2 of the questionnaire, related to the uncertainties inherent in predictions at lower levels, may remain unanswered in the first iteration. To proceed with the next steps, it is only relevant to identify the components (systems or parts) having the largest contribution to overall unreliability.
Apply the questionnaire to assess the uncertainties associated with predictions at the next lower level, focusing on the components having the largest contribution
Use the first part of the questionnaire if the component is modelled using system-level reliability prediction methods
Use the second part of the questionnaire if the component is modelled using elementary reliability models
Uncertainties associated with predictions that do not have a large contribution to system level unreliability do not have to be considered in detail.
Go further down in the system hierarchy to assess uncertainties associated with predictions at lower levels, as necessary.
The number of “dominating” components considered may be reduced in each step
Component models may be grouped if the same modelling approach is used for a variety of parts (e.g. models for EEE components taken from the same source).
Summarize the uncertainty assessment by making a (qualitative) statement about the major (e.g. Top 5) uncertainties associated with the predictions at the highest level of assembly.
The criteria discussed in Section 2.6.2.3 may be used to decide which uncertainty sources are most relevant at system level.
When following this procedure, some uncertainties introduced at the highest level of assembly considered may in fact be flowed down to all lower levels. These uncertainties are likely to dominate the system level reliability prediction. Large uncertainties introduced at lower levels may become relevant if the contribution of the item’s probability of failure to system level unreliability is very large. It should be noted that this becomes more and more unlikely the further the assessment goes down in the system hierarchy. Going into the details of each part level model is thus certainly not necessary to assess the major uncertainties associated with the prediction.
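The stepwise procedure can be sketched as a top-down walk over the system hierarchy. The data structure (nested dicts with an unreliability figure per node), the node names and the 5% cutoff are hypothetical illustrations, not prescribed values.

```python
def dominating_nodes(node, total_unreliability, cutoff=0.05, found=None):
    """Collect nodes whose contribution to overall unreliability exceeds the
    cutoff; children of non-dominating nodes are not assessed further."""
    found = [] if found is None else found
    share = node["unreliability"] / total_unreliability
    if share >= cutoff:
        found.append(node["name"])
        for child in node.get("children", []):
            dominating_nodes(child, total_unreliability, cutoff, found)
    return found

# Hypothetical spacecraft hierarchy with unreliability figures per node
spacecraft = {
    "name": "spacecraft", "unreliability": 0.020, "children": [
        {"name": "payload", "unreliability": 0.012, "children": [
            {"name": "TWTA",   "unreliability": 0.009},
            {"name": "filter", "unreliability": 0.0008},
        ]},
        {"name": "platform", "unreliability": 0.007},
    ],
}
focus = dominating_nodes(spacecraft, spacecraft["unreliability"])
```

Only the nodes in `focus` need a detailed uncertainty discussion; in this toy hierarchy the filter falls below the cutoff and its subtree is never assessed, mirroring the pruning described above.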