ECSS‐E‐ST‐60‐10C
15 November 2008
Space engineering
Control performance
ECSS Secretariat
ESA-ESTEC
Requirements & Standards Division
Noordwijk, The Netherlands
Foreword
This Standard is one of the series of ECSS Standards intended to be applied together for the
management, engineering and product assurance in space projects and applications. ECSS is a
cooperative effort of the European Space Agency, national space agencies and European industry
associations for the purpose of developing and maintaining common standards. Requirements in this
Standard are defined in terms of what shall be accomplished, rather than in terms of how to organize
and perform the necessary work. This allows existing organizational structures and methods to be
applied where they are effective, and for the structures and methods to evolve as necessary without
rewriting the standards.
This Standard has been prepared by the ECSS‐E‐ST‐60‐10 Working Group, reviewed by the ECSS
Executive Secretariat and approved by the ECSS Technical Authority.
Disclaimer
ECSS does not provide any warranty whatsoever, whether expressed, implied, or statutory, including,
but not limited to, any warranty of merchantability or fitness for a particular purpose or any warranty
that the contents of the item are error‐free. In no respect shall ECSS incur any liability for any
damages, including, but not limited to, direct, indirect, special, or consequential damages arising out
of, resulting from, or in any way connected to the use of this Standard, whether or not based upon
warranty, business agreement, tort, or otherwise; whether or not injury was sustained by persons or
property or otherwise; and whether or not loss was sustained from, or arose out of, the results of, the
item, or any services that may be provided by ECSS.
Change log
Table of contents
Introduction................................................................................................................7
1 Scope.......................................................................................................................8
5 Stability and robustness specification and verification for linear systems ....24
5.1 Overview .................................................................................................................. 24
5.2 Stability and robustness specification ...................................................................... 25
5.2.1 Uncertainty domains ................................................................................... 25
5.2.2 Stability requirement ................................................................................... 27
5.2.3 Identification of checkpoints ....................................................................... 27
5.2.4 Selection and justification of stability margin indicators .............................. 28
5.2.5 Stability margins requirements ................................................................... 28
5.2.6 Verification of stability margins with a single uncertainty domain ............... 29
5.2.7 Verification of stability margins with reduced and extended
uncertainty domains ................................................................................... 29
References ...............................................................................................................56
Bibliography.............................................................................................................57
Figures
Figure A-1 : Example showing the APE, MPE and RPE error indices ................................... 31
Figure A-2 : Example showing the PDE and PRE error indices ............................................. 31
Figure A-3 : Example of a statistical ensemble of errors. ....................................................... 32
Figure A-4 : The different ways in which a requirement for P(|ε|<1º) > 0,9 can be met ......... 33
Figure A-5 : Illustration of how the statistics of the pointing errors differ depending on
which statistical interpretation is used ................................................................. 33
Figure C-1 : Scenario example............................................................................................... 51
Tables
Table B-1 : Parameters whose distributions are assessed for the different pointing error
indices (knowledge error indices are similar)....................................................... 42
Table B-2 : Budget contributions from bias errors, where B represents the bias ................... 43
Table B-3 : Budget contributions from zero mean Gaussian random errors .......................... 44
Table B-4 : Uniform Random Errors (range 0-C) ................................................................... 44
Table B-5 : Budget contributions for periodic errors (low period sinusoidal) .......................... 45
Table B-6 : Budget contributions for periodic errors (long period sinusoidal) ......................... 46
Table B-7 : Some common distributions of ensemble parameters and their properties ......... 48
Table C-1 : Example of contributing errors, and their relevant properties .............................. 52
Table C-2 : Example of distribution of the ensemble parameters .......................................... 53
Table C-3 : Example of pointing budget for the APE index .................................................... 54
Table C-4 : Example of pointing budget for the RPE index .................................................... 54
Table D-1 : Correspondence between Pointing error handbook and ECSS-E-ST-60-10
indicators ............................................................................................................. 55
Introduction
1
Scope
This standard deals with control systems developed as part of a space project. It
is applicable to all the elements of a space system, including the space segment,
the ground segment and the launch service segment.
It addresses the issue of control performance, in terms of definition,
specification, verification and validation methods and processes.
The standard defines a general framework for handling performance indicators,
which applies to all disciplines involving control engineering, and which can be
applied as well at different levels ranging from equipment to system level. It
also focuses on the specific performance indicators applicable to the case of
closed‐loop control systems – mainly stability and robustness.
Rules are provided for combining different error sources in order to build up a
performance error budget and use this to assess the compliance with a
requirement.
NOTE 1 Although designed to be general, one of the major
application fields for this Standard is spacecraft
pointing. This explains why most of the examples
and illustrations are related to AOCS problems.
NOTE 2 The definitions and the normative clauses of this
Standard apply to pointing performance;
nevertheless, fully specific pointing issues are not
addressed here in detail (spinning spacecraft cases,
for example). Complementary material for
pointing error budgets can be found in ECSS‐E‐
HB‐60‐10.
NOTE 3 For their own specific purpose, each entity (ESA,
national agencies, primes) can further elaborate
internal documents, deriving appropriate
guidelines and summation rules based on the top
level clauses gathered in this ECSS‐E‐ST‐60‐10
standard.
This standard may be tailored for the specific characteristics and constraints of a
space project in conformance with ECSS‐S‐ST‐00.
2
Normative references
3
Terms, definitions and abbreviated terms
1 As a preliminary note, the error signals introduced in clause 3.2 are very general. They represent any
type of physical quantity (e.g. attitude, temperature, pressure, position). Depending on the situation
and on the nature of the control system, they can be scalar or multi-dimensional.
NOTE 2 A knowledge error index is applied to the
difference between the actual output of the system
and the known (estimated) system output.
NOTE 3 The most commonly used indices are defined in
this chapter (APE, RPE, AKE, etc.). The list is not
exhaustive.
NOTE 2 See annex A.1.4 for discussion of how to specify
the interval Δt, and annex A.1.3 for defining
requirements on the knowledge error.
NOTE 2 Where the time intervals Δt1 and Δt2 are separated
by a non‐zero time interval ΔtPDE.
NOTE 3 The durations of Δt1 and Δt2 are sufficiently long to
average out short term contributions. Ideally they
have the same duration. See annex A.1.4 for
further discussion of the choice of Δt1 , Δt2, ΔtPDE.
NOTE 4 The two intervals Δt1 and Δt2 are within a single
observation period
3.2.10 performance reproducibility error (PRE)
difference between the means of the performance error taken over two time
intervals within different observation periods
NOTE 1 This is expressed by:
PRE(Δt1, Δt2) = ēP(Δt2) − ēP(Δt1) = (1/Δt2) ∫Δt2 eP(t) dt − (1/Δt1) ∫Δt1 eP(t) dt
NOTE 2 Where the time intervals Δt1 and Δt2 are separated
by a time interval ΔtPRE.
NOTE 3 The durations of Δt1 and Δt2 are sufficiently long to
average out short term contributions. Ideally they
have the same duration. See annex A.1.4 for
further discussion of the choice of Δt1, Δt2, ΔtPRE.
NOTE 4 The two intervals Δt1 and Δt2 are within different
observation periods
NOTE 5 The mathematical definitions of the PDE and PRE
indices are identical. The difference is in the use:
PDE is used to quantify the drift in the
performance error during a long observation,
while PRE is used to quantify the accuracy to
which it is possible to repeat an observation at a
later time.
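As a numerical illustration of these windowed-mean indices, the sketch below (illustrative signal and helper name, not part of the Standard) computes a PDE/PRE-style value as the difference between the mean errors over two time intervals:

```python
import numpy as np

def windowed_mean(e, t, t_start, duration):
    """Mean of the sampled performance error e(t) over [t_start, t_start + duration)."""
    mask = (t >= t_start) & (t < t_start + duration)
    return e[mask].mean()

# Illustrative error signal: slow drift (0.01 units/s) plus 1 Hz jitter.
t = np.linspace(0.0, 100.0, 10001)
e = 0.01 * t + 0.5 * np.sin(2 * np.pi * t)

mean1 = windowed_mean(e, t, t_start=0.0, duration=10.0)   # mean over interval Δt1
mean2 = windowed_mean(e, t, t_start=80.0, duration=10.0)  # mean over interval Δt2
drift = mean2 - mean1  # PDE/PRE-style index: jitter averages out, drift remains
```

Because each interval spans whole periods of the jitter, the short-term contribution averages out and the index isolates the slow drift (approximately 0,8 here).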
NOTE 2 As stated here the exact relationship between t and
Δt is not well defined. Depending on the system it
can be appropriate to specify it more precisely: e.g.
t is randomly chosen within Δt, or t is at the end of
Δt. See annex A.1.4 for further discussion
3.2.13 robustness
ability of a controlled system to maintain some performance or stability
characteristics in the presence of plant, sensors, actuators and/or environmental
uncertainties
NOTE 1 Performance robustness is the ability to maintain
performance in the presence of defined bounded
uncertainties.
NOTE 2 Stability robustness is the ability to maintain
stability in the presence of defined bounded
uncertainties.
3.2.14 stability
ability of a system submitted to bounded external disturbances to remain
indefinitely in a bounded domain around an equilibrium position or around an
equilibrium trajectory
Abbreviation Meaning
AKE absolute knowledge error
APE absolute performance error
LTI linear time invariant
MIMO multiple input – multiple output
MKE mean knowledge error
MPE mean performance error
PDE performance drift error
PDF probability density function
PRE performance reproducibility error
RKE relative knowledge error
RMS root mean square
RPE relative performance error
RSS root sum of squares
SISO single input – single output
4
Performance requirements and budgeting
4.1.1 Overview
For the purposes of this standard, a performance requirement is a specification
that the output of the system does not deviate by more than a given amount
from the target output. For example, it can be requested that the boresight of a
telescope payload does not deviate by more than a given angle from the target
direction.
In practice, such requirements are specified in terms of quantified probabilities.
Typical requirements seen in practice are for example:
• “The instantaneous half cone angle between the actual and desired payload
boresight directions shall be less than 1,0 arcmin for 95 % of the time”
• “Over a 10 second integration time, the Euler angles for the transformation
between the target and actual payload frames shall have an RPE less than 20
arcsec at 99 % confidence, using the mixed statistical interpretation.”
• “APE(ε) < 2,5 arcmin (95 % confidence, ensemble interpretation), where
ε = arccos(xtarget.xactual)”
Although given in different ways, these all have a common mathematical form:
the probability that an error index I remains below a maximum value Imax is at
least a given confidence level PC. Since there are different ways to interpret the
probability, the applicable statistical interpretation is also given.
These concepts are discussed in Annex A.
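Requirements of this common form (an error index below a maximum value with at least a given probability) can be checked directly against sampled error data. A minimal sketch, with illustrative numbers and a hypothetical helper name:

```python
import numpy as np

def is_compliant(errors, limit, confidence):
    """Check a requirement of the form P(|error| < limit) >= confidence
    against a set of error samples."""
    fraction_within = np.mean(np.abs(errors) < limit)
    return fraction_within >= confidence

rng = np.random.default_rng(0)
errors = rng.normal(loc=0.0, scale=0.4, size=100_000)  # illustrative samples, in arcmin

# "APE < 1,0 arcmin at 95 % confidence" style check:
ok = is_compliant(errors, limit=1.0, confidence=0.95)  # holds for ~98.8 % of samples
```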
appropriate in most situations, but, like all approximations, it is to be used with
care. It is not possible to give quantitative limits on its domain of validity; a
degree of engineering judgement is involved.
Further discussion is given in annex A.2.1.
NOTE In general error budgeting is not sufficient to
extensively demonstrate the final performance of a
complex control system. The performance
validation process also involves appropriate,
detailed simulation campaign using Monte‐Carlo
techniques, or worst‐case simulation scenarios.
NOTE 2 The variance can be considered equivalent to the
standard deviation, as they are simply related. The
root sum square (RSS) value is only equivalent in
the case that the mean can be shown to be zero.
NOTE 3 Further information about the shape of the
distribution is only needed in the case that the
approximations used for budgeting are
insufficient.
σgroup ≤ ∑i=1..N ci σi

σgroup² = ∑i=1..N ci² σi²

σtotal² = ∑groups σgroup²
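These group-wise summation rules can be sketched in code (hypothetical helper names): linear summation within a group of correlated errors, root-sum-square (RSS) within a group of independent errors, and RSS between independent groups:

```python
import numpy as np

def combine_group(sigmas, coeffs, correlated):
    """Combine standard deviations of contributing errors within one group."""
    s = np.asarray(coeffs) * np.asarray(sigmas)
    if correlated:
        return s.sum()              # conservative linear summation
    return np.sqrt((s ** 2).sum())  # RSS for independent contributors

def combine_groups(group_sigmas):
    """Independent groups combine by RSS."""
    return np.sqrt((np.asarray(group_sigmas) ** 2).sum())

# Hypothetical budget with two groups of contributing errors:
g1 = combine_group([0.3, 0.4], coeffs=[1.0, 1.0], correlated=False)  # RSS -> 0.5
g2 = combine_group([0.1, 0.2], coeffs=[1.0, 1.0], correlated=True)   # linear -> 0.3
total = combine_groups([g1, g2])
```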
not recommended. Annex A.2.1 discusses this
further.
⎯ Xmax is the maximum range for the total error, given in the
specification.
NOTE 1 This condition is based on the assumption that the
total combined distribution has Gaussian or close
to Gaussian shape. This is not always the case: see
annex A.2.1 for more details.
NOTE 2 This condition is conservative.
NOTE 3 For example, this applies to the case of
“rotational” pointing errors, in which separate
requirements are given for each of the Euler angles
between two nominally aligned frames.
a. If the total error etotal = √(eA² + eB²) is a quadratic sum of two independent
errors eA and eB, each of which is a linear combination of individual
contributing errors, the following condition shall be met to ensure that
the budget is compliant with the specification:

√(μA² + μB²) + nP √(σA² + σB²) ≤ Xmax
where
⎯ Xmax is the maximum value for the total error, given in the
specification.
NOTE 1 This condition is extremely conservative and is not
an exact formula. See annex A.2.4 for more details.
NOTE 2 This applies to the case of “directional” pointing
errors, in which a requirement is given on the
angle between the nominal direction of an axis and
its actual direction. In this case eA and eB are the
Euler angles perpendicular to this axis.
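The condition of clause a. above, √(μA² + μB²) + nP·√(σA² + σB²) ≤ Xmax, can be sketched as follows (illustrative values; the function name is hypothetical):

```python
import math

def directional_compliant(mu_a, mu_b, sigma_a, sigma_b, n_p, x_max):
    """Conservative compliance check for a 'directional' error built from
    two perpendicular axis errors A and B."""
    return math.hypot(mu_a, mu_b) + n_p * math.hypot(sigma_a, sigma_b) <= x_max

# Zero-mean axes with sigma = 0.2 deg each, n_P = 2, against a 1.0 deg requirement:
ok = directional_compliant(0.0, 0.0, 0.2, 0.2, n_p=2.0, x_max=1.0)
```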
b. In the special case where σA >> μA, σB >> μB and σA ≈ σB,
the following condition shall be met to ensure that the budget is
compliant with the specification:

σA √(−2 log(1 − PC)) ≤ Xmax

where
⎯ Xmax is the maximum value for the total error, given in the
specification.
⎯ ‘log’ is the natural logarithm (base e).
NOTE 1 This condition is based on the properties of a
Rayleigh distribution. It is a less conservative
formula than the general case (4.2.4.2.1) – see
annex A.2.4 for more details.
NOTE 2 This applies to the case of “directional” pointing
errors in which the errors on the perpendicular
axes are similar.
5
Stability and robustness specification and
verification for linear systems
5.1 Overview
When dealing with closed‐loop control systems, the question arises of how to
specify the stability and robustness properties of the system in the presence of
an active feedback.
For linear systems, stability is an intrinsic performance property. It does not
depend on the type and level of the inputs; it is directly related to the internal
nature of the system itself.
For an active system, a quantified knowledge of uncertainties within the system
makes it possible to:
• Design a better controller, coping with the actual uncertainties.
• Identify the worst‐case performance criteria or stability margins of a
given controller design, and the actual values of the uncertain parameters
leading to this worst case.
In this domain the state‐of‐the‐art for stability specification is not fully
satisfactory. A traditional rule exists, going back to the times of analogue
controllers, asking for a gain margin better than 6 dB, and a phase margin better
than 30°. But this formulation proves insufficient, ambiguous or even
inappropriate for many practical situations:
• MIMO systems cannot be properly handled with this rule, which applies
to SISO cases exclusively.
• There is no reference to the way these margins are adapted (or not) in the
presence of system uncertainties; does the 6 dB / 30° requirement still hold
in the event of numerical dispersions on the physical parameters?
• In some situations, it is well known to control engineers that gain and
phase margins are not sufficient to characterise robustness; additional
indicators (such as modulus margins) can be required.
In the next clauses a more consistent method is proposed for specifying
stability and robustness. It is intended to help the customer formulate clear,
unambiguous requirements in an appropriate manner, and the supplier to
understand what is required without risk of ambiguity.
NOTE 1 This standard focuses on the structure of the
requirement. The type of margins, the numerical
values for margins, or even the pertinence of
setting a margin requirement are left to the
discretion of the customer, according to the nature
of the problem.
NOTE 2 More generally, this standard does not affect the
definitions of the control engineering methods and
techniques used to assess properties of the control
systems.
5.2 Stability and robustness specification

5.2.1 Uncertainty domains

5.2.1.1 Overview
As a first step, the nature of the uncertain parameters that affect the system, and
the dispersion range of each of these parameters are specified. This defines the
uncertainty domain over which the control behaviour is investigated, in terms
of stability and stability margins.
To illustrate the underlying idea of this clause, Figure 5‐1 shows the two
possible situations depicted in 5.2.1.2, 5.2.1.3 and 5.2.1.4, for a virtual system
with two uncertain parameters, param_1 and param_2:
• On the left, a single uncertainty domain is defined, where stability is
verified with given margins (“nominal margins”).
• On the right, the uncertainty domain is split into two sub‐domains: a
reduced one, where the “nominal” margins are ensured, and an extended
one, where less stringent requirements are put – “degraded” margins
being acceptable.
b. This domain shall consist of:
1. A list of the physical parameters to be investigated.
2. For each of these parameters, an interval of uncertainty (or a
dispersion) around the nominal value.
3. When relevant, the root cause for the uncertainty.
NOTE 1 The most important parameters for usual AOCS
applications are the rigid body inertia, the
cantilever eigenfrequencies of the flexible modes
(if any), the modal coupling factors, and the
reduced damping factors.
NOTE 2 Usually the uncertainty or dispersion intervals are
defined giving a percentage (plus and minus)
relative to the nominal value.
NOTE 3 These intervals can also be defined referring to a
statistical distribution property of the parameters,
for instance as the 95 % probability ensemble.
NOTE 4 In practice the uncertainty domain covers the
uncertainties and the dispersions on the
parameters. In the particular case of a common
design for a range or a family of satellites with
possibly different characteristics and tunings, it
also covers the range of the different possible
values for these parameters.
NOTE 5 The most common root causes for such
uncertainties are the lack of characterization of the
system parameter (for example: solar array flexible
mode characteristics assessed by analysis only),
intrinsic errors of the system parameter
measurement (for example: measurement error of
dry mass), changes in the system parameter over
the life of the system, and lack of characterization
of a particular model of a known product type.
5.2.1.4 Extended uncertainty domain
a. An extended uncertainty domain should be defined, over which the
system operates safely, but with potentially degraded stability margins
agreed with the customer.
NOTE 1 The definition of this extended uncertainty domain
by the customer is not mandatory, and depends on
the project validation and verification philosophy.
NOTE 2 For the practical use of this extended uncertainty
domain, see Clause 5.2.7.
Annex A (informative)
Use of performance error indices
Figure A‐1: Example showing the APE, MPE and RPE error indices

Figure A‐2: Example showing the PDE and PRE error indices
the time for all members of the statistical ensemble (Figure A‐4, left), or for
100 % of the time for 90 % of the ensemble (Figure A‐4, right), or some
intermediate case.
Which of these should apply depends upon the details of the mission and the
payload. For example, if we are trying to make sure that during an observation
at least 90 % of the light ends up in the payload field of view, then the first case
applies (90 % of the time), while if we require continuous observation then the
second (90 % of the ensemble) is more appropriate.
When giving a pointing requirement for an index I, make it clear what the
probability is referring to. This is known as the statistical interpretation of the
requirement. There are three commonly used statistical interpretations:
• The ensemble interpretation: there is at least a probability PC that I is
always less than Imax. (I.e. a fraction PC of the members of the statistical
ensemble have I < Imax at all times)
• The temporal interpretation: I is less than Imax for a fraction PC of the time
(i.e. this holds for any member of the ensemble)
• The mixed interpretation: for a random member of the ensemble at a
random time, the probability of I being less than Imax is at least PC.
Other statistical interpretations are possible, such as that a requirement is met
for 95 % of the time over 95 % of possible directions on the sky; the three
interpretations above can be thought of as extreme cases. It is important that it is
always made clear which interpretation is used, as the statistics are very
different for the different cases. This is illustrated in Figure A‐5 for a simple
case where ε = A sin(ωt).
Note that the temporal interpretation is supposed to hold for any member of
the ensemble. Since the ensemble potentially includes extreme cases, unlikely to
occur in reality, in practice a “modified temporal interpretation” is used, where
the ensemble is redefined to exclude such situations. (For example,
configurations with one or more parameters outside the 3σ limits can be
excluded from the ensemble.)
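The difference between the interpretations can be made concrete with a small Monte-Carlo sketch of the ε = A sin(ωt) example of Figure A‐5, with A uniform in 0‐10 (sample sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n_members, n_times = 2000, 2000
A = rng.uniform(0.0, 10.0, size=(n_members, 1))  # ensemble parameter
t = np.linspace(0.0, 2.0 * np.pi, n_times)       # one period of the sinusoid
eps = A * np.sin(t)                               # error, one row per member

limit = 5.0
within = np.abs(eps) < limit

p_mixed = within.mean()                  # random member at a random time
p_ensemble = within.all(axis=1).mean()   # fraction of members ALWAYS within
p_temporal = within.mean(axis=1).min()   # fraction of time for the worst member
```

Here the ensemble interpretation yields about 0,5 (only members with A < 5 stay within the limit at all times), the temporal interpretation about 1/3 for the worst member, and the mixed interpretation a higher value, showing how strongly the statistics depend on the chosen interpretation.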
Figure A‐4: The different ways in which a requirement for P(|ε|<1º) > 0,9 can be met
Figure A‐5: Illustration of how the statistics of the pointing errors differ depending on
which statistical interpretation is used
In this example ε =A sin(ωt), where A has a uniform distribution in the range 0‐
10. Left: ensemble interpretation (worst case time for each ensemble member).
Centre: temporal interpretation (all times for the worst case ensemble member).
Right: Mixed interpretation (all times for all ensemble members).
(Δt, Δt1, Δt2) to be specified when giving a requirement. Usually, but not always,
the same duration Δt is used for all of these indices.
Often there is an obvious choice for the averaging time, especially if there are
discrete observations. For example: if the spacecraft is taking a succession of
images, each lasting 1 minute, then all averaging should be done over a 1
minute observation period.
The MPE then represents the bias over an observation, the RPE the stability
during the observation, and the PDE and PRE the difference between two
observations which are nominally identical (see below). This discussion is in
terms of the pointing of a spacecraft payload, but can be easily generalised to
apply to other types of system.
In other cases this choice can be more complex, especially if continuous
pointing is required over a long duration, with no natural breakdown into
discrete periods. The averaging time Δt is then specified in some other way, for
example by the minimum time required to collect useful data (payload
integration time). However, this time should relate sensibly to the errors acting
on the system: there is no point averaging over an integration time if all of the
contributing errors have a longer period. In the end the choice comes down to a
judgement of what is appropriate for the particular system.
For the PDE and PRE indices (drift and reproducibility), there is an additional
timescale, namely the interval between the two averaging periods. Some ways
in which such timescales can be defined are:
• If the mission makes an extended observation, over many payload
integration periods, then the duration of this observation should be used
in the PDE definition.
• If the mission is expected to keep its accumulated errors below some
bounds for some minimum time (to avoid frequent calibrations requiring
time out from normal mode), then this is the timescale which should be
used in the PDE.
• If there is a requirement that the mission should make an observation,
and then be able to repeat this observation at some later time, then this is
the timescale that should be used for the PRE definition.
It should be noted that the PDE and PRE indices are defined identically, except
for whether or not there has been a calibration (resetting of accumulated errors)
between the intervals.
It is also possible to define the PRE index without an explicit reference to a
timescale, as the ability of the calibration process to return to an earlier state. In
this type of situation, no quantitative value for the time interval ΔTPRE is
specified.
When specifying requirements it should always be made clear whether these
timescales are treated statistically or as worst cases. Examples of how these can
differ are:
• PRE defined as a randomly chosen interval (in a continuous observation)
and a randomly chosen time within that interval.
• PRE defined as the worst case time within a randomly chosen interval.
• PDE defined as difference between an observation at a random time and
an observation a fixed period later.
• PDE defined as the difference between an observation at local noon and
an observation 12 hours later.
• If the requirement is at a high confidence (e.g. 99,73 %) then small
differences in the behaviour of the tails of the distribution change
whether the system is compliant or not.
In such cases more exact methods are recommended, though the approximate
techniques used here are still useful for an initial rough assessment.
How these quantities are obtained depends upon the error variation over time,
the index being assessed, the statistical interpretation being applied, the
available information regarding the ensemble parameters, and so on.
• For many errors, such as sensor noise, the mean and variance of the error
are supplied directly by the manufacturer.
• For requirements using the ‘temporal’ statistical interpretation, there are
generally simple formulae which can be applied, providing that the
worst case ensemble parameters can be identified.
• For requirements using the ‘ensemble’ statistical interpretation, the
appropriate choice of PDF (and hence the formulae for mean and
variance) depends upon how much is known about the ensemble
parameter.
• For requirements using the ‘mixed’ statistical interpretation, it is not
trivial to find the means and variances of the overall distribution.
However in many common cases approximate formulae exist.
More details can be found in Annex B together with formulae for some of the
more common cases.
(etotal,x, etotal,y, etotal,z)ᵀ = ∑i=1..N RR→Fi⁻¹ (ei,x, ei,y, ei,z)ᵀ

If we know the means {μi,x, μi,y, μi,z} and standard deviations {σi,x, σi,y, σi,z} for
each individual contributing error (see Annex B) then the mean and variances
of the total error angles are given by:

(μtotal,x, μtotal,y, μtotal,z)ᵀ = ∑i=1..N RR→Fi⁻¹ (μi,x, μi,y, μi,z)ᵀ ,  [σtotal²] = ∑i=1..N RR→Fi⁻¹ [σi²] RR→Fi

where [σtotal²] is the total covariance matrix and [σi²] are the individual
covariance matrices. When the individual covariance matrices are diagonal
(uncorrelated axis errors), the variances transform as:

(σtotal,x², σtotal,y², σtotal,z²)ᵀ = ∑i=1..N {RR→Fi⁻¹}² (σi,x², σi,y², σi,z²)ᵀ
where {M}² means that each element of the matrix M is squared individually,
not the matrix multiplied by itself.

μItotal,x = μI1,x + μI2,x + … + μIN,x ,  σtotal,x² = σ1,x² + σ2,x² + … + σN,x²
μItotal,y = μI1,y + μI2,y + … + μIN,y ,  σtotal,y² = σ1,y² + σ2,y² + … + σN,y²
μItotal,z = μI1,z + μI2,z + … + μIN,z ,  σtotal,z² = σ1,z² + σ2,z² + … + σN,z²
That is, the error index applied to the total error is the sum of the error index
applied to each of the contributing errors. This is the basis of the tables given in
Annex B.
In the case where two errors are known or suspected to be correlated, it is
recommended to alter slightly the summation rules for variance (to make them
more conservative). An example is the case in which two errors both vary at the
orbital period; it is then very possible that they are both influenced by the same
factors (phase of orbit, Sun angle, etc.) and are therefore in phase.
Suppose that correlation is known (qualitatively) or suspected for two of the
contributing errors (A and B). This can be dealt with by applying the more
conservative approach of combining their standard deviations linearly instead
of using the usual RSS formula:
σtotal² = σ1² + σ2² + … + (σA + σB)² + … + σN²
The means are still summed linearly as before. The justification for this formula
is that the two errors are effectively summed and treated as a single error (i.e.
that the value of one depends upon the value of the other). It can however be
more convenient to list them separately in a budget, to show that all effects
have been included.
It is also possible to have a summation rule incorporating a factor expressing
the degree of coupling, but since this involves determining the degree of
coupling, it is more complicated.
Note that the summation rules given above are not the only ones that can be
found in the literature. In particular, the approach adopted by the ESA pointing
error handbook was to classify errors according to their time periods (short
term, long term, systematic) and use a different way to sum between classes.
This was an attempt to make the summation rules more conservative, and such
rules have worked well in practice, but there is no mathematical justification for
such approaches, and they can overestimate the overall variance if the inputs
have been estimated well.
$$\phi_x \approx \sqrt{e_y^2 + e_z^2}$$
The resulting distribution is not even close to Gaussian in shape. In the special case that the errors about the two perpendicular axes have almost Gaussian distributions with identical standard deviations $\sigma_{total}$ and negligible means, the directional error follows a Rayleigh distribution, and the system is compliant with the requirement provided that:
leads to more conservative results. For example, with the hypothesis that both errors have negligible means and similar standard deviations, the requirement at 95 % confidence gives $2.83\,\sigma_{total} \leq \phi_{max}$, while for 99,73 % confidence the condition is $4.24\,\sigma_{total} \leq \phi_{max}$.
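The quoted factors 2.83 and 4.24 match $2\sqrt{2}$ and $3\sqrt{2}$ numerically, i.e. per-axis 2σ and 3σ limits RSS-combined across the two axes (this reading is an assumption, not stated in the text). A short check, for unit $\sigma_{total}$, confirms that both factors are indeed more conservative than the exact Rayleigh quantiles:

```python
from math import log, sqrt

def rayleigh_quantile(confidence):
    """phi such that P(error <= phi) = confidence for a Rayleigh
    distribution with unit sigma."""
    return sqrt(-2.0 * log(1.0 - confidence))

q95 = rayleigh_quantile(0.95)       # exact factor at 95 % confidence
q9973 = rayleigh_quantile(0.9973)   # exact factor at 99,73 % confidence

# The factors quoted in the text are larger, i.e. more conservative.
assert q95 < 2.83 and q9973 < 4.24
print(round(q95, 2), round(q9973, 2))
```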
Annex B (informative)
Inputs to an error budget
B.1 Overview
The summation rules recommended in Clause 4.2.3 assume that the means and variances are known for all errors contributing to the total. In many cases these parameters are readily available (quoted in a sensor specification, for instance); in other cases some work is required to obtain values, first deriving the probability distribution and then taking its mean and variance.
Some general rules should always be applied, the most important of which is to follow a conservative approach and to overestimate rather than underestimate values. For example, assuming a priori that a given error has an approximately Gaussian shape, identifying the bounds with the 3σ (99,7 %) level and computing μ and σ accordingly can in fact severely underestimate the value of σ if the error actually has a uniform distribution or sinusoidal form.
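This pitfall can be quantified with a short sketch (hypothetical bound a = 1, arbitrary units): the Gaussian 3σ assumption gives σ = a/3, while a uniform error bounded by a has σ = a/√3 and a sinusoid of amplitude a has σ = a/√2.

```python
from math import sqrt

a = 1.0  # known bound of the error (hypothetical units)

# Assuming a Gaussian shape with the bound at the 3-sigma (99,7 %) level:
sigma_gaussian_assumption = a / 3.0   # ~0.333

# True standard deviation if the error is actually uniform on [-a, a]:
sigma_uniform = a / sqrt(3.0)         # ~0.577

# True standard deviation if the error is a sinusoid of amplitude a:
sigma_sinusoid = a / sqrt(2.0)        # ~0.707

# The Gaussian assumption underestimates sigma in both cases.
assert sigma_gaussian_assumption < sigma_uniform < sigma_sinusoid
```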
The values used differ depending upon the index being assessed (see Table B‐1)
and the statistical interpretation applied:
• Ensemble interpretation: variation across the statistical ensemble at the worst case time. In this case, the mean and variance used in the budget refer to the variation of the possible worst case value, and depend upon the accuracy to which the ensemble parameters are known.
• Temporal interpretation: variation over time for the worst case member of the statistical ensemble. In this case, the mean and variance used in the budget relate to the variation over time, and can be derived analytically if the time variation is known.
• Mixed interpretation: variation across both time and over the statistical
ensemble. It can be shown (see B.7) that in this case the correct value of
the variance to use in the budgets is related to the root mean square
(RMS) value of the ensemble parameter.
This annex discusses the most common types of error, and gives tables to show the correct inputs to the budgets for different cases. Not all of the formulae given are exact, but they are generally a sufficiently good approximation. Other types of error can be analysed using similar methods.
NOTE In the tables of Annex B, the notations are as
follows: E for “ensemble”, T for “time”, M for
“mixed”; P(e) is the probability density function,
μ(e) is the mean, and σ(e) is the standard deviation
of the error.
Table B-1: Parameters whose distributions are assessed for the different pointing error indices (knowledge error indices are similar)

| Index being assessed | Parameter whose distribution is assessed | Description |
|---|---|---|
| APE | $e(t)$ | Error value over time |
Table B-2: Budget contributions from bias errors, where B represents the bias

| Index | S.I. | P(e) | μ(e) | σ(e) | Notes |
|---|---|---|---|---|---|
| APE | E | $P(B)$ | $\mu_B$ | $\sigma_B$ | For $P(B)$, $\mu_B$ and $\sigma_B$ see B.6 |
Table B-3: Budget contributions from zero mean Gaussian random errors

| Index | S.I. | P(e) | μ(e) | σ(e) | Notes |
|---|---|---|---|---|---|
| APE | E | $P(3s) = \tfrac{1}{3}P(s)$ | $3\mu_s$ | $3\sigma_s$ | See text. For $P(s)$, $\mu_s$ and $\sigma_s$ see B.6 |
| APE | T | $G(0, s_{WC}^2)$ | 0 | $s_{WC}$ | $G(m,V)$ = Gaussian with specified mean and variance; $s_{WC}$ = worst case $s$ |
| APE | M | $\int G(0, s^2)\,P(s)\,ds$ | 0 | $\langle s \rangle$ | For $P(s)$, $\mu_s$ and $\langle s \rangle$ see B.6. For derivation see B.7 |
| MPE | All | 0 | 0 | 0 | Zero mean so no MPE contribution |
| RPE | All | As for APE | | | Zero mean so RPE identical to APE |
| PDE, PRE | All | No contribution | | | Short timescale, and assume mean value does not change over time |

NOTE: Zero mean is assumed; if a non-zero mean is present it can be treated as a separate bias-type error
| Index | S.I. | P(e) | μ(e) | σ(e) | Notes |
|---|---|---|---|---|---|
| MPE | E | $P(\tfrac{1}{2}C) = 2P(C)$ | $\tfrac{1}{2}\mu_C$ | $\tfrac{1}{2}\sigma_C$ | For $P(C)$, $\mu_C$ and $\sigma_C$ see B.6 |
| MPE | T | $\delta(\tfrac{1}{2}C_{WC})$ | $\tfrac{1}{2}C_{WC}$ | 0 | $C_{WC}$ = worst case $C$ |
| MPE | M | $P(\tfrac{1}{2}C) = 2P(C)$ | $\tfrac{1}{2}\mu_C$ | $\tfrac{1}{2}\langle C \rangle$ | For $P(C)$, $\mu_C$ and $\langle C \rangle$ see B.6. For derivation see B.7 |
| RPE | E | $P(\tfrac{1}{2}C) = 2P(C)$ | $\tfrac{1}{2}\mu_C$ | $\tfrac{1}{2}\sigma_C$ | For $P(C)$, $\mu_C$ and $\sigma_C$ see B.6 |
| RPE | T | $U(-\tfrac{1}{2}C_{WC}, \tfrac{1}{2}C_{WC})$ | 0 | $\tfrac{1}{\sqrt{12}}C_{WC}$ | $U(x_{min},x_{max})$ = uniform in range $x_{min}$ to $x_{max}$. $C_{WC}$ = worst case $C$ |
| RPE | M | $\int U(-\tfrac{1}{2}C, \tfrac{1}{2}C)\,P(C)\,dC$ | 0 | $\tfrac{1}{\sqrt{12}}\langle C \rangle$ | For $P(C)$, $\mu_C$ and $\langle C \rangle$ see B.6. For derivation see B.7 |
| PDE, PRE | All | No contribution | | | Short timescale, and assume mean value does not change over time |
Table B-5: Budget contributions for periodic errors (short period sinusoidal)

| Index | S.I. | P(e) | μ(e) | σ(e) | Notes |
|---|---|---|---|---|---|
| APE | E | $P(A)$ | $\mu_A$ | $\sigma_A$ | For $P(A)$, $\mu_A$ and $\sigma_A$ see B.6 |
| APE | T | $\pi^{-1}\left[(A_{WC}-e)(A_{WC}+e)\right]^{-1/2}$ | 0 | $\tfrac{1}{\sqrt{2}}A_{WC}$ | $A_{WC}$ = worst case $A$ |

NOTE: Zero mean is assumed; if a non-zero mean is present it can be treated as a separate bias-type error
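The temporal entries for a sinusoid can be checked by Monte Carlo sampling over one period (illustrative amplitude, standard-library Python only): the samples follow the arcsine distribution with zero mean and $\sigma = A/\sqrt{2}$.

```python
import math
import random

random.seed(0)
A = 1.0          # worst-case sinusoid amplitude (hypothetical)
N = 200_000

# Sample e = A*sin(theta) with theta uniform over one period.
samples = [A * math.sin(random.uniform(0.0, 2.0 * math.pi)) for _ in range(N)]

mean = sum(samples) / N
var = sum((e - mean) ** 2 for e in samples) / N

# Mean ~ 0 and sigma ~ A/sqrt(2), matching the table entries.
assert abs(mean) < 0.01
assert abs(math.sqrt(var) - A / math.sqrt(2.0)) < 0.01
```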
Table B-6: Budget contributions for periodic errors (long period sinusoidal)

| Index | S.I. | P(e) | μ(e) | σ(e) | Notes |
|---|---|---|---|---|---|
| APE | E | $P(A)$ | $\mu_A$ | $\sigma_A$ | For $P(A)$, $\mu_A$ and $\sigma_A$ see B.6 |
| APE | T | $\pi^{-1}\left[(A_{WC}-e)(A_{WC}+e)\right]^{-1/2}$ | 0 | $\tfrac{1}{\sqrt{2}}A_{WC}$ | $A_{WC}$ = worst case $A$ |
$$\mu_x = \int x P(x)\,dx\,, \qquad \sigma_x^2 = \int (x - \mu_x)^2 P(x)\,dx\,,$$
$$\langle x \rangle^2 = \int x^2 P(x)\,dx$$
This can occur if the physical process underlying the ensemble is known well
enough to predict its behaviour. An example is a pointing error caused by
position uncertainty, in which the distribution of the ensemble parameter can
be obtained knowing the uncertainties in the process of determining the orbit.
Table B‐7 gives some common distributions used to describe ensemble
parameters, and their important properties.
Table B-7: Some common distributions of ensemble parameters and their properties

| Name | Distribution $P(x)$ | Mean $\mu_x$ | Variance $\sigma_x^2$ | RMS $\langle x \rangle$ |
|---|---|---|---|---|
| Delta | $P(x) = \delta(x - \mu)$ | $\mu$ | 0 | $\mu$ |
| Gaussian | $P(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2\right)$ | $\mu$ | $\sigma^2$ | $\sqrt{\mu^2 + \sigma^2}$ |
| Uniform | $P(x) = \frac{1}{x_{max} - x_{min}}$ for $x_{min} < x < x_{max}$ | $\frac{x_{max} + x_{min}}{2}$ | $\frac{(x_{max} - x_{min})^2}{12}$ | $\sqrt{\frac{x_{max}^3 - x_{min}^3}{3(x_{max} - x_{min})}}$ |
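The table entries can be cross-checked against the identity $\langle x \rangle^2 = \mu_x^2 + \sigma_x^2$, which holds for any distribution. A sketch for the uniform case, with hypothetical bounds:

```python
import math

xmin, xmax = 2.0, 5.0   # hypothetical range of the ensemble parameter

# Closed-form moments of the uniform distribution, as tabulated.
mean = (xmax + xmin) / 2.0
var = (xmax - xmin) ** 2 / 12.0
rms = math.sqrt((xmax**3 - xmin**3) / (3.0 * (xmax - xmin)))

# RMS squared equals mean squared plus variance.
assert abs(rms**2 - (mean**2 + var)) < 1e-9
print(mean, var, rms)
```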
The overall distribution is obtained by integrating over the possible values of the ensemble parameter A:
$$P(x) = \int P(x \mid A)\, P(A)\, dA$$
Its mean is
$$\mu_x = \int x P(x)\,dx$$
Substituting the above expression for P(x) and doing some manipulation gives
$$\mu_x = \int x \left( \int P(x \mid A)\, P(A)\, dA \right) dx = \int \left( \int x\, P(x \mid A)\, dx \right) P(A)\, dA = \int \mu_x(A)\, P(A)\, dA$$
That is, the overall mean can be found using the distribution of A, provided that the mean is known for a given value of A. Similarly for the variance σ²:
$$\sigma_x^2 = \int \sigma_x^2(A)\, P(A)\, dA$$
This avoids computing the full distribution P(x). For example, suppose that we have a noise error with a zero mean Gaussian distribution over time:
$$P(x \mid S) = \frac{1}{\sqrt{2\pi S^2}} \exp\left(-\frac{x^2}{2S^2}\right)$$
In this case the ensemble parameter is the temporal standard deviation S, and for a given value of S, μ = 0 and σ = S. If we know that S lies between 0 and $S_{max}$, but nothing else, then assuming a uniform distribution gives
$$\mu_x = \int \mu(S)\, P(S)\, dS = 0$$
$$\sigma_x^2 = \int \sigma^2(S)\, P(S)\, dS = \int_0^{S_{max}} S^2 \,\frac{1}{S_{max}}\, dS = \frac{S_{max}^2}{3}$$
This is much easier than finding the full distribution
$$P(x) = \int_0^{S_{max}} \frac{1}{S_{max}} \frac{1}{\sqrt{2\pi S^2}} \exp\left(-\frac{x^2}{2S^2}\right) dS$$
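The result $\sigma_x^2 = S_{max}^2/3$ can be confirmed by a Monte Carlo sketch (hypothetical $S_{max}$): draw S uniformly, then draw x from a zero-mean Gaussian with standard deviation S, and measure the variance of the mixture directly.

```python
import random

random.seed(1)
S_max = 2.0       # hypothetical upper bound on the temporal std deviation
N = 300_000

# Draw S ~ U(0, S_max), then x ~ N(0, S^2): the "mixed" distribution.
xs = [random.gauss(0.0, random.uniform(0.0, S_max)) for _ in range(N)]

var_x = sum(x * x for x in xs) / N          # mean is zero by construction
assert abs(var_x - S_max**2 / 3.0) < 0.05   # expect S_max^2/3
```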
Annex C (informative)
Worked example
[Figure: spacecraft axes (x, y, z); attitude determination using star tracker; target imaged on payload detector grid]
Note also that correlations between errors are not considered for this simple
example.
The choice of the ensemble parameter distribution is different for each error:
• The time dependent star sensor errors are assigned a delta-function distribution. This means that the value of the ensemble parameter (i.e. the noise level) is essentially known exactly.
• The thermoelastic distortion and payload/star sensor misalignment are
assigned uniform distributions. This means that their bounds can be
identified, but not the behaviour between them (see B.6)
• The other errors are assigned Gaussian distributions, meaning that a
distribution of their possible values is known. For example, the targeting
error is related to the initial position error, and even though this is not
known a‐priori we can predict the probability of it having a particular
value.
More details are given in Annex B.
Annex D (informative)
Correspondence with the pointing error
handbook
For the specific domain of AOCS and spacecraft pointing, the terminology introduced in this Standard differs in some respects from that of the ESA Pointing Error Handbook. These changes were necessary in order to cover a more general field of application, not restricted to AOCS, and to deal with error signals of any physical type, not only angles and angular rates.
To help the user switch to the new reference terms, Table D-1 explains how to translate the Pointing Error Handbook indicators into the ECSS-E-ST-60-10 ones:
Table D-1: Translation of Pointing Error Handbook indicators into ECSS-E-ST-60-10 terms

| Pointing Error Handbook | ECSS-E-ST-60-10 |
|---|---|
| APE (absolute pointing error) | APE (absolute performance error), applied to the pointing error |
| RPE (relative pointing error) | RPE (relative performance error), applied to the pointing error |
| PDE (pointing drift error) | PDE (performance drift error), applied to the pointing error |
| PRE (pointing reproducibility error) | PRE (performance reproducibility error), applied to the pointing error |
| MPE (median pointing error) | MPE (mean performance error), applied to the pointing error |
| AME (absolute measurement error) | AKE (absolute knowledge error), applied to the angular knowledge |
| ARE (absolute rate error) | APE (absolute performance error), applied to the angular rate |
| ARME (absolute rate measurement error) | AKE (absolute knowledge error), applied to the angular rate knowledge |
References
Bibliography