A Performance-Based System Maturity Assessment Framework
Ryan Gove and Joe Uzdzinski
Conference on Systems Engineering Research (CSER'13)
Eds.: C.J.J. Paredis, C. Bishop, D. Bodner, Georgia Institute of Technology, Atlanta, GA, March 19-22, 2013.
© 2013 Ryan Gove and Joe Uzdzinski. Published by Elsevier B.V. Selection and/or peer-review under responsibility of Georgia
Institute of Technology.
Abstract
The lack of system-level maturity has led to the development of many processes and methods for documenting, assessing, and
resolving maturity and readiness. Yet, as the literature shows, there is no single integrated methodology to assess system
maturity and performance throughout the lifecycle of a system, from concept development, through design, and ultimately to the
operational deployment of the system. There is a need to develop a framework whereby emerging technologies and their
capabilities may be evaluated up-front with an assessment conducted in terms of both technology maturity and effectiveness.
The challenge is to understand the dependence or sensitivity of the technology of interest in terms of its impact on the overall
system technology readiness and performance over time. The paper will lay out the specific requirements for such a framework.
This definition will be critical in assessing measures as system maturity metrics, since a critical question implied by this
definition is whether a given measure (e.g., performance) can aid in improving the system; that is, the metric must not only capture the
current state of the system but also provide a path to improvement. System performance measures will be introduced to provide
a performance foundation for the overall framework. A key system performance measure will include projected system
performance or effectiveness based on component performance capabilities. A logical assignment of relevant metrics by
program phase will be made based on specific criteria. An approach using various readiness/maturity models and metrics to
map from the system development state to the desired maturity level will be described.
Keywords: System Maturity; Performance Assessment; Technology Readiness
1. Introduction
Systems Engineering (SE) represents a variety of processes and models which are applied to various types of
systems development, integration, and sustainment efforts. The application of any one of these is a result of
multiple factors including, but not limited to, the industry, the domain, the organization, and the personal and
professional experiences of those executing the processes. The theoretical foundations of SE provide a basis for
these processes, which are then tailored to their own environment.
With such diverse SE process execution, managing the system maturity becomes a great challenge. It is therefore
incomplete to postulate that a single, homogeneous method for system maturity assessment is achievable. The
research to date [1] [2] [3] [4] has focused on the defense- and space-oriented domains, and has proven effective in
those domains. To expand beyond this point, any attempt to manage system maturity must be based on
heterogeneous mixtures of current assessment methodologies linked via a set of logical connections. An IF-THEN
conditional structure can tailor the system maturity assessment to the relevant environment. Under a traditional
development paradigm this could be tied to functional requirement fulfillment, but, as is often the case, functional
requirements are not enough, especially when system integration and/or sustainment are considered [5] [3]. Since
the validation of functional requirements is a direct output of the SE process [6] [7] [8], and a major factor in current
system maturity methods, this work will examine the problem of system maturity via performance-based factors.
Specifically, how can a framework be constructed which provides the logical assessments for handling diverse
systems while assessing system maturity as a function of performance-based factors?
To clarify the use of some terms in the work, the following definitions for framework, readiness and maturity are
offered. Zachman [9] defines a framework as a logical structure intended to provide a comprehensive representation
of a whole as separated into subparts that are mutually exclusive, unambiguous, taken together include all
possibilities, and which are independent of the tools and methods used. Tetlay and John [10] have addressed the
importance of the terms readiness and maturity within systems engineering. They claim that System Maturity is a
stateful metric which can be categorized as being in one of three states: System is Immature (SI), System Maturity
is in Progress (SMIP), and System Maturity has been Achieved (SMA). These states exist within the process of
system development and can be associated with verification of the system. Readiness, they contend, is a Boolean
value (i.e., the system is either ready or not).
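These definitions can be captured directly in a small data structure; the following Python sketch is our own illustration of the three maturity states and the Boolean readiness flag, not part of Tetlay and John's work:

    from enum import Enum
    from dataclasses import dataclass

    class MaturityState(Enum):
        # Stateful maturity categories per Tetlay and John [10]
        SYSTEM_IMMATURE = "SI"            # System is Immature
        MATURITY_IN_PROGRESS = "SMIP"     # System Maturity is in Progress
        MATURITY_ACHIEVED = "SMA"         # System Maturity has been Achieved

    @dataclass
    class SystemStatus:
        maturity: MaturityState  # tracked through development and verification
        ready: bool              # readiness is Boolean: the system is ready or it is not

    status = SystemStatus(MaturityState.MATURITY_IN_PROGRESS, ready=False)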
Given the trends in the development of defense systems, tending toward firmer up-front requirements and more
cost-effective solutions [11], the Department of Defense has sought to place greater emphasis on affordability and
agility [12] [13] earlier in the development of systems. Thus, given these competing forces, there is a need for a
systematic understanding and assessment of the performance and readiness of emerging technologies as they are
integrated into complex systems that develop and evolve over time through the procurement and acquisition process. There is a
particular challenge as complex systems are impacted by the development of constituent technologies at various
points in the development cycle. The challenge is to understand the dependence or sensitivity of the technology of
interest in terms of its impact on the overall system technology readiness, cost, and performance over time. Such a
framework may be of benefit in the commercial sector as well. There is a commensurate need for a model in support
of the aforementioned framework for assessing front-end technology maturity, cost, and system performance that
may predict the impact of that technology on system effectiveness and overall cost as the system matures.
A number of frameworks and methodologies have been proposed to address the transition of emerging
technologies into complex systems. Such methods become increasingly important in the current cost-constrained
environment. Sauser et al [14] apply systems maturity and development tools to technology maturity and
associated cost. Koen et al [15] offer a concept development model for assessing innovation early in product
development. Verma et al [16] [17] address the importance of front-end requirements to system concept design.
Bodner et al [18] introduce the concept of process concurrency of complex acquisition processes and immature
technologies. Valerdi and Kohl [19] propose a method for assessing technology risk through the introduction of a
technology risk driver. Dar et al [20] from the Aerospace Corporation and Jet Propulsion Laboratory (JPL)
conducted a comprehensive survey of risk management practices and compiled lessons learned. Technology
readiness assessment approaches have been defined and applied in the literature to track the maturity and evolution
of technology over time [14] [20] [21] [22] [23] [24] [25] [26] [27]. Approaches and readiness metrics [21] [22]
[23] [24] [26] for technology development and readiness assessment have been developed and applied in specific
instances. In addition, system of systems (SoS) have been examined to address the coordination of requirements.
However, there is not a distinct link to the performance of the system, even though the technology and, in some
instances, the associated cost to achieve that technology, have been assessed. DeLaurentis has applied the DoD SoS
technical management and SE processes and has developed a conceptual model [28] which depicts the processes in
a hierarchical fashion and represents the flow of control between the processes throughout the acquisition life-cycle.
Other models [26] [29] [30] have been developed to characterize capability decisions. Again, the proposed models
do not depict the performance associated with the system being developed or procured.
Mavris et al have introduced a framework called Virtual Stochastic Life-Cycle Design (VSLCD) [31] in which
the lifecycle of a system is represented through optimization models and through which uncertainty may be
addressed and mitigated. In other work during that time, Kirby and Mavris suggest the use of the Technology
Identification, Evaluation, and Selection (TIES) method [32] for evaluating and prioritizing technologies. An
Action Control Theory framework is posited by Worm [33] for characterizing complex missions. Elements of this
methodology may be applied to the assessment of constituent complex system technologies. Heslop et al propose
the Cloverleaf Model [34], in which they evaluate technology strengths, market attractiveness, commercialization,
and management support. Ramirez-Marquez and Sauser postulate an optimization model [35] for system readiness.
Although these sources seek to quantitatively assess the maturity of the system or complex system of interest, there
is no methodology for associating this technology maturity with the performance of the system or constituent
systems.
A dynamic form of Quality Function Deployment is introduced by Ender et al [36] to identify subsystem
functions associated with critical technology elements (CTEs). A critical technology element is software or
hardware that is necessary for the successful development or integration of the new system [7].
The impact of varying constituent systems and sub-system technologies on technology maturity assessment of
Systems of Systems is affirmed in references [37] [38] [39] [40]. Effective ways to accomplish complex system or
SoS evolution given the varying development cycles of the constituent systems and technology sets are also
evaluated. An extension of this methodology to characterize the performance of these functions is needed.
Ramirez-Marquez et al and others [35] [37] [41] support evaluation of strategies for effective integration of
constituent systems into systems of systems [35] [41] [42]. A number of sources [43] [44] [42] assert the
importance of growth of the constituent systems or subsystems within a complex system when adapting to multiple
SoSs or complex systems. Potential follow-up work could represent the effects of this adaptation on the
performance of such systems or constituent systems. There is a need for a framework [45] to define the front-end
concept and cost effectiveness upon which technology maturity may be assessed and tracked as the system and/or
constituent systems mature. Sauser et al [46] investigate the impact of cost and earned value in assessing
technology maturity. Mandelbaum [4] makes the case for the selection of critical technology elements based on
systems supportability cost and time.
Azizian et al. [21] present a review of maturity assessment approaches which analyzes and decomposes these
assessments into three groups: qualitative, quantitative, and automated. The qualitative group consists of
Manufacturing Readiness Levels (MRLs), Integration Readiness Levels (IRLs), TRL for non-system technologies,
TRL for Software, Technology Readiness Transfer Level, Missile Defense Agency Hardware TRL, Moorhouse's
Risk versus TRL Metric, Advancement Degree of Difficulty (AD2), and Research and Development Degree of
Difficulty (RD3). These metrics are then assessed utilizing the SWOT (Strength Weakness Opportunity Threat)
model which reveals that while each is able to excel at individual assessments, none provide a complete picture of
complex system maturity. Next, the quantitative group consists of SRL, SRLmax, Technology Readiness and Risk
Assessment (TRRA), Integrated Technology Analysis Methodology (ITAM), TRL for Non-Developmental Item
Software, Technology Insertion (TI) Metric, and TRL Schedule Risk Curve. The SWOT analysis points to the fact
that while the metrics are inclusive and robust, their complex mathematical calculations are not ideal for rapid,
iterative analyses and can be prone to simple mathematical errors during calculation. However, it is noted that the
fact that these metrics provide tangible data for decision making outweighs these shortcomings. Finally, the
automated methods of TRL calculator, MRL calculator, Technology Program Management Model (TPMM), and
UK MoD SRL are assessed. It is found during the SWOT analyses that the automated methods combine some of
the best qualities of both the qualitative and quantitative, but also some of their issues, namely the fact that most of
the methods involve the answering of questions, which can be misinterpreted or just simply incorrectly answered.
These metrics represent the current state of system maturity and readiness assessment, and must be explored to
determine what significance each adds and how they interrelate.
In order to begin to assess, categorize, and integrate metrics, it is necessary to start with a clear definition of what
a metric is and, more importantly, why metrics are useful in the context of system maturity, as it has been established
that at the lowest level it is maturity which is to be measured [25]. Kiemele et al. [47] provide such a definition,
taken from a book on statistical tools for continuous improvement.
2. Framework Elements
The framework for approaching the problem of inter-disciplinary technology management must be one which
addresses the issue from multiple directions, and focuses the best practices of the various domains to provide value
to the system in the form of a focused and accurate system maturity plan. This plan is one of the outputs of the
framework and is derived from the answers to a set of the most basic systems analysis questions, or elements of the
analysis. Sauser and Boardman [48] address these in the form of "what, why, how, where, and when," and this work
will adapt that format in formulating the specific questions and seeking answers. These questions are provided as a
starting point and may be adapted and/or extended based on the scope and circumstances of the system under
development. The technology, integration, and system maturity metrics described previously will form the base
from which the answers to these basic queries will be used to filter the metrics and provide a logical assessment
from which a plan can be formulated.
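A minimal sketch of this question-driven filtering is shown below; the question names follow the what/why/how/where/when format, while the candidate metric pools and the helper filter_metrics are illustrative placeholders rather than the framework's actual assignments.

    # Candidate metric pools keyed by the basic analysis questions; the framework
    # filters these pools using the answers gathered for a specific program.
    CANDIDATE_METRICS = {
        "where": ["lifecycle phase gates", "schedule / earned value"],
        "what":  ["SRL", "IRL"],
        "how":   ["TRL", "MRL", "RD3"],
        "when":  ["earned value", "TRL schedule risk curve"],
        "why":   ["TPMs", "performance-based assessment"],
    }

    def filter_metrics(answers: dict) -> dict:
        # Keep only the pools for questions that were actually answered; a real
        # implementation would apply the IF-THEN logic discussed in the text.
        return {q: CANDIDATE_METRICS[q] for q in answers if q in CANDIDATE_METRICS}

    print(filter_metrics({"where": "refinement", "why": "increase MPG"}))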
2.1 Where
First, we must identify where we are in the system lifecycle, as the system should be handled differently based on
what lifecycle phase we are currently in. If we are early in concept development, or requirements derivation, etc., we
should handle the TRL and IRL evaluations and risks differently than if we are in the I&T, initial operations, or
sustainment phases. Buede [49] and ISO 15288 [8], along with others [7] [3] [50], provide the lifecycle phases which
can be selected as an enumerated set: development, manufacturing, deployment, training, operations and
maintenance, refinement, and retirement.
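For illustration, the enumerated set above can be paired with a simple IF-THEN selection of metrics by phase; the sketch below is a hypothetical mapping intended only to show the shape of the logic, not a validated assignment.

    from enum import Enum, auto

    class LifecyclePhase(Enum):
        DEVELOPMENT = auto()
        MANUFACTURING = auto()
        DEPLOYMENT = auto()
        TRAINING = auto()
        OPERATIONS_AND_MAINTENANCE = auto()
        REFINEMENT = auto()
        RETIREMENT = auto()

    def select_maturity_metrics(phase: LifecyclePhase) -> list:
        # Illustrative IF-THEN tailoring of metrics to the lifecycle phase; the
        # specific assignments are placeholders, not a validated mapping.
        if phase is LifecyclePhase.DEVELOPMENT:
            return ["TRL", "IRL", "RD3"]         # early: component/interface readiness and difficulty
        elif phase in (LifecyclePhase.MANUFACTURING, LifecyclePhase.DEPLOYMENT):
            return ["MRL", "SRL"]                # producibility and composite system readiness
        elif phase is LifecyclePhase.REFINEMENT:
            return ["IRL", "SRL", "TPM trends"]  # impact of inserted CTEs on the fielded system
        else:
            return ["SRL", "TPM trends"]         # sustainment-oriented assessment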
2.2 What
The system architecture is a major focal point for how the system is integrated and can drive how the integration
and system maturity are evaluated; some basic architectures are client-server (hierarchical) or distributed (directed
graph). Yet in modern systems theory one must also account for service-oriented, virtual, and cluster architectures
which further complicate the evaluation. The system architecture should be documented as a graph of some form
which can be abstracted to a level where graph theory can be applied to the assessment. At this point, the preceding
logic must begin to be applied: if we are in early system development, the architecture as a whole may need to be
evaluated from a reliability, sustainability, and maintainability perspective; if we are in refinement, then we may
consider the integration of a new component or critical technology element (CTE) and its interaction with the
architecture.
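As a sketch of this graph-based view, the architecture can be documented as an adjacency list and simple graph measures (here, node degree) used to gauge how many interfaces, and hence IRL evaluations, a new or upgraded CTE would touch; the nodes, edges, and the degree heuristic are illustrative assumptions.

    # Architecture captured as a directed graph (adjacency list). The nodes and
    # edges below are illustrative; a client-server system is roughly a star,
    # while a distributed system is a general directed graph.
    architecture = {
        "server":   ["client_a", "client_b", "database"],
        "client_a": ["server"],
        "client_b": ["server"],
        "database": [],
    }

    def degree(graph: dict, node: str) -> int:
        # Total in- plus out-degree: a crude proxy for how many integrations
        # (and hence IRL evaluations) a CTE hosted at this node touches.
        out_deg = len(graph.get(node, []))
        in_deg = sum(node in targets for targets in graph.values())
        return out_deg + in_deg

    # A CTE inserted at the server touches more interfaces than one at a client.
    print(degree(architecture, "server"), degree(architecture, "client_a"))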
2.3 How
2.4 When
Simply put, this question asks when the assessment should be conducted; again, this portion is greatly dependent upon the lifecycle
phase and when the transition to the next phase needs to occur. The idea here would be to analyze the current
state of the metrics related to the system architecture. For example, if we are about to transition to integration and test under a client-
server architecture, and one client-CTE has a low IRL(s), high-delta TRL, but low R&D3, then there may be less risk
than a server-CTE with high IRL(s), low-delta TRL, but high R&D3 relating to some specific function(s). The
evaluation of whether to continue, or start moving more resources towards either element, must be underscored with
a reason, which brings us to the final question.
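The comparison described above can be sketched as follows; the CTE attributes mirror the example (IRL, delta TRL, R&D3, and architectural position), but the risk_score weighting is an invented placeholder, not a calibrated model.

    from dataclasses import dataclass

    @dataclass
    class CTE:
        name: str
        irl: int          # integration readiness level (1-9)
        delta_trl: int    # TRL gap remaining to the required level
        rd3: int          # R&D degree of difficulty (1 = easy ... 5 = very hard)
        is_server: bool   # architectural position (server vs. client role)

    def risk_score(cte: CTE) -> float:
        # Illustrative, unvalidated scoring: low IRL, a large TRL gap, high R&D3,
        # and a central (server) position all push risk upward.
        position_weight = 2.0 if cte.is_server else 1.0
        return position_weight * ((9 - cte.irl) + cte.delta_trl + 2 * cte.rd3)

    client_cte = CTE("client CTE", irl=3, delta_trl=3, rd3=1, is_server=False)
    server_cte = CTE("server CTE", irl=7, delta_trl=1, rd3=4, is_server=True)
    # Compare the two before deciding where to shift resources.
    print(risk_score(client_cte), risk_score(server_cte))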
2.5 Why
We
contend there are two very different, but equally important, answers to this question which get to the core of the
Systems Engineering process: functional versus performance requirements. Essentially, functional requirements
demand that two or more CTEs be integrated into a system which meets requirements that neither of the CTEs could satisfy alone.
Performance requirements can seem more trivial, yet are equally important. It is through this lens that we will show the
final level of system maturity analysis. The performance-based analysis is not meant to replace a requirements-
based one, but to complement it, since SRL, TRL, and IRL consider requirement fulfillment, or lack thereof,
inherently [51] [1] [35]. The performance-based view provides a system assessment from a non-functional
standpoint while also allowing for cross-comparing of elements. The potential areas that can be addressed by this
include COTS selection, CTE development direction, and technology insertion.
3. The Framework
The Framework as described consists of a logical set of connections between the various metrics, which provides
a method for evaluating and improving the system maturity. When a system is developed or upgraded, the process is
executed (1.0 in Figure 1), beginning with the why question, which is quantified via a
performance-based assessment. The answers to the where (2.0) and when (5.0) questions are generally
straightforward and typically determined by contractual constraints, while the what (3.0) and how
(4.0) questions are evaluated by the metrics mentioned in the background section, based on their applicability to the system
lifecycle phase. In response to 1.0, a performance-based assessment is conducted. This assessment entails modeling
the impact of the new technology or technologies on the performance of the system. The performance-based
assessment is then compared to the output of the other processes (1.2), e.g., readiness levels (TRLs and SRL), and a
final system maturity assessment (1.3), prioritizing these metrics, is conducted. The technology-performance trade
space for a specific technology of interest within a complex system may be characterized by both technology
metrics (e.g., TRLs/SRL) and technical performance metrics (TPMs). The final step of the Framework is to
quantitatively determine if the performance gain is worth the risk, and if it is, to provide a system maturity
improvement plan. The complete process is documented by Figure 1.
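A condensed, runnable sketch of this flow is given below; the inputs stand in for the outputs of the sub-processes in Figure 1, and the decision rule (and its threshold) is an illustrative assumption rather than the framework's defined criterion.

    def run_framework(perf_gain_mpg: float, srl_current: float, srl_target: float,
                      mean_rd3: float) -> str:
        # Condensed sketch of the flow in Figure 1 (sub-process numbers in comments).
        readiness_gap = srl_target - srl_current      # 1.2: compare performance view with readiness view
        risk = readiness_gap * mean_rd3               # 1.3: prioritized maturity assessment (toy form)
        if perf_gain_mpg / max(risk, 1e-6) > 10.0:    # 1.4: is the performance gain worth the risk?
            return "1.5: issue system maturity improvement plan"
        return "defer insertion; re-assess at the next interval"

    print(run_framework(perf_gain_mpg=3.0, srl_current=0.50, srl_target=0.85, mean_rd3=2.5))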
4. Application of Framework
We use the automobile as a sample complex system consisting of a number of systems (and associated critical
technology elements) including the engine, transmission system, cooling system, braking system, and steering
system. We now show how we might systematically evaluate responses to questions posed in the framework.
To assess the reason for conducting this potential technology transition (e.g., upgrading the fuel efficiency of an
automobile), we conduct a series of performance-based and technology maturity evaluations.
To answer this question, we first conduct a performance-based assessment (1.1). Performance of the critical
technology elements within these systems is expressed as measurable and testable value(s) supporting the system-level goal of increasing
vehicle fuel efficiency. Typical Technical Performance Measures (TPMs) include those factors which drive the
overall goal of the system and are a function of a number of performance parameters associated with the constituent
systems and associated critical technology elements. We may use models and test data to conduct this assessment.
We then evaluate the performance-based output against other metrics (e.g., technology readiness levels) as in
(1.2) in Figure 1. Table 1 demonstrates the technology-performance trade space for the automobile. The sensitivity,
or impact, of key performance parameters, p_i, on overall performance is represented by ∂TPM/∂p_i. Note that a
typical TPM, automobile Miles per Gallon (MPG), is a function of a number of system performance parameters (p_i)
including engine fuel consumption per kilowatt of power produced, torque transmittal efficiency and parasitic losses
in the transmission, fuel consumption delta produced by passenger comfort systems, and coefficient of friction as a
result of the aerodynamics of the body along with the differentials. The impact of these performance drivers on the
overall vehicle performance is represented by ∂TPM/∂p_i and can assist in understanding and prioritizing critical
specific performance drivers. In order to provide a quantitative example, we have created some sample data; it
should be noted that this data is only for theoretical purposes. The first step in the process is to answer the
questions posed, and apply the framework logic. Again, for purposes of practicality, there is no scientific basis to
the logic-selections of the metrics; this is simply to validate the concept of the framework. With that in mind, Table
2 is the output of the query answering process and metric selections. Note that the metrics associated with each of
the questions are indicated. For example, in terms of why we are upgrading the vehicle,
the sensitivity of the MPG will directly impact the readiness and maturity of the overall vehicle as new technologies
are introduced to meet the performance goal.
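Because the TPM is a function of the performance parameters p_i, the sensitivities ∂TPM/∂p_i can be estimated numerically; the sketch below uses an invented MPG model and baseline values purely for illustration, consistent with the theoretical nature of the sample data.

    # Finite-difference estimate of the sensitivities dTPM/dp_i for a toy MPG model.
    # The functional form and baseline values are purely illustrative.
    def mpg(p):
        fuel_per_kw, trans_eff, comfort_delta, drag_coeff = p
        return 120.0 * trans_eff / (fuel_per_kw * (1.0 + comfort_delta) * (1.0 + drag_coeff))

    baseline = [4.0, 0.92, 0.05, 0.30]  # engine fuel/kW, transmission efficiency,
                                        # comfort-system fuel delta, aerodynamic drag term

    def sensitivities(f, p, h=1e-4):
        base = f(p)
        grads = []
        for i in range(len(p)):
            q = list(p)
            q[i] += h
            grads.append((f(q) - base) / h)  # dTPM/dp_i
        return grads

    for name, g in zip(["fuel/kW", "trans_eff", "comfort", "drag"], sensitivities(mpg, baseline)):
        print(f"dMPG/d({name}) = {g:+.2f}")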
Table 2 (excerpt): 5.0 When? Standard earned value (EV), along with the overall schedule, will be used (5.1 = EV).
With the selections complete, we will now provide some sample data which would be part of the iterative
assessment; this is presented in Table 3. The trending of this data should be evaluated at each assessment interval
and compared to the initial and previous intervals; this is representative of the process which occurs in sub-processes
(1.2) and (1.3) in Figure 1, as sketched below.
Table 3 (excerpt): the composite SRL progresses across the assessment intervals (SRL_c = 0.45, 0.45, 0.50, 0.50, 0.63, 0.70, 0.79) toward the target value selected for the what question, SRL_t = 0.85.
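The trend evaluation described above can be sketched with the recoverable sample values from Table 3; the gap-and-change computation is an illustrative placeholder for sub-processes (1.2) and (1.3).

    # Sample composite SRL values at successive assessment intervals (from Table 3)
    # compared against the target SRL; the trend check itself is an illustrative sketch.
    srl_c = [0.45, 0.45, 0.50, 0.50, 0.63, 0.70, 0.79]
    srl_target = 0.85

    for i, current in enumerate(srl_c):
        delta_prev = current - srl_c[i - 1] if i else 0.0
        print(f"interval {i}: SRL_c={current:.2f}  "
              f"gap to target={srl_target - current:.2f}  change={delta_prev:+.2f}")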
As a note, this final step (1.4) is a risk-based decision which weighs the risk of moving forward against the proposed gains. In the example provided, at the
month-15 point the risk involved to gain 10 MPG should have been re-evaluated once the projection dropped
to a gain of only 3 MPG. In this same iteration the SRL progressed and the ITI was slightly reduced; these metrics
indicate where the risk evaluation should
focus.
5. Conclusions and Future Work
This framework is still at the conceptual stage and requires advances in multiple areas. The immediate value of
this framework is the realization of the system maturity as a function of a system-level performance measure, while
retaining the critical assessment data provided by other system maturity metrics. Again, the scope of this work is
simply to introduce the concept of a performance-based system maturity framework to the community, not to define
the semantics of such a framework.
The general structure of the framework as defined in this work sets the stage for the top-down development to
follow for each sub-process within the framework. With this in mind, the following areas for future improvement
have been identified:
• Develop and iterate the logical assessment sub-processes for metric selection (3.1) & (4.1): Thus far,
the only dependent variable identified is the system lifecycle phase; the potential for other input
variables must be investigated. In addition, the logical IF-THEN structure must be defined via intense
literature review of the available metrics and validated against real-world programs, so that a repeatable
process which gets tailored to the individual organization, industry, etc., is used for each sub-process.
• Define the sub-process for evaluating the performance-based assessment (1.2) & (1.3): Similar to the
metric sub-processes, defining and developing the logic and quantitative assessment processes is critical
to a framework which is structured in such a fashion as to remove bias and subjectivity. In addition,
criteria for making the risk-based decision (1.4) must be defined from the literature using the output of
the assessment.
• As defined in the literature, a metric helps to not only provide a current state assessment but also
a path to improvement; this is the purpose of the system maturity improvement plan (1.5), which
uses all the information available in the assessment to highlight the areas where risk exists and needs to
be mitigated to gain the largest performance improvement.
6. References
[1] B. Sauser and E. Forbes, "Defining an Integration Readiness Level for Defense Acquisition," Singapore, 2009.
[2] B. Sauser, J. E. Ramirez-Marquez, R. Magnaye and W. Tan, "A Systems Approach to Expanding the Technology Readiness
Level within Defense Acquisition," vol. 1, 2008.
[3] United States Government Accountability Office, "Better Management of Technology Development Can Improve Weapon
Systems Outcomes," GAO, 1999.
[4] Jay Mandelbaum, "Identifying and Assessing Life-Cycle-Related Critical Technology Elements (CTEs) for Technology
Readiness Assessments (TRAs)," Institute for Defense Analyses, Paper P-4164, November 2006.
[5] Rashmi Jain, Anithashree Chandrasekaran, George Elias, and Robert Cloutier, "Exploring the Impact of Systems
Architecture and Systems Requirements on Systems Integration Complexity," vol. 2, no. 2, 2008.
[6] M. d. Santos Soares and J. Vrancken, "Requirements Specification and Modeling through SysML," Montréal, Canada, 2007.
[7] Department of Defense, "Technology Readiness Assessment (TRA) Deskbook," 2009.
[8] IEEE, "ISO/IEC 15288: Systems and software engineering - System life cycle processes, Second Edition," IEEE Std
15288-2008, 2008.
[9] J. Zachman, "A framework for information systems architecture," IBM Systems Journal, vol. 38, no. 2-3, pp. 454-470, 1987.
[10] Tetlay, A. and John, P., "Determining the Lines of System Maturity, System Readiness and Capability Readiness
in the System Development Lifecycle," in 7th Annual Conference on Systems Engineering Research, 2009.
[11] Remarks by Secretary Of Defense Robert Gates at the Army War College, Carlisle, Pa.
[12] "Ashton Carter to Acquisition, Technology and Logistics Professionals, 24 Aug 2011 Memorandum on Should-Cost and
Affordability".
[13] "Frank Kendall to Acquisition, Technology and Logistics Professionals, 06 Dec 2011 Memorandum on Value Engineering
(VD) and Obtaining Greater Efficiency and Productivity in Defense Spending".
[14] "B. Sauser and J. Ramirez-Marquez, Development of Systems Engineering Maturity Models and Management Tools, Report
No. SERC-2011-TR-014, Jan 21, 2011".
[15] Koen, P., Ajamian, G., Burkart, R., Clamen, A., Davidson, J., D'Amore, R., Elkins, C., Herald, K., Incorvia, M., Johnson,
A., Karol, R., Seibert, R., Slavejkov, A. and Wagner, K., "New Concept Development Model: Providing Clarity and a Common
Language to the "Fuzzy Front End" of Innovation," Research-Technology Management, vol. 44, no. 2, pp. 46-55, 2001.
[16] "Verma, D. and W. J. Fabrycky, Development of a Fuzzy Requirements Matrix to Support Conceptual System Design,
Proceedings, International Conference on Engineering Design (ICED), Praha, August 22-24, 1995.".
[17] "Verma, D. and J. Knezevic, Development of a Fuzzy Weighted Mechanism for Feasibility Assessment of System
Reliability During Conceptual Design, International Journal of Fuzzy Sets and Systems, Vol. 83, No. 2, October 1996.".
[18] "D. Bodner, B. Rouse, and I. Lee, The Effect of Processes and Incentives on Acquisition Cost Growth, Eighth Annual
Acquisition Research Symposium 30 April 2011".
[19] "R. Valerdi and Kohl, An Approach to Technology Risk Management, Engineering Systems Division Symposium MIT,
Cambridge, MA, March 29-31, 2004".
[20] R. Dar, S. Guarro, and J. Rose, "Risk Management Best Practices,"
http://trs-new.jpl.nasa.gov/dspace/bitstream/2014/37899/1/04-0461.pdf, accessed Dec. 2011.
[21] N. Azizian, S. Sarkani and T. Mazzuchi, "A Comprehensive Review and Analysis of Maturity Assessment Approaches for
Improved Decision Support to Achieve Efficient Defense Acquisition," San Francisco, 2009.
[22] Sauser, B., Ramirez-Marquez, J. E., Magnaye, R. and Tan, W., "A Systems Approach to Expanding the Technology
Readiness Level within Defense Acquisition," Naval Postgraduate School Graduate School of Business and Public Policy,
SIT-AM-09-002, 2009.
[23] Sauser B., Ramirez-Marquez, J.E., Devanandham H. and DiMarzio, D., "Development of a System Maturity Index for
Systems Engineering," International Journal of Industrial and Systems Engineering, Vol. 3, No. 6, pp. 673-691, 2008.
[24] W. S. Majumdar, "System of Systems Technology Readiness Assessment," 2007.
[25] A. Tetlay and P. John, "Determining the Lines of System Maturity, System Readiness and Capability Readiness in the
System Development Lifecycle," UK, 2009.
[26] Graettinger, C. P., Garcia, S., Siviy, J., Schenk, R. J., and Van Syckle, P. J., "Using the Technology Readiness Scales to
Support Technology Management in the DoD's ATD/STO Environments," 2005.
[28] DeLaurentis D.A. and Sauser B., "Dynamic Modeling of Programmatic and Systematic Interdependence for System of
Systems," System of Systems Engineering Collaborators Information Exchange (SoSECIE), 2010.
[29] Barber E. and Parsons N., "A Performance Model to Optimise the Capability Choices Made in the Procurement Phase
within the Australian Defence Force," International Journal of Defense Acquisition Management, Vol. 2, pp. 32 - 48, 2009.
[30] M. Boudreau, "Cost As an Independent Variable (CAIV): Front-End Approaches to Achieve Reduction in Total Ownership
Cost," Naval Postgraduate School Graduate School of Business and Public Policy, Acquisition-Sponsored Research Report
Series, 2005.
[31] D. N. Mavris, D. A. DeLaurentis, O. Bandte, M. A. Hale, "A Stochastic Approach to Multi-disciplinary Aircraft Analysis
and Design," Georgia Institute of Technology Report AIAA98-0912, 1998.
[32] Kirby, M. R. and Mavris, D. N., "A Method for Technology Selection Based on Benefit, Available Schedule and Budget
Resources," Aerospace Systems Design Laboratory, Georgia Institute of Technology, 2000.
[33] Worm A., "When Theory Becomes Practice: Integrating Scientific Disciplines for Tactical Mission Analysis and Systems