Embedded Software Development Methods For Mission Critical Satellite Operations
Henry Sanmark
Professorship: Automation Technology
Thesis supervisor: Prof. Ville Kyrki
Thesis advisor: M.Sc. (Tech.) Ignacio Chechile
Mission-critical flight software is an essential part of spacecraft operation. It controls all subsystems, such as payloads, and several different standards and conventions aim to ensure the integrity and reliability of the software. One of the most notable characteristics of developing this kind of software is the concurrent development of software and hardware, in which integrity must be taken into account.
Several traditional methods exist for developing such software. These, however, suffer from slow development speed and do not adapt to possibly changing requirements. In recent decades, newer and more advanced ways of writing software have been developed, but these have been created mainly for the needs of other industries and therefore do not work as such for mission-critical embedded systems.
This thesis studies how to make the software development of a mission-critical embedded system more efficient, and in addition investigates how these practices can be exploited with limited resources and a fast development schedule. The thesis was commissioned by ICEYE, a Finnish NewSpace company.
The thesis first presents the traditional software development methods for space software. Various modern software development techniques are also introduced. Based on these, ICEYE's current software development is analysed and the systems developed during the thesis are presented. The analysis focuses on four different areas: development tools, concurrent hardware and software development, continuous integration and continuous delivery, and finally testing methods. At the end, all of these are tied together, final conclusions are drawn, and suggestions for future development are presented. The results show that development should favour agile development principles derived through acceptance testing, in which system-level requirements can be traced throughout the development. The tools in use must be justified. The final system is tested with simulated in-mission events. This thesis also presents the developed configuration files, the testing environment and the analysis of the tools.
Preface
My first contact with spacecraft development was in 2013 when I started working on the first
Finnish nanosatellite, Aalto-1. Later, its software development was the topic of my Bachelor's
Thesis. It did not stop there, and later I was involved in the ICEYE project in 2014 - way
before it was a real company. Two years later, I am finishing my Master's Thesis in the
same project I used to work with before. The quick, impulsive thought of involving
myself in spacecraft development became my career and passion, while my expertise
in embedded software development has increased significantly. This is something I could never
have imagined when I started studying at Aalto University. This is more than I could have
ever hoped for.
I thank Ville Kyrki for supervising my Master's Thesis and giving valuable feedback
during the development. The greatest thank you goes to Ignacio Chechile, who was my
advisor and provided expert-level feedback for my thesis and for the whole of ICEYE software
development. Other thanks go to the rest of the ICEYE team, who have been supportive during
these months, and also to the Aalto Space Crew for giving me the opportunity to create
something where the sky is not the limit.
Even if spacecraft projects have provided me with amazing experiences, the humble thanks
go to the amazing people around me during these years. The years spent as
an active teekkari in Aalto student culture have given me an opportunity to attend and
create awesome small and big events, meet new people, and create phenomenal projects
around the whole of Finland, such as sending over 1,500 fellow students to visit schools,
and of course breaking a world record by building the world's largest sauna. Especially
the people behind these groups: IE13, IE14, TJ14, ASH13, AS, RBH, VT, NC, !111111,
Joutomiehet, Tempaus2016, Tempausesikunta, Tempaus Tapahtumatoimikunta, AYY,
Polyteekkarimuseo and all others I forgot, deserve their own thanks. Because of
you, I have learned more than I could have ever asked for and found my own passion and
ambition to do great things. Great thanks also go to my family and closest friends for their
support during these years. It has been an indispensable resource at all times.
My years in Otaniemi have been a huge adventure. I consider these adventures a
massive book of stories, and this Master's Thesis is the last chapter of that book. It is time
to put this piece of work in the Library next to the Shield on the wall and start exploring
the next episode of this adventure.
Henry J. O. Sanmark
Contents
Abstract ii
Preface iv
Contents v
1 Introduction 1
1.1 Problem statement and research questions . . . . . . . . . . . . . . . . . 2
1.2 Structure of the thesis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
2 Background 3
2.1 Spacecraft flight software and hardware . . . . . . . . . . . . . . . . . . 3
2.1.1 Software architecture . . . . . . . . . . . . . . . . . . . . . . . . 6
2.1.2 Fault detection, isolation and recovery . . . . . . . . . . . . . . . 6
2.1.3 Software requirements and standards . . . . . . . . . . . . . . . 9
2.2 Principles of failsafe . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.2.1 Design and coding . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.3 Software development operations . . . . . . . . . . . . . . . . . . . . . . 15
2.3.1 From waterfall to agile . . . . . . . . . . . . . . . . . . . . . . . 17
2.3.2 DevOps for embedded systems . . . . . . . . . . . . . . . . . . . 19
3 ICEYE project 24
3.1 Project definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.2 Software structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.2.1 ICEYE OS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.2.2 MCU . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.3 Design constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
5 Discussion 62
5.1 Integration of tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
5.2 Workflow and practices . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
6 Conclusions 67
6.1 Suggestions for future work . . . . . . . . . . . . . . . . . . . . . . . . . 69
6.2 Final thoughts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
References 70
The research problem will be answered from the perspective of the following research
questions:
1 What development tools can be used, and are they blocking or improving the development?
2 How can hardware design be efficiently implemented into software?
3 How can continuous integration be used in mission critical embedded software
development?
4 How can system level tests be implemented for flight software?
This thesis presents all developed configurations and environments which improve the
software development process for flight software. Even though software also exists, for example,
in ground stations, it is out of the scope of this thesis. In addition, the research analysis is
also presented in theoretical form, giving possible suggestions for future implementation.
All these points are integrated and analysed from the perspective of how they improve the
overall software development process.
2 Background
This chapter provides background understanding and motivation of this thesis. It introduces
the fundamental requirements for traditional spacecraft projects as a basis for understanding
the challenges which NewSpace companies face in their design. First, traditional spacecraft
flight software and its functionality in general are presented to give the reader an overview
of the technical challenges and possible future approaches. This covers how requirements
are defined in software, what are the main mission-specific functionalities and how software
architecture is designed. Secondly, the basic principles of mission criticality are introduced.
Last, software development models are presented and compared to each other. These
software development models are first presented in general and then applied in mission
critical embedded systems for satellites.
Figure 1: A typical space system, with the emphasis on software elements. The system
consists of the spacecraft and ground segment, each of which contains multiple subsystems.
Based on the illustration developed by Jones et al. [11]
Eickhoff emphasizes [12, pp. 5] several aspects which distinguish an OBC from standard
industrial embedded controllers or automotive controllers. These are:
From the spacecraft mission point of view, OBC and OBCSW have to be detailed
concerning the following additional requirements [12, pp. 5]:
Telecommand (TC) and telemetry (TM) packet management must comply with the
customer's baseline, such as the PUS (Packet Utilization Standard) by the European Space
Agency [13]
Spacecraft mission operations concept has to take into account ground station visi-
bility, ground station network, link budgets and operational timeline from ground
commands
Control all nominal platform and payload functions from the ground
Control all FDIR procedures from the ground
Support OBCSW updates, patches and mission extension functions
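To make the TC/TM packet-management requirement more concrete, a minimal sketch of how such a packet header might be packed in C is shown below. The field layout loosely follows the CCSDS/PUS primary header, but the struct, the function name and the omission of several fields are simplifications for illustration, not the exact standard layout.

```c
#include <stdint.h>

/* Simplified CCSDS-style primary header (illustrative subset only). */
typedef struct {
    uint8_t  version;    /* 3 bits */
    uint8_t  type;       /* 1 bit: 0 = TM, 1 = TC */
    uint8_t  sec_hdr;    /* 1 bit: secondary header present */
    uint16_t apid;       /* 11 bits: application process ID */
    uint16_t seq_count;  /* 14 bits (sequence flags fixed here) */
    uint16_t length;     /* packet data length */
} PacketHeader;

/* Pack the header fields into a 6-byte buffer, big-endian as on the wire. */
void pack_header(const PacketHeader *h, uint8_t buf[6])
{
    uint16_t w0 = (uint16_t)((h->version & 0x7) << 13) |
                  (uint16_t)((h->type & 0x1) << 12) |
                  (uint16_t)((h->sec_hdr & 0x1) << 11) |
                  (uint16_t)(h->apid & 0x7FF);
    uint16_t w1 = 0xC000 | (h->seq_count & 0x3FFF); /* "unsegmented" flags */
    buf[0] = w0 >> 8; buf[1] = w0 & 0xFF;
    buf[2] = w1 >> 8; buf[3] = w1 & 0xFF;
    buf[4] = h->length >> 8; buf[5] = h->length & 0xFF;
}
```

In a real OBCSW, a layout like this would be dictated by the customer's baseline document rather than hand-coded constants.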
Spacecraft flight software is not limited only to OBCSW. Every subsystem, such
as a payload, contains its own hardware and software, which are developed alongside the
OBCSW. While the OBC controls different subsystems through different interfaces, the subsystem
embedded software must also cover some of the requirements presented above while
focusing on a single dedicated operation. Therefore, the hardware and software designs
of subsystems follow their operational requirements. [14] A simplified picture of the
whole spacecraft hardware and software structure is presented in Figure 2.
Figure 2: A simplified structure of spacecraft flight software. Every payload has its own
software and hardware module, which are controlled by the OBC and OBCSW. A data link is
established to communicate with the ground station.
As for OBC hardware, RISC (Reduced Instruction Set Computing) chips are dominant
in today's designs [12, pp. 42]. ARM (Advanced RISC Machine) based solutions have
become more common because of their energy and cost efficiency as well as high performance
[15], and multiple modern nano- and microsatellites in particular use the ARM architecture
[16] [17] [18] [19].
Besides the processor architecture, OBC hardware includes memory - SRAM/SDRAM
and PROM/EEPROM - a mass storage unit, I/O ports, data buses, debug interfaces, transponder
interfaces, power supplies, interrupt controllers, thermal control units, timers, fault
tolerance devices, such as for CMOS latch-up, and reconfiguration units [12, pp. 126] [20,
pp. 636-639]. All of these modules must fulfill the preliminary requirements of OBC
hardware which are presented in the list above. Figure 3 shows an example of how the OBC
is connected to the overall avionics system, where interfaces are connected to an I/O board and
finally to different subsystems. The Figure presents which bus types can be used
and which subsystems, such as GPS and PCM, are connected to the OBC.
Figure 3: An example of the OBC as an electrical block diagram. The Figure shows how the OBC
is connected to different payloads through its interfaces. Extracted from Eickhoff's illustration.
[12, pp. 53]
Figure 4: The static three-level software architecture of the OSRA-P model. A similar
approach can also be seen in the OBCSW static architecture, where functional blocks are
defined in different layers. [22]
Recovery techniques
With functional recovery, an alternative unit is taken into control in the system,
providing limited features with limited resources
With degraded recovery, it must be determined which components must be
turned off without losing the ability to control the spacecraft until possible
repairs are made, such as in safe mode
For FDIR implementation, there are two main concepts. The first concept is "hard-wired"
FDIR, where a lower-level module triggers a fixed function of the next FDIR level
in case of anomalies. The final level of the FDIR procedure may go up to the highest level
and, therefore, result in using a safe mode. In this case, fixed functions have to be declared
and the OBCSW patched for every level. In addition, appropriate status telemetry must be
generated to report to the GS what happened. The second concept is more flexible,
where the ESA PUS (Packet Utilization Standard) is used with service concepts. In this case,
anomalies detected by PUS trigger an event which handles appropriate actions and reports
proper telemetry messages. This method is more complex to implement and test, but offers
in-flight reconfiguration and a more coherent structure. [12, pp. 117]
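A "hard-wired" escalation chain of this kind might look roughly like the following sketch, where each FDIR level tries a fixed recovery function and escalates to the next level on failure. The handler names, their count and the return convention are invented for illustration and are not taken from any of the referenced systems.

```c
#include <stdbool.h>
#include <stddef.h>

/* A fixed recovery function per FDIR level; returns true on success. */
typedef bool (*RecoveryFn)(void);

/* Hypothetical demo handlers: level 0 fails, level 1 succeeds. */
static bool reset_payload(void)   { return false; }
static bool power_cycle_bus(void) { return true;  }

/* Run the hard-wired chain: on an anomaly, try each level in order and
 * stop at the first success. Returns the level that recovered, or -1
 * meaning every level failed and the highest level (safe mode) is used. */
int fdir_escalate(const RecoveryFn levels[], size_t n_levels)
{
    for (size_t i = 0; i < n_levels; ++i)
        if (levels[i]())
            return (int)i;   /* recovered at this level */
    return -1;               /* escalated past all levels: safe mode */
}
```

In the concept described above, each level would additionally emit status telemetry so the ground can reconstruct which fixed function was triggered.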
FDIR has also been studied recently, and more advanced solutions have been developed
that aim to improve FDIR procedures. One proposed approach is SMART-FDIR, which
uses AI (Artificial Intelligence) to improve real-time performance, a robust architecture,
auto-learning and decision-making capabilities for every FDIR iteration. As an example,
SMART-FDIR is used in the GOCE (Gravity Field and Steady-State Ocean Circulation
Explorer) Satellite System. In this system, fault detection uses FIR (Fuzzy Inductive
Reasoning) and a dynamic I/O model. Fault isolation uses possibilistic logic theory for
the system behavioural model, and recovery uses logical and structural model reconfiguration
and newly activated behaviours. [24] Another proposed approach is AFDIR (Advanced
Fault Detection, Isolation and Recovery). This adds more features to traditional FDIR with
Kalman filtering, a weighted sum-squared residual test, a generalized likelihood test, the random
sample consensus method, and various spacecraft simulations for computing "expected"
values. It uses two integrative diagnosis methods: probabilistic reasoning, using Bayesian
networks, and model-based diagnosis, using causal networks. [25] Studies based on both of
these methods have provided significant advances over conventional FDIR procedures, such
as dynamic system modeling, false alarm management, better AI-based failure analysis
and the ability to deduce underlying failures from multiple superficial indications or symptoms [24]
[25] [26].
In NewSpace companies, similar advances may not be possible due to limited
resources. Therefore, there are even simpler ways to implement FDIR, such as
a simple loop which checks different conditions or their combinations and then responds
with corresponding actions. ICEYE software developers stated that the main approach
of FDIR design is to keep the FDIR as simple as possible, because a huge number of
FDIR condition combinations may lead to excessively intensive testing, which is considered a risk
in terms of schedule.
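The simple condition-checking loop described above can be sketched as follows. The telemetry fields, thresholds and recovery actions here are invented for illustration and do not represent ICEYE's actual FDIR table; the point is only the shape of the design: check conditions in priority order and return the single most severe action.

```c
#include <stdint.h>

/* Hypothetical telemetry snapshot checked on every FDIR iteration. */
typedef struct {
    int16_t battery_mv_margin; /* battery voltage above cutoff, mV */
    int16_t obc_temp_c;        /* on-board computer temperature */
    uint8_t payload_alive;     /* 1 if payload heartbeat was seen */
} Telemetry;

typedef enum { ACT_NONE, ACT_PAYLOAD_RESET, ACT_SAFE_MODE } Action;

/* One FDIR iteration: check conditions in priority order and return
 * the most severe recovery action. Escalation to safe mode wins. */
Action fdir_step(const Telemetry *tm)
{
    if (tm->battery_mv_margin < 0 || tm->obc_temp_c > 85)
        return ACT_SAFE_MODE;     /* degraded recovery: shed loads */
    if (!tm->payload_alive)
        return ACT_PAYLOAD_RESET; /* functional recovery: power-cycle */
    return ACT_NONE;
}
```

Keeping the whole table in one function like this keeps the number of condition combinations, and hence the testing effort, visibly bounded.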
functional requirements and project management perspective. Eickhoff describes [12, pp.
130] that OBCSW requirements should define its architectural structure, functional require-
ments with function hierarchies, algorithm performance requirements, data handling and
operational requirements, scheduling and timing precision, FDIR, development processes
and verification/validation requirements. In these requirements, it must be defined precisely
how software must operate during its execution.
For example, it is possible to construct a "Function Tree" which describes all functions
and their relations to spacecraft operations and data handling. Based on the Function Tree, it
is easy to define a table of contents for the functional requirements of the OBCSW alongside
non-functional requirements. An example of a Function Tree and of a requirements tree
based on it is shown in Figure 5.
Figure 5: An extracted example of a Function Tree and a requirements Table based on it.
The Function Tree is generated from a functional analysis which is then converted into software
requirements. [12, pp. 133]
There are numerous ways to model requirements and their relationships with different
systems. One of the commonly used ways is UML (Unified Modeling Language). UML is
used to visualize the design of the system, which can be used for defining how different
subsystems and methods interact with each other and what their relations are. [12, pp.
144] UML is not the only possible notation and, depending on the developer's choice, it is
also possible to use, for example, simplified flowcharts or modeling languages other than
UML. Evaluating the benefits of different methods is not in the scope of this thesis. The
choice of requirements modeling practice should be left to the developers, who themselves
know the best practice in their working environment.
OBCSW requirements do not define how features are implemented, but rather how the software
must operate. Requirements are later verified by other methods, which are [12, pp. 135]
Requirements analysis
Review of design
Code inspection
On-board software tests
validation and verification [12, pp. 166]. In the United States, NASA holds most of the
governmental standards, which cover, for example, project engineering, systems engineering
and technical definitions [7, pp. 269]. In Europe, ESA develops the ECSS standards for the
same use, and they are used to conflate development practices [11]. ECSS provides multiple
standards for different purposes, and they are divided into different fields of engineering.
The main approach for ECSS standards is that they are divided into three different main
branches, which are "Management", "Product Assurance" and "Engineering". Inside these
three main branches, there are also four levels of standards [4]:
From the software point of view, a family of ECSS-E-40 standards exists, and they define
everything in these three main branches regarding software engineering. The family
of ECSS-Q-80 standards also considers software [4]. Their main purpose is to define "requirements on
those processes broken down into component activities" and "their expected inputs and
outputs". The ECSS-E-70 processes are defined in Table 2 [4] [11].
Because software development is driven by multiple requirements and standards, the
overall activity should be emphasized. Therefore, at the project end one must provide a
compatibility matrix, stating the achieved compliance and which documents, review minutes
and product assurance reports prove the compliance [12, pp. 172]. An example of the process is
presented in Figure 6.
Figure 6: Traditional software and process requirements driving the development process. The
Figure presents how multiple documents define different phases of the software development
until the final working piece of code is developed. [12, pp. 172]
Software Maintenance
Software problems analysis
Software problems correction
Re-acceptance
Software migration
Software retirement
Dependencies
Responsibilities
Customer actions in milestones
Readability
Redundancy
Provability
Readability refers to code design which is easy to read and understand by humans.
This involves transparent naming of variables and functions and extensive commenting. It must
be clear to developers how to maintain the code, and because human-readable code is also human-
recoverable, readability also promotes failsafe principles. This also means that
software design is not limited to one developer, and other team members can verify
the software or continue the work. Even if there is no unambiguous way to write code,
it is preferable that company-wide coding conventions are defined to promote a coherent
software structure between modules. ESA has defined some coding conventions, but there
are also other possible procedures which can be used [7, pp. 91-93] [28].
The second principle is redundancy, where it must be ensured that different software
components do not overlap with each other. A practical example is a new software
module, developed by another software engineer, that breaks some functionality in another
module and causes anomalies in software execution, such as disabling some self-checks.
Interfaces between modules must be defined accurately, as described in Chapter 2.1.3. To
improve redundancy, automated system-level regression testing can also be used in CI
(Continuous Integration). CI and testing procedures are discussed in detail in Chapters 4.5
and 4.6. [7, pp. 94-97] [29]
Redundancy is close to the third principle, provability. In small systems, it is easy
to verify all units thoroughly, but when more complex systems are introduced, this turns
out to be a significant issue. Provability can be achieved with proper testing practices, which
include, for example, unit testing, acceptance testing and functional system testing. Besides
validation and verification with testing practices, a modular software structure and
encapsulation also provide provability while improving readability and redundancy. With
proper software design and self-check functionality, it can also be ensured that software
functions are executed correctly while giving proper logging information, such as warning
and error messages. The basic principle in provability is to verify reality, meaning that the
software must be ensured to work as planned during operation. [7, pp. 97-99] [20, pp.
665-668] [30]
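As an illustration of such self-check functionality, a module might expose a check function that verifies one of its own invariants and logs the outcome. The module, the return codes and the logging macro below are hypothetical examples for illustration, not taken from any of the referenced systems.

```c
#include <stdint.h>
#include <stdio.h>

typedef enum { CHECK_OK = 0, CHECK_FAIL = 1 } CheckResult;

/* Minimal stand-in for a flight-software logging facility. */
#define LOG(level, msg) fprintf(stderr, "[%s] %s\n", (level), (msg))

/* Hypothetical self-check for a memory module: verifies that a known
 * pattern survives a write/read cycle and reports the result, which
 * can then be forwarded as telemetry. */
CheckResult memory_self_check(volatile uint32_t *scratch)
{
    const uint32_t pattern = 0xA5A5A5A5u;
    *scratch = pattern;
    if (*scratch != pattern) {
        LOG("ERROR", "memory self-check failed: readback mismatch");
        return CHECK_FAIL;
    }
    LOG("INFO", "memory self-check passed");
    return CHECK_OK;
}
```

The returned status, rather than the log text, is what the rest of the software would act on, which keeps the check machine-verifiable.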
Figure 7: Software development process and review milestones. The process follows
the classical V-model, where the left side presents the project definition and requirements of the
spacecraft and the right side the testing and integration. The corresponding review milestones
are presented according to the development phase. [12, pp. 170]
sequences it is hard to fix previous phases if problems occur. [20, pp. 662-663] [31]
Even if the V-model and waterfall model have been widely used in traditional software
development for spacecraft, they lack aspects related to project management. The V-model and
waterfall model offer great ways to directly follow ECSS-defined standards and design
milestones, but they do not support changing requirements during software development.
Even if mission-critical software must be well defined with strict requirements considering
the space environment, a project may change during development, and even strict
requirements may change over time. In addition, especially in longer spacecraft development
projects, these models might not respond to market changes or technology advances which
are faster than the project development itself. Then again, the V-model and waterfall model do work,
and therefore, adopting new methods is a slow process considering the risks. Larman
[32] has reviewed several problems related to the waterfall, which can also be seen in the
V-model:
Waterfall works best for projects with little change, little novelty, and low complexity
Waterfall pushes high-risk and difficult elements to the end of the project
Waterfall aggravates complexity overload
Waterfall is poorly suited to deal with changing requirements
Waterfall encourages late integration
Waterfall produces unreliable up-front schedules and estimates
When analysing these problems and comparing them to the previously presented software
requirements for spacecraft, we can clearly see that especially complexity, changed
requirements after reviews (for example, significant changes after the Critical Design
Review) and late integration are problems which do not fulfil the fundamental principles
of flight software development. Even though the well-defined strict requirements play a
huge role, it cannot be assumed that software development follows a direct path without
obstacles or changes. Therefore, more advanced ways to develop software are required
which promote the role of the developers and a constantly evolving project. However, the strict
requirements of spacecraft development must also be kept constantly in mind.
Figure 8: Waterfall software development model and its phases. In the waterfall principle,
every phase starts only when the previous phase has ended, and this leads to the final product.
The basic idea of agile methods is that software development is divided among multiple
self-organizing, cross-functional teams, where software is developed within fast iterations
and parts are developed as sub-assemblies rather than as one final product [34]. This
method emphasizes adaptive planning, constant change, continuous improvement and
early delivery. Agile methods are based on four main values, from which the "agile manifesto"
[34] is written. These four main values of agile methods are:
Table 3: Results of SSC's research on combining ECSS standards and agile practices. [4]
to clients in meetings.
Customer collaboration
Efficient and face-to-face communication
Requirements cannot be fully collected at the beginning of the software development
cycle; therefore, continuous customer or stakeholder involvement is
very important.
Responding to change
Iterative, incremental and evolutionary
Agile methods are focused on quick responses to change and continuous devel-
opment.
The main difference between agile methods and waterfall methods is the approach to
quality and testing. As seen in the waterfall model, the testing phase follows implementation
and building, whereas in agile these are run in the same iteration. Therefore,
testing is done alongside the whole development phase, not just for the final product. This
means that software pieces can be validated during the whole development cycle and
new features can be added with a shorter feedback time [1] [34] [35] [36] [37]. Figure 9
presents the main differences between waterfall and agile methods.
The challenges of implementing agile processes in spacecraft design have been discussed.
In an industrial case study for SSC (Swedish Space Corporation), it was analysed
how agile methods can be used with ECSS standards in practice. The report [4] and its
findings are shown in Table 3.
In a report by Nicoll [38], it was shown that when developing safety-critical software
following the EN 50128 standard at safety level 4 (the highest), using agile methods
provided improved development cycles and a verified process. It was also noted that
certain elements of agile methods in particular improved safety, which were [31]
1 Test-first development
2 Early incremental production of working code
3 Pair-programming
When it comes to high-level formal safety regulations, such as the ECSS standards, it
was found that agile methods are completely compatible with them. Therefore, it
is encouraged that spacecraft software developers implement agile methods instead of
traditional iterative approaches [1] [31] [37].
Figure 9: Differences between the waterfall (left) and agile (right) development methods for
a similar piece of software, which covers safety level DO-178B. Multiple features have
their requirements defined in tables, which are implemented into working code. [31]
Culture: People over processes and tools. Software is made by and for people.
Automation: Automation is essential for DevOps to gain quick feedback.
Measurement: DevOps finds a specific path to measurement. Quality and shared, or
at least aligned, incentives are critical.
Plan This stage involves requirements definition of the software, both functional
and non-functional requirements alongside project defined requirements, such
as release plan. It also defines all metrics related to software production.
Create Create consists of software programming, building and configuration. It is well
tied with version control tools and build tools.
Verify In this stage, software quality is measured with different testing procedures
such as acceptance testing, regression testing and performance testing.
Staging Involves all activities which are essential before deploying the software. This
includes build approvals, package configuration, triggered releases and staging
or holding. This phase is also referred to as "preproduction" or "packaging".
Release The actual software release in production. In this phase, release schedule is
defined, fallbacks and recovery are described, and software deployment is
executed.
Configure All configuration activities after the deployment. This requires actions such
as infrastructure storage, database and network provisioning and application
configuration.
Monitor The last stage, where feedback from end-user is collected and analysed for
further development.
Sharing: Creates a culture where people share ideas, processes, and tools.
The key to implementing DevOps can be expressed with the DevOps toolchain, which
presents the stages of the software development lifecycle coordinated with DevOps
practices. In this toolchain, the DevOps stages are defined and the practical actions related to them
are described. The elements of the DevOps toolchain are shown in Table 4 and the relations between
the stages in Figure 10.
Figure 10: Phases of the DevOps toolchain, which presents the difference between "development"
and "operations" and how the phases follow each other.
critical software differs significantly from deploying web-based applications and, therefore,
does not facilitate operations in spacecraft development in the same way [3]. The main
difference is that with embedded systems, software is just one part of the development
alongside hardware. Therefore, the presence of hardware introduces challenges for DevOps
in embedded systems, especially when development is hardware-driven [40].
Figure 11: The main differences between the web domain and the embedded domain in DevOps
procedures. The Figure shows the significant features when applying DevOps to different
processes and which key challenges emerged. Based on the illustration by Lwakatare et al.
[40]
In a case study performed by Lwakatare et al. [40], it was shown that DevOps in
embedded systems faces four major challenges. The first one (1) is hardware dependency
and compatibility with multiple versions. The result of this is that these companies
have silos between the development teams of different modules. Even though development
cycles are short on the software side, there are much longer cycles on the hardware side, causing
challenges. The second issue (2) is limited visibility into the customer's production environment.
Even when testing environments are provided with the required reliability, it cannot be guaranteed that
the testing environment matches the production environment. The third issue (3) is the scarcity
of tools. While there are numerous tools available for SaaS application development,
embedded software can be unique, and there are no specific tools to automatically deploy
the required software to production. This is also connected to the previously presented issue that
for mission-critical software, reliable deployment tools may not be available and continuous
deployment is not a preferred approach. The last issue (4) is about monitoring system
performance data. It has been hard for companies to collect data post-deployment which
could be used as feedback for product improvement. The main differences between DevOps
in web applications and embedded systems regarding these four issues are presented in
Figure 11.
However, even if DevOps in the embedded domain faces challenges, most of its features
can be applied to the development life cycle. DevOps practices must be fitted to the company's
needs and environment, and after recognizing the challenges, some of the practices can be applied
with modifications. When considering mission-critical software, such as spacecraft
software, DevOps must be analysed differently.
For example, the NASA Jet Propulsion Laboratory has adopted DevOps practices for
their real-time telemetry gathering. They use state-of-the-art DevOps techniques and, in
contrast to previous research, they emphasize fast feedback and tolerance of failure.
Therefore, NASA's slogan "Failure is not an option" has been questioned, because
fast feedback from mistakes is considered a key element in DevOps [41]. Naturally, failure
is not an option after the flight model has been launched into space, but if the stages after staging
are performed in-house and the current hardware design is considered the customer, DevOps
can be applied. Therefore, in contrast to web applications, systems are not deployed to
production in fast iterations and monitored by the customer, but instead with peer reviews
and simulations with the currently co-designed hardware. This naturally requires that sufficient
tools, such as CI tools, are available, as presented when describing the challenges of DevOps
in embedded systems.
Despite the technical challenges, DevOps provides improvements in software development and operations when its stages are defined slightly differently. DevOps still involves several challenges which should be taken into account when analysing the whole software development life cycle.
3 ICEYE project
In this Chapter, ICEYE as a company and its satellite project is presented in order to give
the reader an understanding of the case study. First, an overview of ICEYE mission is
presented as a background information. In the second section, ICEYE software structure
is presented, and lastly, the greatest constraints regarding design conventions are briefly
discussed.
The satellites themselves have a mass of less than 100 kg and a size of around 3.2 m x 0.5 m x 0.33 m. Some of the subsystems are ordered as COTS (commercial off-the-shelf) components, but others are developed in-house. The in-house developed subsystems are the RPU-PCM (Radar Processing Unit - Power Conditioning Module) box, which contains the processing board, transceiver, combiner & divider and CDR lite; the OBC box, containing the OBC with S-band and X-band adapters; and the X-band antenna. Systems such as radios and the ADCS are purchased from suppliers. [43]
Figure 13: ICEYE on-board software structure. Flight software consists of OBC software
running on OBC and MCU software running on subsystems. [44]
3.2.1 ICEYE OS
ICEYE OS has been developed with Buildroot [45], which is a set of tools, such as Makefiles and patches, to build and maintain a complete lightweight embedded GNU/Linux distribution and its cross-compilation toolchains. Alongside the embedded Linux distribution, it contains custom flight software to control everything on the satellite. The basic approaches to using a Buildroot-based distribution are [46]
Figure 14: The abstract functional allocation diagram of ICEYE OBSW. Figure shows how
OBC software and MCU software have different functionalities and how software itself is
constructed. [44]
In Chapter 2.1 it was described that OBSW should run on an RTOS, but for ICEYE OS and GNU/Linux this is not the case. However, the MCU software described in Chapter 3.2.2 runs on an RTOS and is responsible for the crucial data processing of the flight mission. ICEYE OS works as an upper level supervisor for all subsystems and instruments. Therefore, with properly integrated FDIR procedures, it fulfills the ICEYE mission requirements. In addition, this approach has been used in several earlier projects, such as Aalto-1, and is considered a working solution. [18]
3.2.2 MCU
MCU software is divided into several pieces of software, which run the subsystems of the spacecraft. They are based on ARM Cortex-R boards and run under FreeRTOS. The following subsystems are considered MCU software: [47]
This software consists of code blocks which communicate with the spacecraft, handle the telemetry and telecommands of the subsystem, implement the initialization and interfaces between modules, and introduce the variables used by the subsystem. Therefore, all subsystems share some common functionality. The generic layout of MCU software is presented in Figure 15.
Figure 15: General MCU software architecture. Figure shows which common modules
and features every MCU application contains. [47]
Figure 16: MCU software compilation process. Software is maintained inside a git repository, where it is finally constructed with three software libraries and the CAN bootloader software, resulting in the final binary. [47]
the MVP. This leads to a situation where not every single step can be verified. Leaps in development must be taken and verified at the system level [43]. This conflicts with the previously defined approaches, where every single step should be verified. It introduces risks to the mission, but with a simplified design and a focus on the MVP in terms of high-level requirements, it is possible to build a sufficient and reliable product with sufficient testing.
Due to limited resources, development must be iterative. Instead of strictly following textbook examples and verifying every single step, the development process follows the iteration: design, build, test, fail, learn, repeat. Therefore, the cornerstones of the satellite's development rest on the same principles as agile and DevOps, where failures are welcome and feedback is used to improve development.
In terms of software, even though unit testing is not used, testing is a crucial part of the system. The software requires system level testing rather than unit level testing, which means that a proper test plan must be defined. The basic design approach is that ICEYE tries to achieve redundancy at the satellite level, not at the unit level. In this thesis, a testing plan for ICEYE is presented.
In addition, software development should use practices which promote fast development velocity in order to improve overall spacecraft development. Therefore, this thesis presents all development practices which are supported by the selected tools and are easily documented, leading to fully functional flight software.
4.1 Overview
As presented in Chapter 3.2, ICEYE OBSW consists of OBC software and MCU software. The embedded software is responsible for all functionality of the spacecraft. The overall design follows the common conventions of software design presented in the general design approaches. Instead of following all strictly defined phases and verifying every step, some shortcuts have been taken and the focus laid on system level functionality. This is because of the limited resources and short development cycles of a NewSpace company.
Even if ICEYE software is analysed only at the system level, the software must still ensure mission criticality and its functionality must be verified before launch. Therefore, based on the research presented in Chapter 2, the requirements analysis must be defined so that it meets the demands of the presented standards, such as the ECSS standards, and every step which deviates from the standardized approach must be validated.
In the following sections, general information on how the software is constructed is provided and the current approaches are validated. Where better approaches exist, newly suggested development methods are presented and results provided. In addition, all systems designed during the thesis which support the newly suggested conventions and principles are presented.
These Chapters and the selected outcomes are based on the general background information about spacecraft software development discussed in Chapter 2. In addition, the modern software development methods discussed in Chapter 2 are applied. Everything should be considered from the perspective that the target is a NewSpace company with limited resources and a tight development schedule.
All information has been gathered from the following sources:
After every subsection has been analysed and outcomes proposed, the overall conclusions are discussed and potential obstacles identified. The results and designed solutions were implemented directly in ICEYE software development. Therefore, results and user feedback were received from developers after the implemented solutions were taken into use.
FTA itself is a deductive failure analysis which resolves undesired events using a logic diagram with specific symbols [48]. At ICEYE, FTA is used to determine (1) which failure occurs, (2) the isolation action, (3) the conditions to detect a failure and (4) the action after the detected failure.
Based on the Critical Design Review, which occurred during the thesis in January 2017 [49], it was stated that the previous design of the FDIR was not sufficient. The FDIR was mainly dependent on the scheduler, and proper FDIR procedures were not defined for the different subsystems. Therefore the FTA-based FDIR approach was selected and is currently under development. In addition, the Critical Design Review stated that software must define all those requirements which are not defined in a specific payload, taking a leading role in requirements definition from the software perspective even if the payload design and mission operation form the basis of the requirements. This, however, exposes a fundamental problem with limited resources and time: even if the resources only suffice to get an MVP to operate, this cannot be achieved by taking shortcuts in defining the requirements that are mandatory in terms of provability and redundancy.
When considering different requirements levels, NASA has noticed that when defining system level requirements, which is essential for ICEYE, some systems requirements were not rigorously tracked and controlled [50]. This results in confusion during the test cycle about success criteria. Therefore, systems requirements need to be clearly traceable back to science requirements and down to instrument and satellite requirements. Thus, systems requirements must be clearly derived from science requirements, then rigorously documented and configuration controlled. The same applies on the software side, where software requirements are derived from payload functionality. However, it is notable that handling software requirements differently from other project requirements creates unnecessary work. Therefore, NASA recommends managing flight software requirements within the existing project-level requirements management and verification processes [51].
In a fast developing company, new challenges and requirements are met constantly and everything cannot be defined beforehand. In NASA's "lessons learned" article [52] about traditional requirements development, common strict goals for the team were emphasized from the beginning. The lessons learned concluded the following:
Identify the interested parties and their goals, objectives, and requirements up front.
Use one requirements development process from the start, and stick to it.
Ensure parties of interest are bringing forth mature requirements and that these
requirements are funded.
Write specifications and verifications that can be tested and measured.
However, for a NewSpace company this approach might not be suitable, and it has also been argued that NASA's approaches are not flexible enough in terms of changing requirements and small teams. This relates directly to traditional software development methods, such as waterfall, which was discussed in Chapter 2.3. Therefore, this thesis suggests that agile practices be used in requirements definition, as presented in Chapter 2.3.1.
Even if mission criticality demands that requirements are strict and not constantly changing, such strictness limits the ability to meet new challenges. Therefore, when analysing software requirements, or requirements in general, mission operations requirements are most likely to stick to an early definition, but requirements concerning the payloads, especially those developed in-house, might change constantly. In these cases, this thesis proposes acceptance test-driven development (ATDD), where the initial requirements come from mission operations and the subsystem software requirements are test-driven. Based on ATDD principles [53], the designed and proposed structure is the following:
1. Define high-level requirements concerning the mission, which are most likely not to change.
2. Identify the software architecture used to construct the software.
3. Start implementing test cases per software item, describing how it should perform in flight.
4. Define software requirements based on the test cases. This also helps to identify the test plan for system level testing.
5. If software requirements change after iterations, redefine the test plan and consider it the new software requirements list.
6. Review the test plan, and the requirements, after iterations.
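As a concrete sketch of step 3, a system level test case can be written as an executable script even before the flight software exists, by stubbing the system under test. Here `satsim` is a purely hypothetical stand-in for a satellite simulator command line interface, not an actual ICEYE tool:

```shell
#!/bin/sh
# Hypothetical ATDD-style acceptance test. 'satsim' is a stub standing in
# for a real satellite simulator CLI and simply acknowledges telecommands.
satsim() {
    echo "ACK $1"
}

# Acceptance criterion derived from a high-level requirement:
# "the subsystem shall acknowledge a PING telecommand".
result=$(satsim PING)
if [ "$result" = "ACK PING" ]; then
    echo "PASS: PING acknowledged"
else
    echo "FAIL: got '$result'"
fi
```

Once the real interface exists, the stub is replaced and the same script becomes the system level acceptance test, keeping the requirement and its verification in one place.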
ATDD can easily be applied in terms of the system level testing which was introduced as an essential point in NewSpace product development. After developers have identified the use cases of the software functionality as system level test cases, the test cases can be turned into proper requirements which satisfy the mission criticality. If the test cases turn out not to be defined in enough detail, it is easy to switch back to testing and define more detailed requirements for the system. The requirements listing is iterative and, at the same time, provides a proper test plan for system level testing. Because mission critical software is being considered, reviewing the test plan is mandatory, as it is the only way to ensure that all required functionality and mission criticality is covered.
ATDD also helps in defining software where hardware sets the restrictions. Hardware developers can define system level test cases from the hardware perspective, which can be converted into software requirements. This requires more interaction between HW and SW developers, where HW developers provide the "use case" and test plan for a single piece of hardware, which can be included in the whole system level acceptance test. HW/SW codesign is presented in more detail in Chapter 4.4.
ATDD, among other agile methods, can provide a straightforward development style and significant improvements for small teams [37] [54] [55]. On the other hand, it must be well defined how these methods are used in practice, because small teams probably do not suffer from the same issues as larger companies when it comes to development methods, and therefore every single practice, such as ATDD, must be verified separately in the development model. The main aspects of the different development methods are the tradeoffs between cost, time, quality and scope. [55] In this specific case at ICEYE, considering the development environment of a NewSpace company, the tradeoffs for ATDD can be presented as follows:
Cost
+ Prevents re-designing the whole board or software when requirements change, by using automation
- Updating system level requirements and test cases demands additional work, and thus costs
Time
Figure 17: The acceptance test-driven development approach. In the first iteration, the fundamental requirements which are unlikely to change are defined. Once these requirements are defined, software tests are written and the software modules are then developed following agile principles. This is repeated until enough iterations have passed and the software is considered ready.
Based on the analysis above, and combining ATDD with the development methods presented in the other Chapters, especially the interface-driven HW/SW codesign in Chapter 4.4, ATDD is recommended as an approach for software development.
Alongside examining how the tools are used, it must also be defined how the different tools are integrated with each other. In this context, integration means how tools work together so that the overall workflow is seamless. For example, if a company uses development tools from a single provider, the interfaces between the tools are usually well defined. However, if tools from different providers are used, the interfaces and compatibility should be ensured with, for example, third party plugins or self-developed interface tools.
In Chapter 2.3.2, it was underlined that one of the obstacles faced in embedded systems development is the availability of tools. This issue is highlighted in cases where a custom piece of hardware can be programmed only with proprietary software and the toolchains have no open source alternatives. Developers can then be strictly tied to, for example, a single IDE (Integrated Development Environment), which can rule out other tools when the interfaces do not work as expected and the IDE itself does not support all required features.
For NewSpace companies, however, not all hardware is custom made, and most of the subsystems are bought as COTS. Even where custom hardware is developed, for example the OBC, the microprocessors and their features come from other providers. Only a minimal part is actually developed in-house, and therefore the subsystem developers should provide sufficient documentation and tools for development.
The use of open source tools is encouraged where possible. These tools carry no license fees, and their community support and documentation are usually sufficient. In addition, they usually support third party plugins, and integration with other tools is usually easier than with proprietary tools [56]. Where both open source and proprietary alternatives exist, it should be analysed whether the proprietary tool provides enough benefits to justify its use.
The choice of tools always depends on the development lifecycle and on which other tools are present. Therefore, single tools should not be analysed in isolation for the whole development, but rather in terms of how they integrate with each other.
Alongside tools for version control, reviews, building and so on, it should also be considered whether automation can be applied in software development. One of the DevOps principles is to automate stages, such as CI, as much as possible. [39] Therefore, when considering the tools available for embedded software development, the question "can it be automated?" should also be asked. Given the available tools, the testing design and other issues, automation can be applied in embedded systems in much the same way as in SaaS applications. These possible solutions are discussed in this thesis and the results analysed.
Where potential obstacles related to mission criticality or embedded systems are identified, they are discussed and potential alternative approaches presented even if they are not implemented in the current solution.
VCSs, such as git, have become de facto tools for modern software development, allowing development to be distributed and numerous developers to co-operate [57]. Even if different VCSs provide different features and slightly different approaches, their main idea remains the same. VCSs not only support developers' work through faster release cycles, but also promote marketing capabilities and product management [59] [60]. An example of a common git workflow with different branches is presented in Figure 18.
Figure 18: An example of git development flow. Different software modules are developed
in multiple branches, which are later merged into master branch for different software
releases. The provided branch names and commits to them are considered only as an
example and do not present the actual flow at ICEYE.
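The flow in Figure 18 can be reproduced with plain git commands. The sketch below creates a throwaway repository, develops a module on its own branch and merges it back; the branch and file names are illustrative only:

```shell
#!/bin/sh
# Minimal branch-and-merge demonstration in a scratch repository.
set -e
work=$(mktemp -d)
cd "$work"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
trunk=$(git symbolic-ref --short HEAD)   # 'master' or 'main', depending on git version
echo "int main(void) { return 0; }" > main.c
git add main.c
git commit -qm "initial commit"

git checkout -q -b feature/telemetry     # develop a module on its own branch
echo "/* telemetry handler */" >> main.c
git commit -qam "add telemetry handler"

git checkout -q "$trunk"                 # merge the finished module back
git merge -q --no-ff -m "merge feature/telemetry" feature/telemetry
git log --oneline
```

The `--no-ff` merge keeps an explicit merge commit per module, which makes the release history in the master branch easy to read.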
Gerrit is a web based code collaboration tool which consists of repository management and review tools. It is intended to work with Git, and therefore works only with Git based repositories. Gerrit allows managing repositories between team members alongside reviewing other developers' latest commits, which promotes code quality and ensures a coherent structure of development. Modern review tools such as Gerrit have also provided valuable information about post-release defect detection. Case studies have shown [61] [62] that using proper review methods produces higher quality and less defect-prone software. This, however, requires that reviewing is done efficiently; components with low review participation are estimated to contain up to five additional post-release defects. [61]
Alongside Git and Gerrit, a tool called Repo is also used. Repo is a repository management tool which allows multiple different repositories, or projects, to be managed through one unified command. At ICEYE, different projects, such as the different subsystem software, are developed in different repositories, and managing every single one of them and keeping them locally up to date is achieved using Repo. [63]
Figure 19: An example of the Gerrit window after a reviewed and merged change. Gerrit presents all information about the commit, such as the commit message, reviewers, changed files and related changes. In this example, the ground segment telemetry parser for FDIR has been fixed.
The basic pattern of working with repositories using Git, Repo and Gerrit is the
following:
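A sketch of that pattern is shown below. The manifest URL and project names are placeholders, and the steps that need a running Gerrit or Repo server are left as comments; only the local git steps are executable:

```shell
#!/bin/sh
# Illustrative Git/Repo/Gerrit working pattern (server steps commented out).
set -e

# 1. Fetch all project repositories listed in a Repo manifest:
#      repo init -u ssh://gerrit.example.com/manifest.git
#      repo sync

# 2. Start a topic branch for the change:
#      repo start fdir-parser-fix .

# 3. Make the change and commit locally. Gerrit's commit-msg hook normally
#    appends a Change-Id line so amended patch sets track the same review.
work=$(mktemp -d)
cd "$work"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
echo "/* fixed parser */" > parser.c
git add parser.c
git commit -qm "Fix FDIR telemetry parser"

# 4. Upload the commit for review (requires the Gerrit server):
#      git push origin HEAD:refs/for/master    # or: repo upload
git log --oneline -1
```

After step 4, the change appears in the Gerrit web interface, where reviewers vote on it before it is merged into the target branch.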
Reviewing is also a part of ICEYE software development, using Gerrit. However, in contrast to traditional software projects, ICEYE software development is done in a small team and, in addition, reviews are not always performed correctly. Commit verification is usually just clicked through, without the actual review process, in order to speed up development. We can, however, analyse the current reviewing process and consider how it affects the software development practices as a whole, which leads to two questions:
Figure 20: The basic workflow of using Git, Repo and Gerrit. From the local folder, changes are added to the index, where git and repo keep the files updated and then transfer them to the review server, which in this case is Gerrit.
Table 5: A taxonomy of the considered control (top) and reviewing metrics (bottom). [61]

Product
- Size: Number of lines of code. Large components are more likely to be defect-prone.
- Complexity: Cyclomatic complexity. More complex components are likely more defect-prone.

Process
- Prior defects: Number of defects fixed prior to release. Defects may linger in components that were recently defective.
- Churn: Sum of added and removed lines of code. Components that have undergone a lot of change are likely defect-prone.
- Change entropy: A measure of the volatility of the change process. Components with a volatile change process, where changes are spread amongst several files, are likely defect-prone.

Human factors
- Total authors: Number of unique authors. Components with many unique authors likely lack strong ownership, which in turn may lead to more defects.
- Minor authors: Number of unique authors who have contributed less than 5% of changes. Developers who make few changes to a component may lack the expertise required to perform the change in a defect-free manner. Hence, components with many minor contributors are likely defect-prone.
- Major authors: Number of unique authors who have contributed at least 5% of changes. Similarly, components with a large number of major contributors, i.e., those with component-specific expertise, are less likely to be defect-prone.
- Author ownership: The proportion of changes contributed by the author who made the most changes. Components with a highly active component owner are less likely to be defect-prone.

Coverage
- Proportion of reviewed changes: The proportion of changes that have been reviewed in the past. Since code review will likely catch defects, components where changes are most often reviewed are less likely to contain defects.
- Proportion of reviewed churn: The proportion of churn that has been reviewed in the past. Despite the defect-inducing nature of code churn, code review should have a preventative impact on defect-proneness. Hence, we expect that the larger the proportion of code churn that has been reviewed, the less defect-prone a module will be.

Participation
- Proportion of self-approved changes: The proportion of changes to a component that are approved for integration only by the original author. By submitting a review request, the original author already believes that the code is ready for integration. Hence, changes that are only approved by the original author have essentially not been reviewed.
- Proportion of hastily reviewed changes: The proportion of changes that are approved for integration at a rate faster than 200 lines per hour. Prior work has shown that when developers review more than 200 lines of code per hour, they are more likely to produce lower quality software. Hence, components with many changes approved at a rate faster than 200 lines per hour are more likely to be defect-prone.
- Proportion of changes without discussion: The proportion of changes to a component that are not discussed. Components with many changes that are approved for integration without critical discussion are likely to be defect-prone.
In order to analyse the code and reviewing methods, a taxonomy of metrics is defined in Table 5, which can be used as a basis for analysing the value of reviews. McIntosh et al. [61] have used these metrics and analysed how the review process affects code development, based on extracted commits, defects and code lines, using Multiple Linear Regression (MLR) models. In their research, it was found that:
If a large proportion of the code changes that are integrated during development
are either: (1) omitted from the code review process (low review coverage), or (2)
have lax code review involvement (low review participation), then defect-prone
code will permeate through to the released software product.
While the research targeted software projects which were neither mission-critical nor embedded, namely the Qt project, VTK (The Visualization Toolkit) and ITK (Insight Segmentation and Registration Toolkit), the same principles of writing and reviewing commits still apply. Since the results did not depend on the number of lines of code, even though it was taken into consideration, the results can also be applied to a software project of ICEYE's size. However, the model requires that the majority of commits are linked to reviews, and while ICEYE uses reviewing tools, they are not used efficiently in practice due to strict schedules and limited resources, so the same model cannot be applied directly to ICEYE software development. Therefore, calculated values for the benefits of reviews under the current development methods cannot be presented.
This nevertheless indicates the importance of the review process, and is also an example of a situation where a tool has been taken into the development cycle but is not used properly. Reviewing takes place in the small team through direct interaction, and single developers are responsible for unique parts of the software. Thus, even if a review process is practiced, it is not practiced properly in Gerrit. In the worst case, the tool actually slows down development velocity, depending on the problems faced and the resources allocated to tool maintenance. Based on the interviews, Gerrit is in fact considered an obstacle when using version control. Even though only one reviewer is required to accept patches, they are not read through or actually reviewed, and the latency between the push and the review process can lead to merge conflicts, which has indeed happened.
Based on this, it is proposed that Gerrit be dropped from the review process. In the current scope it does not provide any benefit when it is not used, and reviewing should be performed directly between team members. Reviewing should still be emphasized given the criticality of the software, but in the current situation it should be done between the team members responsible for each piece of software. Even if this leads to potential risks concerning "software ownership", there is no direct answer to this within the available resources. However, when the number of developers and reviewers increases, Gerrit should be re-evaluated and taken back into the process.
Besides Gerrit, other repository management tools such as GitLab [64] could in general be used. While GitLab supports reviewing, it also supports CI, wikis, test automation and other features. However, comparing these different solutions is not in the scope of this thesis.
or ARM Cortex-R (MCU software). ICEYE OS is built with the Buildroot tools, which were presented in Chapter 3.2. The MCU software uses a custom toolchain provided by the MCU vendor. In this Chapter, the internal structure of both the ICEYE OS build and the MCU software build is analysed and the proposed changes are presented.
Buildroot contains mainly just a set of Makefiles that download, configure and compile software with the correct options. Alongside the Makefiles, several patches are available for different software, mainly those related to the toolchain, such as gcc, binutils and uClibc. The Makefiles are divided into multiple subdirectories which define the different configuration options and build targets for the GNU/Linux environment, such as the kernel, toolchain, processor architecture, user space software (in this case the ICEYE flight software), bootloader, initial system and filesystem. When the main Makefile, which calls all the sub-Makefiles, is executed, it creates the corresponding output folders for binaries, generates the toolchain targets and, last, builds all targets defined in the Makefile's TARGETS variable. [45]
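The recursive pattern can be illustrated with a deliberately simplified Makefile. This is not Buildroot's actual code, only a sketch of the idea that a TARGETS variable collects the build steps and the main Makefile drives them in dependency order:

```make
# Simplified sketch of the Buildroot-style top-level Makefile pattern.
TARGETS := toolchain kernel rootfs

all: $(TARGETS)

toolchain:
	@echo "building cross-compilation toolchain"

kernel: toolchain
	@echo "building Linux kernel"

rootfs: kernel
	@echo "assembling root filesystem and user space software"

.PHONY: all $(TARGETS)
```

In real Buildroot each target is a full sub-Makefile with download, patch, configure and install steps, but the dependency-driven structure is the same.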
When Buildroot has been successfully compiled, it produces different software assets, which are [43]
When ICEYE OS has been built, it can be flashed to the OBC using the U-Boot bootloader, a universal bootloader for embedded systems.
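The exact procedure depends on the board and memory layout, but as an illustration only (the addresses, file name and flash offsets below are hypothetical, not ICEYE's actual configuration), loading and flashing an image from the U-Boot console could look roughly like this:

```text
=> tftp 0x82000000 iceye-os.img            # load the image over the network into RAM
=> nand erase 0x200000 0x800000            # erase the target region of flash
=> nand write 0x82000000 0x200000 0x800000 # copy the image from RAM into flash
=> bootm 0x82000000                        # boot the image still in RAM
```

tftp, nand and bootm are standard U-Boot commands, but the available command set and addresses vary with the board configuration.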
As Buildroot is an open-source and highly customizable toolkit, it can be run with multiple different toolchains and environments. Moreover, it can easily be integrated into the CI workflow as a Jenkins job, so its use is highly encouraged. The current Buildroot-based solution is not the only acceptable one, and embedded GNU/Linux distributions and software can also be built with tools such as the Yocto Project [65], but Buildroot provides all the tools required to develop ICEYE OS and to include it in an efficient pipeline. However, analysing different toolkits for building embedded GNU/Linux software is not in the scope of this thesis.
Because MCU software is built for a completely different environment, ARM Cortex-R running FreeRTOS, it cannot use the same tools as ICEYE OS and therefore requires a completely different toolchain and build method.
MCU software is mainly developed with an Eclipse-based IDE. The tool differs from plain Eclipse in that it contains all the toolchains and build options, such as Makefile generation, directly within the IDE. Therefore, in order to build software for the MCU, a developer only needs to execute the "Build" command inside the IDE to generate the executable binary.
However, this method causes issues when integrating tools with each other. For example, bringing the IDE into CI is problematic because the build servers need additional dependencies to run the IDE, even when it is invoked through command line options. In addition, because the IDE uses its own custom configuration parameters to, for example, generate Makefiles, it is harder for developers to track down issues in the build process, which also conflicts with the earlier defined problem of traceable requirements.
Building the software relying only on the IDE and its configurations is considered bad practice [66] and also restricts the development capabilities. Therefore, the build process must be platform-independent. IDEs can still be useful for developing software on a single developer's PC, but limitations arise when processes handled by the IDE itself should be integrated with the multiple other tools that are essential for the pipeline.
In order to move away from the IDE's build tools, an in-house build process must be defined. Even though the toolchain is provided by the MCU vendor, it can be separated from the IDE and used with self-defined compiler options. Writing Makefiles from scratch is considered a hard process, especially in large software projects, so a better approach was designed during the thesis research: CMake is used to write platform and compiler independent configuration files which define the whole build process.
CMake is an open-source, cross-platform family of tools designed to build, test and
package software. CMake is used to control the software compilation process using simple
platform- and compiler-independent configuration files, and to generate native makefiles
and workspaces that can be used in the compiler environment of the developer's choice. [67]
This eliminates IDE-dependent configurations and provides a flexible way to
maintain dependencies and optimize the whole build. In addition, with platform-independent
tools, integrating the build process into CI is much smoother.
As presented in Chapter 3.2.2, every piece of MCU software consists of three parts: (1) the
FreeRTOS library, (2) commonLib and (3) the application itself. Therefore, when building
MCU software, two levels of CMake files have been defined:
- Main CMake file, located under the application root
- Subfolder/library files, located under every subfolder containing source files
The main idea is that the main file contains all necessary settings for building the
application and calls the other files under the project tree. In the main file, all necessary
folders are included and the actual application is built. The subfolder/library files generate
only objects, which are used to build the program in the main file; the developer must
therefore refer to subfolder contents as objects to build the application successfully.
FreeRTOS and commonLib, however, are built as library files (.a) and are linked into the main
executable during the last phases of the build process. See appendices A and B for an example
of the CMakeLists.txt files used to build the MCU software.
After the binary has been built, it must be converted to the right binary format with
object and hex conversion. This is called a "post-build step". In this case, the MCU vendor's
tools are used. The resulting binary file is then ready to be flashed to the OBC.
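As an illustration of this two-level layout and the post-build step, a minimal top-level CMakeLists.txt might look like the following sketch. All target, path and tool names here are hypothetical (the real files are shown in appendices A and B, and the vendor conversion tool is stood in for by a placeholder):

```cmake
cmake_minimum_required(VERSION 3.10)
project(mcu_app C ASM)

# Subfolder/library files: FreeRTOS and commonLib are built as
# static libraries (.a) and linked in during the final link stage.
add_subdirectory(freertos)    # defines the 'freertos' library target
add_subdirectory(commonlib)   # defines the 'commonlib' library target

# Application sources from the project tree are compiled and
# combined into the final executable in this main file.
add_executable(mcu_app src/main.c src/tasks.c)
target_link_libraries(mcu_app PRIVATE freertos commonlib)

# Post-build step: convert the linked output to the flashable binary
# format with the vendor's object/hex conversion tool (name assumed).
add_custom_command(TARGET mcu_app POST_BUILD
    COMMAND hex_converter $<TARGET_FILE:mcu_app> -o mcu_app.bin
    COMMENT "Running object and hex conversion (post-build step)")
```

A build such as this can then be driven identically on a developer PC and on a CI server, for example with `cmake -S . -B build` followed by `cmake --build build`, which is exactly the platform independence discussed above.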
To understand how the actual code is processed, we can refer to the Texas Instruments
"ARM Optimizing C/C++ Compiler" documentation, which covers the whole process and was
used as the basis for writing the CMake files. The same process is also presented in Figure 21.
[68]
1 The compiler accepts C/C++ source code and produces ARM assembly language
source code.
2 The assembler translates assembly language source files into machine language
relocatable object files.
Figure 21: The ARM software development flow and its tools. Every phase is handled
by the corresponding build tool and combined within the maintained CMake files,
resulting in the final executable ARM binary. [68]
3 The linker combines relocatable object files into a single absolute executable object
file. As it creates the executable file, it performs relocation and resolves external
references. The linker accepts relocatable object files and object libraries as input.
4 The archiver allows you to collect a group of files into a single archive file, called
a library. The archiver allows you to modify such libraries by deleting, replacing,
extracting, or adding members. One of the most useful applications of the archiver
is building a library of object files.
5 The run-time-support libraries contain the standard ISO C and C++ library functions,
compiler-utility functions, floating-point arithmetic functions, and C I/O functions
that are supported by the compiler.
6 The library-build utility automatically builds the run-time-support library if compiler
and linker options require a custom version of the library.
7 The hex conversion utility converts an object file into other object formats. You can
download the converted file to an EPROM programmer.
8 The absolute lister accepts linked object files as input and creates .abs files as output.
You can assemble these .abs files to produce a listing that contains absolute, rather
than relative, addresses. Without the absolute lister, producing such a listing would
be tedious and would require many manual operations.
9 The cross-reference lister uses object files to produce a cross-reference listing show-
ing symbols, their definitions, and their references in the linked source files.
10 The C++ name demangler is a debugging aid that converts names mangled by the
compiler back to their original names as declared in the C++ source code. As shown
in Figure 21, you can use the C++ name demangler on the assembly file that is output
by the compiler; you can also use this utility on the assembler listing file and the
linker map file.
11 The disassembler decodes object files to show the assembly instructions that they
represent.
12 The main product of this development process is an executable object file that can
be executed on an ARM device.
Notably, the process by which the final executable object binary is generated is the same
no matter which IDE or project tooling is used. However, the availability of toolchains
could restrict the choice of open-source tools if they do not work with each other.
The outcome of this build process indicates that platform- and compiler-independent
configuration files and build options should be promoted. Besides unifying the build
environment for different developers and CI build servers, this also provides an easier
way to share the same configuration via Git repositories, rather than configuring every
single build computer separately following a manual in the documentation.
systems are complex, concurrent development is more difficult and requires engineers who
master both hardware and software. [69]
In this Chapter, the current hardware and software development principles, considering
their relation and dependency on each other, are analysed and an improved proposal of the
future development practices is provided.
Even though this process results in a working embedded system, it has multiple significant
disadvantages for development velocity and overall project management. In addition, when
newly written software uncovers problems in the hardware, the problems are more difficult
to fix, which itself slows development. [69] The most notable issue is the time delay
between a PCB iteration and software development: during the interviews, it was stated
that updating the OBC API and the actual software takes three weeks before the software
runs on the hardware, even though the software team receives the schematics of the PCB
before it is manufactured. This also raises another issue, which
is currently not a problem at ICEYE, but is not sustainable and can cause problems in
other companies or facilities: software developers need to know how the hardware works and
must be able to read schematics. Software developers should therefore also be hardware
oriented. It is usually an advantage for companies when hardware developers also know
software and vice versa, but even though embedded software developers know how the HW
works, as software-oriented engineers they rarely have as much hardware experience as is
required. This can lead to development issues where the interfaces between hardware
and software are not clear enough. Thus, the interfaces between hardware and software should
be clearly defined, without relying on hardware developers knowing how to write
software or software developers knowing how to build hardware. Even if knowing
both sides is mandatory for embedded systems developers, they are significantly
different areas of expertise.
In traditional spacecraft projects, newly emerged solutions exist, such as Virtual Plat-
forms. In this concept, the whole PCB is emulated in the same way as a virtual machine, so
all electronics are available in software well before the PCB is manufactured. [70] [71] [72]
This is not used at ICEYE due to resource constraints, but can be considered in larger projects.
While the current balance of hardware and software experience is not considered a problem
by the developers, it was admitted during the interviews that after potential growth of the
company, from a NewSpace company into one with bigger revenues, it could become a
risk. The current slow development velocity, in turn, was considered a problem,
but still a sufficient solution for current development.
In the next section, a proposal for improving the interfaces between software and hardware
is provided, alongside improving the development velocity from hardware iterations to a
working piece of software by applying the earlier described agile practices.
Table 6: Five abstract interface levels for codesigning hardware and software. [69]
Explicit interfaces: The currently used model for SoC (System on Chip) design describes
hardware as RTL (Register Transfer Level) modules. The CPU acts as the HW/SW interface,
and designers use explicit memory and I/O architectures to detail the software down to
assembly code or low-level C programs.
Data transfer: At this level, the CPU is abstract. Hardware and software modules interact
by exchanging transactions through an explicit interconnect structure, a model generally
referred to as TLM (Transaction-Level Modeling). In addition to designing interfaces for
different hardware modules, refining a TLM model requires designing a CPU subsystem for
each software subsystem.
Synchronization: At this level, the interconnect and synchronization are abstractions. The
hardware and software modules interact by exchanging data following well-defined
communication protocols. The MPI (Message Passing Interface) is an example of this
approach. Refining an abstract HW/SW interface model requires first designing the
interconnect and then correcting the synchronization schemes. Data transfer must also be
refined down to the RTL.
Communication: At this level, the communication protocol is abstract. The hardware and
software modules interact by exchanging abstract data without regard to the protocol used
or the synchronization and interconnect the design will implement. The design typically
uses the SDL (Specification and Description Language) to abstract communication. Refining
an SDL model requires first selecting a communication protocol, for example message
passing or shared memory, and then following the refinement steps used in lower
abstraction levels.
Partitioning: The ultimate abstraction level is the functional model, in which hardware
and software are not partitioned. Designers can use a variety of models to abstract HW/SW
partitioning, including sequential programming languages such as C/C++, concurrent
languages, and higher-level models such as algebraic notation. Refining such a model
requires first separating the software and hardware functions and then performing the
refinements used in higher abstraction levels.
Figure 22: The proposed approach. The system and architecture are specified first; based
on that, the hardware and software interfaces are defined, and both systems are then
developed according to this codesign. The approach is proposed to be used with the earlier
defined ATDD approach, with multiple iterations on software and hardware modules driven
by defined test cases. [69]
message location in SW and also in HW are defined and by which steps the message
is processed between SW and HW.
3 Synchronization: In this phase, the communication is defined at a deeper level; thus,
the actual implementation of the software and hardware is defined. For example, how a
message is processed in the flight software and how it is processed inside the PCB, such
as in the Processing board. On the software side, functions are written, and the detailed
implementation of the PCB is designed.
4 Data transfer: The actual communication architecture is defined, meaning how the
data processed inside the software and hardware is implemented and then passed on
to the OBC API. This phase encapsulates the whole flight software-OBC API-hardware
processing chain.
5 Explicit interfaces: All detailed memory locations and the I/O architecture, which are
highly hardware dependent, are specified.
However, this method also requires that inside Step 3, synchronization, the PCB
Figure 23: Full HW/SW interface codesign scheme: explicit interfaces, data transfer,
synchronization, communication, and partitioning. Based on illustration from Jerraya et
al. [69]
functionality is tested thoroughly with simple functions. It does not help if the OBC API
is well designed but the actual board is not functional due to design failures. Therefore,
before defining the whole data transfer process, it must be ensured that the PCB fulfils its
requirements. This can be done simply by calling single functions of the current OBC API
implementation, for example turning the Processing board on and waiting for the correct
response. After that, the overall data transfer process can be defined and the explicit
interfaces designed down to the whole-system level.
defecting change can be pinpointed quickly. [73] In addition, a coherent automated pipeline
removes manual work while improving code quality [29]. CI and CD have shaped the
best practices of modern software development, as they allow developers to move faster
while keeping code quality high [41] [66].
While CI and CD are closely tied together, they are slightly different terms. Continuous
Integration refers to the practice of integrating changes from different developers into the
mainline, possibly even several times a day; this means the code does not diverge greatly
between different developers. Continuous Delivery refers to the practice of keeping the
codebase deployable at any point: after automated tests, the software can be configured and
is ready for deployment. Alongside continuous delivery there is also the term continuous
deployment, which differs from continuous delivery in that the deployment phase is also
performed automatically. In this thesis, when we refer to the CI/CD pipeline, we mean
continuous integration and continuous delivery.
While a CI/CD pipeline is easy to implement in SaaS, problems emerge in embedded
systems such as spacecraft, especially around hardware. Even if the software can easily be
kept in a shared repository with the same practices as presented above, the presence
of hardware raises difficulties. The most significant problem relates to the testing phase,
which is discussed in Chapter 4.6. In addition, "deployment" cannot be achieved in the same
way as in SaaS, because the final product operates in space; the common conventions were
pointed out in Chapter 2.3.2. Moreover, when the PCBs are considered stable late in
development, the usefulness of CI/CD drops, as new changes are no longer introduced in the
same way.
Figure 24: The overview of the Jenkins main window. The main window presents all jobs
handled by Jenkins and their latest status. This view can be expanded with
multiple plugins and other customization.
Figure 25: The ICEYE CI/CD pipeline. The pipeline shows how software is developed locally
and, after a review process in Gerrit, handled by Jenkins, where automated jobs are
executed and their results reported to the developers.
There are two different build types in use, called feedback and release. Feedback
builds are triggered automatically when a new patch lands on the main branch, and the
results of the latest build are reported. Release builds are triggered manually when the
newest software is considered
1 How is it possible to integrate automatic tests to CI/CD workflow when tests are
executed on real hardware?
2 How to execute test cases safely on hardware, especially when people are not present?
When testing software, the hardware is also tested. Therefore, when this thesis talks
about "software testing", it must be kept in mind that all software runs on real hardware.
It is possible to run software in simulation, which is commonly done in many spacecraft
projects [12, pp. 256] [76], but with custom-designed software and hardware, building a
separate hardware simulator alongside a constantly evolving piece of hardware would take
too many resources. Even though the resources for developing the satellite are limited,
the verification and validation requirements are no smaller, so a significant share of the
total resources must be assigned to testing; this should be considered when calculating
the total resources of development. [76] In cases where both real hardware and a simulator
can be developed concurrently, double-verifying the design is encouraged. In this case,
however, real hardware is used in testing.
The most significant phase is to define how these tests can also be run during simulated
missions. Thus, when the hardware and software are developed far enough, the mission is
simulated using appropriate simulators with simulated telemetry data and so on, and the
automated sequences are run separately. These tests are therefore not part of the CI/CD
workflow, but instead act as real-life in-flight scenarios, such as: "if the Processing
board suddenly turns off, does FDIR handle the system correctly?"
The solution to question 1 is presented in Chapter 4.6.1, which covers the tools
required for testing and how to technically implement the testing procedure.
The second question and the testing arrangements are discussed in Chapter 4.6.2.
Robot Framework is a generic test automation framework for acceptance testing and
acceptance test-driven development (ATDD). It has easy-to-use tabular test data syntax
and it utilizes the keyword-driven testing approach. Its testing capabilities can be extended
by test libraries implemented either with Python or Java, and users can create new higher-
level keywords from existing ones using the same syntax that is used for creating test cases.
[78]
The architecture of Robot Framework, shown in Figure 26, presents the basic approach
for building tests. The modular architecture allows using Robot Framework in many
different systems. As it is also an open-source project written in Python, there are only
minimal limitations: the tests can run on any modern computer that can run Python.
Figure 26: A modular Robot Framework functional architecture, extracted from Robot
Framework site. This Figure presents how Robot Framework is located inside the whole
system and how system under testing is handled. [78]
Figure 27: A simple test case implemented in Robot Framework. This test case checks
whether a user can log in to a page with correct login credentials.
Figure 28: Custom keywords and variables, which are used in the test case above. Note
that Selenium2Library is used to implement these custom keywords.
Figure 29: An example log file of an executed test, generated in HTML form after a
finished test execution.
Robot Framework is powerful in that it can be used for multiple purposes and on different
systems; only the required libraries need to be included in the test library to write the
tests. On the OBC, ICEYE OS is a GNU/Linux-based distribution with Python support and a
working OBC API, which allows executing payload functions from Python libraries. Even
though running tests on ICEYE OS is technically possible, it is not as straightforward as
in regular desktop environments. Therefore, when designing the system-level tests at
ICEYE, three requirements were defined in software team meetings, considering the
capabilities of the framework and the restrictions of the platform:
Requirement 1: Tests must be executed remotely, and test results shall be stored on the
test PC instead of the OBC filesystem.
Requirement 2: Tests shall not import any of the libraries locally; all libraries and
functions under test are located remotely.
Requirement 3: Tests can be run on multiple computers inside the network.
The OBC and ICEYE OS are part of the whole satellite system, which should be kept as
simple as possible; this is the basis for Requirement 1. Installing any unnecessary tools
not required in the actual mission is therefore discouraged. In addition, the test result
log files should be stored in a central place, outside the system under test, which
provides a test history and easier access to the results for all testers. Furthermore, the
data is not lost if the target platform breaks.
Robot Framework provides a library called Remote, which allows executing tests on a
system other than the one where Robot Framework itself is located. This could have been
used in the ICEYE test system, had it not required installing an additional remote server
on the OBC, which is ruled out by the description above.
Requirement 2 means that the OBC API being exercised is located inside the OBC and should
have no duplicates on the test computer. It is therefore not possible to download the
libraries and include them in a test suite, or to concurrently update the same library
file on both the test computer and the OBC. That would not match the actual execution of
the tests and could lead to misleading results, with a library located somewhere other
than where it actually should be. The tests must therefore use the OBC API and the other
required libraries completely remotely.
Requirement 3 allows running test sequences manually when needed, simply by typing the
test command on a local development PC, or running them automatically, for example as a
Jenkins job inside the whole pipeline. However, it was decided that the computer inside
the testing laboratory is mainly used for manual execution, even though execution is
possible from any computer.
Based on these requirements, an approach different from the regular use of Robot Framework
was essential in order to gain its benefits in system-level testing. Instead of the Remote
library, the tests are based on SSHLibrary, which allows using SSH (Secure Shell)
connections inside Robot Framework tests. Because ICEYE OS is a GNU/Linux distribution
whose development depends on SSH connections, SSH is already included in the software.
After connecting to the OBC over SSH, which in this case is a radio connection, a Python
shell is opened and every required command is passed to the OBC inside that shell. The
returned values are read from the Python shell, stored as variables, and reported back to
the Robot Framework machine, which evaluates the actual result. Briefly, the system is the
following:
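As a rough illustration of this command/response pattern, the round trip might look like the sketch below. A local subprocess stands in for the SSH radio link, and the command is purely hypothetical; in the real setup, SSHLibrary carries the command to a Python shell on the OBC:

```python
import subprocess

def run_on_obc(python_command: str) -> tuple[int, str]:
    """Send one Python command to a Python shell and return its
    (exit status, output). In the real setup the command travels over
    the SSH radio link to a Python shell on the OBC; here a local
    Python process stands in for that connection."""
    proc = subprocess.run(
        ["python3", "-c", python_command],
        capture_output=True, text=True, timeout=30,
    )
    return proc.returncode, proc.stdout.strip()

# Hypothetical health query: the test machine only ever sees the
# returned text, never the library under test itself (Requirement 2).
status, output = run_on_obc("print('pong')")
print(status, output)  # 0 pong
```

The test machine keeps all logs and results locally (Requirement 1), and because only a shell connection is needed, the sequence can be launched from any computer in the network (Requirement 3).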
The Robot Framework files, which include all resource files and the actual test cases,
follow a guideline written at ICEYE during the thesis. The guideline is based on common
conventions for writing Robot Framework tests, and the file structure shall therefore be
the following:
test_suite.robot
Figure 30: An example test execution flow in ICEYE software using Robot Framework.
This principle is used in multiple different tests and differs significantly from the
traditional approach.
- The test suite, which contains all test cases written with simple keywords
- Defines the test structure at the most abstract layer, and is easily readable even
for non-developers
- Multiple test suite files exist for different testing procedures, such as imaging
sequences
test_keywords.robot
- A collection of keywords which wrap the Python shell commands
- User-defined keywords
- Instead of directly wrapping OBC API keywords, each contains a single functionality
required for a test case
Alongside tests constructed in this way, it was also required to be able to run Python
test scripts located on board. These are, for example, scripts which are used in test
procedures but are also part of the final flight software. From these test files it is
only possible to get the overall success or failure status of the script, without the
detailed information printed to the shell output. Therefore, these test files
Using both of these methods, it is possible either to verify that a set of PCBs is
responsive by calling their corresponding methods, as a health check, or to run a desired
sequence that needs to be tested:
1 Healthcheck testing
2 Sequential system level testing
- Imaging test
- Downlinking test
The healthcheck is used to automatically verify that the hardware is responsive: using the
OBC API, every subsystem is turned on and off, the subsystems' responses to test-case ping
commands are checked, and it is otherwise verified whether the subsystems are responsive
and whether they generate errors. This test case provides simple and fast health checks
but does not track the actual functionality of the subsystems. It can be used in the
HW/SW development phase, where the abstract level of synchronization is defined, to ensure
that a set of functionality is provided.
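The on/off/ping pattern can be sketched as follows; the OBC API calls (`power_on`, `ping`, `power_off`) and the subsystem names are hypothetical stand-ins for the real API:

```python
def healthcheck(api, subsystems):
    """Turn each subsystem on, ping it, and turn it off again,
    collecting the names of subsystems that fail any of these basic
    checks. This tests responsiveness only, not actual functionality."""
    failures = []
    for name in subsystems:
        ok = api.power_on(name) and api.ping(name) and api.power_off(name)
        if not ok:
            failures.append(name)
    return failures

# Minimal stub standing in for the real OBC API:
class StubApi:
    def power_on(self, name):  return True
    def ping(self, name):      return name != "xband"  # pretend one fails
    def power_off(self, name): return True

print(healthcheck(StubApi(), ["processing", "xband"]))  # ['xband']
```

In the real test, the stub would be replaced by remote calls over the SSH connection described earlier, and each failure would be reported as its own test-case result.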
Sequential system-level tests, such as the imaging test and the downlinking test, are
examples of the system-level testing introduced in earlier Chapters. They follow the
actual sequence of how the software and hardware should perform and generate results which
testers can read from the RF reports. Currently, sequential system-level testing is
implemented only for the imaging and downlinking sequence, which was developed during the
thesis research; the same approach can be used for all other subsystems and system-level
sequences. It should be noted that the presented test plan was developed for the
laboratory environment and does not completely fulfil the in-flight requirements, but
rather provides automated data during the development phase. Next, the test plan for the
imaging and downlinking sequence in the laboratory environment is presented.
When a tester wants to declare which parameters are used in a specific test case, a file
parameters.robot is written, containing all test parameters. The test cases themselves are
written as test templates. This makes the tests data-driven, meaning that the same test
case can be run with different sets of parameters; thus, the same test sequence can be run
automatically multiple times with different data. The test sequence is divided into two
sections, imaging and downlinking, which both act as their own tests. Jenkins, however,
handles both jobs in the correct order as upstream and downstream jobs: once the imaging
test succeeds, the downlinking test is executed. The sequence itself works as follows:
1 With the given parameters, initialize the imaging phase. This includes turning the
payloads on and configuring the mandatory files.
2 Acquire an image and update the current configuration.
3 Re-initialize the imaging with new parameters, meaning that a new picture shall be
taken with different settings.
4 Acquire the new image and update the configuration.
5 Shut down the imaging phase, where the payloads are shut down and the image data is
transferred to the correct location.
6 Prepare the system for downlinking with the provided parameters, which turns on and
configures the required payloads.
7 Before downlinking, turn on the XBand downconverter in the laboratory from the web
interface so that the image data can be received and analysed. This phase uses Selenium
to automate navigating the web pages; it could also be done manually by pressing a
button in the laboratory, but for automation it is handled through the web interface.
8 Start recording on the XBand radio, so that the downlinked image is actually received.
This phase is also performed from the web interface using Selenium.
9 Downlink the image by sending the image data from RAM to the XBand radio.
10 Stop recording from the web interface.
11 Turn off the XBand downconverter from the web interface.
12 Wait until the laboratory system has processed the received file(s) and they appear on
the local web drive.
13 Verify that the file data is correct and no errors have occurred.
With simple test steps, it is possible to track which phase of the test failed and report
the failures accordingly, helping developers find possible bugs in the code. In addition,
data-driven tests are required because the same sequence is used multiple times with
different settings that must be tested. The same approach can also be used in other
system-level sequences, and the test "user story" can be written in simple English. An
example of the test files is attached as Appendix B, which presents how the imaging phase
is written. The actual downlinking phase uses the same approach, with additional keywords
for the other systems.
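The failure-pinpointing, data-driven structure of such a sequence can be sketched in plain Python; the step names and checks below are illustrative only, not the actual imaging steps:

```python
from typing import Callable

# Each step is a (name, check) pair; a check receives the parameter
# set of one data-driven run and returns True on success.
Step = tuple[str, Callable[[dict], bool]]

def run_sequence(steps: list[Step], params: dict) -> str:
    """Execute steps in order and report the first failing phase,
    mirroring how a templated test pinpoints where a sequence broke."""
    for name, check in steps:
        if not check(params):
            return f"FAIL at step: {name}"
    return "PASS"

steps: list[Step] = [
    ("initialize imaging", lambda p: p["payload_on"]),
    ("acquire image",      lambda p: p["exposure_ms"] > 0),
]

# The same sequence is run against several parameter sets (data-driven):
for params in [{"payload_on": True, "exposure_ms": 10},
               {"payload_on": True, "exposure_ms": 0}]:
    print(run_sequence(steps, params))
```

In Robot Framework, the parameter sets would come from parameters.robot and the step list from the test template, but the reporting idea is the same: the first failing named step identifies the broken phase.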
Even though the presented test plan was defined afterwards, it shows how, following ATDD
principles, the tests should be defined first and the software written afterwards. Based
on the hardware and software requirements, the test "user story" shall be defined in the
same way as described above; the code is then developed based on that sequence and
validated through successful testing. With automated tests, this can be applied to the
overall pipeline.
While the presented downlinking sequence example concerns the laboratory environment
within the automated CI/CD sequence, the same approach can also be used in the earlier
defined mission simulations, where automated sequences are executed and supervised during
the simulated mission. Thus, while the system runs a simulation, the test case is executed
and the results are reported to the tester.
5 Discussion
The approach of this thesis was to analyse multiple development phases and their
correlation with each other. The phase-specific analysis was presented in each Chapter,
along with the basic information needed to understand the motivation for the solutions. To
understand the effect of the analysis, every phase and its integration with the others
shall be analysed, which can be wrapped up as a set of improved development practices.
of code ownership which should be monitored during the development by following the
company guidelines and common conventions of software design.
The other finding was that integration should always be considered. During the thesis
work, the MCU software build was rewritten with CMake, which directly enabled implementing
the build process on the Jenkins servers, something that did not exist before. In
addition, using Robot Framework, testing sequences that were previously executed manually
can be integrated into CI. These relatively easily applied open-source tools do not
require enormous hours to set up and use (see Table 7), and they provide fast feedback and
therefore improved software quality.
The findings concerning the tools can be summarised as follows:
- Mission-critical software development benefits from the latest tools known from
mainstream industry.
- The key principle concerning the tools is how they integrate with each other, meaning
how they can be applied in the development pipeline.
- Every tool must be justified. It should not be added to the development environment if
it is not used properly.
- Favour open-source tools if possible. Their integration with each other is better
guaranteed, and they do not introduce huge license fees or additional work hours to get
the interfaces working. Use proprietary software only if mandatory and only if it clearly
provides significant benefits.
- Always use tools which can be applied in different environments without additional work.
For example, building software with CMake instead of an IDE's Build function is
encouraged, in order to verify that the build works on every developer's machine and on
the CI servers with similar settings, preventing so-called "works on my machine" cases.
- Always consider the choice of tools from the hardware development perspective.
Beyond the choice of tools and their integration, the culture of software development
plays a major role.
Then, software is developed and verified locally where possible before submitting to
Gerrit. When a patch is submitted to Gerrit, the piece of code is reviewed. Upon acceptance,
automatic feedback builds are triggered, and code coverage and style are verified with, for
example, Cppcheck. When verified, the software is "released", meaning it is deployed to the
satellite ETB (Electrical Test Bench), commonly known as FlatSat (see "Release" in Table
8). Then, system level tests are executed with Robot Framework, where the test cases are
derived from requirements as "user stories". During all automated phases, notifications of
execution are published in readable form. This can be achieved with, for example, the Slack
team collaboration tool [80] or mailing lists. The workflow can be expressed as follows:
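The notification step can be sketched as follows: a CI job constructs a short, readable message about its result and posts it to a Slack incoming webhook. This is a minimal illustration only; the webhook URL, job name and message format below are invented placeholders, not ICEYE configuration.

```python
import json
from urllib import request

def build_notification(job: str, result: str, url: str) -> request.Request:
    """Construct a Slack incoming-webhook request announcing a CI result."""
    payload = {"text": f"Build `{job}` finished: *{result}*"}
    return request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Hypothetical webhook URL; a real pipeline would read it from CI secrets.
req = build_notification(
    "obc-firmware", "SUCCESS",
    "https://hooks.slack.com/services/T000/B000/XXX")
# request.urlopen(req) would actually send it; omitted here to stay offline.
print(req.data.decode("utf-8"))
```

The same payload could equally be sent to a mailing-list gateway; only the transport changes, the readable summary stays the same.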
Beyond that, the DevOps "release", "configure" and "monitor" phases should be handled
differently. It should be noted that, due to the nature of a spacecraft mission, the only way
to gather enough data from both software and hardware is to simulate the actual mission
after the release. Thus, it is preferred that the operations listed in Table 8 are performed.
While the workflow analysis was based on DevOps principles, it is clear that the
workflow itself contains elements from the traditional waterfall principle presented in Chapter
2.3. The flow can also be viewed from a waterfall point of view, where each phase is
performed after the previous phase has ended. Waterfall is often preferred in mission-
critical software design because of its robustness, even though it lacks development velocity.
However, if the development workflow is built on the DevOps model, agile
principles can be applied to these phases.
DevOps principles are used as an approach to the cultural change and to guiding
the choice of tools, emphasizing continuous integration and automation. All presented
phases are then agile themselves and should be built in small iterations. This means
that, starting from the requirements, everything is defined top-down in short iterations,
and the software is built based on the current definition. SW and HW interfaces are likewise
defined in short iterations at different abstraction levels. In the actual development
phase, more traditional agile software practices are used to ensure that the software is ready
when a new PCB arrives. Then, system level testing on the FlatSat can be performed and
simulations run.
In case of defects, it is easy to go back one phase and fix, for example, the requirements
or the interfaces between HW and SW. The main idea is to get feedback from the working
system as fast as possible, which is enabled by clearly defined development steps and
automated feedback.
The most significant obstacle to this workflow is the cultural change required for
software and hardware developers to commit to common goals and stick to a development plan
with commonly agreed conventions regarding interfaces and design. While software is
faster to change based on feedback than hardware, this model should also provide fast
feedback to HW designers, even while a new board is under development and the software is
already working with the latest changes.
6 Conclusions
The purpose of this thesis was to analyse mission-critical spacecraft software development,
especially under limited resources and a tight development schedule. This was achieved
by analysing the characteristics of spacecraft software and modern software development
practices. This information was then reflected onto the ICEYE project, which was used as a
case study, and an in-depth analysis was performed on requirements definition, development
tools, hardware and software codesign, continuous integration and testing practices.
First, the traditional characteristics of OBC and payload software were analysed. It was
shown that this type of software development is governed by multiple standards which promote
robustness and fault tolerance. In addition, the overall software structure and
architecture were presented and commonly used design practices introduced. Two possible
architectures, static and dynamic, were presented for the OBC, and for payloads a possible
architecture approach called OSRA-P was introduced. In addition, an in-depth analysis of
FDIR functionality was performed from the mission-criticality perspective.
Regarding software development practices, the traditional waterfall principle was briefly
presented. Waterfall is commonly used in spacecraft software development because it ensures
a robust result, but its lack of flexibility can lead to complexity overload and encourages late
integration. Modern agile practices were introduced as a solution to the problems of waterfall,
with test-driven development, early incremental code production and pair
programming emphasized as key solutions. DevOps practices were also introduced,
which promote automation, fast feedback and measurements as part of a cultural change. However,
several aspects of DevOps are not suitable for mission-critical embedded software because
of differences from SaaS applications, and therefore some compromises must be made,
especially in software deployment and measurements.
ICEYE software development itself operates under limited resources and a tight schedule,
where software and other systems are developed in short iterations in order to reach an MVP,
which matches the DevOps principles. Therefore, DevOps was taken as the basis
for the software development analysis, with the previously presented limitations and
architectural challenges kept in mind.
In analysing requirements definition, it was noted that especially system level require-
ments should be traceable and that plans for changing requirements should be defined. From
that point of view, and from lessons learned in agile studies, the ATDD approach is promoted
as a basis for software development. In that practice, requirements are defined from the mission
perspective and then specified in more detail over several iterations, during which test cases
are also defined and later used in system level testing.
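The refinement from a mission-level story to an executable acceptance test can be sketched with a minimal, hypothetical example. The `Payload` class below is an invented stand-in for the real subsystem, and the requirement wording is illustrative, not taken from the ICEYE specification.

```python
import unittest

class Payload:
    """Hypothetical stand-in for the imaging payload under test."""
    def __init__(self) -> None:
        self.powered = True

    def shutdown(self) -> str:
        # The real subsystem would command power rails down over the OBC API.
        self.powered = False
        return "Success"

class TestShutdownRequirement(unittest.TestCase):
    """Acceptance test derived from a mission-level user story:
    'after a successful imaging sequence, the payload shall power down.'"""
    def test_payload_powers_down_after_imaging(self):
        payload = Payload()
        self.assertEqual(payload.shutdown(), "Success")
        self.assertFalse(payload.powered)

# Run the suite directly, as a CI step might.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestShutdownRequirement))
```

In the thesis workflow the same test case would live in Robot Framework rather than `unittest`; the point is only that the requirement, once refined, is directly executable.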
Regarding the choice of tools, modern software development tools such as Git, Gerrit
and Repo were presented. While Git, together with Repo, is commonly encouraged in
software development, and this was endorsed by multiple studies, the main focus was
on the analysis of the review phase with Gerrit. The analysis showed that a review process is
highly encouraged, but in the context of ICEYE development, Gerrit as a reviewing tool
lacks proper usage. Therefore, it is advised to promote reviewing between individual team
members in the office and to consider Gerrit only when the number of developers increases in
the future.
Considering the build process, the design of the current ICEYE software build was
analysed, and a new system for building the embedded software with a proper toolchain
was designed, which can be integrated into the automated pipeline as suggested by DevOps
practices. Additionally, the process of building ARM software was presented to give an
understanding of the process. It was also suggested that open source tools should be used.
For hardware and software codesign, the fundamental problems concerning infor-
mation sharing and design cycles were outlined. The proposed improvement was to
design hardware and software at different abstraction levels, where hardware and software
interface models are generated based on ATDD practices, and those in turn from the current
system specifications.
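A minimal sketch of the shared-interface idea: one machine-readable interface model from which both the software and hardware sides derive their artefacts, so that neither side drifts from the agreed specification. The interface name and registers below are invented for illustration.

```python
# Hypothetical shared HW/SW interface model: one description, two consumers
# (the HW side could similarly emit HDL parameters or documentation).
INTERFACE = {
    "name": "adc_ctrl",
    "registers": [
        {"name": "CTRL",   "offset": 0x00, "access": "rw"},
        {"name": "STATUS", "offset": 0x04, "access": "ro"},
    ],
}

def to_c_header(model: dict) -> str:
    """Software view: generate C register-offset defines from the model."""
    lines = [f"/* auto-generated from the {model['name']} interface model */"]
    for reg in model["registers"]:
        lines.append(
            f"#define {model['name'].upper()}_{reg['name']}_OFFSET "
            f"0x{reg['offset']:02X}")
    return "\n".join(lines)

print(to_c_header(INTERFACE))
```

When the specification changes, only the model is edited and both views are regenerated, which keeps the short HW/SW iteration cycles in sync.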
DevOps practices embrace automation, and therefore CI/CD practices were introduced
and the latest trends relating to them analysed. All previous phases were integrated into a CI
pipeline, but the system was divided into two phases: building and testing. This was essential
because the latter phases of the DevOps toolchain are not directly compatible with the
hardware restrictions of the system. Therefore, the build pipeline was automated, but automated
tests are triggered only when the hardware is verified and the PCBs are flashed with the latest
binaries. This could also be done automatically, but it was not in the scope of this thesis.
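The two-phase split can be sketched as a simple gate: the build stage always runs, while the system-test stage is entered only once the hardware is flagged as verified and flashed. The stage names are illustrative, not actual pipeline configuration.

```python
def run_pipeline(hardware_verified: bool) -> list:
    """Sketch of the two-phase CI pipeline: build always runs; system tests
    run only when the FlatSat hardware is verified and flashed with the
    latest binaries."""
    executed = []
    executed.append("build")  # compile, static analysis, publish artifacts
    if hardware_verified:
        executed.append("system-test")  # Robot Framework suites on the ETB
    else:
        executed.append("await-hardware")  # stop; tests triggered later
    return executed

print(run_pipeline(hardware_verified=False))
print(run_pipeline(hardware_verified=True))
```

In a real Jenkins setup the gate would be a manually set parameter or an upstream job signal; the logic, however, stays this simple.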
For system level testing, an acceptance testing framework called Robot Framework
was used and its functionality presented. The designed test structure, which uses the
ICEYE OBC API over an SSH connection, was presented; this enables running tests remotely
without installing additional servers on the OBC. In addition, a test plan with healthchecks
and automated functional sequences was presented, derived from the mission
specification but limited to the laboratory environment. Finally, testing during a
simulated mission was briefly presented as a final testing practice, which uses the same
principle as the automated tests in the CI/CD workflow.
Last, it was noted that even though waterfall is considered the robust approach to software
development, using a DevOps-derived model with agile practices, consisting of short
iterations with tight milestones, leads to functional results. To achieve this, cultural
changes among software and hardware developers should be pursued so that the proposed
actions can be actualized.
Returning to the original research problem, "How to improve embedded software
development in mission critical satellite systems, when robustness and fast development
cycle in the small "NewSpace" company must be taken into consideration?", the results can
be summed up as follows:
- This thesis analysed the software development from multiple aspects to improve the
overall quality and development velocity. However, every single phase of the de-
velopment should be analysed separately in dedicated research, which was not
possible within the scope and schedule of this thesis.
- Automated scenarios for mission simulations should be constructed. This means
that the system level tests which were introduced with Robot Framework should
also be included in the simulation as real cases. The current design allows executing
tests in the laboratory environment and with scheduled sequences, but once the whole
operation is functional, tests should be executed as random encounters, thereby
testing the real capabilities of the spacecraft.
- The DevOps model for mission-critical software development requires additional re-
search. Even though the model was widely used in this thesis and taken as the basis for
the analysis, this thesis should not be considered a complete DevOps guide for embedded
software development. It is also possible to suggest alternative models which
take into account the flaws of the DevOps model in this field of industry.
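The "random encounters" idea above can be sketched as follows: instead of a fixed schedule, existing system-level sequences are drawn at random, with a fixed seed so that a failing plan is reproducible for debugging. The sequence names are placeholders, not the actual ICEYE test suites.

```python
import random

# Placeholder names standing in for existing system-level test sequences.
SEQUENCES = ["healthcheck", "imaging", "downlink", "reset-payload"]

def random_mission_plan(n_events: int, seed: int) -> list:
    """Draw test sequences as random encounters; the seed makes a given
    plan reproducible when a failure needs to be debugged."""
    rng = random.Random(seed)
    return [rng.choice(SEQUENCES) for _ in range(n_events)]

plan = random_mission_plan(n_events=5, seed=42)
print(plan)
```

Replaying the same seed reproduces the same encounter order, which keeps randomized mission simulation compatible with the deterministic debugging that mission-critical work requires.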
References
[1] K. Könnölä, S. Suomi, T. Mäkilä, V. Rantala, and T. Lehtonen, Can embedded
space system development benefit from agile practices?, EURASIP Journal on
Embedded Systems, vol. 3, 2017. DOI: 10.1186/s13639-016-0040-z.
[2] K. Könnölä, S. Suomi, T. Mäkilä, V. Rantala, and T. Lehtonen, Sulautettujen jär-
jestelmien ketterän käsikirjan lisäosa: Ketteryys avaruusteollisuudessa. University
of Turku, Technology Research Center; Tekes, Oct. 2005, Digital publication, ISBN:
978-951-29-6283-9.
[3] C. Ebert, G. Gallardo, J. Hernantes, and N. Serrano, DevOps, IEEE Software,
vol. 33, no. 3, pp. 94–100, Jun. 2016, DOI: 10.1109/MS.2016.68, ISSN: 0740-7459.
[4] E. Ahmad, B. Raza, R. Feldt, and T. Nordebäck, ECSS Standard Compliant
Agile Software Development - An Industrial Case Study, in Proceedings of
the National Conference for Software Engineering (NSEC 2010), 2010. [Online].
Available: http://www.cse.chalmers.se/~feldt/publications/ahmad_2010_nsec.html.
[5] B. Graaf, M. Lormans, and H. Toetenel, Embedded software engineering: the
state of the practice, IEEE Software, vol. 20, no. 6, pp. 61–69, Jul. 2003. DOI:
10.1109/MS.2003.1241368.
[6] J. Engblom, Continuous Integration for Embedded Systems using Simulation,
Wind River, Embedded World Congress, Nürnberg, Germany, Feb. 2015.
[7] K. Fowler, Mission-critical and safety-critical systems handbook: Design and
development for embedded applications. Sharfus Draid, Inc., Amsterdam, Elsevier,
2010, ISBN: 978-0-7506-8567-2.
[8] J. Axelsson, E. Papatheocharous, and J. Andersson, Characteristics of software
ecosystems for Federated Embedded Systems: A case study, Information and Soft-
ware Technology, Special issue on Software Ecosystems, vol. 56, no. 11, pp. 1457–1475,
Nov. 2014.
[9] On-board computers / On-board data handling, http://www.esa.int/Our_Activities/Space_Engineering_Technology/Onboard_Computer_and_Data_Handling/Onboard_Computers, web page, accessed: 08.01.2017.
[10] On-board software | ESC Aerospace, http://www.esc-aerospace.com/?page_
id=460, web page, accessed: 08.01.2017.
[11] M. Jones et al., Introducing ECSS Software-Engineering Standards within ESA,
Practical approaches for space- and ground-segment software, ECSS software-
engineering standards, bulletin 111, Aug. 2002.
[12] J. Eickhoff, Onboard Computers, Onboard Software and Satellite Operations: An
Introduction. Springer Heidelberg, Dordrecht, London, New York, 2012, ISBN:
978-3-642-25169-6.
[13] ECSS-E-70-41A: Ground systems and operations - Telemetry and telecommand
packet utilization, ESA Publications Division, ESTEC, P.O. Box 299, 2200 AG
Noordwijk, The Netherlands, European Space Agency, ECSS, 2003.
[14] C. Honvault and G. Furano, Towards a HW/SW reference architecture for pay-
loads, Material slides, European Space Research and Technology Centre (ESTEC),
Noordwijk ZH, The Netherlands, Oct. 2014.
[15] X. Iturbe, B. Venu, E. Ozer, and S. Das, A Triple Core Lock-Step (TCLS)
ARM Cortex-R5 Processor for Safety-Critical and Ultra-Reliable Applications,
46th Annual IEEE/IFIP International Conference on Dependable Systems and
Networks Workshops, no. 46, pp. 246–249, Jun. 2016.
[16] C. Brand, The Development of an ARM-based OBC for a Nanosatellite, Master's
thesis, University of Stellenbosch, Department of Electrical Engineering, 2007.
[17] G. Dreijer, The Evaluation of an ARM-based On-Board Computer For a Low Earth
Orbit Satellite, Master's thesis, University of Stellenbosch, Dec. 2002.
[18] A. Kestilä, T. Tikka, P. Peitso, J. Rantanen, A. Näsilä, K. Nordling, H. Saari, R.
Vainio, P. Janhunen, J. Praks, and M. Hallikainen, Aalto-1 nanosatellite - technical
description and mission objectives, Geoscientific Instrumentation, Methods and
Data Systems, vol. 2, no. 1, pp. 121–130, 2013. DOI: 10.5194/gi-2-121-2013.
[Online]. Available: http://www.geosci-instrum-method-data-syst.net/2/121/2013/.
[19] E. Razzaghi, A. Yanes, J. Praks, and M. Hallikainen, Design of a reliable OBC
for Aalto-1 nanosatellite mission, in Proceedings of the 2nd IAA Conference on
University Satellites Missions and CubeSat Workshop, Feb. 2013, pp. 447–460.
[20] V. Pisacane, Fundamentals of Space Systems. Oxford University Press, Inc., New
York, 2005, ISBN: 0-19-516205-6.
[21] P. Fortescue, G. Swinerd, and J. Stark, Spacecraft Systems Engineering, 4th ed. A
John Wiley and Sons, Ltd., 2011, ISBN: 978-111-99710-1-6.
[22] V. Bos and A. Trcka, On-board Software Reference Architecture for Payloads,
Contract No: 4000110034/13/NL/LvH, The Software Systems Division (TEC-SW)
and Data Systems Division (TEC-ED) Final Presentation Days, ESA/ESTEC, The
Netherlands, Dec. 2015.
[23] Fault-Detection, Fault-Isolation and Recovery (FDIR) Techniques - Utilize FDIR De-
sign Techniques to provide for Safe and Maintainable On-Orbit Systems - Technique
DFE-7, Technique DFE-7, NASA engineering practices.
[24] A. Guiotto, A. Martelli, and C. Paccagnini, SMART-FDIR: use of Artificial
Intelligence in the implementation of a Satellite FDIR, Alenia Spazio S.p.A.,
Software & Simulation Architectures, Tech. Rep., 2003.
[25] N. Holsti and M. Paakko, Towards Advanced FDIR Components, Technical report,
2001.
[26] ESA - Software engineering and Standardisation- Fault Detection, Isolation and Re-
covery, http://www.esa.int/TEC/Software_engineering_and_standardisation/
TEC4WBUXBQE_0.html, Accessed: 10.01.2017.
[40] L. E. Lwakatare et al., Towards DevOps in the Embedded Systems Domain: Why
is It so Hard?, 49th Hawaii International Conference on System Sciences, 2016,
ISSN: 1530-1605/16.
[41] D. Isla, Interplanetary DevOps at NASA JPL, presentation slides, Boston, MA:
USENIX Association, 2016.
[42] ICEYE Ltd. official website, http://iceye.fi, accessed 10th January 2017.
[62] A. Bacchelli and C. Bird, Expectations, Outcomes, and Challenges of Modern Code
Review, ICSE 2013, San Francisco, CA, USA, 2013, ISBN: 978-1-4673-3076-3/13.
[63] Repo command reference, https://source.android.com/source/using-repo.html, accessed 20th March 2017.
[64] Code, test and deploy together with GitLab open source git repo management
software, https://about.gitlab.com/, accessed 29th March 2017.
[65] S. Rifenbark, Yocto Project Mega Manual, http://www.yoctoproject.org/
docs/2.2.1/mega-manual/mega-manual.html, revision 2.2.1, 2017.
[66] K. Baltzer and D. Büchi, Continuous Integration for Embedded Systems, presentation
slides, 2010.
[67] CMake Reference Documentation, https://cmake.org/cmake/help/v3.8/,
accessed 20th March 2017.
[68] ARM Optimizing C/C++ Compiler, www.ti.com/lit/pdf/spnu151, Dec. 2016.
[69] A. A. Jerraya and W. Wolf, Hardware/Software Interface Codesign for Embedded
Systems, Feb. 2005, ISSN: 0018-9162/05.
[70] Imperas, http://www.imperas.com/, accessed 19th April 2017.
[71] SystemC, http://accellera.org/downloads/standards/systemc, accessed
19th April 2017.
[72] Wind River Simics, https://www.windriver.com/products/simics/simics/_po/_0520.pdf, product overview paper.
[73] H. L. Akshaya, S. N. Jagadish, J. Vidya, and K. Veena, A Basic Introduction
to DevOps Tools, International Journal of Computer Science and Information
Technologies, vol. 6, no. 3, 2015, ISSN: 0985-9646.
[74] Jenkins Documentation, https://jenkins.io/doc/, accessed 20th March 2017.
[75] K. Hirvikoski, Advances in Streamlining Software Delivery on the Web and its Re-
lations to Embedded Systems, Master's thesis, University of Helsinki, Department
of Computer Science, Apr. 2015.
[76] A. J. Stephen, Survey of Verification and Validation Techniques for Small Satellite
Software Development, Long Beach, CA, Tech. Rep., May 2015.
[77] Preliminary Design Review (PDR) meeting notes, internal notes, May 2016.
[78] Robot Framework User Guide v.3.0.2, http://robotframework.org/robotframework/3.0.2/RobotFrameworkUserGuide.html.
[79] Robot Framework Web Demo, https://bitbucket.org/robotframework/webdemo, repository accessed 20th March 2017.
[80] Slack: Where work happens, https://slack.com/, accessed 13th March 2017.
# FreeRTOS
include_directories(freertos
    freertos/include
    freertos/source)
add_subdirectory(freertos/source)
# commonLib
include_directories(lib
    lib/generic
    lib/generic/libADC
    lib/generic/libDummy
    lib/generic/libFDIR
    lib/generic/libFuncTab
    lib/generic/libGPIO
    lib/generic/libI2C
    lib/generic/libInclude
    lib/generic/libInterface/libInterfaceCAN
    lib/generic/libTelecommand
    lib/generic/libTelemetry
    lib/generic/libUtil
    lib/generic/libVarTab
)
add_subdirectory(lib/generic/libADC)
add_subdirectory(lib/generic/libDummy)
add_subdirectory(lib/generic/libFDIR)
add_subdirectory(lib/generic/libFuncTab)
add_subdirectory(lib/generic/libGPIO)
add_subdirectory(lib/generic/libI2C)
add_subdirectory(lib/generic/libInterface/libInterfaceCAN)
add_subdirectory(lib/generic/libTelecommand)
add_subdirectory(lib/generic/libTelemetry)
# Application
include_directories(app
    app/application
    app/application/initialization
    app/application/interface
    app/application/meassure
    app/application/norMemory
    app/application/services
    app/application/spiAnlg
    app/application/telecommand
    app/application/telemetry
    app/codeHalcoGen
    app/include
    app/targetConfigs
)
add_subdirectory(app/application)
add_subdirectory(app/application/initialization)
add_subdirectory(app/application/interface)
add_subdirectory(app/application/meassure)
add_subdirectory(app/application/norMemory)
add_subdirectory(app/application/services)
add_subdirectory(app/application/spiAnlg)
add_subdirectory(app/application/telecommand)
add_subdirectory(app/codeHalcoGen)
add_library(commonLib
    $<TARGET_OBJECTS:libADC>
    $<TARGET_OBJECTS:libDummy>
    $<TARGET_OBJECTS:libFDIR>
    $<TARGET_OBJECTS:libFuncTab>
    $<TARGET_OBJECTS:libGPIO>
    $<TARGET_OBJECTS:libI2C>
    $<TARGET_OBJECTS:libInterfaceCAN>
    $<TARGET_OBJECTS:libTelecommand>
    $<TARGET_OBJECTS:libTelemetry>
    $<TARGET_OBJECTS:libVarTab>
)
set_target_properties(commonLib PROPERTIES PREFIX "")
add_executable(processingboard
    $<TARGET_OBJECTS:application>
    $<TARGET_OBJECTS:initialization>
    $<TARGET_OBJECTS:interface>
    $<TARGET_OBJECTS:meassure>
    $<TARGET_OBJECTS:norMemory>
    $<TARGET_OBJECTS:services>
    $<TARGET_OBJECTS:spiAnlg>
    $<TARGET_OBJECTS:telecommand>
    $<TARGET_OBJECTS:codeHalcoGen>
)
target_link_libraries(freertosLib
    $ENV{TI_HERCULES_COMPILER_LIB}/rtsv7R4_T_be_v3D16_eabi.lib)
target_link_libraries(commonLib freertosLib)
target_link_libraries(processingboard commonLib)
Subfolder file
set(initializationSources
    initCAN.c
    initialization.c
    initializationFuncTab.c
    initializationTelecommand.c
    initRS485.c
    initVarTab.c
)
Initialize imaging
    [Documentation]    This test tests if initialization is correct
    [Arguments]    ${idInput}    ${startAddress}    ${filename}    ${idSettings}    ${idFpga}    ${gainVGA}    ${idChainDivider}

Acquire Image
    [Documentation]    Verify image acquiring
    [Arguments]    ${start_time_100us}

Shutdown Imaging
    [Documentation]    Shutdown payload after a successful imaging sequence
    Initialize shutdownPayload
    Power Down CDR Lite    ${SHUTDOWN}
    Turn Off Tranceiver    ${SHUTDOWN}
    Get Configuration File Info
    Send Image Metadata To Processing Board    ${SHUTDOWN}
    Transfer Data From RAM to Flash    ${SHUTDOWN}
    Create Image Data
    Power Down Processing Board    ${SHUTDOWN}

Re-initialize Imaging
    [Documentation]    Reinitialize imaging to take more images within the timeframe
    [Arguments]    ${idInput}    ${startAddress}    ${filename}    ${idSettings}    ${idFpga}    ${gainVGA}    ${idChainDivider}

Reset Payload
    [Documentation]    Resets the imaging payload.
    [Arguments]    ${idInput}    ${startAddress}    ${filename}    ${idSettings}    ${idFpga}    ${gainVGA}    ${idChainDivider}

Abort Imaging
    [Documentation]    Aborts the imaging.
    Initialize aborting
    Power Down CDRs
    Power Down CDR Lite    ${ABORT}
    Turn Off Tranceiver    ${ABORT}
    Power Down Processing Board    ${ABORT}
Keywords
*** Settings ***
Documentation    Imaging sequence related keywords
Library          String

###########################
# Initialization keywords #
###########################
Initialize acquireImage
    [Arguments]    ${start_time_100us}
    Write    import acquireLibrary
    ${stdout}=    Read    delay=1.0s
    Should not contain    ${stdout}    ImportError

Initialize shutdownPayload
    Write    import shutdownLibrary
    ${stdout}=    Read    delay=1.0s
    Should not contain    ${stdout}    ImportError

Initialize re-initialization
    [Arguments]    ${idInput}    ${startAddress}    ${filename}    ${idSettings}    ${idFpga}    ${gainVGA}    ${idChainDivider}
    Write    import reinitializeLibrary
    ${stdout}=    Read    delay=1.0s
    Should not contain    ${stdout}    ImportError
    ${params}=    Catenate    SEPARATOR=,${SPACE}    ${idInput}    ${startAddress}    ${filename}    ${idSettings}    ${idFpga}    ${gainVGA}    ${idChainDivider}
    ${command}=    Catenate    ${REINIT} =    reinitializeLibrary.reinitializeLibrary(${params})
    Write    ${command}
    ${stdout}=    Read    delay=1.0s
    Should not contain    ${stdout}    NameError

Initialize resetPayload
    [Arguments]    ${idInput}    ${startAddress}    ${filename}    ${idSettings}    ${idFpga}    ${gainVGA}    ${idChainDivider}
    Write    import resetLibrary
    ${stdout}=    Read    delay=1.0s
    Should not contain    ${stdout}    ImportError
    ${params}=    Catenate    SEPARATOR=,${SPACE}    ${idInput}    ${startAddress}    ${filename}    ${idSettings}    ${idFpga}    ${gainVGA}    ${idChainDivider}
    ${command}=    Catenate    ${RESET} =    resetLibrary.resetLibrary(${params})
    Write    ${command}
    ${stdout}=    Read    delay=1.0s
    Should not contain    ${stdout}    NameError

Initialize aborting
    Write    import abortLibrary
    ${stdout}=    Read    delay=1.0s
    Should not contain    ${stdout}    ImportError

####################
# Wrapper keywords #
####################
Turn On Tranceiver
    Execute Command    ${IMAGING}    turnOnTranceiver

Set Gates
    # Temporary, shall be deleted soon(ish)
    Write    imaging.setGates()

Turn On CDRs
    Execute Command    ${IMAGING}    turnOnCDRs

Imaging Step
    Execute Command    ${ACQUIRE}    imagingStep

Genererate Catalogue
    Execute Command    ${COMPILE}    generateNiceCatalogue
Parameters
# Parameter file for tests
# Using this file, add test parameters which are run inside the test case
# and call the test sequence with the corresponding list
#
# SYNTAX: idInput, startAddress, filename, idSettings, idFpga, gainVGA,
#         idChainDivider, start_time_100us
# - repeat the list for as many imaging sequences as are performed
#
# Note that parameters are provided as a STRING, which is then
# parsed to the corresponding parameter list inside the tests.
#
# When adding new parameter lists, remember to add a corresponding
# test case with a corresponding test template. See the test suite for details.
Resources
*** Settings ***
Documentation    Resource file for SSH connection and
...              Python shell interaction. The core functionality of
...              this test suite.
Library          SSHLibrary
Library          String

# Evaluation values
${ALL_OK}               Success
${ALL_TURNED_OFF}       All off
${ERROR_OCCURED}        Error occured
${NOT_ALLOWED_RANGE}    Not in allowed range
${UNSUPPORTED}          Unsupported filetype
Quit Shell
    Write    quit()

Verify value
    [Documentation]    This keyword expects "dbus.Int32(VALUE)" messages
    [Arguments]    ${result}    ${expected}

Execute
    [Documentation]    Executes a function in a subsystem and returns its value by
    ...                reading it directly from the Python shell.
    [Arguments]    ${subsystem}    ${cmd}    ${params}=${EMPTY}    ${timeout}=3 min
    # Construct the API command in the Python way (both subsystem and cdrutil)
    ${command}=    Run Keyword If    ${subsystem} == ${CDRUTILS}
    ...    Catenate    SEPARATOR=    ${CDRUTILS}.${cmd}(${params})
    ...    ELSE IF    ${subsystem} == ${IMAGING}
    ...    Catenate    SEPARATOR=    ${IMAGING}.${cmd}(${params})
    ...    ELSE IF    ${subsystem} == ${ACQUIRE}
    ...    Catenate    SEPARATOR=    ${ACQUIRE}.${cmd}(${params})
    ...    ELSE IF    ${subsystem} == ${SHUTDOWN}
    ...    Catenate    SEPARATOR=    ${SHUTDOWN}.${cmd}(${params})
    ...    ELSE IF    ${subsystem} == ${REINIT}
    ...    Catenate    SEPARATOR=    ${REINIT}.${cmd}(${params})
    ...    ELSE IF    ${subsystem} == ${RESET}
    ...    Catenate    SEPARATOR=    ${RESET}.${cmd}(${params})
    ...    ELSE IF    ${subsystem} == ${ABORT}
    ...    Catenate    SEPARATOR=    ${ABORT}.${cmd}(${params})
    ...    ELSE IF    ${subsystem} == ${COMPILE}
    ...    Catenate    SEPARATOR=    ${COMPILE}.${cmd}(${params})
    ...    ELSE IF    ${subsystem} == ${DL_PREP}
    ...    Catenate    SEPARATOR=    ${DL_PREP}.${cmd}(${params})
    ...    ELSE IF    ${subsystem} == ${DL}
    ...    Catenate    SEPARATOR=    ${DL}.${cmd}(${params})
    ...    ELSE IF    ${subsystem} == ${DL_END}
    ...    Catenate    SEPARATOR=    ${DL_END}.${cmd}(${params})
    ...    ELSE
    ...    Catenate    SEPARATOR=    ${API_INSTANCE}.${subsystem}.${cmd}(${params})
Execute Command
    [Documentation]    Wrapper keyword for cases where "Execute" and "Verify"
    ...                are called.
    [Arguments]    ${subsystem}    ${cmd}    ${params}=${EMPTY}    ${timeout}=3 min
    ${retval}=    Execute    ${subsystem}    ${cmd}    ${params}    ${timeout}
    Verify Value    ${retval}    ${ALL_OK}

Set Value
    [Documentation]    Almost like "Set Variable", but uses the Python shell in this test suite
    [Arguments]    ${variable}    ${retval}