The Contribution of Standard Programming Languages to Software Quality
B. A. Wichmann
Abstract:
This paper examines the contribution that a standard programming language can make to the quality of
the software written in that language. It is demonstrated by means of an example that there is a substantial
difference in languages and their implementations in the assurance provided of the correctness of the
resulting program. Demonstrating that a program is correct with respect to the semantics of the language
is a useful step in quality assurance.
Introduction
Software quality is virtually impossible to measure and hard to assess. Yet, as the ACARD report noted
[1], it is of vital importance to the software industry.
Quality is conventionally pursued by several means:
Quality management and quality assurance methods. For the general approach to this, see the
British Standards Institution Handbook [12], which contains the main quality standards [30, 31].
Use of appropriately trained staff.
Use of appropriate techniques.
This paper discusses how the use of standard programming languages affects the quality of systems and
can contribute to the assessment of the quality attained.
According to the conventional view of the software life cycle, the important stage is probably the functional specification, in which
the requirements are specified in precise terms in natural language. Everybody is aware of the pitfalls of
using natural language to write precise specifications or even as a programming language, for instance,
see David Hill's classic article [23].
There is no doubt that programs themselves do provide precision in that their behaviour is predictable (in
all practical cases). When a problem arises between the natural language specification and the program,
the exact `meaning' of the program is not in question. Hence if the specification could be transformed into
a form having the same precision as a program, then it would be easier to resolve problems concerned
with the meaning of a specification.
`Formal Methods' provide a means for giving a mathematically-based specification whose meaning should
not be in dispute. Methods such as VDM [18] and Z [22] are not just unfettered mathematics but have
almost all of the characteristics of programming languages. In other words, programming language
technology can contribute to the earlier parts of the life cycle.
There is a lively debate about the use of formal methods. There are substantial difficulties at present due
to the immaturity of both the methods and tools used to support them. At least one mature tool exists for
VDM [2]. However, there seems little doubt that these problems will be overcome so that mathematically
based methods will become more widely used, perhaps with much of the symbolism hidden from view
and replaced by natural language and graphics.
In order to characterize the effect of standard programming languages on software quality, we need to be
more specific about the properties of the standards.
The definition of a conventional programming language is not straightforward. One dilemma is whether to
define just the language, or whether to define requirements for language processors as well. Traditionally,
FORTRAN [4] and COBOL have done the former, while the more modern languages have done the latter.
From the perspective of software quality, there is no doubt that requirements on language processors are
highly advantageous. For instance, a requirement for processors to reject (or give warnings for) programs
which do not adhere to the statically determined aspects of a standard can allow the detection of many
programming errors.
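As a minimal illustration (a hypothetical fragment), a conforming Pascal compiler will reject the following program because of a statically detectable type error:

program reject(output);
var i: integer;
    b: boolean;
begin
  i := 1;
  b := i + 1;   { static type error: an integer expression assigned to a boolean }
  writeln(b)
end.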
Programming languages vary quite significantly in the demands that are made for checking a program. At
the very least, the syntax must be checked to conform to some description given in BNF or an equivalent.
(A few languages, such as Prolog and PostScript, which are interpreted, do not have a highly prescriptive
syntax; these languages are not considered here.) Subsequent to the syntax checking, other checks are
performed, usually referred to as the `static semantics'. Examples in this area are checks that every
identifier is declared and that types are used consistently.
The strength of these checks in detecting programming errors varies with the requirements of the
language. The default declaration rules in FORTRAN and PL/I [5] imply that mistyped identifiers cannot
be detected as a fault. Quality assurance procedures for using FORTRAN should require that techniques
are applied to avoid faults arising from default declarations. Similarly, the implicit type conversion rules
in PL/I imply that various logical flaws in the use of an entity cannot be detected. In both cases, the best
compilers provide warnings concerning such constructs which should reduce the risks of errors being
introduced by `weaknesses' in the language design.
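The contrast can be shown with a hypothetical Pascal fragment: the mistyped identifier below is necessarily rejected as undeclared, whereas under FORTRAN default declaration rules the equivalent mistyping would silently create a second variable:

program typo(output);
var count: integer;
begin
  count := 0;
  counr := count + 1;   { mistyped identifier: rejected by Pascal as undeclared;
                          FORTRAN default rules would accept it silently }
  writeln(count)
end.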
Interpreted languages are less satisfactory than compiled ones, since the delay in performing some static
checks until program execution increases the risk that an error will occur in a `working' program.
However, many interpreted languages perform some checks at run-time which are omitted altogether in
compiled languages. In consequence, interpretation itself cannot be regarded as a risk -- it is merely an
implementation technique.
As an example of the problems, the BASIC system on the author's desk has the property that lines are not
checked for syntactic correctness until execution (a danger), but array subscript checks are performed
(better than some compiled languages). Another interesting example of an interpreted language is
PostScript where the complete `program' will not be read by a printer until the first few pages are being
printed -- hence late checking is built into the concept of the language.
Several languages are said to have strong typing and therefore provide better checking. Languages in this
category include Pascal [29], Modula-2 [11] and Ada [26]. For an amusing (and informative) article
comparing the strong typing of Ada with the weak typing of C, see [24].
As an example of the strong typing in Pascal, consider the following erroneous code, which was supposed
to convert characters to lower case.
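The listing is reconstructed below as a sketch from the description that follows, assuming a variable ch of type char:

ch := chr(ord(ch) + ord('a') + ord('A'))   { intended: chr(ord(ch) + ord('a') - ord('A')) }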
The error is the second plus sign which should be a minus. With the Pascal compiler the author was then
using, the program failed to compile! The reason for this excellent property was that the expression on the
right hand side had a range of values which was always outside those of type char, the type of ch. To be
fair, a Pascal compiler is not required to fail the compilation of such a program, but the standard
encourages this behaviour. For Ada, the corresponding program would not be in error, but would raise
CONSTRAINT_ERROR on execution of the statement. Ada compilers typically warn programmers of this, but
are not required to do so.
Although there are distinct advantages in strong typing, it does depend upon proper exploitation in the
system design. For instance, both Ada and Modula-2 have good facilities for information hiding and data
abstraction. However, neither language can prohibit code written in the style of FORTRAN. The
important point about strong typing is that a few members of a team can impose a discipline on the use of
key interfaces.
A key issue with larger systems is to ensure that the integration process runs smoothly. A programming
language can provide support here by separate compilation. In both Ada and Modula-2, it is impossible to
call a subprogram with the wrong parameter types. Hence by defining appropriate types, some errors
during integration are avoided completely. To achieve that same assurance with FORTRAN is not
straightforward unless special checks are made by the compiler.
In ISO/ANSI C [3], function prototypes have been introduced to increase the amount of checking that a
compiler can undertake. However, this is very different from Modula-2 and Ada where the checking is a
mandatory requirement.
Scope of Standards
Standards conventionally exclude some areas which have a significant impact upon their strength in
locating program faults. Some of these areas are:
Complexity.
Language standards typically exclude issues like the capacity limits for compilers (although C [3] is
a notable exception). This implies that testing methods based upon random programs constructed to
arbitrary complexity cannot be used for formal validation. Experience with testing Pascal compilers
in this way suggests that this form of testing is quite effective [43].
Floating point.
All languages except Ada say virtually nothing about the semantics of floating point. This gap is
being filled by the proposed Language Independent Arithmetic Standard [32].
Uniformity issues.
Most programming languages give implementations substantial latitude to vary. This can give
significant problems which could be described as `portability' issues. However, many systems
are required to run in many environments, and a program working in one environment can easily fail
in ways that are hard to detect in another. The international Ada working group has a project to
produce recommendations to reduce such non-uniformities [35].
External environment.
Many applications require that a program interfaces with its environment in a way not supported by
the standard. This will change significantly when language bindings for Posix become widely
implemented. Additionally, real-time programs need to interface to hardware devices; such interfaces are
not always easy to validate, owing to the need to use assembler and to match the calling convention of the
compiler.
The correctness issue must be addressed in certain application areas, such as for safety and security. In
both contexts, assurances of the producer are not regarded as sufficient and hence it must be possible for
the software and its production process to be reviewed independently to justify the claims made for its
correctness. Sometimes the concern is not about total correctness but just some vital aspect of the
software. This process of independent review is described for the Canadian Reactor Protection software in
[7], with an example given in [6].
The correctness problem has a long-standing research record [8]. However, the requirements expressed by
the regulatory agencies are not necessarily addressed directly by the academic research. The formal proof
of correctness of some software is of little benefit itself unless the proof can be independently reviewed
(together with the assumptions upon which the proof is based). In fact, a formal proof is currently so
difficult and expensive that it is only of interest to a very small sector. However, this sector can be
expected to grow with the advent of (Interim) Defence Standard 00-55 [37].
Either with or without formal proof, programming languages provide an excellent platform for
independent review. The languages provide a degree of formality that is widely understood and can be
machine processed. In consequence, it is difficult to see how `reviewable' software can be produced
without a pivotal role for the programming language.
One significant technical problem with the production of ultra reliable software is the potential for bugs in
the high-level language compiler. Obviously, this can be avoided by performing the proof at the machine-
code level [14]. Apart from the cost, this is very unattractive, since the technology is machine-specific and
will rapidly become obsolete with the changing hardware scene. (Indeed, many current RISC chips have
delayed jumps and other novel features making proof more difficult.) Recently, a demonstration has been
undertaken of the equivalence of the machine code with the corresponding source code, locating two bugs
in the process [39].
One research activity to alleviate the problems of the use of high level languages is the use of a standard
intermediate language. If translation from the intermediate language to machine code can be undertaken
by a trusted compiler, then validation of the intermediate language code may provide sufficient assurance.
The intermediate language can be designed to be easy to generate code from, and also to be easy to
generate itself. For examples of this approach see [45, 20, 21, 38].
Since the use of intermediate languages is a research topic, and in any case, formal proof is a minority
concern, the correctness issue must be addressed by other means. This implies a pragmatic approach, and
in this case, the use of the programming language seems the only credible tool. Moreover, the increased
use of standard high-level languages implies that a common technology can be applied across different
sectors and different hardware platforms.
Several tools are available which allow detailed analysis of high-level language code, up to undertaking
formal proof [41, 13]. These tools are not universally applicable, owing to the language subsets they
accept and, perhaps, the absence of an implementation in the right development environment. Nevertheless, much
can be done to provide some assurance about the code if it is written in a standard language. Tools which
provide `metrics' can give some crude measures, but are more useful for determining those areas of a large
system which should be further analysed, rather than showing that a program adheres to the programming
language semantics. Similarly, style checkers are useful to provide a crude measure of the intelligibility of
the source code rather than giving the hard information that is required to show conformity with the
language standard.
Functional languages often allow the semantic requirements to be expressed more elegantly, thus
reducing the validation or formal proof requirements. Unfortunately, most existing functional languages
have a large run-time system with such features as a built-in garbage collector, which makes proof of the
object code very difficult (if possible at all).
Practical measures
This paper assumes the use of a compiler, conforming to the appropriate language standard, as a tool to
aid software quality. The manner in which the compiler is used can influence software quality--for
instance, use of compiling options to increase the run-time checks will reduce the risks of many
undetected errors.
Due to the method of development, programs will appear to work correctly on the system for which they
were written. One useful test is to port a program to another system to see whether problems arise, as they
often do. For instance, on the VAX, many variables which have not been assigned a value actually have
the value `0', and it is easy to have a fault in a program which inadvertently depends upon this. Porting the
program to another system may well allow the detection of the problem. For instance, the Pascal compiler
that the author frequently uses actually checks for unassigned variable access. This may not seem an
important problem, but the actual value obtained could well change with a new compiler release and the
like.
Another problem area is that of extensions to the standard language. Compiling options should be used to
identify such extensions, even in the case where such extensions are needed for the application. The main
problem with many extensions is that their semantics are poorly specified.
The use of non-standard languages can hardly be recommended. For instance, the vendor usually reserves
the right to change the language in arbitrary ways, and often adds a disclaimer about the accuracy of the
manual (see the PL/M-86 disclaimer quoted below).
Use of Programming Languages in Critical Standards
Several standards are designed to achieve acceptable quality in software for specific application areas.
These standards take contrasting views of the role of programming languages as follows:
00-55.
The UK Draft Defence Standard makes rather specific requirements for the language used:
A formally-defined syntax.
A means of enforcing the use of any subset employed.
A well-understood semantics and a formal means of relating code to the Formal Design.
Block structured.
Strongly typed.
These requirements seem straightforward and, in the context of critical applications, reasonable. The
standard also requires that the compiler used shall:
Be validated.
Be verified to a level assessed by a risk analysis.
Be developed against the standard, unless it can be shown that the object code of the
application is equivalent to the source.
DO-178B.
No requirements are given in this standard for safety-critical civil avionics software [40].
ITSEC.
The Information Technology Security Evaluation Criteria [33] has requirements which vary
according to the level of assurance required. For E3 this is `well defined languages only'; E4 adds the
requirement of `compiler options documented'; and finally E5 requires the `source code of run-time
libraries'. A comparison between 00-55 and ITSEC on the question of
programming languages is the subject of a separate note [47].
WG9.
The proposed standard for safety-critical software is undergoing revision before acceptance [28].
The main requirements on the use of programming languages are stated in a table which gives a
recommendation against the level of criticality of the software. [The table itself is not reproduced here.]
Here, R means `Recommended', N `Not Recommended', H `Highly Recommended', and `-' means no
recommendation.
The table (in the standard) also refers to a one page section giving general advice. The material can
be regarded as a good first draft, rather than authoritative.
None of these standards refers to published material which analyses in more depth the problems of
insecurities in languages, such as [15, 25, 44], or the design of a language which specifically addresses the
absence of insecurities [16].
Language Subsets
In many cases, the full standard programming language has features which make checking of the source
code difficult. Hence the checking process can be simplified if such features are not permitted, as can be
seen from [15].
One problem arising from the use of such subsets is that no such subset is itself standardized. This means
that the use of a subset is often just a pragmatic choice dependent upon the validation tools in use. This
usually implies that compilers and other tools are for the full language and hence cannot exploit the use of
the subset. For instance, many subsets exclude recursion (to avoid the need to show that the recursion is
bounded), but a compiler will use a general stack mechanism rather than static storage for variables. This
in turn means that the validation process must ensure that sufficient stack space is allocated for all
potential executions.
An important advance in the Safety and Security Annex of the Ada 9X standard is that the user can
specify a subset in a way the compiler can check and exploit to produce simpler code [27].
In order to validate source code against the program specification, it is highly advantageous to understand
what potential programming errors will be detected by the compiling system in use. For instance, a
conforming Pascal compiler will make type checks which need not, therefore, be repeated by a separate
validation step.
It is helpful if compilers provide assistance in program development that goes beyond the requirements of
the standard. For instance, many compilers provide a warning if a variable is not used or not assigned a
value. An extensive debugging system can also help to gain confidence in a program.
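For example (a hypothetical fragment), a helpful compiler might issue two warnings for the program below, even though it passes all the static checks required by the standard:

program warnings(output);
var a, b: integer;   { a helpful compiler warns that b is never used... }
begin
  writeln(a)         { ...and that a may be used before being assigned }
end.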
The issues that go beyond the standard can be subjected to independent test. A service of this nature has
been proposed for Ada [9] and Pascal [10]. Such an evaluation service would be especially useful for
compilers, since any fault in a compiler could seriously jeopardize a project.
There is one problem with this approach. Different programming languages do not provide the same level
of assurance, as was illustrated in section 3 above. Therefore we must specify a common baseline in order
to indicate the differences in assurance provided. For instance, with a language which provides less
assurance due to the absence of strong typing, more validation effort is required to obtain the same
assurance for a specific program.
The method applied is to take an example and analyze it to determine the work required to provide a
fixed level of validation for the various languages considered.
The approach taken here is to list those features of programming languages which are inadequately
defined in at least one language, and then to quantify the work needed to validate the example program
(whose listing appears at the end of this paper) in the various languages considered.
The approach we take is rather modest. As a validation issue, we wish to show that the example program
will execute without `error' in the language sense. This means that we wish to independently check that
the program will not break a language rule. Some language rules are rigorously enforced by a compiler
and hence we need take no specific action, but in those cases in which no check is performed, we must
analyze the code to ensure that the rule is not broken.
Apart from demonstrating the assistance programming languages give to software quality, we wish to
show the differences in currently widely used languages. Hence the validation steps required here are
contrasted for the languages FORTRAN, Pascal, Ada and C. For each validation step, we note the tools
that might be available to assist in the analysis.
Language specification.
This analysis assumes that appropriate language rules exist. This need not be the case for
proprietary languages. For instance, the PL/M-86 User's Guide contains the following disclaimer:
Intel Corporation makes no warranty of any kind with regard to this material,
including, but not limited to, the implied warranties of merchantability and fitness for a
particular purpose. Intel Corporation assumes no responsibility for any errors that may
appear in this document. Intel Corporation makes no commitment to update nor keep
current the information contained in this document.
Implicit variables.
For FORTRAN, but not the other languages, a simple mistyping of an identifier can result in the
implicit declaration of a variable which could, in turn, lead to an undetected error. The solution (for
FORTRAN) is to check for no such unexpected implicitly declared variables by using a simple tool
or cross-reference facility.
External routines.
FORTRAN and C allow the use of external routines without explicit declarations. This implies that
no checks can be performed upon the use of such routines by the compiler. Hence for both
FORTRAN and C a relatively complex tool is required to give the same level of assurance as that
given by Pascal or Ada. The ISO C standard provides function prototypes which give a mechanism
to check the parameters of external routines. Unfortunately, many C programs conform to the
earlier informal standard [36] and hence do not use this facility. The utility `lint' provides some
protection via warnings.
The example program should be compiled with the `standards switch' so that no non-standard
(and, in this case, unexpected) routines are used.
Array index and subranges.
Accessing an array element requires that the index value is in the range for the array in all the
languages being considered. Languages like Pascal and Ada which encourage the use of subranges
make it easy to check that most (if not all) index values are in range.
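A hedged sketch of this in Pascal (the names are illustrative):

type IndexRange = 1..250;
var  Ar: array[IndexRange] of integer;
     i:  IndexRange;
{ i cannot legally hold a value outside 1..250, so once every assignment
  to i is checked, each use of Ar[i] is known to be in range }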
In the example program, each indexing operation must be checked. Consider two cases as follows:
The lower bound of the loop (1) is clearly within the range of the indices for the three arrays,
Ar, Br and Sr. The upper bound is Terms = Digits div DRadix <= 1000 div 4 = 250.
Hence all three indices are in range.
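A hedged sketch of the shape of this first case (the loop body and the index variable i are illustrative only; Ar, Br, Sr and Terms are taken from the discussion):

for i := 1 to Terms do
  Sr[i] := Ar[i] + Br[i]   { every index lies in 1..Terms, and Terms <= 250 }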
The loops within the repeat statement contain array indices for the arrays Ar, Br and Sr. We must
show that kminus1 is in range, both to ensure the value of j remains in range and to show the
validity of the two explicit uses of kminus1. However, an analysis of all the references to kminus1
and k shows that the value is only potentially out of range on the final two lines of the repeat loop.
Since no indexing is performed in these two lines, no index violation can occur.
The above two cases check 23 index uses in the program. A similar reasoning can be applied to the
remaining 24 cases. Informal reasoning cannot be relied upon completely, since mistakes are easily
made. For this reason, program proof tools may be useful here, even if they are restricted to
showing the absence of programming errors.
Languages which do not provide subranges make the analysis much more difficult. Indeed, many
Pascal compilers can and do omit checks on array subscript ranges since the declaration shows the
check is not needed, while for languages like FORTRAN and C, performing the check by hand can
be very difficult.
Integer arithmetic.
If an integer expression has a mathematical value beyond the range supported by an
implementation, then the program is in error. This error is rarely detected with FORTRAN or C
implementations, is usually detected in Pascal, and is required to be detected in Ada
implementations.
The Language Independent Arithmetic Standard [32] requires that violations of the integer range
are trapped and notified to the user.
For this program, the ranges given for the integer variables make it easy (but tedious) to check that
integer values are within the 32-bit range required by the program. For Pascal and Ada, it is
possible to add a test within the program that the integer range has at least the 32 bits required.
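Such a test might take the following form (a sketch; the response to failure is left as a simple message):

{ require integers of at least 32 bits; the division avoids writing
  a literal that might itself overflow on a 16-bit implementation }
if maxint div 65536 < 32767 then
  writeln('integer range too small for this program')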
As an example of the reasoning that is needed for this program, consider the first for loop within the
repeat loop.
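A hedged sketch of the style of argument, using a digit-by-digit loop of the general shape one would expect here (m, carry, temp and Radix are assumed names):

carry := 0;
for i := Terms downto 1 do
begin
  temp := Ar[i] * m + carry;   { each Ar[i] < Radix, so temp is roughly
                                 bounded by Radix * m }
  Ar[i] := temp mod Radix;
  carry := temp div Radix
end

A bound on the multiplier m then bounds temp well within the 32-bit range.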
Similar reasoning applies to the divide loop. However, no attempt is made to normalize the array
Sr, and hence the values of this array have a larger bound of 2*500*100000, which is well within
the 32-bit limit.
The FORTRAN language does not require that the range of integers is specified, so implementation
details must be obtained to perform the checks. Typical C implementations make no check, and
wrap round rather than giving an overflow indication.
Floating point.
In general, it is very difficult to show that a program which does significant floating point
computations cannot overflow. For a discussion of this point, see [46].
In this program, floating point is only used in the final calculation of the inaccuracy of the
computation. It is easy to see that this calculation will not overflow or cause problems with
underflow or numerical instability.
Control flow.
We need to demonstrate that the program retains control in the expected manner. For Pascal and C
(but not Ada), a case statement can be in error if the case selected does not have a corresponding
statement. In addition, for C it is necessary to check that every function variable (that is, every
pointer to a function) has been assigned before any call through it.
Checking control flow for most high level language programs is straightforward.
Control flow checking implicitly assumes that other forms of error trapping do not occur.
Data representation.
In interfacing a program to its external environment, it is typically necessary to exploit the actual
hardware representation of data, either by use of machine code, or by means of low-level language
features within the language (as permitted in Ada).
With data input, the use of low-level facilities or machine-code is potentially dangerous since the
compiler will assume the data is correctly stored. For languages like Ada, Pascal and C which
support high-level data structures, the compiler assumptions can be complex to state. Hence there is
a significant risk if data other than simple integers are interfaced to the external environment.
Implicit conversions.
Pascal, FORTRAN and C allow implicit conversions of integers to reals. In some cases, this can
result in an information loss with inevitable potential dangers. No such implicit conversions are
present in the example program.
Undefined extensions.
Non-standard extensions are rarely defined with the same precision as the main (standard) language.
Moreover, there can sometimes be complex interactions between the main language and an
extension. Those extensions which are syntactic can be detected by a simple tool. Extensions which
give an additional meaning to a program are hard to detect and allow for.
For instance, some Pascal implementations do not detect the error of a case statement having no
statement corresponding to the value of the case expression, but execute an empty statement instead.
Detecting this would require a detailed analysis of the semantics of each case statement within a
program.
Detection of these extensions is straightforward with a validated compiler which should have an
option to inhibit the use of extensions.
Uninitialized variables.
The use of such variables is a common programming mistake which is not detected by most
implementations.
The detection of uninitialized simple variables can be undertaken with some tools [41, 13]. The
detection of uninitialized components of arrays and records is much more complex.
The flow control and variable usage in the example program is such that a hand check is possible.
The single loop to initialize the arrays avoids the error which is most difficult to detect.
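The initialization loop presumably has the general shape below (a sketch; the zero values are assumed):

for i := 1 to Terms do
begin
  Ar[i] := 0;
  Br[i] := 0;
  Sr[i] := 0
end   { after this loop, no component of the three arrays is uninitialized }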
Storage overflow.
For Ada, Pascal and C, successful program execution depends upon sufficient storage for the
dynamic entities declared. Simpler programs in these languages can be analysed statically to show
that a specific amount is sufficient.
An analysis tool should check for the absence of recursion, and also that the features requiring
dynamic storage (new in Pascal and Ada, malloc in C) are not used. A few exotic features in Ada
make this check rather more awkward, see [44].
Recursion.
For correct execution in FORTRAN, the absence of recursion must be checked. For Ada, Pascal and
C, the use of recursion is defined, but the absence of recursion is the simplest way to bound the
storage requirements.
Parameter aliasing.
FORTRAN and Ada, but not Pascal and C, assume that two parameters or parameters and globals
are distinct. If a program does alias parameters, then the program could malfunction. However, it
appears that such a program would merely produce incorrect results, rather than fail due to error
detection (or the lack of it) in an implementation. Hence it appears that aliasing is only a problem if
program proof were attempted, and so this issue is not directly relevant to the level of validation
being attempted here.
The effort required for each issue and each language can be summarized using the following codes:
L.
A validated compiler for the language undertakes the necessary checks.
T.
A simple tool can be used to make the necessary check.
TT.
A complex tool is needed to make the check.
C.
A review of the code is needed.
Of course, a simple tool could be a person(!), and different methods could be applied to obtain the same
level of assurance. A `complex' tool is one which needs to do an analysis of various parts of a program to
extract the needed information, such as the transitive closure calculation to detect recursion.
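As a hedged sketch of such a calculation, the following Pascal fragment computes the transitive closure of a call graph using Warshall's algorithm; all the names and the matrix representation are assumptions. A program is recursive exactly when, after the closure, some procedure can reach itself:

const MaxProc = 100;
type  CallGraph = array[1..MaxProc, 1..MaxProc] of boolean;

{ on entry, calls[i, j] is true when procedure i directly calls procedure j;
  on exit, calls[i, j] is true when i can reach j through a chain of calls }
procedure Closure(var calls: CallGraph; n: integer);
var i, j, k: integer;
begin
  for k := 1 to n do
    for i := 1 to n do
      if calls[i, k] then
        for j := 1 to n do
          if calls[k, j] then
            calls[i, j] := true
end;

{ recursion is present exactly when calls[i, i] holds for some i in 1..n }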
The ANSI/ISO C standard allows detailed checks to be performed but, since these have traditionally not
been provided, few systems make any serious attempt at providing the checks, which for this language are
necessarily dynamic (due to the weak typing). However, one model implementation of the language
attempts to make all the checks that the standard allows [34]. Passing the checks of such an
implementation would provide a level of assurance near to that for strongly typed languages.
Conclusions
Software quality is rather like justice--it must not only be done, but must be seen to be done. This implies
that the software product must be capable of independent validation. This, in turn, can only be achieved if
the software is in a form that can be understood by others. The difficulty of producing understandable
software is at the heart of the software maintenance problem.
A high-level language program text represents the easiest route to software quality since the software
should indeed be understandable. Moreover, if the language is one which has been standardized by ISO,
then there is a substantial body of literature to aid programmers.
In the most critical applications in which quality is essential, a mathematical proof of the correspondence
between the specification and the implementation may be required. Although such techniques can be
applied to machine-level languages [14], it is easier and cheaper to obtain assurance by analysis of the
high-level language code. Here, the use of formal methods to define high-level languages is important [17,
42], since only then can one make precise statements about the meaning of programs.
A useful intermediate step in the validation of a high-level language program is to show that the program
does not `fail' in the programming language sense, that is, it does not break the language rules. Even if this
limited goal cannot be achieved, establishing the exact reasons for the failure of the program in this
respect is a useful validation objective.
The actual programming language (and development system) used will have a marked impact upon the
cost of this limited objective. For instance, the majority of Ada programs which are incorrect according to
the Ada standard are rejected by a compiler, while with C a substantial analysis is required to achieve the
same level of assurance.
Acknowledgements
This work was funded by the Department of Trade and Industry (Software Quality Unit). Critical reviews
by R Scowen, N North and D Schofield at NPL have assisted in the presentation, although the views
expressed are the personal ones of the author.
References
1
``Software: A vital key to UK Competitiveness'', Cabinet Office: Advisory Council for Applied
Research and Development, HMSO 1986.
2
``SpecBox'', Adelard, Coburn House, Coburn Road, London E3 2DA.
3
American National Standard Programming Language C. ANSI X3.159-1989.
4
American National Standard Programming Language FORTRAN. ANSI X3.9-1978. (Also ISO
1539:1980.)
5
American National Standard Programming Language PL/I. ANSI X3.53-1976. (Also ISO
6160:1979.)
6
G H Archinoff, R J Hohendorf, A Wassyng, B Quigley and M R Borsch. Verification of the
shutdown system software at the Darlington Nuclear Generating Station. International Conference
on Control & Instrumentation in Nuclear Installations. Glasgow. May 1990.
7
G J K Asmis, H O Tezel and J D Kendall. The Canadian process for the regulatory approval of
safety critical software in nuclear power reactors. International Conference on Control &
Instrumentation in Nuclear Installations. Glasgow. May 1990.
8
R S Boyer and J S Moore. A Verification condition generator for FORTRAN. The correctness
problem in computer science, Academic Press, pp9-101. 1981.
9
Ada Evaluation Suite. BSI-QA. Milton Keynes. 1988.
10
Pascal Validation Suite, version 5.0 obtainable from the Pascal Compiler Validation Service, British
Standards Institution - Quality Assurance Services, Milton Keynes. 1988.
11
British Standards Institution. Second Committee Draft. Modula-2 Standard. ISO/IEC JTC1
SC22/WG13/D181. December 1993.
12
Quality Assurance. Handbook 22. British Standards Institution. Fourth edition, 1990.
13
B A Carré and T J Jennings. SPARK -- The SPADE Ada Kernel. University of Southampton. March
1988.
14
D Clutterbuck and B A Carré. The verification of low-level code. Software Engineering Journal, Vol
2 pp97-111, 1988.
15
W J Cullyer, S J Goodenough and B A Wichmann, ``The Choice of Computer Languages in Safety-
Critical Systems'', Software Engineering Journal. Vol 6, No 2, pp51-58. March 1991.
16
I F Currie. New Speak -- an unexceptional language. Software Engineering Journal, Vol 1, pp 170-176, 1987.
17
The Draft Formal Definition of Ada. Dansk Datamatik Center. 1987. (Funded by the Commission
of the European Communities).
18
J Dawes. ``The VDM-SL Reference Guide''. Pitman Publishing. 1991. ISBN 0-273-03151-1
19
``STARTS (Software Tools for Application to large Real-Time Systems) Purchasers Handbook'',
second edition, DTI, May 1989.
20
J M Foster. The algebraic specification of a target machine: Ten15. in High Integrity Systems,
edited by C T Sennett. Pitman. 1989.
21
J M Foster. TDF Specification. A purpose-built Architecture Neutral Distribution Format. RSRE.
October 1990.
22
I Hayes, ``Specification Case Studies'', Prentice-Hall, 1987.
23
I D Hill, ``Wouldn't it be nice if we could write computer programs in ordinary English - or would
it?'', The Computer Bulletin, Vol 16, 1972.
24
A D Hill, The choice of programming language for highly reliable software -- a comparison of C
and Ada. Part 1, Ada User, Vol 12, pp11-32, Part 2, pp92-103. 1991.
25
Holzapfel R and Winterstein G. Ada in Safety Critical Applications. Ada-Europe Conference 1988.
CUP. 1988.
26
J D Ichbiah et al: Reference Manual for the Ada programming language. ANSI/MIL-STD 1815A,
US Department of Defense, February 1983. (Also ISO-8652:1987.)
27
Ada 9X Reference Manual, Draft version 3.0. Intermetrics. June 1993.
28
IEC/SC65A/WG9(Secretariat 122) ``Software for computers in the application of industrial safety-
related systems''. Draft for comment. December 1991.
29
ISO/IEC 7185:1990 Information technology -- Programming languages -- Pascal.
30
ISO 9000:1987, Quality management and quality assurance standards -- Guidelines for selection
and use.
31
ISO 9001:1987, Quality systems -- Model for quality assurance in production and installation.
32
ISO/IEC DIS 10967-1:1993, Information technology -- Language independent arithmetic, Part 1:
Integer and floating point arithmetic.
33
``Information Technology Security Evaluation Criteria'', Provisional Harmonised Criteria. Version
1.2. 1991. (UK contact point: CESG Room 2/0805, Fiddlers Green Lane, Cheltenham, Glos, GL52
5AJ.)
34
D Jones, The Open Systems Portability Checker. (to be published, available from Knowledge
Software Ltd, Farnborough, Hants, GU14 9RZ, UK.)
35
P D Kenward and B A Wichmann (Editors). Approved Ada Uniformity Issues. NPL Report DITC
172/91. January 1991. (See also Ada User Vol 12, No 1 pp32-36. March 1991.)
36
B W Kernighan and D M Ritchie. The C Programming Language. Prentice-Hall, 1978.
37
Interim Defence Standard 00-55, ``The Procurement of Safety Critical Software in Defence
Equipment'', Ministry of Defence, (Part1: Requirements; Part2: Guidance). April 1991.
38
J S Moore. PITON: A verified assembly level language. Computational Logic Inc. September 1988.
39
D J Pavey and L A Winsborrow. Demonstrating Equivalence of Source Code and PROM Contents.
Safety and Reliability Engineering. Nuclear Electric. (to be published)
40
Issued in the USA by the Requirements and Technical Concepts for Aviation (document RTCA
SC167/DO-178B) and in Europe by the European Organization for Civil Aviation Electronics
(EUROCAE document ED-12B). (draft .7, dated 27th July 1992.)
41
Rex, Thompson and Partners Ltd : The capabilities of MALPAS - a software verification and
validation tool; RTP/4002, April 1987.
42
C L N Ruggles (Editor). Formal Methods in Standards. A report from the BCS working group.
BCS/Springer-Verlag. ISBN 3-540-19577-7. 1990.
43
M Davies and B A Wichmann. Experience with a compiler testing tool. NPL Report DITC 138/89.
March 1989. p22. NTIS ref: PB89-193585/WFT
44
B A Wichmann. Insecurities in the Ada programming language. NPL Report DITC 137/89, January
1989.
45
B A Wichmann. ``Low-Ada: An Ada validation tool''. NPL Report DITC 144/89. August 1989.
46
B A Wichmann. The use of floating point in critical systems. Computer Journal, Vol 35, No 1, pp 41-44. 1992.
47
B A Wichmann. Requirements for programming languages in safety and security software
standards. Computer Standards & Interfaces. Vol 14 pp433-441, 1992.
Example Program
program calpi(input, output);
{ This program calculates PI to arbitrary accuracy }