For critical software, the key issue is assurance of the application, that is, gaining sufficient confidence in the application in order to authorize its use. The Ada 95 language contributes to this process by providing a language definition which minimizes potential insecurities and which thus facilitates independent program validation and verification. However, the size and richness of the language also raise some issues that need to be addressed if it is to be fully exploited for safety and security applications.
The Safety and Security Annex is designed to address these concerns. It should be noted that the prospective users of this annex form a small, specialized community who historically have been served by special contracts between large user organizations and specific vendors. However, such an approach can only satisfy the requirements of enterprises with significant resources. Since the Annex is part of the Ada 95 standard, "off-the-shelf" products should be able to satisfy the same requirements at substantially reduced costs. This will allow Ada 95 to be used with assurance by different application areas, including those outside the military sector such as medical electronics or electronic funds transfer. Over the period that the Ada 95 standard can be expected to be operative, the number of applications in such areas could rise quite steeply.
The UK Ministry of Defence standard for the procurement of safety-critical software [MoD 91] is based upon the use of mathematical specifications and formal proofs or rigorous arguments for showing compliance and has been effectively used in some environments. However, the complete application of this standard is often not feasible and hence other methods must be used, even within the defense context. Whatever approach is taken, the final form of the program is vital, since it is an analysis of the program itself which must provide the basis of most of the assurance procedures. This implies that the programming language used for the application is likely to have a key role. The Annex aids but does not require the application of mathematical specification techniques.
A mature standard in the safety-critical area which takes a different approach to formal methods is the international avionics standard [DO-178B]. Here the emphasis is on design, analysis and test. Although there is little stated about programming languages, it is clear that any analysis will either depend upon the programming language or upon a corresponding analysis of the object code generated by the compiler. Quite rightly, the standard requires isolation from the correctness of the compiler for the most critical software, implying that one must either reason from the object code or else show that the object and source code are equivalent. The Annex provides facilities to aid in the analysis of the object code, including access to data values during testing, in order to ease validation.
In the context of safety, the requirements surrounding the application of computers to nuclear shut-down systems have been well documented [Archinoff 90]. In the same application area, the need to show that compiler errors can be detected is given in [Pavey 93].
In the security area, the general requirements are well documented in [DoD 85, ITSEC 91]. Although the latter document does imply some requirements on the programming language in use, they are at a level that is not really relevant to this Annex.
To reason about a program requires that the structures used within the program be well-defined and properly implemented. However, almost all existing programming languages standardized by ISO are not defined in a mathematically precise form. Hence substantial care must be taken to ensure that the features of the language used are well-defined and that the program accords with the intention of the programmer.
Since programmers are fallible, languages which require checks for some forms of error are an advantage, especially if the design can allow the software to request the system to return to a safe state. Ada 83 is the only widely used language suitable for critical applications which requires such checking (and Ada 95 of course continues in this tradition). Many critical applications do not exploit this checking but demonstrate (perhaps by mathematical proof) that the checks could not fail. Undertaking this form of static checking is very expensive in staff time, and hence is not practical for less critical applications. A brief summary of the securities and insecurities of standard languages is shown in Figure H-1.
+--------------+--------------------------------+-------------------------+
| Standard     | Security Features              | Insecurities            |
| Language     |                                |                         |
+--------------+--------------------------------+-------------------------+
| Ada 83       | Runtime checks required        | Access to unset scalars |
|              | Pointer initialization         |                         |
|              | Type-secure across packages    |                         |
|              | Can recover from check failures|                         |
+--------------+--------------------------------+-------------------------+
| Modula-2     | Type-secure across modules     | Unset pointers (and     |
| (not yet an  | Limited recovery from failures |   scalars)              |
| ISO standard)|                                |                         |
+--------------+--------------------------------+-------------------------+
| Pascal       | Strongly typed                 | Runtime checks optional |
|              |                                | Unset pointers (and     |
|              |                                |   scalars)              |
+--------------+--------------------------------+-------------------------+
| C            | (Additional tools:             | 150 undefined "features"|
|              |   make and lint)               | Runtime checking often  |
|              |                                |   not done              |
+--------------+--------------------------------+-------------------------+
| Fortran 77   | Type checking                  | Default declarations    |
|              | No pointers                    | No checking across      |
|              |                                |   routines              |
+--------------+--------------------------------+-------------------------+

        Figure H-1: Securities and Insecurities in Standard Languages
A comparison of programming languages for safety and security applications showed that a subset of Ada 83 was a good choice [Cullyer 91]. The subsequent maturity of Ada compilers, the better understanding of the language, and the provision of Ada-specific tools, make the language the first choice for many applications. In contrast, the C language is deprecated in the IEC draft standard on safety software [IEC/SC65A 91] since it is very difficult to demonstrate that C code is restricted to the subset which is well-defined. Ada 95 removes a number of the insecurities noted in Ada 83 [Wichmann 89], and hence presents the smallest area of uncertainty to the developer.
The C++ language is not listed in Figure H-1, since it is not yet standardized by ISO. There are substantial problems in the validation of C++ code, some of which arise due to the lack of an agreed standard. Moreover, since almost all C programs can be compiled by a C++ compiler, the inherent problems of the validation of C code are transferred to C++. In addition, compiler vendors are at a disadvantage compared to Ada in not having an internationally agreed and comprehensive validation suite for the language.
The insecurities may be avoided by suitably chosen subsets of Ada, such as that provided by SPARK [Carre' 88]. The same subset could be used for Ada 83 and Ada 95, and may be exploited by the compiler through the pragma Restrictions described below. Note that the use of such subsets is not in conflict with Ada's traditional "no subsets, no supersets" policy, since for compiler validation purposes there must be a complete implementation of the language, and not simply a subset. The point is that any particular program will not use the full language, and if there is a set of features whose exclusion can result in software for which validation and verification is facilitated, then the implementation can enforce such exclusions in one mode of operation and can link in reduced versions of the run-time system.
Until now, language standards have not addressed the problem of the validation of the object code generated by a compiler. This is troublesome unless information is provided linking the source code to the object code. Such checking will be required for many years to come while there is a possibility of errors being introduced by a compiler. The alternative of having "trusted compilers" does not yet seem viable for the major languages.
The user can indicate exclusion of particular features in a partition by means of pragma Restrictions. For instance, the user can indicate that tasking is not needed, thus allowing the run-time system to be very much smaller and simpler. Similarly, the user can ensure that other language features are avoided. Through this facility, the language used on a particular system can be reduced to a quite small subset of Ada 95. This can facilitate the analysis of the source code, since not only can language features be avoided for which verification and validation are impractical, but one can be assured that these restrictions are enforced by the compiler.
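As a minimal sketch of this facility, a partition that needs no tasking at all could be configured with the Max_Tasks restriction parameter (which also appears in the scenarios later in this discussion):

   pragma Restrictions (Max_Tasks => 0);   --  no tasks: permits a much
                                           --  smaller run-time system

Since Restrictions is a configuration pragma, this appears once and then applies to every unit in the partition.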
One might argue in favor of a smaller language than Ada 95 for safety-critical systems, rather than a mechanism for indicating features to be excluded. However, past experience has shown that reaching consensus on a suitable subset is difficult, since specific applications do require most features of Ada for convenient and maintainable coding.
After agreeing to the Ada 95 standard, WG9 discussed the problems in the validation and verification of Ada programs. Such activities could be substantially cheaper if tools could effectively analyze Ada source code to verify safety or security requirements. In practice, this is likely to require that the code conforms to some conventions, such as being within a subset. These issues are to be investigated by the WG9 Verification Rapporteur Group (VRG).
A key issue for the language in critical applications is that of understandable execution. Ada 95 addresses this issue in several ways, as discussed below.
Although this feature is in the core language [RM95 13.9.2], it is discussed here since it is relevant to safety and security applications. The Valid attribute allows the user to check whether the bit-pattern for a scalar object is valid with respect to the object's nominal subtype. A reason for including such a facility is that the other language-defined operations that could conceivably undertake the same function might not have the intended effect. The following example illustrates the issue.
declare
   I : Integer range 1 .. 10;   -- Uninitialized
   A : array (1 .. 10) of Float;
begin
   ...
   A(I) := 1.0;
   ...
end;
In Ada 83, users are sometimes surprised to learn that a compiler is permitted to remove the index check for A(I). The reason that such an optimization is allowed is that a reference to an uninitialized scalar object in Ada 83 yields an erroneous execution, and therefore unspecified effects. Thus the compiler may apply the following logic: if the execution is not erroneous, then I must lie within its declared subtype and the check cannot fail; if the execution is erroneous, then the effect is unspecified and the check may be omitted in any case.
Perhaps even more surprising, the programmer cannot count on the following style to ensure that a check is carried out:
declare
   I : Integer range 1 .. 10;   -- Uninitialized
   A : array (1 .. 10) of Float;
begin
   ...
   if I in 1 .. 10 then
      A(I) := 1.0;
   else
      raise Bad_Index;
   end if;
end;
In this example the compiler may optimize the membership test I in 1 .. 10 to True, using the same logic that led to the suppression of the index check in A(I).
Ironically, if the programmer had declared the variable I without a range constraint, then it is likely that the check would be performed (unless data flow analysis can show otherwise). The fact that including a range constraint with a scalar declaration might reduce the security of the code (by failing to raise a run-time exception) is against the Ada design philosophy.
This problem is addressed in several ways in Ada 95. First, a reference to an uninitialized scalar variable is no longer erroneous but rather a bounded error: the permitted effects are to yield a valid value (i.e., one within the variable's nominal subtype), to yield an invalid value (one within the variable's type but outside its nominal subtype), or to raise the exception Program_Error. This rule prevents the compiler from "reasoning from erroneousness". In the first example, unless the compiler has by default set the initial value of I to be within the range 1 .. 10 (which is not recommended, since it would mask errors), it will need either to raise Program_Error (because of the reference to an uninitialized object) or to generate a range check.
The second example will not have the problem of the membership test being optimized away, but there is still the possibility that the reference to the uninitialized value of I will raise Program_Error, which is presumably not what the programmer intended. Moreover, to allow membership tests to be implemented simply and efficiently, the membership test only performs a range check and thus might not work as desired for enumeration types with "holes" in the representation.
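As an illustrative sketch of this last point (the type Color and its literals are hypothetical), consider an enumeration type whose representation has holes:

   declare
      type Color is (Red, Green, Blue);
      for Color use (Red => 1, Green => 4, Blue => 8);
      --  codes 2 .. 3 and 5 .. 7 are unused "holes"
      C : Color;   -- Uninitialized
   begin
      --  Should the bits of C happen to hold the code 2, a membership
      --  test (implementable as a simple range check on the codes
      --  1 .. 8) may accept it, whereas C'Valid returns False.
      if C'Valid then
         null;   --  here C holds one of the codes 1, 4 or 8
      end if;
   end;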
These remaining issues have motivated the introduction of a simple, intuitive facility for checking that the value of a scalar object is within the object's nominal subtype. That is the purpose of the Valid attribute, which applies to any scalar object. As illustration, an alternative version of the second example, using Valid rather than the membership test, is as follows:
declare
   I : Integer range 1 .. 10;   -- Uninitialized
   A : array (1 .. 10) of Float;
begin
   ...
   if I'Valid then
      A(I) := 1.0;
   else
      raise Bad_Index;
   end if;
end;
The purpose of the Valid attribute is to check the contents of a scalar object without formally reading its value. Using this attribute on an uninitialized object is not an error of any sort, and is guaranteed to either return True or False (it will never raise Program_Error), based on the actual contents of the object, rather than on what the optimizer might have assumed about the contents of the object based on some declaration.
Although the use of the Valid attribute for checking the validity of uninitialized data is somewhat contrived, other examples are more realistic, such as checking data obtained from an external device, or from a file written by a program in another language.
The Valid attribute could potentially be applied to a wider range of types than that of scalars. Unfortunately, this extension is not easy to define with the rigor that should be expected. For instance, what action should an implementation perform to attempt to determine if a bit-pattern of an access type is a valid value? If the attribute did have a larger scope but with unspecified semantics, then its use on critical systems would require an implementation-defined specification, checked by analysis of the object code produced by the compiler. This complexity did not seem justified.
If the attribute is to be used on a record read from a file that was written by a COBOL program, it is important that the Ada program can check that the value is meaningful rather than execute code based upon the premise that the value is legal. In the context of safety-critical applications, such alien data is likely to be provided by some external device. Such data could well be a composite value, in which case the attribute must be applied to the scalar components. Checking on non-scalar components or for potential gaps between components cannot be undertaken with Valid.
It is not necessarily logically sound to apply Valid to a composite type, due to the presence of components which are undefined, as in the following example:
type Stack_Data is array (1 .. 100) of Character range 'A' .. 'Z';

type Stack is
   record
      Index : Integer range 0 .. 100;
      Data  : Stack_Data;
   end record;
Only those elements of the array up to the position of Index need to be checked for validity, but there is no way for a general definition to capture such semantics.
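A user-written check can, however, encode the intended semantics by applying Valid only to the components that are meaningful; the following sketch (the function name Is_Valid_Stack is illustrative) shows one way:

   function Is_Valid_Stack (S : Stack) return Boolean is
   begin
      if not S.Index'Valid then
         return False;           --  Index itself is out of 0 .. 100
      end if;
      for K in 1 .. S.Index loop
         if not S.Data (K)'Valid then
            return False;        --  an in-use element is invalid
         end if;
      end loop;
      return True;               --  elements above Index are ignored
   end Is_Valid_Stack;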
In formulating the design of the Valid attribute, we considered several alternatives. One was to have it as a scalar subtype attribute function, applied to the object of that subtype whose validity was to be checked; for example,
declare
   subtype S is Integer range 1 .. 10;
   I : S;
begin
   ...
   if S'Valid(I) then
      ...
   else
      ...
   end if;
   ...
end;
However, this would present semantic difficulties, since calling a function causes evaluation of the actual parameters; but the main idea behind the Valid attribute is that it should not read the value of the object, since this might raise an exception.
Another approach we considered was to have the attribute as a function associated with a "target" scalar subtype (either as an attribute or through generic instantiation) applied to a value of any (other) "source" scalar subtype. The idea is to check a source bit pattern (say an Integer value) to see if it is a valid value for the target subtype. As an example, if Enum is an enumeration type with "holes" (that is, it has a representation clause with non-contiguous integer values), and V is an Integer value, then Enum'Valid(V) would return True if V has one of the identified values, and False otherwise. The idea is to check validity before the source bits are copied to the target object.
One problem with such an approach, however, is that it raises some semantic and implementation questions with respect to the expected sizes of both the source value and the target subtype; this issue does not arise with the notation X'Valid since the object X is not itself evaluated. Another problem is that it is rather unwieldy in the common case where validity needs to be checked for data that is being read from an external device into a record, and where the record fields have scalar subtypes. In such a case it is simplest and most efficient to read the data into the record first, and then to check validity. This works conveniently using the Valid attribute applied to a scalar object:
declare
   type Rec is
      record
         A : Character range 'A' .. 'Z';
         B : Integer range 1 .. 10;
      end record;
   R : Rec;
begin
   Read(R);
   if not R.A'Valid then ... end if;
   if not R.B'Valid then ... end if;
   ...
end;
With the alternative style, it would be necessary to have a record type with fields corresponding to the full types (Character and Integer), to read the data into an object of this record type, check validity of the fields, and then copy it into R so that constraint checks can be enforced on subsequent assignments to these fields.
As a result of analysis of these design alternatives, we decided on the approach where validity is realized through an attribute that is applicable to a scalar object, rather than an attribute function associated with a target scalar subtype.
A value can be abnormal instead of having an invalid representation [RM95 13.9.1]. From the point of view of safety and security, such abnormal values are a potential disaster, since they can give rise to erroneous execution, the opposite of the predictable execution which the Annex strives to provide.
In general, it is not possible for a scalar value to be abnormal, and in any case the user can take precautions against scalar values with an invalid representation by suitable use of the Valid attribute. It might, however, be possible for an implementation to produce an abnormal floating point value if a signalling NaN arose for which no trap-handler was provided, since access to the value would produce unpredictable results.
(A signalling NaN is a bit pattern used in place of a floating point number in systems that support IEC 559; see [IEC 89]. Access to such a bit pattern will cause an interrupt, if the processor state is set correctly. If the interrupt is not then handled, disaster could ensue. In fact, signalling NaNs could be used to detect unset floating point values with very little overhead on some machines, although existing Ada systems do not appear to do this.)
Abnormal values can arise from an abort statement interrupting an assignment operation, or interactions with the environment external to the Ada program. Typically, the data type would be composite with a complex internal structure which can be placed in a state that the Ada system cannot subsequently handle. Task objects and types with discriminants are potential candidates for types which can have abnormal values. Vendors providing support for this Annex should be able to indicate if and how such values can arise (unless they require the use of the Restrictions pragma to effectively exclude such values).
Abnormal values can also arise if a language defined check would fail but the check has been suppressed. Suppressing checks can obviously lead to problems, since any storage could be overwritten making it generally impossible for the Ada run-time system to retain control.
The use of an undefined scalar is a very common programming error which must be detected in critical systems (see [Wichmann 92] for a discussion of some of the subtleties of this issue in connection with an earlier draft of the Ada 9X mapping). As observed above, the Ada 95 rule treating this as a bounded error rather than an erroneous execution will inhibit certain compiler optimizations that would make this kind of error difficult to detect. However, it does not prevent the compiler from giving an in-range "default" initial value to otherwise uninitialized scalars, which would also make it difficult to find errors.
In the light of these considerations, Ada 95 supplies the configuration pragma Normalize_Scalars, which serves to ensure that elaboration of the declaration of an otherwise uninitialized scalar sets the object to an invalid value, if such a value exists. If no such invalid value exists for a scalar object, then the implementation needs to identify this (for example on the program listing) so that the programmer is alerted to ensure that the object is assigned before it is referenced. In such cases, the program will have a predictable (but not necessarily portable) value and the implementation needs to document the in-range value taken.
The name Normalize_Scalars reflects the intent of the pragma. A "normal" value for a scalar object is either valid (if within the object's nominal subtype) or invalid (if outside). Since scalar initializations induced by the pragma might or might not be invalid, "normalize" is an appropriate description. In general, an invalid value will be outside the object's nominal subtype, but there are also cases where it is possible for the implementation to produce an invalid value even when the nominal subtype has the same range as the type. For example, suppose that an implementation of the type Boolean reserves 8 bits for objects that are not in packed arrays or in records with representation clauses, with 16#00# corresponding to False and 16#01# to True. In the presence of pragma Normalize_Scalars, an otherwise uninitialized Boolean variable will be set to an invalid value, which is neither 0 nor 1.
Some interactions with other pragmas need to be understood by prospective users. First, if a scalar object is the argument to pragma Import, then its (probable lack of) initialization is not affected by pragma Normalize_Scalars. This is reasonable, since an imported data item is under the control of foreign language code that is not subject to Ada semantics. Note that if an Ada scalar variable corresponds to a memory-mapped I/O location, then any implicit initialization could have an unwanted effect. This can be avoided by importing the scalar variable.
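The following sketch shows this idiom; the package and object names are illustrative:

   pragma Normalize_Scalars;   --  configuration pragma for the partition

   package Device is
      Status_Register : Integer;
      --  In practice this object would be placed at the device address
      --  with an address clause. The pragma below marks the object as
      --  externally controlled, so Normalize_Scalars induces no
      --  implicit initialization that could disturb the memory-mapped
      --  location.
      pragma Import (Ada, Status_Register);
   end Device;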
Another interaction is with pragma Restrictions. If a system is being developed in which exception handling is absent, then the use of pragma Normalize_Scalars is inappropriate, and even dangerous. With the pragma Restrictions(No_Exceptions) in effect, there is no object code generated to perform constraint checks. Clearly, referencing an out-of-range scalar would then result in an unpredictable effect.
One aspect of predictability is to understand the behavior of a program in situations identified by the language rules as either bounded errors or unspecified effects. Thus the implementation needs to document these behaviors, either as part of a listing or tool-processable output, or (if a general rule) as independent documentation. Some specific requirements are now discussed.
Some parameters can be passed by reference or by copy. A different mechanism could even be chosen for two calls of the same subprogram. Incorrect programs can be affected by the choice and hence safety and security applications need to check that either this error has not been made or that the effect will be acceptable. The simplest solution is for the compiler to indicate the choice made. If the choice is a simple static one (say, always by reference, except for entry parameters which are always by copy), then this could be stated once in the compiler documentation, otherwise the object code listing (or appropriate tool) should indicate the choice. In fact, some compilers have a complex algorithm which varies from call to call, especially for slices. The complexity of this algorithm is not relevant to the issue, merely that the choices made should be clear.
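The following sketch (all names are illustrative) shows the kind of incorrect program that is affected by the choice of mechanism; the array type Vec is neither a by-copy nor a by-reference type, so the compiler may choose either:

   procedure Mechanism_Demo is
      type Vec is array (1 .. 2) of Integer;
      Global : Vec := (0, 0);

      procedure Update (X : in Vec) is
      begin
         Global (1) := 1;
         --  By reference: X (1) now reads as 1.
         --  By copy:      X (1) still reads as 0.
         if X (1) = 1 then
            null;   --  reached only under by-reference passing
         end if;
      end Update;
   begin
      Update (Global);   --  Global is both global and actual parameter
   end Mechanism_Demo;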
Many safety critical applications are in the form of an infinite loop. It is important that this loop should not permanently consume storage. Therefore it must be possible, by reviewing the object code, to ensure that this does not happen. In the security context, it is important that storage used to contain classified information does not leak via the storage allocation system. It is possible that this requirement can be met by proposals for a user-defined allocation and de-allocation of storage - in which case the run-time system may be less critical. There is a parameter to the Restrictions pragma to avoid storage leaks.
Of course, even if there is no net consumption of storage within a program, and if the storage is not allocated and de-allocated via a stack, it will be necessary to show that storage fragmentation does not undermine the system. For this reason, the algorithm used by the run-time system is important and must be fully documented.
For time-critical applications, additional constraints could be required on the run-time routines, such as having a tight upper bound on execution time. This requirement is not specified here, since it will be application specific, and many systems avoid the use of the heap and hence are unlikely to have a problem.
Problems can arise if the evaluation of numeric expressions involves extended range or extra precision. Counter-intuitive results can be produced when floating point expressions are combined with equality tests, as is quite common on IEEE systems. Hence the vendor should document the approach taken, independent of any annotation of the object code, so that any potential confusion can be avoided. This implies that for any specific expression, the range and precision with which it is computed should be clear from the documentation (see [RM95 H.2(2)]).
The evaluation of the exponentiation operator is not defined exactly in [RM95 4.5.6(11..12)]. Apart from the association of the multiplications (which only makes a marginal difference to the result), performing the division (for negative exponents) at the end or initially makes a significant difference to the occurrence of overflow.
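As a hedged illustration, assume a 32-bit Float whose largest value is about 3.4E38:

   declare
      X : Float := 1.0E30;
      Y : Float;
   begin
      Y := X ** (-2);
      --  Division last, i.e. 1.0 / (X * X): the intermediate value
      --  1.0E60 overflows, raising Constraint_Error (when
      --  Float'Machine_Overflows is True).
      --  Division first, i.e. (1.0 / X) * (1.0 / X): the result,
      --  about 1.0E-60, merely underflows toward zero and no
      --  exception is raised.
   end;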
Adherence of an Ada implementation to the Language Independent Arithmetic standard [ISO 93] would be appropriate in the context of this annex, since that standard requires that the numeric operations used are precisely defined.
The relevant features supplied by the Annex are the pragmas Reviewable and Inspection_Point.
Since all compilers have bugs, it is the conventional wisdom of the safety-critical community not to assume that the generated object code is automatically correct. For instance, the approach taken in the avionics standard DO-178B is one of Design, Review and Test [DO-178B]. As far as practical, the review and test activities are undertaken at the object code level. Indeed, if the reliability requirements of Class 1 flight critical software (the most critical) are to be attained, every attempt must be made to detect errors induced by a compiler. This is expensive, but unavoidable given current technology.
The pragma Reviewable applies to a partition so that the compiler can generate code to match special documentation thus permitting independent review of the object code. The following specific requirements apply.
Since the elaboration order may have a visible effect, it is essential that the chosen ordering be indicated by the implementation in forms convenient both for human readers (such as in a program listing) and for processing by automated tools. An example of such a tool is a static analyzer that determines the absence of accesses to undefined variables.
It cannot be assumed that the vendor will provide every tool needed for validation and verification. In any case, complete reliance upon such tools may not be acceptable. Hence it should be possible to extract the object code in a form that can be easily machine processed. An example of a tool is one which provides maximal times for instruction sequences so that time-critical software can be shown to meet deadlines.
The wording in the Annex on the requirement of producing output suitable for tools uses the word "should" rather than the conventional (and stronger) "shall". This is because it may not be possible to check compliance with the requirement objectively. Since there are no standard data interchange formats for such information, there is no means of giving a precise description of what is "suitable".
The implementation needs to allow the user to determine which objects are assigned to which registers, and the lifetimes of those assignments.
An important aspect of code generation is the assignment of registers. The most general register assignment algorithm is known to be NP-complete and hence it is quite unreasonable for compiler documentation to detail such an algorithm (especially since it may be proprietary). However, the result of the algorithm for the specific safety/security application is to be provided. The area allocated to any object is to be specified so that an object can be accessed, either via the register or via main memory, for the entire lifetime of the object. Compilers typically produce code to address internal information as well as the information directly related to objects declared by the program. This issue is not specified in the Annex, since it is unclear how it can be specified in a testable form.
For each reference to a scalar object, the implementation needs to identify whether the object is either "known to be initialized" or "possibly uninitialized". Note that pragma Normalize_Scalars does not affect this analysis.
Since access to unset scalars can lead to severe errors, and compilers already perform some of the analysis required, the purpose of this requirement is to provide information to aid validation and verification. In the case of "possibly uninitialized", the information would depend upon the strength of the analysis performed by the compiler, and hence different compilers (or even the same compiler under different options) could not be expected to give identical information.
For the most critical applications, it is necessary to check that the machine instructions required by an application are correctly handled by the processor hardware. Microcode faults in processor chips are not uncommon and therefore such checking may be needed [Wichmann 93]. A list of the used instructions aids the checking since unused instructions need not be checked. It would be helpful to identify instructions only used in the run-time system, but this is not essential.
For checking timing constraints, a user needs to consider only the instructions listed.
Code sequences derived entirely from a single Ada statement (or declaration) must be identified as such. Due to some optimizations, this identification could be difficult; in such cases, some optimizations could be disabled when the pragma Reviewable is in force, rather than enhancing the compiler to meet the requirements with full optimization. In this area, a tool could be much more useful for independent validation and verification than an annotated listing.
Some compilers provide information based upon line numbers rather than Ada statements. For the purposes of this annex, it can be assumed that there is only one statement per line. For a single statement, several code sequences could be involved, especially if instruction scheduling merges the code from more than one statement. Addressing instructions derived from more than one statement would not have to be identified as such.
If an Ada statement results in no object code, then a positive indication of removal is required, rather than a mere absence of object code from a statement.
The user may need to compute the storage requirements for a program so that the absence of the Storage_Error exception can be checked. For subprograms containing only statically sized objects, an implementation should indicate the size of the stack frame required.
The implementation must indicate where compiler-generated run-time checks occur in the object code, and must also provide a method of determining which exceptions can be raised by any statement.
The handling of the predefined exceptions is problematic. Exception sites need not be in the same compilation unit as the handlers that service them. Some method is needed to indicate, explicitly in the object code, the actual locations at which exceptions are raised and handled. Some mechanism should be available, either through source or object code analysis, that permits the analysis of the program to determine which handler is used for each site that can raise an exception and to identify sites for which no handler is supplied. It would probably be most useful if this was in the form of a tool, rather than tables which required detailed analysis of each case. An example of a tool to undertake exception analysis is given by [Schaefer 93].
Since exceptions raised by predefined operations are not explicitly indicated in the source, and since the implementation is allowed some freedom in choosing actual execution order, this facility is best supported at the object code level. Even if a vendor does not choose to perform such an analysis, the information necessary to perform it should be made available to the user. For a detailed analysis of the issues involved, see Chapter 2 of the report on Formal Studies of Ada 9X [DoD 92].
Clearly, the fact that a compiler generates a call to an out-of-line routine does not obviate the need for reviewing the object code of the called routine. Hence the same requirements for reviewing the object code must apply to the run-time system components.
A point in the program text can be marked as an inspection point through a pragma of the same name, optionally identifying a set of objects whose values are to be available. At the corresponding point(s) in the object code, the vendor is required to provide a means of determining the values of the specified objects, or, if none was specified, then a means of determining the values of all live objects. This implies that the object code can be analyzed with special tools so that properties of the code and object values can be verified, independently of the source code. In theory, full mathematical verification could be undertaken, although this implies that the specification of the application is available in a suitable form.
This proposal arose out of the discussion from the special meeting held in April 1993 [Brosgol 93] attended by experts associated with producing safety critical systems. The idea is to break down a program into code sections separated by inspection points to facilitate validation of the code. This idea is new, although the concept of "break-points" in assembly language debugging is clearly similar.
Note that a single occurrence of pragma Inspection_Point in the source text may correspond to several inspection points in the object code; for example, if the pragma appears in an inlined subprogram, a generic unit, or a loop that has been "unrolled" by the optimizer.
There are, not surprisingly, some interactions between the pragma and optimizations. Since a user will in general examine the values of inspectable objects when execution is suspended at an inspection point, it is essential that compilers not perform "dead code elimination" on prior assignments to such objects. On the other hand, disabling all optimization is too extreme and in fact is unnecessary. Thus the compiler is allowed to store inspectable objects in registers; the implementation needs to provide sufficient information to make this mapping known to the user. The compiler is also allowed to move expressions (including those which could raise predefined exceptions) over inspection points.
The main design decision in connection with inspection points was whether to provide a single pragma, or to separate it into two: one that identifies the inspectable objects, and the other that identifies the points in program execution where currently live inspectable objects could be inspected. An advantage of the two-pragma approach is separation of concerns, and the ability to specify that, say, a variable declared in a package body is inspectable outside the package without needing to have all live objects inspectable. (Note that, with the single pragma approach the name of a variable declared in a package body is inaccessible outside the package and hence cannot be used as an argument to pragma Inspection_Point. Thus in the single-pragma approach, if the user needs to be able to inspect such a variable, pragma Inspection_Point with no arguments needs to be provided, which makes all objects inspectable and thus may inhibit some optimizations.)
In the end, the choice of a single-pragma approach was based on anticipated usage of the functionality provided, and in particular on the desire to avoid tedious source code changes and recompilations during software development. That is, the exact set of objects that need to be inspected might not be apparent at the outset. With the two-pragma approach, a decision to identify an additional variable will require a source code modification and may induce significant recompilation costs depending on where the variable is declared. With the single-pragma approach, the default is to have all variables inspectable and hence this is not a problem.
In some respects the issue is similar to the decision taken in Ada that when a package is withed, by default all entities declared in the package's visible part are available in the withing unit. An alternative approach would have required the withing unit to specify exactly those entities that are needed. Although seemingly producing a narrower and more specific interface, this would in practice yield long lists of needed entities that no one would read, and programmers would end up using a notation that made all visible entities available. In the case of inspection points, the anticipated typical usage is to have all objects inspectable; in those contexts where the programmer knows that only a limited set is of interest, a specific list can be provided as argument to pragma Inspection_Point.
Pragma Inspection_Point with a specific list of names provides some of the capabilities of an assertion facility without the need of an additional language feature. For example, if one writes
pragma Inspection_Point(Alpha, Beta);

then, when the program is executed under the control of an appropriate monitor/debugger, it may be suspended at the corresponding point in the object code. Information is available at that point to examine Alpha and Beta. Therefore an assertion, say that Alpha < Beta, can be checked at that point. Note that no change to the generated code is required, which would be an issue if the assert capability were to be provided via an alternative mechanism. Variables such as Alpha and Beta must be in scope. Also, if such Inspection_Point pragmas are added at several points in a program, it may be possible to formally verify that the object code between the pragmas performs the operations in the source code.
A key technique that those developing critical software adopt is that of restricting the language constructs used. For instance, if tasking is not used, then the validation process is much simpler, since certain kinds of programming errors specific to tasking (such as deadlock, race conditions, and so on) cannot arise, and, moreover, the run-time system does not need to include any tasking support.
Although the term "subset" often has negative connotations, since in the past uncontrolled subsets for other languages have led to major portability problems, in the context of safety and security applications the use of subsets is essential. The issue is then how to satisfy these requirements without sacrificing the obvious significant advantages that Ada has enjoyed as a single-language standard. Interestingly, the current revision to the COBOL standard is going to a single-language model, in contrast to the language modules approach of the past, partly because of the success that Ada 83 has had with program portability.
The approach adopted is for the user to indicate excluded features as arguments to pragma Restrictions. The default behavior is for a compilation unit (or more generally a partition) to be rejected if it uses a feature identified in the pragma. Hence an Ada 95 compiler may enforce usage subsets in the manner required by [MoD 91], thus avoiding the potential risk of manual checking for adherence to restrictions. Moreover, an implementation is permitted to "bundle" restrictions, since otherwise compiler vendors would need to support 2**N versions of the run-time system, where N is the number of possible arguments to the pragma. Thus the pragma should not be seen as a way for precisely defining a subset, but as a framework which a vendor can exploit. As an example, consider the safety critical system produced by Alsys and described in [Brygier 93]. Here, an Ada 83 subset is defined which has a zero-byte run-time system, called CSMART. This subset is vendor-specific since it is oriented around the structure of the Alsys run-time system. An Ada 95 version of this system could be developed which is based on language-defined parameters to the Restrictions pragma.
The set of restrictions identified in the Safety and Security Annex is a representative set, but a compiler implementation may extend it. For example, in the SPARK system [Carre' 88] the requirement is to use only those language features to which formal proof tools can be applied. Hence features such as generics are excluded, even though the implementation issues are straightforward.
Other analysis tools [Rex 88] impose similar restrictions on the source language. A vendor intending to support Ada 95 in those environments may thus add further restrictions to those defined in the Annex.
Although the default behavior in the presence of pragma Restrictions is to reject a compilation unit (or partition) if a restricted feature is used, the implementation may have a "full language" mode of operation where the use of a restricted feature elicits a warning message rather than a fatal error. This could be useful in some environments, and it helps address the compiler validation concerns that might otherwise surround an implementation of a subset.
Possible scenarios showing uses of the Restrictions pragma are given in the following two examples.
Vendor A produces a compiler and an implementation of the Safety and Security Annex, targeted to safety applications which use a standard similar to [DO-178B]. To simplify the production of the compiler and ensure the documentation aspects of reviewable object code are met, they require the use of the arguments No_Protected_Types, No_Allocators, No_Dispatch, No_Delay, No_Exceptions and Max_Tasks = 0 when the pragma Reviewable is applied to a partition.
The user of this system has chosen this compiler because of the option above, knowing that the object code produced has a very simple structure, and that therefore the source code and object code are easily related. The user understands that since checking code does not appear in the object code, it is essential for this application to ensure that the code is indeed exception-free. To this end, a program analysis tool is being used. The pragma Normalize_Scalars is not used.
To ensure that some language features are not used which would cause problems for the program analysis tool, additional parameters are specified by the user to the Restrictions pragma as follows: No_Floating_Point, No_Fixed_Point, No_Access_Subprograms. In other words, the design requirements of not using these features are enforced by the compiler by means of the pragma.
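Expressed as configuration pragmas, the combined vendor and user restrictions of this scenario might appear as the following sketch (the restriction identifiers are those named above):

   --  Vendor-required restrictions for reviewable object code:
   pragma Restrictions (No_Protected_Types, No_Allocators, No_Dispatch,
                        No_Delay, No_Exceptions);
   pragma Restrictions (Max_Tasks => 0);

   --  User-chosen restrictions supporting the program analysis tool:
   pragma Restrictions (No_Floating_Point, No_Fixed_Point,
                        No_Access_Subprograms);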
In fact, the program analysis tool cannot handle Unchecked_Conversion. However, this restriction cannot be imposed by use of the Restrictions pragma since the application does require its use. In consequence, the use of Unchecked_Conversion is confined to one package which is not analyzed by the tool. This package uses the attribute Valid to ensure that the raw data will not subsequently cause an unbounded error.
Vendor B produces a compiler specifically for the security community who produce systems complying with the Orange Book and ITSEC [DoD 85, ITSEC 91]. Here, the full language is supported by the vendor when pragma Reviewable is enforced, since some applications require almost all of the language.
The user chooses this compiler since the Annex is supported with the full language. Tested packages which have formed part of other systems are being imported into their applications, and therefore the imposition of the restrictions is not usually feasible.
Again, the user has tools to analyze the Ada source code to validate both the security kernel and the main application code for adherence to the security policy. Since the kernel and the application code are all in one partition, it is only possible to use the pragma Restrictions for those features not used in any part of the system. For instance, Unchecked_Conversion is used in the kernel but should not appear in the application code, and hence this rule cannot be enforced by the Restrictions pragma. On the other hand, the declaration of access-to-subprogram types is not required at all, and hence this restriction is checked by the pragma.
Since the full language is supported by the vendor, the documentation provided to adhere to the pragma Reviewable is quite complex. To aid the review of the object code, the vendor provides a special tool, based upon the debugger, to ease the process of independent validation and verification. In particular, the tool can be used to locate the object code arising from a specific statement or declaration. This facility depends on the removal of optimizations that would be applied in the absence of pragma Reviewable. The user decides to employ the pragma Normalize_Scalars to reduce the risk of a covert channel and also to aid the detection of programming errors. (In principle, program components could communicate information via unset scalar values, thus establishing a secret or covert communication channel.)
Hence, in this example, the pragma Restrictions has a modest effect upon the compiler and its method of use.
The Restrictions pragma could clearly be used in other contexts apart from safety and security. For instance, in the context of teaching Ada, it might be convenient to ensure students are using just those features which would be allowed when developing high integrity software. Since the pragma is a configuration pragma, it should be simple to set up an Ada compilation system so that the student does not need to use pragma Restrictions explicitly.
Some high performance applications could also use the pragma to ensure an appropriately tailored run-time system is used.
The majority of the Ada 95 standard consists of specific features for which the conventional validation process works well. Corresponding to each feature, a number of Ada test programs can be written for which the outcome can be stated and checked, usually within the test program itself. In contrast, the essence of the Safety and Security Annex is not language features but high assurance for the application program. Thus it is clear that validation of implementations to this Annex is different from both the other Annexes and also from Ada 83. The problem of devising objective tests needs to be considered carefully in the context of the high assurance required, not just adherence to the standard.
The Safety and Security community have concerns which cannot be addressed by the Ada 95 standard itself. These issues are recorded here for completeness and to demonstrate that the standard was not the correct place to have them resolved.
The general requirement is for a "high assurance" compiler and run- time system. However, it is not possible to characterize this by means of requirements which are consistent with a language standard. The philosophy adopted by the Annex is to require information so that a user can judge whether a specific compiler meets the (subjective) requirements of an application.
Some of the issues which arise here concern the assurance of the compiler and the run-time system themselves, qualities which cannot be captured as testable requirements in a language standard.
In short, the requirements for critical systems go beyond those aspects which are covered by a conventional standard, and hence those which can be satisfied by the ordinary validation process.
The facilities of the Safety and Security Annex relate generally to the requirements in 9.1 (Predictability of Execution), 9.2 (Certifiability), and 9.3 (Enforcement of Safety-Critical Programming Practices).
More specifically, the study topic
   S9.1-A(1) - Determining Implementation Choices

is met by the Normalize_Scalars pragma and by the requirements for documentation of implementation decisions.
The requirement
   R9.1-A(2) - Ensuring Canonical Application of Operations

is met by the pragma Inspection_Point and by the provision of Off as an argument to the pragma Optimize.
The requirement
   R9.2-A(1) - Generating Easily Checked Code

is met by the Reviewable and Inspection_Point pragmas.
The requirement
   R9.3-A(1) - Allow Additional Compile-Time Restrictions

is met by the pragma Restrictions.