Ada 95 Quality and Style Guide Chapter 10
Chapter 10 Improving Performance
In many ways, performance is at odds with maintainability and
portability. To achieve improved speed or memory usage, the clearest
algorithm sometimes gives way to confusing code. To exploit
special-purpose hardware or operating system services, nonportable
implementation dependencies are introduced. When concerned about
performance, you must decide how well each algorithm meets its
performance and maintainability goals. Use the guidelines in this
chapter with care; they may be hazardous to your software.
The best way to build a system that satisfies its performance
requirements is through good design. You should not assume that
speeding up your code will result in a visible increase in overall
system performance. In most applications, the throughput of the
system is determined not by the execution speed of the code but by
the interaction between concurrent processes and the response
time of the system peripherals.
Most of the guidelines in this chapter read ". . . when measured
performance indicates." "Indicates" means that
you have determined that the benefit in increased performance
to your application in your environment outweighs the negative
side effects on understandability, maintainability, and portability
of the resulting code. Many of the guideline examples show the
alternatives that you will need to measure in order to determine
if the guideline is indicated.
10.1 PERFORMANCE ISSUES
Performance has at least four aspects: execution speed, code size,
compilation speed, and linking speed. Although all four are important,
most people think of execution speed when performance is mentioned,
and most of the guidelines in this chapter focus on execution
speed.
Performance is influenced by many factors, including the compilation
software, hardware, system load, and coding style. While only
coding style is typically under the control of the programmer,
the other factors have so much influence that it is impossible
to make flat statements such as "case statements
are more efficient than
if-then-else structures." When performance is critical,
you cannot assume that a coding style that proves more efficient
on one system will also be more efficient on another. Decisions
made for the sake of performance must be made on the basis of
testing the alternatives on the actual system on which the application
will be fielded.
10.2 PERFORMANCE MEASUREMENT
While most well-known tools for measuring performance are stand-alone
programs that concentrate on execution speed, there is a comprehensive
tool that covers all four aspects of performance. The Ada Compiler
Evaluation System (ACES) is the result of merging two earlier
products: the United States Department of Defense's Ada Compiler
Evaluation Capability and the United Kingdom Ministry of Defence's
Ada Evaluation System. It offers a comprehensive set of nearly
2,000 performance tests along with automated setup, test management,
and analysis software. This system reports (and statistically
analyzes) compilation time, linking time, execution time, and
code size. The analysis tools make comparisons among multiple
compilation-execution systems and also provide comparisons of
the run-time performance of tests using different coding styles
to achieve similar purposes.
Version 2.0 of the ACES, released in March of 1995, includes a
Quick-Look facility that is meant to replace the Performance Issues
Working Group (PIWG) suite. The Quick-Look facility is advertised
as being easy to download, install, and execute in less than a
day, while providing information that is as useful as that generated
by the PIWG suite. In addition, Version 2.0 contains a limited
number of Ada 95 tests (all of which are also included in the
Quick-Look subset). Version 2.1, including broad coverage of the
"core" Ada 95 language, is scheduled for release in
March 1996.
At the time of this writing, the ACES software and documentation
can be obtained via anonymous FTP from the host sw-eng.falls-church.va.us,
directory /public/AdaIC/testing/aces. For World Wide
Web access, use the following uniform resource locator (URL):
http://sw-eng.falls-church.va.us/AdaIC/testing/aces/.
While measuring performance may seem to be a relatively straightforward
matter, there are significant issues that must be addressed by
any person or toolset planning to do such measurement. For detailed
information, see the following sources: ACES (1995a, 1995b, 1995c);
Clapp, Mudge, and Roy (1990); Goforth, Collard, and Marquardt
(1990); Knight (1990); Newport (1995); and Weidermann (1990).
10.3 PROGRAM STRUCTURE
10.3.1 Blocks
guideline
- Use blocks (see Guideline 5.6.9) to introduce late initialization
when measured performance indicates.
example
rationale
notes
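The example body for this guideline is not reproduced above. As a hedged
sketch (the calibration routine and all names here are invented for
illustration), a block can defer a declaration until its initial value can
be computed, so the object becomes a constant initialized exactly once
instead of a variable that is default-initialized and assigned later:

```ada
procedure Demonstrate_Block (Raw_Value : in Integer) is

   function Calibrate (Raw : Integer) return Integer is
   begin
      return Raw * 2 + 1;  --  stand-in computation
   end Calibrate;

   --  Without a block, the object is declared early and assigned later:
   --     Early : Integer;
   --     ...
   --     Early := Calibrate (Raw_Value);

begin
   --  With a block, the declaration is deferred until the value is
   --  available, so it can be a constant initialized exactly once.
   declare
      Calibrated : constant Integer := Calibrate (Raw_Value);
   begin
      null;  --  use Calibrated here
   end;
end Demonstrate_Block;
```

Whether the block form is actually faster depends on the implementation;
as the guideline says, measure both forms on the target system.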
10.4 DATA STRUCTURES
10.4.1 Dynamic Arrays
guideline
- Use constrained arrays when measured performance indicates.
rationale
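The rationale body is not reproduced above. As an illustrative sketch
(type names invented), the alternatives to measure are an unconstrained
array type, whose objects are sized at creation and may carry bounds
information at run time, and a constrained type whose size is fixed at
compile time:

```ada
package Buffers is

   --  Unconstrained: each object is sized when it is created, and the
   --  implementation may maintain bounds information at run time.
   type Dynamic_Buffer is array (Positive range <>) of Integer;

   --  Constrained alternative: the size is fixed at compile time,
   --  which some compilers translate into simpler, faster code.
   Max_Size : constant := 80;
   subtype Buffer_Index is Positive range 1 .. Max_Size;
   type Fixed_Buffer is array (Buffer_Index) of Integer;

end Buffers;
```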
10.4.2 Zero-Based Arrays
guideline
- Use zero-based indexing for arrays when measured performance
indicates.
rationale
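The rationale body is not reproduced above. A hedged sketch of the idea
(type names invented): the address of an element A (I) is computed roughly
as Base + (I - A'First) * Component_Size, and when A'First is 0 some
implementations can omit the subtraction from the generated code:

```ada
package Index_Bases is

   --  The address of A (I) is roughly
   --     Base + (I - A'First) * Component_Size.
   --  When A'First is 0, some implementations can drop the
   --  subtraction from the index computation.
   type One_Based  is array (1 .. 100) of Float;
   type Zero_Based is array (0 .. 99)  of Float;

end Index_Bases;
```

Only measurement on the target compiler can show whether the zero-based
form generates better code.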
10.4.3 Unconstrained Records
guideline
- Use fixed-size components for records when measured performance
indicates.
example
rationale
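The example body is not reproduced above. As a hedged sketch (names
invented), the alternatives to measure are a discriminated record whose
size depends on its discriminant and a record whose components all have a
fixed maximum size:

```ada
package Text_Records is

   Max_Length : constant := 80;
   subtype Text_Length is Natural range 0 .. Max_Length;

   --  Discriminated form: an object's size can depend on its
   --  discriminant, which may cost descriptor manipulation or
   --  implicit dynamic allocation in some implementations.
   type Variable_Text (Length : Text_Length := 0) is
      record
         Value : String (1 .. Length);
      end record;

   --  Fixed-size alternative: every object has the same size.
   type Fixed_Text is
      record
         Length : Text_Length := 0;
         Value  : String (1 .. Max_Length);
      end record;

end Text_Records;
```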
10.4.4 Records and Arrays
guideline
- Define arrays of records as parallel arrays when measured performance
indicates.
example
rationale
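The example body is not reproduced above. As an illustrative sketch (names
invented), the alternatives to measure are a single array of records and a
set of parallel arrays, one per component:

```ada
package Point_Storage is

   Max_Points : constant := 1_000;
   subtype Point_Index is Positive range 1 .. Max_Points;

   --  Array-of-records form:
   type Point is
      record
         X : Float;
         Y : Float;
      end record;
   type Point_List is array (Point_Index) of Point;

   --  Parallel-array alternative: one array per component, which can
   --  improve locality when a loop touches only one component.
   type Coordinate_List is array (Point_Index) of Float;
   type Point_Table is
      record
         X_Values : Coordinate_List;
         Y_Values : Coordinate_List;
      end record;

end Point_Storage;
```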
10.4.5 Record and Array Aggregates
guideline
- Use a sequence of assignments for an aggregation when measured
performance indicates.
rationale
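As a hedged sketch of the alternatives to measure (names invented): some
implementations build an aggregate in a temporary object and then copy it,
so a sequence of component assignments can be cheaper:

```ada
procedure Assign_Triple is
   type Triple is array (1 .. 3) of Integer;
   T : Triple;
begin
   --  Aggregate form: some implementations build the aggregate in a
   --  temporary and then copy it into T.
   T := (10, 20, 30);

   --  Sequence-of-assignments alternative:
   T (1) := 10;
   T (2) := 20;
   T (3) := 30;
end Assign_Triple;
```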
10.5 ALGORITHMS
10.5.1 Mod and rem Operators
guideline
- Use incremental schemes instead of mod and rem
when measured performance indicates.
example
rationale
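The example body is not reproduced above. As a hedged sketch (names
invented), the classic case is a circular buffer index: the mod form
typically costs a division on each step, while the incremental form costs
only a comparison and an increment:

```ada
procedure Advance_Index is
   Buffer_Size : constant := 100;
   subtype Buffer_Index is Natural range 0 .. Buffer_Size - 1;
   Index : Buffer_Index := 0;
begin
   --  mod form: typically costs a division each time.
   Index := (Index + 1) mod Buffer_Size;

   --  Incremental alternative: a comparison and an increment.
   if Index = Buffer_Index'Last then
      Index := Buffer_Index'First;
   else
      Index := Index + 1;
   end if;
end Advance_Index;
```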
10.5.2 Short-Circuit Operators
guideline
- Use the short-circuit control form when measured performance
indicates.
example
rationale
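The example body is not reproduced above. As a hedged sketch (names
invented): "and" evaluates both operands, while "and then" skips the
second operand when the first is False, saving its evaluation cost (here
the short-circuit form also prevents a division by zero):

```ada
function Exceeds
  (Total, Divisor, Threshold : Integer) return Boolean is
begin
   --  "and" would evaluate both operands, so the division would be
   --  performed (and could raise Constraint_Error) even when
   --  Divisor is 0.  "and then" skips the second operand when the
   --  first is False.
   return Divisor /= 0 and then Total / Divisor > Threshold;
end Exceeds;
```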
10.5.3 Case Statement Versus elsif
guideline
- Use the case statement when measured performance indicates.
example
rationale
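The example body is not reproduced above. As a hedged sketch (the handler
procedures are placeholders), the alternatives to measure are a sequential
elsif chain and a case statement, which many compilers translate into a
jump table:

```ada
procedure Dispatch (Command : in Character) is
   procedure Handle_Add    is begin null; end Handle_Add;
   procedure Handle_Delete is begin null; end Handle_Delete;
   procedure Handle_Quit   is begin null; end Handle_Quit;
begin
   --  elsif chain: the tests are performed sequentially.
   if Command = 'a' then
      Handle_Add;
   elsif Command = 'd' then
      Handle_Delete;
   elsif Command = 'q' then
      Handle_Quit;
   end if;

   --  case alternative: many compilers generate a jump table, so the
   --  cost does not grow with the number of alternatives.
   case Command is
      when 'a'    => Handle_Add;
      when 'd'    => Handle_Delete;
      when 'q'    => Handle_Quit;
      when others => null;
   end case;
end Dispatch;
```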
10.5.4 Checking for Constraint Errors
guideline
- Use hard-coded constraint checking when measured performance
indicates.
example
rationale
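The example body is not reproduced above. As a hedged sketch (names
invented), the alternatives to measure are relying on the implicit range
check and handling the resulting exception, versus testing the index
explicitly before use:

```ada
procedure Lookup is
   type Table_Type is array (1 .. 10) of Integer;
   Table   : Table_Type := (others => 0);
   Default : constant Integer := -1;
   Index   : Integer := 42;
   Result  : Integer;
begin
   --  Relying on the implicit range check and handling the exception:
   begin
      Result := Table (Index);
   exception
      when Constraint_Error =>
         Result := Default;
   end;

   --  Hard-coded alternative: test the index explicitly, avoiding
   --  the cost of raising and handling an exception on failure.
   if Index in Table'Range then
      Result := Table (Index);
   else
      Result := Default;
   end if;
end Lookup;
```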
10.5.5 Order of Array Processing
guideline
- Use column-first processing of two-dimensional arrays when measured
performance indicates.
example
rationale
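The example body is not reproduced above. As a hedged sketch (names
invented), the alternatives to measure are the two loop nestings below;
the order that matches the implementation's storage layout makes better
use of paging and cache, which is why only measurement can decide:

```ada
procedure Sum_Matrix is
   Size : constant := 100;
   type Matrix is array (1 .. Size, 1 .. Size) of Float;
   M   : Matrix := (others => (others => 0.0));
   Sum : Float  := 0.0;
begin
   --  Row-first traversal: the rightmost index varies fastest.
   for Row in M'Range (1) loop
      for Col in M'Range (2) loop
         Sum := Sum + M (Row, Col);
      end loop;
   end loop;

   --  Column-first traversal: the leftmost index varies fastest.
   for Col in M'Range (2) loop
      for Row in M'Range (1) loop
         Sum := Sum + M (Row, Col);
      end loop;
   end loop;
end Sum_Matrix;
```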
10.5.6 Assigning Alternatives
guideline
- Use overwriting for conditional assignment when measured performance
indicates.
example
rationale
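The example body is not reproduced above. As a hedged sketch (names
invented), the alternatives to measure are a two-way branch, with an
assignment on each path, and an unconditional assignment that is then
overwritten in the conditional case:

```ada
procedure Set_Status (Valid : in Boolean) is
   type Status_Type is (Ok, Error);
   Status : Status_Type;
begin
   --  Branching form: a jump and an assignment on each path.
   if Valid then
      Status := Ok;
   else
      Status := Error;
   end if;

   --  Overwriting alternative: assign one value unconditionally,
   --  then overwrite it in the conditional case.
   Status := Error;
   if Valid then
      Status := Ok;
   end if;
end Set_Status;
```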
10.5.7 Packed Boolean Array Shifts
guideline
- When measured performance indicates, perform packed Boolean
array shift operations by using slice assignments rather than
repeated bit-wise assignment.
example
rationale
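The example body is not reproduced above. As a hedged sketch (names and
the shift amount invented), the alternatives to measure are a loop of
single-bit assignments and a pair of slice assignments, which lets the
implementation move the packed bits in larger units:

```ada
procedure Shift_Left_Demo is
   type Bit_Vector is array (0 .. 31) of Boolean;
   pragma Pack (Bit_Vector);
   V      : Bit_Vector := (others => True);
   Amount : constant Natural := 4;
begin
   --  Bit-wise form: one packed-element access per iteration.
   for I in 0 .. 31 - Amount loop
      V (I) := V (I + Amount);
   end loop;
   for I in 32 - Amount .. 31 loop
      V (I) := False;
   end loop;

   --  Slice-assignment alternative: the implementation can move the
   --  packed bits in larger units.
   V (0 .. 31 - Amount)  := V (Amount .. 31);
   V (32 - Amount .. 31) := (others => False);
end Shift_Left_Demo;
```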
10.5.8 Subprogram Dispatching
guideline