Chapter 26: From Design to Implementation
The goal of the implementation phase is to implement a system correctly, efficiently, and quickly on a particular set or range of computers, using particular tools and programming languages. Like the earlier phases, it may be characterized in terms of its inputs, activities, and outputs.
Designers see objects as software abstractions. Implementors see them as software realities. However, as with the transition from analysis to design, the structural continuity of concepts and constructs means that design and even analysis notions should flow smoothly and traceably into implementation.
The chief inputs from design to implementation may be categorized in the same way as those of previous phases. Again, while the headings are the same, the details differ.
Implementation activities are primarily environmental. They deal with the realities of the particular machines, systems, languages, compilers, tools, developers, and clients necessary to translate a design into working code.
Just as the design phase may include some ``analysis'' efforts approached from a computational standpoint, the implementation phase essentially always includes ``design'' efforts. Implementation-level design is a reconciliation activity, in which in-principle executable models, implementation languages and tools, performance requirements, and delivery schedules must finally be combined, while maintaining correctness, reliability, extensibility, maintainability, and related criteria.
While OO methods allow and even encourage design iteration, such activities must be tempered during the implementation phase. By analogy with our remarks in Chapter 25, if everything can change, then nothing can be implemented reliably. Implementation-phase changes should ideally be restricted to occasional additions rather than destructive modifications.
Implementation activities may be divided along several dimensions, including the construction of intracluster software, intercluster software, infrastructure, tools, and documentation, as well as testing, performance monitoring, configuration management, and release management. Most of these were touched on briefly in Chapter 15.
Many excellent texts, articles, manuals, etc., are available on OO programming in various languages, on using various tools and systems, and on managing the implementation process. In keeping with the goals and limitations of this book, we restrict further discussion of the implementation phase to a few comments about testing and assessment that follow from considerations raised in Parts I and II.
A design must be testable. An implementation must be tested. Correctness tests exercise the system at levels ranging from individual classes and clusters up to end-to-end system behavior.
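As a minimal sketch of a class-level correctness test, written here in C++ with a hypothetical BoundedBuffer class invented purely for illustration, a test may drive an object into a boundary situation and check that its invariant still holds:

\begin{verbatim}
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical bounded buffer, used only to illustrate a class-level test.
class BoundedBuffer {
public:
    explicit BoundedBuffer(std::size_t capacity) : capacity_(capacity) {}
    bool put(int x) {
        if (items_.size() >= capacity_) return false;   // refuse when full
        items_.push_back(x);
        return true;
    }
    // Invariant the test exercises: size never exceeds capacity.
    bool invariant() const { return items_.size() <= capacity_; }
private:
    std::size_t capacity_;
    std::vector<int> items_;
};

// Class-level correctness test: drive the object past its capacity and
// confirm that the invariant still holds and that overflow is refused.
int main() {
    BoundedBuffer b(4);
    for (int i = 0; i < 10; ++i) b.put(i);
    assert(b.invariant());
    assert(!b.put(99));
    return 0;
}
\end{verbatim}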
When tests fail, the reasons must be diagnosed. People are notoriously poor at identifying the problems actually causing failures. Effective system-level debugging requires instrumentation and tools that may need to be hand-crafted for the application at hand. Classes and tasks may be armed with tracers, graphical event animators, and other tools to help localize errors.
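For example, a class might be armed with a simple hand-crafted tracer that brackets each operation with enter/exit records; the Tracer and Account classes below are illustrative sketches rather than any particular tool:

\begin{verbatim}
#include <iostream>
#include <string>

// Hand-crafted tracer: logs entry and exit of an operation so that
// failures can be localized from the trace.
class Tracer {
public:
    explicit Tracer(const std::string& op) : op_(op) {
        std::cerr << ">> enter " << op_ << '\n';
    }
    ~Tracer() { std::cerr << "<< exit  " << op_ << '\n'; }
private:
    std::string op_;
};

// Hypothetical application class armed with the tracer.
class Account {
public:
    void deposit(long amount) {
        Tracer t("Account::deposit");   // trace records bracket the call
        balance_ += amount;
    }
private:
    long balance_ = 0;
};

int main() {
    Account a;
    a.deposit(100);   // emits enter/exit trace records on stderr
    return 0;
}
\end{verbatim}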
Analysis-level performance requirements may lead to design-phase activities to insert time-outs and related alertness measures in cases where performance may be a problem. However, designers often cannot be certain in advance whether these measures help or hurt.
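As one sketch of such an alertness measure, assuming a hypothetical queryService call and a 200-millisecond bound chosen only for illustration, a caller may wait asynchronously and give up after the stated deadline:

\begin{verbatim}
#include <chrono>
#include <future>
#include <iostream>
#include <thread>

// Hypothetical service call whose latency is uncertain.
int queryService() {
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    return 42;
}

// Alertness measure: run the call asynchronously and stop waiting after
// a stated deadline rather than blocking indefinitely.
int main() {
    auto fut = std::async(std::launch::async, queryService);
    if (fut.wait_for(std::chrono::milliseconds(200)) ==
        std::future_status::ready) {
        std::cout << "result = " << fut.get() << '\n';
    } else {
        std::cerr << "time-out: no response within 200 ms\n";
        // recovery or degraded-mode action would go here
        fut.wait();   // reclaim the worker before exiting
    }
    return 0;
}
\end{verbatim}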
Thus, while designers provide plans for building software that ought to pass the kinds of performance requirements described in Chapter 11, their effects can usually only be evaluated using live implementations. Poorer alternatives include analytic models, simulations, and stripped-down prototypes. These can sometimes check for gross, ball-park conformance, but are rarely accurate enough to assess detailed performance requirements.
Performance tests may be constructed using analogs of any of the correctness tests listed in the previous section. In practice, many of these are the very same tests. However, rather than assessing correctness, these check whether steps were performed within acceptable timing constraints.
The most critical tests are those in which the workings of the system itself are based on timing assumptions about its own operations. In these cases performance tests and correctness tests completely overlap. For example, any processing based on the timed transition declarations described in Chapters 11 and 19 will fail unless the associated code performs within stated requirements.
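The following sketch illustrates a performance test that is simultaneously a correctness test; the processSensorReading step and its 10-millisecond deadline are assumptions made only for illustration:

\begin{verbatim}
#include <chrono>
#include <iostream>

// Hypothetical step governed by a timed transition: the design requires
// that it complete within 10 milliseconds.
void processSensorReading() {
    volatile long sum = 0;
    for (int i = 0; i < 100000; ++i) sum += i;   // stand-in workload
}

// Performance test that doubles as a correctness test: the step must not
// only finish, it must finish inside its stated deadline.
int main() {
    using clock = std::chrono::steady_clock;
    const auto deadline = std::chrono::milliseconds(10);

    auto start = clock::now();
    processSensorReading();
    auto elapsed = clock::now() - start;

    if (elapsed > deadline) {
        std::cerr << "FAIL: exceeded 10 ms deadline\n";
        return 1;
    }
    std::cout << "PASS: within deadline\n";
    return 0;
}
\end{verbatim}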
As with correctness tests, the reasons for performance test failures must be diagnosed. Again, people are notoriously poor at identifying the components actually causing performance problems. Serious tuning requires the use of performance monitors, event replayers, experimentation during live execution, and other feedback-driven techniques to trace message traffic and determine where the bulk of processing time is spent and why.
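A hand-rolled monitor of this kind may simply accumulate wall-clock time per labelled operation and report the totals; the PerfMonitor and Timed classes below are illustrative sketches, not a particular monitoring tool:

\begin{verbatim}
#include <chrono>
#include <iostream>
#include <map>
#include <string>
#include <utility>

// Minimal performance monitor: accumulates wall-clock time per labelled
// operation so that the dominant costs can be identified.
class PerfMonitor {
public:
    void add(const std::string& label, std::chrono::nanoseconds d) {
        totals_[label] += d;
    }
    void report() const {
        for (const auto& e : totals_)
            std::cout << e.first << ": "
                      << std::chrono::duration_cast<
                             std::chrono::microseconds>(e.second).count()
                      << " us\n";
    }
private:
    std::map<std::string, std::chrono::nanoseconds> totals_;
};

// Scope guard that times a region and records it in the monitor.
class Timed {
public:
    Timed(PerfMonitor& m, std::string label)
        : m_(m), label_(std::move(label)),
          start_(std::chrono::steady_clock::now()) {}
    ~Timed() {
        m_.add(label_, std::chrono::duration_cast<std::chrono::nanoseconds>(
                           std::chrono::steady_clock::now() - start_));
    }
private:
    PerfMonitor& m_;
    std::string label_;
    std::chrono::steady_clock::time_point start_;
};

int main() {
    PerfMonitor monitor;
    for (int i = 0; i < 1000; ++i) {
        Timed t(monitor, "parse");   // hypothetical hot spot
        volatile long x = 0;
        for (int j = 0; j < 1000; ++j) x += j;
    }
    monitor.report();                // shows where the time actually went
    return 0;
}
\end{verbatim}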
Performance tuning strategies described in Chapter 25 may be undertaken to repair problems. Alternatively, or in addition, slower objects may be recoded more carefully, coded in lower-level languages, moved to faster processors, and/or moved to clusters with faster interprocess interconnections.
If all other routes fail, then the implementors have discovered an infeasible requirement. After much frustration, many conferences, and too much delay, the requirements must be changed.
Ideally, object-oriented implementation methods and practices seamlessly mesh with those of design. Implementation activities transform relatively environment-independent design plans into executable systems by wrestling with environment-dependent issues surrounding machines, systems, services, tools, and languages.
As mentioned, many good accounts of implementation processes and activities are available. For example, Berlack [2] describes configuration management. McCall et al. [4] provide a step-by-step approach to tracking reliability. OO-specific testing strategies are described more fully by Berard [1]. System performance analysis is discussed in depth by Jain [3]. Shatz [5] describes monitoring techniques for distributed systems.