Unit 12: Refactoring
12.1.8 A Critical View on Refactoring
The most obvious argument against refactoring would be that there really is not much new to it.
All the techniques applied in refactoring have been around for years. On the other hand, there
clearly is a demand for an easy-to-use handbook for software transformations and maintenance
in general. Refactoring catalogues can serve that purpose well.
A more serious disappointment is that refactoring seems to offer very little support for the
adaptation and maintenance of large legacy systems. Fowler emphasizes that refactoring should
be an integral part of the development process, but gives almost no guidance on how to work
with complex systems that were not constructed to enable modification. Some concrete tips on
where to begin refactoring and how to proceed would have been helpful.
Fowler claims that refactoring makes redesign inexpensive. This seems an exaggerated claim
for anything but the lowest level of design. Refactoring provides ways to switch safely between
implementation mechanics with different characteristics. There is, however, much more to design
than choosing a class structure or an object interaction scheme. Fowler’s refactoring does not
address higher-level matters such as the choice of implementation environment, distribution
strategies, or user interface characteristics.
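To make the point about implementation mechanics concrete, consider the small Java sketch
below. The PriceCatalog classes and the Product record are hypothetical examples, not taken
from any published catalogue. Both classes satisfy the same contract; the refactoring merely
trades a linear scan for a hash index, changing the performance characteristics while preserving
observable behaviour (assuming each SKU appears only once).

import java.util.HashMap;
import java.util.List;
import java.util.Map;

record Product(String sku, double price) {}

// Before: a linear scan over a list; simple, but O(n) per lookup.
class PriceCatalog {
    private final List<Product> products;

    PriceCatalog(List<Product> products) { this.products = products; }

    double priceOf(String sku) {
        for (Product p : products) {
            if (p.sku().equals(sku)) return p.price();
        }
        throw new IllegalArgumentException("unknown SKU: " + sku);
    }
}

// After: the same interface backed by a hash map; O(1) per lookup.
// Only the implementation mechanics, and hence the performance
// characteristics, have changed.
class IndexedPriceCatalog {
    private final Map<String, Double> index = new HashMap<>();

    IndexedPriceCatalog(List<Product> products) {
        for (Product p : products) index.put(p.sku(), p.price());
    }

    double priceOf(String sku) {
        Double price = index.get(sku);
        if (price == null) throw new IllegalArgumentException("unknown SKU: " + sku);
        return price;
    }
}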
Search for more information about refactoring.
12.2 Verification
Our process for applying verification refactoring in practice is shown in Figure 12.3. A semantics-
preserving transformation from the library is chosen by the user (or suggested automatically),
and the transformer then mechanically checks whether the selected transformation is applicable
and, if so, applies it. When all of the selected transformations have been
applied, a metrics analyzer collects and analyzes the code properties of the transformed code,
and presents the complexity metrics to the user. If the metric results are not acceptable, or if
they are acceptable but later verification proofs cannot be established, the process goes back to
refactoring and more transformations are performed. The role of the source-code metrics is to give
the user insight into the likely success of the two Echo proofs. We hypothesize that the metrics
we use are an indication of relative complexity and therefore of likely verification difficulty, and
we present some support for this hypothesis in the case study. Verification refactoring cannot
be fully automatic in the general case, because recognizing effective transformations requires
human insight except in special cases.
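As a rough illustration of this loop, consider the Java sketch below. The types Program,
Transformation, and MetricsAnalyzer are hypothetical stand-ins for whatever representation the
tooling actually uses; this is a sketch of the process, not the Echo implementation.

import java.util.List;

// Hypothetical types for illustration only; the real Echo tool chain differs.
record Program(String source) {}

interface Transformation {
    boolean isApplicableTo(Program p);   // mechanical applicability check
    Program applyTo(Program p);          // mechanical, semantics-preserving rewrite
}

interface MetricsAnalyzer {
    void report(Program p);              // e.g., size and complexity metrics
}

final class VerificationRefactoring {
    // Apply the user-selected transformations, then surface complexity
    // metrics so the user can judge whether the verification proofs are
    // likely to succeed.
    static Program refactor(Program program,
                            List<Transformation> selected,
                            MetricsAnalyzer analyzer) {
        for (Transformation t : selected) {
            if (t.isApplicableTo(program)) {   // checked mechanically
                program = t.applyTo(program);  // applied mechanically
            }
        }
        analyzer.report(program);  // metrics presented to the user
        return program;
    }
}

A caller would invoke refactor repeatedly, selecting further transformations whenever the
reported metrics, or a failed proof attempt, indicate that more simplification is needed.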
Furthermore, some software, especially domain-specific applications, might require
transformations that do not exist in the library. In such circumstances, the user can specify and
prove a new semantics-preserving transformation using the proof template we provide and
add it to the library.
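Under the same hypothetical interface as the sketch above, a user-defined transformation might
take the following shape; the class and its checks are placeholders for illustration only.

// A hypothetical domain-specific transformation added to the library.
// Its semantics-preservation proof would be written against the provided
// proof template, or deferred while experimenting (as described below).
final class UnrollSmallLoops implements Transformation {
    @Override
    public boolean isApplicableTo(Program p) {
        // Placeholder check; a real implementation would inspect the
        // parse tree for loops with small constant bounds.
        return p.source().contains("for");
    }

    @Override
    public Program applyTo(Program p) {
        // Placeholder rewrite; a real implementation would replace each
        // such loop with its unrolled body, preserving semantics.
        return p;
    }
}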
To facilitate exploration with transformations, if the user has confidence in a new transformation,
the semantics-preserving proof can be postponed until the transformation has been shown to be
useful, or even until the remainder of the verification is complete. In most cases, the order in
which transformations are applied does not matter. When two transformations are interdependent,
however, they must be applied in the correct order. A general heuristic is that transformations
that change the program structure and those that can vastly reduce the code size should be
applied earlier.