
Abstract
This paper examines the long-standing need within the software engineering discipline for technical design that is authoritative. A design process is authoritative if there exist technical means of materializing the design document as a working product, thus guaranteeing that the end result is indeed as described by the design. We notice the scarcity and inadequacy of existing solutions for software design, we look at solutions in other engineering disciplines, and we conclude with realizations on what it would take to come up with a solution that works for software.
(Useful pre-reading: About these papers)
Prior art
Through the decades, plenty of tools and methodologies have been developed with the aim of aiding the software design process. A common pattern among them is that they try to make some aspect of development more visual rather than textual. They fall into one of the following categories:
- Visual implementation tools (For example: Visual Programming Languages (W) like Snap!, Scratch, EduBlocks, Blockly, etc.) - They are indeed visual, and they do indeed produce runnable software, but their structure and level of detail are identical to those of program code in textual form, so they express implementations rather than designs.
- Visualization tools (For example: class diagrams, dependency diagrams, call trees, etc.) - They are restricted to the visualization, exploration, and documentation, but not the editing of existing software, nor the design of new software. As such, they are reverse engineering tools, not design tools.
- Niche tools (For example: Web Services Description Language (WSDL) (W), Business Process Execution Language (BPEL) (W), etc.) - They are exclusively focused on specific domains such as web services, business processes, etc., and cannot be used for software design at large.
- "Look ma, no code" tools (For example: Rapid Application Development (RAD) tools (W), No-Code Development Platforms (NCDPs) (W), and Low-Code Development Platforms (LCDPs) (W)) - They impose limitations on what can be done; they impose the use of a massive vendor-specific platform; they do not scale; they are aimed at non-programmers, allowing easy creation of simple user-interface-centric applications to quickly (and usually haphazardly) meet specific narrow business needs.
- Modelling tools (For example: Microsoft Visio (W), Modelling Languages (W) such as the Unified Modeling Language (UML) (W), The C4 model (W), etc.) - They are restricted to modelling, so they produce designs that bear no necessary relationship to reality. They aim to constrain what is supposed to be included in a design, but these constraints exist only in theory, because they are not enforced by any technical means.
For a more detailed look at prior art, see michael.gr - The state of affairs in computer-aided software design.
Of all the technologies listed above, only modelling tools can legitimately be said to be of any potential usefulness in the software design process at large.
The unsuitability of modelling
Modelling tools allow designs that bear no relationship to reality: they are not informed via any technical means about the actual components available for incorporation in a design, nor about valid ways of interconnecting them. Consequently, modelling tools are nothing more than fancy whiteboards: they cannot guarantee, via any technical means, the feasibility of a design. (This is so by definition; otherwise, it would not be modelling, it would be engineering.)
Essentially, modelling tools are non-authoritative: no matter how sophisticated the model is, the authoritative source of truth for the structure of the system remains the source code, not the model.
The source code should ideally constitute a faithful implementation of the model, but there are no technological safeguards to guarantee that it does, and as a matter of fact it usually cannot, because the model is almost never feasible as designed to begin with.
For these reasons, modelling is of severely limited value, and programmers largely regard it as loathsome double book-keeping.
For a list of ways in which modelling as a means of design fails the software engineering discipline, please see michael.gr - The perils of whiteboards.
Other engineering disciplines
In long-established engineering disciplines such as mechanical, electrical, civil, etc., for several decades now, design work has been facilitated by Computer-Aided Design (CAD) tools (W) and Computer-Aided Engineering (CAE) tools (W).
Mechanical engineers use CAD tools to create documents describing complicated three-dimensional structures with detailed information about materials, dimensions, and tolerances. The tools perform various forms of analysis to verify the validity and feasibility of the design. Based on the results, the engineers can edit the design to optimize it, and repeat the analysis as necessary. Eventually, the design document is sent to a shop where CNC machining (W) or 3D-printing (W) is used to create the parts with minimal human intervention.
In electronic engineering, which is the discipline from which most parallels can be drawn to software engineering, virtually all design work since the 1980s is being done using Electronic Design Automation (EDA) / Electronic Computer-Aided Design (ECAD) tools (W). These tools have revolutionized electronic design by using a standardized notation to not only describe, analyze, and optimize products, but also to manufacture them.
- Electronic schematic diagrams use a standard notation which is understood by all electronic engineers. A new hire begins their first day at work by studying the schematics, and before the end of the day they are often able to pick up the soldering iron and start doing productive work. Contrast this with software engineering, where a new hire usually cannot be productive before spending weeks studying source code and documentation, and having numerous knowledge transfer meetings with senior engineers who know the system.
- Most importantly, ECAD tools bridge the gap from the physical world to the design, and from the design back to the physical world. The tools have libraries of electronic components available for inclusion in a design, and electronic manufacturing has long ago advanced to the point where an electronic design document can be turned into a functioning circuit board with nearly zero human intervention. Thus, electronic design documents today are authoritative: the end products are accurately described by their designs.
The problem with software
Unfortunately, thus far, the software engineering discipline has been following a very different path from other engineering disciplines: technical software design documents are scarce, and authoritative technical software design documents are completely non-existent. This situation has been allowed to persist for so long partly because in software we already have another kind of document which is authoritative: the source code.
However, source code is an implementation, or at best a detailed technical description, but not a technical design. To say that the technical design of a software system is a listing of the lines of source code that make up that software system is equivalent to saying that the technical design of the Great Wall of China is a list of all the bricks that make up the Great Wall of China.
[Figure: (A tiny part of) the Great Wall of China, by Hao Wei, CC BY 2.0.]
A technical design is supposed to list operative components, and to show how they are interconnected, but not to delve past the level of detail of the component. Unfortunately, we do not have that for software, at least not in an authoritative form.
It is a great paradox of our times that the software engineering discipline is bereft of authoritative design tools, when such tools are the bread and butter of the long-established engineering disciplines.
In lieu of authoritative tools, software design today is practiced using conventional, non-authoritative means, such as box-and-arrow drawing applications, which, as explained earlier, are only capable of modelling, and therefore amount to nothing more than fancy whiteboards.
The end result of all this is the following:
- Software systems do not match their designs.
Even if the technical design happens to describe a software system that could actually be built as described, there are no technological safeguards to guarantee that it will: the software engineers and the operations engineers are free to build and deploy a system that deviates from the design, and neither the architects, nor the management, have any way of knowing.
- Software systems diverge from their designs over time.
Even if the deployed software system initially matches its design, the system is bound to evolve. The design should ideally evolve in tandem, but it rarely does, again because there are no technological safeguards to enforce this: the engineers are free to modify and redeploy the system without updating the design document, and in fact they usually do, because it saves them from double book-keeping. Thus, over time, the design bears less and less relationship to reality.
If, due to the above reasons, you suspect that your technical design document is counterfactual, and you would like to know exactly what it is that you have actually deployed and running out there, you have to begin by putting questions to the software engineers and the operations engineers.
In order to answer your questions, the engineers will in turn have to examine source code, version control histories, build scripts, configuration files, server provisioning scripts, and launch scripts, because the truth is scattered in all those places. In some cases they might even have to try and remember specific commands that were once typed on a terminal to bring the system to life.
If this sounds a bit like it is held together by shoestrings, it is because it is in fact held together by shoestrings.
Thus, the information that you will receive will hardly be usable, and even if you manage to collect it all, make sense out of it, and update the design document with it, by the time you are done, the deployed system may have already changed, which means that your design document is already obsolete.
As a result, it is generally impossible at any given moment to know the actual technical design of any non-trivial software system in existence.
This is a very sorry state of affairs for the entire software industry to be in.
Towards a solution
If we consider all the previously listed problems that plague software design as conventionally practiced, and if we look at how the corresponding problems have been solved in long-established engineering disciplines, we inescapably arrive at the following realization:
The technical design of a system can only be said to accurately describe that system if there exist technical means of having the system automatically created from the design.
In order to automatically create a system from its design, the design must be semantically valid. This brings us to a second realization:
The semantic validity of a technical design can only be guaranteed if there exist technical means of informing the design with components available for incorporation and restricting the design to only valid ways of interconnecting them.
The above statements define a design process as authoritative.
An authoritative software design document is an essential engineering instrument instead of an abstract work of art:
- The design document contains all the information necessary for provisioning target environments with software components, instantiating the components, and wiring them together. This information not only need not, but in fact must not, be encoded anywhere else in the source code. This eliminates double book-keeping, which developers regard as yet another layer of red tape preventing them from getting things done, and which is the complaint most often heard from developers about conventional software design.
- The design document is the only means through which the system can be re-deployed after making a change to either the code, or the design, or both; this guarantees that the deployed system will always be exactly as described by the design, so there is no possibility of the design ever becoming outdated, which is the complaint most often heard from architects about programmers.
Any attempt to introduce authoritative design in the software engineering discipline would necessarily have to borrow concepts from the electronic engineering discipline. This means that the solution must lie within the realm of Component-Based Software Engineering (CBSE) (W), where systems consist of well-defined components, connectable via specific interfaces, using well-defined connectivity rules.
What we need is a toolset that implements such a paradigm for software. The toolset must have knowledge of available components, knowledge of the interfaces exposed by each component, and rules specifying valid ways of connecting those interfaces. The toolset must then be capable of materializing the design into a running software system.
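As a minimal illustration of what such knowledge could look like, the following Python sketch models a tiny component library and validates a single wire against it. The library contents, component and pin names, and the validation rule are all hypothetical, invented for illustration; a real toolset would be far richer.

```python
# Hypothetical component library: component type -> the interfaces it
# provides and requires, keyed by pin name. All names are illustrative.
LIBRARY = {
    "HttpServer":  {"provides": {"http": "HttpApi"},
                    "requires": {"store": "KeyValueStore"}},
    "RedisClient": {"provides": {"kv": "KeyValueStore"},
                    "requires": {}},
}

def validate_wire(design, wire):
    """Check one wire, (consumer component, required pin) ->
    (producer component, provided pin), against the library."""
    (c_name, c_pin), (p_name, p_pin) = wire
    c_entry = LIBRARY[design[c_name]]
    p_entry = LIBRARY[design[p_name]]
    required = c_entry["requires"].get(c_pin)
    provided = p_entry["provides"].get(p_pin)
    if required is None or provided is None:
        return False, "no such pin"
    if required != provided:
        return False, f"type mismatch: {required} vs {provided}"
    return True, "ok"

# A design is a mapping of instance names to library component types:
design = {"api": "HttpServer", "cache": "RedisClient"}
ok, msg = validate_wire(design, (("api", "store"), ("cache", "kv")))
```

With this information at hand, the toolset can refuse to save or deploy a design containing a wire between incompatible interfaces, which is precisely what makes the design semantically valid by construction.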
The toolset must not repeat the mistakes and suffer from the drawbacks of previous attempts at component-based software engineering. Thus, the toolset must meet the following goals:
- Facilitate any programming language.
By this we do not mean that it should be possible to freely mix C++ components with Java components; what we mean is that it should be possible to express in one place a C++ subsystem containing C++ components interconnected via C++ interfaces, and in another place a Java subsystem containing Java components interconnected via Java interfaces, and at a higher scope to have each of these subsystems represented as an individual opaque component, where connections between the two components are made via language-agnostic interfaces (e.g. REST) or cross-language interfaces (e.g. JNI, JNA, etc.).
- Facilitate any level of scale, from embedded systems to network clouds.
This means that the nature of a component and the nature of an interface must not be restricted, so that they can be realized in different ways at different levels of scale. For example, at the embedded/C++ level of scale, a component might be defined as a C++ class exposing C++ interfaces, whereas at the internet level of scale a component is likely to be defined as a (physical or virtualized) network host exposing TCP interfaces.
- Guarantee type-safety at any scale.
Type safety can be carried across different levels of scale by means of parametric polymorphism (generic interfaces.) For example, a type-safe interface between a client and a server in a network can be described with a construct like Tcp<Rest<AcmeShopping>> which stands for a TCP connection through which we are exchanging REST transactions according to a schema which corresponds to some programmatic interface called "AcmeShopping".
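The same construct can be sketched in any language with parametric polymorphism. The following Python sketch mirrors the hypothetical Tcp<Rest<AcmeShopping>> example using typing generics; the type names are assumptions carried over from the example above, not part of any existing library.

```python
from typing import Generic, TypeVar, get_args

T = TypeVar("T")

class AcmeShopping:
    """Hypothetical programmatic interface (schema) of a shopping service."""

class Rest(Generic[T]):
    """REST transactions conforming to schema T."""

class Tcp(Generic[T]):
    """A TCP connection carrying payloads of type T."""

# The full, statically checkable type of the client-server link:
link_type = Tcp[Rest[AcmeShopping]]

# A toolset (or a type checker) can peel the layers apart to validate a wire:
(transport_payload,) = get_args(link_type)   # Rest[AcmeShopping]
(schema,) = get_args(transport_payload)      # AcmeShopping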
- Require minimal extra baggage.
Components should not be required to include a lot of extra overhead to facilitate their inclusion in a design. Especially at the embedded level, components should ideally include zero overhead.
This means that a C++ class which accepts as constructor parameters interfaces to invoke and exposes interfaces for invocation by virtue of simply implementing them should ideally be usable in a design as-is.
The extra functionality necessary for representing the component during design-time, provisioning a target environment with it, instantiating it, and wiring it should be provided by a separate companion module, which acts as a plugin to the design toolset, and exists only during design-time and deployment-time, but not during run-time.
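A sketch of this "zero extra baggage" idea in Python follows. The component is a plain class with constructor-injected interfaces; the companion descriptor, which would live in a separate design-time plugin, is a hypothetical format invented here for illustration.

```python
class Logger:
    """An interface the component invokes (constructor-injected)."""
    def log(self, message: str) -> None:
        raise NotImplementedError

class Greeter:
    """The component itself: a plain class with no toolset-specific code.
    It consumes interfaces via its constructor and provides an interface
    by simply implementing it."""
    def __init__(self, logger: Logger):
        self._logger = logger
    def greet(self, name: str) -> str:
        self._logger.log(f"greeting {name}")
        return f"Hello, {name}!"

# Companion module: tells the toolset how to represent, provision, and
# wire the component. It exists only at design/deployment time, never at
# run-time. The descriptor format shown is hypothetical.
GREETER_COMPANION = {
    "component": Greeter,
    "requires": {"logger": Logger},   # pins the component consumes
    "provides": [Greeter],            # pins the component exposes
}

class ListLogger(Logger):
    """A test double that records log lines."""
    def __init__(self):
        self.lines = []
    def log(self, message: str) -> None:
        self.lines.append(message)

# The toolset would use the descriptor to instantiate and wire:
log = ListLogger()
greeter = GREETER_COMPANION["component"](logger=log)
```

Note that `Greeter` itself never references the companion, so at run-time it carries literally zero overhead, which is what the embedded level of scale demands.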
- Support automatic deployment.
The toolset must be capable of deploying a software system of arbitrary complexity to a production environment of arbitrary complexity, and it must be capable of doing so with no human intervention other than the pressing of a "Deploy" button. To this end, the toolset must support components representing various different kinds of environments, such as network hosts, isolated devices, operating systems, virtual machines, etc., and each of these components must be configurable with everything necessary to provision a certain environment with the corresponding part of the design.
- Support iterative development.
Once a system has been designed, coded, and deployed, it is a fact of life that it will keep evolving. The design toolset must support re-deploying after modifying the code, or the design, or both.
- Support automatic wiring.
Once an execution environment has been provisioned with software components, the components must be wired together in order to start running. Traditionally, the wiring of freshly instantiated components is done by carefully hand-crafted code, to account for circular dependency issues between components. If we are to have fully automated deployment, the wiring cannot be done by hand-crafted code anymore; it must be automated, therefore it must be standardized. This in turn means that certain connectivity rules are necessary in order to guarantee that software designs do not suffer from circular dependency issues that would require custom handling. For more on this, see michael.gr - Call Graph Acyclicity.
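If the design is guaranteed acyclic, automatic wiring reduces to instantiating components in topological order of their dependencies. The following Python sketch shows the principle; the component names and factories are hypothetical.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

def wire(dependencies, factories):
    """Instantiate and wire components in dependency order.
    `dependencies` maps a component name to the names it consumes;
    `factories` maps a name to a callable taking those as kwargs.
    Raises graphlib.CycleError if the design has a dependency cycle."""
    instances = {}
    for name in TopologicalSorter(dependencies).static_order():
        kwargs = {d: instances[d] for d in dependencies.get(name, ())}
        instances[name] = factories[name](**kwargs)
    return instances

class Database:
    def query(self):
        return ["row1", "row2"]

class Service:
    def __init__(self, db):
        self.db = db
    def rows(self):
        return self.db.query()

# "db" has no dependencies; "service" consumes "db":
system = wire(
    {"service": ["db"], "db": []},
    {"db": Database, "service": lambda db: Service(db)},
)
```

The connectivity rules mentioned above are what make this possible: because the design is known to be acyclic, no hand-crafted bootstrapping code is ever needed, and a cyclic design is rejected outright rather than worked around.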
- Facilitate incremental adoption.
It should be possible to express, via an authoritative design document, the structure of a small subsystem within a larger system whose structure has not (yet) been expressed authoritatively.
- In systems of medium scale and above, this may be handled by making the core deployment and wiring engine of the toolset available on demand, during runtime, to quickly materialize a small subsystem within the larger system.
- In embedded-scale systems, it should be possible to utilize code generation to do the instantiation and the wiring, so as to avoid having the core engine present in the target environment.
- Utilize a text-based document format.
In software we make heavy use of version control systems, which work best with text files, so the design documents must be text-based. The text format would essentially be a system description language, so it must be programmer-friendly in order to facilitate editing using a text editor or an IDE. A graphical design tool would read text of this language into data structures, allow the visual editing of such data structures, and save them back as text.
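To make the idea concrete, here is a sketch of what such a text-based format and its parser might look like. The syntax shown is invented purely for illustration; the actual system description language is yet to be designed.

```python
# A hypothetical, minimal, diff-friendly textual design format:
DESIGN_TEXT = """
component db  : PostgresHost
component api : RestServer
wire api.storage -> db.sql
"""

def parse_design(text):
    """Parse the toy format into plain data structures, which a graphical
    editor could then render, edit, and save back as text."""
    components, wires = {}, []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("component"):
            name, type_ = line[len("component"):].split(":")
            components[name.strip()] = type_.strip()
        elif line.startswith("wire"):
            src, dst = line[len("wire"):].split("->")
            wires.append((src.strip(), dst.strip()))
    return {"components": components, "wires": wires}

design = parse_design(DESIGN_TEXT)
```

Because the authoritative artifact is plain text, every change to the design produces a meaningful version-control diff, and merge conflicts can be resolved with the same tools programmers already use for code.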
- Facilitate dynamic software systems.
Virtually every non-trivial system needs the ability to vary, at runtime, the number of instances of some components in response to changing computation needs, and to choose to instantiate different types of components to handle different needs. Therefore, a toolset aiming to be capable of expressing any kind of design must be capable of expressing, at a minimum, the following dynamic constructs:
- Plurality: Multiple instantiation of a certain component, where the number of instances is decided at runtime.
- Polymorphism: Fulfilling a certain role by instantiating one of several different types of components capable of fulfilling that role, where the choice of which component type to instantiate is made at runtime.
- Polymorphic plurality: A combination of the previous two: A runtime-variable array of components where each component can be of a different, runtime-decidable type.
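The three constructs above can be sketched as follows in Python; the handler types and the role registry are hypothetical stand-ins for what the design document would record.

```python
class JsonHandler:
    def handle(self):
        return "json"

class XmlHandler:
    def handle(self):
        return "xml"

# Polymorphism: several component types can fulfil the "handler" role;
# the design records the role, the type is chosen at runtime.
HANDLER_ROLE = {"json": JsonHandler, "xml": XmlHandler}

def spawn(kind: str, count: int):
    """Plurality: a runtime-variable number of instances of a
    runtime-chosen type."""
    return [HANDLER_ROLE[kind]() for _ in range(count)]

workers = spawn("json", 3)  # count decided at runtime

# Polymorphic plurality: an array where each element's type is decided
# at runtime, independently of the others.
mixed = [HANDLER_ROLE[kind]() for kind in ["json", "xml"]]
```

The design document would declare the role and its admissible component types; the decision of how many instances, and of which types, would remain a runtime matter, as it must.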
- Facilitate multiple alternative configurations (layers).
In virtually every software development endeavor there is a core system design which is materialized in a number of variations to cover different needs. For example:
- Debug vs. release
- Testing vs. production
- With instrumentation or without
- With hardware emulation vs. targeting the actual hardware
The bulk of the components and the wires of the design exist in all configurations, but some configurations prescribe additional components and slightly different wiring.
Therefore, the toolset must facilitate the expression of alternative configurations so that each configuration can be defined authoritatively.
To facilitate this, the toolset must support design layers, similar to drawing layers found in drawing applications like Photoshop. Note that design layers are unrelated to the architectural layers found in layered architectures, although it is possible that people will figure out ways to represent architectural layers using design layers.
The details of how layers are going to work in order to support configurations are to be decided, but one preliminary idea is to have one or more base layers where the bulk of the components are laid out, and a few mutually exclusive configuration layers on top of them. A configuration layer combines with one or more base layers to form a complete system, and is deployable, whereas base layers do not describe complete systems and are therefore not deployable by themselves.
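One way this overlaying could behave is sketched below. The layer format is hypothetical and preliminary, matching the idea above: base layers carry the bulk of the design, and a configuration layer adds components, adds wires, and may override a base component with a variant.

```python
def materialize(base_layers, configuration_layer):
    """Combine base layers with one configuration layer into a complete,
    deployable system description. Later layers win on conflicts, so a
    configuration layer may substitute a variant of a base component."""
    components, wires = {}, []
    for layer in [*base_layers, configuration_layer]:
        components.update(layer.get("components", {}))
        wires.extend(layer.get("wires", []))
    return {"components": components, "wires": wires}

# Hypothetical example: a "debug" configuration swaps the database for an
# in-memory variant and adds a tracing component.
base = {"components": {"api": "RestServer", "db": "PostgresHost"},
        "wires": [("api.storage", "db.sql")]}
debug = {"components": {"db": "InMemoryDb", "trace": "Tracer"},
         "wires": [("api.trace", "trace.in")]}

debug_system = materialize([base], debug)
```

Under this scheme, only the combined result is deployable, which matches the rule stated above that base layers by themselves do not describe complete systems.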
- Be extensible.
The design document must support the inclusion of arbitrary metadata to be used by various tools, which can be either separate applications, or plugins to the graphical editor. Examples of metadata:
- Keeping track of documentation of interest to different stakeholders, for example Architectural Decisions (W) [architectural-decisions].
- Keeping track of Team Architecture, i.e. which development teams are responsible for building and/or maintaining different parts of the design. [team-architecture]
- Recording various technical characteristics, such as data flow. (Every interface can be associated with a direction of data flow with respect to the direction of invocation: when invoked, some interfaces only pull data, some only push data, and some perform bi-directional transfer of data.)
- Recording, either manually or automatically, various metrics such as:
- Technical debt estimations
- Threat modelling
- Compliance considerations and responsibilities
- Test code coverage results
- Performance statistics
- Frequency of change statistics
Using such metadata and plugins, the graphical editor may allow switching between views to visualize various aspects of the system overlaid on the component diagram, such as, for example, data flow instead of control flow, a heat map of technical debt, a heat map of test code coverage, a heat map of frequency of change, etc.
- Be accessible and attractive.
The extent and speed by which a new software development technology is adopted greatly depends on how accessible and attractive the technology is. To this end:
- The core toolset must be free and open source software. (Profit may be made from additional, optional tools, such as a visual editor.) This also means that the toolset must be a cross-platform, installable software package rather than a cloud offering.
- A clear distance must be kept from unattractive technologies like UML, XML, etc.
- The literature around the toolset must avoid wooden language and alienating terms such as "enterprise architecture", "standards committee", "industry specifications consortium", etc.
- Efficiently manage complexity.
Software designs can become formidably complicated. One of the major goals of a design methodology is to manage complexity and to reduce clutter. Therefore, the toolset must support the following constructs:
Containers
Some systems are so large that expressing them in a single diagram may be inconvenient to the point of being unworkable. To address this, the toolset must facilitate hierarchical system composition by means of container components. A container encapsulates an entire separately-editable diagram and exposes some of the interfaces of the contained components as interfaces of its own. Thus, containers can be used to abstract away entire sub-designs into opaque black-boxes within greater designs. Container components must be boundlessly nestable.
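The container mechanism can be sketched as follows; the `Container` class and the pin-mapping format are hypothetical illustrations of the idea, not a proposed API.

```python
class Container:
    """Encapsulates an inner design and re-exposes selected inner pins
    as its own, hiding everything else. Since a Container can itself be
    placed inside another Container's inner design, nesting is unbounded."""
    def __init__(self, inner_instances, exposed):
        # exposed: outer pin name -> (inner component name, attribute name)
        self._inner = inner_instances
        self._exposed = exposed

    def pin(self, name):
        """Resolve an exposed outer pin to the inner callable it fronts."""
        component, attribute = self._exposed[name]
        return getattr(self._inner[component], attribute)

class Cache:
    def get(self, key):
        return f"value-of-{key}"

# Only the "lookup" pin is visible from outside the container; the Cache
# component and any other inner wiring remain an opaque black box.
subsystem = Container(
    inner_instances={"cache": Cache()},
    exposed={"lookup": ("cache", "get")},
)
```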
Viae
Large numbers of wires traveling long distances within a diagram can have a detrimental effect on the intelligibility of the diagram. For this reason, the concept of the "via" will be borrowed from electronic design. (Plural vias or viae.) A via is a named circle in which a wire may terminate and thus vanish from view. All viae with the same name are implicitly connected, without having to show the wires between them. This is especially useful for the wires of interfaces representing cross-cutting concerns, which are ubiquitous and therefore need not be shown everywhere.
A via is strongly typed like any pin; when the first pin is wired to a via, the via implicitly takes the type of that pin. Viae are to be drawn as little circles.
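Resolving viae back into explicit wires is mechanical: group via endpoints by name, then connect every output pin in a group to every input pin in the same group. The sketch below illustrates this; the endpoint format and pin names are hypothetical.

```python
from collections import defaultdict

def resolve_viae(endpoints):
    """endpoints: iterable of (via name, pin, direction) triples, where
    direction is "out" or "in". All pins terminating in same-named viae
    are implicitly connected; return the explicit wires."""
    groups = defaultdict(lambda: {"out": [], "in": []})
    for via_name, pin, direction in endpoints:
        groups[via_name][direction].append(pin)
    wires = []
    for via_name, pins in groups.items():
        for out_pin in pins["out"]:
            for in_pin in pins["in"]:
                wires.append((out_pin, in_pin))
    return wires

# A cross-cutting "log" via: one logger output feeds two consumers
# without any wires being drawn between them on the diagram.
wires = resolve_viae([
    ("log", "logger.sink", "out"),
    ("log", "api.log", "in"),
    ("log", "db.log", "in"),
])
```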
Ribbons
Sometimes there may be multiple parallel wires that travel over long distances on a diagram. Some of them might even go in opposite directions. To reduce clutter, the toolset must make it possible to group such wires together in a ribbon. At each end of a ribbon is a connector, which breaks the ribbon into individual pins and shows the name and type of each pin, so that individual wires can be drawn from there to component pins.
Ribbons and connectors are pseudo-elements, in the sense that they only exist in the design diagram and have no counterpart in code. Ribbons are to be drawn as two parallel hairlines with a slanted hash between them. The shape of connectors is to be determined, but it will probably be borrowed from electronic design. Ribbons can also be routed in and out of viae. Ribbon viae are to be slightly bigger than single-wire viae.
- Establish a universal notation.
To ensure that every developer can easily understand a design document that they see for the first time, the toolset must standardize the notation used in software diagrams, the same way that electronic schematic diagrams follow a standard notation which is universally understood by all electronic engineers.
The details are to be decided, but some preliminary ideas about styling and conventions are as follows:
(Need to show an illustration here.)
Diagrams are drawn using nothing but monochrome lines. (Black lines on a white background, or white lines on a blue background, etc.) This is because color opens up too many possibilities for distractions and for non-standard representations. The use of color should be reserved for:
- Distinguishing between different layers when multiple layers are drawn superimposed.
- Transient concepts such as:
- Mouse-over in the graphical editor
- Selection in the graphical editor
- Validation errors
- Visualization of statistics (especially heat maps)
Nonetheless, people will probably figure out that they can present a design in a colorful way by placing different components on different layers, choosing a different color for each layer, and having all layers displayed simultaneously. However, should they decide to do that, they are on their own: the toolset will not offer any features specifically intended to facilitate this.
Wires are to be drawn using hairlines.
Pins are also to be drawn using hairlines. Outputs will be triangular arrows pointing out of a component; inputs will be triangular arrows pointing into a component. The name and type of each pin are to be drawn outside the shape of the component, allowing components to be relatively small, but requiring a lot of empty space around them to fit the names of the pins. The pin name is to be drawn in a bigger font than the pin type.
Wires may bend only in right angles. When two wires cross, this means that they are isolated from each other. When multiple outputs converge into a single input, a small but discernible dot at the point of convergence indicates that the wires are connected.
At various points along a wire there can be tiny skinny arrows to remind the viewer of the direction of the wire (always from the output to the input.)
Component shapes are to be drawn using thick lines. The default shape for every component type is a plain rectangle, with the name and type of the component rendered in the center. The component name is to be drawn using a bigger font than the component type.
Some component types perform simple and standard functions, which can usually be inferred from their pins, for example adapters from one interface to another, or converters that transform data from one form to another. For such simple components, there is merit in refraining from displaying their name and type, and instead displaying them with a special shape, thereby making them occupy less space in the design and making the design more expressive. The toolset will initially offer a few special shapes:
- A triangular component shape intended for component types that act as converters.
- An AND-gate component shape for component types that play the role of adapters.
Over time, more component types that perform simple and standard functions will inevitably be identified. This will lead to a demand to introduce additional component shapes, bearing some resemblance to electronic or flowchart symbols, to represent those components; however, the intention is to be conservative in this, and only introduce new shapes if the demand for them is strong and widespread.
The preferred placement of pins on the perimeter of a component shall be:
- Inputs along the left and top edges
- Outputs along the right and bottom edges
More specifically, the convention for pin placement shall be:
- General-purpose and cross-cutting concern interfaces:
- inputs along the top edge
- outputs along the bottom edge
- Application-specific interfaces:
- inputs along the left edge
- outputs along the right edge
This arrangement is analogous to electronic design, where the convention is that signals flow from left to right and voltages from top to bottom.
Footnotes
[architectural-decisions] See "Documenting Architecture Decisions" by Michael Nygard: https://cognitect.com/blog/2011/11/15/documenting-architecture-decisions (the original link, https://thinkrelevance.com/blog/2011/11/15/documenting-architecture-decisions, is dead). For ADRs as a vehicle of engagement between architects and developers rather than as documentation, see Mark Richards - The Intersection of Architecture and Implementation - DDD Europe.
[team-architecture] See the concept of "Team Architecture" in "Practical (a.k.a. Actually Useful) Architecture" by Stefan Tilkov, GOTO 2023, section 2, "Explicitly architect your team setup". Related term: Team Topologies.