Designing for Maintenance: The First Principles of Architectural Design


When designing frameworks and system architectures, scalability is one of the qualities people pursue. In articles across the technical community, we can see a large related vocabulary: "extensibility through configuration and definition", "business process" extensibility, all kinds of "plug-ins" and "extension points", and so on.

Yet judging from most of the projects we have been through, these systems do not really scale that well. Some were designed to meet only current needs and are simply unsuitable for future scenarios; that is another story. The case we care about here is the one where the architect neglected the boundary conditions when designing the system.

Another way to frame those boundaries: the system achieves scalability at development time but ignores the problems posed by the maintenance period. Across the software development life cycle, maintenance often accounts for the largest share of cost, as the Kent Beck quote at the beginning suggests.

At this point, one cannot help but ask again: is a system that is not designed for maintenance really scalable?

Is the architecture really scalable?

Many systems are developed with good scalability, just as they were designed. It is only that this scalability may lose its effectiveness during operation and maintenance.

A rumor of decay in a DDD system

This is an unverified rumor about DDD.

In Domain-Driven Design: Tackling Complexity in the Heart of Software, Eric Evans gives a systematic approach to domain-driven design based on his experience refactoring projects, and integrates a series of domain-related practices. We can assume that Eric produced a series of good designs when re-architecting that system.

More than ten years later, the architecture of that system had become a big ball of mud, and the careful design of the early days was hard to see. Staff turnover and the loss of inherited knowledge gradually corroded the system; this is a common problem in most systems.

Regression testing of OSGi modules

OSGi is an interesting modular, plug-in solution, and Eclipse is its best-known application. In terms of capability, OSGi has many benefits: for example, it can dynamically load, update, and unload modules without stopping the service, and it enables the modularization and pluggability of a framework.

Before microservice architectures became popular, its pluggability was very attractive for larger systems. When releasing new features, we did not need to release the whole system; releasing one of the bundles (plug-ins) was enough.

But a web application built on OSGi can still turn into a terrible monolith: each bundle may be built as a "microservice", with mutual calls between bundles, which a microkernel architecture does not allow. As a result, once we update a bundle, we tend to release the entire system rather than the individual bundle.

In the end, we actually have to run regression tests on the whole system.
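The constraint violated above can be sketched in a few lines. In a microkernel design, plugins talk only to the kernel, never to each other, so replacing one plugin stays a local operation. This is an illustrative sketch, not OSGi itself; the `Kernel` class and service names are hypothetical.

```python
# A minimal microkernel sketch (illustrative, not OSGi): plugins register
# services with a kernel and never import each other, so one plugin can be
# replaced without regression-testing the rest.

class Kernel:
    def __init__(self):
        self._services = {}

    def register(self, name, service):
        # Registering again replaces the plugin behind the service name.
        self._services[name] = service

    def call(self, name, *args):
        # All cross-plugin communication goes through the kernel,
        # never through direct plugin-to-plugin references.
        return self._services[name](*args)

kernel = Kernel()
kernel.register("greet", lambda who: f"hello, {who}")
print(kernel.call("greet", "bundle"))  # hello, bundle

# Replacing a plugin is a local operation; callers are unaffected:
kernel.register("greet", lambda who: f"hi, {who}")
print(kernel.call("greet", "bundle"))  # hi, bundle
```

Once bundles start calling each other directly instead of going through such an interface, this locality is lost, and every release becomes a whole-system release.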

Legacy code generated by low code

The "legacy code" here refers to code produced by low-code platforms that is difficult to test. The low-code platform here means a general-purpose low-code platform.

In low-code and no-code programming, a DSL is the core element: a well-designed domain-specific, programming-like language rather than a JSON DSL. A well-designed DSL can give the system good testability and continuous-integration capability, which plain JSON cannot.

And once the low code we generate is not testable, it becomes legacy code. Whether it does depends on the automated testing mechanism built into the platform and on the design of automated version migration.
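To make the testability point concrete, here is a sketch of a tiny rule DSL. The syntax (`when <field> = <value> then <action>`) is invented for illustration; the point is that because the DSL parses into plain objects, each rule can be unit-tested and run in CI, which an opaque JSON blob interpreted only inside the platform cannot.

```python
# A minimal, testable non-JSON DSL sketch (hypothetical syntax):
#   when <field> = <value> then <action>
import re

RULE = re.compile(r"when (\w+) = (\w+) then (\w+)")

def parse(line):
    # Turn one DSL line into a plain, inspectable rule object.
    m = RULE.fullmatch(line.strip())
    if not m:
        raise ValueError(f"invalid rule: {line!r}")
    field, value, action = m.groups()
    return {"field": field, "value": value, "action": action}

def evaluate(rule, record):
    # Return the action if the rule matches the record, else None.
    return rule["action"] if record.get(rule["field"]) == rule["value"] else None

rule = parse("when status = overdue then send_reminder")
assert evaluate(rule, {"status": "overdue"}) == "send_reminder"
assert evaluate(rule, {"status": "paid"}) is None
```

Each rule is an ordinary value, so the platform can ship these assertions as part of its pipeline instead of leaving the generated configuration untested.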

"Complexity, like energy, can neither be created nor destroyed; it only moves from one place to another, or from one form to another."

If the platform itself does not consider testability and quality, someone else has to.

The other side of flexibility

The most flexible system I have ever seen was built on PL/SQL: "people can get things done without writing code". We can think of it as a DSL, similar to the configuration systems we have encountered, where features are delivered quickly simply by "configuring".

This flexible "configuration" is very interesting. We can only test it manually; we cannot test it repeatedly and predictably in a test environment. In today's pursuit of automation and stability, this kind of instability has become terrifying.

On the mobile side, message push is driven by configuration through a UI, and we often receive push messages that testers sent to the production environment.

In a sense, this kind of flexibility should be seen as a remedy rather than as a core part of the system design.


Software development is a team activity.

Broken windows spread unchecked

In larger systems, the broken-window effect is more obvious: once one person fails to follow the specification, more and more people will violate it.

This problem stems from the fact that the relevant specifications are not enforced through processes or tools: for example, there is no strict code-review process, and there are no automated architecture-guarding tools.
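An architecture-guarding tool can be as small as a check that fails the build when a layering rule is broken. The sketch below assumes a hypothetical rule that the "domain" layer must not import the "infrastructure" layer; real projects would reach for a dedicated tool such as import-linter (Python) or ArchUnit (Java), but the idea is the same: turn the convention into an automated check.

```python
# A minimal architecture-guard sketch: detect forbidden imports between
# layers by inspecting a module's AST, so the rule can fail CI instead of
# relying on reviewers to notice violations.
import ast

FORBIDDEN = {"domain": {"infrastructure"}}  # layer -> layers it may not import

def violations(layer, source):
    bad = []
    for node in ast.walk(ast.parse(source)):
        names = []
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        for name in names:
            # Compare only the top-level package against the forbidden set.
            if name.split(".")[0] in FORBIDDEN.get(layer, set()):
                bad.append(name)
    return bad

ok_code = "from domain import order"
bad_code = "from infrastructure.db import session"
assert violations("domain", ok_code) == []
assert violations("domain", bad_code) == ["infrastructure.db"]
```

Once such a check runs in the pipeline, the first broken window is caught before anyone else can copy it.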

For example, for the convenience of a single team, the interface of an underlying library is temporarily modified to open a hole. Other teams will then ask for new openings of their own, and the underlying library drifts away from its original design. So before adding a new ephemeral interface, consider the problems you will face when migrating and evolving the system.

We can allow such temporary solutions to exist (emergency releases will always happen), but we should redesign and fix the related problems after the fact.

Lack of automation assurance

For flexible, configuration-driven systems, when debuggability is not considered, it is difficult to test features automatically, and continuous integration and continuous deployment become impossible. For medium and large software systems, this becomes a maintenance (development plus operations) nightmare.

In particular, once a configuration system lacks a version-management mechanism, rolling back the software becomes a new challenge.
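The rollback problem disappears almost entirely when configuration changes are versioned. A minimal sketch, with an invented `VersionedConfig` class for illustration: every change creates a new immutable version, so a bad release can be rolled back to any earlier state instead of being patched by hand.

```python
# A minimal sketch of versioned configuration with rollback: each update
# appends a new immutable snapshot, and rollback restores an old snapshot
# while keeping the full audit trail.

class VersionedConfig:
    def __init__(self):
        self._history = [{}]  # version 0: empty config

    @property
    def current(self):
        return self._history[-1]

    def update(self, **changes):
        # Merge changes into a fresh snapshot; never mutate old versions.
        self._history.append({**self.current, **changes})
        return len(self._history) - 1  # the new version number

    def rollback(self, version):
        # Roll back by appending the old state as a new version,
        # so the history itself records that a rollback happened.
        self._history.append(dict(self._history[version]))

cfg = VersionedConfig()
v1 = cfg.update(feature_x=True)
cfg.update(feature_x=False, timeout=30)  # a bad release
cfg.rollback(v1)
assert cfg.current == {"feature_x": True}
```

Without this kind of history, "rolling back" a configuration means reconstructing the old state from memory, which is exactly the instability described above.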

The whole set of factors that software development depends on should be re-analyzed once scalability and flexibility are both in play.

Lack of local test environment

When we hit a complex bug, whether with Serverless or with declarative, configuration-driven approaches, we need to debug jointly with other systems. We need a local environment in which the bug can be reproduced quickly so that it can be fixed, yet on the existing platforms, scheduling and testing have to be performed in the cloud.

However, this hybrid model may change once "development in the cloud" becomes popular.

Building order for maintenance

While writing "Architecture 3.0" recently, my colleague @NoaLand and I have been discussing the ordering of architecture, hoping to build an ordering model that solves part of the maintenance problem. In today's scenario, besides solving the problems above, the following aspects may also be needed.

Face up to the complexity of the problem

As you can see, the problem itself is complex, and only by facing up to it can we make a real breakthrough.

Periodic communication of design ideas

As we know, the mobility of team members affects the stability of a system. Every few years the members of a team are refreshed; in the Internet industry, three years' tenure already makes an old employee. Knowledge of the system is forgotten in a corner as people move on, and new team members do not understand its original design.

Among the many approaches we have tried, one that works well is to ask new employees to describe the architecture of the system after they have been in for a while (say, three months). Along the way, we correct some of their misconceptions about the system.

Timely resolution of technical debt

During implementation, we need to continually set aside time to pay down technical debt; a cliché, but true. Among the many possible measures, continuously updating dependencies is a simple and effective one.
