I sometimes joke that I have gone full circle: from Structural Engineer to Software Architect. Structural Engineering and Architecture are of course not at all the same thing (never suggest otherwise to either Structural Engineers or Architects!), but people tend to mix the two up. I actually think the title should be ‘Software Structural Engineer’ instead of Software Architect. That would be more accurate, and it would make my joke funnier. Let me explain…
I have a Master of Science in Structural Engineering, focusing on geotechnical, concrete and steel structures, but I became a Software Engineer as soon as I left university.
During my studies I was constantly reminded (mostly by the professors) of one fundamental truth: actio est reactio (action equals reaction). Structural Engineering is, at its core, the study of how forces travel through a system and how that system reacts to them. Reality is too complex to calculate all at once, so engineers use structural idealization, a concept I will explain below and then relate to Software Engineering.
To determine whether a system can safely transfer the forces applied to it to the ground, engineers use a model that consists of a few very basic elements, represented by simple lines on a piece of paper. This is called the Global Analysis.
It lets them grasp the behavior of the entire system at a glance and answers the fundamental question of whether the structure as a whole will stand.
This comes at the cost of accuracy, because the analysis is simplified and assumes that every individual element will be able to withstand the forces applied to it. These assumptions need to be verified to find out whether, for example, the concrete is strong enough, the soil can support the weight without yielding, and the connections have enough screws and bolts.
Analyzing the parts #
To verify the assumptions, elements are ‘cut out’ of the Global Analysis and the forces at the cuts are calculated (actio est reactio!). The mental load of the system as a whole is replaced by forces at the boundaries (Interfaces!) of a new, much smaller model. This lets us add details we had to neglect before, such as the material used and the dimensions of the element that were omitted when it was modelled as a simple line.
Knowing where to place those cuts is a big part of engineering training. You want to get away with as few subsystems as possible to keep the number of calculations down, so you need to find the most critical spots. To do that, you look at different failure states. If the failure state you are examining is bending, for example, you cut your element free at the point where the bending forces in the material are likely highest. Learning where that spot is was a huge part of my training.
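To make the free cut concrete, here is a minimal sketch of the textbook case: a simply supported beam carrying a single point load. The function name and numbers are my own illustrative choices, not from any real engineering library. The reactions balance the load (actio est reactio), and the bending moment peaks directly under the load, which is where you would place your cut.

```python
def free_cut_point_load(P: float, a: float, L: float) -> dict:
    """Simply supported beam of span L with a single point load P
    at distance a from the left support. Returns the support
    reactions and the bending moment at the load point, which is
    where the bending moment is highest."""
    b = L - a
    R_left = P * b / L    # from the sum of moments about the right support
    R_right = P * a / L   # reactions balance the applied load exactly
    M_max = P * a * b / L  # bending moment directly under the load
    return {"R_left": R_left, "R_right": R_right, "M_max": M_max}
```

For a 10 kN load placed 2 m along a 5 m span, the left support carries 6 kN, the right carries 4 kN, and the cut under the load sees a moment of 12 kNm.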
When I first read about Domain-driven Design’s Bounded Contexts, I was reminded of structural idealization. You have a Context Map where you make sure that your system is in balance and can take and distribute the load, but you are not interested in details there. And each Bounded Context is a part cut free from your system that communicates only through its boundaries. The Bounded Context does not know about the rest of the system, and no other part of the system knows about the inner life of the Bounded Context: it does not leak its context, and it solves its task. A Bounded Context is defined around a task (e.g. calculate whether this part will be able to withstand these bending forces), not around a subdomain (e.g. ‘beams’, or ‘all elements made of steel’).
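A minimal sketch of what such a task-centric context could look like in code. All names, section labels, and capacity values here are invented for illustration; the point is only the shape: boundary values cross the interface, the internal model does not.

```python
from dataclasses import dataclass


# The published interface of the 'bending check' context: only these
# boundary values cross it, never the context's internal model.
@dataclass(frozen=True)
class BendingCheckRequest:
    bending_moment_knm: float
    section_name: str


@dataclass(frozen=True)
class BendingCheckResult:
    passes: bool
    utilization: float  # demand divided by capacity


class BendingCheckContext:
    """Defined around one task (verify bending), not around a
    subdomain like 'beams' or 'all steel elements'."""

    # Internal detail: moment capacities per section. No other context
    # knows or cares how this table is built or where it comes from.
    _capacity_knm = {"IPE200": 48.0, "IPE300": 145.0}

    def check(self, request: BendingCheckRequest) -> BendingCheckResult:
        capacity = self._capacity_knm[request.section_name]
        utilization = request.bending_moment_knm / capacity
        return BendingCheckResult(passes=utilization <= 1.0,
                                  utilization=utilization)
```

The Context Map only needs to know that a request goes in and a pass/fail result comes out; swapping the capacity table for a proper design-code calculation would change nothing at the boundary.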
Embracing Iteration #
Once the calculation is done, the engineer might discover that, in order to withstand the forces applied to it, the element needs to be stiffer, bigger, or heavier than anticipated in the Global Analysis.
That, of course, triggers a recalculation of the Global Analysis and of every other subsystem that depended on the assumptions via the forces at its boundaries. And so every change in a subsystem triggers a recalculation of the others, until the system is in equilibrium and we have made sure every element can bear its load in all relevant failure states.
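This loop can be sketched as a fixed-point iteration. The model below is a deliberate toy, not real structural maths: forces grow with the structure's total weight, and an element's required size is simply its force rounded up. The loop alternates between the global model and the element checks until a full round changes nothing.

```python
import math


def design_to_equilibrium(sizes, global_analysis, size_element, max_rounds=50):
    """Iterate between the global model and the per-element checks
    until no element needs to change any more: the design equilibrium."""
    sizes = dict(sizes)
    for _ in range(max_rounds):
        # Global Analysis: boundary forces depend on the current sizes
        # (heavier elements create larger forces in the system).
        forces = global_analysis(sizes)
        # Subsystem checks: resize each cut-free element for its force.
        new_sizes = {name: size_element(name, force)
                     for name, force in forces.items()}
        if new_sizes == sizes:  # nothing changed: every element bears its load
            return sizes
        sizes = new_sizes  # a change ripples back into the global model
    raise RuntimeError("design did not converge")


# Toy stand-ins: force grows with total weight; required size is the
# force rounded up to the next whole unit.
def toy_global_analysis(sizes):
    total_weight = sum(sizes.values())
    return {name: 10.0 + 0.1 * total_weight for name in sizes}


def toy_size_element(name, force):
    return math.ceil(force)
```

Starting from two elements of size 10, the resizing increases the weight, which increases the forces, which triggers another resize; after a few rounds no resize changes anything and the loop stops.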
Software professionals often think of physical construction as a one-shot solution. Unlike in software, changes to physical structures are hard and expensive. We assume that because the result is hard to change retroactively, the design process must have been linear. But humans have been constructing buildings for much longer than they have been building software. We know where to cut, we have standardized models, and we have established safety factors and regulations (a topic for another post). Yet we still iterate over a design multiple times.
Even in standard systems, like a shallow foundation or a steel portal frame, external factors such as soil and water pressure or the specific wind loads are never identical. A calculation fails at a specific joint, which forces a change in the overall thickness, which in turn forces a recalculation of the entire foundation. While the act of constructing a building is not iterative, the process of designing it is.
Drawing parallels to Software Engineering is not hard here: iteration isn’t a failure of planning; it is inherent to designing complex systems. Instead of trying to think extra hard about problems beforehand, we should embrace iterating over our design. It is a privilege and an advantage that software can be changed so easily and cheaply.
Last words #
Not all problems can be solved by looking closer at the details. If a single element is failing because it is overloaded, adding more concrete to it often just makes the system heavier and more unbalanced. The same is true for adding more code to an overloaded module. Sometimes, you have to zoom out, look at the Global Analysis, and change the path the forces are taking entirely.