As system complexity grows, software teams are being asked to carry more context than they used to. That much is obvious. But what can actually make that complexity manageable without turning every serious program into a long exercise in rediscovery?
This isn’t a new question. Barry Boehm was writing about the rising cost of late change in Software Engineering Economics in the early 1980s, and Fred Brooks had already warned, in The Mythical Man-Month and later in “No Silver Bullet”, that software’s real difficulties include complexity, changeability, and the need for conceptual integrity. The language varies a bit over the decades, but the underlying concern does not change much. When systems become harder to see whole, they become harder to reason about, harder to modify safely, and harder to verify with confidence.
When I say “stronger foundations”, I’m not talking about meaningless formalities. I’m referring to a requirements basis that is intentional, interfaces that are explicit enough to survive handoffs, architecture that gives the design a well-defined form, traceability that helps people understand why something exists, and verification that reduces uncertainty instead of merely consuming time. Recent work still treats requirements traceability as an active problem, which tells its own story: the old need to preserve intent across a lifecycle did not disappear just because toolchains improved. See, for example, this recent systematic mapping study of requirements traceability.
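To make the traceability idea concrete, here is a minimal sketch of what a trace record might look like in code. This is an illustration under assumptions, not any particular tool’s data model; the field names and identifiers are hypothetical.

```python
# Minimal sketch of a traceability model: each record links a requirement
# to the design elements that realize it and the evidence that verifies it.
# All names here are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class TraceRecord:
    req_id: str                     # e.g. "REQ-042"
    statement: str                  # what the system is supposed to do
    rationale: str                  # why the requirement exists
    design_refs: list[str] = field(default_factory=list)  # modules, interfaces
    evidence: list[str] = field(default_factory=list)     # tests, analyses, reviews

def untraced(records: list[TraceRecord]) -> list[str]:
    """Requirements with no verification evidence yet: the soft spots."""
    return [r.req_id for r in records if not r.evidence]

reqs = [
    TraceRecord("REQ-001", "Log every command rejection", "audit finding 2023-11",
                design_refs=["cmd_handler.reject()"],
                evidence=["test_reject_logging"]),
    TraceRecord("REQ-002", "Recover from sensor dropout within 2 s",
                "hazard analysis"),
]
print(untraced(reqs))  # ['REQ-002'] -- the intent exists, the evidence does not
```

Even a sketch this small makes the point: once intent, design, and evidence are linked explicitly, the gaps become queryable instead of discoverable only by accident.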
A good foundation does more than prevent disorder. It makes useful things possible. One of the first is earlier convergence. A team with clearer intent and sharper interfaces can make design decisions with less churn, because fewer basic questions are left unresolved long enough to be rediscovered during integration or test. SEI’s more recent work on technical debt and data-driven analysis points at the same downstream reality: architectural and design choices do not vanish after coding starts; they come back later as rework, sustainment burden, and awkward constraints on future change.
Another thing stronger foundations make possible is better quality from the beginning. That is old news too. Formal inspection practice has been around for decades. NASA’s Software Formal Inspections Standard says plainly that the purpose of inspections is to detect and eliminate defects as early as possible in the lifecycle, and NASA inspection studies report that well-conducted inspections can remove 60% to 90% of existing defects before those defects drift farther downstream. Older Fagan-inspection material makes the same point from the IBM tradition: inspecting requirements, design, code, and test artifacts early improves productivity and reduces customer-visible defects because the defects are being removed while they are still relatively cheap to understand and fix.
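The economics behind those numbers are easy to sketch. The cost multipliers below are hypothetical placeholders, not measured data; the structure of the argument is what matters, namely that even a modest inspection removal rate dominates letting defects escape to test.

```python
# Back-of-envelope illustration of why early defect removal pays off.
# The multipliers are hypothetical placeholders, not measured data.
COST_TO_FIX = {"requirements": 1, "design": 3, "code": 10, "test": 40, "field": 100}

def expected_rework(defects: int, removal_rate: float) -> float:
    """Cost if `removal_rate` of defects are caught at design inspection
    and the remainder escape to test."""
    caught = defects * removal_rate
    escaped = defects - caught
    return caught * COST_TO_FIX["design"] + escaped * COST_TO_FIX["test"]

# 100 defects: compare a 60% inspection removal rate against no inspections.
print(expected_rework(100, 0.60))  # 1780.0 cost units
print(expected_rework(100, 0.00))  # 4000.0 cost units
```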
That matters because late verification often gives people a false sense of movement. Test activity can increase while confidence does not rise proportionally. I have seen versions of that pattern often enough that I no longer find it surprising. Stronger foundations help because verification has something firmer to attach to: clearer intent, better-defined interfaces, and a more legible chain from requirement to design decision to evidence. That is one reason traceability continues to matter in serious environments, even if the tooling and terminology keep changing.
There is another issue with inspections, and it comes up too often. Even among teams that are conscientious about doing them, many defer the reviews until development is all but done. As a result, the lessons arrive too late to influence further development, and the inspections find the same issues and defects over and over. That problem does not get simpler when design and code can be generated more quickly than they can be reviewed.
There is also a planning dimension to all this, and it has been around for a while as well. Watts Humphrey’s Personal Software Process and later PSP Body of Knowledge were built on the idea that disciplined data collection around size, effort, defects, and estimation can improve planning skills and reduce defect levels. SEI’s TSP in Practice reported teams delivering an average of 6% later than planned, improving productivity by an average of 78%, and producing products with far fewer defects than typical software projects. The Cyberpartnership software-process work pushed in much the same direction from a security standpoint, arguing that software producers should adopt processes that measurably reduce specification, design, and implementation defects, conduct measured trials of available approaches, and broaden the use of practices that can reduce vulnerabilities across the lifecycle. People can certainly debate how broadly any particular numbers travel, and they should. Even so, the larger point holds up rather well: foundations strong enough to support measurement, estimation, disciplined defect management, and security-oriented process improvement make programs easier to steer and harder to break by accident.
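As a small illustration of the measurement idea, here is a sketch in the spirit of PSP’s PROBE estimation: fit historical actuals against estimates with ordinary least squares, then use the fit to correct a new estimate. The data points are made up; only the mechanics are the point.

```python
# Sketch of PSP-style estimation from historical data, in the spirit of
# Humphrey's PROBE method. All data points are hypothetical.
import statistics

estimated = [120, 300, 180, 450, 250]   # proxy size estimates (hypothetical)
actual    = [150, 340, 200, 520, 290]   # measured actual sizes (hypothetical)

# Ordinary least squares with a single predictor.
mean_e, mean_a = statistics.mean(estimated), statistics.mean(actual)
beta1 = sum((e - mean_e) * (a - mean_a) for e, a in zip(estimated, actual)) \
        / sum((e - mean_e) ** 2 for e in estimated)
beta0 = mean_a - beta1 * mean_e

new_estimate = 350
projected = beta0 + beta1 * new_estimate
print(f"raw estimate {new_estimate}, history-corrected projection {projected:.0f}")
# -> raw estimate 350, history-corrected projection 402
```

The specifics of the fit matter less than the habit it represents: keeping enough data about past work that the next plan starts from evidence rather than optimism.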
Fred Brooks used the phrase “conceptual integrity”, and I think that phrase still has some life in it. Software systems become difficult partly because their internal shape becomes difficult to hold in the mind. Brooks argued that conceptual integrity is central to system design, and that observation has aged better than many supposedly newer ideas. If the architecture, interfaces, and behavioral intent of the system are blurry, then every extension, integration effort, or safety argument has to work harder than it should. If those things are reasonably clear, the system becomes easier to evolve without reopening every old wound.
There is also a human side to stronger foundations, and I do not think it should be dismissed as a soft add-on. Teams need enough interpersonal safety to surface uncertainty early, admit that something is underspecified, and ask a question before the cost of asking it becomes politically inconvenient. Popular summaries of Google’s Project Aristotle, including this InfoQ piece on psychological safety and this Forbes summary, point in the same direction: teams learn and perform better when people can expose risk and uncertainty without paying too high a social price. That does not replace sound engineering artifacts. It does make it more likely that weak assumptions will be surfaced while there is still time to do something useful about them.
At this point, the argument is not really that stronger foundations are fashionable. They are not. The argument is that they keep proving useful. The older sources show that software engineering has been wrestling with cost of change, conceptual integrity, inspections, estimation, and defect prevention for decades. The newer sources show that the same themes remain alive under updated names such as technical debt, traceability at scale, and data-driven engineering management.
I’m not suggesting that stronger foundations remove complexity. That would be too much to ask of any method. What they seem to do, when they are real, is keep complexity from running the program. They make the work easier to steer, easier to verify, easier to extend, and a little less dependent on rediscovering old truths at the worst possible moment.
AI-assisted development makes these foundations even more important: when teams can generate more code and design content faster, the value of clear requirements, explicit interfaces, traceability, and disciplined verification only increases.
A great deal of future progress in software engineering still depends on knowing, a little earlier and a little more clearly, what the system is supposed to do, how the pieces are meant to fit, and what evidence would justify believing that they do. That is not a new lesson. It is one that keeps surviving contact with time.