
One of the most dangerous trends in cybersecurity has been the gradual shift of security decision-making from trained engineers to individuals with little or no background in secure system design. Since the late 1990s, this transition has brought the language of “business risk” into technical engineering decisions where it never belonged. The result? A tendency to treat critical security choices like routine operational trade-offs, as if selecting firewall rules were no different than choosing carpet colors for the executive conference room.
The roots of sound security engineering aren’t new. Saltzer and Schroeder’s 1975 paper, The Protection of Information in Computer Systems, laid out clear principles for building secure systems, and like clockwork, those principles have been ignored in the name of convenience. In 1984, Ken Thompson’s Reflections on Trusting Trust made it painfully clear how dangerous it is to blindly trust software tools and development environments. Nearly 40 years later, too many developers, system owners, and managers have never read it.
Ross Anderson’s extensive research on system composition demonstrates how stacking insecure components into larger systems introduces new, unforeseen vulnerabilities. His point is straightforward: you can’t bolt security onto a flawed system after it’s constructed. It must be integrated from the beginning.
The notion that security engineering decisions are merely a matter of “risk acceptance” is a significant error. Security engineering is not about accepting known flaws simply because addressing them is inconvenient or costly. The objective is to build and operate systems that can withstand attack, fail safely, and resist hostile action without disintegrating. And no spreadsheet, memo, or executive approval can eliminate the risk inherent in a flawed design. The risk does not disappear just because it has been logged and filed; it lies in wait, quietly, until it detonates.
One tool that has enabled this bad habit is the Plan of Action and Milestones (POAM). Originally a reasonable remediation planning tool, it is now often used to kick the can down the road. Too many organizations treat having a POAM as though it solves the problem. It does not. It is paperwork covering a landmine.
To illustrate the absurdity of this situation, imagine if corporations constructed and managed physical buildings in the same careless manner as they often treat cybersecurity. Fire safety codes would be treated as mere suggestions. The risk of fatalities on certain floors would be officially “accepted” and documented. Emergency exits would remain locked, with plans to address the issue “next year if budgets allow.” Fire suppression systems would be overlooked because someone deemed the odds of a fire “within acceptable business thresholds.” Air quality systems would be neglected until enough employees collapsed, creating a politically painful scenario.
And when disaster finally struck (a fire, a building collapse, or mass illness), leadership would issue statements about “legacy infrastructure challenges” and “lessons learned,” then repeat the exact same actions. In terms of physical safety, this would be considered criminal negligence. In cybersecurity, it happens every day.
The foundation for structured security engineering was established long before risk management frameworks became a trend in this field. The 1970 Ware Report, prepared for the Department of Defense, outlined how to secure information systems and laid the groundwork for concepts such as trusted computing bases, security kernels, and verifiable access controls, ideas that remain relevant today. The Ware Report emphasized that security engineering is a discipline grounded in sound architecture and system assurance, rather than in operational shortcuts.
Subsequent milestones further solidified this foundation. In 1975, Jerome Saltzer and Michael Schroeder published The Protection of Information in Computer Systems, a landmark paper that formalized key principles of security engineering, including least privilege, fail-safe defaults, and complete mediation. Those principles are still cited throughout the security literature.
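To make two of those principles concrete, here is a minimal sketch of fail-safe defaults and complete mediation, not drawn from the paper itself; the Policy and Mediator classes, the grant format, and the example names are illustrative assumptions. Every request is forced through a single authorization check, and anything not explicitly granted is denied.

```python
# Minimal sketch of two Saltzer-Schroeder principles: fail-safe defaults
# (deny unless explicitly allowed) and complete mediation (every access
# passes through one check). All names and the policy format are
# illustrative, not taken from the paper.

from dataclasses import dataclass, field

@dataclass
class Policy:
    # Explicit grants: (user, resource) -> set of permitted actions.
    grants: dict = field(default_factory=dict)

    def allow(self, user: str, resource: str, action: str) -> None:
        self.grants.setdefault((user, resource), set()).add(action)

    def is_allowed(self, user: str, resource: str, action: str) -> bool:
        # Fail-safe default: the absence of a grant means "deny", never "allow".
        return action in self.grants.get((user, resource), set())

class Mediator:
    def __init__(self, policy: Policy) -> None:
        self.policy = policy

    def access(self, user: str, resource: str, action: str) -> str:
        # Complete mediation: there is no path to the resource that skips this check.
        if not self.policy.is_allowed(user, resource, action):
            raise PermissionError(f"{user} may not {action} {resource}")
        return f"{action} on {resource} performed for {user}"

policy = Policy()
policy.allow("alice", "payroll.db", "read")
gate = Mediator(policy)

print(gate.access("alice", "payroll.db", "read"))   # explicitly granted, allowed
# gate.access("bob", "payroll.db", "read")          # no grant, denied by default
```

The design choice worth noticing is that the deny decision requires no code at all; it is simply what happens when nobody remembered to write an allow rule, which is exactly the failure mode you want.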
In his now-famous Turing Award lecture, “Reflections on Trusting Trust,” published in 1984, Ken Thompson demonstrated how a compiler could be maliciously altered to insert vulnerabilities into any software it built, including itself, without leaving a trace in the source code. Thompson’s point was blunt and essential: trusting tools without verifying them is a dangerous mistake, and no system can be considered secure if the build environment is compromised.
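As a rough, toy-scale illustration of that mechanism (not Thompson’s actual attack, which subverted a real C compiler and the Unix login program), the sketch below models a compromised “compiler” as a source-to-source function: it silently injects a backdoor when it recognizes a password check, and re-arms itself when it recognizes a clean copy of the compiler, so neither program’s published source shows anything wrong. The function and variable names are invented for the example.

```python
# Toy illustration of the "Trusting Trust" idea. A compromised compiler
# (1) adds a backdoor when it compiles a password check, and (2) re-inserts
# its own sabotage when it compiles a clean copy of the compiler, so
# inspecting either source file reveals nothing. Purely illustrative.

BACKDOOR = 'if password == "letmein": return True  # injected, never in source\n'

def evil_compile(source: str) -> str:
    """Pretend 'compilation' that just returns transformed source text."""
    if "def check_password" in source:
        # Case 1: the target program. Splice a hidden master password into the check.
        return source.replace(
            "def check_password(user, password):\n",
            "def check_password(user, password):\n    " + BACKDOOR,
        )
    if "def compile(" in source:
        # Case 2: a clean copy of the compiler's own source. Emit output that
        # carries both tricks forward, so the source stays clean on disk while
        # every binary it produces is still compromised.
        return source + "\n# (compiled output silently behaves like evil_compile)\n"
    return source

clean_login = (
    "def check_password(user, password):\n"
    "    return stored_hash(user) == hash_of(password)\n"
)
print(evil_compile(clean_login))  # the "compiled" login now accepts "letmein"
```

The uncomfortable lesson is the one Thompson drew: reading the source of the login program, or even of the compiler, tells you nothing once the binary you build with is already lying to you.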
In the decades that followed, Ross Anderson expanded on these ideas, particularly through his extensive work captured in Security Engineering: A Guide to Building Dependable Distributed Systems, first published in 2001. Anderson’s writing stressed that when insecure components are combined, they create complex systems susceptible to unexpected and often catastrophic failures. He emphasized that security cannot simply be added on later. It must be integrated into the architecture from the outset, with each component examined for its interaction within the larger system.
These milestones were not isolated insights; they served as clear warnings. Yet, most of them have been ignored or sidelined ever since.
Even when risk management was formalized for government systems in Reagan’s 1982 Executive Order 12356 and 1984’s National Security Decision Directive 145, it was intended to complement secure system design, not replace it. It was never a permission slip for managers to overrule engineers in the name of quarterly reports.
But in the late ’90s, private-sector executives and non-technical managers co-opted the language of risk management, transforming complex engineering problems into negotiable business risks. That change undermined the principles established in the Ware Report and all that followed, resulting in the performative, compliance-driven, risk-accepting mess we find ourselves in today.
The fallout isn’t academic. Cyber incidents might not leave bodies in the hallway, but they devastate companies, drain economies, cripple infrastructure, and shatter trust. Careers are destroyed. Businesses fail. People’s lives are upended. Real risk management isn’t a way to excuse known vulnerabilities. It’s a discipline designed to protect critical operations and systems worth defending.
A major reason this distortion persists is that the financial and organizational incentives for both leadership and vendors directly contradict sound security engineering. Executives are rewarded for cutting costs, meeting quarterly targets, and avoiding disruptions, rather than for investing in invisible, long-term resilience. Vendors and consultants benefit more from selling frameworks, dashboards, and risk quantification exercises than from fixing structural flaws that would lessen the need for those services. Every delay, every risk acceptance memo, and every deferred remediation represents billable hours, software renewals, and the illusion of progress without the inconvenience of meaningful change.
A common refrain in these conversations is that “security could spend everything and still not be able to fully secure the enterprise.” It’s a convenient lie. It frames security as an unbounded, unsolvable problem to justify inaction and avoid confronting the real issue: poor system design and the tolerance of known flaws. The truth is, while no system can be made perfectly secure, most catastrophic outcomes are entirely preventable with disciplined engineering. The point isn’t to eliminate all risk; it’s to prevent predictable, system-wide failures by refusing to build on rotten foundations. Good security engineering isn’t about infinite budgets; it’s about making smart design choices early and refusing to accept architectural vulnerabilities as a cost of doing business.
Until leadership incentives, vendor priorities, and system design practices realign with these fundamentals, the cycle of preventable failures will persist. Every breach, outage, or compromise will remain unsurprising: each will be the inevitable outcome of decisions made years earlier in boardrooms and budget meetings, where risk was a factor to be documented instead of eliminated.
Security leaders must regain control. Effective engineering, not boardroom horse-trading, should direct system design and operation. Accepting catastrophic risk for short-term convenience isn’t leadership; it’s dereliction. The outcome is both inevitable and entirely predictable.