By Holt Hackney
The October session of the Chief Architect Forum, moderated by Bryce Ominski, author of Digital Momentum, brought together Dan Swanson, series editor for CRC Press’s security and cyber series, and Patrick Hayes, author of Integrated Assurance. What emerged was a frank assessment of where security and risk practice now stand, and how far they still must go. The focus stayed on a few recurring themes: the limits of traditional tools, the pressure for real-time resilience, the unchecked spread of AI, and the cultural work that still needs to happen inside organizations.
From book projects to risk conversations
Swanson began with a quick look at the CRC series. What started as a small leadership-oriented collection has grown into roughly a hundred titles across security, audit, risk, privacy, and architecture. In recent years, interest has shifted toward integration and resilience. Practitioners want help dealing with vendors, complex systems, and supply chains that no longer sit neatly inside a single organization.
Against that backdrop, Hayes described how his own book did not end up as the one he thought he was writing. “I always wanted to write a book,” he said. “I just never thought I actually would.” Once he committed to the project, forcing himself to explain his work on paper changed his view of it. What began as a book about cybersecurity integration gradually became a book about risk assurance.
Readers and advisers pushed him on language. Some asked why he chose the term “assurance” at all, since many people associate it with accounting. His answer was that assurance belongs in the risk domain. That shift in framing led him to focus less on individual controls and more on translation, measurement, and communication with executives.
“I realized I was not writing a security integration book,” Hayes said. “I wrote a risk management book. How to translate into business language, how to talk to executives, how to measure impact in a way they can use.”
Ominski responded with a sentiment many authors shared. “The writing forces you to confront what you really believe,” he said. “You think you know your own ideas until you try to explain them with precision.”
Resilience requires real-time awareness
Both guests agreed that resilience has become the central concern for serious practitioners, and that current approaches often fall short. Ominski framed the issue bluntly early in the session. “If we wait for a breach and then respond, we have already lost too much time,” he said.
Swanson pointed to real-time monitoring as a basic requirement. Resilience, in his view, depends on seeing what is happening in the environment as it occurs, not months later in an audit report. “To be truly resilient you have to have real-time insight into what is happening,” Swanson said. Reviews that arrive a quarter later cannot influence outcomes that have already unfolded.
Hayes expanded that point by challenging the long-standing reliance on SIEM (security information and event management) platforms for detection and response. Organizations used them because that is what they had, not because those tools were designed for fast operational decisions. By the time events are collected, parsed, normalized, and correlated, a security incident is often well underway.
“We tried to make the tools we had fit the job,” Hayes said. “The problem is they were built for audit and reporting, not for stopping something in progress.” The move toward XDR and AI-assisted analytics, he argued, reflects a quiet admission that earlier architectures did not deliver what operational teams really needed.
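Hayes’s latency argument can be sketched with a toy model. The stage names follow the sequence he describes; the delay figures are purely hypothetical and stand in for no particular product:

```python
# Hypothetical per-stage delays, in seconds, for a batch-oriented
# SIEM pipeline. The numbers are illustrative only.
PIPELINE_STAGES = {
    "collect":   30,   # agents ship logs on a polling interval
    "parse":     10,   # raw events are split into fields
    "normalize": 15,   # fields are mapped to a common schema
    "correlate": 300,  # detection rules run over a sliding window
}

def detection_lag(stages: dict) -> int:
    """Total seconds between an event occurring and the earliest alert."""
    return sum(stages.values())

if __name__ == "__main__":
    lag = detection_lag(PIPELINE_STAGES)
    print(f"Earliest possible alert: {lag} seconds after the event")
```

Even under these generous assumptions, the earliest alert trails the event by several minutes; each stage was built for reporting completeness, not reaction time, which is the mismatch Hayes describes.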
Trust, ecosystems, and visible confidence
The conversation returned several times to trust and confidence, not as abstract values but as explicit design goals for architecture and governance.
Supply chain failures and high-profile outages have eroded public confidence in large providers. Hayes cited the faulty CrowdStrike update that disrupted Windows environments worldwide as a clear illustration. It was not a classic security breach, but it revealed fragile coordination between suppliers, a lack of safe rollback options, and a heavy dependence on a single path for critical services.
Customers, Hayes suggested, drew a simple conclusion. Large suppliers can and do fail in ways that directly affect their ability to operate. That realization pushes organizations to look for backup options and to demand more transparent assurance that basic safeguards are in place.
Swanson noted that some authors in the CRC series now talk about “risk deficit” rather than technical deficit. The idea is straightforward. Every rushed implementation and every decision to “fix it later” adds to a growing backlog of risk that very few organizations measure.
Here, Swanson argued, governance must evolve. Periodic audits that focus on past events do not help boards or executives understand the current level of risk in their systems. Continuous assurance, real-time monitoring, and clearer links between controls and business outcomes are becoming necessary.
During this exchange Ominski added, “Trust is no longer assumed. It has to be demonstrated in a way stakeholders can recognize.”
AI adoption and a new layer of debt
If there was a single topic that drew the sharpest concern, it was the way organizations are adopting AI.
Hayes described AI as a new threat vector that many companies have rushed into without architectural planning or governance. In his view, the industry is creating a new category of debt that may exceed what already exists in legacy systems.
“AI is being adopted haphazardly in many organizations,” Hayes said. Marketing teams connect tools to mail systems. Staff paste corporate content into public models. Guardrails are light or nonexistent. In many cases no one has defined how to test models, how to check for poisoning, or how to verify that outputs remain reliable over time.
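One gap Hayes names, verifying that model outputs remain reliable over time, can be approached with a golden-set regression check. The sketch below is a minimal illustration; `call_model`, the prompts, and the expected substrings are hypothetical placeholders, not any real organization’s API or policy text:

```python
# A minimal golden-set regression check for model outputs.
# Each case pairs a prompt with a substring the answer must
# still contain for the output to count as "not drifted".
GOLDEN_CASES = [
    ("What is our password-reset policy?", "help desk"),
    ("Summarize the data-retention rule.", "seven years"),
]

def call_model(prompt: str) -> str:
    # Placeholder: in practice this would call the deployed model.
    canned = {
        "What is our password-reset policy?": "Contact the help desk to reset.",
        "Summarize the data-retention rule.": "Records are kept seven years.",
    }
    return canned[prompt]

def check_outputs() -> list:
    """Return the prompts whose answers drifted from expectations."""
    return [prompt for prompt, expected in GOLDEN_CASES
            if expected not in call_model(prompt)]

if __name__ == "__main__":
    drifted = check_outputs()
    print("drifted prompts:", drifted)  # empty list means all checks pass
```

Run on a schedule against the live model, even a harness this simple gives an organization a baseline answer to “are the outputs still reliable?” that most AI deployments Hayes describes currently lack.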
Hayes argued that the field has done a poor job securing software in general, and is now repeating the same mistakes with AI, only faster. The difference is that AI systems can act and adapt at a pace human attackers cannot match.
Swanson added that boards and senior leaders still struggle with their role in major technology shifts. They do not want to manage details, but they are responsible for strategy and oversight. With AI, as with earlier changes, many boards have not yet decided how to oversee investments that fundamentally reshape business operations.
Ominski put a fine point on it. “We are moving into risks we have not fully imagined,” he said. “The pace alone forces us to rethink how we govern technology.”
The changing shape of the CISO role
The discussion then turned to leadership. Ominski asked how the role of the CISO might change in this environment.
Swanson pointed to Microsoft’s announcement that its global CISO would be supported by nineteen deputy CISOs. Each deputy would focus on a specific business area or domain. Other large organizations, he noted, are also moving toward team-based security leadership, although not always at that scale.
Swanson sees this as recognition that no single individual can hold the required depth across business, technology, and risk. As systems grow more complex, specialized leaders will need to embed within business lines while still aligning with a central strategy.
Hayes agreed that context matters. A CISO in a small company often handles almost everything related to security and risk, though the day-to-day focus is mostly cybersecurity. In a large multinational, the same title may refer to a role that coordinates teams across governance, risk, and compliance, with cybersecurity falling under an operations team.
Hayes also highlighted a more persistent issue. Even as organizations talk more about risk and investment, CISOs still struggle to get meaningful time with boards. Often their material is compressed into a few bullet points, delivered secondhand, and only partially discussed.
“The challenge is still penetrating the board,” Hayes said. “You may get five minutes on the agenda if you get there at all.”
Internal exploitability, not just external exposure
One of Hayes’s more pointed criticisms targeted how organizations talk about attack surface. He noted that when most security teams use the term, they mean the external perimeter: internet-facing systems, border controls, and so on.
Inside the environment, however, conditions can be very different. Once an attacker gains access through stolen credentials or a misconfigured service, lateral movement often becomes straightforward. For an insider, the barrier to entry drops dramatically. End-of-life systems, poor configurations, and unmanaged cloud environments create opportunities that vulnerability scans rarely capture.
Hayes argued that vulnerabilities alone are the wrong focal point. Teams should ask how exploitable their internal environment is, how quickly access could turn into real harm, and which systems or datasets would be affected first.
“Not every vulnerability is equally urgent. Those that can be exploited in minutes or that sit near critical data deserve different treatment than those that are harder to reach. That type of thinking,” Hayes said, “requires far closer cooperation between IT operations and security teams than many organizations currently have.”
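Hayes’s prioritization logic can be sketched as a simple scoring function. The fields and weights below are illustrative assumptions, not a method from his book: findings that are fast to exploit and sit close to critical data rise to the top, regardless of raw severity ratings.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    minutes_to_exploit: int     # estimated effort for an attacker already inside
    hops_to_critical_data: int  # lateral-movement steps to a sensitive system

def urgency(f: Finding) -> float:
    """Higher score = fix sooner. Speed of exploitation and proximity
    to critical data dominate the ranking (illustrative weighting)."""
    speed = 1.0 / max(f.minutes_to_exploit, 1)
    proximity = 1.0 / (1 + f.hops_to_critical_data)
    return speed * proximity

findings = [
    Finding("exposed service account", minutes_to_exploit=5, hops_to_critical_data=0),
    Finding("unpatched internal web app", minutes_to_exploit=240, hops_to_critical_data=3),
]

for f in sorted(findings, key=urgency, reverse=True):
    print(f"{f.name}: urgency={urgency(f):.3f}")
```

The point of the sketch is the inputs, not the formula: estimating `minutes_to_exploit` and `hops_to_critical_data` requires exactly the IT-operations knowledge of the internal environment that Hayes says security teams rarely have on their own.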
Ominski captured the shift when he said, “The perimeter is not the story anymore. The real question is what happens once someone is inside.”
Culture, learning, and reinforcement
In the final part of the discussion, culture took center stage.
Swanson noted that professionals are facing constant learning demands at the same time experienced staff are retiring. That creates a need for continuity through leadership, shared practices, and accessible material. “The CRC series,” Swanson said, “aims to capture and share that experience in a reusable way.”
Hayes connected culture directly to his second book in the series, Relevant Impact. The first volume, Integrated Assurance, focused on foundations and structure; the second looks at practical implementation and behavior. He is skeptical that annual or occasional awareness training can deliver what organizations expect from it. People face real pressures in their daily lives. Stress, time constraints, and competing priorities affect how they respond to phishing attempts or suspicious activity, regardless of prior training.
“We cannot train the human out of being human,” he said. “That is why people will remain a constant attack vector.”
Instead, Hayes argued for processes and systems that anticipate human limitations and positively reinforce the reporting of security issues. Staff should be recognized for surfacing problems early. Governance should not only set rules but reinforce good behavior and make it easier to do the right thing. “Developers should feel comfortable raising concerns when they see poor software coding practices,” Hayes said, “or things like user credentials being exposed in a design workflow.”
Hayes went on, “In one incident response situation, the organization had a written plan that looked solid. However, when the event occurred, almost none of the people named in the plan still worked there.” The document gave a false sense of readiness. Only regular refreshes and real-time awareness can avoid that trap.
Security, Hayes concluded, has always been a risk and business issue first. In his view it sits closer to business strategy than most people assume, second only to finance in its influence on how an organization can grow, absorb shocks, and maintain trust.
Ominski closed with an observation that tied the full conversation together. “We have talked about technology and culture and governance,” he said. “What it really comes down to is whether organizations are ready to operate with the level of clarity this new environment demands.”
By the end of the session, one message was clear. Tools are changing, threats are moving faster, and AI is adding new uncertainty, but the core work remains the same. Organizations must connect architecture, operations, risk, and culture in a more deliberate way if they want real assurance rather than simply more data.
