by Larry Clinton
The greatest cyber risk an organization can have is doing a faulty cyber-risk assessment.
This is one of the key insights from Doug Hubbard’s paradigm-shifting book “How to Measure Anything in Cybersecurity Risk”.
While in Chicago this week to deliver a series of Master Classes on the Economics of Cyber Risk for the National Association of Corporate Directors (NACD), I had an opportunity to meet with Mr. Hubbard. His work is in the vanguard of the movement to finally apply systematic, empirical assessment to cyber risk, enabling enterprises to calibrate realistic risk tolerances, budgets, and mitigation plans.
Hubbard’s work is already being adapted into functional tools available in the market, such as X-Analytics and Factor Analysis of Information Risk (FAIR), which are in turn being integrated into advanced technology offerings and insurance products. Literally thousands of private companies are now using these methods, taking the step into the next generation of cybersecurity: one with the potential to actually demonstrate success.
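To make the contrast with checklist scoring concrete, here is a minimal sketch of the kind of Monte Carlo loss simulation that underpins FAIR-style quantification: a calibrated estimate of loss-event frequency combined with a range estimate of loss magnitude, simulated to produce a distribution of annual loss. This is an illustration, not code from Hubbard’s book or any vendor tool, and every parameter value is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical calibrated inputs for one risk scenario (illustrative only).
events_per_year = 0.5       # expected loss-event frequency
loss_median = 250_000.0     # median loss per event, in dollars
loss_p90 = 2_000_000.0      # 90th-percentile loss per event

# Derive lognormal parameters from the median and the 90th percentile.
mu = np.log(loss_median)
sigma = (np.log(loss_p90) - mu) / 1.2816   # 1.2816 = z-score of the 90th pct

trials = 100_000
annual_losses = np.empty(trials)
for i in range(trials):
    # Number of loss events in the simulated year (Poisson arrivals),
    # each with a lognormally distributed loss magnitude.
    n_events = rng.poisson(events_per_year)
    annual_losses[i] = rng.lognormal(mu, sigma, n_events).sum()

# Outputs a board can act on: an expected loss and an exceedance
# probability, rather than a colored cell on a heat map.
print(f"Mean annual loss:      ${annual_losses.mean():,.0f}")
print(f"P(annual loss > $1M):  {(annual_losses > 1_000_000).mean():.1%}")
```

The design point is that the outputs are in dollars and probabilities, the same units used to evaluate every other enterprise risk, so they can be compared directly against budgets and stated risk appetite.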
This is one of the rare pieces of good news we hear in the cybersecurity space.
These modern cyber-risk assessment methods are a generational leap beyond the previous models, which were dominated by lengthy lists of controls, often enforced by regulatory mandates. Those checklists are typically amassed into various frameworks and loaded into crude matrices and heat maps that are long on heat and short on shedding any light on actual cyber-risk management.
One of the most compelling elements of Hubbard’s work is his thorough review of the research literature on the impact of these control matrices and heat maps, which leads him to conclude that “there is not a single study indicating that the use of such methods actually helps to reduce risk.” (Where have I heard that before?)
Even worse, Hubbard carefully walks through the currently mainstream methods and demonstrates that, owing to the fuzzy terminology and bogus math embedded in these out-of-date models, their use can actually be counterproductive as a basis for setting risk parameters and making judgments about enterprise risk appetite and budgets. The result is that our inherently vulnerable cyber-risk posture is made even worse.
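To see the kind of problem Hubbard documents, consider a small, hypothetical example of the range compression built into ordinal scoring: two risks that land in the same cell of a 1-5 heat map, and therefore receive the identical score, even though their expected losses differ by orders of magnitude once real numbers are attached. The figures below are invented purely for illustration.

```python
# Two hypothetical risks scored on a typical 1-5 ordinal heat map.
# Suppose the "3" band for likelihood covers 2%-20% per year and the
# "3" band for impact covers $100K-$10M; the ordinal scale hides the spread.
risk_a = {"prob": 0.02, "loss": 150_000}     # low end of both bands
risk_b = {"prob": 0.20, "loss": 9_000_000}   # high end of both bands

for name, r in [("Risk A", risk_a), ("Risk B", risk_b)]:
    heat_score = 3 * 3  # likelihood x impact: identical for both risks
    expected_loss = r["prob"] * r["loss"]
    print(f"{name}: heat-map score = {heat_score}, "
          f"expected annual loss = ${expected_loss:,.0f}")

# Both risks score 9 on the heat map, yet Risk B's expected annual loss
# ($1,800,000) is 600 times Risk A's ($3,000).
```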
As I discuss the cyber threat and how to address it with attendees at the NACD conference, I’m struck by the sense of cyber-budgeting exhaustion that is beginning to take hold in this community. Recent years have seen enterprise cybersecurity spending soar (several multiples higher than in the government space). But no element of an enterprise can have its budget increased by 20-30% every year, especially if it can’t document effectiveness. The ability to assess cyber risk in economic terms, using models that roughly approximate the methods applied to other enterprise risks, is a welcome innovation that will enable wiser allocation of perpetually scarce cyber resources.