Cyber Risk Scoring

Introduction

In this blog, we will explore different Cyber Risk Scoring (CRS) algorithms, examine real-world examples of WMDs and their societal impact, and consider how those lessons apply to CRS, a burgeoning field in cybersecurity. We will also delve into the mechanics of CRS models, their benefits, and the risks of deploying them without sufficient oversight.

In a world increasingly driven by data, mathematical models and algorithms play a pivotal role in decision-making. However, when these models are poorly designed or misused, they can become what Cathy O’Neil termed “Weapons of Math Destruction” (WMDs): tools that cause more harm than good. WMDs are characterized by their opacity, scale, and potential for harm, making them dangerous when left unchecked.


Cyber Risk Scoring Algorithms

Cyber risk scoring algorithms are frameworks or models used to assess and quantify the risk associated with cyber threats and vulnerabilities. These scores help organizations prioritize risk mitigation efforts and allocate resources effectively. These algorithms range from simple mathematical formulas to sophisticated, multi-factor models that provide deeper insights. Cyber risk scoring is integral to cybersecurity frameworks and risk management strategies, helping stakeholders make informed decisions.


1. Simple Cyber Risk Scoring Algorithms

a. Risk = Probability x Impact

  • This is a fundamental and widely used formula to calculate risk.
  • Description: Probability refers to the likelihood of a cyber event occurring. Impact refers to the severity of consequences if the event happens. The product of these two factors gives a basic risk score.
  • Example: If the probability of a ransomware attack is 70% (0.7) and its impact is critical, rated as 100, then the risk score is: Risk Score = 0.7 x 100 = 70
  • The OWASP Risk Rating Methodology uses this simpler approach: likelihood and impact factors are each estimated on a 0–9 scale and bucketed into levels (roughly 0–<3 = Low, 3–<6 = Medium, 6–9 = High), which are then combined into an overall severity.

Pros: Simple, intuitive, and easy to use.

Cons: May oversimplify risk and lack granularity.
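To make this concrete, here is a minimal Python sketch of the formula; the probability and impact values are the illustrative numbers from the example above, not prescribed constants.

    def risk_score(probability: float, impact: float) -> float:
        """Basic risk score: likelihood (0-1) times impact (0-100)."""
        return probability * impact

    # Ransomware example from the text: 70% likelihood, critical impact of 100.
    print(risk_score(probability=0.7, impact=100))  # 70.0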


2. Complex Cyber Risk Scoring Algorithms

a. CVSS (Common Vulnerability Scoring System)

  • Purpose: A standardized framework for rating the severity of software vulnerabilities.
  • Components: Base Score: intrinsic properties of a vulnerability (e.g., attack vector, attack complexity, confidentiality impact). Temporal Score: changing factors like exploit maturity and patch availability. Environmental Score: customizable based on the organization’s environment.
  • Scoring Range: 0 to 10, bucketed into severity ratings: None (0.0), Low (0.1–3.9), Medium (4.0–6.9), High (7.0–8.9), and Critical (9.0–10.0).

Example:

  • A vulnerability that is exploitable over the network with high impact on confidentiality and integrity may score 8.5 (High).
  • This is a simplified view of CVSS; the full specification is published by FIRST. A sketch of the base score arithmetic follows below.
  • It is important to note that CVSS measures the severity of a vulnerability, not risk.
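The sketch below implements a simplified CVSS v3.1 base score for the Scope: Unchanged case, using the metric weights from the specification; it omits Scope: Changed handling and the Temporal and Environmental layers, so treat it as illustrative only.

    import math

    # CVSS v3.1 metric weights (Scope: Unchanged), per the specification.
    AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}  # Attack Vector
    AC = {"L": 0.77, "H": 0.44}                        # Attack Complexity
    PR = {"N": 0.85, "L": 0.62, "H": 0.27}             # Privileges Required
    UI = {"N": 0.85, "R": 0.62}                        # User Interaction
    CIA = {"H": 0.56, "L": 0.22, "N": 0.0}             # C/I/A impact

    def roundup(x: float) -> float:
        # CVSS "round up to one decimal place" (simplified).
        return math.ceil(x * 10) / 10

    def base_score(av, ac, pr, ui, c, i, a):
        iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
        impact = 6.42 * iss
        exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
        if impact <= 0:
            return 0.0
        return roundup(min(impact + exploitability, 10))

    # Network-exploitable, low complexity, no privileges or user interaction,
    # high confidentiality and integrity impact, no availability impact:
    print(base_score("N", "L", "N", "N", "H", "H", "N"))  # 9.1 (Critical)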

b. OpenSSF Scorecards

  • Purpose: Evaluates open-source project security.
  • Criteria: Use of security best practices (e.g., dependency updates, static code analysis). Scores are produced from a combination of heuristics and automated checks, and the output helps identify risks in open-source projects (see the query sketch below).
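As an illustration, Scorecard results for public repositories can be fetched from the project’s public API; the repository path below is just an example, and the field names assume the API’s current JSON schema.

    import json
    import urllib.request

    # Query the public OpenSSF Scorecard API for a repository's latest results.
    url = "https://api.securityscorecards.dev/projects/github.com/ossf/scorecard"
    with urllib.request.urlopen(url) as resp:
        result = json.load(resp)

    print(result["score"])                    # aggregate score on a 0-10 scale
    for check in result["checks"]:
        print(check["name"], check["score"])  # per-check scores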

c. SVCC (Severity, Vulnerability, Countermeasure, Criticality) Model

  • Purpose: A structured approach to assess and prioritize risks by evaluating assets, vulnerabilities, and mitigating measures.
  • Components: Severity: The impact of the risk if exploited. Vulnerability: The extent to which the asset is exposed to threats. Countermeasure: Effectiveness of existing security controls in mitigating the threat. Criticality: Importance of the asset to the organization’s operations.
  • Formula: Risk Score = (Severity x Vulnerability x Criticality) / Countermeasure
  • Example: If a critical server (criticality 10) has a vulnerability of severity 8 and exposure 0.9, mitigated by a countermeasure with effectiveness 2, the risk score is: Risk Score = (8 x 0.9 x 10) / 2 = 36 (see the sketch below).

Pros: Comprehensive and adaptable to different organizational needs.

Cons: Requires detailed input and accurate data for meaningful results.
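A minimal Python sketch of this calculation, assuming the scales implied by the example above (severity and criticality on 0–10, vulnerability exposure on 0–1, countermeasure effectiveness of 1 or more):

    def svcc_risk_score(severity: float, vulnerability: float,
                        countermeasure: float, criticality: float) -> float:
        # Higher severity, exposure, and asset criticality raise the score;
        # stronger countermeasures lower it.
        return (severity * vulnerability * criticality) / countermeasure

    # Example from the text: severity 8, exposure 0.9,
    # countermeasure effectiveness 2, criticality 10.
    print(svcc_risk_score(8, 0.9, 2, 10))  # 36.0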

d. Risk Rating Services

Organizations often rely on specialized third-party risk rating services to assess the cyber risk posture of their vendors, partners, and supply chain. These services use external-facing data and proprietary algorithms to generate risk scores without requiring internal access to third-party systems. Popular services include BitSight, SecurityScorecard, RiskRecon, and others.


3. Cyber Risk Quantification (CRQ) Algorithms (Dollar Value Assignment)

In addition to providing risk scores, some advanced algorithms quantify risks in financial terms to help businesses understand the monetary impact of cyber threats. This approach transforms abstract risks into actionable business insights.

a. FAIR (Factor Analysis of Information Risk)

  • A quantitative risk assessment model that calculates the financial impact of cyber risks.
  • Core Components: Loss Event Frequency: how often a loss event is expected to occur within a given timeframe. Loss Magnitude: the estimated financial loss when it does.
  • Output: Provides risk in terms of dollars, helping align cyber risk with business objectives (a Monte Carlo sketch follows this list).
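In practice, FAIR analyses are often run as Monte Carlo simulations. The sketch below draws loss event frequency and loss magnitude from illustrative distributions; every parameter here (0.5 events per year, a lognormal magnitude with a median near $100k) is an assumption for demonstration, not a FAIR-prescribed value.

    import numpy as np

    rng = np.random.default_rng(42)
    trials = 10_000

    # Loss event frequency: assume ~0.5 loss events per year (Poisson).
    events = rng.poisson(lam=0.5, size=trials)

    # Loss magnitude per event: assume lognormal with a median near $100k.
    annual_loss = np.array([
        rng.lognormal(mean=np.log(100_000), sigma=1.0, size=n).sum()
        for n in events
    ])

    print(f"Expected annual loss: ${annual_loss.mean():,.0f}")
    print(f"95th percentile loss: ${np.percentile(annual_loss, 95):,.0f}")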

b. Cyber Insurance Models

  • Insurance companies use complex algorithms that consider factors like: Industry-specific risks. Organizational cyber hygiene. Historical attack trends.
  • The result is an estimated financial liability, which determines premium costs.

What If Risk Scoring Goes Wrong?

While cyber risk scoring methodologies help prioritize and manage risks, what if the scoring model itself is flawed or inappropriate for your organization?

A simplistic model might underestimate critical threats, leaving key vulnerabilities unaddressed, while an overly complex one could produce misleading results due to inaccurate data inputs or assumptions.

Imagine assigning a low risk score to a vulnerability because of outdated countermeasure data, only to face a costly breach. Relying on the wrong scoring methodology could create a false sense of security, misallocate resources, and ultimately expose the organization to unforeseen cyber incidents. Are you confident your risk scoring method truly reflects your risk reality?

If risk scoring methodologies are applied incorrectly or built on flawed assumptions, they can transform into “weapons of math destruction”—systems that create more harm than good.

A faulty scoring model can systematically underestimate critical risks, misguide decision-makers, and divert resources away from real threats. Worse, these models, cloaked in the appearance of objectivity and precision, can lead to dangerous complacency or blind trust in flawed results. Instead of protecting the organization, they become tools that amplify vulnerabilities, leaving you exposed to devastating cyber incidents.

Are you certain your scoring approach isn’t doing more damage than defense? Before answering that, let us examine the hazards of Weapons of Math Destruction (WMD) models.


Real-World Examples of Weapons of Math Destruction

1. Credit Scoring Models

The Problem: Credit scoring models are often opaque and rely on questionable correlations rather than causation. For example, certain algorithms penalize individuals for living in neighborhoods with lower average incomes, regardless of their personal creditworthiness.

The Impact: Millions of people are denied loans or charged exorbitant interest rates based on biased or incomplete data. These decisions can perpetuate economic inequality and limit social mobility.

2. Predictive Policing Algorithms

The Problem: Predictive policing models use historical crime data to forecast future crime hotspots. However, these models often reflect systemic biases, over-policing minority communities while underestimating crime elsewhere.

The Impact: This approach reinforces a cycle of discrimination and mistrust, with communities unfairly targeted based on biased data rather than actual criminal activity.

3. Hiring Algorithms

The Problem: Many companies use AI-driven tools to screen resumes and rank candidates. These tools have been found to penalize applicants who attended certain universities, used specific keywords, or belonged to underrepresented demographics.

The Impact: Qualified candidates are unfairly rejected, and workplace diversity suffers due to biased algorithms.

4. Healthcare Risk Models

The Problem: Algorithms in healthcare often prioritize cost over patient outcomes. For instance, a widely used healthcare algorithm in the U.S. systematically recommended less care for Black patients because it equated healthcare spending with health needs—a flawed assumption.

The Impact: Disparities in healthcare access and treatment outcomes were exacerbated, affecting the well-being of marginalized groups.

These examples illustrate how flawed algorithms can harm individuals and society, even when designed with good intentions.


When Cyber Risk Scoring (CRS) Becomes a Weapon of Math Destruction

CRS models, despite their potential to revolutionize cybersecurity, can exhibit the characteristics of WMDs if not carefully managed. Here’s how they can fail across the three aspects of Opacity, Scale, and Harm:

Opacity: Lack of Transparency in CRS Models

Examples of Opacity in CRS:

  • Black-Box Algorithms: Proprietary CRS tools often do not disclose their methodologies or assumptions. For instance, a model might assign a high-risk score to cloud adoption without clarifying that it’s based on outdated threat patterns. This lack of clarity prevents stakeholders from questioning or improving the model.
  • Over-Reliance on Historical Data: CRS models frequently depend on historical breach data, which may not reflect emerging threats like quantum computing risks or AI-driven attacks. If the process of deriving risk scores isn’t transparent, organizations could be blindsided by unforeseen threats.
  • Hidden Assumptions: A CRS model might assume that threats are evenly distributed across all assets, ignoring the fact that certain assets (e.g., critical databases) are more attractive targets. These assumptions can skew risk prioritization.

Scale: Broad Influence of CRS on Organizations and Industries

Examples of Scale in CRS:

  • Industry-Wide Adoption of Flawed Models: If a widely used CRS model underestimates supply chain risks, it could lead to underinvestment in third-party risk management across entire sectors. A single major breach could then cascade across interconnected organizations, amplifying systemic risks.
  • Automation Without Oversight: Organizations often automate decision-making based on CRS outputs. For instance, automated budget allocation might focus exclusively on risks with high monetary impacts, neglecting lower-cost but high-likelihood risks that could disrupt critical operations.
  • Compliance Reporting Errors: Regulatory bodies may require organizations to use CRS tools for compliance reporting. If the model produces inaccurate or biased results, it could mislead regulators and create systemic vulnerabilities in the financial or healthcare industries.

Harm: Negative Consequences of Misapplied CRS Models

Examples of Harm in CRS:

  • Misallocated Resources: A CRS model might recommend heavy investment in ransomware protection while downplaying insider threats due to a lack of sufficient data on insider incidents. This imbalance can leave critical gaps in an organization’s defenses.
  • False Sense of Security: Over-reliance on CRS outputs can lead to complacency. For instance, if a model inaccurately predicts a low likelihood of phishing attacks, an organization might deprioritize employee training, leaving it vulnerable to simple but effective social engineering tactics.
  • Reputational and Financial Damage: Consider a CRS model that significantly underestimates the potential cost of a data breach. If a breach occurs, the organization could face severe financial penalties, lawsuits, and reputational damage far exceeding the predicted impact, all because of flawed modeling.
  • Exclusion of Non-Quantifiable Risks: Certain risks, like the reputational impact of ethical lapses or privacy violations, may not be easily quantifiable. If CRS models fail to account for these, organizations could overlook significant risks with long-term consequences.

Ensuring CRS Models Stay Ethical

To prevent CRS models from becoming WMDs, organizations should:

1. Prioritize Transparency

  • Choose CRS tools that disclose methodologies and assumptions.
  • Encourage stakeholders to challenge and validate the model’s results.

2. Embrace Diversity in Data

  • Use data that reflects a wide range of scenarios, avoiding overreliance on historical trends that might exclude emerging risks.

3. Continuously Validate Models

  • Regularly test CRS outputs against real-world events to ensure accuracy (a simple backtesting sketch follows below).
  • Update models to reflect changes in the threat landscape.
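One concrete validation technique is to backtest the model’s predicted likelihoods against observed incidents, for example with a Brier score. The sketch below is purely illustrative; the predictions and outcomes are made-up numbers.

    def brier_score(predicted: list[float], observed: list[int]) -> float:
        # Mean squared error between predicted probabilities and outcomes
        # (1 = incident occurred, 0 = it did not). Lower is better; always
        # predicting 0.5 scores 0.25, so aim well below that.
        return sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted)

    # Model-predicted incident likelihoods for five assets over a review
    # period, and what actually happened.
    predicted = [0.9, 0.2, 0.7, 0.1, 0.4]
    observed = [1, 0, 1, 0, 1]
    print(round(brier_score(predicted, observed), 3))  # 0.102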

4. Incorporate Human Oversight

  • Use CRS models as a complement to, not a replacement for, human judgment.
  • Engage cross-functional teams to interpret and act on CRS results.

5. Align Metrics with Reality

  • Focus on metrics that matter, such as cost of downtime, data breach penalties, and recovery time objectives, rather than abstract scores.

Conclusion

Cyber Risk Scoring models offer a promising way to demystify cybersecurity and align it with business priorities. However, without proper design, governance, and oversight, these tools risk becoming the cybersecurity equivalent of Weapons of Math Destruction.

By learning from the mistakes of other fields and embedding ethical principles into CRS practices, we can ensure these models serve as a force for good, enabling organizations to navigate the complex digital landscape with confidence and integrity.
