Host Based Risk Scoring (Part 1): How do you calculate Risk?
Hey all! This is the first post in a series about the concepts of a Host Based Risk Scoring System. This is an idea I had a few years ago (Spring 2012), while doing a lot of testing of McAfee and Symantec host products. The work involved trying to determine how effective the products were against varying attack vectors and post-exploitation movement. One of the attack vectors was embedding custom shellcode in an Excel macro. The attack succeeded and neither product alerted on it, yet I still haven't seen a system implement the methodologies this series describes.
This raises a fierce debate about whether macros are a useful function in an enterprise or simply a security risk. The "Security vs. Functionality vs. Ease-of-use" debate has been going on for years: as security increases, functionality and ease of use decrease by a proportional amount. The only way to be 100% secure is to disconnect your computer from the network and turn it off.
So this raises the question: "When is it appropriate to increase security and decrease functionality?" STRATCOM touched on this in 2006 when they released SD 527-1, which covers "Information Operations Condition (INFOCON)" (PKI certificate required, but you may be able to find other sources via Google). SD 527-1 outlined different levels of security based on the threat to the DoD's cyber assets, as well as what measures should be implemented depending on that threat. Measures such as re-baselining all systems, validating accounts, and changing passwords were addressed, among many others.
The idea of adjusting a system’s security based on the threat (or more specifically the “Risk”) is intriguing. The first piece of the puzzle is trying to determine how we can calculate the Risk to a system.
Many people use the term "Risk" synonymously with vulnerability. Determining the true risk to a system would ultimately take into account ALL of the following measurements:

Vulnerability Level
Calculated dynamically on the system by scanning for software and configuration vulnerabilities. The system must be scanned on a regular basis using a vulnerability scanning application that resides on the system.

Potential of Compromise
Calculated dynamically on the system from sensor input of host-based intrusion detection systems, antivirus, firewall, log auditing applications, and/or other monitoring tools.

Threat Level
Static value provided to the system based on threat intelligence. Current implementations utilize the Department of Defense Information Operations Conditions (INFOCON) levels to determine the amount of threat to the host. The level can be either global or local.

Adversarial Value
Static value based on the information contained on the system or the role the system fulfills (e.g., Domain Controller, email server, SCADA, etc.)
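To make these four measurements concrete, one option is to normalize each to a common scale (say 0.0 to 1.0) so they can be compared and later combined. The post doesn't prescribe a schema, so the sketch below is purely illustrative; the class and field names are my own.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskInputs:
    """The four risk measurements, each normalized to [0.0, 1.0].

    Hypothetical data model; the names and scale are assumptions,
    not something the original post defines.
    """
    vulnerability: float            # dynamic: on-host vulnerability scans
    potential_of_compromise: float  # dynamic: HIDS/AV/firewall/log sensors
    threat_level: float             # static: threat intel (e.g., INFOCON)
    adversarial_value: float        # static: role/data (e.g., DC, SCADA)

    def __post_init__(self):
        # Reject out-of-range scores so downstream math stays sane.
        for name, value in vars(self).items():
            if not 0.0 <= value <= 1.0:
                raise ValueError(f"{name} must be in [0.0, 1.0], got {value}")

# Example: a Domain Controller with a modest vulnerability finding.
dc = RiskInputs(vulnerability=0.2, potential_of_compromise=0.1,
                threat_level=0.4, adversarial_value=0.9)
```

Keeping the dynamic and static values in one immutable record makes it easy to recompute the dynamic fields on each scan cycle without accidentally mutating the static ones.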
Calculating the total risk score is complicated by the way the different scores relate to each other. As an example, if a system has a very high adversarial value (i.e., a Domain Controller), then the slightest vulnerability should trigger an event to protect the system. Similarly, if a system has a high vulnerability level, then even a small intrusion detection alert should prompt actions to contain a potential compromise before it causes further damage, such as privilege elevation. The following figure depicts some of these relationships:
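The post doesn't give a formula for combining the scores, but one way to capture the relationships just described is to pair a weighted average with multiplicative interaction terms, so that a high adversarial value or threat level amplifies even a small vulnerability or alert. This is my own sketch under those assumptions; the weights and the 1.5 amplification factor are arbitrary illustrations, not part of the original design.

```python
def combined_risk(vulnerability: float, potential_of_compromise: float,
                  threat_level: float, adversarial_value: float) -> float:
    """Hypothetical combination of the four scores (each in [0.0, 1.0]).

    A plain weighted sum treats the inputs independently; the product
    terms below model the amplification effects described in the text.
    """
    # Baseline: equal-weight average of the four measurements.
    base = 0.25 * (vulnerability + potential_of_compromise
                   + threat_level + adversarial_value)
    # Interactions: a slight vulnerability on a high-value host, or a
    # small sensor alert on a vulnerable or highly threatened host,
    # pushes the score up disproportionately.
    amplified = max(adversarial_value * vulnerability,
                    vulnerability * potential_of_compromise,
                    threat_level * potential_of_compromise)
    return min(1.0, max(base, 1.5 * amplified))

# A Domain Controller (high adversarial value) with a slight vulnerability
# scores higher than a workstation with the identical finding.
dc_risk = combined_risk(0.2, 0.0, 0.3, 0.9)
ws_risk = combined_risk(0.2, 0.0, 0.3, 0.1)
```

Taking the max of the baseline and the amplified term (rather than summing them) keeps the score bounded while still letting any single dangerous pairing dominate.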
Take a look at Host Based Risk Scoring (Part 2), where I go into detail about how to calculate a Vulnerability score based on items such as software vulnerabilities (CVE), configuration vulnerabilities (CCE), and the associated individual scores (CVSS).