CVSS – Is 3 The Magic Number?

We have now come to the end of our blog series discussing CVSSv3. Over the past several months, we’ve attempted to cover the many disadvantages and some advantages of this standard, and how it compares to CVSSv2. New problems have been introduced, old problems remain, but improvements have also been made.

Blog Series Feedback

As we continued to evaluate CVSSv3 and gather our thoughts, we believed it would be useful to share them as part of this blog series. We’ve received a surprising amount of feedback, which has been very much appreciated and eye-opening. The attention the series received makes it clear that even though CVSSv3 has existed since 2015, many security practitioners are still unsure about the standard and are – as expected – very slow to adopt it. It also underlines that this is a topic of interest to many who want useful vulnerability prioritization tools, which is why it is so important that the CVSS SIG listens to what the industry wants and needs.

This blog series has led to many good discussions internally at Risk Based Security and with our clients and prospects. We are happy to learn that it has also fostered similar great discussions within other organizations – even those already using CVSSv3. Hopefully, it has helped many others better wrap their heads around CVSSv3 and determine whether or not it makes sense for their organization to adopt it.

Here are some of the key points summarized from the feedback we’ve received:

  • Opinion seems split roughly 50/50 between those wanting file-based attack vectors to be treated as ‘Local’ (the CVSSv3 approach) and those preferring ‘Remote’ (the CVSSv2 approach). However, everyone agreed that the special exceptions for whether a file is opened in a browser or browser plugin vs. a third-party application have created a mess.
  • Almost everyone acknowledges that CVSSv3 still has significant flaws and is not reliable. However, the majority still believes that CVSSv3 is more accurate than CVSSv2.
  • There was strong agreement that exploit reliability and complexity – i.e. the requirement to overcome advanced exploit mitigation techniques and similar hurdles – should not be a criterion when evaluating “Attack Complexity (AC)” or any other base metric. In fact, not only does the CVSS SIG itself disregard this criterion in its provided scoring examples, it is also disregarded by practically all vendors scoring CVSSv3, seemingly with the exception of Microsoft. Instead, CVSSv3 users want “Attack Complexity (AC)” to reflect only the difficulty of mounting an attack, e.g. considering Man-in-the-Middle (MitM) vectors and similar.
  • We were asked our thoughts on the temporal and environmental scoring of CVSS. While the scope of our blog series was only the base scoring, the general sentiment shared with us was that temporal scores are broken in how they impact base scores, but that environmental scoring could be used to offset the limitations of CVSSv3. For example, the Environmental Score Modified Attack Vector (MAV) metric would allow organizations to compensate for the issues regarding file-based attacks. We believe that while environmental metrics could indeed help address these shortcomings, that is not the intended use of the new modified base metrics.
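To illustrate the point about MAV, here is a minimal sketch of the CVSSv3.0 base score calculation (Scope Unchanged only), using the metric weights from the specification. It shows how an environmental override of Attack Vector from Local to Network changes the score of a typical file-based vulnerability; the function and variable names are ours, not part of the standard:

```python
import math

# CVSSv3.0 metric weights (per the specification, Scope Unchanged)
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}
AC = {"L": 0.77, "H": 0.44}
PR = {"N": 0.85, "L": 0.62, "H": 0.27}
UI = {"N": 0.85, "R": 0.62}
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}

def roundup(x):
    # CVSSv3 "round up to one decimal place"
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, c, i, a):
    # Impact Sub-Score and Exploitability, Scope Unchanged only
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# Typical file-based code execution: AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H
print(base_score("L", "L", "N", "R", "H", "H", "H"))  # 7.8

# Same flaw with an environmental MAV:N override (CVSSv2-style "remote")
print(base_score("N", "L", "N", "R", "H", "H", "H"))  # 8.8
```

The one-point jump from 7.8 to 8.8 is exactly the kind of compensation for file-based attacks discussed above – achievable per-organization through environmental metrics, even if that is not their intended use.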

One thing that really stood out from the feedback is that many people do not find CVSSv3 to be reliable. Consider that CVSSv3 is specifically intended as a standard to assist in prioritizing vulnerability management, so it is failing to deliver on a core goal. Yet it was also interesting to hear some people state that they find the resulting scores more accurate.

Unfortunately, while our analysis does support more accurate scores for certain types of vulnerabilities, many scores still do not remotely reflect real-world impact. While no scoring system will ever be perfect, too many severe vulnerabilities related to file-based attacks are downplayed, while many other vulnerability types have inflated scores. This ultimately makes CVSSv3 less useful as a prioritization tool.

So what is our conclusion of CVSSv3 here at the end?

When we initially learned about some of the changes in CVSSv3, we were very hopeful. While we do acknowledge and see improvements over CVSSv2, the newly introduced problems – in part due to guideline changes – ultimately do not make it much better than what it supersedes. At the least, it is not what we had hoped for and what we believe the industry needs.

CVSSv3 in its current form simply has too many outstanding problems. We consider it still somewhat in the preview stage and hope that the feedback we have provided to the CVSS SIG is considered and implemented in a later revision.

As it stands, we believe most organizations will find it hard to justify investing the time and effort to convert from CVSSv2 to CVSSv3. We also believe that, with base scores generally increasing, it has the potential to backfire, as everything may seem to be high priority. The good news is that we still believe, as mentioned several times during this blog series, that it does have promise with some changes to the guidelines and tweaks to the scoring.

While we are not pleased with the current state of CVSSv3, we continue to support its development and hope to influence positive changes to the standard. In our VulnDB product we have implemented the initial phase of CVSSv3 support and have further plans to expand our CVSSv3 scoring coverage. We do this in part because we hope for future improvements that will make CVSSv3 more useful, but also because some customers are starting to ask for it – often simply because it is a newer version of the standard, not necessarily because it is perceived as better.

While we want to follow the standard as closely as possible, we are still working on our implementation to determine whether to follow the standard to the letter or to ignore or tweak a few of the guidelines to provide more accurate scoring, making it a more reliable prioritization tool. Similarly, we may ignore certain scoring examples given in the “Common Vulnerability Scoring System v3.0: Examples” document, as they are clearly based on wrong premises, as discussed in previous posts in this CVSSv3 series.

As a closing statement, we would like to highlight the goal of the FIRST CVSS SIG when designing CVSSv3: “CVSS version 3 sets out to provide a robust and useful scoring system for IT vulnerabilities that is fit for the future”.

Has CVSSv3 achieved this goal?

After performing an objective review of the standard, our response is very clear: “No, unfortunately not. Not only will it not be fit for the future as technology continues to change, it is not really fit for organizations’ needs today.”