The first in a series of three blogs by Grant and Jason on the process of identifying actionable insights.
A couple of weeks ago we discussed the process security operations teams go through to separate the signal from the noise. We reviewed the steps McAfee has undertaken in designing its Security Fusion Centers to identify the signals in our own operating environment. Getting the basics of security operations right, understanding our security architecture, and carefully assessing priorities and risk are all vital to homing in on the signals.
A study of 500 CISOs from large enterprises across the USA, UK, and Germany, published by Bromium in February, found that the average enterprise-sized security operations center (SOC) receives 4,146 alerts every single day. More than 70 percent of those (about 2,900) are false positives, but that still leaves more than 1,200 alerts to investigate daily. Our own internal data suggests the ratio is even starker: we estimate that 95% of our signals are false positives.
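The arithmetic behind those figures is straightforward. A quick sketch (the daily volume and false-positive rate come from the Bromium study; the 95% figure is our internal estimate and is not modeled here):

```python
# Daily alert volume reported in the Bromium CISO study
alerts_per_day = 4146
false_positive_rate = 0.70  # "more than 70 percent"

false_positives = round(alerts_per_day * false_positive_rate)
to_investigate = alerts_per_day - false_positives

print(false_positives)  # roughly 2,900 false positives
print(to_investigate)   # still more than 1,200 alerts left to investigate
```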
What is needed is a way to narrow the lens aperture and focus on the critical data set: accurate signals that demand decisions now. As it so often does, the cybersecurity industry takes its cue from the military, which has tackled this problem before.
This chart, published by the U.S. Joint Chiefs of Staff in 2013, describes the process by which data is collected from the operating environment and is then processed and distributed in a consumable form as information. That information is then analyzed in the context of other potentially related information and presented as intelligence. Intelligence, by design, is an insight that may be acted upon.
In cybersecurity, the collection and processing actions are typically automated through various tools like event receivers, SIEM correlation engines, and endpoint detection and response (EDR) systems. The analysis phase, however, has been nearly exclusively the domain of human analysts, because data is often incomplete or lacking context.
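As a rough illustration of how the automated stages hand off to the human analyst, here is a minimal sketch of a collection-and-processing pipeline. The event shapes, field names, and the one-line correlation rule are hypothetical, not any specific SIEM's API:

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str
    raw: dict

def collect(feeds):
    """Collection stage: pull raw events from sensor feeds (automated)."""
    for feed in feeds:
        for raw in feed:
            yield Event(source=raw.get("src", "unknown"), raw=raw)

def process(events):
    """Processing stage: normalize and correlate raw events into
    consumable information. Here, a single toy severity threshold
    stands in for a SIEM correlation engine."""
    for e in events:
        if e.raw.get("severity", 0) >= 5:
            yield {"host": e.raw.get("host"), "severity": e.raw["severity"]}

# The analysis stage remains human: the analyst supplies the missing
# context that turns this information into actionable intelligence.
feeds = [[{"src": "edr", "host": "ws-12", "severity": 7},
          {"src": "fw", "host": "gw-1", "severity": 2}]]
info = list(process(collect(feeds)))
print(info)  # only the high-severity event survives processing
```

The point of the sketch is the division of labor: everything up to `info` can be automated, while judging whether the surviving event actually matters still falls to a person.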
The trick is getting these partial data sets to paint the full picture. We’re often dealing with data that is “dirty,” and complexity compounds when partial data sets are used to make complex security decisions. Doing the data wrangling needed to tell a story that estimates or predicts an outcome has, until very recently, been too complex for machines to manage.
Complexity falls away when the full picture and data set are captured. That is the toughest task in machine learning, because we often capture data that can’t be used, data that is valuable but goes unused, and data that is only partially used.
Painting the complete picture by identifying relevant patterns and clues from previous analysis is a complex process consisting of a reinforcing loop of education and information. Learning to spot the most relevant signal requires a teacher and an apt pupil. We’ll have a look at that team in our next blog.
McAfee technologies’ features and benefits depend on system configuration and may require enabled hardware, software, or service activation. Learn more at mcafee.com. No computer system can be absolutely secure.
McAfee does not control or audit third-party benchmark data or the websites referenced in this document. You should visit the referenced website and confirm whether referenced data is accurate.