I recently had the opportunity to brief an industry analyst on the rapid advancement of artificial intelligence (AI) in public cloud security. Both the analyst and I navigated the inception and commercialization of intrusion prevention systems (IPS), and we have shared the same skepticism for many years: just because a security technology is capable of preventing a threat or an active attack doesn’t mean customers will actually operate it in protection mode.
Even today, I’d estimate that 80% of network IPS deployments run in detection-only mode; host-based IPS fares somewhat better, but is still likely deployed as detect-and-alert-only in 60% of enterprise environments.
During our conversation, a question arose for both of us, two IPS-wary veterans: are customers prepared to turn on protection? Or, more specifically, why would customers enable protection now? Since that initial discussion, I’ve been looking for a more complete way to articulate why I think IPS history fails to answer whether “protection” will inevitably be enabled by default for businesses operating from the cloud.
With or without the “next generation” (NG) prefix, an IPS is expected to operate in a standalone manner — a bit like a firewall — to independently analyze a local stream of traffic or data, identify specific events or attack techniques, raise an alert, and (if operated in prevention mode) block or terminate that stream. Enterprise security teams are typically reluctant to enable IPS blocking due to three critical objections:
1. Prevention decisions are made in real time at the IPS device, reliant upon inspecting streaming traffic at that precise time and physical location within the network. The IPS is therefore not “context aware,” which leads to high rates of false positives unless expert environmental tuning is performed regularly.
2. Prevention actions are performed locally against the stream of traffic and data being inspected. While the IPS may be best placed to detect a particular threat at that physical location within the enterprise, it is often not the optimal place to take a preventative action.
3. The prevention methods available to an IPS are fairly crude, ranging from firewall-level blocking of traffic to injecting TCP resets to terminate sessions, and they rely on coarse-grained response parameters (e.g., IP address, port number, protocol).
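To make the third objection concrete, here is a minimal sketch (all names hypothetical, not any vendor’s API) of the narrow response surface a traditional inline IPS exposes: the only handles are the flow’s network tuple, and the only actions are all-or-nothing.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    """The coarse-grained parameters an IPS can key a response on."""
    src_ip: str
    dst_ip: str
    dst_port: int
    protocol: str  # "tcp" or "udp"

def ips_prevent(flow: Flow, action: str) -> str:
    """Return the crude control an inline IPS can apply to a flow.

    action: "block" -> firewall-level drop of matching traffic
            "reset" -> inject TCP resets to tear down the session

    Note the granularity: everything matching this tuple is affected,
    legitimate traffic included -- one source of the feared false positives.
    """
    if action == "reset" and flow.protocol != "tcp":
        raise ValueError("TCP resets only apply to TCP sessions")
    if action == "block":
        return f"drop {flow.protocol} {flow.src_ip} -> {flow.dst_ip}:{flow.dst_port}"
    if action == "reset":
        return f"tcp-reset {flow.src_ip} <-> {flow.dst_ip}:{flow.dst_port}"
    raise ValueError(f"unknown action: {action}")
```

There is no notion here of user identity, device posture, or business context; the device can only drop or reset what it sees in front of it, which is exactly why teams hesitate to arm it.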
When it comes to enterprise security and threat visibility, public cloud and its wielding of AI are, quite simply, game changers.
At its core, the requirement to meter and bill customers for compute, storage, and traffic, accompanied by the transparent capability to dynamically balance workloads and elastically scale on demand, affords a level of environmental visibility and control unheard of in traditional enterprise network architectures.
While most public cloud providers’ built-in security products share the same nomenclature as their enterprise network cousins, they bear little resemblance under the hood: they are architected specifically for that provider’s cloud and benefit from unique environmental visibility, shared logging and alert management, cross-product analytics, built-in automation and orchestration APIs, and increasingly advanced AI capabilities.
Few of these advancements are immediately visible to public cloud customers, so why will security teams enable and allow “protection” in the cloud like they never did with IPS? I believe the technical answer lies in a combination of the following:
- “Protection” decisions are automatically applied to the most efficient and natural place in the cloud environment rather than just the location where primary detection may have taken place. This enables greater precision when blocking.
- Mitigation steps don’t have to be harsh all-or-nothing controls and can instead be distributed among multiple security products and cloud applications simultaneously, which greatly reduces possible negative business impact and adverse user experiences — for example, combining conditional access controls with network traffic throttling when handling a suspicious user event initiated from a shared, trusted remote device.
- Cross-product visibility and threat telemetry are combined, allowing intelligent systems to identify new threats and reach decisions on what and how to mitigate at lower thresholds and with higher confidence than stand-alone single-source protection products.
- Threat detection precision, anomaly identification and labeling, and overall detection confidence levels have increased as legacy signature and statistics-based approaches have been replaced with high-accuracy supervised learning models and behavioral anomaly capabilities.
While the technical capabilities of cloud security offerings lend themselves to higher confidence and trust in their protection ability, I think that two more important dynamics are driving “protection by default” adoption in the cloud faster than on-premises.
First, the escalating volume, sophistication, and speed of attacks is forcing organizations to respond to threats more quickly and in a more automated fashion than ever; it’s simply easier to enable protection preemptively and tune out business exceptions afterward.
Second, and likely most important, is that the majority of businesses moving to public cloud do not have in-house information security expertise. Put simply, security alerts are unactionable and a distraction for them. They demand a secure platform to run their business and expect the cloud provider to fully protect them.
Gunter Ollmann serves as CTO for security and helps drive the cross-pillar strategy for the cloud and AI security groups at Microsoft. He has over three decades of information security experience in an array of cyber security consulting and research roles.