Tal Klein

7 October 2020

4 min read

As DevOps professionals debate the pros and cons of using the edge versus the cloud, there are many aspects to consider, explains Tal Klein, CMO at Rezilion. Klein relies on his cloud workload protection expertise to share insights on data security, the growing intricacies of edge architecture, and the importance of implementing immutability into workloads.

Give a brief summary of Rezilion and what your company does.

Rezilion is a cloud workload protection platform that enables DevOps and security teams to protect complex cloud workloads effortlessly, using their existing DevOps tools and without additional policies or agents. We help on both sides of the CI/CD lifecycle: in build time (the left side), Rezilion reduces attack surface and vulnerability patching by 67%. In runtime (the right side), Rezilion autonomously detects breaches, mitigates exploits, and assures workloads are running in their desired state. Gartner calls what we do “Desired State Enforcement.”

How do you see the devops field evolving as more compute power and applications move toward the edge?

One of the interesting side effects of edge vs. cloud is that the shared responsibility model becomes murkier than ever. Who is responsible for data security in the cloud? The cloud service provider is responsible for the security of the infrastructure; the application service provider for the security of their services and for ensuring all known vulnerabilities are patched; the edge device provider for the local data; and the customer for setting the proper policy around security governance, risk, and compliance. It’s increasingly complicated, so there’s a growing need for security-as-code, meaning security baked into the infrastructure itself in order to remove all these ambiguities.
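The security-as-code idea above can be sketched as policy rules that live alongside the application and run in the pipeline, so governance is enforced by code rather than by after-the-fact review. The policy names, workload fields, and rules below are illustrative placeholders, not any real product's API:

```python
# Hypothetical security-as-code check run in CI: the policy is versioned
# with the code, and a violation fails the build automatically.

POLICIES = [
    # (policy name, rule that must hold for a compliant workload)
    ("no_root_user", lambda w: w.get("user") != "root"),
    ("read_only_filesystem", lambda w: w.get("read_only", False)),
    ("owner_label_present", lambda w: "owner" in w.get("labels", {})),
]

def evaluate(workload: dict) -> list[str]:
    """Return the names of policies the workload violates."""
    return [name for name, rule in POLICIES if not rule(workload)]

# An example workload definition, as a pipeline might see it.
workload = {"user": "app", "read_only": True, "labels": {"owner": "team-edge"}}

violations = evaluate(workload)
if violations:
    raise SystemExit(f"Policy violations: {violations}")  # fail the build
```

Because the rules are plain code, the same check runs identically on a laptop, in CI, and at the edge, which is the point of removing ambiguity from the shared responsibility model.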

What are some of the unique challenges of managing devops on edge devices?

As more capabilities are pushed out to the edge, edge architecture is becoming more complex and more important to secure, yet the daylight between devops and security is growing, not narrowing. Since edge software is largely built using containers, ensuring those containers’ attack surface is as small as possible is critical. Keeping those containers compliant is hard enough to achieve — keeping them secure is even more difficult, often forcing difficult tradeoffs between security and functionality.
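One way to reason about shrinking a container's attack surface is to compare what the image ships (e.g., from an SBOM) with what the application actually loads at runtime, and treat the difference as removable bloat. The package names here are made up for illustration:

```python
# Toy sketch of attack-surface reduction: anything shipped in the image
# but never loaded by the application is a candidate for removal.

def removable(shipped: set[str], loaded: set[str]) -> set[str]:
    """Packages present in the image but never used at runtime."""
    return shipped - loaded

# Hypothetical inventory: what the image contains vs. what actually runs.
shipped = {"openssl", "curl", "python3", "gcc", "perl"}
loaded = {"openssl", "python3"}

print(sorted(removable(shipped, loaded)))  # ['curl', 'gcc', 'perl']
```

In practice this is the logic behind minimal base images and multi-stage builds: build tools and convenience utilities never reach the runtime image, so they cannot be exploited there.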

What are some key considerations developers must keep in mind when deploying applications at the edge?

Since we are a cloud workload protection company, I’ll focus on risk as a key consideration. It’s crucial to define clear requirements and an acceptable level of risk for edge applications: applications at the edge often have limited clarity of requirements at the outset and must accept some delivery risk. Defining risk appetite is important, but more important is having clear visibility into which components of the application are truly risky and which represent unnecessary bloat. Without a reliable mechanism for continuous adaptive risk and trust assessment, applications at the edge risk dramatically increasing both vendor and enterprise attack surface.

How do compute and connectivity limitations of edge devices change the way developers must think about their apps and services?

First, edge computing devices must be assumed to be heterogeneous and subject to both sanctioned and unsanctioned modification — this means that resilience to change (immutability) must be an architectural consideration. Second, constant network connectivity must not be assumed. Intermittent or varied network connectivity requires security controls to keep providing protection even when disconnected from their management console. Third, in some cases the compute capacity available to security controls will be limited, so an architecture of low-overhead, minimum viable protection must be an option — the right blend of left-side/right-side compliance and protection-as-code capabilities yields far more resilience than traditional security tollgates. Finally, the edge computing platform should act as an aggregation point for the collection of telemetry from edge devices. As such, it should be able to monitor desired state to identify whether a device appears compromised or malfunctioning, and then — without human intervention — bring it back to a healthy state.
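The desired-state monitoring step above can be sketched as hashing each artifact at build time (the baseline), re-hashing what telemetry reports from the device, and flagging anything that drifted for automatic remediation. The file names and contents are hypothetical placeholders:

```python
import hashlib

# Minimal sketch of desired-state drift detection: the baseline is captured
# at build time, runtime telemetry is compared against it, and drifted
# artifacts are what an enforcement loop would restore without human help.

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def find_drift(baseline: dict[str, str], observed: dict[str, bytes]) -> list[str]:
    """Return artifacts whose runtime hash differs from the desired state."""
    return [name for name, data in observed.items()
            if digest(data) != baseline.get(name)]

# Desired state captured at build time.
baseline = {"app.bin": digest(b"approved build"),
            "config.yaml": digest(b"port: 443")}

# Runtime telemetry shows config.yaml was modified on the device.
observed = {"app.bin": b"approved build", "config.yaml": b"port: 8080"}

print(find_drift(baseline, observed))  # ['config.yaml']
```

A real enforcement loop would then replace the drifted artifact with the baseline copy and restart the workload, which is the "bring it back to a healthy state" step without human intervention.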

Anything else to add?

I just want to reiterate the importance of shifting to an immutable infrastructure mindset when considering edge application development. With immutable infrastructure, protection strategies shift toward application control and desired state enforcement at runtime, with a stronger emphasis on prioritizing vulnerabilities that pose actual risk (rather than those merely included in a container package). Implementing immutability into workloads ensures that only known good, approved code resides in memory during the lifetime of the workload.
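The "only known good code" principle can be illustrated as a build-time allowlist of approved code hashes, with anything absent from the list refused at runtime. Both the allowlist contents and the gate function here are illustrative stand-ins, not a real enforcement mechanism:

```python
import hashlib

# Hedged sketch of immutability-style application control: code is approved
# by hash at build time; at runtime, anything off the allowlist is refused,
# regardless of how it arrived on the device.

APPROVED_HASHES = {hashlib.sha256(b"known good module").hexdigest()}

def may_execute(code: bytes) -> bool:
    """True only if this exact code was approved at build time."""
    return hashlib.sha256(code).hexdigest() in APPROVED_HASHES

print(may_execute(b"known good module"))  # True
print(may_execute(b"tampered module"))    # False
```

The design choice is that approval is tied to content, not location or name: a single flipped byte produces a different hash, so tampered or injected code fails the check even if it replaces an approved file in place.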