The following article originally appeared in Container Journal
If you have worked in cloud computing, DevOps or related fields in recent years, you’ve no doubt come across Kubernetes. One of the earliest and most popular open source container orchestration systems, Kubernetes, also known as K8s, enables development teams to group the containers that make up an application into logical units for easy administration and discovery in cloud environments. There are other container orchestrators out there—Nomad from HashiCorp, Red Hat’s OpenShift, Helios and Azure Container Instances are all popular alternatives. But there are just as many Kubernetes-specific management services, such as Google Kubernetes Engine (Kubernetes originated as a Google project), Amazon Elastic Kubernetes Service and Azure Kubernetes Service.
Obviously, there is a lot of demand out there for container orchestration, and there are a lot of different ways to take advantage of what K8s has to offer. Check out the case study section of the Kubernetes website, and you will see accolades from all areas of the developer community. Developers working at brands like Spotify, Adidas, IBM, Nokia, Box—the list goes on and on—all have great things to say about Kubernetes. And with good reason, because it:
- Excels at accelerating deployment times for new applications. In today’s DevOps environments that embrace continuous integration/continuous delivery (CI/CD) of new applications, speedy time-to-market is a prime directive for chief product officers and head engineers.
- Helps to reduce IT costs and optimize operations. K8s has native autoscaling through horizontal and vertical pod autoscalers, which automate resource allocation based on the needs of the application. These capabilities also greatly reduce the need for manual operations on the infrastructure.
- Simplifies and speeds migration of legacy applications. For established brands in sectors like finance, retail, transportation or health care, it is highly likely that you have core on-premises applications that have been running in your data center for a decade or more. For these brands, digital transformation is a key business objective, which means getting legacy apps onto the cloud. K8s supports fast, automated migration using containerized replatforming methodologies.
- Empowers organizations to take advantage of multi-cloud and hybrid environments. The major cloud providers—AWS, Google, Azure—all continually improve and build out their systems to compete on capabilities and price. Kubernetes ensures that you can operate your apps in whichever cloud environment works best for you. This flexibility ensures you can avoid lock-in with any individual provider and preserves your ability to work with whichever resource is the most advantageous at any given time.
- Ensures availability and scalability. One of the original, and still most important, value propositions of cloud computing is that you can deploy more processing power, storage and other hardware resources—or reduce them—as your business needs change. Depending on your deployment and your environment, however, this kind of elastic scalability is not always easily realized. Again, with autoscaling APIs, K8s enables you to dynamically scale up to handle peak loads and scale down quickly to ensure you are not spending unnecessarily, no matter which cloud service you are using.
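The horizontal pod autoscaler mentioned above sizes a workload with a simple documented rule: desired replicas = ceil(current replicas × current metric / target metric), clamped to the configured minimum and maximum. A minimal sketch of that calculation (the function name and bounds here are illustrative, not part of the Kubernetes API):

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float,
                     min_replicas: int = 1,
                     max_replicas: int = 10) -> int:
    """Core HPA scaling rule:
    desired = ceil(current * currentMetric / targetMetric),
    clamped to the configured min/max replica bounds."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# CPU at 90% against a 50% target: 4 pods scale up to 8.
print(desired_replicas(4, 90.0, 50.0))
# CPU at 20% against a 50% target: 4 pods scale down to 2.
print(desired_replicas(4, 20.0, 50.0))
```

Because the rule is proportional, a load spike scales the deployment up in one step rather than one replica at a time, and the min/max bounds keep a misbehaving metric from scaling you to zero or to runaway cost.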
Effective Yet Vulnerable
Organizations adopt Kubernetes primarily to accelerate business objectives and scale growth. But what happens when the very thing that facilitates growth becomes a liability? Unfortunately, that is the experience of many Kubernetes adopters. A recent Red Hat survey of 300 DevOps, engineering and security professionals found that 93% of respondents experienced at least one security incident in their Kubernetes environments in the last 12 months, sometimes leading to revenue or customer loss. In response to another question, 55% of respondents said they had delayed an application rollout because of security concerns over the same period. One other notable point bears consideration: The number one security concern among K8s users is not attacks (16%), it is exposures due to misconfigurations in their container and Kubernetes environments (46%).
This situation traces back directly to how complex Kubernetes is—especially regarding mastering the Kubernetes development workflow—and how difficult it can be to effectively secure K8s environments at each of their potential weak points. A typical Kubernetes implementation can have many hundreds of human and synthetic users requiring access rights to complete tasks, and unfortunately, the processes and tools necessary for securing those identities and access rights are not at all well understood within most K8s teams. The tendency within most DevOps organizations is that too many accounts end up with over-privileged access rights that remain open for extended periods of time—hours or days past when that access is strictly necessary.
The default stance within the DevOps community is that security is a top priority—up to the point where anything that gets in the way of fast deployment of new code becomes a nuisance to be avoided. The need for speed coupled with a poor understanding of security best practices is a combustible mix, and it explains why 93% of survey respondents experienced at least one security incident. But there is no good reason security issues related to access permissions and privileges remain an endemic problem in the K8s community in 2022.
Establishing New Security Best Practices for Kubernetes
Most cloud solutions with an identity engine—where Kubernetes is typically deployed—attempt to keep things simple for the administrators and users, but as we know, this leads to over-provisioned access. Newer security technologies that have come onto the market in recent years directly address these security vulnerabilities using ephemeral just-in-time (JIT) privileging, more effective secrets governance and zero-standing privileges (ZSP). Let’s consider each of these in brief detail.
Just-in-Time Privileging
Where a K8s user previously had standing access privileges that (potentially) extended around the clock indefinitely, implementing JIT reduces your attack surface by granting privileges to users on demand according to their role. With JIT, access rights expire automatically—for example, after a predefined time, at the close of a timed coding session or when an employee leaves the organization. This ensures that organizations continuously minimize their attack surface and move toward a least-privilege access (LPA) model.
Secrets Governance Enforcement
With JIT permissioning, human and synthetic IDs can quickly check out a role-based, elevated privilege profile for a specific cloud service, either for the duration of a session or task, for a set amount of time or until the user checks the profile back in manually. Once the task is complete, privileges are automatically revoked.
Zero Standing Privileges
The ability to dynamically add and remove privileges lets your DevSecOps team maintain a zero-standing-privileges (ZSP) security posture. It works on the concept of zero-trust, which means that, by default, no one and nothing is trusted with standing access to your Kubernetes account and data. In a ZSP model, human and non-human users gain access to restricted resources the moment they need them and only for as long as they need them. This JIT permission methodology results in the smallest number of open privileges at any given moment in time.
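The JIT check-out/check-in flow and the ZSP default-deny stance described above can be sketched in a few lines. This is an illustrative model only—the class and method names are hypothetical, not the API of any particular Kubernetes security product—but it captures the core properties: every access check starts from deny, grants carry a TTL, and a grant disappears on manual check-in or automatic expiry.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class Grant:
    user: str
    role: str
    expires_at: float  # seconds since epoch; the grant is void afterwards

class PrivilegeBroker:
    """Illustrative zero-standing-privileges broker: access exists
    only while an unexpired just-in-time grant is checked out."""

    def __init__(self) -> None:
        self._grants: Dict[Tuple[str, str], Grant] = {}

    def check_out(self, user: str, role: str, now: float, ttl: float) -> Grant:
        """Grant an elevated role-based profile for ttl seconds."""
        grant = Grant(user, role, now + ttl)
        self._grants[(user, role)] = grant
        return grant

    def check_in(self, user: str, role: str) -> None:
        """Manual check-in: revoke the grant immediately."""
        self._grants.pop((user, role), None)

    def is_allowed(self, user: str, role: str, now: float) -> bool:
        """Default deny: allow only while an unexpired grant exists."""
        grant = self._grants.get((user, role))
        return grant is not None and now < grant.expires_at

broker = PrivilegeBroker()
print(broker.is_allowed("ci-bot", "deployer", now=0))     # nothing is trusted by default
broker.check_out("ci-bot", "deployer", now=0, ttl=900)    # 15-minute session
print(broker.is_allowed("ci-bot", "deployer", now=600))   # allowed within the TTL
print(broker.is_allowed("ci-bot", "deployer", now=1200))  # expired automatically
```

Note the design choice: expiry is evaluated at check time rather than by a background job, so a forgotten grant can never be "left open"—the moment its TTL passes, every access check fails.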
The good news with Kubernetes security is that we know what the problem is: Misconfigurations associated with access privileges. We also know that existing open source security solutions aimed at the K8s market are widely seen as having limited effect and have not been broadly adopted—KubeLinter, at 36%, is the only one in use in at least a third of K8s deployments. The most successful model for Kubernetes security will be one built on zero-trust that embraces ephemeral JIT privileges, strong secrets governance and ZSP. We know the way forward. Let’s start moving in that direction.