Containers and Security: Part 1

Containers have become nearly ubiquitous in modern web infrastructure. They are a fundamental building block of distributed systems, and they feature heavily in cloud-native architecture.

This means that container security has become critically important. In this two-part article series, we’ll discuss this question: “What new security challenges do containers introduce, and how do we solve them?”

Before we get there, some groundwork must be laid. What are containers, anyway? And what benefits do they offer?

What Are Containers?

Containers are essentially a form of virtualization. Traditional virtual machines (VMs) emulate an entire node or server, presenting virtual versions of the CPU, memory, storage, and networking hardware found in a physical computing system. Cloud providers like AWS depend on this type of virtualization to power cloud computing services like EC2, giving them the capability to partition hundreds or even thousands of virtual computing nodes while using a fraction of the hardware that classic servers would demand.

In contrast with traditional virtualization, containers virtualize the underlying operating system (OS) rather than the hardware: Every container on a host shares the host's OS kernel. Both paradigms employ a one-to-many relationship: Traditional virtualization allows for many nodes partitioned from one hardware server, whereas container virtualization allows for many containers partitioned from one instance of an OS.
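To make that concrete, here is a minimal sketch in Go of the Linux kernel primitives (namespaces) that container engines build on. It re-executes itself inside new UTS, PID, and mount namespaces, so the child process gets its own hostname and process tree while still sharing the host's kernel. This is an illustration of the mechanism, not any particular engine's implementation, and it assumes a Linux host with root privileges:

```go
// A toy "container": run a command inside new Linux namespaces.
// Usage (Linux, as root): go run main.go run /bin/sh
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	if len(os.Args) < 3 {
		fmt.Println("usage: main (run|child) <command> [args...]")
		return
	}
	switch os.Args[1] {
	case "run":
		run()
	case "child":
		child()
	}
}

// run re-executes this program as "child" inside new UTS, PID, and mount
// namespaces -- the same kernel features a container runtime asks for.
func run() {
	cmd := exec.Command("/proc/self/exe", append([]string{"child"}, os.Args[2:]...)...)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	must(cmd.Run())
}

// child now sees itself as PID 1 and can change the hostname without
// affecting the host, because it lives in its own namespaces.
func child() {
	must(syscall.Sethostname([]byte("container")))
	fmt.Printf("running %v as PID %d\n", os.Args[2:], os.Getpid())
	cmd := exec.Command(os.Args[2], os.Args[3:]...)
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	must(cmd.Run())
}

func must(err error) {
	if err != nil {
		panic(err)
	}
}
```

Real engines add cgroups for resource limits and layered filesystems for images, but namespaces are the core of “virtualizing the OS.”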

The power of containers lies in their portability and packaging. All the dependencies necessary to run the application in the container are packaged within the container itself. The container can run anywhere a compatible container engine is installed: a laptop or workstation, a local server, or even another VM or cloud computing node. Containers as an abstraction have enabled engineering teams to have a greater degree of flexibility than ever before.
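As a sketch of that portability, the snippet below uses Docker's Go SDK (github.com/docker/docker/client) to pull a public image and run a container from it. The same few calls behave identically on a laptop, an on-prem server, or a cloud VM, because every dependency ships inside the image. Some type names in the SDK have shifted between versions, so treat this as illustrative rather than definitive:

```go
// Pull an image and run a container from it via the Docker Engine API.
package main

import (
	"context"
	"io"
	"os"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

func main() {
	ctx := context.Background()

	// Connect to whatever Docker engine this environment provides.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}

	// Pull the image: the application plus all of its dependencies,
	// fetched as a single artifact.
	rc, err := cli.ImagePull(ctx, "docker.io/library/alpine:3.19", types.ImagePullOptions{})
	if err != nil {
		panic(err)
	}
	io.Copy(os.Stdout, rc)
	rc.Close()

	// Create and start a container from that image.
	resp, err := cli.ContainerCreate(ctx, &container.Config{
		Image: "alpine:3.19",
		Cmd:   []string{"echo", "hello from inside the container"},
	}, nil, nil, nil, "")
	if err != nil {
		panic(err)
	}
	if err := cli.ContainerStart(ctx, resp.ID, types.ContainerStartOptions{}); err != nil {
		panic(err)
	}
}
```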

Why Choose Containers?

The “run anywhere” flexibility of containers is a very attractive proposition. One of the classic problems in modern software development is maintaining close parity between development and production environments. The more homogeneous these environments are, the simpler and less error-prone deployments become.

With containers, all the dependencies are already baked in; whatever runs on a development machine will also run on a production server. This development pattern allows for a potentially immutable build artifact: The ops overhead of deploying, running, and troubleshooting production workloads is much lower when production is an exact match to what was compiled and built during development.

Because containers are built with only the dependencies they need and share the underlying OS kernel rather than booting their own, they typically can be started and stopped in a fraction of the time of a traditional VM or server. This makes them an excellent choice for workloads with demand or load spikes. Many SaaS-based services encounter varying loads over the course of days or weeks, with new product launches or features driving a sudden increase in traffic. With containers, additional capacity can be spun up quickly to meet demand, ensuring customers have a seamless experience without lag or errors. The value proposition to business and marketing stakeholders is obvious: Customer experience is crucial to the continued success of an organization, and the opportunities to get it right are limited.
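For instance, assuming a Kubernetes cluster and a hypothetical Deployment named “web” in the default namespace, a few lines against the official Go client (k8s.io/client-go) are enough to add replicas when a traffic spike hits:

```go
// Scale up a Deployment to absorb a traffic spike.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig (e.g., ~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	deployments := clientset.AppsV1().Deployments("default")

	// Read the current scale, then raise it; each new replica is a
	// container that starts in seconds rather than minutes.
	scale, err := deployments.GetScale(ctx, "web", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas += 5
	if _, err := deployments.UpdateScale(ctx, "web", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Printf("scaled to %d replicas\n", scale.Spec.Replicas)
}
```

In practice, most teams let a Horizontal Pod Autoscaler adjust the replica count automatically, but the underlying mechanism is the same.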

DevOps is a key initiative for many engineering organizations, and it depends on software and processes that enable rapid deployment and feature iteration. Because of this, the fast scaling, immutable infrastructure, and infrastructure as code capabilities offered by containers make them a great tool for teams looking to transition to a DevOps (or even better, DevSecOps) culture.

The Rise of Containers

Containers might seem like an obvious choice for any modern application deployment, and containerization has, in fact, become a popular choice for production workloads. But what else is driving their popularity?

Classic system architecture typically revolved around the idea of a monolithic application: One large application, as well as the supporting systems, would all be deployed on a single, large server or computing resource. While this reduced complexity and reliance on networking, there were some obvious problems. Any issues with the underlying server hardware typically meant a complete outage for the application. Changes or updates to any part of the application would generally require maintenance to the entire stack, again necessitating a complete outage with potentially significant downtime.

Microservices provided a new architectural pattern designed to mitigate the drawbacks of a monolithic system. Each dependent system or software would be deployed on a separate, scalable infrastructure. Changes could be deployed to any component without impacting other parts of the application, and any potential bottlenecks in a given service could be eliminated by simply scaling up that part of the service. Containers are an obvious choice in a microservices infrastructure, as their portability and scalability are uniquely suited to it.

Cloud and cloud-native services, particularly Kubernetes, depend heavily on containers as well. A key selling point for customers is the ability to flexibly deploy and scale workloads for any need. While VMs provide an improvement over legacy hardware, containerization is a further advance in terms of speed and scalability.

Containers in the Cloud

The major cloud providers all offer multiple options for deploying and utilizing containers. Customers can deploy and manage their own containers on top of vanilla computing resources, or they can take advantage of the various managed container services.

Originally launched by Google, Kubernetes has garnered heavy focus in the past couple of years and has grown into one of the most prominent cloud-native projects, with wide adoption across a variety of technology organizations. The underlying building block of a typical Kubernetes infrastructure is a container. From that building block, it enables vast, sprawling distributed application systems, including load balancing, DNS, and service discovery. Some engineering teams run their entire digital platform on Kubernetes.
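As a brief illustration of that service-discovery layer, the sketch below uses client-go to create a Service for a hypothetical set of pods labeled app=web (the names and ports here are assumptions). Kubernetes assigns the Service a stable virtual IP and a cluster DNS name and load-balances requests across every matching container:

```go
// Create a Service that load-balances across pods labeled app=web.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "web"},
		Spec: corev1.ServiceSpec{
			// Route traffic to every pod carrying this label,
			// wherever in the cluster it happens to run.
			Selector: map[string]string{"app": "web"},
			Ports: []corev1.ServicePort{{
				Port:       80,                   // port the Service exposes
				TargetPort: intstr.FromInt(8080), // port the containers listen on
			}},
		},
	}

	_, err = clientset.CoreV1().Services("default").Create(context.Background(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}
```

Inside the cluster, other workloads can now reach these pods at the DNS name web.default.svc.cluster.local, with no hard-coded addresses.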

From the Big 3 providers (AWS, GCP, and Azure), customers can choose to use EC2, Compute Engine, or Azure Virtual Machines, respectively, to run their own container infrastructure. If they wish to offload some of the administrative and operational overhead, they can use AWS's Elastic Kubernetes Service (EKS), GCP's Google Kubernetes Engine (GKE), or Azure Kubernetes Service (AKS). Engineering teams can provide a containerized build artifact and, with some configuration, have a fully featured Kubernetes cluster deployed with a minimal investment of time.

Containers Are Powerful, but There Are Tradeoffs

Containers are a powerful abstraction, providing several benefits over traditional, monolithic architecture. Their speed and flexibility have been a driving force in software engineering innovation.

However, there is a flip side: With any new technology or service come new security concerns and, ultimately, new vulnerabilities. What security issues do containers present that simply are not present in traditional architectures, and are the security tools currently available enough to mitigate them? Are there modern security solutions that support containerization and other innovations, such as service meshes?

In Part 2 of this series, we'll discuss the security benefits and potential security concerns of containers. We'll also explore the security features and possible shortcomings of the services offered by the Big 3 cloud providers.
