The emergence of a new technology paradigm typically means that everything else in the ecosystem is playing “catch-up.” In this regard, containers are no different. The dramatic shift in the landscape of computing workloads means that new challenges in domains like service discovery, management, and security lack readily available solutions, and organizations deploying containerized workloads are left wondering whether their current security solutions are adequate.
Part 1 of this series was an introduction to containers, exploring their use cases and adoption in modern distributed computing. Now in Part 2, we’ll take an in-depth look at security—a critical aspect of containers. There have been plenty of discussions about the potential security pitfalls of containerized workloads, but there are possible benefits as well. We’ll discuss these, along with the security tools and features of the Big 3 cloud providers.
Security Benefits of Containers
A pragmatic look at containers reveals that there are potential security upsides to implementing them if organizations take the time to properly plan and manage their container deployments. Containers provide several inherent operational benefits that ultimately lead to an improved security posture.
Fewer Complex Tasks, Reduced Human Error
Human behavior is often cited as the weakest link in the security chain. Consequently, any operational engineering task that depends on manual intervention is a potential vulnerability waiting to happen. In legacy infrastructure, patching and updating servers was typically a tedious task, involving long hours of engineering and operational toil, downtime, and debugging. Server or operating system updates could also introduce new bugs or performance regressions. Operational tasks like this, with a high degree of complexity or difficulty, are often left incomplete, leading to vulnerabilities. Containers have largely done away with these problems. They can be easily updated in place, with no changes to the underlying system, and when the system requires upgrading, containers can be migrated to other computing resources. Furthermore, an easier update process leads to more successful, complete updates and improved overall security.
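To illustrate this (assuming a Kubernetes-style orchestrator; the application name and image registry below are hypothetical), updating a containerized application is typically a one-line change to a deployment manifest rather than an in-place patch of a running server:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app            # hypothetical application name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        # Updating the application means bumping the image tag and
        # re-applying the manifest. The orchestrator rolls out the new
        # version with no changes to the underlying hosts.
        image: registry.example.com/web-app:1.2.4
```

Because the orchestrator handles the rollout, a failed update can be rolled back just as easily, which encourages frequent, complete patching.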
Smaller Attack Surface
Because a container typically houses a single runtime application, it also offers a smaller overall attack surface than a traditional monolithic application server. A typical application server runs several background services and exposes a wide swath of open ports, while a container runs a single, isolated program, usually with just one open port. Judicious operations engineers can lock down most of the underlying systems the containers run on, leaving open only the limited ports and services needed for the containerized workload to function properly.
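A sketch of this principle in a Dockerfile (the binary name and port are hypothetical):

```dockerfile
# Minimal base image: fewer installed packages means
# fewer potential vulnerabilities
FROM alpine:3.18

# Ship only the single application binary, nothing else
COPY ./api-server /usr/local/bin/api-server

# Document the one port this workload needs;
# everything else stays closed
EXPOSE 8443

ENTRYPOINT ["/usr/local/bin/api-server"]
```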
Less Configuration Drift
The immutability of containers also presents a sizable security benefit, even beyond their ability to be part of immutable infrastructure. Maintaining homogeneous development and production environments is a constant struggle for most DevOps organizations. Too much configuration drift between systems and deployment stages can render security audit and analysis tools far less effective. In a containerized environment, the difference between the systems on which a container is developed versus where it runs is much smaller. Immutability also means that if a security issue crops up, it will be much simpler to trace and to isolate the at-fault change.
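One common way to enforce this immutability is to pin base images by digest rather than by mutable tag, so that development, staging, and production all resolve to byte-identical images (the digest below is a placeholder):

```dockerfile
# A tag like "3.18" can silently change over time; a digest cannot.
# Pinning by digest guarantees every environment builds from the
# exact same base layers, reducing configuration drift.
FROM alpine@sha256:<digest-of-approved-base-image>
```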
Security Challenges of Containers
Containers do, however, present new challenges as well, and they require new ways of thinking to properly secure critical production data and applications. The added complexity of containerized infrastructure can make for a difficult time for personnel and security tooling alike.
The complexity inherent in container systems manifests itself at both the individual host level and across the overall system architecture. At the host level, the container engine adds complexity to the local network, plus a potentially insecure daemon and API to the list of running services. Taking Docker as an example, containers depend on a separate container subnetwork that runs on the host, and engineers need to configure this network appropriately for their use case, ensuring that only the minimum number of ports are shared with the host system. By default, the Docker daemon listens on a local Unix socket for communication and access, but controlling Docker remotely via its API (as is likely to occur in distributed environments) means binding a socket with root-level access that is now exposed to the broader network. Careful consideration should be given to properly securing the underlying substrate in this case.
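For example, rather than exposing the Docker API as an unauthenticated TCP socket, the daemon can be configured in `daemon.json` to require mutually authenticated TLS (the certificate file paths here are illustrative):

```json
{
  "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2376"],
  "tlsverify": true,
  "tlscacert": "/etc/docker/certs/ca.pem",
  "tlscert": "/etc/docker/certs/server-cert.pem",
  "tlskey": "/etc/docker/certs/server-key.pem"
}
```

With `tlsverify` enabled, only clients presenting a certificate signed by the configured CA can reach the API; 2376 is the conventional port for the TLS-protected Docker API.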
Everything is Distributed
Taking a higher-level look at the overall system architecture, the added complexity of containers and microservices can also make logical security more difficult. In legacy monolithic application infrastructures, both the application and database lived on the same server, meaning communication between different parts of an application never traversed the wider network. But in a containerized microservices deployment, virtually all parts of the application communicate over the entire network, with individual APIs and backend services deployed in containers on separate hosts. In order to make large-scale microservice infrastructure manageable, a service mesh is often needed, which in turn means that security for the service mesh is vital.
Expanded Blast Radius
In the event of an actual security compromise, containers may increase the blast radius. In many production environments, hosts run multiple containers, potentially belonging to different applications or services. If these containers are not configured securely (e.g., if they run as root), a compromise of one container could lead to the compromise of all of them, as well as the underlying host. Again, you must be thorough in your security approach to minimize these potential vulnerabilities.
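The most basic mitigation is to run each container as an unprivileged user, which a Dockerfile can declare directly (the user name, binary, and base image are hypothetical):

```dockerfile
FROM debian:bookworm-slim

# Create a dedicated, unprivileged system user for the application
RUN useradd --system --no-create-home appuser

COPY ./app /usr/local/bin/app

# All subsequent instructions, and the running container itself,
# execute as this user rather than as root
USER appuser

ENTRYPOINT ["/usr/local/bin/app"]
```

An attacker who escapes a container running as a non-root user gains far less leverage over the host and its other containers.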
Securing Cloud Containers
The Big 3 cloud providers—AWS, Google, and Microsoft Azure—all provide a variety of tools and services to support container-based deployments. Customers can either take a managed-service approach or roll their own using vanilla computing resources.
Going the managed-service route eliminates the heavy administrative and operational overhead of a self-managed container deployment; this can be ideal for smaller teams with limited operations staff. Offloading the administrative overhead also means the provider will typically assume more of the security responsibility, particularly for the underlying compute nodes. However, while managed services are typically more feature-rich, they usually carry a hefty premium and lack some of the flexibility and customization possible with self-managed deployments. For larger organizations that can field a dedicated operations team in-house, the higher costs are harder to justify.
Each provider offers different security tools and services to help manage container deployments. With its focus on Kubernetes, GCP gives customers several tools out of the box, including critical supply-chain protections for container images. Azure Security Center offers similar benefits, with continuous auditing of Kubernetes clusters, container hosts, and container images. AWS Fargate provides serverless container hosting for both Amazon ECS (Containers-as-a-Service) and Amazon EKS (managed Kubernetes). Meanwhile, AWS makes general reference to its security tools and services for containers in its documentation, but leaves it up to the customer to decide which general security services and tools best apply to a given infrastructure. AWS also provides a robust marketplace of third-party vendors and partners, where organizations can seek out purpose-built container security solutions.
Used Correctly, Containers Offer Great Advantages
Despite the challenges presented by containers, they offer several operational, development, and security advantages, especially as legacy tools and methods are often inefficient and ineffective in today’s containerized world. However, choosing to implement containers means committing to a new approach to securing your organization’s infrastructure.
Cloud providers offer different paths to container adoption: Lean teams can take advantage of managed services to outsource their operational burden, getting their applications deployed quickly and securely. Larger teams and enterprises can utilize the Big 3’s highly scalable computing resources to build their own infrastructure, making use of additional services and partner tools as needed. Modern security solutions are also available to help. Reblaze offers tools for cloud-first, containerized deployments as part of its comprehensive web security platform: a security solution that runs natively on the Big 3 cloud platforms, and can provide WAAP (Web Application and API Protection) for modern organizations.