In today’s rapidly evolving digital landscape, containerized microservices have become the lifeblood of application development and deployment. Often likened to lightweight virtual machines, containers package an application together with its dependencies so that code runs consistently in any environment, be it an on-premises server, a public cloud, or a developer’s laptop. This paradigm removes platform compatibility and library dependency concerns from the DevOps equation.
As organizations embrace the scalability and flexibility offered by containerization, they must also address the security challenges intrinsic to this architectural approach. This article highlights key threats to container infrastructure, provides insights into relevant security strategies, and emphasizes the shared responsibility of safeguarding containerized applications within a company.
Understanding the importance of containers for cloud-native applications
Containers play a pivotal role in streamlining and accelerating the development process. Serving as the building blocks of cloud-native applications, they are deeply intertwined with four pillars of software engineering: the DevOps paradigm, CI/CD pipeline, microservice architecture, and frictionless integration with orchestration tools.
Orchestration tools form the backbone of container ecosystems, providing vital functionalities such as load balancing, fault tolerance, centralized management, and seamless system scaling. Orchestration can be realized through diverse approaches, including cloud provider services, self-deployed Kubernetes clusters, container management systems tailored for developers, and container management systems prioritizing user-friendliness.
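To make this tangible, here is a minimal sketch, assuming the official Kubernetes Python client and a reachable cluster, of how an orchestrator exposes scaling as a simple declarative change; the deployment name, namespace, and replica count are hypothetical.

```python
# A minimal sketch of programmatic scaling through a Kubernetes orchestrator.
# Assumes the official "kubernetes" Python client and a reachable cluster;
# the deployment name and namespace below are hypothetical.
from kubernetes import client, config

def scale_deployment(name: str, namespace: str, replicas: int) -> None:
    """Set the replica count of a Deployment and let the orchestrator handle
    scheduling, load balancing, and fault tolerance."""
    config.load_kube_config()  # or config.load_incluster_config() when running inside a pod
    apps = client.AppsV1Api()
    patch = {"spec": {"replicas": replicas}}
    apps.patch_namespaced_deployment_scale(name, namespace, patch)

if __name__ == "__main__":
    scale_deployment("payments-api", "production", replicas=5)
```

The point is less the specific call than the model it illustrates: the operator declares a desired state, and the orchestrator does the work of converging on it.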
The container threat landscape
According to recent findings from Sysdig, a company specializing in cloud security, a whopping 87% of container images contain high or critical vulnerabilities. A fix is available for 85% of these flaws, yet many of them cannot actually be exploited because the vulnerable components are never loaded at runtime. Even so, many organizations struggle to prioritize patching: rather than hardening the roughly 15% of vulnerable components that are exposed at runtime, security teams spend time and resources on loopholes that pose little real risk.
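One hedged way to act on that insight is to prioritize only the findings that affect images actually running in the cluster. The sketch below assumes the official Kubernetes Python client and uses a made-up, simplified scan-report structure (not any particular scanner’s output format) purely for illustration.

```python
# A simplified sketch of runtime-aware vulnerability prioritization.
# Assumes the "kubernetes" Python client; the scan-report structure is an
# invented simplification for illustration, not a real scanner's format.
from kubernetes import client, config

def running_images() -> set[str]:
    """Collect the images of every container currently deployed in the cluster."""
    config.load_kube_config()
    v1 = client.CoreV1Api()
    pods = v1.list_pod_for_all_namespaces().items
    return {c.image for pod in pods for c in pod.spec.containers}

def prioritize(findings: dict[str, list[str]]) -> dict[str, list[str]]:
    """Keep only the findings whose image is actually in use at runtime."""
    in_use = running_images()
    return {image: vulns for image, vulns in findings.items() if image in in_use}

# Example: scan results for two images, only one of which is deployed.
report = {
    "registry.example.com/shop/cart:1.4": ["CVE-2023-1234", "CVE-2023-5678"],
    "registry.example.com/shop/legacy:0.9": ["CVE-2021-0001"],
}
print(prioritize(report))
```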
One way or another, addressing these vulnerabilities requires fortifying the underlying infrastructure. Apart from configuring orchestration systems properly, it’s crucial to establish a well-thought-out set of access permissions for Docker nodes or Kubernetes clusters. Additionally, the security of containers hinges on the integrity of the images used to build them.
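As a rough illustration of what well-scoped permissions can look like in Kubernetes, the following sketch, again assuming the official Python client, creates a read-only Role and binds it to a CI service account; every name in it is hypothetical.

```python
# A minimal RBAC sketch: a read-only Role plus a RoleBinding for a CI service
# account. Assumes the "kubernetes" Python client; all names are hypothetical.
# Plain dict bodies are used so the example does not depend on model class names.
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

role = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "Role",
    "metadata": {"name": "pod-reader", "namespace": "staging"},
    "rules": [{
        "apiGroups": [""],                      # core API group
        "resources": ["pods", "pods/log"],
        "verbs": ["get", "list", "watch"],      # read-only, no create/delete
    }],
}
binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "ci-pod-reader", "namespace": "staging"},
    "roleRef": {"apiGroup": "rbac.authorization.k8s.io",
                "kind": "Role", "name": "pod-reader"},
    "subjects": [{"kind": "ServiceAccount",
                  "name": "ci-runner", "namespace": "staging"}],
}

rbac.create_namespaced_role(namespace="staging", body=role)
rbac.create_namespaced_role_binding(namespace="staging", body=binding)
```

The design choice worth noting is least privilege: the binding grants only the verbs a CI job genuinely needs, rather than cluster-wide admin rights.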
Guarding containers throughout the product life cycle
A container’s journey encompasses three principal stages. The initial phase involves constructing the container and subjecting it to comprehensive functional and load tests. Subsequently, the container is stored in the image registry, awaiting its moment of execution. The third stage, container runtime, occurs when the container is launched and operates as intended.
Early identification of vulnerabilities is vital, and this is where the shift-left security principle plays a role. It encourages an intensified focus on security from the nascent stages of the product life cycle, encompassing the design and requirements gathering phases. By incorporating automated security checks within the CI/CD pipeline, developers can detect security issues early and minimize the chance of security gaps flying under the radar at later stages.
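The sketch below shows one possible shape of such an automated gate: a small script a CI job could run after building an image, assuming the open-source Trivy scanner is installed on the runner; the image tag is hypothetical.

```python
# A hedged sketch of a CI gate that fails the pipeline when a freshly built
# image contains high or critical vulnerabilities. Assumes the open-source
# Trivy scanner is installed on the CI runner; the image tag is hypothetical.
import subprocess
import sys

IMAGE = "registry.example.com/shop/cart:1.4"

def scan_image(image: str) -> int:
    """Run a vulnerability scan and return the scanner's exit code
    (non-zero when findings at the chosen severities are present)."""
    result = subprocess.run(
        ["trivy", "image", "--severity", "HIGH,CRITICAL", "--exit-code", "1", image],
    )
    return result.returncode

if __name__ == "__main__":
    code = scan_image(IMAGE)
    if code != 0:
        print("Blocking the pipeline: high/critical vulnerabilities found.")
    sys.exit(code)  # a non-zero exit fails the CI job, shifting the check left
```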
On a separate note, the continuous integration (CI) phase represents a critical juncture in the software development life cycle. Any lapses during this phase can expose organizations to significant security risks. For instance, relying on dubious third-party services for testing purposes may inadvertently expose sensitive product data.
Consequently, container security necessitates a comprehensive approach, where each element of the software engineering chain is subject to meticulous scrutiny.
Responsibility of security professionals and developers
Information security professionals have traditionally operated in real-time, resolving issues as they emerge. The adoption of unified application deployment tools such as containers facilitates product testing pre-deployment. This proactive approach revolves around the inspection of containers for malicious code and vulnerable components in advance.
To maximize the effectiveness of this tactic, it’s important to determine who is responsible for safeguarding container infrastructure within an organization. Should this responsibility rest with information security specialists or developers? The answer may not be unequivocal.
In the realm of containers, the principle of “who developed it owns it” often takes precedence. Developers are entrusted with managing the defenses and ensuring the security of their code and applications. Concurrently, a separate information security team formulates security rules and investigates incidents.
Specialists responsible for container security must possess a diverse skill set. The essential proficiencies include understanding the infrastructure, expertise in Linux and Kubernetes, and readiness to adapt to the rapidly evolving container orchestration landscape.
Managing secrets
Containerized microservices communicate with each other and with external systems over secure connections, which requires secrets such as keys and passwords for authentication. Safeguarding this sensitive data in containers is imperative to prevent unauthorized access and data leaks. Kubernetes provides a basic built-in mechanism, the Secret object, so that keys and passwords don’t have to be hard-coded into images or manifests; by default, however, Secret values are only base64-encoded, so encryption at rest and tight access controls are still needed.
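Here is a minimal sketch of that built-in mechanism, assuming the official Kubernetes Python client; the secret name, namespace, and credentials are hypothetical, and the comments call out that Secret values are only base64-encoded unless encryption at rest is enabled.

```python
# A minimal sketch of Kubernetes' built-in Secret mechanism, using the
# official "kubernetes" Python client. The secret name, namespace, and values
# are hypothetical. Note that Secret data is base64-encoded, not encrypted,
# unless encryption at rest is configured on the cluster.
import base64
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

secret = {
    "apiVersion": "v1",
    "kind": "Secret",
    "metadata": {"name": "payments-db-credentials", "namespace": "production"},
    "type": "Opaque",
    "stringData": {                      # stringData lets the API server handle encoding
        "username": "svc_payments",
        "password": "change-me-please",
    },
}
v1.create_namespaced_secret(namespace="production", body=secret)

# Reading it back: values arrive base64-encoded and must be decoded by the caller.
stored = v1.read_namespaced_secret("payments-db-credentials", "production")
password = base64.b64decode(stored.data["password"]).decode()
```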
Nonetheless, because Kubernetes lacks a comprehensive secrets life cycle management system, some IT teams turn to dedicated secrets management products to fill the gap. These tools streamline the process of adding secrets, track how keys are used over time, and enforce restrictions that prevent unauthorized access to sensitive data flowing between containers. Although managing secrets can be complex, organizations must prioritize securing such information in containerized environments.
Security tools in container ecosystems
Organizations often grapple with whether traditional security tools, such as data loss prevention (DLP), intrusion detection systems (IDS), and web application firewalls (WAF), are suitable for securing containers. Classic next-generation firewalls (NGFW) may prove less effective at controlling traffic within virtual cluster networks. However, specialized NGFW tools that operate inside clusters can effectively monitor data in transit.
A Cloud-Native Application Protection Platform (CNAPP) is a go-to instrument in this arena. Its main advantage is a unified approach to safeguarding cloud-based ecosystems. With advanced analytics surfaced in a single console, CNAPP provides comprehensive visibility across all clouds, resources, and risk factors. Importantly, it identifies the context around risks in a specific runtime environment, which is the foundation for prioritizing fixes. These capabilities help organizations avoid blind spots in their security posture and remediate issues early.
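One native complement to in-cluster firewalling is a Kubernetes NetworkPolicy that restricts east-west traffic between microservices. The sketch below assumes the official Python client and a CNI plugin that actually enforces NetworkPolicy; all labels, names, and ports are hypothetical.

```python
# A hedged sketch of restricting east-west (pod-to-pod) traffic with a native
# Kubernetes NetworkPolicy, complementing in-cluster firewall tooling. Assumes
# the "kubernetes" Python client and a CNI plugin that enforces NetworkPolicy;
# all labels, names, and ports are hypothetical.
from kubernetes import client, config

config.load_kube_config()
net = client.NetworkingV1Api()

policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "cart-allow-frontend-only", "namespace": "production"},
    "spec": {
        "podSelector": {"matchLabels": {"app": "cart"}},     # pods the policy protects
        "policyTypes": ["Ingress"],
        "ingress": [{
            "from": [{"podSelector": {"matchLabels": {"app": "frontend"}}}],
            "ports": [{"protocol": "TCP", "port": 8080}],     # only this port is reachable
        }],
    },
}
net.create_namespaced_network_policy(namespace="production", body=policy)
```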
To strike a balance between the use of traditional security solutions and tools focused on protecting virtualized runtime environments, an organization should assess its IT infrastructure to identify which parts of it are on-premises systems and which are cloud-native applications. It’s worth noting that firewalls, antivirus software, and intrusion detection systems still do a great job securing the perimeter and endpoints, so they definitely belong in the average enterprise’s toolkit.
Going forward
Containers offer numerous benefits, but they also introduce distinct security challenges. By understanding these challenges and addressing them through best practices integrated across the software development life cycle, organizations can establish a resilient and secure container environment.
Mitigating container security risks requires collaboration between developers and information security specialists. Developers shoulder the responsibility of managing defenses, while the InfoSec team establishes security rules and undertakes incident investigations. By leveraging specialized tools and security products, organizations can effectively manage secrets, monitor container traffic, and remediate vulnerabilities before threat actors can exploit them.
To recap, container security is a multifaceted matter that calls for a proactive and collaborative approach. By implementing protective measures at every stage of the container life cycle and nurturing seamless cooperation between teams, organizations can build a sturdy foundation for secure and resilient microservices-based applications.