In 2020, 92 percent of cloud native developers reported using containers in production, according to a Cloud Native Computing Foundation (CNCF) survey—a massive leap from 23 percent in 2016. Given this surge, container security is more important than ever.
Data protection is a crucial aspect of every company’s IT strategy, particularly when data or an application moves outside of an organization, such as to the cloud or a partner location. It becomes even more critical when multiple applications are running in an environment, such as data processing pipelines or a Kubernetes microservice architecture. In response to these evolving security needs, a new technology called confidential computing is changing how organizations and industries secure their data. Here, I’d like to share how confidential computing works and how developers can use it to build secure cloud native applications.
What is confidential computing?
Confidential computing protects data while it’s in use by isolating the computation in a hardware-based trusted execution environment (TEE). A TEE provides assurance of data integrity, data confidentiality, and code integrity, adding another layer of protection for sensitive workloads. As a result, the potential attack surface shrinks, since the workload no longer needs to trust the traditional execution layers beneath it, such as the hypervisor and host operating system. Combined with tighter control of the trusted computing base (TCB)—the hardware, firmware, and software components that together provide a system with a secure environment—a TEE strengthens the overall security of target workloads.
With a TEE architecture, the data owner controls who can view and compute on the data, which protects data in use, provides code integrity, and establishes trust between components in the system. This opens up new opportunities for applications and key management systems to perform additional verification. Typically, to enable trusted collaboration between organizations or teams, one party must redact or anonymize the data, a costly, error-prone process that limits the insights the data can yield. Confidential computing removes this manual step, enabling organizations to process data and code in a TEE at cloud scale without having to trust other applications first. Unauthorized entities, meanwhile, are placed outside the bounds of that trust.
Confidential computing thus makes it possible, for example, to run ML inferencing on combined data sets from multiple parties. In this scenario, each data owner would encrypt their data and enforce key release policies bound to the container and the platform it’s executing on. The key release, combined with attestation, adds a layer of protection to each data set.
How to run containers confidentially
In the cloud native space, organizations often build proprietary software, whether data processing pipelines or ML models, and this software, packaged as containers, must be protected from misuse. Because a workload run inside a hardware TEE can be isolated, encrypted, and made inaccessible, it’s well protected against vulnerabilities in the guest operating system, hypervisor, and host operating system. With this approach, a container using confidential computing can protect its data even if it’s running on the same host as a compromised container.
With Kubernetes, this means you no longer have to trust the administrators or operators of a cluster in order to run highly sensitive proprietary workloads securely. When a confidential container is launched on a confidential computing–capable Kubernetes node, it creates an enclave that’s process-isolated and sandboxed. The enclave’s allocated memory is exclusive to the process, providing both isolation between containers and per-container protection. With container encryption, signing, and attestation, the enclave can provide code integrity, protecting against malicious attacks that try to modify the code within the container.
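The code-integrity check before launch can be sketched as a signature verification over the container image. This toy example uses a shared HMAC secret for brevity; real container signing is asymmetric (for instance, with tooling such as cosign), and `SIGNING_KEY` is a hypothetical publisher secret, not a real API.

```python
import hashlib
import hmac

SIGNING_KEY = b"publisher-secret"  # hypothetical; real signing uses key pairs

def sign_image(image_bytes: bytes) -> str:
    """The publisher signs a digest of the container image contents."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

def verify_before_launch(image_bytes: bytes, signature: str) -> bool:
    """The enclave launcher admits the container only if the signature matches."""
    return hmac.compare_digest(sign_image(image_bytes), signature)
```

A container whose contents change after signing, whether by an attacker or a compromised registry, fails verification and is never admitted to the enclave.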
Confidential computing also provides secure key release for data decryption, with trust anchored in hardware and TEE measurements: your cryptographic material is bound to the measured values of the hardware, platform, and software it’s running on, and keys are released only when those measurements check out. Together, these capabilities allow businesses to trust that their code and data are secure when building microservice-based applications.
Toward open standards
The open-source community plays an important role in cloud native development, and efforts are underway to unify confidential computing across industry communities and build cross-platform and cross-cloud compatibility for confidential workloads.
The Confidential Computing Consortium (CCC), which includes Microsoft, Intel, AMD, Nvidia, Red Hat, Google, Fortanix, and many others, is actively engaging with developers, data custodians, and operators to facilitate the portability of confidential workloads across TEEs and clouds. The CCC is also collaborating with the CNCF’s Special Interest Group for Security to improve overall security across the cloud native landscape by developing open standards, which would give customers a more consistent experience across hardware, software, and cloud deployments, as well as help standardize and reduce operational overhead.
As containers become a ubiquitous part of running mission-critical business applications, container security will take center stage, be it through secure DevOps practices or data custodians demanding confidentiality and transparency within the executing environment. Together, we can all help weave the future fabric of container security design.