How to cloud native

On leveraging open-source container technologies and open specifications in your cloud migration strategy.
Part of Issue 17, May 2021


Today, many development teams are still running applications in data centers as monoliths. Good old battle-tested three-tier architectures galore! Your developers get paid to write code and throw it over the wall, and your sysadmins get paid to be on call and run the apps. Typically, as a result of this divide, you deliver and deploy major versions once a year, maybe twice if you’re really good at what you do. That’s perfectly fine if you’re not Google or Netflix—to paraphrase an adage popularized by author Jim Butcher, you just need to be faster than your direct competition. 

But to really outpace your rivals, you need to invest in tooling and methods to get features that benefit your users into production faster. Migrating to the cloud and leveraging open source and open specifications can grease the wheels, minimizing your dependence on vendors, enabling faster iteration, and facilitating trust between stakeholders. These benefits make an open-source approach a particularly productive way to support the move to containerized applications in the cloud.

Navigating the shift to the cloud

In digital transformation projects and smaller, more tactical engagements, a core motivation for migrating to the cloud is often self-service. Rather than IT operations provisioning and maintaining platforms in addition to owning deployment, self-service approaches can enable developers to help themselves with anything from cluster-level access to more PaaS-like environments. Containerized microservices are vital in this context: Containers address the packaging challenge, while microservices allow individual developer teams to iterate and release their service and features independently, unlike in a monolith.

This change in dynamics will require some organizational adjustments, particularly within IT, as teams must communicate more proactively and consistently—not just via formal tickets—to facilitate the release of new features. IT operations and developers will find their incentives aligning, in sharp contrast to the bygone days when devs typically wanted to maximize the number of features shipped while ops wanted to minimize ships to keep the overall system stable. As you move to smaller deployment units, and since the cloud hides building blocks like VMs and networking behind APIs, developers will wind up taking responsibility for operational routines, including security, observability, and being on call for the code they deploy. The resulting shift in the software supply chain means developers will have to be acquainted with good operational practices, such as the instrumentation of the application source code to emit logs, metrics, and traces or scanning code for vulnerabilities.
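To make the idea of instrumentation concrete, here's a minimal, hypothetical sketch in Python using only the standard library: a decorator that emits a structured log line and bumps a counter for every call to a handler. In practice you'd reach for a purpose-built framework such as OpenTelemetry; the names (`instrumented`, `get_order`, the `METRICS` dict) are illustrative placeholders, not any real library's API.

```python
import json
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orders")

# A toy in-process metrics registry; a real service would use a metrics library.
METRICS = {"requests_total": 0}

def instrumented(handler):
    """Wrap a handler so each call emits a structured log line and bumps a counter."""
    @wraps(handler)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            return handler(*args, **kwargs)
        finally:
            METRICS["requests_total"] += 1
            log.info(json.dumps({
                "handler": handler.__name__,
                "duration_ms": round((time.monotonic() - start) * 1000, 3),
            }))
    return wrapper

@instrumented
def get_order(order_id):
    # Hypothetical business logic standing in for a real request handler.
    return {"id": order_id, "status": "shipped"}

get_order(42)  # emits a JSON log line and increments the request counter
```

The point isn't this particular decorator; it's that telemetry becomes part of the code developers own, rather than something bolted on by operations after the fact.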

As you make this transition, remember that it takes time to build trust—between peers, up the management chain, and with partners and suppliers. The cultural change a migration to the cloud entails is often a high-friction part of the move. Greenfield and/or small projects are a helpful salve during this process: You can see results fast, learn from mistakes, and, by exercising DevOps practices like working in small batches and committing to fast feedback loops, you build trust incrementally.

Betting on open source and specs

Whether deployed as a library within your application or behind a service, free, open-source software (FOSS) has many benefits—chiefly, the ability to fix troubled code yourself. Sure, maintaining a fork isn’t exactly a sustainable enterprise for most of us, but having ready access to temporary, effective mitigation measures can be far more productive than being dependent on a vendor to work on a support case.

If you opt to build on open specifications—maybe even a standard sanctioned by a body like the Internet Engineering Task Force, World Wide Web Consortium, or Cloud Native Computing Foundation—another plus is that you can use the same setup on premises and in the cloud. For on-prem setups, you’d deploy an open-source solution, a mixed FOSS/closed-source solution, or even a proprietary solution. In the cloud, you’d use a managed service based on an open-source project or one that’s compliant with an open spec. The list of open-source examples is practically endless, including OpenTelemetry and OpenMetrics in the observability space, Open Policy Agent for defining and enforcing organizational or regulatory policies, the Open Container Initiative image and runtime specifications for portable container packaging and execution, and the Kubernetes API, Envoy xDS APIs, or the Service Mesh Interface for container orchestration and networking. An added advantage is that for each of these specifications, APIs, and formats, there’s usually more than one FOSS implementation to choose from, broadening your options. 
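As a concrete illustration of that portability (all names here are placeholders), a Kubernetes Deployment manifest like the following targets the open Kubernetes API rather than any one vendor's interface, so the same file can be applied unchanged to a self-managed cluster on premises or to a managed Kubernetes service in the cloud:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders                # hypothetical service name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          # An OCI-format image, runnable by any spec-compliant runtime
          image: registry.example.com/orders:1.4.2
          ports:
            - containerPort: 8080
```

Because both the orchestration API and the image format are openly specified, switching providers is a matter of pointing your tooling at a different cluster, not rewriting your deployment artifacts.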

Open specs and FOSS can also facilitate the migration itself: When you move workloads to the cloud from a data center, you can offload the heavy lifting of upgrading infra components like operating systems, libraries, and security patches to your cloud provider of choice, freeing you up to focus on your core business and products higher up the stack.

Planning for the roadblocks

For all these advantages, there are, naturally, some challenges you might face. Let’s focus on the two most prominent ones I’ve witnessed in the wild. 

The first is that FOSS isn’t necessarily supported the way an enterprise service would be. Anyone can take FOSS and run it, license permitting, as they see fit. But what if you hit a bug, or you’d like a feature added, or you want to have it scale tested? The software’s author has no obligation to give you the remedy you’re seeking. The “F” in FOSS implies a certain set of actions you’re allowed and encouraged to take with the software that wouldn’t be possible with closed, proprietary software, but it doesn’t mean you get free support. Instead, you’ll likely need to pay a vendor for support, or build up the engineering muscle internally.

The second is a community-related challenge: You need to align, to a certain extent, with the pace of the project’s community. Take the Kubernetes release cycle, for example: With a minor Kubernetes release coming out every three to four months, you’ll end up upgrading your platform two to three times a year. Each upgrade touches core dependencies such as cluster add-ons, the applications the cluster hosts, and third-party components like runtime scanners or observability agents, all of which you need to test to make sure they continue to work. Wardley maps can help you understand where on the commodity-versus-custom-built spectrum a given FOSS offering and its ecosystem sits, which can give you a sense of how much effort it will take to align with the project’s community over time.

There’s nothing wrong with focusing on something lower-level, such as containers, over a more PaaS-like or even FaaS-level abstraction when adopting FOSS components to drive your cloud native journey. Being able to optimize your workload with respect to your specific business requirements may well be what gives you the edge over a competitor. Just be aware of what level of abstraction you’re operating on, and be ready to deal with the consequences of that abstraction, as it may impact portability and developer productivity.

Making migration make sense

So, how to reach for the cloud? Start by thinking of the migration as more of an organizational and cultural challenge than a technological one. Leveraging open specifications and open source as part of your migration strategy can empower teams, facilitate the necessary dev-ops alignment that enables you to ship features faster, and improve accountability as you move to smaller deployment units. It can also produce more (human-)scalable systems based on the self-service principle. Keep a realistic perspective of what open source offers, and you’ll be able to sidestep the gotchas and best utilize the tools at your disposal.

About the author

Michael Hausenblas is an open-source product development advocate at AWS. He’s interested in containers, observability, and service meshes.

