APIs at scale

Leaders at Adobe, Airbnb, Kong, and PubNub talk API design, documentation, and development.
Part of Issue 14: APIs (August 2020)

Geoff Townsend

VP of engineering
Kong

Company size: 51–200

Stephen Blum

CTO and cofounder
PubNub

Company size: 51–200

Jessica Tai

Software engineer
Airbnb

Company size: 5,000–10,000

Ryan Stewart

Director of product, Creative Cloud extensibility
Adobe

and

Prerna Vij

Technical product manager, Creative Cloud extensibility
Adobe

Company size: 10,001+


How does your organization think about API design?

As a company that creates an API management platform, we’re built on open-source API technology. We’ve always had an API-first mentality. All of our UIs are built as a presentation layer, which is powered by an API. Users configure the Kong API gateway using the Admin API to control the Kong platform. All this [aims to] allow users easy CI/CD integration with our internal and third-party tools.

— Geoff Townsend, Kong
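
As a rough sketch of what driving the gateway through the Admin API can look like, the calls below register a service and a route against Kong's Admin API endpoints. They assume a default local install listening on port 8001; the service name, upstream URL, and path are placeholders, not anything that ships with Kong.

```python
import requests

ADMIN_API = "http://localhost:8001"  # default Kong Admin API address; adjust for your deployment

# Register an upstream service with the gateway.
requests.post(
    f"{ADMIN_API}/services",
    data={"name": "orders", "url": "http://orders.internal:8080"},
).raise_for_status()

# Attach a route so the service is reachable through the gateway at /orders.
requests.post(
    f"{ADMIN_API}/services/orders/routes",
    data={"name": "orders-route", "paths[]": "/orders"},
).raise_for_status()
```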

Real-time, simple, secure, and fast. As an infrastructure-as-a-service provider, we need our API to stream updates as they happen rather than requiring the client to poll for them. The key questions we think about are, “How can we make the API easy to use?” and “Can we make it fast globally?” The answers are core to the developer experience. We also have to make sure the API is secure.

— Stephen Blum, PubNub
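
To make the poll-versus-stream distinction concrete, here is a minimal sketch contrasting the two models. Both the REST endpoint and the `client.subscribe` interface are hypothetical placeholders, not PubNub's actual API.

```python
import time
import requests

POLL_URL = "https://api.example.com/v1/messages"  # hypothetical endpoint

def handle(message):
    print("received:", message)

def poll_forever():
    """Polling: the client repeatedly asks whether anything new has arrived."""
    since = None
    while True:
        resp = requests.get(POLL_URL, params={"since": since}, timeout=5)
        for message in resp.json().get("messages", []):
            handle(message)
            since = message["id"]
        time.sleep(2)  # every poll interval adds up to two seconds of extra latency

def stream_forever(client):
    """Streaming: the server pushes each update over a long-lived connection as it happens."""
    client.subscribe(channels=["orders"], on_message=handle)  # hypothetical streaming client
```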

For the past couple years, during the migration from our Ruby on Rails monolith to a service-oriented architecture (SOA), our data service APIs looked similar to create, read, update, and delete operations with ActiveRecord. However, as our SOA matures, we’re rethinking our API design from extensibility and capability perspectives, instead of focusing on how to minimize breakage during a large architectural migration.

— Jessica Tai, Airbnb

In the Creative Cloud, creative assets are a foundational element, and our API design often starts with building out models that represent one or more kinds of creative assets. From there, we take inspiration and lessons from our in-product APIs to provide some consistency in functionality and developer experience. Ultimately, APIs should be easy to understand, well documented, and consistent, and they should provide useful feedback both when something works and when something fails. Where there are standards, we try to stick to them, and where we have to make our own API surface, we need to remember the human who’s going to be using it.

— Ryan Stewart and Prerna Vij, Adobe

What’s your organization’s approach to API documentation?

We take a “spec-first” approach to development. Wasting developer time [on creating] documentation, which often goes out of date the moment the API is published, is a real problem. To address this, we did a couple things. First, we acquired Insomnia last year, which is an API and GraphQL testing tool. We’re extending it and open-sourcing those extensions to do spec-first development for APIs using the OpenAPI Specification (formerly Swagger). This allows users to do both definition and testing of APIs in one tool. Second, the Kong Developer Portal consumes these OpenAPI Specifications and auto-generates live documentation users can leverage to test and explore their APIs. If they don’t have or don’t want to write OpenAPI Specifications, Kong Brain, which automates API documentation and configuration, can automatically generate an OpenAPI Specification from traffic on the Kong gateway. These OpenAPI Specifications can then be sent into a File API for consumption by the Kong Developer Portal—the whole API life cycle, really.

— Geoff Townsend, Kong
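
For context, a spec-first workflow starts from an API description like the minimal OpenAPI 3.0 document sketched below (written here as a Python dict so it can be dumped to JSON or YAML). The Orders API, its path, and its fields are invented purely for illustration.

```python
import json

# A minimal OpenAPI 3.0 document written before any server code exists (spec-first).
spec = {
    "openapi": "3.0.3",
    "info": {"title": "Orders API", "version": "1.0.0"},
    "paths": {
        "/orders/{id}": {
            "get": {
                "summary": "Fetch a single order",
                "parameters": [
                    {"name": "id", "in": "path", "required": True,
                     "schema": {"type": "string"}},
                ],
                "responses": {"200": {"description": "The requested order"}},
            }
        }
    },
}

# The same document can be committed to the repo, loaded into a testing tool such as
# Insomnia, or published to a developer portal that renders it as live documentation.
print(json.dumps(spec, indent=2))
```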

Supporting all platforms leads to bulky documentation. As you scale and support more APIs, it quickly becomes cumbersome for internal teams to keep up and for developers—the primary audience—to navigate. Despite this challenge, we strive for [consistency] across platforms and optimize for developer expectations and experience. This problem is never-ending, so we’ve dedicated a [team to] enhancing the developer journey. Data outcomes drive our priorities: we’re looking for higher trial-to-sign-up ratios and a lower average time to trial.

— Stephen Blum, PubNub

Enabling block comments in our Apache Thrift Interface Description Language (IDL) adds context at the endpoint, request, and response field levels, as well as metadata about the services, such as communication channels for the service owner or the technical design document. Our Thrift IDL with documentation comments is automatically parsed and displayed in a web UI upon code deployment. This keeps our internal documentation website up to date, as any changes are picked up whenever the service is deployed.

— Jessica Tai, Airbnb

Where possible, we try to leverage Swagger documentation. We’re also thinking beyond the standard reference documentation and creating other enablement materials like code samples, tutorials, [and] longform documentation that goes into detail on specific aspects of the API. We primarily store markdown files in GitHub and have converters that turn them into more consumable HTML pages. This makes the documentation more accessible (because it looks better), allows us to track issues in the repo itself, and empowers our developers to fix or enhance the documentation with their own pull requests.

— Ryan Stewart and Prerna Vij, Adobe
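
A converter of the kind described might look something like the sketch below, which walks a repository's markdown files and emits HTML pages. It uses the Python-Markdown package; the directory names are placeholders, and this is not Adobe's actual tooling.

```python
from pathlib import Path

import markdown  # the Python-Markdown package

DOCS = Path("docs")  # markdown sources tracked in the GitHub repo
SITE = Path("site")  # generated HTML output
SITE.mkdir(exist_ok=True)

# Convert every tracked markdown file into a standalone HTML page.
for md_file in DOCS.glob("**/*.md"):
    body = markdown.markdown(md_file.read_text(encoding="utf-8"))
    out = SITE / md_file.relative_to(DOCS).with_suffix(".html")
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(f"<!doctype html>\n<html><body>{body}</body></html>", encoding="utf-8")
```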

What kinds of resources does your organization dedicate to building APIs? What does that organizational structure look like?

Each team is in charge of documenting and building the API endpoints for their part of the product. For instance, the developer portal team has developed OpenAPI Specification files, which can be leveraged in the default portal configuration. All our teams love Insomnia and use it to test their APIs as they develop them—it’s a great debugging tool. Our sales engineers get the best feedback because they use it to do proofs of concept for customers all the time.

— Geoff Townsend, Kong

We’re an API company, which means everyone is dedicated to APIs. It’s not just our engineers—the customer success team, marketing team, and product team are all focused on delivering excellent API experiences. We use a process for API product design that involves learning from our customers’ needs. Once we understand the requirements and design principles, we follow an RFC process, which defines the technical components required to deploy a production-grade, global, real-time API. From an engineering team perspective, we have architects who design and guide implementation direction; DevOps, which improves our in-house developer and runtime experience; and engineers who implement and build the software to deliver API value.

— Stephen Blum, PubNub

A dedicated service framework engineering team (part of our platform infrastructure organization) creates reusable components and infrastructure to build reliable, scalable services and APIs. We use Apache Thrift and annotations to define our APIs for our remote procedure call framework. We also invested in a team to provide out-of-the-box options and standardization for our thousands of service APIs. The team has enabled many other features, such as rate limiting and traffic replay mechanisms, to be configurable with annotations instead of requiring each service owner to build or integrate their own solution.

— Jessica Tai, Airbnb

Increasingly, we pair API engineering teams with technical product managers who can flesh out API requirements and provide input on the developer experience. We also have models where a technical writer is embedded with the teams to ensure documentation is relevant and stays up to date. For APIs that are going to be used by teams across Adobe to power major services, an architecture council defines an API specification and manages any changes to the specification. The service teams are then responsible for implementing the API in accordance with the specification so clients leveraging the services are developing against a consistent API surface.

— Ryan Stewart and Prerna Vij, Adobe

How does your organization manage API changes?

We focus on backward compatibility and follow major/minor version bumps, like most open-source software. Backward-incompatible changes are only allowed in major releases once a year. We mark endpoints that are going away as “deprecated” for several releases before we officially communicate their retirement.

— Geoff Townsend, Kong

First and foremost, we version our APIs. Each new version can be maintained as a separate API, or the backend can be deployed to support both the new and old formats. We take advantage of Swagger and OpenAPI to do that. We’re all for making sure customers on older versions of our APIs are still able to make use of PubNub. Especially for mission-critical services that are built on top of PubNub, it’s not right to force customers to upgrade to a new version of an API unless there’s a good reason for it. We try to keep it simple when it comes to versioning, using a version ID at the front of the path, such as /v2/api-name.

— Stephen Blum, PubNub
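
As a sketch of the version-in-path approach, the handlers below serve the same resource at /v1 and /v2, with the older route reshaping the response into the format earlier clients expect. The channel resource and field names are invented for illustration; this is not PubNub's actual API.

```python
from flask import Flask, jsonify

app = Flask(__name__)

def fetch_channel(name):
    # Stand-in for the real lookup; the fields are illustrative.
    return {"name": name, "occupancy": 42, "created_at": "2020-08-01T00:00:00Z"}

# The version ID sits at the front of the path, e.g. /v2/channels/lobby.
@app.route("/v2/channels/<name>")
def get_channel_v2(name):
    return jsonify(fetch_channel(name))

# The old version stays up for existing customers; the same backend serves it
# by translating the response into the shape v1 clients expect.
@app.route("/v1/channels/<name>")
def get_channel_v1(name):
    channel = fetch_channel(name)
    return jsonify({"channel": channel["name"], "count": channel["occupancy"]})
```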

Engineers are empowered to modify their own service APIs. Airbnb once had service framework engineers as mandatory reviewers, but as the number of services and APIs multiplied and engineers became more comfortable with Thrift, we automated a series of tests that check for backward incompatibility. With that tooling in place, service owners decide whether to move forward with the API change. We’ve also placed standardized observability into the client services that call each of our API endpoints, which allows us to understand what to migrate and monitor for modifications to the API.

— Jessica Tai, Airbnb
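
The automated checks Tai mentions boil down to comparing the old and new schema and flagging anything an existing client could not tolerate. A simplified sketch (not Airbnb's actual tooling, and with made-up field names) might look like this:

```python
def breaking_changes(old_fields, new_fields):
    """Compare two schemas (field name -> type) and list backward-incompatible changes."""
    problems = []
    for name, old_type in old_fields.items():
        if name not in new_fields:
            problems.append(f"field removed: {name}")
        elif new_fields[name] != old_type:
            problems.append(f"type changed: {name} ({old_type} -> {new_fields[name]})")
    # Newly added fields are fine: existing clients simply ignore them.
    return problems

# Hypothetical before/after versions of a response schema.
old = {"id": "i64", "nightly_price": "i32"}
new = {"id": "i64", "nightly_price": "i64", "currency": "string"}

for change in breaking_changes(old, new):
    print("backward-incompatible:", change)
```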

We try to avoid making unnecessary changes by carefully architecting up front and using standards where they make sense—no point reinventing the wheel. But when change is necessary, we ask: Is it worth it? Is the fix super expensive? Do we need to add some backward compatibility options? The goal is to see what dependents we could break, and plan from there.

For our cloud APIs, potential changes are documented and brought to an architecture council. The council reviews the change, votes on it, and, if it’s approved, updates the underlying specification. Support documentation that illustrates how to convert from the old to the new API surface is always part of the release plan. The key here is time. Dependent teams need to know about big changes in advance so they can make plans to adjust.

Clients are notified throughout the process, and once the specification has been updated, service teams are tasked with scheduling the changes to their implementation of the API. The schedule is coordinated across teams.

— Ryan Stewart and Prerna Vij, Adobe

How does your organization test and collect feedback on its API?

We receive feedback from both our enterprise customers and our open-source community. The best thing about being open source is that there are plenty of people willing to help and give you input. They’re also the most vocal when you make a mistake! So we pay a lot of attention to perception when we release and deprecate our APIs.

— Geoff Townsend, Kong

Globally distributed test servers monitor [the] speed and deliverability of real-time API data. We use our own tools, as well as third parties like Catchpoint and Pingdom. Outside of the technical monitoring tools, it’s important to make sure we keep track of how often these APIs are actually used to make more informed decisions going forward, including whether to keep the API alive [and whether] we need to rethink our go-to-market around it. And, of course, we keep in touch with our customers to make sure the API is serving their needs.

— Stephen Blum, PubNub

In terms of usability, we rely heavily on feedback from our pilot integrations with other engineers [at] Airbnb. Upon reviewing production incidents, we also evaluate our APIs and create action items for improving instrumentation or standardization. We’re working on better, automated solutions, such as an easy way to perform load testing.

— Jessica Tai, Airbnb

We tend to have slightly different processes for our internal and external-facing APIs. As part of exposing APIs to external developers, we’ve built a process that allows us to validate and improve an API in multiple stages. After identifying a specific API or set of APIs to expose, an internal team is tasked with creating integrations on those APIs. The team also carves out time to provide early feedback on the API so they can suggest changes while they work on the integration. After that, we give a select group of partners access to the APIs so they can start creating their own integrations. They help identify any feature gaps or developer experience issues. Once the API team has tracked and, where applicable, made changes based on that feedback, we open it up to our wider third-party developer community and record any feedback they have. We actively seek that feedback in individual partner communications, surveys, forums, and at in-person events like MAX [Adobe’s annual creativity conference].

— Ryan Stewart and Prerna Vij, Adobe

What new developments in API design are you paying the closest attention to? What do you think the future of APIs will hold?

As we move from being solely an API company to being a cloud connectivity company, we’re seeing a shift from REST APIs to services that connect systems. The new world of APIs will be gRPC, TCP custom protocols, Kafka, GraphQL, and more. We think the connectivity promise of APIs will continue to be realized, but the methods will expand. It’s like the evolution of APIs from XML/SOAP to JSON and specifications. This is exciting and shouldn’t be scary.

— Geoff Townsend, Kong

Modern APIs leverage the best of modern transports such as HTTP/2 and gRPC. We’ve deployed these two modern API transports to offer the best possible service, and we continue to keep an eye on upcoming technologies. Just because we provide a great technology platform doesn’t mean we can’t keep making it better!

— Stephen Blum, PubNub

GraphQL is more expressive and flexible with schema stitching. Because of this, we’re exploring how it interacts with frontend clients and how it enables querying of data across multiple services. Another hope for the future is more out-of-the-box resilience and robustness features—such as timeouts and management of hard versus soft dependencies with fallback values—instead of requiring organizations to custom build these.

— Jessica Tai, Airbnb
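
To illustrate the soft-dependency idea, the sketch below wraps a call to a (hypothetical) recommendations service in a timeout and returns a fallback value instead of failing the caller. The point of framework-level support is that service owners would get this behavior from configuration rather than hand-writing it everywhere.

```python
import requests

def get_recommendations(user_id, timeout_s=0.2):
    """Soft dependency: on timeout or error, return a fallback instead of failing the caller."""
    fallback = []  # an empty list lets the page render without recommendations
    try:
        resp = requests.get(
            "https://recommendations.internal/api/v1/items",  # hypothetical internal service
            params={"user_id": user_id},
            timeout=timeout_s,
        )
        resp.raise_for_status()
        return resp.json()["items"]
    except (requests.RequestException, KeyError, ValueError):
        return fallback
```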

We’ve been investing in Hypermedia as the Engine of Application State (HATEOAS) REST APIs because of the way it decouples the client and server and allows us to improve and augment our services without breaking clients. We’ve been particularly interested in how to improve the developer experience and how to document Hypermedia APIs effectively. We’ve also started emphasizing events and webhooks as a complement to our APIs and providing flexibility to developers in terms of how they interact with our services. APIs exist to enable developers to connect disparate pieces in useful and often innovative ways. We need to keep the long view in mind—so many products and tools have gone by the wayside because an endpoint or API no longer functions. A commonly heard phrase at Adobe is “democratize feature development,” and we believe a well-built API surface and platform should enable anyone to do just that.

— Ryan Stewart and Prerna Vij, Adobe
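
As a sketch of why hypermedia decouples clients from servers: the client below never constructs URLs itself, it only follows the link relations the server returns, so the server can reorganize its paths without breaking anyone. The API root, link relation names, and response shape are hypothetical, not Adobe's actual API.

```python
import requests

def download_first_asset(api_root):
    """Navigate from the API root to an asset's content by following link relations."""
    root = requests.get(api_root).json()
    listing = requests.get(root["_links"]["assets"]["href"]).json()
    first = listing["_embedded"]["assets"][0]
    return requests.get(first["_links"]["content"]["href"]).content

# Only the entry point is hard-coded; everything else comes from the responses.
data = download_first_asset("https://api.example.com/")
```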
