When I first started coding in the mid-to-late 2000s, REST APIs were all the rage. A few years earlier, Roy Fielding had published his influential dissertation on the design of network-based software architectures, which caught on quickly, and rightfully so: At its core, it encouraged building uniform, shared interfaces to reduce complexity and simplify integration. The industry was rapidly moving from SOAP to REST, as the latter made it feel more natural for users to compose the flows they needed for their specific behavior. REST worked amazingly well—and still does—for predominantly server-to-server flows, which made up almost the entire internet at the time. Frameworks like Rails and Django made it possible to build a REST API in minutes, dozens of HTTP libraries were developed to interact over the network in just a few lines of code, and software like curl made it easy to test an API in seconds.
Small world, big changes
Since then, the application landscape has changed dramatically, which in turn has begun to reshape the way our APIs are designed. The first iPhone was released in 2007, followed by the App Store in 2008, kicking off a gradual move toward client-heavy interfaces. Mobile platforms grew quickly, and with them came single-page applications (SPAs) and progressive web apps (PWAs). After its release in 2010, Backbone.js grew in popularity as one of the first frameworks to help developers build SPAs. Frameworks like Ember.js, AngularJS, and eventually React and React Native entered the picture, continuing to emphasize building larger SPA and PWA clients. This emphasis began forcing APIs to change: to return all the data clients need in one request—instead of many—to reduce round trips and improve the observed latency.
With all the new technology being built and all the new users coming online, at what point will our APIs need to adapt?
Moreover, internet usage and access have exploded globally, with exponential growth in many regions since 2010. This distribution of access has been, quite literally, world-changing, and requires developers to think about scaling their applications in ways they hadn’t before. It’s increasingly necessary to consider what both the application and developer experience will feel like for people far from your data centers.
Despite the transformative nature and sheer scale of these changes, the way we build APIs has remained relatively constant. This isn’t bad—if anything, it speaks to how well REST was able to solve a major problem, and how its principles resonated with developers. It does raise the question, though—with all the new technology being built and all the new users coming online, at what point will our APIs need to adapt? What will we need to do to ensure developers continue to make progress? To offer a resolutely positive developer experience today and tomorrow, our APIs—and our thinking behind them—may well need to change.
Something(s) to think about
It’s hard to ignore math (or science).
It’s hard to ignore math (or science): The global shift in internet traffic, plus the realities of the laws of physics, will start to challenge how we build APIs. Quite simply, there are limits on how quickly we can transfer data over growing distances, which means some or all of our users may have a less-than-ideal experience if we don’t plan for it.
To frame this concretely, imagine a user in Cape Town, South Africa, making a request to a data center located in Seattle, Washington. That’s about a 32,000-kilometer round trip, which takes approximately 107ms to travel as the crow flies at the speed of light. Add in the hops between different ISPs across regions, SSL handshakes, and multiple bounces to synchronize data, and you can quickly find yourself adding whole seconds.
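To put rough numbers on it, here’s a back-of-the-envelope sketch in Python; the distance, fiber slowdown, and handshake count are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope latency for a Cape Town <-> Seattle round trip.
SPEED_OF_LIGHT_KM_S = 300_000        # ~3 * 10^5 km/s in a vacuum
round_trip_km = 32_000               # approximate great-circle round trip

best_case_ms = round_trip_km / SPEED_OF_LIGHT_KM_S * 1000
print(f"Theoretical best case: {best_case_ms:.0f} ms")       # ~107 ms

# Real traffic is slower: light in fiber moves at roughly 2/3 c, and
# TCP plus TLS handshakes cost extra round trips before any data moves.
fiber_ms = best_case_ms * 3 / 2      # assumed 2/3-of-c propagation in fiber
handshake_round_trips = 2            # assumed TCP + TLS 1.2-style setup
first_byte_ms = fiber_ms * (1 + handshake_round_trips)
print(f"With fiber and handshakes: {first_byte_ms:.0f} ms")  # ~480 ms
```

Even before any server-side work or data synchronization, the physics alone puts a hard floor under the experience of a far-away user.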
There are no clear “right” answers for how to work with these limitations, but individuals and companies have started exploring solutions that will allow their APIs to scale around them. In scenarios where data needs to move across the globe, how should we think about data consistency for our APIs? Do we trade latency for strong consistency or move to eventual consistency? APIs so far have defaulted to strong consistency: You POST a new resource and GET the list endpoint, and it’s guaranteed to be there because the update happens fast. But what happens if you can no longer provide users with that guarantee, and how should APIs change to accommodate that?
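As a toy illustration of that read-after-write guarantee breaking down, here’s a sketch (with made-up names and a fixed replication lag) of a store where a GET issued immediately after a POST may not yet see the new resource:

```python
import time

# Toy model of read-after-write under eventual consistency: the read
# replica only applies writes after a fixed lag. Names and delays are
# illustrative, not any real system's behavior.
class EventuallyConsistentStore:
    def __init__(self, replication_lag=0.05):
        self.replica = []    # what GETs read from
        self.pending = []    # (apply_at_time, resource) not yet replicated
        self.lag = replication_lag

    def post(self, resource):
        # The write is accepted immediately, but only becomes visible
        # to reads after the replication lag has elapsed.
        self.pending.append((time.monotonic() + self.lag, resource))

    def get_list(self):
        now = time.monotonic()
        ready = [(t, r) for (t, r) in self.pending if t <= now]
        for item in ready:
            self.pending.remove(item)
            self.replica.append(item[1])
        return list(self.replica)

store = EventuallyConsistentStore()
store.post({"id": 1})
print(store.get_list())   # likely [] -- the write hasn't replicated yet
time.sleep(0.1)
print(store.get_list())   # the resource appears once the lag has elapsed
```

An API built on such a store can no longer promise that a freshly created resource is in the next list response—it has to tell developers how to wait, retry, or be notified instead.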
Until humanity is able to develop its own ansible (or unless you’re operating at Google scale), we’ll have to work around consistency tradeoffs.
To work around these complexities, Google built Spanner, a database that provides strong consistency with wide replication at global scale. Spanner achieves this through TrueTime, a custom API backed by GPS receivers, atomic clocks, and fiber-optic networking, providing both consistency and high availability for data replication. Of course, it still doesn’t solve the time it takes data to travel between two far-flung locations, but until humanity is able to develop its own ansible (or unless you’re operating at Google scale), we’ll have to work around consistency tradeoffs.
While we consider what the data consistency story of APIs becomes, we could preempt some of the degradation in experience—like inconsistent data or slow requests—by embracing real-time streams or more asynchronous flows. Asynchronous flows have become more prevalent in client-facing applications, but they haven’t yet spread widely to APIs. Instead of blocking the user while data processes or synchronizes, we could build more asynchronous flows that push status updates in real time, resulting in fewer requests and earlier interaction with the data.
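A minimal sketch of what such a flow could look like, using Python’s asyncio; the job stages and callback shape here are hypothetical:

```python
import asyncio

# Sketch of an async flow: instead of blocking until a job finishes, the
# server pushes each status change to the client as it happens. The job
# ID and stage names are invented for illustration.
async def process_job(job_id, on_update):
    for status in ("received", "processing", "complete"):
        await asyncio.sleep(0.01)        # stand-in for real work
        await on_update(job_id, status)  # push the state change immediately

async def main():
    updates = []

    async def client_listener(job_id, status):
        # In a real system this would be a stream or webhook delivery;
        # here we just record what the client would have received.
        updates.append((job_id, status))

    await process_job("job-42", client_listener)
    return updates

updates = asyncio.run(main())
print(updates)
```

The client learns about “received” and “processing” long before the job finishes, rather than discovering everything at once when a blocking request finally returns.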
In a similar spirit of proactivity, could we be better about informing developers of events, minimizing requests to the API and the observed end-to-end latency? Historically, developers have either polled for new data, batched requests to mass-load changes, or used webhooks to see if anything’s changed. Webhooks have existed for ages and let APIs push events to users, but they can be challenging to work with and were built around HTTP concepts, which can limit the full potential of having a bi-directional event channel. What new experiences or products could stem from exposing bi-directional streaming APIs? Could native streaming APIs better suit real-time synchronization by providing more seamless updating logic? And, if we pursued them, where would we place the implementation burden—should developers build the queues to process the data or should the API filter the firehose?
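For instance, letting clients subscribe with a filter so the API, rather than the developer, trims the firehose might look something like this sketch (the event types and shapes are invented for illustration):

```python
# Who filters the firehose? A sketch of server-side filtering: the
# client subscribes with a set of event types, and only matching events
# are pushed down the stream.
def event_stream(events, subscription):
    """Yield only the events matching the client's subscription filter."""
    for event in events:
        if event["type"] in subscription["types"]:
            yield event

# A hypothetical firehose of everything happening in the API.
firehose = [
    {"type": "charge.created", "amount": 500},
    {"type": "customer.updated", "id": "cus_1"},
    {"type": "charge.created", "amount": 1200},
]

# The client only cares about new charges.
subscription = {"types": {"charge.created"}}
received = list(event_stream(firehose, subscription))
print(received)
```

The alternative design ships every event and leaves the queueing and filtering to the developer—simpler for the API provider, heavier for everyone integrating with it.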
As programmatic workflows become more complex, it becomes harder to process data instantaneously. Growing a push model to replace the pull model we’ve become used to will become increasingly necessary to keep developers in sync with changes. Interfacing with the “real” world is, well, slow—there are many blocking flows that require manual human interaction (which will always be slower than computers alone, try as we humans might), processes that require uploading or downloading flat files for processing, and even computer systems that stop processing outside of working hours. So do we move to APIs that expose state machines representing the status of the real world? If yes, how should we elegantly expose the complexity that comes with that? And how should a user validate a complex state machine or represent it through tests?
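One way to expose such a flow is an explicit transition table, which also gives users something concrete to validate in tests. A minimal sketch, with invented states for a document that needs a manual human review before completing:

```python
# A toy state machine for a real-world flow: states and actions are
# hypothetical, standing in for whatever an API would actually expose.
TRANSITIONS = {
    "created":         {"submit": "awaiting_review"},
    "awaiting_review": {"approve": "processing", "reject": "created"},
    "processing":      {"finish": "complete"},
    "complete":        {},  # terminal state: no further actions
}

def advance(state, action):
    """Return the next state, or raise if the action is invalid here."""
    allowed = TRANSITIONS[state]
    if action not in allowed:
        raise ValueError(f"cannot {action!r} while {state!r}")
    return allowed[action]

# Walk the happy path from creation to completion.
state = "created"
for action in ("submit", "approve", "finish"):
    state = advance(state, action)
print(state)  # complete
```

Because the table is data rather than scattered conditionals, a user can enumerate every state and assert which actions are legal from each—one plausible answer to the validation question above.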
Giving developers more control over their interfaces allows them to shape their requests to exactly what’s needed.
As API complexity increases, not all users will need the same feature set, and a customizable API or porcelain wrappers could help developers simplify their applications. Customization is a pretty broad topic, ranging from field selection and custom API structures to building custom flows with request chaining or custom handlers for async processes. Today, most APIs don’t provide users much in the way of customization because it can require significant upfront investment. Still, API customization is not a new concept, with JSON:API providing sparse fieldsets and GraphQL offering field selection as part of its query language. Here, allowing for more customization would let developers shift the onus of implementation details and logic to the API. Widespread adoption and sometimes dramatic differences in internet connectivity mean developers must think about the efficiency of their requests, and the continuous addition of new products increases the complexity of APIs as they try to adhere to REST. Giving developers more control over their interfaces allows them to shape their requests to exactly what’s needed, instead of sending superfluous data—or not sending enough.
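As a sketch of the sparse-fieldset idea, here are a few lines that trim a resource down to only the fields a client asked for; the resource shape and query parameter are hypothetical:

```python
# Sparse-fieldset sketch: the client names the fields it wants and the
# API returns only those, instead of the full resource.
def select_fields(resource, fields):
    """Keep only the requested fields of a resource."""
    return {k: v for k, v in resource.items() if k in fields}

# A made-up resource with more data than most callers need.
charge = {
    "id": "ch_123",
    "amount": 2000,
    "currency": "usd",
    "metadata": {"order": "6735"},
    "created": 1680000000,
}

# e.g. a hypothetical GET /charges/ch_123?fields=id,amount
trimmed = select_fields(charge, {"id", "amount"})
print(trimmed)  # {'id': 'ch_123', 'amount': 2000}
```

A client on a slow or expensive connection pays only for the bytes it actually uses, and the API stops guessing at one response shape that fits everyone.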
Looking to tomorrow
There’s a lot of unharnessed opportunity to adapt and build our APIs anew for tomorrow’s world. I suspect we’ll long tend to our roots in REST and HTTP—they’re still critical tools for developers, in many cases providing the perfect experience to build simple, modular, and purposeful APIs. Much of what has made these great for developers is the ubiquity—building these APIs is second nature for many, and there are countless products, tools, and resources that make integrating them both flexible and foundational. But as our connected landscape shifts, we’re starting to outgrow a lot of our tried-and-true patterns, mending torn seams, and finding ourselves with a patched-together developer experience. As we consider what to hold on to and what to change, we need to remember that we’re building for our future developers. If we’re willing to adapt, we can build APIs that empower their success for years to come.