Sergey Konstantinov. The API

This book is written to share expertise and describe best practices in designing and developing APIs. It comprises six sections dedicated to:

    • The API design
    • API patterns
    • Backward compatibility
    • HTTP API & REST
    • SDK and UI libraries
    • API product management.
Creative Commons «Attribution-NonCommercial» Logo

    This book is distributed under the Creative Commons Attribution-NonCommercial 4.0 International licence.

    Source code available at github.com/twirl/The-API-Book

    Introduction

    Chapter 1. On the Structure of This Book 

    The book you're holding in your hands is dedicated to developing APIs as a separate engineering task. Although many concepts we're going to discuss apply to any type of software, our primary goal is to describe those problems and approaches to solving them that are most relevant in the context of the API subject area.

We expect that the reader possesses expertise in software engineering, so we do not provide detailed definitions and explanations of the terms that, in our understanding, a developer should already be familiar with. Without this knowledge, it will be rather uncomfortable to read the last section of the book (and even more so, the other sections). We sincerely apologize for this, but that's the only way of writing the book without tripling its size.

    The book comprises the Introduction and six large sections. The first three (namely, “The API Design”, “The API Patterns”, and “The Backward Compatibility”) are fully abstract and not bound to any concrete technology. We hope they will help those readers who seek to build a systematic understanding of the API architecture in developing complex interface hierarchies. The proposed approach, as we see it, allows for designing APIs from start to finish, from a raw idea to concrete implementation.

    The fourth and fifth sections are dedicated to specific technologies, namely developing HTTP APIs (in the “REST paradigm”) and SDKs (we will mostly talk about UI component libraries).

    Finally, in the sixth section, which is the least technical of all, we will discuss APIs as products and focus on non-engineering aspects of the API lifecycle: doing market research, positioning the service, communicating to consumers, setting KPIs for the team, etc. We insist that the last section is equally important to both PMs and software engineers as products for developers thrive only if the product and technical teams work jointly on them.

    Let's start.

    Chapter 2. The API Definition 

    Before we start talking about the API design, we need to explicitly define what the API is. Encyclopedias tell us that “API” is an acronym for “Application Program Interface.” This definition is fine but useless, much like the “Man” definition by Plato: “Man stands upright on two legs without feathers.” This definition is fine again, but it gives us no understanding of what's so important about a Man. (Actually, it's not even “fine”: Diogenes of Sinope once brought a plucked chicken, saying “That's Plato's Man.” And Plato had to add “with broad nails” to his definition.)

    What does the API mean apart from the formal definition?

    You're possibly reading this book using a Web browser. To make the browser display this page correctly, a bunch of things must work correctly: parsing the URL according to the specification, the DNS service, the TLS handshake protocol, transmitting the data over the HTTP protocol, HTML document parsing, CSS document parsing, correct HTML+CSS rendering, and so on and so forth.

    But those are just the tip of the iceberg. To make the HTTP protocol work you need the entire network stack (comprising 4-5 or even more different level protocols) to work correctly. HTML document parsing is performed according to hundreds of different specifications. Document rendering operations call the underlying operating system APIs, or even directly graphical processor APIs. And so on, down to modern CISC processor commands that are implemented on top of the API of microcommands.

    In other words, hundreds or even thousands of different APIs must work correctly to make basic actions possible such as viewing a webpage. Modern Internet technologies simply couldn't exist without these tons of APIs working fine.

    An API is an obligation. A formal obligation to connect different programmable contexts.

    When I'm asked for an example of a well-designed API, I usually show a picture of a Roman aqueduct:

The Pont-du-Gard aqueduct. Built in the 1st century AD. Image Credit: igorelick @ pixabay
    • it interconnects two areas,
    • backward compatibility has not been broken even once in two thousand years.

    What differs between a Roman aqueduct and a good API is that in the case of APIs, the contract is presumed to be programmable. To connect the two areas, writing some code is needed. The goal of this book is to help you design APIs that serve their purposes as solidly as a Roman aqueduct does.

    An aqueduct also illustrates another problem with the API design: your customers are engineers themselves. You are not supplying water to end-users. Suppliers are plugging their pipes into your engineering structure, building their own structures upon it. On the one hand, you may provide access to water to many more people through them, not spending your time plugging each individual house into your network. On the other hand, you can't control the quality of suppliers' solutions, and you are to blame every time there is a water problem caused by their incompetence.

    That's why designing an API implies a larger area of responsibility. An API is a multiplier to both your opportunities and your mistakes.

    Chapter 3. Overview of Existing API Development Solutions 

    In the first three sections of this book, we aim to discuss API design in general, not bound to any specific technology. The concepts we describe are equally applicable to web services and, let's say, operating systems (OS) APIs.

    Still, two main scenarios dominate the stage when we talk about API development:

    • developing client-server applications
    • developing client SDKs.

    In the first case, we almost universally talk about APIs working atop the HTTP protocol. Today, the only notable examples of non-HTTP-based client-server interaction protocols are WebSocket (though it might, and frequently does, work in conjunction with HTTP) and highly specialized APIs like media streaming and broadcasting formats.

    HTTP API

Though the technology looks homogeneous because it uses the same application-level protocol, in reality, there is significant diversity in the approaches to implementing HTTP-based APIs.

    First, implementations differ in terms of utilizing HTTP capabilities:

    • either the client-server interaction heavily relies on the features described in the HTTP standard (or rather standards, as the functionality is split across several different RFCs),
    • or HTTP is used as transport, and there is an additional abstraction level built upon it (i.e., the HTTP capabilities, such as the headers and status codes nomenclatures, are deliberately reduced to a bare minimum, and all the metadata is handled by the higher-level protocol).

    The APIs that belong to the first category are usually denoted as “REST” or “RESTful” APIs. The second category comprises different RPC formats and some service protocols, for example, SSH.
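
For illustration only (the endpoints and payloads below are made up), the difference might look like this: in the first style, the HTTP status code itself carries the outcome, while in the second one HTTP merely transports an envelope of a higher-level protocol.

GET /v1/orders/123 HTTP/1.1
→ 404 Not Found

POST /rpc HTTP/1.1
{ "method": "getOrder", "params": { "id": 123 } }
→ 200 OK
{ "error": { "code": "NOT_FOUND" } }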

    Second, different HTTP APIs rely on different data formats:

The term “SDK” is not, strictly speaking, related to APIs: this is a generic term for a software toolkit. As with “REST,” however, it has acquired a popular reading as a client framework for working with some underlying API. This might be, for example, a wrapper to a client-server API, or a UI to some OS API. The major difference from the APIs we discussed in the previous paragraph is that an “SDK” is implemented for a specific programming language and platform, and its purpose is translating the abstract, language-agnostic set of methods (comprising a client-server or an OS API) into concrete structures specific to the programming language and the platform.

Unlike client-server APIs, such SDKs can hardly be generalized as each of them is developed for a specific language-platform pair. There are some interoperable SDKs, notably cross-platform mobile (React Native, Flutter, Xamarin) and desktop (JavaFX, Qt) frameworks and some highly specialized solutions (Unity).

    Still, SDKs feature some generality in terms of the problems they solve, and Section V of this book will be dedicated to solving these problems of translating contexts and making UI components.
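
A sketch of what such a translation might look like (all names are made up): a platform-specific SDK method that wraps the underlying language-agnostic HTTP call.

class OrdersSDK {
  constructor(baseUrl) {
    this.baseUrl = baseUrl;
  }
  // Translates the abstract “get orders” operation
  // into a concrete HTTP request for this platform
  async getOrders() {
    const response = await fetch(`${this.baseUrl}/v1/orders`);
    if (!response.ok) {
      throw new Error(`Unexpected status: ${response.status}`);
    }
    return response.json();
  }
}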

    Chapter 4. API Quality Criteria 

    Before we start laying out the recommendations, we ought to specify what API we consider “fine,” and what the benefits of having a “fine” API are.

    Let's discuss the second question first. Obviously, API “finesse” is primarily defined through its capability to solve developers' and users' problems. (One could reasonably argue that solving problems might not be the main purpose of offering an API to developers. However, manipulating public opinion is not of interest to the author of this book. Here we assume that APIs exist primarily to help people, not for some other covertly declared purposes.)

    So, how might a “fine” API design assist developers in solving their (and their users') problems? Quite simply: a well-designed API allows developers to do their jobs in the most efficient and convenient manner. The distance from formulating a task to writing working code must be as short as possible. Among other things, this means that:

• it must be totally obvious from your API's structure how to solve a task
  • ideally, developers should be able to understand at first glance what entities are meant to solve their problem
• the API must be readable;
  • ideally, developers should write correct code after just looking at the methods' nomenclature, never bothering about details (especially API implementation details!)
  • it is also essential to mention that not only should the problem solution (the “happy path”) be obvious, but also possible errors and exceptions (the “unhappy path”)
• the API must be consistent

However, the static convenience and clarity of APIs are the simple part. After all, nobody seeks to make an API deliberately irrational and unreadable. When we develop an API, we always start with clear basic concepts. Provided you have some experience in APIs, it's quite hard to make an API core that fails to meet obviousness, readability, and consistency criteria.

Problems begin when we start to expand our API. Adding new functionality sooner or later results in transforming a once plain and simple API into a mess of conflicting concepts, and our efforts to maintain backward compatibility will lead to illogical, unobvious, and simply bad design solutions. It is partly related to an inability to predict the future in detail: your understanding of “fine” APIs will change over time, both in objective terms (what problems the API is to solve, and what is best practice) and in subjective terms too (what obviousness, readability, and consistency really mean to your API design).

The principles we are explaining below are specifically oriented towards making APIs evolve smoothly over time, without being turned into a pile of mixed inconsistent interfaces. It is crucial to understand that this approach isn't free: the necessity to bear in mind all possible extension variants and to preserve essential growth points means interface redundancy and possibly excessive abstractions embedded in the API design. Besides, both make the developers' jobs harder. Providing excess design complexity reserved for future use makes sense only if this future actually exists for your API. Otherwise, it's simply overengineering.

    Chapter 5. The API-first approach 

    Today, more and more IT companies accept the importance of the “API-first” approach, i.e., the paradigm of developing software with a heavy focus on developing APIs.

    However, we must differentiate the product concept of the API-first approach from a technical one.

    The former means that the first (and sometimes the only) step in developing a service is creating an API for it, and we will discuss it in “The API Product” section of this book.

  • rule #2 means partners won't need to change their implementations should some inconsistencies between the specification and the API functionality pop up.

Therefore, for your API consumers, the API-first approach is a guarantee of a kind. However, it only works if the API was initially well-designed: if some irreparable flaws in the specification surface, we would have no other option but to break rule #2.

    Chapter 6. On Backward Compatibility 

    Backward compatibility is a temporal characteristic of your API. An obligation to maintain backward compatibility is the crucial point where API development differs from software development in general.

Of course, backward compatibility isn't an absolute. In some subject areas, shipping new backwards-incompatible API versions is routine. Nevertheless, every time you deploy a new backwards-incompatible API version, the developers need to make some non-zero effort to adapt their code to the new API version. In this sense, releasing new API versions puts a sort of “tax” on customers. They must spend quite real money just to make sure their product continues working.

Large companies that occupy firm market positions can afford to charge such a tax. Furthermore, they may introduce penalties for those who refuse to adapt their code to new API versions, up to disabling their applications.

    From our point of view, such a practice cannot be justified. Don't impose hidden levies on your customers. If you're able to avoid breaking backward compatibility — never break it.

Of course, maintaining old API versions is a sort of tax as well. Technology changes, and you cannot foresee everything, regardless of how well your API was initially designed. At some point, keeping old API versions results in an inability to provide new functionality and support new platforms, and you will be forced to release a new version. But at least you will be able to explain to your customers why they need to make an effort.

    We will discuss API lifecycle and version policies in Section II.

    Chapter 7. On Versioning 

    Here and throughout this book, we firmly stick to semver principles of versioning.

1. API versions are denoted with three numbers, e.g., 1.2.3.
2. The first number (a major version) increases when backwards-incompatible changes in the API are introduced.
3. The second number (a minor version) increases when new functionality is added to the API, keeping backward compatibility intact.
4. The third number (a patch) increases when a new API version contains bug fixes only.

The phrases “a major API version” and “a new API version containing backwards-incompatible changes” are therefore to be considered equivalent.

It is usually (though not necessarily) agreed that the last stable API release might be referenced by either a full version (e.g., 1.2.3) or a reduced one (1.2 or just 1). Some systems support more sophisticated schemes for specifying the desired version (for example, ^1.2.3 reads as “get the latest stable API release that is backwards-compatible with the 1.2.3 version”) or additional shortcuts (for example, 1.2-beta to refer to the latest beta release of the 1.2 API version family). In this book, we will mostly use designations like v1 (v2, v3, etc.) to denote the latest stable release of the 1.x.x version family of an API.
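
For illustration, this is how such a reference might look in a hypothetical package manifest (the package name is made up); the ^1.2.3 range would resolve to, say, 1.4.7, but never to 2.0.0:

{
  "dependencies": {
    "example-api-client": "^1.2.3"
  }
}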

    The practical meaning of this versioning system and the applicable policies will be discussed in more detail in the “Backward Compatibility Problem Statement” chapter.

    Chapter 8. Terms and Notation Keys 

Software development is characterized, among other things, by the existence of many different engineering paradigms, whose adepts are sometimes quite aggressive towards other paradigms' adepts. While writing this book, we deliberately avoid using terms like “method,” “object,” “function,” and so on, using the neutral term “entity” instead. “Entity” means some atomic functionality unit, like a class, method, object, monad, or prototype (underline whatever you think is right).

    As for an entity's components, we regretfully failed to find a proper term, so we will use the words “fields” and “methods.”

Most of the examples of APIs will be provided in the form of JSON-over-HTTP endpoints. This is some sort of notation that, as we see it, helps to describe concepts in the most comprehensible manner. A GET /v1/orders endpoint call could easily be replaced with an orders.get() method call, local or remote; JSON could easily be replaced with any other data format. The semantics of statements shouldn't change.
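
To illustrate the equivalence of the notations (the endpoint and the method are the placeholders named in the paragraph above):

GET /v1/orders HTTP/1.1
→ 200 OK
{ "orders": [ … ] }

// The same operation as a method call, local or remote
await orders.get();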


Simplifying developers' work and the learning curve. At any given moment, a developer is operating only with those entities that are necessary for the task they're solving right now. Conversely, badly designed isolation leads to a situation where developers have to keep in mind lots of concepts mostly unrelated to the task being solved.

    Preserving backward compatibility. Properly separated abstraction levels allow for adding new functionality while keeping interfaces intact.

  • Maintaining interoperability. Properly isolated low-level abstractions help us to adapt the API to different platforms and technologies without changing high-level entities.

X-Idempotency-Token: <token>
→ 409 Conflict

    — the server found out that a different token was used in creating revision 124, which means an access conflict.

    Furthermore, adding idempotency tokens not only resolves the issue but also makes advanced optimizations possible. If the server detects an access conflict, it could try to resolve it, “rebasing” the update like modern version control systems do, and return a 200 OK instead of a 409 Conflict. This logic dramatically improves user experience, being fully backward-compatible, and helps to avoid conflict-resolving code fragmentation.
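
A minimal sketch of the conflict check described above, assuming an Express-like router and hypothetical storage helpers (db.*); this is only one possible shape of such logic, not a reference implementation:

app.patch('/v1/orders/:id', async (req, res) => {
  const token = req.get('X-Idempotency-Token');
  const order = await db.getOrder(req.params.id);
  if (req.body.revision !== order.revision) {
    // Find out which token produced the latest revision
    const latest = await db.getRevision(req.params.id, order.revision);
    if (latest.idempotencyToken === token) {
      // The same client is retrying an already applied update
      return res.status(200).json(order);
    }
    // A different token created the newer revision: a genuine conflict.
    // Instead of a 409, the server might try to “rebase” the update here.
    return res.status(409).json({ reason: 'revision_conflict' });
  }
  const updated = await db.applyUpdate(req.params.id, req.body, token);
  res.status(200).json(updated);
});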

    Also, be warned: clients are bad at implementing idempotency tokens. Two problems are common:

• you can't really expect clients to generate truly random tokens — they may share the same seed or simply use weak algorithms or entropy sources; therefore you must put constraints on token checking: a token must be unique to a specific user and resource, not globally;

      Instead of a version, the date of the last modification of the resource might be used (which is much less reliable as clocks are not ideally synchronized across different system nodes; at least save it with the maximum possible precision!) or entity identifiers (ETags).

The advantage of optimistic concurrency control is therefore the possibility of hiding the complexity of implementing locking mechanisms under the hood. The disadvantage is that versioning errors are no longer exceptional situations — they are now a regular behavior of the system. Furthermore, client developers must implement handling them; otherwise, the application might become inoperable as users will be infinitely trying to create an order with the wrong version.
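
A minimal client-side sketch of handling such errors, assuming a hypothetical api client; the method names and the error code are illustrative, not part of the API described in this book:

async function createOrderWithRetries(params, maxAttempts = 3) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    // Re-read the state to obtain the latest version
    const { latestVersion } = await api.getOrderState();
    try {
      return await api.createOrder({ ...params, version: latestVersion });
    } catch (e) {
      if (e.code !== 'VERSION_MISMATCH') {
        throw e;
      }
      // Someone else changed the state in the meantime; try again
    }
  }
  throw new Error('Could not create the order: too many concurrent changes');
}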

NB: which resource to version is extremely important. If in our example we introduce a global system version that is incremented after any new order, users' chances to successfully create an order will be close to zero.

      Chapter 18. Eventual Consistency 

      The approach described in the previous chapter is in fact a trade-off: the API performance issues are traded for “normal” (i.e., expected) background errors that happen while working with the API. This is achieved by isolating the component responsible for controlling concurrency and only exposing read-only tokens in the public API. Still, the achievable throughput of the API is limited, and the only way of scaling it up is removing the strict consistency from the external API and thus allowing reading system state from read-only replicas:

// Reading the state,
// possibly from a replica
const orderState = …

As orders are created much more rarely than read, we might significantly increase the system performance if we drop the requirement of returning the most recent state of the resource from the state retrieval endpoints. The versioning will help us avoid possible problems: creating an order will still be impossible unless the client has the actual version. In fact, we have transitioned to the eventual consistency model: the client will be able to fulfill its request sometime when it finally gets the actual data. In modern microservice architectures, eventual consistency is effectively an industry standard, and it might be close to impossible to achieve the opposite, i.e., strict consistency.

      NB: let us stress that you might choose the approach only in the case of exposing new APIs. If you're already providing an endpoint implementing some consistency model, you can't just lower the consistency level (for instance, introduce eventual consistency instead of the strict one) even if you never documented the behavior. This will be discussed in detail in the “On the Waterline of the Iceberg” chapter of “The Backward Compatibility” section of this book.

Choosing weak consistency instead of a strict one, however, brings some disadvantages. For instance, we might require partners to wait until they get the actual resource state to make changes — but it is quite unobvious to partners (and actually inconvenient) that they must be prepared to wait for changes they made themselves to propagate.

// Creates an order
const order = await api
  .createOrder();
// Reads the list of orders
const pendingOrders = await api.…
       
    • if you return a client error instead, the number of such errors might be considerable, and partners will need to write some additional code to deal with the errors;
  • this approach is still probabilistic, and will only help in a limited number of use cases (to be discussed below).
There is also an important question regarding the default behavior of the server if no version token was passed. Theoretically, in this case, master data should be returned, as the absence of the token might be the result of an app crash and subsequent restart or corrupted data storage. However, this implies an additional load on the master node.

    Evaluating the Risks of Switching to Eventual Consistency

  • the client works with the data incorrectly (does not preserve the identifier of the last order or the idempotency key while repeating the request)
  • the client tries to create an order from two different instances of the app that do not share the common state.

The first case means there is a bug in the partner's code; the second case means that the user is deliberately testing the system's stability — which is hardly a frequent case (or, let's say, the user's phone went off and they quickly switched to a tablet — a rather rare case as well, we must admit).

    Let's now imagine that we dropped the third requirement — i.e., returning the master data if the token was not provided by the client. We would get the third case when the client gets an error:

    • the client application lost some data (restarted or corrupted), and the user tries to replicate the last request.

Mathematically, the probability of getting the error is expressed quite simply. It's the ratio of two durations: the time needed to get the actual state to the time needed to restart the app and repeat the request. (Keep in mind that the last failed request might be automatically repeated on startup by the client.) The former depends on the technical properties of the system (for instance, on the replication latency, i.e., the lag between the master and its read-only copies) while the latter depends on what kind of client is repeating the call.
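
To illustrate with made-up numbers: if the replication lag is around 100 ms and a user needs about 5 seconds to restart the application and repeat the request, the ratio is 100 / 5000, i.e., roughly a 2% chance of hitting stale data; if a server-side client retries the call within 10 ms, the ratio exceeds one, i.e., the error becomes practically guaranteed.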

If we talk about applications for end users, the typical restart time there is measured in seconds, which is normally much longer than the overall replication latency. Therefore, client errors will only occur in case of data replication problems / network issues / server overload.

      If, however, we talk about server-to-server applications, the situation is totally different: if a server repeats the request after a restart (let's say because the process was killed by a supervisor), it's typically a millisecond-scale delay. And that means that the number of order creation errors will be significant.

      As a conclusion, returning eventually consistent data by default is only viable if an API vendor is either ready to live with background errors or capable of making the lag of getting the actual state much less than the typical app restart time.

      Chapter 19. Asynchronicity and Time Management

      Chapter 20. Lists and Accessing Them

      Chapter 21. Bidirectional Data Flows. Push and Poll Models

      Chapter 22. Organization of Notification Systems

      Chapter 23. Atomicity

      Chapter 24. Partial Updates

      Chapter 25. Degradation and Predictability

      Section III. The Backward Compatibility

      Chapter 26. The Backward Compatibility Problem Statement 

      As usual, let's conceptually define “backward compatibility” before we start.

      Backward compatibility is a feature of the entire API system to be stable in time. It means the following: the code that developers have written using your API continues working functionally correctly for a long period of time. There are two important questions to this definition and two explanations:

      1. What does “functionally correctly” mean?

        It means that the code continues to serve its function, i.e., to solve some users' problems. It doesn't mean it continues working indistinguishably from the previous version: for example, if you're maintaining a UI library, changing functionally insignificant design details like shadow depth or border stroke type is backward-compatible, whereas changing the sizes of the visual components is not.

      2. What does “a long period of time” mean?

        From our point of view, the backward compatibility maintenance period should be reconciled with the typical lifetime of applications in the subject area. Platform LTS periods are decent guidance in most cases. Since the applications will be rewritten anyway when the platform maintenance period ends, it is reasonable to expect developers to move to the new API version as well. In mainstream subject areas (i.e., desktop and mobile operating systems) this period lasts several years.

From the definition, it becomes obvious why backward compatibility needs to be maintained (including taking necessary measures at the API design stage). An outage, full or partial, caused by an API vendor, is an extremely uncomfortable situation for every developer, if not a disaster — especially if they pay money for the API usage.

But let's take a look at the problem from another angle: why does the problem of maintaining backward compatibility exist in the first place? Why would anyone want to break it? This question, though it looks quite trivial, is much more complicated than the previous one.

      We could say that we break backward compatibility to introduce new features to the API. But that would be deceiving: new features are called “new” for a reason, as they cannot affect existing implementations which are not using them. We must admit there are several associated problems, which lead to the aspiration to rewrite our code, the code of the API itself, and ship a new major version:

      • the codebase eventually becomes outdated; making changes, even introducing totally new functionality, becomes impractical;


        When you shipped the very first API version, and the very first clients started to use it, the situation was perfect. There was only one version, and all clients were using only it. When this perfection ends, two scenarios are possible.

1. If the platform allows for fetching code on-demand as the good old Web does, and you weren't too lazy to implement that code-on-demand feature (in the form of a platform SDK — for example, JS API), then the evolution of your API is more or less under your control. Maintaining backward compatibility effectively means keeping the client library backwards-compatible. As for client-server interaction, you're free.

          It doesn't mean that you can't break backward compatibility. You still can make a mess with cache-control headers or just overlook a bug in the code. Besides, even code-on-demand systems don't get updated instantly. The author of this book faced a situation when users were deliberately keeping a browser tab open for weeks to get rid of updates. But still, you usually don't have to support more than two API versions — the last one and the penultimate one. Furthermore, you may try to rewrite the previous major version of the library, implementing it on top of the actual API version.

2. If the code-on-demand feature isn't supported or is prohibited by the platform, as in modern mobile operating systems, then the situation becomes more severe. Each client effectively borrows a snapshot of the code working with your API, frozen at the moment of compilation. Client application updates are scattered over time to a much greater extent than Web application updates. The most painful thing is that some clients will never be up to date, because of one of three reasons:

        3. users can't get updates because their devices are no longer supported.

In modern times these three categories combined could easily constitute tens of percent of the audience. It implies that cutting the support of any API version might be a nightmare experience — especially if developers' apps continue supporting a broader spectrum of platforms than the API does.

You could have never issued any SDK, providing just the server-side API, for example in the form of HTTP endpoints. You might think that the backward compatibility problem is mitigated (by making your API less competitive on the market because of a lack of SDKs). That's not true: if you don't provide an SDK, then developers will either adopt an unofficial one (if someone bothered to make it) or just write a framework themselves — independently. The “your framework — your problems” strategy, fortunately or not, works badly: if developers write low-quality code atop your API, then your API is of low quality itself — definitely in the view of developers, possibly in the view of end-users, if the API performance within the app is visible to them.

      Certainly, if you provide a stateless API that doesn't require client SDKs (or they might be auto-generated from the spec), those problems will be much less noticeable, but not fully avoidable unless you never issue any new API version. If you do, you will still have to deal with some fragmentation of users by API and SDK versions.

    • interfaces change.

    As usual, the API provides an abstraction to a much more granular subject area. In the case of our coffee machine API example, one might reasonably expect new machine models to pop up, which are to be supported by the platform. New models tend to provide new APIs, and it's hard to guarantee they might be adopted while preserving the same high-level API. And anyway, the code needs to be altered, which might lead to incompatibility, albeit unintentional.

Let us also stress that vendors of low-level APIs are not always as resolute regarding maintaining backward compatibility for their APIs (actually, for any software they provide) as (we hope) you are. You should be warned that keeping your API in an operational state, i.e., writing and supporting facades to the shifting subject area landscape, will be your problem, and sometimes a rather sudden one.

    Platform Drift

    Finally, there is a third side to the story — the “canyon” you're crossing over with a bridge of your API. Developers write code that is executed in some environment you can't control, and it's evolving. New versions of operating systems, browsers, protocols, and programming language SDKs emerge. New standards are being developed and new arrangements made, some of them being backwards-incompatible, and nothing could be done about that.

    Older platform versions lead to fragmentation just like older app versions do, because developers (including the API developers) are struggling with supporting older platforms, and users are struggling with platform updates — and often can't get updated at all, since newer platform versions require newer devices.

The nastiest thing here is that not only does incremental progress in the form of new platforms and protocols demand changing the API, but so does vulgar fashion. Several years ago realistic 3D icons were popular, but since then the public taste has changed in favor of flat and abstract ones. UI component developers had to follow the fashion, rebuilding their libraries, either shipping new icons or replacing the old ones. Similarly, right now the “night mode” feature is being introduced everywhere, demanding changes to a broad range of APIs.

    Backwards-Compatible Specifications

    In the case of the API-first approach, the backward compatibility problem gets one more dimension: the specification and code generation based on it. It becomes possible to break backward compatibility without breaking the spec (let's say by introducing eventual consistency instead of the strict one) — and vice versa, modify the spec in a backwards-incompatible manner changing nothing in the protocol and therefore not affecting existing integrations at all (let's say, by replacing additionalProperties: false with true in OpenAPI).

The question of whether two specification versions are backward-compatible or not rather belongs to a gray zone, as specification standards themselves do not define this. Generally speaking, the “specification change is backward-compatible” statement is equivalent to “any client code written or generated based on the previous version of the spec continues working correctly after the API vendor releases the new API version implementing the new version of the spec.” Practically speaking, following this definition seems quite unrealistic for two reasons: it's impossible to learn the behavior of every piece of code-generating software out there (for instance, it's rather hard to say whether code generated based on a specification that includes the parameter additionalProperties: false will still function properly if the server starts returning additional fields).
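
For illustration, a hypothetical fragment of a JSON Schema (as used in OpenAPI) containing this parameter; whether a client generated from it tolerates extra fields returned by the server depends entirely on the code generator:

{
  "type": "object",
  "additionalProperties": false,
  "properties": {
    "status": { "type": "string" }
  }
}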

Thus, using IDLs to describe APIs, with all the advantages they undeniably bring to the field, leads to one more facet of the technology drift problem: the IDL version and, more importantly, the versions of helper software based on it are constantly and sometimes unpredictably evolving.

NB: we are inclined to recommend sticking to reasonable practices, i.e., not using functionality that is controversial from the backward compatibility point of view (including the above-mentioned additionalProperties: false) and, while evaluating the safety of changes, assuming that spec-generated code behaves just like manually written code. If you still find yourself in a situation of unresolvable doubt, your only option is to manually check every code generator to determine whether its output continues working with the new version of the API.


    Backward Compatibility Policy

    To summarize the above:

• you will have to deploy new API versions because apps, platforms, and subject areas evolve; different areas evolve at different paces, but they never stop doing so;
    • finally, stating all three numbers (major version, minor version, and patch) allows for fixing a concrete API release with all its specificities (and errors), which — theoretically — means that the integration will remain operable till this version is physically available.

    Of course, preserving minor versions infinitely isn't possible (partly because of security and compliance issues that tend to pile up). However, providing such access for a reasonable period of time is rather a hygienic norm for popular APIs.
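For example (the URL scheme and the endpoint below are purely illustrative assumptions, not a prescribed convention), a client might pin any of the three components of the version:

// A hypothetical scheme of pinning the API version in the endpoint URL
async function getOrders(versionTag) {
  // 'v2'      pins the major version: all backward-compatible changes arrive automatically
  // 'v2.4'    pins major.minor: only patch-level fixes arrive
  // 'v2.4.18' pins the exact release: behavior is frozen, bugs included
  const response = await fetch(`https://api.example.com/${versionTag}/orders`);
  return response.json();
}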


NB. Sometimes, the following argument is put forward to defend the concept of a single accessible API version: preserving the SDK or API application server code is not enough to maintain strict backward compatibility, as it might rely on some non-versioned services (for example, data in the DB shared between all the API versions). We, however, consider this an additional reason to isolate such dependencies (see “The Serenity Notepad” chapter), as it means that changes to these subsystems might render the API inoperable.

    Chapter 27. On the Waterline of the Iceberg 

Before we start talking about extensible API design, we should discuss the hygienic minimum. A huge number of problems would never have happened if API vendors had paid more attention to marking their area of responsibility.

    Provide a Minimal Amount of Functionality

At any moment in its lifetime, your API is like an iceberg: it comprises an observable (i.e., documented) part and a hidden, undocumented one. If the API is designed properly, these two parts correspond to each other just like the above-water and under-water parts of a real iceberg do, i.e., one to ten. Why so? For two obvious reasons.


We presume we may skip the explanation of why such code must never be written under any circumstances. If you really provide a non-strictly consistent API, then either the createOrder operation must be asynchronous and return the result only when all replicas are synchronized, or the retry policy must be hidden inside the getStatus operation implementation (as sketched below).
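As a rough sketch of the second option (the lower-level call, the retry count, and the delay are all assumptions made for illustration, not a reference implementation), the polling could be encapsulated like this:

// `readStatusFromReplica` stands for the lower-level storage call; it is a
// hypothetical name, stubbed here only to keep the sketch self-contained
async function readStatusFromReplica(orderId) {
  // in reality: query a (possibly lagging) replica
  return undefined;
}

// Hiding the retry policy inside the getStatus implementation;
// the retry count and delay are arbitrary illustrative values
async function getStatus(orderId, { retries = 5, delayMs = 200 } = {}) {
  for (let attempt = 0; attempt < retries; attempt += 1) {
    const status = await readStatusFromReplica(orderId);
    if (status !== undefined) {
      return status;
    }
    // the replica might be lagging behind; wait a bit and try again
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error(`Order ${orderId} is not yet visible in the storage`);
}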


If you failed to describe the eventual consistency in the first place, you simply can't make these changes to the API without effectively breaking backward compatibility, which will lead to huge problems with your customers' apps, intensified by the fact that the issues can't be easily reproduced by QA engineers.

    Example #2. Take a look at the following code:

let resolve;
let promise = new Promise(
  function (innerResolve) { resolve = innerResolve; }
);

    Usually, just adding a new optional parameter to the existing interface is enough; in our case, adding non-mandatory options to the PUT /coffee-machines endpoint.
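For illustration, here is a sketch of such a call (the endpoint follows the coffee-machine example used throughout the book, while the shape and the names of the options field are assumptions made up here):

// A sketch of the extended request; omitting the new optional `options`
// field preserves the old behavior, so existing integrations are unaffected
function updateCoffeeMachines(partnerId, machines) {
  return fetch(`/v1/partners/${partnerId}/coffee-machines`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ coffee_machines: machines })
  });
}

updateCoffeeMachines('partner-1', [{
  id: 'abcd-efgh-0001',
  api_type: 'modern',
  // the newly added non-mandatory options (illustrative values)
  options: { volume: '800ml', lungo: true }
}]);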


NB. When we talk about defining the contract as it works right now, we're talking about internal agreements. We must have asked partners to support those three options while negotiating the interaction format. If we had failed to do so from the very beginning and are now defining these options in the course of expanding the public API, it would be a very strong bid to break backward compatibility, and we should never do that (see the previous chapter).

    Limits of Applicability

Though this exercise looks very simple and universal, its consistent usage is possible only if the hierarchy of entities is well-designed from the very beginning and, more importantly, the vector of further API expansion is clear. Imagine that after some time has passed, the options list gets new items, let's say, adding syrup or a second espresso shot. We are fully capable of expanding the list, but not the defaults. So the “default” PUT /coffee-machines interface will eventually become totally useless because the default set of three options will no longer be of any use and will also look ridiculous: why these three options, what are the selection criteria? In fact, the defaults and the method list will reflect the historical stages of our API development, and that's not at all what you'd expect from the helpers and defaults nomenclature.

    Alas, this dilemma can't be easily resolved. On one hand, we want developers to write neat and laconic code, so we must provide useful helpers and defaults. On the other hand, we can't know in advance which sets of options will be the most useful after several years of expanding the API.

• It describes the integrations we've already implemented nicely (it costs almost nothing to support the API types we already know) but brings no flexibility to the approach. In fact, we simply described what we'd already learned, not even trying to look at the larger picture.
  • This design is ultimately based on a single principle: every order preparation might be codified with these three imperative commands.

We may easily disprove statement #2, and that will uncover the implications of statement #1. To begin with, let us imagine that in the course of further service growth, we decided to allow end users to change the order after the execution has started, for example, to request contactless takeout. That would lead us to the creation of a new endpoint, let's say program_modify_endpoint, and new difficulties in data format development (as new fields for the contactless delivery requested and satisfied flags need to be passed in both directions, as sketched below). What is important is that both the endpoint and the new data fields would be optional because of the backward compatibility requirement.
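A minimal sketch of what those optional additions might look like (all field names here are assumptions made for illustration):

// The request gains an optional flag; older integrations simply never send it
const orderModificationRequest = {
  program_run_id: 'run-123',
  contactless_delivery_requested: true
};

// The state object gains an optional flag; older clients ignore the unknown field
const orderState = {
  program_run_id: 'run-123',
  status: 'in_progress',
  contactless_delivery_satisfied: false
};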

Now let's try to imagine a real-world example that doesn't fit into our “three imperatives to rule them all” picture. That's quite easy as well: what if we're plugging in not a coffee house but a vending machine via our API? On one hand, it means that the modify endpoint and all the related stuff are simply meaningless: the contactless takeout requirement means nothing to a vending machine. On the other hand, the machine, unlike a café operated by people, requires takeout approval: the end user places an order while being somewhere else, then walks to the machine and pushes the “get the order” button in the app. We might, of course, require the user to stand in front of the machine when placing an order, but that would contradict the entire product concept of users selecting and ordering beverages and then walking to the takeout point.

    Programmable takeout approval requires one more endpoint, let's say, program_takeout_endpoint. And so we've lost our way in a forest of five endpoints:


      Another reason to justify this solution is that major changes occurring at different abstraction levels have different weights:

      • if the technical level is under change, that must not affect product qualities and the code written by partners;
• if the product is changing, e.g., we start selling flight tickets instead of preparing coffee, there is literally no sense in preserving backward compatibility at the technical abstraction levels. Ironically, we may actually make our API sell tickets instead of brewing coffee without breaking backward compatibility, but the partners' code will still become obsolete.

      In conclusion, as higher-level APIs are evolving more slowly and much more consistently than low-level APIs, reverse strong coupling might often be acceptable or even desirable, at least from the price-quality ratio point of view.

      NB: many contemporary frameworks explore a shared state approach, Redux being probably the most notable example. In the Redux paradigm, the code above would look like this:


      Replacing specific implementations with interfaces not only allows us to respond more clearly to many concerns that pop up during the API design phase but also helps us to outline many possible API evolution directions, which should help us in avoiding API inconsistency problems in the future.

      Chapter 32. The Serenity Notepad 


Apart from the above-mentioned abstract principles, let us give a list of concrete recommendations: how to make changes in existing APIs to maintain backward compatibility.

      1. Remember the Iceberg's Waterline

Even if you haven't given any formal guarantees, it doesn't mean that you can violate informal ones. Often, merely fixing bugs in an API might render some developers' code inoperable. We can illustrate this with a real-life example that the author of this book actually faced once:

        3. Isolate the Dependencies

In the case of a gateway API that provides access to some underlying API or aggregates several APIs behind a single façade, there is a strong temptation to proxy the original interface as is, thus not introducing any changes to it and making life much simpler by sparing the effort needed to implement weakly coupled interaction between services. For example, while developing the program execution interfaces described in the “Separating Abstraction Levels” chapter, we might have taken the existing first-kind coffee-machine API as a role model and provided it in our API by just proxying the requests and responses as is. Doing so is highly undesirable for several reasons:

        • usually, you have no guarantees that the partner will maintain backward compatibility or at least keep new versions more or less conceptually akin to the older ones;
• any of the partner's problems will automatically ricochet onto your customers.

The best practice is quite the opposite: isolate the third-party API usage, i.e., develop an abstraction level (see the sketch after this list) that will allow for:

        • keeping backward compatibility intact because of extension capabilities incorporated in the API design;
        • negating partner's problems by technical means:
          • limiting the partner's API usage in case of load surges;
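As a rough sketch of such an isolation layer (the class, the partner client method, and the field mapping are all hypothetical assumptions), the adapter exposes a stable internal interface and caps the load we generate towards the partner:

// A sketch of an isolation layer around a third-party API.
// `thirdPartyClient` and its `fetchMachineState` method are hypothetical;
// the point is that the rest of our code depends only on this stable interface.
class CoffeeMachineAdapter {
  constructor(thirdPartyClient, { maxConcurrentRequests = 10 } = {}) {
    this.client = thirdPartyClient;
    this.limit = maxConcurrentRequests;
    this.inFlight = 0;
  }

  async getStatus(machineId) {
    if (this.inFlight >= this.limit) {
      // cap the load we generate towards the partner during surges
      throw new Error('Partner API limit reached, try again later');
    }
    this.inFlight += 1;
    try {
      const raw = await this.client.fetchMachineState(machineId);
      // translate the partner's format into our own stable one,
      // so partner-side changes stay contained inside this adapter
      return { machineId, isBusy: Boolean(raw.busy) };
    } finally {
      this.inFlight -= 1;
    }
  }
}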

            Different companies employ different approaches to determining the granularity of API services, i.e., what is counted as a separate product and what is not. To some extent, this is a matter of convenience and taste judgment. Consider splitting an API into parts if:

            • it makes sense for partners to integrate only one API part, i.e., there are some isolated subsets of the API that alone provide enough means to solve users' problems;
• API parts might be versioned separately and independently, and it is meaningful from the partners' point of view (this usually means that those “isolated” APIs are either fully independent or maintain strict backward compatibility and introduce new major versions only when absolutely necessary; otherwise, maintaining a matrix of which API #1 version is compatible with which API #2 version will soon become a catastrophe);
            • it makes sense to set tariffs and limits for each API service independently;
• the audience of the API segments (developers, business owners, or end users) does not overlap, and “selling” a granular API to customers is much easier than selling an aggregated one.
• if a version of the current page exists for newer API versions, there is an explicit link to the most recent version;
          • docs for deprecated API versions are pessimized or even excluded from indexing.

If you're strictly maintaining backward compatibility, it is possible to create a single documentation set for all API versions. To do so, each entity is to be marked with the API version it has been supported since. However, there is an apparent problem with this approach: it's not that simple to get the docs for a specific (outdated) API version (and, generally speaking, to understand which capabilities this API version provides). (Though the offline documentation we mentioned earlier will help.)
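For instance (a purely illustrative sketch, not a prescription), in a reference generated from code comments such markup might boil down to a version tag per entity and per parameter:

/**
 * Creates a new order.
 * @since 1.0
 * @param {Object} params
 * @param {string[]} [params.items] supported since 1.0
 * @param {Object} [params.delivery] supported since 1.4
 */
function createOrder(params) {
  // …
}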

          The problem becomes worse if you're supporting not only different API versions but also different environments / platforms / programming languages; for example, if your UI lib supports both iOS and Android. Then both documentation versions are equal, and it's impossible to pessimize one of them.

          In this case, you need to choose one of the following strategies:


Finally, the last aspect we would like to shed light on is managing partners' expectations regarding the further development of the API. When it comes to consumer qualities, APIs differ little from other B2B software products: in both cases, you need to form some understanding of SLA conditions, available features, interface responsiveness, and other characteristics that are important for clients. Still, APIs have their specificities.

            Versioning and Application Lifecycle

Ideally, an API, once published, should live eternally; but as we are all reasonable people, we understand that this is impossible in real life. Even if we continue supporting older versions, they will still become outdated eventually, and partners will need to rewrite their code to use newer functionality.


            The author of this book formulates the rule of issuing new major API versions like this: the period of time after which partners will need to rewrite the code should coincide with the application lifespan in the subject area (see “The Backward Compatibility Problem Statement” chapter). Apart from updating major versions, sooner or later you will face issues with accessing some outdated minor versions as well. As we mentioned in the “On the Waterline of the Iceberg” chapter, even fixing bugs might eventually lead to breaking some integrations, and that naturally leads us to the necessity of keeping older minor versions of the API until the partner resolves the problem.

In this aspect, integrating with large companies that have a dedicated software engineering department differs dramatically from providing a solution to individual amateur programmers: on one hand, the former are much more likely to find undocumented features and unfixed bugs in your code; on the other hand, because of internal bureaucracy, fixing the related issues might easily take months, if not years. The common recommendation here is to maintain old minor API versions for a period of time long enough for the most dilatory partner to switch to the newest version.

            Supporting Platforms

Another aspect crucial to interacting with large integrators is supporting a zoo of platforms (browsers, programming languages, protocols, operating systems) and their versions. As usual, big companies have their own policies on which platforms they support, and these policies might sometimes contradict common sense. (Let's say, it's rather time to abandon TLS 1.2, but many integrators continue working through this protocol, or even earlier ones.)


            Moving Forward

Finally, apart from those specific issues, your customers must care about more general questions: can they trust you? Can they rely on your API evolving and absorbing modern trends, or will they eventually find the integration with your API on the scrapyard of history? Let's be honest: given all the uncertainties of the API product vision, we are very much interested in the answers as well. Even the Roman viaduct, though remaining backward-compatible for two thousand years, has been a very archaic and unreliable way of solving customers' problems for quite a long time.

            You might work with these customer expectations by publishing roadmaps. It's quite common that many companies avoid publicly announcing their concrete plans (for a reason, of course). Nevertheless, in the case of APIs, we strongly recommend providing the roadmaps, even if they are tentative and lack precise dates — especially if we talk about deprecating some functionality. Announcing these promises (given the company keeps them, of course) is a very important competitive advantage to every kind of consumer.


With this, we would like to conclude this book. We hope that the principles and concepts we have outlined will help you create APIs that fit the needs of developers, businesses, and end users, and expand them (while maintaining backward compatibility) for the next two thousand years or so.