Finally, there is a third aspect to consider — the “canyon” you are crossing over with a bridge of your API. Developers write code that is executed in an environment beyond your control, and it evolves. New versions of operating systems, browsers, protocols, and programming language SDKs emerge. New standards are being developed and new arrangements are made, some of which are backward-incompatible, and there is nothing that can be done about that.
Older platform versions contribute to fragmentation just like older app versions as developers (including the API developers) struggle to support older platforms. At the same time, users face challenges with platform updates. In many cases, they are unable to update their devices to newer platform versions since newer platform versions require newer devices.
The most challenging aspect here is that not only does incremental progress, in the form of new platforms and protocols, necessitate changes to the API, but also the vulgar influence of trends. Several years ago realistic 3D icons were popular, but since then, public taste has changed in favor of flat and abstract ones. UI component developers had to follow the fashion, rebuilding their libraries by either shipping new icons or replacing the old ones. Similarly, the current trend of integrating the “night mode” feature has become widespread, demanding changes in a wide range of APIs.
In the case of the API-first approach, the backward compatibility problem adds another dimension: the specification and code generation based on it. It becomes possible to break backward compatibility without breaking the spec (for example, by introducing eventual consistency instead of strict consistency) — and vice versa, modify the spec in a backward-incompatible manner without changing anything in the protocol and therefore not affecting existing integrations at all (for example, by replacing additionalProperties: false with true in OpenAPI).
The question of whether two specification versions are backward-compatible or not belongs to a gray zone, as specification standards themselves do not define this. Generally speaking, the statement “specification change is backward-compatible” is equivalent to “any client code written or generated based on the previous version of the spec continues to work correctly after the API vendor releases the new API version implementing the new version of the spec.” Practically speaking, following this definition is quite unrealistic, if only because it is impossible to learn the behavior of every piece of code-generating software out there (for instance, it's rather hard to say whether code generated based on a specification that includes the parameter additionalProperties: false will still function properly if the server starts returning additional fields).
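To illustrate, here is a hypothetical sketch (not tied to any particular code generator) of the kind of strict validation code that might be generated from a schema declaring additionalProperties: false; the field names are illustrative:

```javascript
// Hypothetical sketch of generated strict validation code for a
// schema declaring `additionalProperties: false`
const KNOWN_FIELDS = new Set(['id', 'status']);

function validateOrder(payload) {
  for (const key of Object.keys(payload)) {
    if (!KNOWN_FIELDS.has(key)) {
      // Generated validators commonly reject unknown fields outright
      throw new Error(`Unexpected field: ${key}`);
    }
  }
  return payload;
}

// Works with the original response format
validateOrder({ id: '123', status: 'ready' });

// Breaks as soon as the server adds a new field, even though the
// API vendor might consider this change backward-compatible
try {
  validateOrder({ id: '123', status: 'ready', estimate: 600 });
} catch (e) {
  console.log(e.message); // "Unexpected field: estimate"
}
```

Whether real generated code behaves this strictly depends entirely on the code generator, which is exactly the point: the spec alone does not answer the question.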
Thus, using IDLs to describe APIs, with all the advantages they undeniably bring to the field, leads to one more aspect of the technology drift problem: the IDL version and, more importantly, versions of helper software based on it, are constantly and sometimes unpredictably evolving. If an API vendor employs the “code-first” approach, meaning that the spec is generated based on the actual API code, the occurrence of backward-incompatible changes in the “server code → spec → code-generated SDK → client app” chain is only a matter of time.
Of course, preserving minor versions indefinitely is not possible (partly because of security and compliance issues that tend to accumulate). However, providing such access for a reasonable period of time is considered a hygienic norm for popular APIs.
NB. Sometimes to defend the concept of a single accessible API version, the following argument is put forward: preserving the SDK or API application server code is not enough to maintain strict backward compatibility as it might rely on some unversioned services (for example, data in the DB shared between all API versions). However, we consider this an additional reason to isolate such dependencies (see “The Serenity Notepad” chapter) as it means that changes to these subsystems might result in the API becoming inoperable.
Before we start talking about extensible API design, we should discuss the hygienic minimum. A huge number of problems would have never happened if API vendors had paid more attention to marking their area of responsibility.
At any moment in its lifetime, your API is like an iceberg: it comprises an observable (i.e., documented) part and a hidden, undocumented one. If the API is designed properly, these two parts correspond to each other just like the above-water and under-water parts of a real iceberg do, i.e., one to ten. Why so? Because of two obvious reasons.
Revoking the API functionality causes losses. If you've promised to provide some functionality, you will have to do so “forever” (until this API version's maintenance period is over). Pronouncing some functionality deprecated is a tricky thing, potentially alienating your customers.
A rule of thumb is very simple: if some functionality might be withheld, then never expose it until you really need to. It might be reformulated like this: every entity, every field, and every public API method is a product decision. There must be solid product reasons why some functionality is exposed.
Your obligations to maintain some functionality must be stated as clearly as possible, especially regarding those environments and platforms where no native capability to restrict access to undocumented functionality exists. Unfortunately, developers tend to consider some private features they found to be eligible for use, thus presuming the API vendor shall maintain them intact. The policy on such “findings” must be articulated explicitly. At the very least, in the case of such non-authorized usage of undocumented functionality, you might refer to the docs and be within your rights in the eyes of the community.
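For instance, in JavaScript, where any property of a returned object is reachable by client code, the only reliable way to keep internal machinery out of the de facto surface is to never expose it in the first place. A minimal sketch (the entities are invented for illustration):

```javascript
// A sketch of exposing only the documented surface: internal state
// lives in a closure and cannot be discovered or relied upon
function createApi() {
  // Undocumented internals, unreachable from the outside
  const internal = { requestCount: 0 };

  return {
    // The documented part of the API
    createOrder() {
      internal.requestCount += 1;
      return { id: String(internal.requestCount) };
    },
  };
}

const api = createApi();
console.log(api.createOrder().id); // "1"
console.log('internal' in api);    // false
```

Where such isolation is impossible, the documentation has to carry the entire burden of delimiting the supported surface.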
However, API developers often legitimize such gray zones themselves, for example, by returning undocumented fields in endpoint responses or by using private functionality in code examples: in the docs, responding to support inquiries, in conference talks, etc. One cannot make a partial commitment: either you guarantee this code will always work, or do not slip the slightest note that such functionality exists.
The third principle is much less obvious. Pay close attention to the code that you're suggesting developers write: are there any conventions that you consider self-evident but never wrote down?
Example #1. Let's take a look at this order processing SDK example:
// Creates an order
let order = api.createOrder();
// Returns the order status
let status = api.getStatus(order.id);
Let's imagine that you're struggling with scaling your service, and at some point switched to eventual consistency, as we discussed in the corresponding chapter. What would be the result? The code above will stop working. A user creates an order, then tries to get its status but receives an error instead. It's very hard to predict what approach developers would implement to tackle this error. They probably would not expect this to happen at all.
You may say something like, “But we've never promised strict consistency in the first place” — and that is obviously not true. You may say that if, and only if, you have really described the eventual consistency in the createOrder docs, and all your SDK examples look like this:
let order = api.createOrder();
let status;
while (true) {
  try {
    status = api.getStatus(order.id);
  } catch (e) {
    // Retrying: the error might be
    // caused by eventual consistency
  }
  if (status) {
    break;
  }
}
We presume we may skip the explanations of why such code must never be written under any circumstances. If you're really providing a non-strictly consistent API, then either the createOrder operation must be asynchronous and return the result when all replicas are synchronized, or the retry policy must be hidden inside the getStatus operation implementation.
If you failed to describe the eventual consistency in the first place, then you simply couldn't make these changes in the API. You will effectively break backward compatibility, which will lead to huge problems with your customers' apps, intensified by the fact that they can't be simply reproduced by QA engineers.
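The “hidden retry policy” option mentioned above might look like the following sketch. The transport layer is simulated here, and the names and retry parameters are illustrative, not part of any real SDK:

```javascript
// Simulated eventually consistent backend: a replica "sees" a newly
// created order only after the replication lag has passed
function makeReplica(replicationLagMs) {
  const createdAt = Date.now();
  return {
    async get() {
      if (Date.now() - createdAt < replicationLagMs) {
        return { httpStatus: 404 };
      }
      return { httpStatus: 200, body: { status: 'created' } };
    },
  };
}

// The SDK hides retries from the developer: a 404 right after
// creation is treated as "not replicated yet," not as an error
async function getStatusWithRetries(
  replica, orderId, { retries = 10, delayMs = 20 } = {}
) {
  for (let attempt = 0; attempt < retries; attempt += 1) {
    const response = await replica.get(orderId);
    if (response.httpStatus === 200) {
      return response.body.status;
    }
    // Wait for the replicas to catch up before trying again
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error(`Order ${orderId} not found after ${retries} attempts`);
}
```

With such an implementation, the naive “create, then immediately poll the status” pattern keeps working even after the switch to eventual consistency.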
Example #2. Take a look at the following code:
let resolve;
let promise = new Promise(
  function (innerResolve) {
    resolve = innerResolve;
  }
);
resolve();
This code presumes that the callback function passed to a new Promise will be executed synchronously, and the resolve variable will be initialized before the resolve() function call is executed. But this assumption is based on nothing: there are no clues indicating that the new Promise constructor executes the callback function synchronously.
Of course, the developers of the language standard can afford such tricks; but you as an API developer cannot. You must at least document this behavior and make the signatures point to it. Actually, the best practice is to avoid such conventions since they are simply not obvious when reading the code. And of course, under no circumstances dare you change this behavior to an asynchronous one.
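One way to avoid such implicit conventions in your own API (a common community practice, not a requirement of any standard) is to guarantee that callbacks are always invoked asynchronously, so client code can never come to depend on synchronous execution:

```javascript
// The documented contract here: the callback is *never* invoked
// synchronously, even if the data is already available
function subscribe(eventName, callback) {
  queueMicrotask(() => callback({ event: eventName }));
}

let called = false;
subscribe('ready', () => {
  called = true;
});
// The callback has not run yet at this point
console.log(called); // false
```

Whichever convention you choose, it must be stated explicitly so that the timing of callback invocations is part of the contract, not an accident of the implementation.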
Example #3. Imagine you're providing an animations API, which includes two independent functions:
// Animates object's width,
// beginning with the first value,
// ending with the second
object.animateWidth('100px', '500px');
// Observes the object's width changes
object.observe(
  'widthchange', observerFunction
);
A question arises: how frequently and at what moments will the observerFunction be called? Let's assume in the first SDK version we emulated step-by-step animation at 10 frames per second. Then the observerFunction will be called 10 times, getting the values '140px', '180px', etc., up to '500px'. But then, in a new API version, we have switched to implementing both functions atop the operating system's native functionality. Therefore, you simply don't know when and how frequently the observerFunction will be called.
Just changing the call frequency might result in making some code dysfunctional, for example, if the callback function performs some complex calculations and no throttling is implemented because developers relied on your SDK's built-in throttling. Additionally, if the observerFunction ceases to be called exactly when the '500px' value is reached due to system algorithm specifics, some code will be broken beyond any doubt.
In this example, you should document the concrete contract (how often the observer function is called) and stick to it even if the underlying technology is changed.
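If the contract says, for instance, “the observer is called at most N times per second,” the SDK can enforce it regardless of what the underlying system does. A sketch (the helper name and the contract itself are illustrative):

```javascript
// Wraps an observer so that the documented call frequency is kept
// stable even if the underlying implementation changes
function throttleObserver(observerFunction, maxCallsPerSecond) {
  const minIntervalMs = 1000 / maxCallsPerSecond;
  let lastCallTime = -Infinity;
  return (value) => {
    const now = Date.now();
    if (now - lastCallTime >= minIntervalMs) {
      lastCallTime = now;
      observerFunction(value);
    }
  };
}

let observedValues = [];
const observer = throttleObserver((v) => observedValues.push(v), 10);
// A burst of rapid native events is collapsed into a single call
observer('140px');
observer('180px');
console.log(observedValues); // ['140px']
```

A complete solution would also need to guarantee that the final value is always delivered, which is exactly the kind of detail the documented contract has to spell out.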
Example #4. Imagine that customer orders are passing through a specific pipeline:
GET /v1/orders/{id}/events/history
→
{ "event_history": [
  { "status": "created", … },
  { "status": "payment_approved", … },
  { "status": "preparing_started", … },
  { "status": "ready", … }
]}
Suppose at some moment we decided to allow trustworthy clients to get their coffee in advance, before the payment is confirmed. So an order will jump straight to "preparing_started" or even "ready" without a "payment_approved" event being emitted. It might appear to you that this modification is backward-compatible since you've never really promised any specific event order being maintained, but it is not.
Let's assume that a developer (probably your company's business partner) wrote some code implementing some valuable business procedures, for example, gathering income and expenses analytics. It's quite logical to expect this code operates a state machine that switches from one state to another depending on specific events. This analytical code will be broken if the event order changes. In the best-case scenario, a developer will get some exceptions and will have to cope with the error's cause. In the worst case, partners will operate on incorrect statistics for an indefinite period of time until they find the issue.
A proper decision would be, first, documenting the event order and the allowed states; second, continuing to generate the "payment_approved" event before the "preparing_started" one (since you're making a decision to prepare that order, you're in fact approving the payment) and adding extended payment information.
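The partner-side analytical code the text describes might resemble the following sketch; the transition table assumes the documented order created → payment_approved → preparing_started → ready and is purely illustrative:

```javascript
// Allowed status transitions as the partner understood them from
// the original (implicit) behavior of the API
const ALLOWED_TRANSITIONS = {
  created: ['payment_approved'],
  payment_approved: ['preparing_started'],
  preparing_started: ['ready'],
  ready: [],
};

// Replays an event history, throwing on any undocumented transition
function replayHistory(events) {
  let state = events[0];
  for (const next of events.slice(1)) {
    if (!ALLOWED_TRANSITIONS[state].includes(next)) {
      throw new Error(`Unexpected transition: ${state} -> ${next}`);
    }
    state = next;
  }
  return state;
}

// Works with the original event order
replayHistory(['created', 'payment_approved', 'preparing_started', 'ready']);

// Breaks once "payment_approved" is skipped for trusted clients
try {
  replayHistory(['created', 'preparing_started']);
} catch (e) {
  console.log(e.message); // "Unexpected transition: created -> preparing_started"
}
```

Code like this is perfectly reasonable for a partner to write, which is why the event order must be treated as part of the contract.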
This example leads us to the last rule.
State transition graph, event order, possible causes of status changes, etc. — such critical things must be documented. However, not every piece of business logic can be defined in the form of a programmable contract; some cannot be represented in a machine-readable form at all.
Imagine that one day you start taking phone calls. A client may contact the call center to cancel an order. You might even make this functionality technically backward-compatible by introducing new fields to the “order” entity. But the end user might simply know the number and call it even if the app wasn't suggesting anything like that. The partner's business analytical code might be broken as well, or start displaying weather on Mars, since it was written without knowing about the possibility of canceling orders in circumvention of the partner's systems.
A technically correct decision would be to add a “canceling via call center allowed” parameter to the order creation function. Then, call center operators might only cancel those orders that were created with this flag set. But that would be a bad decision from a product point of view because it is not obvious to users that they can cancel some orders by phone and not others. The only “good” decision in this situation is to foresee the possibility of external order cancellations in the first place. If you haven't foreseen it, your only option is the “Serenity Notepad” that will be discussed in the last chapter of this Section.
In the previous chapters, we have attempted to outline theoretical rules and illustrate them with practical examples. However, understanding the principles of designing change-proof APIs requires practice above all else. The ability to anticipate future growth problems comes from a handful of grave mistakes once made. While it is impossible to foresee everything, one can develop a certain technical intuition.
Therefore, in the following chapters, we will test the robustness of our study API from the previous Section, examining it from various perspectives to perform a “variational analysis” of our interfaces. More specifically, we will apply a “What If?” question to every entity, as if we are to provide a possibility to write an alternate implementation of every piece of logic.
NB. In our examples, the interfaces will be constructed in a manner allowing for dynamic real-time linking of different entities. In practice, such integrations usually imply writing ad hoc server-side code in accordance with specific agreements made with specific partners. But for educational purposes, we will pursue more abstract and complicated ways. Dynamic real-time linking is more typical in complex program constructs like operating system APIs or embeddable libraries; giving educational examples based on such sophisticated systems would be too inconvenient.
Let's start with the basics. Imagine that we haven't exposed any other functionality but searching for offers and making orders, thus providing an API of two methods: POST /offers/search and POST /orders.
Let us make the next logical step there and suppose that partners will wish to dynamically plug their own coffee machines (operating some previously unknown types of API) into our platform. To allow doing so, we have to negotiate a callback format that would allow us to call partners' APIs and expose two new endpoints providing the following capabilities:
There are obvious local problems with this approach (like the inconsistency in functions' behavior or the bugs that were not found while testing the code), but also a bigger one: your API might be simply unusable if a developer tries any non-mainstream approach because of performance issues, bugs, instability, etc., as the API developers themselves never tried to use this public interface for anything important.
NB. The perfect example of avoiding this anti-pattern is the development of compilers; usually, the next compiler's version is compiled with the previous compiler's version.
Whatever tips and tricks described in the previous chapters you use, it's often quite probable that you can't do anything to prevent API inconsistencies from piling up. It's possible to reduce the speed of this stockpiling, foresee some problems, and have some interface durability reserved for future use. But one can't foresee everything. At this stage, many developers tend to make some rash decisions, e.g., releasing a backward-incompatible minor version to fix some design flaws.
We highly recommend never doing that. Remember that the API is also a multiplier of your mistakes. What we recommend is to keep a serenity notepad — to write down the lessons learned, and not to forget to apply this knowledge when a new major API version is released.
The problem of designing HTTP APIs is unfortunately one of the most “holywar”-inspiring issues. On one hand, it is one of the most popular technologies but, on the other hand, it is quite complex and difficult to comprehend due to the large and fragmented standard split into many RFCs. As a result, the HTTP specification is doomed to be poorly understood and imperfectly interpreted by millions of software engineers and thousands of textbook writers. Therefore, before proceeding to the useful part of this Section, we must clarify exactly what we are going to discuss.
It has somehow happened that the entire modern network stack used for developing client-server APIs has been unified in two important points. One of them is the Internet Protocol Suite, which comprises the IP protocol as a base and an additional layer on top of it in the form of either the TCP or UDP protocol. Today, alternatives to the TCP/IP stack are used for a very limited subset of engineering tasks.
We will refer to such APIs as “HTTP APIs” or “JSON-over-HTTP APIs.” We understand that this is a loose interpretation of the term, but we prefer to live with that rather than using the phrase “JSON-over-HTTP endpoints utilizing the semantics described in the HTTP and URL standards” each time.
Before we proceed to discuss HTTP API design patterns, we feel obliged to clarify one more important terminological issue. Often, an API matching the description we gave in the previous chapter is called a “REST API” or a “RESTful API.” In this Section, we don't use any of these terms as it makes no practical sense.
What is “REST”? In 2000, Roy Fielding, one of the authors of the HTTP and URI specifications, published his doctoral dissertation titled “Architectural Styles and the Design of Network-based Software Architectures,” the fifth chapter of which was named “Representational State Transfer (REST).”
As anyone can attest by reading this chapter, it features a very abstract overview of a distributed client-server architecture that is not bound to either HTTP or URL. Furthermore, it does not discuss any API design recommendations. In this chapter, Fielding methodically enumerates restrictions that any software engineer encounters when developing distributed client-server software. Here they are: the client-server architecture, stateless design, cacheability, the uniform interface, the layered system, and (optionally) code-on-demand.
We have no idea why, out of all the overviews of abstract network-based software architecture, Fielding's concept gained such popularity. It is obvious that Fielding's theory, reflected in the minds of millions of software developers, became a genuine engineering subculture. By reducing the REST idea to the HTTP protocol and the URL standard, the chimera of a “RESTful API” was born, of which nobody knows the definition.
Do we want to say that REST is a meaningless concept? Definitely not. We only aimed to explain that it allows for quite a broad range of interpretations, which is simultaneously its main power and its main weakness.
On one hand, thanks to the multitude of interpretations, the API developers have built a perhaps vague but useful view of “proper” HTTP API architecture. On the other hand, the lack of concrete definitions has made REST API one of the most “holywar”-inspiring topics, and these holywars are usually quite meaningless as the popular REST concept has nothing to do with the REST described in Fielding's dissertation (and even more so, with the REST described in Fielding's manifesto of 2008).
The terms “REST architectural style” and its derivative “REST API” will not be used in the following chapters since it makes no practical sense as we explained above. We referred to the constraints described by Fielding many times in the previous chapters because, let us emphasize it once more, it is impossible to develop distributed client-server APIs without taking them into account. However, HTTP APIs (meaning JSON-over-HTTP endpoints utilizing the semantics described in the HTTP and URL standards) as we will describe them in the following chapter align well with the “average” understanding of “REST / RESTful API” as per numerous tutorials on the Web.
The third important exercise we must conduct is to describe the format of an HTTP request and response and explain the basic concepts. Many of these may seem obvious to the reader. However, the situation is that even the basic knowledge we require to move further is scattered across vast and fragmented documentation, causing even experienced developers to struggle with some nuances. Below, we will try to compile a structured overview that is sufficient to design HTTP APIs.
To describe the semantics and formats, we will refer to the brand-new RFC 9110, which replaces no fewer than nine previous specifications dealing with different aspects of the technology. However, a significant volume of additional functionality is still covered by separate standards. In particular, the HTTP caching principles are described in the standalone RFC 9111, while the popular PATCH method is omitted in the main RFC and is regulated by RFC 5789.
An HTTP request consists of (1) applying a specific verb to a URL, stating (2) the protocol version, (3) additional meta-information in headers, and (4) optionally, some content (request body):
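For example, a request to our study API might look like this (the host, token value, and body fields are illustrative):

```
POST /v1/orders HTTP/1.1
Host: api.example.com
Content-Type: application/json
X-Idempotency-Token: a101138f-a542-4a88

{ "coffee_machine_id": "lx-100" }
```

Here, POST /v1/orders is the verb applied to the URL (1), HTTP/1.1 states the protocol version (2), the Host, Content-Type, and X-Idempotency-Token lines are headers (3), and the JSON document is the request body (4).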
A path to the resource, beginning with the / symbol and ending with either ? or # symbols or the end of the line. It is customary to organize paths hierarchically, using the / symbol as a delimiter; however, the standard does not define any semantics for it. Notably, a path with a trailing slash and one without it (for example, /root/leaf and /root/leaf/) are considered different paths according to the standard; consequently, two URLs that differ only in trailing slashes in their paths are considered different as well. However, we are unaware of a single argument to differentiate such URLs in practice. Paths may also comprise . and .. parts, which are supposed to be interpreted similarly to analogous symbols in file paths (meaning that /root/leaf, /root/./leaf, and /root/branch/../leaf are equivalent).

A query, beginning with the ? symbol and ending with either # or the end of the line. It is customary to organize a query as a list of key=value pairs split by the & character. Again, the standard does not require this or define the semantics.

As both static and behavioral analyses are heuristic, it's highly desirable not to make decisions based solely on their outcome but rather to ask the suspicious users to additionally prove they're making legitimate requests. If such a mechanism is in place, the quality of an anti-fraud system will be dramatically improved, as it allows for increasing system sensitivity and enabling pro-active defense, i.e., asking users to pass the tests in advance.
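Such a policy can be sketched as follows (a simplified illustration: the score threshold and signal names are made up and not a reference implementation):

```python
def assess_request(signals: dict) -> str:
    """Combine heuristic signals into a fraud score and decide on an action.

    Instead of a binary allow/ban decision, a middle band of scores
    triggers an additional verification step (e.g., a captcha), which
    allows keeping the heuristics sensitive without punishing
    legitimate users for false positives.
    """
    score = 0.0
    if signals.get("requests_per_minute", 0) > 100:
        score += 0.5  # behavioral signal: abnormally high request rate
    if not signals.get("executed_javascript", True):
        score += 0.3  # environment signal: client never ran our JS
    if signals.get("known_datacenter_ip", False):
        score += 0.3  # static signal: IP belongs to a hosting provider

    if score >= 0.8:
        return "block"
    if score >= 0.3:
        return "require_verification"  # ask the user to prove legitimacy
    return "allow"

# A suspicious-but-not-certain client is asked for a second factor:
print(assess_request({"requests_per_minute": 150}))  # require_verification
```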
In the case of services for end users, the main method of acquiring the second factor is redirecting to a captcha page. In the case of APIs it might be problematic, especially if you initially neglected the “Stipulate Restrictions” rule we've given in the “Describing Final Interfaces” chapter. In many cases, you will have to impose this responsibility on partners (i.e., it will be partners who show captchas and identify users based on the signals received from the API endpoints). This will, of course, significantly impair the convenience of working with the API.
NB. Instead of captcha, there might be other actions introducing additional authentication factors. It might be the phone number confirmation or the second step of the 3D-Secure protocol. The important part is that requesting an additional authentication step must be stipulated in the program interface, as it can't be added later in a backward-compatible manner.
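One way to stipulate it is to reserve a dedicated error in the interface from the very beginning (the status code, field names, and URL below are hypothetical), so that clients are prepared to handle it even while it is never actually returned:

```
HTTP/1.1 403 Forbidden
Content-Type: application/json

{
  "reason": "additional_verification_required",
  "verification_url": "https://api.example.com/v1/verify"
}
```

A client that handles this error from day one will not break when the additional step is eventually enabled.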
Other popular mechanisms for identifying robots include offering a bait (“honeypot”) or employing execution environment checks (ranging from rather trivial ones, like executing JavaScript on the webpage, to sophisticated techniques of checking application integrity checksums).
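As an illustration of the honeypot idea (a simplified sketch; the field name is hypothetical): a form may include a field that is hidden from humans by CSS, so any submission that fills it in was almost certainly produced by a robot blindly populating every field:

```python
# The honeypot field is rendered in the form but hidden via CSS,
# so legitimate users never fill it in.
HONEYPOT_FIELD = "company_website"  # hypothetical field name

def looks_like_bot(form_data: dict) -> bool:
    """A submission with a non-empty honeypot value is flagged as a robot."""
    return bool(form_data.get(HONEYPOT_FIELD, "").strip())

print(looks_like_bot({"email": "user@example.com"}))              # False
print(looks_like_bot({"email": "x", "company_website": "spam"}))  # True
```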
The illusion of having a broad choice of technical means of identifying fraudulent users should not deceive you, as you will soon discover the lack of effective methods of restricting those users. Banning them by cookie / Referer / User-Agent makes little to no impact as this data is supplied by clients and might be easily forged. In the end, you have four mechanisms for suppressing illegal activities:
In this aspect, integrating with large companies that have a dedicated software engineering department differs dramatically from providing a solution to individual amateur programmers: on one hand, the former are much more likely to find undocumented features and unfixed bugs in your code; on the other hand, because of the internal bureaucracy, fixing the related issues might easily take months, if not years. The common recommendation here is to maintain old minor API versions for a period of time long enough for the most dilatory partner to switch to the newest version.
Another aspect crucial to interacting with large integrators is supporting a zoo of platforms (browsers, programming languages, protocols, operating systems) and their versions. As a rule, big companies have their own policies regarding which platforms they support, and these policies might sometimes contradict common sense. (For example, it is probably high time to abandon TLS 1.2, but many integrators continue to work over this protocol, or even earlier ones.)
Formally speaking, ceasing support of a platform is a backward-incompatible change, and it might lead to breaking some integrations for some end users. So it's highly important to have clearly formulated policies regarding which platforms are supported and based on which criteria. In the case of mass public APIs, that's usually simple (for example, the API vendor promises to support platforms that have more than N% penetration, or, even easier, just the last M versions of a platform); in the case of commercial APIs, it's always a bargain based on estimations of how much not supporting a specific platform would cost the company. And of course, the outcome of the bargain must be stated in the contracts: what exactly you're promising to support and for which period of time.
Finally, apart from those specific issues, your customers are bound to care about more general questions: can they trust you? Can they rely on your API evolving and absorbing modern trends, or will they eventually find the integration with your API on the scrapyard of history? Let's be honest: given all the uncertainties of the API product vision, we are very much interested in the answers as well. Even the Roman viaduct, though remaining backward-compatible for two thousand years, has been a very archaic and unreliable way of solving customers' problems for quite a long time.
You might work with these customer expectations by publishing roadmaps. It's quite common for companies to avoid publicly announcing their concrete plans (for a good reason, of course). Nevertheless, in the case of APIs, we strongly recommend providing roadmaps, even if they are tentative and lack precise dates, especially if we talk about deprecating some functionality. Announcing these promises (given the company keeps them, of course) is a very important competitive advantage for every kind of consumer.
With this, we would like to conclude this book. We hope that the principles and concepts we have outlined will help you create APIs that fit the needs of developers, businesses, and end users, and expand them (while maintaining backward compatibility) for the next two thousand years or so.