
style fix

This commit is contained in:
Sergey Konstantinov 2023-04-09 19:25:58 +03:00
parent 40e1c64c1b
commit 694747340e
16 changed files with 88 additions and 87 deletions

View File

@ -4,7 +4,7 @@ When all entities, their responsibilities, and their relations to each other are
An important assertion at number 0:
##### 0. Rules must not be applied unthinkingly
##### 0. Rules Must Not Be Applied Unthinkingly
Rules are just formulated generalizations from one's experience. They are not to be applied unconditionally, and they don't make thinking redundant. Every rule has a rational reason to exist. If your situation doesn't justify following a rule — then you shouldn't follow it.
@ -14,11 +14,11 @@ This idea applies to every concept listed below. If you get an unusable, bulky,
It is important to understand that you can always introduce concepts of your own. For example, some frameworks willfully reject paired `set_entity` / `get_entity` methods in favor of a single `entity()` method with an optional argument. The crucial part is being systematic in applying the concept. If it is adopted, you must apply it to every single API method, or at the very least elaborate a naming rule to discern such polymorphic methods from regular ones.
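For illustration, a minimal sketch of such a polymorphic accessor (the `Widget` class and its field are invented here, not taken from the book's API):

```
class Widget {
  private title = "";

  // called with an argument — acts as a setter; without one — as a getter
  entity(value?: string): string {
    if (value !== undefined) {
      this.title = value;
    }
    return this.title;
  }
}

const w = new Widget();
w.entity("Espresso machine"); // set
console.log(w.entity());      // get → "Espresso machine"
```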
#### Ensuring readability and consistency
#### Ensuring Readability and Consistency
The most important task for the API vendor is to make code written by third-party developers atop the API easily readable and maintainable. Remember that the law of large numbers works against you: if some concept or signature might be misinterpreted, it inevitably will be by a number of partners, and this number will only grow with the API's popularity.
##### Explicit is always better than implicit
##### Explicit Is Always Better Than Implicit
An entity's name must explicitly tell what it does and what side effects to expect from using it.
@ -62,7 +62,7 @@ Two important implications:
**1.2.** If your API's nomenclature contains both synchronous and asynchronous operations, then (a)synchronicity must be apparent from signatures, **or** a naming convention must exist.
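For example, a hedged sketch of such a naming convention (the function names below are ours): the `Async` suffix and the `Promise` return type make the asynchronicity visible in the signature itself.

```
// synchronous variant: the result is available immediately
function getEstimatedPrice(recipe: string): number {
  return recipe === "lungo" ? 1.25 : 1.0;
}

// asynchronous variant: both the suffix and the Promise type signal it
async function getEstimatedPriceAsync(recipe: string): Promise<number> {
  return recipe === "lungo" ? 1.25 : 1.0;
}
```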
##### Specify which standards are used
##### Specify Which Standards Are Used
Regretfully, humanity is unable to agree on the most trivial things, like which day starts the week, to say nothing of more sophisticated standards.
@ -87,7 +87,7 @@ One particular implication of this rule is that money sums must *always* be acco
It is also worth saying that in some areas the situation with standards is so spoiled that, whatever you do, someone will get upset. A “classical” example is the order of geographical coordinates (latitude-longitude vs longitude-latitude). Alas, the only working method of fighting frustration there is the “Serenity Notepad” to be discussed in Section II.
##### Entities must have concrete names
##### Entities Must Have Concrete Names
Avoid single amoeba-like words, such as “get,” “apply,” “make,” etc.
@ -95,7 +95,7 @@ Avoid single amoeba-like words, such as “get,” “apply,” “make,” etc.
**Better**: `user.get_id()`.
##### Don't spare the letters
##### Don't Spare the Letters
In the 21st century, there's no need to shorten entities' names.
@ -124,7 +124,7 @@ str_search_for_characters(
**NB**: sometimes field names are shortened or even omitted (e.g., a heterogeneous array is passed instead of a set of named fields) to lessen the amount of traffic. In most cases, this is absolutely meaningless as the data is usually compressed at the protocol level.
##### Naming implies typing
##### Naming Implies Typing
A field named `recipe` must be of a `Recipe` type. A field named `recipe_id` must contain a recipe identifier that we could find within the `Recipe` entity.
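A tiny sketch of this rule expressed in type declarations (the surrounding types are illustrative, not the book's exact nomenclature):

```
interface Recipe {
  id: string;
  name: string;
}

interface Order {
  recipe: Recipe;    // the field named `recipe` holds a Recipe object
  recipe_id: string; // the field named `recipe_id` holds a Recipe identifier
}
```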
@ -157,7 +157,7 @@ The word “function” is many-valued. It could mean built-in functions, but al
**Better**: `GET /v1/coffee-machines/{id}/builtin-functions-list`
##### Matching entities must have matching names and behave alike
##### Matching Entities Must Have Matching Names and Behave Alike
**Bad**: `begin_transition` / `stop_transition`
`begin` and `stop` terms don't match; developers will have to dig into the docs.
@ -183,7 +183,7 @@ Several rules are violated:
We're leaving the exercise of making these signatures better for the reader.
##### Avoid double negations
##### Avoid Double Negations
**Bad**: `"dont_call_me": false`
— humans are bad at perceiving double negation and tend to make mistakes.
@ -212,7 +212,7 @@ GET /coffee-machines/{id}/stocks
```
— then developers will have to evaluate the `!beans_absence && !cup_absence` flag, which is equivalent to the `!(beans_absence || cup_absence)` condition, and in this conversion people tend to make mistakes. Avoiding double negations helps little here, and regretfully only general advice could be given: avoid situations when developers have to evaluate such flags.
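A hedged sketch of the same check with positive flags (the field names below are ours, not the book's exact format):

```
interface StockStatus {
  has_beans: boolean;
  has_cup: boolean;
}

// reads naturally, with no negations to mentally invert
function canMakeCoffee(stock: StockStatus): boolean {
  return stock.has_beans && stock.has_cup;
}
```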
##### Avoid implicit type conversion
##### Avoid Implicit Type Conversion
This advice is opposite to the previous one, ironically. When developing APIs you frequently need to add a new optional field with a non-empty default value. For example:
@ -295,7 +295,7 @@ POST /v1/users
**NB**: the contradiction with the previous rule lies in the necessity of introducing “negative” flags (the “no limit” flag), which we had to rename to `abolish_spending_limit`. Though it's a decent name for a negative flag, its semantics is still unobvious, and developers will have to read the docs. That's the way.
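A minimal sketch of such an explicit contract (the `monthly_spending_limit` field is our illustrative addition): instead of overloading an absent or `null` limit with a special meaning, the “no limit” state is a separate flag, and the limit itself is required whenever the flag is not set.

```
interface SpendingSettings {
  abolish_spending_limit: boolean;
  // decimal string; required unless the limit is abolished
  monthly_spending_limit?: string;
}

function validateSettings(s: SpendingSettings): void {
  if (!s.abolish_spending_limit && s.monthly_spending_limit === undefined) {
    throw new Error(
      "monthly_spending_limit is required unless abolish_spending_limit is true"
    );
  }
}
```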
##### No results is a result
##### No Results Is a Result
If a server processed a request correctly and no exceptional situation occurred — there must be no error. Regretfully, a widespread antipattern is to throw errors when zero results are found.
@ -330,7 +330,7 @@ POST /v1/coffee-machines/search
This rule might be reduced to: if an array is the result of the operation, then the emptiness of that array is not a mistake, but a correct response. (Of course, if an empty array is acceptable semantically; an empty array of coordinates is a mistake for sure.)
##### Errors must be informative
##### Errors Must Be Informative
While writing code, developers face problems, many of them quite trivial, like invalid parameter types or some boundary violations. The more convenient the error responses your API returns, the less time developers waste struggling with them, and the more comfortable working with the API is.
@ -385,7 +385,7 @@ POST /v1/coffee-machines/search
It is also a good practice to return all detectable errors at once to spare developers' time.
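A hedged sketch of an informative error payload (the shape below is ours, not the book's exact format): a machine-readable reason, a human-readable message, and all detected problems reported at once.

```
interface ApiError {
  reason: string;            // machine-readable code, e.g. "invalid_parameter"
  localized_message: string; // safe to show to the end user
  details: Array<{ field: string; violation: string }>;
}

const error: ApiError = {
  reason: "invalid_parameter",
  localized_message: "Some of the request parameters are invalid",
  details: [
    { field: "where.latitude", violation: "must be within [-90, 90]" },
    { field: "page_size", violation: "must not exceed 100" }
  ]
};
```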
##### Maintain a proper error sequence
##### Maintain a Proper Error Sequence
**First**, always return unresolvable errors before the resolvable ones:
@ -494,11 +494,11 @@ POST /v1/orders
You may note that in this setup the error can't be resolved in one step: this situation must be elaborated over, and either order calculation parameters must be changed (discounts should not be counted against the minimal order sum), or a special type of error must be introduced.
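A minimal sketch of this ordering (all the names below are ours): the check that can never be resolved by re-sending the request comes first.

```
interface OrderRequest {
  offerValidUntil: number; // Unix timestamp, ms
  total: number;
  balance: number;
}

function firstError(order: OrderRequest, now: number): string | null {
  // unresolvable: repeating the same request can never succeed
  if (order.offerValidUntil < now) return "offer_expired";
  // resolvable: the user may top up the balance and retry
  if (order.balance < order.total) return "insufficient_funds";
  return null;
}
```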
#### Developing machine-readable interfaces
#### Developing Machine-Readable Interfaces
In pursuit of API clarity for humans, we frequently forget that it's not developers themselves who interact with the endpoints, but the code they've written. Many concepts that work well with user interfaces are badly suited for the program ones: specifically, developers can't make decisions based on textual information, and they can't “refresh” the state in case of some confusing situation.
##### The system state must be observable by clients
##### The State of the System Must Be Observable by Clients
Sometimes, program systems provide interfaces that do not expose to the clients all the data on what is now being executed on the user's
behalf, specifically — which operations are running and what their statuses are.
@ -576,7 +576,7 @@ This rule is applicable to errors as well, especially client ones. If the error
}
```
##### Specify lifespans of resources and caching policies
##### Specify Caching Policies and Lifespans of Resources
In modern systems, clients usually have their own state and almost universally cache results of requests — no matter whether session-wise or long-term, every entity has some period of autonomous existence. So it's highly desirable to make this explicit: it should be understandable how the data is supposed to be cached, if not from operation signatures, then at least from the documentation.
@ -624,7 +624,7 @@ GET /price?recipe=lungo⮠
}
```
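A hedged sketch of a response that declares its own lifespan (the field names are illustrative): the client knows exactly how long the quoted price may be cached.

```
interface PriceResponse {
  price: {
    currency_code: string;
    value: string; // decimal string, see the rule on fractional numbers below
  };
  valid_until: string; // ISO 8601: until when the price is guaranteed
}

const lungoPrice: PriceResponse = {
  price: { currency_code: "USD", value: "1.25" },
  valid_until: "2023-04-09T20:00:00+03:00"
};
```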
##### Pagination, filtration, and cursors
##### Pagination, Filtration, and Cursors
Any endpoint returning data collections must be paginated. No exceptions exist.
@ -760,11 +760,11 @@ GET /v1/records/modified/list⮠
This scheme's downsides are the necessity to create separate indexed event storage, and the multiplication of data items, since for a single record many events might exist.
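A hedged sketch of cursor-based pagination from the client's point of view (the parameter names are ours): the client passes back an opaque cursor instead of computing numeric offsets.

```
interface Page<T> {
  items: T[];
  next_cursor?: string; // opaque; absent when there is nothing more to fetch
}

async function fetchAll<T>(
  fetchPage: (cursor?: string) => Promise<Page<T>>
): Promise<T[]> {
  const result: T[] = [];
  let cursor: string | undefined;
  do {
    const page = await fetchPage(cursor);
    result.push(...page.items);
    cursor = page.next_cursor;
  } while (cursor !== undefined);
  return result;
}
```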
#### Ensuring the technical quality of APIs
#### Ensuring the Technical Quality of APIs
Fine APIs must not only solve developers' and end users' problems but also ensure the quality of the solution, i.e., not contain logical and technical mistakes (and not provoke developers to make them), save computational resources, and in general implement the best practices applicable to the subject area.
##### Keep the precision of fractional numbers intact
##### Keep the Precision of Fractional Numbers Intact
If the protocol allows, fractional numbers with fixed precision (like money sums) must be represented as a specially designed type like Decimal or its equivalent.
@ -772,7 +772,7 @@ If there is no Decimal type in the protocol (for instance, JSON doesn't have one
If conversion to a float number would certainly lead to a loss of precision (let's say if we translate “20 minutes” into hours as a decimal fraction), it's better to either stick to a fully precise format (e.g. opt for `00:20` instead of `0.33333…`), or provide an SDK to work with this data, or, as a last resort, describe the rounding principles in the documentation.
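A small sketch of the problem and of a possible format (the field names are ours): the money sum is carried as a string with fixed precision so that JSON parsing never degrades it to a binary float.

```
interface MoneyAmount {
  currency_code: string;
  value: string; // e.g. "100.00"; parse with a decimal library, not parseFloat
}

const total: MoneyAmount = { currency_code: "USD", value: "100.00" };

// the underlying reason: binary floats can't represent most decimal fractions exactly
console.log(0.1 + 0.2 === 0.3); // false
```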
##### All API operations must be idempotent
##### All API Operations Must Be Idempotent
Let us remind the reader that idempotency is the following property: repeated calls to the same function with the same parameters won't change the resource state. Since we're discussing client-server interaction in the first place, repeating requests in case of network failure isn't an exception, but a norm of life.
@ -857,7 +857,7 @@ Also, be warned: clients are bad at implementing idempotency tokens. Two problem
* you can't really expect clients to generate truly random tokens — they may share the same seed or simply use weak algorithms or entropy sources; therefore you must put constraints on token checking: the token must be unique to a specific user and resource, not globally;
* clients tend to misunderstand the concept and either generate new tokens each time they repeat the request (which deteriorates the UX but is otherwise healthy) or, conversely, use one token in several requests (not healthy at all and potentially leading to catastrophic disasters; another reason to implement the suggestion in the previous clause); writing a detailed doc and/or a client library is highly recommended.
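As a hedged sketch, the client side of such a token scheme might look as follows (the endpoint URL is fictional; the `X-Idempotency-Token` header follows the book's own examples): the token is generated once per logical operation and reused verbatim on every retry.

```
import { randomUUID } from "crypto";

async function createOrder(body: object): Promise<Response> {
  const token = randomUUID(); // one token per logical operation
  let lastError: unknown;
  for (let attempt = 0; attempt < 3; attempt++) {
    try {
      // retries reuse the *same* token so the server can deduplicate them
      return await fetch("https://api.example.com/v1/orders", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "X-Idempotency-Token": token
        },
        body: JSON.stringify(body)
      });
    } catch (e) {
      lastError = e; // network failure — retry with the same token
    }
  }
  throw lastError;
}
```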
##### Avoid non-atomic operations
##### Avoid Non-Atomic Operations
There is a common problem with implementing the changes list approach: what to do if some changes were successfully applied while others were not? The rule is simple: if you can ensure atomicity (i.e., either apply all changes or none of them) — do it.
@ -1005,7 +1005,7 @@ It would be more correct if the server did nothing upon getting the second reque
Just in case: nested operations must be idempotent themselves. If they are not, separate idempotency tokens must be generated for each nested operation.
##### Don't invent security
##### Don't Invent Security Practices
If the author of this book were given a dollar each time he had to implement an additional security protocol invented by someone, he would have already retired. API developers' passion for signing request parameters or introducing complex schemes of exchanging passwords for tokens is as obvious as it is meaningless.
@ -1013,7 +1013,7 @@ If the author of this book was given a dollar each time he had to implement the
**Second**, it's quite presumptuous (and dangerous) to assume you're an expert in security. New attack vectors come every day, and being aware of all the actual threats is a full-time job. If you do something different during workdays, the security system you design will contain vulnerabilities that you have never heard about — for example, your password-checking algorithm might be susceptible to the [timing attack](https://en.wikipedia.org/wiki/Timing_attack), and your webserver, to the [request splitting attack](https://capec.mitre.org/data/definitions/105.html).
##### Explicitly declare technical restrictions
##### Explicitly Declare Technical Restrictions
Every field in your API comes with restrictions: the maximum allowed text length, the size of attached documents, the allowed ranges for numeric values, etc. Often, describing those limits is neglected by API developers — either because they consider it obvious, or because they simply don't know the boundaries themselves. This is of course an antipattern: not knowing what the limits are automatically implies that partners' code might stop working at any moment for reasons they don't control.
@ -1021,7 +1021,7 @@ Therefore, first, declare the boundaries for every field in the API without any
The same reasoning applies to quotas as well: partners must have access to the statistics on which part of the quota they have already used, and the errors in the case of exceeding quotas must be informative.
##### Count the amount of traffic
##### Count the Amount of Traffic
Nowadays the amount of traffic is rarely taken into account — the Internet connection is considered unlimited almost universally. However, it's still not entirely unlimited: with some degree of carelessness, it's always possible to design a system generating the amount of traffic that is uncomfortable even for modern networks.
@ -1045,7 +1045,7 @@ If the first two problems are solved by applying pure technical measures (see th
As a useful exercise, try modeling the typical lifecycle of a partner's app's main functionality (for example, making a single order) to count the number of requests and the amount of traffic that it takes.
##### Avoid implicit partial updates
##### Avoid Implicit Partial Updates
One of the most common API design antipatterns is trying to economize on detailed state change descriptions.
@ -1186,11 +1186,11 @@ X-Idempotency-Token: <idempotency token>
This approach is much harder to implement, but it's the only viable method to implement collaborative editing since it explicitly reflects what a user was actually doing with entity representation. With data exposed in such a format, you might actually implement offline editing, when user changes are accumulated and then sent at once, while the server automatically resolves conflicts by “rebasing” the changes.
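A hedged sketch of such an explicit change list (the operation format is ours, loosely in the spirit of JSON Patch): each atomic change states exactly what the user did, instead of sending a partial object whose omitted fields have ambiguous meaning.

```
type Change =
  | { op: "set"; field: string; value: unknown }
  | { op: "unset"; field: string };

// "the user changed the recipe and removed the volume override"
const changes: Change[] = [
  { op: "set", field: "recipe", value: "lungo" },
  { op: "unset", field: "volume" }
];

function applyChanges(target: Record<string, unknown>, ops: Change[]): void {
  for (const change of ops) {
    if (change.op === "set") target[change.field] = change.value;
    else delete target[change.field];
  }
}
```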
#### Ensuring API product quality
#### Ensuring API Product Quality
Apart from the technological limitations, any real API will soon face the imperfection of the surrounding reality. Of course, any one of us would prefer living in the world of pink unicorns, free of piles of legacy code, evil-doers, national conflicts, and competitors' scheming. Fortunately or not, we live in the real world, and API vendors have to mind all of those while developing the API.
##### Use globally unique identifiers
##### Use Globally Unique Identifiers
It's considered a good form to use globally unique strings as entity identifiers, either semantic (e.g. "lungo" for beverage types) or random ones (e.g. [UUID-4](https://en.wikipedia.org/wiki/Universally_unique_identifier#Version_4_(random))). It might turn out to be extremely useful if you need to merge data from several sources under a single identifier.
@ -1200,7 +1200,7 @@ One important implication: **never use increasing numbers as external identifier
**NB**: in this book, we often use short identifiers like "123" in code examples; that's for the convenience of reading the book on small screens. Do not replicate this practice in a real-world API.
##### Stipulate future restrictions
##### Stipulate Future Restrictions
With the API popularity growth, it will inevitably become necessary to introduce technical means of preventing illicit API usage, such as displaying captchas, setting honeypots, raising the “too many requests” exceptions, installing anti-DDoS proxies, etc. All these things cannot be done if the corresponding errors and messages were not described in the docs from the very beginning.
@ -1208,13 +1208,13 @@ You are not obliged to actually generate those exceptions, but you might stipula
It is extremely important to leave room for multi-factor authentication (such as TOTP, SMS, or 3D-secure-like technologies) if it's possible to make payments through the API. In this case, it's a must-have from the very beginning.
##### Don't provide endpoints for mass downloading of sensitive data
##### No Bulk Access to Sensitive Data
If users' personal data, bank card numbers, private messages, or any other kind of information whose exposure might seriously harm users, partners, and/or you can be retrieved through the API — there must be *no* methods of bulk retrieval of this data, or at least there must be rate limiters, page size restrictions, and, ideally, multi-factor authentication in front of them.
Often, providing such offloads on an ad-hoc basis, i.e., bypassing the API, is a reasonable practice.
##### Localization and internationalization
##### Localization and Internationalization
All endpoints must accept language parameters (for example, in the form of the `Accept-Language` header), even if they are not currently used.

View File

@ -2,7 +2,8 @@
Let's summarize the current state of our API study.
##### Offer search
##### Offer Search
```
POST /v1/offers/search
{
@ -52,7 +53,7 @@ POST /v1/offers/search
}
```
##### Working with recipes
##### Working with Recipes
```
// Returns a list of recipes
@ -72,7 +73,7 @@ GET /v1/recipes/{id}
}
```
##### Working with orders
##### Working with Orders
```
// Creates an order
@ -101,7 +102,7 @@ GET /v1/orders/{id}
POST /v1/orders/{id}/cancel
```
##### Working with programs
##### Working with Programs
```
// Returns an identifier of the program
@ -131,7 +132,7 @@ GET /v1/programs/{id}
}
```
##### Running programs
##### Running Programs
```
// Runs the specified program
@ -156,7 +157,7 @@ POST /v1/programs/{id}/run
POST /v1/runs/{id}/cancel
```
##### Managing runtimes
##### Managing Runtimes
```
// Creates a new runtime

View File

@ -32,7 +32,7 @@ Let us remind the reader that [an API is a bridge](#intro-api-definition), a mea
Apart from our aspirations to change the API architecture, three other tectonic processes are happening at the same time: the erosion of user agents, subject areas, and underlying platforms.
#### The fragmentation of consumer applications
#### The Fragmentation of Consumer Applications
When you shipped the very first API version, and the very first clients started to use it, the situation was perfect. There was only one version, and all clients were using only it. When this perfection ends, two scenarios are possible.
@ -52,7 +52,7 @@ When you shipped the very first API version, and the very first clients started
Certainly, if you provide a stateless API that doesn't require client SDKs (or they might be auto-generated from the spec), those problems will be much less noticeable, but not fully avoidable unless you never issue any new API version. If you do, you will still have to deal with some fragmentation of users by API and SDK versions.
#### Subject area evolution
#### Subject Area Evolution
The other side of the canyon is the underlying functionality you're exposing via the API. It's, of course, not static and somehow evolves:
@ -64,7 +64,7 @@ As usual, the API provides an abstraction to a much more granular subject area.
Let us also stress that vendors of low-level APIs are not always as resolute regarding maintaining backwards compatibility for their APIs (actually, for any software they provide) as (we hope) you are. You should be warned that keeping your API in an operational state, i.e., writing and supporting facades to the shifting subject area landscape, will be your problem, and sometimes a rather sudden one.
#### Platform drift
#### Platform Drift
Finally, there is a third side to the story — the “canyon” you're crossing over with a bridge of your API. Developers write code that is executed in some environment you can't control, and it's evolving. New versions of operating systems, browsers, protocols, and programming language SDKs emerge. New standards are being developed and new arrangements made, some of them being backwards-incompatible, and nothing could be done about that.
@ -72,7 +72,7 @@ Older platform versions lead to fragmentation just like older app versions do, b
The nastiest thing here is that not only does incremental progress in the form of new platforms and protocols demand changing the API, but so does vulgar fashion. Several years ago realistic 3D icons were popular, but since then the public taste has changed in favor of flat and abstract ones. UI component developers had to follow the fashion, rebuilding their libraries, either shipping new icons or replacing the old ones. Similarly, right now the “night mode” feature is being introduced everywhere, demanding changes in a broad range of APIs.
#### Backwards compatibility policy
#### Backwards Compatibility Policy
To summarize the above:
* you will have to deploy new API versions because of the evolution of apps, platforms, and subject areas; different areas evolve at different paces, but never stop doing so;
@ -96,7 +96,7 @@ Let's briefly describe these decisions and the key factors for making them.
* if you provide server-side APIs and compiled SDKs only, you may basically not expose minor versions at all (see below); however, at some maturity stage providing at least two latest versions becomes a must.
* if you provide code-on-demand SDKs, it is considered good form to provide access to previous minor versions of the SDK for a period of time sufficient for developers to test their applications and fix issues if necessary. Since minor changes do not require rewriting large portions of code, it's fine to align the lifecycle of a minor version with the app release cycle duration in your industry, which is usually several months in the worst cases.
#### Simultaneous access to several API versions
#### Keeping Several API Versions
In modern professional software development, especially if we talk about internal APIs, a new API version usually fully replaces the previous one. If some problems are found, it might be rolled back (by releasing the previous version), but the two builds never co-exist. However, in the case of public APIs, the greater the number of partner integrations, the more dangerous this approach becomes.

View File

@ -2,7 +2,7 @@
Before we start talking about the extensible API design, we should discuss the hygienic minimum. A huge number of problems would have never happened if API vendors had paid more attention to marking their area of responsibility.
##### Provide a minimal amount of functionality
##### Provide a Minimal Amount of Functionality
At any moment in its lifetime, your API is like an iceberg: it comprises an observable (e.g. documented) part and a hidden one, undocumented. If the API is designed properly, these two parts correspond to each other just like the above-water and under-water parts of a real iceberg do, i.e. one to ten. Why so? Because of two obvious reasons.
@ -12,7 +12,7 @@ At any moment in its lifetime, your API is like an iceberg: it comprises an obse
Rule \#1 is the simplest: if some functionality might be withheld — then never expose it until you really need to. It might be reformulated like that: every entity, every field, and every public API method is a *product decision*. There must be solid *product* reasons why some functionality is exposed.
##### Avoid gray zones and ambiguities
##### Avoid Gray Zones and Ambiguities
Your obligations to maintain some functionality must be stated as clearly as possible. Especially regarding those environments and platforms where no native capability to restrict access to undocumented functionality exists. Unfortunately, developers tend to consider any private features they find to be eligible for use, thus presuming the API vendor shall maintain them intact. The policy on such “findings” must be articulated explicitly. At the very least, in case of such unauthorized usage of undocumented functionality, you might refer to the docs and be within your rights in the eyes of the community.
@ -23,7 +23,7 @@ However, API developers often legitimize such gray zones themselves, for example
One cannot make a partial commitment. Either you guarantee this code will always work, or you do not drop the slightest hint that such functionality exists.
##### Codify implicit agreements
##### Codify Implicit Agreements
The third principle is much less obvious. Pay close attention to the code you're suggesting developers write: are there any conventions that you consider evident but have never written down?
@ -135,7 +135,7 @@ A proper decision would be, first, documenting the event order and the allowed s
This example leads us to the last rule.
##### Product logic must be backwards-compatible as well
##### Product Logic Must Be Backwards-Compatible as Well
The state transition graph, the event order, possible causes of status changes — such critical things must be documented. However, not every piece of business logic can be defined in the form of a programmatic contract; some cannot be represented in a machine-readable form at all.

View File

@ -52,7 +52,7 @@ There is also another side to this story. As UIs (both ours' and partners') tend
The problems we're facing are the problems of *strong coupling*. Each time we offer an interface like the one described above, we in fact prescribe implementing one entity (recipe) based on implementations of other entities (UI layout, localization rules). This approach disrespects the very basic principle of “top to bottom” API design because **low-level entities must not define high-level ones**.
#### The rule of contexts
#### The Rule of Contexts
To make things worse, let us state that the inverse principle is also correct: high-level entities must not define low-level ones as well, since that simply isn't their responsibility. The exit from this logical labyrinth is that high-level entities must *define a context*, which other objects are to interpret. To properly design the interfaces for adding a new recipe we shouldn't try to find a better data format; we need to understand what contexts, both explicit and implicit, exist in our subject area.

View File

@ -2,7 +2,7 @@
Apart from the abovementioned abstract principles, let us give a list of concrete recommendations: how to make changes in existing APIs to maintain backwards compatibility.
##### Remember the iceberg's waterline
##### Remember the Iceberg's Waterline
If you haven't given any formal guarantee, it doesn't mean that you can violate informal ones. Often, just fixing bugs in APIs might render some developers' code inoperable. We might illustrate it with a real-life example that the author of this book has actually faced once:
* there was an API to place a button into a visual container; according to the docs, it was taking its position (offsets to the container's corner) as a mandatory argument;
@ -11,14 +11,14 @@ If you haven't given any formal guarantee, it doesn't mean that you can violate
If fixing an error might somehow affect real customers, you have no other choice but to emulate this erroneous behavior until the next major release. This situation is quite common if you develop a large API with a huge audience. For example, operating systems developers literally have to transfer old bugs to new OS versions.
##### Test the formal interface
##### Test the Formal Interface
Any software must be tested, and APIs are no exception. However, there are some subtleties there: as APIs provide formal interfaces, it's the formal interfaces that need to be tested. That leads to several kinds of mistakes:
1. Often, requirements like “the `getEntity` function returns the value previously set by the `setEntity` function” appear too trivial to both developers and QA engineers to have a proper test. But it's quite possible to make a mistake there, and we have actually encountered such bugs several times.
2. The interface abstraction principle must be tested as well. In theory, you might have considered each entity as an implementation of some interface; in practice, it might happen that you have forgotten something and alternative implementations aren't actually possible. For testing purposes, it's highly desirable to have an alternative implementation, even a provisional one, for every interface.
##### Isolate the dependencies
##### Isolate the Dependencies
In the case of a gateway API that provides access to some underlying API or aggregates several APIs behind a single façade, there is a strong temptation to proxy the original interface as is, thus not introducing any changes to it and making life much simpler by sparing the effort needed to implement weakly coupled interaction between services. For example, while developing program execution interfaces as described in the [“Separating Abstraction Levels”](#api-design-separating-abstractions) chapter, we might have taken the existing first-kind coffee-machine API as a role model and provided it in our API by just proxying the requests and responses as is. Doing so is highly undesirable for several reasons:
* usually, you have no guarantees that the partner will maintain backwards compatibility or at least keep new versions more or less conceptually akin to the older ones;
@ -32,7 +32,7 @@ The best practice is quite the opposite: isolate the third-party API usage, e.g.
* caching some data and states to have the ability to provide some (at least partial) functionality even if the partner's API is fully unreachable;
* finally, configuring an automatic fallback to another partner or alternative API.
##### Implement your API functionality atop public interfaces
##### Implement Your API Functionality Atop Public Interfaces
There is an antipattern that occurs frequently: API developers use some internal closed implementations of some methods which exist in the public API. It happens because of two reasons:
* often the public API is just an addition to the existing specialized software, and the functionality, exposed via the API, isn't being ported back to the closed part of the project, or the public API developers simply don't know the corresponding internal functionality exists;
@ -42,7 +42,7 @@ There are obvious local problems with this approach (like the inconsistency in f
**NB**. The perfect example of avoiding this anti-pattern is the development of compilers; usually, the next compiler's version is compiled with the previous compiler's version.
##### Keep a notepad
##### Keep a Notepad
Whatever tips and tricks described in the previous chapters you use, it's often quite probable that you can't do *anything* to prevent API inconsistencies from piling up. It's possible to reduce the speed of this stockpiling, foresee some problems, and have some interface durability reserved for future use. But one can't foresee *everything*. At this stage, many developers tend to make some rash decisions, e.g. releasing a backwards-incompatible minor version to fix some design flaws.

View File

@ -2,7 +2,7 @@
Before we proceed to the API product management principles, let us draw your attention to the matter of profits that the API vendor company might extract from it. As we will demonstrate in the next chapters, this is not an idle question as it directly affects making product decisions and setting KPIs for the API team. In this chapter, we will enumerate the main API monetization models. [In brackets, we will provide examples of such models applicable to our coffee-machine API study.]
##### Developers = end users
##### Developers = End Users
The easiest and the most understandable case is that of providing a service for developers, with no end users involved. First of all, we talk about software engineering tools: APIs of programming languages, frameworks, operating systems, UI libraries, game engines, etc. — general-purpose interfaces, in other words. [In our coffee API case, it means the following: we've developed a library for ordering a cup of coffee, possibly furnished with UI components, and are now selling it to coffee shop chain owners willing to buy it to ease the development of their own applications.] In this case, the answer to the “why have an API” question is self-evident.
@ -20,17 +20,17 @@ There is also a plethora of monetizing techniques; in fact, we're just talking a
Remarkably, such APIs are probably the only “pure” case when developers choose the solution solely because of its clean design, elaborate documentation, thought-out use cases, etc. There are examples of copying the API design (which is the sincerest form of flattery, as we all know!) by other companies or even enthusiastic communities — that happened, for example, with the Java language API (an alternate implementation by Google) and the C# one (the Mono project) — or just borrowing apt solutions — as it happened with the concept of selecting DOM elements with CSS selectors, initially implemented in the *cssQuery* project, then adopted by *jQuery*, and after the latter became popular, incorporated as a part of the DOM standard itself.
##### API = the main and/or the only mean of accessing the service
##### API = the Main and/or the Only Access to the Service
This case is close to the previous one as, again, developers, not end users, are the API consumers. The difference is that the API is not a product per se, but the service exposed via the API is. The purest examples are cloud platform APIs like Amazon AWS or the Braintree API. Some operations are possible through end-user interfaces, but generally speaking, the services are useless without APIs. [In our coffee example, imagine we are an operator of “cloud” coffee machines equipped with drone-powered delivery, and the API is the only means of making an order.]
Usually, customers pay for the service usage, not for the API itself, though frequently the tariffs depend on the number of API calls.
##### API = a partner program
##### API = a Partner Program
Many commercial services provide access to their platforms for third-party developers to increase sales or attract additional audiences. Examples include the Google Books partner program, Skyscanner Travel APIs, and the Uber API. [In our case study, it might be the following model: we are a large chain of coffee shops, and we encourage partners to sell our coffee through their websites or applications.] Such partnerships are fully commercial: partners monetize their own audience, and the API provider company yearns to get access to an extended audience and additional advertisement channels. As a rule, the API provider company pays for users reaching target goals and sets requirements for the integration performance level (for example, in the form of a minimum acceptable click-target ratio) to avoid misuse of the API.
##### API = additional access to the service
##### API = Additional Access to the Service
If a company possesses some unique expertise, usually in a form of some dataset that couldn't be easily gathered if needed, quite logically a demand for the API exposing this expertise arises. The most classical examples of such APIs are cartographical APIs: collecting detailed and precise geodata and keeping it up-to-date are extremely expensive, while a wide range of services would become much more useful if they featured an integrated map. [Our coffee example hardly matches this pattern as the data we accumulate — coffee machines locations, beverages types — is something useless in any other context but ordering a cup of coffee.]
@ -42,21 +42,21 @@ B2B services are a special case. As B2B Service providers benefit from offering
**NB**: we rather disapprove of the practice of providing an external API merely as a byproduct of the internal one without making any changes to bring value to the market. The main problem with such APIs is that partners' interests are not taken into account, which leads to numerous problems:
* The API doesn't cover integration use cases well:
* internal customers usually employ quite a specific technological stack, and the API is poorly optimized to work with other programming languages / operating systems / frameworks;
* internal customers are much more familiar with the API concepts; they might take a look at the source code or talk to the API developers directly, so the learning curve is pretty flat for them;
* documentation only covers some subset of use cases needed by internal customers;
* the API services ecosystem which we will describe in [“The API Services Range”](#api-product-range) chapter later usually doesn't exist.
* internal customers employ quite a specific technological stack, and the API is poorly optimized to work with other programming languages / operating systems / frameworks;
* for external customers, the learning curve will be pretty steep as they can't take a look at the source code or talk to the API developers directly, unlike internal customers who are much more familiar with the API concepts;
* documentation often covers only some subset of use cases needed by internal customers;
* the API services ecosystem which we will describe in [“The API Services Range”](#api-product-range) chapter usually doesn't exist.
* Any resources spent are directed to covering internal customer needs first. It means the following:
* API development plans are totally opaque to partners, and sometimes look just absurd with obvious problems being neglected for years;
* technical support of external customers is financed on leftovers.
All those problems lead to having an external API that actually hurts the company's reputation, not improves it. In fact, you're providing a very bad service for a very critical and skeptical auditory. If you don't have a resource to develop the API as a product for external customers, better don't even start.
All those problems lead to having an external API that actually hurts the company's reputation rather than improving it. You're providing a very bad service to a very critical and skeptical audience. If you don't have the resources to develop the API as a product for external customers, it's better not to start at all.
##### API = an advertisement site
##### API = an Advertisement Site
In this case, we talk mostly about widgets and search engines; to display commercials, direct access to end users is a must. The most typical examples of such APIs are advertisement network APIs. However, mixed approaches do exist as well — meaning that some API, usually a searching one, comes with commercial insets. [In our coffee example, it means that the offer searching function will start promoting paid results on the search results page.]
##### API = self-advertisement and self-PR
##### API = Self-Advertisement and Self-PR
If an API has neither explicit nor implicit monetization, it might still generate some income, increasing the company's brand awareness through displaying logos and other recognizable elements while working with the API, either native (if the API goes with UI elements) or agreed-upon ones (if partners are obliged to embed specific branding in those places where the API functionality is used or the data acquired through API is displayed). The API provider company's goals in this case are either attracting users to the company's services or just increasing brand awareness in general. [In the case of our coffee API, let's imagine that we're providing some totally unrelated service, like selling tires, and by providing the API we hope to increase brand recognition and get a reputation as an IT company.]
@ -67,7 +67,7 @@ The target audiences of such self-promotion might also differ:
Additionally, we might talk about forming a community, e.g. the network of developers (or customers, or business owners) who are loyal to the product. The benefits of having such a community might be substantial: lowering the technical support costs, getting a convenient channel for publishing announcements regarding new services and new releases, and obtaining beta users for upcoming products.
##### API = feedback and UGC tool
##### API = a Feedback and UGC Tool
If a company possesses some big data, it might be useful to provide a public API for users to make corrections in the data or otherwise get involved in working with it. For example, cartographical API providers usually allow posting feedback or correcting a mistake right on partners' websites and applications. [In the case of our coffee API, we might be collecting feedback to improve the service, both passively through building coffee shop ratings and actively through contacting business owners to convey users' requests or through finding new coffee shops that are still not integrated with the platform.]
@ -75,11 +75,11 @@ If a company possesses some big data, it might be useful to provide a public API
Finally, the most altruistic approach to API product development is providing it free of charge (or as an open source and open data project) just to change the landscape. If today nobody's willing to pay for the API, we might invest in popularizing the functionality hoping to find commercial niches later (in any of the abovementioned formats) or to increase the significance and usefulness of the API integrations for end users (and therefore the readiness of the partners to pay for the API). [In the case of our coffee example, imagine a coffee machine maker that starts providing APIs for free aiming to make having an API a “must” for every coffee machine vendor thus allowing for the development of commercial API-based services in the future.]
##### Gray zones
##### Gray Zones
One additional source of income for the API provider is the analysis of the requests that end users make. In other words — collecting and re-selling some user data. You must be aware that the boundary between acceptable data collecting (such as aggregating search requests to understand trends or find promising locations for opening a coffee shop) and unacceptable data collecting is quite vague and tends to vary in time and space (e.g. some actions might be totally legal on one side of the state border, and totally illegal on the other), so the decision to monetize the API this way should be made with extreme caution.
#### The API-first approach
#### The API-First Approach
Over the last several years, we have seen a trend of providing some functionality as an API (i.e., as a product for developers) instead of developing the service for end users. This approach, dubbed “API-first,” reflects the growing specialization in the IT world: developing APIs becomes a separate area of expertise that businesses are ready to outsource instead of spending resources to develop internal APIs for their applications by the in-house IT department. However, this approach is not universally accepted (yet), and you should keep in mind the factors that affect the decision to launch a service in the API-first paradigm.

View File

@ -19,7 +19,7 @@ The same fuzziness should be kept in mind while making interviews and getting fe
If you do have access to end users' action monitoring (see the [“The API Key Performance Indicators”](#api-product-kpi) chapter), then you might try to analyze the typical user behavior through these logs and understand how users interact with the partners' applications. But you will need to make this analysis on a per-application basis and try to cluster the most common scenarios.
#### Checking product hypotheses
#### Checking Product Hypotheses
Apart from the general complexity of formulating the product vision, there are also tactical issues with checking product hypotheses. “The Holy Grail” of product management — that is, creating a cheap (in terms of resource spent) minimal viable product (MVP) — is normally unavailable for an API product manager. The thing is that you can't easily *test* the solution even if you managed to develop an API MVP: to do so, partners are to *develop some code*, e.g. invest their money; and if the outcome of the experiment is negative (e.g. the further development looks unpromising), this money will be wasted. Of course, partners will be a little bit skeptical towards such proposals. Thus a “cheap” MVP should include either the compensation for partners' expenses or the budget to develop a reference implementation (e.g. a complementary application must be developed alongside the API MVP).

View File

@ -44,7 +44,7 @@ The important question which sooner or later will stand in any API vendor's way
Finally, just the preparations to make the code open might be very expensive: you need to clean the code, switch to open building and testing tools, and remove all references to proprietary resources. This decision is to be made very cautiously, after having all pros and cons elaborated over. We might add that many companies try to reduce the risks by splitting the API code into two parts, the open one and the proprietary one, and also by selecting a license that disallows harming the company's interests by using the open-sourced code (for example, by prohibiting selling hosted solutions or by requiring the derivative works to be open-sourced as well).
#### The auditory fragmentation
#### The Audience Fragmentation
There is one very important addition to the discourse: as information technologies are universally in great demand, a significant percentage of your customers will not be professional software engineers. A huge number of people are somewhere on the path to mastering the occupation: some are trying to write code in addition to their basic duties, others are being retrained, and still others are studying the basics of computer science on their own. Many of those non-professional developers have a direct impact on the process of selecting an API vendor — for example, small business owners who additionally seek to automate some routine tasks programmatically.

View File

@ -2,7 +2,7 @@
The important rule of API product management that any major API provider will soon learn is formulated like this: there is no sense in shipping one specific API; there is always room for a range of products, and this range is two-dimensional.
#### Horizontal scaling of API services
#### Horizontal Scaling of API Services
Usually, any functionality available through an API might be split into independent units. For example, in our coffee API, there are offer search endpoints and order processing endpoints. Nothing prevents us from either declaring those functional clusters separate APIs or, vice versa, considering them parts of one API.
@ -14,7 +14,7 @@ Different companies employ different approaches to determining the granularity o
**NB**: those split APIs might still be a part of a unified SDK, to make programmers' lives easier.
#### Vertical scaling of API services
#### Vertical Scaling of API Services
However, frequently it makes sense to provide several API services manipulating the same functionality. The thing is that the fragmentation of API customers across their professional skill levels results in several important implications:
* it's almost impossible in a course of a single product to create an API that will fit well both amateur and professional developers: the former need the maximum simplicity of implementing basic use cases, while the latter seek the ability to adapt the API to match technological stack and development paradigms, and the problems they solve usually require deep customization;

View File

@ -19,7 +19,7 @@ Gathering this data is crucial because of two reasons:
In the case of commercial APIs, the quality and timeliness of gathering this data are twice as important, as the tariff plans (and therefore the entire business model) depend on it. Therefore, the question of *how exactly* we're identifying users is crucial.
#### Identifying applications and their owners
#### Identifying Applications and Their Owners
Let's start with the first user category, i.e., the developers working for API business partners. An important remark: there are two different entities we must learn to identify, namely applications and their owners.
@ -53,7 +53,7 @@ The general conclusion is:
* it is highly desirable to have partners formally identified (either through obtaining API keys or by providing contact data such as website domain or application identifier in a store while initializing the API);
* this information shall not be trusted unconditionally; there must be double-checking mechanisms that identify suspicious requests.
#### Identifying end users
#### Identifying End Users
Usually, you can put forward some requirements for self-identifying of partners, but asking end users to reveal contact information is impossible in most cases. All the methods of measuring the audience described below are imprecise and often heuristic. (Even if partner application functionality is only available after registration and you do have access to that profile data, it's still a game of assumptions, as an individual account is not the same as an individual user: several different persons might use a single account, or, vice versa, one person might register many accounts.) Also, note that gathering this sort of data might be legally regulated (though we will be mostly speaking about anonymized data, there might still be some applicable law).

View File

@ -7,7 +7,7 @@ The task of filtering out illicit API requests generally comprises three steps:
* optionally, asking for an additional authentication factor;
* making decisions and applying access restrictions.
##### Identifying suspicious users
##### Identifying Suspicious Users
Generally speaking, there are two approaches we might take, the static one and the dynamic (behavioral) one.
@ -17,7 +17,7 @@ Generally speaking, there are two approaches we might take, the static one and t
**Importantly**, when we talk about “user,” we will have to make a second analytical contour to work with IP addresses, as malefactors aren't obliged to preserve cookies or other identification tokens, or will keep a pool of such tokens to impede their exposure.
##### Requesting an additional authentication factor
##### Requesting an Additional Authentication Factor
As both static and behavioral analyses are heuristic, it's highly desirable to not make decisions based solely on their outcome but rather ask the suspicious users to additionally prove they're making legitimate requests. If such a mechanism is in place, the quality of an anti-fraud system will be dramatically improved, as it allows for increasing system sensitivity and enabling pro-active defense, e.g. asking users to pass the tests in advance.
@ -27,7 +27,7 @@ In the case of services for end users, the main method of acquiring the second f
Other popular mechanics of identifying robots include offering bait (a “honeypot”) or employing execution environment checks (starting from rather trivial ones like executing JavaScript on the webpage and ending with sophisticated techniques of checking application integrity).
##### Restricting access
##### Restricting Access
The illusion of having a broad choice of technical means of identifying fraudulent users should not deceive you, as you will soon discover the lack of effective methods of restricting those users. Banning them by cookie / Referer / User-Agent makes little to no impact as this data is supplied by clients and might be easily forged. In the end, you have four mechanisms for suppressing illegal activities:
* banning users by IP (networks, autonomous systems);
@ -49,7 +49,7 @@ An opinion exists, which the author of this book shares, that engaging in this s
In the author of this book's experience, the mind games with malefactors — responding to any improvement of their script with the smallest possible effort that is enough to break it — might continue indefinitely. This strategy, i.e., making fraudsters guess which traits were used to ban them this time (instead of unleashing the whole heavy artillery potential), annoys amateur “hackers” greatly as they lack hard engineering skills and just give up eventually.
#### Dealing with stolen keys
#### Dealing with Stolen Keys
Let's now move to the second type of unlawful API usage, namely using keys stolen from conscientious partners in the malefactor's applications. As the requests are generated by real users, captcha won't help, though other techniques will.

View File

@ -38,6 +38,6 @@ Importantly, whatever options you choose, it's still the API developers in the s
And of course, analyzing the questions is a useful exercise to populate FAQs and improve the documentation and the first-line support scripts.
#### External platforms
#### External Platforms
Sooner or later, you will find that customers ask their questions not only through the official channels, but also on numerous Internet-based forums, starting from those specifically created for this, like StackOverflow, and ending with social networks and personal blogs. It's up to you whether to spend time searching for such inquiries; we would rather recommend providing support through those sites that have convenient tools for that (like subscribing to specific tags).

View File

@ -9,7 +9,7 @@ Before we start describing documentation types and formats, we should stress one
In fact, newcomers (i.e., those developers who are not familiar with the API) usually want just one thing: to assemble the code that solves their problem out of existing code samples and never return to this issue again. That doesn't sound exactly reassuring, given the amount of work invested into the API and its documentation development, but that's what the reality looks like. Also, that's the root cause of developers' dissatisfaction with the docs: it's literally impossible to have articles that cover exactly the problem the developer comes with and are detailed exactly to the extent the developer knows the API concepts. In addition, non-newcomers (i.e., those developers who have already learned the basic concepts and are now trying to solve some advanced problems) do not need these “mixed examples” articles as they look for some deeper understanding.
#### Introductory notes
#### Introductory Notes
Documentation frequently suffers from being excessively clerical; it's being written using formal terminology (which often requires reading the glossary before the actual docs) and being unreasonably inflated. So instead of a two-word answer to the user's question a couple of paragraphs are conceived — a practice we strongly disapprove of. The perfect documentation must be simple and laconic, and all the terms must be either explained in the text or given a reference to such an explanation. However, “simple” doesn't mean “illiterate”: remember, the documentation is the face of your product, so grammar errors and improper usage of terms are unacceptable.
@ -62,7 +62,7 @@ Usually, tutorials contain a “Quick Start” (“Hello, world!”) section: th
Also, “Quick Starts” are a good indicator of how well you did your homework of identifying the most important use cases and providing helper methods. If your Quick Start comprises more than ten lines of code, you have definitely done something wrong.
##### Frequently asked questions and a knowledge base
##### Frequently Asked Questions and a Knowledge Base
After you publish the API and start supporting users (see the previous chapter) you will also accumulate some knowledge of what questions are asked most frequently. If you can't easily integrate answers into the documentation, it's useful to compile a specific “Frequently Asked Questions” (aka FAQ) article. A FAQ article must meet the following criteria:
* address the real questions (you might frequently find FAQs that reflect not users' needs, but the API owner's desire to repeat some important information once more; that is useless, or worse — annoying; perfect examples of this anti-pattern might be found on any bank or airline website);
@ -72,7 +72,7 @@ Also, FAQs are a convenient place to explicitly highlight the advantages of the
If technical support conversations are public, it makes sense to store all the questions and answers as a separate service to form a knowledge base, i.e., a searchable set of “real-life” questions and answers.
##### Offline documentation
##### Offline Documentation
Though we live in the online world, an offline version of the documentation (in the form of a generated doc file) might still be useful, first of all as a snapshot of the API valid for a specific date.
@ -91,10 +91,10 @@ In this case, you need to choose one of the following strategies:
* if the content of a documentation topic is essentially identical for every platform, i.e., only the code syntax differs, you will need to develop generalized documentation: each article provides code samples (and maybe some additional notes) for every supported platform on a single page;
* on the contrary, if the content differs significantly, as in the iOS/Android case, we would suggest splitting the documentation sites (up to having separate domains for each platform): the good news is that developers almost always need one specific version and don't care about other platforms.
#### The documentation quality
#### The Documentation Quality
The best documentation happens when you start viewing it as a product in the API product range, i.e., when you begin analyzing customer experience (with specialized tools), collecting and processing feedback, setting KPIs, and working on improving them.
#### Was this article helpful to you?
#### Was This Article Helpful to You?
[Yes / No](https://forms.gle/WPdQ9KsJt3fxqpyw6)

View File

@ -11,7 +11,7 @@ A direct solution to this problem is providing a full set of testing APIs and ad
There are two main approaches to tackling these problems.
##### The testing environment API
##### The Testing Environment API
The first option is providing a meta-API to the testing environment itself. Instead of running the coffee-shop app in a separate simulator, developers are provided with helper methods (like `simulateOrderPreparation`) or some visual interface that allows controlling the order execution pipeline with minimum effort.
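A sketch of what such a meta-API might look like in the coffee example is given below; only `simulateOrderPreparation` is taken from the text, while the other helpers, as well as the `api` and `testEnv` clients themselves, are assumptions.

```typescript
// A sketch of driving the order pipeline through a testing meta-API.
// `api` and `testEnv` are hypothetical clients set up elsewhere.
import assert from "node:assert";
import { api, testEnv } from "./test-setup";

const order = await api.createOrder({ recipe: "lungo" });

// Manually push the order through the stages that baristas and couriers
// would perform in production
await testEnv.simulateOrderPreparation(order.id);
await testEnv.simulateCourierPickup(order.id);
await testEnv.simulateDelivery(order.id);

// The client code under test should now observe the final state
assert((await api.getOrder(order.id)).status === "delivered");
```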
@ -19,7 +19,7 @@ Ideally, you should provide helper methods for any actions that are conducted by
The disadvantage of this approach is that client developers still need to know how the “flip side” of the system works, though in simplified terms.
##### The simulator of pre-defined scenarios
##### The Simulator of Pre-Defined Scenarios
The alternative to providing the testing environment API is simulating the working scenarios. In this case, the testing environment takes control over “underwater” parts of the system and “plays” all external agents' actions. In our coffee example, that means that, after the order is submitted, the system will simulate all the preparation steps and then the delivery of the beverage to the customer.
@ -27,7 +27,7 @@ The advantage of this approach is that it demonstrates vividly how the system wo
The main disadvantage is the necessity to create a separate scenario for each unhappy path (effectively, for every possible error) and to give developers the capability of denoting which scenario they want to run. (For example, like this: if the order contains a pre-agreed comment, the system will simulate a specific error, and developers will be able to write and debug the code that handles it.)
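For instance, the “pre-agreed comment” convention mentioned above might be used like this; the magic comment value, the error scenario, and the `waitForCompletion` helper are assumptions made for illustration.

```typescript
// A sketch of triggering a pre-defined unhappy-path scenario
// via a magic order comment; the convention itself is hypothetical.
import { api } from "./test-setup";

const order = await api.createOrder({
  recipe: "lungo",
  // The simulator recognizes this comment and plays the
  // “machine out of beans” scenario instead of the happy path
  comment: "simulate: out_of_beans"
});

try {
  await api.waitForCompletion(order.id);
} catch (e) {
  // This is where the error-handling code being developed
  // and debugged against the scenario would go
  console.error("Order failed as expected:", e);
}
```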
#### The automation of testing
#### The Automation of Testing
Your final goal in implementing testing APIs, regardless of which option you choose, is allowing partners to automate the QA process for their products. The testing environment should be developed with this purpose in mind; for example, if an end user might be brought to a 3-D Secure page to pay for the order, the testing environment API must provide some way of simulating the successful (or unsuccessful) passing of this step. Also, in both variants, it is possible (and desirable) to allow running the scenarios in a fast-forward manner, making auto-testing much faster than manual testing.
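In code, both capabilities might look as follows; the `simulate3DSecure` helper, the `fastForward` option, and the sandbox card data are assumptions made for illustration.

```typescript
// A sketch of an automated test that pays for an order, simulates the
// end user passing the 3-D Secure step, and fast-forwards preparation
// so that the whole test runs in (milli)seconds.
import assert from "node:assert";
import { api, testEnv } from "./test-setup";

// Hypothetical sandbox card data; real testing environments usually
// publish dedicated test card numbers
const testCard = { number: "4242 4242 4242 4242", expiry: "12/30" };

const order = await api.createOrder(
  { recipe: "lungo" },
  { fastForward: true } // hypothetical testing-environment option
);

const payment = await api.payForOrder(order.id, testCard);
// Simulate the end user successfully passing the 3-D Secure page
await testEnv.simulate3DSecure(payment.id, { success: true });

assert((await api.getOrder(order.id)).status === "delivered");
```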

View File

@ -2,7 +2,7 @@
Finally, the last aspect we would like to shed light on is managing partners' expectations regarding the further development of the API. In terms of consumer qualities, APIs differ little from other B2B software products: in both cases, you need to form some understanding of SLA conditions, available features, interface responsiveness, and other characteristics that are important for clients. Still, APIs have their specificities.
#### Versioning and application lifecycle
#### Versioning and Application Lifecycle
Ideally, an API, once published, should live eternally; but as we are all reasonable people, we understand that this is impossible in real life. Even if we continue supporting older versions, they will still become outdated eventually, and partners will need to rewrite their code to use newer functionality.
@ -12,13 +12,13 @@ Apart from updating *major* versions, sooner or later you will face issues with
In this aspect, integrating with large companies that have a dedicated software engineering department differs dramatically from providing a solution to individual amateur programmers: on the one hand, the former are much more likely to find undocumented features and unfixed bugs in your code; on the other hand, because of internal bureaucracy, fixing the related issues might easily take months, if not years. The common recommendation here is to maintain old minor API versions for a period of time long enough for the most dilatory partner to switch to the newest version.
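One possible (though by no means the only) way of making such a schedule explicit is announcing it in machine-readable form, for example via the standard `Sunset` HTTP header (RFC 8594), so that partners' monitoring can detect the planned retirement of an old version automatically. A sketch, with the endpoint and version purely illustrative:

```typescript
// A sketch of how a partner might detect that the API version they
// depend on is scheduled for retirement. The `Sunset` header is
// standardized (RFC 8594); the URL and version are illustrative.
const response = await fetch("https://api.example.com/v1.2/orders");

const sunset = response.headers.get("Sunset");
if (sunset !== null) {
  console.warn(`API v1.2 will be retired on ${sunset}; plan the migration`);
}
```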
#### Supporting platform
#### Supporting Platforms
Another aspect crucial to interacting with large integrators is supporting a zoo of platforms (browsers, programming languages, protocols, operating systems) and their versions. As usual, big companies have their own policies on which platforms they support, and these policies might sometimes contradict common sense. (Let's say, it is rather time to abandon TLS 1.2, but many integrators continue working through this protocol, or even earlier ones.)
Formally speaking, ceasing support of a platform *is* a backwards-incompatible change and might break some integration for some end users. So it is highly important to have clearly formulated policies regarding which platforms are supported and based on which criteria. In the case of mass public APIs, that's usually simple (for example, the API vendor promises to support platforms that have more than N% penetration or, even easier, just the last M versions of a platform); in the case of commercial APIs, it is always a bargain based on estimations of how much not supporting a specific platform would cost the company. And of course, the outcome of the bargain must be stated in the contracts: what exactly you promise to support and for which period of time.
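Such a policy can also be encoded as data so that support checks are applied mechanically rather than argued about case by case; a minimal sketch, assuming the usage-share figures come from the vendor's own analytics and the thresholds are illustrative:

```typescript
// A sketch of formalizing a support policy as data plus a check.
// The thresholds and the shape of the platform record are assumptions.
const SUPPORT_POLICY = {
  minUsageShare: 0.01, // “more than 1% penetration,” or…
  lastVersions: 2      // …“the last 2 versions of a platform”
};

interface PlatformVersion {
  usageShare: number;     // fraction of the audience using this version
  versionsBehind: number; // 0 for the latest version, 1 for the previous, …
}

function isSupported(platform: PlatformVersion): boolean {
  return (
    platform.usageShare >= SUPPORT_POLICY.minUsageShare ||
    platform.versionsBehind < SUPPORT_POLICY.lastVersions
  );
}
```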
#### Moving forward
#### Moving Forward
Finally, apart from those specific issues, your customers are bound to care about more general questions: can they trust you? Can they rely on your API evolving and absorbing modern trends, or will they eventually find the integration with your API on the scrapyard of history? Let's be honest: given all the uncertainties of the API product vision, we are very much interested in the answers as well. Even the Roman viaduct, though remaining backwards-compatible for two thousand years, has long been a very archaic and unreliable way of solving customers' problems.