About API Writing - Overview

Background

The previous team's main task was to build a REST API. When I took over, I found that the API had been written in an amateurish way, without taking several basic HTTP/1.1 RFCs (2616, 7232, 5988, and so on) into account, so I spent some time rewriting it and then wrote this article.

Looking back from today, the system I built had plenty of problems of its own, and many issues beyond the API itself were not considered:

    • Usage documentation for the API. My practice was to write the documentation in Confluence, the collaboration system the company used, but the biggest problem with that was that code and documentation lived apart and were poorly maintained.

    • Monitoring of the API. The system had no system-wide monitoring mechanism; collecting the various metrics depended heavily on each API's own implementation. To use a fashionable word, it could not be orchestrated.

    • Testing of the API. Anyone who has done much API work knows that writing test cases for an API is painful: you not only have to unit test the code the API uses, but also smoke test (the most basic functional test) the API itself, to make sure every endpoint is available and behaves as expected. Because the number of test cases required is huge, we usually ended up writing only unit tests.

Ideally, writing an API should automatically produce documentation and test cases, and the API system should also provide a full set of statistics interfaces for generating metrics. By default, the system itself should collect many metrics, such as response time and status code for each API, using collectd/statsd to gather the information and forward it to APM systems such as Datadog or New Relic.
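As a rough sketch of that kind of built-in metrics collection (assuming an express/connect-style middleware and the hot-shots statsd client; all names here are illustrative, not the system described in this article):

    // Hypothetical middleware: record response time and status code for every
    // request and ship them to a local statsd daemon, which can forward them
    // to an APM backend such as Datadog or New Relic.
    const StatsD = require('hot-shots');
    const statsd = new StatsD({ prefix: 'api.' });

    function metrics(req, res, next) {
      const start = Date.now();
      res.on('finish', () => {
        const route = req.route ? req.route.path : 'unknown';
        statsd.timing('response_time.' + route, Date.now() - start);
        statsd.increment('status_code.' + res.statusCode);
      });
      next();
    }

    module.exports = metrics;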

At Adrise, we have an API system that has been running for several years: not RFC-compliant, with (almost) no documentation, (almost) no tests, (almost) no monitoring, and, worst of all, inefficient to develop and operate. So over the past month or two I have led the development of a new API system.

Goal

Before we build a new system, we need to set some goals. Here are some of the goals I wrote when designing the API:

    • A well defined pipeline to process requests

    • REST API Done Right (methods, status code and headers)

    • Validation Made Easy

    • Security borne in mind

    • Policy based request throttling

    • Easy to add new APIs

    • Easy to document and test

    • Introspection

Among them, introspection contains two levels of meaning:

    • API system automatically collects metrics, self-monitoring

    • It is very convenient for both the writer and the caller to get the information they want.

Selection

With these goals set, the next step is technology selection. Technology selection cannot be done in isolation from the team. If the choice of language and framework were mine alone, I would probably build on Erlang/OTP, using Phoenix (developed in Elixir), or simply Plug (Phoenix's cornerstone). Plug/Phoenix's way of building a pipeline by composition fits the way I think; Elixir's macro support and the pattern matching at the core of Erlang make subsystems such as routing efficient and concise; and the robustness of Erlang/OTP under high concurrency is exactly what an API system strives for.

However, I have to consider the reality of the team. At Adrise we use Node.js as the main back-end technology stack (plus some PHP/Python/Scala), so the API system is best built on Node.js. There are many frameworks for writing APIs in Node.js, such as express, restify, hapi, loopback, and sails.js. After a comprehensive review of these frameworks, I chose restify for the following reasons:

    • Its interface and structure are very similar to express (which the team knows well), but it is more focused on the REST API than express

    • A range of middleware and route actions can form a flexible and efficient pipeline (sketched below)

    • Simple, scalable, easy to combine with other libraries, well suited as a starting point for a new framework

    • The source code is easy to understand and can be read through in about a day

It turns out to be a good choice.
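To give a feel for that pipeline, here is a minimal restify sketch (the plugin names assume restify 5+, where the bundled plugins live under restify.plugins; in older versions they hang directly off restify):

    const restify = require('restify');

    const server = restify.createServer({ name: 'api' });

    // Per-request middleware forms the first stages of the pipeline.
    server.use(restify.plugins.queryParser());
    server.use(restify.plugins.bodyParser());
    server.use(restify.plugins.throttle({ burst: 10, rate: 5, ip: true }));

    // Route actions are just further steps appended to the same pipeline.
    server.get('/ping', (req, res, next) => {
      res.send(200, { pong: true });
      return next();
    });

    server.listen(8080, () => console.log('%s listening', server.name));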

With the underlying framework settled, the next step is choosing the core components. The first is the validator. Many people pay no attention to a system's validator, or never look at validation from a unified perspective, which is a pity. The environment any system runs in is a dirty world, crawling with evil spirits and filth of every kind, yet we want the system itself to stay pure, a pure land of bliss. How do we manage that?

Simple: put up a wall at the boundary and keep the demons out.

Simple, clean input and output. What headers, body, and query string may an API accept? What counts as a valid response body? This needs to be defined clearly, so we need a proper validator. If choosing a framework is like an emperor picking from a crowd of dazzling beauties, choosing a validator is like Jiang Wei mustering his generals and finding only Wang Ping and Liao Hua worth fielding. After a long stroll through GitHub, only Joi and JSON Schema made the final cut.

JSON Schema is very useful and very close to the schema formats used by the various API tools (Swagger uses JSON Schema directly), but it is verbose; asking programmers to write it by hand is asking a bit much:
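The original post showed its schema here; as an illustration only, even a small hypothetical "create user" body takes quite a few lines in raw JSON Schema:

    // Hypothetical request body described with raw JSON Schema.
    const createUserSchema = {
      type: 'object',
      properties: {
        name:  { type: 'string', minLength: 1, maxLength: 64 },
        email: { type: 'string', format: 'email' },
        age:   { type: 'integer', minimum: 0, maximum: 150 }
      },
      required: ['name', 'email'],
      additionalProperties: false
    };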

Joi, the validator that ships with hapi, has a much more human interface; the same schema can be described in roughly a third of the code:
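The Joi equivalent of the hypothetical schema above is noticeably more compact:

    const Joi = require('joi');

    // The same hypothetical "create user" body, expressed with Joi.
    const createUserSchema = Joi.object({
      name:  Joi.string().min(1).max(64).required(),
      email: Joi.string().email().required(),
      age:   Joi.number().integer().min(0).max(150)
    });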

A Joi schema can also, relatively easily (with various adaptations, of course), be converted back into a JSON schema. What is the benefit of exporting to JSON Schema? It can be used to generate a Swagger doc! Swagger is an API description language that defines the contract between a client and a server. From a Swagger doc you can generate the API's documentation and a test UI, for example:
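The original article showed a screenshot of the generated Swagger UI at this point. To give a rough idea of the underlying format, a single path in a Swagger 2.0 document looks something like this (illustrative only):

    // Fragment of a hypothetical Swagger 2.0 document for a user-creation endpoint.
    const swaggerPaths = {
      '/users': {
        post: {
          summary: 'Create a user',
          parameters: [{
            name: 'body',
            in: 'body',
            required: true,
            schema: { $ref: '#/definitions/CreateUser' }  // a JSON Schema definition
          }],
          responses: {
            '201': { description: 'User created' },
            '400': { description: 'Validation failed' }
          }
        }
      }
    };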

In the next article, I'll cover swagger in detail.

Now let's look at the ORM. Anyone who uses express regularly knows that express itself does not care how you access data; everyone is free to access it however they like, whether through raw DB calls or an ORM. In team collaboration this flexibility is a liability: it makes it easy to write code in inconsistent styles, and it breeds a lot of ad-hoc code for writing to and reading from the database, exactly the kind of access an ORM normalizes. So, despite the notoriety ORMs carry, I wanted to use an ORM wherever data access is involved.

Our databases are heterogeneous, so an ORM that only works with one class of database, such as mongoose or sequelize, is not appropriate. The choice had to be an ORM whose interface supports several different databases, while still letting you fall back to the native driver when a special query or operation is needed. That strikes a balance between the engineer's efficiency and the system's efficiency. Under Node.js there are not many ORMs like this; waterline seems to be about the only one. Waterline is the ORM open-sourced from sails.js. It supports mixing multiple databases, and where an operation cannot be expressed through the unified interface (for example MongoDB's upsert), you can conveniently drop from the model down to the native database interface.

In addition, the schema of a waterline model is described in JSON, which makes it easy to transform into a Joi schema for validation at the system's entry and exit points.
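A waterline model definition is itself a plain, JSON-like description of the attributes, which is what makes the mapping to Joi straightforward. A rough sketch (the model and its attribute options are illustrative, and they vary between waterline versions):

    const Waterline = require('waterline');

    // Hypothetical model; the attributes block is plain data that can be walked
    // and translated into a Joi schema for request/response validation.
    const User = Waterline.Collection.extend({
      identity: 'user',
      connection: 'default',
      attributes: {
        name:  { type: 'string', required: true },
        email: { type: 'string', required: true },
        age:   { type: 'integer' }
      }
    });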

Next is the logging system. An API system may span multiple servers, so logs need to be collected, processed, and visualized centrally. Generally speaking, we can use ELK or a third-party service. If centralized log management is considered at the start of the design, logs should be collected in a structured form rather than as strings. Strings can be processed with grok, but writing grok expressions for every kind of log is, after all, inefficient. Restify uses bunyan as its default logger, and bunyan can produce JSON-formatted logs, which directly meets our needs.
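A minimal bunyan setup, for illustration: each log call takes a plain object, so what comes out is already structured JSON rather than a string to be grokked later.

    const bunyan = require('bunyan');

    const log = bunyan.createLogger({ name: 'api', level: 'info' });

    // Emitted as one JSON record per line, roughly:
    // {"name":"api","route":"/ping","statusCode":200,"responseTime":12,"msg":"request completed",...}
    log.info({ route: '/ping', statusCode: 200, responseTime: 12 }, 'request completed');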

Finally, the test framework. A decent system cannot do without a proper set of test tools. My choice is ava/rewire/supertest/nyc. Ava is a unit-test framework that solves the same problems as common frameworks such as mocha and tape, but ava runs tests concurrently and efficiently, and its ES6 support is excellent: a test case can return a Promise and ava takes care of the rest. Sometimes we need to test a function that a module does not export, or mock out a function we do not care about in a given test; rewire handles such problems easily. Supertest does API-level testing, that is, functional testing, and nyc is used for test coverage.
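A sketch of a functional test in this style, assuming the API module exports its restify server (the endpoint and file layout here are hypothetical):

    const test = require('ava');
    const request = require('supertest');

    // Hypothetical: the API module exports the restify server instance.
    // restify's server exposes listen()/address(), so supertest can usually
    // drive it directly; otherwise pass the underlying server.server.
    const server = require('../src/server');

    test('GET /ping answers 200 with a pong', async t => {
      const res = await request(server).get('/ping');
      t.is(res.status, 200);
      t.deepEqual(res.body, { pong: true });
    });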

Article source: Chen Tian's blog, Program Life
