Optimizing Varnish with Varnish Modules and the Varnish Configuration Language
This is a guest article written by Denis Brækhus and Espen Braastad, developers of the Varnish API Engine at Varnish Software. Varnish has been sitting in front of backends for a long time, so let's take a look at what they are doing.
Varnish Software has just released the Varnish API Engine, a high-performance HTTP API gateway that handles authentication, authorization and throttling, all built on Varnish Cache. The Varnish API Engine can easily extend your existing API set with a unified access control layer that has built-in caching for high-volume read operations and provides real-time metrics.
The Varnish API Engine is built on well-known components such as memcached, SQLite and, most importantly, Varnish Cache. The management API is written in Python. The core of the product is an application written in VCL (the Varnish Configuration Language) running on Varnish and extended through VMODs (Varnish Modules).
We want to use this article as an opportunity to show how you can create your own flexible yet high-performance applications in VCL with the help of VMODs.
VMODs (Varnish Modules)
VCL is the language used to configure Varnish Cache. When varnishd loads a VCL configuration file, it converts the file to C code, compiles it and loads it dynamically. It has therefore always been possible to extend VCL by embedding C code directly in the configuration, but since Varnish Cache 3 the preferred way has been to use Varnish Modules, or VMODs for short.
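To give a sense of what that looks like, here is a minimal sketch (the response header name is purely illustrative) that imports the bundled std VMOD and calls one of its functions from VCL instead of resorting to inline C:

vcl 4.0;

import std;    # load the bundled std VMOD

sub vcl_deliver {
    # call a VMOD function directly from VCL
    set resp.http.X-Host-Upper = std.toupper(req.http.host);
}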
The typical request flow through a stack containing Varnish Cache is:
The HTTP request sent by the client is received and processed by Varnish Cache, which checks whether the requested content is in the cache and, if not, fetches it from the backend. This works well, but we can do much more.
The VCL language is designed for performance, so it does not provide loops or external calls natively. VMODs, on the other hand, are free to break these restrictions. This is great for flexibility, but it places the responsibility for ensuring performance and avoiding added latency on the VMOD code and its behaviour.
The API Engine demonstrates how powerful the combination of VCL and custom VMODs can be when building new applications. In the Varnish API Engine, the request flow is:
Each request is matched against a set of rules using the SQLite VMOD and against a set of counters in memcached using the memcached VMOD. The request is denied if any check fails, for example if authentication fails or the request limit is exceeded.
Application Example
The following example shows, in a very simplified form, some of the concepts used in the Varnish API Engine. We will create a small application written in VCL that looks up the request URL in a database of throttling rules and enforces those rules on a per-IP basis.
Since testing and maintainability are crucial when developing applications, we will use Varnish's integrated testing tool, varnishtest. Varnishtest is a powerful tool used to test Varnish Cache itself, and its simple interface means developers and operations engineers can also use it to test their VCL and VMOD configurations.
Varnishtest reads a file describing a set of mock servers, clients and Varnish instances. The clients send requests that pass through Varnish to the servers, and expectations can be set on content, headers, HTTP response codes and more. Using varnishtest, we can quickly test our example application and verify that requests are passed or blocked according to the expectations we define.
First, we need a database with our throttling rules. Using the sqlite3 command, we create the database in /tmp/rules.db3 and add a couple of rules.
$ sqlite3 /tmp/rules.db3 "create table t (rule text, path text);"
$ sqlite3 /tmp/rules.db3 "insert into t (rule, path) VALUES ('3r5', '/search');"
$ sqlite3 /tmp/rules.db3 "insert into t (rule, path) VALUES ('15r3600', '/login');"
These rules allow 3 requests to /search every 5 seconds and 15 requests to /login every hour. The idea is to enforce these rules on a per-IP basis.
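To double-check what we just inserted, the database can be queried directly; with sqlite3's default list output the columns come back separated by a pipe character:

$ sqlite3 /tmp/rules.db3 "select rule, path from t;"
3r5|/search
15r3600|/login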
For simplicity, we will write the test and the VCL configuration in a single file, throttle.vtc. It is, however, also possible to keep the VCL configuration separate and reuse it across different tests by using an include statement in the test files, as sketched below.
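A sketch of that alternative (the path to the VCL file is only illustrative): the VCL body in the test file shrinks to the version declaration plus an include, and the subroutines live in the included file.

varnish v1 -vcl+backend {
    vcl 4.0;
    include "/etc/varnish/throttle.vcl";
} -start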
The first line in this file is used to set the title or name of the test.
Varnishtest "Simple throttling with SQLite and Memcached"
Our test environment consists of a mock backend called s1. We first expect a request for a URL that has no rules in the database.
server s1 {
    rxreq
    expect req.url == "/"
    txresp
Next, we expect 4 requests to /search to arrive, with the following expectations. Note that the query parameters differ slightly, making each of these requests unique.
    rxreq
    expect req.url == "/search?id=123&type=1"
    expect req.http.path == "/search"
    expect req.http.rule == "3r5"
    expect req.http.requests == "3"
    expect req.http.period == "5"
    expect req.http.counter == "1"
    txresp

    rxreq
    expect req.url == "/search?id=123&type=2"
    expect req.http.path == "/search"
    expect req.http.rule == "3r5"
    expect req.http.requests == "3"
    expect req.http.period == "5"
    expect req.http.counter == "2"
    txresp

    rxreq
    expect req.url == "/search?id=123&type=3"
    expect req.http.path == "/search"
    expect req.http.rule == "3r5"
    expect req.http.requests == "3"
    expect req.http.period == "5"
    expect req.http.counter == "3"
    txresp

    rxreq
    expect req.url == "/search?id=123&type=4"
    expect req.http.path == "/search"
    expect req.http.rule == "3r5"
    expect req.http.requests == "3"
    expect req.http.period == "5"
    expect req.http.counter == "1"
    txresp
} -start
Now it is time to write our small VCL application. Our test environment also contains a Varnish instance, v1. First, we declare the VCL version and the VMOD imports.
varnish v1 -vcl+backend {
    vcl 4.0;

    import std;
    import sqlite3;
    import memcached;
VMODs are usually configured in vcl_init, and the sqlite3 and memcached VMODs are no exception. For sqlite3, we set the path to the database and the field delimiter to use in results with multiple columns. The memcached VMOD accepts a wide range of the configuration options supported by libmemcached.
    sub vcl_init {
        sqlite3.open("/tmp/rules.db3", "|;");
        memcached.servers("--SERVER=localhost --BINARY-PROTOCOL");
    }
Incoming HTTP requests are received in vcl_recv. First, we extract the request path, stripped of query parameters and potentially dangerous characters. This is important, since the path will be part of an SQL query later. The following regular expression matches req.url from the start of the line up to, but not including, the first occurrence of one of the characters ?, &, ;, " or '.
    sub vcl_recv {
        set req.http.path = regsub(req.url, {"^([^?&;"']+).*"}, "\1");
{"} Can be used in regular expressions to support the handling of" characters "in regular expression rules. The path we just extracted is only used when searching rules in the database. The response (if any) is stored in req. hhtp. rule.
        set req.http.rule = sqlite3.exec("SELECT rule FROM t WHERE path='" + req.http.path + "' limit 1");
If we got a response, it will be of the form RrT, where R is the number of requests allowed per period of T seconds. Since this is a string, we apply additional regular expressions to split it into its two parts.
        set req.http.requests = regsub(req.http.rule, "^([0-9]+)r.*$", "\1");
        set req.http.period = regsub(req.http.rule, "^[0-9]+r([0-9]+)$", "\1");
Throttling is only applied if we got proper values out of the regular expressions above.
        if (req.http.requests != "" && req.http.period != "") {
Increment (or create, with an initial value of 1) a memcached counter that is unique to the combination of client.ip and path. The expiry time is set to the period from the throttling rule in the database, so the time window can vary per rule. The value returned is the new value of the counter, which corresponds to the number of requests this client.ip has made to this path within the current time period.
            set req.http.counter = memcached.incr_set(
                req.http.path + "-" + client.ip, 1, 1, std.integer(req.http.period, 0));
Check whether the counter is higher than the limit set in the database. If it is, abort the request here and respond with status code 429.
            if (std.integer(req.http.counter, 0) > std.integer(req.http.requests, 0)) {
                return (synth(429, "Too many requests"));
            }
        }
    }
In vcl_deliver, we set response headers that show the throttling limit and status for each request, which is useful information for consumers of the service.
    sub vcl_deliver {
        if (req.http.requests && req.http.counter && req.http.period) {
            set resp.http.X-RateLimit-Limit = req.http.requests;
            set resp.http.X-RateLimit-Counter = req.http.counter;
            set resp.http.X-RateLimit-Period = req.http.period;
        }
    }
Errors are handled in vcl_synth, where we set the same headers.
    sub vcl_synth {
        if (req.http.requests && req.http.counter && req.http.period) {
            set resp.http.X-RateLimit-Limit = req.http.requests;
            set resp.http.X-RateLimit-Counter = req.http.counter;
            set resp.http.X-RateLimit-Period = req.http.period;
        }
    }
} -start
With the configuration in place, it is time to add some clients to verify that it works as intended. First, we send a request that should not be throttled, because the URL has no throttling rules in the database.
client c1 {
    txreq -url "/"
    rxresp
    expect resp.status == 200
    expect resp.http.X-RateLimit-Limit == <undef>
    expect resp.http.X-RateLimit-Counter == <undef>
    expect resp.http.X-RateLimit-Period == <undef>
} -run
The next client sends requests to a URL that we know matches a rule in the database, so we expect the rate-limit headers to be set. The rule for /search is 3r5, meaning that within a 5-second period the first three requests should succeed (status code 200), while the fourth should be throttled (status code 429).
client c2 {
    txreq -url "/search?id=123&type=1"
    rxresp
    expect resp.status == 200
    expect resp.http.X-RateLimit-Limit == "3"
    expect resp.http.X-RateLimit-Counter == "1"
    expect resp.http.X-RateLimit-Period == "5"

    txreq -url "/search?id=123&type=2"
    rxresp
    expect resp.status == 200
    expect resp.http.X-RateLimit-Limit == "3"
    expect resp.http.X-RateLimit-Counter == "2"
    expect resp.http.X-RateLimit-Period == "5"

    txreq -url "/search?id=123&type=3"
    rxresp
    expect resp.status == 200
    expect resp.http.X-RateLimit-Limit == "3"
    expect resp.http.X-RateLimit-Counter == "3"
    expect resp.http.X-RateLimit-Period == "5"

    txreq -url "/search?id=123&type=4"
    rxresp
    expect resp.status == 429
    expect resp.http.X-RateLimit-Limit == "3"
    expect resp.http.X-RateLimit-Counter == "4"
    expect resp.http.X-RateLimit-Period == "5"
} -run
At this point we know that requests are being throttled. To verify that new requests are allowed once the period is over, we add a delay before sending the next, and final, request. This request should succeed because we are now in a new throttling period.
delay 5;
client c3 {
    txreq -url "/search?id=123&type=4"
    rxresp
    expect resp.status == 200
    expect resp.http.X-RateLimit-Limit == "3"
    expect resp.http.X-RateLimit-Counter == "1"
    expect resp.http.X-RateLimit-Period == "5"
} -run
To run the test file, make sure that the memcached service is running and then execute:
$ varnishtest example.vtc
#     top  TEST example.vtc passed (6.533)
Add the -v option to varnishtest to get verbose output with more information from the running test.
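For example, re-running the same test with verbose output:

$ varnishtest -v example.vtc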
Sending requests to our example application returns response headers like the following. The first request is accepted, while the second is throttled.
$ curl -iI http://localhost/search
HTTP/1.1 200 OK
Age: 6
Content-Length: 936
X-RateLimit-Counter: 1
X-RateLimit-Limit: 3
X-RateLimit-Period: 5
X-Varnish: 32770 3
Via: 1.1 varnish-plus-v4
$ curl -iI http://localhost/search
HTTP/1.1 429 Too many requests
Content-Length: 273
X-RateLimit-Counter: 4
X-RateLimit-Limit: 3
X-RateLimit-Period: 5
X-Varnish: 32774
Via: 1.1 varnish-plus-v4
The complete throttle.vtc file outputs timestamps before and after the VMOD calls, to provide data on the overhead the memcached and SQLite queries introduce. Runs of 60 requests with varnishtest, with memcached running on a local virtual machine, gave the following timings per operation (in ms):
• SQLite SELECT, maximum: 0.32, minimum: 0.08, average: 0.115
• Memcached incr_set(), maximum: 1.23, minimum: 0.27, average: 0.29
This is not a scientific result, but it suggests that the overhead is low in most cases. Performance is also about the ability to scale horizontally. The simple example given in this article will scale horizontally in a predictable manner, using a pool of memcached instances for the global counters where necessary.
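The timestamp instrumentation itself is not shown in the listing above. A minimal sketch of how it could be done, assuming the std VMOD's timestamp function and purely illustrative label names, is to wrap the sqlite3 call in vcl_recv like this; the resulting Timestamp records can then be read with varnishlog:

        std.timestamp("sqlite_select_start");
        set req.http.rule = sqlite3.exec("SELECT rule FROM t WHERE path='" + req.http.path + "' limit 1");
        std.timestamp("sqlite_select_end");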
Additional reading
There are already lots of VMODs available, and the VMODs directory is a good starting point. Some of the highlights in that directory are VMODs for cURL, Redis and digest functions, as well as several authentication modules.
Varnish Plus, the fully supported commercial edition of Varnish Cache, comes bundled with a set of high-quality, supported VMODs. For the open source version, you can download and compile the VMODs you need manually.
Original article: Varnish Goes Upstack with Varnish Modules and Varnish Configuration Language