How .NET Simulates Session-Level Semaphores to Limit HTTP Interface Call Frequency (with Demo)


There are many reasons why you might need to limit how often a request or method can be called.
For example, you expose a public API where a registered user may call it up to 100 times per second and an unregistered user up to 10 times per second.
For example, a method is so resource-hungry that it cannot be called by more than 10 people at the same time; otherwise the server is fully loaded.
For example, visitors must not access or post on certain pages too frequently.
For example, a flash-sale ("seckill") activity.
For example, to mitigate DDoS attacks, once a certain request frequency is reached, the script adds the offending IP to the IIS server's IP blacklist and the firewall blacklist.
Each of these examples limits call frequency from a different angle. The most direct solutions to frequency limiting live at the server layer; what I am discussing here is frequency control at the code level.

This article walks through two implementations: the first is based on a standalone (single-machine) environment; the second is a distributed implementation based on Redis.

--------------------

Taking the first API requirement as an example, let's start with the standalone implementation.
Inertia leads us straight to a cache expiration policy, but strictly speaking, the HttpRuntime.Cache expiration policy is not suitable for controlling request frequency.
HttpRuntime.Cache is ASP.NET's application-level cache. Through it you can declare multiple cache objects and give each one an expiration time; when that time is reached, the cached object disappears (that is, reading it returns null).

Why not? Suppose we want to limit a method (say, GetUserList) to at most 10 calls per second. We create an int-typed cache object and set it to expire after one second. Before each call to GetUserList we check the cached value: if it is greater than 10, GetUserList is not executed; if it is less than 10, execution is allowed and the counter is incremented. Whenever the object is missing or expired it is recreated, so the counter never exceeds 10 within one cache lifetime.

// assumes the counter was created with a one-second absolute expiration, e.g.:
// HttpRuntime.Cache.Insert("GetUserListNum", 0, null, DateTime.Now.AddSeconds(1), Cache.NoSlidingExpiration);
object count = HttpRuntime.Cache["GetUserListNum"];
if (count != null && (int)count > 10) // the 11th and later requests fail
{
    Console.WriteLine("request prohibited");
}
else
{
    HttpRuntime.Cache["GetUserListNum"] = count == null ? 1 : (int)count + 1; // otherwise, counter + 1
    Console.WriteLine("request allowed");
}

The idea and its implementation are simple, but a model built this way runs into the following situation:

 

 

Suppose each point represents one request, and the GetUserListNum cache object is created at second 0.
From 0 to 0.5 seconds there are 3 requests; from 0.5 to 1 second there are 7. At the one-second mark the object expires, and the next access recreates it with the counter reset to 0.
From 1 to 1.5 seconds there are another 7 requests, and from 1.5 to 2 seconds another 3.

Under this simple cache-expiration model we averaged 10 requests per second across those two seconds, which appears to satisfy the requirement. But take the window from 0.5 to 1.5 seconds: that is also exactly one second, yet it contains 14 requests, far beyond the per-second limit.
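The flaw can be reproduced numerically. Below is a small sketch (the class name and exact timestamps are mine, chosen to match the scenario above): the two fixed windows each count 10 requests, while the one-second window straddling them counts 14.

```csharp
using System;
using System.Linq;

// 3 hits in 0–0.5s, 7 in 0.5–1s, 7 in 1–1.5s, 3 in 1.5–2s.
// Fixed windows [0,1) and [1,2) each see 10 requests,
// but the sliding window [0.5,1.5) sees 14.
public class WindowDemo
{
    public static readonly double[] Hits =
        Enumerable.Range(0, 3).Select(i => 0.1 + i * 0.1)            // 0–0.5s: 3 hits
        .Concat(Enumerable.Range(0, 7).Select(i => 0.55 + i * 0.06)) // 0.5–1s: 7 hits
        .Concat(Enumerable.Range(0, 7).Select(i => 1.05 + i * 0.06)) // 1–1.5s: 7 hits
        .Concat(Enumerable.Range(0, 3).Select(i => 1.6 + i * 0.1))   // 1.5–2s: 3 hits
        .ToArray();

    // count the requests whose timestamp falls in [from, to)
    public static int CountIn(double from, double to) =>
        Hits.Count(t => from <= t && t < to);

    public static void Main()
    {
        Console.WriteLine(CountIn(0, 1));     // 10: first fixed window looks fine
        Console.WriteLine(CountIn(1, 2));     // 10: second fixed window looks fine
        Console.WriteLine(CountIn(0.5, 1.5)); // 14: the straddling window breaks the limit
    }
}
```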

 

So how do we solve this problem properly? We can simulate session-level semaphores, which is today's topic.
What is a semaphore? In code terms, static SemaphoreSlim semaphoreSlim = new SemaphoreSlim(5); means that under multithreading, at most five threads can be inside the protected section at any moment.
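That behavior can be demonstrated with a minimal sketch (the MeasurePeak helper and its names are mine, not part of the article's demo): twenty workers contend for the gate, yet the recorded peak concurrency never exceeds five.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public class SemaphoreDemo
{
    static readonly SemaphoreSlim Gate = new SemaphoreSlim(5); // 5 slots
    static int _current, _peak;

    public static int MeasurePeak(int workers)
    {
        _current = 0;
        _peak = 0;
        var tasks = new Task[workers];
        for (int i = 0; i < workers; i++)
        {
            tasks[i] = Task.Run(() =>
            {
                Gate.Wait();                        // blocks once 5 workers are inside
                try
                {
                    int now = Interlocked.Increment(ref _current);
                    InterlockedMax(ref _peak, now); // remember the highest concurrency seen
                    Thread.Sleep(20);               // simulate work
                }
                finally
                {
                    Interlocked.Decrement(ref _current);
                    Gate.Release();
                }
            });
        }
        Task.WaitAll(tasks);
        return _peak;
    }

    // lock-free "target = max(target, value)"
    static void InterlockedMax(ref int target, int value)
    {
        int snapshot;
        do { snapshot = target; }
        while (value > snapshot &&
               Interlocked.CompareExchange(ref target, value, snapshot) != snapshot);
    }

    public static void Main()
    {
        Console.WriteLine("peak concurrency: " + MeasurePeak(20)); // at most 5
    }
}
```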

 

The 4-container, 4-thread model

Now, we need to design a model before implementing the code.

Suppose user A has a pipeline that holds user A's requests. If A sends 10 requests in one second, each request adds one element to the pipeline. We cap the pipeline at 10 elements, and each element lives for 1 second, after which it disappears. The pipeline's length is therefore bounded no matter how fast or how many requests arrive, so over any time interval, at any point in time, the pipeline enforces our frequency limit.
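The pipeline idea above can be sketched as a bounded list of timestamps (the names Pipeline and TryAdmit are illustrative; the real demo keys pipelines per session inside a container, as described below):

```csharp
using System;
using System.Collections.Generic;

// A single-user sketch of the "pipeline": a request is admitted only if
// fewer than Capacity requests fall within the last one-second Window.
public class Pipeline
{
    public const int Capacity = 10;                       // at most 10 elements
    static readonly TimeSpan Window = TimeSpan.FromSeconds(1);
    readonly List<DateTime> _stamps = new List<DateTime>();
    readonly object _gate = new object();

    public bool TryAdmit(DateTime now)
    {
        lock (_gate)
        {
            // elements older than one second "disappear"
            _stamps.RemoveAll(t => now - t > Window);
            if (_stamps.Count >= Capacity)
                return false;                             // pipeline full: reject
            _stamps.Add(now);
            return true;                                  // request accepted
        }
    }
}
```

Passing the clock in makes the window behavior easy to see: 12 requests at the same instant admit exactly 10, and a request arriving 1.5 seconds later is admitted again because the old timestamps have expired.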

Each pipeline corresponds to a session identifier: whenever a new session arrives, a new pipeline is created. Depending on your scenario, the identifier can be the SessionId, the IP address, or a token.

Since these pipelines are session-level, we need a container to hold them. Here we name each session's pipeline by IP address and load all the pipelines into one container.

Given the rules above, we also need to process each pipeline in the container and remove expired elements, so the container gets a dedicated thread for cleanup. When a pipeline's element count drops to 0, the pipeline itself is removed to save container space.

 

Of course, with a large number of users there may be tens of thousands of pipelines in a container, and a single container doing all the loading and cleanup is clearly not efficient enough. At that point we have to scale the containers horizontally.

For example, we can generate one container per CPU core and then shard traffic by IP address with some algorithm. My current CPU has four logical cores, so four containers are generated. Whenever a user request arrives, it first passes through an algorithm over the IP address: addresses 192.168.1.11 ~ 192.168.1.13 go into the first container, xxx ~ xxx into the second, and so on. Correspondingly, four threads each process the pipelines of one container.
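As a hedged alternative to fixed address ranges, the sharding step could hash the caller's key (IP address or user id) onto a container index. The names Shard, IndexFor, and ContainerCount = 4 below are illustrative, not part of the original demo:

```csharp
using System;

public static class Shard
{
    public const int ContainerCount = 4; // one container per logical core, as above

    public static int IndexFor(string key)
    {
        unchecked
        {
            int h = 17;
            foreach (char c in key)
                h = h * 31 + c;                       // simple, stable string hash
            return (h & 0x7FFFFFFF) % ContainerCount; // non-negative container index
        }
    }

    public static void Main()
    {
        Console.WriteLine(IndexFor("192.168.1.11")); // some value in 0..3
    }
}
```

The same key always maps to the same container, so each user's pipeline is cleaned by exactly one thread.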

 

Then, our 4-container 4-thread model is formed.

Now, focus on coding implementation:

First, we need a carrier to hold these containers, similar in concept to a connection pool: it can generate as many containers as you need. If you have special requirements, you could also carve out a container-management layer above the containers and a thread-management layer above the threads for real-time monitoring and scheduling; in such a system, container scheduling and thread scheduling are essential. This demo implements only the main functionality: containers and threads are not separated in the code, and the algorithms are hard-coded. In a real design the algorithms matter a great deal, as does the multithreading model; how to lock for maximum efficiency is a top priority.

For this demo, four containers are hard-coded.

public static List<Container> ContainerList = new List<Container>(); // container carrier

static Factory()
{
    for (int i = 0; i < 4; i++)
    {
        ContainerList.Add(new Container(i)); // generate 4 containers
    }
    foreach (var item in ContainerList)
    {
        item.Run(); // start the cleanup thread
    }
}

Now assume there are 41 users, numbered 0 to 40. The diversion algorithm is hard-coded: users 0–9 send their requests to the first container, users 10–19 go to the second, 20–29 to the third, and 30–40 to the fourth.

The Code is as follows:

static Container GetContainer(int userId, out int i) // container-selection algorithm
{
    if (0 <= userId && userId < 10) // users 0–9 get the first container, and so on
    {
        i = 0; return ContainerList[0];
    }
    if (10 <= userId && userId < 20)
    {
        i = 1; return ContainerList[1];
    }
    if (20 <= userId && userId < 30)
    {
        i = 2; return ContainerList[2];
    }
    i = 3; return ContainerList[3];
}

After a session request has been routed by the algorithm, we call a method that checks the pipeline length: if the pipeline already holds 10 elements, the request fails; otherwise it succeeds.

public static void Add(int userId)
{
    if (GetContainer(userId, out int i).Add(userId))
        Console.WriteLine("Container " + i + ": user " + userId + " request accepted");
    else
        Console.WriteLine("Container " + i + ": user " + userId + " intercepted");
}

The following is the Container code.

The thread-safe ConcurrentDictionary class is chosen as the container.
Thread safety: when multiple threads read and write the same shared element at the same time, data corruption, iteration errors, and other safety issues may occur.
ConcurrentDictionary: a type introduced in .NET 4.0 to solve Dictionary's thread-safety problems (though the GetOrAdd method must be used with care).
ReaderWriterLockSlim: an optimized version of ReaderWriterLock; either multiple threads hold the read lock at the same time, or a single thread holds the write lock.

private ReaderWriterLockSlim obj = new ReaderWriterLockSlim(); // declare a read/write lock
public ConcurrentDictionary<string, ConcurrentList<DateTime>> dic =
    new ConcurrentDictionary<string, ConcurrentList<DateTime>>(); // the container

Adding an element to a pipeline in the container then looks like this:

public bool Add(int userId)
{
    obj.EnterReadLock(); // read lock: multiple threads may enter this method at once
    try
    {
        // create a new ConcurrentList for this user if one does not exist yet
        ConcurrentList<DateTime> dtList =
            dic.GetOrAdd(userId.ToString(), new ConcurrentList<DateTime>());
        // pipeline capacity is 10; returns false when the pipeline is full
        return dtList.CounterAdd(10, DateTime.Now);
    }
    finally
    {
        obj.ExitReadLock();
    }
}

The read lock here guards against the cleanup thread below, which traverses and deletes pipelines; it ensures a ConcurrentList is not removed while it is being written to.

As for ConcurrentList: .NET does not ship a thread-safe List collection (Count and Add are not synchronized), so I created a new thread-safe type inheriting from List<T>, with three methods wrapped:

public class ConcurrentList<T> : List<T>
{
    private object obj = new object();

    public bool CounterAdd(int num, T value)
    {
        lock (obj)
        {
            if (base.Count >= num)
                return false;
            base.Add(value);
            return true;
        }
    }

    public new bool Remove(T value)
    {
        lock (obj)
        {
            base.Remove(value);
            return true;
        }
    }

    public new T[] ToArray()
    {
        lock (obj)
        {
            return base.ToArray();
        }
    }
}

Finally, the cleanup thread's Run method is as follows:

public void Run()
{
    ThreadPool.QueueUserWorkItem(c =>
    {
        while (true)
        {
            if (dic.Count > 0)
            {
                foreach (var item in dic.ToArray())
                {
                    ConcurrentList<DateTime> list = item.Value;
                    foreach (DateTime dt in list.ToArray())
                    {
                        if (DateTime.Now.AddSeconds(-3) > dt) // element has expired
                        {
                            list.Remove(dt);
                            Console.WriteLine("Container " + seat + " deleted one element from user " + item.Key + "'s pipeline");
                        }
                    }
                    if (list.Count == 0)
                    {
                        obj.EnterWriteLock(); // write lock: block Add while removing the pipeline
                        try
                        {
                            if (list.Count == 0)
                            {
                                if (dic.TryRemove(item.Key, out ConcurrentList<DateTime> i))
                                {
                                    Console.WriteLine("Container " + seat + " removed user " + item.Key + "'s pipeline");
                                }
                            }
                        }
                        finally
                        {
                            obj.ExitWriteLock();
                        }
                    }
                }
            }
            else
            {
                Thread.Sleep(100);
            }
        }
    });
}

Finally, the demo front end is built on the console and SignalR.

 

Distributed Redis

The above describes a frequency-limiting model. Compared with the standalone version, the distributed version differs only in the carrier: we just move the container carrier out of the program. Building a separate service, or simply borrowing Redis, both work.

Here we will introduce the implementation of Redis in Distributed scenarios.

Unlike the ASP.NET multithreading model above, Redis processes network requests on a single thread (probably because its many fine-grained per-element operations would otherwise make the locking very complex). Thanks to that single thread, a Redis-based implementation does not need to deal with concurrency problems unrelated to the business logic.

Briefly: Redis is an in-memory, non-relational database. Its concepts differ from relational databases such as MySQL, Oracle, or SQL Server: there is no SQL, no field names, no table names. It is closer in spirit to HttpRuntime.Cache. Operations follow a key-value pattern, much like Cache["key name"], and each key can be given an expiration policy. The value stored under a key is not limited to one shape: Redis supports five data types: string, hash, list, set, and sorted set.

What we need today is the sorted set. Unlike the other collection types, a sorted set lets you attach a numeric score to each inserted element. Think of this score as a sort column by which the set orders itself internally; scores may repeat, but the elements themselves must be unique.

Following the same model, whenever a user accesses the pipeline (a sorted set), we add one element and set its score to the current time. A thread in the program then cleans out elements whose score is older than the agreed window. Because elements in a sorted set must be unique, the value just needs to be something unique, such as a UUID.

 

The Redis-based code looks like this:

The using syntax sugar wraps an encapsulated Redis distributed lock implementing IDisposable, and the logical checks then proceed as normal.

Code like this works, but it is not friendly enough. Redis is an in-memory database, so its performance bottleneck is network I/O. Instead of one round trip per Get, can we push most of the logic into a script?

Yes, Redis supports Lua scripts:
Lua is a lightweight, compact scripting language written in standard C and released as open source. It is designed to be embedded into applications, providing flexible extensibility and customization.
In short, we can send a script directly to Redis (or have it read one locally) and implement all the logic in a single round trip.

/// <summary>
/// If the set size is greater than AccountNum, returns 1.
/// Otherwise adds an element to the set and returns null.
/// </summary>
/// <param name="zcardKey"></param>
/// <param name="score"></param>
/// <param name="zcardValue"></param>
/// <param name="AccountNum"></param>
/// <returns></returns>
public string LuaAddAccoundSorted(string zcardKey, double score, string zcardValue, int AccountNum)
{
    string str = "local uu = redis.call('zcard', @zcardKey) " +
                 "if (uu >= tonumber(@AccountNum)) then return 1 " +
                 "else redis.call('zadd', @zcardKey, @score, @zcardValue) end";
    var re = _instance.GetDatabase(_num)
        .ScriptEvaluate(LuaScript.Prepare(str),
            new { zcardKey = zcardKey, score = score, zcardValue = zcardValue, AccountNum = AccountNum });
    return re.ToString();
}

local uu declares a variable named uu, and redis.call invokes a Redis command. The script means: if the set size is greater than AccountNum (10 here), return 1; otherwise add an element to the set and return null.

The pipeline element cleanup method is as follows:

/// <summary>
/// Traverses all sorted sets matching the given prefix. If there are none, returns 1.
/// Otherwise deletes elements whose score is at or below the given score,
/// and deletes any set whose size drops to 0.
/// </summary>
/// <param name="zcardPrefix"></param>
/// <param name="score"></param>
/// <returns></returns>
public string LuaForeachRemove(string zcardPrefix, double score)
{
    StringBuilder str = new StringBuilder();
    str.Append("local uu = redis.call('keys', @zcardPrefix) ");      // fuzzy-match keys by prefix
    str.Append("if (#uu == 0) then ");                               // no matching sets
    str.Append("return 1 ");
    str.Append("else ");
    str.Append("for i = 1, #uu do ");                                // traverse the sets
    str.Append("redis.call('zremrangebyscore', uu[i], 0, @score) "); // delete elements scored 0..score
    str.Append("if (redis.call('zcard', uu[i]) == 0) then ");        // if the pipeline is now empty
    str.Append("redis.call('del', uu[i]) ");                         // delete the pipeline itself
    str.Append("end ");
    str.Append("end ");
    str.Append("end");
    var re = _instance.GetDatabase(_num)
        .ScriptEvaluate(LuaScript.Prepare(str.ToString()),
            new { zcardPrefix = zcardPrefix + "*", score = score });
    return re.ToString();
}

Both pieces of work are done by sending Lua scripts. Given Redis's network model, the LuaForeachRemove method is best pulled out into a separate service for independent processing. For a multi-container, multi-thread setup, you can run multiple Redis instances.

Finally, I made a demo of all of this. I could not find a suitable network disk to upload it to, so leave your email address (I will send it if you do), or get the file directly from the QQ group and discuss it there: 166843154

 

I like to make friends with people like me: unaffected by the environment, their own teacher. Welcome to join the .NET web communication QQ group: 166843154. Desire and struggle.

 

Author: xiaozeng
Source: http://www.cnblogs.com/1996V/p/8127576.html — reprints are welcome, but any reprint must retain the complete article and the cnblogs source, displayed with the title and the original link.
.NET communication QQ group: 166843154. Desire and struggle.
