Adhesive Framework Series: Use and Implementation of the Memory Queue Service Module


As mentioned earlier, both the client and the server of the Mongodb data service use the memory queue service module to submit data. Using the memory queue service brings the following benefits:

1. Asynchronous operation: suppose it takes 10 ms for the client to convert the data and call Wcf to submit it to the server. With the queue service, the client's call to insert data into the queue returns within about 1 ms, and the remaining work happens in the background.

2. Reduced instantaneous traffic: if a large amount of data has to be submitted at one point in time, such uneven submission puts instantaneous pressure on the server or the database. With the queue service, data is submitted at the specified interval, so no instantaneous pressure is generated; uncommitted data is kept in memory. Of course, what we implement here is a memory queue service, so it is not suitable for data that must never be lost. In most cases, however, a memory queue service is sufficient.

3. Error handling: the queue service provides multiple error handling policies to meet various error handling requirements.

4. Performance improvement: the queue service allows data to be submitted in batches, which is especially valuable for cross-machine submission.

5. Improved reuse: the queue service can be used anywhere data is produced and consumed; it is not limited to submitting data to a database. For example, the alarm service module also uses the queue service to send text messages and emails, avoiding instantaneous pressure on the SMS and email servers. Extracting the memory queue service into its own module greatly improves reuse for such requirements.

 

In this article, we will first look at how to use the memory queue service. Take the Mongodb data service client as an example. First, you need to initialize the memory queue service:

var memoryQueueService = LocalServiceLocator.GetService<IMemoryQueueService>();
memoryQueueService.Init(new MemoryQueueServiceConfiguration(string.Format("{0}_{1}", ServiceName, typeFullName), InternalSubmitData)
{
    ConsumeErrorAction = config.ConsumeErrorAction,
    ConsumeIntervalMilliseconds = config.ConsumeIntervalMilliseconds,
    ConsumeIntervalWhenErrorMilliseconds = config.ConsumeIntervalWhenErrorMilliseconds,
    ConsumeItemCountInOneBatch = config.ConsumeItemCountInOneBatch,
    ConsumeThreadCount = config.ConsumeThreadCount,
    MaxItemCount = config.MaxItemCount,
    NotReachBatchCountConsumeAction = config.NotReachBatchCountConsumeAction,
    ReachMaxItemCountAction = config.ReachMaxItemCountAction,
});

We can see that the implementation of IMemoryQueueService is obtained through LocalServiceLocator. The definition of IMemoryQueueService is as follows:

public interface IMemoryQueueService : IDisposable
{
    /// <summary>
    /// Initialize the queue service
    /// </summary>
    /// <param name="configuration"></param>
    void Init(MemoryQueueServiceConfiguration configuration);

    /// <summary>
    /// Enqueue a single item
    /// </summary>
    /// <typeparam name="T"></typeparam>
    /// <param name="item"></param>
    void Enqueue<T>(T item);

    /// <summary>
    /// Enqueue multiple items
    /// </summary>
    /// <typeparam name="T"></typeparam>
    /// <param name="item"></param>
    void EnqueueBatch<T>(IList<T> item);

    /// <summary>
    /// Get the queue state
    /// </summary>
    /// <returns></returns>
    MemoryQueueServiceState GetState();
}
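To give a feel for how the interface fits together, here is a minimal sketch of initializing a queue with a simple consume callback and enqueuing a few items. The queue name, the Order type and the console-logging callback are illustrative, not from the original code base; only LocalServiceLocator, IMemoryQueueService, MemoryQueueServiceConfiguration, Enqueue and EnqueueBatch come from the definitions above.

using System;
using System.Collections.Generic;

// Minimal usage sketch; Order and the queue name are made-up examples.
public class Order
{
    public string Id { get; set; }
}

public static class QueueUsageSketch
{
    public static void Run()
    {
        var queueService = LocalServiceLocator.GetService<IMemoryQueueService>();

        // The consume callback receives each batch as IList<object>,
        // matching the Action<IList<object>> delegate in the configuration.
        queueService.Init(new MemoryQueueServiceConfiguration(
            "OrderQueue_Sketch",
            items => Console.WriteLine("Consumed {0} item(s)", items.Count)));

        // Producers just enqueue; the calls return almost immediately while
        // the actual consumption happens on the background consumer thread.
        queueService.Enqueue(new Order { Id = "1" });
        queueService.EnqueueBatch(new List<Order>
        {
            new Order { Id = "2" },
            new Order { Id = "3" },
        });
    }
}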

After the implementation is obtained, it must be initialized with a MemoryQueueServiceConfiguration, which is defined as follows:

public class MemoryQueueServiceConfiguration
{
    /// <summary>
    /// Memory queue name
    /// </summary>
    [MongodbPresentationItem(DisplayName = "memory queue name")]
    public string MemoryQueueName { get; set; }

    /// <summary>
    /// Delegate that consumes the data
    /// </summary>
    [MongodbPersistenceItem(IsIgnore = true)]
    public Action<IList<object>> ConsumeAction { get; set; }

    /// <summary>
    /// Maximum number of items in the queue
    /// </summary>
    [MongodbPresentationItem(DisplayName = "maximum number of queue items")]
    public int MaxItemCount { get; set; }

    /// <summary>
    /// Interval of data consumption in milliseconds
    /// </summary>
    [MongodbPresentationItem(DisplayName = "interval of data consumption in milliseconds")]
    public int ConsumeIntervalMilliseconds { get; set; }

    /// <summary>
    /// Interval of data consumption when an error occurs, in milliseconds
    /// </summary>
    [MongodbPresentationItem(DisplayName = "interval of data consumption when an error occurs, in milliseconds")]
    public int ConsumeIntervalWhenErrorMilliseconds { get; set; }

    /// <summary>
    /// Policy applied after the maximum number of items is reached
    /// </summary>
    [MongodbPresentationItem(DisplayName = "policy after the maximum number of items is reached")]
    public MemoryQueueServiceReachMaxItemCountAction ReachMaxItemCountAction { get; set; }

    /// <summary>
    /// Policy applied when the batch count is not reached during consumption
    /// </summary>
    [MongodbPresentationItem(DisplayName = "policy when the batch count is not reached during consumption")]
    public MemoryQueueServiceNotReachBatchCountConsumeAction NotReachBatchCountConsumeAction { get; set; }

    /// <summary>
    /// Policy applied when a consumption error occurs
    /// </summary>
    [MongodbPresentationItem(DisplayName = "policy when a consumption error occurs")]
    public MemoryQueueServiceConsumeErrorAction ConsumeErrorAction { get; set; }

    private int consumeThreadCount;
    /// <summary>
    /// Total number of consumer threads
    /// </summary>
    [MongodbPresentationItem(DisplayName = "total number of consumer threads")]
    public int ConsumeThreadCount
    {
        get { return consumeThreadCount; }
        set
        {
            if (value <= 0)
                throw new ArgumentException("Invalid argument!", "ConsumeThreadCount");
            consumeThreadCount = value;
        }
    }

    private int consumeItemCountInOneBatch;
    /// <summary>
    /// Number of items consumed in one batch
    /// </summary>
    [MongodbPresentationItem(DisplayName = "number of items consumed in one batch")]
    public int ConsumeItemCountInOneBatch
    {
        get { return consumeItemCountInOneBatch; }
        set
        {
            if (value <= 0)
                throw new ArgumentException("Invalid argument!", "ConsumeItemCountInOneBatch");
            consumeItemCountInOneBatch = value;
        }
    }

    public MemoryQueueServiceConfiguration(string queueName, Action<IList<object>> consumeAction)
    {
        MemoryQueueName = queueName;
        ConsumeAction = consumeAction;
        MaxItemCount = 10000;
        ReachMaxItemCountAction = MemoryQueueServiceReachMaxItemCountAction.AbandonOldItems
            .Add(MemoryQueueServiceReachMaxItemCountAction.LogExceptionEveryOneSecond)
            .Add(MemoryQueueServiceReachMaxItemCountAction.ChangeConsumeErrorActionToAbandonAndLogException)
            .Add(MemoryQueueServiceReachMaxItemCountAction.DecreaseConsumeIntervalOnce)
            .Add(MemoryQueueServiceReachMaxItemCountAction.DecreaseConsumeIntervalWhenErrorOnce);
        ConsumeErrorAction = MemoryQueueServiceConsumeErrorAction.AbandonAndLogException;
        ConsumeThreadCount = 1;
        ConsumeIntervalMilliseconds = 10;
        ConsumeIntervalWhenErrorMilliseconds = 100;
        ConsumeItemCountInOneBatch = 10;
        NotReachBatchCountConsumeAction = MemoryQueueServiceNotReachBatchCountConsumeAction.ConsumeAllItems;
    }
}

Here we can see that the queue name and the consume callback must be provided in the constructor. The other, optional parameters are as follows:

1. Maximum number of items in the queue: unconsumed data in the queue does not exceed this limit. The more data held in memory, the more memory is occupied, so a reasonable value should be chosen to prevent a faulty consume method from using too much memory. However, do not make this value too small, or data may be lost during submission peaks. The default value is 10000.

2. Interval of data consumption in milliseconds: the sleep time after data is consumed successfully, that is, after the consume callback returns without error. If this value is too large, the queue may not be consumed fast enough. The default value is 10 ms.

3. Interval of data consumption when an error occurs: the sleep time after a call to the consume method fails. Generally, if the remote service is unavailable or too busy, it needs some time to recover, and retrying too quickly only adds pressure on the remote service. The default value is 100 ms.

4. Policy for reaching the maximum number of items: this is a bitwise (flags) enumeration, defined as follows:

[Flags]
public enum MemoryQueueServiceReachMaxItemCountAction
{
    AbandonOldItems = 0x1,                                   // discard old data
    DoubleMaxItemCountOnce = 0x2,                            // double the maximum number of queue items once
    ChangeConsumeErrorActionToAbandonAndLogException = 0x4,  // change the error handling policy to discard the data and log the exception
    DecreaseConsumeIntervalOnce = 0x8,                       // decrease the data consumption interval once
    DecreaseConsumeIntervalWhenErrorOnce = 0x10,             // decrease the consumption-on-error interval once
    LogExceptionEveryOneSecond = 0x20,                       // log an exception once per second
}

The default value is: log the exception once per second, change the error handling policy to discard the data and log the exception (for the other error policies, see point 6 below), decrease the consumption-on-error interval once (for example, from 100 ms to 50 ms), and decrease the normal consumption interval once (for example, from 10 ms to 5 ms). Of course, you can also simply discard old data or double the maximum number of queue items once.
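Because this is a [Flags] enumeration, several policies can be combined into a single value. The configuration code earlier combines them with an Add helper from this code base; for a standard C# flags enum, a plain bitwise OR achieves the same kind of combination. A minimal sketch (the variable names are illustrative):

// Combining ReachMaxItemCountAction flags (illustrative sketch).
// Bitwise OR is the standard way to combine [Flags] enum values; the Add(...)
// helper used in the configuration code above is assumed to do the equivalent.
var reachMaxAction =
    MemoryQueueServiceReachMaxItemCountAction.AbandonOldItems |
    MemoryQueueServiceReachMaxItemCountAction.LogExceptionEveryOneSecond;

// Checking whether a particular policy is enabled:
bool logsEverySecond = reachMaxAction.HasFlag(
    MemoryQueueServiceReachMaxItemCountAction.LogExceptionEveryOneSecond);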

5. Policy when the batch count is not reached during consumption: this is an enumeration, defined as follows:

public enum MemoryQueueServiceNotReachBatchCountConsumeAction
{
    WaitForMoreItem = 1,   // wait for more data
    ConsumeAllItems = 2,   // consume all current data immediately
}

The default value is to consume all current data immediately. Note that if you choose to wait for more data and the data volume is small, the consumption delay can become very large, because consumption is only triggered once the number of unconsumed items remaining in the queue reaches the defined batch count.

6. Policy when a consumption error occurs: this is an enumeration, defined as follows:

public enum MemoryQueueServiceConsumeErrorAction
{
    /// <summary>
    /// Discard the data
    /// </summary>
    Abandon = 1,
    /// <summary>
    /// Discard the data and log the exception
    /// </summary>
    AbandonAndLogException = 2,
    /// <summary>
    /// Re-enqueue forever
    /// </summary>
    EnqueueForever = 3,
    /// <summary>
    /// Re-enqueue forever and log the exception
    /// </summary>
    EnqueueForeverAndLogException = 4,
    /// <summary>
    /// Re-enqueue at most twice
    /// </summary>
    EnqueueTwice = 5,
    /// <summary>
    /// Re-enqueue at most twice and log the exception
    /// </summary>
    EnqueueTwiceAndLogException = 6,
}

This defines the policy applied when a consumption error occurs. You can choose to discard the data directly, discard the data and log the exception, re-enqueue it forever, or re-enqueue it forever and log the exception. Re-enqueueing forever is risky when the error is caused by a bug: for example, if the data is submitted to a remote endpoint and does not support serialization, every submission of that item will fail, so it stays in the queue forever. As such items accumulate, the queue eventually fills up with bad data and stops working normally. The remaining options are to re-enqueue at most twice, with or without logging the exception. If data fails because of a transient network problem, up to two re-enqueues are usually enough to ride it out; if the data still cannot be submitted after that, it is discarded. The default value is to discard the data and log the exception.
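For a consumer that submits data across machines, where transient network failures are the most likely errors, the bounded retry policy is often the natural choice. A hedged configuration sketch follows; the queue name and the SubmitToRemoteServer callback are placeholders, while the properties and enum values come from the definitions above:

// Illustrative configuration preferring a bounded retry on consume errors.
var config = new MemoryQueueServiceConfiguration(
    "RemoteSubmitQueue_Sketch",
    items => SubmitToRemoteServer(items))   // hypothetical submit method
{
    // Retry transient failures up to twice, then give up and log the exception.
    ConsumeErrorAction = MemoryQueueServiceConsumeErrorAction.EnqueueTwiceAndLogException,
    // Back off longer than usual while the remote side is failing.
    ConsumeIntervalWhenErrorMilliseconds = 1000,
};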

7. Number of items consumed in one batch: the number of items passed to the consume callback each time data is consumed. If consumption involves a cross-machine call, this value should not be too large: a batch that is too large may fail to commit because too much data is sent at once (for example, Wcf throttling), and if combined with the wait-for-more-data policy, a large batch size can let a lot of data pile up in the memory queue waiting for submission. It should not be too small either, because very small batches mean frequent network calls and lower performance. The default value is 10.

8. Total number of consumer threads: the number of background threads that consume data. Generally, when the consumption does not cross machines, one background thread is enough. If the data volume is too large to keep up with, you can set this value to the number of CPUs; setting it higher only causes more thread scheduling and is not necessarily better. The default value is 1.

 

As you can see, the configuration of the memory queue used by the Mongodb data service client is itself defined in the background through the configuration service. The default values we chose are:

public MongodbInsertServiceConfigurationItem()
{
    TypeFullName = "";
    SubmitToServer = true;
    ReachMaxItemCountAction = MemoryQueueServiceReachMaxItemCountAction.AbandonOldItems
        .Add(MemoryQueueServiceReachMaxItemCountAction.LogExceptionEveryOneSecond);
    ConsumeErrorAction = MemoryQueueServiceConsumeErrorAction.AbandonAndLogException;
    ConsumeThreadCount = 1;
    ConsumeIntervalMilliseconds = 10;
    ConsumeIntervalWhenErrorMilliseconds = 1000;
    ConsumeItemCountInOneBatch = 100;
    NotReachBatchCountConsumeAction = MemoryQueueServiceNotReachBatchCountConsumeAction.ConsumeAllItems;
    MaxItemCount = 10000;
}

That is, 100 records are submitted per batch, once every 10 milliseconds (so up to 10,000 records can be submitted per second), and after an error the queue waits 1 second. When the quota is reached, old data is discarded and an exception is logged once per second. One background thread does the submitting, the queue quota is 10000 items, and when fewer than 100 records are pending, whatever is available is submitted.

 

After initialization, submitting data is very easy: just call the enqueue method:

submitDataMemoryQueueServices[typeFullName].Enqueue(item);

Or you can submit them in batches:

submitDataMemoryQueueServices[typeFullName].EnqueueBatch(dataList);

Using the queue service is as simple as initializing and enqueueing. When appropriate, the queue service calls the consume callback defined earlier. In addition, the queue service exposes an interface for obtaining its status:

 MemoryQueueServiceState GetState();

MemoryQueueServiceState is defined as follows:

[MongodbPersistenceEntity("State", DisplayName = "memory queue service state", Name = "MemQueue")]
public class MemoryQueueServiceState
{
    /// <summary>
    /// Memory queue name
    /// </summary>
    [MongodbPresentationItem(DisplayName = "memory queue name")]
    public string MemoryQueueName { get; set; }

    /// <summary>
    /// Memory queue configuration
    /// </summary>
    [MongodbPresentationItem(DisplayName = "memory queue configuration")]
    public MemoryQueueServiceConfiguration Configuration { get; set; }

    /// <summary>
    /// Total number of items consumed
    /// </summary>
    [MongodbPresentationItem(DisplayName = "total number of items consumed")]
    public long TotalConsumeItemCount { get; set; }

    /// <summary>
    /// Total number of items with consumption errors
    /// </summary>
    [MongodbPresentationItem(DisplayName = "total number of items with consumption errors")]
    public long TotalConsumeErrorItemCount { get; set; }

    /// <summary>
    /// Number of items currently remaining in the queue
    /// </summary>
    [MongodbPresentationItem(DisplayName = "number of items remaining in the queue")]
    public int CurrentItemCount { get; set; }

    /// <summary>
    /// Number of items currently being retried because of errors
    /// </summary>
    [MongodbPresentationItem(DisplayName = "number of items currently retried due to errors")]
    public int CurrentErrorRetryItemCount { get; set; }

    /// <summary>
    /// Time of the last consumption error
    /// </summary>
    [MongodbPresentationItem(DisplayName = "time of the last consumption error")]
    public DateTime LastConsumeErrorOccurTime { get; set; }

    /// <summary>
    /// Time when the maximum item count was last reached
    /// </summary>
    [MongodbPresentationItem(DisplayName = "time when the maximum item count was last reached")]
    public DateTime LastReachMaxItemCountOccurTime { get; set; }

    /// <summary>
    /// Exception message of the last consumption error
    /// </summary>
    [MongodbPresentationItem(DisplayName = "exception message of the last consumption error")]
    public string LastConsumeErrorMessage { get; set; }
}
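As a hedged sketch, this state could be polled periodically to log basic health information. Only GetState and the properties above come from the module; the console logging is a placeholder, and memoryQueueService is the instance initialized earlier:

// Illustrative health check using GetState() and a few of the state
// properties defined above; the console logging is just a placeholder.
MemoryQueueServiceState state = memoryQueueService.GetState();
Console.WriteLine(
    "Queue {0}: {1} item(s) pending, {2} consumed in total, {3} consume error(s)",
    state.MemoryQueueName,
    state.CurrentItemCount,
    state.TotalConsumeItemCount,
    state.TotalConsumeErrorItemCount);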

Because the state data is also saved to Mongodb, some attributes from the Mongodb data service are defined on it as well. This concludes the introduction to using the memory queue service module; its implementation is not covered here. The module has only one MemoryQueueService class, and the code is very simple, so if you are interested you can read the source. One thing to note: because each queue is independent, the queue service does not store queue data in static members. It is therefore the caller's responsibility to keep the queue service instance in a static member so that it is not collected by the GC.
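A minimal sketch of that caller-side responsibility, assuming a hypothetical wrapper class (the class name, queue name and callback are illustrative):

// Illustrative wrapper that keeps the queue service instance in a static
// field, as recommended above, so it is not collected by the GC while the
// background consumer is still needed.
public static class AlarmQueueHolder
{
    private static readonly IMemoryQueueService QueueService =
        LocalServiceLocator.GetService<IMemoryQueueService>();

    static AlarmQueueHolder()
    {
        QueueService.Init(new MemoryQueueServiceConfiguration(
            "AlarmQueue_Sketch",
            items => Console.WriteLine("Sending {0} alarm(s)", items.Count)));
    }

    public static void Submit(object item)
    {
        QueueService.Enqueue(item);
    }
}

Anything that produces data can then call AlarmQueueHolder.Submit(...) without worrying about the lifetime of the underlying queue.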
