Erlang ETS--Something about cache


Everyone says that writing a cache on top of ETS is simple, and it is indeed easy to get one working. The full code is not posted here; instead, this post briefly describes the requirements and how to meet them (calling it a "design" would be overstating things).

Requirement scenario

>> A query system on top of primary storage: a record is written once and then queried many times

So the cache needs to support the following:

While UserA is querying RecordA and UserB also needs RecordA, UserB should wait; once UserA's query completes, UserB shares RecordA's query result.

>> Limit the total memory used by a single ETS table, evicting FIFO when the limit is exceeded

That requires a queue. Since the queue length is checked frequently, consider RabbitMQ's lqueue, which stores its length so that len/1 is O(1) (a short lqueue sketch follows this list).

>> Limit the memory used by a single record: if the record is under the limit it is kept in the cache, otherwise it is not kept

>> Auxiliary features (reset the memory limit, clear the whole cache, get cache information, delete a single cache entry, ...)
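
As a minimal sketch of why lqueue helps, assuming RabbitMQ's lqueue module (it mirrors the stdlib queue API but also stores the length); the query-term atoms here are just placeholders:

    %% Sketch, assuming RabbitMQ's lqueue: same API shape as queue, O(1) len/1.
    Q0 = lqueue:new(),
    Q1 = lqueue:in(query_a, Q0),              % enqueue query terms when an entry is cached
    Q2 = lqueue:in(query_b, Q1),
    2  = lqueue:len(Q2),                      % O(1), unlike stdlib queue:len/1 which walks the queue
    {{value, query_a}, Q3} = lqueue:out(Q2),  % FIFO: the oldest query terms come out first
    false = lqueue:is_empty(Q3).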

Query status

Since UserB must wait while UserA is querying RecordA, the state of the query has to be saved. The structure of a cache entry is:

{QueryTerms, QueryStatus, WaitingUsers, QueryResult}

QueryTerms is the query condition and serves as the key.

QueryStatus is the query status: handling while the query is being processed, handled once it has been processed.

WaitingUsers is the list of processes waiting for this query. If QueryStatus is handling, a caller appends erlang:self() to WaitingUsers; once QueryStatus is handled, QueryResult holds the desired result.

QueryResult is the query result.
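
To make the flow concrete, here is a minimal sketch of the read path for that structure. The table name ?CACHE_TAB, do_query/1, the wait timeout and the {query_done, ...} message are assumptions, not the original code, and the sketch ignores the write-serialization question discussed later in the post:

    %% Read path sketch for {QueryTerms, QueryStatus, WaitingUsers, QueryResult}.
    lookup(QueryTerms) ->
        case ets:lookup(?CACHE_TAB, QueryTerms) of
            [{QueryTerms, handled, _Waiting, QueryResult}] ->
                %% someone already ran this query: share the cached result
                {ok, QueryResult};
            [{QueryTerms, handling, Waiting, _}] ->
                %% the query is in progress: register self() and wait for the result
                ets:insert(?CACHE_TAB, {QueryTerms, handling, [erlang:self() | Waiting], undefined}),
                receive
                    {query_done, QueryTerms, QueryResult} -> {ok, QueryResult}
                after 5000 -> {error, timeout}            % assumed timeout
                end;
            [] ->
                %% first caller: mark the entry as handling, run the query, notify waiters
                ets:insert(?CACHE_TAB, {QueryTerms, handling, [], undefined}),
                QueryResult = do_query(QueryTerms),
                [{QueryTerms, handling, Waiting, _}] = ets:lookup(?CACHE_TAB, QueryTerms),
                ets:insert(?CACHE_TAB, {QueryTerms, handled, [], QueryResult}),
                [Pid ! {query_done, QueryTerms, QueryResult} || Pid <- Waiting],
                {ok, QueryResult}
        end.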

FIFO queue

The cache cannot consume memory without bound, so a total limit is added; when the limit is exceeded, entries are evicted FIFO.

In this design, the gen_server process maintains not only the ETS table but also the FIFO queue, and periodically refreshes the queue length and memory usage.
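
The state carried by that gen_server might look like the following; the field and macro names are assumptions chosen to match the snippet below, not the original module:

    %% Assumed gen_server state and macro used by the refresh_mem clause below.
    -record(state, {ets_table,    % the ETS table holding the cache entries
                    queue,        % lqueue of query terms, in insertion (FIFO) order
                    queue_mem}).  % total memory limit for the table, in MB

    -define(HIBERNATE_TIMEOUT, 10000). % assumed: ms of idleness before the process hibernates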

Simplified code for the memory refresh:

    handle_info({refresh_mem}, #state{queue_mem = UnQueueMem,
                                      queue     = Queue,
                                      ets_table = EtsTable} = State) ->
        %% queue_mem is configured in MB; ets:info(_, memory) returns a size in
        %% words, so convert the limit from MB to 8-byte words for comparison
        QueueMem = UnQueueMem * 1024 * 1024 / 8,
        case catch ets:info(EtsTable, memory) of
            Mem when erlang:is_integer(Mem) ->
                if
                    Mem > QueueMem ->
                        case lqueue:is_empty(Queue) of
                            true ->
                                {noreply, State, ?HIBERNATE_TIMEOUT};
                            _ ->
                                %% over the limit: evict the oldest entry and re-check
                                {{value, OldQueryTerms}, NewQueue} = lqueue:out(Queue),
                                delete_old_ets(EtsTable, OldQueryTerms),
                                erlang:send(erlang:self(), {refresh_mem}),
                                {noreply, State#state{queue = NewQueue}, ?HIBERNATE_TIMEOUT}
                        end;
                    true ->
                        {noreply, State, ?HIBERNATE_TIMEOUT}
                end;
            _ ->
                {noreply, State, ?HIBERNATE_TIMEOUT}
        end;

The queue_mem bound in the function head is the total memory limit (in MB).

If the total memory limit is exceeded and the queue is not empty, the oldest query terms are dequeued and the corresponding record is deleted from the ETS table; the process then sends itself {refresh_mem} again to repeat the check.
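
delete_old_ets/2 in the snippet is just the eviction helper; a minimal version (the name and its exact duties are assumptions) could be:

    %% Assumed eviction helper: drop the record keyed by the dequeued query terms.
    delete_old_ets(EtsTable, OldQueryTerms) ->
        ets:delete(EtsTable, OldQueryTerms).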

Single cache limit

To limit the memory used by a single record, you need to know that record's memory footprint. The simplest way is:

ets:info(T, memory) ---> ets:insert(T, R) ---> ets:info(T, memory)

Then calculate the difference between the memory before and after.

In the "single-process write/delete, multi-process read" mode, there is no problem with this approach.

Multi-process read and write

"Single-process (Gen_server process) write/delete, multi-process read" should be a more reasonable mode, but the drawbacks of this approach is also obvious: inefficient, in the heavy load of single-process pressure increases, process message queue accumulation, and then the problem occurs. (That is, isolation can also affect the system)

What about letting multiple processes both read and write?

With multi-process read and write, the memory-refresh work is still handed to the gen_server process. For most features this is fine (thanks to ETS), but it has a big impact on the single-record limit feature, because that limit requires three operations on ETS:

ets:info(T, memory) ---> ets:insert(T, R) ---> ets:info(T, memory)

With multiple writers, it is hard to prevent other insert/delete operations from interleaving between these three operations, so correctness is hard to guarantee.

This is where ets:safe_fixtable comes in. Online information about safe_fixtable is relatively scarce; here are a couple of references:

1. A blog post (http://www.cnblogs.com/me-sa/archive/2011/08/11/erlang0007.html), which notes:

During traversal, safe_fixtable guarantees that no errors occur and that every data item is visited exactly once. Scenarios that require item-by-item traversal are rare, so safe_fixtable is rarely used, but the mechanism is very useful. One of the most troublesome tasks back in .NET was traversing the list of online players: exceptions were almost inevitable because players kept logging in and out. ets select/match are implemented internally using safe_fixtable.

2. A Google Groups discussion (https://groups.google.com/forum/#!topic/erlang-china/OnwM5uPVjmI)
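
For reference, a minimal safe_fixtable traversal looks like this (standard ets calls; the traverse/walk names are just placeholders):

    %% Fix the table so a first/next traversal visits each live object exactly once,
    %% even while other processes insert or delete concurrently.
    traverse(Tab, Fun) ->
        ets:safe_fixtable(Tab, true),      % pin the table for safe traversal
        try
            walk(Tab, ets:first(Tab), Fun)
        after
            ets:safe_fixtable(Tab, false)  % always release the fixation
        end.

    walk(_Tab, '$end_of_table', _Fun) ->
        ok;
    walk(Tab, Key, Fun) ->
        [Fun(Obj) || Obj <- ets:lookup(Tab, Key)],
        walk(Tab, ets:next(Tab, Key), Fun).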

Other features

There is not much to say about the other features; they are just a pile of plumbing code.
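
For illustration only (the module name, registered name and message formats are assumptions), the auxiliary features can be thin gen_server calls handled by the cache process:

    %% Assumed public API for the auxiliary features.
    reset_mem_limit(NewLimitMb) -> gen_server:call(?MODULE, {reset_mem_limit, NewLimitMb}).
    clean_all()                 -> gen_server:call(?MODULE, clean_all).
    cache_info()                -> gen_server:call(?MODULE, cache_info).
    delete_cache(QueryTerms)    -> gen_server:call(?MODULE, {delete_cache, QueryTerms}).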
