9 Wrong cache design practices

0x0 Relying on the default serialization/deserialization

The default serializer/deserializer consumes a large amount of CPU, especially for complex data types. Use a serialization mechanism that is appropriate for your language and runtime environment.
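
A minimal Python sketch of this idea, assuming an application-level wrapper (the SerializingCache class and its json default are illustrative, not from the original article): the cache only ever stores the bytes produced by the chosen serializer, so switching to a faster library means passing in different dumps/loads callables.

```python
import json
from typing import Any, Callable

class SerializingCache:
    """Cache wrapper with a pluggable serializer instead of the runtime default."""

    def __init__(
        self,
        dumps: Callable[[Any], bytes] = lambda obj: json.dumps(obj).encode("utf-8"),
        loads: Callable[[bytes], Any] = json.loads,
    ):
        self._store: dict[str, bytes] = {}  # stand-in for a real cache server
        self._dumps = dumps
        self._loads = loads

    def set(self, key: str, value: Any) -> None:
        self._store[key] = self._dumps(value)

    def get(self, key: str) -> Any:
        data = self._store.get(key)
        return None if data is None else self._loads(data)

cache = SerializingCache()
cache.set("user:42", {"id": 42, "name": "Alice"})
print(cache.get("user:42"))  # {'id': 42, 'name': 'Alice'}
```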

0x1 Storing a large object in a single cache entry

Because serializing and deserializing large objects is CPU-intensive, frequent access to a cache full of large objects can quickly exhaust the server's CPU under heavy load. Instead, break large objects into smaller ones and cache them separately. (Translator's note: a Redis hash, for example, solves this problem well.)
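
A sketch of the translator's Redis hash suggestion, assuming a local Redis server and the redis-py client (the user:42 key and its fields are made up for illustration):

```python
import redis  # redis-py; assumes a Redis server reachable on localhost:6379

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Rather than one large serialized blob under a single key,
# store each field of the profile as a separate hash field.
r.hset("user:42", mapping={
    "name": "Alice",
    "email": "alice@example.com",
    "plan": "pro",
})

# Readers fetch only the field they need, so nothing large is deserialized.
print(r.hget("user:42", "email"))  # alice@example.com
```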

0x2 Using the cache to share objects across multiple threads

A race condition arises when different parts of the program write the same cache entry at the same time. An external locking mechanism is required to keep such updates consistent.
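
A minimal sketch of such external locking in Python, using a threading.Lock around a shared in-process cache (the counter example is illustrative):

```python
import threading

cache: dict[str, int] = {}
cache_lock = threading.Lock()  # external lock guarding the shared entry

def increment_counter(key: str) -> None:
    # Without the lock, two threads could read the same old value and
    # both write back old + 1, silently losing one of the updates.
    with cache_lock:
        cache[key] = cache.get(key, 0) + 1

threads = [threading.Thread(target=increment_counter, args=("hits",)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(cache["hits"])  # 10
```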

0x3 Assuming that an item stays in the cache once it has been stored

Never assume that an entry is still in the cache, even if it was written only a moment ago, because the cache server evicts entries when memory is tight. The code should always null-check the value it reads from the cache.
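
A sketch of that null check as part of a cache-aside read path (the get_user and load_user_from_db names are illustrative stand-ins for real lookups):

```python
from typing import Optional

cache: dict[str, dict] = {}

def load_user_from_db(user_id: int) -> dict:
    return {"id": user_id, "name": "Alice"}  # placeholder for the real query

def get_user(user_id: int) -> dict:
    key = f"user:{user_id}"
    value: Optional[dict] = cache.get(key)  # may be None even right after a set
    if value is None:
        # Cache miss (never stored, or already evicted): rebuild from storage.
        value = load_user_from_db(user_id)
        cache[key] = value
    return value

print(get_user(42))
```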

0x4 Caching an entire collection together with the objects it contains

Doing so performs poorly, because reading any single item forces the whole collection to be serialized and deserialized. Cache the items individually so that each one can be fetched on its own.
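
For example, instead of caching a whole product list under one key, each item can get its own key, plus a small index entry for the "list everything" case (the keys and data below are illustrative):

```python
cache: dict[str, object] = {}

products = [
    {"id": 1, "name": "keyboard"},
    {"id": 2, "name": "mouse"},
    {"id": 3, "name": "monitor"},
]

# One entry per item, so a single product can be read or refreshed
# without deserializing the whole collection.
for product in products:
    cache[f"product:{product['id']}"] = product

# A lightweight index of IDs covers listing use cases.
cache["product:ids"] = [p["id"] for p in products]

print(cache["product:2"])
```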

0x5 Caching parent and child objects together

A child object is sometimes contained by more than one parent object at a time. To avoid caching multiple copies of the same data in different places, cache the child objects separately and have the parent object retrieve a child by its key when it is needed.
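
A sketch of keeping one cached copy of each child and letting parents reference it by key (the order/customer shapes are illustrative):

```python
cache: dict[str, dict] = {}

# Child objects are cached once, under their own keys.
cache["customer:7"] = {"id": 7, "name": "Alice"}

# Parent objects store only the child's ID, not an embedded copy.
cache["order:100"] = {"id": 100, "customer_id": 7, "total": 30}
cache["order:101"] = {"id": 101, "customer_id": 7, "total": 55}

def get_order_with_customer(order_id: int) -> dict:
    order = cache[f"order:{order_id}"]
    # Resolve the child through its own entry; updating customer:7 once
    # is immediately visible to every order that references it.
    customer = cache[f"customer:{order['customer_id']}"]
    return {**order, "customer": customer}

print(get_order_with_customer(100))
```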

0x6 Caching objects that hold handles to streams, files, registry keys, or network connections

Once such an object is evicted from the cache, the underlying resource is never released, which leaks system resources. Cache only the data, not the live handle.
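
A sketch of caching the data read through a handle rather than the handle itself (the config-file example is illustrative):

```python
cache: dict[str, str] = {}

def get_config_text(path: str) -> str:
    """Cache the file's contents (plain data), never the open file object."""
    text = cache.get(path)
    if text is None:
        # Open, read, and close the handle immediately; only the string is
        # cached, so an eviction can never strand an open file descriptor.
        with open(path, "r", encoding="utf-8") as f:
            text = f.read()
        cache[path] = text
    return text

# Usage (path is illustrative): get_config_text("app.conf")
```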

0x7 Caching the same data item under different keys

It is convenient to be able to reach a data item both by key and by index number. This works with a local in-memory cache, because both keys reference the same object: change the object and every access path observes the change. With a remote cache this is not the case; each key holds its own serialized copy, and the copies can fall out of sync.
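
With a remote cache, one way to keep both access paths consistent is to store the object under a single canonical key and make the secondary key a pointer to it. A small sketch (the key names and the plain dict standing in for the cache are illustrative):

```python
cache: dict[str, object] = {}

# One canonical entry holds the data...
cache["user:id:42"] = {"id": 42, "name": "Alice"}
# ...and the alternate access path stores only a reference to that key.
cache["user:index:7"] = "user:id:42"

def get_by_index(index: int):
    canonical_key = cache[f"user:index:{index}"]
    return cache[canonical_key]  # single copy, so every path sees updates

cache["user:id:42"] = {"id": 42, "name": "Alicia"}  # update once
print(get_by_index(7))  # reflects the update through the index path too
```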

0x8 Not updating or deleting the cached item when the data item is updated or deleted in persistent storage

If the cache is not updated or invalidated in the same code path that writes to persistent storage, readers will keep getting stale data from the cache.
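
A sketch of invalidating the cache in the same code path as the write (the function names and keys are illustrative; the database call is a stub):

```python
cache: dict[str, dict] = {}

def save_email_to_database(user_id: int, new_email: str) -> None:
    pass  # placeholder for the real UPDATE statement

def update_user_email(user_id: int, new_email: str) -> None:
    # 1. Write to persistent storage first.
    save_email_to_database(user_id, new_email)
    # 2. Drop the cached copy in the same code path, so the next reader
    #    repopulates the cache from the fresh database row.
    cache.pop(f"user:{user_id}", None)
```
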
Translated from: Ten Caching Mistakes that Break your App
