Cache is the new RAM


This is a talk from Defrag 2014; the original English title is "Cache is the new RAM".


This is one of the advantages of looking back across a long stretch of technological change: you get to see the whole arc. If you only ever see a slice of it, it is hard to draw the right conclusions; you are either caught up in the short-term progress or hopelessly behind it. What is surprising is not how quickly things change, but how long it takes for entrenched engineering practice to be overturned. This is the Strowger switch, a device for automatically connecting telephone lines, invented in 1891.

In 1951, when the turn toward digital switching technology began, a typical central switching office was still essentially a scaled-up piece of Victorian technology: every telephone being switched had its own separate Strowger switch.

At the time, it was the best technology available. To us, of course, it looks like the world's largest piece of steampunk installation art.

It would be a mistake to feel superior to it, though. Sixty-five years after the arrival of the integrated circuit, hundreds of millions of these devices are still humming along. Only now are we really reaching the turning point to fully solid-state computing, solid-state as opposed to mechanical.

The most exciting technology shifts come in two kinds: a new model becomes viable, or an old limit stops applying. In our industry, both kinds of change are under way.

Distributed computing is now the dominant programming model throughout the software stack. The so-called central processing unit is no longer central, and it is no longer even a single unit; it is just one of a swarm of bugs crawling over a mountain of data. The database is the last bastion.

At the same time, the latency gap between memory and disk storage is becoming irrelevant. For more than thirty years, the dominant concern in database performance has been the enormous difference between accessing data in memory and seeking random data on disk. Now that we can keep all of the data in memory, that worry largely disappears. It isn't quite that simple, of course; you can't just mmap your B-trees and call it a day. There is a lot to work through before a fully in-memory design is ready for production.
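
To put that gap in perspective, here is a back-of-envelope comparison; the latency figures below are rough textbook ballparks I am assuming for illustration, not measurements from any particular system.

    # Rough latency arithmetic with assumed ballpark figures:
    # a spinning-disk random seek ~10 ms, a DRAM access ~100 ns.
    disk_seek_ns = 10_000_000      # 10 milliseconds
    dram_access_ns = 100           # 0.1 microseconds
    ratio = disk_seek_ns / dram_access_ns
    print(f"one disk seek costs about {ratio:,.0f} DRAM accesses")
    # -> one disk seek costs about 100,000 DRAM accesses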

Together, these two trends open up a completely new way to think about, design, and build applications. Now let's talk about how we got here, what we are doing about it, and what the future holds.


(Prehistory: around the year 2000, judging from the slide; the user is drawn as a dinosaur, the author's little joke)

Back then, every box in the architecture diagram had a clear, fixed meaning. Each component was a distinct function: the database, the web server, each playing its own role inside the building (a machine room or data center). Incidentally, this is where the word "cloud" comes from: a fluffy cloud was the standard diagram symbol for the WAN, whose details we didn't have to worry about at all.


(2000: Load balancers solve everything)

The easy kinds of distributed computing went mainstream first. Several identical application servers sit behind a load balancer, which spreads the load evenly across them. Load-balancing only the stateless parts of the architecture sidesteps a lot of thorny problems (in theory, anyway). As the system grows, these components fan out along the edges and end up surrounding "the" database. We tell ourselves it's fine to keep swapping in faster disks and faster CPUs for the database; after all, it's only one machine. The hardware vendors are happy to take our money.
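
A minimal sketch of why the stateless tier scales so easily: since no request depends on server-local state, the balancer can use a policy as dumb as round-robin. The server names here are made up for illustration.

    import itertools

    # Hypothetical pool of identical, stateless application servers.
    app_servers = ["app1:8080", "app2:8080", "app3:8080"]
    rotation = itertools.cycle(app_servers)

    def route(request_id):
        # No session state lives on the servers, so any server will do.
        return next(rotation)

    for request_id in range(5):
        print(request_id, "->", route(request_id))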


(2002: Backups solve everything)

Eventually we got serious about database backups, and keeping a hot standby database eased our consciences a little. We told ourselves there would be no more outages. Of course, that reassurance was only good for a few minutes at a time.

Of course, a hot standby mostly sits idle. Once the business analysts realize they can run large queries against production data without touching the production system, the so-called hot standby ends up nearly as busy, and nearly as critical, as production itself. We tell ourselves we can always grab it back temporarily in an emergency, but that's like saying we don't need to carry a spare tire because we can always take one off another car.


(2004: memcached solves everything)

Then Brad Fitzpatrick released memcached, a daemon that caches data in memory (hence the name: memory cache). It is a simplified, genuinely practical take on the distributed hash tables that were popular in academia, and it brings along a number of tricks: replication, horizontal partitioning, load balancing, simple arithmetic operations. We tell ourselves, once again: since most of the load is reads, why keep asking the database to run the same query off disk over and over? All you need is a fleet of small servers with a lot of memory, and of course the hardware vendors are happy to take our money for that memory too.
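
The pattern that made this so popular is the cache-aside read. A minimal sketch, using a plain dict as a stand-in for the memcached client and a hypothetical fetch_user_from_db helper in place of the real query:

    cache = {}   # stand-in for a memcached client's get/set

    def fetch_user_from_db(user_id):
        # Hypothetical stand-in for the expensive SELECT that hits disk.
        return {"id": user_id, "name": f"user{user_id}"}

    def get_user(user_id):
        key = f"user:{user_id}"
        row = cache.get(key)                   # 1. try the cache first
        if row is None:
            row = fetch_user_from_db(user_id)  # 2. on a miss, go to the database
            cache[key] = row                   # 3. populate the cache for next time
        return row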

You might need to write a bit of cache-invalidation code. That doesn't sound hard, right?
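
Continuing the sketch above: "a bit of cache-invalidation code" means every write path now has to remember to drop (or overwrite) the cached copy, or readers keep seeing stale data. The update helper is hypothetical.

    def update_user_in_db(user_id, **changes):
        pass   # hypothetical stand-in for the UPDATE statement

    def update_user(user_id, **changes):
        update_user_in_db(user_id, **changes)   # write to the source of truth first
        cache.pop(f"user:{user_id}", None)      # then invalidate the stale cache entry

    # Forget that second line in even one write path and the bug reports begin.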


(2004: memcached solves everything, plus cache invalidation)

And in fairness, the memcached approach served us well for a long time. It trades the random I/O of the disk for the random I/O of memory. Still, the database machine keeps getting bigger and busier. We come to realize that the memory devoted to the cache has to be at least as large as the working set (otherwise it's useless), and that the cache gives us no durability. We tell ourselves this is simply the overhead of web scale.

(2006: Sharding solves everything)

More worrying, applications are getting more complex and chattier, and nearly every request now writes to the database. Writes, rather than reads, have become the bottleneck. That's when we finally take sharding the database seriously. Facebook initially split its user data by university, which left it with a "Harvard database" that it maintained for a long time. Flickr is another good example: they hand-built a sharding layer in PHP, splitting the database by a hash of the user ID and keying memcached entries the same way. At conference talks they admitted they had to denormalize tables and double-write certain objects (comments, messages, likes) across shards.
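
A toy version of that kind of scheme, assuming the details: route each user to a shard by a hash of the user ID, and double-write denormalized objects such as comments to both sides. The shard names and the save function are invented for illustration.

    import hashlib

    SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2", "db-shard-3"]

    def shard_for(user_id):
        # A stable hash, so the same user always lands on the same shard.
        digest = hashlib.md5(str(user_id).encode()).hexdigest()
        return SHARDS[int(digest, 16) % len(SHARDS)]

    def save_comment(author_id, photo_owner_id, comment):
        # Double-write: the comment is stored (denormalized) on both the
        # author's shard and the photo owner's shard, so each side reads locally.
        for shard in {shard_for(author_id), shard_for(photo_owner_id)}:
            print(f"INSERT comment into {shard}: {comment}")

    save_comment(101, 202, {"text": "nice shot"})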

There's always a price to pay for infinite scaling, right?


(2008: NoSQL solves everything)

The problem with manually sharding a relational database is that you no longer have a relational database. The shard API effectively becomes your query language. Operations are already a headache, and changing the schema becomes even more painful.
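
What "the shard API becomes your query language" looks like in practice: any query that isn't keyed by the shard key turns into application code that fans out to every shard and merges the results by hand. A sketch under those assumptions, with query_shard standing in for a per-shard SELECT:

    SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2", "db-shard-3"]

    def query_shard(shard, min_likes):
        # Hypothetical stand-in for running a SELECT against one shard.
        return []

    def most_liked_photos(min_likes, limit=10):
        rows = []
        for shard in SHARDS:                           # scatter to every shard
            rows.extend(query_shard(shard, min_likes)) # gather partial results
        # ORDER BY likes DESC LIMIT 10 now lives in the application.
        return sorted(rows, key=lambda r: r["likes"], reverse=True)[:limit]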

At this point you take a deep breath, list every shortcoming and flaw of the particular SQL implementation you happen to be using, and blame SQL itself. A tidal wave of NoSQL databases, some of them refugees from the XML-database world, washes in, making fundamentally impossible promises. They offer automatic sharding, flexible schemas, some replication... and that's how it starts. At least it's better than writing it yourself.

You know, "you don't have to write it yourself" is always a desperate thing to become the main selling point.


(2010: Map/reduce solves everything)

Moving to NoSQL is not really worse than manual sharding, since we had already given up hope of using ordinary client tools to manage and analyze the data. But it isn't much better, either. The SQL queries that business folks used to write for themselves become report code that developers have to maintain.

Remember the hot standby we were using for backups and analysis? It comes back as a Hadoop filestore with Hive queries layered on top. Since it works, the business people stop bothering us. But one big problem is the operational complexity of these systems: like the Space Shuttle, they are sold as reliable and virtually maintenance-free, and in practice they demand a great deal of hands-on work. Another big problem is getting data in and out; a full day's turnaround is considered pretty good. The third big problem is that I/O becomes the bottleneck, on both the network and the disks. We tell ourselves that this is the price of graduating to Big Data.

After all, that's how Google does it, right?


(2012: NoSQL solves everything, again)

As some of the NoSQL databases matured, a strange thing happened to their APIs: they started to look like SQL. That's because SQL is a fairly direct implementation of relational set theory, and math is not so easily fooled.

To repeat Paul Graham's insufferably smug observation about Lisp: once you add group by, filter, and join, you can no longer claim to have invented a new query language; you have only invented a new dialect of SQL, one with worse syntax and no optimizer.
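
A toy illustration of that point (not any particular product's API): the hand-rolled pipeline below is just re-deriving the relational operators behind SELECT city, COUNT(*) FROM users WHERE age >= 18 GROUP BY city, except that you, rather than an optimizer, decide the execution order.

    from collections import Counter

    users = [
        {"name": "a", "age": 25, "city": "NYC"},
        {"name": "b", "age": 17, "city": "SF"},
        {"name": "c", "age": 31, "city": "NYC"},
    ]

    adults = (u for u in users if u["age"] >= 18)   # WHERE age >= 18
    per_city = Counter(u["city"] for u in adults)   # GROUP BY city, COUNT(*)
    print(dict(per_city))                           # {'NYC': 2}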

And because these systems set out to bypass SQL, most of them skipped very important pieces that rest on relational set theory, such as the storage engine and the query optimizer. Deferring those pieces until later leads to serious performance problems. Even the systems that do address performance (or mask it by keeping everything in memory) are missing other things, such as proper backups.

I know of one very successful internet startup (you have certainly heard of it) that uses four (!) different NoSQL systems to solve the problem.


(2014: What solves everything now?)

By now it is fairly obvious that we are not going back to a single database built around the ten-millisecond (ten-million-nanosecond) random seek of a spinning disk. And in this string of hype cycles, each promising to solve everything once and for all, there is an interesting pattern: a clever approach relieves one pain point and introduces new ones.

So what is the next complicated thing to bolt onto the diagram? Maybe the real answer is to simplify.

Take memory, for example. The database machine has a lot of memory, used for buffering and computation, and the memcached boxes have a lot of memory too. The total memory across these systems is at least as large as your working data set; if it isn't, you under-bought. And I very much doubt your cache layer is 100 percent efficient. I'd bet a lot of your cached data is evicted before it is ever read again, and I'd bet you have never measured it. That doesn't make you a bad person; it just means the cache is more trouble than it looks.

Many of the functions these components perform look as though they could be combined and made to complement one another, if only they were arranged properly.

Once you adopt the axioms that the system should be distributed and the data should live in solid-state storage (purely electronic, not mechanical), something interesting happens: the model gets simpler. Temporary in-memory data structures, built only when a query needs them, can be the only structures. Random access is no longer a mortal sin; it's business as usual. You no longer have to agonize over paging, or rebalancing, or where the data lives.


(2014: In-memory SQL clusters solve everything)

It's a beautiful, simple architecture. Just as load balancers abstract away the application servers, SQL aggregators abstract away how reads and writes are organized. Putting the heart of the data-storage strategy behind a stable API lets both sides of it change with minimal disruption.
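
A toy sketch of the aggregator idea, with everything here (Shard, Aggregator) invented for illustration rather than taken from any vendor's API: the client talks to one endpoint, and the routing, fan-out, and merge happen behind a stable interface.

    class Shard:
        def __init__(self):
            self.rows = []                       # this partition's in-memory rows

        def insert(self, row):
            self.rows.append(row)

        def select(self, predicate):
            return [r for r in self.rows if predicate(r)]

    class Aggregator:
        def __init__(self, shards):
            self.shards = shards

        def insert(self, row, key):
            # Writes are routed by a hash of the partition key.
            self.shards[hash(key) % len(self.shards)].insert(row)

        def select(self, predicate):
            # Reads are scattered to every shard and merged before returning.
            rows = []
            for shard in self.shards:
                rows.extend(shard.select(predicate))
            return rows

    cluster = Aggregator([Shard() for _ in range(4)])
    cluster.insert({"user": 42, "likes": 7}, key=42)
    print(cluster.select(lambda r: r["likes"] > 5))   # [{'user': 42, 'likes': 7}]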

So now everything is fine, and we have finally arrived at the good part of history, right?

Whatever year you happen to be living in, complacency about the state of the computing art is a mistake. There is always another bottleneck.

This is AMD's Barcelona chip, a fairly modern design. It has four cores, but most of the die is taken up by the cache and I/O circuitry surrounding those cores, like the huge parking lot surrounding a Walmart. In the Pentium era, cache accounted for only about 15 percent of the die. The third great shift in computing is how much faster the CPU has become relative to memory, which is why a large slice of expensive die area is given over to cache.

Database performance used to be all about the latency gap between memory and disk. Now we kid ourselves that the latency gap between CPU and memory is not the same problem all over again, but it is.

We also pretend that shared memory really exists, but it doesn't. With this many cores and this much memory, some cores are simply closer to some parts of memory than others.

When you think about it, a computer only ever does two things: read symbols and write symbols. Performance is a function of how much data has to move and where it has to go. The best possible case is a large sequential stream that is read once, processed quickly, and never touched again; GPUs are a good example. But the most interesting workloads don't look like that.
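
A rough way to see this on your own machine: walk the same array once sequentially and once in random order. In CPython the absolute numbers are dominated by interpreter overhead, so treat only the relative gap as meaningful; the array is sized, by assumption, to be larger than typical CPU caches.

    import array, random, time

    N = 5_000_000
    data = array.array("l", range(N))   # tens of megabytes, larger than the CPU caches
    seq = list(range(N))
    rnd = seq[:]
    random.shuffle(rnd)

    def walk(indices):
        start = time.perf_counter()
        total = 0
        for i in indices:
            total += data[i]            # random indices defeat the cache and prefetcher
        return time.perf_counter() - start

    print(f"sequential: {walk(seq):.2f}s   random: {walk(rnd):.2f}s")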


(Throughput and latency always get the last laugh)

Every random pointer dereference is practically a guaranteed cache miss, and every piece of contention over the same memory (a write lock, say) adds a pile of coordination latency. Even with a 99 percent CPU cache hit rate (which is next to impossible), the time spent waiting on memory still dominates.
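
The arithmetic behind that claim, using illustrative latencies I am assuming here (roughly 1 ns for a cache hit, 100 ns for a trip to DRAM):

    # Average memory access time = hit_time + miss_rate * miss_penalty
    hit_ns, dram_ns = 1.0, 100.0
    for hit_rate in (0.90, 0.99):
        miss_rate = 1.0 - hit_rate
        amat = hit_ns + miss_rate * dram_ns
        waiting = miss_rate * dram_ns / amat
        print(f"{hit_rate:.0%} hits: avg {amat:.1f} ns, "
              f"{waiting:.0%} of it spent waiting on DRAM")

With these numbers, even a 99 percent hit rate leaves half of the average access time spent waiting on DRAM, and real hit rates are lower.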

Or put it this way: if disk is the new tape, then memory is the new disk, and the CPU cache is the new memory. Locality still matters.

So what will solve this? It looks like the same old trade-offs: do we optimize for random access or for sequential access? Do we accept the pain on writes or on reads? Or do we sit back and wait for the hardware to catch up? Perhaps memristors or some other technology will make these problems irrelevant. And while we're wishing, I'd also like a pony.

The good news is that the overall physical architecture of the distributed database has largely taken shape. Clients no longer have to deal with the internal details of four or five different subsystems. It isn't perfect, and it isn't mainstream yet, but breakthroughs take time to spread.

And if the bottleneck settles back into storage itself, it means the rest of the pieces have matured. Innovation can then happen at the level of data structures and algorithms, with fewer clean-slate architectural overhauls promising to solve everything at once. With luck, over the next fifteen years SQL databases will quietly keep getting faster and more efficient while the API stays the same.

But until then, the industry is not going to sit still.

http://kb.cnblogs.com/page/509527/
