The LRU algorithm comes up frequently in back-end engineering interviews. In this article we will work through how the LRU algorithm behaves, and finish by implementing a simple LRU-based cache in Python.
What is a cache
First, look at a picture: when we visit a web page, the browser sends a request to the server, the server performs a series of operations, and then returns the page to the browser.
When several browsers access the site at the same time, many requests arrive within a short period, and the server performs the same series of operations for each one. This duplicated work not only wastes resources, it can also slow down response times. A cache lets the server do the work once and reuse the result for later requests.
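To make this concrete, here is a minimal sketch of such a server-side cache built on a plain dictionary; the `render_page` and `handle_request` names are hypothetical placeholders, not something from the original article.

```python
import time

def render_page(url):
    # Hypothetical, expensive rendering step (stands in for database
    # queries, template rendering, and so on).
    time.sleep(1)  # simulate slow work
    return f"<html>contents of {url}</html>"

page_cache = {}  # plain dictionary used as an in-memory cache

def handle_request(url):
    if url in page_cache:          # cache hit: reuse the stored result
        return page_cache[url]
    page = render_page(url)        # cache miss: do the work once...
    page_cache[url] = page         # ...and remember it for next time
    return page
```

Note that this cache grows without limit, which is exactly the problem an eviction policy such as LRU is meant to solve.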
What is LRU?
LRU stands for Least Recently Used. It is a cache eviction policy: when the cache is full, the entry that has gone the longest without being accessed is the one that gets evicted.
LRU's eviction logic
We use a diagram to describe the LRU eviction logic. In the diagram, the cache is a list structure: the head node is at the top, the tail node at the bottom, and the cache has a capacity of 8 (8 small slots):
- When new data arrives (meaning the data has not been cached before), it is added to the head of the list
- When the cache reaches its maximum capacity, data needs to be evicted, and the data at the tail of the list is discarded
- When data is hit in the cache (accessed again), it is moved to the head of the list (equivalent to joining the cache as new data)
From the logic above we can see that data which is accessed frequently keeps moving to the head of the list and is never evicted from the cache, while data that is accessed less often is more and more easily squeezed out of the cache.
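As a quick illustration of the three rules above, here is a small sketch using Python's `collections.OrderedDict` (a choice made for this example; the original article's diagram is not reproduced here). The right end of the ordered dict plays the role of the list head, the left end the tail.

```python
from collections import OrderedDict

capacity = 3           # small capacity so the eviction is easy to see
cache = OrderedDict()  # right end = list head (most recent), left end = tail

def access(key, value):
    if key in cache:
        cache.move_to_end(key)     # hit: move the entry to the head
    elif len(cache) >= capacity:
        cache.popitem(last=False)  # full: evict the entry at the tail
    cache[key] = value             # new (or refreshed) data sits at the head

for key in ["a", "b", "c", "a", "d"]:
    access(key, key.upper())

print(list(cache))  # ['c', 'a', 'd'] -- "b" was least recently used and is gone
```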
Implementing LRU in 20 Lines of Python
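The original 20-line listing is not reproduced here, so the class below is a rough sketch of one way to do it in roughly 20 lines, assuming Python 3.7+, whose dictionaries preserve insertion order (the last key acts as the list head, the first key as the tail).

```python
class LRUCache:
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.cache = {}  # last key = most recently used, first key = least

    def get(self, key):
        if key not in self.cache:
            return None
        # Cache hit: re-insert the entry so it moves to the "head".
        self.cache[key] = self.cache.pop(key)
        return self.cache[key]

    def put(self, key, value):
        if key in self.cache:
            self.cache.pop(key)                  # refresh its position
        elif len(self.cache) >= self.capacity:
            oldest = next(iter(self.cache))      # least recently used key
            del self.cache[oldest]               # evict from the "tail"
        self.cache[key] = value                  # insert at the "head"
```

As a quick check: with a capacity of 2, calling `put('a', 1)`, `put('b', 2)`, `get('a')`, then `put('c', 3)` leaves the cache holding `a` and `c`, because `b` was the least recently used entry when `c` arrived.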
Next time you face an LRU question in an interview, won't you feel more confident?
Interviews no longer have to be scary: 20 lines of Python code are enough to understand the LRU algorithm.