1. In the ticket booking system, there is only one ticket left for a flight. Suppose one million people open the website at the same time to book it. How do you solve the concurrency problem? (This generalizes to the concurrent read/write problem that any high-concurrency website must handle.)
The first issue is visibility and fairness: as long as the ticket has not been sold, every visitor must be able to see it; you cannot hide it from some users. Who actually gets it comes down to each user's "luck" (network speed, and so on).
The second issue is concurrency: if one million people click "buy" at the same time, who gets the deal? There is only one ticket in total.
First, several concurrency-related solutions come to mind: locking and synchronization.
Synchronization here refers to the application layer: when multiple threads arrive, they are allowed into the critical section only one at a time; in Java this is the synchronized keyword. There are also two layers of locks: one is the object lock used in Java for thread synchronization; the other is the database lock. In a distributed system, obviously, only database locks can be used, because a JVM-level lock only covers a single process.
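A minimal sketch of the application-level approach, assuming a hypothetical TicketService that keeps the remaining count in memory (the class and field names are illustrative, not from the original text):

// Illustrative only: application-level synchronization for the single-ticket case.
// Within one JVM, synchronized guarantees that only one thread at a time can
// decrement the remaining count, but it does nothing across multiple servers.
public class TicketService {

    private int remainingTickets = 1; // the one ticket left for the flight

    // All buyers funnel through this method one at a time.
    public synchronized boolean book(String userId) {
        if (remainingTickets > 0) {
            remainingTickets--;   // this user wins the ticket
            return true;
        }
        return false;             // ticket already sold
    }
}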
Assuming we adopt a synchronization mechanism or a physical database lock, how do we still let everyone see the ticket at the same time? Serializing every request obviously sacrifices performance, which is not acceptable for a high-concurrency website.
With Hibernate we get another pair of concepts: optimistic locks and pessimistic locks (the latter being the traditional physical lock). Optimistic locking can solve this problem: instead of locking the table, concurrency is controlled at the business level. It keeps the data readable by concurrent users while making writes exclusive, so it preserves performance and still prevents the dirty data that concurrency would otherwise cause.
How to implement optimistic locking in Hibernate:
Premise: add a redundant field to the existing table, a version number of type long.
Principle: 1) a commit is accepted only if the submitted version number equals the version number currently stored in the database row; 2) after a successful commit, the version number is incremented (version++).
The implementation is simple: add optimistic-lock="version" to the O/R mapping. Here is an example.
<hibernate-mapping>
    <class name="com.insigma.stock.ABC" optimistic-lock="version" table="t_stock" schema="stock">
        <id name="id" column="id"/>
        <!-- the redundant version field described above; Hibernate checks and increments it on each update -->
        <version name="version" column="version" type="long"/>
        ...
    </class>
</hibernate-mapping>
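The effect of the version check can be illustrated with plain JDBC. This is only a sketch of the principle behind optimistic-lock="version" (the t_stock table and its price/version columns are assumed for illustration), not the exact SQL Hibernate generates:

// Sketch of the optimistic-lock principle with plain JDBC (assumed table t_stock
// with columns id, price, version). The UPDATE succeeds only if the version we
// read earlier is still the version in the database; otherwise someone else committed first.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class OptimisticUpdate {

    public boolean updatePrice(Connection conn, long id, double newPrice, long readVersion)
            throws SQLException {
        String sql = "UPDATE t_stock SET price = ?, version = version + 1 "
                   + "WHERE id = ? AND version = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setDouble(1, newPrice);
            ps.setLong(2, id);
            ps.setLong(3, readVersion);
            // 0 rows updated means the version changed under us: the commit is rejected
            // and the caller should reload and retry (Hibernate signals this as a stale-object error).
            return ps.executeUpdate() == 1;
        }
    }
}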
2. How would you design a stock trading system or banking system that must handle very large data volumes?
First, in a stock trading system a market quote is generated every few seconds. Assuming one quote every 3 seconds, that is 20 records per minute per stock, so a day produces roughly (number of stocks) × 20 × 60 × 6 records (20 per minute × 60 minutes × about 6 trading hours), i.e. around 7,200 records per stock per day and well over 100,000 per stock in a month, multiplied again by the number of listed stocks. How many records will this table accumulate in January? Once the number of records in a single Oracle table grows past a certain size, query performance degrades badly. How can we keep the system performant?
Another example: China Mobile has hundreds of millions of users. How should the tables be designed? Should everything live in one table?
Therefore, for systems with very large data volumes you must consider table sharding (the table names differ, but the structure is the same). There are several common approaches, chosen case by case:
1) Shard by business key. For a mobile phone number table, for instance, numbers starting with 130 go into one table, numbers starting with 131 into another, and so on (a routing sketch follows this list).
2) Use Oracle's built-in table partitioning mechanism to split the table.
3) For a trading system, split the data along the time line: the current day's data lives in one table, and historical data is moved to other tables, so reports and queries over historical data do not affect the current day's trading.
Of course, once tables are split, the application must adapt accordingly: the O/R mapping may have to change, and some services may have to go through stored procedures instead.
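A minimal sketch of approach 1), routing a phone number to its shard table by prefix; the t_user_130, t_user_131, ... naming convention is an assumption for illustration, not prescribed by the original text:

// Illustrative shard routing for approach 1): pick the physical table by number prefix.
public class PhoneShardRouter {

    /** Maps a mobile number such as "13012345678" to a table name such as "t_user_130". */
    public String tableFor(String phoneNumber) {
        if (phoneNumber == null || phoneNumber.length() < 3) {
            throw new IllegalArgumentException("not a valid mobile number: " + phoneNumber);
        }
        String prefix = phoneNumber.substring(0, 3);   // e.g. "130", "131", ...
        return "t_user_" + prefix;                     // all shards share the same structure
    }

    public String buildQuery(String phoneNumber) {
        // The application (or a DAO layer / stored procedure) has to plug the routed
        // table name into its SQL, which is why plain O/R mapping alone is not enough.
        return "SELECT * FROM " + tableFor(phoneNumber) + " WHERE phone = ?";
    }
}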
3. In addition, we have to consider caching
The cache here does not just mean the level-1 and level-2 caches provided by Hibernate; it means a cache that is independent of the application and serves data directly from memory. If we can reduce frequent database access, the system clearly benefits. For example, if users of an e-commerce system frequently search for a product keyword, the matching product list can be kept in the cache (in memory), so that not every request hits the database, and performance improves greatly.
A simple cache can be understood as a HashMap you maintain yourself: the key identifies the frequently accessed data, and the value is what was fetched from the database the first time; subsequent reads come from the map instead of the database. A professional cache framework such as memcached can be deployed independently as a cache server.
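A minimal sketch of the hand-rolled cache described above, using a ConcurrentHashMap for thread safety; ProductDao and Product are hypothetical stand-ins for the real data access layer, and eviction/expiry are omitted:

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A hand-rolled cache in the spirit described above: look in the map first,
// fall back to the database on a miss, then remember the result.
public class ProductSearchCache {

    private final Map<String, List<Product>> cache = new ConcurrentHashMap<>();
    private final ProductDao dao;

    public ProductSearchCache(ProductDao dao) {
        this.dao = dao;
    }

    public List<Product> search(String keyword) {
        // computeIfAbsent loads from the database only on the first request for a keyword.
        return cache.computeIfAbsent(keyword, dao::findByKeyword);
    }

    // Minimal stand-ins so the sketch is self-contained.
    public interface ProductDao { List<Product> findByKeyword(String keyword); }
    public static class Product { public String name; }
}

A dedicated cache server such as memcached works the same way conceptually, but the map lives in a separate process shared by all application servers.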
A few examples of big data