What about high performance?

Source: Internet
Author: User


There are plenty of articles in the community about high performance, high concurrency, and architectures for millions of daily PV, and frankly much of it is hot air. For most applications, achieving high performance comes down to minimizing network requests (to the db, redis, mongo, mq, and so on). For almost all applications the bottleneck ends up on the network side rather than the hardware side; hardware is out of scope here. Let's talk about what we can actually do.

I searched for the classic latency figure for a long while and couldn't find it, so I'll describe it in text. From fastest to slowest access relative to the CPU: L1 cache > L2 cache > memory > disk > network, with each level orders of magnitude slower than the one before it.
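To make that ladder concrete, here is a small sketch using the commonly cited ballpark latency numbers (illustrative figures, not measurements from this article; real values vary by hardware and network):

```python
# Ballpark access latencies (order-of-magnitude figures commonly cited;
# actual numbers vary widely by hardware and network).
LATENCY_NS = {
    "L1 cache": 1,
    "L2 cache": 4,
    "main memory": 100,
    "SSD read": 100_000,                         # ~0.1 ms
    "network round trip (same DC)": 500_000,     # ~0.5 ms
    "network round trip (internet)": 50_000_000, # ~50 ms
}

def ops_per_second(latency_ns: int) -> int:
    """How many sequential accesses fit into one second at this latency."""
    return 1_000_000_000 // latency_ns

for name, ns in LATENCY_NS.items():
    print(f"{name:32s} {ns:>12,} ns -> {ops_per_second(ns):>13,} ops/s")
```

The point of the table: one internet round trip costs as much as tens of millions of memory reads, which is why the rest of this article keeps pushing work out of the network and into memory.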

Some people say redis is high-performance and handles large concurrency and big-data access; others say mongo performs well, or bring up zeromq, and so on down the list of such claims.

 

First, let's talk about network requests and TCP/IP:

We all know that IP is a hop-by-hop protocol. That is, a packet can only travel from one router to the next, and then to the next. If many routers sit between your machine and the server, the round trip takes much longer, which is that much more annoying. This is also why CDNs and P2P are used to shorten the network path (reducing the bandwidth load is another reason).
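A toy sketch of why path length matters: the round-trip time is roughly the sum of the per-hop delays, out and back, so shortening the path (as a CDN edge node does) cuts latency directly. The per-hop delays below are made-up illustrative values:

```python
def round_trip_ms(per_hop_delays_ms):
    """Total RTT is roughly the sum of per-hop delays, out and back."""
    return 2 * sum(per_hop_delays_ms)

# Hypothetical paths (delays in ms per router hop; illustrative only).
long_path = [2, 5, 8, 12, 15, 10, 6, 3]  # many routers to a distant origin
cdn_path = [2, 3, 1]                     # a nearby CDN edge node

print(round_trip_ms(long_path))  # 122
print(round_trip_ms(cdn_path))   # 12
```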

Redis and mongo:

For example, I have a game server with about 4,000 players online. A state machine runs continuously, checking each player's various states: experience, constellation, open tasks, open skills, and so on. Suppose each player has about 10 states to check; then all 4,000 players must be scanned within roughly 200 ms, or the latency becomes very noticeable, so the scan runs about 5 times per second. If every check fetched its data from redis, that would be about 5 × 10 × 4,000 = 200,000 reads per second. Forget redis: what server could survive that? And this is just one game server.

So the question arises: how do we solve this?

Put the data in memory, read it from memory directly, then foreach over it. Most applications are optimized exactly here, and this alone is generally enough to cope with the so-called millions of daily PV.
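A minimal sketch of that optimization (all names and rules below are made up for illustration): keep the player states in a local dict and run the state-machine pass entirely in memory, so a full scan involves no per-check network round trip at all.

```python
# Sketch: cache player state in process memory and scan it there, instead
# of issuing 5 passes/s x 10 states x 4000 players = 200,000 redis reads
# per second. Field names and rules are illustrative, not a real game API.
players = {
    pid: {"level": 1, "exp": 0, "tasks_open": False, "skills_open": False}
    for pid in range(4000)
}

def tick(players):
    """One state-machine pass over the in-memory cache (no network I/O)."""
    checked = 0
    for state in players.values():
        state["exp"] += 1
        if state["exp"] >= 10 and not state["tasks_open"]:
            state["tasks_open"] = True  # open tasks at 10 exp (made-up rule)
        checked += 1
    return checked

assert tick(players) == 4000  # every player checked in a single pass
```

Writes still have to reach redis eventually, but they can be batched and flushed asynchronously instead of sitting on the hot path.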

At this point the question comes up again, this time for internal applications such as distributed file storage, data analysis, and task scheduling. What then?

Big data has always been something of a pseudo-proposition: a huge volume of data is hard to attack head-on. All big-data processing comes down to splitting large data into small data, processing the pieces, and merging the results. You can see the same thing in the partitioning schemes of mysql, oracle, mssql, and the other relational databases: to get performance, the data each module handles must be subdivided to some granularity. This is where indexes, grouping, and hashing show their importance.

For example, I have a business system that produces about 10 GB of logs per day, roughly 300 GB per month and about 1 TB per quarter, and I need reports by the hour, day, week, month, and quarter. Counting over a terabyte-sized table on every query is obviously out of the question.

So the question arises: how do we solve this?

Analyze the data per minute, by business: 10 GB / 24 / 60 is about 7 MB per minute. Each analysis produces a result file of a few KB; an hour is 60 such files, so the hourly view is just a merge of those 60 results. The exact granularity can be tuned to the business. This is a simple example of grouping.

Now suppose I need to look up all of one user's operations/orders over the last 10 days. The time-based grouping above no longer helps. What do I do now?

When inserting user data, follow some rule, for example the last two digits of the user id, to spread the 10 GB of data evenly across 100 files. When you need to look up a user, take the user id modulo 100, go straight to the right file, and search only there. This is a fairly simple group + index. Once you understand this, you can build a customized, simple fs on top of it (in practice there is more to consider, including memory swap-in and swap-out, which is beyond this article).
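A minimal sketch of that group + index scheme (the file names are hypothetical): route each record by `user_id % 100`, so a lookup opens exactly one of the 100 files.

```python
NUM_SHARDS = 100  # the last two digits of the user id pick the shard

def shard_file(user_id: int) -> str:
    """Map a user id to its data file; the naming is illustrative."""
    return f"user_data_{user_id % NUM_SHARDS:02d}.dat"

# All records for a given user land in one directly addressable file:
print(shard_file(1234))    # user_data_34.dat
print(shard_file(987654))  # user_data_54.dat

# A lookup never scans the full 10 GB: it opens one ~100 MB shard
# (assuming an even spread) and searches only inside it.
```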

 

It is often said that a multi-threaded program is no faster than a single-threaded one. So how do we write a multi-threaded program that actually uses CPU resources well?

As we all know, thread switching carries extra overhead, so when writing multi-threaded programs we should avoid shared resources as much as possible. That way we keep data consistent while also avoiding time spent waiting on other threads.

A simple example:

I have a large dictionary (Dictionary/Map) holding user session data. Every thread must lock it to read or write, to keep the data consistent. If two or more threads access it at the same time, the others must wait for the current one to release the lock; the more threads, the higher the chance of waiting and the worse the performance, until the multi-threaded program effectively degrades into a single-threaded one. And after the wait ends, whether the thread gets scheduled back promptly to continue execution is yet another cost, managed by the operating system and out of our control.

So the question arises: how do we solve this?

Allocate thread resources rationally, based on the hardware and on actual test data. For example, initialize eight threads and, for each incoming request, take the user id modulo the thread count, so that every request from a given user is handled on the same thread. Each thread then keeps its own users' data locally and only ever touches its own data, which avoids the lock and also the resource overhead of thread waiting and switching. (Without the modulo, requests land on random threads and you are back to a shared hash table.) Let every thread focus on its own tasks; job scheduling follows the same principle, and the same mechanism, scaled up, becomes message distribution between virtual or physical machines.
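A sketch of that thread-affinity idea, assuming a fixed pool of eight workers: each user id is taken modulo the worker count, so all of one user's requests land on the same worker, which owns that user's data outright and never needs a lock.

```python
import threading
import queue

NUM_WORKERS = 8

def worker(inbox: queue.Queue, local_state: dict):
    """Each worker owns its own state dict, so no locking is needed."""
    while True:
        user_id = inbox.get()
        if user_id is None:  # shutdown sentinel
            break
        local_state[user_id] = local_state.get(user_id, 0) + 1

inboxes = [queue.Queue() for _ in range(NUM_WORKERS)]
states = [dict() for _ in range(NUM_WORKERS)]
threads = [
    threading.Thread(target=worker, args=(inboxes[i], states[i]))
    for i in range(NUM_WORKERS)
]
for t in threads:
    t.start()

def dispatch(user_id: int):
    """Same user always maps to the same worker: user_id % NUM_WORKERS."""
    inboxes[user_id % NUM_WORKERS].put(user_id)

for uid in range(1000):  # simulate 1,000 requests
    dispatch(uid)
for q in inboxes:
    q.put(None)          # stop the workers
for t in threads:
    t.join()

print(sum(len(s) for s in states))  # 1000: every user pinned to one worker
```

The queues here still synchronize internally, but only at the hand-off point; the per-user data itself is touched by exactly one thread, which is the part that removes the contended lock.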

There are many more cases; I will not list them one by one. It always depends on the specific business.

In general, avoiding network overhead, massive monolithic data, and resource contention are the basic elements of high performance.


