This post is fairly representative of such discussions; read on if the topic interests you.

The original poster (OP) raised the following question: a recent project at my company is a portal website that requires performance testing, and the main test items and test plan need to be worked out according to the characteristics of the project.

The replies follow below.
Netizen xingcyx replied: 1. Even finding 10 machines is no use; the license supports only 10,000 users.

Netizen jackloo replied: In general, this type of performance indicator has no practical significance for most software; it is more a requirement on hardware.

The OP replied: Thank you, jackloo!

Netizen jackloo replied: To borrow a classic line: "high, really high."

Netizen mybasswood replied: What should we do if there are 100,000 users?
Next I offered the OP a suggestion.

Another netizen, Robust, wrote: Do you mean to use the test results of 10,000 users to extrapolate to 100,000 users?

I replied again: The opinions above are good, but the understanding of performance testing differs somewhat from mine.

My reply to jackloo: I hope you will write down your differing opinions and comments, so that we can discuss and improve together.

# Floor 1 [OP]
By Jackei. Continuing the discussion with the OP, who wrote: 100,000 users is the expected number of customers after the official launch. What configuration is required for 10,000 users online? (For example, loading one user every two seconds until 10,000 users in total.) I replied again: it seems the OP still does not understand.

1. What configuration is needed is precisely what your tests should determine;
2. Is there any factual basis for the scenario of 10,000 users loaded at one user every two seconds? Does it simulate the actual usage?
3. You mentioned how many pages to test; what is the user distribution across those pages?

# Floor 2 [OP]
By Jackei. If you are interested, join the discussion at 51Testing: http://bbs.51testing.com/thread-48563-1-1.html ^_^

# Floor 3
Large websites generally use server load balancing. For example, the 30 servers of one shopping website are divided into WWW, SSL, and other groups; in this way the WWW pages can be accessed by up to — million users, while the SSL part supports at most — users at the same time. At the same time, the database servers must also reach a certain scale.

# Floor 4 [OP]
By Jackei. @oscarxie: Thank you. The data you provided helps in understanding the discussion above.

# Floor 5 [OP]
By Jackei. Continuing the exchange with netizen Robust's reply, which quoted my original post:
In medicine, bioengineering, and other fields, obtaining samples for many experiments is extremely difficult, so it is entirely reasonable to use a few hundred samples to estimate the characteristics of a much larger population.

No problem there, because the premise is that the sampling method is scientific and sound: in such fields the samples are drawn evenly from the population. If the sampling method is wrong, the results will be wrong too.

By contrast, in our example, if we take a sample of 10,000 users and extrapolate the results to 100,000, is that correct and scientific?

Netizen jackloo replied: Haha, wonderful. I am learning from both Jackei and Robust.
However, for a portal system like the OP's, most pages are static because of user permissions. As long as the system is a multi-server cluster, we can use the test results of one machine to extrapolate the load capacity of the whole cluster, at most also considering load-balancing and routing pressure, such as bandwidth, speed, and latency.

If instead everything runs on a single machine, we can only calculate some indicators, and from those indicators make a simple judgment on whether the requirement is even feasible. For example, if 100,000 concurrent users share only 100 Mbps of bandwidth, we can calculate that each user gets only about 1 Kbps, which is obviously not workable. Still, the actual result must be obtained by testing; after all, system load does not change linearly with the number of users.
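The back-of-the-envelope bandwidth check above can be written out explicitly. A minimal sketch (the function name and unit choices are mine, not from the thread):

```python
def per_user_bandwidth_kbps(total_mbps: float, concurrent_users: int) -> float:
    """Evenly divide total link bandwidth (Mbps) across concurrent users, in Kbps."""
    return total_mbps * 1000 / concurrent_users

# The thread's example: 100,000 concurrent users sharing a 100 Mbps link
share = per_user_bandwidth_kbps(100, 100_000)
print(f"{share:.1f} Kbps per user")  # 1.0 Kbps -- clearly not feasible
```

As the thread notes, this only rules a requirement out; it cannot confirm feasibility, since load does not scale linearly with users.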
In addition, let me explain why I said that "this type of performance indicator has no practical significance for most software; it is more a requirement on hardware."

By "this type" I mean indicators such as the maximum number of concurrent users and the maximum number of online users. Because users exercise different functions, the pressure they put on the system differs, and in real use the mix of functions shifts with time of day or with unusual events. With no uniform standard for function usage, such indicators become a gimmick: every vendor tells its own story, installing its software in an extreme environment and performing unrealistic operations to obtain a performance figure you will never reach. It is like the fuel-consumption figure quoted by a car manufacturer: listen to it, but do not count on it in actual use.
Moreover, systems of this kind are widely deployed and mature, and for many of them the performance characteristics can be roughly estimated once the design is finished. As a result, software-level tuning accounts for a small share of the performance work (though it is not ruled out that optimizing some code and configuration later will improve performance further); more of it comes from the hardware side, such as adding memory, putting the disks in a RAID array, increasing bandwidth, or simply adding machines.

I replied again: Hi, Robust. I'm glad someone can discuss the problem this way. ^_^
Here is my view.

First, continuous data such as response time generally follows a normal distribution as long as there are enough samples, that is, as long as the sample is statistically significant. What counts as enough? For example, 100 concurrent users each executing a single iteration is not enough; consider having each virtual user execute 100 iterations, to minimize the effect of random sampling on the authenticity of the data. Of course, the number of samples is limited by test execution time and available resources; my personal view is that we should collect as much sample data as those constraints allow.
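The point about sample size can be illustrated with a quick simulation: the standard error of the mean response time shrinks as each virtual user runs more iterations. A sketch with made-up response-time parameters (the 1.2 s mean and 0.3 s spread are assumptions for illustration, not data from the thread):

```python
import random
import statistics

random.seed(42)  # reproducible demo

def mean_and_stderr(n_samples: int) -> tuple[float, float]:
    """Simulate n response-time samples and return (mean, standard error of the mean)."""
    times = [random.gauss(1.2, 0.3) for _ in range(n_samples)]
    return statistics.mean(times), statistics.stdev(times) / n_samples ** 0.5

# 100 users x 1 iteration vs. 100 users x 100 iterations
for n in (100, 10_000):
    mean, stderr = mean_and_stderr(n)
    print(f"n={n:>6}: mean={mean:.3f} s, std. error={stderr:.4f} s")
```

The larger sample gives roughly a tenfold smaller standard error, which is the whole argument for running many iterations per virtual user.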
Second, we are in fact discussing two issues: performance testing and scalability testing; in other words, predicting system performance through capacity planning.
Performance testing should also be seen as an experiment. In general the population is not the 100,000 users mentioned above, but every user from the first one to access the system to the last one before it goes offline. The average response time is usually used to evaluate system performance, but to obtain the exact average we would have to record the response time of every one of those users and then average them, which is obviously unrealistic, just like the medical sampling example I mentioned above. We can only run performance tests at different levels of stress (concurrency) to obtain experimental data, and analyze that data to estimate the real performance of the system. I will discuss the application of statistical methods such as the standard deviation (STD) and confidence intervals to performance analysis in detail in my "What LoadRunner didn't tell you" series.
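As a sketch of the "average plus confidence interval" idea (the sample values are invented for illustration; a real analysis would use the measured response times):

```python
import statistics

def mean_ci95(samples: list[float]) -> tuple[float, float, float]:
    """Mean with a ~95% normal-approximation confidence interval (z = 1.96)."""
    mean = statistics.mean(samples)
    margin = 1.96 * statistics.stdev(samples) / len(samples) ** 0.5
    return mean, mean - margin, mean + margin

# Hypothetical response times (seconds) measured at one load level
times = [1.1, 1.3, 0.9, 1.4, 1.2, 1.0, 1.5, 1.1, 1.3, 1.2]
mean, lo, hi = mean_ci95(times)
print(f"mean={mean:.2f} s, 95% CI = ({lo:.2f}, {hi:.2f}) s")
```

Reporting the interval rather than a bare average is what lets a small sample stand in for the full population of users, exactly as in the medical-sampling analogy.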
Scalability testing is necessary for many large websites and carrier-grade systems. Given user growth, we need to know whether the system's processing capacity can later be raised by simple, effective means (such as adding machines to a cluster), or whether a huge up-front investment in hardware and software licenses is required now. Generally this means analyzing concurrency, system performance, and the consumption of software and hardware resources, and using mathematical modeling to obtain a capacity model. You can find more material on this through Google.
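A minimal capacity-model sketch: fit measured throughput against cluster size and extrapolate (the node counts and throughput figures below are invented; real numbers would come from the scalability tests described above):

```python
def fit_linear(xs: list[float], ys: list[float]) -> tuple[float, float]:
    """Ordinary least-squares fit y = a*x + b, in plain Python."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical measurements: cluster size -> sustained requests/sec
nodes = [1, 2, 3, 4]
throughput = [480, 930, 1370, 1800]  # slightly sub-linear scaling

a, b = fit_linear(nodes, throughput)
print(f"each extra node adds ~{a:.0f} req/s; predicted at 8 nodes: {a * 8 + b:.0f} req/s")
```

A linear fit is only the simplest possible capacity model; real clusters flatten out, which is why the thread insists on actually measuring 2, 3, 4 or more nodes before trusting any extrapolation.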
I hope this discussion can continue. ^_^

I replied again, to jackloo: Haha, glad to see everyone coming back to continue this topic. The discussion is now quite deep, and I hope it keeps going.

In addition, a few personal opinions.
First, if the system's response capability changes when a cluster is used, then data from a single machine is not enough; we should go on to test 2, 3, 4, or more nodes (resources permitting), so that the data and the analysis come closer to the real situation.
Second, I also agree that "this type of performance indicator has no practical significance for most software; it is more a requirement on hardware." On the one hand, this means the OP's stated performance requirement is debatable. On the other hand, we can still ask how much concurrency the system supports in a given environment: for example, with 100 Mbps of bandwidth, what concurrency is supported? What about after an upgrade to 1000 Mbps? Tests against the existing environment help us understand the likely bottlenecks under different levels of pressure and serve as a reference for future deployment.
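The inverse of the earlier per-user calculation answers exactly this question: how many users a given link can support. A sketch (the 20 Kbps per-user figure is an assumption for illustration):

```python
def max_users_for_link(bandwidth_mbps: float, per_user_kbps: float) -> int:
    """Upper bound on concurrent users that the network link alone allows."""
    return int(bandwidth_mbps * 1000 // per_user_kbps)

# Comparing the 100 Mbps environment with a 1000 Mbps upgrade
for mbps in (100, 1000):
    print(f"{mbps} Mbps link -> at most {max_users_for_link(mbps, 20):,} users")
```

This gives only the network ceiling; CPU, memory, and database limits must be found by testing, as the thread stresses.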
# Floor 6
By asqcf [unregistered user]. Everyone for me, and I for everyone.
With the rapid development of the Internet, there are more and more website hosting providers (IDCs).

1. How to select a proper IDC

To choose a good IDC, pick a reputable, well-established company that cares about its brand image and has been in business for a long time: an old brand will not ruin years of reputation over one customer, and there is after-sales service to fall back on. The most important factors, though, are the speed and quality of website access.

2. How to evaluate access speed and quality

The usual practice is to ping the target website from your own computer, but that only measures the access speed from your own region and network. Such a test has great limitations, cannot reflect the real situation of the website, and will not help you find the IDC you need or make the right choice.

The free software we provide can solve this problem. The website evaluation and network testing system (www.cecela.com) tests the network performance of websites, individuals, network operators, IDCs, and regions, mainly covering access speed, network stability, comparison, and ranking. The system uses a real-client model with 24x7 around-the-clock data monitoring, evaluates the accessibility and access quality of website services by application, time, region, carrier, and other dimensions, and helps Internet operators improve and optimize their services. The software is free of charge: just log on, download the client, and complete the registration. While you get the data you want, other registered users are testing too; that mutual benefit is the real point of the software. Address: http://www.cnblogs.com/jackei/archive/2006/11/16/561846.html