Discussion on concurrent users and rendezvous points in LoadRunner

The three veterans Zee, Desert Flying E, and xingcyx had a worthwhile discussion about the number of concurrent users and rendezvous points. If you are not clear about these two concepts, it is worth reading carefully.
The discussion began with a post by xingcyx:
Disclaimer: the following questions and answers are based on my own work experience and on information gathered through various channels. The answers mainly represent my personal views and are not standard answers; if you see things differently, criticism and advice are welcome.
Q: Is there any relationship between the number of concurrent users and rendezvous points? Must rendezvous points be used in performance testing?
A: Concurrent users are, as the name implies, users operating on the system at the same time. The "operation" here can be a real business operation or just a connection (often called a "concurrent connection"). A rendezvous point represents concurrency in a special situation: it is mostly used to test how the system performs under an instantaneous burst of load. So the number of concurrent users is related to rendezvous points, but not tightly; whether you set a rendezvous point in a test scenario depends on the test goal and test strategy.
Q: If a test does not use a rendezvous point, can it still be called a "concurrent" test?
A: There is a saying that rendezvous points are set to guarantee "strict" concurrency. In essence, this is a question of granularity. The purpose of a rendezvous point is to use the tool to make the front end submit requests to the back end at strictly the same moment. Viewed at a fine enough granularity, however, strict concurrency does not exist. Even if the client submits 100 requests at the same instant by means of a rendezvous point, those requests take time to travel across the network and may not arrive together. And even if all 100 requests did reach the server at the same moment, they would still be queued or made to wait by the various connection pools, buffers, and CPU scheduling queues of the middleware, the application, and the database. So in the strictest sense "concurrency" does not exist. What we need to do is find an acceptable granularity, an optimal balance point, and discuss "concurrency" at that balance point.
Performance testing has two purposes: evaluation and optimization.
In a performance test aimed at evaluation, people care more about business concurrency, that is, concurrency in real business scenarios. In that case you only need to set up the scenario according to the way the business is actually operated, and there is no need for a rendezvous point.
A rendezvous point represents concurrency in a special situation. It is usually used in performance tests aimed at optimization: the goal is to put pressure on a module that may have a performance problem in order to locate the bottleneck.
In my own testing work, rendezvous points are not used very often.
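For readers who have never used one, here is a minimal sketch of where a rendezvous point sits in a LoadRunner VuGen script. The rendezvous name, transaction name, and URL are placeholders made up for illustration, not anything from the discussion above; the actual release policy (how many Vusers to wait for, and for how long) is configured in the Controller.

Action()
{
    /* Every Vuser that reaches this call waits here until the
       rendezvous policy in the Controller releases the group,
       so the request below is submitted by many Vusers at
       (almost) the same moment. */
    lr_rendezvous("submit_order");

    lr_start_transaction("submit_order");

    /* Placeholder request; URL and step name are invented. */
    web_url("submit_order",
            "URL=http://example.com/order/submit",
            "Resource=0",
            LAST);

    lr_end_transaction("submit_order", LR_AUTO);

    return 0;
}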
Zee:
I have always felt there is nothing really controversial about rendezvous points. I have seen several posts arguing about this over the past couple of days. One thing I hope everyone can agree on is that a rendezvous is only ever a relative rendezvous.
The rendezvous happens on the machines that generate the load. Once you account for the network, the middleware, and so on, the requests certainly do not reach the server at the same instant, which is why some people want to get closer to truly concurrent operations on the server itself; that would be real concurrency.
I think the first thing to do is analyze the application and decide what you actually want to achieve.
For example, you want a single URL to receive 1,000 simultaneous requests. That goal is clear.
In these discussions we rarely use a concrete example, which keeps things vague. To achieve concurrency, I think we should analyze the application more specifically and set the target first, instead of endlessly discussing how to implement it in LR.
Xingcyx:
In practice, I often encounter this situation:
According to the test requirement, the system should support 200 concurrent users.
Then we start the test and record the script, and the next step is to run the script in a scenario: in the Controller we set the number of Vusers for the script to 200 and judge the run as pass or fail. This is where the dispute arises: if all 200 Vusers run the script successfully and the results are acceptable, can we say the system supports 200 concurrent users?
Desert Flying E:
Before testing, you must first understand the requirement, that is, the purpose of the test.
A requirement that simply states "the system should support 200 concurrent users" is, strictly speaking, not a qualified requirement, because the description is unclear and far too vague.
Of course, in practice this is exactly the kind of requirement we testers are usually given; it is very common.
For example, for a web system you can measure it against the 2/5/8 or 2/5/10 response-time rule of thumb (commonly read as: within 2 seconds is good, within 5 acceptable, and 8 or 10 seconds the limit). If the system passes, fine; if not, it goes back to the developers for tuning.
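As a rough sketch of how such a response-time criterion can be turned into a per-transaction pass/fail judgment inside a Vuser script (the URL, transaction name, and the 5-second threshold are assumptions for illustration, not figures from this discussion):

Action()
{
    double elapsed;

    lr_start_transaction("query_page");

    /* Placeholder request; URL and step name are invented. */
    web_url("query_page",
            "URL=http://example.com/query",
            "Resource=0",
            LAST);

    /* Duration of the still-open transaction, in seconds. */
    elapsed = lr_get_transaction_duration("query_page");

    /* Mark the transaction against an assumed 5-second threshold,
       in the spirit of the 2/5/8 rule mentioned above. */
    if (elapsed <= 5.0)
        lr_end_transaction("query_page", LR_PASS);
    else
        lr_end_transaction("query_page", LR_FAIL);

    return 0;
}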
Zee:
Going from rendezvous points to determining the number of concurrent users, I think the key to that transition lies in analyzing the business.
For example, the customer says: 200 concurrent users are required.
The question is how those 200 users break down: how many of them are doing this operation and how many are doing that one, and in what percentages they should be run, using different scripts.
Then look at it from the customer's point of view. Does he care whether 200 users click the same URL, or request the same resource on the server, at the same instant? I think most customers do not. What the customer wants is: when I have 200 users online, the response time must not be unacceptable, where "unacceptable" can be judged by the psychological endurance of an ordinary person. The other reading is that 200 people really do click the same URL or request the same resource at the same time. I think we can work toward that goal by calculating the number of Vusers needed, by using a rendezvous point, or by other means.
If it really is necessary to have that many concurrent users hitting the server at once, I think all we can do is narrow the time window as much as possible. In my view, though, such a requirement does not come from an analysis of the business.
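To make the "calculating" Zee mentions concrete, here is one common back-of-the-envelope model. It is my own illustration, not something stated in the thread, and the response-time and think-time figures are invented. The idea is that each online user only occupies the server for the fraction of time a request is actually in flight, so the number of truly simultaneous requests is far smaller than the number of users online.

#include <stdio.h>

/* Rough estimate: if a user waits resp_time seconds for the server and
   then spends think_time seconds reading or typing before the next
   request, the fraction of users hitting the server at any instant is
   roughly resp_time / (resp_time + think_time).
   All numbers below are made-up examples. */
int main(void)
{
    double online_users = 200.0;  /* the "200 concurrent users" requirement  */
    double resp_time    = 2.0;    /* assumed average response time, seconds  */
    double think_time   = 18.0;   /* assumed average think time, seconds     */

    double busy_fraction = resp_time / (resp_time + think_time);
    double simultaneous  = online_users * busy_fraction;

    printf("Roughly %.0f of the %.0f online users have a request in flight at any moment.\n",
           simultaneous, online_users);
    return 0;
}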
Xingcyx:
What the previous post describes is the most common case. For that kind of test requirement I would set up a mixed scenario, that is, configure the Vusers according to the percentage of users doing each kind of operation.
In other cases, however, the system under test may not be an ordinary application system but something like a development platform or a workflow engine. The performance concepts involved then lean more toward the underlying layers, and it may not be as simple as setting up a mixed scenario the way you would for a typical application system.
Desert Flying E:
Generally speaking, concurrency refers to business-level concurrency rather than server-level concurrency.
