Software Measurement Knowledge Points
1. What types of measurement scales are there? What are their differences? What are the phases of measurement as a process?
2. What are the entities of software measurement? How is GQM used to define a measurement framework? How is a goal described in GQM?
3. What is function point analysis? What are the differences among feature points, object points, and function points?
4. What is use case point analysis?
5. How does the Halstead method perform software measurement? What are its advantages over lines of code?
6. What is the COCOMO model? How is the COCOMO model used for cost estimation?
7. What is the cyclomatic complexity of a program? How is it measured?
8. What metrics are provided by the CK and LK methods in object-oriented measurement?
9. In information-flow-based measurement, what is the difference between the methods of Shepperd and Henry/Kafura?
10. Illustrate how data structures affect the complexity of software.
11. Which three types of structure are generally considered when measuring software product structure? What factors are considered in each structural measurement?
12. What major quality factors are taken into account in the ISO 9126 quality model, the Boehm quality model, and McCall's quality model?
13. The software Capability Maturity Model (CMM).
14. Be able to calculate function points from data flow diagrams, use case points from use case diagrams, and perform object-oriented measurement from class diagrams.
Definition of measurement: a process that uses numbers or symbols to represent attributes of real-world objects.
Definition of a measure: a chosen characteristic that users and designers jointly agree exposes the entity of interest in a faithful and meaningful way.
Definition of software measurement: the quantification of software development resources and/or software development processes. It covers attributes that can be measured directly, such as lines of code, and attributes that are derived by calculation, such as software quality.
1. What types of measurement scales are there? What are their differences? What are the phases of measurement as a process?
Answer: (1) Measurement scales include the nominal scale, type scale, ordinal scale, interval scale, ratio scale, and absolute scale.
(2) The nominal scale and type scale are linguistic (qualitative) scales; the interval scale, ratio scale, and absolute scale are quantitative scales.
The nominal scale provides a unique and unambiguous name for a concept; naming and definition techniques also belong to the nominal scale.
The type scale assigns an object to a defined and named type or category; it is also called an absolute nominal scale.
The ordinal scale estimates the values of the measured objects and arranges them in order; the values and their ordering are recorded as characters or symbols.
The interval scale captures the size of increments (intervals) between values rather than their ratios; it has no meaningful zero point.
The ratio scale permits ratio calculations and has a meaningful zero reference point.
The absolute scale is used only for counting; there is only one possible measurement of the attribute, the count itself.
(3) As a process, measurement has three phases: cognitive (awareness), semantic, and quantitative (numeric).
2. What are the entities of software measurement? How is GQM used to define a measurement framework? How is a goal described in GQM?
Answer: (1) Entity types of software measurement:
① Process: a set of activities in software development. Different software development models use different processes and activities.
② Product: the outcome of process activities; it can be a program, a software document, or any other deliverable.
③ Resources: carrying out these activities may require human resources, equipment, and time.
(2) Defining a measurement framework with GQM: 1. determine the goal; 2. list in detail the questions of interest; 3. define the metrics that answer these questions; 4. develop tools and mechanisms for data collection and analysis; 5. collect and validate the data; 6. analyze the data after the fact to assess whether it conforms to the goal and to provide suggestions for improvement; 7. provide feedback to stakeholders.
(3) How to describe a goal in GQM: a GQM goal has four parts: an object of interest, a purpose, a viewpoint, and a description of the environment and constraints. For example: analyze the testing process (object) in order to improve defect detection (purpose) from the test manager's point of view (viewpoint) in the context of project X (environment).
3. What is function point analysis? What are the differences among feature points, object points, and function points?
Answer: (1) Function point analysis computes the unadjusted function count (UFC) and the value adjustment factor (VAF) of a product. (FP = UFC × VAF. Productivity: FP/person-month. Documentation rate: pages/FP.)
(2) Feature point analysis extends function point counting from MIS to real-time (RT) and system software (SC) environments. When the number of algorithms used equals the number of logical data files, function points and feature points produce the same result; for MIS projects the two counts are usually identical, while for more complex system software the feature point count is much higher.
Object points are an early size measurement technique applied at the start of the development cycle. Each object is classified into three levels: simple, medium, and difficult. The measure is determined by the number of screens (windows), reports, and components used.
Function points measure a program by counting the functions delivered in the product.
4. What is use case point analysis?
A: Use case point analysis is a measurement method applied to the requirements produced in object-oriented analysis and design.
Use case points are computed as follows:
1. Compute the unadjusted actor weight: UAW
2. Compute the unadjusted use case weight: UUCW
3. Compute the unadjusted use case points: UUCP = UAW + UUCW
4. Compute the technical complexity factor: TCF = 0.6 + 0.01 × TFactor
5. Compute the environment factor: EF = 1.4 − 0.03 × EFactor
6. Compute the adjusted use case points: UCP = UUCP × TCF × EF.
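A minimal sketch of this calculation in Python. The actor and use case weights are the commonly cited values for the simple/average/complex levels; the tfactor and efactor inputs are assumed to have already been summed from the individual factor ratings:

```python
# Use case point (UCP) estimation: a minimal sketch.
ACTOR_WEIGHTS = {"simple": 1, "average": 2, "complex": 3}
USE_CASE_WEIGHTS = {"simple": 5, "average": 10, "complex": 15}

def use_case_points(actors, use_cases, tfactor, efactor):
    """actors/use_cases: dicts mapping complexity level -> count."""
    uaw = sum(ACTOR_WEIGHTS[k] * n for k, n in actors.items())
    uucw = sum(USE_CASE_WEIGHTS[k] * n for k, n in use_cases.items())
    uucp = uaw + uucw              # unadjusted use case points
    tcf = 0.6 + 0.01 * tfactor     # technical complexity factor
    ef = 1.4 - 0.03 * efactor      # environment factor
    return uucp * tcf * ef

# Example: 2 simple and 1 complex actor, 3 average use cases.
print(use_case_points({"simple": 2, "complex": 1},
                      {"average": 3}, tfactor=30, efactor=15))
```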
5. How does the Halstead method perform software measurement? What are its advantages over lines of code?
Answer: (1) Method 1: The Halstead method treats a program as a collection of tokens, composed of two basic elements: operands (variables, constants, addresses) and the operators defined in the programming language. It counts the distinct operators and operands in the program: the number of distinct operators is μ1, the number of distinct operands is μ2, the total number of operator occurrences is N1, and the total number of operand occurrences is N2.
Program vocabulary: μ = μ1 + μ2
Program length, the total number of operator and operand occurrences: N = N1 + N2
Estimated program length: N^ = μ1·log2(μ1) + μ2·log2(μ2)
Program volume: V = N·log2(μ) = N·log2(μ1 + μ2). The Halstead method interprets V as the number of mental comparisons needed to write a program of length N; V is commonly used to measure software complexity.
Potential volume V*: the minimum volume in which the algorithm could possibly be expressed, assuming each operand is referenced only once: V* = (2 + μ2*)·log2(2 + μ2*). Program level L: the abstraction level of a particular implementation of the algorithm: L = V*/V
Intelligence content I: measures how much is "expressed" in the program: I = L^ × V (where L^ is the estimated program level)
Difficulty: D = 1/L
Method 2: Halstead is a method for measuring program complexity. The Halstead method measures not only program length; it also describes the relationship between a program's minimal implementation and its actual implementation, and interprets the level of the programming language accordingly. It takes the operators and operands appearing in the program as its counting objects and measures program volume and effort from their occurrence counts.
n1: number of distinct operators; n2: number of distinct operands; N1: total operator occurrences; N2: total operand occurrences
Program length: N = N1 + N2
Program vocabulary: n = n1 + n2
Volume: V = N·log2(n)
Difficulty: D = (n1/2) × (N2/n2)
Effort: E = D × V
Halstead also provides a formula for predicting the number of errors in a program: B = N·log2(n1 + n2)/3000 = V/3000
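As a quick illustration, here is a minimal Python sketch that computes the Method 2 metrics from the four basic counts; the difficulty formula D = (n1/2)·(N2/n2) is the standard Halstead definition:

```python
import math

# Halstead metrics from the four basic counts: a minimal sketch.
# n1/n2: distinct operators/operands; N1/N2: total occurrences.
def halstead(n1, n2, N1, N2):
    n = n1 + n2                                       # program vocabulary
    N = N1 + N2                                       # program length
    n_hat = n1 * math.log2(n1) + n2 * math.log2(n2)   # estimated length
    V = N * math.log2(n)                              # volume
    D = (n1 / 2) * (N2 / n2)                          # difficulty
    E = D * V                                         # effort
    B = V / 3000                                      # predicted errors
    return {"vocabulary": n, "length": N, "estimated_length": n_hat,
            "volume": V, "difficulty": D, "effort": E, "errors": B}

# Example: 10 distinct operators occurring 50 times, 15 distinct
# operands occurring 40 times.
print(halstead(n1=10, n2=15, N1=50, N2=40))
```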
(2) Advantages of the Halstead method over lines of code:
1. clear definitions; 2. less dependence on the specific programming language; 3. support for early design; 4. less dependence on developers' skill.
6. What is the COCOMO model? How is the COCOMO model used for cost estimation?
Answer: (1) COCOMO stands for COnstructive COst MOdel, a structured cost model proposed by Boehm in 1981 for estimating the size, cost, and schedule of software development projects. COCOMO is a comprehensive empirical model: its parameter values are empirical, and it integrates many factors into one overall estimation model. It is practical and operable, and is widely used in European countries.
(2) 1. Basic COCOMO model: used at the initial stage of system development to estimate the effort (including maintenance) and the time required for software development and maintenance.
Effort: E = a × (KLOC)^b (person-months); development time: D = c × E^d (months);
a, b, c, and d are empirical constants.
2. Intermediate COCOMO model: estimates the effort and development time of each subsystem.
Effort: E = a × (KLOC)^b × EAF, where EAF is the effort adjustment factor (the product of 15 cost-driver factors, each valued 0.70-1.66).
3. Detailed COCOMO model: estimates independent software components, for example, the effort and development time of each module of each subsystem.
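A minimal sketch of the basic model in Python. The constants a = 2.4, b = 1.05, c = 2.5, d = 0.38 are Boehm's published values for the organic project class (assumed here; semi-detached and embedded projects use different constants), and the eaf parameter approximates the intermediate model's adjustment:

```python
# Basic COCOMO with an optional effort adjustment factor.
ORGANIC = {"a": 2.4, "b": 1.05, "c": 2.5, "d": 0.38}

def cocomo(kloc, consts=ORGANIC, eaf=1.0):
    """eaf=1.0 gives the basic model; pass the product of the 15
    cost-driver ratings as eaf to approximate the intermediate model."""
    effort = consts["a"] * kloc ** consts["b"] * eaf  # person-months
    time = consts["c"] * effort ** consts["d"]        # months
    return effort, time

effort, months = cocomo(kloc=32)
print(f"effort = {effort:.1f} person-months, schedule = {months:.1f} months")
```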
7. What is the cyclomatic complexity of a program? How is it measured?
A: (1) The result measured with the McCabe method is called the cyclomatic complexity of a program. It equals the number of linearly independent directed cycles in a strongly connected program control graph. A strongly connected graph is one in which any node can reach all other nodes. A program control structure is generally not strongly connected, because a lower node near the exit usually cannot reach a higher node; however, if a virtual arc is drawn from the exit point back to the entry point, the program control structure necessarily becomes strongly connected.
(2) The McCabe method is computed in three steps:
Step 1: Reduce the program flowchart to a directed graph: each processing box of the flowchart is treated as a node, and each flow line as a directed arc connecting the nodes.
Step 2: In the directed graph, connect a virtual directed arc from the program's exit to its entry.
Step 3: Compute V(G) = m − n + 1,
where V(G) is the cyclomatic number of the directed graph G, m is the number of arcs in G, and n is the number of nodes in G.
It is suggested that module size be kept to V(G) ≤ 10; that is, V(G) = 10 is a fairly scientific and precise upper limit on module size.
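The three steps translate directly into code. A minimal sketch, assuming the control-flow graph has already been extracted as an adjacency list (the node names are hypothetical):

```python
# V(G) = m - n + 1 after adding the virtual exit -> entry arc.
def cyclomatic_complexity(graph):
    """graph: dict mapping each node to its list of successors."""
    nodes = set(graph) | {s for succs in graph.values() for s in succs}
    arcs = sum(len(succs) for succs in graph.values())
    arcs += 1  # step 2: virtual directed arc from exit back to entry
    return arcs - len(nodes) + 1

# Example: an if-then-else followed by a loop; two decisions -> V(G) = 3.
cfg = {
    "entry": ["then", "else"],   # if decision
    "then": ["test"],
    "else": ["test"],
    "test": ["body", "exit"],    # loop decision
    "body": ["test"],
    "exit": [],
}
print(cyclomatic_complexity(cfg))
```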
8. What metrics are provided by the CK and LK methods in object-oriented measurement?
A: The LK method provides: LOC (program size); CS (class size; a large value may indicate excessive class responsibilities); NOO (number of operations overridden; a large NOO indicates a design problem and low abstraction in the inheritance hierarchy); NOA (number of operations added; a large NOA indicates design drift); SI (specialization index; a class whose SI is too high does not fit the abstraction of its level in the hierarchy); DIT (depth of the inheritance tree); and so on.
The CK method provides: WMC (weighted methods per class; the greater the weighted method complexity, the more complex the class); DIT (depth of inheritance tree; the deeper, the more complex); NOC (number of children, the number of classes that directly inherit from a class, indicating the class's potential influence on the system and the design); CBO (coupling between objects, the class coupling degree; if too large, the class relationships are hard to maintain); RFC (response for a class, the number of methods that can execute in response to a message received by an object); and LCOM (lack of cohesion in methods): when different methods use the same set of instance variables the class shows cohesion; otherwise cohesion is lacking.
9. In information-flow-based measurement, what is the difference between the methods of Shepperd and Henry/Kafura?
Answer: Method 1: The Shepperd method is an improvement on the original measure. Its indicator eliminates the ambiguity between information flow and control flow and concentrates on measuring the information flow itself, which the Henry/Kafura measure cannot do.
Method 2: Information-flow-based measurement uses fan-in and fan-out. In Shepperd's method, complexity(M) = (fan-in(M) × fan-out(M))². Shepperd emphasized that this is an improvement on the earlier measure.
The Henry/Kafura method also takes module length into account: complexity(M) = length(M) × (fan-in(M) × fan-out(M))².
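A minimal sketch of the two formulas side by side; the length, fan-in, and fan-out values in the example are made up:

```python
# Henry/Kafura weights the flow term by module length; Shepperd drops it.
def henry_kafura(length, fan_in, fan_out):
    return length * (fan_in * fan_out) ** 2

def shepperd(fan_in, fan_out):
    return (fan_in * fan_out) ** 2

# Example: a 120-line module called by 3 modules and calling 4 modules.
print(henry_kafura(120, 3, 4))  # 120 * 144 = 17280
print(shepperd(3, 4))           # 144
```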
10. Illustrate how data structures affect the complexity of software.
A: If data structures are ignored, the global complexity of a system cannot be measured: control-flow measures fail to identify the complexity hidden in data structures. For example, two modules with identical control flow can differ greatly in real complexity if one manipulates a flat array while the other manipulates a shared linked structure.
11. Which three types of structure are generally taken into account when measuring software product structure? What factors are considered in each structural measurement?
Answer: (1) Control-flow structure, data-flow structure, and data structure.
(2) Cyclomatic complexity measures the complexity of a module's decision structure. It counts the number of linearly independent paths, i.e., the minimum number of paths that must be tested to reasonably guard against errors. A large cyclomatic complexity indicates that the program code may be of low quality and difficult to test and maintain; experience shows that program errors are closely related to high cyclomatic complexity.
Information-flow measurement uses fan-in and fan-out: complexity(M) = (fan-in(M) × fan-out(M))². Shepperd emphasized that this is an improvement on the earlier measure.
Halstead complexity takes the operators and operands appearing in the program, together with their occurrence counts, as its direct counting objects, and from them computes the program volume and effort.
12. What major quality factors are considered in the ISO 9126 quality model, the Boehm quality model, and McCall's quality model?
Answer: ISO 9126 quality model: functionality, reliability, usability, efficiency, maintainability, and portability.
Boehm quality model: general (as-is) utility, portability, and maintainability.
McCall's quality model: correctness, reliability, efficiency, integrity, and usability.
13. The software Capability Maturity Model (CMM).
A: CMM is a method for evaluating the software contracting capability of organizations and helping them improve software quality. It focuses on managing the software development process and on improving and evaluating engineering capability. CMM is divided into five levels: Level 1, Initial; Level 2, Repeatable; Level 3, Defined; Level 4, Managed; Level 5, Optimizing.
14. Function points can be calculated from the data flow diagram, use case points from the use case diagram, and object-oriented measurement can be performed from the class diagram.
Answer: (1) Function point calculation from the data flow diagram:
1. Compute the basic count of each element type.
2. Apply the complexity weighting factors: multiply each count by its complexity weight (simple, average, or complex).
3. Apply the environment factors.
4. Compute the value adjustment factor: VAF = 0.65 + 0.01 × ΣFi (Σ denotes summation; i is the index).
5. Compute the function points: FP = UFC × VAF.
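A minimal sketch of steps 1-5 in Python. The five element types and the weights shown are the standard IFPUG values for average complexity (a full count would pick simple/average/complex weights per element); the counts and ratings in the example are made up:

```python
# Unadjusted function count with average-complexity weights.
AVERAGE_WEIGHTS = {
    "external_inputs": 4, "external_outputs": 5, "external_inquiries": 4,
    "internal_files": 10, "external_interfaces": 7,
}

def function_points(counts, gsc_ratings):
    """counts: element type -> count; gsc_ratings: 14 ratings, each 0..5."""
    ufc = sum(AVERAGE_WEIGHTS[k] * n for k, n in counts.items())
    vaf = 0.65 + 0.01 * sum(gsc_ratings)  # value adjustment factor
    return ufc * vaf

counts = {"external_inputs": 6, "external_outputs": 4,
          "external_inquiries": 3, "internal_files": 2,
          "external_interfaces": 1}
print(function_points(counts, gsc_ratings=[3] * 14))  # 83 * 1.07 = 88.81
```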
(2) Use case point calculation from the use case diagram (the same steps are sketched in code under question 4):
1. Compute the unadjusted actor weight: UAW.
(An actor interacting through a program interface, i.e., another system, is simple, weight 1; one interacting through a protocol is average, weight 2; a human interacting through a GUI is complex, weight 3.)
2. Compute the unadjusted use case weight: UUCW (use cases are weighted by the number of transactions and analysis classes).
3. Compute the unadjusted use case points: UUCP = UAW + UUCW.
4. Technical complexity factor: TCF = 0.6 + 0.01 × TFactor.
5. Environment factor: EF = 1.4 − 0.03 × EFactor.
6. Adjusted use case points: UCP = UUCP × TCF × EF.
(3) Object-oriented measurement from class diagrams:
LK: size and inheritance measures.
CK: measures of class method complexity, inheritance characteristics, coupling between modules, and cohesion.
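As a small illustration of computing CK-style metrics from a class diagram, here is a minimal sketch of DIT and NOC over a child-to-parent mapping (single inheritance assumed; all class names are hypothetical):

```python
# Class diagram as a child -> parent mapping; None marks the root.
parents = {"Button": "Widget", "Label": "Widget",
           "Widget": "Object", "Object": None}

def dit(cls):
    """Depth of inheritance tree: number of edges from cls to the root."""
    depth = 0
    while parents[cls] is not None:
        cls = parents[cls]
        depth += 1
    return depth

def noc(cls):
    """Number of children: classes directly inheriting from cls."""
    return sum(1 for parent in parents.values() if parent == cls)

print(dit("Button"), noc("Widget"))  # 2 2
```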
==========================================================
Bonus (buy one, get one free):
1. What is the difference between a Pareto chart and a histogram? What is the difference between a run chart and a control chart?
A: (1) The Pareto chart is also called the 80-20 rule chart. Its frequency bars are arranged from left to right in descending order; the X axis usually shows the causes of defects and the Y axis the number of defects, which highlights the main causes. In a histogram, the X axis is a parameter divided into unit intervals arranged from left to right in ascending order, and the Y axis shows the frequency.
The Pareto chart sorts its bars by frequency, while the histogram is used to show the distribution of a parameter's characteristics.
(2) A run chart has the same X and Y axes as a control chart. The run chart uses historical data for trend analysis, while the control chart additionally has a center line (and control limits) used to judge whether the data is out of control, which indicates that the process must be corrected.
2. What is process capability? How is process capability measured?
A: (1) Process capability refers to the actual processing capability of a process in a stable state; it is an indicator of how well the process performs. By analyzing the process capability of the manufacturing process, we can keep abreast of the quality assurance capability of each step in the process, which provides the necessary information and basis for guaranteeing and improving product quality.
(2) Process capability measurement: process capability is measured with the Cpk index, which assesses how close the actual process is to the target mean of the baseline and how much the process varies.
To measure an organization's process capability, the following information is needed first:
1. the specification upper and lower limits;
2. the specification width, obtained from the specification limits;
3. the process upper and lower limits, obtained from process measurements;
4. the process width, obtained from the process limits. Cpk ≥ 1 indicates that the process exceeds the predefined minimum standard; a process with small variance whose peak is close to the target has high capability.
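A minimal sketch of the Cpk computation from sample data, using the standard formula Cpk = min(USL − mean, mean − LSL) / (3σ); the sample values and specification limits below are made up:

```python
import statistics

def cpk(samples, lsl, usl):
    """lsl/usl: lower/upper specification limits."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)  # sample standard deviation
    return min(usl - mu, mu - lsl) / (3 * sigma)

samples = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0]
print(f"Cpk = {cpk(samples, lsl=9.0, usl=11.0):.2f}")
```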
3. How is the dispersion of measurement data described in frequency analysis of measurement data?
A: Dispersion describes how the observed measurement data are spread across the dataset.
It is mainly reflected by the following three parameters:
Range: the highest and lowest values in the dataset (and their difference).
Variance: the fluctuation range of the observed measurement values.
Standard deviation: the square root of the variance.
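The three parameters computed for a made-up sample, as a minimal sketch:

```python
import statistics

data = [12, 15, 11, 19, 14, 13, 16]

value_range = max(data) - min(data)     # range: highest minus lowest
variance = statistics.pvariance(data)   # population variance
std_dev = statistics.pstdev(data)       # square root of the variance

print(value_range, variance, std_dev)
```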