Feature analysis is one of the simplest forms of evaluation. It classifies and grades the properties of competing products so that we can choose appropriate tools and methods. For example, if we want to buy a design tool, we first list the five properties we need: a friendly user interface, object-oriented functionality, consistency checking, the ability to handle user stories, and the ability to run on UNIX systems. The evaluation process usually involves measurement: we collect information to identify the values of the independent and dependent variables, and we organize this information to increase our understanding of the product or process. Measurement helps us distinguish typical situations from unusual ones and establish both a baseline and a target.
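The grading step above can be sketched as a weighted scorecard. This is a minimal illustration, assuming a 1-to-5 scoring scale; the importance weights, the candidate tool names, and their scores are invented for the example and are not from the text.

```python
# Feature analysis as a weighted scorecard: each feature gets an
# importance weight, each candidate tool gets a 1-5 score per feature,
# and the tool with the highest weighted total wins.

FEATURES = {                          # feature -> assumed importance weight
    "friendly user interface": 3,
    "object-oriented functionality": 5,
    "consistency checking": 4,
    "user-story handling": 4,
    "runs on UNIX": 2,
}

# Hypothetical scores (1 = poor, 5 = excellent) for two candidate tools.
SCORES = {
    "Tool A": {"friendly user interface": 4, "object-oriented functionality": 5,
               "consistency checking": 3, "user-story handling": 4, "runs on UNIX": 5},
    "Tool B": {"friendly user interface": 5, "object-oriented functionality": 3,
               "consistency checking": 4, "user-story handling": 3, "runs on UNIX": 2},
}

def weighted_total(scores: dict) -> int:
    """Sum of score * weight over all features."""
    return sum(scores[f] * w for f, w in FEATURES.items())

best = max(SCORES, key=lambda tool: weighted_total(SCORES[tool]))
for tool in SCORES:
    print(tool, weighted_total(SCORES[tool]))
print("best:", best)
```

With these invented weights, Tool A scores 75 against Tool B's 62, so Tool A would be selected; changing the weights to reflect your own priorities can change the outcome.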
We build an effective prediction system by establishing the system's accuracy empirically: we compare the model's predictions with known data in a given environment. First we state a hypothesis about the prediction, and then we examine the data to see whether the hypothesis holds. In an effective model, reasonable accuracy depends on several factors, including who performs the assessment; a novice estimator is likely to be less accurate than an experienced one. We can also distinguish prediction systems built on deterministic models from those built on stochastic ones. In a stochastic model we supply an error window around the actual value, and the width of this window can vary. Prediction systems for software cost estimation, schedule estimation, and reliability estimation all exhibit such errors, which we call uncertainty. For example, if you find that your organization's reliability predictions are accurate to within 20% in a given environment, then the predicted time of the next failure will deviate from the actual time of the next failure by no more than 20%. We describe this window as an acceptance range: a limit on the maximum deviation between the predicted and the actual value. Thus, 20% is the acceptance range of the model in the example above. Before you apply a prediction system, you must first decide what acceptance range you require.
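The acceptance-range check described above can be expressed directly in code. This is a minimal sketch: the 20% figure follows the example in the text, while the sample predicted/actual failure times are invented for illustration.

```python
# Check whether a stochastic prediction system's outputs fall within an
# acceptance range: the predicted value may deviate from the actual
# value by at most `acceptance` (a fraction) of the actual value.

def within_acceptance_range(predicted: float, actual: float,
                            acceptance: float = 0.20) -> bool:
    """True if |predicted - actual| <= acceptance * actual."""
    return abs(predicted - actual) <= acceptance * actual

# Hypothetical (predicted, actual) times to next failure, in hours.
observations = [
    (100.0, 90.0),   # 10 h off, window is 18 h  -> acceptable
    (250.0, 300.0),  # 50 h off, window is 60 h  -> acceptable
    (40.0, 70.0),    # 30 h off, window is 14 h  -> not acceptable
]

for predicted, actual in observations:
    ok = within_acceptance_range(predicted, actual)
    print(f"predicted={predicted:.0f}h actual={actual:.0f}h ok={ok}")
```

A prediction system that keeps every observation inside the chosen window meets its acceptance range; the third observation here would fail a 20% requirement.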
When we design an experiment or case study, using a model to represent a particular factor poses a special problem, because the model's predictions can directly affect the final outcome. The prediction becomes a goal, and developers consciously or unconsciously try to meet that goal. For this reason, experiments that evaluate models are sometimes designed as double-blind experiments: neither the participants nor the experimenters know the objective of the experiment until after it has been conducted. On the other hand, some models, such as reliability models, cannot influence the outcome they predict, so this problem does not arise. A final point is that a prediction system need not be complex to be useful.
Chapter 12: Evaluating software products, processes, and resources