Reliability Analysis

I. Concept: Reliability is an indicator of how faithfully a test reflects the measured trait, judged by the consistency or stability of the results obtained with the test instrument. In general, the more consistent the results of two or more administrations, the smaller the error and the higher the reliability. Reliability has the following features:
1. Reliability refers to the consistency or stability of the test results, not to the test or scale itself;
2. A reliability value refers to consistency of a specific type, not consistency in general; the reliability coefficient may differ across administration times, subjects, or raters;
3. Reliability is a necessary but not sufficient condition for validity: low reliability implies low validity, but high reliability does not guarantee high validity.
Reliability testing relies entirely on statistical methods.
Reliability can be divided into internal reliability, which asks whether a group of items measures the same concept and reflects the degree of internal consistency among the items of a scale (the most common test is Cronbach's alpha coefficient), and external reliability, which asks whether the same respondent's results are consistent at different times (test-retest reliability is the most common test of external reliability).
II. Reliability Indicators:
1. The reliability coefficient is used to express reliability: the larger the coefficient, the more credible the measurement. DeVellis (1991) suggests the following benchmarks: 0.60~0.65 (better to avoid); 0.65~0.70 (minimally acceptable); 0.70~0.80 (quite good); 0.80~0.90 (very good). Therefore, a scale or questionnaire with good reliability should have a coefficient above 0.80, and the range 0.70~0.80 is acceptable; a subscale should be above 0.70, and 0.60~0.70 is acceptable. If the internal consistency coefficient of a subscale is below 0.60, or the reliability coefficient of the total scale is below 0.80, you should consider revising the scale or adding or deleting items.
2. Reliability indicators are expressed as correlation coefficients and can be roughly divided into three types: the stability coefficient (consistency across time), the equivalence coefficient (consistency across forms), and the internal consistency coefficient (consistency across items).
III. Reliability Analysis Methods:
1. Test-retest reliability:
The same questionnaire is administered twice to the same respondents after an interval (hence the name test-retest method), and the correlation coefficient of the two sets of results is computed. This correlation is the stability coefficient, i.e., consistency across time. The test-retest method is suitable for fact-based questionnaires, and can also be used for attitude and opinion questionnaires that are not easily affected by the environment. Because it requires testing the same sample twice, and respondents are susceptible to intervening events and activities, the interval must be appropriate; two weeks to one month is typical.
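The stability coefficient described above is just the Pearson correlation between the two administrations. A minimal sketch, using hypothetical total scores for eight respondents tested two weeks apart (the data and function name are illustrative, not from the original):

```python
# Test-retest reliability: Pearson correlation between two administrations
# of the same questionnaire to the same respondents.
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical total scores from two administrations two weeks apart
test1 = [18, 22, 25, 30, 15, 27, 20, 24]
test2 = [17, 23, 24, 31, 16, 26, 21, 25]
r = pearson(test1, test2)  # stability coefficient (cross-time consistency)
print(round(r, 3))
```

A coefficient near 1 indicates stable results across time; here the hypothetical data yield a high stability coefficient.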
2. Alternate-forms reliability (parallel forms):
Two equivalent forms of the questionnaire are administered in a single session, and the correlation coefficient between the scores on the two forms is computed. This method requires the two forms to be completely consistent in the content, format, difficulty, and question type of corresponding items, differing only in wording. Alternate-forms reliability is therefore an equivalence coefficient. In actual surveys it is difficult for questionnaires to meet this requirement, so the method is rarely used.
3. Split-half reliability:
The split-half method divides the items into two halves by odd and even position, scores each half separately, computes the correlation coefficient between the two half-scores (this can be done in Excel), and from it derives the reliability coefficient r_tt of the whole measurement. Split-half reliability is an internal consistency coefficient that measures the consistency between the two halves of the items. This method is not suitable for fact-based questionnaires and is often used for reliability analysis of attitude and opinion questionnaires. In questionnaire surveys, the most common form of attitude measurement is the 5-point Likert scale. When performing a split-half analysis, if the scale contains negatively worded items, their scores should first be reverse-coded so that all items are scored in the same direction. Then divide all items into two halves as equal as possible (by odd/even position or by first/second half), compute the correlation coefficient r_hh between the two half-scores (the split-half reliability coefficient), and finally obtain the reliability coefficient of the whole scale with the Spearman-Brown formula r_tt = 2*r_hh / (1 + r_hh).
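The odd/even split and the Spearman-Brown step-up can be sketched as follows; the six-item Likert data are hypothetical, invented for illustration:

```python
# Split-half reliability: split items by odd/even position, correlate
# the two half-scores, then apply Spearman-Brown: r_tt = 2*r_hh / (1 + r_hh).
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# rows = respondents, columns = six 5-point Likert items (hypothetical)
scores = [
    [4, 5, 4, 4, 5, 4],
    [2, 2, 3, 2, 2, 3],
    [5, 4, 5, 5, 4, 5],
    [3, 3, 2, 3, 3, 3],
    [1, 2, 1, 2, 1, 2],
]
odd_half  = [sum(row[0::2]) for row in scores]   # items 1, 3, 5
even_half = [sum(row[1::2]) for row in scores]   # items 2, 4, 6
r_hh = pearson(odd_half, even_half)              # half-scale correlation
r_tt = 2 * r_hh / (1 + r_hh)                     # Spearman-Brown step-up
print(round(r_hh, 3), round(r_tt, 3))
```

Note that r_tt is always larger than r_hh, because the step-up corrects for the fact that each half contains only half the items of the full scale.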
4. Inter-rater reliability:
This method is used when the standardization of the measurement tool is low, so that differences in the raters' judgment criteria affect measurement reliability. To test inter-rater reliability, compute the correlation coefficient between one rater's scores and another's.
5. Cronbach's α coefficient:
Cronbach's α is currently the most commonly used reliability coefficient. The formula is:
α = (k / (k - 1)) * (1 - (Σsᵢ²) / sₜ²)
where k is the total number of items in the scale, sᵢ² is the variance of the scores on the i-th item, and sₜ² is the variance of the total scores of all items. The formula shows that α evaluates the consistency among the scores of the individual items, and is thus an internal consistency coefficient. This method is suitable for reliability analysis of attitude and opinion questionnaires (scales).
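The formula above translates directly into code. A minimal sketch with hypothetical scores for five respondents on k = 4 items (sample variances with an n - 1 denominator, as statistical packages typically use):

```python
# Cronbach's alpha: alpha = (k / (k - 1)) * (1 - sum(item variances) / var(total)).
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)

# rows = respondents, columns = k = 4 items (hypothetical data)
scores = [
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [1, 2, 2, 1],
    [4, 3, 4, 4],
    [2, 2, 3, 2],
]
k = len(scores[0])
item_vars = [variance([row[i] for row in scores]) for i in range(k)]
total_var = variance([sum(row) for row in scores])
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 3))
```

When the items are highly consistent, the variance of the total score is much larger than the sum of the item variances, so α approaches 1; these hypothetical data give α well above the 0.80 benchmark cited earlier.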
For Likert-type rating scales, the most commonly used reliability tests are the Cronbach's α coefficient and split-half reliability.
IV. Reliability Analysis Using SPSS
In SPSS, reliability analysis is carried out with the Reliability Analysis module under Scale, and with the Factor module under Data Reduction.
The Reliability Analysis module mainly tests split-half reliability, the Kuder-Richardson and α coefficients, and the Hoyt reliability coefficient. For test-retest and alternate-forms reliability, simply combine the scores from the two administrations (or the two forms) into one data file and use Bivariate under Correlate to obtain the correlation coefficient, which is the test-retest or alternate-forms reliability; inter-rater reliability uses rank correlation and Kendall's coefficient of concordance.
Table 1. Parameters of the Model option of the reliability analysis module

Keyword | Function
Alpha | Cronbach's α coefficient
Split-half | Split-half reliability (n is the number of items assigned to the second half)
Guttman | Guttman's lower bounds for true reliability
Parallel | Maximum-likelihood reliability, assuming parallel items
Strict parallel | Maximum-likelihood reliability, assuming items have equal means and equal variances
Table 2. Parameters of the Statistics option in the reliability analysis module

Keyword | Function
F test | Hoyt's reliability coefficient (ANOVA)
Friedman chi-square | Friedman's chi-square and Kendall's coefficient of concordance
Cochran chi-square | Cochran's Q test, suitable for dichotomous items (e.g., yes/no questions)
Hotelling's T | Hotelling's T² test
Tukey's | Tukey's test of additivity
Intraclass | Intraclass correlation coefficient of the item scores
Use SPSS software for Reliability Analysis