Cronbach's alpha
Cronbach's alpha, or coefficient alpha, is a reliability coefficient and a measure of the internal consistency of tests and measures. It was devised by the American psychometrician Lee Cronbach. Today it enjoys such widespread usage that numerous studies warn against using Cronbach's alpha uncritically.
History
In his initial 1951 publication, Lee Cronbach described the coefficient as "coefficient alpha" and included an additional derivation. Coefficient alpha had been used implicitly in previous studies, but his interpretation was thought to be more intuitively attractive than those of earlier work, and the coefficient became quite popular.
- In 1967, Melvin Novick and Charles Lewis proved that it was equal to reliability if the true scores of the compared tests or measures differ by a constant that is independent of the people measured. In this case, the tests or measurements are said to be "essentially tau-equivalent."
- In 1978, Cronbach asserted that the reason the initial 1951 publication was widely cited was "mostly because [he] put a brand name on a common-place coefficient." He explained that he had originally planned to name other types of reliability coefficients, such as those used in inter-rater reliability and test-retest reliability, after consecutive Greek letters, but later changed his mind.
- Later, in 2004, Cronbach and Richard Shavelson encouraged readers to use generalizability theory rather than Cronbach's alpha. Cronbach opposed the use of the name "Cronbach's alpha" and explicitly denied the existence of studies that had published the general formula of KR-20 before his 1951 publication.
Prerequisites for using Cronbach's alpha
- The "parts" must be essentially tau-equivalent;
- Errors in the measurements are independent.
This is often a source of confusion for users who might consider some aspect of the testing process to be an "error". Anything that increases the covariance among the parts will contribute to greater true score variance. Under such circumstances, alpha is likely to over-estimate the reliability intended by the user.
Formula and calculation
Reliability can be defined as one minus the error score variance divided by the observed score variance:

\rho = 1 - \frac{\sigma^2_E}{\sigma^2_X}

Cronbach's alpha is best understood as a direct estimate of this definitional formula, with the error score variance estimated as the sum of the variances of each "part":

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_i}{\sigma^2_X}\right)
where:
- k represents the number of "parts" in the measure;
- the term \frac{k}{k-1} causes alpha to be an unbiased estimate of reliability when the parts are parallel or essentially tau-equivalent;
- \sigma^2_i is the variance associated with each part i; and
- \sigma^2_X is the observed score variance.
and the variance of a composite is equal to twice the sum of all covariances of the parts plus the sum of the variances of the parts:

\sigma^2_X = \sum_{i=1}^{k} \sigma^2_i + 2\sum_{i<j} \sigma_{ij}

Therefore \sum_{i=1}^{k} \sigma^2_i estimates the error score variance, and 2\sum_{i<j} \sigma_{ij} estimates the true score variance. It is much easier to compute alpha by summing the k part variances than by adding up all the k(k-1)/2 unique part covariances.
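As a minimal sketch of this calculation (the toy score table and the function name are invented for illustration, not taken from the article), coefficient alpha can be computed directly from the part variances and the composite variance:

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Coefficient alpha from a subjects-by-items score table.

    scores: list of rows, one per subject; each row holds that
    subject's score on each item ("part").
    """
    k = len(scores[0])                             # number of parts
    items = list(zip(*scores))                     # one column per item
    item_vars = [pvariance(col) for col in items]  # sigma_i^2 for each part
    totals = [sum(row) for row in scores]          # composite score X
    total_var = pvariance(totals)                  # sigma_X^2
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# toy data: 4 subjects, 3 items (hypothetical)
data = [
    [3, 4, 3],
    [5, 5, 4],
    [1, 2, 2],
    [4, 3, 4],
]
alpha = cronbach_alpha(data)
```

Population variances (`pvariance`) are used throughout so that the part variances and the composite variance stay on the same footing.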
Alternatively, alpha can be calculated through the following formula:

\alpha = \frac{k \bar{c}}{\bar{v} + (k-1)\bar{c}}

where:
- \bar{v} represents the average variance of the parts; and
- \bar{c} represents the average inter-item covariance.
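A sketch of this alternative route (toy data and function names invented for illustration), using the average part variance and the average inter-item covariance; on the same score table it agrees with the variance-based formula:

```python
from statistics import fmean, pvariance

def alpha_from_averages(scores):
    """Alpha via the average part variance v_bar and the average
    inter-item covariance c_bar:
        alpha = k * c_bar / (v_bar + (k - 1) * c_bar)
    """
    k = len(scores[0])
    items = list(zip(*scores))

    def pcov(x, y):  # population covariance of two item columns
        mx, my = fmean(x), fmean(y)
        return fmean((a - mx) * (b - my) for a, b in zip(x, y))

    v_bar = fmean(pvariance(col) for col in items)
    c_bar = fmean(pcov(items[i], items[j])
                  for i in range(k) for j in range(i + 1, k))
    return k * c_bar / (v_bar + (k - 1) * c_bar)

# same hypothetical toy table: 4 subjects, 3 items
data = [
    [3, 4, 3],
    [5, 5, 4],
    [1, 2, 2],
    [4, 3, 4],
]
alpha = alpha_from_averages(data)
```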
Common misconceptions
A high value of Cronbach's alpha indicates homogeneity between the items
Many textbooks refer to α as an indicator of homogeneity between items. This misconception stems from Cronbach's inaccurate explanation that high α values show homogeneity between the items. Homogeneity is a term rarely used in the modern literature, and related studies interpret it as referring to uni-dimensionality. Several studies have provided proofs or counterexamples showing that high α values do not indicate uni-dimensionality: a uni-dimensional dataset and a multidimensional dataset can yield the same value of α. (The original counterexample data tables are omitted here.)
Uni-dimensionality is a prerequisite for α. One should check uni-dimensionality before calculating α, rather than calculating α to check uni-dimensionality.
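One way to see why a high α does not imply uni-dimensionality is a hypothetical covariance matrix (constructed here for illustration; it is not the article's own counterexample): ten standardized items form two unrelated clusters of five, yet the average-covariance formula still yields a high α:

```python
from statistics import fmean

# Hypothetical counterexample: 10 standardized items in two
# unrelated clusters of 5.  Items correlate 0.9 within a cluster
# and 0.0 across clusters, so the data are clearly two-dimensional.
k = 10
cov = [[1.0 if i == j else 0.9 if i // 5 == j // 5 else 0.0
        for j in range(k)]
       for i in range(k)]

v_bar = fmean(cov[i][i] for i in range(k))       # average variance
c_bar = fmean(cov[i][j]                          # average covariance
              for i in range(k) for j in range(i + 1, k))
alpha = k * c_bar / (v_bar + (k - 1) * c_bar)    # approx. 0.87
```

Despite the obvious two-factor structure, α comes out near 0.87, a value many rules of thumb would call "good" reliability.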
A high value of Cronbach's alpha indicates internal consistency
The term "internal consistency" is commonly used in the reliability literature, but its meaning is not clearly defined. The term is sometimes used to refer to a certain kind of reliability, but it is unclear exactly which reliability coefficients are included here, in addition to α. Cronbach used the term in several senses without an explicit definition. Cortina showed that α is not an indicator of any of these.
Removing items using "alpha if item deleted" always increases reliability
Most psychometric software will produce a column labeled "alpha if item deleted", which is the coefficient alpha that would be obtained if a given item were dropped. For good items, this value is lower than the current coefficient alpha for the whole scale; for some weak or bad items, it shows an increase over the current coefficient alpha for the whole scale. Removing an item using "alpha if item deleted" may result in "alpha inflation," where sample-level reliability is reported to be higher than population-level reliability. It may also reduce population-level reliability. The elimination of less-reliable items should be based not only on a statistical basis but also on a theoretical and logical basis. It is also recommended that the whole sample be divided into two and cross-validated.
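A sketch of how such an "alpha if item deleted" column can be computed (the toy data and helper names are invented; the fourth item is deliberately noisy, so dropping it raises alpha, while dropping a good item lowers it):

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Coefficient alpha from a subjects-by-items score table."""
    k = len(scores[0])
    items = list(zip(*scores))
    totals = [sum(row) for row in scores]
    return (k / (k - 1)) * (
        1 - sum(pvariance(col) for col in items) / pvariance(totals))

def alpha_if_item_deleted(scores):
    """Alpha recomputed with each item dropped in turn, as reported
    in the column most psychometric packages label this way."""
    k = len(scores[0])
    return [cronbach_alpha([[v for j, v in enumerate(row) if j != i]
                            for row in scores])
            for i in range(k)]

# toy data: 4 subjects, 4 items; item 4 is unrelated noise
data = [
    [3, 4, 3, 1],
    [5, 5, 4, 5],
    [1, 2, 2, 4],
    [4, 3, 4, 2],
]
full = cronbach_alpha(data)
per_item = alpha_if_item_deleted(data)
```

Here `per_item[3]` (the noisy item deleted) exceeds `full`, flagging the item as weak, while `per_item[0]` falls below `full` because the first item contributes real common variance.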