A typical research environment, especially at the end of a semester, is full of expressions such as:
“I have a thesis to submit and I am going crazy.”
“I hate SPSS.”
“My Reliability score is negative. Shit! I have to do it all over again.”
Well, it is not that difficult to write a research paper. Despite the stress and fear attached to a thesis or a dissertation, it is one of the best things to do. I am not crazy for saying that. Trust me: it is an exciting experience and an opportunity to satisfy that curious craving to explore new things.
Designing surveys/questionnaires, pre-testing them, collecting responses, coding and finally analyzing the data is usually the most dreaded part. One of the reasons for that dread is the trouble with reliability analysis. To simplify things, let's walk through it step by step.
Reliability means that if you repeat data collection with the same instrument, the results it generates are consistent. Strictly speaking, reliability is a property of the scores a study produces, not of the instrument itself. For this reason, a low reliability score indicates a low level of consistency and signals that the data cannot be trusted. There are a number of different coefficients for testing reliability; each measures specific parameters, and which one to use depends on the context of the study. This article will focus on Cronbach's alpha.
The value of Cronbach's alpha can vary depending on the total number of items, how well the different items measure the same construct, and how strongly each item correlates with the others. A value between .6 and 1 is considered acceptable, and the closer it is to 1, the more reliable the scale.
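If you would rather see the coefficient computed by hand than trust a single SPSS output cell, here is a minimal sketch in Python. The formula used is the standard one, alpha = (k/(k-1)) * (1 - sum of item variances / variance of the total score); the data below are made-up Likert-type responses, purely for illustration.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 respondents answering 3 Likert-type items
scores = np.array([
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 4],
])
print(round(cronbach_alpha(scores), 3))
```

With these toy responses the items move together closely, so alpha comes out very high; flat or contradictory responses would pull it down toward zero.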
Since the magnitude of alpha depends on the internal consistency and correlation of the items, it is important to look at the corrected item-total correlation. This column reports the correlation between one question item and the sum score of the remaining items. For instance, if the corrected item-total correlation of a question is .472, that is the correlation between this question and the combined score of all the other questions: a moderate positive correlation, showing that the item is reasonably consistent with the composite score of the rest.

Items with very weak corrected item-total correlations are therefore candidates for deletion. Rephrasing items, reverse-coding them, or checking whether they actually measure the intended construct can also greatly increase the reliability coefficient; in the extreme case where the items are perfectly correlated, alpha equals 1. Finally, when running a reliability check, the column titled Cronbach's Alpha if Item Deleted shows what the overall alpha would become if that particular item were removed, which makes it easy to spot the items dragging alpha down.
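The two diagnostics just described, corrected item-total correlation and alpha-if-item-deleted, can be reproduced outside SPSS as well. Below is an illustrative sketch (the function names and the toy Likert data are mine, not from any package): for each item it correlates that item with the sum of the others, and recomputes alpha with the item left out.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                            / items.sum(axis=1).var(ddof=1))

def item_diagnostics(items):
    """Per item: (corrected item-total correlation, alpha if item deleted)."""
    items = np.asarray(items, dtype=float)
    out = []
    for i in range(items.shape[1]):
        rest = np.delete(items, i, axis=1)   # all items except item i
        # correlation of item i with the sum score of the remaining items
        r = np.corrcoef(items[:, i], rest.sum(axis=1))[0, 1]
        out.append((r, cronbach_alpha(rest)))
    return out

# Hypothetical responses: 5 respondents, 3 Likert-type items
scores = [[4, 5, 4], [3, 4, 3], [5, 5, 5], [2, 3, 2], [4, 4, 4]]
for i, (r, a) in enumerate(item_diagnostics(scores), start=1):
    print(f"item {i}: corrected item-total r = {r:.3f}, "
          f"alpha if deleted = {a:.3f}")
```

An item with a low corrected item-total correlation and a higher "alpha if deleted" than the overall alpha is exactly the kind of item the paragraph above suggests rewording or dropping.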
Thus, since reliability measures how consistent the scores of a study are across repetitions, Cronbach's alpha is the most widely used and accurate coefficient of reliability. A value between .8 and 1 is considered highly reliable, and alpha equals 1 when the items are perfectly correlated. The magnitude of alpha is strongly affected by the number of items, the extent to which they measure the construct, the way they are phrased and coded, and their correlation with one another. For this reason, the value can be increased by deleting weak items and by working to raise the internal consistency and inter-item correlation of the remaining ones.