For a statistical measurement to be fully meaningful, it should meet both reliability and validity conditions.
Reliability
Reliability concerns the degree to which empirical indicators or measures of a theoretical concept are stable or consistent across two or more attempts to measure that concept. Simply stated, reliability of measurement is the degree to which a particular measuring procedure gives equivalent results over a number of repeated trials.
Mass, for example, is a theoretical concept representing the pull of gravity on a body; it is given empirical indicators in grams, kilograms, ounces, pounds, etc., by means of a measuring scale. If each time a stone is put on the measuring scale the pointer moves to the same specific number of kilograms, we have evidence that the measuring scale is reliable.
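The idea of consistency across repeated trials can be shown with a short sketch. The readings below are invented for illustration: if repeated weighings of the same stone cluster tightly around their mean, the scale behaves reliably.

```python
# Illustrative sketch: reliability as consistency of repeated measurements.
# The readings are hypothetical values for the same stone weighed five times.
readings = [2.49, 2.51, 2.50, 2.50, 2.49]  # kilograms

mean = sum(readings) / len(readings)
# Standard deviation of the repeated trials: a small spread means the
# procedure gives equivalent results over repeated trials, i.e. is reliable.
spread = (sum((r - mean) ** 2 for r in readings) / len(readings)) ** 0.5

print(f"mean = {mean:.3f} kg, spread = {spread:.4f} kg")
```

A spread of a few grams on a 2.5 kg stone would support reliability; widely scattered readings would not.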
Validity
Reliability of measurement is not of much use unless the measure also has validity. Essentially, validity is concerned with the question: are you measuring what you think you are measuring? Validity in this sense is the degree to which an empirical measure, or several measures of a concept, accurately represents that concept.
The principle of validity requires that we ask quite genuinely whether the ten items below, singly and collectively, represent the concept of interest. If they represent something other than ethnic acceptance, then they are not valid for the specific research question.
I would willingly accept a ………… (name of ethnic group):
- As a marriage partner for myself or family member
- As a playmate for my child
- As a friend for myself or family member
- As a business partner for myself
- To live in the same house as myself
- To live next door to me
- To live in the same neighbourhood as myself
- To belong to the same political party as myself
- To belong to the same church as myself
- To be excluded from my country entirely
Content Validity
This is concerned with whether or not a test or measuring instrument is representative of the full content of the thing being measured. Assume, for example, that a teacher is interested in administering a test to assess pupils’ understanding of arithmetic concepts or operations. The content of arithmetic, as we know it, consists basically of multiplication, division, addition and subtraction. For the teacher’s test of arithmetic operations to be valid in this sense, it must include items covering all four aspects of arithmetic. In other words, the content validity of such a test would be in serious doubt if it included items about addition and multiplication but left out subtraction and division.
Criterion validity
Criterion-related validity is involved when the aim of the researcher is to use a test or measuring instrument to predict an outcome that is external to the test or measuring instrument itself. One may intend, for example, to assess the validity of the Cambridge School Certificate exams in terms of how accurately they predict the academic success of university students. The certificate would have high validity if it is confirmed over time that superior academic ability, as shown by O-level performance, carries over into performance in university exams for successive years.
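The statistical side of this assessment is often a simple correlation between the test and the external criterion. The sketch below uses entirely hypothetical O-level scores and later university marks; a strong positive coefficient would count as evidence of criterion-related validity.

```python
# Hypothetical scores: O-level results and later university exam marks
# for the same eight students (invented data for illustration only).
olevel = [62, 70, 55, 81, 66, 74, 59, 88]
university = [58, 72, 50, 85, 63, 70, 61, 90]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(olevel, university)
print(f"criterion validity coefficient r = {r:.2f}")
```

In a real study the coefficient would be computed over successive cohorts, since criterion validity rests on the relationship being confirmed over time.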
Construct validity
Construct validity is involved when the interest is in finding out the extent to which a particular measure (variable, scale or test) is related to other variables with which it is expected, on logical grounds, to be related. For instance, in the study on ethnic acceptance there was interest in establishing the construct validity of the measure of this concept. There were at least three other variables in the study that should have had a logical relationship with the main concept.
Three steps are necessary in establishing construct validity. First, the researcher identifies which variables should, on logical grounds, relate to the scale or test whose validity is to be assessed. Second, the researcher establishes through statistical tests the degree to which these variables (or a single variable) and the variable of interest go together, i.e. co-vary.
Based on step two, the researcher finally interprets what the evidence says about the construct validity of the particular variable of interest. It is worth stressing that one places more confidence in the validity of a variable that has been related to more than just one other variable.
In short, a pilot study can be conducted to examine the reliability of the instruments to be used in the study. To determine the validity of the findings, according to Lincoln and Guba (1985: 219, 301), the following may be observed:
- Prolonged engagement in the field.
- Persistent observation: in order to establish the relevance of the characteristics for the focus.
- Triangulation: of methods, sources, investigators and theories.
- Peer debriefing: exposing oneself to a disinterested peer in a manner akin to cross-examination, in order to test honesty, working hypotheses and to identify the next steps in the research.
- Negative case analysis: in order to establish a theory that fits every case, revising hypotheses retrospectively.
- Member checking: respondent validation, to assess intentionality, to correct factual errors, to offer respondents the opportunity to add further information or to put information on record; to provide summaries and to check the adequacy of the analysis.