Assess Validity of Data Part I: Face Validity, Content Validity, and Criterion-Related Validity in Nursing Research

Introduction

The second important criterion for evaluating a quantitative instrument is its validity. Validity is the degree to which an instrument measures what it is supposed to measure. When researchers develop an instrument to measure hopelessness, how can they be sure that resulting scores validly reflect this construct and not something else, like depression?

Reliability and validity are not independent qualities of an instrument. A measuring device that is unreliable cannot possibly be valid: an instrument that measures inconsistently and inaccurately contains too much error to be a valid indicator of the target variable.

An instrument can, however, be reliable without being valid. Suppose we had the idea to assess patients’ anxiety by measuring the circumference of their wrists. We could obtain highly accurate, consistent, and precise measurements of wrist circumference, but such measures would not be valid indicators of anxiety. Thus, high reliability provides no evidence of an instrument’s validity, but low reliability is evidence of low validity.

Like reliability, validity has different aspects and assessment approaches. Unlike reliability, however, an instrument’s validity is difficult to establish. There are no equations that can easily be applied to the scores of a hopelessness scale to estimate how good a job the scale is doing in measuring the critical variable.

Face Validity

Face validity refers to whether the instrument looks as though it is measuring the appropriate construct. Although face validity should not be considered primary evidence for an instrument’s validity, it is helpful for a measure to have face validity if other types of validity have also been demonstrated. For example, it might be easier to persuade people to participate in an evaluation if the instruments being used have face validity.

Content Validity

Content validity concerns the degree to which an instrument has an appropriate sample of items for the construct being measured. Content validity is relevant for both affective measures (i.e., measures relating to feelings, emotions, and psychological traits) and cognitive measures.

For cognitive measures, the content validity question is, how representative are the questions on this test of the universe of questions on this topic? For example, suppose we were testing students’ knowledge about major nursing theories. The test would not be content valid if it omitted questions about, for example, Orem’s self-care theory. Content validity is also relevant in the development of affective measures.

Researchers designing a new instrument should begin with a thorough conceptualization of the construct so the instrument can capture the entire content domain. Such a conceptualization might come from rich first-hand knowledge, an exhaustive literature review, or findings from a qualitative inquiry.

An instrument’s content validity is necessarily based on judgment. There are no completely objective methods of ensuring the adequate content coverage of an instrument. However, it is becoming increasingly common to use a panel of substantive experts to evaluate and document the content validity of new instruments.

Requirements of the Expert Panel

The panel typically consists of at least three experts, but a larger number may be advisable if the construct is complex. Experts are asked to evaluate individual items on the new measure as well as the entire instrument. Two key issues in such an evaluation are whether individual items are relevant and appropriate in terms of the construct, and whether the items adequately measure all dimensions of the construct.

With regard to item relevance, some researchers compute interrater agreement indexes and a formal content validity index (CVI) across the experts’ ratings of each item’s relevance. One procedure is to have experts rate each item on a four-point scale (from 1 = not relevant to 4 = very relevant). The CVI for the total instrument is the proportion of items rated as either 3 or 4. A CVI of .80 or better indicates good content validity.
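To make the arithmetic concrete, the sketch below computes a CVI from a small set of hypothetical ratings (four items, four experts). The data and the reporting choices are assumptions made purely for illustration, not results from any particular validation study.

```python
# A minimal sketch of a content validity index (CVI) calculation, using
# hypothetical ratings. Each row is an item; each column is one of four
# experts rating it on the 1 (not relevant) to 4 (very relevant) scale
# described above.

ratings = [
    [4, 3, 4, 4],  # item 1
    [3, 4, 3, 4],  # item 2
    [2, 3, 4, 3],  # item 3
    [1, 2, 2, 3],  # item 4
]

def proportion_relevant(values):
    """Proportion of ratings that are 3 or 4 (i.e., judged relevant)."""
    return sum(1 for r in values if r >= 3) / len(values)

# Item-level CVIs: agreement among the experts that each item is relevant.
item_cvis = [proportion_relevant(item) for item in ratings]

# Scale-level CVI: the proportion of all ratings that are 3 or 4, which,
# with an equal number of raters per item, equals the average of the
# item-level CVIs.
all_ratings = [r for item in ratings for r in item]
scale_cvi = proportion_relevant(all_ratings)

print("Item CVIs:", item_cvis)            # [1.0, 1.0, 0.75, 0.25]
print("Scale CVI:", round(scale_cvi, 2))  # 0.75 -> below the .80 benchmark
```

With these made-up ratings, the scale falls short of the .80 benchmark, suggesting that item 4 in particular would need to be revised or dropped.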

Criterion-Related Validity

Establishing criterion-related validity involves determining the relationship between an instrument and an external criterion. The instrument is said to be valid if its scores correlate highly with scores on the criterion. For example, if a measure of attitudes toward premarital sex correlates highly with subsequent loss of virginity in a sample of teenagers, then the attitude scale would have good criterion-related validity.

For criterion-related validity, the key issue is whether the instrument is a useful predictor of other behaviors, experiences, or conditions. One requirement of this approach is the availability of a reliable and valid criterion with which measures on the instrument can be compared.

This is, unfortunately, seldom easy. If we were developing an instrument to measure the effectiveness of nursing students, we might use supervisory ratings as our criterion, but can we be sure that these ratings are reliable and valid? The ratings themselves might need validation.

Criterion-related validity is most appropriate when there is a concrete, well-accepted criterion. For example, a scale to measure smokers’ motivation to quit smoking has a clear-cut, objective criterion (subsequent smoking). Once a criterion is selected, criterion-related validity can be assessed easily.

Correlation Coefficient Computation

A correlation coefficient is computed between scores on the instrument and the criterion. The magnitude of the coefficient is a direct estimate of how valid the instrument is, according to this validation method. To illustrate, suppose researchers developed a scale to measure nurses’ professionalism. They administer the instrument to a sample of nurses and ask the nurses to indicate how many articles they have published.

The publications variable was chosen as one of many potential objective criteria of professionalism. In this example, a correlation coefficient of .83 indicates that the professionalism scale corresponds well with the number of published articles. Whether the scale is really measuring professionalism is a different issue, one that is the concern of construct validation, discussed in the next section.
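As a concrete illustration of this computation, the sketch below calculates a Pearson correlation coefficient between scale scores and publication counts for ten hypothetical nurses. The data are invented for the example, so the resulting coefficient (about .99) is not the .83 reported above.

```python
# A minimal sketch of criterion-related validation: correlate scores on a
# (hypothetical) professionalism scale with the criterion, the number of
# articles each nurse has published. All values are illustrative only.
from math import sqrt

scale_scores = [72, 65, 80, 58, 90, 77, 61, 85, 69, 74]  # instrument scores
publications = [3, 1, 5, 0, 8, 4, 1, 6, 2, 3]            # criterion values

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / sqrt(var_x * var_y)

validity_coefficient = pearson_r(scale_scores, publications)
print(f"Validity coefficient: {validity_coefficient:.2f}")  # about .99 here
```

The same coefficient could be obtained from a statistics library (for example, scipy.stats.pearsonr); the hand-rolled version is shown only to make the formula explicit.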

Read More: https://nurseseducator.com/methods-of-measurement-reliability-ii/
