Assess Construct Validity of Data II

Assess Construct Validity of Data II: Predictive Validity, Concurrent Validity, Validation by Means of the Criterion-Related Approach, and Construct Validation

Predictive Validity

A distinction is sometimes made between two types of criterion-related validity. Predictive validity refers to the adequacy of an instrument in differentiating between people’s performance on some future criterion. When a school of nursing correlates incoming students’ SAT scores with subsequent grade-point averages, the predictive validity of the SATs for nursing school performance is being evaluated.

Concurrent validity refers to an instrument’s ability to distinguish individuals who differ on a present criterion. For example, a psychological test designed to differentiate between patients in a mental institution who can and cannot be released could be correlated with current behavioral ratings made by health care personnel. The difference between predictive and concurrent validity, then, lies in the timing of obtaining measurements on the criterion.
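Operationally, criterion-related validity (whether predictive or concurrent) is usually summarized as a validity coefficient, the correlation between instrument scores and criterion scores. The short Python sketch below, using invented admission-test scores and grade-point averages purely for illustration, shows the computation:

```python
# Hypothetical sketch of a criterion-related validity coefficient.
# All scores below are invented for demonstration purposes.
import numpy as np
from scipy import stats

# Instrument scores (e.g., admission test scores) for a small hypothetical sample
instrument_scores = np.array([520, 610, 480, 700, 560, 630, 590, 450, 670, 540])

# Criterion scores obtained later (predictive) or at the same time (concurrent),
# e.g., first-year grade-point averages
criterion_scores = np.array([2.8, 3.4, 2.5, 3.8, 3.0, 3.3, 3.1, 2.4, 3.6, 2.9])

# The validity coefficient is the Pearson correlation between the two sets of scores
r, p_value = stats.pearsonr(instrument_scores, criterion_scores)
print(f"Validity coefficient r = {r:.2f} (p = {p_value:.3f})")
```

The same computation applies to both predictive and concurrent validation; only the timing of the criterion measurement differs.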

Validation by means of the criterion-related approach is most often used in applied or practically oriented research. Criterion-related validity helps decision makers by giving them some assurance that their decisions will be effective, fair, and, in short, valid.

Construct Validity

Validating an instrument in terms of construct validity is a challenging task. The key construct validity questions are: What is this instrument really measuring? Does it adequately measure the abstract concept of interest?

Abstract Concepts and Construct Validity

Unfortunately, the more abstract the concept, the more difficult it is to establish construct validity; at the same time, the more abstract the concept, the less suitable it is to rely on criterion-related validity. Actually, it is not just a question of suitability: What objective criterion is there for such concepts as empathy, role conflict, or separation anxiety? Despite the difficulty of construct validation, it is an activity vital to the development of a strong evidence base.

The constructs in which nurse researchers are interested must be validly measured. Construct validity is inextricably linked with theoretical considerations. In validating a measure of death anxiety, we would be less concerned with the adequate sampling of items or with its relationship to a criterion than with its correspondence to a cogent conceptualization of death anxiety. Construct validation can be approached in several ways, but it always involves logical analysis and the testing of relationships predicted by theory.

Constructs are explicated in terms of other abstract concepts; researchers make predictions about the manner in which the target construct will function in relation to other constructs. One construct validation approach is the known-groups technique. In this procedure, the instrument is administered to groups expected to differ on the critical attribute because of some known characteristic. For instance, in validating a measure of fear of the labor experience, we could contrast the scores of primiparas and multiparas.

We would expect that women who had never given birth would be more anxious than women who had done so, and so we might question the instrument’s validity if such differences did not emerge. We would not necessarily expect large differences; some primiparas would feel little anxiety, and some multiparas would express some fears. Overall, however, we would anticipate differences in average group scores.
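In practice, the known-groups comparison often comes down to contrasting group means, for example with an independent-samples t-test. The Python sketch below uses invented fear-of-labor scores for hypothetical groups of primiparas and multiparas:

```python
# Hypothetical sketch of the known-groups technique: compare mean scores on a
# fear-of-labor measure between primiparas and multiparas. All values are invented.
import numpy as np
from scipy import stats

primipara_scores = np.array([34, 41, 38, 45, 30, 37, 42, 36])  # no prior births
multipara_scores = np.array([28, 25, 33, 29, 31, 24, 27, 30])  # prior births

# An independent-samples t-test checks whether the predicted group difference emerges
t_stat, p_value = stats.ttest_ind(primipara_scores, multipara_scores)
print(f"Primipara mean = {primipara_scores.mean():.1f}, "
      f"multipara mean = {multipara_scores.mean():.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# A higher mean for primiparas, in line with theory, supports construct validity;
# the absence of any difference would cast doubt on the measure.
```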

Other Methods of Construct Validation

Another method of construct validation involves examining relationships based on theoretical predictions, which is really a variant of the known-groups approach. A researcher might reason as follows:

  1. According to theory, construct X is positively related to construct Y.
  2. Instrument A is a measure of construct X; instrument B is a measure of construct Y.
  3. Scores on A and B are correlated positively, as predicted by theory.
  4. Therefore, it is inferred that A and B are valid measures of X and Y.

This logical analysis is fallible and does not constitute proof of construct validity but yields important evidence. Construct validation is essentially an evidence-building enterprise.
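This kind of theory-based prediction can be examined with an ordinary correlation, testing whether the association between the two measures is positive as hypothesized. The Python sketch below uses invented scores for hypothetical instruments A and B:

```python
# Hypothetical sketch of the hypothesized-relationship approach: theory predicts a
# positive correlation between construct X (instrument A) and construct Y
# (instrument B). Scores are invented for illustration only.
import numpy as np
from scipy import stats

scores_A = np.array([12, 18, 9, 22, 15, 20, 11, 17, 14, 19])   # instrument A (construct X)
scores_B = np.array([30, 42, 25, 47, 35, 44, 28, 40, 33, 41])  # instrument B (construct Y)

# A one-sided test asks whether the correlation is positive, as theory predicts
r, p_two_sided = stats.pearsonr(scores_A, scores_B)
p_one_sided = p_two_sided / 2 if r > 0 else 1 - p_two_sided / 2
print(f"r = {r:.2f}, one-sided p = {p_one_sided:.4f}")

# A positive, non-trivial correlation is consistent with, but does not prove,
# the construct validity of both measures.
```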

A significant construct validation tool is a procedure known as the multitrait–multimethod matrix method (MTMM) (Campbell & Fiske, 1959). This procedure involves the concepts of convergence and discriminability. Convergence is evidence that different methods of measuring a construct yield similar results; different measurement approaches should converge on the construct. Discriminability is the ability to differentiate the construct from other similar constructs.

The Argument of Campbell and Fiske

Campbell and Fiske argued that evidence of both convergence and discriminability should be brought to bear on the construct validity question. To help explain the MTMM approach, fictitious data from a study to validate a “need for autonomy” measure are presented in Table 18-4.

Suppose we measured need for autonomy in nursing home residents by

(1) giving a sample of residents a self-report summated rating scale (the measure we are attempting to validate);

(2) asking nurses to rate residents after observing them in a task designed to elicit autonomy or dependence; and

(3) having residents respond to a pictorial (projective) stimulus depicting an autonomy-relevant situation.

A second requirement of the full MTMM is to measure a differentiating construct, using the same measuring methods. In the current example, suppose we wanted to differentiate “need for autonomy” from “need for affiliation.” The discriminant concept must be similar to the focal concept, as in our example: We would expect that people with a high need for autonomy would tend to be relatively low on need for affiliation.

Single Validation Study

The point of including both concepts in a single validation study is to gather evidence that the two concepts are distinct, rather than two different labels for the same underlying attribute. The numbers in Table 18-4 represent the correlation coefficients between the scores on six different measures (two traits × three methods).

For instance, the coefficient of −.38 at the intersection of AUT1–AFF1 expresses the relationship between self-report scores on the need for autonomy and need for affiliation measures. Recall that a minus sign before the correlation coefficient signifies an inverse relationship. In this case, −.38 tells us that there was a slight tendency for people scoring high on the need for autonomy scale to score low on the need for affiliation scale. (The numbers in parentheses along the diagonal of this matrix are the reliability coefficients.)

Aspects of MTMM

Various aspects of the MTMM matrix have a bearing on construct validity. The most direct evidence (convergent validity) comes from the correlations between two different methods measuring the same trait. In the case of AUT1–AUT2, the coefficient is .60, which is reasonably high. Convergent validity coefficients should be large enough to encourage further scrutiny of the matrix. Second, the convergent validity entries should be higher, in absolute magnitude, than correlations between measures that have neither method nor trait in common.

That is, AUT1–AUT2 (.60) should be greater than AUT2–AFF1 (−.21) or AUT1–AFF2 (−.19), as it is in fact. This requirement is a minimum one that, if failed, should cause researchers to have serious doubts about the measures. Third, convergent validity coefficients should be greater than coefficients between measures of different traits by a single method.

Once again, the matrix in Table 18-4 fulfills this criterion: AUT1–AUT2 (.60) and AUT2–AUT3 (.55) are higher in absolute value than AUT1–AFF1 (−.38), AUT2–AFF2 (−.39), and AUT3–AFF3 (−.32). The last two requirements provide evidence for discriminant validity. The evidence is seldom as clear-cut as in this contrived example. Indeed, a common problem with the MTMM is interpreting the pattern of coefficients.

Another issue is that there are no clear-cut criteria for determining whether MTMM requirements have been met; that is, there are no objective means of assessing the magnitude of similarities and differences within the matrix. The MTMM is nevertheless a valuable tool for exploring construct validity.
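To make the second and third checks concrete, the minimal Python sketch below applies them to the fictitious coefficients quoted above (two traits, three methods); the reliability diagonal and any cells of Table 18-4 not quoted in the text are omitted:

```python
# A minimal sketch of two Campbell & Fiske checks, using only the fictitious
# coefficients quoted in the contrived example above.

# Correlations between pairs of measures, keyed by (measure, measure)
corr = {
    ("AUT1", "AUT2"): 0.60,   # same trait, different methods (convergent)
    ("AUT2", "AUT3"): 0.55,   # same trait, different methods (convergent)
    ("AUT1", "AFF1"): -0.38,  # different traits, same method
    ("AUT2", "AFF2"): -0.39,  # different traits, same method
    ("AUT3", "AFF3"): -0.32,  # different traits, same method
    ("AUT2", "AFF1"): -0.21,  # different traits, different methods
    ("AUT1", "AFF2"): -0.19,  # different traits, different methods
}

convergent = [corr[("AUT1", "AUT2")], corr[("AUT2", "AUT3")]]
hetero_mono = [corr[("AUT1", "AFF1")], corr[("AUT2", "AFF2")], corr[("AUT3", "AFF3")]]
hetero_hetero = [corr[("AUT2", "AFF1")], corr[("AUT1", "AFF2")]]

# Check 2: every convergent coefficient exceeds, in absolute value, every
# coefficient sharing neither trait nor method
check2 = min(abs(c) for c in convergent) > max(abs(c) for c in hetero_hetero)

# Check 3: every convergent coefficient exceeds, in absolute value, every
# coefficient between different traits measured by the same method
check3 = min(abs(c) for c in convergent) > max(abs(c) for c in hetero_mono)

print(f"Convergent coefficients: {convergent}")
print(f"Exceed different-trait/different-method values: {check2}")
print(f"Exceed different-trait/same-method values: {check3}")
```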

Researchers sometimes decide to use MTMM concepts even when the full model is not feasible, as in focusing only on convergent validity. Executing any part of the model is better than no effort at construct validation.

Read More:

https://nurseseducator.com/assess-the-validity-of-data-part-iii/

https://nurseseducator.com/assess-validity-of-data-iv/

https://nurseseducator.com/assess-validity-of-data-v-denzins-1989kimchi-polivka-and-stephenson-1991checking-lincoln-and-guba/
