Measuring the Validity of a Research Instrument

What Is Validity, Theoretical Specification, Forms or Types of Validity, Content Validity, Criterion Validity, Construct Validity, How to Examine Hypotheses, How to Find Similarities and Differences.

What Is Validity

     Validity refers to the accuracy of responses on self-report, norm-referenced measures of attitudes and behavior. The concept arises from classical measurement theory, which holds that any score obtained from an instrument is a composite of the individual's true pattern and error variability.

    The error is made up of random and systematic components. Maximizing the instrument's reliability helps to reduce the random error associated with the scores (see "Reliability"), whereas establishing the instrument's validity helps to minimize systematic error. Reliability is thus a necessary but not sufficient condition for validity.
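
    This decomposition can be made concrete with a short simulation. The Python sketch below, using entirely hypothetical numbers, shows that averaging repeated observations washes out random error but leaves systematic error untouched, which is why reliability alone cannot guarantee validity.

# Minimal sketch of classical measurement theory: observed score =
# true score + random error + systematic error. Values are hypothetical.
import random

random.seed(42)

TRUE_SCORE = 50        # the individual's true pattern (hypothetical)
SYSTEMATIC_BIAS = 3.0  # a constant bias, i.e., a validity problem

def observed_score():
    """Simulate one observed score with random and systematic error."""
    random_error = random.gauss(0, 2.0)
    return TRUE_SCORE + random_error + SYSTEMATIC_BIAS

scores = [observed_score() for _ in range(1000)]
mean_observed = sum(scores) / len(scores)

# Averaging cancels the random component (a reliability concern), but the
# systematic bias of about 3 points remains (a validity concern).
print(f"true = {TRUE_SCORE}, mean observed = {mean_observed:.1f}")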

Theoretical Specification 

    Validity and theoretical specification are inseparable, and the
conceptual clarification (see “Instrumentation”) performed in instrument
development is the foundation for accurate measurement of the concept. Broadly
stated, validity estimates how well the instrument measures what it purports to
measure. 

    Underlying all assessment of validity is the relationship of the data to the concept of interest. This relationship affects the instrument's ability to differentiate between groups, predict intervention effects, and describe characteristics of the target group.

Forms or Types of Validity

    The literature usually describes three forms of validity: content, criterion, and construct. These forms vary in their value to nursing measurement, and unlike reliability, there is no single established procedure that yields one coefficient as evidence of an instrument's validity. Instead, validity assessment is a creative process of building evidence to support the accuracy of measurement.

Content Validity

    Content validity determines whether the items sampled for inclusion adequately represent the domain of content addressed by the instrument. The assessment of content validity spans the development and testing phases of instrumentation and precedes formal reliability testing.

    Examination of the content focuses on linking each item to the purposes or objectives of the instrument, assessing the relevance of each item, and determining if the item pool adequately
represents the content. This process is typically done by a panel of experts,
which may include professional experts or members of the target population. 

    Lynn (1986) has provided an excellent overview of the judgment quantification process of having judges assert that each item, and the scale itself, is content valid. The process produces a content validity index (CVI), which is the most widely used single measure for supporting content validity.
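
    To make the CVI concrete, the short Python sketch below computes item-level and scale-level indices from a hypothetical panel of five judges, each rating an item's relevance on the usual 4-point scale (ratings of 3 or 4 count as relevant).

# Minimal sketch of a content validity index (CVI) calculation in the
# spirit of Lynn (1986). All ratings below are hypothetical.

# Rows are items; columns are five expert judges (4-point relevance scale).
ratings = [
    [4, 4, 3, 4, 4],  # item 1
    [3, 4, 4, 3, 4],  # item 2
    [2, 3, 4, 2, 3],  # item 3
]

def item_cvi(item_ratings):
    """Proportion of judges rating the item 3 or 4 (item-level CVI)."""
    relevant = sum(1 for r in item_ratings if r >= 3)
    return relevant / len(item_ratings)

item_cvis = [item_cvi(r) for r in ratings]
scale_cvi = sum(item_cvis) / len(item_cvis)  # scale CVI, averaging method

for i, cvi in enumerate(item_cvis, start=1):
    print(f"Item {i}: I-CVI = {cvi:.2f}")
print(f"Scale CVI (average of I-CVIs): {scale_cvi:.2f}")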

    Content validity should not be confused with face validity, an unscientific term meaning that the instrument merely looks as if it measures what it claims to measure. Although content validity is often considered a minor component of instrument validation, researchers have repeatedly found that careful attention to this early step has dramatic implications for later testing.

Criterion Validity

    Criterion validity is the extent to which an instrument may be used
to measure an individual’s present or future standing on a concept through
comparison of responses to an established standard. 

    Examination of the individual's current standing is usually expressed as concurrent criterion validity, whereas predictive criterion validity refers to the individual's future standing.

    It is important to note that rarely can another instrument be
used as a criterion. A true criterion is usually a widely accepted standard of
the concept of interest. Few of these exist within the areas of interest to
nursing.
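
    When a genuine criterion does exist, criterion validity is typically summarized as a correlation between instrument scores and the standard. The Python sketch below uses hypothetical scores; for concurrent validity both measures are taken at the same time, whereas for predictive validity the criterion would be measured later.

# Minimal sketch of a concurrent criterion validity check. The scores
# and the "established standard" here are hypothetical placeholders.
from statistics import correlation  # Python 3.10+

new_instrument = [12, 18, 25, 31, 40, 44, 52]      # hypothetical scores
criterion_standard = [10, 20, 22, 35, 38, 47, 55]  # accepted standard

# A strong positive coefficient supports concurrent criterion validity.
r = correlation(new_instrument, criterion_standard)
print(f"Criterion validity coefficient: r = {r:.2f}")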

Construct Validity

    Construct validity has become the central type of validity
assessment. It is now thought that construct validity really subsumes all other
forms. In essence, construct validation is a creative process that rarely
achieves completion. 

    Instead, each piece of evidence adds to or detracts from
the support of construct validity, which builds with time and use. Nunnally
(1978) proposes three major aspects of construct validity: 

(a) specification of the domain of observables;

(b) the extent to which the observables tend to measure the same concept, which provides a bridge between internal consistency reliability and validity; and

(c) evidence of theoretically proposed relationships between the measured and predicted patterns.

    The first aspect is
similar to content validity
and is essentially handled through formalized
concept clarification in instrument development. The inclusion of this
specification of the domain under construct validity supports the contention
that construct validity is the primary form, with other types forming subsets
within its boundaries.

    The other two aspects of construct validity are examined formally
through a series of steps. These steps form a hypothesis testing procedure in
which the hypotheses are based on the theoretical underpinnings of the
instrument. 

    Hypotheses can relate to the internal structure of the items on the instrument. Hypotheses can also refer to the instrument's anticipated relationships with other concepts, based on a theoretical formulation. The first set of hypotheses falls under the second aspect of construct validity testing; the latter relates to the third aspect.

How to Examine Hypotheses

    Although there are no formalized ways to examine the hypotheses proposed for construct validity testing, some typical approaches have been identified in nursing research. Primarily, the internal structure of an instrument is tested through factor analysis and related factor analytic procedures, such as latent variable modeling.

    Factor analysis has become one of the major ways in which nursing researchers examine the construct validity of an instrument. It is important to note that this approach addresses only the second aspect of construct validity testing and in itself is insufficient to support the validity of an instrument.
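
    As an illustration, the sketch below applies scikit-learn's FactorAnalysis to simulated item responses with a hypothetical two-factor structure; recovering loadings close to the theorized pattern would add one piece of evidence, not a verdict.

# Minimal sketch of examining internal structure with factor analysis.
# The two-factor structure and item data are hypothetical assumptions.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 300

# Simulate responses where items 1-3 load on one latent factor and
# items 4-6 on another, as a theory might predict.
latent = rng.normal(size=(n, 2))
loadings = np.array([
    [0.8, 0.0], [0.7, 0.0], [0.9, 0.0],
    [0.0, 0.8], [0.0, 0.7], [0.0, 0.9],
])
items = latent @ loadings.T + rng.normal(scale=0.5, size=(n, 6))

fa = FactorAnalysis(n_components=2, rotation="varimax")
fa.fit(items)

# Loadings close to the theorized pattern support this aspect of
# construct validity; they do not by themselves establish it.
print(np.round(fa.components_.T, 2))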

    Factor analysis simply provides evidence
that the underlying factor structure of the instrument is in line with the
theoretically determined structure of the construct. The third aspect of
construct validation provides an opportunity for more creative approaches to
testing. 

    The hypotheses proposed concern the relationship of the concept being measured with other concepts that have established methods of measurement. These hypotheses deal with convergent and discriminant construct validity, subtypes that examine the relationship of the concept under study with similar and dissimilar concepts.

How to Find Similarities and Differences 

    If the data show a strong relationship with similar concepts and no relationship with dissimilar concepts, evidence accrues for the construct validity of the instrument. Should the data not support the expected similarities and differences, several explanations are possible:

(a) the instrument
under construction may not be accurately measuring the concept

(b) the
instruments for the other concepts may be faulty

(c) the theory on which
the testing was based may be inaccurate. 

    The multitrait-multimethod (MTMM) matrix has been proposed as a way to formally test convergent and discriminant construct validity.
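
    The core logic can be shown with simple correlations. In the hypothetical Python sketch below, the new measure should correlate strongly with a measure of a similar concept (convergent) and weakly with a dissimilar one (discriminant); a full MTMM design extends this by crossing several traits with several measurement methods.

# Minimal sketch of convergent and discriminant validity checks.
# All scores below are hypothetical.
from statistics import correlation  # Python 3.10+

new_measure   = [10, 14, 19, 23, 30, 34, 41]
similar_trait = [12, 15, 18, 25, 29, 36, 40]  # theoretically related
dissimilar    = [22, 35, 14, 30, 26, 12, 28]  # theoretically unrelated

r_convergent = correlation(new_measure, similar_trait)
r_discriminant = correlation(new_measure, dissimilar)

# Convergent r should be high; discriminant r should be near zero.
print(f"convergent r = {r_convergent:.2f}, "
      f"discriminant r = {r_discriminant:.2f}")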

    Another approach to examining the relationships among concepts is the known-groups technique. In this method, the researcher hypothesizes that the instrument will yield a predictable level of scores from groups with known levels of the concept the instrument has been designed to measure.
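
    For example, an anxiety instrument should score markedly higher in a group known to be highly anxious than in a relaxed comparison group. The hypothetical sketch below tests that expectation with a simple t-test.

# Minimal sketch of the known-groups technique. Group labels and
# scores are hypothetical.
from statistics import mean
from scipy.stats import ttest_ind

# e.g., an anxiety scale given to pre-operative patients (expected high)
# and healthy volunteers (expected low)
high_group = [38, 42, 35, 40, 44, 39, 41]
low_group = [21, 18, 25, 20, 23, 19, 22]

t_stat, p_value = ttest_ind(high_group, low_group)

# A significant difference in the hypothesized direction adds evidence
# for construct validity.
print(f"high mean = {mean(high_group):.1f}, low mean = {mean(low_group):.1f}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")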

    The approaches above are only a sample of the techniques that can be used to test construct validity. As mentioned, construct validity testing is creative; researchers can design unique ways to support the validity of their instruments. The important point is that whatever is designed must be based on theory and must be intuitively and logically supported by the investigator.