Assess Validity of Data V: Data Validity and Methods
Data Triangulation with Multiple Methods
Although Denzin’s (1989) seminal work discussed these four types of triangulation as a method of converging on valid understandings about a phenomenon, other types have been suggested. For example, Kimchi, Polivka, and Stephenson (1991) have described analysis triangulation (i.e., using two or more analytical techniques to analyze the same set of data).
This approach offers another opportunity to validate the meanings inherent in a qualitative data set. Analysis triangulation can also involve using multiple units of analysis (e.g., individuals, dyads, families). Finally, multiple triangulations are used when more than one of these types of triangulations is used in the collection and analysis of the same data set.
In summary, the purpose of using triangulation is to provide a basis for convergence on the truth. By using multiple methods and perspectives, researchers strive to sort out “true” information from “error” information, thereby enhancing the credibility of the findings.
Peer Debriefing
Another technique for establishing credibility involves external validation. Peer debriefing involves a session with peers to review and explore various aspects of the inquiry. Peer debriefing exposes researchers to the searching questions of others who are experienced in either the methods of naturalistic inquiry, the phenomenon being studied, or both.
In a peer debriefing session, researchers might present written or oral summaries of the data that have been gathered, categories and themes that are emerging, and researchers’ interpretations of the data. In some cases, taped interviews might be played. Among the questions that peer debriefers might address are the following:
- Is there evidence of researcher bias?
- Have the researchers been sufficiently reflective?
- Do the gathered data adequately describe the phenomenon?
- If there are important omissions, what strategies might remedy this problem?
- Are there any apparent errors of fact?
- Are there possible errors of interpretation?
- Are there competing interpretations? More comprehensive or parsimonious interpretations?
- Have all important themes been identified?
- Are the themes and interpretations knit together into a cogent, useful, and creative conceptualization of the phenomenon?
Member Checking
Lincoln and Guba consider member checking the most important technique for establishing the credibility of qualitative data. In a member check, researchers provide feedback to study participants regarding the emerging data and interpretations and obtain participants’ reactions.
If researchers purport that their interpretations are good representations of participants’ realities, participants should be given an opportunity to react to them. Member checking with participants can be carried out both informally in an ongoing way as data are being collected, and more formally after data have been fully analyzed.
Member checking is sometimes done in writing. For example, researchers can ask participants to review and comment on case summaries, interpretive notes, thematic summaries, or drafts of the research report. Member checks are more typically done in face-to-face discussions with individual participants or small groups of participants.
Many of the questions relevant to peer debriefings are also appropriate in the context of member checks. Despite the role that member checking can play in enhancing credibility and demonstrating it to consumers, several issues need to be kept in mind.
One is that some participants may be unwilling to participate in this process. Some, especially if the research topic is emotionally charged, may feel they have attained closure once they have shared their concerns, feelings, and experiences. Further discussion might not be welcomed.
Others may decline to be involved in member checking because they fear it might arouse the suspicions of their families. Choudhry (2001) encountered this in her study of the challenges faced by elderly women from India who had immigrated to Canada. When Choudhry asked participants for a second interview to examine the transcripts of their first interviews, the participants refused.
They feared that a second visit to their homes might arouse suspicions among their family members and increase their sense of loss and regret. A second issue is that member checks can lead to misleading conclusions of credibility if participants “share some common myth or front or conspire to mislead or cover up” (Lincoln & Guba, p. 315).
At the other extreme, some participants might express agreement (or fail to express disagreement) with researchers’ interpretations either out of politeness or in the belief that researchers are “smarter” or more knowledgeable than they themselves are.
Searching for Disconfirming Evidence
The credibility of a data set can be enhanced by the researcher’s systematic search for data that will challenge an emerging categorization or descriptive theory. The search for disconfirming evidence occurs through purposive sampling methods but is facilitated through other processes already described here, such as prolonged engagement and peer debriefings.
The purposive sampling of individuals who can offer conflicting accounts or points of view can greatly strengthen a comprehensive description of a phenomenon. Lincoln and Guba (1985) refer to a similar activity, negative case analysis, a process by which researchers revise their interpretations by including cases that appear to disconfirm earlier hypotheses.
The goal of this procedure is to continuously refine a hypothesis or theory until it accounts for all cases.
Researcher Credibility
Another aspect of credibility discussed by Patton (2002) is researcher credibility, that is, the faith that can be put in the researcher. In qualitative studies, researchers are the data collecting instruments as well as creators of the analytic process.
Therefore, researcher qualifications, experience, and reflexivity are important in establishing confidence in the data. It is sometimes argued that, for readers to have confidence in the validity of a qualitative study’s findings, the research report should contain information about the researchers, including information about credentials.
In addition, the report may need to make clear the personal connections they had to the people, topic, or community under study. For example, it is relevant for a reader of a report on the coping mechanisms of AIDS patients to know that the researcher is HIV positive.
Patton argues that researchers should report “any personal and professional information that may have affected data collection, analysis and interpretation either negatively or positively.”
Dependability
The second criterion used to assess trustworthiness in qualitative research is dependability.
The dependability of qualitative data refers to the stability of data over time and over conditions. This is conceptually similar to the stability and equivalence aspects of reliability assessments in quantitative studies (and also similar to time triangulation). One approach to assessing the dependability of data is to undertake a procedure referred to as stepwise replication.
This approach involves having a research team that can be divided into two groups. These groups deal with data sources separately and conduct, essentially, independent inquiries through which data can be compared. Ongoing, regular communication between the groups is essential for the success of this procedure.
Another technique relating to dependability is the inquiry audit. An inquiry audit involves a scrutiny of the data and relevant supporting documents by an external reviewer, an approach that also has a bearing on the confirmability of the data, a topic we discuss next.
Read More:
https://nurseseducator.com/assess-validity-of-data-iv/
https://nurseseducator.com/assess-the-validity-of-data-part-iii/