The trustworthiness of qualitative data analysis is often hard to evaluate, because the data we analyze are unstructured and influenced by context. For example, the themes discovered from an interview depend on the state of the interviewee, the skills of the interviewer, and the coder analyzing the transcript. Even if a skilled researcher conducts a scripted, structured interview with the same participant, they could get different responses on different days, and the responses could be coded differently by different individuals. Even though we cannot always replicate the results of qualitative data analysis, there are strategies we can use to maximize trustworthiness. We will go over some of them now.
Triangulation
Triangulation is the use of multiple sources of information to create trustworthy data. You could triangulate using multiple methods, sources, researchers, and even theories. For example, imagine we are using interviews to study the experiences of gender non-conforming adolescents in Vermont public schools.
- In addition to interviews, we could collect quantitative data using closed-ended surveys and engage in classroom observation. This would be considered method triangulation.
- We might also repeat the interview process in a number of different schools to triangulate sources.
- We might have peer researchers code and interpret the data and then compare our conclusions.
- Finally, we might answer our research question using different theories of gender as a starting point.
Intercoder Reliability
As mentioned in our discussion of triangulation, interview data are often coded by more than one coder. The more agreement among researchers on how to categorize and code the data, the more reliable the resulting dataset is. In this case, it can be helpful to develop a codebook, which contains a set list of codes and outlines the scenarios in the data that merit each code.
The codebook may either be created by the primary researcher or co-created by a team of researchers through a negotiation process. To do this, you and your colleagues usually code an identical section of the data separately, and then compare your coding and agree on criteria that resolve your disagreements.
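A codebook is essentially a structured lookup: each code paired with a definition and the scenarios that merit it. As a minimal sketch (the codes, definitions, and criteria below are hypothetical illustrations, not drawn from any real study), it could be represented like this:

```python
# A minimal codebook sketch: each code maps to a definition and
# the scenarios the team has agreed merit that code.
# (All entries here are hypothetical examples.)
codebook = {
    "anxiety/fear": {
        "definition": "Participant expresses worry, dread, or fear.",
        "apply_when": [
            "mentions being scared or concerned about an event",
            "describes anticipatory worry about future events",
        ],
    },
    "disorientation": {
        "definition": "Participant describes confusion or dulled perception.",
        "apply_when": [
            "reports not understanding what is happening in the moment",
            "describes impaired senses (e.g., just woken from sleep)",
        ],
    },
}

def lookup(code):
    """Return the agreed definition and application criteria for a code."""
    entry = codebook[code]
    return entry["definition"], entry["apply_when"]

definition, criteria = lookup("disorientation")
print(definition)
```

Keeping the codebook in one shared, structured document (whether a spreadsheet, a text file, or a data structure like the one above) makes it easy to update entries as the team negotiates new criteria.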
For Example:
In their study about experiences with earthquakes, Joffe and O’Connor (2020) were analyzing this segment of a participant’s response:
When an earthquake occurs in the evening, when you’re sleeping that, that’s always been a concern to me. Because you’re not awake you’re wondering am I having a bad dream, er, what’s happening here, and so your senses aren’t as sharp as they would be during a daytime event.
Researcher A coded this segment as “anxiety/fear”, while researcher B coded it as “disorientation”. The two could discuss the segment and choose one of the codes, or come up with a new code that fits both of their understandings.
- If they agree on coding this segment as “disorientation”, researcher A should go back to the sections they have already coded to see whether similar segments were coded as “anxiety/fear” and need to be changed. This is why it is important to compare codes frequently and build intercoder reliability with your colleagues early in the analysis.
- After this conversation, the researchers will also specify in the codebook all other situations where “disorientation” applies, so that there will be less ambiguity and fewer disagreements moving forward.
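Agreement between two coders can also be quantified. Two common measures are simple percent agreement and Cohen's kappa, which corrects for agreement expected by chance. The sketch below uses hypothetical codes assigned by two coders to the same six segments (these are illustrative values, not data from the Joffe and O’Connor study):

```python
from collections import Counter

def percent_agreement(codes_a, codes_b):
    """Share of segments the two coders labeled identically."""
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

def cohens_kappa(codes_a, codes_b):
    """Agreement corrected for chance: (p_o - p_e) / (1 - p_e)."""
    n = len(codes_a)
    p_o = percent_agreement(codes_a, codes_b)
    freq_a = Counter(codes_a)
    freq_b = Counter(codes_b)
    # Expected chance agreement from each coder's marginal code frequencies.
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned to the same six segments by two coders.
coder_a = ["fear", "disorientation", "fear", "coping", "fear", "coping"]
coder_b = ["fear", "fear", "fear", "coping", "fear", "coping"]

print(round(percent_agreement(coder_a, coder_b), 3))  # → 0.833
print(round(cohens_kappa(coder_a, coder_b), 3))       # → 0.7
```

Note that kappa (0.7) is lower than raw agreement (0.833) because some matching labels would be expected even if the coders guessed from their own code frequencies; this is why kappa is often preferred when reporting intercoder reliability.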
In addition to discussing and resolving the disagreements, it is helpful to note in a shared memo what the disagreements were and how they were rectified. This will provide another layer of trustworthiness.
Negative Case Analysis
You will more than likely have certain cases that deviate from emerging themes. It is important not to ignore these cases or impose codes on them. Rather, you can ask yourself what you can learn from these cases and why they may be different from other information you have gathered. You may want to revise your interpretation of the data and try to provide an explanation for all cases, including negative ones. Although these negative cases need not always fit with your conclusion, including them helps you construct a comprehensive view of the topic.
For example, imagine you are a researcher interviewing residents of a small town about a new railroad station being built there. Most participants express excitement about the construction, conveying their enthusiasm about being able to easily travel to the neighboring city. One participant, however, seems distressed over the railroad. As it turns out, they are a night-shift worker worried about the noise the train will make when they are trying to sleep.
Instead of ignoring this outlier, think about the additional insights they can add to your data. Might this same issue of noise pollution affect others in the town? How can you incorporate their perspective into your analysis? After hearing what this participant has to say, you may choose to ask others about their thoughts on the effect of noise pollution from trains on their quality of life.
Member Checking
The technique of member checking, or respondent validation, requires you to take your interpretation of the data back to participants for verification. Therefore, you need to make it clear how and why you will use member checks during the informed consent process – before you collect data from them.
You can refer to the table below for some ways in which member checks could be done, their respective advantages, and potential risks.
| Checking Method | Advantages | Risks |
|---|---|---|
| Sharing collected data with participants without follow-up discussions | Allows you to confirm, verify, and modify the data | Does not address the trustworthiness of your analysis; risks participants wanting to change their words or records of their behavior |
| Follow-up interviews to discuss collected data | Allows you to confirm, verify, and modify the data | Does not address the trustworthiness of your analysis; risks participants wanting to change their words; risks coercion into agreement due to the researcher's presence |
| Follow-up interviews to discuss interpreted data | Allows you to confirm, verify, and modify your interpretation; allows participants to check for compromised anonymity | Risks coercion into agreement |
| Follow-up group interviews to discuss interpreted data | Allows you to confirm, verify, and modify your interpretation; establishes a support system among participants | Risks compliance due to the presence of other participants |
| Returning analyzed and synthesized data to participants without follow-up interviews | Allows participants to confirm that their experiences are reflected in the synthesized themes; addresses the trustworthiness of the entire study and its overarching themes | |
Peer Review
One of the most helpful ways to check how trustworthy your findings are is to have another person assess your interpretation. Peer review can take place at any point during the research process, but is especially important during data analysis when you are at risk of imparting your own biases onto the data. Peer review can uncover some of these hidden biases and find general errors and issues that you may have overlooked. The ideal peer reviewer is impartial and has some training in qualitative research, but they need not be an expert in your particular field.
Peer review happens in what researchers call debriefing sessions. During these sessions, the reviewer asks questions about the methods of analysis, coding decisions, and interpretations. If you ever become a reviewer for a colleague, it is important to ask hard and genuine questions during these sessions. And as a researcher, being open to criticism and documenting highlights from these sessions will help you interpret your work more impartially later.
Providing an Audit Trail
An audit trail is a traceable account of your research process. You describe your data collection and coding process and explain how and why you made analytic decisions. This requires you to take notes throughout the analysis process to document what you did and why you did it.
Through being transparent about your research process, you present yourself as a dependable investigator and allow readers to accurately evaluate your research and interpretations. Researchers will often summarize their research and analytical processes in a methods section while also attaching a full or distilled audit trail in the appendix. If presenting research findings in a lecture or poster format, you would still prepare a full audit trail for anyone who is interested in further details and be ready to answer questions about your method.
Reflexivity
Reflexivity in the research process means recognizing your biases and your personal influence upon the study. You can keep a reflexive journal about your analytic decisions, how your own biases may have impacted them, what confused you in the data, and how you are attempting to make meaning of it. Being reflexive not only builds accountability and trustworthiness for your project but also invites your personal growth as a scholar and allows you to conduct even better research in the future.
Personal Project
Now that we have walked through some methods of ensuring trustworthy qualitative analysis, you may choose a few of them to apply in your own project to ensure that you are confident in your analysis. If you prefer thinking about these criteria visually, you can check out this video on trustworthiness in qualitative research: