Don’t save your analysis for your Discussion section; your findings section should contain analysis, not just a summary of what you found. The Discussion puts your findings in conversation with the broader literature (often stuff from your lit review, but sometimes other stuff).
Something I see a lot in qualitative research is a lack of analysis. Basically a rundown of what’s in the data, but without doing anything with it (analysis). It’s the qualitative equivalent of descriptive stats. Short thread.
Some tricks to move past this: go back to your research question. Some RQs are purely descriptive; that can be okay in some settings, but it's usually less exciting.
On the purely descriptive front, too often I'm seeing variations of trying to convince the audience that something is good or bad, which is not often good social science. That's argument-driven rather than RQ-driven work.
If you’re lost in description land, ask yourself: What are you trying to explain (not just describe)? What outcome are you trying to explain? What do you think is causing it, based on your data? What does the lit suggest is causing it?
Probably most importantly: What is/are the mechanism(s) through which some factor or variable affects the outcome of interest?
If you are lost for inspiration and the literature isn't inspiring you, work more inductively: write memos about your data. (See the chapter in #RockingQualSocSci.) What observations are you generating? What insights? What are they telling you about how things work, not just how things are?
What similarities and differences do you see across groups or settings, not just in your data but also across relevant groups or settings in the literature, or across literatures? Think about possible framings (see the other chapter in #RockingQualSocSci) and how to connect your work.
But don’t stop at telling us what’s in your dataset(s). Don’t stop with your content analysis. Do something with it. If you don’t know how, check your RQ: make sure you have one (even a broad one); otherwise, it may need some work. Or check the lit for inspiration.
What’s something the lit claims about how things work (which is about cause and effect) that you see playing out in your data, or that your data suggest is wrong or more nuanced, or where you see a different variable or mechanism at work?
There’s certainly a place for description, but too often, a paper that just has description and no analysis feels incomplete, like we’re waiting to read step two, but it never comes. And analysis is hard. I get it. But there are ways to move forward and make your paper stronger.
Over the last few days, I’ve gotten some pushback, especially from critical folks, objecting to how positivist I sound here. You can still use this advice to make critical claims. But you have to do more than tell the reader how much someone’s life sucks; show/explain why it sucks.
Show me how whatever variable(s) of oppression you care about are functioning in their lives and how they are leading to the bad outcomes. Don’t just assume they are doing that work. Again, that’s the qual equivalent of a correlational study, and qual work is great at unpacking those relationships.
I know there are differing views on this, but they are wrong. CITE to your data. Don’t have us take your word for it; cite it. If you have interview transcripts, cite them. Docs? Cite them. Field notes? Cite them. Do what you need to do to maintain confidentiality, fine, but cite!
And there are lots of tricks for confidentiality. Lawyer in City A. Mid-level employee in Organization 2. “Sunnyside” resident 24. Use what’s relevant: demos, job, location, time. But always cite to your source (fieldnotes, interview transcript, doc, etc.).
You can use pseudonyms (sometimes your participants can choose their own), but make clear that they are in fact made-up names. Don’t make the reader wonder, “Oh shit, did they just name this person outright?”
A big issue for documents: cite your documents in such a way that a curious reader could look them up (get on a plane, drive up a dirt road, pull up a URL, search a database).
Replication has its own set of problems, but it is a good theoretical model for thinking about how to cite things: could we (in theory) reproduce your project based on the info you gave us? Could we find your sources (or, in some cases, sources like them) and do something similar?
File this under a larger need to SHOW, not just tell, when presenting your findings.
Just as we tell our students, “If you are paraphrasing an author’s argument, you still have to cite them,” so too if you are paraphrasing your fieldnotes, you still have to cite them.
Show us you’re not doing this shit from memory!
Something I wish more qualitative studies would do is start off with an overview of the findings. In some studies, this might be an array of the different types of behaviors, activities, or attitudes people exhibited. Then, move into the deeper dives into individual examples.
I really like to see that breadth before we get depth. In some cases, we never get breadth, just depth, and I think it can undersell the richness of the data.
There are probably exceptions where this wouldn’t work, but I definitely see a lot of cases where it would really make the piece better.
Of course, the flip side is when one only gives a rundown/overview of their findings without ever actually going into detail or giving an example or analysis, which I’ve criticized before.
Also, I’m realizing my original post was unclear: yes, you should include an intro paragraph to your findings that lays out your analysis or argument based on your data, but I also mean show me the breadth and then zero in on the specific things you want to highlight.
Another problem I sometimes see in qualitative work is dangling quotations: a quotation from the dataset with nothing following it, i.e., a quote used to close out a paragraph. Doing this occasionally is fine, but usually you want to tell the reader why you included it.
Likewise, it’s a good idea to close out a section with some sort of concluding remark (not necessarily a whole paragraph, but at least a sentence) to transition to the next section.