
Let’s talk about data analysis…

Updated: Dec 11, 2023

In this blog post, Dr Ruth Abrams, Georgette Eaton, Claire Duddy, Dr Hannah Kendrick and Dr Jo Howe sit down (virtually) to discuss how they each approach data analysis when using realist approaches…


Ruth:

When I first started doing realist reviews and evaluations, trying to understand the process of data analysis felt like a black box in itself. There isn’t a specific realist guide or step-by-step method for analysing data that is particularly accessible. Whilst I had incredibly wise guidance from Dr Geoff Wong, I also resorted to what I knew best: skills developed from qualitative data analysis more generally. In that respect my go-to method has always been Braun and Clarke’s thematic analysis. This approach lays out the basics whilst allowing for nuance and complexity, and it can be used within any paradigm. It is also the approach I signpost my students to most, because it is accessible for first-timers.


I did the thing that most newcomers to realist approaches do: I started coding for contexts, mechanisms and outcomes across the data, and used these as broad themes within which to categorise my codes. This was such an unhelpful way to approach data, and I would urge anyone using realist approaches to avoid the temptation. First, it forces you to categorise your data far too early; second, it makes forming configurations later on much more complicated than it needs to be. I now code much more broadly, generating as many codes as possible across all my data points and then integrating these into broader themes that are not related to C-M-Os. It is then within each broad theme that I start to look for my contexts, mechanisms and outcomes. I form configurations within each theme before starting to map what this looks like across themes.
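For anyone who finds a worked example easier to follow, here is a minimal sketch in Python of the ordering Ruth describes: broad open codes first, then integration into themes, and only then a search for C-M-Os within each theme. Every code, theme and data point in it is invented purely for illustration; it is a way of picturing the sequence, not a tool she used.

```python
# A hypothetical sketch of Ruth's ordering: open codes, then themes,
# then (and only then) reading within each theme for C-M-Os.
# All codes, themes and sources below are invented for illustration.

from collections import defaultdict

# Step 1: open coding across all data points -- no C-M-O labels yet.
coded_excerpts = [
    {"source": "interview_01", "code": "staff workload", "text": "..."},
    {"source": "interview_02", "code": "trust in the service", "text": "..."},
    {"source": "fieldnotes_03", "code": "staff workload", "text": "..."},
]

# Step 2: integrate codes into broader themes (still not C-M-Os).
theme_map = {
    "staff workload": "organisational pressures",
    "trust in the service": "relationships",
}

themes = defaultdict(list)
for excerpt in coded_excerpts:
    themes[theme_map[excerpt["code"]]].append(excerpt)

# Step 3: only now, within each theme, read for contexts, mechanisms
# and outcomes, and begin forming configurations.
for theme, excerpts in themes.items():
    print(f"Theme '{theme}': {len(excerpts)} excerpt(s) to read for C-M-Os")
```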


Georgette:

I agree with Ruth – I think it can be a hard approach initially, as it is different to traditional thematic synthesis methods. I was also fortunate to have advice from Dr Geoff Wong, lead author of the RAMESES reporting standards for realist syntheses and realist evaluations. Geoff advised me to think about my data in terms of ‘buckets’: thinking broadly about areas of similarity (and associated areas of divergence). In doing this, I had a range of ‘buckets’ from which I could draw contexts, mechanisms and outcomes narratively – and I could see where these interlinked. I’m a big visual learner, so the first time I did this I used flipchart paper and lots of pens, and that really helped me to eventually draw the C-M-Os out of each bucket, and then across the buckets.


Claire:

It’s really interesting to talk about this with other realist researchers, because there seems to be a real range of approaches out there. I think that, like Ruth, people are often influenced by what they know and how they have approached other projects. I am naturally a bit of a pluralist – I always tend to think there are probably lots of different ways of getting to the same end point.


My own approach is similar to what Ruth has described – aiming to code data into broad themes as a starting point. Geoff Wong’s influence means that I am inclined to call them “bucket codes”, like Georgette! I see this primarily as a process of organising my data and becoming more familiar with what I’ve got to work with. I am not necessarily strict or tidy about it – I am happy for the same bit of data to appear in multiple “buckets” and for the coding framework to shift and evolve as I work. On the NHS Health Checks realist review project, I had an early code called “post Health Check”. As I started to fill it up with data, I ended up splitting it into buckets for “referrals”, “advice”, and so on. This area later became the focus of the entire project. There were definitely no CMOCs at this stage – just a process of grouping data together and working out an appropriate level of granularity.

It was much easier later on to start building CMOCs, working with one bucket at a time. One of the main reasons I think this approach is helpful is that it means you are working across documents when you start to build CMOCs. Trying to look for CMOCs within single documents is very difficult – sometimes they just aren’t there. It is in looking across multiple sources that I start to see patterns of contexts, mechanisms and outcomes.


One other thing I think can be overlooked is the value of writing the narrative of the results. For me, this is when things really start to come together. CMOCs often seem a bit bland on their own; writing about them is where they start to fit together and mean something.


Hannah:

When I first began analysing my PhD data for context-mechanism-outcome configurations, my main difficulty was understanding the role of values, politics and social structure within my data. I wanted a way of analysing my data that accounted for real processes, events and causal mechanisms, whilst understanding their relationship with discursive and structural factors. To do this I drew on Fairclough’s (2003) critical discourse analysis, applying it to organisational documents and ‘communicative events’ such as staff meetings and training sessions. This allowed me to identify the main discourses that were framing the service change and being introduced to staff at an organisational level.


I analysed the rest of my data (interviews and observations), similarly to others here, by coding for C-M-Os. Like Ruth, I was concerned about the data becoming fragmented and about narrowing down CMOs too soon, so I coded more loosely within broad themes. I now know from Claire and Georgette that these can be described as ‘buckets’! I did, however, find it useful to label my codes as contexts, mechanisms or outcomes, whilst using memo writing, popular amongst grounded theorists, to note analytical and theoretical insights about how one element of a CMO might be linked to another. Memo writing also allowed me to make theoretical notes about how elements of the CMO related to the discursive framing of the service change. For example, did staff’s behavioural responses to resources reinforce or perpetuate the dominant discourse, or did they highlight resistance – and what were the implications of this? It was quite a messy process, but eventually I was able to build up CMOs that were situated within their discursive and structural context. I completely agree with Claire that it’s within the written narrative that CMOs become most interesting!


Jo:

One of the things that has struck me talking to realist researchers over the years is that coding and synthesis are an individual endeavour. When I first embarked upon my PhD analysis, I was overwhelmed; 55 hour-long interviews generate a lot of data. I stumbled across a blog written by Dr Sonia Dalkin of Northumbria University on using NVivo in realist studies. Dalkin advised coding at programme theory level, which was a lightbulb moment for me and made the task more manageable.


I created nodes for each programme theory and referred to them as ‘bins’ throughout my PhD. This is a high-level task: it can be completed relatively quickly and gives a good overview of the whole dataset. The next stage of coding was more inductive and more time-consuming. A key feature of my thinking at this point was understanding which contexts and mechanism resources were driving outcomes, and I created mini bins (child nodes) to reflect this. Some sections of text were coded to several mini bins simultaneously, and may or may not have contained some or all of the elements of context, mechanism and outcome.

I then continued in MS Word, one bin at a time, and created tentative CMOC tables with corresponding sources of evidence from the interview data. Sometimes the mechanism reasoning component was clearly articulated, sometimes not. This process provided a holistic view of emerging CMOCs and allowed me to amalgamate several. These CMOC tables provided the basis for the narrative explanations in my findings chapters. In reality, the final CMOCs are different to these initial tables, but this reflects the continual, iterative refinement that occurs in realist studies.
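To make the shape of this concrete, here is a minimal sketch in Python of the structure Jo describes: excerpts coded to programme-theory bins and mini bins (an excerpt can sit in several), then gathered one bin at a time into a tentative CMOC table. Every name and data point in it is invented for illustration – Jo worked in NVivo and MS Word, not in code.

```python
# A hypothetical sketch of bins, mini bins and tentative CMOC tables.
# All programme theories, excerpts and CMOC content are invented.

from dataclasses import dataclass, field

@dataclass
class Excerpt:
    source: str                                    # e.g. an interview transcript
    text: str
    bins: list = field(default_factory=list)       # programme-theory bins
    mini_bins: list = field(default_factory=list)  # child nodes

@dataclass
class TentativeCMOC:
    context: str
    mechanism: str                                 # may be blank if reasoning is not yet articulated
    outcome: str
    evidence: list = field(default_factory=list)   # supporting sources

excerpts = [
    Excerpt("interview_07", "...", bins=["PT1: shared decision-making"],
            mini_bins=["information as a resource"]),
    Excerpt("interview_23", "...", bins=["PT1: shared decision-making"],
            mini_bins=["information as a resource", "trust"]),
]

# Work one bin at a time, drafting CMOCs alongside their sources of evidence.
bin_name = "PT1: shared decision-making"
in_bin = [e for e in excerpts if bin_name in e.bins]
draft = TentativeCMOC(
    context="patients are given time to ask questions",
    mechanism="",  # reasoning not yet clearly articulated in the data
    outcome="greater engagement with the care plan",
    evidence=[e.source for e in in_bin],
)
print(draft)
```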


If you would like to guest blog for our site, please email r.abrams@surrey.ac.uk. We welcome short pieces (500–700 words) on your bugbears, top tips or anything else realist-inspired.


Please cite this blog as: Abrams, R., Eaton, G., Duddy, C., Kendrick, H. and Howe, J. (2023) Let's talk about data analysis..., Realist Health & Social Care SIG, Dec. Available at: https://rabrams0.wixsite.com/realistworkforce-sig/post/let-s-talk-about-data-analysis


Resources mentioned in this blog:

  • Braun, V. and Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. DOI: 10.1191/1478088706qp063oa

  • Dalkin, S. et al. (2020). Using computer assisted qualitative data analysis software (CAQDAS; NVivo) to assist in the complex process of realist theory generation, refinement and testing. International Journal of Social Research Methodology. DOI: 10.1080/13645579.2020.1803528

  • Fairclough, N. (2003). Analysing Discourse: Textual Analysis for Social Research. London: Routledge.

  • Talk delivered by Sonia Dalkin to Notts Realism on the above.
