Guest post #18: Dr Joanna Fadyl: Seeing the changes that matter: QLR focused on recovery and adaptation

Dr Joanna Fadyl is a Senior Lecturer and Deputy Director of the Centre for Person Centred Research at Auckland University of Technology in New Zealand. Her expertise is in rehabilitation and disability. Here, she reflects on the experiences of the group of researchers who worked on the ‘TBI experiences study’ – Qualitative Longitudinal Research (QLR) about recovery and adaptation after traumatic brain injury (TBI) – co-led by Professor Kathryn McPherson and Associate Professor Alice Theadom.  The team came to QLR as qualitative researchers who saw a need to capture how recovery and adaptation shifted and changed over time, in order to better inform rehabilitation services and support.

At the start of the study they had limited understanding of the challenges the nature of QLR would bring, but in ‘working it out by doing it’ they saw the immense value of such an approach, and indeed some of the authors have since been involved in other QLR projects.

Seeing the changes that matter: QLR focused on recovery and adaptation

For QLR, our data collection period (48 months in total) was relatively short. Our focus was on understanding what helped or hindered recovery and adaptation for people with TBI and significant others in their lives (family and close community). However, with 52 participants (and their significant others), the volume of data was substantial. We interviewed our participants at 6, 12, 24 and 48 months after a TBI; at 48 months we interviewed a subset of participants with diverse experiences.

The focus for our analytical approach was a type of thematic analysis based on Kathy Charmaz’s writing on grounded theory. The purpose of our research was to build a picture of what recovery and adaptation looks like for a cohort of people over time. While we did do some analysis of ‘case sets’ (the series of interviews relating to a particular person) to understand and contextualise aspects of their stories, the focus of analysis was not as much on individuals as it was on looking at patterns across the participant group.

Of course, making sense of a large amount of rich data is always challenging, but the added dimension of change over time was something we spent a lot of time pondering. Because we were interested in exploring recovery and adaptation – and we were particularly interested in how this presented across a cohort – one of the biggest challenges was to find strategies to make the changes we were interested in visible in our coding structure so we could easily see what was happening in our data over time. We chose to set up an extensive code structure during analysis at the first time-point, and work with this set of codes throughout, adapting and adding to them at further time-points. We reasoned that this would enable us to track both similarities and differences in the ways people were talking about their experiences over the various time-points. Indeed, it has made it possible to map the set of codes themselves as a way of seeing the changes over time. To make this work well, we used detailed titles for the codes and comprehensive code descriptions that included examples from the data. At each time-point the code descriptions were added to, reflecting changes and new aspects, and at each time-point consideration was given to which particular codes were out-dated and/or had shifted enough to be inconsistent with previous titles and descriptions. We also considered the new codes that were needed.

I will illustrate with an example. A code we labelled ‘allowing me to change what I normally do to manage symptoms and recover’ at 6 months needed extensions to the code description at 12 months to reflect subtle changes. Beyond that, although data still fitted with the essence of the code that had been developing over time, we began to question the ongoing appropriateness of the code title. The later data related to the same idea, but it was no longer about managing symptoms so much as about navigating the need to do things differently than before the injury in order to cope with changes. This way of working with the code enabled us to reflect on the experience and processes for participants relating to ‘allowing me to change what I normally do’ over time. At the 24-month point the code was ‘in transition’: not quite a new code yet, but different enough to be an uncomfortable fit with the original title and description. The description now included this query and ideas that might help us reconsider it in light of new data in the future.

When analysing the interviews with participants at 48 months, it was apparent that the data related to this idea had changed, and it was clear that it no longer fitted the existing code title or description. We needed to consider introducing a new code, one that had a key relationship with the existing one but captured the essence of our findings more clearly. Essentially, the idea of ‘changing what I normally do’ had expired because there was less of a tendency to refer to pre-injury activities as ‘what I normally do’. However, negotiating having to do things differently than other people in order to manage life was still an issue for the participants who were experiencing ongoing effects. The change in the codes over time and the relationship between the ‘old’ and ‘new’ codes were very visible using this system. The extensive code descriptions helped orientate us to the interview extracts that were most influential in shaping a code, and the database we set up for recording our coding allowed us to create reports of every extract given a particular code, so we could review and debate the changes with reference to the key data and the general ‘feel’ of what was coded there.
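One way to picture this system is as a versioned codebook. The Python sketch below is purely illustrative: the team do not describe using software for this step, and the class, field names and shortened descriptions are hypothetical. It simply shows how keeping a per-time-point description history plus a status flag makes the relationship between ‘old’ and ‘new’ codes visible.

```python
from dataclasses import dataclass, field

@dataclass
class Code:
    """An analytic code carried forward and revised across time-points."""
    title: str
    # Description history keyed by time-point (months post-injury), so
    # earlier versions stay visible alongside later revisions.
    descriptions: dict = field(default_factory=dict)
    status: str = "active"   # "active", "in transition", or "retired"
    successor: str = ""      # title of the code that eventually replaces this one

old = Code("allowing me to change what I normally do")
old.descriptions[6] = "Changing routines to manage symptoms and recover."
old.descriptions[12] = "As above, with subtle shifts in how this is managed."
old.descriptions[24] = "Query: less about symptoms, more about doing things differently."
old.status = "in transition"

# At a later time-point the idea no longer fits: retire the code
# and record its relationship to the new one.
new = Code("negotiating doing things differently in order to manage life")
old.status, old.successor = "retired", new.title

# Mapping the description history makes the drift in meaning visible.
assert list(old.descriptions) == [6, 12, 24]
```

The design choice mirrored here is that nothing is overwritten: earlier titles and descriptions remain on record, so the evolution of a code can itself be treated as data.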

Another key strategy we used to help us explore the data over time was the use of data visualisation software. The software we used (QlikSense) is designed for exploring patterns in data and then directly drilling down into the relevant detail to look at what is going on (as opposed to seeing an overview – we did our overviews on paper). One example is where codes and groups of codes varied in their prominence (e.g. coding density or number of participants who contributed to the code) across different time-points. Seeing these differences prompted us to look at the code descriptions and the data coded there to consider if this pattern added to our understanding of how people’s experiences were changing over time. We provide some more detailed examples of different patterns we explored in the paper that was published in Nursing Inquiry in 2017. The paper also gives some more detail and a slightly different perspective on some of the other discussion in this post. We invite you to read the paper and contribute to the conversation!
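As a toy illustration of the prominence measures mentioned above (coding density and the number of participants contributing to a code at each time-point), the following Python sketch is hypothetical: it is not QlikSense, and the extracts and code titles are invented. It shows the kind of counts such a visualisation would plot before a researcher drills down into the underlying data.

```python
from collections import defaultdict

# Hypothetical coded extracts: (participant_id, timepoint_months, code_title).
extracts = [
    ("p01", 6, "managing symptoms"), ("p02", 6, "managing symptoms"),
    ("p01", 12, "managing symptoms"), ("p03", 24, "doing things differently"),
    ("p01", 24, "doing things differently"), ("p02", 24, "doing things differently"),
]

# For each (code, time-point): coding density (number of extracts)
# and the set of distinct contributing participants.
density = defaultdict(int)
participants = defaultdict(set)
for pid, timepoint, code in extracts:
    density[(code, timepoint)] += 1
    participants[(code, timepoint)].add(pid)

# A shift in prominence across time-points is a prompt to re-read the
# code description and the extracts behind it, not a finding in itself.
assert density[("managing symptoms", 6)] == 2
assert len(participants[("doing things differently", 24)]) == 3
```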

Fadyl, J. K., Theadom, A., Channon, A., & McPherson, K. M. (2017). Recovery and Adaptation after Traumatic Brain Injury in New Zealand: Longitudinal qualitative findings over the first two years. Neuropsychological Rehabilitation (open access)

Fadyl, J. K., Channon, A., Theadom, A., & McPherson, K. M. (2017). Optimising Qualitative Longitudinal Analysis: Insights from a Study of Traumatic Brain Injury Recovery and Adaptation. Nursing Inquiry, 24(2).




Guest post #17 Dr Daniel Turner: Can a computer do qualitative analysis?

This guest blog post is by Dr Daniel Turner, a qualitative researcher and Director of Quirkos, a simple and visual software tool for qualitative analysis. It’s based on collaborative research with Claire Grover, Claire Lewellyn, and the late Jon Oberlander at the Informatics department, University of Edinburgh with Kathrin Cresswell and Aziz Sheikh from the Usher Institute of Population Health Sciences and Informatics, University of Edinburgh. The project was part funded by the Scottish Digital Health & Care Institute.

Can a computer do qualitative analysis?

It seems that everywhere we look researchers are applying machine learning (ML) and artificial intelligence (AI) to new fields. But what about qualitative analysis? Is there a potential for software to help a researcher in coding qualitative data and understanding emerging themes and trends from complex datasets?

Firstly, why would we want to do this? The power of qualitative research comes from uncovering the unexpected and unanticipated in complex issues that defy easy questions and answers. Quantitative research methods typically struggle with these kinds of topics, and machine learning approaches are essentially quantitative methods of analysing qualitative data.

However, while machines may not be ready to take the place of a researcher in setting research questions and evaluating complex answers, there are areas that could benefit from a more automated approach. Qualitative analysis is time consuming and hence costly, and this greatly limits the situations in which it is utilised. If we could train a computer system to act as a guide or assistant for a qualitative researcher wading through very large, long or longitudinal qualitative data sets, it could open many doors.

Few qualitative research projects have the luxury of a secondary coder who can independently read, analyse and check interpretations of the data, but an automated tool could perform this function, giving some level of assurance and suggesting quotes or topics that might have been overlooked.

Qualitative research could use larger data sources if a tool could at least speed up the work of a human researcher. While in qualitative research we aim to focus on the small, often this means focusing on a very small population group or geographical area. With faster coding tools we could, with the same resources, design qualitative research that samples more diverse populations to see how universal or variable trends are.

It also could allow for secondary analysis: qualitative research generates huge amounts of deep detailed data that is typically only used to answer a small set of research questions. Using ML tools to explore existing qualitative data sets with new research questions could help to get increased value from archived and/or multiple sets of data.

I’m also very excited about the potential for including wider sources of qualitative data in research projects. While most researchers go straight to interviews or focus groups with respondents, analysing policy or media on the subject would help gain a better understanding of the culture and context around a research issue. Usually this work is too extensive to systematically include in academic projects, but could increase the applicability of research findings to setting policy and understanding media coverage on contentious issues.

With an interdisciplinary team from the University of Edinburgh, we performed experiments with current ML tools to see how feasible these approaches currently are. We applied conventional ‘off-the-shelf’ Natural Language Processing tools to three different types of qualitative data set, attempting ‘categorisation’ tasks in which researchers had already defined the ‘topics’ or categories we wanted extracts on. The software was tasked with assessing which sentences were relevant to each of the topics we defined. Even the best performing approach achieved an agreement rate of only ~20% with how the researchers had coded the data. However, this was not far off the agreement rate of a second human coder, who was not involved in the research project and did not know the research question, only the categories to code into. In this respect the second coder was put in the same situation as the computer.
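An agreement rate of this kind can be illustrated with a toy calculation. The sketch below is hypothetical (the categories and labels are invented, not the project’s data); it shows simple sentence-level percentage agreement between a machine’s category assignments and a researcher’s.

```python
# Hypothetical sentence-level category assignments.
human =   ["access", "cost", "cost", "other", "access", "other"]
machine = ["access", "other", "cost", "other", "cost", "cost"]

# Simple percentage agreement: proportion of sentences given
# the same category by both coders.
agree = sum(h == m for h, m in zip(human, machine)) / len(human)
print(f"agreement: {agree:.0%}")  # 3 of 6 sentences match -> "agreement: 50%"
```

Simple agreement flatters coders when one category dominates; a chance-corrected statistic such as Cohen’s kappa is the fairer comparison in that situation.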


Figure 1: Visualisations in Quirkos allow the user to quickly see how well automated coding correlates with their own interpretations

The core challenge comes from the relatively small size of qualitative data sets. ML algorithms work best when they have thousands, or even millions, of sources in which to identify patterns. Typical qualitative research projects may have a dozen or fewer sources, and so these approaches give generally weak results. However, the accuracy of the process could be improved, especially by pre-training the model on other related datasets.

There are also limitations in the way the ML approaches themselves work – for example, there is currently no way to input the research questions into the software. While you can provide a coding framework of topics you are interested in (or get it to try to guess what the categories should be), you can’t explain to the algorithm what your research questions are, and so what aspects of the data are interesting to you. ML might highlight how often your respondents talked about different flavours of ice cream, but if your interest is in healthy eating this may not be very helpful.

Finally, even when the ML is working well, it’s very difficult to know why: ML typically doesn’t create a human-readable decision tree that would explain why it made each choice. In deep learning approaches, where the algorithm is self-training, even the designers of the system can’t see how it works, creating a ‘black box’. This is problematic because we can’t inspect the decision-making process to tell whether a few unusual pieces of data are skewing it, or whether it is making basic mistakes such as confusing the two different meanings of a word like ‘mine’.

There is a potential here for a new field: one which meets the quantitative worlds of big data with the insight from qualitative questions. It’s unlikely that these tools will remove the researcher and their primary role in analysis, and there will always be problems and questions that are best met with a purely manual qualitative approach. However, for the right research questions and data sets, it could open the door to new approaches and even more nuanced answers.



Guest Post #16: Prof Rachel Thomson, Dr Sara Bragg and Dr Liam Berriman: Time, technology and documentation

In today’s guest post Rachel Thomson, Sara Bragg and Liam Berriman (University of Sussex) encourage us to reconsider the idea of archiving data as the end point of a study. Drawing on material from their new book Researching Everyday Childhoods: Time, Technology and Documentation in a Digital Age, they argue that technological transformations have opened up new possibilities for the place of archiving in the research process. Working with research participants, the team have co-produced the publicly accessible archive Everyday Childhoods, a process that has enabled them to explore what it means to become data.

Rachel, Professor of Childhood & Youth Studies and Co-Director of the Sussex Humanities Lab, has undertaken a number of qualitative longitudinal and cross generational projects. Sara, Associate Researcher at the Centre for Innovation and Research in Childhood and Youth, has research interests in young people, participation, gender, sexuality, media, education, and creative research methods and creative learning. Liam, Lecturer in Digital Humanities & Social Science in the Department of Social Work, has conducted research on children’s relationships with digital media and technology as part of their everyday lives, the technologisation of toys in childhood, and the co-production of research using digital methods.

 Time, technology and documentation

There is a tradition within qualitative longitudinal research of returning to earlier studies, building on the places, people or data sets of earlier research. In some disciplines this kind of iterative practice is well established, for example long-term ethnography in anthropology, where generations of scholars ‘pass the mantle’ of responsibility for tracking the historical evolution of a community. Within sociology we talk of ‘revisits’, which can take the form of re-engaging with the methods, data or sites of earlier studies – and with earlier research selves if revisiting our own work. These kinds of reflexive contemplations have the potential to historicise social research practice, helping us to see how our research questions, methods and technologies are part and parcel of the knowledge economies we as researchers belong to, and how these change over time. In general terms, designing time into a research process has enormous potential for making things visible in new ways, including the contingent modes of production of social research.

So, paradoxically, by holding certain things constant, temporal methods have the capacity to help us notice change. For example, following the same participant over time reveals all kinds of transformations, but also a consolidation of something that in retrospect we understand as always having been there. Repeating a method over time has a similar analytic dividend, providing a bridge to consider relations of sameness and mutability, difference and repetition. Generations within a family, an institution or a society can also be thought of through the same prism – enabling us to tease apart biographical and historical time, life stages (such as early career, or young adulthood) and contexts (post-Brexit austerity). Designing generations into social research increases the power and the complexity of any investigation.

Our new book Researching Everyday Childhoods is a culmination of several threads of methodological development in the field of qualitative longitudinal research. The project focuses on children and young people and what it is like to live and grow in a culture saturated by digital technology. It is also a book about what it means for researchers to operate in the same environment, recognising how our practice is transformed by new tools and changing relationships of expertise and authority. The book is a meditation on the shift from analogue to digital knowledge that encompasses all of the actors involved: the researchers, the participants, the funders, the audiences, the publishers, the data. This is achieved by anchoring the empirical project to our own pasts – the seven-year-old children in the study are the yet-to-be-born babies of our earlier intergenerational study of new motherhood. The researchers following them have known their families for almost a decade, and this ‘back-story’ forms part of the relationship and data shadow for their cases. We have also adapted methods first trialled in the motherhood study: a day in a life, object-based conversations and ‘recursive interviews’ in which fragments of data and analysis from the research relationship are represented and responded to in the present.

Yet the study also brings in the new in a deliberate way: new participants, in the form of a panel of teenagers, and new researchers bringing fresh perspectives, research questions and skills into the team. Importantly, the project has sought to address the limits of our earlier research.

This includes the idea of starting rather than ending with the archive. Where previously we had promised confidentiality and anonymity as a condition of the research, in this project we invited participants to work collaboratively with us to co-produce a publicly accessible archive. The practice of ‘curation’ is as important to us as ‘data generation’ and we are aware that professional social researchers no longer have a monopoly over such knowledge practices and the resulting knowledge relations. Working in collaboration with the Mass Observation Archive and our participant families we have created a new multi-media collection as well as an open access online interface – something that has involved us entering the archive itself, exploring what it means to become data, to be available for unknown audiences and unforeseen modes of secondary analysis. Thinking through what is the same and what might be different, we move more deeply into an era of digital data in which notions of indelibility, anonymity and trust change their character. We cannot confidently make promises about a future that we are yet to apprehend. We can however engage in the analytic and ethical labour necessary to ensure that we are thinking together in a way that is transparent, reflexive and accountable. Our book Researching Everyday Childhoods: Time, Technology and Documentation in a Digital Age does just that. We are pleased that it is also open access, meaning that along with the public archive it may be used as a resource for teaching and collaboration.


Guest Post #15: Dr Ruth Patrick: Analytic strategies for working within and across cases in qualitative longitudinal research

Dr Ruth Patrick, Social Policy Researcher in the School of Law and Social Justice, University of Liverpool, contributes today’s guest post. Ruth’s research illustrates the ways in which qualitative longitudinal research can help us to understand popular and political narratives around poverty, welfare reform and austerity and lived experiences for those directly affected by recent and ongoing changes to the social security system. She is author of For whose benefit: the everyday realities of welfare reform.

 In her post, Ruth draws on an ongoing project – The Lived Experiences of Welfare Reform Study – to demonstrate the value of conducting both diachronic and synchronic analysis in qualitative longitudinal work. In so doing, she highlights some implications for re-users of such material.

Analytic strategies for working within and across cases in qualitative longitudinal research

When I think about the – many – reasons why I am a fan of qualitative longitudinal research (QLR), I often remember Tess Ridge’s reflection on her own journey moving from researching at one point of time to researching through and across time. Tess described her experience as equivalent to going from watching television in black and white to technicolor, such are the greater depths, richness and detail that qualitative longitudinal research enables.

This richness is a wonderful advantage of QLR, but it does create challenges for the research process, especially when it comes to data management and analysis. In ‘The Lived Experiences of Welfare Reform Study’, I have followed a small number of single parents, disabled people and young jobseekers as they navigate the changing social security context, and experience an increasingly punitive regime of welfare conditionality and benefit sanctions. This research (which remains ongoing) has generated rich data, which I have sought to analyse by developing both diachronic (tracking a case over time) and synchronic (looking across cases at one point in time) analyses, as well as exploring the iteration between the two (Corden & Nice, 2006). To aid my data management, I use the qualitative analysis software package NVivo, with thematic codes emerging from a close reading of and engagement with the data generated. My analysis strategies include developing pen pictures for each case, which provide a short account of each individual’s journey through welfare reform. The synchronic analysis is supported by the coding process and then by efforts to climb the analytical conceptual scaffold, working upwards from data management and coding to descriptive analyses, and finally to explanatory accounts (Spencer et al, 2003). Repeated immersion in the data is critical, as is taking the time to return to the data generated after each wave, as each re-analysis brings fresh insight. In looking at the iteration between the diachronic and synchronic, I find it helpful to explore patterns and anomalies in the data generated, and to identify common themes emerging through time between the cases.
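The within-case and cross-case slices described above can be pictured with a toy data structure. This Python sketch is purely hypothetical (the second participant name and the indexing scheme are invented for illustration; it is not the study’s actual NVivo setup), but it shows how diachronic and synchronic analysis cut the same interview set two different ways.

```python
# Hypothetical interview index: one entry per (case, wave).
interviews = {
    ("Chloe", 1): "2011 transcript ...",
    ("Chloe", 2): "2012 transcript ...",
    ("Chloe", 3): "2013 transcript ...",
    ("Adrian", 1): "2011 transcript ...",
    ("Adrian", 2): "2012 transcript ...",
}

def diachronic(case):
    """All waves for one case, in order: the within-case slice."""
    return [text for (c, wave), text in sorted(interviews.items()) if c == case]

def synchronic(wave):
    """All cases at one wave: the cross-case slice."""
    return {c: text for (c, w), text in interviews.items() if w == wave}

# The same data set supports both cuts; iteration between them
# means moving back and forth between these two views.
assert len(diachronic("Chloe")) == 3
assert sorted(synchronic(1)) == ["Adrian", "Chloe"]
```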

One theme to emerge very strongly from this analysis is the extent of ‘othering’ that exists (Lister, 2004), whereby the people I interviewed seek to assert their own deservingness to social security by dismissing and being critical of the asserted ‘undeservingness’ of some ‘other’. This ‘othering’ was widespread in my interviews, and there was some evidence of this increasing over time, as welfare reforms continued to take effect.

For example, Chloe, a single parent, talked negatively about immigrants in each of our three interviews between 2011 and 2013. However, her anger grew at each interview encounter and – by the third interview – in 2013 – she was employing threats of violence, and using racist language, in her articulation of how she felt towards this ‘other’ that she judged undeserving.

My initial diachronic analysis of Chloe’s case found that her sense of anger and even hatred towards immigrants grew over time, and that this could have arisen because she was herself being increasingly adversely affected by welfare reform. As her own situation deteriorated, she hit out more stridently at the perceived ‘other’, with her anger borne out of alienation, poverty and disenfranchisement.

However, another analysis is possible. As qualitative longitudinal researchers remind us (see, for example, Neale, 2018), one of the advantages of QLR is that repeat encounters develop the relationship between researcher and researched, creating possibilities for disclosure in later interviews as the relationship strengthens and trust improves. Could it be, therefore, that Chloe was only speaking more stridently because she felt more secure in our relationship, and comfortable speaking to me more openly?

There are no easy answers here, and to assume otherwise would be simplistic, but analytical strategies can help. Further diachronic analysis of Chloe’s case reveals that significant disclosures were made in the third interview wave which, although relevant, were not mentioned in the first and second interviews – and this might suggest a change in the relationship, as posited. At the same time, though, synchronic analysis of ‘othering’ in each of the interview waves shows that the increased presence and ferocity of ‘othering’ observed in Chloe over time was also observable in several of the other participants.

What is important to recognise – above all – is that as a qualitative longitudinal researcher returns each time to a participant, their relationship is inevitably evolving and changing, and that this may alter what participants say and why they say it. Working within and between cases in an iterative manner could help a secondary analyst to understand more of the context of the interview, and to consider how the changing relationship between researcher and participant may have impacted on what is disclosed over time. To further support this, it is beneficial if secondary analysts can have access to any field notes or research diaries completed by the primary researcher(s), as these may help clarify how research relationships evolved over time, and any reflections from the researcher on how these affected the data generation process.

QLR is a wonderful method within a researcher’s tool bag, but it – like any of the most powerful tools – needs to come with careful instructions and health warnings.



Corden, A. & Nice, K. (2006), Pathways to Work: Findings from the final cohort in a qualitative longitudinal panel of Incapacity Benefit recipients. Research Report No 398. London: Department for Work and Pensions.

Lister R. (2004), Poverty. Bristol: Policy Press

Neale, B. (2018, in press), What is qualitative longitudinal research? London: Bloomsbury.

Neale, B. & Hanna, E. (2012), The ethics of researching lives qualitatively through time. Timescapes Method Guides Series, University of Leeds.

Spencer, L., Ritchie, J. & O’Connor, W. (2003), Analysis: Practices, Principles and Processes. In: Ritchie, J. & Lewis, J. (Eds.) Qualitative Research Practice: A Guide for Social Science Research Students and Researchers. London: SAGE Publications, pp. 199-218.



Guest Post #14: Prof Jane Millar and Prof Tess Ridge: Following families

In today’s guest post Jane Millar and Tess Ridge draw on some of the insights gleaned from their qualitative longitudinal study, The family work project: earning and caring in low-income households. The latest phase – following up 15 of the original families – was completed in early 2017, and published as Work and relationships over time in lone-mother families.

 In this blog, they focus on two factors that affected the sharing and re-use of data; the construction of their sample, and their family-based theoretical approach.

Jane, Professor of Social Policy at the University of Bath, is a Fellow of the British Academy and of the Academy of Social Sciences. Jane is well known for her work on lone parents and welfare to work, social security and tax credits, poverty and family policy.

Tess is an Honorary Professor in the Department of Social and Policy Studies at the University of Bath, and Fellow of the Academy of Social Sciences. Tess is well known for her children-focused work on childhood poverty, social exclusion, welfare and economic support.

Following families

Our longitudinal qualitative research started about 15 years ago, with a project to explore the impact of moving into paid work on the lives of lone mothers and their children. This was a very hot policy topic at the time, with a major policy drive to increase lone-parent employment.

Our sample consisted of 50 lone mothers who left Income Support for paid work in 2001/2002. We interviewed the mothers and their children first in 2004, and then again in 2005 and 2007. We have published a number of reports and articles, looking at various aspects of the move into work and employment sustainability, see our project webpage.

In 2016 we returned to 15 families, chosen to reflect the range of family and employment experiences and circumstances. The long-term nature of the study has provided a unique insight into how these families managed work and care through some very challenging economic times.

Every longitudinal study starts at a particular point in time, and from the conceptual and methodological decisions and priorities at that time. Such decisions have implications throughout the project, and beyond. Here we discuss two factors that affected the question of data archiving and re-use: how we found the sample, and our family-based theoretical approach.

For the sample, we were interested in exploring the transition into work, and what helped and what hindered. So we wanted to interview lone mothers who had recently started working. We found our sample through the (as it was then) Inland Revenue and the Department for Work and Pensions, who agreed to draw a sample to our specifications. These were that the women should have at least one child aged 8 to 14, have been receiving Income Support, have started work and received tax credits between October 2002 and October 2003, and live in specified areas of the country (see our DWP-published report on the first three rounds). This gave us a very well specified and robust sample. But one of the conditions was that we should not share the data, even in anonymised form, due to concerns about confidentiality and privacy. So we agreed not to place the transcripts in a data archive, or make them available in other ways. Data archiving had not been a condition of the funding for the project, so there were no issues there.

Times have changed, and the general view now is in support of open access to all sorts of research data, with a growing literature on the issues and challenges of this in respect of qualitative research. We do agree this is important and are very interested in the way that the ‘Working across qualitative longitudinal studies’ project is taking this forward. Understanding and practice have developed much beyond where we were fifteen years ago. As Bren Neale discusses in her recent blog, the debate has moved on: ‘a concern with whether or not qualitative datasets should be used is giving way to a more productive concern with how they should be used, not least, how best to work with their inherent temporality’.

Still, in some ways we are relieved not to have had to address the issues of how to anonymise our material in ways that would enable further analysis that could be properly grounded in the actual interview content. This difficulty would have been compounded for us by a key feature of our research design, which was that we interviewed both the mothers and the children.

Our starting point was that the move into paid work, and then trying to sustain work over time, was something that involved the children as well as the mothers. The children’s lives would change, and they would have their own perspectives and experiences. In order to explore this ‘family-work-project’, as we called it, we needed to interview both the mothers and the children.

We did find that the mothers and children shared a commitment to the family-work-project and that this was a key factor in enabling the women to sustain work. But in analysing the interviews, and presenting the management of work and care as a family issue, we were also very aware of the importance of maintaining within-family privacy and confidentiality. The family-work-project sometimes involved painful adjustments and compromises and, as time passed, some of the ambivalence came more to the fore.

For some participants, the challenges of managing family life with low and insecure incomes over many years did, at certain points in time, come at a heavy cost to family relationships. From the first round, the interviews had been carried out separately with the mothers and the children, and we took the decision to analyse and present these separately as well, in order to maintain within-family privacy. Thus in our articles and reports, particularly those using all waves of the data, we have focused on the mothers and the children separately. Where, for example, we wanted to discuss how a mother responded to her child’s situation and decisions, we did so without directly identifying the link to the child’s account. But we did, of course, know that link ourselves. And we are not sure it would have been possible to anonymise the transcripts in a way that ensured such protection and kept that separation between the accounts of family members. We struggled with this ourselves, and so are very aware of the challenges. In making the data anonymous, there would, we think, inevitably have to be some loss of the overall family perspective.

Developing approaches to informed consent that can recognise the family perspective in the analysis of the data would therefore be useful. However, it is not always the case that a longitudinal study is funded over several waves with one funder, and in our case we sought funding as we progressed and the study developed. This was demanding, but at the time no funding would have been available for three or more waves of research. Getting and maintaining informed consent over time is particularly challenging and requires considerable ethical rigour to ensure that participants – families in this case – do not have an ‘obligation’ to continue in the study and are aware that their data may be used elsewhere.

Using, and re-using, longitudinal qualitative data from interviews is an ongoing process that is far from straightforward. It is important to be aware of potential issues in the design of the research, insofar as possible. But issues and tensions also emerge during the course of the research, and these cannot always be anticipated.

[1] The first and second rounds were funded by the ESRC (RES-000-23-1079), the third by the DWP, and the fourth by the Joseph Rowntree Foundation. We thank all for their support.


Guest Post #13: Prof Bren Neale: Research Data as Documents of Life

Bren Neale is Emeritus Professor of Life course and Family Research (University of Leeds, School of Sociology and Social Policy, UK) and a fellow of the Academy of Social Sciences (elected in 2010). Bren is a leading expert in Qualitative Longitudinal (QL) research methodology and provides training for new and established researchers throughout the UK and internationally.

Bren specialises in research on the dynamics of family life and inter-generational relationships, and has published widely in this field. From 2007 to 2012 she directed the Economic and Social Research Council-funded Timescapes Initiative, as part of which she advanced QL methods across academia, and in Government and NGO settings. Since completing the ESRC-funded ‘Following Young Fathers’ study, Bren has taken up a consultancy to design and support the delivery of a World Health Organisation study that is tracking the introduction of a new malaria vaccine in sub-Saharan Africa.

In this post, Bren draws on her extensive expertise as Director of the Timescapes Initiative, along with reflections from her forthcoming book ‘What is Qualitative Longitudinal Research?’ (Bloomsbury 2018) to consider the diverse forms of archival data that may be re-used or re-purposed in qualitative longitudinal work. In so doing, Bren outlines the possibilities for, and progress made in developing ways of working with and across assemblages of archived materials to capture social and temporal processes.

Research Data as Documents of Life

Among the varied sources of data that underpin Qualitative Longitudinal (QL) studies, documentary and archival sources have been relatively neglected. This is despite their potential to shed valuable light on temporal processes. These data sources form part of a larger corpus of materials that Plummer (2001) engagingly describes as ‘documents of life’:

“The world is crammed full of human personal documents. People keep diaries, send letters, make quilts, take photos, dash off memos, compose auto/biographies, construct websites, scrawl graffiti, publish their memoirs, write letters, compose CVs, leave suicide notes, film video diaries, inscribe memorials on tombstones, shoot films, paint pictures, make tapes and try to record their personal dreams. All of these expressions of personal life are hurled out into the world by the millions, and can be of interest to anyone who cares to seek them out” (p. 17).

To take one example, letters have long provided a rich source of insight into unfolding lives. In their classic study of Polish migration, conducted in the first decades of the twentieth century, Thomas and Znaniecki (1958 [1918-20]) analysed the letters of Polish migrants to the US (an opportunistic source, for a rich collection of such letters was thrown out of a Chicago window and landed at Znaniecki’s feet). Similarly, Stanley’s (2013) study of the history of race and apartheid was based on an analysis of three collections of letters written by white South Africans spanning a 200-year period (1770s to 1970s). The documentary treasure trove outlined by Plummer also includes articles in popular books, magazines and newsprint; text messages, emails and interactive websites; the rich holdings of public record offices; and confidential and often revealing documents held in organisations and institutions. Social biographers and oral historians are adept at teasing out a variety of such evidence to piece together a composite picture of lives and times; they are ‘jackdaws’ rather than methodological purists (Thompson 1981: 290).

Among the many forms of documentary data that may be repurposed by researchers, social science and humanities datasets have significant value. The growth in the use of such legacy data over recent decades has been fuelled by the enthusiasm and commitment of researchers who wish to preserve their datasets for historical use. Further impetus has come from the development of data infrastructures and funding initiatives to support this process, and a fledgling corpus of literature that is documenting and refining methodologies for re-use (e.g. Corti, Witzel and Bishop 2005; Crow and Edwards 2012; Irwin 2013). Alongside the potential to draw on individual datasets, there is a growing interest in working across datasets, bringing together data that can build new insights across varied social or historical contexts (e.g. Irwin, Bornat and Winterton 2012; and indeed the project on which this website is founded).

Many qualitative datasets remain in the stewardship of the original researchers where they are at risk of being lost to posterity (although they may be fortuitously rediscovered, O’Connor and Goodwin 2012). However, the culture of archiving and preserving legacy data through institutional, specialist or national repositories is fast becoming established (Bishop and Kuula-Luumi 2017). These facilities are scattered across the UK (for example, the Kirklees Sound Archive in West Yorkshire, which houses oral history interviews on the wool textile industry (Bornat 2013)). The principal collections in the UK are held at the UK Data Archive (which includes the classic ‘Qualidata’ collection); the British Library Sound Archive, NIQA (the Northern Ireland Qualitative Archive, including the ARK resource); the recently developed Timescapes Archive (an institutional repository at the University of Leeds, which specialises in Qualitative Longitudinal datasets); and the Mass Observation Archive, a resource which, for many decades, has commissioned and curated contemporary accounts from a panel of volunteer recorders. International resources include the Irish Qualitative Data Archive, the Murray Research Center Archive (Harvard), and a range of data facilities at varying levels of development across mainland Europe (Neale and Bishop 2010-11).

In recent years some vigorous debates have ensued about the ethical and epistemological foundations for reusing qualitative datasets. In the main, the issues have revolved around data ownership and researcher reputations; the ethics of confidentiality and consent for longer-term use; the nature of disciplinary boundaries; and the tension between realist understandings of data (as something that is simply ‘out there’), and a narrowly constructivist view that data are non-transferable because they are jointly produced and their meaning tied to the context of their production.

These debates are becoming less polarised over time. In part this is due to a growing awareness that most of these issues are not unique to the secondary use of datasets (or documentary sources more generally) but impact also on their primary use, and indeed how they are generated in the first place. In particular, epistemological debates about the status and veracity of qualitative research data are beginning to shift ground (see, for example, Mauthner et al 1998 and Mauthner and Parry 2013). Research data are by no means simply ‘out there’ for they are inevitably constructed and re-constructed in different social, spatial and historical contexts; indeed, they are transformed historically simply through the passage of time (Moore 2007). But this does not mean that the narratives they contain are ‘made up’ or that they have no integrity or value across different contexts (Hammersley 2010; Bornat 2013). It does suggest, however, that data sources are capable of more than one interpretation, and that their meaning and salience emerge in the moment of their use:

“There is no a-priori privileged moment in time in which we can gain a deeper, more profound, truer insight, than in any other moment. … There is never a single authorised reading … It is the multiple viewpoints, taken together, which are the most illuminating” (Brockmeier and Reissman cited in Andrews 2008: 89; Andrews 2008: 90).

Moreover, whether revisiting data involves stepping into the shoes of an earlier self, or of someone else entirely, this seems to have little bearing on the interpretive process. From this point of view, the distinctions between using and re-using data, or between primary and secondary analysis begin to break down (Moore 2007; Neale 2013).

This is nowhere more apparent than in Qualitative Longitudinal enquiry, where the transformative potential of data is part and parcel of the enterprise. Since data are used and re-used over the longitudinal frame of a study, their re-generation is a continual process. The production of new data as a study progresses inevitably reconfigures and re-contextualises the dataset as a whole, creating new assemblages of data and opening up new insights from a different temporal standpoint. Indeed, since longitudinal datasets may well outlive their original research questions, it is inevitable that researchers will need to ask new questions of old data (Elder and Taylor 2009).

The status and veracity of research data, then, is not a black and white, either/or issue, but one of recognising the limitations and partial vision of all data sources, and the need to appraise the degree of ‘fit’ and contextual understanding that can be achieved and maintained (Hammersley 2010; Duncan 2012; Irwin 2013). This, in turn, has implications for how a dataset is crafted and contextualised for future use (Neale 2013).

A decade ago, debates about the use of qualitative datasets were in danger of becoming polarised (Moore 2007). However, the pre-occupations of researchers are beginning to move on. The concern with whether or not qualitative datasets should be used is giving way to a more productive concern with how they should be used, not least, how best to work with their inherent temporality. Overall, the ‘jackdaw’ approach to re-purposing documentary and archival sources of data is the very stuff of historical sociology and of social history more generally (Kynaston 2005; Bornat 2008; McLeod and Thomson 2009), and it has huge and perhaps untapped potential in Qualitative Longitudinal research.


Andrews, M. (2008) ‘Never the last word: Revisiting data’, in M. Andrews, C. Squire and M. Tamboukou (eds.) Doing Narrative Research, London, Sage, 86-101.

Bishop, L. and Kuula-Luumi, A. (2017) ‘Revisiting qualitative data reuse: A decade on’, SAGE Open, Jan-March, 1-15.

Bornat, J. (2008) Crossing boundaries with secondary analysis: Implications for archived oral history data, paper given at the ESRC National Council for Research Methods Network for Methodological Innovation: Theory, Methods and Ethics across Disciplines, 19 September 2008, University of Essex.

Bornat, J. (2013) ‘Secondary analysis in reflection: Some experiences of re-use from an oral history perspective’, Families, Relationships and Societies, 2, 2, 309-317.

Corti, L., Witzel, A. and Bishop, L. (eds.) (2005) Secondary analysis of qualitative data: Special issue, Forum: Qualitative Social Research, 6, 1.

Crow, G. and Edwards, R. (eds.) (2012) ‘Editorial introduction: Perspectives on working with archived textual and visual material in social research’, International Journal of Social Research Methodology, 15, 4, 259-262.

Duncan, S. (2012) ‘Using elderly data theoretically: Personal life in 1949/50 and individualisation theory’, International Journal of Social Research Methodology, 15, 4, 311-319.

Elder, G. and Taylor, M. (2009) ‘Linking research questions to data archives’, in J. Giele and G. Elder (eds.) The Craft of Life Course Research, New York, Guilford Press, 93-116.

Hammersley, M. (2010) ‘Can we re-use qualitative data via secondary analysis? Notes on some terminological and substantive issues’, Sociological Research Online, 15, 1, 5.

Irwin, S. (2013) ‘Qualitative secondary analysis in practice: Introduction’, in S. Irwin and J. Bornat (eds.) Qualitative secondary analysis (Open Space), Families, Relationships and Societies, 2, 2, 285-288.

Irwin, S., Bornat, J. and Winterton, M. (2012) ‘Timescapes secondary analysis: Comparison, context and working across datasets’, Qualitative Research, 12, 1, 66-80.

Kynaston, D. (2005) ‘The uses of sociology for real-time history’, Forum: Qualitative Social Research, 6, 1.

McLeod, J. and Thomson, R. (2009) Researching Social Change: Qualitative Approaches, London, Sage.

Mauthner, N., Parry, O. and Backett-Milburn, K. (1998) ‘The data are out there, or are they? Implications for archiving and revisiting qualitative data’, Sociology, 32, 4, 733-745.

Mauthner, N. and Parry, O. (2013) ‘Open access digital data sharing: Principles, policies and practices’, Social Epistemology, 27, 1, 47-67.

Moore, N. (2007) ‘(Re)using qualitative data?’, Sociological Research Online, 12, 3, 1.

Neale, B. (2013) ‘Adding time into the mix: Stakeholder ethics in qualitative longitudinal research’, Methodological Innovations Online, 8, 2, 6-20.

Neale, B. and Bishop, L. (2010-11) ‘Qualitative and qualitative longitudinal resources in Europe: Mapping the field’, IASSIST Quarterly: Special double issue, 34 (3-4); 35 (1-2).

O’Connor, H. and Goodwin, J. (2012) ‘Revisiting Norbert Elias’s sociology of community: Learning from the Leicester re-studies’, The Sociological Review, 60, 476-497.

Plummer, K. (2001) Documents of Life 2: An Invitation to a Critical Humanism, London, Sage.

Stanley, L. (2013) ‘Whites writing: Letters and documents of life in a QLR project’, in L. Stanley (ed.) Documents of Life Revisited, London, Routledge, 59-76.

Thomas, W. I. and Znaniecki, F. (1958 [1918-20]) The Polish Peasant in Europe and America, Volumes I and II, New York, Dover Publications.

Thompson, P. (1981) ‘Life histories and the analysis of social change’, in D. Bertaux (ed.) Biography and Society: The Life History Approach in the Social Sciences, London, Sage, 289-306.





Guest Post #12: Dr Sian Lincoln and Dr Brady Robards, Facebook timelines: Young people’s growing up narratives online

Sian Lincoln (Liverpool John Moores University) and Brady Robards (Monash University) contribute today’s insightful post. Sian, Reader in Communication, Media and Youth Culture, has interests in contemporary youth and youth cultures, social network sites and identity, and ethnography. Brady, a Lecturer in Sociology, has interests in the use of social media and methods involving social media.

In this post, Sian and Brady draw on their study ‘Facebook Timelines’, which explores the role of social media in mediating and archiving ‘growing up’ narratives. Using Facebook, they provide a fascinating example of analyzing longitudinal digital traces by working with participants as co-analysts, encouraging them to ‘scroll back’ through and interpret their own personal archives. On the subject, they have authored ‘Uncovering longitudinal life narratives: Scrolling back on Facebook’ and ‘Editing the project of the self: Sustained Facebook use and growing up online’.



In 2014 Facebook celebrated its tenth birthday. To mark this first decade, we edited a special issue of New Media & Society that reflected on the extent to which the site had become embedded in the everyday lives of its users. It was also evident at this point that there was now a generation of young people who had literally ‘grown up’ using the site. This prompted us to design a new research project, and a new research method in the process. Facebook Timelines is a qualitative study with young people in their twenties who joined the site in their early teens. We were particularly interested in this age group because they had used Facebook throughout their teens, and many found themselves at a ‘crossroads’ moment in their lives when they were beginning to think seriously about post-education working life and ‘professional identity’. Using a combination of qualitative interviewing, time-lining and the ‘scroll back method’, we worked with 40 young people to find out how they (and their friends) had disclosed their ‘growing up’ experiences on the site. In this respect, the Facebook Timeline (also known as the profile) was used as a ‘prompt’, and the years upon years of disclosures on the site acted as ‘cues’ for what often became elaborate and in-depth stories of teenage life.

One of our core interests here was how ‘growing up’ stories are recorded and made visible on social media. Given Facebook’s longevity, it has become a digital archive of life for many – a longitudinal digital trace. We wanted to interrogate this further by working with our participants as co-analysts of their own digital traces. How do young people make sense of these longitudinal digital traces? How do these traces persist and re-surface, years later, as people grow up and enter into new stages of their lives?

Time-lining: going back to pencil and paper

What key or critical moments have you experienced in your teenage years, since joining Facebook? Because the teenage years are a period of turbulence and change, we were keen to ask this question and explore what our participants perceived to be the important, life-defining events or rites of passage that had come to define them. A simple print-out of a timeline enabled our participants to consider this question and map out those moments as they remembered them. These included going to high school, leaving school, getting a part-time job, going to a first gig, family weddings, births and deaths, going into full-time employment, going to university, the beginning and end of relationships, and all manner of important moments. Our participants were then invited to log into their Facebook profile using a laptop, tablet or phone, depending on their preference, to consider how the moments they recalled ‘mapped onto’ their Facebook Timeline.

The scroll back method

At this point, our participants were asked to ‘scroll back’ to their very first post on the site. It was common for them to have an emotional response to early disclosures, embarrassment being the most typical. For us, this was interesting because their response acted as the first ‘marker of growing up’ they encountered in the ‘scroll back’, and represented a form of self-reflexivity and self-realisation. Their responses were also physical: covering the eyes, a slight wince, even turning away from the screen when confronted with a younger self and evidence of a digital trace dating back some years. Consider a 24-year-old confronting their 16-year-old self, as mediated on Facebook. Once the ‘scroll back’ begins, participants click chronologically through their years of disclosures, opening up year after year of their Facebook archive and narrating and describing the content. This method proved empowering for participants, as it placed them in control of which moments they wished to talk about and which they did not; which to discuss and which to pass over. However, because of the sheer amount of content – much of it forgotten (particularly the earlier material) – there is a danger that participants will be confronted with challenging, difficult moments from their past, at which point they were asked whether they wished to continue. Often they did, seeing this as a ‘therapeutic moment’ to reflect with hindsight on the event. Some saw it as an important life moment, and thus it remained in their Timeline.

Importantly, we recruited our participants not just as interviewees or ‘subjects’ of observation; we worked with them as co-analysts of their own digital traces. Having our participants sign in to their own Facebook accounts and scroll back through their Timeline profiles in front of us allowed us to see their Facebook histories ‘from their perspective’. If we had analysed these digital traces without the involvement of the participants themselves, we would have been limited in multiple ways: first, in terms of what we could actually see, but second – and for us, more importantly – in terms of the stories that certain disclosures prompted. Often, our participants were able to ‘fill in the blanks’ or provide crucial context and explanation for in-jokes, vague status updates, or obscure images that we alone would have had little capacity to fully understand. Thus, our analysis really hinged on the involvement and insight of our participants themselves.

Scroll back and narratives of growing up

The Facebook Timelines project has clearly underlined the significance of Facebook in the lives of young people in their twenties as a key platform for sharing their everyday life experiences. While some participants claim to be ‘partial’ Facebook users today, amidst broader claims of ‘Facebook fatigue’ and a more complicated ‘polymedia’ environment including Instagram, Snapchat, dating and hook-up apps, and so on, scrolling back through participants’ Timelines affirmed just how embedded and central Facebook is in their lives. Further, their changes in use, from ‘intense’ engagement to more silent (but still present) ‘disuse’, tell us much about their growing up, with claims to being ‘more mature’ equating to disclosing less. Additionally, the amount of ‘memory work’ the site does on their behalf (so many forgotten moments were unveiled through ‘scroll back’) makes getting rid of Facebook for good almost an impossibility.

Facebook Timelines offer immense opportunities for longitudinal researchers; however, the depth of many profiles certainly presents analytical challenges, essentially because these are not profiles that have been created for a research project. For us, as we mention above, ‘analysis’ of the Timelines was embedded in the scroll back method from the start, with participants analyzing their own digital traces as a core part of the research process. Drawing on Thomson and Holland (2003), we then considered the data ‘cross-sectionally in order to identify discourses through which identity is constructed, and longitudinally at the development of a particular narrative over time’ (2003: 236). We did this with the participants as they scrolled back, then cross-referenced these discourses across participants by analyzing the interview transcripts using the themes defined by our participants (for example, relationships, travel and education). Overall, we felt this approach gave our participants a genuine feeling that they had witnessed, unfolded and given voice to a self-narrative of their growing up on Facebook.

Related publications


  • Thomson, R. and Holland, J. (2003) Hindsight, foresight and insight: the challenges of longitudinal qualitative research. International Journal of Social Research Methodology, 6(3): 233-244.




Guest blog #11: Dr Rebecca Taylor: The challenges of computer assisted data analysis for distributed research teams working on large qualitative projects

Our guest post today is by Rebecca Taylor, Lecturer in Sociology at the University of Southampton. Her research focuses on conceptualising work, particularly unpaid forms of work, understanding individuals’ working lives and careers, and work in different organisations and sectors. She has over 10 years’ experience of conducting qualitative longitudinal research on studies such as Inventing Adulthoods, Minority Ethnic Outreach Evaluation and Real Times at the Third Sector Research Centre.

Her current project, Supporting employee-driven innovation in the healthcare sector, with colleagues Alison Fuller, Susan Halford and Kate Lyle, is a qualitative ethnography of three health service innovations involving multiple data sources. The research is funded by the ESRC through the LLAKES Centre for Research in Learning and Life Chances based at UCL Institute of Education, University College London.

In this post, Rebecca considers three possible ways of overcoming the challenges of conducting large-scale qualitative longitudinal analysis in geographically-distributed research teams, and the possibilities, and indeed limitations, offered by computer assisted data analysis software.

The challenges of computer assisted data analysis for distributed research teams working on large qualitative projects

Academics, like many other groups of workers in the digital economy, often find themselves working in geographically distributed teams spanning multiple locations connected by increasingly sophisticated digital technologies. Teleconferencing tools like Skype, cloud-based file storage/hosting services such as Google Docs and Dropbox, and project planning tools such as Trello enable groups of researchers to meet, talk, write, share and edit documents, plan, manage and conduct research, and even analyse data despite their separate locations.

If you are a researcher involved in large-scale qualitative studies, such as qualitative longitudinal research (QLR), where projects can potentially span decades and short-term contracts mean that researchers move between institutions, it is highly likely that you will, at some point, be operating in a distributed research team working across institutions, geographical locations and maybe even time zones. QLR in particular tends to amplify the challenges and opportunities of other qualitative methodologies (see e.g. Thomson and Holland 2003); the difficulties of managing multiple cases over multiple waves in terms of storage, labelling and retrieval are even more demanding when carried out remotely. In fact, any large dataset creates challenges for a distributed team. Providing access to data across institutions necessitates organising access rights, and often the use of a VPN (Virtual Private Network). Cloud-based collaboration solutions may lack institutional technical support and the required level of data security, raising legal and ethical problems for the storage of non-anonymised transcripts, observation notes and other documents.

These issues are all in play when it comes to analysing a geographically-distributed team’s data. The overwhelming array of CAQDAS (Computer Assisted Qualitative Data Analysis Software) packages offers extensive functionality for managing and manipulating qualitative data, but is less helpful when it comes to facilitating distributed team working. Our recent experience as a research team spread across two institutions, with members also working mainly from home, provides a useful case study of the issues. As we looked at the CAQDAS packages currently available, it became apparent that our options depended on where the software was situated – locally, institutionally, or in the cloud:

Option A: Working locally

This traditional model involved packages (such as NVivo or MAXQDA) installed on individual computers, so that all team members worked on their own local version of the project. For the team to work together on the data and see everyone’s coding and new transcripts, researchers all had to send their projects to a team member who would merge them together and redistribute a new master copy of the project. In a distributed team, this meant finding a way to regularly transfer large project files safely, securely and easily between team members, with all the attendant hazards of version control and file management. The size of project files and the security issues around cloud-based storage ruled out the more straightforward options like email or Dropbox, and the remote desktop route made any sort of data transfer brain-numbingly complicated because there was no way to move documents between the home computer and the remote desktop. We had one option for data transfer – a University of Southampton download service for large files which used high levels of encryption.
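Conceptually, the merge-and-redistribute cycle described above amounts to taking the union of each researcher’s coded segments and sending the combined set back out as the new master. The sketch below is purely illustrative of that shape, not of NVivo itself (whose merge operates on binary project files); all names and data here are hypothetical.

```python
# Toy illustration of the "Option A" workflow: each researcher codes locally,
# one person merges everyone's coding, and the merged master is redistributed.
# A coded segment is represented as (transcript, code, passage).

def merge_coding(*researcher_codings):
    """Combine coded segments from several researchers' local projects,
    keeping each distinct segment once."""
    master = set()
    for coding in researcher_codings:
        master |= set(coding)
    return sorted(master)

# Hypothetical coding exports from two team members; note the overlap.
anna = [("interview_01", "family-work-project", "lines 10-24")]
ben = [("interview_01", "family-work-project", "lines 10-24"),
       ("interview_02", "childcare", "lines 3-9")]

master_copy = merge_coding(anna, ben)
# master_copy holds each distinct coded segment once, ready to be
# redistributed to the team as the new master project.
```

The real difficulty the post describes is not this merge logic, which the software handles, but moving the large project files between sites securely enough to run the cycle at all.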

Option B: Working institutionally

This model made use of server-based packages which stored the data centrally, such as NVivo Server (‘NVivo for Teams’ from version 11), enabling team members to work on the project simultaneously over an institutional local area network (LAN). In the case of NVivo Server, this removed the need for a regular, time-consuming merge process. However, for those members of the team at other institutions or not working on campus, it required remote desktop solutions which were slow and unwieldy and made file transfers (for example, when importing a new transcript into the software) difficult. We worried about this process given the software’s reputation for stability issues when used over a potentially intermittent network connection. More importantly, it required a different type of institutional software licence, which was an expense we had not budgeted for and implied considerable delay as we negotiated with the university about purchase and technical support.

Option C: Working in the cloud

Thinking more creatively about the problem, we looked at online (and thus not institutionally located) packages such as US-based Dedoose (try saying that with an American accent – it makes more sense), designed to facilitate team-based qualitative and mixed methods data analysis. We could, it seemed, all work online on the same project from any PC or laptop in any location, without the need to merge or transfer projects and documents. Were all our problems solved? Sadly not. Consultation with IT services in our own institutions revealed that such sites used cloud storage in the US and were therefore deemed insecure – we would be compromising our data security and thus our ethical contract. So we were back to square one, or in our case Option A – the old-school model: a laborious and time-consuming (but ultimately secure) way of working, with individual projects on our individual desktops and regular (or not so regular) transfers and merges.

It’s worked OK – we are now writing our third journal article. Yet as the funding ended and we lost our brilliant Research Fellow to another short-term contract, we have tended towards more individualised analysis; the merge process has largely fizzled out, as no one has time to manage it, and the software serves primarily as a data management tool. It is clear that in the contemporary HE landscape of intensification and metricisation of research, the tools for distributed team working need to be super-effective and easy to use; they need to make collaborative qualitative analysis straightforward and rewarding irrespective of the geographical location of individual team members. Distributed working arrangements are certainly not going away.


Thomson, R. and Holland, J. (2003) Hindsight, foresight and insight: The challenges of qualitative longitudinal research, International Journal of Social Research Methodology, 6(3): 233-244.

Guest blog # 10: Dr Georgia Philip: Working with qualitative longitudinal data

Georgia Philip, a Senior Research Associate in the School of Social Work at the University of East Anglia, writes today’s insightful post. Georgia has expertise in the areas of fathers, gender and care; qualitative and feminist research; the feminist ethics of care; and parenting interventions and family policy.

In this post, Georgia reflects on the challenges of managing the volume and depth of data generated in a qualitative longitudinal analysis of men’s experiences of the UK child protection system. The study was conducted with colleagues John Clifton and Marian Brandon.


Working with qualitative longitudinal data

For the past two years I have worked with colleagues John Clifton & Marian Brandon on a qualitative longitudinal (QL) study of men’s experiences of the UK child protection system.

Alongside the twists and turns of the research relationships developed with our participants and the conceptual work involved in presenting their accounts, we have also encountered practical challenges of managing the volume and depth of data generated. This post briefly identifies some of these challenges, and our responses to them.

Our QL study involved 35 men who were fathers or father figures to a child with a newly made child protection plan, recruited between April and August 2015, and taking part for a period of 12 months. The study consisted of two in-depth interviews, at the start and end of the study period, and (approximately) monthly phone contacts with each man. Twenty-eight men participated for the full 12 months. We took a holistic approach, looking back at men’s histories, relationships, fathering experiences and any past encounters with welfare agencies, and then accompanying them forward, into the current encounter with child protection and its impact on their lives.


Our overall approach to the analysis was inductive and iterative, drawing on existing QL methodological literature (Neale, Henwood & Holland, 2012). It also engaged us in thinking about ‘time’ in theoretical and methodological terms: as a concept that shapes how lives are lived, narrated and imagined, and as a resource for examining a significant local authority process. Our practical approach to the management of the high volume of data was a combination of pre-emptive and responsive strategies. Three challenges we encountered were: how to analyse across and within our sample; how to facilitate data sharing across the research team; and how to combine analysis of men’s lives and of the child protection system in a coherent way.

Early on, we decided to use NVivo Frameworks as a mechanism for managing the data (NatCen, 2014; Ritchie et al., 2014), and we constructed a matrix to record aspects of men’s lives and of the unfolding child protection process. This enabled us to collate and analyse data from the outset, rather than separating (and delaying) analysis from data collection. It also established a process for organising the data using the ‘case and wave’ approach adopted in other QL studies (Hughes and Emmel, 2012; Thomson, 2007) to look across the sample by time wave (we divided our 12 months into four three-month periods) and within it, at each man’s individual ‘case’. However, whilst NVivo allowed us to develop a way of structuring our analysis, it did not, in practice, facilitate a reliable way of collaborating across the research team.
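The ‘case and wave’ layout can be pictured as a grid with one row per case and one column per time wave, read across for an individual trajectory or down for a cross-sample comparison within a wave. A hypothetical sketch in Python (the case IDs and cell contents are invented, and this merely stands in for the Framework matrices the team built in NVivo):

```python
# Illustrative 'case and wave' matrix: rows are individual cases,
# columns are the four three-month waves, and each cell holds
# summarised data for that case in that wave.

waves = ["months 1-3", "months 4-6", "months 7-9", "months 10-12"]

def empty_matrix(case_ids):
    """One row per case, one (initially empty) cell per time wave."""
    return {case: {wave: [] for wave in waves} for case in case_ids}

matrix = empty_matrix(["case_01", "case_02"])
matrix["case_01"]["months 1-3"].append("first in-depth interview summary")

# Reading across a row gives one man's trajectory over the year...
print([len(matrix["case_01"][w]) for w in waves])  # [1, 0, 0, 0]

# ...while reading down a column compares the whole sample within one wave.
wave_one = {case: matrix[case]["months 1-3"] for case in matrix}
```

The design choice the grid captures is that the same data can serve both analytic directions – within-case and across-sample – without being duplicated.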

As the researchers, John and I each had a group of men and an accumulating data set that we ‘knew’ better. This meant we needed to develop ways of sharing cases and checking our developing analysis, to build an integrated and credible understanding of the sample as a whole. We found that working independently on, and then trying to merge, copies of our NVivo project just wasn’t viable, and the project files were unstable. We therefore had to devise, or revert to, other strategies for managing this. We continued using our original matrix to summarise data over the four time waves and to help compile the individual case studies, but did this using Word, sharing via a secure drive on the University network. We met monthly as a full team to discuss and compare our analysis, understand the developing cumulative picture, and review the ongoing process of data gathering. We also came to make extensive use of memo writing as a particularly useful means of condensing data, exploring pertinent issues within it, and discussing these with each other. We then took the decision that John and I would each take the lead in analysing one of the two main domains of the data: men’s encounter with the child protection process, and their wider lives as fathers. This ensured that we both had to fully consider all participants’ data and actively collaborate on integrating our work as part of the later, conceptual stages of the analysis.

This project has been intensely demanding and satisfying, at every stage. Finding ways of coping with rich, accumulating data, generated with increasing momentum as research relationships develop, has been just one of these demands. Being committed to an inductive approach, which does justice to the men’s own accounts, whilst also generating a coherent conceptual explanation and meaningful practice messages for social workers, is another. What we have offered here is a tiny glimpse into some of the practical strategies for meeting such multiple demands, which we hope may be useful for other researchers new to QL research.

Our full report will be available from the Centre for Research on Children and Families, from September 2017.

Fathers Research Summary


NatCen (2014) Frameworks in NVivo manual: Step-by-step guide to setting up Framework matrices in NVivo. London: NatCen Social Research.

Neale, B., Henwood, K. and Holland, J. (2012) Researching lives through time: An introduction to the Timescapes approach, Qualitative Research, 12(1): 4-15.

Ritchie, J., Lewis, J., McNaughton Nicholls, C. and Ormston, R. (2014) Qualitative research practice: A guide for social science students and researchers. London: Sage.

Thomson, R. (2007) The qualitative longitudinal case history: Practical, methodological and ethical reflections, Social Policy and Society, 6(4): 571-582.



Guest blog # 9: Virginia Morrow: The ethics of secondary data analysis

We are excited to have a blog this week by Ginny Morrow, Deputy Director of Young Lives. This is an incredible study of childhood poverty which, over the last 15 years, has followed the lives of 12,000 children in Ethiopia, India (in the states of Andhra Pradesh and Telangana), Peru and Vietnam. The aim of Young Lives is to illuminate the drivers and impacts of child poverty, and generate evidence to help policymakers design programmes that make a real difference to poor children and their families.

In this post Ginny reflects on the ethical responsibilities of researchers sharing secondary data.

The ethics of secondary data analysis – respecting communities in research

For the past 10 years, I have been involved with Young Lives, a longitudinal study of children growing up in Ethiopia, India, Peru and Vietnam, which has been an amazing experience and a great privilege. As well as being Deputy Director since 2011, I have been ‘embedded’ in Young Lives as the ethics lead – though it is vital that ethics are not the responsibility of one person, but shared across the whole team.

Young Lives encounters all kinds of ethics questions and dilemmas, and for this guest blog, I have been asked to explore the ethics of secondary data analysis. Arguments about the promises and pitfalls of archiving (qualitative) data are well-rehearsed, as outlined in discussions by Natasha Mauthner and others.

A few years ago, as part of an ESRC-funded National Centre for Research Methods node (2011-14), the Young Lives qualitative research team had a very productive and enjoyable collaboration with colleagues at TCRU in London and at Sussex, Family Lives and Environments, as part of Novella (Narratives of Varied Everyday Lives and Linked Approaches), in which Young Lives qualitative data formed the basis for narrative and thematic analysis of children’s and their families’ relationships to the environment in India (Andhra Pradesh) and England (see Catharine Walker’s thesis). Based on our experiences, we produced a working paper exploring the ethics of sharing qualitative data, and we identified a number of challenges, which we hope have helped other researchers as they grapple with the demands of sharing data.


We argued that sharing data and undertaking secondary analysis can take many forms, and bring many benefits. But it can be ethically complex. One of the considerations that we discussed was responsibilities to participants and to the original researchers, and the need to achieve a contextual understanding of the data by identifying and countering risks of misinterpretation. We highlighted the importance of developing and maintaining trusting relationships between research participants, primary and secondary researchers.

Novella involved a team of qualitative researchers, and we did not fully discuss the ethics of secondary analysis of survey data, bar touching on questions of informed consent. But one of the questions that I have long been concerned about, based on experiences at Young Lives of seeing research based on our publicly archived survey data being used in ways very far from the intentions of our study (which is to explore childhood poverty over time), is the following: how do the people we study and write about feel about the interpretation and use we make of their data? Might they object to how their data are used, and how they are represented in research findings and other media dissemination?

So I was fascinated to learn about the EU-funded project, entitled TRUST, that has led to the generation of the San Code of Research Ethics, launched by the South African San Institute a couple of weeks ago (this video gives a great insight into the project).

The San Code of Ethics calls for respect, honesty, justice and fairness, and care – and asks that the San Council, which represents the San community, be involved in research from inception and design through to approval of the project and subsequent publications. The San are not the only indigenous people to create codes of ethics demanding that they be treated with respect in research, and although the impetus for this initiative came from genomics research, the points about respect are relevant for all research. Two points are worthy of much more attention in research ethics:

  1. Failure by researchers to meet their promises to provide feedback, which the San Council say they have encountered frequently, and which they see as an example of disrespect; and
  2. ‘A lack of honesty in many instances in the past. Researchers have deviated from the stated purpose of research, failed to honour a promise to show the San the research prior to publication, and published a biased paper based upon leading questions given to young San trainees.’

The technicalities of all of this may be challenging, but demand our attention, so that open, honest, and continuous communication can take place, and the hurt caused by lack of justice, fairness and respect can be avoided in the future.


Mauthner, NS. (2016). Should data sharing be regulated? In A Hamilton & WC van den Hoonaard (eds), The Ethics Rupture: Exploring alternatives to formal research-ethics review. University of Toronto Press, pp. 206-229.