Mar 07

Welcome to the project

Welcome to the ‘Working Across Qualitative Longitudinal Studies’ blog. Here you will be able to follow our research process, the team discussions (and debates) and our findings as they evolve. We are very aware that accounts of qualitative research – especially the process of data management and analysis – are often sanitised by the time they reach academic journals. We also know that qualitative longitudinal research (QLR) has the capacity to add new layers of complexity to qualitative data analysis. This not only comes from the volume of data that QLR can generate, but also the disciplinary requirement to engage with temporality in all its forms across and between data sources.

One of the core elements of our project is to contribute to good practice in analysing large scale QLR, both within and across projects. It is incredibly important that our own analytical processes are not kept in a ‘black box’, shrouded in mystery. Rather, over the next two years, the project team (Emma, Lynn, Ros and Susie) will share our own struggles, deliberations and successes. We hope that by doing this we can enrich the landscape of QLR, and support other researchers as they journey through their own projects.

This blog will also include the experiences of others exploring the challenging world of qualitative longitudinal research. With guest posts from early career researchers, to international experts, on topics as varied as the ethics of using big qualitative data, using secondary qualitative data and computer-assisted qualitative data analysis software, we will profile the diversity of QLR taking place in the UK, and beyond.

If you would like to write for us as a guest contributor, please email us to discuss.

Mar 29

Guest blog # 9: Virginia Morrow: The ethics of secondary data analysis

We are excited to have a blog this week by Ginny Morrow, Deputy Director of Young Lives. This is an incredible study of childhood poverty which, over the last 15 years, has followed the lives of 12,000 children in Ethiopia, India (in the states of Andhra Pradesh and Telangana), Peru and Vietnam. The aim of Young Lives is to illuminate the drivers and impacts of child poverty, and generate evidence to help policymakers design programmes that make a real difference to poor children and their families.

In this post Ginny reflects on the ethical responsibilities of researchers sharing secondary data.

The ethics of secondary data analysis – respecting communities in research

For the past 10 years, I have been involved with Young Lives, a longitudinal study of children growing up in Ethiopia, India, Peru and Vietnam, which has been an amazing experience and a great privilege. As well as being Deputy Director since 2011, I have been ‘embedded’ in Young Lives as the ethics lead – though it is vital that ethics are not the responsibility of one person, but shared across the whole team.

Young Lives encounters all kinds of ethics questions and dilemmas, and for this guest blog, I have been asked to explore the ethics of secondary data analysis. Arguments about the promises and pitfalls of archiving (qualitative) data are well-rehearsed, as outlined in discussions by Natasha Mauthner and others.

A few years ago, as part of an ESRC-funded National Centre for Research Methods node (2011-14), the Young Lives qualitative research team had a very productive and enjoyable collaboration with colleagues at TCRU in London and Sussex, Family Lives and Environments, as part of Novella (Narratives of Varied Everyday Lives and Linked Approaches). Young Lives qualitative data formed the basis for narrative and thematic analysis of children’s and their families’ relationships to the environment in India (Andhra Pradesh) and England (see Catharine Walker’s thesis). Based on our experiences, we produced a working paper exploring the ethics of sharing qualitative data, and we identified a number of challenges which we hope have helped other researchers as they grapple with the demands of sharing data.


We argued that sharing data and undertaking secondary analysis can take many forms, and bring many benefits. But it can be ethically complex. One of the considerations that we discussed was responsibilities to participants and to the original researchers, and the need to achieve a contextual understanding of the data by identifying and countering risks of misinterpretation. We highlighted the importance of developing and maintaining trusting relationships between research participants, primary and secondary researchers.

Novella involved a team of qualitative researchers, and we did not fully discuss the ethics of secondary analysis of survey data, bar touching on questions of informed consent. But one of the questions that I’ve long been concerned about, based on experiences at Young Lives of seeing research based on our publicly archived survey data used in ways very far from the intentions of our study (which is to explore childhood poverty over time), is the following: how do the people we study and write about feel about the interpretation and use we make of their data? Might they object to how their data are used, and how they are represented in research findings and other media dissemination?

So I was fascinated to learn about the EU-funded project, entitled TRUST, that has led to the generation of the San Code of Research Ethics, launched by the South African San Institute a couple of weeks ago (this video gives a great insight into the project).

The San Code of Ethics calls for respect, honesty, justice and fairness, and care – and asks that the San Council, which represents the San Community, be involved in research from inception and design through to approval of the project and subsequent publications. The San are not the only indigenous people to create a code of ethics demanding that they be treated respectfully in research, and although the impetus for this initiative came from genomics research, the points about respect are relevant for all research. Two points are worthy of much more attention in research ethics:

  1. Failure by researchers to meet their promises to provide feedback, which the San Council say they have encountered frequently, and which they see as an example of disrespect; and
  2. ‘A lack of honesty in many instances in the past. Researchers have deviated from the stated purpose of research, failed to honour a promise to show the San the research prior to publication, and published a biased paper based upon leading questions given to young San trainees’

The technicalities of all of this may be challenging, but demand our attention, so that open, honest, and continuous communication can take place, and the hurt caused by lack of justice, fairness and respect can be avoided in the future.


Mauthner, NS. (2016). Should data sharing be regulated? In A Hamilton & WC van den Hoonaard (eds), The Ethics Rupture: Exploring alternatives to formal research-ethics review. University of Toronto Press, pp. 206-229.

Feb 06

Guest blog # 8: Dr Sarah Wilson: Using qualitative secondary analysis as a tool of critical reflexivity

Our guest post today is by Sarah Wilson, a Senior Lecturer in Sociology in the School of Applied Social Science at the University of Stirling. Sarah’s research interests are in the sociology of families, relationships and personal life, with a methodological focus on developing visual, audial and artistic qualitative research. In this post, Sarah reflects on her qualitative secondary analysis of data from the Timescapes ‘Siblings and Friends’ project, a longitudinal dataset with which we are also working, and how this process prompted reflection on her own research practices.

This post draws on Sarah’s 2014 article in Sociological Research Online, ‘Using secondary analysis to maintain a critically reflexive approach to qualitative research’ which you can read here:


Using qualitative secondary analysis as a tool of critical reflexivity

Maintaining a critical, reflexive approach to research when engaging in specialised work is not easy. Partly because of the need to convince funders of their expertise, researchers often focus on relatively circumscribed areas of inquiry, with samples drawn from particular social groups.

My own research has focused on samples characterised as ‘vulnerable’; notably young people affected by parental substance misuse or living ‘in care’. Often this work has been located within more ‘applied’ approaches to social research, and influenced by funders’ concerns. Such work is valuable. However, the segregation often maintained between research with young people from more ‘vulnerable’ and more ‘ordinary’ backgrounds may reinforce perceptions that the experiences, values and aspirations of members of each ‘category’ are distinct. As Law (2009) argues, research is ‘performative’, helping to re-produce and reinforce perceptions of social groups. In the current political context, such distinctions may even implicitly reinforce the stigmatisation of ‘troubled families’. As such, there is a need to find ways to subject one’s own research practice to scrutiny.

To better situate my previous research, I engaged in qualitative secondary analysis of the longitudinal Timescapes ‘Siblings and Friends’ (SAF) study to prepare for a new project with ‘looked after’ young people: Young people creating belonging: spaces, sounds and sights (ESRC RES-061-25-0501). The idea was to reflect on my own approaches, and previous framings of interview questions in the light of the very rich SAF project data which involved predominantly ‘ordinary’ young people from across the UK. This proved to be an illuminating, if demanding, process that prompted further thought about both projects.

Importantly, this analysis highlighted significant commonalities between the experiences of those included in ‘ordinary’ and ‘vulnerable’ samples. Notably, the SAF data included several accounts of strained family relationships, of parental mental ill-health and of undesirable housing conditions that suggested family circumstances comparable to those in my previous work on parental substance misuse. However, the SAF interview questions situated violence outside of the home. As Gillies (2000) argues, even where ‘difficult’ accounts within ‘ordinary’ samples are identified, they are often not written up. As such, the complexity and pain within ‘ordinary’ families may be under-estimated in research, and potentially more easily obscured within political discourse. Similarly, the everyday ambiguity and minor conflicts associated with ‘ordinary’ siblings and parents sharing limited space may be downplayed.

Such ambiguities and tensions led several SAF respondents to seek out friends’ homes, or private corners of their own, to escape from family life at least for a time. I had previously associated such strategies with young people affected by parental substance use, many of whom often spent time at friends’ houses. However, this analysis suggested a more nuanced understanding of the importance to the latter group of employing strategies that could be presented as ‘ordinary’ teenage practices.

The process of secondary analysis also highlighted uncomfortable omissions from my previous research in which, for various reasons, greater emphasis was placed on the respondents’ own potential substance use than on their school work and employment aspirations. The predominance of such concerns in the SAF accounts led me to worry that my own research had reflected and performed perceptions of education as less important to ‘vulnerable’ than to ‘ordinary’ young people.

In conclusion, qualitative secondary analysis is a ‘labour-intensive, time-consuming process’ that Gillies and Edwards (2005: para24) compare to primary data collection. However, it presents a useful tool to subject assumptions built up over a specialised research career to scrutiny.



Gillies, V. (2000) ‘Young people and family life: analysing and comparing disciplinary discourses’, Journal of Youth Studies, 3(2): 211-228

Gillies, V. and Edwards, R. (2005) ‘Secondary analysis in exploring family and social change: addressing the issue of context’, Forum: Qualitative Social Research, 6(1): art 44.

Law, J. (2009), ‘Assembling the World by Survey: Performativity and Politics’, Cultural Sociology, 3, 2, 239-256.

Wilson, S. (2014) ‘Using secondary analysis to maintain a critically reflexive approach to qualitative research’ Sociological Research Online, 19(3), 21




Jan 16

Guest post #7, Dr Gregor Wiedemann: Computer-assisted text analysis beyond words

Dr Gregor Wiedemann works in the Natural Language Processing Group at Leipzig University. He studied Political Science and Computer Science in Leipzig and Miami. In his research, he develops methods and workflows of text mining for applications in social sciences. In September 2016, he published the book “Text Mining for Qualitative Data Analysis in the Social Sciences: A Study on Democratic Discourse in Germany” (Springer VS, ISBN 978-3-658-15309-0).

In this blog, he discusses computational textual analysis and the opportunities it presents for qualitative research and researchers. 

Computer-assisted text analysis beyond words

In our digital era, the amount of textual data is growing rapidly. Unlike data traditionally acquired in qualitative analysis, such as interviews, texts from (online) news articles, user commentaries or social network posts are usually not generated directly for the purpose of research. This huge pool of new data provides interesting material for analysis, but it also challenges qualitative research to open up to new methods. Some of these were introduced in blog post #4, where computer-assisted text analysis using Wordsmith and Wordstat was discussed as a means of allowing an ‘aerial view’ of the data, e.g. by comparative keyword analysis.

Despite the long history of computer-assisted text analysis, it has remained a parallel development with little interaction with qualitative analysis. Methods of lexicometric analysis, such as the extraction of keywords, collocations or frequency analysis, usually operate on the level of single words. Unfortunately, as Benjamin Schmidt phrased it, “words are frustrating entities to study. Although higher order entities like concepts are all ultimately constituted through words, no word or group can easily stand in for any of them” (2012). Since qualitative studies are interested in the production of meaning, of what is said and how, there certainly are overlaps with lexicometric measures, but nonetheless their research subjects appear somewhat incompatible. Observing words alone, without respect to their local context, appears a rough simplification compared to a hermeneutic close reading and interpretation of a text passage.

The field of natural language processing (NLP), from the discipline of computer science, provides a huge variety of (semi-)automatic approaches for large-scale text analysis, and has only slowly been discovered by social scientists and other qualitative researchers. Many of these text mining methods operate on semantics beyond the level of isolated words, and are therefore much more compatible with established methods of qualitative text analysis. Topic models, for instance, allow for automatic extraction of word and document clusters in large document collections (Blei 2012). Since topics represent measures of latent semantic meaning, they can be interpreted qualitatively and utilised for quantitative thematic analysis of document collections at the same time. Text classification, as a method of supervised machine learning, provides techniques even closer to established manual analysis approaches. It allows for automatic coding of documents, or parts of documents such as paragraphs, sentences or phrases, on the basis of manually labelled training sets. The classifier learns features from hand-coded text, where coding is realised analogously to conventional content analysis. The classifier model can be seen as a ‘naïve coder’ who has learned characteristics of language expressions representative of a specific interpretation of the meaning of a text passage. This ‘naïve coder’ is then able to process and code thousands of new texts, which explicitly opens the qualitative analysis of categories up to quantification.
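To make the ‘naïve coder’ idea concrete, here is a minimal sketch of supervised text classification using a bag-of-words naive Bayes model in pure Python. The training sentences and their labels are invented toy examples (not data from any study discussed here); a real application would use an established machine learning library and a far larger hand-coded training set.

```python
import math
from collections import Counter, defaultdict

# Hand-coded 'training set': sentences labelled as expressing
# demarcation (1) or not (0). These are invented toy examples.
train = [
    ("the fascist party needs to be banned", 1),
    ("the communist protests pose a threat to our democracy", 1),
    ("extremist ideologies must be excluded from parliament", 1),
    ("the chancellor opened the new railway line", 0),
    ("voters discussed economic policy at the town hall", 0),
    ("the weather in berlin was mild this spring", 0),
]

def tokenize(text):
    return text.lower().split()

# Learn word counts per class (multinomial naive Bayes with add-one smoothing)
class_docs = Counter()
word_counts = defaultdict(Counter)
vocab = set()
for text, label in train:
    class_docs[label] += 1
    for w in tokenize(text):
        word_counts[label][w] += 1
        vocab.add(w)

def score(text, label):
    # log prior plus sum of smoothed log likelihoods for each token
    logp = math.log(class_docs[label] / sum(class_docs.values()))
    total = sum(word_counts[label].values())
    for w in tokenize(text):
        logp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
    return logp

def classify(text):
    # the 'naïve coder': assign the label with the highest score
    return max(class_docs, key=lambda label: score(text, label))

print(classify("the radical party must be banned from our democracy"))  # prints 1
print(classify("the town hall discussed the railway"))                  # prints 0
```

Once trained on hand-coded material, the same `classify` function can be applied to thousands of uncoded sentences, which is what makes the quantification of qualitatively defined categories possible.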

In my dissertation study on the discourse of democratic demarcation in Germany (Wiedemann 2016), I utilised methods of text mining in an integrated, systematic analysis of more than 600,000 newspaper documents covering a time period of more than six decades. Among other things, I tracked categories of left-wing and right-wing demarcation in the public discourse over time. Categories were operationalised as sentences expressing demarcation against, or a demand for, exclusion of left-/right-wing political actors or ideologies from the legitimate political spectrum (e.g. “The fascist National Democratic Party needs to be banned” or “The communist protests in Berlin pose a serious threat to our democracy”). Using automatic text classification, I was able to measure the distribution of such qualitatively defined categories in different newspapers between 1950 and 2011. As an example, the following figure shows relative frequencies of documents containing demarcation statements in the German newspaper, the Frankfurter Allgemeine Zeitung (FAZ).
The distribution indicates that demarcation against left-wing actors and ideology for a long time exceeded right-wing demarcation. Soon after 1990, the latter became the primary subject of discourse on threats to German democracy. The enormous benefit of automatic classification is that it allows for easy comparison of publications (e.g. other newspapers) or relations with any other category. For instance, the distribution of “reassurance of democratic identity”, a third category I measured, strongly correlates with right-wing demarcation, but not with left-wing demarcation. Such a finding can be realised only by a combination of the qualitative and the quantitative paradigm.

While computer-assisted methods clearly support qualitative researchers in the task of retrieving “what” is being said in large data sets, they certainly have limitations on the more interpretive task of reconstructing “how” something is said, i.e. the characterisation of how meaning is produced. It is an exciting future task for qualitative research to determine how today’s state-of-the-art NLP methods may contribute to this requirement. In this respect, computational analysis extends the toolbox of qualitative researchers by complementing their well-established methods. It offers conventional approaches new chances for reproducible research designs and opportunities to open up to “big data” (Wiedemann 2013). Currently, actors in the emerging field of “data science” are a major driving force in computational textual analysis for social science questions. Since I repeatedly observe a lack of basic methodological and theoretical knowledge of qualitative research in this field, I look forward to a closer interdisciplinary integration of the two.

Further reading

Blei, David M. 2012. “Probabilistic topic models: Surveying a suite of algorithms that offer a solution to managing large document archives.” Communications of the ACM 55 (4): 77–84.

Schmidt, Benjamin M. 2012. “Words alone: dismantling topic models in the humanities.” Journal of Digital Humanities 2 (1). Url

Wiedemann, Gregor. 2013. “Opening up to Big Data. Computer-Assisted Analysis of Textual Data in Social Sciences.” Historical Social Research 38 (4): 332-357.

Wiedemann, Gregor. 2016. Text Mining for Qualitative Data Analysis in the Social Sciences: A Study on Democratic Discourse in Germany. Wiesbaden: Springer VS, Url:

Dec 14

Guest post #6, Nick Emmel: Revisiting yesterday’s data today

Today we welcome Dr Nick Emmel as our guest blogger. Nick has been investigating social exclusion and vulnerability in low-income communities in a city in northern England since 1999. The research discussed in this blog, Intergenerational Exchange, was an investigation of the care grandparents provide for their children. This was part of Timescapes, the ESRC’s qualitative longitudinal research initiative. More details of this research are available at

In this thought provoking post, Nick reflects on his experiences of revisiting qualitative data, and the ways in which new interpretations and explanations are generated over time. 


Revisiting yesterday’s data today 

I have recently finished writing a paper about vulnerability. This is the third in an ongoing series of published papers; the first was published in 2010 and the second in 2014 (Emmel and Hughes, 2010; 2014; Emmel, 2017). Each elaborates and extends a model of vulnerability. All three are based on the same data collected in a qualitative longitudinal research project, Intergenerational Exchange, a part of Timescapes and its archive. The second and third papers also draw on newly collected data from subsequent research projects. In this blog I want to explore how interpretation and explanation are reconstituted and reconceived through engagement with these new data and theory, considering some methodological lessons in the context of qualitative longitudinal research.

At first sight, the narratives about poverty, social exclusion, and the experiences of grandparenting told to us by Bob and Diane, Ruth, Sheila, Geoff and Margaret, and Lynn, which populate these three papers, seem fixed, even immutable. After all, I am still using the same printed transcripts from interviews conducted between 2007 and 2011, marked up with a marginalia of memos and codes in my micrographia handwriting, text emphasised with single and double underlines in black ink. But each time I get these transcripts out of the locked filing cabinet in my office I learn something new.

To start with there are the misremembered memories of what is actually in the transcripts. Many of the stories our participants tell, Geoff and Margaret’s account of the midnight drop, Sheila bathing her kids in the washing machine, or Lynn walking into the family court for the first time, I have retold over and over again. In their retelling details have been elaborated, twisted, and reworked to make better stories so my students, service deliverers, and policy makers will think a little harder, I hope, about powerlessness, constrained powerfulness, and ways in which excluded people depend on undependable service delivery. In this way they are no different to the original stories, neither truth nor untruth, but narrated for a purpose, to describe experience in qualitative research. Getting the detail and emphasis right is important. The participants know their lived experience far better than I do. Re-reading the transcripts, these stories are reattached to their empirical moorings once again. But this is only the start of their reanalysis.

Rereading may confirm empirical description but past interpretations are unsettled by new empirical accounts. New knowledge has the effect, as Barbara Adam (1990:143) observes, of making the ‘past as revocable and hypothetical as the future’.  In the most recent of the three papers the apparently foundational role of poverty elaborated in our first paper is reinterpreted. New data from relatively affluent grandparents describe the barriers they face in accessing services and the ways in which these experiences make them vulnerable. This knowledge has the effect of reconstituting the original transcripts, shifting attention away from the determining role of poverty to relationships with service providers in which poverty may play a generative part. These data evoke new interpretations. But it is not only new empirical accounts that reshape this longitudinal engagement, new ideas are at play.

In this blog I have suggested that new empirical accounts change how we understand and interpret existing data. To ascribe reinterpretation only to these insights is not enough, however. Explanations rely on more than reconstructing empirical accounts in the light of new insight. For a realist like me, theories guide the reading of the original transcripts and the collection of new data. Theories are practical things, bundles of hypotheses to be judged and refined empirically. We started with a theory about time as a chronological progression of events, as is explained in the first paper. Our participants noticed little difference as recession merged with recession all the way back to the closure of the estate’s main employer in 1984. This theory was found wanting when we came to look at young grandparenthood and engagement with service provision in the second paper. A refined theoretical account of the social conscience of generational and institutional time supported explanation. These theories, like the empirical accounts of the social world they are brought into a relation with, are revocable and only ever relatively enduring.

To paraphrase the Greek philosopher Heraclitus, no researcher ever steps into the same river twice, for it is not the same river and it is not the same researcher. Revisiting yesterday’s data today reminds us of these methodological lessons in qualitative longitudinal research.


Adam, B (1990) Time and social theory Polity Press, Cambridge.

Emmel, N. (2017) Empowerment in the relational longitudinal space of vulnerability. Social Policy and Society. July.

Emmel, N. & Hughes, K. (2010) “‘Recession, it’s all the same to us son’: the longitudinal experience (1999-2010) of deprivation”, 21st Century Society, vol. 5, no. 2, pp. 171-182.

Emmel, N. & Hughes, K. (2014) “Vulnerability, inter-generational exchange, and the conscience of generations,” in Understanding Families over Time: Research and Policy, Holland J & Edwards R, eds., Palgrave, Basingstoke.

Image source: Fosco Lucarelli (


Dec 03

Research team blog 6: Getting out of the swamp

Dear friends,

We have been working with Dr Anna Tarrant during the course of our project (Anna was our first guest blogger – read again here). Anna’s research, ‘Men, Poverty and Lifetimes of Care’, is funded by the Leverhulme Trust and University of Leeds and is exploring change and continuities in the care responsibilities of men who are living on a low income. Like us, Anna is drawing on data from the Timescapes research programme, including Following Young Fathers and Intergenerational Exchange.

Anna has a great new article out in which she looks at how the secondary analysis of thematically related qualitative longitudinal (QL) datasets might be used productively in qualitative research design.

The article abstract is below, as is a link to the full text. Happy reading!

Anna Tarrant (2016): ‘Getting out of the swamp? Methodological reflections on using qualitative secondary analysis to develop research design’, International Journal of Social Research Methodology, DOI: 10.1080/13645579.2016.1257678

In recent years, the possibilities and pitfalls of qualitative secondary analysis have been the subject of on-going academic debate, contextualised by the growing availability of qualitative data in digital archives and the increasing interest of funding councils in the value of data re-use. This article contributes to, and extends these methodological discussions, through a critical consideration of how the secondary analysis of thematically related qualitative longitudinal (QL) datasets might be utilised productively in qualitative research design. It outlines the re-use of two datasets available in the Timescapes Archive, that were analysed to develop a primary empirical project exploring processes of continuity and change in the context of men’s care responsibilities in low-income families. As well as outlining the process as an exemplar, key affordances and challenges of the approach are considered. Particular emphasis is placed on how a structured exploration of existing QL datasets can enhance research design in studies where there is limited published evidence.

Nov 14

Research team blog 5: Time in Timescapes

It is obvious to state that time is the most important aspect of qualitative longitudinal research, since it affords a rich insight into the phenomena being studied as they evolve. Yet throughout our project, time has been one of the most difficult aspects of the data on which to get an analytical ‘grip’.

Time in Timescapes

Time matters – yet its presence is complex, fluid and intersectional. These many dimensions, or layers of time, are captured in our data archive. These include biologically defined life cycle stages (aging and developmental change), family and kinship groups (aligned vertically through time), age cohorts (aligned horizontally through time), as well as socially / culturally defined categories, sequences or events (such as becoming a parent).

Time is a narrated aspect of the texture of social life. Our data show this intersection between time and space, with participants variously describing ‘time’ as something that can be in short supply, in demand and, within the context of work and family lives, a source of negotiation, stress and, at times, conflict. Time can also be part of the more abstract notion of ‘being there’, where time spent together provides the basis through which caring and intimate relationships are created and sustained.

Time is also historical. The projects themselves have a temporal identity, as an archive of a particular epoch and the particular socio-economic contexts in which individual lives were unfolding. At a further level, time frames the research process, and does so differently across the six projects for which we have data. Each was conducted in broadly the same historical time, yet they captured time in different ‘waves’, and using different methods (from life history / biographical interviews, through to daily diaries and ‘day in the life’ observations).

How time matters, and how we can ensure it foregrounds our analysis, will be an ongoing source of reflection for our project. To help us make sense of some of this messiness we have begun to ‘map’ the time in Timescapes using Tiki Toki, a web-based software for creating interactive timelines. In our timeline we have sought to capture when participants within each study were born, the epoch in which the study was conducted, the duration of each study and the different ‘waves’ of research. We have also sought to include any key outputs from the project and any follow-on studies (such as Anna Tarrant’s ongoing work on Men, Poverty and Care). These latter aspects will be added to as the study progresses.

Of course, our portrayal of time is two dimensional, and is in part a pragmatic effort to tidy the messiness of time. Its limitation is in our inability to ‘map’ the social, cultural and emotional dimensions of time, and how these intersect (i.e. the emotional and practical connections within, and between, generations, or how these change or stay the same across different historical time frames). That is an aspect of time that our ongoing processes of analysis will seek to capture.

To open our Tiki Toki, click on the image below. Please let us know what you think, and if you decide to design your own, share it with us here.


Nov 08

Research team blog 4: Approaches to Analysing Qualitative Data: Archaeology as a Metaphor for Method, 18th October 2016

Today we would like to share the videos from our NCRM seminar on Approaches to Analysing Qualitative Data, where we presented our ongoing work alongside Professor Emeritus Clive Seale and Professor Maria Tamboukou. In the seminar we used the metaphor of archaeology to think about how we can ‘dig down’, and where we should dig, to get an analytic grip when working with large and complex bodies of qualitative data. It was a great event, and we learnt a lot from our co-presenters and audience about how to develop our approach to analysing large volumes of qualitative data.

Professor Seale provided a fantastic overview of how to use computer-assisted text analysis when working with a corpus of qualitative data that is too large to be analysed using conventional analytical approaches. He used two packages – Wordsmith and Wordstat – to demonstrate the ways comparative keyword analysis can reliably analyse large amounts of text, providing a picture that is ‘less biased’ by the researchers’ own subjectivity. This big ‘aerial’ view can then be combined with more in-depth qualitative analysis to facilitate an approach which bridges the quan-qual divide.
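
Wordsmith and Wordstat are commercial packages, but the core idea behind comparative keyword analysis can be sketched in a few lines. The Python sketch below is our own illustration (not Professor Seale’s implementation): it ranks words by Dunning’s log-likelihood (G2) ‘keyness’, scoring how much more characteristic each word is of a target corpus than of a reference corpus.

```python
from collections import Counter
from math import log

def keywords(target_tokens, reference_tokens, top_n=5):
    """Rank words by Dunning's log-likelihood (G2) 'keyness': how much
    more characteristic a word is of the target corpus than the reference."""
    t, r = Counter(target_tokens), Counter(reference_tokens)
    nt, nr = sum(t.values()), sum(r.values())
    scores = {}
    for word, a in t.items():
        b = r.get(word, 0)
        # expected counts if the word were used at the same rate in both corpora
        e1 = nt * (a + b) / (nt + nr)
        e2 = nr * (a + b) / (nt + nr)
        scores[word] = 2 * (a * log(a / e1) + (b * log(b / e2) if b else 0))
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Toy comparison: which words are over-represented in the first corpus?
target = "time change family time change".split()
reference = "policy data policy data data".split()
print(keywords(target, reference))  # → ['time', 'change', 'family']
```

The ‘aerial view’ the seminar described comes from running this kind of comparison over thousands of documents, then returning to the highest-scoring words in their original context for in-depth qualitative reading.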

Professor Maria Tamboukou’s presentation brought together the theory and method of archival research, particularly in the context of Foucault’s archaeological framework. She noted that Foucault, despite being an ‘archive’ addict, wrote little about the ‘nuts and bolts’ of ‘doing’ archival research. Drawing on her own research, Professor Tamboukou provided insight into the working practices of archival research. As researchers we come to the archive with specific questions, and as such have a role in defining the archive and the knowledge that comes from it. But the fragments and traces in the archive, she noted, should also surprise us and challenge our pre-existing judgements and prejudices.

The presentations looked at very different methodological approaches, but both have helped us develop our own project. We are drawing on data from an archive: this not only holds the stories of the research participants, but also traces of the original researchers and archivists. In accessing the data, manipulating it and asking our questions of interest, we are making our own trace. Yet we also want to be surprised by our data, and are seeking to use keyword analysis in a way that allows us to excavate new layers of understanding and meaning from the data. We see potential in conceptualising our approach as one which employs theoretical and empirical investigation, using both as a means of moving between the stages of our own archaeological metaphor.

Continue to follow our website for more information on our project as it progresses. In the meantime, you will find references to Professor Seale’s work within his presentation. You may also like to look at the chapter he wrote with Jonathan Charteris-Black for The SAGE Handbook of Qualitative Methods in Health Research, ‘Keyword Analysis: A New Tool for Qualitative Research’.

Please also look at Professor Tamboukou’s wonderful new book ‘The Archive Project’, which is co-written with Niamh Moore, Andrea Salter and Liz Stanley. You can visit the project’s website here.

Oct 11

Research team blog 3: Case Histories in Qualitative Longitudinal Research, 6th & 7th October 2016

Susie and I were delighted to have been invited to the University of Sussex this week to talk about our project at an NCRM event on case histories. Organised by case study experts, Rachel Thomson and Julie McLeod, the event brought together an international and interdisciplinary group of researchers working with, and negotiating, ‘cases’ in qualitative longitudinal research.

An overriding theme was the tricky question of what makes a case ‘a case’. A case, it seems, can be many things. Cases can be selected theoretically or pragmatically; they can represent the ‘typical’ or the extraordinary; and they can relate to, and shape, both process and outcome. As noted by Rebecca Taylor, a case study design can help the qualitative researcher ‘frame’ their research, enabling a stronger analytical grip, particularly when handling large or complex data. But whilst the case can provide the means through which to make sense of the messiness of our data, it doesn’t necessarily have to structure it. The case in itself can be temporal. Its boundaries can shift (and be shifted) over time, responding to events and actors in the field, new points of comparison and emergent theoretical possibilities.

Two points made at the training spoke to our research. The first is that, despite their fluidity, case studies are by nature reductionist: decision making about how your cases should be bounded inevitably results in some things being included, and other aspects being excluded. The second is that while a case may be bounded in some way (a case can be a town, an organisation, a family, an individual), it should always seek to illuminate something bigger, and be larger than the sum of its parts.

If you have been following our work, you will know that we are working with secondary data from six of the projects archived under the Timescapes initiative. Together, this dataset amounts to 1,000 documents, and 165 ‘sets’ of data (cases?) relating to individuals and families. We have been using the metaphor of the archaeological excavation as a means of approaching our dataset. Working as ‘aerial archaeologists’ we began by completing a ‘surface survey’ of the dataset, looking at its contours and texture. A ‘geophysical survey’ followed, utilising new computational methods through which to conduct a corpus analysis of the dataset. With this big ‘aerial’ view in place, we began the process of digging into our data. Initial ‘shovel test pits’ involved the selection, and analysis, of a small number of selected cases. We are now planning ‘deep excavations’. These will involve a larger number of case studies, but will focus on a narrower research question.
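
As a purely hypothetical illustration of what a ‘surface survey’ might involve computationally (the project’s actual tools and file structures are not described here), the sketch below profiles each project in a corpus by its document count, total word tokens and distinct word types, giving a first sense of the dataset’s contours before any digging begins.

```python
def surface_survey(projects):
    """Profile each project in a corpus before 'digging': number of
    documents, total word tokens, and distinct word types."""
    summary = {}
    for name, docs in projects.items():
        # very rough tokenisation: lowercase and split on whitespace
        tokens = [w.lower() for doc in docs for w in doc.split()]
        summary[name] = {
            "documents": len(docs),
            "tokens": len(tokens),
            "types": len(set(tokens)),
        }
    return summary

# Toy corpus standing in for archived interview transcripts
projects = {"Project A": ["The cat sat", "the dog"]}
print(surface_survey(projects))
# → {'Project A': {'documents': 2, 'tokens': 5, 'types': 4}}
```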

Our process is very much work in progress. Key questions for moving forward are how we should define a case and, in turn, how our cases should be selected. In essence our aim is to demonstrate the ways in which corpus analysis (the aerial view if you like) can assist in the analysis of large volumes of qualitative data, whilst retaining the forms of analysis which provide biographical depth, and insight into the complex micro dynamics of how, why, and in what circumstances, change happens. Most critically we want to demonstrate how these approaches can be brought into dialogue – enriching each other, rather than sitting in opposition.

We continue to develop our work and will be reporting our progress here. We have a forthcoming event in London on the 18th October where we will discuss our archaeological metaphor in more detail. Further training events are being planned for the months ahead.

We will finish by saying thank you to all the presenters and participants at the event. It was incredibly valuable, and has given us many, many ideas for developing our project. We look forward to sharing our explorations with you.



Aug 08

Forthcoming event: Approaches to Analysing Qualitative Data, 18th October 2016

On the 18th October 2016 we will be hosting a seminar at The Foundling Museum in London, ‘Approaches to Analysing Qualitative Data: Archaeology as a Metaphor for Method’.

The seminar will ask: how can we ‘dig down’, and where should we dig, to get an analytic grip when working with large and complex bodies of qualitative data? The metaphor of archaeology enables qualitative analysts to think about what lies ‘underneath’ the corpus of material being analysed, working extensively and intensively to identify and excavate meaning. Researchers working with different bodies of qualitative materials will be discussing how they approached their analysis, from a range of methodological perspectives. The seminar is likely to be of interest and use to researchers with a range of qualitative analytic skills and experience, from postgraduate to senior.

Our speakers include:

Professor Emeritus Clive Seale (Brunel University): An archaeological approach working with keyword analysis of a large corpus of qualitative data

Professor Maria Tamboukou (University of East London): Archaeology of knowledge and working in the archives

Dr. Susie Weller (University of Southampton) and Dr. Emma Davidson (University of Edinburgh): A layered archaeological approach to analysis across multiple sets of qualitative longitudinal data

For full details and booking, visit the ‘Training and Events’ page on the NCRM website. We look forward to welcoming you there!


Aug 01

Guest post #5, Sue Bellass: The challenges of multiple perspectival QL analysis


Our guest post today is by Sue Bellass, a PhD student in the School of Nursing, Midwifery, Social Work and Social Sciences at the University of Salford. Her thesis, which she is due to submit in August, has been exploring how intergenerational families are affected by young onset dementia over time.

In this post, Sue shares in detail her approach to analysing data over time, from multiple perspectives. The process has been complex and challenging, but has also brought creativity and freedom – and ultimately a deeper understanding of the lived experience of young onset dementia.

If you would like to know more about Sue’s research, contact her by email:

The challenges of multiple perspectival qualitative longitudinal (QL) analysis: a strategy created for an intergenerational study of young onset dementia

Although dementia is often perceived to be a condition that occurs in later life, around 1 in 20 people with dementia are below the age of 65 (Alzheimer’s Society, 2015). Over the last two decades there has been increasing interest in developing qualitative understandings of the experience of the condition in younger people; however, almost without exception existing studies have used cross-sectional designs, providing only a snapshot of life with an unpredictable, dynamic condition. For my PhD I decided to use a QL methodology to explore relationality over a twelve-month period by following five intergenerational families where one person had received a diagnosis of young onset dementia.

Since people with dementia are a marginalised, negatively positioned group (Sabat et al., 2011), I felt it was appropriate to democratise the research process to enable my participants to choose their preferred means of engaging with the study. This choice included the method of data collection (ethical approval was gained for interviews, audio/ video diaries, blogs and tweets) and, if participants opted for interviews, which family members would participate and where the interviews would take place.  Ultimately, 18 participants chose to be interviewed, 16 of whom were interviewed in pairs or larger family groups, with two preferring individual interviews. Interviews were conducted in three waves at months 0, 6 and 12.

Analysing the data set has been a challenging process. As Henderson et al. (2012) note, despite increasing interest in QL methods, methods of analysing and representing complex QL data sets have rarely been explicated. I experienced this as a mixed blessing; on the one hand, there is space for creativity, flexibility and freedom, on the other, there is room for doubt to flourish!  I have attempted to slice the data in different ways in order to interrogate the data set to best effect.  Inspired by Thomson (2010, 2014), I treated each family as a unique case and also aimed to create a cross-case analysis across the four generations represented in the families.

Example QL matrices


Initially I attempted to analyse the group interviews at the ‘family’ level; however, it quickly became apparent that divergent accounts were being obscured. Subsequently I took a multiple perspectival approach (Ribbens McCarthy et al., 2003), teasing apart individual experiences within the families and viewing them as cases within a case. For each person, I induced categories of experience and then, to permit holistic re-engagement, organised the raw data in a time-ordered matrix across the three waves.

Then, again for each person, I created a longitudinal matrix adapted from Saldaña (2003) to look for transitions and continuities, using motif coding, a form of coding which draws attention to recurring elements in experiences, and describing through-lines, a crystallisation of a participant’s change over time. Although it could be argued that such an approach may disguise intersubjective creation of meaning, I consciously retained a focus on relationality, creating spaces within the matrix to capture data on meaning-making processes over time. Finally I created an intergenerational matrix, organising the data by generation to look for patterns and themes, setting the data against the backdrop of the recent increasing public, policy and research interest in dementia to try and interweave biographical, generational and historical timescapes.
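
Sue’s matrices were crafted by hand, but for readers wanting to experiment, the basic shape of a time-ordered matrix can be represented very simply in code. The Python sketch below is an illustration only (the participant names, waves and categories are hypothetical): it gathers coded excerpts into a participant-by-wave structure so that each person’s account can be read across the interview waves.

```python
def time_ordered_matrix(excerpts, waves=(0, 6, 12)):
    """Gather coded excerpts into a participant-by-wave matrix so each
    person's account can be read across the interview waves (in months)."""
    matrix = {}
    for person, wave, code, text in excerpts:
        # one row per participant, one cell (list of coded excerpts) per wave
        row = matrix.setdefault(person, {w: [] for w in waves})
        row[wave].append(f"{code}: {text}")
    return matrix

# Hypothetical excerpts: (participant, wave in months, category, quote)
excerpts = [
    ("Anna", 0, "diagnosis", "It came out of the blue"),
    ("Anna", 6, "adjustment", "We have a routine now"),
]
print(time_ordered_matrix(excerpts))
```

Reading across a row gives the ‘through-line’ view of one person over time; reading down a wave compares family members at the same moment, which is where divergent accounts become visible.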

Qualitative research has faced criticism for lack of clarity regarding the relationship between theory and data, and this, I argue, is an important area to address as we continue to develop the contours of QL research. My own perspective has been influenced by Mills (1959), who describes a ‘shuttle back and forth’ between theory and data. I have utilised such an iterative approach, and have drawn on theory from the sociology of chronic illness and family and relationship sociology to develop understandings of the intergenerational experience of young onset dementia.


Alzheimer’s Society (2015). Dementia 2015: Aiming higher to transform lives. London: Alzheimer’s Society.

Henderson, S., Holland, J., McGrellis, S., Sharpe, S., & Thomson, R. (2012). Storying qualitative longitudinal research: sequence, voice and motif. Qualitative Research, 12(1), 16-34.

Mills, C.W. (1959). The sociological imagination. London: Penguin.

Ribbens McCarthy, J., Holland, J., & Gillies, V. (2003). Multiple perspectives on the ‘family’ lives of young people: methodological and theoretical issues in case study research. International Journal of Social Research Methodology, 6(1), 1-23.

Sabat, S.R., Johnson, A., Swarbrick, C., & Keady, J. (2011). The ‘demented other’ or simply ‘a person’? Extending the philosophical discourse of Naue and Kroll through the situated self. Nursing Philosophy, 12(4), 282-292.

Saldaña, J. (2003). Longitudinal qualitative research: analyzing change through time. California: Alta Mira Press.

Thomson, R. (2010). Creating family case histories: subjects, selves and family dynamics. In Thomson, R. (Ed.) Intensity and insight: qualitative longitudinal methods as a route to the psycho-social. Timescapes Working Paper Series No.3.
