Mar 07

Welcome to the project

Welcome to the ‘Working Across Qualitative Longitudinal Studies’ blog. Here you will be able to follow our research process, the team discussions (and debates) and our findings as they evolve. We are very aware that accounts of qualitative research – especially the process of data management and analysis – are often sanitised by the time they reach academic journals. We also know that qualitative longitudinal research (QLR) has the capacity to add new layers of complexity to qualitative data analysis. This not only comes from the volume of data that QLR can generate, but also the disciplinary requirement to engage with temporality in all its forms across and between data sources.

One of the core elements of our project is to contribute to good practice in analysing large scale QLR, both within and across projects. It is incredibly important that our own analytical processes are not kept in a ‘black box’, shrouded in mystery. Rather, over the next two years, the project team (Emma, Lynn, Ros and Susie) will share our own struggles, deliberations and successes. We hope that by doing this we can enrich the landscape of QLR, and support other researchers as they journey through their own projects.

This blog will also include the experiences of others exploring the challenging world of qualitative longitudinal research. With guest posts from early career researchers through to international experts, on topics as varied as the ethics of using big qualitative data, the use of secondary qualitative data, and computer-assisted qualitative data analysis software, we will profile the diversity of QLR taking place in the UK and beyond.

If you would like to write for us as a guest contributor, please email us at qlr@ncrm.ac.uk to discuss.

Dec 04

Guest Post #13: Prof Bren Neale: Research Data as Documents of Life


Bren Neale is Emeritus Professor of Life course and Family Research (University of Leeds, School of Sociology and Social Policy, UK) and a fellow of the Academy of Social Sciences (elected in 2010). Bren is a leading expert in Qualitative Longitudinal (QL) research methodology and provides training for new and established researchers throughout the UK and internationally.

Bren specialises in research on the dynamics of family life and inter-generational relationships, and has published widely in this field. From 2007 to 2012 she directed the Economic and Social Research Council-funded Timescapes Initiative (www.timescapes.leeds.ac.uk), as part of which she advanced QL methods across academia, and in Government and NGO settings. Since completing the ESRC funded ‘Following Young Fathers Study’ (www.followingfathers.leeds.ac.uk) Bren has taken up a consultancy to design and support the delivery of a World Health Organisation study that is tracking the introduction of a new malaria vaccine in sub-Saharan Africa. 

In this post, Bren draws on her extensive expertise as Director of the Timescapes Initiative, along with reflections from her forthcoming book ‘What is Qualitative Longitudinal Research?’ (Bloomsbury 2018), to consider the diverse forms of archival data that may be re-used or re-purposed in qualitative longitudinal work. In so doing, Bren outlines the possibilities for, and progress made in, developing ways of working with and across assemblages of archived materials to capture social and temporal processes.

 Research Data as Documents of Life

Among the varied sources of data that underpin Qualitative Longitudinal (QL) studies, documentary and archival sources have been relatively neglected. This is despite their potential to shed valuable light on temporal processes. These data sources form part of a larger corpus of materials that Plummer (2001) engagingly describes as ‘documents of life’:

“The world is crammed full of human personal documents. People keep diaries, send letters, make quilts, take photos, dash off memos, compose auto/biographies, construct websites, scrawl graffiti, publish their memoirs, write letters, compose CVs, leave suicide notes, film video diaries, inscribe memorials on tombstones, shoot films, paint pictures, make tapes and try to record their personal dreams. All of these expressions of personal life are hurled out into the world by the millions, and can be of interest to anyone who cares to seek them out” (p. 17).

To take one example, letters have long provided a rich source of insight into unfolding lives. In their classic study of Polish migration, conducted in the first decades of the twentieth century, Thomas and Znaniecki (1958 [1918-20]) analysed the letters of Polish migrants to the US (an opportunistic source, for a rich collection of such letters was thrown out of a Chicago window and landed at Znaniecki’s feet). Similarly, Stanley’s (2013) study of the history of race and apartheid was based on an analysis of three collections of letters written by white South Africans spanning a 200-year period (1770s to 1970s). The documentary treasure trove outlined by Plummer also includes articles in popular books, magazines and newsprint; text messages, emails and interactive websites; the rich holdings of public record offices; and confidential and often revealing documents held in organisations and institutions. Social biographers and oral historians are adept at teasing out a variety of such evidence to piece together a composite picture of lives and times; they are ‘jackdaws’ rather than methodological purists (Thompson 1981: 290).

Among the many forms of documentary data that may be repurposed by researchers, social science and humanities datasets have significant value. The growth in the use of such legacy data over recent decades has been fuelled by the enthusiasm and commitment of researchers who wish to preserve their datasets for historical use. Further impetus has come from the development of data infrastructures and funding initiatives to support this process, and a fledgling corpus of literature that is documenting and refining methodologies for re-use (e.g. Corti, Witzel and Bishop 2005; Crow and Edwards 2012; Irwin 2013). Alongside the potential to draw on individual datasets, there is a growing interest in working across datasets, bringing together data that can build new insights across varied social or historical contexts (e.g. Irwin, Bornat and Winterton 2012; and indeed the project on which this website is founded).

Many qualitative datasets remain in the stewardship of the original researchers where they are at risk of being lost to posterity (although they may be fortuitously rediscovered, O’Connor and Goodwin 2012). However, the culture of archiving and preserving legacy data through institutional, specialist or national repositories is fast becoming established (Bishop and Kuula-Luumi 2017). These facilities are scattered across the UK (for example, the Kirklees Sound Archive in West Yorkshire, which houses oral history interviews on the wool textile industry (Bornat 2013)). The principal collections in the UK are held at the UK Data Archive (which includes the classic ‘Qualidata’ collection); the British Library Sound Archive, NIQA (the Northern Ireland Qualitative Archive, including the ARK resource); the recently developed Timescapes Archive (an institutional repository at the University of Leeds, which specialises in Qualitative Longitudinal datasets); and the Mass Observation Archive, a resource which, for many decades, has commissioned and curated contemporary accounts from a panel of volunteer recorders. International resources include the Irish Qualitative Data Archive, the Murray Research Center Archive (Harvard), and a range of data facilities at varying levels of development across mainland Europe (Neale and Bishop 2010-11).

In recent years some vigorous debates have ensued about the ethical and epistemological foundations for reusing qualitative datasets. In the main, the issues have revolved around data ownership and researcher reputations; the ethics of confidentiality and consent for longer-term use; the nature of disciplinary boundaries; and the tension between realist understandings of data (as something that is simply ‘out there’), and a narrowly constructivist view that data are non-transferable because they are jointly produced and their meaning tied to the context of their production.

These debates are becoming less polarised over time. In part this is due to a growing awareness that most of these issues are not unique to the secondary use of datasets (or documentary sources more generally) but impact also on their primary use, and indeed how they are generated in the first place. In particular, epistemological debates about the status and veracity of qualitative research data are beginning to shift ground (see, for example, Mauthner et al 1998 and Mauthner and Parry 2013). Research data are by no means simply ‘out there’ for they are inevitably constructed and re-constructed in different social, spatial and historical contexts; indeed, they are transformed historically simply through the passage of time (Moore 2007). But this does not mean that the narratives they contain are ‘made up’ or that they have no integrity or value across different contexts (Hammersley 2010; Bornat 2013). It does suggest, however, that data sources are capable of more than one interpretation, and that their meaning and salience emerge in the moment of their use:

“There is no a-priori privileged moment in time in which we can gain a deeper, more profound, truer insight, than in any other moment. … There is never a single authorised reading … It is the multiple viewpoints, taken together, which are the most illuminating” (Brockmeier and Riessman, cited in Andrews 2008: 89-90).

Moreover, whether revisiting data involves stepping into the shoes of an earlier self, or of someone else entirely, this seems to have little bearing on the interpretive process. From this point of view, the distinctions between using and re-using data, or between primary and secondary analysis begin to break down (Moore 2007; Neale 2013).

This is nowhere more apparent than in Qualitative Longitudinal enquiry, where the transformative potential of data is part and parcel of the enterprise. Since data are used and re-used over the longitudinal frame of a study, their re-generation is a continual process. The production of new data as a study progresses inevitably reconfigures and re-contextualises the dataset as a whole, creating new assemblages of data and opening up new insights from a different temporal standpoint. Indeed, since longitudinal datasets may well outlive their original research questions, it is inevitable that researchers will need to ask new questions of old data (Elder and Taylor 2009).

The status and veracity of research data, then, is not a black and white, either/or issue, but one of recognising the limitations and partial vision of all data sources, and the need to appraise the degree of ‘fit’ and contextual understanding that can be achieved and maintained (Hammersley 2010; Duncan 2012; Irwin 2013). This, in turn, has implications for how a dataset is crafted and contextualised for future use (Neale 2013).

A decade ago, debates about the use of qualitative datasets were in danger of becoming polarised (Moore 2007). However, the pre-occupations of researchers are beginning to move on. The concern with whether or not qualitative datasets should be used is giving way to a more productive concern with how they should be used, not least, how best to work with their inherent temporality. Overall, the ‘jackdaw’ approach to re-purposing documentary and archival sources of data is the very stuff of historical sociology and of social history more generally (Kynaston 2005; Bornat 2008; McLeod and Thomson 2009), and it has huge and perhaps untapped potential in Qualitative Longitudinal research.

References

Andrews, M. (2008) ‘Never the last word: Revisiting data’, in M. Andrews, C. Squire and M. Tamboukou (eds.) Doing narrative research, London, Sage, 86-101

Bishop, L. and Kuula-Luumi, A. (2017) ‘Revisiting Qualitative Data reuse: A decade on’, SAGE Open, Jan-March, 1-15.

Bornat, J. (2008) Crossing boundaries with secondary analysis: Implications for archived oral history data, Paper given at the ESRC National Council for Research Methods Network for Methodological Innovation: Theory, Methods and Ethics across Disciplines, 19 September 2008, University of Essex.

Bornat, J. (2013) ‘Secondary analysis in reflection: Some experiences of re-use from an oral history perspective’, Families, Relationships and Societies, 2, 2, 309-17

Corti, L., Witzel, A. and Bishop, L. (2005) (eds.) Secondary analysis of qualitative data: Special issue, Forum: Qualitative Social Research, 6, 1.

Crow, G. and Edwards, R. (2012) (eds.) ‘Editorial Introduction: Perspectives on working with archived textual and visual material in social research’, International Journal of Social Research Methodology, 15, 4, 259-262.

Duncan, S. (2012) ‘Using elderly data theoretically: Personal life in 1949/50 and individualisation theory’, International Journal of Social Research Methodology, 15, 4, 311-319.

Elder, G. and Taylor, M. (2009) ‘Linking research questions to data archives’, in J. Giele and G. Elder (eds.) The Craft of Life course research, New York, Guilford Press,  93-116.

Hammersley, M. (2010) ‘Can we re-use qualitative data via secondary analysis? Notes on some terminological and substantive issues’, Sociological Research Online, 15, 1, 5.

Irwin, S. (2013) ‘Qualitative secondary analysis in practice: Introduction’, in S. Irwin and J. Bornat (eds.) Qualitative secondary analysis (Open Space), Families, Relationships and Societies, 2, 2, 285-8.

Irwin, S., Bornat, J. and Winterton, M. (2012) ‘Timescapes secondary analysis: Comparison, context and working across datasets’, Qualitative Research, 12, 1, 66-80

Kynaston, D. (2005) ‘The uses of sociology for real-time history’, Forum: Qualitative Social Research, 6, 1.

McLeod, J. and Thomson, R. (2009) Researching Social Change: Qualitative approaches, Sage.

Mauthner, N., Parry, O. and Backett-Milburn, K. (1998) ‘The data are out there, or are they? Implications for archiving and revisiting qualitative data’, Sociology, 32,4, 733-745.

Mauthner, N. and Parry, O. (2013). ‘Open Access Digital Data Sharing: Principles, policies and practices’, Social Epistemology, 27, 1, 47-67.

Moore, N. (2007) ‘(Re) using qualitative data?’ Sociological Research Online, 12, 3, 1.

Neale, B. (2013) ‘Adding time into the mix: Stakeholder ethics in qualitative longitudinal research’, Methodological Innovations Online, 8, 2, 6-20.

Neale, B. and Bishop, L. (2010-11) ‘Qualitative and qualitative longitudinal resources in Europe: Mapping the field’, IASSIST Quarterly: Special double issue, 34 (3-4); 35 (1-2).

O’Connor, H. and Goodwin, J. (2012) ‘Revisiting Norbert Elias’s sociology of community: Learning from the Leicester re-studies’, The Sociological Review, 60, 476-497.

Plummer, K. (2001) Documents of Life 2: An invitation to a critical humanism, London, Sage.

Stanley, L. (2013) ‘Whites writing: Letters and documents of life in a QLR project’, in L. Stanley (ed.) Documents of life revisited, London, Routledge, 59-76.

Thomas, W. I. and Znaniecki, F. (1958) [1918-20] The Polish Peasant in Europe and America Volumes I and II, New York, Dover Publications.

Thompson, P. (1981) ‘Life histories and the analysis of social change’, in. D. Bertaux (ed.) Biography and society: the life history approach in the social sciences, London, Sage, 289-306.


Nov 08

Guest Post #12: Dr Sian Lincoln and Dr Brady Robards, Facebook timelines: Young people’s growing up narratives online

Sian Lincoln (Liverpool John Moores University) and Brady Robards (Monash University) contribute today’s insightful post. Sian, Reader in Communication, Media and Youth Culture, has interests in contemporary youth and youth cultures, social network sites and identity, and ethnography. Brady, a Lecturer in Sociology, has interests in the use of social media and methods involving social media.

In this post, Sian and Brady draw on their study ‘Facebook Timelines’, which explores the role of social media in mediating and archiving ‘growing up’ narratives. They provide a fascinating example of analyzing longitudinal digital traces by working with participants as co-analysts, encouraging them to ‘scroll back’ and interpret their own personal archives. On this subject, they have authored ‘Uncovering longitudinal life narratives: Scrolling back on Facebook’ and ‘Editing the project of the self: Sustained Facebook use and growing up online’.


Project

In 2014 Facebook celebrated its tenth birthday. To mark this first decade, we edited a special issue of New Media & Society that reflected on the extent to which the site had become embedded into the everyday lives of its users. It was also evident at this point that there was now a generation of young people who had literally ‘grown up’ using the site. This prompted us to design a new research project, and a new research method in the process. Facebook Timelines is a qualitative study with young people in their twenties who joined the site in their early teens. We were particularly interested in this age group because they had used Facebook throughout their teens and many found themselves at a ‘crossroads’ moment in their lives, when they were beginning to think seriously about post-education working life and ‘professional identity’. Using a combination of qualitative interviewing, time-lining and the ‘scroll back method’, we worked with 40 young people to find out how they (and their friends) had disclosed their ‘growing up’ experiences on the site. In this respect, the Facebook Timeline (also known as the profile) was used as a ‘prompt’, and the years upon years of disclosures on the site acted as ‘cues’ for what often became elaborate and in-depth stories of teenage life.

One of our core interests here was how ‘growing up’ stories are recorded and made visible on social media. Given Facebook’s longevity, it has become a digital archive of life for many – a longitudinal digital trace. We wanted to interrogate this further by working with our participants as co-analysts of their own digital traces. How do young people make sense of these longitudinal digital traces? How do these traces persist and re-surface, years later, as people grow up and enter into new stages of their lives?

Time-lining: going back to pencil and paper

What key or critical moments have you experienced in your teenage years, since joining Facebook? Because the teenage years are a period of turbulence and change, we were keen to ask this question and to explore what our participants perceived to be the important, life-defining events or rites of passage that had come to define them. A simple print-out of a timeline enabled our participants to consider this question and to map out those moments as they remembered them. These included going to high school, leaving school, getting a part-time job, going to a first gig, family weddings, births and deaths, going into full-time employment, going to university, the beginning and end of relationships, and all manner of other important moments. Our participants were then invited to log into their Facebook profile using a laptop, tablet or phone, depending on their preference, to consider how the moments they recalled ‘mapped onto’ their Facebook Timeline.

The scroll back method

At this point, our participants were asked to ‘scroll back’ to their very first post on the site. It was common for them to have an emotional response to early disclosures, embarrassment being the most typical. For us, this was interesting because their response acted as the first ‘marker of growing up’ they encountered in the ‘scroll back’, and represented a form of self-reflexivity and self-realisation. Their responses were also physical: covering the eyes, a slight wince, even turning away from the screen when confronted with a younger self and evidence of a digital trace dating back some years. Consider a 24-year-old confronting their 16-year-old self, as mediated on Facebook. Once the ‘scroll back’ began, participants clicked chronologically through their years of disclosures, opening up year after year of their Facebook archive and narrating and describing the content. This method proved empowering for participants, as it placed them in control of which moments they wished to talk about and which they did not; which to discuss and which to pass over. However, because of the sheer amount of content – much of which was forgotten (particularly the earlier material) – there was a danger that participants would be confronted with challenging, difficult moments from their past, at which point they were asked whether they wished to continue. Often they did, seeing this as a ‘therapeutic moment’ to reflect with hindsight on the event. Some saw it as an important life moment, and thus it remained in their Timeline.

Importantly, we recruited our participants not just as interviewees or ‘subjects’ of observation. We worked with our participants as co-analysts of their own digital traces. Having our participants sign-in to their own Facebook accounts and scroll back through their Timeline profiles in front of us allowed us to see their Facebook histories ‘from their perspective’. If we were to analyse these digital traces without the involvement of the participants themselves, we’d be limited in multiple ways: first, in terms of what we could actually see, but second – and for us, more importantly – in terms of the stories that certain disclosures prompted. Often, our participants would be able to ‘fill in the blanks’ or provide crucial context and explanation for in-jokes, vague status updates, or obscure images that we alone would have had little capacity to fully understand. Thus, our analysis here really hinged on the involvement and insight of our participants themselves.

Scroll back and narratives of growing up

The Facebook Timelines project has clearly underlined the significance of Facebook in the lives of young people in their twenties as a key platform for sharing their everyday life experiences. While some participants claim to be ‘partial’ Facebook users today amidst broader claims of ‘Facebook fatigue’ and a more complicated ‘polymedia’ environment including Instagram, Snapchat, dating and hook-up apps, and so on, scrolling back through participants’ Timelines has affirmed just how embedded and central Facebook is in their lives. Further, their changes in use from ‘intense’ to more silent (but still present) ‘disuse’ tell us much about their growing up, with claims to being ‘more mature’ equating to disclosing less. Additionally, the amount of ‘memory work’ the site is doing on their behalf (so many forgotten moments were unveiled through ‘scroll back’) makes getting rid of Facebook for good almost an impossibility.

Facebook Timelines offer immense opportunities for longitudinal researchers; however, the depth of many profiles certainly presents analytical challenges, as these are not profiles that were created for a research project. For us, as we mention above, ‘analysis’ of the Timelines was embedded into the scroll back method from the start, with participants analyzing their own digital traces as a core part of the research process. Drawing on Thomson and Holland (2003), we then considered the data ‘cross-sectionally in order to identify discourses through which identity is constructed, and longitudinally at the development of a particular narrative over time’ (2003: 236). We did this with the participants as they scrolled back, then cross-referenced these discourses with other participants by analyzing the interview transcripts using the themes defined by our participants (for example, relationships, travel and education). Overall, we felt this approach gave our participants a genuine sense that they had witnessed, unfolded and given voice to a self-narrative of their growing up on Facebook.


References:

  • Thomson, R. and Holland, J. (2003) Hindsight, foresight and insight: the challenges of longitudinal qualitative research. International Journal of Social Research Methodology, 6(3): 233-244.


Sep 18

Guest blog #11: Dr Rebecca Taylor: The challenges of computer assisted data analysis for distributed research teams working on large qualitative projects

Our guest post today is by Rebecca Taylor, Lecturer in Sociology at the University of Southampton. Her research focuses on conceptualising work, particularly unpaid forms of work, understanding individuals’ working lives and careers, and work in different organisations and sectors. She has over 10 years’ experience of conducting qualitative longitudinal research on studies such as Inventing Adulthoods, Minority Ethnic Outreach Evaluation and Real Times at the Third Sector Research Centre.

Her current project, Supporting employee-driven innovation in the healthcare sector, with colleagues Alison Fuller, Susan Halford and Kate Lyle, is a qualitative ethnography of three health service innovations involving multiple data sources. The research is funded by the ESRC through the LLAKES Centre for Research in Learning and Life Chances based at UCL Institute of Education, University College London.

In this post, Rebecca considers three ways of overcoming the challenges of conducting large-scale qualitative longitudinal analysis in geographically distributed research teams, and the possibilities, and indeed limitations, offered by computer-assisted data analysis software.

The challenges of computer assisted data analysis for distributed research teams working on large qualitative projects

Academics, like many other groups of workers in the digital economy, often find themselves working in geographically distributed teams spanning multiple locations connected by increasingly sophisticated digital technologies. Teleconferencing tools like Skype, cloud-based file storage/hosting services such as Google Docs and Dropbox, and project planning tools such as Trello enable groups of researchers to meet, talk, write, share and edit documents, plan, manage and conduct research, and even analyse data despite their separate locations.

If you are a researcher involved in large-scale qualitative studies, such as qualitative longitudinal research (QLR), where projects can potentially span decades and short-term contracts mean that researchers move between institutions, it is highly likely that you will, at some point, be operating in a distributed research team working across institutions, geographical locations and maybe even time zones. QLR in particular tends to amplify the challenges and opportunities of other qualitative methodologies (see e.g. Thomson and Holland 2003); the difficulties of managing multiple cases over multiple waves in terms of storage, labelling and retrieval are even more demanding when carried out remotely. In fact, any large dataset creates challenges for a distributed team. Providing access to data across institutions necessitates organising access rights and often the use of a VPN (Virtual Private Network). Cloud-based collaboration solutions may lack institutional technical support and the required level of data security, raising legal and ethical problems for the storage of non-anonymised transcripts, observation notes and other documents.
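To illustrate the kind of labelling discipline that managing multiple cases over multiple waves demands, the sketch below shows one possible naming convention in which case number, wave number and document type are encoded in each filename, so that files sort consistently and any team member can retrieve, say, all wave-2 interviews. This is a minimal, hypothetical scheme of our own devising, not one used by the project described in this post.

```python
import re

def label_file(case_id: int, wave: int, doc_type: str) -> str:
    """Build a consistent, sortable filename for one data item.

    Zero-padding keeps plain alphabetical sorting in case/wave order,
    e.g. case 7, wave 2 interview -> 'C007_W02_interview.docx'.
    """
    return f"C{case_id:03d}_W{wave:02d}_{doc_type}.docx"

# Pattern for recovering the labels from an existing filename.
LABEL_RE = re.compile(r"C(\d{3})_W(\d{2})_(\w+)\.docx")

def parse_label(filename: str):
    """Recover (case_id, wave, doc_type) from a labelled filename."""
    m = LABEL_RE.fullmatch(filename)
    if m is None:
        raise ValueError(f"unrecognised filename: {filename}")
    return int(m.group(1)), int(m.group(2)), m.group(3)
```

A scheme like this is no substitute for a proper data management plan, but it makes retrieval scriptable: a simple loop over `parse_label` can pull out every transcript from a given wave without anyone opening folders by hand.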

These issues are all in play when it comes to analysing a geographically distributed team’s data. The overwhelming array of CAQDAS (Computer Assisted Qualitative Data Analysis Software) packages offers rich functionality for managing and manipulating qualitative data, but the packages are less helpful when it comes to facilitating distributed team working. Our recent experiences as a research team spread across two institutions, with members also working mainly from home, provide a useful case study of the issues. As we looked at the CAQDAS packages currently available, it became apparent that our options depended on where the software was situated – locally, institutionally, or in the cloud:

Option A: Working locally

This traditional model involved packages (such as NVivo or MAXQDA) installed on individual computers, so that all team members worked on their own local version of the project. For the team to work together on the data and see everyone’s coding and new transcripts, researchers all had to send their projects to a team member who would merge them together and redistribute a new master copy of the project. In a distributed team, this meant finding a way to regularly transfer large project files safely, securely and easily between team members, with all the attendant hazards of version control and file management. The size of project files and the security issues around cloud-based storage ruled out the more straightforward options like email or Dropbox, and the remote desktop route made any sort of data transfer brain-numbingly complicated because there was no way to move documents between the home computer and the remote desktop. We had one option for data transfer – a University of Southampton download service for large files which used high levels of encryption.

Option B: Working institutionally

This model made use of server-based packages which store the data centrally, such as NVivo Server (‘NVivo for Teams’ with V11), enabling team members to work on the project simultaneously over an institutional local area network (LAN). In the case of NVivo Server, this removed the need for a regular, time-consuming merge process. However, for those members of the team at other institutions or not working on campus, it required remote desktop solutions which were slow and unwieldy and made file transfers (for example, when importing a new transcript into the software) difficult. We worried about this process given the software’s reputation for stability issues when used with a potentially intermittent network connection. More importantly, it required a different type of institutional software licence, which was an expense we had not budgeted for and would have meant considerable delay while we negotiated with the university about purchase and technical support.

Option C: Working in the cloud

Thinking more creatively about the problem, we looked at online (and thus not institutionally located) packages such as US-based Dedoose (try saying that with an American accent – it makes more sense), designed to facilitate team-based qualitative and mixed-methods data analysis. We could, it seemed, all work online on the same project from any PC or laptop in any location, without the need to merge or transfer projects and documents. Were all our problems solved? Sadly not. Consultation with IT services in our own institutions revealed that such sites used cloud storage in the US and were therefore deemed insecure – we would be compromising our data security and thus our ethical contract. So we were back to square one – or, in our case, Option A, the old-school model: a laborious and time-consuming (but ultimately secure) way of working, with individual projects on our individual desktops and regular (or not so regular) transfers and merges.

It has worked OK – we are now writing our third journal article. Yet as the funding ended and we lost our brilliant Research Fellow to another short-term contract, we have tended towards more individualised analysis, the merge process has largely fizzled out as no one has time to manage it, and the software serves primarily as a data management tool. It is clear that in the contemporary HE landscape of intensification and metricisation of research, the tools for distributed team working need to be super-effective and easy to use; they need to make collaborative qualitative analysis straightforward and rewarding irrespective of the geographical location of individual team members. Distributed working arrangements are certainly not going away.

References

Thomson, R. and Holland, J. (2003) Hindsight, foresight and insight: The challenges of qualitative longitudinal research, International Journal of Social Research Methodology, 6(3): 233-244.

Jun 26

Guest blog # 10: Dr Georgia Philip: Working with qualitative longitudinal data

Georgia Philip, a Senior Research Associate in the School of Social Work at the University of East Anglia, writes today’s insightful post. Georgia has expertise in the areas of fathers, gender and care; qualitative and feminist research; the feminist ethics of care; and parenting interventions and family policy.

In this post, Georgia reflects on the challenges of managing the volume and depth of data generated in a qualitative longitudinal analysis of men’s experiences of the UK child protection system. The study was conducted with colleagues John Clifton and Marian Brandon.


Working with qualitative longitudinal data

For the past two years I have worked with colleagues John Clifton & Marian Brandon on a qualitative longitudinal (QL) study of men’s experiences of the UK child protection system.

Alongside the twists and turns of the research relationships developed with our participants and the conceptual work involved in presenting their accounts, we have also encountered practical challenges of managing the volume and depth of data generated. This post briefly identifies some of these challenges, and our responses to them.

Our QL study involved 35 men who were fathers or father figures to a child with a newly made child protection plan, recruited between April and August 2015, and taking part for a period of 12 months. The study consisted of two in-depth interviews, at the start and end of the study period, and (approximately) monthly phone contacts with each man. Twenty-eight men participated for the full 12 months. We took a holistic approach, looking back at men’s histories, relationships, fathering experiences and any past encounters with welfare agencies, and then accompanying them forward, into the current encounter with child protection and its impact on their lives.

Fatherhood

Our overall approach to the analysis was inductive and iterative, drawing on existing QL methodological literature (Neale, Henwood & Holland, 2012). It also engaged us in thinking about ‘time’ in theoretical and methodological terms: as a concept that shapes how lives are lived, narrated and imagined, and as a resource for examining a significant local authority process. Our practical approach to the management of the high volume of data was a combination of pre-emptive and responsive strategies. Three challenges we encountered were: how to analyse across and within our sample; how to facilitate data sharing across the research team; and how to combine analysis of men’s lives and of the child protection system in a coherent way.

Early on, we decided to use NVivo Frameworks as a mechanism for managing the data (NatCen, 2014; Ritchie et al., 2014), and we constructed a matrix to record aspects of men’s lives and of the unfolding child protection process. This enabled us to collate and analyse data from the outset, rather than separating (and delaying) analysis from data collection. It also established a process for organising the data using the ‘case and wave’ approach adopted in other QL studies (Hughes and Emmel, 2012; Thomson, 2007): looking across the sample by time wave (we divided our 12 months into four three-month periods) and within it at each man’s individual ‘case’. However, whilst NVivo allowed us to develop a way of structuring our analysis, it did not, in practice, facilitate a reliable way of collaborating across the research team.

As the researchers, John and I each had a group of men and an accumulating data set that we ‘knew’ better. This meant we needed to develop ways of sharing cases and checking our developing analysis, to build an integrated and credible understanding of the sample as a whole. We found that working independently on, and then trying to merge, copies of our NVivo project just wasn’t viable, and the project files were unstable. Therefore we had to devise, or revert to, other strategies for managing this.

We continued using our original matrix to summarise data over the four time waves and to help compile the individual case studies, but did this in Word, sharing via a secure drive on the University network. We met monthly as a full team to discuss and compare our analysis, understand the developing cumulative picture, and review the ongoing process of data gathering. We also came to make extensive use of memo writing as a particularly useful means of condensing data, exploring pertinent issues within it, and discussing these with each other. We then took the decision that John and I would each take the lead in analysing one of the two main domains of the data: men’s encounter with the child protection process, and their wider lives as fathers. This ensured that we both had to fully consider all participants’ data and actively collaborate on integrating our work as part of the later, conceptual stages of the analysis.

This project has been intensely demanding and satisfying, at every stage. Finding ways of coping with rich, accumulating data, generated with increasing momentum as research relationships develop, has been just one of these demands. Being committed to an inductive approach, which does justice to the men’s own accounts, whilst also generating a coherent conceptual explanation and meaningful practice messages for social workers, is another. What we have offered here is a tiny glimpse into some of the practical strategies for meeting such multiple demands, which we hope may be useful for other researchers new to QL research.

Our full report will be available from the Centre for Research on Children and Families, from September 2017.

Fathers Research Summary

References:

NatCen (2014) Frameworks in NVIVO manual: Step by step guide to setting up Framework matrices in NVIVO. London: NatCen Social Research.

Neale, B., Henwood, K. and Holland, J. (2012) Researching lives through time: an introduction to the Timescapes approach, Qualitative Research, 12(1): 4-15.

Ritchie, J., Lewis, J., McNaughton Nicholls, C. and Ormston, R. (2014) Qualitative research practice: A guide for social science students and researchers. London: Sage.

Thomson, R. (2007) The qualitative longitudinal case history: practical, methodological and ethical reflections, Social Policy and Society, 6(4): 571-582.


Mar 29

Guest blog # 9: Virginia Morrow: The ethics of secondary data analysis

We are excited to have a blog this week by Ginny Morrow, Deputy Director of Young Lives. This is an incredible study of childhood poverty which, over the last 15 years, has followed the lives of 12,000 children in Ethiopia, India (in the states of Andhra Pradesh and Telangana), Peru and Vietnam. The aim of Young Lives is to illuminate the drivers and impacts of child poverty, and generate evidence to help policymakers design programmes that make a real difference to poor children and their families.

In this post Ginny reflects on the ethical responsibilities of researchers sharing secondary data.

The ethics of secondary data analysis – respecting communities in research

For the past 10 years, I have been involved with Young Lives, a longitudinal study of children growing up in Ethiopia, India, Peru and Vietnam, which has been an amazing experience and a great privilege. As well as being Deputy Director since 2011, I have been ‘embedded’ in Young Lives as the ethics lead – though it is vital that ethics are not the responsibility of one person, but shared across the whole team.

Young Lives encounters all kinds of ethics questions and dilemmas, and for this guest blog, I have been asked to explore the ethics of secondary data analysis. Arguments about the promises and pitfalls of archiving (qualitative) data are well-rehearsed, as outlined in discussions by Natasha Mauthner and others.

A few years ago, as part of an ESRC-funded National Centre for Research Methods node (2011-14), the Young Lives qualitative research team had a very productive and enjoyable collaboration with colleagues at TCRU in London and Sussex, Family Lives and Environments, as part of Novella (Narratives of Varied Everyday Lives and Linked Approaches), in which Young Lives qualitative data formed the basis for narrative and thematic analysis of children’s and their families’ relationships to the environment in India (Andhra Pradesh) and England (see Catharine Walker’s thesis, http://www.novella.ac.uk/about/1056.html). Based on our experiences, we produced a working paper exploring the ethics of sharing qualitative data – and we identified a number of challenges, which we hope have helped other researchers as they grapple with the demands of sharing data.


We argued that sharing data and undertaking secondary analysis can take many forms, and bring many benefits. But it can be ethically complex. One of the considerations that we discussed was responsibilities to participants and to the original researchers, and the need to achieve a contextual understanding of the data by identifying and countering risks of misinterpretation. We highlighted the importance of developing and maintaining trusting relationships between research participants, primary and secondary researchers.

Novella involved a team of qualitative researchers, and we did not fully discuss the ethics of secondary analysis of survey data, bar touching on questions of informed consent. But one of the questions that I have long been concerned about, based on experiences at Young Lives of seeing research based on our publicly archived survey data used in ways very far from the intentions of our study (which is to explore childhood poverty over time), is the following: how do the people we study and write about feel about the interpretation and use we make of their data? Might they object to how their data are used, and how they are represented in research findings and other media dissemination?

So I was fascinated to learn about the EU-funded project, entitled TRUST, that has led to the generation of the San Code of Research Ethics, launched by the South African San Institute a couple of weeks ago (this video gives a great insight to the project).

The San Code of Ethics calls for respect, honesty, justice and fairness, and care – and asks that the San Council, which represents the San Community, is involved in research from inception, design, through to approval of the project, and subsequent publications. The San are not the only indigenous people to create codes of ethics demanding they are fairly respected in research, and the impetus for this initiative has come from genomics research, but the points about respect are relevant for all research. Two points are worthy of much more attention in research ethics:

  1. Failure by researchers to meet their promises to provide feedback, which the San Council say they have encountered frequently, and which they see as an example of disrespect; and
  2. ‘A lack of honesty in many instances in the past. Researchers have deviated from the stated purpose of research, failed to honour a promise to show the San the research prior to publication, and published a biased paper based upon leading questions given to young San trainees’

The technicalities of all of this may be challenging, but demand our attention, so that open, honest, and continuous communication can take place, and the hurt caused by lack of justice, fairness and respect can be avoided in the future.

References 

Mauthner, NS. (2016). Should data sharing be regulated? In A Hamilton & WC van den Hoonaard (eds), The Ethics Rupture: Exploring alternatives to formal research-ethics review. University of Toronto Press, pp. 206-229.

Feb 06

Guest blog # 8: Dr Sarah Wilson: Using qualitative secondary analysis as a tool of critical reflexivity

Our guest post today is by Sarah Wilson, a Senior Lecturer in Sociology in the School of Applied Social Science at the University of Stirling. Sarah’s research interests are in the sociology of families, relationships and personal life, with a methodological focus on developing visual, audial and artistic qualitative research. In this post, Sarah reflects on her qualitative secondary analysis of data from the Timescapes ‘Siblings and Friends’ project, a longitudinal dataset with which we are also working, and how this process prompted reflection on her own research practices.

This post draws on Sarah’s 2014 article in Sociological Research Online, ‘Using secondary analysis to maintain a critically reflexive approach to qualitative research’ which you can read here: http://www.socresonline.org.uk/19/3/21.html


 Using qualitative secondary analysis as a tool of critical reflexivity

Maintaining a critical, reflexive approach to research when engaging in specialised work is not easy. Partly because of the need to convince funders of their expertise, researchers often focus on relatively circumscribed areas of inquiry, with samples drawn from particular social groups.

My own research has focused on samples characterised as ‘vulnerable’; notably young people affected by parental substance misuse or living ‘in care’. Often this work has been located within more ‘applied’ approaches to social research, and influenced by funders’ concerns. Such work is valuable. However, the segregation often maintained between research with young people from more ‘vulnerable’ and more ‘ordinary’ backgrounds may reinforce perceptions that the experiences, values and aspirations of members of each ‘category’ are distinct. As Law (2009) argues, research is ‘performative’, helping to re-produce and reinforce perceptions of social groups. In the current political context, such distinctions may even implicitly reinforce the stigmatisation of ‘troubled families’. As such, there is a need to find ways to subject one’s own research practice to scrutiny.

To better situate my previous research, I engaged in qualitative secondary analysis of the longitudinal Timescapes ‘Siblings and Friends’ (SAF) study to prepare for a new project with ‘looked after’ young people: Young people creating belonging: spaces, sounds and sights (ESRC RES-061-25-0501). The idea was to reflect on my own approaches, and previous framings of interview questions in the light of the very rich SAF project data which involved predominantly ‘ordinary’ young people from across the UK. This proved to be an illuminating, if demanding, process that prompted further thought about both projects.

Importantly, this analysis highlighted significant commonalities between the experience of those included in ‘ordinary’ and ‘vulnerable’ samples. Notably, the SAF data included several accounts of strained family relationships, of parental mental ill-health and of undesirable housing conditions that suggested family circumstances comparable to those in my previous work on parental substance misuse. However, the SAF interview questions situated violence outside of the home. As Gillies (2000) argues, even where ‘difficult’ accounts within ‘ordinary’ samples are identified, they are often not written up. As such, the complexity and pain within ‘ordinary’ families may be under-estimated in research, and potentially more easily obscured within political discourse. Similarly, the everyday ambiguity and minor conflicts associated with ‘ordinary’ siblings and parents sharing limited space may be downplayed.

Such ambiguities and tensions led several SAF respondents to seek out friends’ homes, or private corners of their own, to escape from family life at least for a time. I had previously associated such strategies with young people affected by parental substance use, many of whom often spent time at friends’ houses. However, this analysis suggested a more nuanced understanding of the importance to the latter group of employing strategies that could be presented as ‘ordinary’ teenage practices.

The process of secondary analysis also highlighted uncomfortable omissions from my previous research in which, for various reasons, greater emphasis was placed on the respondents’ own potential substance use than on their school work and employment aspirations. The predominance of such concerns in the SAF accounts led me to worry that my own research had reflected and performed perceptions of education as less important to ‘vulnerable’ than to ‘ordinary’ young people.

In conclusion, qualitative secondary analysis is a ‘labour-intensive, time-consuming process’ that Gillies and Edwards (2005: para24) compare to primary data collection. However, it presents a useful tool to subject assumptions built up over a specialised research career to scrutiny.


References 

Gillies, V. (2000) ‘Young people and family life: analysing and comparing disciplinary discourses’, Journal of Youth Studies, 3(2): 211-228

Gillies, V. and Edwards, R. (2005) ‘Secondary analysis in exploring family and social change: addressing the issue of context’, Forum: Qualitative Social Research, 6(1): art 44.

Law, J. (2009), ‘Assembling the World by Survey: Performativity and Politics’, Cultural Sociology, 3, 2, 239-256.

Wilson, S. (2014) ‘Using secondary analysis to maintain a critically reflexive approach to qualitative research’ Sociological Research Online, 19(3), 21 http://www.socresonline.org.uk/19/3/21.html


Jan 16

Guest post #7, Dr Gregor Wiedemann: Computer-assisted text analysis beyond words

Dr Gregor Wiedemann works in the Natural Language Processing Group at Leipzig University. He studied Political Science and Computer Science in Leipzig and Miami. In his research, he develops methods and workflows of text mining for applications in social sciences. In September 2016, he published the book “Text Mining for Qualitative Data Analysis in the Social Sciences: A Study on Democratic Discourse in Germany” (Springer VS, ISBN 978-3-658-15309-0).

In this blog, he discusses computational textual analysis and the opportunities it presents for qualitative research and researchers. 

Computer-assisted text analysis beyond words

In our digital era, the amount of textual data is growing rapidly. Unlike traditional data acquisition in qualitative analysis, such as conducting interviews, texts from (online) news articles, user commentaries or social network posts are usually not generated directly for the purpose of research. This huge pool of new data provides interesting material for analysis, but it also challenges qualitative research to open up to new methods. Some of these were introduced in blog post #4, where computer-assisted text analysis using Wordsmith and Wordstat was discussed as a means of allowing an ‘aerial view’ of the data, e.g. by comparative keyword analysis.

Despite the long history of computer-assisted text analysis, it has remained a parallel development with little interaction with qualitative analysis. Methods of lexicometric analysis, such as the extraction of key words, collocations or frequency analysis, usually operate on the level of single words. Unfortunately, as Benjamin Schmidt phrased it, “words are frustrating entities to study. Although higher order entities like concepts are all ultimately constituted through words, no word or group can easily stand in for any of them” (2012). Since qualitative studies are interested in the production of meaning, of what is said and how, there certainly are overlaps with lexicometric measures, but nonetheless their research subjects appear somewhat incompatible. Observation of words alone, without respect to their local context, appears a rough simplification compared to a hermeneutic close reading and interpretation of a text passage.

The field of natural language processing (NLP), from the discipline of computer science, provides a huge variety of (semi-)automatic approaches for large-scale text analysis, and has only slowly been discovered by social scientists and other qualitative researchers. Many of these text mining methods operate on semantics beyond the level of isolated words, and are therefore much more compatible with established methods of qualitative text analysis. Topic models, for instance, allow for automatic extraction of word and document clusters in large document collections (Blei 2012). Since topics represent measures of latent semantic meaning, they can be interpreted qualitatively and utilised for quantitative thematic analysis of document collections at the same time. Text classification, as a method of supervised machine learning, provides techniques even closer to established manual analysis approaches. It allows for automatic coding of documents, or parts of documents such as paragraphs, sentences or phrases, on the basis of manually labelled training sets. The classifier learns features from hand-coded text, where coding is carried out analogously to conventional content analysis. The classifier model can be seen as a ‘naïve coder’ who has learned characteristics of language expressions representative of a specific interpretation of meaning of a text passage. This ‘naïve coder’ is then able to process and code thousands of new texts, which explicitly opens the qualitative analysis of categories up to quantification.
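As a toy illustration of the ‘naïve coder’ idea, the sketch below trains a small Naive Bayes text classifier from scratch on a handful of hand-coded sentences and applies it to a new one. The sentences, the binary ‘demarcation’ category and the code itself are invented for illustration; they are not drawn from the study’s actual coding scheme or pipeline.

```python
# Minimal sketch of supervised text classification (the 'naive coder'),
# using a multinomial Naive Bayes classifier built from scratch.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(texts, labels):
    """Learn per-class word counts from hand-coded examples (binary labels)."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter(labels)
    for text, label in zip(texts, labels):
        counts[label].update(tokenize(text))
    vocab = set(counts[0]) | set(counts[1])
    return counts, priors, vocab

def classify(text, counts, priors, vocab):
    """Assign the label with the highest smoothed log-probability."""
    scores = {}
    total = sum(priors.values())
    for label in priors:
        score = math.log(priors[label] / total)
        n_words = sum(counts[label].values())
        for word in tokenize(text):
            # add-one smoothing so unseen words do not zero out the score
            score += math.log((counts[label][word] + 1) / (n_words + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Hand-coded training sentences: 1 = expresses demarcation, 0 = does not
train_texts = [
    "the party needs to be banned",
    "these protests pose a threat to our democracy",
    "the committee met to discuss the budget",
    "the weather was mild this spring",
]
train_labels = [1, 1, 0, 0]

counts, priors, vocab = train(train_texts, train_labels)
# The trained 'naive coder' can now code new, unseen sentences automatically
label = classify("the protests are a threat to democracy", counts, priors, vocab)
# label == 1 (demarcation) for this toy example
```

In practice a classifier like this would be trained on thousands of hand-coded sentences and evaluated against held-out human coding before being trusted to code a corpus; the tiny example only shows the mechanics.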

In my dissertation study on the discourse of democratic demarcation in Germany (Wiedemann 2016), I utilised methods of text mining in an integrated, systematic analysis on more than 600,000 newspaper documents covering a time period of more than six decades. Among others, I tracked categories of left-wing and right-wing demarcation in the public discourse over time. Categories were operationalised as sentences expressing demarcation against, or a demand for, exclusion of left-/right-wing political actors or ideologies from the legitimate political spectrum (e.g. “The fascist National Democratic Party needs to be banned” or “The communist protests in Berlin pose a serious threat to our democracy”). Using automatic text classification, I was able to measure the distribution of such qualitatively defined categories in different newspapers between 1950 and 2011. As an example, the following figure shows relative frequencies of documents containing demarcation statements in the German newspaper, the Frankfurter Allgemeine Zeitung (FAZ).
The distribution indicates that, for a long time, demarcation towards left-wing actors and ideology exceeded right-wing demarcation. Soon after 1990, the latter became the primary subject of discourse on threats to German democracy. The enormous benefit of automatic classification is that it allows for easy comparison across publications (e.g. other newspapers) or relations with any other category. For instance, the distribution of “reassurance of democratic identity”, a third category I measured, strongly correlates with right-wing demarcation, but not with left-wing demarcation. Such a finding can be realised only by a combination of the qualitative and the quantitative paradigm.
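The distribution-over-time measure described above can be sketched in a few lines, assuming a hypothetical list of documents that have already been coded (the years, codes and counts below are invented for illustration, not the study’s data):

```python
# Sketch: relative frequency of documents coded with a category, per year
from collections import defaultdict

documents = [
    {"year": 1950, "codes": {"left_demarcation"}},
    {"year": 1950, "codes": set()},
    {"year": 1991, "codes": {"right_demarcation"}},
    {"year": 1991, "codes": {"right_demarcation", "left_demarcation"}},
]

totals = defaultdict(int)  # documents per year
hits = defaultdict(int)    # documents carrying the category, per year
for doc in documents:
    totals[doc["year"]] += 1
    if "right_demarcation" in doc["codes"]:
        hits[doc["year"]] += 1

rel_freq = {year: hits[year] / totals[year] for year in totals}
# rel_freq == {1950: 0.0, 1991: 1.0}
```

Plotting such per-year frequencies for each coded category is what yields time-series comparisons like the one in the figure.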

While computer-assisted methods clearly support qualitative researchers in their task of retrieving “what” is being said in large data sets, they certainly have limitations on the more interpretive task of reconstructing “how” something is said, i.e. the characterisation of how meaning is produced. It is an exciting future task of qualitative research to determine how today’s state-of-the-art NLP methods may contribute to this requirement. In this respect, computational analysis extends the toolbox for qualitative researchers by complementing their well-established methods. These methods offer conventional approaches new chances for reproducible research designs and opportunities to open up to “big data” (Wiedemann 2013). Currently, actors in the emerging field of “data science” are a major driving force in computational textual analysis for social science questions. Since I repeatedly observe a lack of basic methodological and theoretical knowledge of qualitative research in this field, I look forward to a closer interdisciplinary integration of the two.

Further reading

Blei, David M. 2012. “Probabilistic topic models: Surveying a suite of algorithms that offer a solution to managing large document archives.” Communications of the ACM 55 (4): 77–84.

Schmidt, Benjamin M. 2012. “Words alone: dismantling topic models in the humanities.” Journal of Digital Humanities 2 (1). Url http://journalofdigitalhumanities.org/2-1/words-alone-by-benjamin-m-schmidt.

Wiedemann, Gregor. 2013. “Opening up to Big Data. Computer-Assisted Analysis of Textual Data in Social Sciences.” Historical Social Research 38 (4): 332-357.

Wiedemann, Gregor. 2016. Text Mining for Qualitative Data Analysis in the Social Sciences: A Study on Democratic Discourse in Germany. Wiesbaden: Springer VS, Url: http://link.springer.com/book/10.1007%2F978-3-658-07224-7.

Dec 14

Guest post #6, Nick Emmel: Revisiting yesterday’s data today

Today we welcome Dr Nick Emmel as our guest blogger. Nick has been investigating social exclusion and vulnerability in low-income communities in a city in northern England since 1999. The research discussed in this blog, Intergenerational Exchange, was an investigation of the care grandparents provide for their children. This was a part of Timescapes, the ESRC’s qualitative longitudinal research initiative. More details of this research are available at http://www.timescapes.leeds.ac.uk/research/intergenerational-exchange.html.

In this thought provoking post, Nick reflects on his experiences of revisiting qualitative data, and the ways in which new interpretations and explanations are generated over time. 


Revisiting yesterday’s data today 

I have recently finished writing a paper about vulnerability. This is the third in an ongoing series of published papers; the first published in 2010 and the second in 2014 (Emmel and Hughes, 2010; 2014; Emmel, 2017). Each elaborates and extends a model of vulnerability. All three are based on the same data collected in a qualitative longitudinal research project, Intergenerational Exchange, a part of Timescapes and its archive. The second and third papers also draw on newly collected data from subsequent research projects. In this blog I want to explore how interpretation and explanation are reconstituted and reconceived through engagement with these new data and theory, considering some methodological lessons in the context of qualitative longitudinal research.

At first sight the narratives about poverty, social exclusion, and the experiences of grandparenting told to us by Bob and Diane, Ruth, Sheila, Geoff and Margaret, and Lynn, which populate these three papers, seem fixed, even immutable. After all, I am still using the same printed transcripts from interviews conducted between 2007 and 2011, marked up with a marginalia of memos and codes in my micrographia handwriting, text emphasised with single and double underlines in black ink. But each time I get these transcripts out of the locked filing cabinet in my office I learn something new.

To start with there are the misremembered memories of what is actually in the transcripts. Many of the stories our participants tell, Geoff and Margaret’s account of the midnight drop, Sheila bathing her kids in the washing machine, or Lynn walking into the family court for the first time, I have retold over and over again. In their retelling details have been elaborated, twisted, and reworked to make better stories so my students, service deliverers, and policy makers will think a little harder, I hope, about powerlessness, constrained powerfulness, and ways in which excluded people depend on undependable service delivery. In this way they are no different to the original stories, neither truth nor untruth, but narrated for a purpose, to describe experience in qualitative research. Getting the detail and emphasis right is important. The participants know their lived experience far better than I do. Re-reading the transcripts, these stories are reattached to their empirical moorings once again. But this is only the start of their reanalysis.

Rereading may confirm empirical description but past interpretations are unsettled by new empirical accounts. New knowledge has the effect, as Barbara Adam (1990:143) observes, of making the ‘past as revocable and hypothetical as the future’.  In the most recent of the three papers the apparently foundational role of poverty elaborated in our first paper is reinterpreted. New data from relatively affluent grandparents describe the barriers they face in accessing services and the ways in which these experiences make them vulnerable. This knowledge has the effect of reconstituting the original transcripts, shifting attention away from the determining role of poverty to relationships with service providers in which poverty may play a generative part. These data evoke new interpretations. But it is not only new empirical accounts that reshape this longitudinal engagement, new ideas are at play.

In this blog I have suggested that new empirical accounts change how we understand and interpret existing data. To ascribe reinterpretation only to these insights is not enough, however. Explanations rely on more than reconstructing empirical accounts in the light of new insight. For a realist like me, theories guide the reading of the original transcripts and the collection of new data. Theories are practical things, bundles of hypotheses to be judged and refined empirically. We started with a theory about time as a chronological progression of events, as is explained in the first paper. Our participants noticed little difference as recession merged into recession, all the way back to the closure of the estate’s main employer in 1984. This theory was found wanting when we came to look at young grandparenthood and engagement with service provision in the second paper. A refined theoretical account of the social conscience of generational and institutional time supported explanation. These theories, like the empirical accounts of the social world they are brought into relation with, are revocable and only ever relatively enduring.

To paraphrase the Greek philosopher Heraclitus, no researcher ever steps into the same river twice, for it is not the same river and it is not the same researcher. Revisiting yesterday’s data today reminds us of these methodological lessons in qualitative longitudinal research.

References

Adam, B (1990) Time and social theory Polity Press, Cambridge.

Emmel, N. (2017) Empowerment in the relational longitudinal space of vulnerability. Social Policy and Society. July.

Emmel, N. & Hughes, K. (2010) “‘Recession, it’s all the same to us son’: the longitudinal experience (1999-2010) of deprivation”, 21st Century Society, vol. 5, no. 2, pp. 171-182.

Emmel, N. & Hughes, K. (2014) “Vulnerability, inter-generational exchange, and the conscience of generations,” in Understanding Families over Time: Research and Policy, Holland J & Edwards R, eds., Palgrave, Basingstoke.

Image source: Fosco Lucarelli ( https://www.flickr.com/photos/fosco/3915752142/).


Dec 03

Research team blog 6: Getting out of the swamp

annaDear friends,

We have been working with Dr Anna Tarrant during the course of our project (Anna was our first guest blogger – read again here). Anna’s research, ‘Men, Poverty and Lifetimes of Care’, is funded by the Leverhulme Trust and University of Leeds and is exploring change and continuities in the care responsibilities of men who are living on a low-income. Like our project, Anna is drawing on data from the Timescapes research programme, including Following Young Fathers and Intergenerational Exchange.

Anna has a great new article out in which she looks at how the secondary analysis of thematically related qualitative longitudinal (QL) datasets might be used productively in qualitative research design.

The article abstract is below, as is a link to the full text. Happy reading!

Anna Tarrant (2016): ‘Getting out of the swamp? Methodological reflections on using qualitative secondary analysis to develop research design’, International Journal of Social Research Methodology, DOI: 10.1080/13645579.2016.1257678

In recent years, the possibilities and pitfalls of qualitative secondary analysis have been the subject of on-going academic debate, contextualised by the growing availability of qualitative data in digital archives and the increasing interest of funding councils in the value of data re-use. This article contributes to, and extends these methodological discussions, through a critical consideration of how the secondary analysis of thematically related qualitative longitudinal (QL) datasets might be utilised productively in qualitative research design. It outlines the re-use of two datasets available in the Timescapes Archive, that were analysed to develop a primary empirical project exploring processes of continuity and change in the context of men’s care responsibilities in low-income families. As well as outlining the process as an exemplar, key affordances and challenges of the approach are considered. Particular emphasis is placed on how a structured exploration of existing QL datasets can enhance research design in studies where there is limited published evidence.

Nov 14

Research team blog 5: Time in Timescapes

It is obvious to state that time is the most important aspect of qualitative longitudinal research, since it affords a rich insight into the phenomena being studied as they evolve. Yet throughout our project, time has been one of the most difficult aspects of the data on which to get an analytical ‘grip’.

Time in Timescapes

Time matters – yet its presence is complex, fluid and intersectional. These many dimensions, or layers, of time are captured in our data archive. They include biologically defined life cycle stages (ageing and developmental change), family and kinship groups (aligned vertically through time), age cohorts (aligned horizontally through time), and socially / culturally defined categories, sequences or events (such as becoming a parent).

Time is a narrated aspect of the texture of social life. Our data show this intersection between time and space, with participants variously describing ‘time’ as something that can be in short supply or in demand and, within the context of work and family lives, a source of negotiation, stress and, at times, conflict. Time can also be part of the more abstract notion of ‘being there’, where time spent together provides the basis through which caring and intimate relationships are created and sustained.

Time is also historical. The projects themselves have a temporal identity, as an archive of a particular epoch and of the particular socio-economic contexts in which individual lives were unfolding. At a further level, time frames the research process, and does so differently across the six projects for which we have data. Each was conducted in broadly the same historical period, yet they captured time in different ‘waves’ and using different methods (from life history / biographical interviews, through to daily diaries and ‘day in the life’ observations).

How time matters, and how we can keep it in the foreground of our analysis, will be an ongoing source of reflection for our project. To help us make sense of some of this messiness we have begun to ‘map’ time in Timescapes using Tiki Toki, a web-based tool for creating interactive timelines. In our timeline we have sought to capture when the participants in each study were born, the epoch in which the study was conducted, its duration and the different ‘waves’ of research. We have also sought to include key outputs from each project and any follow-on studies (such as Anna Tarrant’s ongoing work on Men, Poverty and Care). These latter aspects will be added to as the study progresses.

Of course, our portrayal of time is two-dimensional, and is in part a pragmatic effort to tidy the messiness of time. Its limitation lies in our inability to ‘map’ the social, cultural and emotional dimensions of time, and how these intersect (i.e. the emotional and practical connections within, and between, generations, or how these change or stay the same across different historical time frames). That is an aspect of time that our ongoing analysis will seek to capture.

To open our Tiki Toki, click on the image below. Please let us know what you think, and if you decide to design your own timeline, share it with us here.

[Image: Tiki Toki timeline – ‘Time in Timescapes’]