Guest blog #11: Dr Rebecca Taylor: The challenges of computer assisted data analysis for distributed research teams working on large qualitative projects

Our guest post today is by Rebecca Taylor, Lecturer in Sociology at the University of Southampton. Her research focuses on conceptualising work, particularly unpaid forms of work, understanding individuals’ working lives and careers, and work in different organisations and sectors. She has over 10 years’ experience of conducting qualitative longitudinal research on studies such as Inventing Adulthoods, Minority Ethnic Outreach Evaluation and Real Times at the Third Sector Research Centre.

Her current project, Supporting employee-driven innovation in the healthcare sector, with colleagues Alison Fuller, Susan Halford and Kate Lyle, is a qualitative ethnography of three health service innovations involving multiple data sources. The research is funded by the ESRC through the LLAKES Centre for Research in Learning and Life Chances based at UCL Institute of Education, University College London.

In this post, Rebecca considers three possible ways of overcoming the challenges of conducting large-scale qualitative longitudinal analysis in geographically distributed research teams, and the possibilities, and indeed limitations, offered by computer assisted data analysis software.

The challenges of computer assisted data analysis for distributed research teams working on large qualitative projects

Academics, like many other groups of workers in the digital economy, often find themselves working in geographically distributed teams spanning multiple locations connected by increasingly sophisticated digital technologies. Teleconferencing tools like Skype, cloud-based file storage and hosting services such as Google Docs and Dropbox, and project planning tools such as Trello enable groups of researchers to meet, talk, write, share and edit documents, plan, manage and conduct research, and even analyse data despite their separate locations.

If you are a researcher involved in large-scale qualitative studies, such as qualitative longitudinal research (QLR), where projects can potentially span decades and short-term contracts mean that researchers move between institutions, it is highly likely that you will, at some point, be operating in a distributed research team working across institutions, geographical locations and maybe even time zones. QLR in particular tends to amplify the challenges and opportunities of other qualitative methodologies (see e.g. Thomson and Holland 2003); the difficulties of managing multiple cases over multiple waves in terms of storage, labelling and retrieval are even more demanding when carried out remotely. In fact, any large data set creates challenges for a distributed team. Providing access to data across institutions necessitates organising access rights and often the use of a VPN (Virtual Private Network). Cloud-based collaboration solutions may lack institutional technical support and the required level of data security, raising legal and ethical problems for the storage of non-anonymised transcripts, observation notes and other documents.
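One low-tech defence against the labelling and retrieval problems is a strict file-naming convention that every team member follows across waves. As a minimal sketch, assuming a hypothetical convention of the form CASE<id>_W<wave>_<doctype>.<extension> (e.g. CASE03_W2_interview.docx for participant 3, wave 2), a few lines of Python can flag mislabelled files before they circulate around the team:

import re
from pathlib import Path

# Hypothetical convention: CASE<id>_W<wave>_<doctype>.<extension>
PATTERN = re.compile(r"^CASE\d{2}_W\d_[A-Za-z]+\.\w+$")

def check_filenames(folder):
    """Return the names of files that break the naming convention."""
    return [f.name for f in Path(folder).iterdir()
            if f.is_file() and not PATTERN.match(f.name)]

# Example: flag mislabelled transcripts before sharing them.
# print(check_filenames("wave2_transcripts"))

None of this removes the need for a shared data-management protocol, but it makes breaches of the protocol visible early.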

These issues are all in play when it comes to analysing a geographically distributed team’s data. The overwhelming array of CAQDAS (Computer Assisted Qualitative Data Analysis Software) packages offers a wealth of functionality for managing and manipulating qualitative data but is less helpful when it comes to facilitating distributed team working. Our recent experiences as a research team spread across two institutions, with members also working mainly from home, provide a useful case study of the issues. As we looked at the CAQDAS packages currently available it became apparent that our options were dependent on where the software was situated – locally, institutionally, or in the cloud:

Option A: Working locally

This traditional model involved packages (such as NVivo or MAXQDA) installed on individual computers so that all team members worked on their own local version of the project. For the team to work together on the data and see everyone’s coding and new transcripts, researchers all had to send their projects to a team member who would merge them together and redistribute a new master copy of the project. In a distributed team, this meant finding a way to regularly transfer large project files safely, securely and easily between team members, with all the attendant hazards of version control and file management. The size of project files and the security issues around cloud-based storage ruled out the more straightforward options like email or Dropbox, and the remote desktop route made any sort of data transfer brain-numbingly complicated because there was no way to move documents between the home computer and the remote desktop. We had one option for data transfer – a University of Southampton download service for large files which used high levels of encryption.
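Even with a secure transfer service, version control remained a hazard: with large project files passing back and forth, it is easy to lose track of which copy is current or whether a download arrived intact. As a minimal sketch (the file names and log path here are hypothetical), a checksum log kept by whoever manages the merge gives a simple audit trail – the sender records a checksum before upload and the receiver recomputes it after download:

import hashlib
from datetime import datetime
from pathlib import Path

def sha256_of(path, chunk_size=1 << 20):
    """Compute a SHA-256 checksum, reading the file in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def log_version(project_file, log_file="versions.log"):
    """Record a timestamped checksum for a project file before sending it."""
    checksum = sha256_of(project_file)
    stamp = datetime.now().isoformat(timespec="seconds")
    with open(log_file, "a") as log:
        log.write(f"{stamp}\t{Path(project_file).name}\t{checksum}\n")
    return checksum

# Hypothetical usage before uploading the merged master copy:
# print(log_version("project_master_2016-03-01.nvp"))

If the two checksums match, the file survived the transfer; if they differ, someone is about to start coding a corrupted master copy.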

Option B: Working institutionally

This model made use of server-based packages which stored the data centrally, such as NVivo Server (‘NVivo for Teams’ with V11), enabling team members to work on the project simultaneously using an institutional local area network (LAN). In the case of NVivo Server this removed the need for a regular, time-consuming merge process. However, for those members of the team at other institutions or not working on campus it required using remote desktop solutions, which were slow and unwieldy and made file transfers (for example when importing a new transcript into the software) difficult. We worried about this process given the software’s reputation for stability issues when used with a potentially intermittent network connection. More importantly, it required a different type of institutional software licence, which was an expense we had not budgeted for and implied considerable delay as we negotiated with the university about purchase and technical support.

Option C: Working in the cloud

Thinking more creatively about the problem, we looked at online (and thus not institutionally located) packages such as US-based Dedoose (try saying that with an American accent – it makes more sense), designed to facilitate team-based qualitative and mixed-methods data analysis. We could, it seemed, all work online on the same project from any PC or laptop in any location without the need to merge or transfer projects and documents – were all our problems solved? Sadly not. Consultation with IT services in our own institutions revealed that such sites used cloud storage in the US and were therefore deemed insecure – we would be compromising our data security and thus our ethical contract. So we were back to square one, or in our case Option A, the old-school model: a laborious and time-consuming (but ultimately secure) way of working, with individual projects on our individual desktops and regular (or not so regular) transfers and merges.

It’s worked OK – we are now writing our third journal article. Yet as the funding ended and we lost our brilliant Research Fellow to another short-term contract, we have tended towards more individualised analysis; the merge process has largely fizzled out as no one has time to manage it, and the software serves primarily as a data management tool. It is clear that in the contemporary HE landscape of intensification and metricisation of research, the tools for distributed team working need to be super-effective and easy to use; they need to make collaborative qualitative analysis straightforward and rewarding irrespective of the geographical location of individual team members. Distributed working arrangements are certainly not going away.

References

Thomson, R. and Holland, J. (2003) Hindsight, foresight and insight: The challenges of qualitative longitudinal research, International Journal of Social Research Methodology, 6(3): 233-244.
