Advances in computer technology have made researchers’ lives an awful lot easier.
When I started working on my PhD in 2013, having not really written academically for 10 years, I was delighted to find that programs like Mendeley and EndNote had made child's play of the once time-consuming and painstaking task of referencing. Evernote allowed me to make notes when I had sudden inspiration on the go: on the train, in the park or on the school run. I could then access the quick notes I made on my phone or tablet on my PC later. I soon created a Twitter account that meant I could network quickly and easily with other scholars with similar interests, get quick-fire updates on conferences I couldn't attend and live-tweet about those I could.
Computer technology has also revolutionised the way I approach qualitative analysis. Now, I love stationery. Spending the day lining up my coloured highlighters and lighting up my data set sounds pretty close to a perfect day at 'work'. But when you have a data set of over 200,000 words, the prospect of doing a traditional pen-and-paper qualitative analysis quickly becomes unappealing.
It’s no surprise, then, that programs for storing and analysing data have become so popular. Finding the right software for the job isn’t always as straightforward as it first seems, though, and I’ve found it’s particularly problematic when you’re analysing social media data. The topic has come up a few times in offline and online conversation within the BAAL Language and New Media group. I know that quite a few of my fellow scholars have used Gephi to visualise online networks. Another colleague has recently adopted Birmingham City’s Quick XML editor to tag a dataset, create codes and visualise data.
My research involves analysing threads from an online discussion forum. I was looking for a tool that would enable me to store threads in their original format, so that the idiosyncrasies of interaction in the online context would not be lost. I'm also adopting the principles of grounded theory, so I needed a platform that would make coding, annotating and memo writing easy, and that would help me link all of these strands of my analysis. Why struggle under a pile of papers when I could harness the power of computers to store, sort, match and link data? QSR International's NVivo software seemed well suited to my needs, and plenty of others were putting it to good use.
But how good is it? There's plenty of literature on the use of NVivo for qualitative data analysis. This literature is very useful once you're committed to using the program, but what it doesn't provide is an honest review of NVivo, warts and all. So here's my effort.
In many ways, I found NVivo to be an invaluable tool for my purposes. It is certainly well matched to the demands of a grounded theory approach. After a few hiccups while I got used to the program, I found that I was able to move through my data quickly, creating and adding to ‘nodes’ (the name for codes in NVivo) as I went. I also found it fairly easy to revise, review and modify my nodes and coding. With a click, I could bring up all the references within a node, merge, delete and reorganise them. I also found it particularly helpful to be able to bring up nodes or threads alongside each other for easy comparison.
There are a few things I wish I'd known before I committed myself to NVivo, though; if I had, I might have searched a little harder for the 'perfect' qualitative analysis software. NVivo is well known for being able to cope with multimedia data. It can handle PDFs, pages from the internet, videos, audio and images, as well as straightforward text files. I was a little disappointed, then, to find that it didn't always deal well with my discussion forum data. The threads I captured using NVivo's 'NCapture' tool didn't always transfer perfectly; the last line of long posts was often lost. And though the visuals and unconventional text features, such as images, smilies and strikethrough text, looked fine when I brought up the thread, coding them was a different matter. When I viewed my coded excerpts within nodes, they had been reduced to a simple text format, erasing all the colourful nuances that are such an important part of communication in this medium. NVivo also split my longer threads into artificial pages, and then struggled to code extracts that travelled across these pages.
I had a few other problems with NVivo that aren't specific to dealing with social media data. For example, I found it difficult to review coding at a glance. With the traditional highlighter-and-paper technique (which I have reverted to with a few key threads), it's easy to see very quickly which codes have been used in a particular extract. The 'coding stripes' function in NVivo aims to imitate this manual technique, but it's not nearly as effective, placing the stripes on a vertical slant that makes it difficult to see how they correspond with the text. The 'highlight' function marks all coded text, but doesn't distinguish between the different nodes.
Ultimately, these are all problems that I have managed to overcome, to differing degrees. And I’m not sure there’s actually any software out there that could have done a better job (if you’ve found something, I’d love to hear about it!). There’s no doubt that using NVivo has allowed me to manage the qualitative analysis of a relatively large amount of data with relative ease. I think there’s still a long way to go, though, before it matches the subtlety and flexibility that can be gained from a researcher in a room with some data and a pack of coloured pens.