I have a problem: I can focus intently on reading for hours, but the subject matter tends to get more esoteric as I go. All sorts of tangentially related articles become fascinating, and then suddenly I find myself lost: only a few mouse-clicks away from my original topic, but miles away from what I need to focus on.
Today’s warning signs: when you follow business strategy articles from one reference to the next, and they start citing articles by Raymond Kurzweil and Freeman Dyson, you have left the realm you were meant to be exploring.
That’s the fascinating thing about researching internet use: ten years ago, most of the things I’m now looking at were science fiction.
There’s a familiar routine when moving to a new desk: turn on computer, load up Internet Explorer for just long enough to download a copy of Firefox. Install the Zotero and Delicious add-ons that allow me to function at work. Puzzle over how I’m supposed to access printers in the new building.
Zotero has a Group Libraries function, which I used to set up RMIT-SmartServicesCRC (the library is the easiest way to find content, as the main page seems to list each individual file and record as it’s added). The name is a bit misleading at the moment, as there are only two of us using the group so far… others have occasionally looked interested, and then gone back to wrestling with their EndNote libraries.
Today I decided to take Zotero up on their file storage offer: instead of syncing just the bibliographic data, you can host complete copies of your reference library on their servers. I ran into a slight hitch when I found that there were existing files stored in the group: as the group owner, those files come out of my storage allotment, which meant that a bunch of them wouldn’t fit into my free 100MB limit. Oops.
I’ve now purchased 1GB of storage space, so that ought to hold everything for the foreseeable future of this project. And after a few minutes of syncing files, I can now access all my PDFs and website snapshots from the office computer. No more “file not found” error messages – the whole library is now synchronised between all the computers that I work on. I think it’s fantastic 🙂
Now, if only getting printer access was this easy…
Today’s QIG topic continued the theme of “rapid qualitative research” from last month: how can you perform “good” qualitative research quickly, instead of immersing yourself for months (or years) while themes and theories emerge from the mass of data?
There were some interesting stories shared, from people with experience working on project timelines unheard of in a typical academic setting: market research consultancies with two-week turnarounds; international aid assessments that generate reports a month after hitting the ground in a new country. This is a quick braindump of things I found interesting during the meeting:
‘Quick’ projects aren’t always as quick as they look. Consultancies that take two weeks to go from project brief to final report don’t always start cold: the brief will usually be written by someone who knows exactly what they are looking for (and how to find it), and the researchers may well be spending years working in that space. These reports can generate some fascinating insights into behaviours and motivations, but those insights may be shaped by a career of research in related areas.
If you want quick data analysis, you need to design your research to accommodate this. Work out what types of data you need to answer the research questions. Are they opinions? Behaviours? Reactions? The more you can structure your research during the experiment design phase, the less time is needed to untangle it during the analysis phase.
Analyse during data collection. While conducting an interview, be aware of the major themes that you want to address – it helps you know when to ask more detailed questions, and you can link together important themes. It also makes it much easier to find related items in a transcript. “Tell me about X” might be a great way of generating a heap of data. “Tell me about how X was affected by A, B and C” may be much more effective at answering your research questions.
Some types of research are going to be slower than others. If you’re testing a hypothesis: great! Previous work should tell you what to look for and what to ask about, and the hypothesis will either be supported or not. If you want to develop grounded theory, you’ll need time.
Quick work is rarely done alone. Academic qualitative studies are usually done solo. The relationship between these is no coincidence… Generally, quick research involves data analysis by a group of researchers, often immediately before and after gathering data in the field. Doing the whole lot solo is unlikely to let you work through things quickly.
I’m a little behind on updates, as it’s been a hell of a busy week.
On Monday 23/11, I attended a full-day workshop on NVivo 8. It’s a piece of software used to help analyse qualitative data: things like interviews, images, audio and video. 12-month licences are available for the desktop PCs at work, but it’s unfortunately not available on a Mac (short of running Windows through Boot Camp).
I’m impressed with what I’ve seen so far. It’s extremely flexible, and should work for a range of different research methodologies – though some have claimed that it’s best for grounded theory work.
Any data (usually text from a transcript, or notes made for a specific part of a media file) can be tagged with one or more “nodes” – thematic keywords, essentially working like the many different highlighters and sticky notes that I’ve been using on my hardcopies. Nodes can be “free” or use a tree-structure if you want to break a concept down into smaller parts, and they can be (and usually are) modified frequently during the analysis.
You can choose from a few different visual representations of your nodes – my favourite is a series of coloured stripes running down the side of the screen, level with the relevant parts of the transcript. The first stripe is a greyscale bar showing the density of tags applied to each section (darker grey = more keywords), which gives a very quick way of seeing which areas were most valuable, or which haven’t had much attention paid to them yet. The other bands of colour highlight specific nodes: the five most frequently used, or any custom selection.
You can search through nodes to quickly find all the quotes (linked back to their original sources) that have been assigned to a particular node. This is handy when looking for a couple of additional quotes for a paper – it saves leafing through all the pages looking for annotations.
More complex queries are also possible: “what statements about technological aptitude were made by people over the age of 45?” or “did anyone talk about trust or security when they discussed using an unfamiliar website?”
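To make those queries concrete, here’s a minimal sketch in plain Python (NVivo does this through its own query builder; the participant attributes and node names below are invented examples):

```python
# Toy illustration of combining participant attributes with node coding.
# All speakers, ages, nodes and quotes here are made up for the example.
segments = [
    {"speaker": "P1", "age": 52, "nodes": {"technological aptitude"},
     "text": "I still get my daughter to set these things up."},
    {"speaker": "P2", "age": 34, "nodes": {"technological aptitude", "trust"},
     "text": "I'll try anything once."},
    {"speaker": "P3", "age": 61, "nodes": {"trust", "security"},
     "text": "I won't use a site unless it looks secure."},
]

# "What statements about technological aptitude came from people over 45?"
over_45 = [s["text"] for s in segments
           if s["age"] > 45 and "technological aptitude" in s["nodes"]]

# "Did anyone talk about trust or security?" (set intersection on nodes)
trust_or_security = [s["speaker"] for s in segments
                     if s["nodes"] & {"trust", "security"}]

print(over_45)
print(trust_or_security)
```

The point is that once the coding is done, these cross-cutting questions reduce to simple filters over the tagged segments, rather than another pass through the raw transcripts.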
Above all else, it’s a powerful way of managing the large amounts of data that my interviews will generate. I’d like to learn how to use its more advanced functions, but even if I just use it as a management tool, it will be worth the investment of time in discovering how to set it up.