I thought I’d write about some of the tools, tricks and technology I’ve discovered and incorporated into my research. Over the last 18 months, I’ve been looking for ways to make interviewing, note taking and case study development a little easier, as there is a huge volume of information out there.
Interviews are the heart of my research project. Before heading off to an interview, I check up on the business I’m interviewing via Gist. I use it as a social media aggregator, feeding it contacts from my email accounts, Twitter and LinkedIn. Gist contacts fall into two groups: Individual and Company. I’m interested in talking to business owners and/or managers, so I tend to merge all the information about a small business into an Individual profile. The most valuable feature in Gist, as far as I’m concerned, is the handy “view dossier” link at the top of the page. It summarises recent email conversations I’ve had with that person, news results and blog/social network/Twitter updates onto a single page, and it’s only available for Individual contacts, not Companies.
I take notes and record the interview using a Livescribe pen – possibly my favourite gadget so far, as it lets me play back audio from any point in the interview by tapping the notes I was writing at the time. This took a bit of getting used to – with a normal pen I write all over the place, and often jump around on the page adding details as I go. It’s still possible to do that with the Livescribe, but it makes things confusing when you try to listen to a specific bit of the audio. Now, I just write down key words and phrases as they come up in the interview. That turns the written notes into a menu, which I use to navigate through the audio later on.
Afterwards, I write a quick summary of what happened in the interview – with my notes as a reference, but without re-listening to the audio. Something I’ve been told by many people (academics and journalists alike) is “write your notes before you go to sleep” – regardless of how late you might finish an interview. It’s a way of brain-dumping information while it’s still available, and a night’s sleep can make you lose track of the details. Mine have all ended up around 1500 words, so far. At that rate, I could actually hit my 90k wordcount on summaries alone…
Social media information:
This is an area moving much faster than I can keep up with it. Blogs like Mashable and ReadWriteWeb have been great for news, but there are new bloggers appearing all the time. I rely heavily on Twitter to find out what’s happening – hearing things on a grapevine that includes a bunch of people who are looking for the same kinds of things I’m interested in. By using lots of eyes and ears, I can draw information from a much wider range of sources than I could find on my own. Unfortunately, the volume of information has become a bit like drinking from a fire hydrant…
Twitter Lists were a start, and have helped me to cut down on the amount I’m reading – though they also cut down on the interesting content. I tend to look at lists when I’m pressed for time and want updates from a subset of people on Twitter: small business owners, close friends, or people from the #socialmelb meetup. Really interesting stuff comes from the weak ties in my network, and to find those I need access to the whole unfiltered lot, in all its raw, high-volume glory. I’d also need an extra hundred hours in each day, if I’m going to try reading the lot.
Tim Bull mentioned using paper.li as a way of quickly browsing through content (links etc) posted to Twitter. I’ve been playing with it this afternoon, and I think it’s brilliant. Every 24 hours, paper.li trawls through content posted to a hashtag, an @user/list, or a personal account (including the accounts it follows) and arranges it into a “newspaper” for you to browse through. Everything ends up in one spot. I’d never attempt to follow the @mashable/social-media list (98 very active members, watched by 7,013 people). But put it in a one-page newspaper, and it becomes much more manageable.
Inspiration strikes at the weirdest times, and in the weirdest places. I’ve lost count of the number of notes that have been hastily scribbled down into notebooks, on the backs of envelopes, or on scraps of paper that I promptly lose track of. I’ve started using Evernote to keep track of thesis-related thoughts. Typing out (or recording) a quick note and tagging it with “theory” or “case-study” means that I can find those notes when I really need them: while I’m writing up the relevant section.
If I’m not at a computer (a rare event, these days…) I generally have my iPod nearby. It also has Evernote on it, and tends to be a good way of collecting notes that will later be used at the computer. I often spot good links between articles or authors while in the library. If I’m skimming through a book and spot something that might be useful in a different bit of my thesis (particularly handy when finding theoretical models from other disciplines) I can tag it and file it for future reference.
What are some of the tricks of your trade?
Last time I wrote, I was worrying about my project – specifically, trying to get enough people to interview. That’s still a problem, but slightly less of one due to some friendly folks in the twitterverse. After posting last week’s PhD progress update, I put out a call for help on Twitter last Wednesday:
“Still looking for Melbourne small businesses to interview – esp. bars, cafes and restaurants. Info at http://bit.ly/bd8qXi“
Not a new idea, and something I’ve tried (to fairly minimal response) before. Since the last time, however, I’ve become a lot more active on Twitter – gathering contacts from the social media community, and as many Melbourne small businesses as I could track down. As it turned out, it wasn’t my immediate contacts who helped to spread the word. Instead, the message was retweeted or otherwise passed on in various forms by a group of different people, most of whom I’d never met before. Many thanks (and much internet karma) to all those who helped! You made a weary PhD student much happier about the world.
I now have a few new interviews organised, and have sent off a bunch of emails. A few friends have asked why I’m using such an easy-to-ignore medium when contacting people, instead of picking up a phone. The business world seems to love phonecalls and hate emails, while the academic world is the complete opposite. After thinking about it, I guess I have a few reasons:
Firstly, I hate getting phonecalls while I’m working. They break my concentration and usually rearrange the tasks I’d planned to work on that day. So, I try to extend the same courtesy to other people: when they are at work, I let them do their job. If “deal with enquiries from random strangers” is something they’d rather handle while going through their email, I’d much prefer to let them deal with it on their own schedule.
Secondly, cold-calling strangers is right up there with my Least Favourite Things To Do. Regardless of how the actual conversation goes, all the stress associated with it (picking a time, trying not to be intrusive etc) generally manages to ruin whichever day I have to do it on. I really can’t explain it – I’d much rather give a talk in front of an audience of strangers, undergo surgery, eat spiders or jump out of a plane…
Finally, and most importantly, I don’t want to pressure people into making decisions. Calling or rocking up to someone’s workplace forces them to decide on their participation immediately – even if that decision is “come back later after I’ve thought about it.” I want to give people a chance to hear a brief elevator pitch, read further if it interests them, and make their decision.
It’s probably costing me a lot of potential interviews – small business owners are super-busy people, and giving an hour of their time to a random PhD student can be a big thing to ask. It does mean that I value those who agree to take part in the project even more, though.
Today’s QIG topic continued the theme of “rapid qualitative research” from last month: how can you perform “good” qualitative research quickly, instead of immersing yourself in your data for months (or years) while themes and theories emerge?
There were some interesting stories shared, from people with experience working on project timelines unheard of in a typical academic setting: market research consultancies with two-week turnarounds; international aid assessments that generate reports a month after hitting the ground in a new country. This is a quick braindump of things I found interesting during the meeting:
‘Quick’ projects aren’t always as quick as they look. Consultancies that take two weeks to go from project brief to final report don’t always start cold: the brief will usually be written by someone who knows exactly what they are looking for (and how to find it), and the researchers may well be spending years working in that space. These reports can generate some fascinating insights into behaviours and motivations, but those insights may be shaped by a career of research in related areas.
If you want quick data analysis, you need to design your research to accommodate this. Work out what types of data you need to answer the research questions. Are they opinions? Behaviours? Reactions? The more you can structure your research during the experiment design phase, the less time is needed to untangle it during the analysis phase.
Analyse during data collection. While conducting an interview, be aware of the major themes that you want to address – it helps you know when to ask more detailed questions, and you can link together important themes. It also makes it much easier to find related items in a transcript. “Tell me about X” might be a great way of generating a heap of data. “Tell me about how X was affected by A, B and C” may be much more effective at answering your research questions.
Some types of research are going to be slower than others. If you’re testing a hypothesis: great! Previous work should let you know what to look for, and what to ask about. It’ll either pass or fail. If you want to develop grounded theory, you’ll need time.
Quick work is rarely done alone. Academic qualitative studies are usually done solo, and the relationship between those two facts is no coincidence… Generally, quick research involves data analysis by a group of researchers, often immediately before and after gathering data in the field. Doing the whole lot solo is unlikely to let you work through things quickly.
I’m a little behind on updates, as it’s been a hell of a busy week.
On Monday 23/11, I attended a full-day workshop on NVivo 8. It’s a piece of software used to help analyse qualitative data: things like interviews, images, audio and video. Twelve-month licences are available for the desktop PCs at work, but it’s unfortunately not available on a Mac (short of running it through Boot Camp).
I’m impressed with what I’ve seen so far. It’s extremely flexible, and should work for a range of different research methodologies – though some have claimed that it’s best for grounded theory work.
Any data (usually text from a transcript, or notes made for a specific part of a media file) can be tagged with one or more “nodes” – thematic keywords, essentially working like the many different highlighters and sticky notes that I’ve been using on my hardcopies. Nodes can be “free” or use a tree-structure if you want to break a concept down into smaller parts, and they can be (and usually are) modified frequently during the analysis.
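NVivo itself is GUI-driven rather than scriptable, but the underlying idea of nodes is easy to sketch. Here’s a rough Python model of coding passages with free and tree-structured nodes – the node names, transcript text and source name are all invented for illustration, not taken from my project or from NVivo:

```python
# Illustrative sketch of NVivo-style "nodes": thematic tags applied to
# passages of a source. Node paths like "technology/adoption" show how a
# tree node breaks a concept into smaller parts.

class CodedSource:
    """A transcript whose passages can be tagged with hierarchical nodes."""

    def __init__(self, name, text):
        self.name = name
        self.text = text
        self.codings = []  # (start, end, node_path) tuples

    def code(self, start, end, node_path):
        """Tag text[start:end] with a node, e.g. "technology/adoption"."""
        self.codings.append((start, end, node_path))

    def passages_at(self, node_path):
        """All passages tagged with node_path or any of its children,
        mirroring how a tree node aggregates its sub-nodes."""
        return [self.text[s:e] for s, e, n in self.codings
                if n == node_path or n.startswith(node_path + "/")]

interview = CodedSource("cafe-owner-01",
                        "We tried a website but nobody trusted online orders.")
interview.code(0, 18, "technology/adoption")   # free-form tagging...
interview.code(23, 51, "trust")                # ...with overlapping nodes allowed

print(interview.passages_at("technology"))  # → ['We tried a website']
```

Renaming or re-parenting a node in this model is just a change to the path strings, which matches the way nodes get reshuffled frequently during analysis.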
You can choose from a few different visual representations of your nodes – my favourite is a series of coloured stripes running down the side of the screen, level with the relevant parts of the transcript. The first stripe is a greyscale bar showing the density of tags applied to each section (darker grey = more keywords), which gives a very quick way of seeing which areas were most valuable, or which haven’t had much attention paid to them yet. Other stripes highlight specific nodes: the 5 most frequently used, or any custom selection.
You can search through nodes to quickly find all the quotes (linked back to their original sources) that have been assigned to a particular node. This is handy when looking for a couple of additional quotes for a paper – it saves leafing through all the pages looking for annotations.
It’s also possible to run more complex queries: “what statements about technological aptitude were made by people over the age of 45?” or “did anyone talk about trust or security when they discussed using an unfamiliar website?”
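To show the shape of that first kind of query – coded passages cross-referenced with participant attributes – here’s a small self-contained Python sketch. The participants, ages and quotes are entirely made up, and this is my own toy model of the concept, not anything NVivo exposes:

```python
# Cross-referencing coded quotes with participant attributes, in the
# spirit of "what did people over 45 say about technological aptitude?"
# All names, ages and quotes below are invented for illustration.

participants = {
    "p1": {"age": 52, "quotes": {"technological aptitude":
           ["I leave the computer stuff to my daughter."]}},
    "p2": {"age": 38, "quotes": {"technological aptitude":
           ["I built the website myself."]}},
}

def statements(node, min_age=0):
    """All quotes coded at `node`, from participants older than min_age."""
    return [q for p in participants.values() if p["age"] > min_age
            for q in p["quotes"].get(node, [])]

print(statements("technological aptitude", min_age=45))
# → ['I leave the computer stuff to my daughter.']
```

The point is that the filtering work happens in one query over already-coded data, rather than by re-reading every transcript – which is exactly what makes this kind of tool worth the setup cost.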
Above all else, it’s a powerful way of managing the large amounts of data that my interviews will generate. I’d like to learn how to use its more advanced functions, but even if I just use it as a management tool, it will be worth the time invested in learning how to set it up.