I’ve come back to my methodology chapter, after burying my head in papers on social capital and social network theory for the last month. I never did end up getting any feedback on what I wrote in the last draft, but the break has given me some more perspective on my own writing. In short: not entirely terrible, but there’s a hell of a long way to go before it’s decent.
One of the sections that I’ve struggled to write about is the “ethnographic” research methods that I’ve used. I’m wary about using the term, as it carries a lot of baggage – I don’t want to be jumped on by old-school anthropologists insisting that a real ethnographer needs to spend six years in a Kenyan village, and that online observations are fundamentally different…
I found a copy of Robert Kozinets’ Netnography: Doing Ethnographic Research Online by happenstance while in the library last week. The books I went there to find are more traditional resources on ethnography, but it’s been fascinating comparing them with the newer book.
In many ways, ethnography seems a good label for what I have been doing: I’ve spent the past two years embedded in the online communities that each business manager has joined, trying to understand how those communities work from the perspectives of the participants. In some of these I’m more of a traditional observer; in others I definitely participate to a larger extent. On Twitter in particular, I’m using the service myself (keeping in touch with people, contacting people before/after conferences or meetups, finding out news) while following all the different ways that each business manager is using it. Access to some communities implies participation – profiles are public, and anything that you subscribe to/like/follow/etc gives those you’re watching an opportunity to watch you right back, or start up a conversation.
Kozinets argues that there’s a need for the newer label, as some areas are significantly different between “offline” and online ethnographic research. That blurry area around what constitutes participation and observation is one. Most of your data is natively digital: transcription errors aren’t a problem when you have every word used by the participants. It also affects the volume of data that can be used: depending on the source of the data, you could well pull the entire history of interactions between your study participants. That quickly turns data collection into much less of a problem than data management: it’s easy to get absolutely buried under reams of data that may or may not actually help you answer your research questions.
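To make that collection-versus-management point concrete, here’s a toy sketch (the records, participant names and keywords are all invented for the example, not drawn from my actual data): the collecting step is trivial, and the real work is deciding which of the captured posts bear on the research questions at all.

```python
# Toy illustration: collecting digital data is easy; managing it is the bottleneck.
# All records, names and keywords below are made up for the example.
posts = [
    {"author": "manager_a", "text": "Great meetup last night, lots of new contacts"},
    {"author": "stranger", "text": "Buy cheap widgets now!!!"},
    {"author": "manager_b", "text": "Asked my network for supplier recommendations"},
]

participants = {"manager_a", "manager_b"}        # the business managers under study
keywords = {"network", "contacts", "supplier"}   # terms tied to the research questions

def relevant(post):
    """Keep only posts by study participants that touch on a research theme."""
    return (post["author"] in participants
            and any(k in post["text"].lower() for k in keywords))

sample = [p for p in posts if relevant(p)]
print(len(posts), "collected,", len(sample), "worth analysing")
```

Even this crude keyword filter makes the shape of the problem visible: the archive grows without effort, while the analysable subset has to be argued for, post by post.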
There are a few different names floating around. RMIT is starting its own centre for “Digital Ethnography.” Ethnographic online research, netnography, webnography and network ethnography have all been used as well. They may all be intended to mean similar things, but it’s difficult to tell exactly what any given author means without reading their entire study.
Much like the quip about a language being “a dialect with an army and a navy,” successful methodologies are often jargon with a marketing department. In the case of netnography, the originator is a Professor of Marketing, backed up by many MBA students… since I first mentioned the term on Twitter last week, people have started contacting me out of the blue with links to Prof. Kozinets’ presentations and course material, and the term has already got a lot of traction. Being able to concisely express a set of shared understandings (approach, methods, standards) is what good labels are all about. So I’ll most likely adopt it to describe what I’ve been doing so far: observing and learning about communities that operate largely online, by embedding myself in them over a period of months or years.
Today’s QIG topic continued the theme of “rapid qualitative research” from last month: how can you perform “good” qualitative research quickly, instead of immersing yourself in your data for months (or years) as themes and theories emerge from the mass of data?
There were some interesting stories shared, from people with experience working on project timelines unheard of in a typical academic setting: market research consultancies with two-week turnarounds; international aid assessments that generate reports a month after hitting the ground in a new country. This is a quick braindump of things I found interesting during the meeting:
‘Quick’ projects aren’t always as quick as they look. Consultancies that take two weeks to go from project brief to final report don’t always start cold: the brief will usually be written by someone who knows exactly what they are looking for (and how to find it), and the researchers may well be spending years working in that space. These reports can generate some fascinating insights into behaviours and motivations, but those insights may be shaped by a career of research in related areas.
If you want quick data analysis, you need to design your research to accommodate this. Work out what types of data you need to answer the research questions. Are they opinions? Behaviours? Reactions? The more you can structure your research during the experiment design phase, the less time is needed to untangle it during the analysis phase.
Analyse during data collection. While conducting an interview, be aware of the major themes that you want to address – it helps you know when to ask more detailed questions, and you can link together important themes. It also makes it much easier to find related items in a transcript. “Tell me about X” might be a great way of generating a heap of data. “Tell me about how X was affected by A, B and C” may be much more effective at answering your research questions.
Some types of research are going to be slower than others. If you’re testing a hypothesis: great! Previous work should let you know what to look for, and what to ask about. The hypothesis will either hold up or it won’t. If you want to develop grounded theory, you’ll need time.
Quick work is rarely done alone. Academic qualitative studies are usually done solo, and the two facts are not unrelated… Generally, quick research involves data analysis by a group of researchers, often immediately before and after gathering data in the field. Doing the whole lot solo is unlikely to let you work through things quickly.
The Qualitative Inquiry Group meets on the first Tuesday of each month, in the Accounting and Law boardroom – right next to my office. I’ve decided to start attending their meetings, though it’s causing a bit of trouble with desk space… I originally said I’d be in the uni on Wednesdays and Fridays, and my desk has been claimed during the rest of the week.
Today’s workshop was on coding qualitative data (using software like NVivo). A few people described their experiences during past or current projects, and the discussion evolved from there. It seems that there are only two real rules, as far as data coding goes:
1. You need to have (and know!) a reason for coding up the data – it’s a technique that can help to solve a problem. Otherwise it’s a rather masochistic exercise, and a great way of never finishing a thesis…
2. No two people follow the same methods, though hopefully two researchers would both use similar codes for the same bit of data.
Essentially, NVivo is designed to help analyse large amounts of qualitative data. In my case, that takes the form of interview transcripts: by the end of the research phase, I’ll hopefully have about forty interviews to analyse.
Codes are categories, topics or themes that you are interested in exploring. They aren’t always directly stated (so can’t be searched for using the transcript text) – they might be nebulous things like an interviewee’s attitudes towards management, or their motivation for making a decision.
Developing codes is an arcane art. Ideally, they “emerge” from the data – when reviewing the interviews, new themes and elements of interest are discovered. Many people start by assigning codes based on existing theories that (hopefully) explain their observations. That’s fine if you’re testing a theory, but it doesn’t work if you want to develop grounded theory of your own…
Multiple codes can be applied to any given bit of data, which is a good reason for doing this with software instead of coloured pens and post-it notes. Actually, a good interviewer should be getting statements that have multiple meanings: a statement about use of a specific technology might also allude to innovation, risk, etc.
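The idea of layering several codes on one statement, then pulling every statement that carries a given code, can be sketched in a few lines (a hypothetical structure for illustration only – this is not how NVivo actually stores its data):

```python
from collections import defaultdict

# Hypothetical coded transcript: each statement can carry several codes at once.
segments = [
    ("We tried the new platform, but the risk of losing customers scared us off.",
     {"technology-use", "risk", "attitude-to-innovation"}),
    ("My manager pushed back hard on the idea.",
     {"attitude-to-management"}),
]

# Invert the structure: from code -> every statement tagged with that code.
# This is essentially the query you run when writing up a single theme.
by_code = defaultdict(list)
for text, codes in segments:
    for code in codes:
        by_code[code].append(text)

print(by_code["risk"])
```

The pens-and-post-its equivalent of that inversion means physically re-sorting every annotated scrap of paper per theme, which is exactly why software wins once statements start carrying three or four codes each.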