Posts Tagged ‘qualitative’

The “monastic phase”

February 8, 2012

Supriya mentioned in today’s meeting that I’m entering the “monastic phase” of my project, now: the final six months, where it’s pretty much expected that every waking moment will be filled by working on or thinking about my thesis. I think it’s an apt description – though other aspects of life will keep grinding relentlessly onwards, as I don’t have the luxury of taking six months off from the world.

I’m spending today working on my data analysis, using Dedoose (which I wrote a bit about over here). Over the past few months, Louise has been using it extensively for her Master’s thesis, and now it’s my turn to get to know what it can do. So far, it’s been very easy to use: it works on my Mac, allows simple tagging/coding of text excerpts from interview transcripts, and has some nice visualisation tools for exploring the data. It’s quite laggy on my computer, though (running in Firefox 9.0.1 & OS X 10.5.8, on a fast university network). I also find that I need to use a separate mouse, as the left/right click commands sometimes don’t respond to the touchpad. Still, it’s far simpler than running a virtual Windows environment, with two different operating systems slowing everything down.

Despite needing to spend most of my time hiding under a rock, I’m going to a public talk that danah is giving at RMIT tomorrow evening – it’s called “Privacy in networked places,” and there’s a bit of info about it here. I almost missed hearing about it, as the university departments can be terrible at publicising their events. Fortunately, I’ve now discovered that we have a Digital Ethnography research centre over in Media & Communications, and will try to go to all their events in future. Hopefully I’ll be able to meet some of the ARC Creative Industries and Youth & Wellbeing CRC people while I’m at the seminar, too.

Another look at Qualitative data analysis for Mac users: Dedoose

August 17, 2011

Technically I’m not meant to be looking at data analysis right now: it’s writing month, and the last thing I need to be doing is getting lost in data. However, the Qualitative Data Analysis Software for Mac – A Brief Look post that I wrote last September is still getting a huge amount of traffic every day, and I thought it was time to write a quick update.

What’s changed? Lots. I recently discovered a tool called Dedoose. Like the other tools that I’ve come to rely upon (Zotero for managing references, Scrivener for writing), it’s not an incremental upgrade that adds bells and whistles to an existing product. I like things that are built with a researcher’s workflow in mind, rather than a list of features that you need to work around. It’s also browser-based, so it can be used from almost any computer with an internet connection.

The main advantages that I can see are:

  • Cross-platform (so I can use it on my Mac without needing to buy and install Windows and Parallels or VMware Fusion! That already saves me a few hundred dollars)
  • Online data storage, so I can access and work on the project from any computer, and have online backups available if my computer crashes.
  • I’m hopeful about it having a more intuitive interface than NVivo 9, as a shallow learning curve was one of their design goals
  • For projects involving multiple people (i.e. most things except a PhD), it’s a product built around collaborative tools, rather than bolting them onto an existing product
  • Lots of visualisation tools (which can also export to various formats). I think visually, and that will help me to understand what’s going on within and between my different case studies.
  • Licensed by the month, at rates competitive with other options on the market.

For me, the downsides are:

  • Storing my project data online will require an ethics variation form, as my current ethics approval requires me to store digital copies on RMIT servers. However, Dedoose uses encrypted storage so the variation should be a fairly straightforward bit of paperwork.
  • Requires an internet connection. My home internet has been increasingly flaky over the last few months, and I don’t like having my productivity tied to whether Virgin Broadband has decided to work that day.
  • Less support available in the university. It has two black marks against it in an RMIT context: it’s new, and it’s not NVivo. There’s a very established NVivo community here, and a lot of organisational resistance to change. However, there seems to be a lot of online support, so I can teach myself how to use it.
  • No local copy of the project, but you can export your data whenever it’s needed.

I think that a PhD project is a good place to start trialling it, as I don’t need to convince a group of co-workers to give it a shot. There’s a 30-day free trial available, and I’ll see how I go from there. If it works well, I’ll look into using it for collaborative projects later on. I’ll keep you posted.

Have you tried it out? Any experiences to report, or things I should look out for?

Qualitative Analysis software for Mac – a brief look

September 3, 2010

EDIT: Since writing this post, I’ve opted for a browser-based tool called Dedoose. I’ve written a bit more about it here.

This has turned out to be a much shorter search than expected. I’ve been looking for qualitative data analysis software that runs on Mac OS X, and have found a disappointingly short list of applications.

What I want:

  • Runs natively on OS X, without requiring dual-booting into Windows, or virtual machine software
  • Flexibility in tagging text (interview transcripts and field notes) according to themes that will emerge during analysis
  • Visualisation of tagged data (coding stripes, etc)
  • Search functions (Boolean searches, proximity searches, negative cases, comparing to demographic data, etc)
  • Intuitive, user-friendly interface

Optional things I’d like to see:

  • Ability to handle audio, video and image data
  • Software I can buy outright, in order to access my data in future

What I’ve found:

  • HyperRESEARCH is the main piece of analysis software that will run on a Mac. I’m going to download a trial version next week. It’s about $200 USD to purchase, so I’ll only buy it if I know that it’s going to work for me. It seems to be the strongest contender for an NVivo-like tool.
  • TAMS Analyser is a Text Analysis Markup System (not to be confused with the Technology Acceptance Model, which I’ve been reading about recently…). It’s free, which is good for students. The review on MacResearchSW describes it as a powerful and full-featured analysis tool, but comments that it has a non-intuitive interface with a steep learning curve. Some users on Academics Anon (a semi-public LJ group – many entries won’t be visible unless you join the LJ community) have expressed frustration at the interface. In particular, displaying all tags in the text can make documents unreadable when multiple overlapping tags are used.
  • Coding Analysis Toolkit (CAT) is a web-based toolkit, designed to allow analysis of text data that’s either been pre-coded using ATLAS.ti, or coded using the built-in coding module. Its features are relatively basic, and the web-based approach has some drawbacks (reliance on web access and server uptime) as well as positives (access from any platform with a web browser).

The CAQDAS network provides information about a range of software designed to assist with qualitative data analysis. Unfortunately, they don’t currently have reviews for very many packages – just NVivo 8, ATLAS.ti 6, Digital Replay Systems (DRS) and MAXqda 2007. None of these currently work on a Mac.

Thomas Koenig has an excellent CAQDAS Comparison, ordered by software function. Most available software is designed by researchers, and not by interface designers – so virtually all the available options will feel a lot like you’re using an old version of Windows. I can’t find a date anywhere on the page, though, and the inclusion of NVivo 2 suggests that it might be a bit dated.

People at work are all suggesting NVivo. Support is readily available via the Qualitative Inquiry Group community, other people in the office already use it on their ALTC project, and it does all the things I’m looking for.

However, I’ll either have to run it on my clunky old work PC (preventing me from doing any data analysis from home), or I’ll need to buy and install a copy of Windows on my MacBook. My options there are dual-booting via Boot Camp, or running it in a virtual machine like Parallels Desktop or VMware Fusion.

It’s also expensive. I can purchase a 12-month student license for $129 AUD per year, or buy a copy outright for $780 AUD (or $850 AUD, including an upgrade to NVivo 9 when it’s released next month). The first option prevents me from readily accessing my data in future, and the second costs a small fortune, from the perspective of a full-time student.

I’m still searching, but I suspect that I’ll end up using NVivo. It’s certainly not ideal (expensive, and means I’ll also have to buy a copy of Windows), but it seems to have the best combination of search features and user support (both local and online). If anyone can suggest alternatives, I’d love to hear them.

QIG: March 2010

March 2, 2010

Today’s QIG topic continued the theme of “rapid qualitative research” from last month: how can you perform “good” qualitative research quickly, instead of immersing yourself for months (or years) while themes and theories emerge from the mass of data?

There were some interesting stories shared, from people with experience working on project timelines unheard of in a typical academic setting: market research consultancies with two-week turnarounds; international aid assessments that generate reports a month after hitting the ground in a new country. This is a quick braindump of things I found interesting during the meeting:

‘Quick’ projects aren’t always as quick as they look. Consultancies that take two weeks to go from project brief to final report don’t always start cold: the brief will usually be written by someone who knows exactly what they are looking for (and how to find it), and the researchers may well have spent years working in that space. These reports can generate some fascinating insights into behaviours and motivations, but those insights may be shaped by a career of research in related areas.

If you want quick data analysis, you need to design your research to accommodate this. Work out what types of data you need to answer the research questions. Are they opinions? Behaviours? Reactions? The more you can structure your research during the experiment design phase, the less time is needed to untangle it during the analysis phase.

Analyse during data collection. While conducting an interview, be aware of the major themes that you want to address – it helps you know when to ask more detailed questions, and you can link together important themes. It also makes it much easier to find related items in a transcript. “Tell me about X” might be a great way of generating a heap of data. “Tell me about how X was affected by A, B and C” may be much more effective at answering your research questions.

Some types of research are going to be slower than others. If you’re testing a hypothesis: great! Previous work should let you know what to look for, and what to ask about. It’ll either pass or fail. If you want to develop grounded theory, you’ll need time.

Quick work is rarely done alone. Academic qualitative studies are usually done solo, and the connection between those two facts is no coincidence… Generally, quick research involves data analysis by a group of researchers, often immediately before and after gathering data in the field. Doing the whole lot solo is unlikely to let you work through things quickly.

NVivo workshop

November 27, 2009

I’m a little behind on updates, as it’s been a hell of a busy week.

On Monday 23/11, I attended a full-day workshop on NVivo 8. It’s a piece of software used to help analyse qualitative data: things like interviews, images, audio and video. 12-month licenses are available for the desktop PCs at work, but it’s unfortunately not available on a Mac (short of running it through Boot Camp).

I’m impressed with what I’ve seen so far. It’s extremely flexible, and should work for a range of different research methodologies – though some have claimed that it’s best for grounded theory work.

Any data (usually text from a transcript, or notes made for a specific part of a media file) can be tagged with one or more “nodes” – thematic keywords, essentially working like the many different highlighters and sticky notes that I’ve been using on my hard copies. Nodes can be “free” or use a tree structure if you want to break a concept down into smaller parts, and they can be (and usually are) modified frequently during the analysis.

You can choose from a few different visual representations of your nodes – my favourite is a series of coloured stripes running down the side of the screen, level with the relevant parts of the transcript. The first stripe is a greyscale bar showing the density of tags applied to each section (darker grey = more keywords), which gives a very quick way of seeing which areas were most valuable, or which haven’t had much attention paid to them yet. Other bands of colour highlight specific nodes: the five most frequently used, or any custom selection.
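
To make the density stripe concrete: it boils down to counting how many codes touch each section. Here’s a minimal sketch in Python – the data model, node names and excerpts are all invented for illustration, and bear no relation to NVivo’s actual internals:

    from collections import Counter

    # Hypothetical data model: each coded excerpt is a (paragraph, node) pair.
    excerpts = [
        (1, "technology/aptitude"),
        (1, "trust"),
        (2, "trust"),
        (2, "risk"),
        (2, "innovation"),
        (3, "technology/aptitude"),
    ]

    # The greyscale stripe is effectively a per-paragraph tag count:
    # more codes on a paragraph = a darker band beside it.
    density = Counter(paragraph for paragraph, node in excerpts)
    for paragraph in sorted(density):
        print(f"para {paragraph}: {'#' * density[paragraph]}")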

You can search through nodes, to quickly find all the quotes (linked back to their original sources) that have been assigned to a particular node. This is handy when looking for a couple of additional quotes for a paper – it saves leafing through all the pages looking for annotations.

It’s also possible to run more complex queries: “what statements about technological aptitude were made by people over the age of 45?” or “did anyone talk about trust or security when they discussed using an unfamiliar website?”
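
Under the hood, a query like the first one is essentially a filter over the coded excerpts, joined against participant demographics. A rough sketch in Python, with invented participants, nodes and field names:

    # Invented example data – not NVivo's real format.
    participants = {
        "P01": {"age": 52},
        "P02": {"age": 31},
        "P03": {"age": 48},
    }

    excerpts = [
        {"participant": "P01", "node": "technology/aptitude",
         "text": "I still ring the bank rather than use the website."},
        {"participant": "P02", "node": "technology/aptitude",
         "text": "I'll try any new gadget once."},
        {"participant": "P03", "node": "trust",
         "text": "I won't enter my card details on a site I don't know."},
    ]

    # "What statements about technological aptitude were made by people over 45?"
    matches = [e["text"] for e in excerpts
               if e["node"] == "technology/aptitude"
               and participants[e["participant"]]["age"] > 45]
    print(matches)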

Above all else, it’s a powerful way of managing the large amounts of data that my interviews will generate. I’d like to learn how to use its more advanced functions, but even if I just use it as a management tool, it will be worth the investment of time in discovering how to set it up.

Qualitative Inquiry Group @ RMIT

August 4, 2009

The Qualitative Inquiry Group meets on the first Tuesday of each month, in the Accounting and Law boardroom – right next to my office. I’ve decided to start attending their meetings, though it’s causing a bit of trouble with desk space… I originally said I’d be in the uni on Wednesdays and Fridays, and my desk has been claimed during the rest of the week.

Today’s workshop was on coding qualitative data (using software like NVivo). A few people described their experiences during past or current projects, and the discussion evolved from there. It seems that there are only two real rules, as far as data coding goes:

1. You need to have (and know!) a reason for coding up the data – it’s a technique that can help to solve a problem. Otherwise it’s a rather masochistic exercise, and a great way of never finishing a thesis…

2. No two people follow the same methods, though hopefully two researchers would both use similar codes for the same bit of data.

Essentially, NVivo is designed to help analyse large amounts of qualitative data. In my case, that takes the form of interview transcripts: by the end of the research phase, I’ll hopefully have about forty interviews to analyse.

Codes are categories, topics or themes that you are interested in exploring. They aren’t always directly stated (so can’t be searched for using the transcript text) – they might be nebulous things like an interviewee’s attitudes towards management, or their motivation for making a decision.

Developing codes is an arcane art. Ideally, they “emerge” from the data – when reviewing the interviews, new themes and elements of interest are discovered. Many people start by assigning codes based on existing theories that (hopefully) explain their observations. That’s fine if you’re testing a theory, but it doesn’t work if you want to develop grounded theory of your own…

Multiple codes can be applied to any given bit of data, which is a good reason for doing this with software instead of coloured pens and post-it notes. Actually, a good interviewer should be getting statements that have multiple meanings: a statement about use of a specific technology might also allude to innovation, risk, etc.
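
Once each excerpt can carry several codes, finding those overlaps becomes a simple set operation rather than a shuffle through sticky notes. A toy illustration in Python, with quotes and codes invented for the purpose:

    # Invented example: each excerpt carries a set of codes.
    coded_excerpts = {
        "I jumped on the new system straight away, even though IT warned us off it.":
            {"technology use", "innovation", "risk"},
        "Management never explained why we switched systems.":
            {"attitudes to management"},
    }

    # Find every excerpt coded with both "innovation" and "risk".
    wanted = {"innovation", "risk"}
    for text, codes in coded_excerpts.items():
        if wanted <= codes:  # subset test: excerpt carries all wanted codes
            print(text)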