Archive

Posts Tagged ‘software’

Another look at Qualitative data analysis for Mac users: Dedoose

August 17, 2011

Technically I’m not meant to be looking at data analysis right now: it’s writing month, and the last thing I need to be doing is getting lost in data. However, the Qualitative Data Analysis Software for Mac – A Brief Look post that I wrote last September is still getting a huge amount of traffic every day, and I thought it was time to write a quick update.

What’s changed? Lots. I recently discovered a tool called Dedoose. Like the other tools that I’ve come to rely upon (Zotero for managing references, Scrivener for writing), it’s not an incremental upgrade that adds bells and whistles to an existing product. I like things that are built with a researcher’s workflow in mind, instead of a list of features that you need to work around. It’s also browser-based, so it can be used from almost any computer with an internet connection.

The main advantages that I can see are:

  • Cross-platform (so I can use it on my Mac without needing to buy and install Windows and Parallels or VMware Fusion! That already saves me a few hundred dollars)
  • Online data storage, so I can access and work on the project from any computer, and have online backups available if my computer crashes.
  • I’m hopeful that it will have a more intuitive interface than NVivo 9, as a shallow learning curve was one of their design goals
  • For projects involving multiple people (i.e. most things except a PhD), it’s built around collaborative tools, rather than having them bolted onto an existing product
  • Lots of visualisation tools (which can also export to various formats). I think visually, and that will help me to understand what’s going on within and between my different case studies.
  • Licensed by the month, and at competitive rates with other options on the market.

For me, the downsides are:

  • Storing my project data online will require an ethics variation form, as my current ethics approval requires me to store digital copies on RMIT servers. However, Dedoose uses encrypted storage so the variation should be a fairly straightforward bit of paperwork.
  • Requires an internet connection. My home internet has been increasingly flaky over the last few months, and I don’t like having my productivity tied to whether Virgin Broadband has decided to work that day.
  • Less support available in the university. It has two black marks against it in an RMIT context: it’s new, and it’s not NVivo. There’s a very established NVivo community here, and a lot of organisational resistance to change. However, there seems to be a lot of online support, so I can teach myself how to use it.
  • No local copy of the project, but you can export your data whenever it’s needed.

I think that a PhD project is a good place to start trialling it, as I don’t need to convince a group of co-workers to give it a shot. There’s a 30-day free trial available, and I’ll see how I go from there. If it works well, I’ll look into using it for collaborative projects later on. I’ll keep you posted.

Have you tried it out? Any experiences to report, or things I should look out for?

Managing social networks

September 15, 2010

One of the topics that came up at #socialmelb last Friday was the ever-increasing complexity of people’s online social networks. There’s no particular divide between personal and professional – just shades of grey in both arenas. Even your personal contacts will be part of many interrelated subgroups, as seen in this presentation by Paul Adams. Finding where people gather online is only part of the problem… you also need to find out how they communicate.

User behaviour is an important consideration for businesses trying to start a social media presence. Raw numbers of users don’t really help: it’s meaningless for a business to know that Facebook is the site with the most Australian users, if most of those people want to use it for their personal connections. The medium has a huge impact on the message, and there’s a world of difference between the types of conversations held on a Facebook company page, a Twitter hashtag, and a LinkedIn group.

I seem to have gathered quite a collection, but everything currently serves a different purpose. There’s a LiveJournal that I’ve had since late 2003 to keep in touch with a central group of friends. LJ supports a range of different privacy filters, and I know lots of people using it – so it works well for semi-private communication. Moving outwards from there, I use this blog (and its predecessor) for work-related stuff. Family and personal contacts can look at Facebook; work contacts I’ll generally add to LinkedIn. Anyone who’s currently saying something interesting might get followed on Twitter, which has become a key tool for me in finding out about new trends in social media. Then there are services I use for other tools they provide (Zotero, Citeulike, Last.FM), rather than for their networking functions.

It’s hard enough keeping track of your own fractured digital persona these days. For my research project, I’m building case studies of 30 small businesses – including their social media activity over a 12-month period. I’m yet to find a business with only one online profile, which means I have a vast quantity of data to sift through. At a rough estimate, I’m looking at around 150 sites in total.

I’ve begun using Gist as a social media aggregator. It’s only recently out of beta, and has a few quirks, but it’s shaping up to be an extremely powerful tool. I’ve fed it contacts from LinkedIn, Gmail and Twitter so far, and left it to dig up information on those people. Now, whenever I log in, it has suggestions for extra snippets of information (and a handy note telling you how that information was found). You can view people, companies, and lists like “people I’m meeting with” – and generate a handy one-page summary to bring you up to speed on each person.

In my case, I’ve tagged all the businesses in my study with “casestudy.” Searching for entries with that tag, I can quickly review the online activities of each business. Tweets, Facebook and blog updates are all gathered into one place, so I can spend less time gathering the information, and more time making sense of it all.

That’s helping me manage the incoming information. Outgoing information – the active participation in all those different social networks – is something that doesn’t have an easy fix. I can’t stand reading (or skimming past) mass re-posted messages, where software is used to automatically duplicate content across different platforms. I’d much rather stick to a few channels at a time, where I can actively engage with other people.

And if I run out of hours in the day, that’s what insomnia is for, right?

Qualitative Analysis software for Mac – a brief look

September 3, 2010

EDIT: Since writing this post, I’ve opted for a browser-based tool called Dedoose. I’ve written a bit more about it here.

This has turned out to be a much shorter search than expected. I’ve been looking for qualitative data analysis software that runs on Mac OS X, and have found a disappointingly short list of applications.

What I want:

  • Runs natively on OS X, without requiring dual-booting into Windows, or virtual machine software
  • Flexibility in tagging text (interview transcripts and field notes) according to themes that will emerge during analysis
  • Visualisation of tagged data (coding stripes, etc)
  • Search functions (boolean searches, proximity searches, negative cases, comparing to demographic data, etc)
  • Intuitive, user-friendly interface

Optional things I’d like to see:

  • Ability to handle audio, video and image data
  • Software I can buy outright, in order to access my data in future

What I’ve found:

  • HyperRESEARCH is the main piece of analysis software that will run on a Mac. I’m going to download a trial version next week. It’s about $200 USD to purchase, so I’ll only buy it if I know that it’s going to work for me. It seems to be the main contender for an NVivo-like tool.
  • TAMS Analyser is a Text Analysis Markup System (not to be confused with the Technology Acceptance Model, which I’ve been reading about recently…). It’s free, which is good for students. The review on MacResearchSW describes it as a powerful and full-featured analysis tool, but comments that it has a non-intuitive interface with a steep learning curve. Some users on Academics Anon (a semi-public LJ group – many entries won’t be visible unless you join the LJ community) have expressed frustration at the interface. In particular, displaying all tags in the text can make documents unreadable when multiple overlapping tags are used.
  • Coding Analysis Toolkit (CAT) is a web-based toolkit, designed for analysing text data that’s either been pre-coded using ATLAS.ti or coded with the built-in coding module. It has relatively basic features, and the web-based approach has some drawbacks (reliance on web access and server uptime) as well as positives (access from any platform with a web browser).

The CAQDAS network provides information about a range of software designed to assist with qualitative data analysis. Unfortunately, they don’t currently have reviews for very many packages – just NVivo 8, ATLAS.ti 6, Digital Replay Systems (DRS) and MAXqda 2007. None of these currently work on a Mac.

Thomas Koenig has an excellent CAQDAS Comparison, ordered by software function. Most available software is designed by researchers, and not by interface designers – so virtually all the available options will feel a lot like you’re using an old version of Windows. I can’t find a date anywhere on the page, though, and the inclusion of NVivo 2 suggests that it might be a bit dated.

People at work are all suggesting NVivo. Support is readily available via the Qualitative Inquiry Group community, other people in the office already use it on their ALTC project, and it does all the things I’m looking for.

However, I’ll either have to run it on my clunky old work PC (preventing me from doing any data analysis from home), or I’ll need to buy and install a copy of Windows on my MacBook. My options there are dual-booting via Boot Camp, or running it in a virtual machine like Parallels Desktop or VMware Fusion.

It’s also expensive. I can purchase a 12-month student license for $129 AUD per year, or buy a copy outright for $780 AUD (or $850 AUD, including an upgrade to NVivo 9 when it’s released next month). The first option prevents me from readily accessing my data in future, and the second costs a small fortune from the perspective of a full-time student.

I’m still searching, but I suspect that I’ll end up using NVivo. It’s certainly not ideal (expensive, and means I’ll also have to buy a copy of Windows), but it seems to have the best combination of search features and user support (both local and online). If anyone can suggest alternatives, I’d love to hear them.

NVivo workshop

November 27, 2009

I’m a little behind on updates, as it’s been a hell of a busy week.

On Monday 23/11, I attended a full-day workshop on NVivo 8. It’s a piece of software used to help analyse qualitative data: things like interviews, images, audio and video. 12-month licenses are available for the desktop PCs at work, but it’s unfortunately not available on a Mac (short of running it through Boot Camp).

I’m impressed with what I’ve seen so far. It’s extremely flexible, and should work for a range of different research methodologies – though some have claimed that it’s best for grounded theory work.

Any data (usually text from a transcript, or notes made for a specific part of a media file) can be tagged with one or more “nodes” – thematic keywords, essentially working like the many different highlighters and sticky notes that I’ve been using on my hardcopies. Nodes can be “free” or use a tree-structure if you want to break a concept down into smaller parts, and they can be (and usually are) modified frequently during the analysis.

You can choose from a few different visual representations of your nodes – my favourite is a set of coloured stripes running down the side of the screen, level with the relevant parts of the transcript. The first stripe is a greyscale bar showing the density of tags applied to each section (darker grey = more keywords), which gives a very quick way of seeing which areas were most valuable, or which haven’t had much attention paid to them yet. Other bands of colour highlight specific nodes: the 5 most frequently used, or any custom selection.
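
For anyone who hasn’t used this kind of tool, here’s a rough sketch in Python of how I picture coded segments, hierarchical nodes and coding density fitting together. It’s purely illustrative – not NVivo’s actual data model – and the node names and quotes are invented placeholders.

```python
from __future__ import annotations
from dataclasses import dataclass, field

# Illustrative toy model only (not NVivo's internal format): "nodes" are
# thematic keywords, optionally arranged in a tree, applied to segments of text.

@dataclass
class Node:
    name: str
    children: list[Node] = field(default_factory=list)

@dataclass
class Segment:
    source: str                                   # e.g. "Interview 03, para 12"
    text: str
    nodes: set[str] = field(default_factory=set)  # names of the nodes applied

# A tree node: "Trust" broken down into smaller parts.
trust = Node("Trust", children=[Node("Security"), Node("Word of mouth")])

# Invented example segments, each tagged with one or more nodes.
segments = [
    Segment("Interview 03, para 12",
            "I only buy from sites my friends recommend.",
            {"Trust", "Word of mouth"}),
    Segment("Interview 03, para 13",
            "The checkout page felt dodgy, so I gave up.",
            {"Security"}),
]

# "Coding density" per segment: more applied nodes would mean a darker stripe.
for seg in segments:
    print(seg.source, "- density:", len(seg.nodes))
```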

You can search through nodes to quickly find all the quotes (linked back to their original sources) that have been assigned to a particular node. This is handy when looking for a couple of additional quotes for a paper – it saves leafing through all the pages looking for annotations.

It’s also possible to run more complex queries: “what statements about technological aptitude were made by people over the age of 45?” or “did anyone talk about trust or security when they discussed using an unfamiliar website?”
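
As a rough illustration of the kind of query I mean (again, a conceptual sketch in plain Python with made-up data, not NVivo’s actual query interface), filtering coded segments by node and by a demographic attribute might look something like this:

```python
# Conceptual sketch with invented example data, mimicking the query
# "statements about technological aptitude made by people over 45".
segments = [
    {"source": "Interview 01", "age": 52,
     "nodes": {"Technological aptitude"},
     "text": "I leave all the website things to my daughter."},
    {"source": "Interview 02", "age": 31,
     "nodes": {"Trust", "Security"},
     "text": "I won't put my card details into a site I don't recognise."},
]

# Keep only segments coded at the relevant node, spoken by someone over 45.
hits = [s for s in segments
        if "Technological aptitude" in s["nodes"] and s["age"] > 45]

for s in hits:
    print(f'{s["source"]}: {s["text"]}')
```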

Above all else, it’s a powerful way of managing the large amounts of data that my interviews will generate. I’d like to learn how to use its more advanced functions, but even if I just use it as a management tool, it will be worth the investment of time in discovering how to set it up.