This past Thursday and Friday the MISO Survey team met to work on revisions to the survey instrument. This is something the survey team does every two years to keep the survey up to date. Since we value the ability to provide longitudinal analysis, we’re conservative about adding or removing things from the survey.
The MISO Survey is “a Web-based quantitative survey designed to measure how students, faculty, and staff use and evaluate the services and resources of colleges and universities with merged library and computing units.” We’ve had some non-merged institutions participate along the way, since the survey is a useful measure of how these often complementary services are received.
With so many services and resources to measure, the MISO Survey takes a while to answer. Some questions are optional, but the core questions on use, importance, and satisfaction are not. The average time to complete the survey is 18 minutes, and that’s been a problem for some institutions.
In response to concerns about time to complete the survey, we’re going to give institutions more options on which sections, or even specific questions, are included in the survey. For example, while the questions about frequency of use are interesting, we’ve found over the last few years that there’s more interest in how important faculty, staff, and students consider our resources and services, and how satisfied they are with them. So the section on use will be optional going forward: available to institutions that would like to know this information, but not required of all institutions. We’re confident that enough institutions will continue to ask these questions that the validity of the responses across institutions will persist.
We’ve also made some changes that correct duplication in the survey. One section asks whether you use a particular technology, such as a course management system, blogs, etc. Several of the items in this list duplicate questions in the frequency-of-use section of the survey. Since institutions can look at frequency of use to determine overall use, we can remove items like the course management system from our “use of tools” section, leaving fewer items for each respondent to answer.
One challenge we haven’t yet figured out is how to ask questions about support for faculty research. At all of our institutions, faculty across the disciplines have a variety of computing needs at both the desktop and server levels. Few of our organizations have a research computing unit, so we respond to each new research opportunity with an ad hoc group composed of the people with the expertise to support the required technology. The challenge is to find a way to ask questions about this support that aren’t interpreted in as many ways as there are respondents. Do we focus on the word “research”? “Grants”? Something else?
The survey has one question that asks, “How satisfied or dissatisfied are you with support for your specialized computing needs?” But I’m not sure “specialized” is the right word either.
I’m going to take the next week or so to try to come up with wording for questions that get at support for research, grant, and specialized computing. In the meantime, we will be preparing an updated version of the survey so it can be tested at each of the five sponsoring institutions. We’ll be testing the revised questions with faculty, students, and staff over the summer so that the new survey is ready to go when the next survey cycle begins in the fall.