“Feed me…,” not just the words of a hungry child, but the daily demand of any small study abroad or service-learning organization. Or, perhaps more famously, from the 1980s movie Short Circuit: “Innnpuuut, innnpuuut…” Anyone who runs their own business knows that consistently responding to feedback is what ensures the organization delivers a great experience for its volunteers and students.
The primary means of getting feedback is through client surveys. But feedback is of little use if 1) answer choices aren’t clear, and 2) response rates are low. Thankfully, you’ll find many experts giving advice about:
1) Unipolar versus bipolar response types.
2) What type of rating scale to use.
3) Whether negative responses should be listed first.
4) Exactly how questions should be worded.
5) How long a survey should be.
My post here is not an end-all guide to creating the perfect client survey; it’s the simple, straightforward “survey powered by squirrel” approach we have developed and used over 10 years in business.
Our system is based around physical feedback cards. These are filled out on the volunteer’s last day of their program. We regularly achieve collection rates in excess of 90% of participants, and we do this by requiring our teams to collect a minimum of 90% in order to qualify for team bonuses. A 90+% collection rate helps ensure the results reflect a wide view of our program.
When we miss a volunteer on their last day, we email an electronic feedback form. In our experience, though, the physical cards get a far higher response rate, and volunteers give us more useful information on them. E-surveys are likely less effective because of crowded inboxes, effective spam filters, and the fact that an email is too easily set aside (and never returned to).
Physical feedback cards also have immediacy: the volunteer’s feelings about the program are still fresh in their mind, not recalled a week or two after the experience.
SHORT & SWEET
Our survey fits on a 5 x 7 card and asks volunteers to rate us on 10 key points. When a volunteer looks at our card, it immediately looks quick and easy to complete. The points fall into four areas: Orientation, Accommodation, Volunteer Project, and Our Organization (e.g., Client Service and Facilities).
Most performance review systems use four-, five-, or seven-level rating scales. For example, a seven-level scale:
Always Exceeded Expectations / Frequently Exceeded Expectations / Sometimes Exceeded Expectations / Met Expectations / Sometimes Didn’t Meet Expectations / Frequently Didn’t Meet Expectations / Never Met Expectations
Five- and seven-level rating scales are the most common and, I’m told, the most accurate. Be careful, though, because the experts say 0-10 rating scales reduce reliability and validity. The argument for more rating levels is that more answer choices let the volunteer better reflect how they feel, and give the survey finer granularity for analysis.
The problem with these systems is that I’m never sure what to make of them. Does a 7 out of 10 equate to 70%, so that’s a “C” or “Satisfactory”? Or is it actually a stronger rating, because it’s above the mid-point (5/10)? Also, what’s the difference between “Okay,” “Satisfactory,” “Fair,” and “Acceptable”? And should these ratings be considered “good” at all? Aren’t they just nicer ways of saying “needs improvement”?
To keep things simple and straightforward, we use only three rating levels: “Excellent,” “Good,” and “Needs Improvement.” We look at the volunteer experience in these terms: “Did we exceed, meet, or not meet the volunteer’s expectations?” Three rating levels keeps it super simple!
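As a sketch of how three-level results might be tallied per area (the card data, area names, and function names below are hypothetical illustrations, not our actual system):

```python
from collections import Counter

# Hypothetical feedback cards: each maps an area to one of three ratings.
RATINGS = ("Excellent", "Good", "Needs Improvement")
cards = [
    {"Orientation": "Excellent", "Accommodation": "Good",
     "Volunteer Project": "Excellent", "Our Organization": "Excellent"},
    {"Orientation": "Good", "Accommodation": "Needs Improvement",
     "Volunteer Project": "Good", "Our Organization": "Excellent"},
]

def tally(cards):
    """Count ratings per area across all collected cards."""
    counts = {}
    for card in cards:
        for area, rating in card.items():
            counts.setdefault(area, Counter())[rating] += 1
    return counts

def pct_excellent(counts, area):
    """Share of cards rating an area 'Excellent' (i.e., expectations exceeded)."""
    total = sum(counts[area].values())
    return 100 * counts[area]["Excellent"] / total

counts = tally(cards)
print(pct_excellent(counts, "Orientation"))  # 50.0
```

With only three levels, the percentages answer the one question that matters directly: for each area, did we exceed, meet, or fall short of expectations?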
ANALYZE OPEN RESPONSES
On the back of our feedback card, we ask our volunteers for additional comments. Approximately 70% take the extra minute or two to leave us additional thoughts. These free-text answers provide valuable insight into volunteer satisfaction. However, they need to be analyzed, and the comments categorized, for tracking.
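One lightweight way to categorize free-text comments for tracking is simple keyword matching. The categories and keywords below are purely illustrative, not our actual taxonomy:

```python
# Illustrative categories and their trigger keywords.
CATEGORIES = {
    "staff": ["staff", "coordinator", "team"],
    "accommodation": ["host family", "room", "meals"],
    "project": ["project", "worksite", "placement"],
}

def categorize(comment):
    """Return every category whose keywords appear in the comment."""
    text = comment.lower()
    tags = [cat for cat, words in CATEGORIES.items()
            if any(word in text for word in words)]
    return tags or ["other"]

print(categorize("Our coordinator was wonderful and the host family was great!"))
# ['staff', 'accommodation']
```

Keyword matching misses nuance, so it works best as a first pass before a human reads each comment, which is what we do anyway.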
We read every single one of them, and we react. If a team member could have been friendlier, this is brought to their attention; if a team member is mentioned by name in a really positive way they’re told and congratulated; if a host family is criticized, we hold a meeting with the family, and so on.
BE REAL—Read, Evaluate, Act, Learn
Most importantly, we track our feedback statistics. These are discussed in weekly team meetings and action points are identified. If there is something very serious, the card is immediately brought to the Executive Director’s desk!
We insist that teams track their results week by week. If feedback statistics are put off until the end of the month, the gathering and reporting becomes too large a task. Also, by looking at statistics week to week, our teams can react more immediately and they’re not “surprised” at the end of the month with lower than expected results.
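A weekly tracking pass can be as small as a collection-rate check against the 90% target. The weekly figures and function names here are hypothetical examples:

```python
# Hypothetical weekly figures: (week, cards_collected, volunteers_finishing).
weeks = [
    ("W1", 18, 19),
    ("W2", 14, 17),
    ("W3", 21, 22),
]

def collection_rate(collected, finishing):
    """Percentage of finishing volunteers who returned a feedback card."""
    return 100 * collected / finishing

for week, collected, finishing in weeks:
    rate = collection_rate(collected, finishing)
    flag = "" if rate >= 90 else "  <-- below 90% target"
    print(f"{week}: {rate:.0f}%{flag}")
```

Flagging a weak week immediately is the point: a team that sees W2 slip below target can correct course before the month-end numbers arrive.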
Finally, volunteers are happy to leave feedback, but they’re even happier when we’ve acted on their feedback. When we identify tough or critical comments we respond to the volunteer. We never respond defensively, though we do take the time to provide things like price breakdowns, answers about our business relationships, our plans for improving a particular project, etc.
In the end, client feedback is an incredibly effective business tool, but it can easily become overcomplicated. Read up on what the experts have to say, then experiment and adapt your process as you go. Above all, keep it simple and look for ways to drive collection rates as high as you can; this maximizes your input.
Learn more about Maximo Nivel at www.maximonivel.com.