Rethinking the Feedback Form

Are there ways to get more out of feedback forms? Here are some actionable ideas.


Post-training feedback forms ask respondents to rate trainers on parameters that are ambiguous or hard to assess. Such feedback is of limited use to both the client organisation and the trainer. Can we redesign the feedback form using first principles of feedback? What would it look like? And what difference would it make?

It is common for my clients to share with me the post-training feedback that they’ve collected from learners. My eyes usually zero in on the “Instructor Rating” section. I read at a brisk pace, pausing only when I encounter an extreme rating. And when that happens, I look for clues in another section of the form, typically titled “Any other feedback for the trainer”.

It takes me about 2 minutes to breeze through about 25 forms. That’s an average of 5 seconds per form. Five seconds! And I confess, I am rather sensitive to the opinions of my program participants. So why do I treat feedback forms with such nonchalance?

For an answer, let’s take a closer look at a fragment of one such feedback form.

This one asks the respondent to rate the trainer on 8 parameters, using a Likert scale.

I have a problem with the first parameter, “Style and delivery”. Each respondent can interpret this term differently. Style can be about having a friendly demeanour. Or about the training method used (interactive discussion versus activity-led). It can also be about fluent speech, use of the vernacular to enhance understanding, or a generous sprinkling of humour. If a respondent awards a high rating, it doesn’t tell me which of these many aspects of style/delivery was appreciated. Worse, a low rating doesn’t tell me which of them I should improve.

Look at the next 3 parameters: “Communication”, “Responsiveness to group” and “Produces a good learning climate”. They are all similarly ambiguous, and the information gleaned from the ratings is not actionable. No wonder it doesn’t get much attention from me.

What about parameters 5 to 7? Ambiguity is not an issue with “Knowledge of subject”, “Conceptual clarity” and “Preparation”. But note that these aren’t observable “actions”; they must be inferred from what is observable. If I talk confidently and fluently, what does that indicate to a respondent? Is it proof of great knowledge, clarity or preparation? Or is it glib talk that hides insufficient knowledge and weak concepts? So here, the problem is different… it is difficult for the respondent to get at the truth. The ratings respondents award are therefore questionable, and it may be unwise for a trainer to take them too seriously.

To be fair, such feedback does have some merits.
1. The choice of parameters printed in the feedback form reveals what is valued in a trainer. That gives the trainer a basis for ongoing improvement efforts.

2. Respondents’ ratings reveal how happy they are with the trainer. So extreme ratings can be used as a basis for action by the client organisation (retain or replace the trainer). But in the vast majority of cases, ratings aren’t extreme. That takes away the possibility of sharply focussed action on any one parameter.

If we want to design a feedback form which delivers value, it must…
– Remove ambiguity
– Improve credibility
– Enable focussed action

What does useful feedback look like?

During a coffee break in one of my workshops, a participant, Shweta, came up to me and said, “You’re spending too much time answering individual participants’ questions, most of which are elementary. I’m bored and I think it’s a waste of my time”.

I find this feedback really useful.

In particular,
– It is unambiguous. I know exactly what she’s talking about.
– It is credible. I have no reason to doubt what she says, because it doesn’t involve any guesswork. It’s coming straight from her personal experience.
– It is actionable. If I want, I can limit the amount of time I spend answering questions from individual learners. Of course, I will need to weigh competing needs of learners. Some need clarity and would want me to dwell on the topic further; others already have clarity and would rather have me move on.

Shweta’s feedback fits our 3 criteria for valuable feedback. We’re getting somewhere!

If we inspect her words carefully, we can discern an ACTION and its IMPACT. And little else. She hasn’t rated me on a numerical scale. She hasn’t judged me. She has simply commented on an action I took and its impact on her. And in her pithy statement lies the value of her words.

Constrained by the typical feedback form, she could have rated me LOW on “Responsiveness to group” or “Produces a good learning climate”. And I would be left scratching my head, wondering, “Where did I go wrong? Did Shweta want me to go faster, go slower, infer that she was bored, allow longer coffee breaks, or do something to relieve post-lunch drowsiness? Which one of these?”

 

Rethinking the feedback form

How would we go about eliciting such action-oriented feedback from learners? Here’s my version of the feedback form.

It preserves the ACTION and IMPACT format. And I’ve put in 5 broad areas to jog respondents’ memory a bit.
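
If it helps to see the structure spelled out, here is a minimal sketch of such a form rendered in Python. The five area labels are illustrative stand-ins of my own, not the exact wording printed on the form:

```python
# A minimal sketch of an action-impact feedback form.
# NOTE: the area labels below are illustrative stand-ins,
# not the exact wording on the actual form.
AREAS = [
    "Explanations and examples",
    "Handling of questions",
    "Activities and exercises",
    "Pace and time management",
    "Energy and interaction",
]

def render_form() -> str:
    """Build the printable text of the form: one ACTION/IMPACT pair per area."""
    lines = [
        "For any area that stands out in your memory, note one thing",
        "the trainer DID (action) and what it DID TO YOU (impact).",
        "",
    ]
    for area in AREAS:
        lines += [f"Area: {area}", "  ACTION: ____", "  IMPACT: ____", ""]
    return "\n".join(lines)

print(render_form())
```

The shape is the point: a memory-jogging area, one ACTION line, one IMPACT line, and no rating scale anywhere.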

 

Implementation

Most respondents are used to a much simpler, faster way of giving feedback. It is usually a few closed-ended questions to be rated on a Likert scale.

The Action-Impact format of feedback demands more attention and effort. Its open-ended format forces respondents to recall specific events from the workshop. Also, writing about the impact on oneself may be uncomfortable for some (compared to giving a rating against a parameter).

So it may be a good idea to help things along with a short guidance talk before respondents fill up the form.

It could be done in this way:
1. Explaining why the action-impact format is useful to the trainer.
2. Acknowledging that this format requires more effort from respondents.
3. Illustrating some valid and invalid examples of “action” and “impact”. For instance, “You spent a long time on individual questions; I got bored” names an action and its impact, while “Good delivery” names neither.

 

Benefits to client organisation

While there are obvious benefits to the trainer, client organisations can also gain from feedback in this format.

When data from several programs (across trainers) is collated over time, a profile of an “ideal” trainer emerges, i.e., ideal in the organisation’s context. This profile is in very concrete terms… a list of actions which learners find most useful. See the chart below for an example.

The best thing about this summary is the directness with which it speaks to us. Moreover, the Pareto format picks up the really important stuff, things that most participants felt like saying.
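
To make the collation concrete, here is a minimal Python sketch. It assumes each comment has already been transcribed into a short, normalised action phrase; the phrases below are invented examples, not real data:

```python
from collections import Counter

# Each entry is one respondent's comment, already normalised into a short
# action phrase during transcription. These phrases are invented examples.
feedback = [
    "related concepts to our own projects",
    "answered individual questions at length",
    "related concepts to our own projects",
    "used humour to keep energy up",
    "related concepts to our own projects",
    "answered individual questions at length",
]

counts = Counter(feedback)
total = sum(counts.values())

# Pareto view: actions ranked by number of mentions, with a running
# cumulative share of all mentions.
cumulative = 0
for action, mentions in counts.most_common():
    cumulative += mentions
    print(f"{action:<45} {mentions:>3}  {cumulative / total:5.0%}")
```

Sorting by frequency and tracking the cumulative share is all the Pareto view needs; the top few rows become the “ideal trainer” profile.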

This profile has multiple uses…
1. To assess/select prospective trainers
2. To brief a new trainer BEFORE the program. Trainers appreciate insights about learners, so that they’re not flying blind. They can deliberately adopt some actions from this list.
3. As a practical guide of dos and don’ts, when developing trainers internally.

In contrast, typical Likert-scale feedback forms, when collated, don’t give us much. Here is what a summary would look like.

1. It merely reveals how past trainers fared on certain broad parameters. The parameters are lag measures, hence not very actionable.
2. It offers no clue about which parameters learners value most, because respondents are forced to rate EVERY parameter on the form.

 

Trusting respondents

There is a common assumption that learners dislike giving feedback. Thus, the design of feedback forms emphasises quickness and minimal effort. Nothing ventured, nothing gained.

In contrast, the action-impact format is more ambitious. It seeks feedback quality over quantity. Its open-ended nature signals dialogue, instead of judgement by rating. It trusts respondents’ ability to say what is important to them.

If someone makes the effort to fill up such a feedback form, I would have much respect for it. Such a form, I would read slowly.

Photo credit: Helloquence on Unsplash