
Rethinking the Feedback Form

Are there ways to get more out of feedback forms? Here are a few actionable ideas.


Post-training feedback forms ask respondents to rate trainers on things which are ambiguous or hard for them to assess. Such feedback is of limited use to both the client organisation and the trainer. Can we re-design the feedback form, using first principles of feedback? What would it look like? And what difference would it make?

It is common for my clients to share with me the post-training feedback that they’ve collected from learners. My eyes usually zero in on the “Instructor Rating” section. I read at a brisk pace, pausing only if I encounter an extreme rating. And when that happens, I look for clues in another section of the form, typically titled “Any other feedback for the trainer”.

It takes me about 2 minutes to breeze through about 25 forms. That’s an average of 5 seconds per form. Five seconds! And I confess, I am rather sensitive to the opinions of my program participants. So why do I treat feedback forms with such nonchalance?

For an answer, let’s take a closer look at a fragment of one such feedback form.

This one asks the respondent to rate the trainer on 8 parameters, using a Likert scale.

I have a problem with the first parameter, Style and delivery. Each respondent can interpret this term differently. Style can be about having a friendly demeanour. Or about training method used (interactive discussion versus activity-led). It can also be about fluent speech, use of vernacular to enhance understanding or a generous sprinkling of humour. If a respondent awards a high rating, it doesn’t tell me which of these many aspects of style/delivery was appreciated. Worse, a low rating doesn’t tell me which of these I should improve.

Look at the next 3 parameters: “Communication”, “responsiveness to group” and “produces a good learning climate”. They are all similarly ambiguous. And the information gleaned from the ratings is not actionable. No wonder these ratings don’t get much attention from me.

What about parameters 5 to 7? Ambiguity is not an issue with “knowledge of subject”, “conceptual clarity” and “preparation”. Note that these aren’t observable “actions” and need to be inferred from what is observable. If I talk confidently and fluently, what does that indicate to a respondent? Is it proof of great knowledge, clarity or preparation? Or is it glib talk which hides insufficient knowledge and weak concepts? So here, the problem is different… it is difficult for the respondent to find out the truth. Hence, the ratings awarded by respondents are questionable. It may be unwise for a trainer to take this rating too seriously.

To be fair, such feedback does have some merits.
1. The choice of parameters printed in the feedback form reveals what is valued in a trainer. That is a basis for ongoing improvement efforts by a trainer.

2. Respondents’ ratings reveal how happy they are with the trainer. So extreme ratings can be used as a basis for action by the client organisation (retain or replace the trainer). But in the vast majority of cases, ratings aren’t extreme. That takes away the possibility of sharply focussed action on any one parameter.

If we want to design a feedback form which delivers value, it must…
– Remove Ambiguity
– Improve Credibility
– Enable focussed action

What does useful feedback look like?

During a coffee break in one of my workshops, a participant, Shweta, came up to me and said, “You’re spending too much time answering individual participants’ questions, most of which are elementary. I’m bored and I think it’s a waste of my time”.

I find this feedback really useful.

In particular,
– It is unambiguous. I know exactly what she’s talking about.
– It is credible. I have no reason to doubt what she says, because it doesn’t involve any guesswork. It’s coming straight from her personal experience.
– It is actionable. If I want, I can limit the amount of time I spend answering questions from individual learners. Of course, I will need to weigh competing needs of learners. Some need clarity and would want me to dwell on the topic further; others already have clarity and would rather have me move on.

Shweta’s feedback fits our 3 criteria for valuable feedback. We’re getting somewhere!

If we inspect her words carefully, we can discern an ACTION and its IMPACT. And little else. She hasn’t rated me on a numerical scale. She hasn’t judged me. She has simply commented on an action I took and its impact on her. And in her pithy statement lies the value of her words.

Constrained by the typical feedback form, she could have rated me LOW on Responsiveness to group or Produces a good learning climate. And I would be left scratching my head, wondering, “Where did I go wrong? Did Shweta want me to go faster, go slower, infer that she is bored, allow longer coffee breaks or do something to relieve post-lunch drowsiness? Which one of these?”

 

Rethinking the feedback form

How would we go about eliciting such action-oriented feedback from learners? Here’s my version of such a feedback form.

It preserves the ACTION and IMPACT format. And I’ve put in 5 broad areas to jog respondents’ memory a bit.

 

Implementation

Most respondents are used to a much simpler, faster way of giving feedback. It is usually a few closed-ended questions to be rated on a Likert scale.

The Action-Impact format of feedback demands more attention and effort. Its open-ended format forces respondents to recall specific events during the workshop. Also, writing about the impact on oneself may be uncomfortable for some (as compared to giving a rating against a parameter).

So, it may be a good idea to help things along, with a short guidance talk before respondents fill up the form.

It could be done in this way:
1. Explaining why the action-impact format is useful to the trainer.
2. Acknowledging that this requires more effort.
3. Illustrating some valid and invalid examples of “action” and “impact”.

 

Benefits to client organisation

While there are obvious benefits to the trainer, even client organisations can gain from feedback which is in this format.

When data from several programs (across trainers) is collated over time, a profile of an “ideal” trainer emerges, i.e., ideal in the organisation’s context. This profile is in very concrete terms… a list of actions which learners find most useful. See the chart below for an example.

The best thing about this summary is the directness with which it speaks to us. Moreover, the Pareto format picks up the really important stuff, things that most participants felt like saying.
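The collation step itself is straightforward once free-text responses have been normalised into comparable action phrases. A minimal Python sketch (the action phrases below are invented for illustration; real responses would first need manual or semi-automatic grouping):

```python
from collections import Counter

# Hypothetical sample: "action" phrases extracted from Action-Impact
# feedback forms collected across several programs and trainers.
actions = [
    "used real-life examples",
    "answered individual questions at length",
    "used real-life examples",
    "summarised key points after each module",
    "used real-life examples",
    "answered individual questions at length",
    "summarised key points after each module",
    "used real-life examples",
]

# Count how often each action was mentioned, then rank by frequency.
# The head of this list is the Pareto head: the actions most
# participants felt like commenting on.
pareto = Counter(actions).most_common()

for action, count in pareto:
    print(f"{count:2d}  {action}")
```

Sorting by mention count is what gives the summary its Pareto character: a handful of actions usually accounts for most of the comments, and those form the profile of the “ideal” trainer.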

This profile has multiple uses…
1. To assess/select prospective trainers
2. To brief a new trainer BEFORE the program. Trainers appreciate insights about learners, so that they’re not flying blind. They can use this list to adopt some actions as a matter of choice.
3. As a practical guide of dos and don’ts, when developing trainers internally.

In contrast, typical Likert-scale feedback forms, when collated, don’t give us much. Here is what a summary would look like.

1. This merely reveals how past trainers fared on certain broad parameters. The parameters are lag measures, hence not very actionable.
2. They offer no clue about which parameters learners value more, because respondents are forced to rate EVERY parameter on the form.

 

Trusting respondents

There is a common assumption that learners dislike giving feedback. Thus, design of feedback forms emphasizes quickness and minimal effort. Nothing ventured, nothing gained.

In contrast, the action-impact format is more ambitious. It seeks feedback quality over quantity. Its open-ended nature signals dialogue, instead of judgement by rating. It trusts respondents’ ability to say what is important to them.

If someone takes the effort to fill up such a feedback form, I would have much respect for it. Such a feedback form, I would read slowly.

Photo credit:  Helloquence on Unsplash

Taming role-plays

Role plays can be effective teaching tools, but only when you can tame their unpredictability somewhat.


The role play that never was!

It happened during a Negotiation Skills workshop I was conducting 10 years ago. Two participants were supposed to role-play negotiators, representing two different business firms. Their instructions were: Get the best deal for your organisation, and take your own sweet time, no hurry!

One could hear the chairs creaking and throats being cleared, as the audience settled in comfortably for what could be a longish demonstration of probing, give-and-take, requests for price revisions and so on. That role play lasted all of 30 seconds! One negotiator had quickly capitulated and given away whatever the other wanted. This performance caused much laughter in the classroom, but nobody learnt anything from it. It was a waste of time.

It didn’t have to be this way. As I later learnt, there are some golden rules to ensure that role plays deliver value. Here are 3 role plays I was witness to, and the evergreen rules they illustrate.

The negotiation over watermelons

You may recall this oft quoted negotiation scenario. A scientist wants a rare variety of watermelons because the melon seeds have a chemical of commercial importance. Unknown to him, another scientist wants the melons because melon rinds are a source of anti-oxidants. Both come across a stash of the rare melons and argue over who should get it, each attempting to get as much as he can. They discover during the conversation, that one needs the rind and the other needs the seeds. The dispute is amicably resolved.

During my B-school days, this scenario was enacted as a role play in the classroom. It demonstrated to us the concept of a win-win outcome quite clearly. But when the instructor started talking about application of the concept in real life, there was scepticism all around. Students attacked the situation depicted in the role play as too simplistic. The refrain was “This does not happen in real life”.

Contrast this with another experience I had recently. I wanted to demonstrate the process and benefits of Executive Coaching. I invited participation from the audience, someone with a dilemma which needed resolution. One of them volunteered. She spoke briefly about her struggles with her teenage son. The coaching role-play began. Thirty minutes later, the dilemma stood resolved. A roomful of managers were suddenly interested in how picking up coaching as a skill could help them professionally.

In this story, the fact that a real-life issue was resolved, and that it wasn’t ‘staged’, added to the credibility of the demonstration. What is more, when role plays involve real-life scenarios, role players take the role play more seriously. Also, learners don’t need to be sceptical about whether the concept learnt can be applied to the realities of their lives and workplaces.

So, the first golden rule of role plays is: KEEP THE SITUATION REALISTIC.

When things got out of hand

This one is from my personal archive of mistakes! I had set up a role play where a manager deals with a team member who hasn’t been honouring his promises. During the role play, as the manager started investigating, the team member got increasingly shifty. He invented stories, denied having made promises and attributed some of his actions directly to instructions straight from the CEO’s office! Blindsided by the fusillade of ‘invented’ facts, the hapless ‘manager’ threw his hands up, quit the role play and accused the other role player of not playing by the rules.

What happened here was this: I had neglected to describe the roles in adequate detail. In such situations, role players feel that it is okay to improvise and fabricate. Soon it becomes a competition, where each side invents increasingly bizarre stuff to outwit the other. While this serves as a good test of innovativeness, that usually isn’t the point of the role play after all.

How must one describe the roles? One way to do it is to write a detailed ‘back-story’. It contains the recent past, habits, motivations and attitude of the character. The more this role is fleshed-out, the less a role player needs to invent. It may even have a list of prohibited actions, which keeps things from getting out of hand. A quick word with the role player to make sure she understands the role, constraints and the objective of the role play also helps.

So, our second golden rule is: DEFINE THE ROLE IN ADEQUATE DETAIL

The 20-minute consulting assignment

In the third story, I happened to be a role player. We were studying ‘peer consulting’: a way of seeking our peers’ help for business challenges. Our instructor, Andreas, set up our roles… one solution-seeker and 5 consultants. He explained to us that it was a test of how succinctly questions were asked and answers were given; and even showed us a little silver bell he would ring if we got too verbose. The peer consulting process had 4 steps: Solution-seeker describes a real-life problem, consultants ask clarificatory questions, consultants discuss among themselves and finally present the solution to the seeker. All this had to be achieved in 20 minutes. I remember being very surprised when we finished the role play, with a credible solution to the seeker’s problem, in just 22 minutes. We were a tad behind schedule, but far ahead of my own expectations of how long such an exercise would take. The exercise brought home to me the power of a structured conversation.

The secret sauce here (and our third golden rule) is FOCUS. Andreas had managed to make us focus our energies on a single goal: concise articulation. Without it, the 5 ‘consultants’ could have individually chosen to focus on whatever they thought important… being thorough by collecting more data, or offering the solution-seeker more choice by finding multiple solutions or improving communication clarity by quoting several illustrative examples. If that had happened, the learning from the role play would have been diffuse and uncertain.

While real-life performance is always about putting multiple abilities together, during learning it makes sense to acquire those abilities one by one. How many different things can learners focus on at once?

How do we achieve focus? We must front-load: tell everyone the objective (or skill to be practised) before the role play starts.

In conclusion

Role plays are a powerful learning tool in a training program. The unpredictability of how role players will interact is a reality. That makes role plays more interesting for the audience and also gives role players the opportunity to innovate and display different shades of techniques. But it can also hurt the learning objective, as we saw in some stories above.

The 3 rules help a facilitator manage the risk.

Realistic role play situations reduce the risk of disbelief. Detailed role descriptions & focussed learning objectives help role players enact their roles fruitfully.

Photo credit: Steven Libralon via Unsplash

To decode high performance, watch closely

My admittedly outlandish take on how practising managers can understand what makes high performers tick.


What does high-performance look like?

For someone who is already a high performer, the question is largely academic.

But what about those who aren’t at the top of the heap? The majority, I mean. For them, the answer is critical. Without it, how do they “up” their game? Some will use their imagination to decide what to do differently (work longer hours, drive their teams harder, put results over process, put process over results …). Others will stay frozen in current habits, thinking that high performance is the preserve of the naturally gifted. Both sound sub-optimal to me. Depressing even.

In an organisation, it is very hard to get an answer to this question. Say you wanted to know what a high-performing team leader does differently. Will you…
– Look at her achievements? That will tell you what “outcome” a high performer produces, not what a high performer does. It generates awe, not understanding.
– Ask her directly? She is likely to shrug and say “It’s hard for me to explain. I just go out there and do what I do”. Dead end, again.
– Ask to shadow her, so you can “see” how it’s done? Possible, if she doesn’t mind revealing her secret sauce. But it will involve long periods of waiting for “high performance” to occur. Very time consuming and irritating for both parties involved, as I discovered during some shadowing assignments.
– Study the competency model for this role? Could be a starting point, but chances are that your eyes will glaze over, after a few minutes. The long list of competencies can daunt anyone.

Could the answer be outside the organisation? In a forum supportive of your efforts to find the answer. Where high performers are doing what they do, and you get ringside seats to the action. Where you can press the pause button when you wish, and ask them “Wait Ms. High Performer… why did you do it that way?” Think intimate theatre taken to an extreme. Think after-match-press-conferences.

A strange possibility

Such a forum exists, but has been hiding in plain sight. It is the humble and much reviled Outbound Experiential program.

In the usual program, you are the “performer” and your actions are being “watched”. A facilitator does the “watching” initially and later helps you reflect upon (“watch”) whatever you did. An excellent tool for self-awareness. Only, the limitation is that you get to see your “current” level of performance.

To see “high” performance, we will need to tweak the program. What if you become the “watcher”; while the facilitator and his team of support staff turn into “performers”? You see, this bunch of people often functions like a well-oiled machine. I’ve seen it happen so many times and I found it very instructive to watch them.

I’m going to take names now. Shantanu Pandit is a facilitator I admire a lot. If he is conducting an outbound program, he is usually the public face of the team and interacts with the learners. His support team works unobtrusively in the background to make all the logistics possible. In fact, the smoother the program (from logistical point of view), the less the learners notice them.

Now, I’m proposing that a group of learners (that’s you!) watch Shantanu and his team, as they conduct a weekend outbound program. Only this time, not just what they usually show you, but also everything that goes on “behind the scenes”. Just think of the possibilities…

Potentially…

You would see Shantanu in action as a leader. There would be complexities of varying skill & will of different team members. He would face the usual leadership dilemmas: “getting work done” versus “developing my team”, “democratically inviting opinions” versus “deciding what’s right”. And he would resolve those dilemmas right there, as you watch.

You would eavesdrop on late night “review of the day” sessions where he and his team share feedback, review how the day went and plan for the next day.

You would see the support team plan their work meticulously, and then re-plan it in a hurry, whenever Shantanu changed his mind about an activity he wants to run. You see, outbound programs tend to be quite free-flowing; facilitators change activities at the drop of a hat. They change things around to benefit program participants, but each such change is a crisis situation for the support team. So, you would be witness to meticulous planning, contingency plans, followership and being open to sudden changes.

About that Pause button… you would not only see a “live performance”, you would get a chance to interview the performers too. You could ask…
“Shantanu, what made you do this?”
“Would you have acted differently if…”
“Were you always this way? Or is this style something you’ve developed over time?”
“What went through your mind when…”
“Support Team, how do you prepare for sudden changes? May we see your planning checklist?”

Of course, you would have a chance to reflect upon yourselves too. If you attend as an intact team, you would ask yourselves:
“What can our team realistically borrow from Shantanu’s team? What can we change about our way of functioning?”
“How is it that they can discuss sensitive issues threadbare, but we tend to avoid them?”
“After seeing this example, can we reset the “terms of engagement” between our leader and our team?”

How does this fare as a learning tool?

Where does this tweak to the traditional outbound program fit in? As far as the larger learning objective goes, this only solves the “knowledge” problem. You come to “know” what high performance looks like. It doesn’t solve the “skill” problem (it doesn’t ensure you can “do” it too, back at the office).

Within that limited objective, I see some advantages of this approach:
It is up close and personal. It allows you to see the nuts and bolts of the high performer’s methods, warts and all. It makes high performance tangible… turns it from an amorphous concept into concrete thoughts and actions. In doing so, it makes it easier to grasp.

As the performer is accessible, you can ask questions, and get to the “why” behind his actions. It’s like a case study discussion, made richer by the presence of the protagonist in the classroom.

It is very engaging. Like a virtual-reality experience where there’s a story going on and you can actively interact with the performers. Only, there’s nothing virtual about it. You’re watching a real person / team doing their real jobs.

Does this really exist?

This mode of learning by watching closely has always existed. It is prevalent where the work is either very visible or very codified/standardised. Examples are artisanal work like woodcarving, factory work like welding and various performing arts. In each of these, learning “at the feet of the master” is the norm. Psychologists even have a term for it: Observational Learning.

But look at service jobs or supervisory jobs, where work involves much more behind-the-scenes thinking, decision making and use of discretion. Observational learning is difficult to arrange in these areas. Shadowing comes close, but suffers from practical issues of scheduling, loss of confidentiality and disturbing the “shadowed” person at work. As a workaround, organisations have sought to study traits of high performers, codify them into competencies and then train others in these competencies. But such training, done “out of context” loses effectiveness.

I am arguing for a return to a more direct way of learning, even for complex jobs. Where one learns by watching a master. And where that master is eager, nay, happy to be analysed by the learner.

Who is game for this?

Photo Credit: Maarten Van den Heuvel on Unsplash

Calibrating expectations about training outcomes

L&D managers often have unrealistic expectations from training programs. This post busts some common myths.


If you have been in L&D, or a buyer of developmental programs, for some time, these will sound familiar to you.

• A very inspirational talk by a movie star/cricketer fails to galvanize the troops at your organisation.
• A 2-day workshop on presentation skills produces slicker presentation slides. That falls short of the engaging, audience-centric presentations that were expected.
• The weekend outbound program that was billed as “development with fun” causes participants to quip “Next time, can we just have a picnic please”?
• A big-ticket leadership development program was held, with a workshop plus 3 months of coaching sessions. It produced 1 transformed manager. 1 out of 25.

I’m not talking here about ineffective training interventions. On the contrary, my claim is this: even if the above interventions were to be conducted well, they still wouldn’t produce the outcomes expected.

The problem is in the expectations. Based on hearsay or overzealous marketing by training firms, some myths get associated with training programs. They come in the way of realistic assessment of what a program can deliver.

Here are 4 common intervention formats, the myths associated with them and what can be realistically expected out of them.

The inspirational talk

Myth: The talk will produce motivation, which will stay on for at least a few months. And it may even help us meet our quarterly sales target.

Fact: Motivation (when it’s not intrinsic) needs a sustained stoking of the fire. The inspirational talk is like a gust of wind… its effect on motivation is fleeting.

On the plus side, it’s a good break from monotony, and people will thank you for it, saying “Let’s have more such talks”. Every once in a while, such a talk will seed a new idea in someone’s mind; they might take the idea to fruition. But of course, that cannot be predicted or assured, so it is a bit of a long shot.
A practical aspect … film stars, cricketers and divas will strain your budget; unsung heroes won’t, so give the latter a chance to speak. There is no dearth of start-up founders and social entrepreneurs who have achieved great things in their communities and will happily share their journeys. Moreover, your audience is more likely to connect with them, and think “If she did it, so can I”. 

The 2-day classroom workshop

Myth: Learners will emerge from the workshop, ready with new skills, which they can start applying right away.

Fact: For anything other than simple repetitive tasks, upskilling takes more than 2 days of immersion. Think of how long it took you to learn to drive, swim, play the guitar, speak a foreign language, handle a conflict between your kids, manage anger, deliver a smooth presentation … all human skills, fairly complex in terms of the number of micro-abilities needed. Each is possible to master, but it takes time. Also, it doesn’t get done in one long session; rather you need to practise the skill intermittently over a few weeks or months. I’ve written about this in more detail in another blog post.

What a short workshop can do is allow participants to collect knowledge about one topic, say financial terminology, roles of a team leader, or quality management principles. This knowledge-building exercise is best considered a prerequisite (necessary but not sufficient) to building skills through sustained, intermittent and supported practice later on.

The weekend outbound program

Myth: It will create an attitudinal shift in participants, and when they return to work, they will be collaborative, sensitive, result-oriented, focussed, quality conscious and will take ownership.

Fact: An outbound program is a pause for reflection. Aided by a skilled facilitator, people do reflect. Though reflection can result in a change of attitude, it isn’t guaranteed to. And even when it does, there is a second giant leap to be made: from new attitude to new behaviour. The odds are against all this happening as a result of a weekend in the woods.

But there are some very good uses of an outbound program.
1. Use it to break the ice between people unfamiliar to each other… after a merger/acquisition or as a new project team’s first interaction. In this avatar, all other pretence of learning management must be dropped, and the focus of all activities must be to get to know each other. That is the first step towards co-operative behaviour.

2. If you must have some learning, keep the topic narrow in scope (say “ability to plan better”), and use the program to:
• develop an awareness of the problem (“We don’t plan well”) 
• discuss the issue threadbare (“What prevents us from doing so? Why should we?”)
• get people to commit to change (“My department commits to doing…”)
• get them to follow through on that commitment, at least in the simulated context of the outbound program (This still doesn’t guarantee that people will keep commitments in the workplace context. But having demonstrated to themselves and others that they are capable of doing it, the chances of carryover to the workplace are a tad higher)

Keeping the topic narrow will allow enough time for multiple cycles of “experience-reflect-understand-experiment”. That is how experiential programs (and their sub-category, outbound programs) are supposed to work in their purest form. A big list of topics spreads things too thinly, robbing the program of effectiveness.

3. Use the opportunity to “show” people an example of excellence. An intact team in an organisation could watch, from close quarters, another “hired” team perform extra-ordinarily. Leaders could watch another leader in action. Operational personnel could watch another ops-team, and so on.
The “watchers” could interact closely with the “performers”, learn what made the performance possible, and discuss possibilities of change within their own team.
Think of it as a longer and more “up-close” version of the inspirational talk. Or as an interactive theatre experience, where the performers have a script to follow, but the audience is pulled into the performance.


The workshop + coaching “leadership” program

Myth: We will see a visible difference in participants’ leadership behaviours within 3 months because we’ve provisioned for conceptual input as well as coaching support.

Fact: An umbrella skill like ‘leadership’ is composed of a host of sub-skills. Thinking ahead, thinking differently, knowing what makes people tick, managing contradictions, knowing and managing personal energy, communicating powerfully and so on. Just because we can fit all this into a nifty word, doesn’t mean that the learning can also be compressed. It takes a very long time to cover so much ground. To make matters more difficult, the leadership recipe also has ingredients like personality and experience, which a training intervention cannot provide.

A fair use for such a 3-month program is to develop a SINGLE skill. Just one. That provides focus, a key ingredient of up-skilling. And the 3-month duration is a chance for sustained, repetitive practice.

To develop broad abilities like leadership, we need to pick up the sub-skills one (or few) at a time. Pick one, then make sure learners are actively practising that sub-skill, through periodic engagement. Also, we continue the focus on that sub-skill for several weeks. That done, we move to the next sub-skill and repeat. And so on. The watchword is patience. 

To summarise…

One final myth

One more myth worth busting is: half the training effort will give us half the benefit. Driven by this belief, budget-constrained organisations spend much effort (and money) on half-hearted attempts at change or development. But training does not work that way. Rather, there seems to be a threshold level of effort, below which meagre efforts aren’t enough to get over the inertia of habits. The threshold isn’t an absolute number of hours or days; rather it varies by the desired outcome. An hour of discussion may suffice for new ideas whereas it may be several weeks or months before an upgraded skill becomes visible.


Sure enough, going overboard with training will detract from the participants’ regular role, and then the price to be paid for change becomes too high.

Like with so many other things, finding the middle ground is of essence.

Photo credit: Jamie Street on Unsplash

A rant against lookalike training programs

Similar-looking training programs are as harmful as giving the same medicine for different illnesses.

As a professional trainer, I hear this a lot: “Could you ensure that the training program is engaging, activity-based, and includes an action-planning exercise at the end?”


I think it is a well-intentioned request. And granted, activities and action-planning have their advantages.
But when I sit down to design the structure of the training intervention, a different question occupies my mind: What is the purpose of training and what kind of program will best meet that purpose? To me, that is the Big Question in training. Often, the answer may not include activities. The end-of-program action planning may not fit in either.


But we’re jumping the gun here. Let us start from the beginning and take a first-principles look at training. We start with purpose. Then we inspect the question: if the purpose of training changes, does the structure of the program change materially?

Training purpose

At first glance, every training intervention has the same purpose: to improve learners’ ability. Under closer scrutiny, this purpose can be one of 3 types.


The first type of purpose is giving NEW KNOWLEDGE to learners. Consider programs that inform people about product specifications, operating procedures or financial terminology. In these examples, the moment a learner knows something new, the job is done… the learner has improved. This is arguably the simplest kind of training program, and the easiest to get right. Academic programs cater to this objective.


Another training purpose is teaching NEW SKILLS. This is a different animal, and a tougher one to tame. Examples of skills relevant to organisations are coaching, selling, teaching, project planning and grievance handling. To be sure, all of them have a knowing component, but what really counts is the ability to implement that knowledge. The learner can be said to have improved only if she can do things differently; mere knowing isn’t enough. So a program built to teach skills needs to go further than an academic program.


A third purpose of training interventions is generating NEW BELIEFS. Examples of beliefs are “Cooperation is good”, “When I am wrong, it is OK to admit it” or “I am capable of more”. This situation is different from the first two in an important way. While learners may arrive with little knowledge or skill, you can’t have a learner with no existing beliefs! New beliefs don’t fill a vacuum… they must unseat old beliefs. And that makes this kind of intervention the most difficult one, and its design unique.


It is also common to have programs where the purpose is a combination of the above three. Suppose we want managers to give constructive feedback to team members.
This needs knowledge of appropriate ways to give feedback and common pitfalls to avoid. The skill component also comes in… being able to handle a real feedback conversation, which can get complicated and unpredictable. One may need to deal with strong emotions or rescue the conversation from digression: these are situations where knowing and doing are different abilities. Finally, beliefs play a role too. The learner must believe that feedback works in practice if done well. This needs to replace the old belief “Feedback is useless because people don’t listen”. Unless that happens, the newfound knowledge & skill will simply lie in the manager’s toolkit, unused.


Moving on to design, we’ll inspect all 4 varieties of training interventions, starting with the 3 pure-play purposes.

Designing for new knowledge

If new knowledge is the main objective, the intervention needs 2 stages.


The first stage ensures TRANSFER OF INFORMATION. The traditional way of doing this has been to engage an expert to meet the learners and talk about the new knowledge; that works for small groups. But if the learner population is huge, spread out globally and speaks different languages, multi-lingual and online modes are used.
Since knowledge suffers from attenuation (forgetting), a second stage is needed. It enables REVISITING of information, which cements recently acquired knowledge. Without this, only the most actively used bits of knowledge will be retained; the rest will be forgotten. Revisiting can happen in multiple ways. The expert can return for a summary session. The learners can be tested for recall after they have had an opportunity to review recently acquired knowledge. Or they can be called upon to teach the knowledge to others, thus forcing a revision as they prepare themselves.


Design for new skills

The design for an up-skilling program also has 2 broad stages. But the stages look very different from those in the knowledge-focussed program.

The first stage is an opportunity to know the technique by WATCHING A DEMONSTRATION. It is an opportunity to see how the finished product looks. It also sets learners’ expectations and gives them a goal to aspire to. Faculty-led role plays, either live or video-recorded, are a time-tested method for this.


In the second stage, we help the learner move from knowing to doing. The learner must go through multiple cycles of PRACTISING & RECEIVING FEEDBACK. The cycles must challenge the learner to practise the skill in increasingly difficult scenarios. Feedback from the expert ensures course correction at the earliest sign of deviation… without feedback, suboptimal versions of the skill can get ingrained in the learner. This stage is not a single event but a series of events, which may be held intermittently, say once a week. The idea is to provide frequent and aided practice. Learner-led role plays are an appropriate method for this stage.


For hard-to-master skills, the 2 stages can be interleaved. Initially, the skill is demonstrated and practised in easy situations. This cycle is then repeated for increasingly challenging contexts. Think of it as mastering the alphabet first, before moving on to words, then sentences and so on.

Design for new beliefs

Recall that new beliefs need to displace existing ones. Accordingly, the first step in such an intervention is to arrange a DISRUPTIVE EXPERIENCE. This is an opportunity to discard existing beliefs which may be dysfunctional or unhealthy. Learners are encouraged to inspect old beliefs in the light of new information or experience and decide whether a change is due. Consider the example of a program on communication skills. If learners believe that they are already excellent communicators, they aren’t likely to invest much attention during the program. But what if their communication ability were put to a challenging test, which returned mixed results? That is likely to make the learner sit up and say “Maybe I have much to learn”. This ‘test’, then, becomes the disruptive experience.
Once this happens, one must INTRODUCE NEW BELIEFS for the learners’ consideration. These are beliefs that are healthier for the learner or conducive to better relationships or work outcomes. The timing of this step is crucial… do it too soon, and it won’t work. You cannot refill a cup until it is emptied first. In our ongoing example, a new belief could be “With some effort, dramatic improvements in communication ability are possible”.


In a pureplay “new beliefs” program, the first two stages are commonly arranged as intense outbound programs, meetings with inspiring figures or meditation/reflection retreats.


Newly acquired beliefs are like tender saplings… they need nurturing in order to take root firmly. So, a third stage is necessary. This stage provides CONFIRMATION of the new beliefs. The learner is encouraged to assess whether new data or recent experiences support the new beliefs. A positive assessment further deepens or confirms the new beliefs. Observation logs, reflection diaries and review discussions aided by a facilitator are good ways of doing this.

Design of a multi-purpose intervention

We’ve seen earlier that some interventions involve all 3 purposes. How do we combine the design elements of various purposes in such a case? Well, the chronology needs to be:
1. Instil NEW BELIEF first. That creates fertile ground for subsequent knowledge/skills input.
2. Then, provide NEW KNOWLEDGE & SKILL.


In practice, we don’t wait for new beliefs to take root before starting with new knowledge and skills. Rather, an interleaving of both strands is necessary. Conducive beliefs pave the way for intake of knowledge; and application of that knowledge (skill use) provides the experience needed to confirm the new beliefs. A chicken and egg situation, and fortunately, one that can be managed.


It is useful to think of the structure in terms of a LARGE HEAD and a LONG TAIL.
The LARGE HEAD
Several things happen in this phase. Learners inspect beliefs that may be dysfunctional or may stop them from imbibing new things. They also come to know the rudiments of the skill involved. This phase can be a single workshop and may vary in duration from a day to several days, depending on how broad the topic is. A narrow topic like Delegation may require just half a day, whereas Negotiation, an umbrella term for several skills, requires 2 days or more.


The LONG TAIL
The long tail is so named because it stretches over several weeks. This is where the skill is repeatedly practised under the guidance of the expert. It is also an opportunity to reflect on ongoing experiences and decide whether the new belief set is borne out in real life. Revisiting of knowledge occurs automatically if the skill practice is regular.

Isn’t there a short-cut?

This model of an effective intervention stretches over several weeks. It needs commitment to follow the entire process. But resistance to this format of training comes from an unlikely quarter… the supervisors of learners, who demanded the training to start with. They don’t like the loss of productivity while the learner is away attending a training session. So, the search for a short cut begins.


Typically, they settle for a watered-down version of the intervention. Sometimes, the long tail is folded up and packed into the large head; at other times, the tail is chopped off completely! The hope is: if we follow part of the process, we’ll get at least some results. But human learning is somewhat like leaping across a chasm; a half-hearted leap doesn’t help.

In conclusion

Every kind of training intervention can be made engaging. Training methods like activities, action planning or role-plays need to be thought of as building blocks, which may or may not be needed to deliver results. But intervention design must be determined primarily by the purpose of training.
Photo credit: Sharon Benton on Unsplash

Why most up-skilling programs fail. And what to do about it.

A practical primer on ensuring that your investment in up-skilling programs will pay you back.


Déjà vu.

We have been over this so many times. A well-defined skills training need appears. An appropriate trainer is found. The training program design looks good too, with a workshop for the group of learners, followed by multiple coaching sessions for individuals. The intervention passes off without incident. A few months later, an inspection of learners’ skills shows that not much has changed. The needle hasn’t moved.

Usual suspects

Disappointed L&D staff try to remedy the situation for the next batch of learners. They replace the trainer, but even with a new trainer, the entire cycle repeats itself. Other suspects like program duration (“maybe it was too short”) or learner selection (“maybe we chose the wrong participants”) are also explored and acted upon, but the result is still the same.

Action replay

Let’s take a closer look at what actually happens during and after a training program, a sort of action replay. Perhaps we can detect where things are going wrong. Here is the participants’ journey as they go through the workshop and coaching intervention.
Unless people have been coerced into training programs, they attend the initial workshops with enthusiasm. They hope that they will learn something relevant to their jobs.
As the workshop draws to a close, there is the customary action-planning exercise. According to it, they must intentionally practise new skills on the job during the next few weeks. So far so good.
The most diligent participants start off on their action plans immediately. They start trying out the new skills they’ve learnt. But doubts surface (“Am I doing this right?”). And predictably, they make mistakes, which is natural for anyone trying out new skills for the first time. The boss gets alarmed (“Wasn’t this training supposed to improve things?”). Faced with this barrage of difficulties, participants do what is most expedient: go back to doing things the old way. It’s the safe thing to do.
But remember, that was the diligent minority. What about the majority of participants? They get swamped by the usual whirlwind of work and end up not practising new skills at all. To compound matters, nobody notices this, because it is a continuation of how the participants were before the training. As time goes by, the workshop, its learnings and the action plan become distant memories.
Then comes the intermittent coaching support. In session after session, participants complain of not having found the time to practise. The coach provides some guidance and encouragement. But it is like holding a candle up in a storm. The inertia of old ways and the difficulties of using new skills are too overwhelming for the coach to make a dent.
The typical program design
is too feeble to change hardwired habits

The real culprit

From the story, it is clear that lack of implementation support is the culprit behind ineffective training programs.
The days and weeks immediately after the workshop are a critical period for the learner. Depending on the support received at this time, the learner will either bring new skills to her work or revert to the old ways.
The meagre coaching support we usually provide just isn’t enough.
What will help?

Hope

Let’s reverse engineer this problem. Let’s take a few instances where new skills do develop and identify why they do.
How do we learn to drive a car? It usually takes 4 steps:
Step 1: Drive on empty road. Frustration: The gear just won’t engage! Encouragement from driving instructor to the rescue.
Step 2: Next day, drive on road with light traffic. Embarrassment: Stalled in the middle of the road! Feedback and encouragement from driving instructor again helps.
Step 3: After a few days, drive on road with heavy traffic. First success: back home safe! Driving instructor didn’t have to lift a finger today.
Step 4: Finally, drive alone. Without driving instructor!
The magic ingredients seem to be:
• Frequent practice, gradually increasing in level of difficulty
• Help from a skilled person during the struggle
The same secret sauce can be detected in so many upskilling examples. Learning to cook. Learning our first language. Learning to play the guitar.
But there’s something else here, something subtle. We don’t succeed if the practising of the new skill is optional.
We don’t learn to drive or cook if there’s someone else to do it for us. We don’t learn a foreign language easily because we can get by with our first language. Kids who get carried around a lot don’t learn to walk quickly enough.
So, we must modify our formula to:
• Mandatory and frequent practice, gradually increasing in level of difficulty
• Help from a skilled person during the struggle
Here is a more business-like example, which illustrates the same principle.
In the year 2000, I was at Asian Paints during a company-wide roll-out of new ERP software. The pre-rollout training was done (“the workshop”). On D-day, the old software was switched off. People had no option but to use the new system for their daily work (“mandatory frequent practice”). Technical glitches and lack of clarity made the experience difficult for people (“struggle”). The IT department, having anticipated this, already had a help-desk (“skilled help”). A few weeks later, when the dust settled, we had better software, with up-skilled users. No doubt, the change was accompanied by some pain, struggle and effort to adjust, but that’s what it took.
Post workshop support should be
mandatory, frequent and with guided practice

Acid Test

As the final piece of this puzzle, let’s see how to use our secret sauce in the training context.
Here is a typical training need: “Managers must learn to conduct performance appraisal meetings, which are constructive and raise performance”.
The intervention could start with a workshop where learners “see” and “experience” what appraisal meetings should be like. This is ideally done for a group of learners. The emphasis is on conceptual clarity rather than practice.
Then the post-workshop phase starts, where assisted practice is emphasised.
Every individual learner attends pre-scheduled practice sessions. The initial ones have dummy appraisees, and they simulate easy situations. As time passes, more difficult situations are practised.
The pre-scheduling is key here… without it, participants are likely to get swept away by demands of usual work, and the practice may not happen at all.
Help from a skilled trainer is available throughout the practice sessions. The trainer’s job is to observe and provide feedback and encouragement. In addition, she records improvement in the participants’ appraisal skills and decides if the participant is action-ready.
Finally, the learner must face a real-life appraisee. If the organisational climate allows it, this meeting may also be video recorded, or be done in the presence of the trainer. That will provide one more round of practice.
If the above method is followed, a very high proportion of learners will be able to upskill.
Real-life change in skills or habits
necessarily involves growth pains.
Support to the learner
during the struggle is critical for change

Resolve

Making training effective is not rocket science. It is about implementing the “long tail” of compulsory, frequent, aided practice.
And that takes some resolve. The resolve to budget adequately. To get the buy-in of learners and their managers alike, so that they commit adequate time. The resolve to follow through.
With resolve, there is hope.

Photo credit: Ricky Kharawala on Unsplash