Blunt instrument or fine scalpel: Using 360 to evaluate training effectiveness

Work life may have changed but feedback is still important. Elva Ainsworth talks 360. 

Training, virtual or otherwise, takes time and money that usually does not directly connect to revenue or profit and, as such, will always be under scrutiny. Therefore, easy methods of proving the value of training are always of interest.

If you assume the purpose of training is to upgrade skills or knowledge in order to improve job effectiveness, and given that 360 degree feedback indicates effectiveness as perceived by a number of different key people, it might be assumed that 360 would be a good measure of success. However, perhaps unsurprisingly, it is not that simple.

One of the issues is that training usually has multiple, varied goals.

Just as school is not only about pupils learning their subject matter, training is often not just about skill improvement. Moreover, 360 degree feedback is not an accurate, objective measure of effectiveness; instead, it paints a picture of the quality of relationships with key co-workers and acts as a mechanism that allows reviewers to voice opinions.

A 360 is normally more about providing developmental insight than about measuring skill levels. However, these viewpoints can provide very useful data for evaluating the application of learning back in the workplace, so there is a place for 360 in training evaluation, especially when it has been designed with this purpose in mind.

In the main, 360 data is an assessment of an individual through the lenses of a number of different people and, as such, it is shaped significantly by those individual relationships rather than simply by how well the person does their job.

It represents culturally sensitive viewpoints, i.e. views based on the expectations and standards of the group, and it is affected by the safety and perceived confidentiality of the feedback process. It is not an objective measure. Because of this contextual nature, care must be taken in how 360 is used for evaluation purposes.

As an example, 360 ratings can often be seen to decrease in the second year of a programme – not necessarily because people are performing worse but because reviewers feel more comfortable giving honest feedback.

The other key factor is that a 360 can lead to more attention and awareness being given to a specific area, and the more closely an area is examined, the worse it can seem. As a result of this increased 'conscious incompetence', ratings may sometimes decrease before they can grow.

In addition, when one factor improves, it can lead to a deterioration in another: you may become more strategic, for example, but your operational efficiency may diminish as a consequence. It is good to remember that our performance is not as linear and easily 'improved' as we might imagine, and 360 will reflect the dynamic, complex and contextual nature of human performance.

Sometimes the true value of a programme lies in giving reviewers a 'voice' and allowing their views to be expressed and heard, so 360 ratings are far from being simply a measure of how good someone is.

Five tips on how to design your 360

Nevertheless, comparing 360 data before and after a training initiative can be extremely enlightening, both for the individual participant and for the trainer, but it is best to design your 360 with this in mind. Here are five tips on designing your 360 so that it provides useful data for evaluation purposes:

  1. Track progress. Set up your 360 specifically to track progress in the identified development areas. Ask for detailed observations on the specific targeted behaviours alongside ratings, use the same questions before and after, and filter the results so that only the same reviewers' data is compared (see the analysis sketch after this list).
  2. Ask for observed changes. Ask reviewers what changes they have noticed and ask them to rate perceived improvements. Also seek their views on why an area may not have improved and on the value of the training initiative.
  3. Get commitment to target ratings. Ask participants to commit to target ratings for the end of their programme. It is best to advise and coach on this point, as expectations may be unrealistic. Shifts of 0.5 (on a five-point rating scale) may appear small but can still be statistically significant.
  4. Ask about confidence and insight. The participants' perspective on their own confidence can be enlightening, as they can gain useful insights at the same time as realising that some aspects of their performance are not working. It is not always about improving the ratings.
  5. Manage expectations. Be realistic about how much ratings can be expected to improve and discuss the other processes at play, so that participants do not expect massive uplifts in ratings. Talking through the detail of the rating scale can highlight how hard it may be to achieve a higher 'score'.
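For readers who handle the data themselves, here is a minimal, illustrative sketch in Python of tips 1 and 3 (the column names, reviewer IDs and ratings are all hypothetical, not drawn from any particular 360 tool): it filters pre- and post-programme ratings down to the same reviewers, then runs a paired significance test on the matched data.

```python
# Sketch: compare pre- and post-programme 360 ratings from the SAME
# reviewers only, then test whether the shift is statistically significant.
# All data and column names below are hypothetical.
import pandas as pd
from scipy import stats

pre = pd.DataFrame({
    "reviewer": ["r1", "r2", "r3", "r4", "r5", "r6"],
    "rating":   [3.0, 2.5, 3.5, 3.0, 2.0, 3.0],   # five-point scale
})
post = pd.DataFrame({
    "reviewer": ["r1", "r2", "r3", "r5", "r6", "r7"],  # r4 left; r7 is new
    "rating":   [3.5, 3.5, 4.0, 2.0, 3.5, 4.0],
})

# Keep only reviewers who rated both before and after the programme (tip 1).
matched = pre.merge(post, on="reviewer", suffixes=("_pre", "_post"))

mean_shift = (matched["rating_post"] - matched["rating_pre"]).mean()
t_stat, p_value = stats.ttest_rel(matched["rating_post"], matched["rating_pre"])

print(f"Matched reviewers: {len(matched)}")
print(f"Mean shift: {mean_shift:+.2f} on a five-point scale")
print(f"Paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```

With these illustrative numbers the mean shift is +0.5 and the paired test gives p of roughly 0.03, which is the point of tip 3: a modest-looking shift can still be statistically meaningful. In a real programme the same comparison would normally be run per question or per competency, and matching by reviewer would need to respect whatever anonymity the 360 process promises.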

So, 360 degree feedback provides fine detail of personal evaluations and can also provide useful insight into perceived value and observed changes. However, tracking changes in 360 data does not give a clear, simple evaluation of the effectiveness of your training. Other methods will usually need to be added to gain a good understanding of the true value of your training.

Understanding the impact of training is not a simple issue and this, together with the fact that 360 degree data is not straightforward, means that it should be used with care. Others' views of an individual or of a programme form an important part of understanding the full impact of your training, but those views should themselves be seen in context.

360 can indeed be a fine scalpel for facilitating developmental insight and transformation, but it can also be a blunt instrument for training evaluation unless it is designed specifically for that purpose.

 

About the author

Elva Ainsworth is CEO at Talent Innovations

 
