Evidence-based training

Written by Dr Rob Yeung on 1 October 2014 in Features

How can we know that the techniques we use will genuinely make a difference? Dr Rob Yeung explores

I recently ran a series of workshops for senior managers within a large professional services firm. The head of L&D explained that many of these senior people were rocket scientists in terms of their understanding of finance and related topics such as economics. Their knowledge of their discipline was no longer in question. However, the key skill that would help them to succeed at the next stage of their careers was building relationships with clients and selling projects to them.

These managers already knew how to write and deliver presentations. But I was asked to help them with their confidence, storytelling and presence. The head of L&D said that while most of the group were competent presenters, few were outstanding or truly inspirational. In an increasingly competitive world, the firm needed its senior managers to be deeply impactful and memorable, not merely capable and professional.

One of the techniques I taught the managers over the course of the programme was designed to reduce any nervousness and boost confidence. I’m going to invite you for a moment to play an intellectual game. Go with it for now and I’ll explain its importance shortly.

I’m going to present you, the reader, with three possible techniques. Only one of them actually works. The other two are duds. Can you identify which I taught the group?

The first candidate technique is called acceptance. It stems from acceptance and commitment therapy, a relatively new school of psychological therapy developed in the 1990s. The thinking behind this technique is that we should embrace all of our private thoughts and feelings, both good and bad, without trying to rid ourselves of them. The underlying theory is that allowing ourselves to fully experience negative emotions such as anxiety before a big presentation may be more constructive than trying in vain to do away with such feelings.

On to the second candidate technique. Remember that only one of these three techniques actually works – and we’ll explore how we know it works soon. This second technique is called suppression.

The goal of suppression is to hide how we’re really feeling inside. It’s about walling off our feelings and telling ourselves that we’re okay. You’ve probably heard the idea that we should “fake it until we make it.” Suppression is not just about feigning confidence to the outside world, though – it’s about trying to fake it to ourselves too.

The third and final candidate technique is called reappraisal. The idea is to view a situation in a different way – to come up with a reason or narrative that allows us to see a stressful event or circumstances in a more helpful manner. Reappraisal is a mental trick in which we try to find a more positive explanation for whatever ordeal we might be facing.

I’ve asked many audiences at conferences and delegates at workshops which they believe to be the most effective technique. Roughly one in three people guesses that acceptance is the real deal. Maybe half of people feel that reappraisal should be the best technique. Far fewer people believe that suppression should be the way to go. And you would be right if you had guessed that reappraisal was the most effective technique.

But now we come to the important point: how do we know what works and what doesn’t?

Peer-reviewed evidence

We know which of the three techniques works because all three were tested in an experiment run by a team led by Stefan Hofmann, a professor of psychology at the Center for Anxiety and Related Disorders at Boston University. In a 2009 paper published in the academic journal Behaviour Research and Therapy, he asked 202 volunteers to give impromptu speeches in front of a video camera. Mere moments before each presentation, he taught one third of his participants the acceptance technique, one third the suppression technique and one third the reappraisal technique.

By asking the volunteers to complete a battery of psychological tests both before and after their presentations, Hofmann discovered that the participants who had learned the reappraisal technique felt the least anxious. So even though acceptance seems like a plausible technique, it turned out to be less effective than reappraisal.

Along similar lines, we know that certain techniques are proven to boost people’s impact when presenting. When participants are taught these techniques, they become noticeably more influential in the eyes of audiences – even when those audiences are kept in the dark as to what techniques those presenters might have learnt.

The larger point is that robust research evidence matters when it comes to learning and development. Interventions need to have been tested on dozens of people against a control group in what’s known as a randomised controlled trial. And then the results need to have been written up in a peer-reviewed research journal.
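
As an aside, it may help to see the kind of analysis that sits behind such a trial. The sketch below is purely illustrative – the numbers are simulated and the groups are invented, not taken from any published study. Participants are randomly assigned to an intervention group or a control group, an outcome is measured for everyone, and a statistical test checks whether the difference between the groups is bigger than chance alone would produce.

    # Purely illustrative: simulated data, not results from any real study.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=42)

    # Randomly assign 60 hypothetical participants to two groups of 30.
    # Outcome: self-rated anxiety after a presentation (lower is better).
    control = rng.normal(loc=6.0, scale=1.5, size=30)       # taught nothing
    intervention = rng.normal(loc=5.0, scale=1.5, size=30)  # taught a technique

    # An independent-samples t-test asks whether the difference between the
    # group means is larger than we would expect from chance alone.
    t_stat, p_value = stats.ttest_ind(intervention, control)

    print(f"Control mean:      {control.mean():.2f}")
    print(f"Intervention mean: {intervention.mean():.2f}")
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

Peer reviewers scrutinise exactly these kinds of details – the sample size, the randomisation, the choice of statistical test – which is one reason publication in a good journal counts for so much.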

Publication in a peer-reviewed journal is important because of the rigour of the review process. To get a new piece of research into a scientific journal, researchers send a draft of their paper to the journal’s editor. The editor then sends the paper anonymously to a couple of researchers in the same field – competitor researchers, effectively. These peer reviewers do their best to criticise and take apart the paper. Those criticisms are fed back to the original researchers, who then reply to them, and the exchange can go on for a second round. Many papers never see the light of day if the critics’ comments cannot be answered satisfactorily.

The whole process typically takes many months, but once every one of those critics and the editor is happy, the paper may be published. And we as readers of such papers can trust that the research was conducted methodically and the researchers haven’t tried to pull the wool over our eyes.

Without that rigour, we are simply relying on consultants’ gut feel and personal beliefs about what might work. Just because something has worked for one person doesn’t mean that it will work for others.

The “it’s worked for me, so it should work for others” argument could be dangerous. Think about it this way: consider an elderly man – let’s say he is in his 70s – who is dispensing health advice to a younger generation based on his own experience. He says he has smoked 40 cigarettes a day for his entire adult life and is still going strong. So clearly, he argues, cigarettes can’t be bad for you.

Of course we know better than to trust our health to anecdotal evidence like that. We know that what’s true for one person may not be true for most people. Research tells us that smoking cigarettes is harmful to many people’s health – it does not tell us that every person who smokes will get cancer.

However, we often fall prey to the “it’s worked for me, so it should work for others” argument when it comes to management practice. Many consultants have great experience. They may have run large, successful businesses or had impressive careers. But what worked for them may not work for others – not, at least, until their techniques have been tested in controlled trials.

There is another reason not simply to trust people’s judgment about what they personally think works: the placebo effect. We’ve all heard that doctors can make patients feel better simply by dispensing a sugar pill and telling them that it’s medicine.

In a similar way, people at work often come away from a workshop feeling that they have learned something worthwhile. But just because they feel it has been valuable doesn’t necessarily mean that it will work – that they will be able to translate it into benefits back in the workplace, for example.

Advice for L&D practitioners

If you’re interested in being able to discern for yourself what good versus bad research looks like, you could start by reading Bad Science, a 2009 book by the medical doctor Ben Goldacre. The book debunks myths around alleged miracle cures in the dieting and healthcare industries. As well as learning what you should and shouldn’t worry about when it comes to your health and that of your loved ones, you will learn the importance of proper research trials in establishing whether anything works. You can apply those lessons to leadership, training and development too.

If you’re thinking of bringing a consultant or trainer into your organisation, be sure to quiz them about the efficacy of their interventions. Don’t be shy about asking them to prove that what they are recommending works. Don’t simply allow a consultant to fob you off with a vague statement along the lines of “Business school research shows that this works.” Ask for more detail.

Ask them to tell you about the published research studies that they are using as the basis for their recommendations. Don’t be afraid to get into the details of each study. Was there a control group in the experiment? How did the researchers measure the alleged benefits of the technique?

Ask for copies of the actual journal papers to make sure that a consultant isn’t just referencing someone’s work without fully understanding it. And look out for journal titles such as Journal of Experimental Social Psychology and Proceedings of the National Academy of Sciences. On the other hand, if a consultant is mainly referring to newspaper articles or blog posts, then you know that the consultant has not gone to the original sources and may be relying on second-hand (and possibly erroneous) reporting from newspapers or online bloggers.

You can apply a similar level of rigour the next time you pick up a book on a topic – whether that’s to do with leadership, charisma, presentation skills or anything else. Look at the notes in the back of the book. If there are none, the book may still be an entertaining read – but don’t necessarily expect the advice within it to work.

Testing personality

Consider the Myers-Briggs Type Indicator (MBTI). It is arguably the world’s best-known and most widely used psychological test.

Unfortunately, published studies suggest that it isn’t actually a very robust measure of personality. For example, in a 1993 study published in the journal Personality and Individual Differences, professor of psychology Adrian Furnham and Paul Stringfield, a collaborator from industry, administered the MBTI to more than 340 European and Chinese managers. The researchers also collected detailed performance data on the managers, covering their customer focus, decision-making, communication and teamwork, as well as a rating of each manager’s future potential.

The results showed that the four dimensions of the MBTI were mostly not linked to the performance measures, for either the European or the Chinese managers. In other words, the MBTI was largely unable to predict useful outcomes at work relating to either current or future performance.

In fairness, the publishers of the MBTI do not claim that the test correlates with performance measures at work. But if that’s the case, then why use it?

Some might argue that the MBTI is only a guide to people’s preferences – that it is simply a tool for helping people to understand their own preferences and, by extension, their impact on others. But the research is very clear: the MBTI is not a valid measure of that impact.

For example, type theory says that personality preferences should become established early in life and remain fairly stable thereafter. However, studies that have asked participants to take the MBTI at one point in time and then again some weeks later have typically found that people’s types often change. Indeed, one investigation found that 50 per cent of people tested and then retested only five weeks later received a different classification. Given such large changes over such a short space of time, the MBTI is unlikely to be capturing any true measure of people’s deep-seated personality preferences.
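
To make the test-retest point concrete, here is a small, purely hypothetical sketch of how such a reliability figure is calculated: each person’s four-letter type from the first sitting is compared with their type from the second, and the proportion who change classification is reported. The types below are invented for illustration only.

    # Hypothetical illustration of test-retest agreement; the types are invented.
    first_sitting = ["INTJ", "ESFP", "ENTP", "ISFJ", "INTP", "ESTJ", "INFP", "ENFJ"]
    second_sitting = ["INTP", "ESFP", "ENTJ", "ISFJ", "INTP", "ESTP", "INFP", "ENTJ"]

    # Count how many people received a different classification on retest.
    changed = sum(1 for a, b in zip(first_sitting, second_sitting) if a != b)
    proportion_changed = changed / len(first_sitting)

    print(f"{changed} of {len(first_sitting)} people received a different type "
          f"on retest ({proportion_changed:.0%}).")

A test measuring a genuinely stable trait would be expected to show far higher agreement between two sittings only weeks apart.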

Where does all of this leave us? In a 2005 paper published in Consulting Psychology Journal: Practice and Research, psychological scientist David Pittenger reviewed the evidence on the MBTI and warned: “It is not evident that the [MBTI] can compartmentalise accurately, consistently, and unambiguously individuals’ personality into the 16 type categories created by the instrument. Consequently, using the MBTI as a consulting tool in corporate settings may be, in some instances, the equivalent of making promises that one cannot keep.”

When I mentioned the problems with the Myers-Briggs Type Indicator to the director of talent management at a large business, she seemed somewhat offended. She claimed that she found it very useful. She said that the test may not be very robust, but that its successful deployment depended on the skill of the coach (i.e. her) using it.

I did not doubt that she was a skilled facilitator, able to draw insights out of the tool. But she was, in essence, taking a flawed tool and compensating for it through her skill. Just imagine how much more effective she could be if she combined her undoubted skill with a truly useful tool. Or, to put it another way: rather than putting a test that fails to relate to performance in the hands of a skilled coach, why not put a test that does relate to performance in those same hands?

Onwards and upwards

Clearly, evidence should matter when it comes to recommending tools and techniques that will allow people to learn and develop within the workplace. Evidence should not come simply from personal opinion. No matter how plausible a piece of advice or a technique might sound, it needs to have been verified by studies published in peer-reviewed journals.

Think about it another way. Imagine for a moment that your son or daughter, or perhaps a nephew or niece, has unfortunately been diagnosed with a rare blood disease. You are talking to the doctor, who recommends a new medicine. In deciding whether to allow the doctor to treat your loved one, you would want evidence that the treatment will work. You would want the medicine to have been tested in proper clinical trials. You would want to know about possible side effects, risks and likely benefits. Would you be happy simply to “give it a go” on the doctor’s personal assertion that the medicine should work?

Little in learning and development relates to matters of life or death. But we have a unique opportunity to improve the effectiveness of people in the workplace. With the right advice, we can help them to find their work more meaningful, lead people more effectively, collaborate more successfully and create more thriving organisations. Without evidence for the advice and techniques we recommend, though, we risk wasting their time or leaving them less successful than they could otherwise become.

About the author

Dr Rob Yeung is a psychologist at leadership consulting firm Talentspace and author of over 20 books. He can be contacted via www.talentspace.co.uk
