What does good look like?
Jane Massy explores how to set and measure performance and behaviour indicators
Impact is driven by performance, and few organisation leaders would dispute that their ultimate success hangs on the performance of their managers and teams. The problem is that in many working contexts no one knows what observable, objectively measured good performance looks like.
In most working contexts today, work is performed by teams, so when looking at performance it is important to consider not just individual roles but the processes and dynamics within the team. This means that the good manager needs to understand 'what good looks like' both for each individual's performance and for the team as a whole, when those individuals come together with a single purpose.
Performance and behaviour descriptors
It's not always as easy as it might seem to come up with a description that can be objectively observed. While many organisations have a set of competency descriptions for jobs and job families, few provide them to line managers in a form that enables them to observe objectively whether performance and behaviour meet the expected standard. Some argue that it is the line manager's job (rather than that of HR, OD or L&D) to specify the performance and behaviour criteria for each directly-reporting staff member. Our experience over the years is that this rarely happens. Few managers use more than the job and competency descriptions as a reference point when assessing performance and behaviour, or as a basis for proposing ways in which performance can be improved and processes and procedures developed. For HR, OD or L&D specialists, who are not in direct contact with the staff doing the day-to-day work, it is difficult to establish appropriate performance and behaviour indicators. To be useful, indicators need to be recorded in a way that managers can use to observe, objectively and fairly, the performance and behaviours of their staff.
If there is a single lesson we have learned about human capital investment, it is the importance of well-articulated, unambiguous descriptors of the expected standards of performance and behaviour. We have yet to come across an organisation where anyone believes that the performance appraisal and management system works as effectively as it should and could. And the most important reason for that is that few managers have those clear descriptors of what good looks like for their staff.
According to a recently published report1, 'Only 8 percent of companies report that their performance management process drives high levels of value, while 58 percent said it is not an effective use of time'. Furthermore, the authors suggest, 'HR is not making the grade as companies move away from HR as people administration to a focus on people performance'. In their survey of over 2,500 business and HR leaders in 94 countries, conducted in the last quarter of 2013, 50% of respondents rated performance management as important and 18% as urgent. 36% of respondents from Western Europe rated performance management as one of their top five most important issues, and the same report notes that performance management falls into the category of significant capacity shortfalls (i.e. more than 40% rate themselves as 'not ready').
And the link to learning and development?
Competency descriptions are very helpful when establishing what someone must be able to do, exhibit and know. Along with context-specific training needs analysis, they provide valuable information when designing learning and development interventions. Competency descriptions are built up over years to establish occupational profiles for jobs and job families and are, of necessity, generic.
However, the basis from which competency profiles should be developed is, in the first instance, the standards of task performance and behaviour required in the job and in teams. This means creating descriptors: clear statements of what can be objectively observed as unacceptable, acceptable and exceptional in practice.
So, when planning an L&D intervention, the starting questions are:
- What are the required standards of task performance and behaviours?
- Does current performance and behaviour meet the standards?
- If not, why not?
- How much of the deficit is attributable to lack of knowledge, skills and attitude?
In other words, who needs to do what better or differently?
Using descriptors to rate performance and behaviour
Our recommendation is to use a four-level rating scale, and for managers to complete descriptors at each level for key tasks and for the most important behavioural standards. These are:
- Unacceptable: includes all behaviours and/or ways of doing the work that are not compliant with regulatory requirements, that pose risk to the individual, others or the business, or that fail to meet even the most basic standards
- Progress towards acceptable: the individual is observed to be doing some of what is required to an acceptable standard but not everything, or meets the standard but not consistently
- Meets acceptable: the individual consistently behaves and performs to the expected standard: no errors and no failures to comply
- Exceptional: generally behaviours that go beyond what is expected: someone does more than the acceptable standard, introduces innovations, and demonstrates what tomorrow's 'acceptable' standard should look like. This level is not always used: innovation may pose risk or deviate from a regulatory requirement.
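The four-level scale above only works if every key task has an observable descriptor written at every level. As an illustration only (the task and descriptor texts below are hypothetical, not taken from the article), the structure can be sketched like this, with a simple check that no level has been left blank:

```python
from dataclasses import dataclass

# The four levels of the rating scale described above.
LEVELS = ["unacceptable", "progress_towards", "meets_acceptable", "exceptional"]

@dataclass
class TaskDescriptors:
    task: str
    descriptors: dict  # level name -> observable descriptor text

    def validate(self):
        # A scale is only usable if every level has a written descriptor.
        missing = [lvl for lvl in LEVELS if lvl not in self.descriptors]
        if missing:
            raise ValueError(f"missing descriptors for: {missing}")
        return True

# Hypothetical example for one key task.
incident_reporting = TaskDescriptors(
    task="Logging safety incidents",
    descriptors={
        "unacceptable": "Incidents go unlogged or breach the reporting regulation.",
        "progress_towards": "Most incidents logged correctly, but not consistently.",
        "meets_acceptable": "All incidents logged correctly, every time, no errors.",
        "exceptional": "Proposes logging improvements adopted by the wider team.",
    },
)
assert incident_reporting.validate()
```

The point of the check is the discipline it enforces: a manager cannot rate fairly against a level for which no observable descriptor exists.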
Writing descriptors requires input from several sources:
- Job holders
- Line managers
- Subject domain/technical experts
- OD/HR specialists
Others who might need to be included are quality assurance and risk specialists and even better, customers!
When we ask people ‘what good looks like’ in a project or process they’ve described to us, they invariably struggle. You can see them asking themselves how they should approach what seems to be a rather simple question. Is it a trick? No, it isn’t a trick, but in order to address it we need to abandon some assumptions we may have and look at the activity as if it were for the first time.
We need to take it to pieces as if it were a piece of machinery, identify its key moving parts, and then understand how they act on and with each other. We will then be in a position to see how the process needs to work if it is to be efficient and effective. The process will be laid bare.
But even this still doesn't get us there. Merely understanding what has to be done will not tell us enough about what 'good' should look like. For that we also need to understand quality requirements: standards, if they exist; expectations, if they do not. Once we have done that, we will have understood performance, timing and sequencing. Each of these has a bearing on whether a quality standard has been met or missed. They are also important factors in demonstrating what has to be done to repeat good performance, or to rectify and eliminate poor performance.

This analysis also tells us what might be considered 'progress towards' as opposed to unacceptable: in other words, it clearly differentiates between what poses real risk to an individual, customer or asset and what needs improvement but won't cause any immediate problem for the organisation. Of course, we are also concerned with establishing what L&D is required for those categorised as 'progress towards'.

In addition, if everyone in a job role is appraised as performing at the acceptable level, this raises two questions: is there any need for further L&D investment, or have we set our performance standard too low? And if more than a low single-digit percentage of staff are appraised as exceptional, we must assume they work in an organisation that outperforms all its competitors and has no need of the L&D interventions other organisations in the same sector require. Forced bell curves have no place in today's organisations: they are misleading at best and, at worst, leave staff disengaged, lacking in trust and at a loss to know what standard of performance is really expected.
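The sanity checks argued for above can be expressed mechanically. The sketch below is illustrative only: the 5% threshold for 'a low single-digit percentage' is our assumption, and the function name is hypothetical.

```python
from collections import Counter

def review_flags(ratings, exceptional_threshold=0.05):
    """Return the questions a distribution of appraisal ratings should prompt.

    `exceptional_threshold` (assumed here as 5%) stands in for the article's
    'low single-digit percentage'.
    """
    counts = Counter(ratings)
    total = len(ratings)
    flags = []
    # Everyone at the acceptable level: standard too low, or no L&D need?
    if counts.get("meets_acceptable", 0) == total:
        flags.append("All staff meet the standard: is further L&D investment "
                     "needed at all, or is the standard set too low?")
    # Too many exceptional ratings: re-examine the descriptors themselves.
    if counts.get("exceptional", 0) / total > exceptional_threshold:
        flags.append("Exceptional ratings exceed a low single-digit share: "
                     "re-examine the descriptors before trusting the result.")
    return flags

# A team of 20 with two 'exceptional' ratings (10%) triggers one flag.
example = ["meets_acceptable"] * 18 + ["exceptional"] * 2
```

The check is deliberately crude: its job is to prompt the two questions the article raises, not to replace the manager's judgement.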
Activity based costing
ABC, or its more recent variant TDABC, can add an extremely valuable dimension to establishing good performance and behaviour standards. Activity-based costing (costing the activities, such as 'planning', 'processing' and 'reporting', that make up projects and processes, rather than generic cost headings such as 'travel' and 'HR') became popular in the 1980s. When it was introduced, and still today, few organisations fully understood the value contribution that particular activities made to an organisation's performance. ABC fell out of favour in the 1990s but has regained popularity in recent years. It helps us not only to identify what good performance looks like but also to identify performance and behaviours that add value – or not!
In order to put a cost on each activity carried out in the organisation, we must have a detailed view of the time taken to perform it to the required standard. We must know what the required standard is, understand the process steps and the behaviours to be followed, and know who has contributed along the way. This is neither a simple nor a quick task, especially in complex operations. But we see the benefits that clients realise when they arrive at a real understanding of what their organisations spend on the processes they follow from day to day, as well as on the key elements of the projects they design. They understand actions, as well as inputs. They get a more accurate picture of their human resourcing requirements.
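The arithmetic behind TDABC rests on two parameters: the cost per unit of time of the capacity supplied, and the time each activity consumes. A minimal sketch, with entirely illustrative figures and hypothetical activity names:

```python
def capacity_cost_rate(cost_of_capacity_supplied, practical_capacity_minutes):
    """Cost per minute of the resources supplied (e.g. a team's quarterly cost
    divided by the minutes of work it can practically deliver)."""
    return cost_of_capacity_supplied / practical_capacity_minutes

def activity_cost(rate_per_minute, minutes_per_activity):
    """Cost of performing one activity to the required standard."""
    return rate_per_minute * minutes_per_activity

# Illustrative: a team costing 84,000 per quarter, with 120,000 practical
# minutes of capacity in that quarter.
rate = capacity_cost_rate(84_000, 120_000)  # 0.7 per minute

# Hypothetical activities and the minutes each takes when done to standard.
activities = {"shortlisting": 30, "interviewing": 90, "induction": 240}
costs = {name: activity_cost(rate, mins) for name, mins in activities.items()}
# e.g. interviewing one candidate costs about 63
```

Because each activity is costed from the time it takes when performed to the required standard, any gap between standard time and observed time surfaces directly as a cost of under-performance.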
They see capacity and capability requirements more clearly and as a result are better able to decide the additional investments they need to make. It becomes easier to see the balance between financial investment and the improvements in performance and behaviour that may or may not influence impact outcomes such as revenues, profitability and customer and client satisfaction.
Recent work by Robert Kaplan (considered by many to be the father of ABC2) and Michael Porter has demonstrated the benefits of using time-driven activity-based costing (TDABC) to understand the true cost of healthcare, and has shown how deploying TDABC can deliver significant improvements in efficiency and effectiveness, leading to much greater value from healthcare costs3.
We have borrowed the lessons from Kaplan to demonstrate the huge benefit that can come from a better understanding of workplace activities (process, performance, behaviour), how they deliver impact outcomes, at what cost and to make the link between performance and behaviour improvements and organisational impact outcomes.
The starting point is to review existing processes, performance standards and behaviours. Teams within organisations review who does what, how, to what standard and at what cost. The review of actual process, performance and behaviour, and their associated costs, provides the basis for reflection on potential efficiencies and improvements in effectiveness, most specifically on how changes in performance and behaviour can influence outcomes. The result will be new, more accurate, value-based performance, process and behaviour standards for all involved. The quality of the new standards will, of course, depend on the quality of the analysis and the evidence base, including the assumptions made. Monitoring and evaluating the results helps to identify what is working well and where improvements can continue to be made.
Giving everyone involved a clearer picture of the relationship between current performance and behaviour and today's impact outcomes, and between improved performance and behaviour and better impact outcomes, provides a powerful rationale for engaging stakeholders in the proposed changes.
As evaluators of human capital activities we need to display many of the instincts and interests of the engineer and the inventor. Most importantly, those who wish to improve their human capital management expertise need to develop a fascination with how pieces of process and activity and behaviours actually work and where improvements can be made.
Let’s take the example of recruitment. The following may help to illustrate the point.
An NGO is finding it difficult to attract and retain high-quality project team leaders. As a result, projects are delayed and fail to achieve their planned objectives, and the NGO is finding it difficult to get funding renewed. Analysis suggests that there are excellent candidates in the market and that other NGOs are able to attract and retain them; nor is the problem related to pay, which is similar across the sector. Exit interviews provide little useful data, but suggest that leavers feel the fit with the organisation is wrong and the job was not what they expected, so they sought and found a 'better' offer. The organisation decides that it needs to improve the interviewing skills of the HR team, but before commencing the development of a training programme, a sensible voice suggests that more analysis might be useful.
The process of recruitment seems straightforward: agree the job description, competency requirements and remuneration; advertise the job; shortlist candidates; interview; make an offer to the selected candidate; negotiate the final terms. The candidate then starts work on the agreed date, receives an induction and, hopefully, stays in post for a minimum of five years, doing the job to a high standard.
An analysis of the process and the development of a stakeholder map identify the problem, which lies at two points in the process. First, the job description does not reflect the actual day-to-day tasks of the role, and there are no stated performance and behaviour standards, so the line managers' expectations of performance and behaviour in the role are not reflected in the information used for recruitment. Second, as a consequence, the discussions at interview do not reveal the candidate's performance and behaviour experience with regard to the specific tasks they will be expected to undertake.
Having identified these points in the process, the HR team can work with the project team's manager and members to examine the actual tasks, the required performance standards, the time tasks are expected to take, the processes involved (especially those that involve others) and the behaviour descriptors, and can then revise the competency requirements. Using this information, interviewers can be given questions that elicit whether a candidate is likely to deliver the required standards of performance and demonstrate the expected behaviours, and the candidate gains a much clearer idea of what is expected in the role.
Jane Massy is leading two new workshops in London around this area: 'Learning to Set and Measure Performance and Behaviour Indicators' on 21st May and 'Setting KPIs for Senior Management: Linking People, Performance and Results' on 25th June. Find out more on the links or email email@example.com or telephone 01353 865340.
1 Global Human Capital Trends 2014. A report by Deloitte Consulting LLP and Bersin by Deloitte. http://www.deloitte.com/view/en_US/us/Services/consulting/human-capital/human-capital-trends/index.htm
2 Kaplan, R.S. and Anderson, S., “Time-Driven Activity Based Costing”, Harvard Business School Press, 2007
3 Robert S. Kaplan and Michael E. Porter, “How to Solve The Cost Crisis In Health Care” Harvard Business Review, September 2011.