Measuring and demonstrating the value of internal communication is an ongoing and complex challenge. It’s a topic I’ve seen debated and queried more than any other in the online communities and conferences I’ve been part of.
I’m delighted that the CIPR Inside team are taking time to thoroughly research current measurement practice in internal comms. In response to that call for input, I want to share a model I’ve used to measure internal comms outcomes: one I first encountered in a different job role.
Before I enthusiastically jumped into the world of internal comms, I was a learning & development professional. That’s a role with some overlapping skills and aims, such as to enable people to change their behaviours, in the right way, by acquiring new knowledge or ways of thinking.
Learning professionals face similar challenges to internal comms professionals. They too have to overcome stereotypical assumptions about the limits of training and are constantly challenged to prove their efforts are cost-effectively contributing to the bottom line.
The Kirkpatrick Model
The Kirkpatrick Model is not unlike the AMEC model many comms colleagues already use to good effect, but it has some subtle differences that I think can help define a more bespoke approach for internal communications professionals.
The Kirkpatrick model consists of four levels of evaluation:
Level 1 – Reaction – participants’ initial feedback on the effectiveness of the intervention
Level 2 – Learning – whether participants have acquired new knowledge or skills
Level 3 – Behaviour – how people apply this new knowledge in their workplace
Level 4 – Results – the extent to which targeted outcomes are achieved as a result.
I’ll give a little more context from the lessons I learned at each stage of the model:
Level 1 – Reaction
This is about measuring satisfaction with the event itself and is usually completed straight away. It’s basically a count up of attendees, or people who completed your online content, plus the output of any feedback surveys completed at the end of the event.
During the early part of my career, almost all learning measurement I saw was ‘Level 1’ evaluation. That’s partly because it’s quick to collect and easy to analyse, and partly because it feeds a need for an instant ‘hit’ of positive feedback. It didn’t yield the information our business leaders were looking for.
Level 1 limitations
Keeping track of the numbers of people involved in any programme or campaign is important, of course, but this isn’t an evaluation of outcomes. Level 1 feedback data is obtained immediately after an event, like those tick-box questionnaires you get at the end of a course. It’s often filled in hastily and ‘politely’. It’s inadvertently limited by the questions posed, even when there’s a free-text section.
Evaluation reports based on this data usually said something like, ‘We successfully delivered 5 hours of training to 20 people; 100% were highly satisfied with the venue and 80% rated trainer effectiveness at 5 out of 5’. When handed over to an operations manager, they translated as, ‘We took your team out of productive time for a minimum of 100 hours. They liked the hotel and the trainer.’ That’s not necessarily moving anyone’s business forward and those reports never satisfied the question, ‘how are you contributing to the bottom line?’
I’ve experienced a similar response when delivering reaction-level data in internal comms. The fact that 1000 employees spent 10 minutes each reading my newsletter, or that my leader’s latest blog earned 12 likes and a few polite comments is nice to know. How is that moving the business forward?
The answers are further down the model.
Level 2 – Learning
At this point, we assess the uptake of new information or knowledge. Typically, it’s measured by tests or observed in scenarios where newly-acquired knowledge can be brought to life safely. Think quizzes, role plays or sandbox system environments.
It’s great when you can show people have genuinely picked up on a new topic or have taken time to learn about your new initiative. It’s hard not to be tempted to champion outcomes as final when you get good data points along the first few steps of the model. Everyone loves the feel-good factor of great feedback! However, the best outcomes happen if and when you get to the results stage and show how that outcome helps your organisation.
Level 3 – Behaviour
To report on behaviour change, assess how well new knowledge and skills are being applied in someone’s working experience. For example, learners may now be able to handle more complex customer queries, write certain types of code accurately, or lead constructive one-to-one meetings without provoking a grievance.
These are better measures of what’s actually changed, as a result of your efforts, in the real world. Behaviour outcomes are often the best indicators of success for each individual involved – people like to feel they have expanded their own skill set, for example.
They still don’t go far enough to demonstrate bottom line results.
Level 4 – Results
Finally, this is where you assess how and to what extent those positive reactions, lessons learned and changes in behaviour have contributed to the outcomes your organisation is aiming to achieve.
If you’ve inspired leaders to hold more successful one-to-one meetings, that might lead to a specific reduction in grievance cases. The results may include time and cost savings; reduced resignations; increased engagement scores around line management and career development; and improved efficiency through collaboration.
In another example, a programme leading to improvements in customer service may directly lead to increased Net Promoter Scores and reduced operational costs through time saved on specific tasks.
How does this apply to internal comms?
When I started out in internal comms, the best way I knew how to demonstrate the result of my efforts was by using the Kirkpatrick model, so that’s what I did!
Here’s how the model worked for the types of projects I’ve led:
- An eye-catching, high-profile internal cyber security campaign (Level 1) prompted a number of people to update their profiles or adopt new, safer software solutions (Level 3). This cut the estimated time, and therefore cost, of getting everyone up to date and making the system more secure (Level 4 – Result!).
- The information shared about a company wellness programme attracted lots of positive feedback and strong read-rates (Level 1), and most people entering the quiz scored highly (Level 2). It inspired a large number of people to take up a screening offer and promote it on the intranet (Level 3). This led to improved engagement scores in a pulse survey about wellness, where it was reported the programme contributed to the retention of valued team members (Level 4 – Result!).
Try it out!
If you want to try this four-step approach in your professional practice, here are some suggestions of what to assess along each of the steps:
REACTION: initial data, outputs and feedback, e.g. opens; reads; likes; clicks; attendance rates; number of comments; number of dial-ins; satisfaction survey results.
LEARNING: knowledge transfer, i.e. what do people know now they didn’t know before? How do you know they have picked up that information? How well-informed are the questions people pose on this topic now, compared to before the campaign?
BEHAVIOUR: how has people’s behaviour changed? What do they do now they didn’t do before, or what have they stopped doing? What do they say about their ability to apply new knowledge and what difference it makes to them?
RESULTS: what does that mean in terms of business outcomes? How does it contribute to broader targets? How does it align with what the organisation thinks is important?
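If you track these measures in a spreadsheet or a script, the four steps above can be sketched as a simple data structure. This is a minimal, hypothetical illustration: the campaign name, metric names and figures are all invented, and the point is simply that grouping evidence by Kirkpatrick level makes any missing levels (usually Results) immediately visible.

```python
# Hypothetical sketch: grouping campaign evidence by Kirkpatrick level.
# All metric names and figures below are invented for illustration.

KIRKPATRICK_LEVELS = ["Reaction", "Learning", "Behaviour", "Results"]

def summarise(campaign: dict) -> list:
    """Return one summary line per level, flagging levels with no evidence yet."""
    lines = []
    for level in KIRKPATRICK_LEVELS:
        metrics = campaign.get(level, {})
        if not metrics:
            lines.append(f"{level}: no evidence collected yet")
        else:
            detail = ", ".join(f"{name}={value}" for name, value in metrics.items())
            lines.append(f"{level}: {detail}")
    return lines

# An invented wellness-campaign example; note the deliberate gap at Results.
wellness_campaign = {
    "Reaction": {"opens": 1000, "likes": 120},
    "Learning": {"quiz_pass_rate": "85%"},
    "Behaviour": {"screening_signups": 240},
}

for line in summarise(wellness_campaign):
    print(line)
```

Laid out this way, the report itself prompts the question the model is designed to answer: what evidence do we have at the Results level?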
An ideal internal comms measurement model
Demonstrating positive impact on the bottom line will always be a challenge, and rightly so. It pushes us to show our leaders why professional internal communication skills are critical to the health and productivity of an organisation. Also, the better we get at figuring out the power and impact of what we do, the better we can all get at doing it.
The Kirkpatrick Model may not be a perfect match for internal comms as it stands – perhaps ‘learning’ isn’t the best description of an internal comms ‘Level 2’ evaluation. However, I’m confident it has something to teach us in our search for the ideal measurement model for our work. I’m also confident the work of the CIPR Inside team will help show us the way.