5 Ways to Ensure Fair Use of Assessment Technology

Published October 20, 2017

The use of technology in assessment (particularly high-volume assessment) is becoming more and more commonplace. Psychometric tests of cognitive ability and situational judgement have been a staple of sifting processes in many organizations for many years. Nowadays, games-based assessment has become a popular route for recruiters who want to provide an attractive experience for candidates.

In addition to immersion and accuracy, assessment technologies can also bring incredible time and cost savings. For example, in 2015 a&dc reduced the number of days the Fire & Rescue Services in Wales spent processing applications by implementing sifting technology and automating the marking process. Commercially speaking, processing time fell from 224 days (at a cost of £18,147.09 to process 2,096 applications) to 21 days (costing just £2,106.58 to process 5,912 applications).

In my conversations with clients, it doesn’t surprise me that most stakeholders are very familiar with the benefits of the assessment technologies available to them. However, many are also very conscious of fairness in their use. Diversity and inclusion are core goals for organizations in the modern world, especially in light of research suggesting that more diverse workforces statistically outperform their competitors. How can you be certain that the technology you choose for assessing talent gives all your candidates the same opportunity, while maintaining accuracy in decision-making?

Use the following 5 guidelines to give yourself the greatest certainty that you are using your assessment technologies fairly:

1. Make Assessments Mobile/Tablet Compatible

At the time of writing, 63% of internet usage worldwide is on mobiles and tablets. According to research, 80.28% of job searches for blue-collar work are made on phones, as are 57.46% of searches for white-collar roles. You’re missing out on a lot of potential talent if your assessments can’t be accessed properly on mobile devices. In the words of my colleague Ali Shalfrooshan, “it’s a form of digital discrimination!”

2. Test for Disability Access

In the UK, 10 million people are living with a disability.  That’s approximately 15% of the population.  Testing your online platform is essential to ensure it is accessible (in general usability terms) to your entire population of talent; if you don’t, you may be missing out.  For example, at a&dc we introduced a video-based judgement test to the Civil Service which had been rigorously tested by people with a range of disabilities to ensure that it would give suitably wide access to all applicants.

3. Design Fair Content

Candidates have opinions about what they are experiencing. For example, someone taking a numerical reasoning test for a role that doesn’t require numerical reasoning may get the wrong impression about the nature of the role and be put off the process. The importance of content goes beyond face validity. If you present virtual scenarios or simulations that give internal applicants an advantage because they introduce technical or procedural concepts, you are effectively making internal knowledge a prerequisite, and that isn’t fair.

4. Validate your Assessments

How confident do you feel that your assessment measures what it is supposed to measure? Are you relying on gut instinct to guess that it predicts which candidates will be the best performers, or do you know that from data analysis? If you can prove that your assessments are accurate, candidates are more likely to feel a sense of justice when they go through your process (regardless of whether they are successful or not). Trial the assessment first with your existing staff, collect job performance data on your trial participants, and have an analyst correlate the two data sets. If the results show that your test predicts performance, it is more likely to be fair.
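As a rough illustration of that final step, here is a minimal sketch of the correlation analysis in Python. The file name and column names (trial_results.csv, assessment_score, performance_rating) are hypothetical placeholders; your analyst’s actual data and tooling will differ.

```python
# Minimal sketch: correlating trial assessment scores with job performance.
# Assumes a hypothetical CSV with columns "assessment_score" and "performance_rating".
import pandas as pd
from scipy import stats

df = pd.read_csv("trial_results.csv")  # hypothetical file name

# Drop incomplete records so the two series line up.
df = df.dropna(subset=["assessment_score", "performance_rating"])

# Pearson correlation between assessment scores and performance ratings.
r, p_value = stats.pearsonr(df["assessment_score"], df["performance_rating"])

print(f"Validity coefficient r = {r:.2f} (p = {p_value:.3f}, n = {len(df)})")
```

A meaningfully positive correlation is evidence that the assessment predicts performance; a negligible one suggests the content needs rethinking before it is used to make decisions about candidates.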

5. Monitor the Impact of your Assessments

Once you’ve implemented your assessment, you can begin collecting results from candidates.  If you monitor that data, you will see if any trends arise.  For example, what should you do if a question appears to be too easy or difficult for applicants from a minority group?  Logic dictates you should get rid of that item, because it is discriminating against or in favour of certain groups.  If you don’t monitor your test content at least every 6 months, you are potentially missing something important.
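To make that monitoring step concrete, here is a minimal sketch of an item-level check in Python. It assumes a hypothetical candidate_responses.csv file with columns item_id, group, and correct (1 if the candidate answered the item correctly, 0 otherwise), and it simply flags items with a large difficulty gap between groups; it is not a full differential item functioning analysis.

```python
# Minimal sketch: flagging items whose difficulty differs sharply between groups.
# Assumes a hypothetical long-format CSV of responses with columns
# "item_id", "group" (e.g. a demographic category) and "correct" (0/1).
import pandas as pd

responses = pd.read_csv("candidate_responses.csv")  # hypothetical file name

# Proportion answering each item correctly, split by group.
difficulty = responses.pivot_table(index="item_id", columns="group",
                                   values="correct", aggfunc="mean")

# Flag items where the gap between the easiest and hardest group exceeds 0.2.
difficulty["gap"] = difficulty.max(axis=1) - difficulty.min(axis=1)
flagged = difficulty[difficulty["gap"] > 0.2]

print(flagged.sort_values("gap", ascending=False))
```

Any flagged item is a candidate for review or removal, since it may be discriminating against or in favour of a particular group.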

We have also created this handy infographic that summarises these key points for you.
