Aptitude testing the smart way
At a&dc we have primarily focused on assessing behaviour, as we believe it to be a key differentiator for effective performance. Our portfolio of business simulations and situational judgement tests has focused on measuring what a candidate actually does when faced with a challenge at work.
However, for many roles, an individual’s intellect or cognitive horsepower is also critical. The ability to deal with numerical information effectively, comprehend verbal material and identify patterns is essential in many jobs. In fact, aptitude tests have been shown to be one of the most predictive measures of performance. As a consequence, aptitude tests are a very commonly used tool for sifting and selecting talent.
a&dc have designed a wide variety of bespoke aptitude tests for our clients. However, until now we have avoided designing our own aptitude tests as we always want to do things to the highest possible standard. We decided that if we were to have our own aptitude tests we would develop them in a way that integrates advancements in both technology and psychology.
The Smart Aptitude Series from a&dc utilises the latest technology in adaptive testing to sift high volumes of applicants across all sectors, functions and levels. So what does adaptive testing actually mean?
What is adaptive testing?
‘Adaptive testing’ describes assessments where the test content adapts dynamically to the individual test taker, so that questions are pitched at the right level for that person’s ability. This means that every test taker sits an assessment that is appropriate for their level of competence.
How does adaptive testing work?
First, some items of moderate difficulty are presented to the test taker. If these are answered correctly, more difficult items are presented. This continues until the optimum difficulty level is reached: items where the test taker has a 60–70% likelihood of answering correctly.
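The step-up/step-down loop described above can be sketched in a few lines of Python. This is an illustrative toy, not the actual Smart Aptitude Series engine: real adaptive tests select the next item using IRT-based ability estimates rather than a simple difficulty ladder, and `answer_fn` here merely stands in for presenting the item to a live candidate.

```python
def run_adaptive_test(item_bank, answer_fn, n_items=10):
    """Toy adaptive test: step difficulty up after a correct answer,
    down after an incorrect one.

    item_bank: dict mapping difficulty level (1 = easiest .. 5 = hardest)
               to a list of unused items.
    answer_fn: callable(item, level) -> bool, standing in for the candidate.
    Returns the history of (difficulty level, was_correct) pairs.
    """
    level = 3                      # start with moderately difficult items
    history = []
    for _ in range(n_items):
        item = item_bank[level].pop(0)
        correct = answer_fn(item, level)
        history.append((level, correct))
        # move towards the level where the candidate is challenged but not lost
        level = min(5, level + 1) if correct else max(1, level - 1)
    return history
```

Even this crude rule makes the convergence behaviour visible: the difficulty level quickly settles around the point where the candidate starts getting items wrong, which is exactly the 60–70% zone a real adaptive engine targets more precisely.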
Is adaptive testing fair?
You may be questioning the fairness of an ability test where each test taker sees a different set of questions. However, this is accounted for in the way the test is scored. The scoring mechanism used for The Smart Aptitude Series is ‘Item Response Theory’ (IRT). In a nutshell, this method of scoring provides an estimate of the candidate’s underlying ability (e.g. verbal reasoning ability, numerical reasoning ability), rather than a ‘number correct’ score on one specific test. The difficulty of the items the test taker has responded to is taken into account, meaning that answering 10 easy questions correctly does not result in the same score as answering 10 difficult questions correctly.
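To make the idea concrete, here is a minimal sketch of IRT-style scoring in Python, using the standard two-parameter logistic (2PL) model and a simple grid search for the maximum-likelihood ability estimate. The model and estimation approach are textbook IRT; the specific parameters, function names and grid are illustrative assumptions, not a&dc’s actual scoring implementation.

```python
import math

def p_correct(theta, a, b):
    """2PL item response model: probability that a candidate of ability
    theta answers correctly an item with discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def estimate_ability(responses):
    """Maximum-likelihood ability estimate over a coarse grid of theta values.
    responses: list of (a, b, correct) tuples, one per item answered."""
    thetas = [t / 10.0 for t in range(-40, 41)]   # grid from -4.0 to +4.0

    def log_likelihood(theta):
        ll = 0.0
        for a, b, correct in responses:
            p = p_correct(theta, a, b)
            ll += math.log(p if correct else 1.0 - p)
        return ll

    return max(thetas, key=log_likelihood)
```

Because each item’s difficulty `b` enters the likelihood, eight correct answers on hard items (say `b = 2`) yield a much higher ability estimate than eight correct answers on easy items (`b = -2`), which is why two candidates with the same number of correct responses can legitimately receive different scores.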
Ability tests have historically shown an adverse impact against certain demographic groups (for example females and ethnic minority groups). While group differences in performance do exist, these can be exaggerated by individual test items (Gamliel & Cahan, 2007). In The Smart Aptitude Series, items are continually monitored for group differences, and items which unfairly discriminate against a protected group are removed from the bank and no longer used.
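One simple way to picture this monitoring is to compare each item’s pass rate across demographic groups and flag large gaps for review. Real differential item functioning (DIF) analysis uses IRT-based statistics that control for overall ability, so this is only an illustrative proxy; the function name and the 10-percentage-point threshold are assumptions for the sketch.

```python
def flag_items_for_review(item_stats, max_gap=0.10):
    """Flag items whose pass rates differ between groups by more than max_gap.

    item_stats: dict mapping item_id -> {group_name: pass_rate}.
    Returns the item_ids that warrant a fairness review.
    """
    flagged = []
    for item_id, rates in item_stats.items():
        if max(rates.values()) - min(rates.values()) > max_gap:
            flagged.append(item_id)
    return flagged
```

In practice a flagged item would be reviewed by psychometricians and, if the gap cannot be explained by genuine ability differences, retired from the item bank.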
What are the benefits of adaptive testing?
Adaptive tests offer great improvements in candidate experience over traditional fixed-format tests. Because the items presented match each test taker’s ability level, high-ability candidates are suitably challenged and lower-ability candidates are not disheartened by a long run of items they cannot complete.
In terms of the test output, much more precise estimates of ability can be given. As an example, you may have a group of test takers with particularly high ability. If they take a moderately difficult fixed test, they will likely all receive the maximum test score, leaving no way to differentiate between them. The Smart Aptitude Series draws from a very large bank of test items, meaning high levels of differentiation between test takers at all ability levels.
A further advantage of adaptive testing is the reduction in cheating. Test takers each see a different set of items, so there is no risk of complete tests or answer keys being leaked online.
You can read more about The Smart Aptitude Series here >>
Author: Katy Welsh
Gamliel, E., & Cahan, S. (2007). Mind the gap: Between-group differences and fair test use. International Journal of Selection and Assessment, 15(3), 273–282.