
INSSV Intelligence Structure Battery

INSSV is an intelligence test battery constructed on theory-led principles and designed to measure work-related abilities in a fair and economical manner.


Assessment of intelligence level and intelligence structure for respondents aged 14 and over.

Theoretical background

As its name indicates, the Intelligence Structure Battery – Short Form is a simplified version of the Intelligence Structure Battery (INSBAT). Like INSBAT, it is based on the hierarchical Cattell-Horn-Carroll model of intelligence (Carroll, 1993, 2003; Horn, 1989; Horn & Noll, 1997), which assumes that broad secondary factors underlie the correlations between the individual primary factors or subtests. The correlations between the secondary factors are in turn explained by a general factor of intelligence, which forms the apex of the hierarchical intelligence model. The validity of this factor structure has been replicated in many studies from different countries (e.g. Arendasy, Hergovich & Sommer, 2008; Bickley, Keith & Wolfe, 1995; Carroll, 1989, 2003; Gustafsson, 1984; Undheim & Gustafsson, 1987).
For the Intelligence Structure Battery – Short Form the following secondary factors were selected: fluid intelligence, crystallised intelligence, quantitative reasoning, visual processing and long-term memory. With the exception of visual processing and long-term memory, each of the selected secondary factors is measured by two subtests: the subtest with the highest loading onto the factor in question and an additional subtest that helps to depict the breadth of content of the secondary factor.
The eight subtests of the Intelligence Structure Battery – Short Form were created using a variety of approaches to automatic item generation (AIG: Arendasy & Sommer, in press; Irvine & Kyllonen, 2002), taking account of recent research findings in the cognitive sciences and applied psychometrics.


Unlike in INSBAT, the user of the Intelligence Structure Battery – Short Form can only omit entire secondary factors; it is not possible to omit individual subtests or to adjust their reliability to specific assessment needs. Each subtest is provided with standardised instructions and practice examples based on the principles of programmed instruction and “mastery learning”. Depending on the subtest, the respondent’s answers are given either in multiple-choice format or as automatically scored free responses. The items in the individual subtests are presented partly in power-test form and partly with a time limit on each item. In seven of the eight subtests the items are presented via computerised adaptive testing (CAT), with the starting item selected on the basis of sociodemographic data, thereby maximising reliability and test security.
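The adaptive presentation can be illustrated with the standard maximum-information selection rule used in CAT: at each step, administer the unused item whose Rasch information at the current ability estimate is greatest. The item bank and starting ability below are purely hypothetical; the manual does not specify INSSV's exact selection algorithm:

```python
import math

def item_information(theta, b):
    """Fisher information of a Rasch (1PL) item with difficulty b at ability theta."""
    p = 1.0 / (1.0 + math.exp(-(theta - b)))
    return p * (1.0 - p)

def next_item(theta, bank, administered):
    """Pick the not-yet-administered item with maximal information at theta."""
    candidates = [i for i in range(len(bank)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, bank[i]))

# Hypothetical item difficulties; theta = 0.0 stands in for the starting
# estimate that INSSV derives from sociodemographic data.
bank = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0]
first = next_item(0.0, bank, set())  # selects the item closest in difficulty to theta
```

Because Rasch item information p(1 − p) peaks where difficulty equals ability, this rule always picks the item closest to the current estimate.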


For each subtest the ability parameter is first calculated according to the 1PL Rasch model. However, since the reliability of the individual subtests is deliberately kept low by default, these subtest scores are not reported; they merely form the starting point for calculating the factors of real interest – the secondary factors, which can be used to assess both intelligence structure and intelligence level. Alongside the factor scores, a norm comparison (percentile ranks and IQ scores with confidence intervals) is carried out. At the conclusion of testing the results are displayed both in tabular form and as a profile, and these can be printed out. In addition, INSSV has provision for transferring the test results automatically into a report template.
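As a rough illustration of the first step – estimating ability under the 1PL Rasch model – the following sketch finds the maximum-likelihood θ for a scored response pattern by Newton-Raphson. The difficulties and responses are invented for illustration; the actual INSSV scoring routine is more elaborate:

```python
import math

def estimate_theta(responses, difficulties, iters=50):
    """Maximum-likelihood ability estimate under the 1PL (Rasch) model,
    obtained by Newton-Raphson on the log-likelihood.
    responses: 1 = correct, 0 = incorrect (must not be all 0s or all 1s)."""
    theta = 0.0
    for _ in range(iters):
        ps = [1.0 / (1.0 + math.exp(-(theta - b))) for b in difficulties]
        gradient = sum(x - p for x, p in zip(responses, ps))   # score residual
        information = sum(p * (1.0 - p) for p in ps)           # test information
        theta += gradient / information
    return theta

# Hypothetical pattern: items of difficulty -1, 0, 1; the hardest one failed.
theta_hat = estimate_theta([1, 1, 0], [-1.0, 0.0, 1.0])
```

Under the Rasch model the raw score is a sufficient statistic, so θ is the value at which the expected number correct equals the observed number correct.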


The reliabilities of the five secondary factors lie between 0.70 and 0.84. The reliability of the general factor is 0.91.
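Under classical test theory, these reliabilities translate directly into the width of the reported confidence intervals: on the IQ scale (SD = 15), the standard error of measurement is SEM = 15·√(1 − r). A minimal sketch assuming the simple, non-regressed CI form; the procedure INSSV actually uses may differ:

```python
import math

def iq_confidence_interval(iq, reliability, sd=15.0, z=1.96):
    """95% confidence interval around an observed IQ score,
    given the scale SD and the reliability of the factor score."""
    sem = sd * math.sqrt(1.0 - reliability)
    return iq - z * sem, iq + z * sem

# With the general factor's reliability of 0.91, SEM = 15 * sqrt(0.09) = 4.5,
# so an observed IQ of 110 carries a 95% CI of roughly 101 to 119.
low, high = iq_confidence_interval(110, 0.91)
```

The lower secondary-factor reliabilities (0.70–0.84) yield correspondingly wider intervals, which is why profile interpretation should rest on the factor scores rather than on unreported subtest scores.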


The construct representation (Embretson, 1983) of the individual subtests has been demonstrated in studies in which the item difficulties were predicted from task characteristics derived from the theoretical models for the solving of these types of task. The multiple correlations between the item difficulty parameters of the Rasch model (Rasch, 1980) and the item characteristics thus obtained vary for the individual subtests between R = 0.70 and R = 0.97. This means that between 49% and 94% of the variance in the difficulties of the individual items can be explained by the theoretical models on which the construction of the items in the individual subtests is based.
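The percentage figures follow directly from squaring the multiple correlation, since R² gives the proportion of variance explained. A one-line check:

```python
def explained_variance(r):
    """Proportion of item-difficulty variance explained by the construction
    rationale, given the multiple correlation R."""
    return r ** 2

# The reported range R = 0.70 .. 0.97 corresponds to about 49% and 94%.
low_r2, high_r2 = explained_variance(0.70), explained_variance(0.97)
```
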
Many other studies of construct validity are now available that confirm the theory-led assignment of the individual subtests to the secondary factors of the Cattell-Horn-Carroll model (Arendasy & Sommer, 2007; Arendasy, Hergovich & Sommer, 2008; Sommer & Arendasy, 2005; Sommer, Arendasy & Häusler, 2005).
Evidence of criterion validity has come from the fields of aviation psychology (selection of trainee pilots) and educational counselling (prediction of student success at universities of applied sciences).
