Case Studies

These case studies demonstrate how the calibration model can be used and are taken directly from the paper, the latest version of which can be downloaded from arXiv.

The data for the simulation case study can be found here.

Case Study 1 - Simulation

In the simulation, \(N_O = 3000\) objects are assessed by a panel of \(N_A = 15\) assessors. (This choice is realistic: it was motivated by the number of outputs and reviewers in the applied mathematics unit of assessment at the UK’s 2008 Research Assessment Exercise.) The simulation was carried out in MATLAB, and the system of equations was solved using its built-in procedure, which computed the LU decomposition of \(L\).

True values of the objects, \(v_o\), were assumed to be normally distributed with a mean of 50 and a standard deviation of 15, truncated at 0 and 100. The assessor biases \(b_a\) were assumed to be normally distributed with a mean of 0 and a standard deviation of 15. Each assessor was considered to have high, medium, or low confidence in each assessment, and these were modelled using scaled uncertainties for the awarded scores of \(\sigma_{ao} = 5\), 10 or 15, respectively. The allocated scores follow equation (4) and are likewise truncated at 0 and 100.
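As a concrete illustration, the following Python/NumPy sketch generates simulated data along the lines described above. The additive score model \(s_{ao} = v_o + b_a + \text{noise}\), the 1:1:1 confidence profile and all variable names are assumptions made for illustration; this is not the paper’s MATLAB code.

```python
import numpy as np

rng = np.random.default_rng(0)

N_O, N_A, r = 3000, 15, 3                        # objects, assessors, readers per object
sigma_levels = np.array([5.0, 10.0, 15.0])       # high / medium / low confidence uncertainties

v = np.clip(rng.normal(50, 15, N_O), 0, 100)     # true object values, truncated to [0, 100]
b = rng.normal(0, 15, N_A)                       # assessor biases

scores = np.full((N_A, N_O), np.nan)             # awarded scores s_ao (NaN = not assessed)
sigmas = np.full((N_A, N_O), np.nan)             # uncertainty sigma_ao attached to each score
for o in range(N_O):
    readers = rng.choice(N_A, size=r, replace=False)      # r distinct assessors per object
    for a in readers:
        sigmas[a, o] = rng.choice(sigma_levels)           # 1:1:1 profile, as in panel (a) below
        scores[a, o] = np.clip(v[o] + b[a] + rng.normal(0, sigmas[a, o]), 0, 100)
```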

With \(r\) assessors per object (which we took to be the same for each object in this instance), each simulation generated \(rN_O\) object scores \(s_{ao}\). From these, we generated \(N_O\) value estimates \(\hat{v}_o\) and \(N_A\) estimates of the assessor biases \(\hat{b}_a\) using the calibration processes. We then took the mean and maximum values of the errors in the estimates, \(dv_o = |\hat{v}_o - v_o|\) and \(db_a = |\hat{b}_a - b_a|\). Straight averaging also delivered a value estimate \(\hat{v}_o\), as well as mean and maximal values of the errors \(dv_o\). Finally, we determined the averages of the errors \(dv_o\) and \(db_a\) over 100 simulations. The results for these averaged mean and maximal errors in the scores are denoted by \( \langle dv \rangle\) and \((dv)_{max}\), respectively, and those for the biases (for the calibrated approaches only) are denoted by \( \langle db \rangle\) and \((db)_{max}\).
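Continuing the sketch above, calibrated estimates for the additive model can be obtained by a (weighted) least-squares fit with the estimated biases constrained to sum to zero. This generic formulation is only a stand-in for the paper’s matrix \(L\) and its degeneracy-breaking condition, and a single run is shown rather than an average over 100 simulations.

```python
def calibrate(scores, weights):
    """Weighted least-squares fit of s_ao ~ v_o + b_a, with the degeneracy fixed
    by a Lagrange multiplier forcing the estimated biases to sum to zero.
    (A sparse formulation would be preferable at full scale.)"""
    N_A, N_O = scores.shape
    a_idx, o_idx = np.nonzero(~np.isnan(scores))
    n_obs = a_idx.size
    A = np.zeros((n_obs, N_O + N_A))              # design matrix: [object columns | assessor columns]
    A[np.arange(n_obs), o_idx] = 1.0
    A[np.arange(n_obs), N_O + a_idx] = 1.0
    w = weights[a_idx, o_idx]
    y = scores[a_idx, o_idx]
    M = A.T @ (w[:, None] * A)                    # weighted normal equations
    rhs = A.T @ (w * y)
    c = np.concatenate([np.zeros(N_O), np.ones(N_A)])
    K = np.block([[M, c[:, None]], [c[None, :], np.zeros((1, 1))]])
    sol = np.linalg.solve(K, np.append(rhs, 0.0))
    return sol[:N_O], sol[N_O:N_O + N_A]          # v_hat, b_hat

# IBA-style calibration uses unit weights for every observed score;
# CWC would instead pass weights c_ao derived from the declared confidences.
unit_w = np.where(np.isnan(scores), 0.0, 1.0)
v_hat, b_hat = calibrate(scores, unit_w)
v_avg = np.nanmean(scores, axis=0)                # straight-averaging estimate

dv, db = np.abs(v_hat - v), np.abs(b_hat - b)
print(dv.mean(), dv.max(), db.mean(), db.max())   # one run's <dv>, (dv)_max, <db>, (db)_max
print(np.abs(v_avg - v).mean())                   # mean error of straight averaging, for comparison
```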

Results for all three methods are presented in Figs. 1–4. The mean and maximal absolute errors for the straight averaging approach, the IBA method and the CWC approach are given in Panels (a)-(d) of Figs. 1 and 2. For demonstration purposes, we use three distinct confidence levels rather than a continuous distribution of standard deviations or scenarios mixing several standard deviations. This allows us to control differences in confidence levels clearly in Figs. 1 and 2, and we do so by presenting four panels labelled (a), (b), (c) and (d). These represent different profiles, with the confidence for each assessment randomly allocated using probabilities for high, medium and low confidences in the ratio (a) 1:1:1, (b) 1:1:2, (c) 1:2:1, (d) 2:1:1. We observe that, for each method, the scores become more accurate (errors decrease) as the number of assessors per object \(r\) increases.

From Fig. 1(a)-(d), with only two assessors per object, the straight averaging method gives errors averaging about 10 points, and more than \(r = 6\) readers per object are required to bring the mean error down to 6 points. Fisher’s IBA, however, achieves this level of accuracy with only 2 or 3 readers, and the CWC method delivers a further improvement of about one point. One also notes that, for the calibration approaches, relatively little is gained on average by employing more than four assessors per object.

Figure 1: Mean errors plotted against the number \(r\) of readers per object for the simple arithmetic-mean approach (upper curves, orange), the incomplete-block-analysis method (middle curves, green) and the calibration-with-confidences approach (lower curves, blue). The various panels represent different confidence profiles with probabilities for high, medium and low confidences in the ratio (a) 1:1:1, (b) 1:1:2, (c) 1:2:1, (d) 2:1:1.

Fig. 2 shows that Fisher’s approach also leads to significant improvements in the maximal error values relative to those obtained through simple averaging. With two assessors per object, maximal errors are reduced from about 45 to 30-35. The CWC approach does not significantly improve upon this. However, with 6 assessors per object the maximal error value of about 25 delivered by the simple averaging process is reduced to about 20 by Fisher’s method and to as low as 16 when half the readers have a high degree of confidence in their scores.

Figure 2: Maximum errors plotted against the number \(r\) of readers per object for the simple arithmetic-mean approach (upper curves, orange), the incomplete-block-analysis method (middle curves, green) and the calibration-with-confidences approach (lower curves, blue). The various panels represent different confidence profiles with probabilities for high, medium and low confidences in the ratio (a) 1:1:1, (b) 1:1:2, (c) 1:2:1, (d) 2:1:1.

Fig. 3 panel (a) gives the improvements achieved by the calibration methods as the ratio of the mean errors coming from Fisher’s IBA approach to those from the straight averaging approach, \( \langle dv \rangle _{IBA} / \langle dv \rangle _{avg} \), and the corresponding ratio for the CWC approach, \( \langle dv \rangle _{CWC} / \langle dv \rangle _{avg}\). Smaller ratios mean greater accuracy on the part of the calibrated approaches. Fig. 3 panel (b) gives the analogous accuracy ratios for the maximal errors, namely \( (dv)_{max,IBA} / (dv)_{max,avg} \) and \( (dv)_{max,CWC} / (dv)_{max,avg}\). Fig. 3(a) demonstrates that IBA delivers mean errors between about 60% and 80% of those coming from the averaging approach, with the better improvements associated with lower assessor numbers. This is also the most desirable configuration for realistic assessments, as it requires only a minimal number of assessors per object. The CWC approach reduces errors by about a further 10 percentage points, irrespective of the number of assessors.

Figure 3: (a) The ratios \( \langle dv \rangle _{IBA} / \langle dv \rangle _{avg} \) and \( \langle dv \rangle _{CWC} / \langle dv \rangle _{avg} \) measure the mean improved accuracies of IBA (green curves) and CWC (blue), respectively, over straight averaging. Smaller ratios indicate a greater degree of improvement over the averaging approach. (b) The analogous quantities for maximal errors are \( (dv)_{max,IBA} / (dv)_{max,avg} \) and \( (dv)_{max,CWC} / (dv)_{max,avg}\), respectively. The four line types correspond to relative probabilities of standard deviations of 5, 10 or 15 respectively in the ratio 1:1:1 (solid lines); 1:1:2 (long-dashed); 1:2:1 (short-dashed) and 2:1:1 (dotted).

Finally, in Fig. 4 we plot the errors of the bias estimates for the four confidence profiles. Both mean and maximal errors are depicted, and neither displays a monotonic dependence on the number \(r\) of assessors per object.

Figure 4: (a) mean and (b) maximum errors of the bias estimates coming from the calibration-with-confidences method. The four scenarios depicted here correspond to relative probabilities of standard deviations of 5, 10 or 15, respectively, in the ratio 1:1:1 (solid lines); 1:1:2 (long-dashed); 1:2:1 (short-dashed) and 2:1:1 (dotted).

Case Study 2 - Grant Proposals

To demonstrate the differences that calibration and confidences can make to the outcomes of realistic contests, we applied the various methods to data based on a university’s internal competition for research funding. Although the data we use here are artificial, they are similar to those from a real exercise in which 43 proposals were evaluated by a panel of 11 assessors, each proposal being viewed by two experts (not all assessors viewed the same number of proposals). In such exercises, assessors are typically senior members of staff experienced in evaluating grant proposals for national funding bodies. The usual way that such assessment panels arrive at decisions is simple averaging, without robust account of varying degrees of expertise, bias and confidence. We consider a circumstance wherein the top ten proposals are funded.

To implement CWC, in addition to their score for each object, the assessors were asked to provide an estimate of confidence as “high”, “medium”, or “low”. These were translated to confidence levels \( c_{ao} = \lambda^2, 1, \lambda^{-2} \), respectively, with \(\lambda = 1.75\) (see SI). The parameter \(\lambda\) represents the ratio of uncertainties implied by a medium:high or low:medium confidence declaration, and the appropriate value depends on how the assessors interpret the qualitative descriptors. With \(\lambda = 1.75\), high-confidence scores are weighted about three times more than medium-confidence ones, and low-confidence scores about three times less. One might argue that \(\lambda = 1.4\) would be better, corresponding to weights of about 2 and 1/2, but 1.4 is a relatively small ratio of uncertainties and does not, we feel, correspond to how assessors interpret the qualitative confidence levels. In principle, we could infer \(\lambda\) from the data, but we would prefer panel chairs to request uncertainties rather than qualitative confidence descriptors, so we have not implemented such an inference. A value of \(\lambda\) significantly higher than 2 would be likely to give exaggerated weight to the high-confidence scores, while a value too close to 1 would pay insufficient attention to the confidence ratings. Indeed, setting \(\lambda = 1\) recovers Fisher’s IBA. In what follows, we compare CWC and IBA to each other and to the results from simple averaging. We denote the results from simple averaging by \( \mathcal{V}_{avg} \), the IBA values by \( \mathcal{V}_{IBA} \) and the CWC values by \( \mathcal{V}_{CWC} \).
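A minimal sketch of this translation, assuming a simple lookup keyed by the declared descriptor (the names LAMBDA, CONFIDENCE_WEIGHT and cwc_weight are illustrative):

```python
LAMBDA = 1.75                    # assumed ratio of uncertainties between adjacent confidence levels

CONFIDENCE_WEIGHT = {
    "high":   LAMBDA ** 2,       # ~3.06: trusted about three times more than a medium-confidence score
    "medium": 1.0,
    "low":    LAMBDA ** -2,      # ~0.33: trusted about three times less than a medium-confidence score
}

def cwc_weight(confidence: str) -> float:
    """Translate a declared confidence ("high", "medium" or "low") into the weight c_ao."""
    return CONFIDENCE_WEIGHT[confidence]
```

Setting LAMBDA to 1.0 collapses all three weights to 1, which is the sense in which \(\lambda = 1\) recovers Fisher’s IBA.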

The results coming from all three methods are depicted in Fig. 5. Based on the conclusions of Case Study 1, we believe that the CWC scores are a better indicator of the “true” values of the objects than the results of either of the other two methods. To facilitate comparison between the methods, we order the objects according to their \( \mathcal{V}_{CWC} \) values. In this way, the first object is ranked highest according to CWC and the \( \mathcal{V}_{CWC} \) scores (blue circles) decrease monotonically in the figure. The figure illustrates how closely the results from the other two methods track this reference. Indeed, one sees that the IBA scores (green crosses) are more tightly bunched around the CWC scores (blue circles) than are the simple averages (red “+” signs), reflecting the fact that IBA is a superior approach to simple averaging.

Figure 5: The estimates for object values for the 43 grant proposals. The various symbols represent the results from simple averaging of raw scores, \( \mathcal{V}_{avg} \) (red +); from additive incomplete block analysis, \( \mathcal{V}_{IBA} \) (green x), which involves calibration but not confidences; and from the new calibration method described here, \( \mathcal{V}_{CWC} \) (blue o), which accounts for declared confidences. The labels for the objects have been chosen in decreasing order of \( \mathcal{V}_{CWC} \).

To investigate further, we plot the differences between the outcomes from the various methods in Fig. 6. The data points represent the differences \( \mathcal{V}_{IBA} - \mathcal{V}_{avg} \) (blue o); \( \mathcal{V}_{CWC} - \mathcal{V}_{IBA} \) (red +); \( \mathcal{V}_{CWC} - \mathcal{V}_{avg} \) (green ×). Of course, what really matters is not the differences between the value estimates coming from the various schemes, but the differences in their resulting rankings. Notwithstanding that, a comparison between values is also meaningful, since the mean values and standard deviations coming from the three methods are comparable (Table 2). The maximal (mean) value of \( | \mathcal{V}_{CWC} - \mathcal{V}_{IBA} | \) is 12.3 (4.1) and that of \( | \mathcal{V}_{IBA} - \mathcal{V}_{avg} | \) is 16.5 (7.2), so the maximal (mean) difference between the CWC and IBA results is about 3/4 (1/2) of that between IBA and simple averaging. This may be interpreted as signalling that the improvement delivered by CWC over IBA is almost as significant as that delivered by IBA over simple averaging. In other words, taking relative confidences into account is nearly as important as calibrating. Their combined difference with respect to the basic averaging approach is indicated by the maximal and mean values of \( | \mathcal{V}_{CWC} - \mathcal{V}_{avg} | \), which are 23.6 and 9.6, respectively.
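The summary statistics quoted here are simply the maxima and means of absolute differences between two vectors of value estimates; a minimal sketch, assuming the three sets of estimates are held in equal-length arrays with illustrative names:

```python
import numpy as np

def diff_stats(x, y):
    """Maximum and mean absolute difference between two vectors of value estimates."""
    d = np.abs(np.asarray(x, dtype=float) - np.asarray(y, dtype=float))
    return d.max(), d.mean()

# For the estimates plotted in Fig. 6 (values quoted in the text above):
#   diff_stats(V_cwc, V_iba) -> (12.3, 4.1)
#   diff_stats(V_iba, V_avg) -> (16.5, 7.2)
#   diff_stats(V_cwc, V_avg) -> (23.6, 9.6)
```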

Table 2

The maximum, minimum, mean, range and standard deviation of the value estimates coming from the three different approaches.

Average, \( \mathcal{V}_{avg} \) IBA, \( \mathcal{V}_{IBA} \) CWC, \( \mathcal{V}_{CWC} \)
Maximum 87.0 85.3 88.8
Minimum 37.0 35.1 27.8
Mean 67.2 66.3 66.2
Range 50.0 50.2 61.0
St. dev. 12.1 11.7 13.6

Figure 6: The differences between the various value estimates: \( \mathcal{V}_{IBA} - \mathcal{V}_{avg} \) (blue o); \( \mathcal{V}_{CWC} - \mathcal{V}_{IBA} \) (red +); \( \mathcal{V}_{CWC} - \mathcal{V}_{avg} \) (green x). The biggest differences are between CWC and simple averaging (green x), where the differentials can be as much as 24 percentage points. The maximum difference between Fisher’s approach and simple averaging (blue o) is 17 percentage points, while that between Fisher’s method and CWC (red +) is 12 points. See Table 2.

As stated, the more meaningful comparison is between the rankings produced by each system because (a) scaling or shifting, such as that due to the degeneracy-breaking condition (12), plays no role in rankings within a given scheme and (b) rankings are used in real life to arrive at a decision on which proposals to support. Table 3 gives some of the outcomes of the competition according to the various assessment processes. The 43 grant proposals (objects) are labelled OA, OB, OC, ..., OZ, OA', OB', ..., OP', OQ'. (The actual labelling is of no relevance, but here we assign the designations so that, arranged alphabetically, they align with their ranks under the CWC scheme. This corresponds to the monotonic representation of the CWC results in Fig. 5.) In the table, the top ten proposals (recall that only these were funded) are ranked according to their \( \mathcal{V}_{avg} \), \( \mathcal{V}_{IBA} \) and \( \mathcal{V}_{CWC} \) values, representing the outcomes of the simple averaging, IBA and CWC approaches, respectively. Neglecting the importance of confidences by using the IBA scores instead of the CWC scores leads to objects OG and OI (in bold face in the CWC column of the table) slipping below the cut-off in favour of OP and OS (underlined in the IBA column). Simple averaging also overrates OP and OS. Neglecting calibration too (i.e., using simple averaging) would misidentify OB, OD, OE, OG and OJ (italicised in the CWC column) as being below par and would replace them by OP, OS, OM, OZ and OA' (underlined) in the top ten of the simply averaged scores. In other words, two of the top-ten projects would be misidentified as such if we used IBA instead of CWC, and five would be misidentified if we used simple averaging instead of CWC. Similarly, simple averaging would have four of the top-ten IBA-ranked objects failing to make the cut: OB, OD, OE and OJ.

Table 3

The 43 grant proposals are identified as OA, OB, OC, ..., OZ, OA', OB', ..., OP', OQ'. Here they are ranked according to their \( \mathcal{V}_{avg} \), \( \mathcal{V}_{IBA} \) and \( \mathcal{V}_{CWC} \) values, representing the outcomes of the simple averaging, IBA and CWC approaches. Proposals identified by CWC as belonging to the top ten but missed by IBA are highlighted in boldface. Proposals identified by IBA or CWC as belonging to the top ten but missed by simple averaging are highlighted in italics. Proposals which are overrated (mistakenly assigned to the top ten) are underlined.

Rank Average, \( \mathcal{V}_{avg} \) IBA, \( \mathcal{V}_{IBA} \) CWC, \( \mathcal{V}_{CWC} \)
1 OH (87.0) OA (85.3) OA (88.8)
2 OP (87.0) OC (84.9) OB (85.2)
3 OC (86.0) OH (80.6) OC (84.9)
4 OS (84.0) OP (79.7) OD (82.8)
5 OA (80.5) OD (79.5) OE (82.0)
6 OM (80.5) OB (79.4) OF (78.9)
7 OZ (80.5) OF (78.6) OG (78.4)
8 OF (79.5) OE (76.9) OH (77.3)
9 OA' (78.5) OS (76.7) OI (77.1)
10 OI (78.0) OJ (76.4) OJ (75.6)
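The comparisons drawn from Table 3 amount to set differences between top-ten lists. A minimal sketch, assuming each scheme’s scores are stored in a dict mapping proposal label to value estimate (the names top_ten, V_cwc, V_iba and V_avg are illustrative):

```python
def top_ten(values):
    """Labels of the ten highest-scoring proposals under one scheme.

    `values` maps a proposal label (e.g. "OA") to its value estimate.
    """
    return set(sorted(values, key=values.get, reverse=True)[:10])

# With dicts V_cwc, V_iba and V_avg built from the three ranking schemes:
#   top_ten(V_cwc) - top_ten(V_iba) -> {"OG", "OI"}                    (missed by IBA)
#   top_ten(V_iba) - top_ten(V_cwc) -> {"OP", "OS"}                    (overrated by IBA)
#   top_ten(V_cwc) - top_ten(V_avg) -> {"OB", "OD", "OE", "OG", "OJ"}  (missed by simple averaging)
```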

Figure 7: How confidence affects bias: The estimated biases of the various assessors are plotted according to the IBA scheme (green “x”) and the CWC scheme (blue “o”).

To understand the reasons for the differences between the outcomes of the three assessment methods, we have to look at the biases of the assessors. These are listed for the 11 assessors in Table 4 and plotted in Fig. 7. Again, we have named the assessors so that increasing alphabetical order corresponds to increasing bias. The monotonicity of the CWC data points in Fig. 7 is an artifact of this naming system. It aids visualisation and shows that, relative to CWC, the IBA approach underestimates the extent of bias when it is strongly positive or strongly negative.

Table 4

Assessor statistics: assessors are labelled AA, ..., AK in order of increasing CWC bias (fifth column). We also give the mean score awarded by each assessor, its standard deviation, and the IBA bias. The mean score awarded over all assessors and all objects was 66.9.

Assessor Mean St. dev. Bias (IBA) Bias (CWC)
AK 84.2 16.6 14.6 17.7
AJ 61.0 19.2 8.7 12.6
AI 64.6 10.0 0.0 9.7
AH 76.6 9.1 10.0 9.1
AG 71.9 6.9 8.8 8.8
AF 65.9 5.6 5.7 2.0
AE 72.3 15.5 2.8 1.1
AD 61.0 21.9 -5.0 -3.6
AC 62.3 9.6 -12.4 -15.6
AB 58.3 6.4 -12.8 -16.6
AA 49.1 12.1 -20.7 -25.2

With the biases to hand, we can understand why proposals OB, OD, OE, OG and OJ, which should be funded according to CWC, would be unlucky if the simple averaging procedure were adopted. Three of them (OB, OE and OJ) were assessed by the combination of AB and AD who, respectively, are the second and fourth most negatively biased assessors. The other two (OD and OG) were assessed by AA and AC, the most negatively and third most negatively biased assessors.

Proposals OP, OS, OM, OZ and OA', on the other hand, should not be funded according to CWC but would be under a simple averaging system. Three of them (OP, OS and OA') were assessed by AI and AK, the assessors with the third most positive bias and the strongest positive bias, respectively, according to CWC. Project OM was assessed by Assessors AG and AH who, together, also have a strong positive bias. Project OZ was assessed by the luckiest combination of all, namely AJ and AK.

We can also explain the differences between the CWC and IBA rankings as due to bias differences and confidences. Under IBA, objects OP and OS achieve top-ten spots in place of OG and OI, which CWC suggests deserve funding. Object OG was assessed by AA and AC, an assessor combination whose negative bias is underestimated by IBA. IBA fails to adjust fully for that negative bias but CWC does, allowing OG to achieve a top-ten spot.

To explain the relative positions of OP, OS and OI in the IBA and CWC systems, we have to acknowledge the importance of confidences. All three proposals were assessed by AI and AK. Assessor AK has a strong positive bias according to IBA, and the IBA adjustments for this move the three objects down the table relative to their positions in the simple averaging scheme. The positive bias of AK is even stronger according to CWC but, more importantly, Assessor AI also has a strong positive bias there. This moves the three proposals even further down the CWC table relative to their positions in the IBA table. However, while AI had a high degree of confidence in assessing OP and OS, that assessor had a low degree of confidence in assessing OI. The lack of confidence of AI reduces the “downward force” on OI in the CWC rankings, allowing that object to remain in the top ten.