3 Score Distributions

Let us now elaborate on the form of the two densities $ P(s\vert 1)$ and $ P(s\vert 0)$ of Section 2.1 and their estimation.3

Score distributions have been modeled since the early years of IR with various known distributions [21,20,7,6]. However, the trend during the last few years, which has started in [3] and followed up in [8,1,2,14,22], has been to model score distributions by a mixture of normal-exponential densities: normal for relevant, exponential for non-relevant.

Despite its popularity, it was recently pointed out that, under a hypothesis of how systems should score and rank documents, this particular normal-exponential mixture presents a theoretical anomaly [17]. In practice, nevertheless, it has stood the test of time.

The reader should keep in mind that the normal-exponential mixture fits some retrieval models better than others, or it may not fit some data at all. As a rule of thumb, good candidates for a fit are scoring functions in the form of a linear combination of query-term weights, e.g. tf.idf, cosine similarity, and some probabilistic models [2]. Also, long queries [2] and good queries/systems [14] seem to improve the fit.

In this paper, we do not set out to investigate alternative mixtures. We theoretically extend and refine the current model in order to account for practical situations, deal with its theoretical anomaly, and improve its computation. We also check its goodness-of-fit to empirical data using a statistical test; a check that, as far as we are aware, has not been done before. At the same time, we explicitly state all parameters involved, try to minimize their number, and find a robust set of values for them.


3.1 The Normal-Exponential Model

Let us consider a general retrieval model which in theory produces scores in $ [s_{\min}, s_{\max}]$ , where $ s_{\min} \in \mathbb{R} \cup \{-\infty\}$ and $ s_{\max} \in \mathbb{R} \cup \{+\infty\}$ . Since the exponential distribution has semi-infinite support, its use restricts the applicability of the s-d model to those retrieval models for which $ s_{\min} \in \mathbb{R}$ . The two densities are given by

$\displaystyle P(s\vert 1) = \frac{1}{\sigma} \, \phi\left(\frac{s-\mu}{\sigma}\right) \qquad \sigma>0, \; \mu,s \in \mathbb{R}$ (7)

$\displaystyle P(s\vert 0) = \psi(s-s_{\min};\lambda) \qquad \lambda > 0, \; s\ge s_{\min}$ (8)

where $ \phi(.)$ is the density function of the standard normal distribution, i.e. with a mean of 0 and standard deviation of 1, and $ \psi(.)$ is the standard exponential density (Equations 18-19 in Appendix B). The corresponding cdfs are given by Equations 20 and 22. The total score distribution is written as

$\displaystyle P(s) = \left(1-G_n\right) P(s\vert 0) + G_n P(s\vert 1)$    

where $ G_n \in [0,1]$ . Hence, there are four parameters to estimate: $ \lambda$ , $ \mu$ , $ \sigma$ , and $ G_n$ .
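As a concrete sketch, the total density $ P(s)$ can be evaluated numerically. The parameter values below are purely illustrative, not taken from the paper's experiments; the shifted-exponential form of Equation 8 is assumed:

```python
import numpy as np
from scipy.stats import norm, expon

def mixture_pdf(s, G, mu, sigma, lam, s_min):
    """Total score density: (1 - G) * P(s|0) + G * P(s|1)."""
    p_rel = norm.pdf(s, loc=mu, scale=sigma)           # Equation (7)
    p_nrel = expon.pdf(s, loc=s_min, scale=1.0 / lam)  # Equation (8), shifted exponential
    return (1.0 - G) * p_nrel + G * p_rel

# Illustrative values for lambda, mu, sigma, and the mixing weight G_n.
s = np.linspace(0.0, 10.0, 1001)
p = mixture_pdf(s, G=0.05, mu=6.0, sigma=1.0, lam=1.5, s_min=0.0)

# Trapezoidal rule: the mixture integrates to ~1 over the score range.
mass = float(np.sum((p[:-1] + p[1:]) / 2.0) * (s[1] - s[0]))
```

Note that the normal component puts a small residual mass below $ s_{\min}$ ; this is exactly what the truncated models of Section 3.3 account for explicitly.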

3.2 Problems of the Normal-Exponential Model

Over the years, two main problems of the normal-exponential model have been identified. We describe each one of them, and then introduce new models which eliminate the first problem and deal partly with the other.

3.2.1 Support Incompatibility

Although we already generalized somewhat above by introducing a shifted exponential, the mixture, as it has been used in all related literature so far, has a support incompatibility problem: while the exponential is defined at or above some $ s_{\min}$ , the normal has the full real axis as support. This is a theoretical problem which is solved by the new models we will introduce.


3.2.2 Recall-Fallout non-Convexity

From the point of view of how scores or rankings of IR systems should be, Robertson [17] formulates the recall-fallout convexity hypothesis:

For all good systems, the recall-fallout curve (as seen from [...] recall=1, fallout=0) is convex.
Similar hypotheses can be formulated as conditions on other measures: e.g., the probability of relevance should be monotonically increasing with the score, and the same should hold for smoothed precision. Although in reality these conditions may not always be satisfied, they are expected to hold for good systems, i.e. those producing rankings satisfying the probability ranking principle (PRP), because their failure implies that the systems can easily be improved.

As an example, let us consider smoothed precision. If it declines as the score increases over part of the score range, that part of the ranking can be improved by a simple random re-ordering [19]. This is equivalent to "forcing" the two underlying distributions to be uniform (i.e., to have linearly increasing cdfs) in that score range. It replaces the offending part of the precision curve with a flat one, the least that can be done, improving the overall effectiveness of the system.
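A minimal sketch of such a random re-ordering; the function name and rank range are illustrative, not from the paper:

```python
import random

def randomize_subranking(ranking, lo, hi, seed=0):
    """Randomly re-order the sub-ranking at ranks [lo, hi),
    leaving the rest of the ranked list untouched; this flattens
    a declining precision curve over the offending range."""
    docs = list(ranking)
    segment = docs[lo:hi]
    random.Random(seed).shuffle(segment)  # fixed seed for reproducibility
    docs[lo:hi] = segment
    return docs
```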

Such hypotheses put restrictions on the relative forms of the two underlying distributions. The normal-exponential mixture always violates such conditions, but only at the two ends of the score range. Although the low-end scores are of little importance, the top of the ranking is very significant, especially for low-$ R$ topics. The problem is a manifestation of the fact that an exponential tail extends further than a normal one.
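For the probability of relevance, the violation point can be made explicit with a short computation (not spelled out in the text; symbols as in Section 3.1). The probability of relevance increases with $ s$ exactly when the ratio of the relevant to the non-relevant density does, and

$\displaystyle \frac{d}{ds} \, \log \frac{\frac{1}{\sigma} \, \phi\left(\frac{s-\mu}{\sigma}\right)}{\psi(s-s_{\min};\lambda)} = -\frac{s-\mu}{\sigma^2} + \lambda$    

which is negative for all $ s > \mu + \lambda\sigma^2$ , so the condition is violated on the entire upper tail beyond that score; the corresponding violation point for precision generally differs.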

To complicate matters further, our data suggest that such conditions are violated at a different score $ s_c$ for the probability of relevance than for precision. Since the $ F$ -measure we are interested in is a combination of recall and precision (and recall by definition cannot have a similar problem), we find $ s_c$ for precision. We force the distributions to comply with the hypothesis only when $ s_c < s_1$ , where $ s_1$ is the score of the top document; otherwise, the theoretical anomaly does not affect the observed score range. If $ s_{\max}$ is finite, two uniform distributions can be used in $ [s_c,s_{\max}]$ , as mentioned earlier. Alternatively, preserving a theoretical support in $ [s_{\min},+\infty)$ , the distribution of relevant documents can be forced to an exponential in $ [s_c,+\infty)$ with the same $ \lambda$ as that of the non-relevant. We apply the latter.

In fact, rankings can be further improved by reversing the offending sub-rankings; this forces precision to increase with the score, leading to better effectiveness than randomly re-ordering the sub-ranking. However, the big question here is whether the initial ranking satisfies the PRP or not. If it does, then the problem is an artifact of the normal-exponential model, and reversing the sub-ranking may actually harm performance. If it does not, then the problem is inherent in the scoring formula producing the ranking. In the latter case, the normal-exponential model cannot be rejected on theoretical grounds, and it may even be used to detect the anomaly and improve rankings.

It is difficult, however, to determine whether a single ranking satisfies the PRP or not; it has been well known since the early years of IR that precision for single queries is erratic, especially at early ranks, justifying the use of interpolated precision. On the one hand, according to interpolated precision all rankings satisfy the PRP, but this is forced by the interpolation. On the other hand, according to simple precision some of our rankings do not seem to satisfy the PRP, but we cannot determine this for sure. We would expect, however, that precision averaged over all topics should produce a more or less declining curve with increasing rank. Figure 1 suggests that the off-the-shelf system we currently use produces rankings that, on average, may not satisfy the PRP for ranks 5,000 to 10,000.

Figure: Precision, Recall, and $ F_1$ --as these are estimated by TREC's deep-sampling method--averaged over all 26 topics of TREC Legal 2008. By rank 100,000, precision is still flat rather than declining, recall is still rising, so $ F_1$ has not yet peaked; this suggests that there are optimal $ K$ 's larger than 100,000. Systems correctly predicting $ K$ 's larger than 100,000 do not get credit.
\includegraphics[width=4.0in]{plots/avg_estprf_rank.ps}

Consequently, we leave open the question of whether the problem is inherent in some scoring functions or introduced by the combined use of normal and exponential distributions. Being conservative, we just randomize the offending sub-rankings rather than reversing them. The impact of this on thresholding is that the s-d method turns "blind" inside the upper offending range: going down the corresponding ranks, precision is flat while recall naturally rises, so the optimal $ F_1$ threshold can only be below the range.

We will use new models that, although they do not eliminate the problem, do not always violate the conditions imposed by the PRP (irrespective of whether it holds or not).


3.3 The Truncated Normal-Exponential Model

In order to enforce support compatibility, Arampatzis et al. [5] introduced the truncated models which we discuss in this and the next section. They used a normal distribution left-truncated at $ s_{\min}$ for $ P(s\vert 1)$ . With this modification, we reach a new mixture model for score distributions with a semi-infinite support in $ [s_{\min}, +\infty),\, s_{\min} \in \mathbb{R}$ .

In practice, however, scores may also be naturally bounded (by the retrieval model) or truncated from above. For example, cosine similarity scores are naturally bounded at 1. Scores from probabilistic models with a (theoretical) support in $ (-\infty,+\infty)$ are usually mapped to the bounded $ (0,1)$ via a logistic function. Other retrieval models may simply truncate at some maximum number for practical reasons. Consequently, it makes sense to introduce a right-truncation as well, for both the normal and exponential densities.

Depending on how one wants to treat the leftovers due to the truncations, two new models may be considered.


3.3.1 Theoretical Truncation

Figure: Theoretical truncation.
\includegraphics[width=4.0in]{plots/theotrunc.ps}

There are no leftovers (Figure 2). The underlying theoretical densities are assumed to be the truncated ones, normalized accordingly to integrate to one:

$\displaystyle P(s\vert 1) = \frac{\frac{1}{\sigma} \, \phi \left( \frac{s-\mu} {\sigma} \right)} {\Phi(\beta) - \Phi(\alpha) } \qquad s \in [s_{\min}, s_{\max}]$ (9)

$\displaystyle P(s\vert 0) = \frac{\psi(s-s_{\min};\lambda)} {\Psi(s_{\max}-s_{\min};\lambda)} \quad s \in [s_{\min}, s_{\max}]$ (10)

where

$\displaystyle \alpha = \frac{s_{\min}-\mu} {\sigma} \qquad \beta = \frac{s_{\max}-\mu} {\sigma}$ (11)

$ \Phi(.)$ and $ \Psi(.)$ are the cdfs of $ \phi(.)$ and $ \psi(.)$ respectively (Equations 20 and 22). The cdfs of the above $ P(s\vert 1)$ and $ P(s\vert 0)$ are given by Equations 21 and 23, respectively.
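The theoretically truncated densities of Equations 9-11 map directly onto standard truncated distributions; a sketch with illustrative parameter values (not from the paper's experiments):

```python
from scipy.stats import truncnorm, truncexpon

# Illustrative parameters for a score range [s_min, s_max]:
s_min, s_max = 0.0, 10.0
mu, sigma, lam = 6.0, 1.0, 1.5

# Equation (11): standardized truncation points of the normal.
alpha, beta = (s_min - mu) / sigma, (s_max - mu) / sigma

rel = truncnorm(alpha, beta, loc=mu, scale=sigma)                     # Equation (9)
nrel = truncexpon(lam * (s_max - s_min), loc=s_min, scale=1.0 / lam)  # Equation (10)

# Both densities are normalized to integrate to one over [s_min, s_max]:
print(rel.cdf(s_max) - rel.cdf(s_min))    # 1.0
print(nrel.cdf(s_max) - nrel.cdf(s_min))  # 1.0
```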

Let $ S_\mathrm{rel}$ and $ S_\mathrm{nrel}$ be the random variables corresponding to the relevant and non-relevant document scores respectively. The expected value and variance of $ S_\mathrm{rel}$ are given by Equations 24 and 25 in Appendix B.3. For $ S_\mathrm{nrel}$ , the corresponding Equations are 26 and 27 in Appendix B.4.


3.3.2 Technical Truncation

Figure: Technical truncation.
\includegraphics[width=4.0in]{plots/techtrunc.ps}

The underlying theoretical densities are not truncated; the truncation is of a "technical" nature. The leftovers are accumulated at the two truncation points, introducing discontinuities (Figure 3). For the normal, the leftovers are easily calculated:

$\displaystyle P(s\vert 1) = \begin{cases}\Phi(\alpha) \, \delta(s-s_{\min}) & s=s_{\min} \\ \frac{1}{\sigma} \, \phi\left(\frac{s-\mu}{\sigma}\right) & s \in (s_{\min},s_{\max}) \\ \left(1-\Phi(\beta)\right) \, \delta(s-s_{\max}) & s=s_{\max} \end{cases}$    

where $ \delta(.)$ is Dirac's delta function. For the exponential, while the leftovers at the right side are determined by the right truncation, calculating those at the left side requires assuming that the exponential extends below $ s_{\min}$ to some new minimum score $ s_{\min}'$ :

$\displaystyle P(s\vert 0) = \begin{cases}\Psi(s_{\min}-s_{\min}';\lambda) \, \delta(s-s_{\min}) & s=s_{\min} \\ \psi(s-s_{\min}';\lambda) & s \in (s_{\min},s_{\max}) \\ \left(1-\Psi(s_{\max}-s_{\min}';\lambda)\right) \, \delta(s-s_{\max}) & s=s_{\max} \end{cases}$    

The cdfs corresponding to the above densities are:

$\displaystyle F(s\vert 1) = \begin{cases}\Phi(\frac{s-\mu}{\sigma}) & s \in [s_{\min},s_{\max}) \\ 1 & s=s_{\max} \end{cases}$    

$\displaystyle F(s\vert 0) = \begin{cases}\Psi(s-s_{\min}';\lambda) & s \in [s_{\min},s_{\max}) \\ 1 & s=s_{\max} \end{cases}$    
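As a sketch, the technically truncated cdf $ F(s\vert 1)$ above can be written as a small function (the name `F_rel` is ours, for illustration); the point masses at the truncation points appear as jumps:

```python
import numpy as np
from scipy.stats import norm

def F_rel(s, mu, sigma, s_min, s_max):
    """Technically truncated normal cdf: the untruncated Phi inside
    [s_min, s_max); the leftover mass 1 - Phi(beta) sits as a point
    mass at s_max, so the cdf jumps to 1 there.  At s_min the cdf
    already equals Phi(alpha), reflecting the point mass there."""
    s = np.asarray(s, dtype=float)
    out = norm.cdf((s - mu) / sigma)
    out = np.where(s >= s_max, 1.0, out)  # jump at the right truncation
    out = np.where(s < s_min, 0.0, out)   # no support below s_min
    return out
```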

The equations in this section simplify somewhat when their parameters are estimated from down-truncated ranked lists, as we will see in Section 4.1; in that case, we do not need to calculate $ s_{\min}'$ . If, for some measure, the number of non-relevant documents is required, it can simply be estimated as $ n-R$ .

The expected values and variances of $ S_\mathrm{rel}$ and $ S_\mathrm{nrel}$ , if needed, have to be calculated starting from Equations 24-27 and taking into account the contribution of the discontinuities. We do not give the formulas in this paper.

3.3.3 The Relation Between the Truncated Models

For both models the right truncation is optional. For $ s_{\max} = +\infty$ , we get $ \Phi(\beta) = \Psi(s_{\max}-s_{\min}';\lambda) = 1$ , leading to left-truncated models; this accommodates retrieval models with scoring support in $ [s_{\min}, +\infty),\, s_{\min} \in \mathbb{R}$ . This is the maximum range that can be achieved with the current mixture, since the restriction of a finite $ s_{\min}$ is imposed by the use of the exponential.

When $ s_{\min} \ll \mu \ll s_{\max}$ then $ \Phi(\alpha) \approx 0$ and $ \Phi(\beta) \approx 1$ . If additionally $ s_{\min}' = s_{\min}$ , then $ \Psi(s_{\min}-s_{\min}';\lambda) = 0$ and $ \Psi(s_{\max}-s_{\min}';\lambda) \approx 1$ . Thus we can well-approximate the standard normal-exponential model. Consequently, using a truncated model is a valid choice even when truncations are insignificant.
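A quick numerical check of this approximation, with illustrative values satisfying $ s_{\min} \ll \mu \ll s_{\max}$ (not from the paper's experiments):

```python
import numpy as np
from scipy.stats import norm, truncnorm

s_min, s_max, mu, sigma = 0.0, 10.0, 5.0, 1.0
alpha, beta = (s_min - mu) / sigma, (s_max - mu) / sigma

print(norm.cdf(alpha))       # Phi(alpha): about 3e-7, effectively 0
print(1.0 - norm.cdf(beta))  # 1 - Phi(beta): about 3e-7, effectively 0

# Largest pointwise gap between the truncated and the full normal pdf:
s = np.linspace(s_min, s_max, 101)
gap = float(np.max(np.abs(truncnorm.pdf(s, alpha, beta, loc=mu, scale=sigma)
                          - norm.pdf(s, loc=mu, scale=sigma))))
```

Here `gap` comes out below 1e-6, so the truncated and the standard models practically coincide when the truncations are insignificant.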

From a theoretical point of view, it may be difficult to imagine a process producing a truncated normal directly. Truncated normal distributions usually result from censoring, meaning that the out-truncated data do actually exist. In this view, the technically truncated model may correspond better to the IR reality. This is also in line with the theoretical arguments for the existence of a full normal distribution [2].

Concerning convexity, neither of the truncated models always violates such conditions. Consider the problem at the top score range $ (s_c,+\infty)$ . When $ s_c \ge s_{\max}$ , the problem is out-truncated in both models, while in theory it always exists in the original model. The improvement so far is of a rather theoretical nature; in practice, we should be interested in what happens when $ s_c < s_1$ . Our extended experiments (not reported in this paper) suggest that truncation helps estimation in producing higher numbers of convex fits within the observed score range. Consequently, the benefits are also practical.

These modifications generalize the original model, and the truncated models indeed produce better fits on our data. In fact, the truncated distributions should have been used in the past, even for the original normal-exponential model, during parameter estimation from down-truncated rankings.



Footnotes

... estimation.3
Probabilistic foundations necessary to follow the discussion can be found in several sources [e.g., 10,16,11]. Where the derivation of a formula is obvious or can easily be found in the literature, we give the result directly; otherwise, we show its derivation in Appendix B.