Divergence From Randomness (DFR) Framework 
The Divergence from Randomness (DFR) paradigm is a generalisation of one of the very first models of Information Retrieval, Harter's 2-Poisson indexing model [1]. The 2-Poisson model is based on the hypothesis that the level of treatment of the informative words is witnessed by an *elite set* of documents, in which these words occur to a relatively greater extent than in the rest of the documents.
On the other hand, there are words that do not possess an elite set of documents, and thus their frequency follows a random distribution, that is, the single-Poisson model. Harter's model was first explored as a retrieval model by Robertson, Van Rijsbergen and Porter [4]. It was subsequently combined with the standard probabilistic model by Robertson and Walker [3], giving birth to the family of BM IR models (among them the well-known BM25, which is at the basis of the Okapi system).
DFR models are obtained by instantiating the three components of the framework: selecting a basic randomness model, applying the first normalisation and normalising the term frequencies.
The DFR models are based on this simple idea: "The more the divergence of the within-document term frequency from its frequency within the collection, the more the information carried by the word t in the document d." In other words, the term weight is inversely related to the probability of the term frequency within the document d obtained by a model M of randomness:

weight(t|d) ∝ -log_{2} Prob_{M}(t ∈ d | Collection)    (1)
where the subscript M stands for the type of model of randomness employed to compute the probability. In order to choose the appropriate model M of randomness, we can use different urn models. IR is thus seen as a probabilistic process, which uses random drawings from urn models, or equivalently random placement of coloured balls into urns. Instead of urns we have documents, and instead of different colours we have different terms, where each term occurs with some multiplicity in the urns as any one of a number of related words or phrases, which are called tokens of that term. There are many ways to choose M, and each of them provides a basic DFR model. The basic models are shown in the following table.
Basic DFR Models

| Model | Description |
|-------|-------------|
| D | Divergence approximation of the binomial |
| P | Approximation of the binomial |
| B_{E} | Bose-Einstein distribution |
| G | Geometric approximation of the Bose-Einstein |
| I(n) | Inverse Document Frequency model |
| I(F) | Inverse Term Frequency model |
| I(n_{e}) | Inverse Expected Document Frequency model |
If the model M is the binomial distribution, then the basic model is P and computes the value:

weight(t|d) = -log_{2} ( (F choose tf) · p^{tf} · q^{F-tf} )    (2)

where:

- F is the term frequency of the term t in the whole collection
- tf is the term frequency of the term t in the document d
- N is the number of documents in the collection
- p = 1/N and q = 1 - p

Similarly, if the model M is the geometric distribution, then the basic model is G and computes the value:

weight(t|d) = -log_{2} ( (1/(1 + λ)) · (λ/(1 + λ))^{tf} )    (3)
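The G and P basic models can be sketched in Python. This is a minimal illustration, assuming λ = F/N and a Stirling-style approximation of the binomial for P; the function names are illustrative, not Terrier's API:

```python
import math

def basic_model_G(tf, F, N):
    """Geometric approximation of the Bose-Einstein model.

    tf: term frequency of t in the document
    F:  term frequency of t in the whole collection
    N:  number of documents in the collection
    """
    lam = F / N  # mean term frequency under the randomness hypothesis
    return -math.log2(1.0 / (1.0 + lam)) - tf * math.log2(lam / (1.0 + lam))

def basic_model_P(tf, F, N):
    """Poisson/Stirling approximation of -log2 of the binomial probability
    (assumes tf >= 1)."""
    lam = F / N
    return (tf * math.log2(tf / lam)
            + (lam + 1.0 / (12.0 * tf) - tf) * math.log2(math.e)
            + 0.5 * math.log2(2.0 * math.pi * tf))
```

Both functions grow with tf: the more a term occurs in a document relative to its collection-wide rate λ, the less likely the observation is under the randomness model, and the higher the weight.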
where λ = F/N.

When a rare term does not occur in a document, it has almost zero probability of being informative for the document. On the contrary, if a rare term has many occurrences in a document, then it has a very high probability (almost the certainty) of being informative for the topic described by the document. Similarly to Ponte and Croft's [2] language model, we include a risk component in the DFR models. If the term frequency in the document is high, then the risk of the term not being informative is minimal. In such a case Formula 1 gives a high value, but a minimal risk also has the negative effect of providing a small information gain. Therefore, instead of using the full weight provided by Formula 1, we tune or smooth the weight of Formula 1 by considering only the portion of it which is the amount of information gained with the term:

gain(t|d) = P_{risk} · ( -log_{2} Prob_{M}(t ∈ d | Collection) )    (4)
The more the term occurs in the elite set, the less its term frequency is due to randomness, and thus the smaller the probability P_{risk} is, that is:

P_{risk} = 1 - Prob(tf | Elite set of t)    (5)
We use two models for computing the information gain with a term within a document: the Laplace model L and the ratio of two Bernoulli processes B:

P_{risk} = 1/(tf + 1)    (L model)
P_{risk} = (F + 1)/(df · (tf + 1))    (B model)    (6)

where df is the number of documents containing the term.

Before using Formula 4, the document length dl is normalised to a standard length sl. Consequently, the term frequencies tf are also recomputed with respect to the standard document length, that is:

tfn = tf · (sl / dl)    (7)
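The two first-normalisation models can be sketched as follows. This is a minimal sketch assuming the commonly used forms P_{risk} = 1/(tf + 1) for the Laplace model and P_{risk} = (F + 1)/(df · (tf + 1)) for the Bernoulli model; the function names are hypothetical:

```python
def p_risk_laplace(tf):
    """Laplace L model: P_risk = 1 / (tf + 1)."""
    return 1.0 / (tf + 1.0)

def p_risk_bernoulli(tf, F, df):
    """Ratio of two Bernoulli processes, B model:
    P_risk = (F + 1) / (df * (tf + 1)).

    F:  term frequency in the whole collection
    df: number of documents containing the term
    """
    return (F + 1.0) / (df * (tf + 1.0))
```

Note how both decrease as tf grows, matching the intuition above: a term that occurs often in a document carries little risk of being uninformative, so only a small fraction of the Formula 1 weight is counted as gain per additional occurrence.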
A more flexible formula, referred to as normalisation 2, is given below:

tfn = tf · log_{2}(1 + c · sl/dl)    (8)
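Normalisation 2 can be sketched as below, assuming the form tfn = tf · log2(1 + c · sl/dl), with sl typically taken as the average document length; the function name is illustrative:

```python
import math

def normalise_tf(tf, dl, sl, c=1.0):
    """Normalisation 2: tfn = tf * log2(1 + c * sl / dl).

    tf: raw term frequency in the document
    dl: document length
    sl: standard document length (typically the average length)
    c:  hyper-parameter of the normalisation
    """
    return tf * math.log2(1.0 + c * sl / dl)
```

With c = 1 and dl = sl the term frequency is left unchanged; documents shorter than the standard length have their term frequencies boosted, longer ones penalised.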
The parameter c can be set automatically, as described by He and Ounis [6][7]. Here we give a list of estimated parameter values on TREC collections:
| Collection | Query type | c value |
|------------|------------|---------|
| disk1&2 and disk4&5 | title-only queries | 7.00 |
| disk1&2 and disk4&5 | description-only queries | 1.40 |
| disk1&2 and disk4&5 | title+description+narrative queries | 1.00 |
| WT2G | title-only queries | 10.99 |
| WT2G | description-only queries | 2.33 |
| WT2G | title+description+narrative queries | 4.80 |
| WT10G | title-only queries | 13.13 |
| WT10G | description-only queries | 2.65 |
| WT10G | title+description+narrative queries | 5.58 |
| .GOV | title-only queries (TREC11 topic distillation task) | 1.28 |
| .GOV | title-only queries (TREC12/13 topic distillation tasks) | 0.10 |
| .GOV2 | title-only queries | 15.34 |
| .GOV2 | title+description+narrative queries | 2.16 |
DFR models are finally obtained from the generating Formula 4, using a basic DFR model (such as Formula 2 or Formula 3) in combination with a model of information gain (such as Formula 6), and normalising the term frequency (as in Formula 7 or Formula 8).
In the distribution of Terrier, the available DFR models are the following:
| Model | Description |
|-------|-------------|
| BB2 | Bose-Einstein model with Bernoulli after-effect and normalisation 2. |
| IFB2 | Inverse Term Frequency model with Bernoulli after-effect and normalisation 2. |
| In_expB2 | Inverse Expected Document Frequency model with Bernoulli after-effect and normalisation 2. The logarithms are base 2. This model can be used for classic ad-hoc tasks. |
| In_expC2 | Inverse Expected Document Frequency model with Bernoulli after-effect and normalisation 2. The logarithms are base e. This model can be used for classic ad-hoc tasks. |
| InL2 | Inverse Document Frequency model with Laplace after-effect and normalisation 2. This model can be used for tasks that require early precision. |
| PL2 | Poisson model with Laplace after-effect and normalisation 2. This model can be used for tasks that require early precision [7, 8]. |
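As an illustration of how the three components compose, here is a sketch of a PL2-style score: the Poisson basic model (with a Stirling-style 1/(12·tfn) correction), the Laplace after-effect, and normalisation 2. This is a didactic approximation, not Terrier's actual implementation:

```python
import math

def pl2_score(tf, dl, avg_dl, F, N, c=1.0):
    """Didactic PL2-style score (assumes tf >= 1): Poisson basic model,
    Laplace after-effect, normalisation 2. Not Terrier's implementation."""
    lam = F / N                                   # mean term frequency in the collection
    tfn = tf * math.log2(1.0 + c * avg_dl / dl)   # normalisation 2
    inf1 = (tfn * math.log2(tfn / lam)            # Poisson (P) basic model
            + (lam + 1.0 / (12.0 * tfn) - tfn) * math.log2(math.e)
            + 0.5 * math.log2(2.0 * math.pi * tfn))
    p_risk = 1.0 / (tfn + 1.0)                    # Laplace after-effect (L)
    return p_risk * inf1
```

The same composition pattern yields the other models in the table: swap the Poisson basic model for Bose-Einstein or inverse document frequency, and the Laplace after-effect for the Bernoulli one.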
Another provided weighting model is a derivation of the BM25 formula from the Divergence From Randomness framework.
The query expansion mechanism extracts the most informative terms from the top-returned documents as the expanded query terms. In this expansion process, terms in the top-returned documents are weighted using a particular DFR term weighting model. Currently, Terrier deploys the Bo1 (Bose-Einstein 1), Bo2 (Bose-Einstein 2) and KL (Kullback-Leibler) term weighting models. The DFR term weighting models follow a parameter-free approach by default.
An alternative approach is Rocchio's query expansion mechanism. A user can switch to the latter approach by setting parameter.free.expansion to false in the terrier.properties file. The default value of the parameter beta of Rocchio's approach is 0.4. To change this parameter, the user needs to specify the property rocchio_beta in the terrier.properties file.
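For example, to switch to Rocchio's mechanism and change its beta, the two properties mentioned above would be set in terrier.properties like this (0.7 is only an illustrative value):

```
parameter.free.expansion=false
rocchio_beta=0.7
```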
A different interpretation of the gain-risk generating Formula 4 can be given through the notion of cross-entropy. Shannon's mathematical theory of communication in the 1940s [5] established that the minimal average code word length is about the value of the entropy of the probabilities of the source words. This result is known as the Noiseless Coding Theorem. The term noiseless refers to the theorem's assumption that there is no possibility of errors in transmitting words. Nevertheless, it may happen that different sources about the same information are available. In general, each source produces a different coding. In such cases, we can compare the two sources of evidence using cross-entropy. The cross-entropy is minimised when the two observations yield the same probability distribution, and in that case the cross-entropy coincides with Shannon's entropy.
We possess two tests of randomness: the first is P_{risk}, relative to the term distribution within its elite set, while the second, Prob_{M}, is relative to the document with respect to the entire collection. The first distribution can be treated as a new source of the term distribution, while the coding of the term with the term distribution within the collection can be considered the primary source. The cross-entropy relation of these two probability distributions is:

P_{risk} · ( -log_{2} Prob_{M}(t ∈ d | Collection) )    (9)
Relation 9 is indeed Relation 4 of the DFR framework. DFR models can be equivalently defined as the divergence of two probabilities measuring the amount of randomness of two different sources of evidence.
For more details about the Divergence from Randomness framework, you may refer to the PhD thesis of Gianni Amati, or to Amati and Van Rijsbergen's paper "Probabilistic models of information retrieval based on measuring divergence from randomness", ACM TOIS 20(4):357-389, 2002.
[1] S.P. Harter. A probabilistic approach to automatic keyword indexing. PhD thesis, Graduate Library, The University of Chicago, Thesis No. T25146, 1974.

Copyright © 2015 University of Glasgow. All Rights Reserved.