Extending Retrieval in Terrier
It is easy to alter the retrieval process in Terrier, as there are many "hooks" at which external classes can be involved. Firstly, when writing your own application, you are free to render the results from Terrier in any way you choose. Results in Terrier are returned in the form of a ResultSet.
An application's interface with Terrier is through the Manager class. The Manager firstly pre-processes the query, by passing it through the configured TermPipeline. It then calls the Matching class, which is responsible for matching documents to the query and scoring them using a WeightingModel. There are two forms of hooks into the Matching process: firstly, the score of a term in a document can be modified by applying a TermScoreModifier; secondly, the overall score of a document for the entire query can be modified using a DocumentScoreModifier. These can be set using the matching.tsms and matching.dsms properties, respectively.
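As an illustration, such modifiers are typically listed in etc/terrier.properties; the class names below are hypothetical placeholders for your own implementations:

# comma-separated lists of score modifiers applied by Matching
# (MyTermScoreModifier and MyPriorScoreModifier are hypothetical placeholders)
matching.tsms=MyTermScoreModifier
matching.dsms=MyPriorScoreModifier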
Once the ResultSet has been returned to the Manager, there are two further phases, namely PostProcessing and PostFiltering. In PostProcessing, the ResultSet can be altered in any way - for example, QueryExpansion expands the query, and then calls Matching again to generate an improved ranking of documents. PostFiltering is simpler, allowing documents to be either included or excluded - this is ideal for interactive applications where users want to restrict the domain of the documents being retrieved.
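The outline below sketches a PostFilter that keeps only documents whose docno starts with a given prefix. The exact callback signatures vary between Terrier versions; this sketch assumes a new_query()/filter() pair and FILTER_OK/FILTER_REMOVE return constants as documented in the PostFilter javadoc, so treat it as an outline rather than drop-in code.

// An outline (not drop-in code) of a PostFilter that keeps only documents
// whose docno starts with a given prefix. It assumes the PostFilter interface
// provides new_query()/filter() callbacks and FILTER_OK/FILTER_REMOVE return
// constants; check the javadoc of uk.ac.gla.terrier.querying.PostFilter for
// the exact signatures in your version.
import uk.ac.gla.terrier.matching.ResultSet;
import uk.ac.gla.terrier.querying.Manager;
import uk.ac.gla.terrier.querying.PostFilter;
import uk.ac.gla.terrier.querying.SearchRequest;
import uk.ac.gla.terrier.structures.DocumentIndex;
import uk.ac.gla.terrier.structures.Index;

public class PrefixPostFilter implements PostFilter {

    // hypothetical restriction: only keep documents whose docno starts with this prefix
    protected final String prefix = "GX00";
    protected final DocumentIndex doi = Index.createIndex().getDocumentIndex();

    // called once before the filtering of a query's results begins
    public void new_query(Manager m, SearchRequest srq, ResultSet rs) {}

    // called once per retrieved document, in rank order
    public byte filter(Manager m, SearchRequest srq, ResultSet results, int rank, int docid) {
        String docno = doi.getDocumentNumber(docid);
        return docno != null && docno.startsWith(prefix) ? FILTER_OK : FILTER_REMOVE;
    }
}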
A DocumentScoreModifier can be used, for example, to integrate document priors into the retrieval strategy.
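A minimal sketch of such a modifier is shown below. It assumes the DocumentScoreModifier interface declares getName() and modifyScores(Index, MatchingQueryTerms, ResultSet); the length-based prior used here is purely illustrative.

// A minimal sketch: adds an illustrative log document-length prior to every score.
// Assumes the DocumentScoreModifier interface declares getName() and
// modifyScores(Index, MatchingQueryTerms, ResultSet).
import uk.ac.gla.terrier.matching.MatchingQueryTerms;
import uk.ac.gla.terrier.matching.ResultSet;
import uk.ac.gla.terrier.matching.dsms.DocumentScoreModifier;
import uk.ac.gla.terrier.structures.DocumentIndex;
import uk.ac.gla.terrier.structures.Index;

public class LengthPriorModifier implements DocumentScoreModifier {

    public String getName() {
        return "LengthPriorModifier";
    }

    public boolean modifyScores(Index index, MatchingQueryTerms queryTerms, ResultSet resultSet) {
        DocumentIndex doi = index.getDocumentIndex();
        int[] docids = resultSet.getDocids();
        double[] scores = resultSet.getScores();
        for (int i = 0; i < resultSet.getResultSize(); i++) {
            // illustrative prior: favour longer documents
            scores[i] += Math.log(1.0d + doi.getDocumentLength(docids[i]));
        }
        return true; // scores were modified
    }
}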
It is very easy to implement your own weighting models in Terrier. Simply write a new class that extends WeightingModel. Moreover, there are many example weighting models in uk.ac.gla.terrier.matching.models that you can use as a starting point.
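A minimal sketch of a custom model is given below. It assumes that getInfo() and the two score() methods are the abstract methods of WeightingModel, and that statistics such as numberOfDocuments and keyFrequency are set on the model by the framework; compare with the bundled models for the exact field names in your version.

// A minimal sketch of a custom weighting model: a simple TF x IDF scorer.
// It assumes WeightingModel declares getInfo() and the two score() methods,
// and that statistics such as numberOfDocuments, documentFrequency and
// keyFrequency are populated by the framework, as in the bundled models.
import uk.ac.gla.terrier.matching.models.WeightingModel;

public class SimpleTFIDF extends WeightingModel {

    public String getInfo() {
        return "SimpleTFIDF";
    }

    // score a single (term, document) pair using the statistics held in the model
    public double score(double tf, double docLength) {
        double idf = Math.log(numberOfDocuments / documentFrequency);
        return keyFrequency * tf * idf;
    }

    // score a single (term, document) pair with the term statistics passed in
    public double score(double tf, double docLength, double n_t, double F_t, double keyFrequency) {
        double idf = Math.log(numberOfDocuments / n_t);
        return keyFrequency * tf * idf;
    }
}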
Generic Divergence From Randomness (DFR) Weighting Models
The DFRWeightingModel class provides an interface for freely combining different components of the DFR framework. It breaks a DFR weighting model into three components: the basic model for randomness, the first normalisation by the after effect, and term frequency normalisation. Details of these three components can be found in the description of the DFR framework. The DFRWeightingModel class provides an alternative and more flexible way of using the DFR weighting models in Terrier. For example, to use the PL2 model, the name of the model, PL2, should be given in etc/trec.models, or set using the property trec.model. Alternatively, using the DFRWeightingModel class, we can replace PL2 with DFRWeightingModel(P, L, 2), where the three components of PL2 are specified in the brackets, separated by commas. If we do not want to use one of the three components, for example the first normalisation L, we can leave the space for that component blank (i.e. DFRWeightingModel(P, , 2)). We can also discard term frequency normalisation by removing the 2 between the brackets (i.e. DFRWeightingModel(P, , )). However, a basic randomness model must always be given.
The basic randomness models, the first normalisation methods, and the term frequency normalisation methods are included in packages uk.ac.gla.terrier.matching.models.basicmodel, uk.ac.gla.terrier.matching.models.aftereffect and uk.ac.gla.terrier.matching.models.normalisation, respectively. Many implementations of each are provided, allowing a vast number of DFR weighting models to be generated.
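For instance, either of the following trec.model settings selects PL2, the second one built explicitly from its DFR components:

# select PL2 directly ...
trec.model=PL2
# ... or assemble it from its DFR components
trec.model=DFRWeightingModel(P, L, 2)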
Sometimes you may want to implement an entirely new way of weighting documents that does not fit within the confines of the WeightingModel class. In this case, it is best to implement your own Matching subclass, in a similar manner to LMMatching, which is used to implement the Ponte-Croft language modelling approach.
// check whether term X occurs in the collection, and in how many documents
Index index = Index.createIndex();
Lexicon lex = index.getLexicon();
LexiconEntry le = lex.getLexiconEntry("X");
if (le != null)
    System.out.println("Term X occurs in " + le.n_t + " documents");
else
    System.out.println("Term X does not occur");
// compute the probability of term X in the collection (term frequency / total tokens)
Index index = Index.createIndex();
Lexicon lex = index.getLexicon();
LexiconEntry le = lex.getLexiconEntry("X");
double p = le == null
    ? 0.0d
    : (double) le.TF / index.getCollectionStatistics().getNumberOfTokens();
// iterate over the postings of document 10 in the direct index,
// resolving each term id back to its term via the Lexicon
Index index = Index.createIndex();
DirectIndex di = index.getDirectIndex();
Lexicon lex = index.getLexicon();
int[][] postings = di.getTerms(10);
for (int i = 0; i < postings[0].length; i++) {
    LexiconEntry le = lex.getLexiconEntry(postings[0][i]);
    System.out.println(le.term + " with frequency " + postings[1][i]);
}
// iterate over the postings of term Z in the inverted index,
// resolving each docid back to its docno via the DocumentIndex
Index index = Index.createIndex();
InvertedIndex ii = index.getInvertedIndex();
DocumentIndex doi = index.getDocumentIndex();
Lexicon lex = index.getLexicon();
LexiconEntry le = lex.getLexiconEntry("Z");
int[][] postings = ii.getDocuments(le);
for (int i = 0; i < postings[0].length; i++) {
    System.out.println(doi.getDocumentNumber(postings[0][i]) + " with frequency " + postings[1][i]);
}
Moreover, if you are not comfortable using Java, you can dump the indices of a collection using the --print* options of TrecTerrier. See the javadoc of TrecTerrier for more information.
Below is an example of using the querying functionality of Terrier.
String query = "term1 term2";
SearchRequest srq = queryingManager.newSearchRequest("queryID0", query);
srq.addMatchingModel("Matching", "PL2");
queryingManager.runPreProcessing(srq);
queryingManager.runMatching(srq);
queryingManager.runPostProcessing(srq);
queryingManager.runPostFilters(srq);
ResultSet rs = srq.getResultSet();
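The docids and scores can then be read directly from the ResultSet to render the results in your application, for example (a minimal sketch, resolving docids to docnos through the DocumentIndex, as in the earlier examples):

// a minimal sketch of rendering the ResultSet: print rank, docno and score
DocumentIndex doi = Index.createIndex().getDocumentIndex();
int[] docids = rs.getDocids();
double[] scores = rs.getScores();
for (int i = 0; i < rs.getResultSize(); i++) {
    System.out.println((i + 1) + " " + doi.getDocumentNumber(docids[i]) + " " + scores[i]);
}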