Package org.terrier.indexing.tokenisation
Provides classes related to the tokenisation of documents. Tokenisers are responsible for breaking chunks of text into words to be indexed. Different tokenisers may be used for different languages. In particular, two tokenisers are provided by Terrier:
- EnglishTokeniser - splits tokens on any character not in [A-Za-z0-9].
- UTFTokeniser - splits tokens on any character that does not satisfy at least one of the following:
  - Character.isLetterOrDigit() returns true
  - Character.getType() returns Character.NON_SPACING_MARK
  - Character.getType() returns Character.COMBINING_SPACING_MARK
Both tokenisers additionally apply the following processing (the code sketch after this list shows both tokenisers in use):
- Punctuation is removed.
- All terms are lowercased if the property lowercase is set (defaults to true).
- Tokens longer than max.term.length are dropped.
- Any term which has more than 4 digits is discarded.
- Any term which has more than 3 consecutive identical characters is discarded.
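As a concrete illustration of the splitting rules, the sketch below runs both tokenisers over the same accented string. It is a minimal sketch, assuming both classes can be instantiated directly with a no-argument constructor and that java.io.StringReader is imported; the expected tokens noted in the comments follow from the rules above and are not verified output.

// compare the two bundled tokenisers on text containing an accented character
String text = "Caffè Latte";

// EnglishTokeniser splits on any character outside [A-Za-z0-9],
// so the accented character breaks the first word
TokenStream en = new EnglishTokeniser().tokenise(new StringReader(text));
while(en.hasNext())
    System.out.println(en.next()); // expected (lowercased): caff, latte

// UTFTokeniser keeps any character for which Character.isLetterOrDigit()
// returns true, so the accented word survives as a single token
TokenStream utf = new UTFTokeniser().tokenise(new StringReader(text));
while(utf.hasNext())
    System.out.println(utf.next()); // expected (lowercased): caffè, latte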
Example Code
//get the default tokeniser, as set by property tokeniser
Tokeniser tokeniser = Tokeniser.getTokeniser();
String sentence = "This is a sentence.";
TokenStream toks = tokeniser.tokenise(new StringReader(sentence));
while(toks.hasNext())
{
    String token = toks.next();
}
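The tokeniser returned by Tokeniser.getTokeniser() is chosen by the tokeniser property. The following is a minimal sketch of switching to the UTF tokeniser; it assumes the property is set through org.terrier.utility.ApplicationSetup before getTokeniser() is first called, and that a bare class name is resolved against this package.

// select the UTF tokeniser through the tokeniser property, then use it as above
ApplicationSetup.setProperty("tokeniser", "UTFTokeniser");
Tokeniser tokeniser = Tokeniser.getTokeniser();
TokenStream toks = tokeniser.tokenise(new StringReader("Un café, s'il vous plaît"));
while(toks.hasNext())
{
    String token = toks.next();
    // e.g. pass the token on to the indexer
}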
Class Summary
- EnglishTokeniser - Tokenises text obtained from a text stream assuming English language.
- IdentityTokeniser - A Tokeniser implementation that returns the input as is.
- Tokeniser - A tokeniser class is responsible for tokenising a block of text.
- TokenStream - Represents a stream of tokens found by a tokeniser.
- UTFTokeniser - Tokenises text obtained from a text stream.
- UTFTwitterTokeniser - A tokeniser designed for use on tweets.