File: StandardAnalyzer.java
Doc Category: API Doc Apache Lucene 1.9
Size: 2599
Date: Mon Feb 20 09:19:46 GMT 2006
Package: org.apache.lucene.analysis.standard

StandardAnalyzer

public class StandardAnalyzer extends Analyzer
Filters {@link StandardTokenizer} with {@link StandardFilter}, {@link LowerCaseFilter} and {@link StopFilter}, using a list of English stop words.
version
$Id: StandardAnalyzer.java 219090 2005-07-14 20:36:28Z dnaber $
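
Typical use is to hand an instance to an IndexWriter, which invokes the tokenStream method below for each tokenized field. A minimal sketch against the Lucene 1.9-era API; the class name IndexingExample, the field name "content" and the sample text are illustrative only:

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.store.RAMDirectory;

    public class IndexingExample {
      public static void main(String[] args) throws Exception {
        // Analyzer with the default English stop words (STOP_WORDS).
        StandardAnalyzer analyzer = new StandardAnalyzer();

        // Index one document into an in-memory directory.
        RAMDirectory dir = new RAMDirectory();
        IndexWriter writer = new IndexWriter(dir, analyzer, true);

        Document doc = new Document();
        doc.add(new Field("content",
                          "The quick brown fox jumped over the lazy dog",
                          Field.Store.YES, Field.Index.TOKENIZED));
        writer.addDocument(doc);
        writer.close();
      }
    }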

Fields Summary
private Set
stopSet
public static final String[]
STOP_WORDS
An array containing some common English words that are usually not useful for searching.
Constructors Summary
public StandardAnalyzer()
Builds an analyzer with the default stop words ({@link #STOP_WORDS}).

    this(STOP_WORDS);
  
public StandardAnalyzer(Set stopWords)
Builds an analyzer with the given stop words.

    stopSet = stopWords;
  
public StandardAnalyzer(String[] stopWords)
Builds an analyzer with the given stop words.

    stopSet = StopFilter.makeStopSet(stopWords);
  
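The two constructors above take the same information in different shapes: a precomputed Set (for example one produced by StopFilter.makeStopSet) or a plain String[]. A small sketch of the String[] form that extends the default STOP_WORDS with two arbitrary extra terms; the class name CustomStopWordsExample is illustrative:

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;

    public class CustomStopWordsExample {
      public static void main(String[] args) {
        String[] defaults = StandardAnalyzer.STOP_WORDS;
        String[] extra = { "http", "www" };   // example additions only

        // Merge the default list with the extra terms.
        String[] stopWords = new String[defaults.length + extra.length];
        System.arraycopy(defaults, 0, stopWords, 0, defaults.length);
        System.arraycopy(extra, 0, stopWords, defaults.length, extra.length);

        Analyzer analyzer = new StandardAnalyzer(stopWords);
      }
    }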
public StandardAnalyzer(File stopwords) throws IOException
Builds an analyzer with the stop words from the given file.

see
WordlistLoader#getWordSet(File)

    stopSet = WordlistLoader.getWordSet(stopwords);
  
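A minimal sketch of this form, assuming a hypothetical word-list file named stopwords.txt with one stop word per line, as read by the WordlistLoader referenced above; the class name FileStopWordsExample is illustrative:

    import java.io.File;
    import java.io.IOException;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;

    public class FileStopWordsExample {
      public static void main(String[] args) throws IOException {
        // Hypothetical word-list file, one stop word per line.
        File stopFile = new File("stopwords.txt");
        Analyzer analyzer = new StandardAnalyzer(stopFile);
      }
    }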
public StandardAnalyzer(Reader stopwords) throws IOException
Builds an analyzer with the stop words from the given reader.

see
WordlistLoader#getWordSet(Reader)

    stopSet = WordlistLoader.getWordSet(stopwords);
  
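The Reader form is handy when the word list is not a file on disk, for instance a string built at runtime or a classpath resource. A sketch using a StringReader with three example words; the class name ReaderStopWordsExample is illustrative:

    import java.io.IOException;
    import java.io.StringReader;
    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;

    public class ReaderStopWordsExample {
      public static void main(String[] args) throws IOException {
        // Example word list supplied through a Reader, one word per line.
        Analyzer analyzer = new StandardAnalyzer(new StringReader("the\nand\nof"));
      }
    }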
Methods Summary
public org.apache.lucene.analysis.TokenStream tokenStream(java.lang.String fieldName, java.io.Reader reader)
Constructs a {@link StandardTokenizer} filtered by a {@link StandardFilter}, a {@link LowerCaseFilter} and a {@link StopFilter}.

    TokenStream result = new StandardTokenizer(reader);
    result = new StandardFilter(result);
    result = new LowerCaseFilter(result);
    result = new StopFilter(result, stopSet);
    return result;
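
The effect of the chain (tokenize, standard-filter, lower-case, remove stop words) can be seen by consuming the returned stream directly. A minimal sketch using the Lucene 1.9-era Token.termText() accessor; the class name TokenStreamExample and the sample sentence are illustrative, and the loop should print quick, brown and fox once the stop word "the" has been removed:

    import java.io.IOException;
    import java.io.StringReader;
    import org.apache.lucene.analysis.Token;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.standard.StandardAnalyzer;

    public class TokenStreamExample {
      public static void main(String[] args) throws IOException {
        StandardAnalyzer analyzer = new StandardAnalyzer();

        // The field name only selects the analysis chain; any name works here.
        TokenStream stream = analyzer.tokenStream("content",
            new StringReader("The Quick Brown Fox"));

        // next() returns null once the stream is exhausted.
        for (Token token = stream.next(); token != null; token = stream.next()) {
          System.out.println(token.termText());
        }
        stream.close();
      }
    }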