Similarity

public abstract class Similarity extends Object implements Serializable

Expert: Scoring API.
Subclasses implement search scoring.
The score of query q for document d correlates to the
cosine-distance or dot-product between document and query vectors in a
Vector Space Model (VSM) of Information Retrieval.
A document whose vector is closer to the query vector in that model is scored higher.
The score is computed as follows:

score(q,d) = coord(q,d) · queryNorm(q) · ∑ ( tf(t in d) · idf(t)² · t.getBoost() · norm(t,d) )
                                        t in q

where
- tf(t in d) correlates to the term's frequency, defined as the number of times term t appears in the currently scored document d.
Documents that have more occurrences of a given term receive a higher score.
The default computation for tf(t in d) in
{@link org.apache.lucene.search.DefaultSimilarity#tf(float) DefaultSimilarity} is:

{@link org.apache.lucene.search.DefaultSimilarity#tf(float) tf(t in d)} = frequency½
- idf(t) stands for Inverse Document Frequency. This value correlates to the inverse of docFreq (the number of documents in which the term t appears). This means rarer terms give a higher contribution to the total score.
The default computation for idf(t) in
{@link org.apache.lucene.search.DefaultSimilarity#idf(int, int) DefaultSimilarity} is:

{@link org.apache.lucene.search.DefaultSimilarity#idf(int, int) idf(t)} = 1 + log ( numDocs / (docFreq + 1) )
- coord(q,d) is a score factor based on how many of the query terms are found in the specified document.
Typically, a document that contains more of the query's terms will receive a higher score
than another document with fewer query terms.
This is a search time factor computed in
{@link #coord(int, int) coord(q,d)}
by the Similarity in effect at search time.
- queryNorm(q) is a normalizing factor used to make scores between queries comparable.
This factor does not affect document ranking (since all ranked documents are multiplied by the same factor),
but rather just attempts to make scores from different queries (or even different indexes) comparable.
This is a search time factor computed by the Similarity in effect at search time.
The default computation in
{@link org.apache.lucene.search.DefaultSimilarity#queryNorm(float) DefaultSimilarity} is:

queryNorm(q) = {@link org.apache.lucene.search.DefaultSimilarity#queryNorm(float) queryNorm(sumOfSquaredWeights)} = 1 / sumOfSquaredWeights½
The sum of squared weights (of the query terms) is
computed by the query {@link org.apache.lucene.search.Weight} object.
For example, a {@link org.apache.lucene.search.BooleanQuery boolean query}
computes this value as:
{@link org.apache.lucene.search.Weight#sumOfSquaredWeights() sumOfSquaredWeights} =
{@link org.apache.lucene.search.Query#getBoost() q.getBoost()}² · ∑ ( idf(t) · t.getBoost() )²
                                                                 t in q
- t.getBoost() is a search time boost of term t in the query q as
specified in the query text
(see query syntax),
or as set by application calls to
{@link org.apache.lucene.search.Query#setBoost(float) setBoost()}.
Notice that there is no direct API for accessing the boost of a single term in a multi-term query;
rather, multiple terms are represented in a query as multiple
{@link org.apache.lucene.search.TermQuery TermQuery} objects,
and so the boost of a term in the query is accessible by calling the sub-query's
{@link org.apache.lucene.search.Query#getBoost() getBoost()}.
- norm(t,d) encapsulates a few (indexing time) boost and length factors:
- Document boost - set by calling
{@link org.apache.lucene.document.Document#setBoost(float) doc.setBoost()}
before adding the document to the index.
- Field boost - set by calling
{@link org.apache.lucene.document.Fieldable#setBoost(float) field.setBoost()}
before adding the field to a document.
- {@link #lengthNorm(String, int) lengthNorm(field)} - computed
when the document is added to the index in accordance with the number of tokens
of this field in the document, so that shorter fields contribute more to the score.
LengthNorm is computed by the Similarity class in effect at indexing.
When a document is added to the index, all the above factors are multiplied.
If the document has multiple fields with the same name, all their boosts are multiplied together:
norm(t,d) =
  {@link org.apache.lucene.document.Document#getBoost() doc.getBoost()}
  · {@link #lengthNorm(String, int) lengthNorm(field)}
  · ∏ {@link org.apache.lucene.document.Fieldable#getBoost() f.getBoost}()
    field f in d named as t
However, the resulting norm value is {@link #encodeNorm(float) encoded} as a single byte
before being stored.
At search time, the norm byte value is read from the index
{@link org.apache.lucene.store.Directory directory} and
{@link #decodeNorm(byte) decoded} back to a float norm value.
This encoding/decoding, while reducing index size, comes with the price of
precision loss - it is not guaranteed that decode(encode(x)) = x.
For instance, decode(encode(0.89)) = 0.75.
Also notice that search time is too late to modify this norm part of scoring, e.g. by
using a different {@link Similarity} for search.
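The default factor computations described above can be sketched as plain static methods. This is an illustrative sketch of the formulas documented for DefaultSimilarity (tf, idf, coord, queryNorm, lengthNorm), not the class itself; the real class overrides the corresponding Similarity methods.

```java
// Sketch of the default scoring factors described above, written as
// standalone static methods for illustration.
public class DefaultFactorsSketch {

  // tf(t in d) = frequency^0.5
  static float tf(float freq) {
    return (float) Math.sqrt(freq);
  }

  // idf(t) = 1 + log(numDocs / (docFreq + 1)), natural log
  static float idf(int docFreq, int numDocs) {
    return (float) (Math.log(numDocs / (double) (docFreq + 1)) + 1.0);
  }

  // coord(q,d) = overlap / maxOverlap
  static float coord(int overlap, int maxOverlap) {
    return overlap / (float) maxOverlap;
  }

  // queryNorm(q) = 1 / sumOfSquaredWeights^0.5
  static float queryNorm(float sumOfSquaredWeights) {
    return (float) (1.0 / Math.sqrt(sumOfSquaredWeights));
  }

  // lengthNorm(field) = 1 / numTokens^0.5, so shorter fields contribute more
  static float lengthNorm(String fieldName, int numTokens) {
    return (float) (1.0 / Math.sqrt(numTokens));
  }

  public static void main(String[] args) {
    System.out.println(tf(4.0f));        // 2.0
    System.out.println(coord(2, 4));     // 0.5
    System.out.println(queryNorm(4.0f)); // 0.5
  }
}
```

For example, a term occurring 4 times contributes tf = 2.0, and a document matching 2 of 4 query terms gets coord = 0.5.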
Fields Summary

- private static Similarity defaultImpl: The Similarity implementation used by default.
- private static final float[] NORM_TABLE: Cache of decoded bytes.
Methods Summary
public abstract float coord(int overlap, int maxOverlap)

Computes a score factor based on the fraction of all query terms that a
document contains. This value is multiplied into scores.
The presence of a large portion of the query terms indicates a better
match with the query, so implementations of this method usually return
larger values when the ratio between these parameters is large and smaller
values when the ratio between them is small.
public static float decodeNorm(byte b)

Decodes a normalization factor stored in an index.

    return NORM_TABLE[b & 0xFF]; // & 0xFF maps negative bytes to positive above 127

The NORM_TABLE cache is filled once in a static initializer:

    for (int i = 0; i < 256; i++)
      NORM_TABLE[i] = SmallFloat.byte315ToFloat((byte)i);
public static byte encodeNorm(float f)

Encodes a normalization factor for storage in an index.
The encoding uses a three-bit mantissa, a five-bit exponent, and
the zero-exponent point at 15, thus
representing values from around 7x10^9 to 2x10^-9 with about one
significant decimal digit of accuracy. Zero is also represented.
Negative numbers are rounded up to zero. Values too large to represent
are rounded down to the largest representable value. Positive values too
small to represent are rounded up to the smallest positive representable
value.
return SmallFloat.floatToByte315(f);
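To make the precision loss concrete, here is a self-contained sketch of a byte315-style codec (3 mantissa bits, 5 exponent bits, zero-exponent point at 15). It is an illustrative reimplementation, not the SmallFloat library class; the exact rounding of a given release, and thus examples like decode(encode(0.89)), may differ from this sketch.

```java
// Illustrative byte315-style codec: 3 mantissa bits, 5 exponent bits,
// zero-exponent point at 15. Extra mantissa bits are truncated, so
// decode(encode(x)) usually differs from x.
public class SmallFloatSketch {

  static byte floatToByte315(float f) {
    int bits = Float.floatToRawIntBits(f);
    int smallfloat = bits >> (24 - 3);            // keep sign, exponent, 3 mantissa bits
    if (smallfloat < (63 - 15) << 3) {            // too small (or zero/negative)
      return (bits <= 0) ? (byte) 0 : (byte) 1;  // negatives round up to zero
    }
    if (smallfloat >= ((63 - 15) << 3) + 0x100) { // too large to represent
      return -1;                                  // largest representable value
    }
    return (byte) (smallfloat - ((63 - 15) << 3));
  }

  static float byte315ToFloat(byte b) {
    if (b == 0) return 0.0f;
    int bits = (b & 0xff) << (24 - 3);
    bits += (63 - 15) << 24;                      // restore the exponent offset
    return Float.intBitsToFloat(bits);
  }

  public static void main(String[] args) {
    // 0.75 needs only the top mantissa bit, so it round-trips exactly:
    System.out.println(byte315ToFloat(floatToByte315(0.75f)));
    // 0.89 cannot be represented with 3 mantissa bits, so precision is lost:
    System.out.println(byte315ToFloat(floatToByte315(0.89f)));
  }
}
```

This makes visible why decode(encode(x)) = x is not guaranteed: only floats whose mantissa fits in 3 bits survive the round trip unchanged.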
public static org.apache.lucene.search.Similarity getDefault()

Return the default Similarity implementation used by indexing and search code.
This is initially an instance of {@link DefaultSimilarity}.
return Similarity.defaultImpl;
public static float[] getNormDecoder()

Returns a table for decoding normalization bytes.
return NORM_TABLE;
public float idf(org.apache.lucene.index.Term term, org.apache.lucene.search.Searcher searcher)

Computes a score factor for a simple term.
The default implementation is:

    return idf(searcher.docFreq(term), searcher.maxDoc());

Note that {@link Searcher#maxDoc()} is used instead of
{@link IndexReader#numDocs()} because it is proportional to
{@link Searcher#docFreq(Term)}, i.e., when one is inaccurate,
so is the other, and in the same direction.
public float idf(java.util.Collection terms, org.apache.lucene.search.Searcher searcher)

Computes a score factor for a phrase.
The default implementation sums the {@link #idf(Term,Searcher)} factor
for each term in the phrase.
    float idf = 0.0f;
    Iterator i = terms.iterator();
    while (i.hasNext()) {
      idf += idf((Term)i.next(), searcher);
    }
    return idf;
public abstract float idf(int docFreq, int numDocs)

Computes a score factor based on a term's document frequency (the number
of documents which contain the term). This value is multiplied by the
{@link #tf(int)} factor for each term in the query and these products are
then summed to form the initial score for a document.
Terms that occur in fewer documents are better indicators of topic, so
implementations of this method usually return larger values for rare terms,
and smaller values for common terms.
public abstract float lengthNorm(java.lang.String fieldName, int numTokens)

Computes the normalization value for a field given the total number of
terms contained in a field. These values, together with field boosts, are
stored in an index and multiplied into scores for hits on each field by the
search code.
Matches in longer fields are less precise, so implementations of this
method usually return smaller values when numTokens is large,
and larger values when numTokens is small.
Note that these values are computed under {@link
IndexWriter#addDocument(org.apache.lucene.document.Document)} and then stored using
{@link #encodeNorm(float)}. Thus they have limited precision, and documents
must be re-indexed if this method is altered.
public abstract float queryNorm(float sumOfSquaredWeights)

Computes the normalization value for a query given the sum of the squared
weights of each of the query terms. This value is then multiplied into the
weight of each query term.
This does not affect ranking, but rather just attempts to make scores
from different queries comparable.
public static void setDefault(org.apache.lucene.search.Similarity similarity)

Set the default Similarity implementation used by indexing and search code.

    Similarity.defaultImpl = similarity;
public abstract float sloppyFreq(int distance)

Computes the amount of a sloppy phrase match, based on an edit distance.
This value is summed for each sloppy phrase match in a document to form
the frequency that is passed to {@link #tf(float)}.
A phrase match with a small edit distance to a document passage more
closely matches the document, so implementations of this method usually
return larger values when the edit distance is small and smaller values
when it is large.
public abstract float tf(float freq)

Computes a score factor based on a term or phrase's frequency in a
document. This value is multiplied by the {@link #idf(Term, Searcher)}
factor for each term in the query and these products are then summed to
form the initial score for a document.
Terms and phrases repeated in a document indicate the topic of the
document, so implementations of this method usually return larger values
when freq is large, and smaller values when freq
is small.
public float tf(int freq)

Computes a score factor based on a term or phrase's frequency in a
document. This value is multiplied by the {@link #idf(Term, Searcher)}
factor for each term in the query and these products are then summed to
form the initial score for a document.
Terms and phrases repeated in a document indicate the topic of the
document, so implementations of this method usually return larger values
when freq is large, and smaller values when freq
is small.
The default implementation calls {@link #tf(float)}.
return tf((float)freq);
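Since the factor methods above are overridable, an application customizes scoring by subclassing Similarity and installing the subclass via setDefault (or the setSimilarity methods on IndexWriter and Searcher). The sketch below is self-contained so it compiles without Lucene: a minimal stand-in abstract base replaces org.apache.lucene.search.Similarity, and the subclass name and its factor choices are illustrative assumptions, not library code.

```java
// Stand-in for org.apache.lucene.search.Similarity so this sketch is
// self-contained; with Lucene on the classpath, extend the real class
// and call Similarity.setDefault(new LogTfSimilarity()) instead.
abstract class SimilarityBase {
  public abstract float tf(float freq);
  public abstract float lengthNorm(String fieldName, int numTokens);
}

// Hypothetical customization: damp term frequency harder than sqrt,
// and ignore field length entirely so long fields are not penalized.
class LogTfSimilarity extends SimilarityBase {
  @Override
  public float tf(float freq) {
    return freq > 0 ? (float) (1.0 + Math.log(freq)) : 0.0f; // sublinear tf
  }

  @Override
  public float lengthNorm(String fieldName, int numTokens) {
    return 1.0f; // constant: field length does not affect the score
  }
}

public class CustomSimilarityDemo {
  public static void main(String[] args) {
    SimilarityBase sim = new LogTfSimilarity();
    System.out.println(sim.tf(1.0f));                // 1.0
    System.out.println(sim.lengthNorm("body", 500)); // 1.0
  }
}
```

Remember the caveat from lengthNorm above: norms are computed and encoded at indexing time, so changing lengthNorm requires re-indexing existing documents.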