Java: Finding the position of search hits from Lucene

Disclaimer: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same license and attribute it to the original authors (not the translator). Original source: http://stackoverflow.com/questions/1311199/

Finding the position of search hits from Lucene

Tags: java, search, lucene

Asked by VoidPointer

With Lucene, what would be the recommended approach for locating matches in search results?

More specifically, suppose index documents have a field "fullText" which stores the plain-text content of some document. Furthermore, assume that for one of these documents the content is "The quick brown fox jumps over the lazy dog". Next a search is performed for "fox dog". Obviously, the document would be a hit.

In this scenario, can Lucene be used to provide something like the matching regions for the found document? For this scenario I would like to produce something like:

[{match: "fox", startIndex: 10, length: 3},
 {match: "dog", startIndex: 34, length: 3}]

I suspect that it could be implemented by what's provided in the org.apache.lucene.search.highlight package. I'm not sure about the overall approach though...
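
For reference, the highlight package can be bent to exactly this purpose: a custom Formatter is handed each scored token group together with its character offsets, so it can simply record the offsets instead of wrapping the text in markup. Below is a minimal, hypothetical sketch of that idea (the field name "fullText", the StandardAnalyzer, and the Lucene 5.x-era classic QueryParser are assumptions, not something stated in the question or answers):

import java.util.ArrayList;
import java.util.List;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.highlight.Formatter;
import org.apache.lucene.search.highlight.Highlighter;
import org.apache.lucene.search.highlight.QueryScorer;
import org.apache.lucene.search.highlight.TokenGroup;

public class HighlightOffsets {
    public static void main(String[] args) throws Exception {
        String text = "The quick brown fox jumps over the lazy dog";
        Analyzer analyzer = new StandardAnalyzer(); // assumption: same analyzer as at index time
        Query query = new QueryParser("fullText", analyzer).parse("fox dog");

        final List<int[]> regions = new ArrayList<>();
        // A Formatter that leaves the text untouched; it only records the start/end
        // offsets of token groups that scored against the query.
        Formatter offsetCollector = (String originalText, TokenGroup group) -> {
            if (group.getTotalScore() > 0) {
                regions.add(new int[] { group.getStartOffset(), group.getEndOffset() });
            }
            return originalText;
        };

        Highlighter highlighter = new Highlighter(offsetCollector, new QueryScorer(query));
        // Running the highlighter drives the Formatter over every analyzed token.
        highlighter.getBestFragments(analyzer.tokenStream("fullText", text), text, 10, "...");

        for (int[] r : regions) {
            System.out.println("match: \"" + text.substring(r[0], r[1])
                    + "\" startIndex=" + r[0] + " length=" + (r[1] - r[0]));
        }
    }
}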

Answered by Allasso

TermFreqVector is what I used. Here is a working demo that prints both the term positions and the start and end character offsets of each term:

// Lucene 3.x-era imports for the classes used below
import java.io.File;
import java.io.IOException;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.TermFreqVector;
import org.apache.lucene.index.TermPositionVector;
import org.apache.lucene.index.TermVectorOffsetInfo;
import org.apache.lucene.queryParser.ParseException;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopScoreDocCollector;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class Search {
    public static void main(String[] args) throws IOException, ParseException {
        Search s = new Search();  
        s.doSearch(args[0], args[1]);  
    }  

    Search() {
    }  

    public void doSearch(String db, String querystr) throws IOException, ParseException {
        // 1. Specify the analyzer for tokenizing text.  
        //    The same analyzer should be used as was used for indexing  
        StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_CURRENT);  

        Directory index = FSDirectory.open(new File(db));  

        // 2. query  
        Query q = new QueryParser(Version.LUCENE_CURRENT, "contents", analyzer).parse(querystr);  

        // 3. search  
        int hitsPerPage = 10;  
        IndexSearcher searcher = new IndexSearcher(index, true);  
        IndexReader reader = IndexReader.open(index, true);  
        searcher.setDefaultFieldSortScoring(true, false);  
        TopScoreDocCollector collector = TopScoreDocCollector.create(hitsPerPage, true);  
        searcher.search(q, collector);  
        ScoreDoc[] hits = collector.topDocs().scoreDocs;  

        // 4. display term positions, and term indexes   
        System.out.println("Found " + hits.length + " hits.");  
        for(int i=0;i<hits.length;++i) {  

            int docId = hits[i].doc;  
            TermFreqVector tfvector = reader.getTermFreqVector(docId, "contents");  
            TermPositionVector tpvector = (TermPositionVector)tfvector;  
            // this part works only if there is one term in the query string,  
            // otherwise you will have to iterate this section over the query terms.  
            int termidx = tfvector.indexOf(querystr);  
            int[] termposx = tpvector.getTermPositions(termidx);  
            TermVectorOffsetInfo[] tvoffsetinfo = tpvector.getOffsets(termidx);  

            for (int j=0;j<termposx.length;j++) {  
                System.out.println("termpos : "+termposx[j]);  
            }  
            for (int j=0;j<tvoffsetinfo.length;j++) {  
                int offsetStart = tvoffsetinfo[j].getStartOffset();  
                int offsetEnd = tvoffsetinfo[j].getEndOffset();  
                System.out.println("offsets : "+offsetStart+" "+offsetEnd);  
            }  

            // print some info about where the hit was found...  
            Document d = searcher.doc(docId);  
            System.out.println((i + 1) + ". " + d.get("path"));  
        }  

        // searcher can only be closed when there  
        // is no need to access the documents any more.   
        searcher.close();  
    }      
}
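
One prerequisite the demo above does not show: getTermFreqVector() only yields a TermPositionVector with offsets if the "contents" field was indexed with term vectors that include positions and offsets. Here is a sketch of the indexing-side field setup this assumes, using the same pre-4.0 API as the answer (TermFreqVector and friends were removed in Lucene 4.0); the writer, path and text variables are placeholders:

// Assumed indexing-side setup: without WITH_POSITIONS_OFFSETS term vectors, the cast
// to TermPositionVector in the search code above fails or the vector is null.
static void addDoc(IndexWriter writer, String path, String text) throws IOException {
    Document doc = new Document();
    doc.add(new Field("path", path, Field.Store.YES, Field.Index.NOT_ANALYZED));
    doc.add(new Field("contents", text, Field.Store.YES, Field.Index.ANALYZED,
                      Field.TermVector.WITH_POSITIONS_OFFSETS));
    writer.addDocument(doc);
}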

Answered by matthiasboesinger

Here is a solution for Lucene 5.2.1. It works only for single-word queries, but it should demonstrate the basic principle.

The basic idea is:

  1. Get a TokenStream for each document that matches your query.
  2. Create a QueryScorer and initialize it with the retrieved TokenStream.
  3. "Loop" over each token of the stream (done by tokenStream.incrementToken()) and check whether the token matches the search criteria (done by queryScorer.getTokenScore()).

Here is the code:

import java.io.IOException;
import java.util.List;
import java.util.Vector;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.de.GermanAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.highlight.InvalidTokenOffsetsException;
import org.apache.lucene.search.highlight.QueryScorer;
import org.apache.lucene.search.highlight.TokenSources;

public class OffsetSearcher {

    private IndexReader reader;

    public OffsetSearcher(IndexWriter indexWriter) throws IOException { 
        // open a near-real-time reader so documents not yet committed are visible too
        reader = DirectoryReader.open(indexWriter, true); 
    }
    }

    public OffsetData[] getTermOffsets(Query query) throws IOException, InvalidTokenOffsetsException 
    {
        List<OffsetData> result = new Vector<>();

        IndexSearcher searcher = new IndexSearcher(reader);
        TopDocs topDocs = searcher.search(query, 1000);

        ScoreDoc[] scoreDocs = topDocs.scoreDocs;   

        Document doc;
        TokenStream tokenStream;
        CharTermAttribute termAtt;
        OffsetAttribute offsetAtt;
        QueryScorer queryScorer;
        OffsetData offsetData;
        String txt, tokenText;
        for (int i = 0; i < scoreDocs.length; i++) 
        {
            int docId = scoreDocs[i].doc;
            doc = reader.document(docId);

            txt = doc.get(RunSearch.CONTENT);
            // Rebuild a TokenStream with offsets, either from stored term vectors or by
            // re-analyzing the stored text with the same analyzer used at index time.
            tokenStream = TokenSources.getTokenStream(RunSearch.CONTENT, reader.getTermVectors(docId), txt, new GermanAnalyzer(), -1);

            termAtt = (CharTermAttribute)tokenStream.addAttribute(CharTermAttribute.class);
            offsetAtt = (OffsetAttribute)tokenStream.addAttribute(OffsetAttribute.class);

            // the QueryScorer scores each token against the query; init() may wrap the stream
            queryScorer = new QueryScorer(query);
            queryScorer.setMaxDocCharsToAnalyze(RunSearch.MAX_DOC_CHARS);
            TokenStream newStream  = queryScorer.init(tokenStream);
            if (newStream != null) {
                tokenStream = newStream;
            }
            queryScorer.startFragment(null);

            tokenStream.reset();

            int startOffset, endOffset;
            for (boolean next = tokenStream.incrementToken(); next && (offsetAtt.startOffset() < RunSearch.MAX_DOC_CHARS); next = tokenStream.incrementToken())
            {
                startOffset = offsetAtt.startOffset();
                endOffset = offsetAtt.endOffset();

                if ((endOffset > txt.length()) || (startOffset > txt.length()))
                {
                    throw new InvalidTokenOffsetsException("Token " + termAtt.toString() + " exceeds length of provided text sized " + txt.length());
                }

                // a positive score means this token matched the query; record its offsets
                float res = queryScorer.getTokenScore();
                if (res > 0.0F && startOffset <= endOffset) {
                    tokenText = txt.substring(startOffset, endOffset);
                    offsetData = new OffsetData(tokenText, startOffset, endOffset, docId);
                    result.add(offsetData);
                }           
            }   
        }

        return result.toArray(new OffsetData[result.size()]);
    }


    public void close() throws IOException {
        reader.close();
    }


    public static class OffsetData {

        public String phrase;
        public int startOffset;
        public int endOffset;
        public int docId;

        public OffsetData(String phrase, int startOffset, int endOffset, int docId) {
            super();
            this.phrase = phrase;
            this.startOffset = startOffset;
            this.endOffset = endOffset;
            this.docId = docId;
        }

    }

}
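
And a hypothetical driver for the class above (the already-open IndexWriter, the query string, and the RunSearch constants are assumptions carried over from the answer's own code; it additionally needs org.apache.lucene.queryparser.classic.QueryParser):

// Hypothetical usage of OffsetSearcher; RunSearch.CONTENT is the indexed field name and
// the GermanAnalyzer matches the analyzer used when the index was built.
static void printHitOffsets(IndexWriter indexWriter, String queryString) throws Exception {
    OffsetSearcher searcher = new OffsetSearcher(indexWriter);
    try {
        Query query = new QueryParser(RunSearch.CONTENT, new GermanAnalyzer()).parse(queryString);
        for (OffsetSearcher.OffsetData d : searcher.getTermOffsets(query)) {
            System.out.println(d.phrase + " [" + d.startOffset + ".." + d.endOffset + ") in doc " + d.docId);
        }
    } finally {
        searcher.close();
    }
}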