
Note: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same license and attribute the original authors (not the translator). Original question: http://stackoverflow.com/questions/5931665/

Date: 2020-10-30 13:30:19 · Source: igfitidea

Remove Duplicate Lines from Text using Java

Tags: java, duplicates, lines

Asked by Mat B.

I was wondering if anyone has logic in java that removes duplicate lines while maintaining the lines order.


I would prefer no regex solution.


Accepted answer by Emil

import java.io.*;
import java.util.HashSet;
import java.util.Set;

public class UniqueLineReader extends BufferedReader {
    Set<String> lines = new HashSet<String>();

    public UniqueLineReader(Reader arg0) {
        super(arg0);
    }

    @Override
    public String readLine() throws IOException {
        String uniqueLine;
        // add() returns true only for lines not seen before; duplicates yield ""
        if (lines.add(uniqueLine = super.readLine()))
            return uniqueLine;
        return "";
    }

    // for testing..
    public static void main(String args[]) {
        try {
            // Open the input file
            FileInputStream fstream = new FileInputStream("test.txt");
            UniqueLineReader br = new UniqueLineReader(new InputStreamReader(fstream));
            String strLine;
            // Read the file line by line
            while ((strLine = br.readLine()) != null) {
                // Print non-duplicate lines to the console
                if (!strLine.isEmpty())
                    System.out.println(strLine);
            }
            // Close the input stream
            br.close();
        } catch (Exception e) { // catch exception if any
            System.err.println("Error: " + e.getMessage());
        }
    }
}

Modified Version:


import java.io.*;
import java.util.HashSet;
import java.util.Set;

public class UniqueLineReader extends BufferedReader {
    Set<String> lines = new HashSet<String>();

    public UniqueLineReader(Reader arg0) {
        super(arg0);
    }

    @Override
    public String readLine() throws IOException {
        String uniqueLine;
        // Skip lines until a unique one is found
        while (!lines.add(uniqueLine = super.readLine())) {
            if (uniqueLine == null)
                return null; // EOF: stop instead of looping forever on repeated nulls
        }
        return uniqueLine;
    }

    public static void main(String args[]) {
        try {
            // Open the input file
            FileInputStream fstream = new FileInputStream("/home/emil/Desktop/ff.txt");
            UniqueLineReader br = new UniqueLineReader(new InputStreamReader(fstream));
            String strLine;
            // Read the file line by line
            while ((strLine = br.readLine()) != null) {
                // Print the content on the console
                System.out.println(strLine);
            }
            // Close the input stream
            br.close();
        } catch (Exception e) { // catch exception if any
            System.err.println("Error: " + e.getMessage());
        }
    }
}

Answered by entonio

If you feed the lines into a LinkedHashSet, it ignores the repeated ones, since it's a set, but preserves the order, since it's linked. If you just want to know whether you've seen a given line before, feed them into a simple Set as you go, and ignore those which the Set already contains.

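As a minimal sketch of the LinkedHashSet idea described above (class and input names are illustrative):

    import java.util.Arrays;
    import java.util.LinkedHashSet;
    import java.util.Set;

    public class OrderPreservingDedup {
        public static void main(String[] args) {
            // LinkedHashSet drops repeats but keeps first-seen insertion order
            Set<String> unique = new LinkedHashSet<>(
                    Arrays.asList("z", "b", "c", "b", "z"));
            System.out.println(String.join("\n", unique));
        }
    }

Running this prints z, b, c on separate lines: the second "b" and "z" are ignored, and the first-seen order survives.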

Answered by Mike

Read the text file using a BufferedReader and store it in a LinkedHashSet. Print it back out.


Here's an example:


import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

public class DuplicateRemover {

    public String stripDuplicates(String aHunk) {
        StringBuilder result = new StringBuilder();
        Set<String> uniqueLines = new LinkedHashSet<String>();

        // addAll() keeps only the first occurrence of each line, in order
        String[] chunks = aHunk.split("\n");
        uniqueLines.addAll(Arrays.asList(chunks));

        for (String chunk : uniqueLines) {
            result.append(chunk).append("\n");
        }

        return result.toString();
    }

}
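The class above works on an in-memory String; to apply it to a file as the answer describes, one could read the whole file first. A self-contained sketch (the dedup logic is repeated inline here, and the temp-file setup is only for demonstration):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Arrays;
    import java.util.LinkedHashSet;
    import java.util.Set;

    public class DuplicateRemoverFileDemo {
        // Same logic as the answer's stripDuplicates, repeated so the demo stands alone
        static String stripDuplicates(String aHunk) {
            Set<String> uniqueLines = new LinkedHashSet<>(Arrays.asList(aHunk.split("\n")));
            StringBuilder result = new StringBuilder();
            for (String line : uniqueLines)
                result.append(line).append("\n");
            return result.toString();
        }

        public static void main(String[] args) throws IOException {
            // Write a small sample file, then strip its duplicate lines
            Path tmp = Files.createTempFile("dedup", ".txt");
            Files.write(tmp, "a\nb\nc\nb\nd\n".getBytes());
            String content = new String(Files.readAllBytes(tmp));
            System.out.print(stripDuplicates(content));
            Files.delete(tmp);
        }
    }
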

Here are some unit tests to verify (ignore my evil copy-paste ;) ):


import org.junit.Test;
import static org.junit.Assert.*;

public class DuplicateRemoverTest {

    @Test
    public void removesDuplicateLines() {
        String input = "a\nb\nc\nb\nd\n";
        String expected = "a\nb\nc\nd\n";

        DuplicateRemover remover = new DuplicateRemover();

        String actual = remover.stripDuplicates(input);
        assertEquals(expected, actual);
    }

    @Test
    public void removesDuplicateLinesUnalphabetized() {
        String input = "z\nb\nc\nb\nz\n";
        String expected = "z\nb\nc\n";

        DuplicateRemover remover = new DuplicateRemover();

        String actual = remover.stripDuplicates(input);
        assertEquals(expected, actual);
    }

}

Answered by Ramgau

Removing duplicate lines from text or a file is easy with the Java Stream API. Streams support aggregate operations such as sorted and distinct, and work with Java's existing data structures and their methods. The following example uses the Stream API to remove duplicates from (or sort) the contents of a file.


package removeword;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;
import static java.nio.file.StandardOpenOption.*;
import static java.util.stream.Collectors.joining;

public class Java8UniqueWords {

    public static void main(String[] args) throws IOException {
        Path sourcePath = Paths.get("C:/Users/source.txt");
        Path changedPath = Paths.get("C:/Users/removedDouplicate_file.txt");
        try (final Stream<String> lines = Files.lines(sourcePath)
                // .map(line -> line.toLowerCase()) /* optional: apply existing String methods */
                .distinct()
                // .sorted() /* aggregate operation to sort the distinct lines */
        ) {
            final String uniqueWords = lines.collect(joining("\n"));
            System.out.println("Final Output:" + uniqueWords);
            Files.write(changedPath, uniqueWords.getBytes(), WRITE, TRUNCATE_EXISTING);
        }
    }
}

Answered by Abhinav

For better performance, it's wise to use Java 8's API features, viz. streams and method references, with a LinkedHashSet as the collection, as below:


import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.LinkedHashSet;
import java.util.stream.Collectors;

public class UniqueOperation {

    private static PrintWriter pw;

    public static void main(String[] args) throws IOException {

        pw = new PrintWriter("abc.txt");

        // Collecting into a LinkedHashSet drops duplicates while keeping order
        for (String p : Files.newBufferedReader(Paths.get("C:/Users/as00465129/Desktop/FrontEndUdemyLinks.txt"))
                             .lines()
                             .collect(Collectors.toCollection(LinkedHashSet::new)))
            pw.println(p);
        pw.flush();
        pw.close();

        System.out.println("File operation performed successfully");
    }
}

Answered by Mike

Here's another solution. Let's just use UNIX!


uniq MyFile.java > MyFile.deduped.java

(Note: uniq only removes adjacent duplicates, and redirecting the output back onto the input file would truncate it before it is read, so write to a different file.)

Edit: Oh wait, I re-read the topic. Is this a legal solution since I managed to be language agnostic?


Answered by ratchet freak

Here I'm using a HashSet to store the lines seen so far:


Scanner scan; // input source, supplied by the caller
Set<String> lines = new HashSet<String>();
StringBuilder strb = new StringBuilder();
while (scan.hasNextLine()) {
    String line = scan.nextLine();
    // add() returns false for lines already seen, so duplicates are skipped
    if (lines.add(line))
        strb.append(line).append("\n");
}