Java Hadoop: How does OutputCollector work during MapReduce?
Disclaimer: this page reproduces a popular StackOverflow question under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same license and attribute it to the original authors (not me): StackOverflow.
Original URL: http://stackoverflow.com/questions/10996963/
Hadoop: How does OutputCollector work during MapReduce?
Asked by catty
I want to know whether the OutputCollector instance named output that is used in the map function, as in output.collect(key, value), stores the key/value pairs somewhere. Even if it emits them to the reducer function, there must be an intermediate file, right? What are those files? Are they visible to, and decided by, the programmer? Are the OutputKeyClass and OutputValueClass that we specify in the main function these places of storage? [Text.class and IntWritable.class]
I'm giving the standard code for the Word Count example in MapReduce, which can be found in many places on the net.
import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;

public class WordCount {

    public static class Map extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value,
                        OutputCollector<Text, IntWritable> output,
                        Reporter reporter) throws IOException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, one);   // emit (word, 1) for each token
            }
        }
    }

    public static class Reduce extends MapReduceBase
            implements Reducer<Text, IntWritable, Text, IntWritable> {

        public void reduce(Text key, Iterator<IntWritable> values,
                           OutputCollector<Text, IntWritable> output,
                           Reporter reporter) throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));   // emit (word, total count)
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reduce.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}
Answered by Chaos
The output from the Map function is stored in Temporary Intermediate Files. These files are handled transparently by Hadoop, so in a normal scenario, the programmer doesn't have access to that. If you're curious about what's happening inside each mapper, you can review the logs for the respective job where you'll find a log file for each map task.
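If all you want is to poke at those files on a dev box, one knob worth knowing about is telling Hadoop to keep a failed task's local files around instead of cleaning them up. This is just a sketch against the old org.apache.hadoop.mapred API from the question:

// Sketch, old mapred API: keep the local working files (which include the
// intermediate map output) of failed task attempts so you can inspect them.
JobConf conf = new JobConf(WordCount.class);
conf.setKeepFailedTaskFiles(true);

// The intermediate files live on each node's local filesystem (not HDFS),
// under the directories configured as mapred.local.dir.
System.out.println("local dirs: " + conf.get("mapred.local.dir"));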
If you want to control where the temporary files are generated, and have access to them, you have to create your own OutputCollector class, and I don't know how easy that is.
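For what it's worth, "your own OutputCollector" can be as simple as a wrapper that tees every pair somewhere you control before forwarding it to the real collector. The class below is purely illustrative (TeeingCollector is my name, not a Hadoop API):

import java.io.IOException;
import org.apache.hadoop.mapred.OutputCollector;

// Illustrative sketch: forward every (key, value) pair to the framework's
// collector, but also record it so you can see what the mapper emitted.
public class TeeingCollector<K, V> implements OutputCollector<K, V> {
    private final OutputCollector<K, V> delegate;

    public TeeingCollector(OutputCollector<K, V> delegate) {
        this.delegate = delegate;
    }

    public void collect(K key, V value) throws IOException {
        System.err.println(key + "\t" + value); // or write to a side file
        delegate.collect(key, value);           // still emit as usual
    }
}

Inside map() you would then wrap the framework's collector, e.g. new TeeingCollector<Text, IntWritable>(output), and collect through that instead.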
If you want to have a look at the source code, you can use svn to get it. I think it is available here: http://hadoop.apache.org/common/version_control.html.
Answered by Ulises
I believe they are stored in temporary locations and are not available to the developer, unless you create your own class that implements OutputCollector.
I once had to access those files and solved the problem by creating side-effect files: http://hadoop.apache.org/common/docs/r0.20.2/mapred_tutorial.html#Task+Side-Effect+Files
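In case it helps, here is roughly what that looks like: the task writes extra files into its temporary work directory, and the framework promotes them to the job output directory only if the task attempt succeeds, so speculative execution stays safe. This is a sketch using the old mapred API (the field and file names are my own); it would live inside a MapReduceBase subclass like the Map class above, with org.apache.hadoop.fs.FSDataOutputStream, FileSystem, and Path imported:

private FSDataOutputStream sideFile;

public void configure(JobConf job) {
    try {
        // getWorkOutputPath points at this task attempt's temporary
        // work directory inside the job's output directory.
        Path workDir = FileOutputFormat.getWorkOutputPath(job);
        FileSystem fs = FileSystem.get(job);
        sideFile = fs.create(new Path(workDir,
                "side-" + job.get("mapred.task.id")));
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}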
Answered by kaushik mahaldar
The intermediate, grouped outputs are always stored in SequenceFiles. Applications can specify if and how the intermediate outputs are to be compressed and which CompressionCodecs are to be used via the JobConf.
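Concretely, the knobs on JobConf look like this (a sketch against the old mapred API from the question):

// Compress the intermediate map output and choose the codec.
conf.setCompressMapOutput(true);
conf.setMapOutputCompressorClass(
        org.apache.hadoop.io.compress.GzipCodec.class);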
http://hadoop.apache.org/docs/current/api/org/apache/hadoop/mapred/Mapper.html