Disclaimer: This page is a translation of a popular Stack Overflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same license and attribute it to the original authors (not the translator): Stack Overflow.
Original source: http://stackoverflow.com/questions/18796443/
Exception in thread "main" java.lang.ClassNotFoundException: WordCount
Asked by GeekyOmega
I am currently trying to set up a single-node instance of Hadoop, so I am following this tutorial. I ran the following command in a terminal:
hduser@ubuntu:/usr/local/hadoop$ bin/hadoop jar WordCount.jar geekyomega.WordCount /user/hduser/gutenberg /user/hduser/gutenberg-output
Things were going great until I ran into this error:
Exception in thread "main" java.lang.ClassNotFoundException: WordCount
    at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
    at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:270)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
I am attempting to run this example using the following code, which I got from here. Here is my version of the code:
package geekyomega;

import java.io.IOException;
import java.util.*;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.*;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class WordCount {

    public static class Map extends Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class Reduce extends Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "WordCount");
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
    }
}
I thought my issue was with how the job was instantiated, so I changed:
Job job = new Job(conf, "wordcount");
to the following capitalized version:
Job job = new Job(conf, "WordCount");
But that hasn't helped. Anyone know what could help me here?
Thanks, Geeky
PS - I don't want to run the tutorial version of wordcount. What I did was create the project in Eclipse, add the Hadoop jar to it, and export it as a jar file.
Accepted answer by Tariq
Along with adding the package name, also add the following line in the job configuration part of your program:
job.setJarByClass(WordCount.class);
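For context, here is how the main method from the question might look with that call in place (a sketch based on the question's own code; the rest of the class stays unchanged):

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "WordCount");
        // Point Hadoop at the jar that contains this class, so the framework
        // can locate geekyomega.WordCount (and its Map/Reduce classes) at runtime.
        job.setJarByClass(WordCount.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
    }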
Answer by Akhilesh Singh
Your class name is geekyomega.WordCount.
You are not appending the package name. On the command line, just after the jar file name, give the fully qualified name of your job class.
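For example, assuming the jar name and HDFS paths from the question, the invocation would look like this:

hduser@ubuntu:/usr/local/hadoop$ bin/hadoop jar WordCount.jar geekyomega.WordCount /user/hduser/gutenberg /user/hduser/gutenberg-output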