hadoop mapreduce: java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same license and attribute it to the original authors (not me): StackOverflow
Original question: http://stackoverflow.com/questions/22150417/
Asked by msknapp
I am trying to write a snappy block compressed sequence file from a map-reduce job. I am using hadoop 2.0.0-cdh4.5.0 and snappy-java 1.0.4.1.
Here is my code:
package jinvestor.jhouse.mr;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.Arrays;
import java.util.List;

import jinvestor.jhouse.core.House;
import jinvestor.jhouse.core.util.HouseAvroUtil;
import jinvestor.jhouse.download.HBaseHouseDAO;

import org.apache.commons.io.IOUtils;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.SnappyCodec;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;
import org.apache.hadoop.mapreduce.Job;
import org.apache.mahout.math.DenseVector;
import org.apache.mahout.math.NamedVector;
import org.apache.mahout.math.VectorWritable;

/**
 * Produces mahout vectors from House entries in HBase.
 *
 * @author Michael Scott Knapp
 */
public class HouseVectorizer {

    private final Configuration configuration;
    private final House minimumHouse;
    private final House maximumHouse;

    public HouseVectorizer(final Configuration configuration,
            final House minimumHouse, final House maximumHouse) {
        this.configuration = configuration;
        this.minimumHouse = minimumHouse;
        this.maximumHouse = maximumHouse;
    }

    public void vectorize() throws IOException, ClassNotFoundException, InterruptedException {
        JobConf jobConf = new JobConf();
        jobConf.setMapOutputKeyClass(LongWritable.class);
        jobConf.setMapOutputValueClass(VectorWritable.class);
        // we want the vectors written straight to HDFS,
        // the order does not matter.
        jobConf.setNumReduceTasks(0);
        Path outputDir = new Path("/home/cloudera/house_vectors");
        FileSystem fs = FileSystem.get(configuration);
        if (fs.exists(outputDir)) {
            fs.delete(outputDir, true);
        }
        FileOutputFormat.setOutputPath(jobConf, outputDir);
        // I want the mappers to know the max and min value
        // so they can normalize the data.
        // I will add them as properties in the configuration,
        // by serializing them with avro.
        String minmax = HouseAvroUtil.toBase64String(Arrays.asList(minimumHouse,
                maximumHouse));
        jobConf.set("minmax", minmax);
        Job job = Job.getInstance(jobConf);
        Scan scan = new Scan();
        scan.addFamily(Bytes.toBytes("data"));
        TableMapReduceUtil.initTableMapperJob("homes", scan,
                HouseVectorizingMapper.class, LongWritable.class,
                VectorWritable.class, job);
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(VectorWritable.class);
        job.setMapOutputKeyClass(LongWritable.class);
        job.setMapOutputValueClass(VectorWritable.class);
        SequenceFileOutputFormat.setOutputCompressionType(job, SequenceFile.CompressionType.BLOCK);
        SequenceFileOutputFormat.setOutputCompressorClass(job, SnappyCodec.class);
        SequenceFileOutputFormat.setOutputPath(job, outputDir);
        job.getConfiguration().setClass("mapreduce.map.output.compress.codec",
                SnappyCodec.class,
                CompressionCodec.class);
        job.waitForCompletion(true);
    }
}
When I run it I get this:
java.lang.Exception: java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
    at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:401)
Caused by: java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
    at org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method)
    at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:62)
    at org.apache.hadoop.io.compress.SnappyCodec.getCompressorType(SnappyCodec.java:127)
    at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:104)
    at org.apache.hadoop.io.compress.CodecPool.getCompressor(CodecPool.java:118)
    at org.apache.hadoop.io.SequenceFile$Writer.init(SequenceFile.java:1169)
    at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1080)
    at org.apache.hadoop.io.SequenceFile$BlockCompressWriter.<init>(SequenceFile.java:1400)
    at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:274)
    at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:527)
    at org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat.getSequenceWriter(SequenceFileOutputFormat.java:64)
    at org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat.getRecordWriter(SequenceFileOutputFormat.java:75)
    at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.<init>(MapTask.java:617)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:737)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:338)
    at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:233)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
If I comment out these lines then my test passes:
SequenceFileOutputFormat.setOutputCompressionType(job, SequenceFile.CompressionType.BLOCK);
SequenceFileOutputFormat.setOutputCompressorClass(job, SnappyCodec.class);
job.getConfiguration().setClass("mapreduce.map.output.compress.codec",
        SnappyCodec.class,
        CompressionCodec.class);
However, I really want to use snappy compression in my sequence files. Can somebody please explain to me what I am doing wrong?
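For what it's worth, a quick way to probe the native-library situation outside the job is to ask Hadoop's NativeCodeLoader directly. A minimal sketch, assuming the Hadoop 2.x API (both methods exist on org.apache.hadoop.util.NativeCodeLoader there; the class name SnappyProbe is made up for illustration):

import org.apache.hadoop.util.NativeCodeLoader;

public class SnappyProbe {
    public static void main(String[] args) {
        // true only if libhadoop was found on java.library.path and loaded
        System.out.println("native hadoop loaded: " + NativeCodeLoader.isNativeCodeLoaded());
        if (NativeCodeLoader.isNativeCodeLoaded()) {
            // the same native call that throws UnsatisfiedLinkError in the stack trace above
            System.out.println("snappy supported: " + NativeCodeLoader.buildSupportsSnappy());
        }
    }
}

If the first line prints false, the JVM never loaded libhadoop.so, and any snappy call will fail exactly as shown above.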
Answered by msknapp
My problem was that my JRE did not contain the appropriate native libraries. This may or may not be because I switched the JDK from Cloudera's pre-built VM to JDK 1.7. The snappy .so files are in your hadoop/lib/native directory, and the JRE needs to have them. Adding them to the classpath did not seem to resolve my issue. I resolved it like this:
$ cd /usr/lib/hadoop/lib/native
$ sudo cp *.so /usr/java/latest/jre/lib/amd64/
Then I was able to use the SnappyCodec class. Your paths may be different though.
That seemed to get me to the next problem:
Caused by: java.lang.RuntimeException: native snappy library not available: SnappyCompressor has not been loaded.
Still trying to resolve that.
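One likely reason the classpath change did nothing: JNI libraries such as libhadoop.so and libsnappy.so are resolved through java.library.path (and LD_LIBRARY_PATH), not the classpath. A sketch of the non-copying alternative, using the CDH path from above (yours may differ):

$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/hadoop/lib/native
$ export JAVA_LIBRARY_PATH=$JAVA_LIBRARY_PATH:/usr/lib/hadoop/lib/native
# or pass it straight to the JVM that runs the job:
$ java -Djava.library.path=/usr/lib/hadoop/lib/native ...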
Answered by Niko
You need all the files, not only the *.so ones. Also, ideally you would include the folder in your path instead of copying the libs from there. You need to restart the MapReduce service after this, so that the new libraries are picked up and can be used.
Niko
Answered by Oleksandr Petrenko
Check your core-site.xml and mapred-site.xml; they should contain the correct properties and the path of the folder with the libraries.
core-site.xml
<property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
mapred-site.xml
<property>
    <name>mapreduce.map.output.compress</name>
    <value>true</value>
</property>
<property>
    <name>mapred.map.output.compress.codec</name>
    <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
<property>
    <name>mapreduce.admin.user.env</name>
    <value>LD_LIBRARY_PATH=/usr/hdp/2.2.0.0-1084/hadoop/lib/native</value>
</property>
LD_LIBRARY_PATH has to contain the path of libsnappy.so.
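To confirm the libraries are actually picked up after these changes, Hadoop ships a small diagnostic. A sketch, assuming a Hadoop release recent enough to include the checknative command (output format varies by version):

$ hadoop checknative -a
# look for a line like: snappy: true /usr/hdp/2.2.0.0-1084/hadoop/lib/native/libsnappy.so.1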
Answered by Pradeep Jawahar
Found the following information in the Cloudera Community:
- Ensure that LD_LIBRARY_PATH and JAVA_LIBRARY_PATH contain the native directory path holding the libsnappy.so files.
- Ensure that LD_LIBRARY_PATH and JAVA_LIBRARY_PATH have been exported in the Spark environment (spark-env.sh).
For example, I use Hortonworks HDP and have the following configuration in my spark-env.sh:
export JAVA_LIBRARY_PATH=$JAVA_LIBRARY_PATH:/usr/hdp/2.2.0.0-2041/hadoop/lib/native
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/hdp/2.2.0.0-2041/hadoop/lib/native
export SPARK_YARN_USER_ENV="JAVA_LIBRARY_PATH=$JAVA_LIBRARY_PATH,LD_LIBRARY_PATH=$LD_LIBRARY_PATH"
Answered by Jaigates
After removing hadoop.dll (which I had copied manually) from windows\system32 and setting HADOOP_HOME=\hadoop-2.6.4, it works!
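For context: on Windows, Hadoop resolves hadoop.dll through java.library.path/PATH rather than from system32, so a stray copy there can shadow the one matching your Hadoop build. A cmd sketch of the usual setup (the install path is a placeholder, not from the original answer):

set HADOOP_HOME=C:\hadoop-2.6.4
set PATH=%PATH%;%HADOOP_HOME%\bin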
Answered by staticor
In my case, check the Hive conf file mapred-site.xml, and check the value of the key mapreduce.admin.user.env.
I tested it on a new datanode and got the buildSupportsSnappy UnsatisfiedLinkError on the machine that had no native dependencies (libsnappy.so, etc.).
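A quick way to check whether a given node has the native snappy dependency at all (the directory is the CDH default used earlier; adjust for your distribution):

$ ls /usr/lib/hadoop/lib/native | grep -i snappy
# an empty result on a node is consistent with the buildSupportsSnappy error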