java - Hadoop Hive unable to move source to destination

Disclaimer: This page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/30483296/


Hadoop Hive unable to move source to destination

Tags: java, hadoop, hive, execution

Asked by sgp

I am trying to use Hive 1.2.0 over Hadoop 2.6.0. I have created an employee table. However, when I run the following query:


hive> load data local inpath '/home/abc/employeedetails' into table employee;

I get the following error:


Failed with exception Unable to move source file:/home/abc/employeedetails to destination hdfs://localhost:9000/user/hive/warehouse/employee/employeedetails_copy_1
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MoveTask

What am I doing wrong here? Are there any specific permissions that I need to set? Thanks in advance!


Accepted answer by sgp

As mentioned by Rio, the issue was a lack of permissions to load data into the Hive table. I figured out that the following command solves my problem:


hadoop fs -chmod g+w /user/hive/warehouse
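
If the error persists, it can also help to confirm that the group write bit actually took effect, and to open up the table directory itself if needed. A minimal sketch, using the paths from the error above (adjust them for your setup):

# check the current permissions on the warehouse and table directories
hdfs dfs -ls -d /user/hive/warehouse
hdfs dfs -ls -d /user/hive/warehouse/employee

# if the table directory is still not group-writable, relax it as well
hdfs dfs -chmod -R g+w /user/hive/warehouse/employee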

Answer by Rio mario

Check the permissions on the HDFS directory:


hdfs dfs -ls /user/hive/warehouse/employee/employeedetails_copy_1

It seems like you may not have permission to load data into the Hive table.

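If the listing shows the directory is owned by a different user and the group has no write access, granting ownership or group write should fix the move. A minimal sketch, assuming Hive runs as the hive user (adjust the user/group to whatever actually runs Hive on your cluster):

# assumption: HiveServer2 / the Hive CLI runs as the "hive" user
hdfs dfs -chown -R hive:hive /user/hive/warehouse/employee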

Answer by Rajesh N

The error might be due to a permission issue on the local filesystem.


Change the permissions on the local filesystem:


sudo chmod -R 777 /home/abc/employeedetails

Now, run:


hive> load data local inpath '/home/abc/employeedetails' into table employee;
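
Note that 777 is broader than strictly needed; read access for the user that actually executes the load is usually enough. A narrower sketch, assuming the Hive process runs as a different OS user than the file's owner:

# grant read (and directory-traversal) access to others without making the files world-writable
sudo chmod -R o+rX /home/abc/employeedetails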

Answer by Ramesh Muthavarapu

If we face the same error after running the above command in distributed mode, we can try the command below as a superuser on all nodes:

sudo usermod -a -G hdfs yarn

Note: we got this error after restarting all of the YARN services (in Ambari). My problem was resolved. This is an admin command, so take care when you run it.

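To confirm that the group change took effect on a node, something like the following can be run (assuming the yarn account exists on that node):

id yarn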

Answer by Hill

I met the same problem and searched for two days. Finally I found that the reason is that the datanode starts for a moment and then shuts down.


Steps to solve it:


  1. hadoop fs -chmod -R 777 /home/abc/employeedetails

  2. hadoop fs -chmod -R 777 /user/hive/warehouse/employee/employeedetails_copy_1

  3. vi hdfs-site.xml and add the following (a sketch of the property block is shown after this list):

    dfs.permissions.enabled false

  4. hdfs --daemon start datanode

  5. vi hdfs-site.xml # find the locations of 'dfs.datanode.data.dir' and 'dfs.namenode.name.dir'. If they point to the same location, you must change one of them; this is why my datanode could not start.

  6. Go to 'dfs.datanode.data.dir'/data/current, edit the VERSION file, and copy its clusterID into the clusterID in the VERSION file under 'dfs.namenode.name.dir'/data/current (a quick way to compare the two IDs is sketched after this list).

  7. start-all.sh

  8. If the above does not solve it, be careful with the steps below, since they are destructive to your data; however, they are what finally solved the problem for me.

  9. stop-all.sh

  10. Delete the data folder under 'dfs.datanode.data.dir', the data folder under 'dfs.namenode.name.dir', and the tmp folder.

  11. hdfs namenode -format

  12. start-all.sh

  13. The problem is solved.
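
For step 3, the flattened property above would normally be written into hdfs-site.xml as a <property> block. A minimal sketch (note that disabling permission checks is a blunt workaround best reserved for test clusters):

<property>
  <name>dfs.permissions.enabled</name>
  <value>false</value>
</property>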

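For step 6, one quick way to compare the two IDs is to grep the VERSION files directly (the paths below are placeholders; substitute the directories actually configured for dfs.datanode.data.dir and dfs.namenode.name.dir):

grep clusterID /path/to/datanode-data-dir/current/VERSION
grep clusterID /path/to/namenode-name-dir/current/VERSION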

Maybe you will meet another problem like this:


Problem:


org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /opt/hive/tmp/root/1be8676a-56ac-47aa-ab1c-aa63b21ce1fc. Name node is in safe mode


Method: hdfs dfsadmin -safemode leave

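Before forcing the namenode out of safe mode, it can be worth checking whether it is simply still starting up; it normally leaves safe mode on its own once enough blocks have been reported:

hdfs dfsadmin -safemode get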