Java: Why Hadoop FileSystem.get method needs to know full URI and not just scheme

Disclaimer: this page is a Chinese-English translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must likewise follow the CC BY-SA license and attribute it to the original authors (not me). Original StackOverflow URL: http://stackoverflow.com/questions/13152619/

Date: 2020-10-31 11:40:45  Source: igfitidea


Tags: java, hadoop

Asked by aruns

Is it possible to reuse an instance of Hadoop FileSystem, created from any valid HDFS URL, for reading and writing different HDFS URLs? I have tried the following:


import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

String url1 = "hdfs://localhost:54310/file1.txt";
String url2 = "hdfs://localhost:54310/file2.txt";
String url3 = "hdfs://localhost:54310/file3.txt";

Configuration conf = new Configuration();

// Creating a FileSystem using url1
FileSystem fileSystem = FileSystem.get(URI.create(url1), conf);

// Using the same FileSystem with url2 and url3
InputStream in = fileSystem.open(new Path(url2));
OutputStream out = fileSystem.create(new Path(url3));

This works. But will this cause any other issues?


Answered by Thomas Jungblut

You can certainly configure a single FileSystem with your scheme and address and then get it via FileSystem.get(conf):


Configuration conf = new Configuration();
// "fs.default.name" is the pre-Hadoop-2 name for this key;
// newer releases use "fs.defaultFS"
conf.set("fs.default.name", "hdfs://localhost:54310");
FileSystem fs = FileSystem.get(conf);
// Paths without a scheme/authority resolve against the default filesystem
InputStream is = fs.open(new Path("/file1.txt"));

Answered by octo

For paths on a different DFS, the create/open methods will fail. Look at the org.apache.hadoop.fs.FileSystem#checkPath method.

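To illustrate the point, here is a minimal sketch of the scheme/authority comparison that checkPath performs. It is a simplified re-implementation, not Hadoop's actual code (the real method also handles default ports and canonicalization), and the hostname otherhost is a hypothetical second cluster:

```java
import java.net.URI;

public class CheckPathSketch {
    // Simplified version of the check FileSystem#checkPath performs:
    // a path's scheme and authority must match the filesystem's own URI.
    // A null scheme/authority means "use the filesystem's defaults".
    static void checkPath(URI fsUri, URI path) {
        String scheme = path.getScheme();
        String authority = path.getAuthority();
        if (scheme == null && authority == null) {
            return; // relative path: always resolved against this filesystem
        }
        boolean sameScheme = scheme == null
                || scheme.equalsIgnoreCase(fsUri.getScheme());
        boolean sameAuthority = authority == null
                || authority.equalsIgnoreCase(fsUri.getAuthority());
        if (!(sameScheme && sameAuthority)) {
            throw new IllegalArgumentException(
                    "Wrong FS: " + path + ", expected: " + fsUri);
        }
    }

    public static void main(String[] args) {
        URI fs = URI.create("hdfs://localhost:54310");
        checkPath(fs, URI.create("hdfs://localhost:54310/file2.txt")); // accepted
        checkPath(fs, URI.create("/file1.txt"));                       // accepted
        try {
            // different authority: rejected before any RPC is made
            checkPath(fs, URI.create("hdfs://otherhost:54310/file3.txt"));
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

This is why a FileSystem instance created for one NameNode address can be reused for any path on that same NameNode, but throws "Wrong FS" for a path pointing at a different one.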