ElasticSearch Java API: NoNodeAvailableException: No node available
Disclaimer: this page is a Chinese-English parallel translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same license, cite the original URL, and attribute it to the original authors (not me): StackOverflow
Original URL: http://stackoverflow.com/questions/23520684/
ElasticSearch Java API: NoNodeAvailableException: No node available
Asked by Joe
public static void main(String[] args) throws IOException {
    Settings settings = ImmutableSettings.settingsBuilder()
            .put("cluster.name", "foxzen")
            .put("node.name", "yu").build();
    Client client = new TransportClient(settings)
            .addTransportAddress(new InetSocketTransportAddress("XXX.XXX.XXX.XXX", 9200));
    // XXX is my server's ip address
    IndexResponse response = client.prepareIndex("twitter", "tweet")
            .setSource(XContentFactory.jsonBuilder()
                    .startObject()
                    .field("productId", "1")
                    .field("productName", "XXX").endObject())
            .execute().actionGet();
    System.out.println(response.getIndex());
    System.out.println(response.getType());
    System.out.println(response.getVersion());
    client.close();
}
I access the server from my computer:
curl -get http://XXX.XXX.XXX.XXX:9200/
and get this:
{
"status" : 200,
"name" : "yu",
"version" : {
"number" : "1.1.0",
"build_hash" : "2181e113dea80b4a9e31e58e9686658a2d46e363",
"build_timestamp" : "2014-03-25T15:59:51Z",
"build_snapshot" : false,
"lucene_version" : "4.7"
},
"tagline" : "You Know, for Search"
}
Why do I get an error when using the Java API?
EDIT
Here is the cluster and node part of the elasticsearch.yml config:
################################### Cluster ###################################
# Cluster name identifies your cluster for auto-discovery. If you're running
# multiple clusters on the same network, make sure you're using unique names.
#
cluster.name: foxzen
#################################### Node #####################################
# Node names are generated dynamically on startup, so you're relieved
# from configuring them manually. You can tie this node to a specific name:
#
node.name: yu
Accepted answer by hudsonb
Some suggestions:
1 - Use port 9300. [9300-9400] is for node-to-node communication, [9200-9300] is for HTTP traffic.
2 - Ensure the version of the Java API you are using matches the version of elasticsearch running on the server.
3 - Ensure that the name of your cluster is foxzen (check the elasticsearch.yml on the server).
4 - Remove put("node.name", "yu"): you aren't joining the cluster as a node since you are using the TransportClient, and even if you were, it appears your server node is already named yu, so you would want a different node name in any case. (A sketch combining suggestions 1, 3 and 4 follows below.)
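Putting suggestions 1, 3 and 4 together, a minimal sketch of the connection code (assuming the same 1.x Java API and placeholder IP as in the question) might look like this:

// Only the cluster name is required; node.name is not needed for a TransportClient
Settings settings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", "foxzen")
        .build();
// Connect to the transport port 9300 rather than the HTTP port 9200
Client client = new TransportClient(settings)
        .addTransportAddress(new InetSocketTransportAddress("XXX.XXX.XXX.XXX", 9300));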
Answer by John Petrone
You need to change your code to use port 9300. The correct line would be:
Client client = new TransportClient(settings)
.addTransportAddress(new InetSocketTransportAddress("XXX.XXX.XXX.XXX", 9300));
The reason is that the Java API uses the internal transport layer used for inter-node communication, which defaults to port 9300. Port 9200 is the default for the REST API interface. This is a common issue to run into. Check the sample code towards the bottom of the page, under Transport Client:
http://www.elasticsearch.org/guide/en/elasticsearch/client/java-api/current/client.html
// on startup
Client client = new TransportClient()
.addTransportAddress(new InetSocketTransportAddress("host1", 9300))
.addTransportAddress(new InetSocketTransportAddress("host2", 9300));
// on shutdown
client.close();
Answer by Amit Goldstein
I assume that you are setting up the ES server on a remote host? In that case you will need to bind the publish address to the host's public IP address.
On your ES host, edit /etc/elasticsearch/elasticsearch.yml and add its public IP after network.publish_host:
# Set the address other nodes will use to communicate with this node. If not
# set, it is automatically derived. It must point to an actual IP address.
#
network.publish_host: 192.168.0.1
Then, in your code, connect to this host on port 9300. Note that you need the IP and not the domain name (at least according to my experience on Amazon EC2).
Answer by rrudland
If you are still having issues, even when using port 9300, and everything else seems to be configured correctly, try using an older version of elasticsearch.
I was getting this same error while using elasticsearch version 2.2.0, but as soon as I rolled back to version 1.7.5, my problem magically went away. Here's a link to someone else having this issue : older version solves problem
Answer by Joseph Lust
For folks with similar problems: I received this error because I had not set cluster.name in the TransportClient builder. After adding the property, everything worked. (A sketch of what that looks like is below.)
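As an illustration only (not the answerer's exact code), a sketch of setting the cluster name on a 1.x TransportClient, using the foxzen name from the question:

// Without cluster.name the TransportClient expects the default name "elasticsearch",
// so nodes of a differently named cluster are ignored and NoNodeAvailableException follows.
Settings settings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", "foxzen")
        .build();
// Alternative: .put("client.transport.ignore_cluster_name", true) to skip the name check
Client client = new TransportClient(settings)
        .addTransportAddress(new InetSocketTransportAddress("XXX.XXX.XXX.XXX", 9300));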
Answer by btpka3
I met this error too. I use ElasticSearch 2.4.1 as a standalone server (single node) in Docker, programming with Grails 3/spring-data-elasticsearch. My fix was setting client.transport.sniff to false. Here is my core config:
application.yml
spring.data.elasticsearch:
    cluster-name: "my-es"
    cluster-nodes: "localhost:9300"
    properties:
        "client.transport.ignore_cluster_name": true
        "client.transport.nodes_sampler_interval": "5s"
        "client.transport.ping_timeout": "5s"
        "client.transport.sniff": false # XXX : notice here
    repositories.enabled: false
See this
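For readers not using Spring Data, a rough plain-Java equivalent of that configuration (a sketch only, assuming the 2.x TransportClient API matching the 2.4.1 server mentioned above):

// Disable sniffing so the client only talks to the address given below
Settings settings = Settings.settingsBuilder()
        .put("cluster.name", "my-es")
        .put("client.transport.ignore_cluster_name", true)
        .put("client.transport.sniff", false)
        .build();
Client client = TransportClient.builder().settings(settings).build()
        .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("localhost"), 9300));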
Answer by Kondal Kolipaka
Another reason could be that your Elasticsearch Java client is a different version from your Elasticsearch server.
The Elasticsearch Java client version is simply the version of the elasticsearch jar in your code base.
For example: in my code it's elasticsearch-2.4.0.jar
To verify the Elasticsearch server version:
$ /Users/kkolipaka/elasticsearch/bin/elasticsearch -version
Version: 5.2.2, Build: f9d9b74/2017-02-24T17:26:45.835Z, JVM: 1.8.0_111
As you can see, I had downloaded the latest version of the Elastic server (5.2.2) but forgot to update the ES Java API client from version 2.4.0: https://www.elastic.co/guide/en/elasticsearch/client/java-api/current/client.html
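A quick way to check which client version is actually on the classpath is to print the Version constant from the elasticsearch jar and compare it with the server output above (a sketch; the constant exists across these client releases):

// Prints the version of the elasticsearch client jar, e.g. "2.4.0"
System.out.println(org.elasticsearch.Version.CURRENT);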
Answer by N. Kudryavtsev
Another solution may be to explicitly include io.netty.netty-all in the project dependencies.
When addTransportAddresses is called, the method nodesSampler.sample() is executed, and the added addresses are checked for availability there. In my case, a try-catch block swallowed a ConnectTransportException because the method io.netty.channel.DefaultChannelId.newInstance() could not be found, so the added node was simply not treated as available.