Java: Connect to Kafka running in Docker
Disclaimer: this page is a translation of a popular Stack Overflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same license and attribute it to the original authors (not me) on Stack Overflow.
Original URL: http://stackoverflow.com/questions/51630260/
Connect to Kafka running in Docker
Asked by Sasha Shpota
I set up a single-node Kafka Docker container on my local machine as described in the Confluent documentation (steps 2-3).
In addition, I also exposed Zookeeper's port 2181 and Kafka's port 9092 so that I can connect to them from a client running on the local machine:
$ docker run -d \
-p 2181:2181 \
--net=confluent \
--name=zookeeper \
-e ZOOKEEPER_CLIENT_PORT=2181 \
confluentinc/cp-zookeeper:4.1.0
$ docker run -d \
--net=confluent \
--name=kafka \
-p 9092:9092 \
-e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092 \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
confluentinc/cp-kafka:4.1.0
Problem: When I try to connect to Kafka from the host machine, the connection fails because it can't resolve address: kafka:9092.
Here is my Java code:
Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("client.id", "KafkaExampleProducer");
props.put("key.serializer", LongSerializer.class.getName());
props.put("value.serializer", StringSerializer.class.getName());
KafkaProducer<Long, String> producer = new KafkaProducer<>(props);
ProducerRecord<Long, String> record = new ProducerRecord<>("foo", 1L, "Test 1");
producer.send(record).get();
producer.flush();
The exception:
java.io.IOException: Can't resolve address: kafka:9092
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:235) ~[kafka-clients-2.0.0.jar:na]
at org.apache.kafka.common.network.Selector.connect(Selector.java:214) ~[kafka-clients-2.0.0.jar:na]
at org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:864) [kafka-clients-2.0.0.jar:na]
at org.apache.kafka.clients.NetworkClient.ready(NetworkClient.java:265) [kafka-clients-2.0.0.jar:na]
at org.apache.kafka.clients.producer.internals.Sender.sendProducerData(Sender.java:266) [kafka-clients-2.0.0.jar:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:238) [kafka-clients-2.0.0.jar:na]
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:176) [kafka-clients-2.0.0.jar:na]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_144]
Caused by: java.nio.channels.UnresolvedAddressException: null
at sun.nio.ch.Net.checkAddress(Net.java:101) ~[na:1.8.0_144]
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:622) ~[na:1.8.0_144]
at org.apache.kafka.common.network.Selector.doConnect(Selector.java:233) ~[kafka-clients-2.0.0.jar:na]
... 7 common frames omitted
Question: How do I connect to Kafka running in Docker? My code runs on the host machine, not in Docker.
Note: I know that I could theoretically play around with DNS setup and /etc/hosts, but that is a workaround; it shouldn't have to be like that.
There is also a similar question here, but it is based on the ches/kafka image. I use a confluentinc-based image, which is not the same.
Accepted answer by OneCricketeer
Disclaimer
tl;dr: At the end of the day, it's all the same Apache Kafka running in a container. You're just dependent on how it is configured, and on which variables make it so.
- The following uses confluentinc docker images, not wurstmeister/kafka; although there is a similar configuration, I have not tried it. If using that image, read their Connectivity wiki.
- Nothing against the wurstmeister image, but it's community-maintained, not built in an automated CI/CD release... Bitnami ones are similarly minimalistic and run in multiple cloud providers. For bitnami Kafka images, refer to their README.
- debezium/kafka docs on it are mentioned here. Note: the advertised host and port settings are deprecated; advertised listeners cover both.
- spotify/kafka is deprecated and outdated.
- fast-data-dev is great for an all-in-one solution, but it is bloated.
- For supplemental reading, a fully functional docker-compose, and network diagrams, see this blog by @rmoff.
Answer
The Confluent quickstart (Docker) document assumes all produce and consume requests will be within the Docker network.
You could fix the problem by running your Kafka client code within its own container, but otherwise you'll need to add some more environment variables to expose the container externally, while still having it work within the Docker network.
First, add a protocol mapping of PLAINTEXT_HOST:PLAINTEXT that will map the listener protocol to a Kafka protocol:
-e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
Then set up two advertised listeners on different ports (kafka:9092 here refers to the Docker container name). Notice that the protocols match the right-hand values of the mappings above:
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
When running the container, add -p 29092:29092 for the host port mapping.
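Combining the flags above with the question's original command gives an invocation roughly like the following (a sketch only; the image tag, network, and container names are taken from the question's setup):
$ docker run -d \
--net=confluent \
--name=kafka \
-p 9092:9092 \
-p 29092:29092 \
-e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 \
-e KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
confluentinc/cp-kafka:4.1.0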
tl;dr (with the above settings)
When running any Kafka client outside the Docker network (including CLI tools you might have installed locally), use localhost:29092 for bootstrap servers and localhost:2181 for Zookeeper.
When running an app inside the Docker network, use kafka:9092 for bootstrap servers and zookeeper:2181 for Zookeeper.
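Applied to the question's producer, the only change needed when the code runs on the host is the bootstrap address (a sketch; the rest of the question's snippet stays as it is):

// Host-side client: connect through the PLAINTEXT_HOST listener mapped to port 29092
props.put("bootstrap.servers", "localhost:29092");
// A client running inside the Docker network would instead use:
// props.put("bootstrap.servers", "kafka:9092");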
See the example Compose file for the full Confluent stack
Appendix
For anyone interested in Kubernetes deployments: https://operatorhub.io/?keyword=Kafka
Answered by wargre
When you first connect to a Kafka node, it will give you back all the Kafka nodes and the URLs to connect to, and then your application will try to connect to every broker directly.
The issue is always: what URL will Kafka give you? That's why there is KAFKA_ADVERTISED_LISTENERS, which Kafka uses to tell the world how it can be accessed.
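A quick way to see exactly what a broker advertises is to ask it for its metadata, for example with kafkacat (assuming it is installed on the host, and using the host-mapped listener from the accepted answer above):

kafkacat -b localhost:29092 -L

The broker lines in the output show the advertised host:port that clients are told to connect to.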
Now, for your use case, there are multiple small things to think about:
Let's say you set plaintext://kafka:9092
- This is OK if you have an application in your docker-compose that uses Kafka. That application will get from Kafka a URL containing the name kafka, which is resolvable through the Docker network.
- If you try to connect from your main system, or from another container that is not in the same Docker network, this will fail, as the kafka name cannot be resolved.
==> To fix this, you would need a dedicated DNS server, such as a service-discovery one, but that is a lot of trouble for small stuff. Or you manually map the kafka name to the container IP in each /etc/hosts.
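For the /etc/hosts route, a hypothetical entry on a Linux host (where container IPs are routable) could look like the following; the IP is illustrative and must be replaced with the container's actual address, which docker inspect can print:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' kafka
# then add a line like this to /etc/hosts (172.18.0.3 is only an example)
172.18.0.3   kafka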
If you set plaintext://localhost:9092
- This will be OK on your system if you have a port mapping (-p 9092:9092 when launching Kafka).
- This will fail if you test from an application in a container (whether it is in the same Docker network or not), because localhost is the container itself, not the Kafka one.
==> If you have this and wish to use a Kafka client in another container, one way to fix it is to share the network between both containers (same IP).
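One way to share the Kafka container's network is Docker's container: network mode, for example (my-client-image here is a hypothetical image name):

# the client container reuses kafka's network stack, so localhost:9092 reaches the broker
docker run --rm --network container:kafka my-client-image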
Last option: set an IP in the name: plaintext://x.y.z.a:9092
This will be OK for everybody... BUT how can you get the x.y.z.a address?
The only way is to hardcode this IP when you launch the container: docker run .... --net confluent --ip 10.x.y.z .... Note that you need to adapt the IP to a valid IP in the confluent subnet.
Answered by İbrahim Ersin Yavaş
First, Zookeeper:
- docker container run --name zookeeper -p 2181:2181 zookeeper
Then, Kafka:
- docker container run --name kafka -p 9092:9092 -e KAFKA_ZOOKEEPER_CONNECT=192.168.8.128:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://ip_address_of_your_computer_but_not_localhost!!!:9092 -e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 confluentinc/cp-kafka
In the Kafka consumer and producer config:
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.ProducerFactory;

@Bean
public ProducerFactory<String, String> producerFactory() {
    // Bootstrap against the advertised host IP, not "localhost" or the container name
    Map<String, Object> configProps = new HashMap<>();
    configProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.8.128:9092");
    configProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    configProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    return new DefaultKafkaProducerFactory<>(configProps);
}

@Bean
public ConsumerFactory<String, String> consumerFactory() {
    // Same bootstrap address for the consumer side
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.8.128:9092");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "group_id");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(props);
}
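On top of these beans, a KafkaTemplate and a listener container factory are typically wired up roughly as follows (a sketch; the bean names are illustrative):

import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.KafkaTemplate;

@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
    // Sends messages through the producer factory defined above
    return new KafkaTemplate<>(producerFactory());
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
    // Container factory used by @KafkaListener methods to consume messages
    ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    return factory;
}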
I run my project with this configuration. Good luck, dude.