java - KafkaSpout working example

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must follow the same license and attribute it to the original authors (not me). Original StackOverflow question: http://stackoverflow.com/questions/30269441/


KafkaSpout working example

java, apache-kafka, apache-storm

Asked by user3798920

I recently got familiar with Apache Kafka and have a working example of a producer-consumer.

My next step is to integrate Kafka with Spout and Bolt, and I am having a hard time getting the available examples (they are mostly old) working locally.

I got the following example working: storm-book/examples-ch02-getting_started, which reads data from a local text file.

The same repo has an example for storm-book/examples-ch04-spouts kafka-spout, but I am not able to get it to work.

I tried the cep.kafka example as well but got the following error:

5034 [Thread-11-words] INFO  org.apache.curator.framework.imps.CuratorFrameworkImpl - Starting
5047 [Thread-11-words] ERROR backtype.storm.util - Async loop died!
java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
        at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[curator-client-2.4.0.jar:na]
        at org.apache.curator.framework.imps.CuratorFrameworkImpl.newZooKeeper(CuratorFrameworkImpl.java:169) ~[curator-framework-2.4.0.jar:na]
        at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:94) ~[curator-client-2.4.0.jar:na]
        at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[curator-client-2.4.0.jar:na]
        at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[curator-client-2.4.0.jar:na]
        at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[curator-client-2.4.0.jar:na]
        at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:188) ~[curator-client-2.4.0.jar:na]
        at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:234) ~[curator-framework-2.4.0.jar:na]
        at storm.kafka.ZkState.<init>(ZkState.java:62) ~[storm-kafka-0.9.2-incubating.jar:0.9.2-incubating]
        at storm.kafka.KafkaSpout.open(KafkaSpout.java:85) ~[storm-kafka-0.9.2-incubating.jar:0.9.2-incubating]
        at backtype.storm.daemon.executor$fn__3371$fn__3386.invoke(executor.clj:522) ~[storm-core-0.9.4.jar:0.9.4]
        at backtype.storm.util$async_loop$fn__460.invoke(util.clj:461) ~[storm-core-0.9.4.jar:0.9.4]
        at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_05]
5049 [Thread-11-words] ERROR backtype.storm.daemon.executor -
java.lang.NoSuchMethodError: org.apache.zookeeper.ZooKeeper.<init>(Ljava/lang/String;ILorg/apache/zookeeper/Watcher;Z)V
        at org.apache.curator.utils.DefaultZookeeperFactory.newZooKeeper(DefaultZookeeperFactory.java:29) ~[curator-client-2.4.0.jar:na]
        at org.apache.curator.framework.imps.CuratorFrameworkImpl.newZooKeeper(CuratorFrameworkImpl.java:169) ~[curator-framework-2.4.0.jar:na]
        at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:94) ~[curator-client-2.4.0.jar:na]
        at org.apache.curator.HandleHolder.getZooKeeper(HandleHolder.java:55) ~[curator-client-2.4.0.jar:na]
        at org.apache.curator.ConnectionState.reset(ConnectionState.java:219) ~[curator-client-2.4.0.jar:na]
        at org.apache.curator.ConnectionState.start(ConnectionState.java:103) ~[curator-client-2.4.0.jar:na]
        at org.apache.curator.CuratorZookeeperClient.start(CuratorZookeeperClient.java:188) ~[curator-client-2.4.0.jar:na]
        at org.apache.curator.framework.imps.CuratorFrameworkImpl.start(CuratorFrameworkImpl.java:234) ~[curator-framework-2.4.0.jar:na]
        at storm.kafka.ZkState.<init>(ZkState.java:62) ~[storm-kafka-0.9.2-incubating.jar:0.9.2-incubating]
        at storm.kafka.KafkaSpout.open(KafkaSpout.java:85) ~[storm-kafka-0.9.2-incubating.jar:0.9.2-incubating]
        at backtype.storm.daemon.executor$fn__3371$fn__3386.invoke(executor.clj:522) ~[storm-core-0.9.4.jar:0.9.4]
        at backtype.storm.util$async_loop$fn__460.invoke(util.clj:461) ~[storm-core-0.9.4.jar:0.9.4]
        at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_05]
5088 [Thread-11-words] ERROR backtype.storm.util - Halting process: ("Worker died")
java.lang.RuntimeException: ("Worker died")
        at backtype.storm.util$exit_process_BANG_.doInvoke(util.clj:325) [storm-core-0.9.4.jar:0.9.4]
        at clojure.lang.RestFn.invoke(RestFn.java:423) [clojure-1.5.1.jar:na]
        at backtype.storm.daemon.worker$fn__4693$fn__4694.invoke(worker.clj:491) [storm-core-0.9.4.jar:0.9.4]
        at backtype.storm.daemon.executor$mk_executor_data$fn__3272$fn__3273.invoke(executor.clj:240) [storm-core-0.9.4.jar:0.9.4]
        at backtype.storm.util$async_loop$fn__460.invoke(util.clj:473) [storm-core-0.9.4.jar:0.9.4]
        at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_05]

Answered by jbarrueta

When I was having the same issue while learning how to create and run a KafkaSpout, I found this GitHub repo very useful, and I was able to have my KafkaSpout emitting tuples to the rest of the bolts.

This is a high-level sample of how I create my topology for this.

import java.util.Arrays;

import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.spout.SchemeAsMultiScheme;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.tuple.Fields;

import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.ZkHosts;

// JsonScheme and RollingCountBolt come from the example project referenced above
// (their imports are omitted here).

public class TestTopology {

    public static void main(String[] args) {

        // ZooKeeper and Nimbus run on the same host in this setup
        String zkIp = "192.168.59.103";
        String nimbusHost = "192.168.59.103";
        String zookeeperHost = zkIp + ":2181";

        ZkHosts zkHosts = new ZkHosts(zookeeperHost);

        // Args: ZooKeeper hosts, Kafka topic, ZooKeeper root path for offsets, spout/consumer id
        SpoutConfig kafkaConfig = new SpoutConfig(zkHosts, "myKafkaTopic", "", "storm");

        // Wrap the project-provided JsonScheme so the spout emits a single "events" field
        kafkaConfig.scheme = new SchemeAsMultiScheme(new JsonScheme() {
            @Override
            public Fields getOutputFields() {
                return new Fields("events");
            }
        });

        KafkaSpout kafkaSpout = new KafkaSpout(kafkaConfig);

        TopologyBuilder builder = new TopologyBuilder();

        builder.setSpout("eventsEmitter", kafkaSpout, 8);

        // Group on the field emitted by the spout declared above
        builder.setBolt("eventsProcessor", new RollingCountBolt(2, 1), 8)
                .fieldsGrouping("eventsEmitter", new Fields("events"));

        // More bolts here

        Config config = new Config();

        config.setMaxTaskParallelism(5);
        config.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, 2);
        config.put(Config.NIMBUS_HOST, nimbusHost);
        config.put(Config.NIMBUS_THRIFT_PORT, 6627);
        config.put(Config.STORM_ZOOKEEPER_PORT, 2181);
        config.put(Config.STORM_ZOOKEEPER_SERVERS, Arrays.asList(zkIp));

        try {
            StormSubmitter.submitTopology("my-topology", config, builder.createTopology());
        } catch (Exception e) {
            throw new IllegalStateException("Couldn't initialize the topology", e);
        }
    }
}
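
If you only want to verify locally that the spout is emitting tuples (which is what the question is after), a minimal sketch along the following lines runs the same kind of topology in-process with LocalCluster instead of submitting to Nimbus. This is only a sketch, not the answer author's code: it assumes Kafka and ZooKeeper are running locally on 127.0.0.1, uses the built-in storm.kafka.StringScheme instead of JsonScheme, and the class names LocalKafkaTopology and PrinterBolt are made up for illustration.

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.spout.SchemeAsMultiScheme;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.tuple.Tuple;

import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.StringScheme;
import storm.kafka.ZkHosts;

public class LocalKafkaTopology {

    // Terminal bolt that simply prints every tuple emitted by the KafkaSpout
    public static class PrinterBolt extends BaseBasicBolt {
        @Override
        public void execute(Tuple tuple, BasicOutputCollector collector) {
            System.out.println(tuple);
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            // emits nothing
        }
    }

    public static void main(String[] args) throws Exception {
        // Assumes a local ZooKeeper (with Kafka brokers registered in it) on 127.0.0.1:2181
        ZkHosts zkHosts = new ZkHosts("127.0.0.1:2181");
        SpoutConfig kafkaConfig = new SpoutConfig(zkHosts, "myKafkaTopic", "", "storm");
        // StringScheme emits each Kafka message as a single "str" field
        kafkaConfig.scheme = new SchemeAsMultiScheme(new StringScheme());
        KafkaSpout kafkaSpout = new KafkaSpout(kafkaConfig);

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("eventsEmitter", kafkaSpout, 1);
        builder.setBolt("printer", new PrinterBolt(), 1)
                .shuffleGrouping("eventsEmitter");

        Config config = new Config();
        config.setDebug(true);

        // Run inside the current JVM instead of submitting to a remote Nimbus
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("local-kafka-test", config, builder.createTopology());
        Thread.sleep(60000); // let it consume for a minute
        cluster.shutdown();
    }
}

Running this from the IDE (or with the storm-kafka and storm-core 0.9.x dependencies on the classpath) should print the raw Kafka messages to stdout if the spout is wired up correctly; swap PrinterBolt for your real bolts once that works.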

Hope this helps,

Jose Luis
