System Environment

| Component | Version |
|---|---|
| CentOS | 6.5 x64 |
| ZooKeeper | 3.4.5 |
| Kafka | 2.10-0.8.1.1 |
Single-Node Installation
Download Kafka and extract it:
tar zxvf kafka_2.10-0.8.1.1.tar.gz
cd kafka_2.10-0.8.1.1/
Start ZooKeeper and Kafka with the default configuration:
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties
Create a topic named "test":
./bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
# List existing topics
./bin/kafka-topics.sh --list --zookeeper localhost:2181
Start a console producer:
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
Start a console consumer:
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
Cluster Configuration
Copy the default configuration to server-1.properties, then edit the new file:
cp config/server.properties config/server-1.properties
The key settings to change:
# unique ID for this broker; must differ on every node
broker.id=0
# directory where Kafka stores its log segments
log.dirs=/home/hadoop/development/src/kafka_2.10-0.8.1.1/logs
# the ZooKeeper ensemble shared by all brokers
zookeeper.connect=canbot130:2181,canbot131:2181,canbot132:2181
After editing, copy the whole directory to the other nodes:
scp -r ./kafka_2.10-0.8.1.1/ hadoop@canbot131:/home/hadoop/development/src/
scp -r ./kafka_2.10-0.8.1.1/ hadoop@canbot132:/home/hadoop/development/src/
After copying, edit broker.id in server-1.properties on each node so that every broker has a unique ID:
broker.id=0 192.168.2.130 (canbot130)
broker.id=1 192.168.2.131 (canbot131)
broker.id=2 192.168.2.132 (canbot132)
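The per-node edits above can also be scripted rather than done by hand. Below is a minimal sketch that stamps a unique broker.id into a copy of the template config for each broker; the stand-in template contents and the temp directory are illustrative assumptions, not the real (much longer) config/server.properties:

```shell
#!/bin/sh
# Sketch: generate one server-N.properties per broker from a template,
# so broker.id never has to be edited by hand on each node.
set -e

workdir=$(mktemp -d)

# Stand-in for config/server.properties; a real file has many more keys.
cat > "$workdir/server.properties" <<'EOF'
broker.id=0
log.dirs=/home/hadoop/development/src/kafka_2.10-0.8.1.1/logs
zookeeper.connect=canbot130:2181,canbot131:2181,canbot132:2181
EOF

# One config per broker id (0, 1, 2), differing only in broker.id.
for id in 0 1 2; do
  sed "s/^broker\.id=.*/broker.id=$id/" \
    "$workdir/server.properties" > "$workdir/server-$id.properties"
done

# Show the resulting broker.id of each generated file.
grep '^broker.id=' "$workdir"/server-*.properties
```

The generated files could then be pushed to the matching nodes with scp as shown above.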
Start Kafka
Start the broker on all three nodes (canbot130, canbot131, and canbot132):
./kafka_2.10-0.8.1.1/bin/kafka-server-start.sh ./kafka_2.10-0.8.1.1/config/server-1.properties &
Create a Cluster Topic
[hadoop@canbot130 kafka_2.10-0.8.1.1]$./bin/kafka-topics.sh --create --zookeeper canbot130:2181 --replication-factor 3 --partitions 1 --topic test
The following output indicates the topic was created successfully:
Created topic "test".
View the topic list:
[hadoop@canbot130 kafka_2.10-0.8.1.1]$ ./bin/kafka-topics.sh --list --zookeeper canbot130:2181
test
[hadoop@canbot130 kafka_2.10-0.8.1.1]$
Create a Producer
./bin/kafka-console-producer.sh --broker-list canbot130:9092 --topic test
This starts a producer on canbot130. Next, start a consumer on canbot132 and check whether the messages are consumed.
Create a Consumer
Run on canbot132:
./bin/kafka-console-consumer.sh --zookeeper canbot130:2181 --topic test
Produce Messages ==> Consume Messages
The producer on canbot130:
[hadoop@canbot130 kafka_2.10-0.8.1.1]$ ./bin/kafka-console-producer.sh --broker-list canbot130:9092 --topic test
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
"holl"
[2016-05-31 04:27:30,770] INFO Closing socket connection to /192.168.2.130. (kafka.network.Processor)
"hao xiang shi tong bu l haha"
"test kafka"
Output from the consumer on canbot132:
[hadoop@canbot132 kafka_2.10-0.8.1.1]$ ./bin/kafka-console-consumer.sh --zookeeper canbot130:2181 --topic test
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
[2016-05-31 04:27:22,328] INFO Closing socket connection to /192.168.2.132. (kafka.network.Processor)
holl
hao xiang shi tong bu l haha
test kafka
Troubleshooting
Error 1
java.lang.RuntimeException: A broker is already registered on the path /brokers/ids/1. This probably indicates that you either have configured a brokerid that is already in use, or else you have shutdown this broker and restarted it faster than the zookeeper timeout so it appears to be re-registering.
at kafka.utils.ZkUtils$.registerBrokerInZk(ZkUtils.scala:205)
at kafka.server.KafkaHealthcheck.register(KafkaHealthcheck.scala:57)
at kafka.server.KafkaHealthcheck.startup(KafkaHealthcheck.scala:44)
at kafka.server.KafkaServer.startup(KafkaServer.scala:103)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:34)
at kafka.Kafka$.main(Kafka.scala:46)
at kafka.Kafka.main(Kafka.scala)
Fix: this error is caused by a duplicate broker.id in server.properties; make sure every broker in the cluster has a unique broker.id. (As the message notes, it can also appear when a broker is restarted faster than the ZooKeeper session timeout; in that case, wait for the timeout to expire and start it again.)
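A quick preflight check can catch the duplicate-ID case before any broker starts. The sketch below greps broker.id out of a set of sample config files and reports any value used more than once; the temp directory and the three stand-in files (one with a deliberate clash) are assumptions for illustration:

```shell
#!/bin/sh
# Sketch: detect duplicate broker.id values across per-node config files.
set -e

confdir=$(mktemp -d)
# Stand-ins for the server-1.properties copied to each node.
printf 'broker.id=0\n' > "$confdir/canbot130.properties"
printf 'broker.id=1\n' > "$confdir/canbot131.properties"
printf 'broker.id=1\n' > "$confdir/canbot132.properties"   # deliberate clash

# -h drops filenames so identical lines collapse; uniq -d keeps repeats.
dups=$(grep -h '^broker.id=' "$confdir"/*.properties | sort | uniq -d)
if [ -n "$dups" ]; then
  echo "duplicate broker.id detected: $dups"
else
  echo "all broker.id values are unique"
fi
```

Run against the real config files collected from each node, an empty result means the cluster is safe to start.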