Deploying an Apache Kafka 3.9 Cluster with Ansible + Docker
1. Preparation
1.1 Host List
IP | Hostname | Memory (GB) | CPU Cores | Disk | OS | CPU Architecture
---|---|---|---|---|---|---
10.0.0.13 | arc-pro-dc01, my.registry.com | 16 | 1 | 500GB | CentOS 7.9.2009 | x86_64
10.0.0.14 | arc-pro-dc02 | 16 | 1 | 500GB | CentOS 7.9.2009 | x86_64
10.0.0.15 | arc-pro-dc03 | 16 | 1 | 500GB | CentOS 7.9.2009 | x86_64
1.2 Installed Services
Service | Version | arc-pro-dc01 | arc-pro-dc02 | arc-pro-dc03
---|---|---|---|---
Ansible | 2.9.27 | ✅ | |
Harbor | v2.13.2 | ✅ | |
Docker | 28.1.1 | ✅ | ✅ | ✅
Docker Compose | v2.39.2 | ✅ | ✅ | ✅
Notes:
- Every server has a static IP.
- The firewall is disabled on every server.
- SELinux is disabled on every server.
- Every server has an administrator user named admin that can run sudo without a password.
- From arc-pro-dc01, the admin user can SSH to the other servers without a password.
- Time is synchronized across the servers.
- All operations are performed as the admin user, and the Kafka cluster is owned by admin.
- Private image registry: https://my.registry.com:10443.
To bring the cluster in line with the requirements above, configure it by following these articles (a quick verification sketch follows the list):
- Installing a CentOS 7 Virtual Machine with VMware Workstation
- Basic CentOS 7 OS Configuration in Bulk with Ansible
- Installing Docker in Bulk with Ansible
- Installing and Deploying a Harbor Private Image Registry with Signed Certificates
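Once the servers are configured, the prerequisites can be spot-checked from arc-pro-dc01. A minimal sketch (hostnames taken from the table above; any equivalent check works):

# Verify passwordless SSH, passwordless sudo, and that SELinux is disabled
for h in arc-pro-dc02 arc-pro-dc03; do
  ssh admin@$h 'hostname; sudo -n true && echo "passwordless sudo OK"; getenforce'
done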
1.3 Kafka Cluster Plan
Service | Version | arc-pro-dc01 | arc-pro-dc02 | arc-pro-dc03
---|---|---|---|---
Kafka | 3.9.1 | Kafka Broker + Controller | Kafka Broker + Controller | Kafka Broker + Controller
1.4 Image Preparation
On any server that has internet access and Docker installed, pull and export the image:
docker pull apache/kafka:3.9.1
docker image save apache/kafka:3.9.1 -o kafka.3.9.1.tar.gz
Upload kafka.3.9.1.tar.gz to any server in the cluster (for example, with scp as sketched below), then import the image:
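A hypothetical transfer to arc-pro-dc01; any copy method works, and the target path is only an example:

scp kafka.3.9.1.tar.gz admin@10.0.0.13:/home/admin/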
docker load -i kafka.3.9.1.tar.gz
docker tag apache/kafka:3.9.1 my.registry.com:10443/library/apache/kafka:3.9.1
# Push to the private registry
docker push my.registry.com:10443/library/apache/kafka:3.9.1
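Optionally, confirm that the other nodes can pull the image from Harbor. This assumes the registry's CA certificate is already trusted by Docker on each node, as covered by the Harbor article above:

ssh admin@arc-pro-dc02 'docker pull my.registry.com:10443/library/apache/kafka:3.9.1'
ssh admin@arc-pro-dc03 'docker pull my.registry.com:10443/library/apache/kafka:3.9.1'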
2. Ansible Files
2.1 Ansible Directory Layout
Note: on arc-pro-dc01, the base directory for running ansible commands is /home/admin/ansible.
$ tree /home/admin/ansible/
/home/admin/ansible/
├── ansible.cfg
├── hosts
└── kafka
    ├── docker-compose.yml.j2
    └── start-kafka-container.yml
2.2 ansible.cfg
[defaults]
inventory=./hosts
host_key_checking=False
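A quick way to confirm this file is picked up is to run ansible --version from the base directory; the "config file" line of the output should point at /home/admin/ansible/ansible.cfg:

$ cd /home/admin/ansible && ansible --version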
2.3 hosts
[kafka]
arc-pro-dc01 broker_id=1
arc-pro-dc02 broker_id=2
arc-pro-dc03 broker_id=3
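With the inventory in place, connectivity to the kafka group can be checked with an ad-hoc ping, run from /home/admin/ansible:

$ ansible kafka -m ping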
2.4 docker-compose.yml.j2
services:
  kafka:
    image: my.registry.com:10443/library/apache/kafka:3.9.1
    restart: unless-stopped
    container_name: kafka
    hostname: {{ inventory_hostname }}
    network_mode: host
    environment:
      KAFKA_NODE_ID: {{ broker_id }}
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_LISTENERS: PLAINTEXT://{{ inventory_hostname }}:9092,CONTROLLER://{{ inventory_hostname }}:9093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://{{ inventory_hostname }}:9092,CONTROLLER://{{ inventory_hostname }}:9093
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
      KAFKA_CONTROLLER_QUORUM_VOTERS: >-
        {% for host in groups['kafka'] -%}
        {{ hostvars[host]['broker_id'] }}@{{ host }}:9093{% if not loop.last %},{% endif %}
        {%- endfor %}
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 3000
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 3
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 3
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 2
      KAFKA_NUM_PARTITIONS: 3
      KAFKA_LOG_DIRS: /var/lib/kafka/data
    volumes:
      - {{ kafka_data_dir }}:/var/lib/kafka/data
      - {{ kafka_log_dir }}:/opt/kafka/logs
      - {{ kafka_conf_dir }}:/opt/kafka/config
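For reference, with the hosts file from section 2.3 the Jinja2 loop above should render the controller quorum on every node roughly as:

KAFKA_CONTROLLER_QUORUM_VOTERS: 1@arc-pro-dc01:9093,2@arc-pro-dc02:9093,3@arc-pro-dc03:9093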
2.5 start-kafka-container.yml
---
- name: Start Kafka Container
  hosts: kafka
  become: true
  gather_facts: false
  vars:
    kafka_owner: admin
    kafka_group: admin
    compose_file_dir: /opt/app/kafka
    kafka_conf_dir: /etc/kafka
    kafka_log_dir: /data/kafka/logs
    kafka_data_dir: /data/kafka/data
  tasks:
    # Note: this playbook is intentionally destructive; it removes any existing
    # Kafka container and wipes the data/log/config directories before recreating them.
    - name: Remove old kafka container if exists
      command: docker rm -f kafka
      ignore_errors: true

    - name: Remove kafka directories if they exist
      file:
        path: "{{ item }}"
        state: absent
      loop:
        - "{{ compose_file_dir }}"
        - "{{ kafka_conf_dir }}"
        - "{{ kafka_log_dir }}"
        - "{{ kafka_data_dir }}"

    - name: Create kafka directories
      file:
        path: "{{ item }}"
        state: directory
        owner: "{{ kafka_owner }}"
        group: "{{ kafka_group }}"
        mode: '0755'
      loop:
        - "{{ compose_file_dir }}"
        - "{{ kafka_conf_dir }}"
        - "{{ kafka_log_dir }}"
        - "{{ kafka_data_dir }}"

    - name: Deploy docker-compose.yml
      template:
        src: docker-compose.yml.j2
        dest: "{{ compose_file_dir }}/docker-compose.yml"
        owner: "{{ kafka_owner }}"
        group: "{{ kafka_group }}"
        mode: '0644'

    - name: Start kafka container
      command: docker-compose -f {{ compose_file_dir }}/docker-compose.yml up -d
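Before the first real run, the playbook can be sanity-checked from /home/admin/ansible (both are standard ansible-playbook options):

$ ansible-playbook kafka/start-kafka-container.yml --syntax-check
$ ansible-playbook kafka/start-kafka-container.yml --list-tasks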
3. Deployment
Run the playbook from the Ansible control node (arc-pro-dc01):
$ pwd
/home/admin/ansible
$ ansible-playbook kafka/start-kafka-container.yml
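Once the play finishes, the container status on all three nodes can be checked with an ad-hoc command (a sketch; it assumes the admin user can run docker directly, as in the tests below):

$ ansible kafka -a "docker ps --filter name=kafka"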
4. Test
[admin@arc-pro-dc01 ~]$ docker exec kafka \
  /opt/kafka/bin/kafka-topics.sh \
  --bootstrap-server arc-pro-dc01:9092,arc-pro-dc02:9092,arc-pro-dc03:9092 \
  --topic test-topic \
  --partitions 3 \
  --replication-factor 2 \
  --create
Created topic test-topic.

# Check the data directory on each server
[admin@arc-pro-dc01 ~]$ ll /data/kafka/data | grep test-topic
drwxr-xr-x 2 admin admin 167 Sep 25 18:05 test-topic-0
drwxr-xr-x 2 admin admin 167 Sep 25 18:05 test-topic-2
[admin@arc-pro-dc02 ~]$ ll /data/kafka/data | grep test-topic
drwxr-xr-x 2 admin admin 167 Sep 25 18:05 test-topic-0
drwxr-xr-x 2 admin admin 167 Sep 25 18:05 test-topic-1
[admin@arc-pro-dc03 ~]$ ll /data/kafka/data | grep test-topic
drwxr-xr-x 2 admin admin 167 Sep 25 18:05 test-topic-1
drwxr-xr-x 2 admin admin 167 Sep 25 18:05 test-topic-2

# Start a console consumer on arc-pro-dc02
[admin@arc-pro-dc02 ~]$ docker exec kafka \
  /opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server arc-pro-dc01:9092,arc-pro-dc02:9092,arc-pro-dc03:9092 \
  --topic test-topic \
  --from-beginning

# Start a console producer on arc-pro-dc03
[admin@arc-pro-dc03 ~]$ docker exec --workdir /opt/kafka/bin/ -it kafka sh
/opt/kafka/bin $ ./kafka-console-producer.sh \
  --bootstrap-server arc-pro-dc01:9092,arc-pro-dc02:9092,arc-pro-dc03:9092 \
  --topic test-topic
>hello
# The consumer terminal on arc-pro-dc02 prints "hello"
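To further confirm the replica placement seen in the data directories, the topic can be described from any node, using the same kafka-topics.sh CLI shipped in the image:

[admin@arc-pro-dc01 ~]$ docker exec kafka \
  /opt/kafka/bin/kafka-topics.sh \
  --bootstrap-server arc-pro-dc01:9092 \
  --topic test-topic \
  --describe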