Merge pull request #26 from sleighzy/upgrade-to-kafka-3.2.0
Upgrade to Apache Kafka 3.2.0
sleighzy authored Jul 8, 2022
2 parents b383e8e + e15004f commit 80caa23
Showing 9 changed files with 68 additions and 63 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/molecule.yaml
@@ -31,7 +31,7 @@ jobs:
python-version: '3.x'

- name: Install test dependencies.
run: pip3 install ansible ansible-lint yamllint docker molecule-docker "molecule[docker,lint]"
run: pip3 install ansible ansible-compat==0.5.0 ansible-lint yamllint docker molecule-docker "molecule[docker,lint]"

- name: Run Molecule tests.
run: molecule test
91 changes: 46 additions & 45 deletions README.md
@@ -3,7 +3,7 @@
[![Build Status]](https://travis-ci.org/sleighzy/ansible-kafka)
![Lint Code Base] ![Molecule]

Ansible role to install and configure [Apache Kafka] 3.1.0
Ansible role to install and configure [Apache Kafka] 3.2.0

[Apache Kafka] is a distributed event streaming platform using publish-subscribe
topics. Applications and streaming components can produce and consume messages
@@ -42,50 +42,51 @@ See <https://github.com/ansible/ansible/issues/71528> for more information.

## Role Variables

| Variable | Default |
| ---------------------------------------------- | ------------------------------------- |
| kafka_download_base_url | <http://www-eu.apache.org/dist/kafka> |
| kafka_version | 3.1.0 |
| kafka_scala_version | 2.13 |
| kafka_create_user_group | true |
| kafka_user | kafka |
| kafka_group | kafka |
| kafka_root_dir | /opt |
| kafka_dir | {{ kafka_root_dir }}/kafka |
| kafka_start | yes |
| kafka_restart | yes |
| kafka_log_dir | /var/log/kafka |
| kafka_broker_id | 0 |
| kafka_java_heap | -Xms1G -Xmx1G |
| kafka_background_threads | 10 |
| kafka_listeners | PLAINTEXT://:9092 |
| kafka_num_network_threads | 3 |
| kafka_num_io_threads | 8 |
| kafka_num_replica_fetchers | 1 |
| kafka_socket_send_buffer_bytes | 102400 |
| kafka_socket_receive_buffer_bytes | 102400 |
| kafka_socket_request_max_bytes | 104857600 |
| kafka_replica_socket_receive_buffer_bytes | 65536 |
| kafka_data_log_dirs | /var/lib/kafka/logs |
| kafka_num_partitions | 1 |
| kafka_num_recovery_threads_per_data_dir | 1 |
| kafka_log_cleaner_threads | 1 |
| kafka_offsets_topic_replication_factor | 1 |
| kafka_transaction_state_log_replication_factor | 1 |
| kafka_transaction_state_log_min_isr | 1 |
| kafka_log_retention_hours | 168 |
| kafka_log_segment_bytes | 1073741824 |
| kafka_log_retention_check_interval_ms | 300000 |
| kafka_auto_create_topics_enable | false |
| kafka_delete_topic_enable | true |
| kafka_default_replication_factor | 1 |
| kafka_group_initial_rebalance_delay_ms | 0 |
| kafka_zookeeper_connect | localhost:2181 |
| kafka_zookeeper_connection_timeout | 6000 |
| kafka_bootstrap_servers | localhost:9092 |
| kafka_consumer_group_id | kafka-consumer-group |

See [log4j.yml](./defaults/main/002-log4j.yml) for detailled
| Variable | Default |
| ---------------------------------------------- | -------------------------------- |
| kafka_download_base_url | <https://dlcdn.apache.org/kafka> |
| kafka_download_validate_certs | yes |
| kafka_version | 3.2.0 |
| kafka_scala_version | 2.13 |
| kafka_create_user_group | true |
| kafka_user | kafka |
| kafka_group | kafka |
| kafka_root_dir | /opt |
| kafka_dir | {{ kafka_root_dir }}/kafka |
| kafka_start | yes |
| kafka_restart | yes |
| kafka_log_dir | /var/log/kafka |
| kafka_broker_id | 0 |
| kafka_java_heap | -Xms1G -Xmx1G |
| kafka_background_threads | 10 |
| kafka_listeners | PLAINTEXT://:9092 |
| kafka_num_network_threads | 3 |
| kafka_num_io_threads | 8 |
| kafka_num_replica_fetchers | 1 |
| kafka_socket_send_buffer_bytes | 102400 |
| kafka_socket_receive_buffer_bytes | 102400 |
| kafka_socket_request_max_bytes | 104857600 |
| kafka_replica_socket_receive_buffer_bytes | 65536 |
| kafka_data_log_dirs | /var/lib/kafka/logs |
| kafka_num_partitions | 1 |
| kafka_num_recovery_threads_per_data_dir | 1 |
| kafka_log_cleaner_threads | 1 |
| kafka_offsets_topic_replication_factor | 1 |
| kafka_transaction_state_log_replication_factor | 1 |
| kafka_transaction_state_log_min_isr | 1 |
| kafka_log_retention_hours | 168 |
| kafka_log_segment_bytes | 1073741824 |
| kafka_log_retention_check_interval_ms | 300000 |
| kafka_auto_create_topics_enable | false |
| kafka_delete_topic_enable | true |
| kafka_default_replication_factor | 1 |
| kafka_group_initial_rebalance_delay_ms | 0 |
| kafka_zookeeper_connect | localhost:2181 |
| kafka_zookeeper_connection_timeout | 6000 |
| kafka_bootstrap_servers | localhost:9092 |
| kafka_consumer_group_id | kafka-consumer-group |

See [log4j.yml](./defaults/main/002-log4j.yml) for detailed
log4j-related available variables.
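
For orientation, here is a minimal sketch of a playbook applying this role with a few of the variables above overridden. The host group, the Galaxy role name `sleighzy.kafka`, and the override values are illustrative assumptions, not part of this change:

```yaml
# playbook.yml -- illustrative sketch; group name, role name, and values are assumptions
- hosts: kafka
  become: true
  roles:
    - role: sleighzy.kafka
      vars:
        kafka_version: 3.2.0
        kafka_download_base_url: https://dlcdn.apache.org/kafka
        kafka_download_validate_certs: yes
        kafka_zookeeper_connect: 'zookeeper-1:2181,zookeeper-2:2181'
```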

## Starting and Stopping Kafka services using systemd
5 changes: 3 additions & 2 deletions defaults/main/001-kafka.yml
@@ -1,8 +1,9 @@
---
# The Apache Kafka version to be downloaded and installed
# kafka_download_base_url should be set to https://archive.apache.org/dist/kafka/ for older versions than the current
kafka_download_base_url: http://www-eu.apache.org/dist/kafka
kafka_version: 3.1.0
kafka_download_base_url: https://dlcdn.apache.org/kafka
kafka_download_validate_certs: yes
kafka_version: 3.2.0
kafka_scala_version: 2.13

# The kafka user and group to create files/dirs with and for running the kafka service
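
The comment above notes that releases older than the current one are only served from the Apache archive. A hedged sketch of overriding these defaults (for example in group_vars) to pin an earlier release; the version shown is illustrative:

```yaml
# group_vars/kafka.yml -- illustrative pin of an older release from the archive
kafka_download_base_url: https://archive.apache.org/dist/kafka
kafka_version: 3.1.0
kafka_scala_version: 2.13
```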
1 change: 1 addition & 0 deletions molecule/default/molecule.yml
@@ -83,6 +83,7 @@ provisioner:
zookeeper_id: 2
kafka_broker_id: 1
kafka_listener_hostname: server-2
kafka_download_validate_certs: no
server-3:
zookeeper_id: 3
kafka_broker_id: 2
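
The added `kafka_download_validate_certs: no` is a per-host override inside Molecule's provisioner inventory. A sketch of the surrounding structure, assuming the standard `provisioner.inventory.host_vars` layout used by Molecule:

```yaml
provisioner:
  inventory:
    host_vars:
      server-2:
        zookeeper_id: 2
        kafka_broker_id: 1
        kafka_listener_hostname: server-2
        kafka_download_validate_certs: no
```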
2 changes: 1 addition & 1 deletion molecule/default/requirements.yml
@@ -1,5 +1,5 @@
roles:
- sleighzy.zookeeper
- name: sleighzy.zookeeper

collections:
- community.docker
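
Unlike a bare string, the `- name:` dictionary form also accepts additional keys such as `version` or `src` should a pin ever be needed; a hedged sketch (the version value is purely illustrative):

```yaml
roles:
  - name: sleighzy.zookeeper
    # version: v2.0.0  # optional pin; illustrative value
collections:
  - community.docker
```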
8 changes: 4 additions & 4 deletions molecule/default/verify.yml
@@ -16,12 +16,12 @@
- "'kafka' in getent_passwd"
- "'kafka' in getent_group"

- name: Register '/opt/kafka_2.13-3.1.0' installation directory status
- name: Register '/opt/kafka_2.13-3.2.0' installation directory status
stat:
path: '/opt/kafka_2.13-3.1.0'
path: '/opt/kafka_2.13-3.2.0'
register: install_dir

- name: Assert that '/opt/kafka_2.13-3.1.0' directory is created
- name: Assert that '/opt/kafka_2.13-3.2.0' directory is created
assert:
that:
- install_dir.stat.exists
@@ -39,7 +39,7 @@
that:
- kafka_dir.stat.exists
- kafka_dir.stat.islnk
- kafka_dir.stat.lnk_target == '/opt/kafka_2.13-3.1.0'
- kafka_dir.stat.lnk_target == '/opt/kafka_2.13-3.2.0'

- name: Register '/var/log/kafka' directory status
stat:
7 changes: 4 additions & 3 deletions tasks/main.yaml
@@ -1,6 +1,6 @@
---
- name: Load OS-specific variables
include_vars: "{{ item }}"
include_vars: '{{ item }}'
with_first_found:
- ../vars/{{ ansible_os_family }}.yml
- ../vars/{{ ansible_distribution_release }}.yml
@@ -35,8 +35,9 @@

- name: Download Apache Kafka
get_url:
url: "{{ kafka_download_base_url }}/{{ kafka_version }}/kafka_{{ kafka_scala_version }}-{{ kafka_version }}.tgz"
url: '{{ kafka_download_base_url }}/{{ kafka_version }}/kafka_{{ kafka_scala_version }}-{{ kafka_version }}.tgz'
dest: /tmp
validate_certs: '{{ kafka_download_validate_certs }}'
when: not dir.stat.exists
tags:
- kafka_download
@@ -262,7 +263,7 @@
- name: Template kafka systemd service
template:
src: kafka.service.j2
dest: "{{ kafka_unit_path }}"
dest: '{{ kafka_unit_path }}'
group: '{{ kafka_group }}'
owner: '{{ kafka_user }}'
mode: 0644
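
Once the unit file is templated, the service is typically brought up with the systemd module; a minimal sketch, assuming the unit is installed as kafka.service:

```yaml
- name: Ensure the Kafka service is enabled and running
  systemd:
    name: kafka
    state: started
    enabled: yes
    daemon_reload: yes
```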
3 changes: 2 additions & 1 deletion templates/producer.properties.j2
@@ -25,7 +25,8 @@ bootstrap.servers={{ kafka_producer_bootstrap_servers }}
# specify the compression codec for all data generated: none, gzip, snappy, lz4, zstd
compression.type={{ kafka_producer_compression_type }}

# name of the partitioner class for partitioning events; default partition spreads data randomly
# name of the partitioner class for partitioning records;
# The default uses "sticky" partitioning logic which spreads the load evenly between partitions, but improves throughput by attempting to fill the batches sent to each partition.
#partitioner.class=

# the maximum amount of time the client will wait for the response of a request
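
The template interpolates `kafka_producer_bootstrap_servers` and `kafka_producer_compression_type` directly, so a sketch of setting them, assuming the bootstrap servers value is a plain comma-separated string; the host names are illustrative:

```yaml
kafka_producer_bootstrap_servers: 'kafka-1:9092,kafka-2:9092'
kafka_producer_compression_type: lz4  # one of: none, gzip, snappy, lz4, zstd
```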
12 changes: 6 additions & 6 deletions templates/server.properties.j2
@@ -15,7 +15,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults
# This configuration file is intended for use in ZK-based mode, where Apache ZooKeeper is required.
# See kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

@@ -37,18 +37,17 @@ background.threads={{ kafka_background_threads }}

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# The address the socket server listens on. If not configured, the host name will be equal to the value of
# java.net.InetAddress.getCanonicalHostName(), with PLAINTEXT listener name, and port 9092.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092
listeners={{ kafka_listeners | join(",")}}

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
# Listener name, hostname and port the broker will advertise to clients.
# If not set, it uses the value for "listeners".
#advertised.listeners=PLAINTEXT://your.host.name:9092
{% if kafka_advertised_listeners is defined %}
advertised.listeners={{ kafka_advertised_listeners | join(",")}}
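
Because the template joins `kafka_listeners` and `kafka_advertised_listeners` with commas, both are expected to be lists; a sketch with an illustrative externally reachable host name:

```yaml
kafka_listeners:
  - PLAINTEXT://:9092
kafka_advertised_listeners:
  - PLAINTEXT://kafka-1.example.com:9092
```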
