ProficientNowTechRFCs

Kafka Architecture Migration: Zookeeper to KRaft


  • Author: Prathik Shetty (@pshettydev)
  • Date: 2025-04-09
  • Status: Accepted

Abstract

This document details the architectural decision and implementation process for migrating the Apache Kafka cluster coordination mechanism within the Automation Service from a Zookeeper-based quorum to the Kafka Raft (KRaft) consensus protocol. This change eliminates the external Zookeeper dependency, simplifying the infrastructure stack and leveraging the native consensus mechanism introduced in recent Kafka versions.

1. Introduction

Apache Kafka has historically relied on Apache Zookeeper for critical cluster metadata management, including controller election, topic configuration, and access control lists (ACLs). While robust, this dependency introduces operational complexity, requiring the management and maintenance of a separate distributed system alongside Kafka.

Recent Kafka versions have introduced KRaft, a built-in consensus protocol based on the Raft algorithm. KRaft allows Kafka brokers to manage cluster metadata internally, removing the need for Zookeeper. This migration represents a strategic shift towards a more streamlined, self-contained Kafka deployment.

2. Motivation

The primary motivations for migrating from a Zookeeper-based Kafka deployment to KRaft are:

  • Simplified Architecture: Eliminating Zookeeper removes an entire distributed system component, reducing infrastructure footprint, configuration overhead, and potential points of failure.
  • Operational Efficiency: Managing a single system (Kafka in KRaft mode) is simpler than managing two separate systems (Kafka and Zookeeper). This simplifies deployment, monitoring, upgrades, and troubleshooting.
  • Improved Scalability & Performance: KRaft is designed to handle a larger number of partitions and scale more efficiently than Zookeeper-based metadata management, potentially offering faster controller failover times.
  • Future-Proofing: KRaft is the future direction for Kafka cluster management. Adopting it ensures alignment with the latest Kafka advancements and community best practices.

3. Problem Statement / Context

The previous architecture relied on an external Zookeeper ensemble to manage the Kafka cluster's state and metadata. This involved:

  • Deploying and configuring separate Zookeeper nodes (or a single node in development environments).
  • Configuring Kafka brokers to connect to the Zookeeper ensemble (KAFKA_ZOOKEEPER_CONNECT).
  • Managing potential inconsistencies or operational issues arising from the interaction between the two separate systems.
  • Increased resource consumption due to running an additional service.

This dependency added complexity, particularly for development and testing environments, and represented an operational overhead that could be eliminated with KRaft.

4. Previous Architecture: Zookeeper-based Kafka

In the Zookeeper-based model:

  1. Metadata Storage: All critical cluster metadata (broker status, topic configurations, partition assignments, ACLs) resided within Zookeeper.
  2. Controller Election: Zookeeper managed the election process for the Kafka controller broker, which is responsible for managing partition leadership and broker state.
  3. Broker Discovery: Brokers registered themselves in Zookeeper, allowing them and clients to discover active members of the cluster.
  4. Configuration: Kafka brokers required the KAFKA_ZOOKEEPER_CONNECT setting to locate the Zookeeper ensemble. Older clients sometimes also interacted directly with Zookeeper, although this became uncommon in later versions.
  5. docker-compose.yml: The setup included a dedicated zookeeper service, and the kafka service had depends_on: [zookeeper] and the KAFKA_ZOOKEEPER_CONNECT environment variable configured.

Old docker-compose:

services:
  # ! This will be removed in the future
  zookeeper:
    image: confluentinc/cp-zookeeper:7.9.0
    environment:
      ZOOKEEPER_CLIENT_PORT: ${ZOOKEEPER_CLIENT_PORT:-2181}
      ZOOKEEPER_TICK_TIME: ${ZOOKEEPER_TICK_TIME:-2000}
    ports:
      - '${ZOOKEEPER_PORT:-2181}:2181'
    networks:
      - automation_network
 
  # TODO: Update to use environment variables and also update to the container config
  kafka:
    image: confluentinc/cp-kafka:7.9.0
    depends_on:
      - zookeeper
    ports:
      - '${KAFKA_INTERNAL_PORT:-9092}:9092' # Internal port
      - '${KAFKA_EXTERNAL_PORT:-29092}:29092' # External port
    environment:
      KAFKA_BROKER_ID: ${KAFKA_BROKER_ID:-1}
      KAFKA_ZOOKEEPER_CONNECT: ${KAFKA_ZOOKEEPER_CONNECT:-zookeeper:2181}
      KAFKA_ADVERTISED_LISTENERS: ${KAFKA_ADVERTISED_LISTENERS:-PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092}
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: ${KAFKA_LISTENER_SECURITY_PROTOCOL_MAP:-PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT}
      KAFKA_INTER_BROKER_LISTENER_NAME: ${KAFKA_INTER_BROKER_LISTENER_NAME:-PLAINTEXT}
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: ${KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR:-1}
      KAFKA_NUM_PARTITIONS: ${KAFKA_NUM_PARTITIONS:-3}
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: ${KAFKA_AUTO_CREATE_TOPICS_ENABLE:-true}
    networks:
      - automation_network

5. New Architecture: KRaft-based Kafka

What is KRaft and How It Works

Kafka KRaft (Kafka Raft) is Apache Kafka's built-in metadata management system that eliminates the dependency on ZooKeeper. The configuration described here sets up a single-node Kafka cluster in KRaft mode.

Key Components and Process Flow:

  1. Combined Roles: The configuration runs a single node with both broker and controller roles:

    • Broker Role: Handles client requests, manages message storage
    • Controller Role: Manages cluster metadata (replaces ZooKeeper)
  2. Initialization Process:

    • Generate a unique cluster ID
    • Format storage with this cluster ID before first startup
    • Start Kafka with controller and broker roles activated
  3. Communication Flow:

    • External clients connect via EXTERNAL listener (port 29092)
    • Internal broker communication uses PLAINTEXT listener (port 9092)
    • Controller communication uses CONTROLLER listener (port 9093)
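
The listener split above can be exercised with the stock console tools shipped in the cp-kafka image. This is a sketch that assumes the cluster is already running under docker compose with the service name kafka:

```shell
# Hit the broker through the EXTERNAL listener (port 29092, advertised as localhost:29092).
docker compose exec kafka kafka-topics --bootstrap-server localhost:29092 --list

# Containers on the compose network use the PLAINTEXT listener on kafka:9092 instead.
docker compose exec kafka kafka-topics --bootstrap-server kafka:9092 --list
```

The CONTROLLER listener (9093) is not addressed by clients directly; it is used only for quorum traffic between controller nodes.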

In the KRaft-based model:

  1. Metadata Storage: Cluster metadata is stored internally within a dedicated Kafka topic (__cluster_metadata) replicated across a quorum of controller nodes using the Raft consensus protocol.
  2. Controller Quorum: Instead of a single controller elected via Zookeeper, KRaft uses a quorum of nodes designated with the controller role. These nodes manage the cluster state using the Raft algorithm for consensus. Brokers designated with the broker role fetch metadata updates directly from the active controller within the quorum. (In our single-node setup, one instance fulfills both broker and controller roles).
  3. Self-Contained: Kafka operates independently without any external coordination service like Zookeeper.
  4. Configuration:
    • The process.roles property (or KAFKA_PROCESS_ROLES env var) defines whether a node acts as a broker, controller, or both.
    • controller.quorum.voters (or KAFKA_CONTROLLER_QUORUM_VOTERS) specifies the nodes participating in the controller quorum.
    • A unique cluster.id (or CLUSTER_ID) must be generated and assigned to the cluster before its first startup.
    • The Kafka storage directory must be formatted using the kafka-storage format command before the first startup.
  5. docker-compose.yml:
    • The zookeeper service is removed.
    • The kafka service no longer depends on Zookeeper.
    • The KAFKA_ZOOKEEPER_CONNECT variable is removed.
    • New KRaft-specific environment variables (KAFKA_PROCESS_ROLES, KAFKA_NODE_ID, KAFKA_CONTROLLER_QUORUM_VOTERS, KAFKA_LISTENERS, KAFKA_CONTROLLER_LISTENER_NAMES, etc.) are added.
    • Comments highlight the need for manual CLUSTER_ID generation and storage formatting before initial startup.
    • A persistent volume (kafka_data) is now strongly recommended and configured to ensure metadata stored by KRaft persists across restarts.
    • The kafka-ui service configuration is updated to remove the Zookeeper connection string, connecting directly via bootstrap servers.

Flow diagram illustrating the process:

6. Benefits of KRaft Migration

  • Reduced Complexity: Single system management simplifies operations.
  • Lower Resource Usage: No separate Zookeeper service consuming resources.
  • Faster Recovery: Potentially faster controller failover times compared to Zookeeper-based election.
  • Enhanced Scalability: Designed to handle significantly more topics and partitions.
  • Alignment with Kafka Roadmap: Positions the service to leverage future Kafka enhancements built upon KRaft.

7. Implementation Details & Considerations

KRaft Configuration Variables

| Variable | Description | Importance |
| --- | --- | --- |
| KAFKA_NODE_ID | Unique identifier for the node (uses KAFKA_BROKER_ID or defaults to 1) | Critical: every node must have a unique ID |
| KAFKA_PROCESS_ROLES | Defines the node's responsibilities (broker,controller) | Critical: determines whether the node acts as broker, controller, or both |
| KAFKA_CONTROLLER_QUORUM_VOTERS | List of controller nodes forming the quorum | Critical: defines the consensus group for metadata management |
| CLUSTER_ID | Unique identifier for the entire Kafka cluster | Critical: must be generated before first start and remain consistent |

Listener Configuration Variables

| Variable | Description | Importance |
| --- | --- | --- |
| KAFKA_LISTENERS | Internal socket addresses Kafka binds to | Critical: defines all communication endpoints |
| KAFKA_ADVERTISED_LISTENERS | Addresses clients will use to connect | Critical: must be accessible to clients |
| KAFKA_LISTENER_SECURITY_PROTOCOL_MAP | Maps listener names to security protocols | High: defines the security protocol for each listener |
| KAFKA_INTER_BROKER_LISTENER_NAME | Listener used for broker-to-broker communication | High: must be properly configured for broker communication |
| KAFKA_CONTROLLER_LISTENER_NAMES | Listener used for controller-to-controller communication | High: required for controller quorum operation |

General Kafka Settings

| Variable | Description | Importance |
| --- | --- | --- |
| KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR | Replication factor for the consumer offsets topic | Medium: set to 1 for a single node; higher in production |
| KAFKA_NUM_PARTITIONS | Default partition count for auto-created topics | Medium: affects parallelism and throughput |
| KAFKA_AUTO_CREATE_TOPICS_ENABLE | Whether topics can be auto-created | Medium: convenient in dev but risky in production |

Port Mappings

| Variable | Description | Importance |
| --- | --- | --- |
| KAFKA_INTERNAL_PORT | Maps to the internal broker port (9092) | High: used for internal communication |
| KAFKA_EXTERNAL_PORT | Maps to the external client port (29092) | High: how external clients connect to Kafka |
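
These variables might be consolidated in a .env file next to docker-compose.yml; a sketch with illustrative values only (KAFKA_CLUSTER_ID must be replaced with a freshly generated ID, see the setup steps below):

```shell
# .env -- illustrative defaults for the single-node KRaft setup
KAFKA_BROKER_ID=1                      # reused as KAFKA_NODE_ID
KAFKA_INTERNAL_PORT=9092
KAFKA_EXTERNAL_PORT=29092
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
KAFKA_NUM_PARTITIONS=3
KAFKA_AUTO_CREATE_TOPICS_ENABLE=true
KAFKA_CLUSTER_ID=REPLACE_WITH_GENERATED_UUID
```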

Storage Configuration

The volume mapping kafka_data:/var/lib/kafka/data is critical: it persists Kafka data across container restarts. This matters even more with KRaft, since Kafka itself now stores both message data and cluster metadata.

8. Important Setup Steps

Before first startup:

  1. Generate a cluster ID using kafka-storage random-uuid
  2. Set this ID in KAFKA_CLUSTER_ID
  3. Format storage with kafka-storage format -t <YOUR_CLUSTER_ID> -c /etc/kafka/kafka.properties

This configuration yields a single-node Kafka cluster in KRaft mode, suitable for development and testing. In production, you would typically run multiple nodes with separate controller and broker roles.
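
The steps above can be sketched end-to-end as a shell sequence (the .env mechanism and properties path are assumptions based on the compose file in this document):

```shell
# 1. Generate a unique cluster ID using the same image version as the service.
CLUSTER_ID=$(docker run --rm confluentinc/cp-kafka:7.9.0 kafka-storage random-uuid)

# 2. Record it so compose can interpolate ${KAFKA_CLUSTER_ID}.
echo "KAFKA_CLUSTER_ID=${CLUSTER_ID}" >> .env

# 3. Format the storage directory before the first broker start.
docker compose run --rm kafka kafka-storage format -t "${CLUSTER_ID}" -c /etc/kafka/kafka.properties

# 4. Start the cluster.
docker compose up -d kafka
```

Formatting must happen exactly once per cluster; reformatting an already-initialized data directory with a different ID will prevent the broker from starting.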

9. Sample Docker Compose

The kafka service definition with KRaft configuration:

kafka:
  image: confluentinc/cp-kafka:7.9.0
  ports:
    - '${KAFKA_INTERNAL_PORT:-9092}:9092' # Internal port
    - '${KAFKA_EXTERNAL_PORT:-29092}:29092' # External port
  environment:
    # KRaft settings
    KAFKA_NODE_ID: ${KAFKA_BROKER_ID:-1} # Reuse KAFKA_BROKER_ID or set a specific node ID
    KAFKA_PROCESS_ROLES: 'broker,controller' # Combined roles for single-node KRaft
    KAFKA_CONTROLLER_QUORUM_VOTERS: '${KAFKA_BROKER_ID:-1}@kafka:9093' # Controller quorum voters (self in single-node)
    KAFKA_LISTENERS: 'PLAINTEXT://:9092,CONTROLLER://:9093,EXTERNAL://:29092' # Renamed PLAINTEXT_HOST to EXTERNAL
    KAFKA_ADVERTISED_LISTENERS: ${KAFKA_ADVERTISED_LISTENERS:-PLAINTEXT://kafka:9092,EXTERNAL://localhost:29092} # Use EXTERNAL for advertised listener
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'PLAINTEXT:PLAINTEXT,EXTERNAL:PLAINTEXT,CONTROLLER:PLAINTEXT' # Map EXTERNAL listener name
    KAFKA_INTER_BROKER_LISTENER_NAME: 'PLAINTEXT' # Listener used for internal broker communication
    KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER' # Name of the controller listener
 
    # ! IMPORTANT ! Cluster ID: Must be generated before the first start.
    # 1. Generate a Cluster ID:
    #    docker run --rm confluentinc/cp-kafka:7.9.0 kafka-storage random-uuid
    # 2. Set this ID using the CLUSTER_ID environment variable below.
    #    Example: CLUSTER_ID='MjM0ZTcxYjMtZmM4MS00...'
    CLUSTER_ID: ${KAFKA_CLUSTER_ID} # Set KAFKA_CLUSTER_ID in your environment (e.g. .env) to the generated ID
 
    # ! IMPORTANT ! Storage Formatting: Must be done before the first start.
    # After setting CLUSTER_ID and potentially mounting a volume (see below), run:
    # docker compose run --rm kafka kafka-storage format -t <YOUR_CLUSTER_ID> -c /etc/kafka/kafka.properties
    # Or adjust the command if using a different kafka.properties path or setup.
 
    # General Kafka settings (remain mostly the same)
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: ${KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR:-1} # Keep as 1 for single node
    KAFKA_NUM_PARTITIONS: ${KAFKA_NUM_PARTITIONS:-3}
    KAFKA_AUTO_CREATE_TOPICS_ENABLE: ${KAFKA_AUTO_CREATE_TOPICS_ENABLE:-true}
  networks:
    - automation_network
  volumes:
    - kafka_data:/var/lib/kafka/data # Recommended: persistent volume for Kafka data, essential with KRaft since metadata also lives here
  # Note: declare kafka_data under the top-level volumes: key (and automation_network under networks:) in the full docker-compose.yml.
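
A quick smoke test after startup, assuming topic auto-creation is enabled (the topic name kraft-smoke is illustrative):

```shell
# Produce one message through the internal listener, then consume it back.
echo "hello-kraft" | docker compose exec -T kafka kafka-console-producer --bootstrap-server kafka:9092 --topic kraft-smoke
docker compose exec kafka kafka-console-consumer --bootstrap-server kafka:9092 --topic kraft-smoke --from-beginning --max-messages 1
```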

10. Conclusion

The migration from Zookeeper to KRaft modernizes the Automation Service's Kafka infrastructure, simplifying its architecture and operational management. While requiring specific initialization steps (cluster ID generation and storage formatting), the long-term benefits of reduced complexity, improved scalability, and alignment with Kafka's future direction justify the transition.