Kafka / Messaging

Kafka integration for Spring Boot microservices

Spring Middleware provides a platform-aligned Kafka integration for Spring Boot microservices. It standardizes publishers, subscribers, topic configuration, event envelopes, retry behavior, and dead-letter handling while keeping the runtime model explicit and predictable.

  • Kafka publishers
  • Kafka consumers
  • EventEnvelope<T>
  • Retry and dead-letter topics
  • Topic configuration
  • Configuration-driven wiring

What the Kafka module does

The Kafka module is built around configuration-backed publishers and subscribers. Publishers are registered by id, subscribers are wired from configuration, and events are sent through a consistent envelope model instead of ad hoc payload handling.

  • Register named publishers from configuration
  • Wire subscriber endpoints from topic, group, and concurrency settings
  • Wrap payloads in a standard event envelope
  • Apply retry and dead-letter behavior through module settings

What it is not

This integration is not intended as a hidden abstraction that makes Kafka disappear. Topics, publisher ids, consumer groups, retry limits, and dead-letter behavior remain explicit, because those are operational concerns that should stay visible in the platform.

  • Explicit topics: topic names and partitions are declared intentionally.
  • Explicit publishers: applications publish through named publisher ids.
  • Explicit failures: retry and dead-letter behavior are configuration-driven.

High-level Kafka flow

Kafka publishers and subscribers are configured as part of the platform model. The publisher wraps payloads in an event envelope, Kafka carries the record, and subscribers consume it with module-level error handling.

Producer Service → named Kafka publisher (EventEnvelope + topic resolution) → Kafka runtime (topics, partitions, consumer groups) → Subscriber Service (retry, backoff, dead-letter routing)

Configuration model

The Kafka module is configured under middleware.kafka. Topics, publishers, subscribers, logging, and error handling are declared through application configuration.

Recommended application.yml

middleware:
  kafka:
    bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVERS:localhost:9092}
    create-missing-topics: true
    logging:
      enabled: false
      log-payload: false
      log-headers: false
    topics:
      catalog-events:
        partitions: 5
        replication-factor: 3
    publishers:
      catalog:
        topic: ${KAFKA_TOPIC_CATALOG:catalog-events}
    subscribers:
      catalog:
        topic: ${KAFKA_TOPIC_CATALOG:catalog-events}
        group-id: ${KAFKA_GROUP_ID_CATALOG:catalog-service-group}
        concurrency: 3

Important ideas

  • bootstrap-servers points to the broker list
  • create-missing-topics enables topic creation from declared topic metadata
  • publishers.<id> defines named publishers used at runtime
  • subscribers.<id> defines topic, group id, and concurrency for consumers
  • logging controls middleware-level Kafka logging behavior

Publishers and subscribers

Kafka publishers are looked up by id from the registry, while subscribers are registered through the module's listener registrar.

Publishing model

A publisher is registered under a configured id and sends an EventEnvelope<T> to the configured topic. Applications publish by retrieving the publisher by id and calling publish(...) or publishWithKey(...).

  • Publishers are registered in KafkaPublisherRegistry
  • DefaultKafkaPublisher is the standard implementation
  • Publishing returns a CompletableFuture<PublishResult>
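
A minimal sketch of the wiring, assuming a Spring @Service with constructor injection; the CatalogEventSender class and its field names are illustrative, while KafkaPublisherRegistry, getPublisher(...), and publishWithKey(...) are the module calls described above.

@Service
public class CatalogEventSender {

    private final KafkaPublisher<CatalogEvent, String> publisher;

    public CatalogEventSender(KafkaPublisherRegistry publisherRegistry) {
        // Resolve the publisher registered under middleware.kafka.publishers.catalog
        this.publisher = publisherRegistry.getPublisher("catalog");
    }

    public void send(CatalogEvent event) {
        // The payload is wrapped in an EventEnvelope<CatalogEvent> before the record is sent
        publisher.publishWithKey(event, event.getCatalogId());
    }
}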

Subscriber model

Subscriber entries under middleware.kafka.subscribers are used by the listener registrar to wire consumer endpoints with topic, group id, and concurrency settings.

  • Subscriber ids are configuration-backed
  • Listener wiring uses @MiddlewareKafkaListener
  • Consumers receive typed EventEnvelope<T> data

Event envelope

Published records are wrapped in a standard envelope so event metadata stays consistent across services.

Envelope fields

  • eventId — unique event identifier
  • eventType — explicit type string or class-derived type
  • timestamp — event timestamp
  • traceId — correlation value from MDC or generated
  • payload — original event payload
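
For illustration, the envelope can be pictured as a simple generic carrier of these fields. The sketch below only reflects the field list above; field types and getter names are assumptions, not the module's actual class definition.

public class EventEnvelope<T> {
    private String eventId;    // unique event identifier
    private String eventType;  // explicit type string or class-derived type
    private Instant timestamp; // event timestamp
    private String traceId;    // correlation value from MDC or generated
    private T payload;         // original event payload

    public T getPayload() { return payload; }
    // remaining getters omitted
}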

Optional event type

// Versioned type string recorded in the envelope's eventType field;
// without the annotation, the type is derived from the event class
@EventType("order.created.v1")
public class OrderCreatedEvent {
    private String orderId;
    // getters/setters omitted
}

Runtime usage

Application code publishes through named publishers and consumes through middleware-managed listener registration.

Publishing through the registry

// Look up the named publisher declared under middleware.kafka.publishers.catalog
KafkaPublisher<CatalogEvent, String> publisher =
    publisherRegistry.getPublisher("catalog");

// Send the event keyed by catalog id; the key determines partition assignment
publisher.publishWithKey(event, event.getCatalogId());
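
Because publishing returns a CompletableFuture<PublishResult>, callers can react to the delivery outcome without blocking. A minimal sketch, assuming an SLF4J logger named log is available:

publisher.publishWithKey(event, event.getCatalogId())
    .whenComplete((result, ex) -> {
        if (ex != null) {
            // Delivery failed; log and decide whether the caller needs to react
            log.error("Failed to publish catalog event {}", event.getCatalogId(), ex);
        }
    });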

Subscriber method

// "catalog" matches the subscriber id under middleware.kafka.subscribers
@MiddlewareKafkaListener("catalog")
public void onCatalogEvent(EventEnvelope<CatalogEvent> envelope) {
    CatalogEvent payload = envelope.getPayload();
    // handle payload
}
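
Envelope metadata is available alongside the payload. The variant below carries the traceId into the logging context; MDC comes from SLF4J, getTraceId() is assumed to follow the usual getter convention for the traceId field, and handle(...) stands in for application logic.

@MiddlewareKafkaListener("catalog")
public void onCatalogEvent(EventEnvelope<CatalogEvent> envelope) {
    // Correlate consumer-side logs with the publisher's trace context
    MDC.put("traceId", envelope.getTraceId());
    try {
        handle(envelope.getPayload());
    } finally {
        MDC.remove("traceId");
    }
}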

Error handling, retry, and dead-letter behavior

Kafka consumer failure behavior is controlled through module configuration. Retry count, backoff, and dead-letter routing are explicit parts of the runtime model.

  1. Receive record
     The subscriber receives an event from the configured topic and group.

  2. Retry on failure
     Failures can be retried according to the configured retry count and backoff.

  3. Route to dead-letter topic
     If retries are exhausted and dead-letter routing is enabled, the record is routed to the dead-letter topic (DLT).

Error handling configuration

middleware:
  kafka:
    error-handling:
      enabled: ${KAFKA_ERROR_HANDLING_ENABLED:true}
      max-retries: ${KAFKA_ERROR_HANDLING_MAX_RETRIES:3}
      retry-backoff-ms: ${KAFKA_ERROR_HANDLING_RETRY_BACKOFF_MS:1000}
      dead-letter:
        enabled: ${KAFKA_ERROR_HANDLING_DEAD_LETTER_ENABLED:true}
        suffix: .DLT

Operational meaning

  • max-retries defines how many attempts are made before failure is considered final
  • retry-backoff-ms controls delay between retry attempts
  • dead-letter.enabled enables DLT routing after retries are exhausted
  • dead-letter.suffix defines the topic suffix, such as .DLT
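
With the default suffix, failed records from catalog-events are routed to catalog-events.DLT. Because subscribers are configuration-backed, the dead-letter stream can be consumed like any other topic when it needs to be inspected or reprocessed; the catalog-dlt id and group id below are illustrative:

middleware:
  kafka:
    subscribers:
      catalog-dlt:
        topic: catalog-events.DLT
        group-id: catalog-service-dlt-group
        concurrency: 1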

Best practices

The Kafka module is most effective when topic naming, publisher ids, and failure behavior are treated as part of the platform contract.

Keep topic definitions explicit

Declare partitions and replication factor in configuration instead of leaving topic shape implicit across environments.

Use stable publisher ids

Publisher ids such as catalog or order-created become part of the operational model and should stay predictable.

Be deliberate with logging

Payload logging is useful in development, but in production it should be tightly controlled because payloads can be large and may contain sensitive data.
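
For example, a development profile can switch on the logging flags shown earlier while production keeps them off; the snippet below would live in a profile-specific file such as application-dev.yml:

middleware:
  kafka:
    logging:
      enabled: true
      log-payload: true
      log-headers: true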