Advanced Java · Data & Persistence

Transactional Outbox Publisher in Spring Boot

A scheduled outbox publisher that drains a Postgres outbox table using JDBC and SKIP LOCKED for safe concurrent processing.

Published Apr 14, 2026
java · spring-boot · outbox-pattern · postgres · messaging

OutboxPublisher is a scheduled Spring component that drains a Postgres outbox table and publishes its rows to a downstream broker (Kafka, RabbitMQ, SNS — your choice). It uses SELECT ... FOR UPDATE SKIP LOCKED so multiple replicas can poll the same table without stepping on each other. This is the production pattern for the transactional outbox.

Tested on Spring Boot 3.4, Postgres 16, JDK 25.

When to Use This

  • You need to publish events as a side effect of a database write, atomically
  • You can't afford lost messages on broker outage
  • You're running multiple service instances and need safe concurrent draining
  • You want at-least-once delivery without 2-phase commit

Don't use this when the destination broker offers its own end-to-end transactional guarantees (Kafka's exactly-once semantics, for example) or when latency below 100 ms matters more than reliability.

Code

import java.time.Instant;
import java.util.List;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.transaction.annotation.Transactional;

import lombok.RequiredArgsConstructor;

@Component
@RequiredArgsConstructor
public class OutboxPublisher {
 
    private static final Logger log = LoggerFactory.getLogger(OutboxPublisher.class);
    private static final int BATCH_SIZE = 100;
 
    private final JdbcTemplate jdbc;
    private final EventBroker broker;
 
    @Scheduled(fixedDelay = 1000)
    @Transactional
    public void drain() {
        List<OutboxEvent> batch = jdbc.query(
            """
            SELECT id, aggregate_type, aggregate_id, event_type, payload, created_at
            FROM outbox
            WHERE published_at IS NULL
            ORDER BY created_at
            LIMIT ?
            FOR UPDATE SKIP LOCKED
            """,
            (rs, i) -> new OutboxEvent(
                rs.getLong("id"),
                rs.getString("aggregate_type"),
                rs.getString("aggregate_id"),
                rs.getString("event_type"),
                rs.getString("payload"),
                rs.getTimestamp("created_at").toInstant()
            ),
            BATCH_SIZE
        );
 
        if (batch.isEmpty()) return;
 
        for (OutboxEvent event : batch) {
            try {
                broker.publish(event);
            } catch (Exception e) {
                log.error("Failed to publish outbox event {}", event.id(), e);
                // Wrap in an unchecked exception: @Transactional only rolls back
                // on RuntimeException/Error by default, and rethrowing a checked
                // exception here would also force a throws clause onto drain().
                // Rollback releases the locks; the batch is retried on the next tick.
                throw new IllegalStateException("Outbox publish failed", e);
            }
        }
 
        // Bind the ids as a real SQL bigint[]; the driver cannot infer an
        // array type from a plain Long[] passed as a vararg parameter.
        Long[] ids = batch.stream().map(OutboxEvent::id).toArray(Long[]::new);
        jdbc.update(
            "UPDATE outbox SET published_at = NOW() WHERE id = ANY(?)",
            ps -> ps.setArray(1, ps.getConnection().createArrayOf("bigint", ids))
        );
 
        log.info("Published {} outbox events", batch.size());
    }
}
 
public record OutboxEvent(
    long id,
    String aggregateType,
    String aggregateId,
    String eventType,
    String payload,
    Instant createdAt
) {}

The @Transactional annotation is critical: it opens a single Postgres transaction that holds the row locks for the entire batch. If a broker publish fails, the transaction rolls back, the locks release, and the next tick retries the same rows.

Usage

The outbox table schema this snippet expects:

CREATE TABLE outbox (
    id BIGSERIAL PRIMARY KEY,
    aggregate_type VARCHAR(64) NOT NULL,
    aggregate_id VARCHAR(128) NOT NULL,
    event_type VARCHAR(128) NOT NULL,
    payload JSONB NOT NULL,
    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    published_at TIMESTAMPTZ
);
 
CREATE INDEX idx_outbox_unpublished
  ON outbox (created_at)
  WHERE published_at IS NULL;

Producers insert into outbox in the same transaction as their domain write. The publisher drains it asynchronously.
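To make the write side concrete, here is a minimal plain-JDBC sketch of such a producer. The OrderProducer class, the orders table, the OrderPlaced event type, and the payload shape are illustrative assumptions, not part of the snippet above; a Spring service would perform the same two inserts inside one @Transactional method.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.UUID;

public class OrderProducer {

    // Pure helper building the JSON stored in outbox.payload.
    static String orderPlacedPayload(String orderId, long amountCents) {
        return "{\"orderId\":\"" + orderId + "\",\"amountCents\":" + amountCents + "}";
    }

    public void placeOrder(Connection con, long amountCents) throws SQLException {
        boolean oldAutoCommit = con.getAutoCommit();
        con.setAutoCommit(false); // both inserts commit or roll back together
        try {
            String orderId = UUID.randomUUID().toString();
            try (PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO orders (id, amount_cents) VALUES (?, ?)")) {
                ps.setString(1, orderId);
                ps.setLong(2, amountCents);
                ps.executeUpdate();
            }
            try (PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO outbox (aggregate_type, aggregate_id, event_type, payload) "
                    + "VALUES (?, ?, ?, ?::jsonb)")) {
                ps.setString(1, "Order");
                ps.setString(2, orderId);
                ps.setString(3, "OrderPlaced");
                ps.setString(4, orderPlacedPayload(orderId, amountCents));
                ps.executeUpdate();
            }
            con.commit();
        } catch (SQLException e) {
            con.rollback();
            throw e;
        } finally {
            con.setAutoCommit(oldAutoCommit);
        }
    }
}
```

Because both inserts share one transaction, an event can never exist without its domain row, and a committed domain write can never silently lose its event.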

Caveats

  • @Scheduled runs in a single-thread pool by default. Configure a TaskScheduler bean with a larger pool if you have multiple scheduled jobs.
  • The partial index is essential. Without WHERE published_at IS NULL, the index grows forever and the polling query gets slower over time.
  • Don't forget to clean up. Old published rows accumulate. Run a nightly job that deletes rows where published_at < NOW() - INTERVAL '7 days', or move them to a cold archive table.
  • Idempotency on the consumer side is non-negotiable. This publisher is at-least-once. Consumers must dedupe by event ID.
  • Don't increase BATCH_SIZE blindly. Larger batches mean longer transactions and longer-held locks. 50-200 is usually the sweet spot.
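On the idempotency point: in production the usual approach is a durable processed_events table with a unique constraint on the event id, but the core check can be sketched in memory. The SeenEventIds class below is a hypothetical illustration (a bounded LRU of recently seen ids), not a substitute for durable dedupe across consumer restarts.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SeenEventIds {

    private final Map<Long, Boolean> seen;

    public SeenEventIds(int capacity) {
        // Access-ordered LinkedHashMap acting as a bounded LRU:
        // the eldest entry is evicted once size exceeds capacity.
        this.seen = new LinkedHashMap<>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Long, Boolean> eldest) {
                return size() > capacity;
            }
        };
    }

    /** Returns true the first time an id is seen, false for duplicates. */
    public synchronized boolean firstTime(long eventId) {
        return seen.put(eventId, Boolean.TRUE) == null;
    }
}
```

A consumer would call firstTime(event.id()) and skip processing when it returns false; with a durable table, the equivalent is an INSERT that ignores unique-constraint violations.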

Frequently Asked Questions

Why use SKIP LOCKED in the outbox query?

SKIP LOCKED lets multiple instances of the publisher poll the outbox concurrently without blocking each other. Each instance grabs a different batch of unpublished rows, so you get horizontal scaling for free. Without it, instances either block each other or double-process rows.

Why poll instead of using triggers or LISTEN/NOTIFY?

Polling is simpler, fault-tolerant, and survives publisher restarts without missing events. LISTEN/NOTIFY is faster but requires a persistent connection per consumer and silently drops events if the consumer is offline. For most outbox patterns, a 1-second poll is fast enough and dramatically more reliable.
