Version: 1.0.0

Core Features

Zero Message Loss

The library ensures that domain records are never lost by storing them in the same database transaction as your business data. This guarantees consistency between your domain state and persisted records.

Benefits

  • ACID Compliance: Records are saved atomically with business data
  • Consistency Guarantee: No partial updates or lost records
  • Failure Recovery: System crashes don't result in data loss
  • Reliable Processing: Records are processed with automatic retry logic

How it Works
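Conceptually, the write path can be sketched as follows. All names here (Order, OutboxRecord, FakeTransaction, placeOrder) are illustrative stand-ins, not the library's actual API; the point is that the business entity and the outbox record commit or roll back together.

```kotlin
// Illustrative sketch of the transactional outbox pattern.
data class Order(val id: String, val total: Int)
data class OutboxRecord(val key: String, val payload: String)

// A toy "transaction" standing in for a real database transaction
// whose commit is all-or-nothing.
class FakeTransaction {
    val orders = mutableListOf<Order>()
    val outbox = mutableListOf<OutboxRecord>()
}

fun placeOrder(tx: FakeTransaction, order: Order) {
    // Business data and the outbox record are written in the SAME
    // transaction: either both commit or neither does, so no record
    // can be lost between the two writes.
    tx.orders.add(order)
    tx.outbox.add(OutboxRecord(key = order.id, payload = "OrderPlaced:${order.id}"))
}
```

A separate scheduler then reads committed outbox records and processes them with retry logic, independently of the original request.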


Record Ordering

Guaranteed Processing Order

Records with the same key are always processed in creation order, ensuring business logic consistency and preventing race conditions.

Key Benefits:

  • Aggregate Consistency: Records with the same key maintain order
  • Business Logic Safety: Dependent records process in correct sequence
  • Parallel Processing: Different keys process independently
  • Scalable Design: No global ordering bottlenecks
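The ordering rule above can be sketched in a few lines: records are grouped by key, and each group is handled in creation order, while separate groups have no ordering relationship and could run in parallel. The `Record` type and `seq` field are illustrative assumptions, not the library's types.

```kotlin
// Sketch: records sharing a key are processed in creation (seq) order;
// distinct keys are independent of each other.
data class Record(val key: String, val seq: Long)

fun processInOrder(records: List<Record>): Map<String, List<Long>> =
    records.groupBy { it.key }
        // Within each key, enforce creation order before processing.
        .mapValues { (_, group) -> group.sortedBy { it.seq }.map { it.seq } }
```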

Controlling Failure Behavior

Control how the scheduler handles failures within a key sequence:

namastack:
  outbox:
    processing:
      stop-on-first-failure: true

Behavior:

  • When one record fails, processing stops for remaining records with the same key
  • Maintains strict ordering within key sequences
  • Prevents cascading issues from dependent records
  • Recommended: When records within a key have dependencies

Behavior Comparison:

| Configuration    | Record 1  | Record 2 | Record 3  | Result                           |
| ---------------- | --------- | -------- | --------- | -------------------------------- |
| `true` (default) | ✓ Success | ✗ Fails  | ⏸ Skipped | Record 2 retried, Record 3 waits |
| `false`          | ✓ Success | ✗ Fails  | ✓ Success | Record 2 retried independently   |
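The two modes in the comparison can be sketched as a loop over one key's record sequence. `runSequence` and its boolean inputs are illustrative, not the scheduler's real implementation.

```kotlin
// Sketch of the two failure modes for a single key's record sequence.
// Each boolean stands for one record's processing result (true = success).
fun runSequence(results: List<Boolean>, stopOnFirstFailure: Boolean): List<String> {
    val outcome = mutableListOf<String>()
    for (ok in results) {
        if (ok) {
            outcome.add("success")
        } else {
            outcome.add("failed") // record stays pending and is retried later
            if (stopOnFirstFailure) {
                // Remaining records in this key sequence are skipped until
                // the failed record succeeds, preserving strict ordering.
                repeat(results.size - outcome.size) { outcome.add("skipped") }
                break
            }
        }
    }
    return outcome
}
```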

Hash-based Partitioning

Instead of distributed locking, the library uses hash-based partitioning to enable horizontal scaling across multiple instances while maintaining strict record ordering per key. This approach eliminates lock contention and provides better performance.

How Partitioning Works
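The core idea is a stable mapping from key to partition: hash the key and take the result modulo the partition count. The library uses MurmurHash3 for this; the sketch below substitutes Kotlin's built-in `hashCode` purely for illustration, so the actual partition numbers will differ from the library's.

```kotlin
// Sketch of consistent key-to-partition mapping (illustrative hash only;
// the library itself uses MurmurHash3).
const val PARTITION_COUNT = 256

fun partitionFor(key: String): Int =
    // floorMod keeps the result in 0..255 even when hashCode() is negative.
    Math.floorMod(key.hashCode(), PARTITION_COUNT)
```

Because the mapping is deterministic, every instance computes the same partition for a given key without any coordination, and all records for that key land in one partition.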

Key Benefits

  • Consistent Hashing: Each key always maps to the same partition using MurmurHash3
  • No Lock Contention: Eliminates distributed lock overhead and deadlock risks
  • Horizontal Scaling: Partitions automatically redistribute when instances join/leave
  • Load Balancing: Even distribution of partitions across all active instances
  • Ordering Guarantee: Records within the same key process in strict order
  • Better Performance: No lock acquisition/renewal overhead

Partition Assignment

256 fixed partitions provide fine-grained load distribution. Partitions are automatically distributed among active instances. Each key always maps to the same partition.

val partition = PartitionHasher.getPartitionForAggregate("order-123")

Instance Coordination

Instances automatically coordinate partition assignments and rebalance when the topology changes. The following settings control how quickly that coordination happens:

namastack:
  outbox:
    instance:
      heartbeat-interval-seconds: 5        # How often each instance sends a heartbeat
      stale-instance-timeout-seconds: 30   # When an instance is considered stale and removed
      graceful-shutdown-timeout-seconds: 0 # Optional: propagation window on shutdown (default: 0)
      rebalance-interval: 10000            # How often partitions are recalculated

Example distribution across three instances:

Instance 1: Partitions 0-84   (85 partitions)
Instance 2: Partitions 85-169 (85 partitions)
Instance 3: Partitions 170-255 (86 partitions)
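A distribution like the one above can be reproduced by splitting the 256 partitions into contiguous ranges, with the remainder going to the last instances. This is an illustrative sketch under that assumption, not the library's exact rebalancing algorithm.

```kotlin
// Sketch: distribute partitions as contiguous ranges across N instances.
// The last `remainder` instances each take one extra partition.
fun assignPartitions(instanceCount: Int, totalPartitions: Int = 256): List<IntRange> {
    val base = totalPartitions / instanceCount
    val remainder = totalPartitions % instanceCount
    val ranges = mutableListOf<IntRange>()
    var start = 0
    for (i in 0 until instanceCount) {
        val size = base + if (i >= instanceCount - remainder) 1 else 0
        ranges.add(start until start + size)
        start += size
    }
    return ranges
}
```

With three instances this yields 85, 85, and 86 partitions, matching the example distribution.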