|
|
另类的核桃 · NPM 工具窗口 | ...· 3 月前 · |
|
|
面冷心慈的饺子 · Unity 2018.3 Android ...· 5 月前 · |
|
|
任性的弓箭 · 广州市体育局网站-有效瘦身,普通人是否可以复 ...· 1 年前 · |
|
|
憨厚的篮球 · css控制文本超出省略(单行、两行、多行)- ...· 1 年前 · |
|
|
害羞的滑板 · django ...· 1 年前 · |
epoll
The RabbitMQ Stream Java Client is a Java library for communicating with the RabbitMQ Stream Plugin . Use it to create and delete streams, publish messages, and consume from streams. Learn more in the client overview .
This library requires at least Java 11, but Java 21 or more is recommended.
Stream PerfTest is a performance testing tool based on this client library.
A RabbitMQ stream is a persistent and replicated data structure that models an append-only log . It differs from the classical RabbitMQ queue in the way message consumption works. In a classical RabbitMQ queue, consuming removes messages from the queue. In a RabbitMQ stream, consuming leaves the stream intact. So the content of a stream can be read and re-read without impact or destructive effect.
A RabbitMQ stream is a persistent and replicated data structure that models an append-only log . It differs from traditional RabbitMQ queues in how message consumption works:
This allows stream content to be read and re-read multiple times without any destructive effects.
Neither streams nor traditional queues are inherently better — they serve different use cases.
Replay/Time-traveling: Applications need to read message history or resume from a specific point
High throughput: Higher performance is required compared to other protocols (AMQP, STOMP, MQTT)
Large logs: Large amounts of data must be stored with minimal memory overhead
You can also use streams in RabbitMQ with any protocol RabbitMQ supports (AMQP, MQTT, STOMP). Instead of using the stream protocol directly, you consume from "stream-powered" queues using e.g. AMQP. These special queues are backed by stream infrastructure and provide stream semantics (primarily non-destructive reading).
Stream-powered queues offer stream features (append-only structure, non-destructive reading) while still using your protocol of choice.
But by using another protocol than the stream protocol, one may not benefit from the performance it provides, as it has been designed with high throughput in mind.
RabbitMQ stream provides at-least-once guarantees thanks to the publisher confirm mechanism.
Message deduplication is also supported on the publisher side.
The RabbitMQ Stream Java Client implements the RabbitMQ Stream protocol and avoids dealing with low-level concerns by providing high-level functionalities to build fast, efficient, and robust client applications.
Offset tracking: Resume consumption from where you left off with automatic or manual tracking
Optimized connections: Connect publishers to stream leaders and consumers to replicas
Resource efficiency: Automatically scale connections based on publisher/consumer count
Automatic recovery: Handle network failures with connection recovery and consumer re-subscription
Observability: Built-in support for Prometheus metrics and distributed tracing ( OpenZipkin , Wavefront ) via Micrometer
Application Programming Interfaces (API):
Used for writing application logic.
Includes interfaces and classes in the
com.rabbitmq.stream
package (e.g.,
Producer
,
Consumer
,
Message
).
These APIs form the main programming model and remain as stable as possible.
New features may add methods to existing interfaces.
Service Provider Interfaces (SPI):
Used for implementing technical client behavior, not application logic.
Developers may reference these during configuration or when customizing internal client behavior.
SPIs include interfaces in
com.rabbitmq.stream.codec
,
com.rabbitmq.stream.compression
,
com.rabbitmq.stream.metrics
, and other packages.
These interfaces may change, but this typically won’t affect most applications since changes are limited to client internals.
A RabbitMQ 3.9+ node with the stream plugin enabled is required. The easiest way to get up and running is to use Docker.
There are different ways to make the broker visible to the client application when running in Docker. The next sections show a couple of options suitable for local development.
Docker runs on a virtual machine when using macOS, so do not expect high performance when using RabbitMQ Stream inside Docker on a Mac.
This section shows how to start a broker instance for local development (the broker Docker container and the client application are assumed to run on the same host).
The following command creates a one-time Docker container to run RabbitMQ:
docker run -it --rm --name rabbitmq -p 5552:5552 \
-e RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS='-rabbitmq_stream advertised_host localhost' \
rabbitmq:4.1
The previous command exposes only the stream port (5552), you can expose ports for other protocols:
docker run -it --rm --name rabbitmq -p 5552:5552 -p 5672:5672 -p 15672:15672 \
-e RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS='-rabbitmq_stream advertised_host localhost' \
rabbitmq:4.1-management
Refer to the official RabbitMQ Docker image web page to find out more about its usage.
Once the container is started, the stream plugin must be enabled :
docker exec rabbitmq rabbitmq-plugins enable rabbitmq_stream
This is the simplest way to run the broker locally. The container uses the host network , this is perfect for experimenting locally.
docker run -it --rm --name rabbitmq --network host rabbitmq:4.1
Once the container is started, the stream plugin must be enabled :
docker exec rabbitmq rabbitmq-plugins enable rabbitmq_stream
The container will use the following ports: 5552 (for stream) and 5672 (for AMQP.)
Releases are available from Maven Central, which does not require specific declaration. Snapshots are available from a repository which must be declared in the dependency management configuration.
With Maven:
<repositories>
<repository>
<id>central-portal-snapshots</id>
<url>https://central.sonatype.com/repository/maven-snapshots/</url>
<snapshots><enabled>true</enabled></snapshots>
<releases><enabled>false</enabled></releases>
</repository>
</repositories>
With Gradle:
repositories { maven { name = 'Central Portal Snapshots' url = 'https://central.sonatype.com/repository/maven-snapshots/' // Only search this repository for the specific dependency content { includeModule("com.rabbitmq", "stream-client") mavenCentral()This section covers the basics of the RabbitMQ Stream Java API by building a small publish/consume application. This is a good way to get an overview of the API. If you want a more comprehensive introduction, you can go to the reference documentation section.
The sample application publishes some messages and then registers a consumer to make some computations out of them. The source code is available on GitHub.
The sample class starts with a few imports:
Imports for the sample applicationimport com.rabbitmq.stream.Consumer; import com.rabbitmq.stream.Environment; import com.rabbitmq.stream.OffsetSpecification; import com.rabbitmq.stream.Producer; public class SampleApplication { public static void main(String[] args) throws Exception { System.out.println("Connecting..."); Environment environment = Environment.builder().build(); (1) String stream = UUID.randomUUID().toString(); environment.streamCreator().stream(stream).create(); (2) System.out.println("Starting publishing... "); int messageCount = 10000; CountDownLatch publishConfirmLatch = new CountDownLatch(messageCount); Producer producer = environment.producerBuilder() (1) .stream(stream) .build(); IntStream.range(0, messageCount) .forEach(i -> producer.send( (2) producer.messageBuilder() (3) .addData(String.valueOf(i).getBytes()) (3) .build(), (3) confirmationStatus -> publishConfirmLatch.countDown() (4) publishConfirmLatch.await(10, TimeUnit.SECONDS); (5) producer.close(); (6) System.out.printf("Published %,d messages%n", messageCount); System.out.println("Starting consuming..."); AtomicLong sum = new AtomicLong(0); CountDownLatch consumeLatch = new CountDownLatch(messageCount); Consumer consumer = environment.consumerBuilder() (1) .stream(stream) .offset(OffsetSpecification.first()) (2) .messageHandler((offset, message) -> { (3) sum.addAndGet(Long.parseLong(new String(message.getBodyAsBinary()))); (4) consumeLatch.countDown(); (5) .build(); consumeLatch.await(10, TimeUnit.SECONDS); (6) System.out.println("Sum: " + sum.get()); (7) consumer.close(); (8) environment.deleteStream(stream); (1) environment.close(); (2)The next step is to create the
Environment. It is a management object used to manage streams and create producers as well as consumers. The next snippet shows how to create anEnvironmentinstance and create the stream used in the application:Creating the environmentSystem.out.println("Connecting..."); Environment environment = Environment.builder().build(); (1) String stream = UUID.randomUUID().toString(); environment.streamCreator().stream(stream).create(); (2)Then comes the publishing part. The next snippet shows how to create a
Producer, send messages, and handle publishing confirmations, to make sure the broker has taken outbound messages into account. The application uses a count down latch to move on once the messages have been confirmed.Publishing messagesSystem.out.println("Starting publishing..."); int messageCount = 10000; CountDownLatch publishConfirmLatch = new CountDownLatch(messageCount); Producer producer = environment.producerBuilder() (1) .stream(stream) .build(); IntStream.range(0, messageCount) .forEach(i -> producer.send( (2) producer.messageBuilder() (3) .addData(String.valueOf(i).getBytes()) (3) .build(), (3) confirmationStatus -> publishConfirmLatch.countDown() (4) publishConfirmLatch.await(10, TimeUnit.SECONDS); (5) producer.close(); (6) System.out.printf("Published %,d messages%n", messageCount);It is now time to consume the messages. The
Environmentlets us create aConsumerand provide some logic on each incoming message by implementing aMessageHandler. The next snippet does this to calculate a sum and output it once all the messages have been received:Consuming messagesSystem.out.println("Starting consuming..."); AtomicLong sum = new AtomicLong(0); CountDownLatch consumeLatch = new CountDownLatch(messageCount); Consumer consumer = environment.consumerBuilder() (1) .stream(stream) .offset(OffsetSpecification.first()) (2) .messageHandler((offset, message) -> { (3) sum.addAndGet(Long.parseLong(new String(message.getBodyAsBinary()))); (4) consumeLatch.countDown(); (5) .build(); consumeLatch.await(10, TimeUnit.SECONDS); (6) System.out.println("Sum: " + sum.get()); (7) consumer.close(); (8)The application has some cleaning to do before terminating, that is deleting the stream and closing the environment:
Cleaning before terminatingenvironment.deleteStream(stream); (1) environment.close(); (2)$ ./mvnw -q test-compile exec:java -Dexec.classpathScope="test" \ -Dexec.mainClass="com.rabbitmq.stream.docs.SampleApplication" Starting publishing... Published 10000 messages Starting consuming... Sum: 49995000You can remove the
-qflag if you want more insight on the execution of the build.Overview
This section covers the API for connecting to the RabbitMQ Stream Plugin and working with messages. The API provides three main interfaces:
Creating the Environment
The environment serves as the main entry point to a node or a cluster of nodes.
ProducerandConsumerinstances are created from anEnvironmentinstance. Here is the simplest way to create anEnvironmentinstance:Creating an environment with all the defaultsEnvironment environment = Environment.builder().build(); (1) // ... environment.close(); (2)Treat the environment as a long-lived object. An application will usually create one
Environmentinstance when it starts up and close it when it exits.It is possible to use a URI to specify all the necessary information to connect to a node:
Creating an environment with a URIEnvironment environment = Environment.builder() .uri("rabbitmq-stream://guest:guest@localhost:5552/%2f") (1) .build();The previous snippet uses a URI that specifies the following information: host, port, username, password, and virtual host (
/, which is encoded as%2f). The URI follows the same rules as the AMQP 0.9.1 URI, except the protocol must berabbitmq-stream. TLS is enabled by using therabbitmq-stream+tlsscheme in the URI.When using one URI, the corresponding node will be the main entry point to connect to. The
Environmentwill then use the stream protocol to find out more about streams topology (leaders and replicas) when asked to createProducerandConsumerinstances.If this node fails, the
Environmentwill lose connectivity. To improve resilience, specify multiple URIs as fallback options:Creating an environment with several URIsEnvironment environment = Environment.builder() .uris(Arrays.asList( (1) "rabbitmq-stream://host1:5552", "rabbitmq-stream://host2:5552", "rabbitmq-stream://host3:5552") .build();Understanding Connection Logic
Creating the environment to connect to a cluster node usually works seamlessly. Creating publishers and consumers may encounter connection issues because the client relies on cluster hints to locate stream leaders and replicas.
These connection hints can be accurate or less appropriate depending on the infrastructure. If you encounter connection problems (such as unresolvable hostnames), this blog post explains the root causes and solutions. Setting the
advertised_hostandadvertised_portconfiguration entries should solve the most common connection problems.To make the local development experience simple, the client library can choose to always use
localhostfor producers and consumers. This happens if the following conditions are met: the initial host to connect to islocalhost, the user isguest, and no custom address resolver has been provided. Provide a pass-throughAddressResolvertoEnvironmentBuilder#addressResolver(AddressResolver)to avoid this behavior. It is unlikely this behavior applies for any real-world deployment, wherelocalhostand/or the defaultguestuser should not be used.Enabling TLS
TLS can be enabled by using the
rabbitmq-stream+tlsscheme in the URI. The default TLS port is 5551.Use the
EnvironmentBuilder#tlsmethod to configure TLS. The most important setting is aio.netty.handler.ssl.SslContextinstance, which is created and configured with theio.netty.handler.ssl.SslContext#forClientmethod. Note hostname verification is enabled by default.The following snippet shows a common configuration, whereby the client is instructed to trust servers with certificates signed by the configured certificate authority (CA).
Creating an environment that uses TLSX509Certificate certificate; try (FileInputStream inputStream = new FileInputStream("/path/to/ca_certificate.pem")) { CertificateFactory fact = CertificateFactory.getInstance("X.509"); certificate = (X509Certificate) fact.generateCertificate(inputStream); (1) SslContext sslContext = SslContextBuilder .forClient() .trustManager(certificate) (2) .build(); Environment environment = Environment.builder() .uri("rabbitmq-stream+tls://guest:guest@localhost:5551/%2f") (3) .tls().sslContext(sslContext) (4) .environmentBuilder() .build();Checking the identity of the server the client connects to is an important part of the TLS handshake. To make this work with the stream client library, it is critical the configured trusted certificates match the hosts returned by cluster nodes in the connection hints. Make sure to read the section on connection logic. You may have to configure the
advertised_hostbroker setting in case of a mismatch between trusted certificates and the default connection hints cluster nodes return.It is sometimes handy to trust any server certificates in development environments.
EnvironmentBuilder#tlsprovides thetrustEverythingmethod to do so. This should not be used in a production environment.Creating a TLS environment that trusts all server certificates for developmentEnvironment environment = Environment.builder() .uri("rabbitmq-stream+tls://guest:guest@localhost:5551/%2f") .tls().trustEverything() (1) .environmentBuilder() .build();The URI of the node to connect to (single node).
rabbitmq-stream://guest:guest@localhost:5552/%2fThe URI of the nodes to try to connect to (cluster).
rabbitmq-stream://guest:guest@localhost:5552/%2fsingleton listHost to connect to.
localhostPort to use.
usernameUsername to use to connect.
guest
passwordPassword to use to connect.
guest
virtualHostVirtual host to connect to.
rpcTimeoutTimeout for RPC calls.
Duration.ofSeconds(10)
recoveryBackOffDelayPolicyDelay policy to use for backoff on connection recovery.
Fixed delay of 5 seconds
topologyUpdateBackOffDelayPolicyDelay policy to use for backoff on topology update, e.g. when a stream replica moves and a consumer needs to connect to another node.
Initial delay of 5 seconds then delay of 1 second.
scheduledExecutorServiceExecutor used to schedule infrastructure tasks like background publishing, producers and consumers migration after disconnection or topology update. If a custom executor is provided, it is the developer’s responsibility to close it once it is no longer necessary.
Executors .newScheduledThreadPool( Runtime .getRuntime() .availableProcessors()
maxProducersByConnectionThe maximum number of
Producerinstances a single connection can maintain before a new connection is open. The value must be between 1 and 256. The limit may not be strictly enforced in case of too many concurrent creations.
maxTrackingConsumersByConnectionThe maximum number of
Consumerinstances that store their offset a single connection can maintain before a new connection is open. The value must be between 1 and 256. The limit may not be strictly enforced in case of too many concurrent creations.
maxConsumersByConnectionThe maximum number of
Consumerinstances a single connection can maintain before a new connection is open. The value must be between 1 and 256. The limit may not be strictly enforced in case of too many concurrent creations.
lazyInitializationTo delay the connection opening until necessary.
false
requestedHeartbeatHeartbeat requested by the client.
60 seconds
forceReplicaForConsumersRetry connecting until a replica is available for consumers. The client retries 5 times before falling back to the stream leader node. Set to
trueonly for clustered environments, not for 1-node environments, where only the stream leader is available.
false
forceLeaderForProducersForce connecting to a stream leader for producers. Set to
falseif it acceptable to stay connected to a stream replica when a load balancer is in use.Informational ID for the environment instance. Used as a prefix for connection names.
rabbitmq-stream
addressResolverContract to change resolved node address to connect to.
Pass-through (no-op)
locatorConnectionCountNumber of locator connections to maintain (for metadata search)
The smaller of the number of URIs and 3.
Configuration helper for TLS.
TLS is enabled if a
rabbitmq-stream+tlsURI is provided.
tls#sslContextSet the
io.netty.handler.ssl.SslContextused for the TLS connection. Useio.netty.handler.ssl.SslContextBuilder#forClientto configure it. The server certificate chain, the client private key, and hostname verification are the usual elements that need to be configured.The JDK trust manager and no client private key.
tls#trustEverythingHelper to configure a
SslContextthat trusts all server certificates and does not use a client private key. Only for development.Disabled by default.
nettyConfiguration helper for Netty.
netty#eventLoopGroupNetty’s event dispatcher. It is the developer’s responsibility to close the
EventLoopGroupthey provide.
NioEventLoopGroupinstance closed automatically with theEnvironmentinstance.
netty#ByteBufAllocator
ByteBufallocator.PooledByteBufAllocator.DEFAULT
netty#channelCustomizerExtension point to customize Netty’s
Channelinstances used for connections.
netty#bootstrapCustomizerExtension point to customize Netty’s
Bootstrapinstances used to configure connections.When a Load Balancer is in Use
A load balancer can misguide the client when it tries to connect to nodes that host stream leaders and replicas. The "Connecting to Streams" blog post covers why client applications must connect to the appropriate nodes in a cluster and how a load balancer can make things complicated for them.
The
EnvironmentBuilder#addressResolver(AddressResolver)method allows intercepting the node resolution after metadata hints and before connection. Applications can use this hook to ignore metadata hints and always use the load balancer, as illustrated in the following snippet:Using a custom address resolver to always use a load balancerAddress entryPoint = new Address("my-load-balancer", 5552); (1) Environment environment = Environment.builder() .host(entryPoint.host()) (2) .port(entryPoint.port()) (2) .addressResolver(address -> entryPoint) (3) .locatorConnectionCount(3) (4) .build();Note the example above sets the number of locator connections the environment maintains. Locator connections are used to perform infrastructure-related operations (e.g. looking up the topology of a stream to find an appropriate node to connect to). The environment uses the number of passed-in URIs to choose an appropriate default number and will pick 1 in this case, which may be too low for a cluster deployment. This is why it is recommended to set the value explicitly, 3 being a good default.
Managing Streams
Streams are usually long-lived, centrally-managed entities, that is, applications are not supposed to create and delete them. It is nevertheless possible to create and delete stream with the
Environment. This comes in handy for development and testing purposes.Streams are created with the
Environment#streamCreator()method:Creating a streamenvironment.streamCreator().stream("my-stream").create(); (1)
StreamCreator#createis idempotent: trying to re-create a stream with the same name and same properties (e.g. maximum size, see below) will not throw an exception. In other words, you can be sure the stream has been created onceStreamCreator#createreturns. Note it is not possible to create a stream with the same name as an existing stream but with different properties. Such a request will result in an exception.Streams can be deleted with the
Environment#delete(String)method:Deleting a streamenvironment.deleteStream("my-stream"); (1)Note you should avoid stream churn (creating and deleting streams repetitively) as their creation and deletion imply some significant housekeeping on the server side (interactions with the file system, communication between nodes of the cluster).
It is also possible to limit the size of a stream when creating it. A stream is an append-only data structure and reading from it does not remove data. This means a stream can grow indefinitely. RabbitMQ Stream supports a size-based and time-based retention policies: once the stream reaches a given size or a given age, it is truncated (starting from the beginning).
Limit the size of streams if appropriate!Make sure to set up a retention policy on potentially large streams if you don’t want to saturate the storage devices of your servers. Keep in mind that this means some data will be erased!
environment.streamCreator() .stream("my-stream") .maxLengthBytes(ByteCapacity.GB(10)) (1) .maxSegmentSizeBytes(ByteCapacity.MB(500)) (2) .create();The previous snippet mentions a segment size. RabbitMQ Stream does not store a stream in a big, single file, it uses segment files for technical reasons. A stream is truncated by deleting whole segment files (and not part of them)so the maximum size of a stream is usually significantly higher than the size of segment files. 500 MB is a reasonable segment file size to begin with.
When does the broker enforce the retention policy?The broker enforces the retention policy when the segments of a stream roll over, that is when the current segment has reached its maximum size and is closed in favor of a new one. This means the maximum segment size is a critical setting in the retention mechanism.
RabbitMQ Stream also supports a time-based retention policy: segments get truncated when they reach a certain age. The following snippet illustrates how to set the time-based retention policy:
Setting a time-based retention policy when creating a streamenvironment.streamCreator() .stream("my-stream") .maxAge(Duration.ofHours(6)) (1) .maxSegmentSizeBytes(ByteCapacity.MB(500)) (2) .create();Creating a Producer
A
Producerinstance is created from theEnvironment. The only mandatory setting to specify is the stream to publish to:Creating a producer from the environmentProducer producer = environment.producerBuilder() (1) .stream("my-stream") (2) .build(); (3) // ... producer.close(); (4)Internally, the
Environmentwill query the broker to find out about the topology of the stream and will create or re-use a connection to publish to the leader node of the stream.The following table sums up the main settings to create a
Producer:The logical name of the producer. Specify a name to enable message deduplication.
null(no deduplication)
batchSizeThe maximum number of messages to accumulate before sending them to the broker.
subEntrySizeThe number of messages to put in a sub-entry. A sub-entry is one "slot" in a publishing frame, meaning outbound messages are not only batched in publishing frames, but in sub-entries as well. Use this feature to increase throughput at the cost of increased latency and potential duplicated messages even when deduplication is enabled. See the dedicated section for more information.
1 (meaning no use of sub-entry batching)
compressionCompression algorithm to use when sub-entry batching is in use. See the dedicated section for more information.
Compression.NONE
maxUnconfirmedMessagesThe maximum number of unconfirmed outbound messages.
Producer#sendwill start blocking when the limit is reached.10,000
batchPublishingDelayPeriod to send a batch of messages.
100 ms
dynamicBatchAdapt batch size depending on ingress rate.
confirmTimeout30 seconds
enqueueTimeoutTime before enqueueing of a message fail when the maximum number of unconfirmed is reached. The callback of the message will be called with a negative status. Set the value to
Duration.ZEROif there should be no timeout.10 seconds
retryOnRecoveryWhether to republish unconfirmed messages after recovery. Set to
falseto not republish unconfirmed messages and get a negativeConfirmationStatusfor unconfirmed messages.Once a
Producerhas been created, it is possible to send a message with theProducer#send(Message, ConfirmationHandler)method. The following snippet shows how to publish a message with a byte array payload:Sending a messagebyte[] messagePayload = "hello".getBytes(StandardCharsets.UTF_8); (1) producer.send( producer.messageBuilder().addData(messagePayload).build(), (2) confirmationStatus -> { (3) if (confirmationStatus.isConfirmed()) { // the message made it to the broker } else { // the message did not make it to the brokerUse aMessageBuilderinstance only onceA
MessageBuilderinstance is meant to create only one message. You need to create a new instance ofMessageBuilderfor every message you want to create.The
ConfirmationHandlerdefines an asynchronous callback invoked when the broker confirms message receipt. TheConfirmationHandleris the place for any logic on publishing confirmation, including re-publishing the message if it is negatively acknowledged.Keep the confirmation callback as short as possibleThe confirmation callback should be kept as short as possible to avoid blocking the connection thread. Not doing so can make the
Environment,Producer,Consumerinstances sluggish or even block them. Any long processing should be done in a separate thread (e.g. with an asynchronousExecutorService).Working with Complex Messages
The publishing example above showed that messages are made of a byte array payload, but it did not go much further. Messages in RabbitMQ Stream can actually be more sophisticated, as they comply with the AMQP 1.0 message format.
In a nutshell, a message in RabbitMQ Stream has the following structure:
properties: a defined set of standard properties of the message (e.g. message ID, correlation ID, content type, etc).
application properties: a set of arbitrary key/value pairs.
body: typically an array of bytes.
message annotations: a set of key/value pairs (aimed at the infrastructure).
The RabbitMQ Stream Java client uses the
Messageinterface to abstract a message and the recommended way to createMessageinstances is to use theProducer#messageBuilder()method. To publish aMessage, use theProducer#send(Message,ConfirmationHandler):Creating a message with propertiesMessage message = producer.messageBuilder() (1) .properties() (2) .messageId(UUID.randomUUID()) .correlationId(UUID.randomUUID()) .contentType("text/plain") .messageBuilder() (3) .addData("hello".getBytes(StandardCharsets.UTF_8)) (4) .build(); (5) producer.send(message, confirmationStatus -> { }); (6)Is RabbitMQ Stream based on AMQP 1.0?AMQP 1.0 is a standard that defines an efficient binary peer-to-peer protocol for transporting messages between two processes over a network. It also defines an abstract message format, with concrete standard encoding. This is only the latter part that RabbitMQ Stream uses. The AMQP 1.0 protocol is not used, only AMQP 1.0 encoded messages are wrapped into the RabbitMQ Stream binary protocol.
The actual AMQP 1.0 message encoding and decoding happen on the client side, the RabbitMQ Stream plugin stores only bytes, it has no idea that AMQP 1.0 message format is used.
AMQP 1.0 message format was chosen because of its flexibility and its advanced type system. It provides good interoperability, which allows streams to be accessed as AMQP 0-9-1 queues, without data loss.
Message Deduplication
RabbitMQ Stream provides publisher confirms to avoid losing messages: once the broker has persisted a message it sends a confirmation for this message. But this can lead to duplicate messages: imagine the connection closes because of a network glitch after the message has been persisted but before the confirmation reaches the producer. Once reconnected, the producer will retry to send the same message, as it never received the confirmation. So the message will be persisted twice.
Luckily RabbitMQ Stream can detect and filter out duplicated messages, based on 2 client-side elements: the producer name and the message publishing ID.
Deduplication Requirements: Single Publisher Instance and Single ThreadWe’ll see below that deduplication works using a strictly increasing sequence for messages. This means messages must be published in order, so there must be only one publisher instance with a given name and this instance must publish messages within a single thread.
With several publisher instances with the same name, one instance can be "ahead" of the others for the sequence ID: if it publishes a message with sequence ID 100, any message from any instance with a lower sequence ID will be filtered out.
If there is only one publisher instance with a given name, it should publish messages in a single thread. Even if messages are created in order, with the proper sequence ID, they can get out of order if they are published in several threads, e.g. message 5 can be published before message 2. The deduplication mechanism will then filter out message 2 in this case.
You have to be very careful about the way your applications publish messages when deduplication is in use: make sure publisher instances do not share the same name and use only a single thread. If you worry about performance, note it is possible to publish hundreds of thousands of messages in a single thread with RabbitMQ Stream.
Deduplication is not guaranteed when using sub-entries batchingIt is not possible to guarantee deduplication when sub-entry batching is in use. Sub-entry batching is disabled by default and it does not prevent batching messages in a single publish frame, which can already provide very high throughput.
Setting the Name of a Producer
The producer name is set when creating the producer instance, which automatically enables deduplication:
Naming a producer to enable message deduplicationProducer producer = environment.producerBuilder() .name("my-app-producer") (1) .confirmTimeout(Duration.ZERO) (2) .stream("my-stream") .build();Thanks to the name, the broker will be able to track the messages it has persisted on a given stream for this producer. If the producer connection unexpectedly closes, it will automatically recover and retry outstanding messages. The broker will then filter out messages it has already received and persisted. No more duplicates!
Why settingconfirmTimeoutto 0 when using deduplication?The point of deduplication is to avoid duplicates when retrying unconfirmed messages. But why retrying in the first place? To avoid losing messages, that is enforcing at-least-once semantics. If the client does not stubbornly retry messages and gives up at some point, messages can be lost, which maps to at-most-once semantics. This is why the deduplication examples set the
confirmTimeoutsetting toDuration.ZERO: to disable the background task that calls the confirmation callback for outstanding messages that time out. This way the client will do its best to retry messages until they are confirmed.A producer name must be stable and clear to a human reader. It must not be a random sequence that changes when the producer application is restarted. Names like
online-shop-orderoronline-shop-invoiceare better names than3d235e79-047a-46a6-8c80-9d159d3e1b05. There should be only one living instance of a producer with a given name on a given stream at the same time.Understanding Publishing ID
The producer name is only one part of the deduplication mechanism, the other part is the message publishing ID. If the producer has a name, the client automatically assigns a publishing ID to each outbound message for the producer. The publishing ID is a strictly increasing sequence, starting at 0 and incremented for each message. The default publishing sequence is good enough for deduplication, but it is possible to assign a publishing ID to each message:
Using an explicit publishing IDMessage message = producer.messageBuilder() .publishingId(1) (1) .addData("hello".getBytes(StandardCharsets.UTF_8)) .build(); producer.send(message, confirmationStatus -> { });A custom publishing ID sequence has usually a meaning: it can be the line number of a file or the primary key in a database.
Note the publishing ID is not part of the message: it is not stored with the message and so is not available when consuming the message. It is still possible to store the value in the AMQP 1.0 message application properties or in an appropriate properties (e.g.
messageId).Do not mix client-assigned and custom publishing IDAs soon as a producer name is set, message deduplication is enabled. It is then possible to let the producer assign a publishing ID to each message or assign custom publishing IDs. Do one or the other, not both!
Restarting a Producer Where It Left Off
Using a custom publishing sequence is even more useful to restart a producer where it left off. Imagine a scenario whereby the producer is sending a message for each line in a file and the application uses the line number as the publishing ID. If the application restarts because of some necessary maintenance or even a crash, the producer can restart from the beginning of the file: there would no duplicate messages because the producer has a name and the application sets publishing IDs appropriately. Nevertheless, this is far from ideal, it would be much better to restart just after the last line the broker successfully confirmed. Fortunately this is possible thanks to the
Producer#getLastPublishing()method, which returns the last publishing ID for a given producer. As the publishing ID in this case is the line number, the application can easily scroll to the next line and restart publishing from there.The next snippet illustrates the use of
Producer#getLastPublishing():Setting a producer where it left offProducer producer = environment.producerBuilder() .name("my-app-producer") (1) .confirmTimeout(Duration.ZERO) (2) .stream("my-stream") .build(); long nextPublishingId = producer.getLastPublishingId() + 1; (3) while (moreContent(nextPublishingId)) { byte[] content = getContent(nextPublishingId); (4) Message message = producer.messageBuilder() .publishingId(nextPublishingId) (5) .addData(content) .build(); producer.send(message, confirmationStatus -> {}); nextPublishingId++;Sub-Entry Batching and Compression
RabbitMQ Stream provides a special mode to publish, store, and dispatch messages: sub-entry batching. This mode increases throughput at the cost of increased latency and potential duplicated messages even when deduplication is enabled. It also allows using compression to reduce bandwidth and storage if messages are reasonably similar, at the cost of increasing CPU usage on the client side.
Sub-entry batching consists in squeezing several messages – a batch – in the slot that is usually used for one message. This means outbound messages are not only batched in publishing frames, but in sub-entries as well.
You can enable sub-entry batching by setting the
ProducerBuilder#subEntrySizeparameter to a value greater than 1, like in the following snippet:Enabling sub-entry batchingProducer producer = environment.producerBuilder() .stream("my-stream") .batchSize(100) (1) .subEntrySize(10) (2) .build();A sub-entry batch will go directly to disc after it reached the broker, so the publishing client has complete control over it. This is the occasion to take advantage of the similarity of messages and compress them. There is no compression by default but you can choose among several algorithms with the
ProducerBuilder#compression(Compression)method:Enabling compression of sub-entry messagesProducer producer = environment.producerBuilder() .stream("my-stream") .batchSize(100) (1) .subEntrySize(10) (2) .compression(Compression.ZSTD) (3) .build();Note the messages in a sub-entry are compressed altogether to benefit from their potential similarity, not one by one.
The following table lists the supported algorithms, general information about them, and the respective implementations used by default.
Has a high compression ratio but is slow compared to other algorithms.
JDK implementation
Aims for reasonable compression ratio and very high speeds.
Xerial Snappy (framed)
Aims for good trade-off between speed and compression ratio.
LZ4 Java (framed)
zstd (Zstandard)
Aims for high compression ratio and high speed, especially for decompression.
You are encouraged to test and evaluate the compression algorithms depending on your needs.
The compression libraries are pluggable thanks to the
EnvironmentBuilder#compressionCodecFactory(CompressionCodecFactory)method.Consumers, sub-entry batching, and compressionThere is no configuration required for consumers with regard to sub-entry batching and compression. The broker dispatches messages to client libraries: they are supposed to figure out the format of messages, extract them from their sub-entry, and decompress them if necessary. So when you set up sub-entry batching and compression in your publishers, the consuming applications must use client libraries that support this mode, which is the case for the stream Java client.
Creating a Consumer
A
Consumerinstance is created withEnvironment#consumerBuilder(). The main settings are the stream to consume from, the place in the stream to start consuming from (the offset), and a callback when a message is received (theMessageHandler). The next snippet shows how to create aConsumer:Creating a consumerConsumer consumer = environment.consumerBuilder() (1) .stream("my-stream") (2) .offset(OffsetSpecification.first()) (3) .messageHandler((offset, message) -> { message.getBodyAsBinary(); (4) .build(); (5) // ... consumer.close(); (6)The message processing callback can take its time, but not too muchThe message processing callback should not take too long or it could impact other consumers sharing the same connection. The
EnvironmentBuilder#maxConsumersByConnection(int)method allows isolating consumers from each other, at the cost of creating and maintaining more connections. Consider using a separate thread for long processing (e.g. with an asynchronousExecutorService). Note message processing callbacks run in a dedicated thread, they do not impact other network frames, which run in their own thread.
AutoTrackingStrategyEnable and configure the auto-tracking strategy.
This is the default tracking strategy if a consumer
nameis provided.
AutoTrackingStrategy#messageCountBeforeStorageNumber of messages before storing.
10,000
AutoTrackingStrategy#flushIntervalInterval to check and store the last received offset in case of inactivity.
Duration.ofSeconds(5)
ManualTrackingStrategyEnable and configure the manual tracking strategy.
Disabled by default.
ManualTrackingStrategy#checkIntervalInterval to check if the last requested stored offset has been actually stored.
Duration.ofSeconds(5)noTrackingStrategy
Disable server-side offset tracking even if a name is provided. Useful when single active consumer is enabled and an external store is used for offset tracking.
falsesubscriptionListener
A callback before the subscription is created. Useful when using an external store for offset tracking.
Configuration helper for flow control.
flow#initialCreditsNumber of credits when the subscription is created. Increase for higher throughput at the expense of memory usage.
flow#strategyThe
ConsumerFlowStrategyto use.
ConsumerFlowStrategy#creditOnChunkArrival(10)Why is my consumer not consuming?A consumer starts consuming at the very end of a stream by default (
nextoffset). This means the consumer will receive messages as soon as a producer publishes to the stream. This also means that if no producers are currently publishing to the stream, the consumer will stay idle, waiting for new messages to come in. Use theConsumerBuilder#offset(OffsetSpecification)to change the default behavior and see the offset section to find out more about the different types of offset specification.Specifying an Offset
The offset is the place in the stream where the consumer starts consuming from. The possible values for the offset parameter are the following:
OffsetSpecification.first(): starting from the first available offset. If the stream has not been truncated, this means the beginning of the stream (offset 0).
OffsetSpecification.last(): starting from the end of the stream and returning the last chunk of messages immediately (if the stream is not empty).
OffsetSpecification.next(): starting from the next offset to be written. Contrary toOffsetSpecification.last(), consuming withOffsetSpecification.next()will not return anything if no-one is publishing to the stream. The broker will start sending messages to the consumer when messages are published to the stream.
OffsetSpecification.offset(offset): starting from the specified offset. 0 means consuming from the beginning of the stream (first messages). The client can also specify any number, for example the offset where it left off in a previous incarnation of the application.
OffsetSpecification.timestamp(timestamp): starting from the messages stored after the specified timestamp. Note consumers can receive messages published a bit before the specified timestamp. Application code can filter out those messages if necessary.What is a chunk of messages?A chunk is simply a batch of messages. This is the storage and transportation unit used in RabbitMQ Stream, that is messages are stored contiguously in a chunk and they are delivered as part of a chunk. A chunk can be made of one to several thousands of messages, depending on the ingress.
Each chunk contains a timestamp of its creation time. The broker uses this timestamp to find the appropriate chunk to start from when using a timestamp specification. The broker chooses the closest chunk before the specified timestamp, that is why consumers may see messages published a bit before what they specified.
Tracking the Offset for a Consumer
RabbitMQ Stream provides server-side offset tracking. This means a consumer can track the offset it has reached in a stream. It allows a new incarnation of the consumer to restart consuming where it left off. All of this without an extra datastore, as the broker stores the offset tracking information.
Offset tracking works in 2 steps:
the consumer must have a name. The name is set with
ConsumerBuilder#name(String). The name can be any value (under 256 characters) and is expected to be unique (from the application point of view). Note neither the client library, nor the broker enforces uniqueness of the name: if 2ConsumerJava instances share the same name, their offset tracking will likely be interleaved, which applications usually do not expect.the consumer must periodically store the offset it has reached so far. The way offsets are stored depends on the tracking strategy: automatic or manual.
Whatever tracking strategy you use, a consumer must have a name to be able to store offsets.
Automatic Offset Tracking
The following snippet shows how to enable automatic tracking with the defaults:
Using automatic tracking strategy with the defaultsConsumer consumer = environment.consumerBuilder() .stream("my-stream") .name("application-1") (1) .autoTrackingStrategy() (2) .builder() .messageHandler((context, message) -> { // message handling code... .build();message count before storage: the client will store the offset after the specified number of messages, right after the execution of the message handler. The default is every 10,000 messages.
flush interval: the client will make sure to store the last received offset at the specified interval. This avoids having pending, not stored offsets in case of inactivity. The default is 5 seconds.
.name("application-1") (1) .autoTrackingStrategy() (2) .messageCountBeforeStorage(50_000) (3) .flushInterval(Duration.ofSeconds(10)) (4) .builder() .messageHandler((context, message) -> { // message handling code... .build();Note the automatic tracking is the default tracking strategy, so if you are fine with its defaults, it is enabled as soon as you specify a name for the consumer:
Setting only the consumer name to enable automatic trackingConsumer consumer = environment.consumerBuilder() .stream("my-stream") .name("application-1") (1) .messageHandler((context, message) -> { // message handling code... .build();Automatic tracking is simple and provides good guarantees. It is nevertheless possible to have more fine-grained control over offset tracking by using manual tracking.
Manual Offset Tracking
The manual tracking strategy gives the developer control of storing offsets whenever they want, not only after a given number of messages has been received and supposedly processed, like automatic tracking does.
The following snippet shows how to enable manual tracking and how to store the offset at some point:
Using manual tracking with defaultsConsumer consumer = environment.consumerBuilder() .stream("my-stream") .name("application-1") (1) .manualTrackingStrategy() (2) .builder() .messageHandler((context, message) -> { // message handling code... if (conditionToStore()) { context.storeOffset(); (3) .build();Manual tracking has only one setting: the check interval. The client checks that the last requested stored offset has been actually stored at the specified interval. The default check interval is 5 seconds.
The following snippet shows the configuration of manual tracking:
Configuring manual tracking strategyConsumer consumer = environment.consumerBuilder() .stream("my-stream") .name("application-1") (1) .manualTrackingStrategy() (2) .checkInterval(Duration.ofSeconds(10)) (3) .builder() .messageHandler((context, message) -> { // message handling code... if (conditionToStore()) { context.storeOffset(); (4) .build();The snippet above uses
MessageHandler.Context#storeOffset()to store at the offset of the current message, but it is possible to store anywhere in the stream withMessageHandler.Context#consumer()#store(long)or simplyConsumer#store(long).Considerations On Offset Tracking
When to store offsets? Avoid storing offsets too often or, worse, for each message. Even though offset tracking is a small and fast operation, it will make the stream grow unnecessarily, as the broker persists offset tracking entries in the stream itself.
A good rule of thumb is to store the offset every few thousands of messages. Of course, when the consumer restarts consuming in a new incarnation, the last tracked offset may be a little behind the very last message the previous incarnation actually processed, so the consumer may see some messages that have been already processed.
A solution to this problem is to make sure processing is idempotent or filter out the last duplicated messages.
Is the offset a reliable absolute value? Message offsets may not be contiguous. This means the message at offset 500 in a stream may not be the 501 message in the stream (offsets start at 0). There can be different types of entries in a stream storage, a message is just one of them. For example, storing an offset creates an offset tracking entry, which has its own offset.
This means one must be careful when basing some decision on offset values, like a modulo to perform an operation every X messages. As the message offsets have no guarantee to be contiguous, the operation may not happen exactly every X messages.
Subscription Listener
The client provides a
SubscriptionListenerinterface callback to add behavior before a subscription is created. This callback can be used to customize the offset the client library computed for the subscription. The callback is called when the consumer is first created and when the client has to re-subscribe (e.g. after a disconnection or a topology change).It is possible to use the callback to get the last processed offset from an external store, that is not using the server-side offset tracking feature RabbitMQ Stream provides. The following code snippet shows how this can be done (note the interaction with the external store is not detailed):
Using an external store for offset tracking with a subscription listenerConsumer consumer = environment.consumerBuilder() .stream("my-stream") .subscriptionListener(subscriptionContext -> { (1) long offset = getOffsetFromExternalStore(); (2) subscriptionContext.offsetSpecification(OffsetSpecification.offset(offset + 1)); (3) .messageHandler((context, message) -> { // message handling code... storeOffsetInExternalStore(context.offset()); (4) .build();When using an external store for offset tracking, it is no longer necessary to set a name and an offset strategy, as these only apply when server-side offset tracking is in use.
Using a subscription listener can also be useful to have more accurate offset tracking on re-subscription, at the cost of making the application code slightly more complex. This requires a good understanding on how and when subscription occurs in the client, and so when the subscription listener is called:
on the first subscription (when the consumer is created): the offset specification is the one specified with
ConsumerBuilder#offset(OffsetSpecification), the default beingOffsetSpecification#next()on re-subscription (after a disconnection or topology change): the offset specification is the offset of the last dispatched message
on the first subscription (when the consumer is created): the server-side stored offset (if any) overrides the value specified with
ConsumerBuilder#offset(OffsetSpecification)on re-subscription (after a disconnection or topology change): the server-side stored offset is used
The subscription listener comes in handy on re-subscription. The application can track the last processed offset in-memory, with an
AtomicLongfor example. The application knows exactly when a message is processed and updates its in-memory tracking accordingly, whereas the value computed by the client may not be perfectly appropriate on re-subscription.Let’s take the example of a named consumer with an offset tracking strategy that is lagging because of bad timing and a long flush interval. When a glitch happens and triggers the re-subscription, the server-side stored offset can be quite behind what the application actually processed. Using this server-side stored offset can lead to duplicates, whereas using the in-memory, application-specific offset tracking variable is more accurate. A custom
SubscriptionListenerlets the application developer uses what’s best for the application if the computed value is not optimal.Flow Control
This section covers how a consumer can tell the broker when to send more messages.
By default, the broker keeps sending messages as long as messages are processed and the
MessageHandler#handle(Context, Message)method returns. This strategy works fine if message processing is fast enough. If message processing takes longer, one can be tempted to process messages in parallel with anExecutorService. This will make thehandlemethod return immediately and the broker will keep sending messages, potentially overflowing the consumer.What we miss in the parallel processing case is a way to tell the library we are done processing a message and that we are ready at some point to handle more messages. This is the goal of the
MessageHandler.Context#processed()method.This method is by default a no-op because the default flow control strategy keeps asking for more messages as soon as message processing is done. This method gets some real behavior to control the flow of messages when an appropriate
ConsumerFlowStrategyis setConsumerBuilder#flow(). The following code snippet shows how to set a handy consumer flow strategy:Setting a consumer flow control strategyConsumer consumer = environment.consumerBuilder() .stream("my-stream") .flow() .strategy(ConsumerFlowStrategy.creditWhenHalfMessagesProcessed()) (1) .builder() .messageHandler((context, message) -> { // message handling code (possibly asynchronous)... context.processed(); (2) .build();In the example we set up the
creditWhenHalfMessagesProcessedstrategy which asks for more messages once half of the current messages have been marked as processed. The broker does not send messages one by one, it sends chunks of messages. A chunk of messages can contain 1 to several thousands of messages. So with the strategy set above, onceprocessed()has been called for half of the messages of the current chunk, the library will ask the broker for another one (it will provide a credit for the subscription). By doing this, the next chunk should arrive by the time we are done with the other half of the current chunk. This way the consumer is neither overwhelmed nor idle.The
ConsumerFlowStrategyinterface provides some static helpers to configure the appropriate strategy.Additional notes on consumer flow control:
Make sure to call the
processed()method once you set up aConsumerFlowStrategy. The method is a no-op by default, but it is essential to call it with count-based strategies likecreditWhenHalfMessagesProcessedorcreditOnProcessedMessageCount. No calling it will stop the dispatching of messages.Make sure to call
processed()only once. Whether the method is idempotent depends on the flow strategy implementation. Apart from the default one, the implementations the library provides does not makeprocessed()idempotent.When the single active consumer feature is enabled for several consumer instances sharing the same stream and name, only one of these instances will be active at a time and so will receive messages. The other instances will be idle.
The single active consumer feature provides 2 benefits:
Each application instance registers a single active consumer. The consumer instances share the same name.
The broker makes the first registered consumer the active one.
The active consumer receives and processes messages, the other consumer instances remain idle.
The active consumer stops or crashes.
The broker chooses the consumer next in line to become the new active one.
The new active consumer starts receiving messages.
Note there can be several groups of single active consumers on the same stream. What makes them different from each other is the name used by the consumers. The broker deals with them independently. Let’s use an example. Imagine 2 different
app-1andapp-2applications consuming from the same stream, with 3 identical instances each. Each instance registers 1 single active consumer with the name of the application. We end up with 3app-1consumers and 3app-2consumers, 1 active consumer in each group, so overall 6 consumers and 2 active ones, all of this on the same stream.Let’s see now the API for single active consumer.
Enabling Single Active Consumer
Use the ConsumerBuilder#singleActiveConsumer() method to enable the feature:

Enabling single active consumer

Consumer consumer = environment.consumerBuilder()
    .stream("my-stream")
    .name("application-1") (1)
    .singleActiveConsumer() (2)
    .messageHandler((context, message) -> {
        // message handling code...
    })
    .build();

With the configuration above, the consumer will take part in the
application-1 group on the my-stream stream. If the consumer instance is the first in a group, it will get messages as soon as there are some available. If it is not the first in the group, it will remain idle until it is its turn to be active (likely when all the instances registered before it are gone).

Offset Tracking
Single active consumer and offset tracking work together: when the active consumer goes away, another consumer takes over and resumes where the former active consumer left off. This is how things should work, and this is what happens when using server-side offset tracking. So as long as you use automatic or manual offset tracking, the handoff between a former active consumer and the new one will go well.
The story is different if you are using an external store for offset tracking. In this case you need to tell the client library where to resume from, and you can do this by implementing the ConsumerUpdateListener API.

Reacting to Consumer State Change
The broker notifies a consumer that becomes active before dispatching messages to it. The broker expects a response from the consumer and this response contains the offset the dispatching should start from. So this is the consumer’s responsibility to compute the appropriate offset, not the broker’s. The default behavior is to look up the last stored offset for the consumer on the stream. This works when server-side offset tracking is in use, but it does not when the application chose to use an external store for offset tracking. In this case, it is possible to use the
ConsumerBuilder#consumerUpdateListener(ConsumerUpdateListener) method, as demonstrated in the following snippet:

Fetching the last stored offset from an external store in the consumer update listener callback

Consumer consumer = environment.consumerBuilder()
    .stream("my-stream")
    .name("application-1") (1)
    .singleActiveConsumer() (2)
    .noTrackingStrategy() (3)
    .consumerUpdateListener(context -> { (4)
        long offset = getOffsetFromExternalStore(); (5)
        return OffsetSpecification.offset(offset + 1); (6)
    })
    .messageHandler((context, message) -> {
        // message handling code...
        storeOffsetInExternalStore(context.offset());
    })
    .build();

Super Streams

A super stream is a logical stream composed of multiple individual streams. It provides scalability through partitioning, distributing data across several streams instead of using a single stream.
The stream Java client maintains the same programming model for super streams as individual streams. The
Producer, Consumer, Message, and other APIs remain unchanged when using super streams, so your application code requires minimal modifications.

Consuming applications can use super streams and single active consumer at the same time. The 2 features combined make sure only one consumer instance consumes from an individual stream at a time. In this configuration, super streams provide scalability and single active consumer provides the guarantee that messages of an individual stream are processed in order.
Super streams are a partitioning solution. They are not meant to replace individual streams; they sit on top of them to handle some use cases more effectively. If the stream data is likely to be large – hundreds of gigabytes or even terabytes, size remains relative – and even presents an obvious partition key (e.g. country), a super stream can be appropriate. It can help to cope with the data size and to take advantage of data locality for some processing use cases. Remember that partitioning always comes with complexity though, even if the implementation of super streams strives to make it as transparent as possible for the application developer.
Topology
The topology of a super stream follows the AMQP 0.9.1 model: exchanges, queues, and bindings. AMQP resources are not used to transport or store stream messages. Instead, they describe the super stream topology and define which streams compose the super stream.
Let's take the example of an invoices super stream made of 3 streams (i.e. partitions):

an invoices exchange represents the super stream

the invoices-0, invoices-1, invoices-2 streams are the partitions of the super stream (streams are also AMQP queues in RabbitMQ)

3 bindings between the exchange and the streams link the super stream to its partitions and represent routing rules
Figure 4. The topology of a super stream is defined with bindings between an exchange and queues

When a super stream is in use, the stream Java client queries this information to find out about the partitions of a super stream and the routing rules. From the application code point of view, using a super stream is mostly configuration-based. Some logic must also be provided to extract routing information from messages.
Super Stream Creation and Deletion
It is possible to manage super streams with:

the stream Java client, by using Environment#streamCreator() and Environment#deleteSuperStream(String)

the add_super_stream and delete_super_stream commands in rabbitmq-streams (CLI)

any AMQP 0.9.1 client library
The stream Java client and the dedicated CLI commands are easier to use as they take care of the topology details (exchange, streams, and bindings).
With the Client Library
Here is how to create an
invoices super stream with 5 partitions:

Creating a super stream by specifying the number of partitions

environment.streamCreator().name("invoices")
    .superStream()
    .partitions(5)
    .creator()
    .create();

The super stream partitions will be
invoices-0, invoices-1, …, invoices-4. This topology works by hashing routing keys to determine the target partition for each message. For example, if the routing key is a customer ID, all invoices for the same customer will be routed to the same partition, ensuring they are processed in publishing order.

It is also possible to specify binding keys when creating a super stream:
Creating a super stream by specifying the binding keys

environment.streamCreator().name("invoices")
    .superStream()
    .bindingKeys("amer", "emea", "apac")
    .creator()
    .create();

The super stream partitions will be
invoices-amer, invoices-emea, and invoices-apac in this case.

Using one type of topology or the other depends on the use cases, especially how messages are processed. See the next sections on publishing and consuming to find out more.
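Deleting a super stream from the client library uses Environment#deleteSuperStream(String), mentioned above. A minimal sketch, assuming the invoices super stream created earlier:

environment.deleteSuperStream("invoices"); // deletes the invoices super stream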
With the CLI
Here is how to create an
invoices super stream with 5 partitions:

Creating a super stream from the CLI

rabbitmq-streams add_super_stream invoices --partitions 5

Use rabbitmq-streams add_super_stream --help to learn more about the command.

Publishing to a Super Stream
When the topology of a super stream like the one described above has been set, creating a producer for it is straightforward:
Creating a Producer for a Super Stream

Producer producer = environment.producerBuilder()
    .superStream("invoices") (1)
    .routing(message -> message.getProperties().getMessageIdAsString()) (2)
    .producerBuilder()
    .build(); (3)
// ...
producer.close(); (4)

Although the
invoices super stream is not a physical stream, you must use its name when declaring the producer. The client automatically discovers the individual streams that compose the super stream. Your application code must provide logic to extract a routing key from each message using a Function<Message, String>. The client hashes this routing key to determine the target stream using the partition list and a modulo operation.

The client uses 32-bit MurmurHash3 by default to hash the routing key. This hash function provides good uniformity, performance, and portability, making it a good default choice, but it is possible to specify a custom hash function:
Specifying a custom hash function

Producer producer = environment.producerBuilder()
    .superStream("invoices")
    .routing(message -> message.getProperties().getMessageIdAsString())
      .hash(rk -> rk.hashCode()) (1)
    .producerBuilder()
    .build();

Note using Java's
hashCode() method is a debatable choice, as producers in other languages are unlikely to implement it, which would make the routing differ between producers written in different languages.

Resolving Routes with Bindings
Hashing the routing key to pick a partition is only one way to route messages to the appropriate streams. The stream Java client provides another way to resolve streams, based on the routing key and the bindings between the super stream exchange and the streams.
This routing strategy makes sense when the partitioning has a business meaning, e.g. with a partition for each region of the world.
In such a case, the routing key will be a property of the message that represents the region:
Enabling the "key" routing strategy

Producer producer = environment.producerBuilder()
    .superStream("invoices")
    .routing(msg -> msg.getApplicationProperties().get("region").toString()) (1)
      .key() (2)
    .producerBuilder()
    .build();

Internally the client will query the broker to resolve the destination streams for a given routing key, making the routing logic from any exchange type available to streams. Note the client caches results; it does not query the broker for every message.
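For illustration, here is a sketch of publishing with this producer; the message carries the region application property the routing function above reads (the property value and body are arbitrary examples):

Message message = producer.messageBuilder()
    .applicationProperties().entry("region", "emea").messageBuilder()
    .addData("invoice payload".getBytes(StandardCharsets.UTF_8))
    .build();
// the client resolves the target stream(s) for the "emea" routing key from the bindings
producer.send(message, confirmationStatus -> { });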
Using a Custom Routing Strategy
The solution that provides the most control over routing is using a custom routing strategy. This should be needed only for specific cases.
Here is an excerpt of the
RoutingStrategy interface:

The routing strategy interface

public interface RoutingStrategy {

  /** Where to route a message. */
  List<String> route(Message message, Metadata metadata);

  /** Metadata on the super stream. */
  interface Metadata {

    List<String> partitions();

    List<String> route(String routingKey);
  }
}

Note it is possible to route a message to several streams or even nowhere. The "hash" routing strategy always routes to exactly 1 stream and the "key" routing strategy can route to several streams.
The following code sample shows how to implement a simplistic round-robin
RoutingStrategy and use it in the producer. Note this implementation should not be used in production, as the modulo operation is not sign-safe, for simplicity's sake.

Setting a round-robin routing strategy

AtomicLong messageCount = new AtomicLong(0);
RoutingStrategy routingStrategy = (message, metadata) -> {
    List<String> partitions = metadata.partitions();
    String stream = partitions.get(
        (int) messageCount.getAndIncrement() % partitions.size()
    );
    return Collections.singletonList(stream);
};

Producer producer = environment.producerBuilder()
    .superStream("invoices")
    .routing(null) (1)
    .strategy(routingStrategy) (2)
    .producerBuilder()
    .build();
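If a sign-safe variant is needed, the same idea can use Math.floorMod so the computed index never goes negative. A sketch, under the same assumptions as the snippet above:

AtomicLong counter = new AtomicLong(0);
RoutingStrategy signSafeRoundRobin = (message, metadata) -> {
    List<String> partitions = metadata.partitions();
    // floorMod keeps the result in [0, partitions.size()) even when the counter grows very large
    int index = (int) Math.floorMod(counter.getAndIncrement(), (long) partitions.size());
    return Collections.singletonList(partitions.get(index));
};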
Deduplication

Deduplication for a super stream producer works the same way as with a single stream producer. The publishing ID values are spread across the streams, but this does not affect the mechanism.
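A sketch of deduplication on a super stream producer, assuming the invoices super stream from the previous sections; the producer name and publishing IDs are application-chosen values:

Producer producer = environment.producerBuilder()
    .superStream("invoices")
    .name("invoices-producer") // naming the producer enables deduplication
    .routing(message -> message.getProperties().getMessageIdAsString())
    .producerBuilder()
    .build();

Message message = producer.messageBuilder()
    .publishingId(1) // strictly increasing ID managed by the application
    .properties().messageId("invoice-1").messageBuilder()
    .addData("invoice payload".getBytes(StandardCharsets.UTF_8))
    .build();
producer.send(message, confirmationStatus -> { });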
Consuming From a Super Stream
A super stream consumer is a composite consumer: it looks up the super stream partitions and creates a consumer for each of them. The programming model is the same as with regular consumers for the application developer: their main job is to provide the application code to process messages, that is a
MessageHandler instance. The configuration is different though, and this section covers its subtleties. But let's focus on the behavior of a super stream consumer first.

Super Stream Consumer in Practice
Imagine you have a super stream made of 3 partitions (individual streams). You start an instance of your application, which itself creates a super stream consumer for this super stream. The super stream consumer will create 3 consumers internally, one for each partition, and messages will flow into your MessageHandler.

Imagine now that you start another instance of your application. It will do the exact same thing as previously, and the 2 instances will process the exact same messages in parallel. This may not be what you want: the messages will be processed twice!
Having one instance of your application may be enough: the data are spread across several streams automatically and the messages from the different partitions are processed in parallel from a single OS process.
But if you want to scale the processing across several OS processes (or bare-metal machines, or virtual machines) and you don’t want your messages to be processed several times as illustrated above, you’ll have to enable the single active consumer feature on your super stream consumer.
The next subsections cover the basic settings of a super stream consumer and a dedicated section covers how super stream consumers and single active consumer play together.
Declaring a Super Stream Consumer
Declaring a super stream consumer is not much different from declaring a single stream consumer. The
ConsumerBuilder#superStream(String) method must be used to set the super stream to consume from:

Declaring a super stream consumer

Consumer consumer = environment.consumerBuilder()
    .superStream("invoices") (1)
    .messageHandler((context, message) -> {
        // message processing
    })
    .build();
// ...
consumer.close(); (2)

Offset Tracking
The semantics of offset tracking for a super stream consumer are roughly the same as for an individual stream consumer. There are still some subtle differences, so a good understanding of offset tracking in general and of the automatic and manual offset tracking strategies is recommended.
Here are the main differences for the automatic/manual offset tracking strategies between single and super stream consuming:
automatic offset tracking: internally, the client divides the
messageCountBeforeStorage setting by the number of partitions for each individual consumer. Consider a 3-partition super stream with messageCountBeforeStorage set to 10,000 and 10,000 messages arriving evenly distributed (approximately 3,333 per partition). Without the division, automatic offset tracking would never trigger, because no individual partition would reach the threshold. Dividing messageCountBeforeStorage by the partition count keeps the tracking behavior close to that of an individual stream when messages are evenly distributed across partitions. A good rule of thumb is therefore to multiply the expected per-stream messageCountBeforeStorage by the number of partitions, to avoid storing offsets too often. So with the default of 10,000, it can be set to 30,000 for a 3-partition super stream (see the configuration sketch after this list).

manual offset tracking: the
MessageHandler.Context#storeOffset() method must be used; Consumer#store(long) will fail, because an offset value has a meaning only in one stream, not in the other streams. A call to MessageHandler.Context#storeOffset() stores the current message offset in its stream, but also the offset of the last dispatched message for the other streams of the super stream.
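Here is the configuration sketch referenced above, assuming a 3-partition invoices super stream, the automatic tracking strategy, and an illustrative consumer name:

Consumer consumer = environment.consumerBuilder()
    .superStream("invoices")
    .name("application-1")
    .autoTrackingStrategy()
      .messageCountBeforeStorage(30_000) // 10,000 expected per partition x 3 partitions
    .builder()
    .messageHandler((context, message) -> {
        // message processing
    })
    .build();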
Single Active Consumer Support

As stated previously, super stream consumers and single active consumer provide scalability and the guarantee that messages of an individual stream are processed in order.

Let's take an example with a 3-partition super stream:
You have an application that creates a super stream consumer instance with single active consumer enabled.
You start 3 instances of this application. Each instance is a JVM process running in a Docker container, virtual machine, or on bare-metal hardware.
Since the super stream has 3 partitions, each application instance creates a super stream consumer that maintains 3 internal consumer instances. This results in 9 consumer instances total. Such a super stream consumer is a composite consumer.
The broker and the different application instances coordinate so that only 1 consumer instance for a given partition receives messages at a time. So among these 9 consumer instances, only 3 are actually active, the other ones are idle or inactive.
If one of the application instances stops, the broker will rebalance its active consumer to one of the other instances.
The following figure illustrates how the client library supports the combination of the super stream and single active consumer features. It uses a composite consumer that creates an individual consumer for each partition of the super stream. If there is only one single active consumer instance with a given name for a super stream, each individual consumer is active.
Figure 6. A single active consumer on a super stream is a composite consumer that creates an individual consumer for each partition

Imagine now we start 3 instances of the consuming application to scale out the processing. The individual consumer instances spread out across the super stream partitions and only one is active for each partition, as illustrated in the following figure:
Figure 7. Consumer instances spread across the super stream partitions and are activated accordingly

After this overview, let's see the API and the configuration details.
The following snippet shows how to declare a single active consumer on a super stream with the
ConsumerBuilder#superStream(String) and ConsumerBuilder#singleActiveConsumer() methods:

Enabling single active consumer on a super stream

Consumer consumer = environment.consumerBuilder()
    .superStream("invoices") (1)
    .name("application-1") (2)
    .singleActiveConsumer() (3)
    .messageHandler((context, message) -> {
        // message processing
    })
    .build();
// ...

Note it is mandatory to specify a name for the consumer. This name is used to identify the group of consumer instances and make sure only one is active for each partition. The name is also the reference for offset tracking.
The example above uses automatic offset tracking by default. With this strategy, the client library takes care of offset tracking when consumers become active or inactive. It looks up the latest stored offset when a consumer becomes active to start consuming at the appropriate offset, and it stores the last dispatched offset when a consumer becomes inactive.
The story is not the same with manual offset tracking as the client library does not know which offset it should store when a consumer becomes inactive. The application developer can use the
ConsumerUpdateListener callback to react appropriately when a consumer changes state. The following snippet illustrates the use of the ConsumerUpdateListener callback:

Using manual offset tracking for a super stream single active consumer

Consumer consumer = environment.consumerBuilder()
    .superStream("invoices") (1)
    .name("application-1") (2)
    .singleActiveConsumer() (3)
    .manualTrackingStrategy() (4)
    .builder()
    .consumerUpdateListener(context -> { (5)
        if (context.isActive()) { (6)
            try {
                return OffsetSpecification.offset(
                    context.consumer().storedOffset() + 1
                );
            } catch (NoOffsetException e) {
                return OffsetSpecification.next();
            }
        } else {
            context.consumer().store(lastProcessedOffsetForThisStream); (7)
            return null;
        }
    })
    .messageHandler((context, message) -> {
        // message handling code...
        if (conditionToStore()) {
            context.storeOffset(); (8)
        }
    })
    .build();
// ...

The
ConsumerUpdateListener callback must return the offset to start consuming from when a consumer becomes active. This is what the code above does: it checks if the consumer is active with ConsumerUpdateListener.Context#isActive() and looks up the last stored offset. If there is no stored offset yet, it returns a default value, OffsetSpecification#next() here.

When a consumer becomes inactive, it should store the last processed offset, as another consumer instance will take over elsewhere. It is expected this other consumer runs the exact same code, so it will execute the same sequence when it becomes active (looking up the stored offset, returning the value + 1).
Note the ConsumerUpdateListener is called for a partition, that is, an individual stream. The application code should take care of maintaining a reference to the last processed offset for each partition of the super stream, e.g. with a Map<String, Long> (partition-to-offset map). To do so, the context parameters of the MessageHandler and ConsumerUpdateListener callbacks provide a stream() method.
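A sketch of this bookkeeping, combining the manual tracking snippet above with a ConcurrentHashMap keyed by partition; the map handling is application code:

Map<String, Long> lastProcessedOffsets = new ConcurrentHashMap<>();

Consumer consumer = environment.consumerBuilder()
    .superStream("invoices")
    .name("application-1")
    .singleActiveConsumer()
    .manualTrackingStrategy()
    .builder()
    .consumerUpdateListener(context -> {
        if (context.isActive()) {
            try {
                return OffsetSpecification.offset(context.consumer().storedOffset() + 1);
            } catch (NoOffsetException e) {
                return OffsetSpecification.next();
            }
        } else {
            // becoming inactive: store the last processed offset for this partition
            Long offset = lastProcessedOffsets.get(context.stream());
            if (offset != null) {
                context.consumer().store(offset);
            }
            return null;
        }
    })
    .messageHandler((context, message) -> {
        // message handling code...
        lastProcessedOffsets.put(context.stream(), context.offset());
    })
    .build();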
RabbitMQ Stream provides server-side offset tracking, but it is possible to use an external store to track offsets for streams. The ConsumerUpdateListener callback is still your friend in this case. The following snippet shows how to leverage it when an external store is in use:

Using external offset tracking for a super stream single active consumer

Consumer consumer = environment.consumerBuilder()
    .superStream("invoices") (1)
    .name("application-1") (2)
    .singleActiveConsumer() (3)
    .noTrackingStrategy() (4)
    .consumerUpdateListener(context -> { (5)
        if (context.isActive()) { (6)
            long offset = getOffsetFromExternalStore();
            return OffsetSpecification.offset(offset + 1);
        }
        return null; (7)
    })
    .messageHandler((context, message) -> {
        // message handling code...
        storeOffsetInExternalStore(context.stream(), context.offset()); (8)
    })
    .build();

Even though server-side offset tracking is not used, the consumer must still have a name to identify the group it belongs to. The external offset tracking mechanism is free to use the same name or not.
Calling ConsumerBuilder#noTrackingStrategy() is necessary to disable server-side offset tracking, or the automatic tracking strategy will kick in.

The snippet does not show the details of the external store, but the offset tracking mechanism stores the offset for each message, so the external store must be able to cope with the message rate in a real-world scenario.
The ConsumerUpdateListener callback returns the last stored offset + 1 when the consumer becomes active. This way the broker will resume the dispatching at this location in the stream.

A well-behaved
ConsumerUpdateListener must make sure the last processed offset is stored when the consumer becomes inactive, so that the consumer that takes over can look up the offset and resume consuming at the right location. Our ConsumerUpdateListener does not do anything when the consumer becomes inactive (it returns null): it can afford this because the offset is stored for each message. Make sure to store the last processed offset when the consumer becomes inactive to avoid duplicates when the consumption resumes elsewhere.

Filtering

RabbitMQ Stream's server-side filtering saves network bandwidth by filtering messages on the server, so clients receive only a subset of the messages in a stream.
The filtering feature works as follows: a publisher defines how to extract a filter value from each message, a consumer declares the filter value(s) it is interested in, and the broker only dispatches messages that are likely to match these filter values.
Why is client-side filtering logic still needed? Server-side filtering is probabilistic — it may still send messages that don’t match your filter values. The server uses a Bloom filter (a space-efficient probabilistic data structure) where false positives are possible. Despite this limitation, filtering significantly reduces network bandwidth.
Filtering on the Publishing Side
Publishers must define logic to extract filter values from messages. The following snippet shows how to extract the filter value from an application property:
Declaring a producer with logic to extract a filter value from each message

Producer producer = environment.producerBuilder()
    .stream("invoices")
    .filterValue(msg -> msg.getApplicationProperties().get("state").toString()) (1)
    .build();
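For completeness, a sketch of publishing a message that carries the state application property the filter value is extracted from; the property value and body are arbitrary examples:

Message message = producer.messageBuilder()
    .applicationProperties().entry("state", "california").messageBuilder()
    .addData("invoice payload".getBytes(StandardCharsets.UTF_8))
    .build();
producer.send(message, confirmationStatus -> { });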
Filtering on the Consuming Side

A consumer needs to set up one or several filter values and some filtering logic to enable filtering. The filtering logic must be consistent with the filter values. In the next snippet, the consumer wants to process only messages from the state of California. It sets a filter value to
california and a predicate that accepts a message only if the state application property is california:

Declaring a consumer with a filter value and filtering logic

String filterValue = "california";
Consumer consumer = environment.consumerBuilder()
    .stream("invoices")
    .filter()
      .values(filterValue) (1)
      .postFilter(msg -> filterValue.equals(msg.getApplicationProperties().get("state"))) (2)
    .builder()
    .messageHandler((ctx, msg) -> { })
    .build();

The filter logic is a
Predicate<Message>. It must return true if a message is accepted, following the same semantics as java.util.stream.Stream#filter(Predicate).

Note that not all messages need to have an associated filter value. Many applications may not need filtering, so they can publish messages the regular way. So a stream can contain messages with and without an associated filter value.
By default, messages without a filter value (a.k.a. unfiltered messages) are not sent to a consumer that enabled filtering.
But what if a consumer wants to process messages with a filter value and messages without any filter value as well? It must use the
matchUnfiltered() method in its declaration and also make sure to keep the filtering logic consistent:

Getting unfiltered messages as well when enabling filtering

String filterValue = "california";
Consumer consumer = environment.consumerBuilder()
    .stream("invoices")
    .filter()
      .values(filterValue) (1)
      .matchUnfiltered() (2)
      .postFilter(msg ->
          filterValue.equals(msg.getApplicationProperties().get("state"))
              || !msg.getApplicationProperties().containsKey("state") (3)
      )
    .builder()
    .messageHandler((ctx, msg) -> { })
    .build();

Considerations on Filtering
Since the server may send non-matching messages due to the probabilistic nature of Bloom filters, the client-side filtering logic must be robust to avoid processing unwanted messages.
Good filter value candidates:
Shared categorical values: geographical locations (countries, states), document types (payslip, invoice, order), product categories (book, luggage, toy)
Values with reasonable cardinality (a few to a few thousand distinct values)
OAuth 2 Support
The client supports OAuth 2 authentication using the OAuth 2 Client Credentials flow. Both the client and RabbitMQ server must be configured to use the same OAuth 2 server.
The following snippet shows how to configure OAuth 2 token retrieval on the environment builder:
Environment env = Environment.builder()
    .oauth2() (1)
    .tokenEndpointUri("https://localhost:8443/uaa/oauth/token/") (2)
    .clientId("rabbitmq").clientSecret("rabbitmq") (3)
    .grantType("password") (4)
    .parameter("username", "rabbit_super") (5)
    .parameter("password", "rabbit_super") (5)
    .sslContext(sslContext) (6)
    .environmentBuilder()
    .build();

Using Native epoll
The stream Java client uses Netty's Java NIO transport by default, which works well for most applications.
For specialized performance requirements, Netty supports JNI-based transports. These are less portable but may offer better performance for specific workloads. Note: The RabbitMQ team has not observed significant improvements in their testing.
This example shows how to configure the popular Linux
epoll transport. Other JNI transports follow the same configuration pattern.

Add the native transport dependency matching your OS and architecture. This example uses Linux x86-64 with the
linux-x86_64 classifier. Here is the declaration for Maven:

Declaring the Linux x86-64 native epoll transport dependency with Maven

<dependencies>
  <dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-transport-native-epoll</artifactId>
    <version>4.2.7.Final</version>
    <classifier>linux-x86_64</classifier>
  </dependency>
</dependencies>

And for Gradle:
Declaring the Linux x86-64 native epoll transport dependency with Gradle

dependencies {
  compile "io.netty:netty-transport-native-epoll:4.2.7.Final:linux-x86_64"
}

The native
epoll transport is set up when the environment is configured:

Configuring the native epoll transport in the environment

EventLoopGroup epollEventLoopGroup = new MultiThreadIoEventLoopGroup( (1)
    EpollIoHandler.newFactory() (1)
); (1)

Environment environment = Environment.builder()
    .netty() (2)
      .eventLoopGroup(epollEventLoopGroup) (3)
      .bootstrapCustomizer(b -> b.channel(EpollSocketChannel.class)) (4)
    .environmentBuilder()
    .build();

Micrometer Observation

It is possible to use Micrometer Observation to instrument publishing and consuming in the stream Java client. Micrometer Observation provides metrics, tracing, and log correlation with a single API.
The stream Java client provides an
ObservationCollector abstraction and an implementation for Micrometer Observation. The following snippet shows how to create and set up the Micrometer ObservationCollector implementation with an existing ObservationRegistry:

Configuring Micrometer Observation

Environment environment = Environment.builder()
    .observationCollector(new MicrometerObservationCollectorBuilder() (1)
        .registry(observationRegistry).build()) (2)
    .build();

The next sections document the conventions, spans, and metrics made available by the instrumentation. They are automatically generated from the source code with the Micrometer documentation generator.
Observability - Conventions
Below you can find a list of all GlobalObservationConvention and ObservationConvention implementations declared by this project.

Table 1. ObservationConvention implementations

ObservationConvention class name | Applicable ObservationContext
com.rabbitmq.stream.observation.micrometer.DefaultProcessObservationConvention | ProcessContext
com.rabbitmq.stream.observation.micrometer.ProcessObservationConvention | ProcessContext
com.rabbitmq.stream.observation.micrometer.DefaultPublishObservationConvention | PublishContext
com.rabbitmq.stream.observation.micrometer.PublishObservationConvention | PublishContext

Observability - Spans

Process Observation Span

Span name rabbitmq.stream.process (defined by convention class com.rabbitmq.stream.observation.micrometer.DefaultProcessObservationConvention).

Fully qualified name of the enclosing class com.rabbitmq.stream.observation.micrometer.StreamObservationDocumentation.

Table 2. Tag Keys

Publish Observation Span

Span name rabbitmq.stream.publish (defined by convention class com.rabbitmq.stream.observation.micrometer.DefaultPublishObservationConvention).

Fully qualified name of the enclosing class com.rabbitmq.stream.observation.micrometer.StreamObservationDocumentation.

Table 3. Tag Keys

Observability - Metrics
Below you can find a list of all metrics declared by this project.
Process Observation
Observation for processing a message.
Metric name rabbitmq.stream.process (defined by convention class com.rabbitmq.stream.observation.micrometer.DefaultProcessObservationConvention). Type timer.

Metric name rabbitmq.stream.process.active (defined by convention class com.rabbitmq.stream.observation.micrometer.DefaultProcessObservationConvention). Type long task timer.

Fully qualified name of the enclosing class com.rabbitmq.stream.observation.micrometer.StreamObservationDocumentation.

Table 4. Low cardinality Keys

Publish Observation

Metric name rabbitmq.stream.publish (defined by convention class com.rabbitmq.stream.observation.micrometer.DefaultPublishObservationConvention). Type timer.

Metric name rabbitmq.stream.publish.active (defined by convention class com.rabbitmq.stream.observation.micrometer.DefaultPublishObservationConvention). Type long task timer.

Fully qualified name of the enclosing class com.rabbitmq.stream.observation.micrometer.StreamObservationDocumentation.

Table 5. Low cardinality Keys