Class StreamsBuilder
- java.lang.Object
-
- org.apache.kafka.streams.StreamsBuilder
-
public class StreamsBuilder
extends java.lang.Object

StreamsBuilder provides the high-level Kafka Streams DSL to specify a Kafka Streams topology.

- See Also:
Topology, KStream, KTable, GlobalKTable
-
-
Constructor Summary
Constructor | Description
StreamsBuilder()
-
Method Summary
All Methods | Instance Methods | Concrete Methods | Deprecated Methods

Modifier and Type | Method | Description
StreamsBuilder | addGlobalStore(StoreBuilder storeBuilder, java.lang.String topic, java.lang.String sourceName, Consumed consumed, java.lang.String processorName, ProcessorSupplier stateUpdateSupplier) | Deprecated.
StreamsBuilder | addGlobalStore(StoreBuilder storeBuilder, java.lang.String topic, Consumed consumed, ProcessorSupplier stateUpdateSupplier) | Adds a global StateStore to the topology.
StreamsBuilder | addStateStore(StoreBuilder builder) | Adds a state store to the underlying Topology.
Topology | build() | Returns the Topology that represents the specified processing logic.
<K,V> GlobalKTable<K,V> | globalTable(java.lang.String topic) | Create a GlobalKTable for the specified topic.
<K,V> GlobalKTable<K,V> | globalTable(java.lang.String topic, Consumed<K,V> consumed) | Create a GlobalKTable for the specified topic.
<K,V> GlobalKTable<K,V> | globalTable(java.lang.String topic, Consumed<K,V> consumed, Materialized<K,V,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized) | Create a GlobalKTable for the specified topic.
<K,V> GlobalKTable<K,V> | globalTable(java.lang.String topic, Materialized<K,V,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized) | Create a GlobalKTable for the specified topic.
<K,V> KStream<K,V> | stream(java.lang.String topic) | Create a KStream from the specified topic.
<K,V> KStream<K,V> | stream(java.lang.String topic, Consumed<K,V> consumed) | Create a KStream from the specified topic.
<K,V> KStream<K,V> | stream(java.util.Collection<java.lang.String> topics) | Create a KStream from the specified topics.
<K,V> KStream<K,V> | stream(java.util.Collection<java.lang.String> topics, Consumed<K,V> consumed) | Create a KStream from the specified topics.
<K,V> KStream<K,V> | stream(java.util.regex.Pattern topicPattern) | Create a KStream from the specified topic pattern.
<K,V> KStream<K,V> | stream(java.util.regex.Pattern topicPattern, Consumed<K,V> consumed) | Create a KStream from the specified topic pattern.
<K,V> KTable<K,V> | table(java.lang.String topic) | Create a KTable for the specified topic.
<K,V> KTable<K,V> | table(java.lang.String topic, Consumed<K,V> consumed) | Create a KTable for the specified topic.
<K,V> KTable<K,V> | table(java.lang.String topic, Consumed<K,V> consumed, Materialized<K,V,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized) | Create a KTable for the specified topic.
<K,V> KTable<K,V> | table(java.lang.String topic, Materialized<K,V,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized) | Create a KTable for the specified topic.
-
-
-
Method Detail
-
stream
public <K,V> KStream<K,V> stream(java.lang.String topic)
Create a KStream from the specified topic. The default "auto.offset.reset" strategy, default TimestampExtractor, and default key and value deserializers as specified in the config are used.
Note that the specified input topic must be partitioned by key. If this is not the case it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
- Parameters:
topic - the topic name; cannot be null
- Returns:
a KStream for the specified topic
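As an illustrative sketch of this overload (the topic names, serdes, and transformation are assumptions for the example, not part of this API documentation):

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

public class StreamFromTopicExample {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Uses the default serdes from the config; "input-topic" is a placeholder name.
        KStream<String, String> source = builder.stream("input-topic");

        // A simple stateless transformation before writing to another placeholder topic.
        source.mapValues(value -> value.toUpperCase())
              .to("output-topic", Produced.with(Serdes.String(), Serdes.String()));

        // build() returns the Topology that a KafkaStreams instance would execute.
        Topology topology = builder.build();
        System.out.println(topology.describe());
    }
}
```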
-
stream
public <K,V> KStream<K,V> stream(java.lang.String topic, Consumed<K,V> consumed)
Create a KStream from the specified topic. The "auto.offset.reset" strategy, TimestampExtractor, and key and value deserializers are defined by the options in Consumed.
Note that the specified input topic must be partitioned by key. If this is not the case it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
-
stream
public <K,V> KStream<K,V> stream(java.util.Collection<java.lang.String> topics)
Create a KStream from the specified topics. The default "auto.offset.reset" strategy, default TimestampExtractor, and default key and value deserializers as specified in the config are used. If multiple topics are specified there is no ordering guarantee for records from different topics.
Note that the specified input topics must be partitioned by key. If this is not the case it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
- Parameters:
topics - the topic names; must contain at least one topic name
- Returns:
a KStream for the specified topics
-
stream
public <K,V> KStream<K,V> stream(java.util.Collection<java.lang.String> topics, Consumed<K,V> consumed)
Create a KStream from the specified topics. The "auto.offset.reset" strategy, TimestampExtractor, and key and value deserializers are defined by the options in Consumed. If multiple topics are specified there is no ordering guarantee for records from different topics.
Note that the specified input topics must be partitioned by key. If this is not the case it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
-
stream
public <K,V> KStream<K,V> stream(java.util.regex.Pattern topicPattern)
Create a KStream from the specified topic pattern. The default "auto.offset.reset" strategy, default TimestampExtractor, and default key and value deserializers as specified in the config are used. If multiple topics are matched by the specified pattern, the created KStream will read data from all of them and there is no ordering guarantee between records from different topics.
Note that the specified input topics must be partitioned by key. If this is not the case it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
- Parameters:
topicPattern - the pattern to match for topic names
- Returns:
a KStream for topics matching the regex pattern
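A minimal sketch of pattern-based subscription (the topic naming scheme and serdes are assumptions, and the imports assume a 2.x-era package layout):

```java
import java.util.regex.Pattern;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;

public class StreamFromPatternExample {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Subscribes to every topic whose name matches the pattern,
        // e.g. "logs-app1", "logs-app2".
        KStream<String, String> logs = builder.stream(
                Pattern.compile("logs-.*"),
                Consumed.with(Serdes.String(), Serdes.String()));

        // Records from all matched topics arrive in this single stream,
        // with no ordering guarantee across topics.
        logs.foreach((key, value) -> System.out.println(key + " -> " + value));
    }
}
```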
-
stream
public <K,V> KStream<K,V> stream(java.util.regex.Pattern topicPattern, Consumed<K,V> consumed)
Create a KStream from the specified topic pattern. The "auto.offset.reset" strategy, TimestampExtractor, and key and value deserializers are defined by the options in Consumed. If multiple topics are matched by the specified pattern, the created KStream will read data from all of them and there is no ordering guarantee between records from different topics.
Note that the specified input topics must be partitioned by key. If this is not the case it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
-
table
public <K,V> KTable<K,V> table(java.lang.String topic, Consumed<K,V> consumed, Materialized<K,V,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
Create a KTable for the specified topic. The "auto.offset.reset" strategy, TimestampExtractor, and key and value deserializers are defined by the options in Consumed. Input records with null key will be dropped.
Note that the specified input topic must be partitioned by key. If this is not the case the returned KTable will be corrupted.
The resulting KTable will be materialized in a local KeyValueStore using the given Materialized instance. However, no internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
You should only specify serdes in the Consumed instance as these will also be used to overwrite the serdes in Materialized, i.e.,

 streamsBuilder.table(
     topic,
     Consumed.with(Serdes.String(), Serdes.String()),
     Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as(storeName));

To query the local KeyValueStore it must be obtained via KafkaStreams#store(...):

 KafkaStreams streams = ...
 ReadOnlyKeyValueStore<String, Long> localStore = streams.store(queryableStoreName, QueryableStoreTypes.<String, Long>keyValueStore());
 String key = "some-key";
 Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)

For non-local keys, a custom RPC mechanism must be implemented using KafkaStreams.allMetadata() to query the value of the key on a parallel running instance of your Kafka Streams application.
- Parameters:
topic - the topic name; cannot be null
consumed - the instance of Consumed used to define optional parameters; cannot be null
materialized - the instance of Materialized used to materialize a state store; cannot be null
- Returns:
a KTable for the specified topic
-
table
public <K,V> KTable<K,V> table(java.lang.String topic)
Create a KTable for the specified topic. The default "auto.offset.reset" strategy and default key and value deserializers as specified in the config are used. Input records with null key will be dropped.
Note that the specified input topic must be partitioned by key. If this is not the case the returned KTable will be corrupted.
The resulting KTable will be materialized in a local KeyValueStore with an internal store name. Note that the store name may not be queryable through Interactive Queries. No internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
- Parameters:
topic - the topic name; cannot be null
- Returns:
a KTable for the specified topic
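The changelog semantics of this overload can be sketched as follows (topic names and the downstream use are assumptions for the example):

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;

public class TableExample {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Reads "user-profiles" (a placeholder topic) as a changelog:
        // each record updates the table entry for its key, and records
        // with null value delete the entry (tombstones).
        KTable<String, String> profiles = builder.table("user-profiles");

        // The table's changes can be converted back to a stream, e.g. to observe updates.
        profiles.toStream().foreach((userId, profile) ->
                System.out.println("update for " + userId + ": " + profile));
    }
}
```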
-
table
public <K,V> KTable<K,V> table(java.lang.String topic, Consumed<K,V> consumed)
Create a KTable for the specified topic. The "auto.offset.reset" strategy, TimestampExtractor, and key and value deserializers are defined by the options in Consumed. Input records with null key will be dropped.
Note that the specified input topic must be partitioned by key. If this is not the case the returned KTable will be corrupted.
The resulting KTable will be materialized in a local KeyValueStore with an internal store name. Note that the store name may not be queryable through Interactive Queries. No internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
-
table
public <K,V> KTable<K,V> table(java.lang.String topic, Materialized<K,V,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
Create a KTable for the specified topic. The default "auto.offset.reset" strategy as specified in the config is used. Key and value deserializers as defined by the options in Materialized are used. Input records with null key will be dropped.
Note that the specified input topic must be partitioned by key. If this is not the case the returned KTable will be corrupted.
The resulting KTable will be materialized in a local KeyValueStore using the Materialized instance. No internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
- Parameters:
topic - the topic name; cannot be null
materialized - the instance of Materialized used to materialize a state store; cannot be null
- Returns:
a KTable for the specified topic
-
globalTable
public <K,V> GlobalKTable<K,V> globalTable(java.lang.String topic, Consumed<K,V> consumed)
Create a GlobalKTable for the specified topic. Input records with null key will be dropped.
The resulting GlobalKTable will be materialized in a local KeyValueStore with an internal store name. Note that the store name may not be queryable through Interactive Queries. No internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
Note that GlobalKTable always applies the "auto.offset.reset" strategy "earliest" regardless of the specified value in StreamsConfig or Consumed.
- Parameters:
topic - the topic name; cannot be null
consumed - the instance of Consumed used to define optional parameters
- Returns:
a GlobalKTable for the specified topic
-
globalTable
public <K,V> GlobalKTable<K,V> globalTable(java.lang.String topic)
Create a GlobalKTable for the specified topic. The default key and value deserializers as specified in the config are used. Input records with null key will be dropped.
The resulting GlobalKTable will be materialized in a local KeyValueStore with an internal store name. Note that the store name may not be queryable through Interactive Queries. No internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
Note that GlobalKTable always applies the "auto.offset.reset" strategy "earliest" regardless of the specified value in StreamsConfig.
- Parameters:
topic - the topic name; cannot be null
- Returns:
a GlobalKTable for the specified topic
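A common use of a global table is a join that needs no co-partitioning. The sketch below illustrates this; all topic names and the join logic are assumptions for the example, and the key mapper presumes the order value carries the product id:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

public class GlobalTableJoinExample {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // The global table holds the full "products" topic on every
        // application instance (placeholder topic names throughout).
        GlobalKTable<String, String> products = builder.globalTable("products");
        KStream<String, String> orders = builder.stream("orders");

        // Because the global table is fully replicated, the join does not
        // require "orders" to be co-partitioned with "products".
        orders.join(products,
                    (orderKey, orderValue) -> orderValue, // maps each order to the product id to look up
                    (order, product) -> order + " [" + product + "]")
              .to("enriched-orders", Produced.with(Serdes.String(), Serdes.String()));
    }
}
```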
-
globalTable
public <K,V> GlobalKTable<K,V> globalTable(java.lang.String topic, Consumed<K,V> consumed, Materialized<K,V,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
Create a GlobalKTable for the specified topic. Input KeyValue pairs with null key will be dropped.
The resulting GlobalKTable will be materialized in a local KeyValueStore configured with the provided instance of Materialized. However, no internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
You should only specify serdes in the Consumed instance as these will also be used to overwrite the serdes in Materialized, i.e.,

 streamsBuilder.globalTable(
     topic,
     Consumed.with(Serdes.String(), Serdes.String()),
     Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as(storeName));

To query the local KeyValueStore it must be obtained via KafkaStreams#store(...):

 KafkaStreams streams = ...
 ReadOnlyKeyValueStore<String, Long> localStore = streams.store(queryableStoreName, QueryableStoreTypes.<String, Long>keyValueStore());
 String key = "some-key";
 Long valueForKey = localStore.get(key);

Note that GlobalKTable always applies the "auto.offset.reset" strategy "earliest" regardless of the specified value in StreamsConfig or Consumed.
- Parameters:
topic - the topic name; cannot be null
consumed - the instance of Consumed used to define optional parameters; cannot be null
materialized - the instance of Materialized used to materialize a state store; cannot be null
- Returns:
a GlobalKTable for the specified topic
-
globalTable
public <K,V> GlobalKTable<K,V> globalTable(java.lang.String topic, Materialized<K,V,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
Create a GlobalKTable for the specified topic. Input KeyValue pairs with null key will be dropped.
The resulting GlobalKTable will be materialized in a local KeyValueStore configured with the provided instance of Materialized. However, no internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
To query the local KeyValueStore it must be obtained via KafkaStreams#store(...):

 KafkaStreams streams = ...
 ReadOnlyKeyValueStore<String, Long> localStore = streams.store(queryableStoreName, QueryableStoreTypes.<String, Long>keyValueStore());
 String key = "some-key";
 Long valueForKey = localStore.get(key);

Note that GlobalKTable always applies the "auto.offset.reset" strategy "earliest" regardless of the specified value in StreamsConfig.
- Parameters:
topic - the topic name; cannot be null
materialized - the instance of Materialized used to materialize a state store; cannot be null
- Returns:
a GlobalKTable for the specified topic
-
addStateStore
public StreamsBuilder addStateStore(StoreBuilder builder)
Adds a state store to the underlying Topology.
- Parameters:
builder - the builder used to obtain the StateStore instance
- Returns:
- itself
- Throws:
TopologyException- if state store supplier is already added
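Registering a store typically uses the Stores factory; a minimal sketch (the store name and serdes are assumptions):

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;

public class AddStateStoreExample {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Builds a persistent key-value store named "counts" (name is a placeholder).
        StoreBuilder<KeyValueStore<String, Long>> countStore =
                Stores.keyValueStoreBuilder(
                        Stores.persistentKeyValueStore("counts"),
                        Serdes.String(),
                        Serdes.Long());

        // Registers the store with the topology; a processor added later via
        // transform()/process() can reference it by the name "counts".
        builder.addStateStore(countStore);
    }
}
```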
-
addGlobalStore
@Deprecated public StreamsBuilder addGlobalStore(StoreBuilder storeBuilder, java.lang.String topic, java.lang.String sourceName, Consumed consumed, java.lang.String processorName, ProcessorSupplier stateUpdateSupplier)
Deprecated.
-
addGlobalStore
public StreamsBuilder addGlobalStore(StoreBuilder storeBuilder, java.lang.String topic, Consumed consumed, ProcessorSupplier stateUpdateSupplier)
Adds a global StateStore to the topology. The StateStore sources its data from all partitions of the provided input topic. There will be exactly one instance of this StateStore per Kafka Streams instance.
A SourceNode will be added to consume the data arriving from the partitions of the input topic.
The provided ProcessorSupplier will be used to create a ProcessorNode that will receive all records forwarded from the SourceNode. This ProcessorNode should be used to keep the StateStore up-to-date. The default TimestampExtractor as specified in the config is used.
- Parameters:
storeBuilder - user defined StoreBuilder; cannot be null
topic - the topic to source the data from
consumed - the instance of Consumed used to define optional parameters; cannot be null
stateUpdateSupplier - the instance of ProcessorSupplier
- Returns:
- itself
- Throws:
TopologyException- if the processor of state is already registered
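Putting the pieces together, a sketch of a global store whose processor mirrors the source topic into the store (the store name, topic name, and ConfigUpdateProcessor are hypothetical, and logging is disabled on the assumption that the input topic itself serves as the recovery log):

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.processor.AbstractProcessor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.Stores;

public class AddGlobalStoreExample {

    // A hypothetical processor that writes every record from the source topic
    // into the global store, keeping it up-to-date.
    static class ConfigUpdateProcessor extends AbstractProcessor<String, String> {
        private KeyValueStore<String, String> store;

        @Override
        @SuppressWarnings("unchecked")
        public void init(ProcessorContext context) {
            super.init(context);
            store = (KeyValueStore<String, String>) context.getStateStore("config-store");
        }

        @Override
        public void process(String key, String value) {
            store.put(key, value);
        }
    }

    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        builder.addGlobalStore(
                Stores.keyValueStoreBuilder(
                        Stores.inMemoryKeyValueStore("config-store"),
                        Serdes.String(),
                        Serdes.String()).withLoggingDisabled(),
                "config-topic",
                Consumed.with(Serdes.String(), Serdes.String()),
                ConfigUpdateProcessor::new);
    }
}
```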
-
-