Class KStreamBuilder
- java.lang.Object
  - org.apache.kafka.streams.processor.TopologyBuilder
    - org.apache.kafka.streams.kstream.KStreamBuilder
@Deprecated public class KStreamBuilder extends TopologyBuilder
Deprecated. Use StreamsBuilder instead. KStreamBuilder provides the high-level Kafka Streams DSL to specify a Kafka Streams topology.
- See Also:
TopologyBuilder, KStream, KTable, GlobalKTable
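As a brief, hedged sketch of how this deprecated builder was typically wired into an application (the topic names, application id, and bootstrap server below are illustrative assumptions, not values from this documentation):

```java
import java.util.Properties;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class WordLengthApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "word-length-app"); // assumed id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker

        KStreamBuilder builder = new KStreamBuilder();
        // "input-topic" and "output-topic" are hypothetical topic names.
        KStream<String, String> source = builder.stream("input-topic");
        source.mapValues(v -> String.valueOf(v.length()))
              .to("output-topic");

        // KStreamBuilder extends TopologyBuilder, so the builder itself is
        // passed to the KafkaStreams constructor.
        KafkaStreams streams = new KafkaStreams(builder, props);
        streams.start();
    }
}
```

With the replacement StreamsBuilder the shape is the same, except that `builder.build()` produces a Topology object which is then passed to the KafkaStreams constructor.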
-
-
Nested Class Summary
-
Nested classes/interfaces inherited from class org.apache.kafka.streams.processor.TopologyBuilder
TopologyBuilder.AutoOffsetReset, TopologyBuilder.TopicsInfo
-
-
Field Summary
-
Fields inherited from class org.apache.kafka.streams.processor.TopologyBuilder
internalTopologyBuilder
-
-
Constructor Summary
KStreamBuilder() Deprecated.
-
Method Summary
- <K,V> GlobalKTable<K,V> globalTable(String topic)
  Deprecated. Create a GlobalKTable for the specified topic.
- <K,V> GlobalKTable<K,V> globalTable(String topic, String queryableStoreName)
  Deprecated. Create a GlobalKTable for the specified topic.
- <K,V> GlobalKTable<K,V> globalTable(Serde<K> keySerde, Serde<V> valSerde, String topic)
  Deprecated. Create a GlobalKTable for the specified topic.
- <K,V> GlobalKTable<K,V> globalTable(Serde<K> keySerde, Serde<V> valSerde, String topic, String queryableStoreName)
  Deprecated. Create a GlobalKTable for the specified topic.
- <K,V> GlobalKTable<K,V> globalTable(Serde<K> keySerde, Serde<V> valSerde, String topic, StateStoreSupplier<KeyValueStore> storeSupplier)
  Deprecated. Create a GlobalKTable for the specified topic.
- <K,V> GlobalKTable<K,V> globalTable(Serde<K> keySerde, Serde<V> valSerde, TimestampExtractor timestampExtractor, String topic, String queryableStoreName)
  Deprecated. Create a GlobalKTable for the specified topic.
- <K,V> KStream<K,V> merge(KStream<K,V>... streams)
  Deprecated.
- String newName(String prefix)
  Deprecated. This function is for internal usage only and should not be called.
- String newStoreName(String prefix)
  Deprecated. This function is for internal usage only and should not be called.
- <K,V> KStream<K,V> stream(String... topics)
  Deprecated. Create a KStream from the specified topics.
- <K,V> KStream<K,V> stream(Pattern topicPattern)
  Deprecated. Create a KStream from the specified topic pattern.
- <K,V> KStream<K,V> stream(Serde<K> keySerde, Serde<V> valSerde, String... topics)
  Deprecated. Create a KStream from the specified topics.
- <K,V> KStream<K,V> stream(Serde<K> keySerde, Serde<V> valSerde, Pattern topicPattern)
  Deprecated. Create a KStream from the specified topic pattern.
- <K,V> KStream<K,V> stream(TimestampExtractor timestampExtractor, Serde<K> keySerde, Serde<V> valSerde, String... topics)
  Deprecated. Create a KStream from the specified topics.
- <K,V> KStream<K,V> stream(TimestampExtractor timestampExtractor, Serde<K> keySerde, Serde<V> valSerde, Pattern topicPattern)
  Deprecated. Create a KStream from the specified topic pattern.
- <K,V> KStream<K,V> stream(TopologyBuilder.AutoOffsetReset offsetReset, String... topics)
  Deprecated. Create a KStream from the specified topics.
- <K,V> KStream<K,V> stream(TopologyBuilder.AutoOffsetReset offsetReset, Pattern topicPattern)
  Deprecated. Create a KStream from the specified topic pattern.
- <K,V> KStream<K,V> stream(TopologyBuilder.AutoOffsetReset offsetReset, Serde<K> keySerde, Serde<V> valSerde, String... topics)
  Deprecated. Create a KStream from the specified topics.
- <K,V> KStream<K,V> stream(TopologyBuilder.AutoOffsetReset offsetReset, Serde<K> keySerde, Serde<V> valSerde, Pattern topicPattern)
  Deprecated. Create a KStream from the specified topic pattern.
- <K,V> KStream<K,V> stream(TopologyBuilder.AutoOffsetReset offsetReset, TimestampExtractor timestampExtractor, Serde<K> keySerde, Serde<V> valSerde, String... topics)
  Deprecated. Create a KStream from the specified topics.
- <K,V> KStream<K,V> stream(TopologyBuilder.AutoOffsetReset offsetReset, TimestampExtractor timestampExtractor, Serde<K> keySerde, Serde<V> valSerde, Pattern topicPattern)
  Deprecated. Create a KStream from the specified topic pattern.
- <K,V> KTable<K,V> table(String topic)
  Deprecated. Create a KTable for the specified topic.
- <K,V> KTable<K,V> table(String topic, String queryableStoreName)
  Deprecated. Create a KTable for the specified topic.
- <K,V> KTable<K,V> table(String topic, StateStoreSupplier<KeyValueStore> storeSupplier)
  Deprecated. Create a KTable for the specified topic.
- <K,V> KTable<K,V> table(Serde<K> keySerde, Serde<V> valSerde, String topic)
  Deprecated. Create a KTable for the specified topic.
- <K,V> KTable<K,V> table(Serde<K> keySerde, Serde<V> valSerde, String topic, String queryableStoreName)
  Deprecated. Create a KTable for the specified topic.
- <K,V> KTable<K,V> table(Serde<K> keySerde, Serde<V> valSerde, String topic, StateStoreSupplier<KeyValueStore> storeSupplier)
  Deprecated. Create a KTable for the specified topic.
- <K,V> KTable<K,V> table(TimestampExtractor timestampExtractor, String topic, String storeName)
  Deprecated. Create a KTable for the specified topic.
- <K,V> KTable<K,V> table(TimestampExtractor timestampExtractor, Serde<K> keySerde, Serde<V> valSerde, String topic, String storeName)
  Deprecated. Create a KTable for the specified topic.
- <K,V> KTable<K,V> table(TopologyBuilder.AutoOffsetReset offsetReset, String topic)
  Deprecated. Create a KTable for the specified topic.
- <K,V> KTable<K,V> table(TopologyBuilder.AutoOffsetReset offsetReset, String topic, String queryableStoreName)
  Deprecated. Create a KTable for the specified topic.
- <K,V> KTable<K,V> table(TopologyBuilder.AutoOffsetReset offsetReset, String topic, StateStoreSupplier<KeyValueStore> storeSupplier)
  Deprecated. Create a KTable for the specified topic.
- <K,V> KTable<K,V> table(TopologyBuilder.AutoOffsetReset offsetReset, Serde<K> keySerde, Serde<V> valSerde, String topic)
  Deprecated. Create a KTable for the specified topic.
- <K,V> KTable<K,V> table(TopologyBuilder.AutoOffsetReset offsetReset, Serde<K> keySerde, Serde<V> valSerde, String topic, String queryableStoreName)
  Deprecated. Create a KTable for the specified topic.
- <K,V> KTable<K,V> table(TopologyBuilder.AutoOffsetReset offsetReset, TimestampExtractor timestampExtractor, String topic, String storeName)
  Deprecated. Create a KTable for the specified topic.
- <K,V> KTable<K,V> table(TopologyBuilder.AutoOffsetReset offsetReset, TimestampExtractor timestampExtractor, Serde<K> keySerde, Serde<V> valSerde, String topic, String queryableStoreName)
  Deprecated. Create a KTable for the specified topic.
- <K,V> KTable<K,V> table(TopologyBuilder.AutoOffsetReset offsetReset, TimestampExtractor timestampExtractor, Serde<K> keySerde, Serde<V> valSerde, String topic, StateStoreSupplier<KeyValueStore> storeSupplier)
  Deprecated. Create a KTable for the specified topic.
-
Methods inherited from class org.apache.kafka.streams.processor.TopologyBuilder
addGlobalStore, addGlobalStore, addInternalTopic, addProcessor, addSink, addSink, addSink, addSink, addSource, addSource, addSource, addSource, addSource, addSource, addSource, addSource, addSource, addSource, addSource, addSource, addSource, addStateStore, build, buildGlobalStateTopology, connectProcessorAndStateStores, connectProcessors, connectSourceStoreAndTopic, copartitionGroups, copartitionSources, earliestResetTopicsPattern, globalStateStores, latestResetTopicsPattern, nodeGroups, setApplicationIdAndInternalStream, sourceTopicPattern, stateStoreNameToSourceTopics, subscriptionUpdates, topicGroups, updateSubscriptions
-
-
-
-
Method Detail
-
stream
public <K,V> KStream<K,V> stream(java.lang.String... topics)
Deprecated. Create a KStream from the specified topics. The default "auto.offset.reset" strategy, default TimestampExtractor, and default key and value deserializers as specified in the config are used. If multiple topics are specified there is no ordering guarantee for records from different topics.
Note that the specified input topics must be partitioned by key. If this is not the case it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
- Parameters:
topics - the topic names; must contain at least one topic name
- Returns:
a KStream for the specified topics
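A hedged sketch of this overload (the topic names "orders" and "refunds" and the String types are illustrative assumptions):

```java
KStreamBuilder builder = new KStreamBuilder();

// Uses the default serdes, default timestamp extractor, and default
// "auto.offset.reset" strategy from the StreamsConfig. Records from the
// two topics are interleaved with no cross-topic ordering guarantee.
KStream<String, String> events = builder.stream("orders", "refunds");
```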
-
stream
public <K,V> KStream<K,V> stream(TopologyBuilder.AutoOffsetReset offsetReset, java.lang.String... topics)
Deprecated. Create a KStream from the specified topics. The default TimestampExtractor and default key and value deserializers as specified in the config are used. If multiple topics are specified there is no ordering guarantee for records from different topics.
Note that the specified input topics must be partitioned by key. If this is not the case it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
- Parameters:
offsetReset - the "auto.offset.reset" policy to use for the specified topics if no valid committed offsets are available
topics - the topic names; must contain at least one topic name
- Returns:
a KStream for the specified topics
-
stream
public <K,V> KStream<K,V> stream(java.util.regex.Pattern topicPattern)
Deprecated. Create a KStream from the specified topic pattern. The default "auto.offset.reset" strategy, default TimestampExtractor, and default key and value deserializers as specified in the config are used. If multiple topics are matched by the specified pattern, the created KStream will read data from all of them and there is no ordering guarantee between records from different topics.
Note that the specified input topics must be partitioned by key. If this is not the case it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
- Parameters:
topicPattern - the pattern to match for topic names
- Returns:
a KStream for topics matching the regex pattern
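A hedged sketch of the pattern-based overload (the "metrics-" topic naming scheme is an assumption for illustration):

```java
KStreamBuilder builder = new KStreamBuilder();

// Subscribes to every topic whose name matches the regex, e.g. the
// hypothetical topics "metrics-cpu" and "metrics-mem"; topics created
// later that match the pattern are also picked up as the consumer's
// subscription metadata refreshes.
KStream<String, String> metrics =
    builder.stream(java.util.regex.Pattern.compile("metrics-.*"));
```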
-
stream
public <K,V> KStream<K,V> stream(TopologyBuilder.AutoOffsetReset offsetReset, java.util.regex.Pattern topicPattern)
Deprecated. Create a KStream from the specified topic pattern. The default TimestampExtractor and default key and value deserializers as specified in the config are used. If multiple topics are matched by the specified pattern, the created KStream will read data from all of them and there is no ordering guarantee between records from different topics.
Note that the specified input topics must be partitioned by key. If this is not the case it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
- Parameters:
offsetReset - the "auto.offset.reset" policy to use for the matched topics if no valid committed offsets are available
topicPattern - the pattern to match for topic names
- Returns:
a KStream for topics matching the regex pattern
-
stream
public <K,V> KStream<K,V> stream(org.apache.kafka.common.serialization.Serde<K> keySerde, org.apache.kafka.common.serialization.Serde<V> valSerde, java.lang.String... topics)
Deprecated. Create a KStream from the specified topics. The default "auto.offset.reset" strategy and default TimestampExtractor as specified in the config are used. If multiple topics are specified there is no ordering guarantee for records from different topics.
Note that the specified input topics must be partitioned by key. If this is not the case it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
- Parameters:
keySerde - key serde used to read this source KStream; if not specified the default serde defined in the configs will be used
valSerde - value serde used to read this source KStream; if not specified the default serde defined in the configs will be used
topics - the topic names; must contain at least one topic name
- Returns:
a KStream for the specified topics
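A hedged sketch of the serde-accepting overload (the topic name "page-views" and the Long value type are assumptions for illustration):

```java
import org.apache.kafka.common.serialization.Serdes;

// Explicit serdes override the defaults from the config; useful when
// one input topic's types differ from the application-wide defaults.
KStream<String, Long> views = builder.stream(
    Serdes.String(),   // key serde
    Serdes.Long(),     // value serde
    "page-views");     // hypothetical topic name
```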
-
stream
public <K,V> KStream<K,V> stream(TopologyBuilder.AutoOffsetReset offsetReset, org.apache.kafka.common.serialization.Serde<K> keySerde, org.apache.kafka.common.serialization.Serde<V> valSerde, java.lang.String... topics)
Deprecated. Create a KStream from the specified topics. The default TimestampExtractor as specified in the config is used. If multiple topics are specified there is no ordering guarantee for records from different topics.
Note that the specified input topics must be partitioned by key. If this is not the case it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
- Parameters:
offsetReset - the "auto.offset.reset" policy to use for the specified topics if no valid committed offsets are available
keySerde - key serde used to read this source KStream; if not specified the default serde defined in the configs will be used
valSerde - value serde used to read this source KStream; if not specified the default serde defined in the configs will be used
topics - the topic names; must contain at least one topic name
- Returns:
a KStream for the specified topics
-
stream
public <K,V> KStream<K,V> stream(TimestampExtractor timestampExtractor, org.apache.kafka.common.serialization.Serde<K> keySerde, org.apache.kafka.common.serialization.Serde<V> valSerde, java.lang.String... topics)
Deprecated. Create a KStream from the specified topics. The default "auto.offset.reset" strategy as specified in the config is used. If multiple topics are specified there is no ordering guarantee for records from different topics.
Note that the specified input topics must be partitioned by key. If this is not the case it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
- Parameters:
timestampExtractor - the stateless timestamp extractor used for this source KStream; if not specified the default extractor defined in the configs will be used
keySerde - key serde used to read this source KStream; if not specified the default serde defined in the configs will be used
valSerde - value serde used to read this source KStream; if not specified the default serde defined in the configs will be used
topics - the topic names; must contain at least one topic name
- Returns:
a KStream for the specified topics
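A hedged sketch of a per-source timestamp extractor (the payload layout, topic name, and lambda form are assumptions; this assumes the two-argument extract(record, previousTimestamp) signature of this API generation):

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.processor.TimestampExtractor;

// A hypothetical extractor that treats the Long record value itself as
// the event time, falling back to the previous timestamp on null values.
TimestampExtractor eventTime = (record, previousTimestamp) -> {
    Long ts = (Long) record.value();
    return ts != null ? ts : previousTimestamp;
};

KStream<String, Long> timed = builder.stream(
    eventTime, Serdes.String(), Serdes.Long(), "sensor-readings");
```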
-
stream
public <K,V> KStream<K,V> stream(TopologyBuilder.AutoOffsetReset offsetReset, TimestampExtractor timestampExtractor, org.apache.kafka.common.serialization.Serde<K> keySerde, org.apache.kafka.common.serialization.Serde<V> valSerde, java.lang.String... topics)
Deprecated. Create a KStream from the specified topics. If multiple topics are specified there is no ordering guarantee for records from different topics.
Note that the specified input topics must be partitioned by key. If this is not the case it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
- Parameters:
offsetReset - the "auto.offset.reset" policy to use for the specified topics if no valid committed offsets are available
timestampExtractor - the stateless timestamp extractor used for this source KStream; if not specified the default extractor defined in the configs will be used
keySerde - key serde used to read this source KStream; if not specified the default serde defined in the configs will be used
valSerde - value serde used to read this source KStream; if not specified the default serde defined in the configs will be used
topics - the topic names; must contain at least one topic name
- Returns:
a KStream for the specified topics
-
stream
public <K,V> KStream<K,V> stream(org.apache.kafka.common.serialization.Serde<K> keySerde, org.apache.kafka.common.serialization.Serde<V> valSerde, java.util.regex.Pattern topicPattern)
Deprecated. Create a KStream from the specified topic pattern. The default "auto.offset.reset" strategy and default TimestampExtractor as specified in the config are used. If multiple topics are matched by the specified pattern, the created KStream will read data from all of them and there is no ordering guarantee between records from different topics.
Note that the specified input topics must be partitioned by key. If this is not the case it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
- Parameters:
keySerde - key serde used to read this source KStream; if not specified the default serde defined in the configs will be used
valSerde - value serde used to read this source KStream; if not specified the default serde defined in the configs will be used
topicPattern - the pattern to match for topic names
- Returns:
a KStream for topics matching the regex pattern
-
stream
public <K,V> KStream<K,V> stream(TopologyBuilder.AutoOffsetReset offsetReset, org.apache.kafka.common.serialization.Serde<K> keySerde, org.apache.kafka.common.serialization.Serde<V> valSerde, java.util.regex.Pattern topicPattern)
Deprecated. Create a KStream from the specified topic pattern. The default TimestampExtractor as specified in the config is used. If multiple topics are matched by the specified pattern, the created KStream will read data from all of them and there is no ordering guarantee between records from different topics.
Note that the specified input topics must be partitioned by key. If this is not the case it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
- Parameters:
offsetReset - the "auto.offset.reset" policy to use for the matched topics if no valid committed offsets are available
keySerde - key serde used to read this source KStream; if not specified the default serde defined in the configs will be used
valSerde - value serde used to read this source KStream; if not specified the default serde defined in the configs will be used
topicPattern - the pattern to match for topic names
- Returns:
a KStream for topics matching the regex pattern
-
stream
public <K,V> KStream<K,V> stream(TimestampExtractor timestampExtractor, org.apache.kafka.common.serialization.Serde<K> keySerde, org.apache.kafka.common.serialization.Serde<V> valSerde, java.util.regex.Pattern topicPattern)
Deprecated. Create a KStream from the specified topic pattern. The default "auto.offset.reset" strategy as specified in the config is used. If multiple topics are matched by the specified pattern, the created KStream will read data from all of them and there is no ordering guarantee between records from different topics.
Note that the specified input topics must be partitioned by key. If this is not the case it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
- Parameters:
timestampExtractor - the stateless timestamp extractor used for this source KStream; if not specified the default extractor defined in the configs will be used
keySerde - key serde used to read this source KStream; if not specified the default serde defined in the configs will be used
valSerde - value serde used to read this source KStream; if not specified the default serde defined in the configs will be used
topicPattern - the pattern to match for topic names
- Returns:
a KStream for topics matching the regex pattern
-
stream
public <K,V> KStream<K,V> stream(TopologyBuilder.AutoOffsetReset offsetReset, TimestampExtractor timestampExtractor, org.apache.kafka.common.serialization.Serde<K> keySerde, org.apache.kafka.common.serialization.Serde<V> valSerde, java.util.regex.Pattern topicPattern)
Deprecated. Create a KStream from the specified topic pattern. If multiple topics are matched by the specified pattern, the created KStream will read data from all of them and there is no ordering guarantee between records from different topics.
Note that the specified input topics must be partitioned by key. If this is not the case it is the user's responsibility to repartition the data before any key-based operation (like aggregation or join) is applied to the returned KStream.
- Parameters:
offsetReset - the "auto.offset.reset" policy to use for the matched topics if no valid committed offsets are available
timestampExtractor - the stateless timestamp extractor used for this source KStream; if not specified the default extractor defined in the configs will be used
keySerde - key serde used to read this source KStream; if not specified the default serde defined in the configs will be used
valSerde - value serde used to read this source KStream; if not specified the default serde defined in the configs will be used
topicPattern - the pattern to match for topic names
- Returns:
a KStream for topics matching the regex pattern
-
table
public <K,V> KTable<K,V> table(java.lang.String topic, java.lang.String queryableStoreName)
Deprecated. Create a KTable for the specified topic. The default "auto.offset.reset" strategy, default TimestampExtractor, and default key and value deserializers as specified in the config are used. Input records with null key will be dropped.
Note that the specified input topic must be partitioned by key. If this is not the case the returned KTable will be corrupted.
The resulting KTable will be materialized in a local KeyValueStore with the given queryableStoreName. However, no internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
To query the local KeyValueStore it must be obtained via KafkaStreams#store(...):
  KafkaStreams streams = ...
  ReadOnlyKeyValueStore<String, Long> localStore = streams.store(queryableStoreName, QueryableStoreTypes.<String, Long>keyValueStore());
  String key = "some-key";
  Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
For non-local keys, a custom RPC mechanism must be implemented using KafkaStreams.allMetadata() to query the value of the key on a parallel running instance of your Kafka Streams application.
- Parameters:
topic - the topic name; cannot be null
queryableStoreName - the state store name; if null this is the equivalent of table(String)
- Returns:
a KTable for the specified topic
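A hedged end-to-end sketch of materializing and querying such a table (the topic name "user-profiles", store name "profiles-store", and key are assumptions; props is assumed to be a configured Properties object):

```java
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

// Materializes the changelog of the "user-profiles" topic into a local
// store named "profiles-store", queryable via Interactive Queries.
KTable<String, String> profiles = builder.table("user-profiles", "profiles-store");

KafkaStreams streams = new KafkaStreams(builder, props);
streams.start();

// Once the instance is running, the local portion of the table's state
// can be read; only keys hosted on this instance are visible here.
ReadOnlyKeyValueStore<String, String> store =
    streams.store("profiles-store", QueryableStoreTypes.<String, String>keyValueStore());
String value = store.get("some-user-id");
```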
-
table
public <K,V> KTable<K,V> table(java.lang.String topic, StateStoreSupplier<KeyValueStore> storeSupplier)
Deprecated. Create a KTable for the specified topic. The default "auto.offset.reset" strategy and default key and value deserializers as specified in the config are used. Input records with null key will be dropped.
Note that the specified input topic must be partitioned by key. If this is not the case the returned KTable will be corrupted.
The resulting KTable will be materialized in a local KeyValueStore provided by the given storeSupplier. However, no internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
To query the local KeyValueStore it must be obtained via KafkaStreams#store(...):
  KafkaStreams streams = ...
  ReadOnlyKeyValueStore<String, Long> localStore = streams.store(queryableStoreName, QueryableStoreTypes.<String, Long>keyValueStore());
  String key = "some-key";
  Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
For non-local keys, a custom RPC mechanism must be implemented using KafkaStreams.allMetadata() to query the value of the key on a parallel running instance of your Kafka Streams application.
- Parameters:
topic - the topic name; cannot be null
storeSupplier - user defined state store supplier; cannot be null
- Returns:
a KTable for the specified topic
-
table
public <K,V> KTable<K,V> table(java.lang.String topic)
Deprecated. Create a KTable for the specified topic. The default "auto.offset.reset" strategy and default key and value deserializers as specified in the config are used. Input records with null key will be dropped.
Note that the specified input topic must be partitioned by key. If this is not the case the returned KTable will be corrupted.
The resulting KTable will be materialized in a local KeyValueStore with an internal store name. Note that this store name may not be queryable through Interactive Queries. No internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
- Parameters:
topic - the topic name; cannot be null
- Returns:
a KTable for the specified topic
-
table
public <K,V> KTable<K,V> table(TopologyBuilder.AutoOffsetReset offsetReset, java.lang.String topic, java.lang.String queryableStoreName)
Deprecated. Create a KTable for the specified topic. The default key and value deserializers as specified in the config are used. Input records with null key will be dropped.
Note that the specified input topic must be partitioned by key. If this is not the case the returned KTable will be corrupted.
The resulting KTable will be materialized in a local KeyValueStore with the given queryableStoreName. However, no internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
To query the local KeyValueStore it must be obtained via KafkaStreams#store(...):
  KafkaStreams streams = ...
  ReadOnlyKeyValueStore<String, Long> localStore = streams.store(queryableStoreName, QueryableStoreTypes.<String, Long>keyValueStore());
  String key = "some-key";
  Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
For non-local keys, a custom RPC mechanism must be implemented using KafkaStreams.allMetadata() to query the value of the key on a parallel running instance of your Kafka Streams application.
- Parameters:
offsetReset - the "auto.offset.reset" policy to use for the specified topic if no valid committed offsets are available
topic - the topic name; cannot be null
queryableStoreName - the state store name; if null this is the equivalent of table(AutoOffsetReset, String)
- Returns:
a KTable for the specified topic
-
table
public <K,V> KTable<K,V> table(TopologyBuilder.AutoOffsetReset offsetReset, java.lang.String topic, StateStoreSupplier<KeyValueStore> storeSupplier)
Deprecated. Create a KTable for the specified topic. The default TimestampExtractor and default key and value deserializers as specified in the config are used. Input records with null key will be dropped.
Note that the specified input topic must be partitioned by key. If this is not the case the returned KTable will be corrupted.
The resulting KTable will be materialized in a local KeyValueStore provided by the given storeSupplier. However, no internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
To query the local KeyValueStore it must be obtained via KafkaStreams#store(...):
  KafkaStreams streams = ...
  ReadOnlyKeyValueStore<String, Long> localStore = streams.store(queryableStoreName, QueryableStoreTypes.<String, Long>keyValueStore());
  String key = "some-key";
  Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)
For non-local keys, a custom RPC mechanism must be implemented using KafkaStreams.allMetadata() to query the value of the key on a parallel running instance of your Kafka Streams application.
- Parameters:
offsetReset - the "auto.offset.reset" policy to use for the specified topic if no valid committed offsets are available
topic - the topic name; cannot be null
storeSupplier - user defined state store supplier; cannot be null
- Returns:
a KTable for the specified topic
-
table
public <K,V> KTable<K,V> table(TopologyBuilder.AutoOffsetReset offsetReset, java.lang.String topic)
Deprecated. Create a KTable for the specified topic. The default key and value deserializers as specified in the config are used. Input records with null key will be dropped.
Note that the specified input topic must be partitioned by key. If this is not the case the returned KTable will be corrupted.
The resulting KTable will be materialized in a local KeyValueStore with an internal store name. Note that this store name may not be queryable through Interactive Queries. No internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
- Parameters:
offsetReset - the "auto.offset.reset" policy to use for the specified topic if no valid committed offsets are available
topic - the topic name; cannot be null
- Returns:
a KTable for the specified topic
-
table
public <K,V> KTable<K,V> table(TimestampExtractor timestampExtractor, java.lang.String topic, java.lang.String storeName)
Deprecated. Create a KTable for the specified topic. The default "auto.offset.reset" strategy and default key and value deserializers as specified in the config are used. Input KeyValue pairs with null key will be dropped.
Note that the specified input topic must be partitioned by key. If this is not the case the returned KTable will be corrupted.
The resulting KTable will be materialized in a local KeyValueStore with the given storeName. However, no internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
To query the local KeyValueStore it must be obtained via KafkaStreams#store(...):

    KafkaStreams streams = ...
    ReadOnlyKeyValueStore<String, Long> localStore = streams.store(storeName, QueryableStoreTypes.<String, Long>keyValueStore());
    String key = "some-key";
    Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)

For non-local keys, a custom RPC mechanism must be implemented using KafkaStreams.allMetadata() to query the value of the key on a parallel running instance of your Kafka Streams application.
- Parameters:
timestampExtractor - the stateless timestamp extractor used for this source KTable; if not specified the default extractor defined in the configs will be used
topic - the topic name; cannot be null
storeName - the state store name; cannot be null
- Returns:
- a KTable for the specified topic
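As an illustration, a source KTable with a custom timestamp extractor might be wired up as follows. This is a minimal sketch against the deprecated API; the topic name, store name, and the choice of WallclockTimestampExtractor are assumptions, not taken from this documentation:

```java
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.processor.WallclockTimestampExtractor;

public class TableWithExtractorSketch {
    public static void main(String[] args) {
        KStreamBuilder builder = new KStreamBuilder();

        // WallclockTimestampExtractor ships with Kafka Streams; any stateless
        // TimestampExtractor implementation could be passed here instead.
        KTable<String, Long> counts = builder.table(
                new WallclockTimestampExtractor(), // overrides the default extractor from the config
                "word-counts",                     // assumed input topic; must be partitioned by key
                "word-counts-store");              // store name, queryable via Interactive Queries
    }
}
```

Because no serdes are passed, the default key and value serdes from StreamsConfig are used, so they must match the actual key/value types at runtime.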
-
table
public <K,V> KTable<K,V> table(TopologyBuilder.AutoOffsetReset offsetReset, TimestampExtractor timestampExtractor, java.lang.String topic, java.lang.String storeName)
Deprecated. Create a KTable for the specified topic. The default key and value deserializers as specified in the config are used. Input KeyValue pairs with null key will be dropped.
Note that the specified input topic must be partitioned by key. If this is not the case the returned KTable will be corrupted.
The resulting KTable will be materialized in a local KeyValueStore with the given storeName. However, no internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
To query the local KeyValueStore it must be obtained via KafkaStreams#store(...):

    KafkaStreams streams = ...
    ReadOnlyKeyValueStore<String, Long> localStore = streams.store(storeName, QueryableStoreTypes.<String, Long>keyValueStore());
    String key = "some-key";
    Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)

For non-local keys, a custom RPC mechanism must be implemented using KafkaStreams.allMetadata() to query the value of the key on a parallel running instance of your Kafka Streams application.
- Parameters:
offsetReset - the "auto.offset.reset" policy to use for the specified topic if no valid committed offsets are available
timestampExtractor - the stateless timestamp extractor used for this source KTable; if not specified the default extractor defined in the configs will be used
topic - the topic name; cannot be null
storeName - the state store name; cannot be null
- Returns:
- a KTable for the specified topic
-
table
public <K,V> KTable<K,V> table(org.apache.kafka.common.serialization.Serde<K> keySerde, org.apache.kafka.common.serialization.Serde<V> valSerde, java.lang.String topic, java.lang.String queryableStoreName)
Deprecated. Create a KTable for the specified topic. The default "auto.offset.reset" strategy and default TimestampExtractor as specified in the config are used. Input records with null key will be dropped.
Note that the specified input topic must be partitioned by key. If this is not the case the returned KTable will be corrupted.
The resulting KTable will be materialized in a local KeyValueStore with the given queryableStoreName. However, no internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
To query the local KeyValueStore it must be obtained via KafkaStreams#store(...):

    KafkaStreams streams = ...
    ReadOnlyKeyValueStore<String, Long> localStore = streams.store(queryableStoreName, QueryableStoreTypes.<String, Long>keyValueStore());
    String key = "some-key";
    Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)

For non-local keys, a custom RPC mechanism must be implemented using KafkaStreams.allMetadata() to query the value of the key on a parallel running instance of your Kafka Streams application.
- Parameters:
keySerde - key serde used to send key-value pairs; if not specified the default key serde defined in the configuration will be used
valSerde - value serde used to send key-value pairs; if not specified the default value serde defined in the configuration will be used
topic - the topic name; cannot be null
queryableStoreName - the state store name; if null this is the equivalent of table(Serde, Serde, String)
- Returns:
- a KTable for the specified topic
-
table
public <K,V> KTable<K,V> table(org.apache.kafka.common.serialization.Serde<K> keySerde, org.apache.kafka.common.serialization.Serde<V> valSerde, java.lang.String topic, StateStoreSupplier<KeyValueStore> storeSupplier)
Deprecated. Create a KTable for the specified topic. The default TimestampExtractor and the default "auto.offset.reset" strategy as specified in the config are used. Input records with null key will be dropped.
Note that the specified input topic must be partitioned by key. If this is not the case the returned KTable will be corrupted.
The resulting KTable will be materialized in a local KeyValueStore provided by the given storeSupplier. However, no internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
To query the local KeyValueStore it must be obtained via KafkaStreams#store(...):

    KafkaStreams streams = ...
    ReadOnlyKeyValueStore<String, Long> localStore = streams.store(queryableStoreName, QueryableStoreTypes.<String, Long>keyValueStore());
    String key = "some-key";
    Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)

For non-local keys, a custom RPC mechanism must be implemented using KafkaStreams.allMetadata() to query the value of the key on a parallel running instance of your Kafka Streams application.
- Parameters:
keySerde - key serde used to send key-value pairs; if not specified the default key serde defined in the configuration will be used
valSerde - value serde used to send key-value pairs; if not specified the default value serde defined in the configuration will be used
topic - the topic name; cannot be null
storeSupplier - user defined state store supplier; cannot be null
- Returns:
- a KTable for the specified topic
-
table
public <K,V> KTable<K,V> table(org.apache.kafka.common.serialization.Serde<K> keySerde, org.apache.kafka.common.serialization.Serde<V> valSerde, java.lang.String topic)
Deprecated. Create a KTable for the specified topic. The default "auto.offset.reset" strategy as specified in the config is used. Input records with null key will be dropped.
Note that the specified input topic must be partitioned by key. If this is not the case the returned KTable will be corrupted.
The resulting KTable will be materialized in a local KeyValueStore with an internal store name. Note that that store name may not be queryable through Interactive Queries. No internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
- Parameters:
keySerde - key serde used to send key-value pairs; if not specified the default key serde defined in the configuration will be used
valSerde - value serde used to send key-value pairs; if not specified the default value serde defined in the configuration will be used
topic - the topic name; cannot be null
- Returns:
- a KTable for the specified topic
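The serde-taking overload above can be sketched as follows (a hedged illustration, not from this documentation; the topic name is an assumption, and Serdes.String()/Serdes.Long() are the built-in serde factories from org.apache.kafka.common.serialization):

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;

public class TableWithSerdesSketch {
    public static void main(String[] args) {
        KStreamBuilder builder = new KStreamBuilder();

        // Explicit serdes override the defaults from StreamsConfig. Because this
        // overload takes no store name, the backing store gets an internal name
        // and is not reachable through Interactive Queries.
        KTable<String, Long> table = builder.table(
                Serdes.String(),  // key serde
                Serdes.Long(),    // value serde
                "input-topic");   // assumed topic name; must be partitioned by key
    }
}
```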
-
table
public <K,V> KTable<K,V> table(TopologyBuilder.AutoOffsetReset offsetReset, org.apache.kafka.common.serialization.Serde<K> keySerde, org.apache.kafka.common.serialization.Serde<V> valSerde, java.lang.String topic, java.lang.String queryableStoreName)
Deprecated. Create a KTable for the specified topic. Input records with null key will be dropped.
Note that the specified input topic must be partitioned by key. If this is not the case the returned KTable will be corrupted.
The resulting KTable will be materialized in a local KeyValueStore with the given queryableStoreName. However, no internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
To query the local KeyValueStore it must be obtained via KafkaStreams#store(...):

    KafkaStreams streams = ...
    ReadOnlyKeyValueStore<String, Long> localStore = streams.store(queryableStoreName, QueryableStoreTypes.<String, Long>keyValueStore());
    String key = "some-key";
    Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)

For non-local keys, a custom RPC mechanism must be implemented using KafkaStreams.allMetadata() to query the value of the key on a parallel running instance of your Kafka Streams application.
- Parameters:
offsetReset - the "auto.offset.reset" policy to use for the specified topic if no valid committed offsets are available
keySerde - key serde used to send key-value pairs; if not specified the default key serde defined in the configuration will be used
valSerde - value serde used to send key-value pairs; if not specified the default value serde defined in the configuration will be used
topic - the topic name; cannot be null
queryableStoreName - the state store name; if null this is the equivalent of table(AutoOffsetReset, Serde, Serde, String)
- Returns:
- a KTable for the specified topic
-
table
public <K,V> KTable<K,V> table(TimestampExtractor timestampExtractor, org.apache.kafka.common.serialization.Serde<K> keySerde, org.apache.kafka.common.serialization.Serde<V> valSerde, java.lang.String topic, java.lang.String storeName)
Deprecated. Create a KTable for the specified topic. The default "auto.offset.reset" strategy as specified in the config is used. Input KeyValue pairs with null key will be dropped.
Note that the specified input topic must be partitioned by key. If this is not the case the returned KTable will be corrupted.
The resulting KTable will be materialized in a local KeyValueStore with the given storeName. However, no internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
To query the local KeyValueStore it must be obtained via KafkaStreams#store(...):

    KafkaStreams streams = ...
    ReadOnlyKeyValueStore<String, Long> localStore = streams.store(storeName, QueryableStoreTypes.<String, Long>keyValueStore());
    String key = "some-key";
    Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)

For non-local keys, a custom RPC mechanism must be implemented using KafkaStreams.allMetadata() to query the value of the key on a parallel running instance of your Kafka Streams application.
- Parameters:
timestampExtractor - the stateless timestamp extractor used for this source KTable; if not specified the default extractor defined in the configs will be used
keySerde - key serde used to send key-value pairs; if not specified the default key serde defined in the configuration will be used
valSerde - value serde used to send key-value pairs; if not specified the default value serde defined in the configuration will be used
topic - the topic name; cannot be null
storeName - the state store name; cannot be null
- Returns:
- a KTable for the specified topic
-
table
public <K,V> KTable<K,V> table(TopologyBuilder.AutoOffsetReset offsetReset, TimestampExtractor timestampExtractor, org.apache.kafka.common.serialization.Serde<K> keySerde, org.apache.kafka.common.serialization.Serde<V> valSerde, java.lang.String topic, java.lang.String queryableStoreName)
Deprecated. Create a KTable for the specified topic. Input KeyValue pairs with null key will be dropped.
Note that the specified input topic must be partitioned by key. If this is not the case the returned KTable will be corrupted.
The resulting KTable will be materialized in a local KeyValueStore with the given queryableStoreName. However, no internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
To query the local KeyValueStore it must be obtained via KafkaStreams#store(...):

    KafkaStreams streams = ...
    ReadOnlyKeyValueStore<String, Long> localStore = streams.store(queryableStoreName, QueryableStoreTypes.<String, Long>keyValueStore());
    String key = "some-key";
    Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)

For non-local keys, a custom RPC mechanism must be implemented using KafkaStreams.allMetadata() to query the value of the key on a parallel running instance of your Kafka Streams application.
- Parameters:
offsetReset - the "auto.offset.reset" policy to use for the specified topic if no valid committed offsets are available
timestampExtractor - the stateless timestamp extractor used for this source KTable; if not specified the default extractor defined in the configs will be used
keySerde - key serde used to send key-value pairs; if not specified the default key serde defined in the configuration will be used
valSerde - value serde used to send key-value pairs; if not specified the default value serde defined in the configuration will be used
topic - the topic name; cannot be null
queryableStoreName - the state store name; if null this is the equivalent of table(AutoOffsetReset, Serde, Serde, String)
- Returns:
- a KTable for the specified topic
-
table
public <K,V> KTable<K,V> table(TopologyBuilder.AutoOffsetReset offsetReset, org.apache.kafka.common.serialization.Serde<K> keySerde, org.apache.kafka.common.serialization.Serde<V> valSerde, java.lang.String topic)
Deprecated. Create a KTable for the specified topic. The default key and value deserializers as specified in the config are used. Input records with null key will be dropped.
Note that the specified input topic must be partitioned by key. If this is not the case the returned KTable will be corrupted.
The resulting KTable will be materialized in a local KeyValueStore with an internal store name. Note that that store name may not be queryable through Interactive Queries. No internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
- Parameters:
offsetReset - the "auto.offset.reset" policy to use for the specified topic if no valid committed offsets are available
keySerde - key serde used to send key-value pairs; if not specified the default key serde defined in the configuration will be used
valSerde - value serde used to send key-value pairs; if not specified the default value serde defined in the configuration will be used
topic - the topic name; cannot be null
- Returns:
- a KTable for the specified topic
-
table
public <K,V> KTable<K,V> table(TopologyBuilder.AutoOffsetReset offsetReset, TimestampExtractor timestampExtractor, org.apache.kafka.common.serialization.Serde<K> keySerde, org.apache.kafka.common.serialization.Serde<V> valSerde, java.lang.String topic, StateStoreSupplier<KeyValueStore> storeSupplier)
Deprecated. Create a KTable for the specified topic. Input records with null key will be dropped.
Note that the specified input topic must be partitioned by key. If this is not the case the returned KTable will be corrupted.
The resulting KTable will be materialized in a local KeyValueStore provided by the given storeSupplier. However, no internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
To query the local KeyValueStore it must be obtained via KafkaStreams#store(...):

    KafkaStreams streams = ...
    ReadOnlyKeyValueStore<String, Long> localStore = streams.store(queryableStoreName, QueryableStoreTypes.<String, Long>keyValueStore());
    String key = "some-key";
    Long valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)

For non-local keys, a custom RPC mechanism must be implemented using KafkaStreams.allMetadata() to query the value of the key on a parallel running instance of your Kafka Streams application.
- Parameters:
offsetReset - the "auto.offset.reset" policy to use for the specified topic if no valid committed offsets are available
timestampExtractor - the stateless timestamp extractor used for this source KTable; if not specified the default extractor defined in the configs will be used
keySerde - key serde used to send key-value pairs; if not specified the default key serde defined in the configuration will be used
valSerde - value serde used to send key-value pairs; if not specified the default value serde defined in the configuration will be used
topic - the topic name; cannot be null
storeSupplier - user defined state store supplier; cannot be null
- Returns:
- a KTable for the specified topic
-
globalTable
public <K,V> GlobalKTable<K,V> globalTable(java.lang.String topic, java.lang.String queryableStoreName)
Deprecated. Create a GlobalKTable for the specified topic. The default key and value deserializers as specified in the config are used. Input records with null key will be dropped.
The resulting GlobalKTable will be materialized in a local KeyValueStore with the given queryableStoreName. However, no internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
To query the local KeyValueStore it must be obtained via KafkaStreams#store(...):

    KafkaStreams streams = ...
    ReadOnlyKeyValueStore<String, Long> localStore = streams.store(queryableStoreName, QueryableStoreTypes.<String, Long>keyValueStore());
    String key = "some-key";
    Long valueForKey = localStore.get(key);

Note that GlobalKTable always applies "auto.offset.reset" strategy "earliest" regardless of the specified value in StreamsConfig.
- Parameters:
topic - the topic name; cannot be null
queryableStoreName - the state store name; if null this is the equivalent of globalTable(String)
- Returns:
- a GlobalKTable for the specified topic
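A common use of a GlobalKTable is a non-co-partitioned enrichment join: every application instance holds the full table, so the stream side needs no repartitioning. A hedged sketch (the topic names, store name, and join logic are assumptions for illustration):

```java
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class GlobalTableJoinSketch {
    public static void main(String[] args) {
        KStreamBuilder builder = new KStreamBuilder();

        GlobalKTable<String, String> profiles =
                builder.globalTable("user-profiles", "profiles-store");

        KStream<String, String> clicks = builder.stream("user-clicks");

        // For each click, look up the user's profile in the global table and
        // combine both values into the output record.
        KStream<String, String> enriched = clicks.join(
                profiles,
                (clickKey, clickValue) -> clickKey,                   // map stream record to table key
                (clickValue, profile) -> profile + "/" + clickValue); // merge values
    }
}
```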
-
globalTable
public <K,V> GlobalKTable<K,V> globalTable(java.lang.String topic)
Deprecated. Create a GlobalKTable for the specified topic. The default key and value deserializers as specified in the config are used. Input records with null key will be dropped.
The resulting GlobalKTable will be materialized in a local KeyValueStore with an internal store name. Note that that store name may not be queryable through Interactive Queries. No internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
Note that GlobalKTable always applies "auto.offset.reset" strategy "earliest" regardless of the specified value in StreamsConfig.
- Parameters:
topic - the topic name; cannot be null
- Returns:
- a GlobalKTable for the specified topic
-
globalTable
public <K,V> GlobalKTable<K,V> globalTable(org.apache.kafka.common.serialization.Serde<K> keySerde, org.apache.kafka.common.serialization.Serde<V> valSerde, TimestampExtractor timestampExtractor, java.lang.String topic, java.lang.String queryableStoreName)
Deprecated. Create a GlobalKTable for the specified topic. The default key and value deserializers as specified in the config are used. Input KeyValue pairs with null key will be dropped.
The resulting GlobalKTable will be materialized in a local KeyValueStore with the given queryableStoreName. However, no internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
To query the local KeyValueStore it must be obtained via KafkaStreams#store(...):

    KafkaStreams streams = ...
    ReadOnlyKeyValueStore<String, Long> localStore = streams.store(queryableStoreName, QueryableStoreTypes.<String, Long>keyValueStore());
    String key = "some-key";
    Long valueForKey = localStore.get(key);

Note that GlobalKTable always applies "auto.offset.reset" strategy "earliest" regardless of the specified value in StreamsConfig.
- Parameters:
keySerde - key serde used to send key-value pairs; if not specified the default key serde defined in the configuration will be used
valSerde - value serde used to send key-value pairs; if not specified the default value serde defined in the configuration will be used
timestampExtractor - the stateless timestamp extractor used for this source GlobalKTable; if not specified the default extractor defined in the configs will be used
topic - the topic name; cannot be null
queryableStoreName - the state store name; if null this is the equivalent of globalTable(Serde, Serde, String)
- Returns:
- a GlobalKTable for the specified topic
-
globalTable
public <K,V> GlobalKTable<K,V> globalTable(org.apache.kafka.common.serialization.Serde<K> keySerde, org.apache.kafka.common.serialization.Serde<V> valSerde, java.lang.String topic, StateStoreSupplier<KeyValueStore> storeSupplier)
Deprecated. Create a GlobalKTable for the specified topic. The default TimestampExtractor as specified in the config is used. Input KeyValue pairs with null key will be dropped.
The resulting GlobalKTable will be materialized in a local KeyValueStore provided by the given storeSupplier. However, no internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
To query the local KeyValueStore it must be obtained via KafkaStreams#store(...):

    KafkaStreams streams = ...
    ReadOnlyKeyValueStore<String, Long> localStore = streams.store(queryableStoreName, QueryableStoreTypes.<String, Long>keyValueStore());
    String key = "some-key";
    Long valueForKey = localStore.get(key);

Note that GlobalKTable always applies "auto.offset.reset" strategy "earliest" regardless of the specified value in StreamsConfig.
- Parameters:
keySerde - key serde used to send key-value pairs; if not specified the default key serde defined in the configuration will be used
valSerde - value serde used to send key-value pairs; if not specified the default value serde defined in the configuration will be used
topic - the topic name; cannot be null
storeSupplier - user defined state store supplier; cannot be null
- Returns:
- a GlobalKTable for the specified topic
-
globalTable
public <K,V> GlobalKTable<K,V> globalTable(org.apache.kafka.common.serialization.Serde<K> keySerde, org.apache.kafka.common.serialization.Serde<V> valSerde, java.lang.String topic, java.lang.String queryableStoreName)
Deprecated. Create a GlobalKTable for the specified topic. The default TimestampExtractor as specified in the config is used. Input KeyValue pairs with null key will be dropped.
The resulting GlobalKTable will be materialized in a local KeyValueStore with the given queryableStoreName. However, no internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
To query the local KeyValueStore it must be obtained via KafkaStreams#store(...):

    KafkaStreams streams = ...
    ReadOnlyKeyValueStore<String, Long> localStore = streams.store(queryableStoreName, QueryableStoreTypes.<String, Long>keyValueStore());
    String key = "some-key";
    Long valueForKey = localStore.get(key);

Note that GlobalKTable always applies "auto.offset.reset" strategy "earliest" regardless of the specified value in StreamsConfig.
- Parameters:
keySerde - key serde used to send key-value pairs; if not specified the default key serde defined in the configuration will be used
valSerde - value serde used to send key-value pairs; if not specified the default value serde defined in the configuration will be used
topic - the topic name; cannot be null
queryableStoreName - the state store name; if null this is the equivalent of globalTable(Serde, Serde, String)
- Returns:
- a GlobalKTable for the specified topic
-
globalTable
public <K,V> GlobalKTable<K,V> globalTable(org.apache.kafka.common.serialization.Serde<K> keySerde, org.apache.kafka.common.serialization.Serde<V> valSerde, java.lang.String topic)
Deprecated. Create a GlobalKTable for the specified topic. The default key and value deserializers as specified in the config are used. Input records with null key will be dropped.
The resulting GlobalKTable will be materialized in a local KeyValueStore with an internal store name. Note that that store name may not be queryable through Interactive Queries. No internal changelog topic is created since the original input topic can be used for recovery (cf. methods of KGroupedStream and KGroupedTable that return a KTable).
Note that GlobalKTable always applies "auto.offset.reset" strategy "earliest" regardless of the specified value in StreamsConfig.
- Parameters:
keySerde - key serde used to send key-value pairs; if not specified the default key serde defined in the configuration will be used
valSerde - value serde used to send key-value pairs; if not specified the default value serde defined in the configuration will be used
topic - the topic name; cannot be null
- Returns:
- a GlobalKTable for the specified topic
-
newName
public java.lang.String newName(java.lang.String prefix)
Deprecated. This function is for internal usage only and should not be called. Create a unique processor name used for translation into the processor topology.
- Parameters:
prefix- processor name prefix- Returns:
- a new unique name
-
newStoreName
public java.lang.String newStoreName(java.lang.String prefix)
Deprecated. This function is for internal usage only and should not be called. Create a unique state store name.
- Parameters:
prefix- processor name prefix- Returns:
- a new unique name
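Since this whole class is deprecated in favor of StreamsBuilder, a short migration sketch may be useful. It is hedged: the topic and store names are assumptions, and the new-API call uses the Consumed/Materialized style introduced with StreamsBuilder in Kafka 1.0:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.Consumed;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

public class MigrationSketch {
    public static void main(String[] args) {
        // Old, deprecated style: positional serde/store-name overloads.
        KStreamBuilder oldBuilder = new KStreamBuilder();
        KTable<String, Long> oldTable = oldBuilder.table("input-topic", "store-name");

        // New style: serdes travel in Consumed, the store name in Materialized.
        StreamsBuilder builder = new StreamsBuilder();
        KTable<String, Long> newTable = builder.table(
                "input-topic",
                Consumed.with(Serdes.String(), Serdes.Long()),
                Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("store-name"));
    }
}
```

Note that StreamsBuilder.build() returns a Topology to pass to the KafkaStreams constructor, whereas KStreamBuilder was itself passed to KafkaStreams.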
-
-