Class TopologyBuilder
- java.lang.Object
-
- org.apache.kafka.streams.processor.TopologyBuilder
-
- Direct Known Subclasses:
KStreamBuilder
@Deprecated
public class TopologyBuilder
extends java.lang.Object

Deprecated. Use Topology instead.

A component that is used to build a ProcessorTopology. A topology contains an acyclic graph of sources, processors, and sinks. A source is a node in the graph that consumes one or more Kafka topics and forwards them to its child nodes. A processor is a node in the graph that receives input records from upstream nodes, processes those records, and optionally forwards new records to one or all of its children. Finally, a sink is a node in the graph that receives records from upstream nodes and writes them to a Kafka topic. This builder allows you to construct an acyclic graph of these nodes, and the builder is then passed into a new KafkaStreams instance that will then begin consuming, processing, and producing records.
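As a hedged sketch of the graph described above (topic names, node names, and UpperCaseProcessor are illustrative assumptions, not part of this API), a minimal source, processor, and sink might be wired up like this:

```java
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.processor.TopologyBuilder;

// Minimal acyclic graph: source -> processor -> sink.
// "input-topic", "output-topic", and UpperCaseProcessor (a hypothetical
// Processor implementation) are illustrative assumptions.
TopologyBuilder builder = new TopologyBuilder();
builder.addSource("Source", "input-topic")
       .addProcessor("Process", UpperCaseProcessor::new, "Source")
       .addSink("Sink", "output-topic", "Process");

// The builder is then passed into a new KafkaStreams instance, which
// begins consuming, processing, and producing records:
// KafkaStreams streams = new KafkaStreams(builder, config);
// streams.start();
```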
-
-
Nested Class Summary
Nested Classes
- static class TopologyBuilder.AutoOffsetReset
  Deprecated.
- static class TopologyBuilder.TopicsInfo
  Deprecated. NOTE: this class is not needed by developers working with the processor APIs; it is used only for internal functionality.
-
Field Summary
Fields
- org.apache.kafka.streams.processor.internals.InternalTopologyBuilder internalTopologyBuilder
  Deprecated. NOTE: this member is not needed by developers working with the processor APIs; it is used only for internal functionality.
-
Constructor Summary
Constructors
- TopologyBuilder()
  Deprecated. Create a new builder.
-
Method Summary
All methods are deprecated.
- TopologyBuilder addGlobalStore(StateStoreSupplier<KeyValueStore> storeSupplier, String sourceName, Deserializer keyDeserializer, Deserializer valueDeserializer, String topic, String processorName, ProcessorSupplier stateUpdateSupplier)
  Adds a global StateStore to the topology.
- TopologyBuilder addGlobalStore(StateStoreSupplier<KeyValueStore> storeSupplier, String sourceName, TimestampExtractor timestampExtractor, Deserializer keyDeserializer, Deserializer valueDeserializer, String topic, String processorName, ProcessorSupplier stateUpdateSupplier)
  Adds a global StateStore to the topology.
- TopologyBuilder addInternalTopic(String topicName)
  Adds an internal topic. NOTE: not needed by developers working with the processor APIs; used only for the high-level DSL parsing functionality.
- TopologyBuilder addProcessor(String name, ProcessorSupplier supplier, String... predecessorNames)
  Add a new processor node that receives and processes records output by one or more predecessor source or processor nodes.
- TopologyBuilder addSink(String name, String topic, String... predecessorNames)
  Add a new sink that forwards records from predecessor nodes (processors and/or sources) to the named Kafka topic.
- <K,V> TopologyBuilder addSink(String name, String topic, Serializer<K> keySerializer, Serializer<V> valSerializer, StreamPartitioner<? super K,? super V> partitioner, String... predecessorNames)
  Add a new sink that forwards records from predecessor nodes (processors and/or sources) to the named Kafka topic.
- TopologyBuilder addSink(String name, String topic, Serializer keySerializer, Serializer valSerializer, String... predecessorNames)
  Add a new sink that forwards records from predecessor nodes (processors and/or sources) to the named Kafka topic.
- TopologyBuilder addSink(String name, String topic, StreamPartitioner partitioner, String... predecessorNames)
  Add a new sink that forwards records from predecessor nodes (processors and/or sources) to the named Kafka topic, using the supplied partitioner.
- TopologyBuilder addSource(String name, String... topics)
  Add a new source that consumes the named topics and forwards the records to child processor and/or sink nodes.
- TopologyBuilder addSource(String name, Pattern topicPattern)
  Add a new source that consumes from topics matching the given pattern and forwards the records to child processor and/or sink nodes.
- TopologyBuilder addSource(String name, Deserializer keyDeserializer, Deserializer valDeserializer, String... topics)
  Add a new source that consumes the named topics and forwards the records to child processor and/or sink nodes.
- TopologyBuilder addSource(String name, Deserializer keyDeserializer, Deserializer valDeserializer, Pattern topicPattern)
  Add a new source that consumes from topics matching the given pattern and forwards the records to child processor and/or sink nodes.
- TopologyBuilder addSource(TimestampExtractor timestampExtractor, String name, String... topics)
  Add a new source that consumes the named topics and forwards the records to child processor and/or sink nodes.
- TopologyBuilder addSource(TimestampExtractor timestampExtractor, String name, Pattern topicPattern)
  Add a new source that consumes from topics matching the given pattern and forwards the records to child processor and/or sink nodes.
- TopologyBuilder addSource(TopologyBuilder.AutoOffsetReset offsetReset, String name, String... topics)
  Add a new source that consumes the named topics and forwards the records to child processor and/or sink nodes.
- TopologyBuilder addSource(TopologyBuilder.AutoOffsetReset offsetReset, String name, Pattern topicPattern)
  Add a new source that consumes from topics matching the given pattern and forwards the records to child processor and/or sink nodes.
- TopologyBuilder addSource(TopologyBuilder.AutoOffsetReset offsetReset, String name, Deserializer keyDeserializer, Deserializer valDeserializer, Pattern topicPattern)
  Add a new source that consumes from topics matching the given pattern and forwards the records to child processor and/or sink nodes.
- TopologyBuilder addSource(TopologyBuilder.AutoOffsetReset offsetReset, String name, TimestampExtractor timestampExtractor, Deserializer keyDeserializer, Deserializer valDeserializer, String... topics)
  Add a new source that consumes the named topics and forwards the records to child processor and/or sink nodes.
- TopologyBuilder addSource(TopologyBuilder.AutoOffsetReset offsetReset, String name, TimestampExtractor timestampExtractor, Deserializer keyDeserializer, Deserializer valDeserializer, Pattern topicPattern)
  Add a new source that consumes from topics matching the given pattern and forwards the records to child processor and/or sink nodes.
- TopologyBuilder addSource(TopologyBuilder.AutoOffsetReset offsetReset, TimestampExtractor timestampExtractor, String name, String... topics)
  Add a new source that consumes the named topics and forwards the records to child processor and/or sink nodes.
- TopologyBuilder addSource(TopologyBuilder.AutoOffsetReset offsetReset, TimestampExtractor timestampExtractor, String name, Pattern topicPattern)
  Add a new source that consumes from topics matching the given pattern and forwards the records to child processor and/or sink nodes.
- TopologyBuilder addStateStore(StateStoreSupplier supplier, String... processorNames)
  Adds a state store.
- org.apache.kafka.streams.processor.internals.ProcessorTopology build(Integer topicGroupId)
  Build the topology for the specified topic group.
- org.apache.kafka.streams.processor.internals.ProcessorTopology buildGlobalStateTopology()
  Builds the topology for any global state stores. NOTE: not needed by developers working with the processor APIs; used only for the high-level DSL parsing functionality.
- TopologyBuilder connectProcessorAndStateStores(String processorName, String... stateStoreNames)
  Connects the processor and the state stores.
- TopologyBuilder connectProcessors(String... processorNames)
  Connects a list of processors.
- protected TopologyBuilder connectSourceStoreAndTopic(String sourceStoreName, String topic)
  Used only by KStreamBuilder: when adding a KTable from a source topic, the topic must be added as the changelog of the KTable's materialized state store.
- Collection<Set<String>> copartitionGroups()
  Returns the copartition groups.
- TopologyBuilder copartitionSources(Collection<String> sourceNodes)
  Asserts that the streams of the specified source nodes must be copartitioned.
- Pattern earliestResetTopicsPattern()
  Get the Pattern matching all topics that must start reading from the earliest available offset. NOTE: not needed by developers working with the processor APIs; used only for the high-level DSL parsing functionality.
- Map<String,StateStore> globalStateStores()
  Get any global StateStores that are part of the topology. NOTE: not needed by developers working with the processor APIs; used only for the high-level DSL parsing functionality.
- Pattern latestResetTopicsPattern()
  Get the Pattern matching all topics that must start reading from the latest available offset. NOTE: not needed by developers working with the processor APIs; used only for the high-level DSL parsing functionality.
- Map<Integer,Set<String>> nodeGroups()
  Returns the map of node groups keyed by the topic group id.
- TopologyBuilder setApplicationIdAndInternalStream(String applicationId, String internalStream, String internalStreamCompacted)
  Not part of the public API and should never be used by a developer.
- Pattern sourceTopicPattern()
  NOTE: not needed by developers working with the processor APIs; used only for the high-level DSL parsing functionality.
- Map<String,List<String>> stateStoreNameToSourceTopics()
  NOTE: not needed by developers working with the processor APIs; used only for the high-level DSL parsing functionality.
- org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.SubscriptionUpdates subscriptionUpdates()
  NOTE: not needed by developers working with the processor APIs; used only for the high-level DSL parsing functionality.
- Map<Integer,TopologyBuilder.TopicsInfo> topicGroups()
  Returns the map of topic groups keyed by the group id.
- void updateSubscriptions(org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.SubscriptionUpdates subscriptionUpdates, String threadId)
  NOTE: not needed by developers working with the processor APIs; used only for the high-level DSL parsing functionality.
-
-
-
Method Detail
-
setApplicationIdAndInternalStream
public final TopologyBuilder setApplicationIdAndInternalStream(java.lang.String applicationId, java.lang.String internalStream, java.lang.String internalStreamCompacted)
Deprecated. This method is not part of the public API and should never be used by a developer.
-
addSource
public final TopologyBuilder addSource(java.lang.String name, java.lang.String... topics)
Deprecated. Add a new source that consumes the named topics and forwards the records to child processor and/or sink nodes. The source will use the default key deserializer and default value deserializer specified in the stream configuration. The default TimestampExtractor as specified in the config is used.
- Parameters:
name - the unique name of the source, used to reference this node when adding processor children
topics - the name of one or more Kafka topics that this source is to consume
- Returns:
- this builder instance so methods can be chained together; never null
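Because every add* method returns the builder itself, calls can be chained. A sketch (topic and node names are illustrative assumptions):

```java
import org.apache.kafka.streams.processor.TopologyBuilder;

// One source node may consume several topics. The node name
// ("OrdersSource") is the handle child nodes use to reference it.
TopologyBuilder builder = new TopologyBuilder()
        .addSource("OrdersSource", "orders-eu", "orders-us")
        .addSink("OrdersSink", "orders-merged", "OrdersSource");
```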
-
addSource
public final TopologyBuilder addSource(TopologyBuilder.AutoOffsetReset offsetReset, java.lang.String name, java.lang.String... topics)
Deprecated. Add a new source that consumes the named topics and forwards the records to child processor and/or sink nodes. The source will use the default key deserializer and default value deserializer specified in the stream configuration. The default TimestampExtractor as specified in the config is used.
- Parameters:
offsetReset - the auto offset reset policy to use for this source if no committed offsets are found; acceptable values are earliest or latest
name - the unique name of the source, used to reference this node when adding processor children
topics - the name of one or more Kafka topics that this source is to consume
- Returns:
- this builder instance so methods can be chained together; never null
-
addSource
public final TopologyBuilder addSource(TimestampExtractor timestampExtractor, java.lang.String name, java.lang.String... topics)
Deprecated. Add a new source that consumes the named topics and forwards the records to child processor and/or sink nodes. The source will use the default key deserializer and default value deserializer specified in the stream configuration.
- Parameters:
timestampExtractor - the stateless timestamp extractor used for this source; if not specified, the default extractor defined in the configs will be used
name - the unique name of the source, used to reference this node when adding processor children
topics - the name of one or more Kafka topics that this source is to consume
- Returns:
- this builder instance so methods can be chained together; never null
-
addSource
public final TopologyBuilder addSource(TopologyBuilder.AutoOffsetReset offsetReset, TimestampExtractor timestampExtractor, java.lang.String name, java.lang.String... topics)
Deprecated. Add a new source that consumes the named topics and forwards the records to child processor and/or sink nodes. The source will use the default key deserializer and default value deserializer specified in the stream configuration.
- Parameters:
offsetReset - the auto offset reset policy to use for this source if no committed offsets are found; acceptable values are earliest or latest
timestampExtractor - the stateless timestamp extractor used for this source; if not specified, the default extractor defined in the configs will be used
name - the unique name of the source, used to reference this node when adding processor children
topics - the name of one or more Kafka topics that this source is to consume
- Returns:
- this builder instance so methods can be chained together; never null
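A hedged sketch of a custom TimestampExtractor wired into this overload (the two-argument extract signature shown matches the 1.0-era interface and should be checked against your Kafka version; names are illustrative):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

// A stateless extractor: use the record's embedded timestamp, and fall
// back to the previous timestamp when the record carries none
// (a negative timestamp conventionally means "no timestamp").
public class FallbackTimestampExtractor implements TimestampExtractor {
    @Override
    public long extract(ConsumerRecord<Object, Object> record, long previousTimestamp) {
        long ts = record.timestamp();
        return ts >= 0 ? ts : previousTimestamp;
    }
}

// Wiring it into a source that also resets to the earliest offset
// when no committed offsets are found:
// builder.addSource(TopologyBuilder.AutoOffsetReset.EARLIEST,
//                   new FallbackTimestampExtractor(),
//                   "Source", "input-topic");
```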
-
addSource
public final TopologyBuilder addSource(java.lang.String name, java.util.regex.Pattern topicPattern)
Deprecated. Add a new source that consumes from topics matching the given pattern and forwards the records to child processor and/or sink nodes. The source will use the default key deserializer and default value deserializer specified in the stream configuration. The default TimestampExtractor as specified in the config is used.
- Parameters:
name - the unique name of the source, used to reference this node when adding processor children
topicPattern - regular expression pattern to match the Kafka topics that this source is to consume
- Returns:
- this builder instance so methods can be chained together; never null
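The topicPattern argument is an ordinary java.util.regex.Pattern, so standard regex semantics decide which topic names the source subscribes to. A stdlib-only sketch (topic names are illustrative) of which topics a pattern would match:

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class TopicPatternDemo {
    public static void main(String[] args) {
        // The same pattern you would pass to addSource(name, topicPattern).
        Pattern topicPattern = Pattern.compile("orders-.*");

        List<String> topics = List.of("orders-eu", "orders-us", "payments-eu");
        List<String> matched = topics.stream()
                .filter(t -> topicPattern.matcher(t).matches())
                .collect(Collectors.toList());

        // Only the "orders-*" topics would be consumed by this source.
        System.out.println(matched); // [orders-eu, orders-us]
    }
}
```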
-
addSource
public final TopologyBuilder addSource(TopologyBuilder.AutoOffsetReset offsetReset, java.lang.String name, java.util.regex.Pattern topicPattern)
Deprecated. Add a new source that consumes from topics matching the given pattern and forwards the records to child processor and/or sink nodes. The source will use the default key deserializer and default value deserializer specified in the stream configuration. The default TimestampExtractor as specified in the config is used.
- Parameters:
offsetReset - the auto offset reset policy to use for this source if no committed offsets are found; acceptable values are earliest or latest
name - the unique name of the source, used to reference this node when adding processor children
topicPattern - regular expression pattern to match the Kafka topics that this source is to consume
- Returns:
- this builder instance so methods can be chained together; never null
-
addSource
public final TopologyBuilder addSource(TimestampExtractor timestampExtractor, java.lang.String name, java.util.regex.Pattern topicPattern)
Deprecated. Add a new source that consumes from topics matching the given pattern and forwards the records to child processor and/or sink nodes. The source will use the default key deserializer and default value deserializer specified in the stream configuration.
- Parameters:
timestampExtractor - the stateless timestamp extractor used for this source; if not specified, the default extractor defined in the configs will be used
name - the unique name of the source, used to reference this node when adding processor children
topicPattern - regular expression pattern to match the Kafka topics that this source is to consume
- Returns:
- this builder instance so methods can be chained together; never null
-
addSource
public final TopologyBuilder addSource(TopologyBuilder.AutoOffsetReset offsetReset, TimestampExtractor timestampExtractor, java.lang.String name, java.util.regex.Pattern topicPattern)
Deprecated. Add a new source that consumes from topics matching the given pattern and forwards the records to child processor and/or sink nodes. The source will use the default key deserializer and default value deserializer specified in the stream configuration.
- Parameters:
offsetReset - the auto offset reset policy to use for this source if no committed offsets are found; acceptable values are earliest or latest
timestampExtractor - the stateless timestamp extractor used for this source; if not specified, the default extractor defined in the configs will be used
name - the unique name of the source, used to reference this node when adding processor children
topicPattern - regular expression pattern to match the Kafka topics that this source is to consume
- Returns:
- this builder instance so methods can be chained together; never null
-
addSource
public final TopologyBuilder addSource(java.lang.String name, org.apache.kafka.common.serialization.Deserializer keyDeserializer, org.apache.kafka.common.serialization.Deserializer valDeserializer, java.lang.String... topics)
Deprecated. Add a new source that consumes the named topics and forwards the records to child processor and/or sink nodes. The source will use the specified key and value deserializers. The default TimestampExtractor as specified in the config is used.
- Parameters:
name - the unique name of the source, used to reference this node when adding processor children
keyDeserializer - key deserializer used to read this source; if not specified, the default key deserializer defined in the configs will be used
valDeserializer - value deserializer used to read this source; if not specified, the default value deserializer defined in the configs will be used
topics - the name of one or more Kafka topics that this source is to consume
- Returns:
- this builder instance so methods can be chained together; never null
- Throws:
TopologyBuilderException - if a processor with that name has already been added, or if the topics have already been registered by another source
-
addSource
public final TopologyBuilder addSource(TopologyBuilder.AutoOffsetReset offsetReset, java.lang.String name, TimestampExtractor timestampExtractor, org.apache.kafka.common.serialization.Deserializer keyDeserializer, org.apache.kafka.common.serialization.Deserializer valDeserializer, java.lang.String... topics)
Deprecated. Add a new source that consumes the named topics and forwards the records to child processor and/or sink nodes. The source will use the specified key and value deserializers.
- Parameters:
offsetReset - the auto offset reset policy to use for this stream if no committed offsets are found; acceptable values are earliest or latest
name - the unique name of the source, used to reference this node when adding processor children
timestampExtractor - the stateless timestamp extractor used for this source; if not specified, the default extractor defined in the configs will be used
keyDeserializer - key deserializer used to read this source; if not specified, the default key deserializer defined in the configs will be used
valDeserializer - value deserializer used to read this source; if not specified, the default value deserializer defined in the configs will be used
topics - the name of one or more Kafka topics that this source is to consume
- Returns:
- this builder instance so methods can be chained together; never null
- Throws:
TopologyBuilderException - if a processor with that name has already been added, or if the topics have already been registered by another source
-
addGlobalStore
public TopologyBuilder addGlobalStore(StateStoreSupplier<KeyValueStore> storeSupplier, java.lang.String sourceName, org.apache.kafka.common.serialization.Deserializer keyDeserializer, org.apache.kafka.common.serialization.Deserializer valueDeserializer, java.lang.String topic, java.lang.String processorName, ProcessorSupplier stateUpdateSupplier)
Deprecated. Adds a global StateStore to the topology. The StateStore sources its data from all partitions of the provided input topic. There will be exactly one instance of this StateStore per Kafka Streams instance.

A SourceNode with the provided sourceName will be added to consume the data arriving from the partitions of the input topic.

The provided ProcessorSupplier will be used to create a ProcessorNode that will receive all records forwarded from the SourceNode. This ProcessorNode should be used to keep the StateStore up-to-date. The default TimestampExtractor as specified in the config is used.
- Parameters:
storeSupplier - user-defined state store supplier
sourceName - name of the SourceNode that will be automatically added
keyDeserializer - the Deserializer to deserialize keys with
valueDeserializer - the Deserializer to deserialize values with
topic - the topic to source the data from
processorName - the name of the ProcessorSupplier
stateUpdateSupplier - the instance of ProcessorSupplier
- Returns:
- this builder instance so methods can be chained together; never null
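A hedged sketch of wiring a global store (the Stores factory chain matches the same-era API but should be verified against your Kafka version; topic and node names, and the GlobalStoreUpdater processor supplier, are illustrative assumptions):

```java
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.streams.processor.StateStoreSupplier;
import org.apache.kafka.streams.processor.TopologyBuilder;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.Stores;

TopologyBuilder builder = new TopologyBuilder();

// An in-memory key-value store supplier (assumed same-era Stores API).
StateStoreSupplier<KeyValueStore> storeSupplier = Stores.create("global-store")
        .withStringKeys()
        .withStringValues()
        .inMemory()
        .build();

// GlobalStoreUpdater is a hypothetical ProcessorSupplier whose Processor
// writes every incoming record into the store, keeping it up-to-date.
builder.addGlobalStore(
        storeSupplier,
        "GlobalSource",               // SourceNode added automatically
        new StringDeserializer(),     // key deserializer
        new StringDeserializer(),     // value deserializer
        "config-topic",               // consumed from all partitions
        "GlobalProcessor",            // processor node name
        GlobalStoreUpdater::new);
```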
-
addGlobalStore
public TopologyBuilder addGlobalStore(StateStoreSupplier<KeyValueStore> storeSupplier, java.lang.String sourceName, TimestampExtractor timestampExtractor, org.apache.kafka.common.serialization.Deserializer keyDeserializer, org.apache.kafka.common.serialization.Deserializer valueDeserializer, java.lang.String topic, java.lang.String processorName, ProcessorSupplier stateUpdateSupplier)
Deprecated. Adds a global StateStore to the topology. The StateStore sources its data from all partitions of the provided input topic. There will be exactly one instance of this StateStore per Kafka Streams instance.

A SourceNode with the provided sourceName will be added to consume the data arriving from the partitions of the input topic.

The provided ProcessorSupplier will be used to create a ProcessorNode that will receive all records forwarded from the SourceNode. This ProcessorNode should be used to keep the StateStore up-to-date.
- Parameters:
storeSupplier - user-defined state store supplier
sourceName - name of the SourceNode that will be automatically added
timestampExtractor - the stateless timestamp extractor used for this source; if not specified, the default extractor defined in the configs will be used
keyDeserializer - the Deserializer to deserialize keys with
valueDeserializer - the Deserializer to deserialize values with
topic - the topic to source the data from
processorName - the name of the ProcessorSupplier
stateUpdateSupplier - the instance of ProcessorSupplier
- Returns:
- this builder instance so methods can be chained together; never null
-
addSource
public final TopologyBuilder addSource(java.lang.String name, org.apache.kafka.common.serialization.Deserializer keyDeserializer, org.apache.kafka.common.serialization.Deserializer valDeserializer, java.util.regex.Pattern topicPattern)
Deprecated. Add a new source that consumes from topics matching the given pattern and forwards the records to child processor and/or sink nodes. The source will use the specified key and value deserializers. The provided deserializers will be used for all matched topics, so care should be taken to specify patterns only for topics that share the same key-value data format. The default TimestampExtractor as specified in the config is used.
- Parameters:
name - the unique name of the source, used to reference this node when adding processor children
keyDeserializer - key deserializer used to read this source; if not specified, the default key deserializer defined in the configs will be used
valDeserializer - value deserializer used to read this source; if not specified, the default value deserializer defined in the configs will be used
topicPattern - regular expression pattern to match the Kafka topics that this source is to consume
- Returns:
- this builder instance so methods can be chained together; never null
- Throws:
TopologyBuilderException - if a processor with that name has already been added, or if the topics have already been registered by name
-
addSource
public final TopologyBuilder addSource(TopologyBuilder.AutoOffsetReset offsetReset, java.lang.String name, TimestampExtractor timestampExtractor, org.apache.kafka.common.serialization.Deserializer keyDeserializer, org.apache.kafka.common.serialization.Deserializer valDeserializer, java.util.regex.Pattern topicPattern)
Deprecated. Add a new source that consumes from topics matching the given pattern and forwards the records to child processor and/or sink nodes. The source will use the specified key and value deserializers. The provided deserializers will be used for all matched topics, so care should be taken to specify patterns only for topics that share the same key-value data format.
- Parameters:
offsetReset - the auto offset reset policy to use for this stream if no committed offsets are found; acceptable values are earliest or latest
name - the unique name of the source, used to reference this node when adding processor children
timestampExtractor - the stateless timestamp extractor used for this source; if not specified, the default extractor defined in the configs will be used
keyDeserializer - key deserializer used to read this source; if not specified, the default key deserializer defined in the configs will be used
valDeserializer - value deserializer used to read this source; if not specified, the default value deserializer defined in the configs will be used
topicPattern - regular expression pattern to match the Kafka topics that this source is to consume
- Returns:
- this builder instance so methods can be chained together; never null
- Throws:
TopologyBuilderException - if a processor with that name has already been added, or if the topics have already been registered by name
-
addSource
public final TopologyBuilder addSource(TopologyBuilder.AutoOffsetReset offsetReset, java.lang.String name, org.apache.kafka.common.serialization.Deserializer keyDeserializer, org.apache.kafka.common.serialization.Deserializer valDeserializer, java.util.regex.Pattern topicPattern)
Deprecated. Add a new source that consumes from topics matching the given pattern and forwards the records to child processor and/or sink nodes. The source will use the specified key and value deserializers. The provided deserializers will be used for all matched topics, so care should be taken to specify patterns only for topics that share the same key-value data format.
- Parameters:
offsetReset - the auto offset reset policy to use for this stream if no committed offsets are found; acceptable values are earliest or latest
name - the unique name of the source, used to reference this node when adding processor children
keyDeserializer - key deserializer used to read this source; if not specified, the default key deserializer defined in the configs will be used
valDeserializer - value deserializer used to read this source; if not specified, the default value deserializer defined in the configs will be used
topicPattern - regular expression pattern to match the Kafka topics that this source is to consume
- Returns:
- this builder instance so methods can be chained together; never null
- Throws:
TopologyBuilderException - if a processor with that name has already been added, or if the topics have already been registered by name
-
addSink
public final TopologyBuilder addSink(java.lang.String name, java.lang.String topic, java.lang.String... predecessorNames)
Deprecated. Add a new sink that forwards records from predecessor nodes (processors and/or sources) to the named Kafka topic. The sink will use the default key serializer and default value serializer specified in the stream configuration.
- Parameters:
name - the unique name of the sink
topic - the name of the Kafka topic to which this sink should write its records
predecessorNames - the name of one or more source or processor nodes whose output records this sink should consume and write to its topic
- Returns:
- this builder instance so methods can be chained together; never null
- See Also:
addSink(String, String, StreamPartitioner, String...), addSink(String, String, Serializer, Serializer, String...), addSink(String, String, Serializer, Serializer, StreamPartitioner, String...)
-
addSink
public final TopologyBuilder addSink(java.lang.String name, java.lang.String topic, StreamPartitioner partitioner, java.lang.String... predecessorNames)
Deprecated. Add a new sink that forwards records from predecessor nodes (processors and/or sources) to the named Kafka topic, using the supplied partitioner. The sink will use the default key serializer and default value serializer specified in the stream configuration.

The sink will also use the specified StreamPartitioner to determine how records are distributed among the named Kafka topic's partitions. Such control is often useful with topologies that use state stores in their processors. In most other cases, however, a partitioner need not be specified and Kafka will automatically distribute records among partitions using Kafka's default partitioning logic.
- Parameters:
name - the unique name of the sink
topic - the name of the Kafka topic to which this sink should write its records
partitioner - the function that should be used to determine the partition for each record processed by the sink
predecessorNames - the name of one or more source or processor nodes whose output records this sink should consume and write to its topic
- Returns:
- this builder instance so methods can be chained together; never null
- See Also:
addSink(String, String, String...), addSink(String, String, Serializer, Serializer, String...), addSink(String, String, Serializer, Serializer, StreamPartitioner, String...)
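A hedged sketch of a custom partitioner for this overload (node and topic names are illustrative; the three-argument partition(key, value, numPartitions) shape matches the same-era StreamPartitioner interface and should be checked against your Kafka version):

```java
import org.apache.kafka.streams.processor.StreamPartitioner;

// Route records with equal keys to the same partition so a downstream
// consumer of "output-topic" sees each key's records in order.
StreamPartitioner<String, String> byKeyHash =
        (key, value, numPartitions) -> Math.abs(key.hashCode()) % numPartitions;

// builder.addSink("Sink", "output-topic", byKeyHash, "Process");
```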
-
addSink
public final TopologyBuilder addSink(java.lang.String name, java.lang.String topic, org.apache.kafka.common.serialization.Serializer keySerializer, org.apache.kafka.common.serialization.Serializer valSerializer, java.lang.String... predecessorNames)
Deprecated. Add a new sink that forwards records from predecessor nodes (processors and/or sources) to the named Kafka topic. The sink will use the specified key and value serializers.
- Parameters:
name - the unique name of the sink
topic - the name of the Kafka topic to which this sink should write its records
keySerializer - the key serializer used when writing records to the topic; may be null if the sink should use the default key serializer specified in the stream configuration
valSerializer - the value serializer used when writing records to the topic; may be null if the sink should use the default value serializer specified in the stream configuration
predecessorNames - the name of one or more source or processor nodes whose output records this sink should consume and write to its topic
- Returns:
- this builder instance so methods can be chained together; never null
- See Also:
addSink(String, String, String...), addSink(String, String, StreamPartitioner, String...), addSink(String, String, Serializer, Serializer, StreamPartitioner, String...)
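A per-sink serializer's job is to turn the key or value object into the bytes written to the Kafka topic. A minimal self-contained sketch (plain Java; MiniSerializer is a hypothetical stand-in for org.apache.kafka.common.serialization.Serializer, whose core method is serialize(String topic, T data)):

```java
import java.nio.charset.StandardCharsets;

public class SerializerSketch {
    // Hypothetical stand-in for org.apache.kafka.common.serialization.Serializer.
    interface MiniSerializer<T> {
        byte[] serialize(String topic, T data);
    }

    // A UTF-8 String serializer: this is roughly what Kafka's built-in
    // StringSerializer does for each record the sink writes out.
    public static final MiniSerializer<String> STRING_SERIALIZER =
        (topic, data) -> data == null ? null : data.getBytes(StandardCharsets.UTF_8);

    public static void main(String[] args) {
        byte[] bytes = STRING_SERIALIZER.serialize("sink-topic", "hello");
        System.out.println(bytes.length); // 5
    }
}
```

Passing null for keySerializer or valSerializer in addSink simply defers to the defaults configured for the whole application, so per-sink serializers are only needed when one sink's topic uses a different wire format.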
-
addSink
public final <K,V> TopologyBuilder addSink(java.lang.String name, java.lang.String topic, org.apache.kafka.common.serialization.Serializer<K> keySerializer, org.apache.kafka.common.serialization.Serializer<V> valSerializer, StreamPartitioner<? super K,? super V> partitioner, java.lang.String... predecessorNames)
Deprecated. Add a new sink that forwards records from predecessor nodes (processors and/or sources) to the named Kafka topic. The sink will use the specified key and value serializers, and the supplied partitioner.
- Parameters:
name - the unique name of the sink
topic - the name of the Kafka topic to which this sink should write its records
keySerializer - the key serializer used when writing records to the topic; may be null if the sink should use the default key serializer specified in the stream configuration
valSerializer - the value serializer used when writing records to the topic; may be null if the sink should use the default value serializer specified in the stream configuration
partitioner - the function that should be used to determine the partition for each record processed by the sink
predecessorNames - the names of one or more source or processor nodes whose output records this sink should consume and write to its topic
- Returns:
- this builder instance so methods can be chained together; never null
- Throws:
TopologyBuilderException - if a predecessor has not yet been added, or if this sink's name is equal to a predecessor's name
- See Also:
addSink(String, String, String...), addSink(String, String, StreamPartitioner, String...), addSink(String, String, Serializer, Serializer, String...)
-
addProcessor
public final TopologyBuilder addProcessor(java.lang.String name, ProcessorSupplier supplier, java.lang.String... predecessorNames)
Deprecated. Add a new processor node that receives and processes records output by one or more predecessor source or processor nodes. Any new record output by this processor will be forwarded to its child processor or sink nodes.
- Parameters:
name - the unique name of the processor node
supplier - the supplier used to obtain this node's Processor instance
predecessorNames - the names of one or more source or processor nodes whose output records this processor should receive and process
- Returns:
- this builder instance so methods can be chained together; never null
- Throws:
TopologyBuilderException - if a predecessor has not yet been added, or if this processor's name is equal to a predecessor's name
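A processor node receives records from its predecessors, does some work, and forwards any results downstream. A minimal self-contained sketch of that flow (plain Java; MiniProcessor is a hypothetical stand-in for the Processor API, which in reality forwards records through a ProcessorContext rather than a direct callback):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiConsumer;

public class ProcessorSketch {
    // Hypothetical stand-in for org.apache.kafka.streams.processor.Processor;
    // the real API forwards via ProcessorContext.forward(...) instead of a
    // downstream callback parameter.
    interface MiniProcessor<K, V> {
        void process(K key, V value, BiConsumer<K, V> downstream);
    }

    // An example node: upper-cases every value and forwards it to children.
    static final MiniProcessor<String, String> UPPERCASE =
        (key, value, downstream) -> downstream.accept(key, value.toUpperCase());

    public static List<String> run(List<String> inputs) {
        List<String> sink = new ArrayList<>();
        for (String v : inputs) {
            // The lambda here plays the role of a child sink node.
            UPPERCASE.process("k", v, (k, out) -> sink.add(out));
        }
        return sink;
    }

    public static void main(String[] args) {
        System.out.println(run(List.of("a", "b"))); // [A, B]
    }
}
```

In the real API, addProcessor takes a ProcessorSupplier so that each stream task gets its own Processor instance; the supplier must therefore return a fresh instance on every call.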
-
addStateStore
public final TopologyBuilder addStateStore(StateStoreSupplier supplier, java.lang.String... processorNames)
Deprecated. Adds a state store.
- Parameters:
supplier - the supplier used to obtain this state store's StateStore instance
processorNames - the names of the processors that should be able to access the provided store
- Returns:
- this builder instance so methods can be chained together; never null
- Throws:
TopologyBuilderException - if the state store supplier has already been added
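At its simplest, what a StateStoreSupplier provides is a named key-value store that connected processors can read and write. A rough in-memory sketch (plain Java, no Kafka dependency; MiniKeyValueStore is a hypothetical stand-in for a real KeyValueStore, which additionally supports changelogging, restoration, and range queries):

```java
import java.util.HashMap;
import java.util.Map;

public class StateStoreSketch {
    // Hypothetical stand-in for a KeyValueStore obtained from a
    // StateStoreSupplier; real stores are also backed by a changelog
    // topic so their contents survive restarts and rebalances.
    static class MiniKeyValueStore<K, V> {
        private final String name;
        private final Map<K, V> entries = new HashMap<>();

        MiniKeyValueStore(String name) { this.name = name; }

        String name() { return name; } // stores are addressed by name
        void put(K key, V value) { entries.put(key, value); }
        V get(K key) { return entries.get(key); }
    }

    public static void main(String[] args) {
        // A processor connected to this store (by its name, via
        // addStateStore or connectProcessorAndStateStores) could keep
        // per-key running counts, for example.
        MiniKeyValueStore<String, Long> counts = new MiniKeyValueStore<>("counts");
        counts.put("user-42", 3L);
        System.out.println(counts.get("user-42")); // 3
    }
}
```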
-
connectProcessorAndStateStores
public final TopologyBuilder connectProcessorAndStateStores(java.lang.String processorName, java.lang.String... stateStoreNames)
Deprecated. Connects the processor and the state stores.
- Parameters:
processorName - the name of the processor
stateStoreNames - the names of the state stores that the processor uses
- Returns:
- this builder instance so methods can be chained together; never null
-
connectSourceStoreAndTopic
protected final TopologyBuilder connectSourceStoreAndTopic(java.lang.String sourceStoreName, java.lang.String topic)
Deprecated. This is used only by KStreamBuilder: when adding a KTable from a source topic, we need to add the topic as the KTable's materialized state store's changelog. NOTE: this function is not needed by developers working with the processor APIs; it is only used by the high-level DSL parsing functionality.
-
connectProcessors
public final TopologyBuilder connectProcessors(java.lang.String... processorNames)
Deprecated. Connects a list of processors. NOTE: this function is not needed by developers working with the processor APIs; it is only used by the high-level DSL parsing functionality.
- Parameters:
processorNames - the names of the processors
- Returns:
- this builder instance so methods can be chained together; never null
- Throws:
TopologyBuilderException - if fewer than two processors are specified, or if one of the processors has not yet been added
-
addInternalTopic
public final TopologyBuilder addInternalTopic(java.lang.String topicName)
Deprecated. Adds an internal topic. NOTE: this function is not needed by developers working with the processor APIs; it is only used by the high-level DSL parsing functionality.
- Parameters:
topicName - the name of the topic
- Returns:
- this builder instance so methods can be chained together; never null
-
copartitionSources
public final TopologyBuilder copartitionSources(java.util.Collection<java.lang.String> sourceNodes)
Deprecated. Asserts that the streams of the specified source nodes must be copartitioned. NOTE: this function is not needed by developers working with the processor APIs; it is only used by the high-level DSL parsing functionality.
- Parameters:
sourceNodes - a set of source node names
- Returns:
- this builder instance so methods can be chained together; never null
-
nodeGroups
public java.util.Map<java.lang.Integer,java.util.Set<java.lang.String>> nodeGroups()
Deprecated. Returns the map of node groups keyed by the topic group id. NOTE: this function is not needed by developers working with the processor APIs; it is only used by the high-level DSL parsing functionality.
- Returns:
- groups of node names
-
build
public org.apache.kafka.streams.processor.internals.ProcessorTopology build(java.lang.Integer topicGroupId)
Deprecated. Build the topology for the specified topic group. This is called automatically when this builder is passed into the KafkaStreams(TopologyBuilder, org.apache.kafka.streams.StreamsConfig) constructor. NOTE: this function is not needed by developers working with the processor APIs; it is only used by the high-level DSL parsing functionality.
-
buildGlobalStateTopology
public org.apache.kafka.streams.processor.internals.ProcessorTopology buildGlobalStateTopology()
Deprecated. Builds the topology for any global state stores. NOTE: this function is not needed by developers working with the processor APIs; it is only used by the high-level DSL parsing functionality.
- Returns:
- ProcessorTopology
-
globalStateStores
public java.util.Map<java.lang.String,StateStore> globalStateStores()
Deprecated. Get any global StateStores that are part of the topology. NOTE: this function is not needed by developers working with the processor APIs; it is only used by the high-level DSL parsing functionality.
- Returns:
- a map containing all global StateStores
-
topicGroups
public java.util.Map<java.lang.Integer,TopologyBuilder.TopicsInfo> topicGroups()
Deprecated. Returns the map of topic groups keyed by the group id. A topic group is a group of topics in the same task. NOTE: this function is not needed by developers working with the processor APIs; it is only used by the high-level DSL parsing functionality.
- Returns:
- groups of topic names
-
earliestResetTopicsPattern
public java.util.regex.Pattern earliestResetTopicsPattern()
Deprecated. Get the Pattern matching all topics that need to start reading from the earliest available offset. NOTE: this function is not needed by developers working with the processor APIs; it is only used by the high-level DSL parsing functionality.
- Returns:
- the Pattern matching all topics that read from the earliest offset; never null
-
latestResetTopicsPattern
public java.util.regex.Pattern latestResetTopicsPattern()
Deprecated. Get the Pattern matching all topics that need to start reading from the latest available offset. NOTE: this function is not needed by developers working with the processor APIs; it is only used by the high-level DSL parsing functionality.
- Returns:
- the Pattern matching all topics that read from the latest offset; never null
-
stateStoreNameToSourceTopics
public java.util.Map<java.lang.String,java.util.List<java.lang.String>> stateStoreNameToSourceTopics()
Deprecated. NOTE: this function is not needed by developers working with the processor APIs; it is only used by the high-level DSL parsing functionality.
- Returns:
- a mapping from state store name to a list of its source topics
-
copartitionGroups
public java.util.Collection<java.util.Set<java.lang.String>> copartitionGroups()
Deprecated. Returns the copartition groups. A copartition group is a group of source topics that are required to be copartitioned. NOTE: this function is not needed by developers working with the processor APIs; it is only used by the high-level DSL parsing functionality.
- Returns:
- groups of topic names
-
subscriptionUpdates
public org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.SubscriptionUpdates subscriptionUpdates()
Deprecated. NOTE: this function is not needed by developers working with the processor APIs; it is only used by the high-level DSL parsing functionality.
-
sourceTopicPattern
public java.util.regex.Pattern sourceTopicPattern()
Deprecated. NOTE: this function is not needed by developers working with the processor APIs; it is only used by the high-level DSL parsing functionality.
-
updateSubscriptions
public void updateSubscriptions(org.apache.kafka.streams.processor.internals.StreamPartitionAssignor.SubscriptionUpdates subscriptionUpdates, java.lang.String threadId)
Deprecated. NOTE: this function is not needed by developers working with the processor APIs; it is only used by the high-level DSL parsing functionality.
-
-