Class KTableImpl<K,S,V>
java.lang.Object
  org.apache.kafka.streams.kstream.internals.AbstractStream<K,V>
    org.apache.kafka.streams.kstream.internals.KTableImpl<K,S,V>

Type Parameters:
  K - the key type
  S - the source's (parent's) value type
  V - the value type

All Implemented Interfaces:
  KTable<K,V>

public class KTableImpl<K,S,V> extends AbstractStream<K,V> implements KTable<K,V>

The implementation class of KTable.
-
-
Field Summary
-
Fields inherited from class org.apache.kafka.streams.kstream.internals.AbstractStream
builder, keySerde, name, streamsGraphNode, subTopologySourceNodes, valSerde
-
-
Constructor Summary
Constructors:

KTableImpl(java.lang.String name,
           org.apache.kafka.common.serialization.Serde<K> keySerde,
           org.apache.kafka.common.serialization.Serde<V> valSerde,
           java.util.Set<java.lang.String> subTopologySourceNodes,
           java.lang.String queryableStoreName,
           ProcessorSupplier<?,?> processorSupplier,
           StreamsGraphNode streamsGraphNode,
           InternalStreamsBuilder builder)
-
Method Summary
All Methods · Instance Methods · Concrete Methods · Deprecated Methods

void enableSendingOldValues()

KTable<K,V> filter(Predicate<? super K,? super V> predicate)
KTable<K,V> filter(Predicate<? super K,? super V> predicate, Named named)
KTable<K,V> filter(Predicate<? super K,? super V> predicate, Materialized<K,V,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
KTable<K,V> filter(Predicate<? super K,? super V> predicate, Named named, Materialized<K,V,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
  Create a new KTable that consists of all records of this KTable which satisfy the given predicate. The overloads without a Materialized argument use default serializers, deserializers, and state store; the overloads taking a Materialized use the key serde, value serde, and underlying materialized state storage configured in that instance.

KTable<K,V> filterNot(Predicate<? super K,? super V> predicate)
KTable<K,V> filterNot(Predicate<? super K,? super V> predicate, Named named)
KTable<K,V> filterNot(Predicate<? super K,? super V> predicate, Materialized<K,V,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
KTable<K,V> filterNot(Predicate<? super K,? super V> predicate, Named named, Materialized<K,V,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
  Create a new KTable that consists of all records of this KTable which do not satisfy the given predicate; serde and state-store configuration follows the same pattern as filter.

<K1,V1> KGroupedTable<K1,V1> groupBy(KeyValueMapper<? super K,? super V,KeyValue<K1,V1>> selector)
<K1,V1> KGroupedTable<K1,V1> groupBy(KeyValueMapper<? super K,? super V,KeyValue<K1,V1>> selector, Grouped<K1,V1> grouped)
<K1,V1> KGroupedTable<K1,V1> groupBy(KeyValueMapper<? super K,? super V,KeyValue<K1,V1>> selector, Serialized<K1,V1> serialized) (Deprecated)
  Re-groups the records of this KTable using the provided KeyValueMapper, with default serdes or with the Serdes specified by Grouped.

<V1,R> KTable<K,R> join(KTable<K,V1> other, ValueJoiner<? super V,? super V1,? extends R> joiner)
<V1,R> KTable<K,R> join(KTable<K,V1> other, ValueJoiner<? super V,? super V1,? extends R> joiner, Named named)
<VO,VR> KTable<K,VR> join(KTable<K,VO> other, ValueJoiner<? super V,? super VO,? extends VR> joiner, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
<VO,VR> KTable<K,VR> join(KTable<K,VO> other, ValueJoiner<? super V,? super VO,? extends VR> joiner, Named named, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
  Join records of this KTable with another KTable's records using a non-windowed inner equi-join.

<VR,KO,VO> KTable<K,VR> join(KTable<KO,VO> other, java.util.function.Function<V,KO> foreignKeyExtractor, ValueJoiner<V,VO,VR> joiner)
<VR,KO,VO> KTable<K,VR> join(KTable<KO,VO> other, java.util.function.Function<V,KO> foreignKeyExtractor, ValueJoiner<V,VO,VR> joiner, Named named)
<VR,KO,VO> KTable<K,VR> join(KTable<KO,VO> other, java.util.function.Function<V,KO> foreignKeyExtractor, ValueJoiner<V,VO,VR> joiner, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
<VR,KO,VO> KTable<K,VR> join(KTable<KO,VO> other, java.util.function.Function<V,KO> foreignKeyExtractor, ValueJoiner<V,VO,VR> joiner, Named named, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
  Join records of this KTable with another KTable using a non-windowed inner foreign-key join.

<V1,R> KTable<K,R> leftJoin(KTable<K,V1> other, ValueJoiner<? super V,? super V1,? extends R> joiner)
<V1,R> KTable<K,R> leftJoin(KTable<K,V1> other, ValueJoiner<? super V,? super V1,? extends R> joiner, Named named)
<VO,VR> KTable<K,VR> leftJoin(KTable<K,VO> other, ValueJoiner<? super V,? super VO,? extends VR> joiner, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
<VO,VR> KTable<K,VR> leftJoin(KTable<K,VO> other, ValueJoiner<? super V,? super VO,? extends VR> joiner, Named named, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
  Join records of this KTable (left input) with another KTable's (right input) records using a non-windowed left equi-join.

<VR,KO,VO> KTable<K,VR> leftJoin(KTable<KO,VO> other, java.util.function.Function<V,KO> foreignKeyExtractor, ValueJoiner<V,VO,VR> joiner)
<VR,KO,VO> KTable<K,VR> leftJoin(KTable<KO,VO> other, java.util.function.Function<V,KO> foreignKeyExtractor, ValueJoiner<V,VO,VR> joiner, Named named)
<VR,KO,VO> KTable<K,VR> leftJoin(KTable<KO,VO> other, java.util.function.Function<V,KO> foreignKeyExtractor, ValueJoiner<V,VO,VR> joiner, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
<VR,KO,VO> KTable<K,VR> leftJoin(KTable<KO,VO> other, java.util.function.Function<V,KO> foreignKeyExtractor, ValueJoiner<V,VO,VR> joiner, Named named, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
  Join records of this KTable with another KTable using a non-windowed left foreign-key join.

<VR> KTable<K,VR> mapValues(ValueMapper<? super V,? extends VR> mapper)
<VR> KTable<K,VR> mapValues(ValueMapper<? super V,? extends VR> mapper, Named named)
<VR> KTable<K,VR> mapValues(ValueMapper<? super V,? extends VR> mapper, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
<VR> KTable<K,VR> mapValues(ValueMapper<? super V,? extends VR> mapper, Named named, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
<VR> KTable<K,VR> mapValues(ValueMapperWithKey<? super K,? super V,? extends VR> mapper)
<VR> KTable<K,VR> mapValues(ValueMapperWithKey<? super K,? super V,? extends VR> mapper, Named named)
<VR> KTable<K,VR> mapValues(ValueMapperWithKey<? super K,? super V,? extends VR> mapper, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
<VR> KTable<K,VR> mapValues(ValueMapperWithKey<? super K,? super V,? extends VR> mapper, Named named, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
  Create a new KTable by transforming the value of each record in this KTable into a new value (with possibly a new type). Serde and state-store configuration follows the same pattern as filter.

<V1,R> KTable<K,R> outerJoin(KTable<K,V1> other, ValueJoiner<? super V,? super V1,? extends R> joiner)
<V1,R> KTable<K,R> outerJoin(KTable<K,V1> other, ValueJoiner<? super V,? super V1,? extends R> joiner, Named named)
<VO,VR> KTable<K,VR> outerJoin(KTable<K,VO> other, ValueJoiner<? super V,? super VO,? extends VR> joiner, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
<VO,VR> KTable<K,VR> outerJoin(KTable<K,VO> other, ValueJoiner<? super V,? super VO,? extends VR> joiner, Named named, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
  Join records of this KTable (left input) with another KTable's (right input) records using a non-windowed outer equi-join.

java.lang.String queryableStoreName()
  Get the name of the local state store that can be used to query this KTable.

KTable<K,V> suppress(Suppressed<? super K> suppressed)
  Suppress some updates from this changelog stream, determined by the supplied Suppressed configuration.

KStream<K,V> toStream()
KStream<K,V> toStream(Named named)
<K1> KStream<K1,V> toStream(KeyValueMapper<? super K,? super V,? extends K1> mapper)
<K1> KStream<K1,V> toStream(KeyValueMapper<? super K,? super V,? extends K1> mapper, Named named)
  Convert this changelog stream to a KStream, optionally using the given KeyValueMapper to select the new key.

<VR> KTable<K,VR> transformValues(ValueTransformerWithKeySupplier<? super K,? super V,? extends VR> transformerSupplier, java.lang.String... stateStoreNames)
<VR> KTable<K,VR> transformValues(ValueTransformerWithKeySupplier<? super K,? super V,? extends VR> transformerSupplier, Named named, java.lang.String... stateStoreNames)
<VR> KTable<K,VR> transformValues(ValueTransformerWithKeySupplier<? super K,? super V,? extends VR> transformerSupplier, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized, java.lang.String... stateStoreNames)
<VR> KTable<K,VR> transformValues(ValueTransformerWithKeySupplier<? super K,? super V,? extends VR> transformerSupplier, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized, Named named, java.lang.String... stateStoreNames)
  Create a new KTable by transforming the value of each record in this KTable into a new value (with possibly a new type). Serde and state-store configuration follows the same pattern as filter.

KTableValueGetterSupplier<K,V> valueGetterSupplier()
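To make the summary concrete, the sketch below wires a few of these operators together through the public KTable interface. It is illustrative only: the topic names ("user-scores", "doubled-scores") and the serdes are assumptions, not part of this class.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class KTablePipelineSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Hypothetical input topic "user-scores" with String keys and Long values.
        KTable<String, Long> scores = builder.table(
            "user-scores",
            Consumed.with(Serdes.String(), Serdes.Long()));

        scores
            .filter((user, score) -> score > 0)   // drop non-positive scores
            .mapValues(score -> score * 2)        // stateless value transform
            .toStream()                           // changelog -> record stream
            .to("doubled-scores", Produced.with(Serdes.String(), Serdes.Long()));

        // builder.build() yields the Topology to pass to new KafkaStreams(...).
    }
}
```

The chain returns a new KTable (or KStream) at each step; the intermediate tables are never mutated in place.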
Methods inherited from class org.apache.kafka.streams.kstream.internals.AbstractStream
internalTopologyBuilder, keySerde, valueSerde
-
-
-
-
Constructor Detail
-
KTableImpl
public KTableImpl(java.lang.String name,
                  org.apache.kafka.common.serialization.Serde<K> keySerde,
                  org.apache.kafka.common.serialization.Serde<V> valSerde,
                  java.util.Set<java.lang.String> subTopologySourceNodes,
                  java.lang.String queryableStoreName,
                  ProcessorSupplier<?,?> processorSupplier,
                  StreamsGraphNode streamsGraphNode,
                  InternalStreamsBuilder builder)
-
-
Method Detail
-
queryableStoreName
public java.lang.String queryableStoreName()
Description copied from interface: KTable
Get the name of the local state store that can be used to query this KTable.
Specified by:
  queryableStoreName in interface KTable<K,V>
Returns:
  the underlying state store name, or null if this KTable cannot be queried.
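A sketch of how the returned name typically feeds an interactive query, following the store-lookup pattern shown elsewhere on this page. The helper name, the String/Long types, and the surrounding class are assumptions for illustration:

```java
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.apache.kafka.streams.state.ValueAndTimestamp;

class StoreLookupSketch {
    // Returns the latest value for 'key', or null if this table is not
    // queryable. The key must be hosted by this instance's local store.
    static ValueAndTimestamp<Long> lookup(KafkaStreams streams,
                                          KTable<String, Long> table,
                                          String key) {
        String storeName = table.queryableStoreName();  // null => not queryable
        if (storeName == null) {
            return null;
        }
        ReadOnlyKeyValueStore<String, ValueAndTimestamp<Long>> store =
            streams.store(storeName, QueryableStoreTypes.timestampedKeyValueStore());
        return store.get(key);
    }
}
```

Checking for null first matters: only tables materialized with a queryable name return a non-null store name.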
-
filter
public KTable<K,V> filter(Predicate<? super K,? super V> predicate)
Description copied from interface: KTable
Create a new KTable that consists of all records of this KTable which satisfy the given predicate, with default serializers, deserializers, and state store. All records that do not satisfy the predicate are dropped. For each KTable update, the filter is evaluated based on the current update record and then an update record is produced for the result KTable. This is a stateless record-by-record operation.
Note that filter for a changelog stream works differently than record stream filters, because records with null values (so-called tombstone records) have delete semantics. Thus, for tombstones the provided filter predicate is not evaluated but the tombstone record is forwarded directly if required (i.e., if there is anything to be deleted). Furthermore, for each record that gets dropped (i.e., does not satisfy the given predicate) a tombstone record is forwarded.
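For instance, the changelog-aware filtering described above might look like this in application code. The topic name "balances" and the String/Long types are illustrative assumptions:

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;

class FilterSketch {
    static KTable<String, Long> positiveBalances(StreamsBuilder builder) {
        // Hypothetical table of account balances keyed by account id.
        KTable<String, Long> balances = builder.table("balances");

        // Keep only positive balances. A record that fails the predicate is
        // forwarded downstream as a tombstone (null value), deleting the
        // corresponding entry in the result table.
        return balances.filter((accountId, balance) -> balance > 0);
    }
}
```

Note that incoming tombstones bypass the predicate entirely, per the semantics described above.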
-
filter
public KTable<K,V> filter(Predicate<? super K,? super V> predicate, Named named)
Description copied from interface: KTable
Create a new KTable that consists of all records of this KTable which satisfy the given predicate, with default serializers, deserializers, and state store. All records that do not satisfy the predicate are dropped. For each KTable update, the filter is evaluated based on the current update record and then an update record is produced for the result KTable. This is a stateless record-by-record operation.
Note that filter for a changelog stream works differently than record stream filters, because records with null values (so-called tombstone records) have delete semantics. Thus, for tombstones the provided filter predicate is not evaluated but the tombstone record is forwarded directly if required (i.e., if there is anything to be deleted). Furthermore, for each record that gets dropped (i.e., does not satisfy the given predicate) a tombstone record is forwarded.
Specified by:
  filter in interface KTable<K,V>
Parameters:
  predicate - a filter Predicate that is applied to each record
  named - a Named config used to name the processor in the topology
Returns:
  a KTable that contains only those records that satisfy the given predicate
See Also:
  KTable.filterNot(Predicate)
-
filter
public KTable<K,V> filter(Predicate<? super K,? super V> predicate, Named named, Materialized<K,V,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
Description copied from interface: KTable
Create a new KTable that consists of all records of this KTable which satisfy the given predicate, with the key serde, value serde, and the underlying materialized state storage configured in the Materialized instance. All records that do not satisfy the predicate are dropped. For each KTable update, the filter is evaluated based on the current update record and then an update record is produced for the result KTable. This is a stateless record-by-record operation.
Note that filter for a changelog stream works differently than record stream filters, because records with null values (so-called tombstone records) have delete semantics. Thus, for tombstones the provided filter predicate is not evaluated but the tombstone record is forwarded directly if required (i.e., if there is anything to be deleted). Furthermore, for each record that gets dropped (i.e., does not satisfy the given predicate) a tombstone record is forwarded.
To query the local ReadOnlyKeyValueStore it must be obtained via KafkaStreams#store(...):

  KafkaStreams streams = ... // filtering words
  ReadOnlyKeyValueStore<K, ValueAndTimestamp<V>> localStore =
      streams.store(queryableStoreName, QueryableStoreTypes.<K, ValueAndTimestamp<V>>timestampedKeyValueStore());
  K key = "some-word";
  ValueAndTimestamp<V> valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)

For non-local keys, a custom RPC mechanism must be implemented using KafkaStreams.allMetadata() to query the value of the key on a parallel running instance of your Kafka Streams application. The store name to query with is specified by Materialized.as(String) or Materialized.as(KeyValueBytesStoreSupplier).
Specified by:
  filter in interface KTable<K,V>
Parameters:
  predicate - a filter Predicate that is applied to each record
  named - a Named config used to name the processor in the topology
  materialized - a Materialized that describes how the StateStore for the resulting KTable should be materialized. Cannot be null
Returns:
  a KTable that contains only those records that satisfy the given predicate
See Also:
  KTable.filterNot(Predicate, Materialized)
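Combining the Named and Materialized overloads, a call site might look like the following sketch; the processor name and store name are illustrative assumptions:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Named;
import org.apache.kafka.streams.state.KeyValueStore;

class NamedMaterializedFilterSketch {
    static KTable<String, Long> positiveBalances(KTable<String, Long> balances) {
        return balances.filter(
            (accountId, balance) -> balance > 0,
            Named.as("positive-balance-filter"),  // processor name in the topology
            Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("positive-balances-store")
                .withKeySerde(Serdes.String())
                .withValueSerde(Serdes.Long()));
        // "positive-balances-store" is the name to pass to KafkaStreams#store(...).
    }
}
```

Naming both the processor and the store keeps the topology description and interactive queries stable across application restarts and upgrades.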
-
filter
public KTable<K,V> filter(Predicate<? super K,? super V> predicate, Materialized<K,V,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
Description copied from interface: KTable
Create a new KTable that consists of all records of this KTable which satisfy the given predicate, with the key serde, value serde, and the underlying materialized state storage configured in the Materialized instance. All records that do not satisfy the predicate are dropped. For each KTable update, the filter is evaluated based on the current update record and then an update record is produced for the result KTable. This is a stateless record-by-record operation.
Note that filter for a changelog stream works differently than record stream filters, because records with null values (so-called tombstone records) have delete semantics. Thus, for tombstones the provided filter predicate is not evaluated but the tombstone record is forwarded directly if required (i.e., if there is anything to be deleted). Furthermore, for each record that gets dropped (i.e., does not satisfy the given predicate) a tombstone record is forwarded.
To query the local ReadOnlyKeyValueStore it must be obtained via KafkaStreams#store(...):

  KafkaStreams streams = ... // filtering words
  ReadOnlyKeyValueStore<K, ValueAndTimestamp<V>> localStore =
      streams.store(queryableStoreName, QueryableStoreTypes.<K, ValueAndTimestamp<V>>timestampedKeyValueStore());
  K key = "some-word";
  ValueAndTimestamp<V> valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)

For non-local keys, a custom RPC mechanism must be implemented using KafkaStreams.allMetadata() to query the value of the key on a parallel running instance of your Kafka Streams application. The store name to query with is specified by Materialized.as(String) or Materialized.as(KeyValueBytesStoreSupplier).
Specified by:
  filter in interface KTable<K,V>
Parameters:
  predicate - a filter Predicate that is applied to each record
  materialized - a Materialized that describes how the StateStore for the resulting KTable should be materialized. Cannot be null
Returns:
  a KTable that contains only those records that satisfy the given predicate
See Also:
  KTable.filterNot(Predicate, Materialized)
-
filterNot
public KTable<K,V> filterNot(Predicate<? super K,? super V> predicate)
Description copied from interface: KTable
Create a new KTable that consists of all records of this KTable which do not satisfy the given predicate, with default serializers, deserializers, and state store. All records that do satisfy the predicate are dropped. For each KTable update, the filter is evaluated based on the current update record and then an update record is produced for the result KTable. This is a stateless record-by-record operation.
Note that filterNot for a changelog stream works differently than record stream filters, because records with null values (so-called tombstone records) have delete semantics. Thus, for tombstones the provided filter predicate is not evaluated but the tombstone record is forwarded directly if required (i.e., if there is anything to be deleted). Furthermore, for each record that gets dropped (i.e., does satisfy the given predicate) a tombstone record is forwarded.
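filterNot is the mirror image of filter; a sketch of a call site (the "balances" table and the String/Long types are hypothetical):

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;

class FilterNotSketch {
    static KTable<String, Long> nonZeroBalances(StreamsBuilder builder) {
        // Hypothetical table of account balances keyed by account id.
        KTable<String, Long> balances = builder.table("balances");

        // Inverted predicate: keep every record whose balance is NOT zero.
        // Records that do match (balance == 0) are forwarded as tombstones.
        return balances.filterNot((accountId, balance) -> balance == 0L);
    }
}
```

As with filter, incoming tombstones skip the predicate and are forwarded directly when there is something to delete.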
-
filterNot
public KTable<K,V> filterNot(Predicate<? super K,? super V> predicate, Named named)
Description copied from interface: KTable
Create a new KTable that consists of all records of this KTable which do not satisfy the given predicate, with default serializers, deserializers, and state store. All records that do satisfy the predicate are dropped. For each KTable update, the filter is evaluated based on the current update record and then an update record is produced for the result KTable. This is a stateless record-by-record operation.
Note that filterNot for a changelog stream works differently than record stream filters, because records with null values (so-called tombstone records) have delete semantics. Thus, for tombstones the provided filter predicate is not evaluated but the tombstone record is forwarded directly if required (i.e., if there is anything to be deleted). Furthermore, for each record that gets dropped (i.e., does satisfy the given predicate) a tombstone record is forwarded.
Specified by:
  filterNot in interface KTable<K,V>
Parameters:
  predicate - a filter Predicate that is applied to each record
  named - a Named config used to name the processor in the topology
Returns:
  a KTable that contains only those records that do not satisfy the given predicate
See Also:
  KTable.filter(Predicate)
-
filterNot
public KTable<K,V> filterNot(Predicate<? super K,? super V> predicate, Materialized<K,V,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
Description copied from interface: KTable
Create a new KTable that consists of all records of this KTable which do not satisfy the given predicate, with the key serde, value serde, and the underlying materialized state storage configured in the Materialized instance. All records that do satisfy the predicate are dropped. For each KTable update, the filter is evaluated based on the current update record and then an update record is produced for the result KTable. This is a stateless record-by-record operation.
Note that filterNot for a changelog stream works differently than record stream filters, because records with null values (so-called tombstone records) have delete semantics. Thus, for tombstones the provided filter predicate is not evaluated but the tombstone record is forwarded directly if required (i.e., if there is anything to be deleted). Furthermore, for each record that gets dropped (i.e., does satisfy the given predicate) a tombstone record is forwarded.
To query the local ReadOnlyKeyValueStore it must be obtained via KafkaStreams#store(...):

  KafkaStreams streams = ... // filtering words
  ReadOnlyKeyValueStore<K, ValueAndTimestamp<V>> localStore =
      streams.store(queryableStoreName, QueryableStoreTypes.<K, ValueAndTimestamp<V>>timestampedKeyValueStore());
  K key = "some-word";
  ValueAndTimestamp<V> valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)

For non-local keys, a custom RPC mechanism must be implemented using KafkaStreams.allMetadata() to query the value of the key on a parallel running instance of your Kafka Streams application. The store name to query with is specified by Materialized.as(String) or Materialized.as(KeyValueBytesStoreSupplier).
Specified by:
  filterNot in interface KTable<K,V>
Parameters:
  predicate - a filter Predicate that is applied to each record
  materialized - a Materialized that describes how the StateStore for the resulting KTable should be materialized. Cannot be null
Returns:
  a KTable that contains only those records that do not satisfy the given predicate
See Also:
  KTable.filter(Predicate, Materialized)
-
filterNot
public KTable<K,V> filterNot(Predicate<? super K,? super V> predicate, Named named, Materialized<K,V,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
Description copied from interface: KTable
Create a new KTable that consists of all records of this KTable which do not satisfy the given predicate, with the key serde, value serde, and the underlying materialized state storage configured in the Materialized instance. All records that do satisfy the predicate are dropped. For each KTable update, the filter is evaluated based on the current update record and then an update record is produced for the result KTable. This is a stateless record-by-record operation.
Note that filterNot for a changelog stream works differently than record stream filters, because records with null values (so-called tombstone records) have delete semantics. Thus, for tombstones the provided filter predicate is not evaluated but the tombstone record is forwarded directly if required (i.e., if there is anything to be deleted). Furthermore, for each record that gets dropped (i.e., does satisfy the given predicate) a tombstone record is forwarded.
To query the local ReadOnlyKeyValueStore it must be obtained via KafkaStreams#store(...):

  KafkaStreams streams = ... // filtering words
  ReadOnlyKeyValueStore<K, ValueAndTimestamp<V>> localStore =
      streams.store(queryableStoreName, QueryableStoreTypes.<K, ValueAndTimestamp<V>>timestampedKeyValueStore());
  K key = "some-word";
  ValueAndTimestamp<V> valueForKey = localStore.get(key); // key must be local (application state is shared over all running Kafka Streams instances)

For non-local keys, a custom RPC mechanism must be implemented using KafkaStreams.allMetadata() to query the value of the key on a parallel running instance of your Kafka Streams application. The store name to query with is specified by Materialized.as(String) or Materialized.as(KeyValueBytesStoreSupplier).
Specified by:
  filterNot in interface KTable<K,V>
Parameters:
  predicate - a filter Predicate that is applied to each record
  named - a Named config used to name the processor in the topology
  materialized - a Materialized that describes how the StateStore for the resulting KTable should be materialized. Cannot be null
Returns:
  a KTable that contains only those records that do not satisfy the given predicate
See Also:
  KTable.filter(Predicate, Materialized)
-
mapValues
public <VR> KTable<K,VR> mapValues(ValueMapper<? super V,? extends VR> mapper)
Description copied from interface: KTable
Create a new KTable by transforming the value of each record in this KTable into a new value (with possibly a new type) in the new KTable, with default serializers, deserializers, and state store. For each KTable update the provided ValueMapper is applied to the value of the updated record and computes a new value for it, resulting in an updated record for the result KTable. Thus, an input record <K,V> can be transformed into an output record <K,V'>. This is a stateless record-by-record operation.
The example below counts the number of tokens of the value string.

  KTable<String, String> inputTable = builder.table("topic");
  KTable<String, Integer> outputTable = inputTable.mapValues(value -> value.split(" ").length);

This operation preserves data co-location with respect to the key. Thus, no internal data redistribution is required if a key based operator (like a join) is applied to the result KTable.
Note that mapValues for a changelog stream works differently than record stream filters, because records with null values (so-called tombstone records) have delete semantics. Thus, for tombstones the provided value-mapper is not evaluated but the tombstone record is forwarded directly to delete the corresponding record in the result KTable.
Specified by:
  mapValues in interface KTable<K,V>
Type Parameters:
  VR - the value type of the result KTable
Parameters:
  mapper - a ValueMapper that computes a new output value
Returns:
  a KTable that contains records with unmodified keys and new values (possibly of different type)
-
mapValues
public <VR> KTable<K,VR> mapValues(ValueMapper<? super V,? extends VR> mapper, Named named)
Description copied from interface: KTable
Create a new KTable by transforming the value of each record in this KTable into a new value (with possibly a new type) in the new KTable, with default serializers, deserializers, and state store. For each KTable update the provided ValueMapper is applied to the value of the updated record and computes a new value for it, resulting in an updated record for the result KTable. Thus, an input record <K,V> can be transformed into an output record <K,V'>. This is a stateless record-by-record operation.
The example below counts the number of tokens of the value string.
KTable<String, String> inputTable = builder.table("topic");
KTable<String, Integer> outputTable = inputTable.mapValues(value -> value.split(" ").length, Named.as("countTokenValue"));
This operation preserves data co-location with respect to the key. Thus, no internal data redistribution is required if a key based operator (like a join) is applied to the result
KTable.Note that
mapValuesfor a changelog stream works differently thanrecord stream filters, becauserecordswithnullvalues (so-called tombstone records) have delete semantics. Thus, for tombstones the provided value-mapper is not evaluated but the tombstone record is forwarded directly to delete the corresponding record in the resultKTable.- Specified by:
mapValues in interface KTable<K,V>
- Type Parameters:
VR - the value type of the result KTable
- Parameters:
mapper - a ValueMapper that computes a new output value
named - a Named config used to name the processor in the topology
- Returns:
- a
KTablethat contains records with unmodified keys and new values (possibly of different type)
-
mapValues
public <VR> KTable<K,VR> mapValues(ValueMapperWithKey<? super K,? super V,? extends VR> mapper)
Description copied from interface: KTable
Create a new KTable by transforming the value of each record in this KTable into a new value (with possibly a new type) in the new KTable, with default serializers, deserializers, and state store. For each KTable update the provided ValueMapperWithKey is applied to the value of the updated record and computes a new value for it, resulting in an updated record for the result KTable. Thus, an input record <K,V> can be transformed into an output record <K,V'>. This is a stateless record-by-record operation.
The example below counts the number of tokens of the value and key strings.
KTable<String, String> inputTable = builder.table("topic");
KTable<String, Integer> outputTable = inputTable.mapValues((readOnlyKey, value) -> readOnlyKey.split(" ").length + value.split(" ").length);
Note that the key is read-only and should not be modified, as this can lead to corrupt partitioning. This operation preserves data co-location with respect to the key. Thus, no internal data redistribution is required if a key based operator (like a join) is applied to the result
KTable.Note that
mapValuesfor a changelog stream works differently thanrecord stream filters, becauserecordswithnullvalues (so-called tombstone records) have delete semantics. Thus, for tombstones the provided value-mapper is not evaluated but the tombstone record is forwarded directly to delete the corresponding record in the resultKTable.- Specified by:
mapValues in interface KTable<K,V>
- Type Parameters:
VR - the value type of the result KTable
- Parameters:
mapper - a ValueMapperWithKey that computes a new output value
- Returns:
- a
KTablethat contains records with unmodified keys and new values (possibly of different type)
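The key-and-value token count used in the example above reduces to plain-Java logic. The sketch below uses java.util.function.BiFunction as a stand-in for ValueMapperWithKey (an assumption for illustration; it is not the Kafka interface):

```java
import java.util.function.BiFunction;

// Stand-in for the ValueMapperWithKey logic above: the key is only read,
// never modified, and both key and value contribute tokens to the result.
class KeyAwareTokenCount {
    static final BiFunction<String, String, Integer> MAPPER =
        (readOnlyKey, value) -> readOnlyKey.split(" ").length + value.split(" ").length;
}
```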
-
mapValues
public <VR> KTable<K,VR> mapValues(ValueMapperWithKey<? super K,? super V,? extends VR> mapper, Named named)
Description copied from interface: KTable
Create a new KTable by transforming the value of each record in this KTable into a new value (with possibly a new type) in the new KTable, with default serializers, deserializers, and state store. For each KTable update the provided ValueMapperWithKey is applied to the value of the updated record and computes a new value for it, resulting in an updated record for the result KTable. Thus, an input record <K,V> can be transformed into an output record <K,V'>. This is a stateless record-by-record operation.
The example below counts the number of tokens of the value and key strings.
KTable<String, String> inputTable = builder.table("topic");
KTable<String, Integer> outputTable = inputTable.mapValues((readOnlyKey, value) -> readOnlyKey.split(" ").length + value.split(" ").length, Named.as("countTokenValueAndKey"));
Note that the key is read-only and should not be modified, as this can lead to corrupt partitioning. This operation preserves data co-location with respect to the key. Thus, no internal data redistribution is required if a key based operator (like a join) is applied to the result
KTable.Note that
mapValuesfor a changelog stream works differently thanrecord stream filters, becauserecordswithnullvalues (so-called tombstone records) have delete semantics. Thus, for tombstones the provided value-mapper is not evaluated but the tombstone record is forwarded directly to delete the corresponding record in the resultKTable.- Specified by:
mapValues in interface KTable<K,V>
- Type Parameters:
VR - the value type of the result KTable
- Parameters:
mapper - a ValueMapperWithKey that computes a new output value
named - a Named config used to name the processor in the topology
- Returns:
- a
KTablethat contains records with unmodified keys and new values (possibly of different type)
-
mapValues
public <VR> KTable<K,VR> mapValues(ValueMapper<? super V,? extends VR> mapper, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
Description copied from interface: KTable
Create a new KTable by transforming the value of each record in this KTable into a new value (with possibly a new type) in the new KTable, with the key serde, value serde, and the underlying materialized state storage configured in the Materialized instance. For each KTable update the provided ValueMapper is applied to the value of the updated record and computes a new value for it, resulting in an updated record for the result KTable. Thus, an input record <K,V> can be transformed into an output record <K,V'>. This is a stateless record-by-record operation.
The example below counts the number of tokens of the value string.
KTable<String, String> inputTable = builder.table("topic");
KTable<String, Integer> outputTable = inputTable.mapValues(new ValueMapper<String, Integer>() {
    Integer apply(String value) {
        return value.split(" ").length;
    }
});
To query the local
KeyValueStorerepresenting outputTable above it must be obtained viaKafkaStreams#store(...): For non-local keys, a custom RPC mechanism must be implemented usingKafkaStreams.allMetadata()to query the value of the key on a parallel running instance of your Kafka Streams application. The store name to query with is specified byMaterialized.as(String)orMaterialized.as(KeyValueBytesStoreSupplier).This operation preserves data co-location with respect to the key. Thus, no internal data redistribution is required if a key based operator (like a join) is applied to the result
KTable.Note that
mapValuesfor a changelog stream works differently thanrecord stream filters, becauserecordswithnullvalues (so-called tombstone records) have delete semantics. Thus, for tombstones the provided value-mapper is not evaluated but the tombstone record is forwarded directly to delete the corresponding record in the resultKTable.- Specified by:
mapValues in interface KTable<K,V>
- Type Parameters:
VR - the value type of the result KTable
- Parameters:
mapper - a ValueMapper that computes a new output value
materialized - a Materialized that describes how the StateStore for the resulting KTable should be materialized. Cannot be null
- Returns:
- a
KTablethat contains records with unmodified keys and new values (possibly of different type)
-
mapValues
public <VR> KTable<K,VR> mapValues(ValueMapper<? super V,? extends VR> mapper, Named named, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
Description copied from interface: KTable
Create a new KTable by transforming the value of each record in this KTable into a new value (with possibly a new type) in the new KTable, with the key serde, value serde, and the underlying materialized state storage configured in the Materialized instance. For each KTable update the provided ValueMapper is applied to the value of the updated record and computes a new value for it, resulting in an updated record for the result KTable. Thus, an input record <K,V> can be transformed into an output record <K,V'>. This is a stateless record-by-record operation.
The example below counts the number of tokens of the value string.
KTable<String, String> inputTable = builder.table("topic");
KTable<String, Integer> outputTable = inputTable.mapValues(new ValueMapper<String, Integer>() {
    Integer apply(String value) {
        return value.split(" ").length;
    }
});
To query the local
KeyValueStorerepresenting outputTable above it must be obtained viaKafkaStreams#store(...): For non-local keys, a custom RPC mechanism must be implemented usingKafkaStreams.allMetadata()to query the value of the key on a parallel running instance of your Kafka Streams application. The store name to query with is specified byMaterialized.as(String)orMaterialized.as(KeyValueBytesStoreSupplier).This operation preserves data co-location with respect to the key. Thus, no internal data redistribution is required if a key based operator (like a join) is applied to the result
KTable.Note that
mapValuesfor a changelog stream works differently thanrecord stream filters, becauserecordswithnullvalues (so-called tombstone records) have delete semantics. Thus, for tombstones the provided value-mapper is not evaluated but the tombstone record is forwarded directly to delete the corresponding record in the resultKTable.- Specified by:
mapValues in interface KTable<K,V>
- Type Parameters:
VR - the value type of the result KTable
- Parameters:
mapper - a ValueMapper that computes a new output value
named - a Named config used to name the processor in the topology
materialized - a Materialized that describes how the StateStore for the resulting KTable should be materialized. Cannot be null
- Returns:
- a
KTablethat contains records with unmodified keys and new values (possibly of different type)
-
mapValues
public <VR> KTable<K,VR> mapValues(ValueMapperWithKey<? super K,? super V,? extends VR> mapper, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
Description copied from interface: KTable
Create a new KTable by transforming the value of each record in this KTable into a new value (with possibly a new type) in the new KTable, with the key serde, value serde, and the underlying materialized state storage configured in the Materialized instance. For each KTable update the provided ValueMapperWithKey is applied to the value of the updated record and computes a new value for it, resulting in an updated record for the result KTable. Thus, an input record <K,V> can be transformed into an output record <K,V'>. This is a stateless record-by-record operation.
The example below counts the number of tokens of the value and key strings.
KTable<String, String> inputTable = builder.table("topic");
KTable<String, Integer> outputTable = inputTable.mapValues(new ValueMapperWithKey<String, String, Integer>() {
    Integer apply(String readOnlyKey, String value) {
        return readOnlyKey.split(" ").length + value.split(" ").length;
    }
});
To query the local
KeyValueStore representing outputTable above it must be obtained via KafkaStreams#store(...): For non-local keys, a custom RPC mechanism must be implemented using KafkaStreams.allMetadata() to query the value of the key on a parallel running instance of your Kafka Streams application. The store name to query with is specified by Materialized.as(String) or Materialized.as(KeyValueBytesStoreSupplier). Note that the key is read-only and should not be modified, as this can lead to corrupt partitioning. This operation preserves data co-location with respect to the key. Thus, no internal data redistribution is required if a key based operator (like a join) is applied to the result
KTable.Note that
mapValuesfor a changelog stream works differently thanrecord stream filters, becauserecordswithnullvalues (so-called tombstone records) have delete semantics. Thus, for tombstones the provided value-mapper is not evaluated but the tombstone record is forwarded directly to delete the corresponding record in the resultKTable.- Specified by:
mapValues in interface KTable<K,V>
- Type Parameters:
VR - the value type of the result KTable
- Parameters:
mapper - a ValueMapperWithKey that computes a new output value
materialized - a Materialized that describes how the StateStore for the resulting KTable should be materialized. Cannot be null
- Returns:
- a
KTablethat contains records with unmodified keys and new values (possibly of different type)
-
mapValues
public <VR> KTable<K,VR> mapValues(ValueMapperWithKey<? super K,? super V,? extends VR> mapper, Named named, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
Description copied from interface: KTable
Create a new KTable by transforming the value of each record in this KTable into a new value (with possibly a new type) in the new KTable, with the key serde, value serde, and the underlying materialized state storage configured in the Materialized instance. For each KTable update the provided ValueMapperWithKey is applied to the value of the updated record and computes a new value for it, resulting in an updated record for the result KTable. Thus, an input record <K,V> can be transformed into an output record <K,V'>. This is a stateless record-by-record operation.
The example below counts the number of tokens of the value and key strings.
KTable<String, String> inputTable = builder.table("topic");
KTable<String, Integer> outputTable = inputTable.mapValues(new ValueMapperWithKey<String, String, Integer>() {
    Integer apply(String readOnlyKey, String value) {
        return readOnlyKey.split(" ").length + value.split(" ").length;
    }
});
To query the local
KeyValueStorerepresenting outputTable above it must be obtained viaKafkaStreams#store(...): For non-local keys, a custom RPC mechanism must be implemented usingKafkaStreams.allMetadata()to query the value of the key on a parallel running instance of your Kafka Streams application. The store name to query with is specified byMaterialized.as(String)orMaterialized.as(KeyValueBytesStoreSupplier).Note that the key is read-only and should not be modified, as this can lead to corrupt partitioning. This operation preserves data co-location with respect to the key. Thus, no internal data redistribution is required if a key based operator (like a join) is applied to the result
KTable.Note that
mapValuesfor a changelog stream works differently thanrecord stream filters, becauserecordswithnullvalues (so-called tombstone records) have delete semantics. Thus, for tombstones the provided value-mapper is not evaluated but the tombstone record is forwarded directly to delete the corresponding record in the resultKTable.- Specified by:
mapValues in interface KTable<K,V>
- Type Parameters:
VR - the value type of the result KTable
- Parameters:
mapper - a ValueMapperWithKey that computes a new output value
named - a Named config used to name the processor in the topology
materialized - a Materialized that describes how the StateStore for the resulting KTable should be materialized. Cannot be null
- Returns:
- a
KTablethat contains records with unmodified keys and new values (possibly of different type)
-
transformValues
public <VR> KTable<K,VR> transformValues(ValueTransformerWithKeySupplier<? super K,? super V,? extends VR> transformerSupplier, java.lang.String... stateStoreNames)
Description copied from interface:KTableCreate a newKTableby transforming the value of each record in thisKTableinto a new value (with possibly a new type), with default serializers, deserializers, and state store. AValueTransformerWithKey(provided by the givenValueTransformerWithKeySupplier) is applied to each input record value and computes a new value for it. Thus, an input record<K,V>can be transformed into an output record<K:V'>. This is similar toKTable.mapValues(ValueMapperWithKey), but more flexible, allowing access to additional state-stores, and access to theProcessorContext. Furthermore, viaPunctuator.punctuate(long)the processing progress can be observed and additional periodic actions can be performed.If the downstream topology uses aggregation functions, (e.g.
KGroupedTable.reduce(org.apache.kafka.streams.kstream.Reducer<V>, org.apache.kafka.streams.kstream.Reducer<V>, org.apache.kafka.streams.kstream.Materialized<K, V, org.apache.kafka.streams.state.KeyValueStore<org.apache.kafka.common.utils.Bytes, byte[]>>),KGroupedTable.aggregate(org.apache.kafka.streams.kstream.Initializer<VR>, org.apache.kafka.streams.kstream.Aggregator<? super K, ? super V, VR>, org.apache.kafka.streams.kstream.Aggregator<? super K, ? super V, VR>, org.apache.kafka.streams.kstream.Materialized<K, VR, org.apache.kafka.streams.state.KeyValueStore<org.apache.kafka.common.utils.Bytes, byte[]>>), etc), care must be taken when dealing with state, (either held in state-stores or transformer instances), to ensure correct aggregate results. In contrast, if the resulting KTable is materialized, (cf.KTable.transformValues(ValueTransformerWithKeySupplier, Materialized, String...)), such concerns are handled for you.In order to assign a state, the state must be created and registered beforehand:
// create store
StoreBuilder<KeyValueStore<String, String>> keyValueStoreBuilder =
    Stores.keyValueStoreBuilder(Stores.persistentKeyValueStore("myValueTransformState"),
                                Serdes.String(),
                                Serdes.String());
// register store
builder.addStateStore(keyValueStoreBuilder);

KTable outputTable = inputTable.transformValues(
    new ValueTransformerWithKeySupplier() { ... },
    "myValueTransformState");
Within the
ValueTransformerWithKey, the state is obtained via the ProcessorContext. To trigger periodic actions via punctuate(), a schedule must be registered.
new ValueTransformerWithKeySupplier() {
    ValueTransformerWithKey get() {
        return new ValueTransformerWithKey() {
            private KeyValueStore<String, String> state;

            void init(ProcessorContext context) {
                this.state = (KeyValueStore<String, String>) context.getStateStore("myValueTransformState");
                // punctuate each 1000ms, can access this.state
                context.schedule(Duration.ofSeconds(1), PunctuationType.WALL_CLOCK_TIME, new Punctuator(..));
            }

            NewValueType transform(K readOnlyKey, V value) {
                // can access this.state and use read-only key
                return new NewValueType(readOnlyKey); // or null
            }

            void close() {
                // can access this.state
            }
        }
    }
}
Note that the key is read-only and should not be modified, as this can lead to corrupt partitioning. Setting a new value preserves data co-location with respect to the key.
- Specified by:
transformValues in interface KTable<K,V>
- Type Parameters:
VR - the value type of the result table
- Parameters:
transformerSupplier - an instance of ValueTransformerWithKeySupplier that generates a ValueTransformerWithKey. At least one transformer instance will be created per streaming task. Transformers do not need to be thread-safe.
stateStoreNames - the names of the state stores used by the processor
- Returns:
- a
KTablethat contains records with unmodified key and new values (possibly of different type) - See Also:
KTable.mapValues(ValueMapper),KTable.mapValues(ValueMapperWithKey)
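To make the stateful pattern concrete without a running Kafka cluster, here is a plain-Java sketch (illustrative only, not the Kafka API) of the kind of per-key state a ValueTransformerWithKey can maintain; a simple map stands in for the registered KeyValueStore:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-in for a stateful transformer: counts how many
// updates each key has seen, using a map in place of a KeyValueStore.
class RunningCountTransformer {
    private final Map<String, Integer> state = new HashMap<>();

    Integer transform(String readOnlyKey, String value) {
        int count = state.getOrDefault(readOnlyKey, 0) + 1;
        state.put(readOnlyKey, count); // update the per-key state
        return count;                  // new value for the result table
    }
}
```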
-
transformValues
public <VR> KTable<K,VR> transformValues(ValueTransformerWithKeySupplier<? super K,? super V,? extends VR> transformerSupplier, Named named, java.lang.String... stateStoreNames)
Description copied from interface:KTableCreate a newKTableby transforming the value of each record in thisKTableinto a new value (with possibly a new type), with default serializers, deserializers, and state store. AValueTransformerWithKey(provided by the givenValueTransformerWithKeySupplier) is applied to each input record value and computes a new value for it. Thus, an input record<K,V>can be transformed into an output record<K:V'>. This is similar toKTable.mapValues(ValueMapperWithKey), but more flexible, allowing access to additional state-stores, and access to theProcessorContext. Furthermore, viaPunctuator.punctuate(long)the processing progress can be observed and additional periodic actions can be performed.If the downstream topology uses aggregation functions, (e.g.
KGroupedTable.reduce(org.apache.kafka.streams.kstream.Reducer<V>, org.apache.kafka.streams.kstream.Reducer<V>, org.apache.kafka.streams.kstream.Materialized<K, V, org.apache.kafka.streams.state.KeyValueStore<org.apache.kafka.common.utils.Bytes, byte[]>>),KGroupedTable.aggregate(org.apache.kafka.streams.kstream.Initializer<VR>, org.apache.kafka.streams.kstream.Aggregator<? super K, ? super V, VR>, org.apache.kafka.streams.kstream.Aggregator<? super K, ? super V, VR>, org.apache.kafka.streams.kstream.Materialized<K, VR, org.apache.kafka.streams.state.KeyValueStore<org.apache.kafka.common.utils.Bytes, byte[]>>), etc), care must be taken when dealing with state, (either held in state-stores or transformer instances), to ensure correct aggregate results. In contrast, if the resulting KTable is materialized, (cf.KTable.transformValues(ValueTransformerWithKeySupplier, Materialized, String...)), such concerns are handled for you.In order to assign a state, the state must be created and registered beforehand:
// create store
StoreBuilder<KeyValueStore<String, String>> keyValueStoreBuilder =
    Stores.keyValueStoreBuilder(Stores.persistentKeyValueStore("myValueTransformState"),
                                Serdes.String(),
                                Serdes.String());
// register store
builder.addStateStore(keyValueStoreBuilder);

KTable outputTable = inputTable.transformValues(
    new ValueTransformerWithKeySupplier() { ... },
    "myValueTransformState");
Within the
ValueTransformerWithKey, the state is obtained via the ProcessorContext. To trigger periodic actions via punctuate(), a schedule must be registered.
new ValueTransformerWithKeySupplier() {
    ValueTransformerWithKey get() {
        return new ValueTransformerWithKey() {
            private KeyValueStore<String, String> state;

            void init(ProcessorContext context) {
                this.state = (KeyValueStore<String, String>) context.getStateStore("myValueTransformState");
                // punctuate each 1000ms, can access this.state
                context.schedule(Duration.ofSeconds(1), PunctuationType.WALL_CLOCK_TIME, new Punctuator(..));
            }

            NewValueType transform(K readOnlyKey, V value) {
                // can access this.state and use read-only key
                return new NewValueType(readOnlyKey); // or null
            }

            void close() {
                // can access this.state
            }
        }
    }
}
Note that the key is read-only and should not be modified, as this can lead to corrupt partitioning. Setting a new value preserves data co-location with respect to the key.
- Specified by:
transformValues in interface KTable<K,V>
- Type Parameters:
VR - the value type of the result table
- Parameters:
transformerSupplier - an instance of ValueTransformerWithKeySupplier that generates a ValueTransformerWithKey. At least one transformer instance will be created per streaming task. Transformers do not need to be thread-safe.
named - a Named config used to name the processor in the topology
stateStoreNames - the names of the state stores used by the processor
- Returns:
- a
KTablethat contains records with unmodified key and new values (possibly of different type) - See Also:
KTable.mapValues(ValueMapper),KTable.mapValues(ValueMapperWithKey)
-
transformValues
public <VR> KTable<K,VR> transformValues(ValueTransformerWithKeySupplier<? super K,? super V,? extends VR> transformerSupplier, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized, java.lang.String... stateStoreNames)
Description copied from interface:KTableCreate a newKTableby transforming the value of each record in thisKTableinto a new value (with possibly a new type), with thekey serde,value serde, and the underlyingmaterialized state storageconfigured in theMaterializedinstance. AValueTransformerWithKey(provided by the givenValueTransformerWithKeySupplier) is applied to each input record value and computes a new value for it. This is similar toKTable.mapValues(ValueMapperWithKey), but more flexible, allowing stateful, rather than stateless, record-by-record operation, access to additional state-stores, and access to theProcessorContext. Furthermore, viaPunctuator.punctuate(long)the processing progress can be observed and additional periodic actions can be performed. The resultingKTableis materialized into another state store (additional to the provided state store names) as specified by the user viaMaterializedparameter, and is queryable through its given name.In order to assign a state, the state must be created and registered beforehand:
// create store
StoreBuilder<KeyValueStore<String, String>> keyValueStoreBuilder =
    Stores.keyValueStoreBuilder(Stores.persistentKeyValueStore("myValueTransformState"),
                                Serdes.String(),
                                Serdes.String());
// register store
builder.addStateStore(keyValueStoreBuilder);

KTable outputTable = inputTable.transformValues(
    new ValueTransformerWithKeySupplier() { ... },
    Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("outputTable")
        .withKeySerde(Serdes.String())
        .withValueSerde(Serdes.String()),
    "myValueTransformState");
Within the
ValueTransformerWithKey, the state is obtained via the ProcessorContext. To trigger periodic actions via punctuate(), a schedule must be registered.
new ValueTransformerWithKeySupplier() {
    ValueTransformerWithKey get() {
        return new ValueTransformerWithKey() {
            private KeyValueStore<String, String> state;

            void init(ProcessorContext context) {
                this.state = (KeyValueStore<String, String>) context.getStateStore("myValueTransformState");
                // punctuate each 1000ms, can access this.state
                context.schedule(Duration.ofSeconds(1), PunctuationType.WALL_CLOCK_TIME, new Punctuator(..));
            }

            NewValueType transform(K readOnlyKey, V value) {
                // can access this.state and use read-only key
                return new NewValueType(readOnlyKey); // or null
            }

            void close() {
                // can access this.state
            }
        }
    }
}
Note that the key is read-only and should not be modified, as this can lead to corrupt partitioning. Setting a new value preserves data co-location with respect to the key.
- Specified by:
transformValues in interface KTable<K,V>
- Type Parameters:
VR - the value type of the result table
- Parameters:
transformerSupplier - an instance of ValueTransformerWithKeySupplier that generates a ValueTransformerWithKey. At least one transformer instance will be created per streaming task. Transformers do not need to be thread-safe.
materialized - an instance of Materialized used to describe how the state store of the resulting table should be materialized. Cannot be null
stateStoreNames - the names of the state stores used by the processor
- Returns:
- a
KTablethat contains records with unmodified key and new values (possibly of different type) - See Also:
KTable.mapValues(ValueMapper),KTable.mapValues(ValueMapperWithKey)
-
transformValues
public <VR> KTable<K,VR> transformValues(ValueTransformerWithKeySupplier<? super K,? super V,? extends VR> transformerSupplier, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized, Named named, java.lang.String... stateStoreNames)
Description copied from interface:KTableCreate a newKTableby transforming the value of each record in thisKTableinto a new value (with possibly a new type), with thekey serde,value serde, and the underlyingmaterialized state storageconfigured in theMaterializedinstance. AValueTransformerWithKey(provided by the givenValueTransformerWithKeySupplier) is applied to each input record value and computes a new value for it. This is similar toKTable.mapValues(ValueMapperWithKey), but more flexible, allowing stateful, rather than stateless, record-by-record operation, access to additional state-stores, and access to theProcessorContext. Furthermore, viaPunctuator.punctuate(long)the processing progress can be observed and additional periodic actions can be performed. The resultingKTableis materialized into another state store (additional to the provided state store names) as specified by the user viaMaterializedparameter, and is queryable through its given name.In order to assign a state, the state must be created and registered beforehand:
// create store
StoreBuilder<KeyValueStore<String, String>> keyValueStoreBuilder =
    Stores.keyValueStoreBuilder(Stores.persistentKeyValueStore("myValueTransformState"),
                                Serdes.String(),
                                Serdes.String());
// register store
builder.addStateStore(keyValueStoreBuilder);

KTable outputTable = inputTable.transformValues(
    new ValueTransformerWithKeySupplier() { ... },
    Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("outputTable")
        .withKeySerde(Serdes.String())
        .withValueSerde(Serdes.String()),
    "myValueTransformState");
Within the
ValueTransformerWithKey, the state is obtained via the ProcessorContext. To trigger periodic actions via punctuate(), a schedule must be registered.
new ValueTransformerWithKeySupplier() {
    ValueTransformerWithKey get() {
        return new ValueTransformerWithKey() {
            private KeyValueStore<String, String> state;

            void init(ProcessorContext context) {
                this.state = (KeyValueStore<String, String>) context.getStateStore("myValueTransformState");
                // punctuate each 1000ms, can access this.state
                context.schedule(Duration.ofSeconds(1), PunctuationType.WALL_CLOCK_TIME, new Punctuator(..));
            }

            NewValueType transform(K readOnlyKey, V value) {
                // can access this.state and use read-only key
                return new NewValueType(readOnlyKey); // or null
            }

            void close() {
                // can access this.state
            }
        }
    }
}
Note that the key is read-only and should not be modified, as this can lead to corrupt partitioning. Setting a new value preserves data co-location with respect to the key.
- Specified by:
transformValues in interface KTable<K,V>
- Type Parameters:
VR - the value type of the result table
- Parameters:
transformerSupplier - an instance of ValueTransformerWithKeySupplier that generates a ValueTransformerWithKey. At least one transformer instance will be created per streaming task. Transformers do not need to be thread-safe.
materialized - an instance of Materialized used to describe how the state store of the resulting table should be materialized. Cannot be null
named - a Named config used to name the processor in the topology
stateStoreNames - the names of the state stores used by the processor
- Returns:
- a
KTablethat contains records with unmodified key and new values (possibly of different type) - See Also:
KTable.mapValues(ValueMapper),KTable.mapValues(ValueMapperWithKey)
-
toStream
public <K1> KStream<K1,V> toStream(KeyValueMapper<? super K,? super V,? extends K1> mapper)
Description copied from interface:KTableConvert this changelog stream to aKStreamusing the givenKeyValueMapperto select the new key.For example, you can compute the new key as the length of the value string.
KTable<String, String> table = builder.table("topic");
KStream<Integer, String> keyedStream = table.toStream(new KeyValueMapper<String, String, Integer>() {
    Integer apply(String key, String value) {
        return value.length();
    }
});
Setting a new key might result in an internal data redistribution if a key based operator (like an aggregation or join) is applied to the result KStream.
This operation is equivalent to calling
table.toStream().selectKey(KeyValueMapper).Note that
KTable.toStream()is a logical operation and only changes the "interpretation" of the stream, i.e., each record of this changelog stream is no longer treated as an updated record (cf.KStreamvsKTable).
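The key-selection logic in the example above is a simple function of the record; a plain-Java sketch (a hypothetical stand-in for KeyValueMapper, not the Kafka interface):

```java
import java.util.function.BiFunction;

// Stand-in for the KeyValueMapper above: the new key is the value's length.
class LengthKeySelector {
    static final BiFunction<String, String, Integer> SELECT_KEY =
        (key, value) -> value.length(); // the old key is available but unused here
}
```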
-
toStream
public <K1> KStream<K1,V> toStream(KeyValueMapper<? super K,? super V,? extends K1> mapper, Named named)
Description copied from interface:KTableConvert this changelog stream to aKStreamusing the givenKeyValueMapperto select the new key.For example, you can compute the new key as the length of the value string.
KTable<String, String> table = builder.table("topic");
KStream<Integer, String> keyedStream = table.toStream(new KeyValueMapper<String, String, Integer>() {
    Integer apply(String key, String value) {
        return value.length();
    }
});
Setting a new key might result in an internal data redistribution if a key based operator (like an aggregation or join) is applied to the result KStream.
This operation is equivalent to calling
table.toStream().selectKey(KeyValueMapper).Note that
KTable.toStream()is a logical operation and only changes the "interpretation" of the stream, i.e., each record of this changelog stream is no longer treated as an updated record (cf.KStreamvsKTable).- Specified by:
toStreamin interfaceKTable<K,S>- Type Parameters:
K1- the new key type of the result stream- Parameters:
mapper- aKeyValueMapperthat computes a new key for each recordnamed- aNamedconfig used to name the processor in the topology- Returns:
- a
KStreamthat contains the same records as thisKTable
-
suppress
public KTable<K,V> suppress(Suppressed<? super K> suppressed)
Description copied from interface: KTable
Suppress some updates from this changelog stream, determined by the supplied Suppressed configuration. This controls what updates downstream table and stream operations will receive.
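A short hedged sketch of a common use of `suppress`: rate-limiting updates so downstream consumers see at most one update per key per time window. Topic names, serdes, and the buffer size below are illustrative assumptions, not taken from this documentation:

```java
import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.Suppressed;

public class SuppressSketch {
    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        final KTable<String, Long> counts =
                builder.table("counts-topic", Consumed.with(Serdes.String(), Serdes.Long()));

        // Rate-limit downstream updates: each key emits at most once per minute,
        // buffering up to 1000 records in memory in the meantime.
        counts.suppress(Suppressed.untilTimeLimit(
                        Duration.ofMinutes(1),
                        Suppressed.BufferConfig.maxRecords(1000)))
              .toStream()
              .to("throttled-counts", Produced.with(Serdes.String(), Serdes.Long()));
    }
}
```

The choice of buffer config is a trade-off: a bounded buffer that fills up must either emit early or shut down, depending on the configured buffer-full strategy.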
-
join
public <V1,R> KTable<K,R> join(KTable<K,V1> other, ValueJoiner<? super V,? super V1,? extends R> joiner)
Description copied from interface: KTable
Join records of this KTable with another KTable's records using non-windowed inner equi join, with default serializers, deserializers, and state store. The join is a primary key join with join attribute thisKTable.key == otherKTable.key. The result is an ever-updating KTable that represents the current (i.e., processing time) result of the join.

The join is computed by (1) updating the internal state of one KTable and (2) performing a lookup for a matching record in the current (i.e., processing time) internal state of the other KTable. This happens in a symmetric way, i.e., for each update of either this or the other input KTable the result gets updated.

For each KTable record that finds a corresponding record in the other KTable the provided ValueJoiner will be called to compute a value (with arbitrary type) for the result record. The key of the result record is the same as for both joining input records.

Note that records with null values (so-called tombstone records) have delete semantics. Thus, for input tombstones the provided value-joiner is not called but a tombstone record is forwarded directly to delete a record in the result KTable if required (i.e., if there is anything to be deleted).

Input records with null key will be dropped and no join computation is performed. Both input streams (or to be more precise, their underlying source topics) need to have the same number of partitions.

Example:

| thisKTable | thisState | otherKTable | otherState | result updated record |
| <K1:A>     | <K1:A>    |             |            |                       |
|            | <K1:A>    | <K1:b>      | <K1:b>     | <K1:ValueJoiner(A,b)> |
| <K1:C>     | <K1:C>    |             | <K1:b>     | <K1:ValueJoiner(C,b)> |
|            | <K1:C>    | <K1:null>   |            | <K1:null>             |

- Specified by: join in interface KTable<K,V>
- Type Parameters: V1 - the value type of the other KTable; R - the value type of the result KTable
- Parameters: other - the other KTable to be joined with this KTable; joiner - a ValueJoiner that computes the join result for a pair of matching records
- Returns: a KTable that contains join-records for each key and values computed by the given ValueJoiner, one for each matched record-pair with the same key
- See Also: KTable.leftJoin(KTable, ValueJoiner), KTable.outerJoin(KTable, ValueJoiner)
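The inner-join semantics above can be sketched as follows. Topic names and String value types are illustrative assumptions, not from this documentation:

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;

public class InnerJoinSketch {
    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();

        // Hypothetical co-partitioned source topics keyed by user id.
        final KTable<String, String> profiles = builder.table("user-profiles");
        final KTable<String, String> settings = builder.table("user-settings");

        // Primary-key inner join: a result record is produced only for keys
        // currently present in BOTH tables; an update on either side
        // re-triggers the ValueJoiner for that key.
        final KTable<String, String> enriched = profiles.join(
                settings,
                (profile, setting) -> profile + "|" + setting); // ValueJoiner

        enriched.toStream().to("enriched-users");
    }
}
```

Because the join is on the primary key, no repartitioning is needed, but both source topics must have the same number of partitions.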
-
join
public <V1,R> KTable<K,R> join(KTable<K,V1> other, ValueJoiner<? super V,? super V1,? extends R> joiner, Named named)
Description copied from interface: KTable
Join records of this KTable with another KTable's records using non-windowed inner equi join, with default serializers, deserializers, and state store. The join is a primary key join with join attribute thisKTable.key == otherKTable.key. The result is an ever-updating KTable that represents the current (i.e., processing time) result of the join.

The join is computed by (1) updating the internal state of one KTable and (2) performing a lookup for a matching record in the current (i.e., processing time) internal state of the other KTable. This happens in a symmetric way, i.e., for each update of either this or the other input KTable the result gets updated.

For each KTable record that finds a corresponding record in the other KTable the provided ValueJoiner will be called to compute a value (with arbitrary type) for the result record. The key of the result record is the same as for both joining input records.

Note that records with null values (so-called tombstone records) have delete semantics. Thus, for input tombstones the provided value-joiner is not called but a tombstone record is forwarded directly to delete a record in the result KTable if required (i.e., if there is anything to be deleted).

Input records with null key will be dropped and no join computation is performed. Both input streams (or to be more precise, their underlying source topics) need to have the same number of partitions.

Example:

| thisKTable | thisState | otherKTable | otherState | result updated record |
| <K1:A>     | <K1:A>    |             |            |                       |
|            | <K1:A>    | <K1:b>      | <K1:b>     | <K1:ValueJoiner(A,b)> |
| <K1:C>     | <K1:C>    |             | <K1:b>     | <K1:ValueJoiner(C,b)> |
|            | <K1:C>    | <K1:null>   |            | <K1:null>             |

- Specified by: join in interface KTable<K,V>
- Type Parameters: V1 - the value type of the other KTable; R - the value type of the result KTable
- Parameters: other - the other KTable to be joined with this KTable; joiner - a ValueJoiner that computes the join result for a pair of matching records; named - a Named config used to name the processor in the topology
- Returns: a KTable that contains join-records for each key and values computed by the given ValueJoiner, one for each matched record-pair with the same key
- See Also: KTable.leftJoin(KTable, ValueJoiner), KTable.outerJoin(KTable, ValueJoiner)
-
join
public <VO,VR> KTable<K,VR> join(KTable<K,VO> other, ValueJoiner<? super V,? super VO,? extends VR> joiner, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
Description copied from interface: KTable
Join records of this KTable with another KTable's records using non-windowed inner equi join, with the Materialized instance for configuration of the key serde, the result table's value serde, and state store. The join is a primary key join with join attribute thisKTable.key == otherKTable.key. The result is an ever-updating KTable that represents the current (i.e., processing time) result of the join.

The join is computed by (1) updating the internal state of one KTable and (2) performing a lookup for a matching record in the current (i.e., processing time) internal state of the other KTable. This happens in a symmetric way, i.e., for each update of either this or the other input KTable the result gets updated.

For each KTable record that finds a corresponding record in the other KTable the provided ValueJoiner will be called to compute a value (with arbitrary type) for the result record. The key of the result record is the same as for both joining input records.

Note that records with null values (so-called tombstone records) have delete semantics. Thus, for input tombstones the provided value-joiner is not called but a tombstone record is forwarded directly to delete a record in the result KTable if required (i.e., if there is anything to be deleted).

Input records with null key will be dropped and no join computation is performed. Both input streams (or to be more precise, their underlying source topics) need to have the same number of partitions.

Example:

| thisKTable | thisState | otherKTable | otherState | result updated record |
| <K1:A>     | <K1:A>    |             |            |                       |
|            | <K1:A>    | <K1:b>      | <K1:b>     | <K1:ValueJoiner(A,b)> |
| <K1:C>     | <K1:C>    |             | <K1:b>     | <K1:ValueJoiner(C,b)> |
|            | <K1:C>    | <K1:null>   |            | <K1:null>             |

- Specified by: join in interface KTable<K,V>
- Type Parameters: VO - the value type of the other KTable; VR - the value type of the result KTable
- Parameters: other - the other KTable to be joined with this KTable; joiner - a ValueJoiner that computes the join result for a pair of matching records; materialized - an instance of Materialized used to describe how the state store should be materialized. Cannot be null
- Returns: a KTable that contains join-records for each key and values computed by the given ValueJoiner, one for each matched record-pair with the same key
- See Also: KTable.leftJoin(KTable, ValueJoiner, Materialized), KTable.outerJoin(KTable, ValueJoiner, Materialized)
-
join
public <VO,VR> KTable<K,VR> join(KTable<K,VO> other, ValueJoiner<? super V,? super VO,? extends VR> joiner, Named named, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
Description copied from interface: KTable
Join records of this KTable with another KTable's records using non-windowed inner equi join, with the Materialized instance for configuration of the key serde, the result table's value serde, and state store. The join is a primary key join with join attribute thisKTable.key == otherKTable.key. The result is an ever-updating KTable that represents the current (i.e., processing time) result of the join.

The join is computed by (1) updating the internal state of one KTable and (2) performing a lookup for a matching record in the current (i.e., processing time) internal state of the other KTable. This happens in a symmetric way, i.e., for each update of either this or the other input KTable the result gets updated.

For each KTable record that finds a corresponding record in the other KTable the provided ValueJoiner will be called to compute a value (with arbitrary type) for the result record. The key of the result record is the same as for both joining input records.

Note that records with null values (so-called tombstone records) have delete semantics. Thus, for input tombstones the provided value-joiner is not called but a tombstone record is forwarded directly to delete a record in the result KTable if required (i.e., if there is anything to be deleted).

Input records with null key will be dropped and no join computation is performed. Both input streams (or to be more precise, their underlying source topics) need to have the same number of partitions.

Example:

| thisKTable | thisState | otherKTable | otherState | result updated record |
| <K1:A>     | <K1:A>    |             |            |                       |
|            | <K1:A>    | <K1:b>      | <K1:b>     | <K1:ValueJoiner(A,b)> |
| <K1:C>     | <K1:C>    |             | <K1:b>     | <K1:ValueJoiner(C,b)> |
|            | <K1:C>    | <K1:null>   |            | <K1:null>             |

- Specified by: join in interface KTable<K,V>
- Type Parameters: VO - the value type of the other KTable; VR - the value type of the result KTable
- Parameters: other - the other KTable to be joined with this KTable; joiner - a ValueJoiner that computes the join result for a pair of matching records; named - a Named config used to name the processor in the topology; materialized - an instance of Materialized used to describe how the state store should be materialized. Cannot be null
- Returns: a KTable that contains join-records for each key and values computed by the given ValueJoiner, one for each matched record-pair with the same key
- See Also: KTable.leftJoin(KTable, ValueJoiner, Materialized), KTable.outerJoin(KTable, ValueJoiner, Materialized)
-
outerJoin
public <V1,R> KTable<K,R> outerJoin(KTable<K,V1> other, ValueJoiner<? super V,? super V1,? extends R> joiner)
Description copied from interface: KTable
Join records of this KTable (left input) with another KTable's (right input) records using non-windowed outer equi join, with default serializers, deserializers, and state store. The join is a primary key join with join attribute thisKTable.key == otherKTable.key. In contrast to inner-join or left-join, all records from both input KTables will produce an output record (cf. below). The result is an ever-updating KTable that represents the current (i.e., processing time) result of the join.

The join is computed by (1) updating the internal state of one KTable and (2) performing a lookup for a matching record in the current (i.e., processing time) internal state of the other KTable. This happens in a symmetric way, i.e., for each update of either this or the other input KTable the result gets updated.

For each KTable record that finds a corresponding record in the other KTable's state the provided ValueJoiner will be called to compute a value (with arbitrary type) for the result record. Additionally, for each record that does not find a corresponding record in the other KTable's state the provided ValueJoiner will be called with null value for the corresponding other value to compute a value (with arbitrary type) for the result record. The key of the result record is the same as for both joining input records.

Note that records with null values (so-called tombstone records) have delete semantics. Thus, for input tombstones the provided value-joiner is not called but a tombstone record is forwarded directly to delete a record in the result KTable if required (i.e., if there is anything to be deleted).

Input records with null key will be dropped and no join computation is performed. Both input streams (or to be more precise, their underlying source topics) need to have the same number of partitions.

Example:

| thisKTable | thisState | otherKTable | otherState | result updated record    |
| <K1:A>     | <K1:A>    |             |            | <K1:ValueJoiner(A,null)> |
|            | <K1:A>    | <K1:b>      | <K1:b>     | <K1:ValueJoiner(A,b)>    |
| <K1:null>  |           |             | <K1:b>     | <K1:ValueJoiner(null,b)> |
|            |           | <K1:null>   |            | <K1:null>                |

- Specified by: outerJoin in interface KTable<K,V>
- Type Parameters: V1 - the value type of the other KTable; R - the value type of the result KTable
- Parameters: other - the other KTable to be joined with this KTable; joiner - a ValueJoiner that computes the join result for a pair of matching records
- Returns: a KTable that contains join-records for each key and values computed by the given ValueJoiner, one for each matched record-pair with the same key plus one for each non-matching record of both KTables
- See Also: KTable.join(KTable, ValueJoiner), KTable.leftJoin(KTable, ValueJoiner)
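The outer-join semantics above can be sketched as follows; because either side of a result may be absent, the ValueJoiner must tolerate null on both sides. Topic names and String value types are illustrative assumptions:

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;

public class OuterJoinSketch {
    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        // Hypothetical co-partitioned source topics.
        final KTable<String, String> left = builder.table("left-topic");
        final KTable<String, String> right = builder.table("right-topic");

        // Outer join: every key present in EITHER table produces a result record;
        // the missing side is passed to the ValueJoiner as null, so guard for it.
        final KTable<String, String> merged = left.outerJoin(
                right,
                (l, r) -> (l == null ? "-" : l) + "|" + (r == null ? "-" : r));

        merged.toStream().to("merged-topic");
    }
}
```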
-
outerJoin
public <V1,R> KTable<K,R> outerJoin(KTable<K,V1> other, ValueJoiner<? super V,? super V1,? extends R> joiner, Named named)
Description copied from interface: KTable
Join records of this KTable (left input) with another KTable's (right input) records using non-windowed outer equi join, with default serializers, deserializers, and state store. The join is a primary key join with join attribute thisKTable.key == otherKTable.key. In contrast to inner-join or left-join, all records from both input KTables will produce an output record (cf. below). The result is an ever-updating KTable that represents the current (i.e., processing time) result of the join.

The join is computed by (1) updating the internal state of one KTable and (2) performing a lookup for a matching record in the current (i.e., processing time) internal state of the other KTable. This happens in a symmetric way, i.e., for each update of either this or the other input KTable the result gets updated.

For each KTable record that finds a corresponding record in the other KTable's state the provided ValueJoiner will be called to compute a value (with arbitrary type) for the result record. Additionally, for each record that does not find a corresponding record in the other KTable's state the provided ValueJoiner will be called with null value for the corresponding other value to compute a value (with arbitrary type) for the result record. The key of the result record is the same as for both joining input records.

Note that records with null values (so-called tombstone records) have delete semantics. Thus, for input tombstones the provided value-joiner is not called but a tombstone record is forwarded directly to delete a record in the result KTable if required (i.e., if there is anything to be deleted).

Input records with null key will be dropped and no join computation is performed. Both input streams (or to be more precise, their underlying source topics) need to have the same number of partitions.

Example:

| thisKTable | thisState | otherKTable | otherState | result updated record    |
| <K1:A>     | <K1:A>    |             |            | <K1:ValueJoiner(A,null)> |
|            | <K1:A>    | <K1:b>      | <K1:b>     | <K1:ValueJoiner(A,b)>    |
| <K1:null>  |           |             | <K1:b>     | <K1:ValueJoiner(null,b)> |
|            |           | <K1:null>   |            | <K1:null>                |

- Specified by: outerJoin in interface KTable<K,V>
- Type Parameters: V1 - the value type of the other KTable; R - the value type of the result KTable
- Parameters: other - the other KTable to be joined with this KTable; joiner - a ValueJoiner that computes the join result for a pair of matching records; named - a Named config used to name the processor in the topology
- Returns: a KTable that contains join-records for each key and values computed by the given ValueJoiner, one for each matched record-pair with the same key plus one for each non-matching record of both KTables
- See Also: KTable.join(KTable, ValueJoiner), KTable.leftJoin(KTable, ValueJoiner)
-
outerJoin
public <VO,VR> KTable<K,VR> outerJoin(KTable<K,VO> other, ValueJoiner<? super V,? super VO,? extends VR> joiner, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
Description copied from interface: KTable
Join records of this KTable (left input) with another KTable's (right input) records using non-windowed outer equi join, with the Materialized instance for configuration of the key serde, the result table's value serde, and state store. The join is a primary key join with join attribute thisKTable.key == otherKTable.key. In contrast to inner-join or left-join, all records from both input KTables will produce an output record (cf. below). The result is an ever-updating KTable that represents the current (i.e., processing time) result of the join.

The join is computed by (1) updating the internal state of one KTable and (2) performing a lookup for a matching record in the current (i.e., processing time) internal state of the other KTable. This happens in a symmetric way, i.e., for each update of either this or the other input KTable the result gets updated.

For each KTable record that finds a corresponding record in the other KTable's state the provided ValueJoiner will be called to compute a value (with arbitrary type) for the result record. Additionally, for each record that does not find a corresponding record in the other KTable's state the provided ValueJoiner will be called with null value for the corresponding other value to compute a value (with arbitrary type) for the result record. The key of the result record is the same as for both joining input records.

Note that records with null values (so-called tombstone records) have delete semantics. Thus, for input tombstones the provided value-joiner is not called but a tombstone record is forwarded directly to delete a record in the result KTable if required (i.e., if there is anything to be deleted).

Input records with null key will be dropped and no join computation is performed. Both input streams (or to be more precise, their underlying source topics) need to have the same number of partitions.

Example:

| thisKTable | thisState | otherKTable | otherState | result updated record    |
| <K1:A>     | <K1:A>    |             |            | <K1:ValueJoiner(A,null)> |
|            | <K1:A>    | <K1:b>      | <K1:b>     | <K1:ValueJoiner(A,b)>    |
| <K1:null>  |           |             | <K1:b>     | <K1:ValueJoiner(null,b)> |
|            |           | <K1:null>   |            | <K1:null>                |

- Specified by: outerJoin in interface KTable<K,V>
- Type Parameters: VO - the value type of the other KTable; VR - the value type of the result KTable
- Parameters: other - the other KTable to be joined with this KTable; joiner - a ValueJoiner that computes the join result for a pair of matching records; materialized - an instance of Materialized used to describe how the state store should be materialized. Cannot be null
- Returns: a KTable that contains join-records for each key and values computed by the given ValueJoiner, one for each matched record-pair with the same key plus one for each non-matching record of both KTables
- See Also: KTable.join(KTable, ValueJoiner), KTable.leftJoin(KTable, ValueJoiner)
-
outerJoin
public <VO,VR> KTable<K,VR> outerJoin(KTable<K,VO> other, ValueJoiner<? super V,? super VO,? extends VR> joiner, Named named, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
Description copied from interface: KTable
Join records of this KTable (left input) with another KTable's (right input) records using non-windowed outer equi join, with the Materialized instance for configuration of the key serde, the result table's value serde, and state store. The join is a primary key join with join attribute thisKTable.key == otherKTable.key. In contrast to inner-join or left-join, all records from both input KTables will produce an output record (cf. below). The result is an ever-updating KTable that represents the current (i.e., processing time) result of the join.

The join is computed by (1) updating the internal state of one KTable and (2) performing a lookup for a matching record in the current (i.e., processing time) internal state of the other KTable. This happens in a symmetric way, i.e., for each update of either this or the other input KTable the result gets updated.

For each KTable record that finds a corresponding record in the other KTable's state the provided ValueJoiner will be called to compute a value (with arbitrary type) for the result record. Additionally, for each record that does not find a corresponding record in the other KTable's state the provided ValueJoiner will be called with null value for the corresponding other value to compute a value (with arbitrary type) for the result record. The key of the result record is the same as for both joining input records.

Note that records with null values (so-called tombstone records) have delete semantics. Thus, for input tombstones the provided value-joiner is not called but a tombstone record is forwarded directly to delete a record in the result KTable if required (i.e., if there is anything to be deleted).

Input records with null key will be dropped and no join computation is performed. Both input streams (or to be more precise, their underlying source topics) need to have the same number of partitions.

Example:

| thisKTable | thisState | otherKTable | otherState | result updated record    |
| <K1:A>     | <K1:A>    |             |            | <K1:ValueJoiner(A,null)> |
|            | <K1:A>    | <K1:b>      | <K1:b>     | <K1:ValueJoiner(A,b)>    |
| <K1:null>  |           |             | <K1:b>     | <K1:ValueJoiner(null,b)> |
|            |           | <K1:null>   |            | <K1:null>                |

- Specified by: outerJoin in interface KTable<K,V>
- Type Parameters: VO - the value type of the other KTable; VR - the value type of the result KTable
- Parameters: other - the other KTable to be joined with this KTable; joiner - a ValueJoiner that computes the join result for a pair of matching records; named - a Named config used to name the processor in the topology; materialized - an instance of Materialized used to describe how the state store should be materialized. Cannot be null
- Returns: a KTable that contains join-records for each key and values computed by the given ValueJoiner, one for each matched record-pair with the same key plus one for each non-matching record of both KTables
- See Also: KTable.join(KTable, ValueJoiner), KTable.leftJoin(KTable, ValueJoiner)
-
leftJoin
public <V1,R> KTable<K,R> leftJoin(KTable<K,V1> other, ValueJoiner<? super V,? super V1,? extends R> joiner)
Description copied from interface: KTable
Join records of this KTable (left input) with another KTable's (right input) records using non-windowed left equi join, with default serializers, deserializers, and state store. The join is a primary key join with join attribute thisKTable.key == otherKTable.key. In contrast to inner-join, all records from the left KTable will produce an output record (cf. below). The result is an ever-updating KTable that represents the current (i.e., processing time) result of the join.

The join is computed by (1) updating the internal state of one KTable and (2) performing a lookup for a matching record in the current (i.e., processing time) internal state of the other KTable. This happens in a symmetric way, i.e., for each update of either this or the other input KTable the result gets updated.

For each KTable record that finds a corresponding record in the other KTable's state the provided ValueJoiner will be called to compute a value (with arbitrary type) for the result record. Additionally, for each record of the left KTable that does not find a corresponding record in the right KTable's state the provided ValueJoiner will be called with rightValue = null to compute a value (with arbitrary type) for the result record. The key of the result record is the same as for both joining input records.

Note that records with null values (so-called tombstone records) have delete semantics. For example, for left input tombstones the provided value-joiner is not called but a tombstone record is forwarded directly to delete a record in the result KTable if required (i.e., if there is anything to be deleted).

Input records with null key will be dropped and no join computation is performed. Both input streams (or to be more precise, their underlying source topics) need to have the same number of partitions.

Example:

| thisKTable | thisState | otherKTable | otherState | result updated record    |
| <K1:A>     | <K1:A>    |             |            | <K1:ValueJoiner(A,null)> |
|            | <K1:A>    | <K1:b>      | <K1:b>     | <K1:ValueJoiner(A,b)>    |
| <K1:null>  |           |             | <K1:b>     | <K1:null>                |
|            |           | <K1:null>   |            |                          |

- Specified by: leftJoin in interface KTable<K,V>
- Type Parameters: V1 - the value type of the other KTable; R - the value type of the result KTable
- Parameters: other - the other KTable to be joined with this KTable; joiner - a ValueJoiner that computes the join result for a pair of matching records
- Returns: a KTable that contains join-records for each key and values computed by the given ValueJoiner, one for each matched record-pair with the same key plus one for each non-matching record of the left KTable
- See Also: KTable.join(KTable, ValueJoiner), KTable.outerJoin(KTable, ValueJoiner)
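The left-join semantics above can be sketched as follows; only the right side of a result may be null, so the ValueJoiner needs a single null check. Topic names and value types are illustrative assumptions:

```java
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;

public class LeftJoinSketch {
    public static void main(final String[] args) {
        final StreamsBuilder builder = new StreamsBuilder();
        // Hypothetical co-partitioned source topics keyed by order id.
        final KTable<String, String> orders = builder.table("orders");
        final KTable<String, String> discounts = builder.table("discounts");

        // Left join: every record of the left table produces output; when the
        // right side has no matching key, the joiner receives rightValue = null.
        final KTable<String, String> priced = orders.leftJoin(
                discounts,
                (order, discount) -> discount == null
                        ? order + " (no discount)"
                        : order + " (" + discount + ")");

        priced.toStream().to("priced-orders");
    }
}
```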
-
leftJoin
public <V1,R> KTable<K,R> leftJoin(KTable<K,V1> other, ValueJoiner<? super V,? super V1,? extends R> joiner, Named named)
Description copied from interface: KTable
Join records of this KTable (left input) with another KTable's (right input) records using non-windowed left equi join, with default serializers, deserializers, and state store. The join is a primary key join with join attribute thisKTable.key == otherKTable.key. In contrast to inner-join, all records from the left KTable will produce an output record (cf. below). The result is an ever-updating KTable that represents the current (i.e., processing time) result of the join.

The join is computed by (1) updating the internal state of one KTable and (2) performing a lookup for a matching record in the current (i.e., processing time) internal state of the other KTable. This happens in a symmetric way, i.e., for each update of either this or the other input KTable the result gets updated.

For each KTable record that finds a corresponding record in the other KTable's state the provided ValueJoiner will be called to compute a value (with arbitrary type) for the result record. Additionally, for each record of the left KTable that does not find a corresponding record in the right KTable's state the provided ValueJoiner will be called with rightValue = null to compute a value (with arbitrary type) for the result record. The key of the result record is the same as for both joining input records.

Note that records with null values (so-called tombstone records) have delete semantics. For example, for left input tombstones the provided value-joiner is not called but a tombstone record is forwarded directly to delete a record in the result KTable if required (i.e., if there is anything to be deleted).

Input records with null key will be dropped and no join computation is performed. Both input streams (or to be more precise, their underlying source topics) need to have the same number of partitions.

Example:

| thisKTable | thisState | otherKTable | otherState | result updated record    |
| <K1:A>     | <K1:A>    |             |            | <K1:ValueJoiner(A,null)> |
|            | <K1:A>    | <K1:b>      | <K1:b>     | <K1:ValueJoiner(A,b)>    |
| <K1:null>  |           |             | <K1:b>     | <K1:null>                |
|            |           | <K1:null>   |            |                          |

- Specified by: leftJoin in interface KTable<K,V>
- Type Parameters: V1 - the value type of the other KTable; R - the value type of the result KTable
- Parameters: other - the other KTable to be joined with this KTable; joiner - a ValueJoiner that computes the join result for a pair of matching records; named - a Named config used to name the processor in the topology
- Returns: a KTable that contains join-records for each key and values computed by the given ValueJoiner, one for each matched record-pair with the same key plus one for each non-matching record of the left KTable
- See Also: KTable.join(KTable, ValueJoiner), KTable.outerJoin(KTable, ValueJoiner)
-
leftJoin
public <VO,VR> KTable<K,VR> leftJoin(KTable<K,VO> other, ValueJoiner<? super V,? super VO,? extends VR> joiner, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
Description copied from interface: KTable
Join records of this KTable (left input) with another KTable's (right input) records using non-windowed left equi join, with the Materialized instance for configuration of the key serde, the result table's value serde, and state store. The join is a primary key join with join attribute thisKTable.key == otherKTable.key. In contrast to inner-join, all records from the left KTable will produce an output record (cf. below). The result is an ever-updating KTable that represents the current (i.e., processing time) result of the join.

The join is computed by (1) updating the internal state of one KTable and (2) performing a lookup for a matching record in the current (i.e., processing time) internal state of the other KTable. This happens in a symmetric way, i.e., for each update of either this or the other input KTable the result gets updated.

For each KTable record that finds a corresponding record in the other KTable's state the provided ValueJoiner will be called to compute a value (with arbitrary type) for the result record. Additionally, for each record of the left KTable that does not find a corresponding record in the right KTable's state the provided ValueJoiner will be called with rightValue = null to compute a value (with arbitrary type) for the result record. The key of the result record is the same as for both joining input records.

Note that records with null values (so-called tombstone records) have delete semantics. For example, for left input tombstones the provided value-joiner is not called but a tombstone record is forwarded directly to delete a record in the result KTable if required (i.e., if there is anything to be deleted).

Input records with null key will be dropped and no join computation is performed. Both input streams (or to be more precise, their underlying source topics) need to have the same number of partitions.

Example:

| thisKTable | thisState | otherKTable | otherState | result updated record    |
| <K1:A>     | <K1:A>    |             |            | <K1:ValueJoiner(A,null)> |
|            | <K1:A>    | <K1:b>      | <K1:b>     | <K1:ValueJoiner(A,b)>    |
| <K1:null>  |           |             | <K1:b>     | <K1:null>                |
|            |           | <K1:null>   |            |                          |

- Specified by: leftJoin in interface KTable<K,V>
- Type Parameters: VO - the value type of the other KTable; VR - the value type of the result KTable
- Parameters: other - the other KTable to be joined with this KTable; joiner - a ValueJoiner that computes the join result for a pair of matching records; materialized - an instance of Materialized used to describe how the state store should be materialized. Cannot be null
- Returns: a KTable that contains join-records for each key and values computed by the given ValueJoiner, one for each matched record-pair with the same key plus one for each non-matching record of the left KTable
- See Also: KTable.leftJoin(KTable, ValueJoiner, Materialized), KTable.outerJoin(KTable, ValueJoiner, Materialized)
-
leftJoin
public <VO,VR> KTable<K,VR> leftJoin(KTable<K,VO> other, ValueJoiner<? super V,? super VO,? extends VR> joiner, Named named, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
Description copied from interface: KTable

Join records of this KTable (left input) with another KTable's (right input) records using a non-windowed left equi-join, with the Materialized instance for configuration of the key serde, the result table's value serde, and the state store. The join is a primary key join with join attribute thisKTable.key == otherKTable.key. In contrast to an inner join, all records from the left KTable will produce an output record (cf. below). The result is an ever-updating KTable that represents the current (i.e., processing time) result of the join.

The join is computed by (1) updating the internal state of one KTable and (2) performing a lookup for a matching record in the current (i.e., processing time) internal state of the other KTable. This happens in a symmetric way, i.e., for each update of either this or the other input KTable the result gets updated.

For each KTable record that finds a corresponding record in the other KTable's state, the provided ValueJoiner will be called to compute a value (with arbitrary type) for the result record. Additionally, for each record of the left KTable that does not find a corresponding record in the right KTable's state, the provided ValueJoiner will be called with rightValue = null to compute a value (with arbitrary type) for the result record. The key of the result record is the same as for both joining input records.

Note that records with null values (so-called tombstone records) have delete semantics. For example, for left input tombstones the provided value-joiner is not called, but a tombstone record is forwarded directly to delete a record in the result KTable if required (i.e., if there is anything to be deleted).

Input records with
null key will be dropped and no join computation is performed.

Both input streams (or, to be more precise, their underlying source topics) need to have the same number of partitions.

Example:

thisKTable | thisState | otherKTable | otherState | result updated record
<K1:A>     | <K1:A>    |             |            | <K1:ValueJoiner(A,null)>
           | <K1:A>    | <K1:b>      | <K1:b>     | <K1:ValueJoiner(A,b)>
<K1:null>  |           |             | <K1:b>     | <K1:null>
           |           | <K1:null>   |            |

- Specified by:
leftJoin in interface KTable<K,S>
- Type Parameters:
VO - the value type of the other KTable
VR - the value type of the result KTable
- Parameters:
other - the other KTable to be joined with this KTable
joiner - a ValueJoiner that computes the join result for a pair of matching records
named - a Named config used to name the processor in the topology
materialized - an instance of Materialized used to describe how the state store should be materialized. Cannot be null
- Returns:
- a KTable that contains join-records for each key and values computed by the given ValueJoiner, one for each matched record-pair with the same key plus one for each non-matching record of the left KTable
- See Also:
KTable.join(KTable, ValueJoiner, Materialized), KTable.outerJoin(KTable, ValueJoiner, Materialized)
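The update semantics shown in the example table above can be sketched in plain Java. This is a hypothetical `LeftJoinSketch` class that simulates the documented behavior; it does not use the Kafka Streams runtime, and all names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiFunction;

// Minimal sketch of primary-key left-join update semantics (hypothetical,
// not the Kafka Streams runtime): each update to either side triggers a
// lookup in the other side's current state.
class LeftJoinSketch {
    final Map<String, String> leftState = new HashMap<>();
    final Map<String, String> rightState = new HashMap<>();
    final BiFunction<String, String, String> joiner;

    LeftJoinSketch(BiFunction<String, String, String> joiner) {
        this.joiner = joiner;
    }

    // Left-side update: non-tombstone left records always produce output,
    // joining with the right value if present, or with null otherwise.
    String updateLeft(String key, String value) {
        if (value == null) {              // tombstone: delete and forward null
            leftState.remove(key);
            return null;
        }
        leftState.put(key, value);
        return joiner.apply(value, rightState.get(key));
    }

    // Right-side update: produces output only if the left side has the key;
    // in this sketch, a null return conflates "no output" and "tombstone".
    String updateRight(String key, String value) {
        if (value == null) {
            rightState.remove(key);
        } else {
            rightState.put(key, value);
        }
        String left = leftState.get(key);
        return left == null ? null : joiner.apply(left, rightState.get(key));
    }
}
```

Replaying the four rows of the example table through this sketch reproduces the documented outputs: `<K1:ValueJoiner(A,null)>`, `<K1:ValueJoiner(A,b)>`, a tombstone, and no output.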
-
groupBy
public <K1,V1> KGroupedTable<K1,V1> groupBy(KeyValueMapper<? super K,? super V,KeyValue<K1,V1>> selector)
Description copied from interface: KTable

Re-groups the records of this KTable using the provided KeyValueMapper and default serializers and deserializers. Each KeyValue pair of this KTable is mapped to a new KeyValue pair by applying the provided KeyValueMapper. Re-grouping a KTable is required before an aggregation operator can be applied to the data (cf. KGroupedTable). The KeyValueMapper selects a new key and value (where both may be the same type or a new type). If the new record key is null the record will not be included in the resulting KGroupedTable.

Because a new key is selected, an internal repartitioning topic will be created in Kafka. This topic will be named "${applicationId}-<name>-repartition", where "applicationId" is user-specified in StreamsConfig via parameter APPLICATION_ID_CONFIG, "<name>" is an internally generated name, and "-repartition" is a fixed suffix. You can retrieve all generated internal topic names via Topology.describe().

All data of this KTable will be redistributed through the repartitioning topic by writing all update records to it and re-reading all updated records from it, such that the resulting KGroupedTable is partitioned on the new key.

If the key or value type is changed, it is recommended to use KTable.groupBy(KeyValueMapper, Grouped) instead.
- Specified by:
groupBy in interface KTable<K,S>
- Type Parameters:
K1 - the key type of the result KGroupedTable
V1 - the value type of the result KGroupedTable
- Parameters:
selector - a KeyValueMapper that computes a new grouping key and value to be aggregated
- Returns:
- a KGroupedTable that contains the re-grouped records of the original KTable
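The logical effect of the re-grouping step can be sketched with a plain-Java simulation (hypothetical `GroupBySketch` class, no repartition topic or Kafka runtime involved): each pair is mapped through the selector, pairs with a null new key are dropped, and the rest are grouped by the new key.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiFunction;

// Sketch of groupBy's logical effect (hypothetical names): map each
// (key, value) pair through the selector, drop null new keys, and group
// the mapped values by their new key.
class GroupBySketch {
    static <K, V, K1, V1> Map<K1, List<V1>> groupBy(
            Map<K, V> table,
            BiFunction<K, V, Map.Entry<K1, V1>> selector) {
        Map<K1, List<V1>> grouped = new LinkedHashMap<>();
        for (Map.Entry<K, V> e : table.entrySet()) {
            Map.Entry<K1, V1> mapped = selector.apply(e.getKey(), e.getValue());
            if (mapped == null || mapped.getKey() == null) {
                continue;                        // null new keys are dropped
            }
            grouped.computeIfAbsent(mapped.getKey(), k -> new ArrayList<>())
                   .add(mapped.getValue());
        }
        return grouped;
    }
}
```

In the real DSL the grouping is distributed: the mapped records are written to the repartition topic and re-read so that equal new keys land on the same partition.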
-
groupBy
@Deprecated public <K1,V1> KGroupedTable<K1,V1> groupBy(KeyValueMapper<? super K,? super V,KeyValue<K1,V1>> selector, Serialized<K1,V1> serialized)
Deprecated.

Description copied from interface: KTable

Re-groups the records of this KTable using the provided KeyValueMapper and Serdes as specified by Serialized. Each KeyValue pair of this KTable is mapped to a new KeyValue pair by applying the provided KeyValueMapper. Re-grouping a KTable is required before an aggregation operator can be applied to the data (cf. KGroupedTable). The KeyValueMapper selects a new key and value (where both may be the same type or a new type). If the new record key is null the record will not be included in the resulting KGroupedTable.

Because a new key is selected, an internal repartitioning topic will be created in Kafka. This topic will be named "${applicationId}-<name>-repartition", where "applicationId" is user-specified in StreamsConfig via parameter APPLICATION_ID_CONFIG, "<name>" is an internally generated name, and "-repartition" is a fixed suffix. You can retrieve all generated internal topic names via Topology.describe().

All data of this KTable will be redistributed through the repartitioning topic by writing all update records to it and re-reading all updated records from it, such that the resulting KGroupedTable is partitioned on the new key.
- Specified by:
groupBy in interface KTable<K,S>
- Type Parameters:
K1 - the key type of the result KGroupedTable
V1 - the value type of the result KGroupedTable
- Parameters:
selector - a KeyValueMapper that computes a new grouping key and value to be aggregated
serialized - the Serialized instance used to specify Serdes
- Returns:
- a KGroupedTable that contains the re-grouped records of the original KTable
-
groupBy
public <K1,V1> KGroupedTable<K1,V1> groupBy(KeyValueMapper<? super K,? super V,KeyValue<K1,V1>> selector, Grouped<K1,V1> grouped)
Description copied from interface: KTable

Re-groups the records of this KTable using the provided KeyValueMapper and Serdes as specified by Grouped. Each KeyValue pair of this KTable is mapped to a new KeyValue pair by applying the provided KeyValueMapper. Re-grouping a KTable is required before an aggregation operator can be applied to the data (cf. KGroupedTable). The KeyValueMapper selects a new key and value (where both may be the same type or a new type). If the new record key is null the record will not be included in the resulting KGroupedTable.

Because a new key is selected, an internal repartitioning topic will be created in Kafka. This topic will be named "${applicationId}-<name>-repartition", where "applicationId" is user-specified in StreamsConfig via parameter APPLICATION_ID_CONFIG, and "<name>" is either provided via Grouped.as(String) or an internally generated name. You can retrieve all generated internal topic names via Topology.describe().

All data of this KTable will be redistributed through the repartitioning topic by writing all update records to it and re-reading all updated records from it, such that the resulting KGroupedTable is partitioned on the new key.
- Specified by:
groupBy in interface KTable<K,S>
- Type Parameters:
K1 - the key type of the result KGroupedTable
V1 - the value type of the result KGroupedTable
- Parameters:
selector - a KeyValueMapper that computes a new grouping key and value to be aggregated
grouped - the Grouped instance used to specify Serdes and the name for a repartition topic if repartitioning is required
- Returns:
- a KGroupedTable that contains the re-grouped records of the original KTable
-
valueGetterSupplier
public KTableValueGetterSupplier<K,V> valueGetterSupplier()
-
enableSendingOldValues
public void enableSendingOldValues()
-
join
public <VR,KO,VO> KTable<K,VR> join(KTable<KO,VO> other, java.util.function.Function<V,KO> foreignKeyExtractor, ValueJoiner<V,VO,VR> joiner)
Description copied from interface: KTable

Join records of this KTable with another KTable using a non-windowed inner join. This is a foreign key join, where the joining key is determined by the foreignKeyExtractor.
- Specified by:
join in interface KTable<K,S>
- Type Parameters:
VR - the value type of the result KTable
KO - the key type of the other KTable
VO - the value type of the other KTable
- Parameters:
other - the other KTable to be joined with this KTable. Keyed by KO.
foreignKeyExtractor - a Function that extracts the key (KO) from this table's value (V). If the result is null, the update is ignored as invalid.
joiner - a ValueJoiner that computes the join result for a pair of matching records
- Returns:
- a KTable that contains the result of joining this table with other
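The foreign-key inner-join semantics can be sketched with a plain-Java simulation (hypothetical `FkJoinSketch` class; the real implementation subscribes across partitions rather than doing a local map lookup): for each left record, the extractor picks a key into the right table, and a result is produced only when that key exists there.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BiFunction;
import java.util.function.Function;

// Sketch of foreign-key inner-join semantics (hypothetical names): a result
// record is produced only when the extracted foreign key has a match in the
// right table. Null extracted keys are ignored as invalid.
class FkJoinSketch {
    static <K, V, KO, VO, VR> Map<K, VR> join(
            Map<K, V> left,
            Map<KO, VO> right,
            Function<V, KO> foreignKeyExtractor,
            BiFunction<V, VO, VR> joiner) {
        Map<K, VR> result = new LinkedHashMap<>();
        for (Map.Entry<K, V> e : left.entrySet()) {
            KO fk = foreignKeyExtractor.apply(e.getValue());
            if (fk == null) {
                continue;                 // invalid: update ignored
            }
            VO other = right.get(fk);
            if (other != null) {          // inner join: a match is required
                result.put(e.getKey(), joiner.apply(e.getValue(), other));
            }
        }
        return result;
    }
}
```

Note that, unlike the primary key joins above, the result stays keyed by this table's key K, not by the foreign key KO.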
-
join
public <VR,KO,VO> KTable<K,VR> join(KTable<KO,VO> other, java.util.function.Function<V,KO> foreignKeyExtractor, ValueJoiner<V,VO,VR> joiner, Named named)
Description copied from interface: KTable

Join records of this KTable with another KTable using a non-windowed inner join. This is a foreign key join, where the joining key is determined by the foreignKeyExtractor.
- Specified by:
join in interface KTable<K,S>
- Type Parameters:
VR - the value type of the result KTable
KO - the key type of the other KTable
VO - the value type of the other KTable
- Parameters:
other - the other KTable to be joined with this KTable. Keyed by KO.
foreignKeyExtractor - a Function that extracts the key (KO) from this table's value (V). If the result is null, the update is ignored as invalid.
joiner - a ValueJoiner that computes the join result for a pair of matching records
named - a Named config used to name the processor in the topology
- Returns:
- a KTable that contains the result of joining this table with other
-
join
public <VR,KO,VO> KTable<K,VR> join(KTable<KO,VO> other, java.util.function.Function<V,KO> foreignKeyExtractor, ValueJoiner<V,VO,VR> joiner, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
Description copied from interface: KTable

Join records of this KTable with another KTable using a non-windowed inner join. This is a foreign key join, where the joining key is determined by the foreignKeyExtractor.
- Specified by:
join in interface KTable<K,S>
- Type Parameters:
VR - the value type of the result KTable
KO - the key type of the other KTable
VO - the value type of the other KTable
- Parameters:
other - the other KTable to be joined with this KTable. Keyed by KO.
foreignKeyExtractor - a Function that extracts the key (KO) from this table's value (V). If the result is null, the update is ignored as invalid.
joiner - a ValueJoiner that computes the join result for a pair of matching records
materialized - a Materialized that describes how the StateStore for the resulting KTable should be materialized. Cannot be null
- Returns:
- a KTable that contains the result of joining this table with other
-
join
public <VR,KO,VO> KTable<K,VR> join(KTable<KO,VO> other, java.util.function.Function<V,KO> foreignKeyExtractor, ValueJoiner<V,VO,VR> joiner, Named named, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
Description copied from interface: KTable

Join records of this KTable with another KTable using a non-windowed inner join. This is a foreign key join, where the joining key is determined by the foreignKeyExtractor.
- Specified by:
join in interface KTable<K,S>
- Type Parameters:
VR - the value type of the result KTable
KO - the key type of the other KTable
VO - the value type of the other KTable
- Parameters:
other - the other KTable to be joined with this KTable. Keyed by KO.
foreignKeyExtractor - a Function that extracts the key (KO) from this table's value (V). If the result is null, the update is ignored as invalid.
joiner - a ValueJoiner that computes the join result for a pair of matching records
named - a Named config used to name the processor in the topology
materialized - a Materialized that describes how the StateStore for the resulting KTable should be materialized. Cannot be null
- Returns:
- a KTable that contains the result of joining this table with other
-
leftJoin
public <VR,KO,VO> KTable<K,VR> leftJoin(KTable<KO,VO> other, java.util.function.Function<V,KO> foreignKeyExtractor, ValueJoiner<V,VO,VR> joiner)
Description copied from interface: KTable

Join records of this KTable with another KTable using a non-windowed left join. This is a foreign key join, where the joining key is determined by the foreignKeyExtractor.
- Specified by:
leftJoin in interface KTable<K,S>
- Type Parameters:
VR - the value type of the result KTable
KO - the key type of the other KTable
VO - the value type of the other KTable
- Parameters:
other - the other KTable to be joined with this KTable. Keyed by KO.
foreignKeyExtractor - a Function that extracts the key (KO) from this table's value (V). If the result is null, the update is ignored as invalid.
joiner - a ValueJoiner that computes the join result for a pair of matching records
- Returns:
- a KTable that contains the result of joining this table with other
-
leftJoin
public <VR,KO,VO> KTable<K,VR> leftJoin(KTable<KO,VO> other, java.util.function.Function<V,KO> foreignKeyExtractor, ValueJoiner<V,VO,VR> joiner, Named named)
Description copied from interface: KTable

Join records of this KTable with another KTable using a non-windowed left join. This is a foreign key join, where the joining key is determined by the foreignKeyExtractor.
- Specified by:
leftJoin in interface KTable<K,S>
- Type Parameters:
VR - the value type of the result KTable
KO - the key type of the other KTable
VO - the value type of the other KTable
- Parameters:
other - the other KTable to be joined with this KTable. Keyed by KO.
foreignKeyExtractor - a Function that extracts the key (KO) from this table's value (V). If the result is null, the update is ignored as invalid.
joiner - a ValueJoiner that computes the join result for a pair of matching records
named - a Named config used to name the processor in the topology
- Returns:
- a KTable that contains the result of joining this table with other
-
leftJoin
public <VR,KO,VO> KTable<K,VR> leftJoin(KTable<KO,VO> other, java.util.function.Function<V,KO> foreignKeyExtractor, ValueJoiner<V,VO,VR> joiner, Named named, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
Description copied from interface: KTable

Join records of this KTable with another KTable using a non-windowed left join. This is a foreign key join, where the joining key is determined by the foreignKeyExtractor.
- Specified by:
leftJoin in interface KTable<K,S>
- Type Parameters:
VR - the value type of the result KTable
KO - the key type of the other KTable
VO - the value type of the other KTable
- Parameters:
other - the other KTable to be joined with this KTable. Keyed by KO.
foreignKeyExtractor - a Function that extracts the key (KO) from this table's value (V). If the result is null, the update is ignored as invalid.
joiner - a ValueJoiner that computes the join result for a pair of matching records
named - a Named config used to name the processor in the topology
materialized - a Materialized that describes how the StateStore for the resulting KTable should be materialized. Cannot be null
- Returns:
- a KTable that contains the result of joining this table with other
-
leftJoin
public <VR,KO,VO> KTable<K,VR> leftJoin(KTable<KO,VO> other, java.util.function.Function<V,KO> foreignKeyExtractor, ValueJoiner<V,VO,VR> joiner, Materialized<K,VR,KeyValueStore<org.apache.kafka.common.utils.Bytes,byte[]>> materialized)
Description copied from interface: KTable

Join records of this KTable with another KTable using a non-windowed left join. This is a foreign key join, where the joining key is determined by the foreignKeyExtractor.
- Specified by:
leftJoin in interface KTable<K,S>
- Type Parameters:
VR - the value type of the result KTable
KO - the key type of the other KTable
VO - the value type of the other KTable
- Parameters:
other - the other KTable to be joined with this KTable. Keyed by KO.
foreignKeyExtractor - a Function that extracts the key (KO) from this table's value (V). If the result is null, the update is ignored as invalid.
joiner - a ValueJoiner that computes the join result for a pair of matching records
materialized - a Materialized that describes how the StateStore for the resulting KTable should be materialized. Cannot be null
- Returns:
- a KTable that contains the result of joining this table with other
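The difference from the foreign-key inner join is that every left record with a valid foreign key produces output; when no matching right record exists, the joiner is called with a null right value. A plain-Java sketch (hypothetical `FkLeftJoinSketch` class, not the Kafka Streams runtime):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BiFunction;
import java.util.function.Function;

// Sketch of foreign-key LEFT-join semantics (hypothetical names): unlike the
// inner variant, every left record with a non-null extracted key produces
// output; a missing right record yields joiner(value, null).
class FkLeftJoinSketch {
    static <K, V, KO, VO, VR> Map<K, VR> leftJoin(
            Map<K, V> left,
            Map<KO, VO> right,
            Function<V, KO> foreignKeyExtractor,
            BiFunction<V, VO, VR> joiner) {
        Map<K, VR> result = new LinkedHashMap<>();
        for (Map.Entry<K, V> e : left.entrySet()) {
            KO fk = foreignKeyExtractor.apply(e.getValue());
            if (fk == null) {
                continue;                 // invalid: update ignored
            }
            VO other = right.get(fk);     // may be null: output is still produced
            result.put(e.getKey(), joiner.apply(e.getValue(), other));
        }
        return result;
    }
}
```

As with the inner variant, the result stays keyed by this table's key K, not by the foreign key KO.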
-
-