Class StreamingSink

java.lang.Object
org.apache.flink.connector.file.table.stream.StreamingSink

@Internal public class StreamingSink extends Object
Helper for creating a streaming file sink.
  • Method Summary

    • static <T> org.apache.flink.streaming.api.datastream.DataStream<PartitionCommitInfo>
      compactionWriter(org.apache.flink.table.connector.ProviderContext providerContext, org.apache.flink.streaming.api.datastream.DataStream<T> inputStream, long bucketCheckInterval, org.apache.flink.streaming.api.functions.sink.filesystem.legacy.StreamingFileSink.BucketsBuilder<T,String,? extends org.apache.flink.streaming.api.functions.sink.filesystem.legacy.StreamingFileSink.BucketsBuilder<T,String,?>> bucketsBuilder, FileSystemFactory fsFactory, org.apache.flink.core.fs.Path path, CompactReader.Factory<T> readFactory, long targetFileSize, int parallelism, boolean parallelismConfigured)
      Create a file writer with compaction operators from an input stream.
    • static org.apache.flink.streaming.api.datastream.DataStreamSink<?>
      sink(org.apache.flink.table.connector.ProviderContext providerContext, org.apache.flink.streaming.api.datastream.DataStream<PartitionCommitInfo> writer, org.apache.flink.core.fs.Path locationPath, org.apache.flink.table.catalog.ObjectIdentifier identifier, List<String> partitionKeys, TableMetaStoreFactory msFactory, FileSystemFactory fsFactory, org.apache.flink.configuration.Configuration options)
      Create a sink from a file writer.
    • static <T> org.apache.flink.streaming.api.datastream.DataStream<PartitionCommitInfo>
      writer(org.apache.flink.table.connector.ProviderContext providerContext, org.apache.flink.streaming.api.datastream.DataStream<T> inputStream, long bucketCheckInterval, org.apache.flink.streaming.api.functions.sink.filesystem.legacy.StreamingFileSink.BucketsBuilder<T,String,? extends org.apache.flink.streaming.api.functions.sink.filesystem.legacy.StreamingFileSink.BucketsBuilder<T,String,?>> bucketsBuilder, int parallelism, List<String> partitionKeys, org.apache.flink.configuration.Configuration conf, boolean parallelismConfigured)
      Create a file writer from an input stream.

    Methods inherited from class java.lang.Object

    clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
  • Method Details

    • writer

      public static <T> org.apache.flink.streaming.api.datastream.DataStream<PartitionCommitInfo> writer(org.apache.flink.table.connector.ProviderContext providerContext, org.apache.flink.streaming.api.datastream.DataStream<T> inputStream, long bucketCheckInterval, org.apache.flink.streaming.api.functions.sink.filesystem.legacy.StreamingFileSink.BucketsBuilder<T,String,? extends org.apache.flink.streaming.api.functions.sink.filesystem.legacy.StreamingFileSink.BucketsBuilder<T,String,?>> bucketsBuilder, int parallelism, List<String> partitionKeys, org.apache.flink.configuration.Configuration conf, boolean parallelismConfigured)
      Create a file writer from an input stream. This is similar to StreamingFileSink; in addition, it can emit PartitionCommitInfo downstream.
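      An illustrative sketch of calling writer, following the signature above. It assumes Flink and the file connector are on the classpath; providerContext, inputStream, and bucketsBuilder are placeholders for objects the surrounding connector code would construct, and the partition key "dt" is a hypothetical column:

      ```java
      // Sketch only: providerContext, inputStream (DataStream<T>) and
      // bucketsBuilder are assumed to be built elsewhere by the connector.
      DataStream<PartitionCommitInfo> committable =
              StreamingSink.writer(
                      providerContext,       // ProviderContext supplied by the planner
                      inputStream,           // records to write
                      60_000L,               // bucketCheckInterval in milliseconds
                      bucketsBuilder,        // legacy StreamingFileSink.BucketsBuilder
                      4,                     // writer parallelism
                      Collections.singletonList("dt"), // hypothetical partition key
                      new Configuration(),   // connector options
                      true);                 // parallelism was explicitly configured
      ```

      The resulting stream of PartitionCommitInfo is what the sink method below consumes.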
    • compactionWriter

      public static <T> org.apache.flink.streaming.api.datastream.DataStream<PartitionCommitInfo> compactionWriter(org.apache.flink.table.connector.ProviderContext providerContext, org.apache.flink.streaming.api.datastream.DataStream<T> inputStream, long bucketCheckInterval, org.apache.flink.streaming.api.functions.sink.filesystem.legacy.StreamingFileSink.BucketsBuilder<T,String,? extends org.apache.flink.streaming.api.functions.sink.filesystem.legacy.StreamingFileSink.BucketsBuilder<T,String,?>> bucketsBuilder, FileSystemFactory fsFactory, org.apache.flink.core.fs.Path path, CompactReader.Factory<T> readFactory, long targetFileSize, int parallelism, boolean parallelismConfigured)
      Create a file writer with compaction operators from an input stream. In addition, it can emit PartitionCommitInfo downstream.
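      A hedged sketch of the compacting variant; beyond the writer arguments, fsFactory, path, and readFactory stand in for the connector-provided FileSystemFactory, the table location, and a CompactReader.Factory, and the 128 MiB target size is an arbitrary example value:

      ```java
      // Sketch only: placeholder inputs as above, plus compaction-specific ones.
      DataStream<PartitionCommitInfo> committable =
              StreamingSink.compactionWriter(
                      providerContext,
                      inputStream,
                      60_000L,               // bucketCheckInterval (ms)
                      bucketsBuilder,
                      fsFactory,             // FileSystemFactory for the target filesystem
                      path,                  // org.apache.flink.core.fs.Path of the table
                      readFactory,           // CompactReader.Factory<T> to re-read files
                      128L * 1024 * 1024,    // targetFileSize: example value of 128 MiB
                      4,                     // parallelism
                      true);                 // parallelismConfigured
      ```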
    • sink

      public static org.apache.flink.streaming.api.datastream.DataStreamSink<?> sink(org.apache.flink.table.connector.ProviderContext providerContext, org.apache.flink.streaming.api.datastream.DataStream<PartitionCommitInfo> writer, org.apache.flink.core.fs.Path locationPath, org.apache.flink.table.catalog.ObjectIdentifier identifier, List<String> partitionKeys, TableMetaStoreFactory msFactory, FileSystemFactory fsFactory, org.apache.flink.configuration.Configuration options)
      Create a sink from a file writer. Whether a node that commits partitions is added is decided according to the given options.
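      A sketch of completing the pipeline by feeding a writer's output into sink; identifier, msFactory, and fsFactory are placeholders for the catalog ObjectIdentifier, TableMetaStoreFactory, and FileSystemFactory, and the path is a hypothetical table location:

      ```java
      // Sketch only: committable is the DataStream<PartitionCommitInfo>
      // produced by writer(...) or compactionWriter(...).
      DataStreamSink<?> finalSink =
              StreamingSink.sink(
                      providerContext,
                      committable,
                      new Path("/warehouse/db/t"), // hypothetical table location
                      identifier,                  // catalog ObjectIdentifier of the table
                      Collections.singletonList("dt"),
                      msFactory,                   // TableMetaStoreFactory
                      fsFactory,                   // FileSystemFactory
                      new Configuration());        // options controlling partition commit
      ```

      The options argument is where partition-commit behavior is configured, which determines whether the committing node is added.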