Class FileSource<T>

Type Parameters:
T - The type of the events/records produced by this source.
All Implemented Interfaces:
Serializable, org.apache.flink.api.connector.source.DynamicParallelismInference, org.apache.flink.api.connector.source.Source<T,FileSourceSplit,PendingSplitsCheckpoint<FileSourceSplit>>, org.apache.flink.api.connector.source.SourceReaderFactory<T,FileSourceSplit>, org.apache.flink.api.java.typeutils.ResultTypeQueryable<T>

@PublicEvolving public final class FileSource<T> extends AbstractFileSource<T,FileSourceSplit> implements org.apache.flink.api.connector.source.DynamicParallelismInference
A unified data source that reads files - both in batch and in streaming mode.

This source supports all (distributed) file systems and object stores that can be accessed via the Flink's FileSystem class.

Start building a file source via one of the following calls:

  • forRecordStreamFormat(StreamFormat, Path...)
  • forBulkFileFormat(BulkFormat, Path...)

Each of these creates a FileSource.FileSourceBuilder on which you can configure all the properties of the file source.

Batch and Streaming

This source supports both bounded/batch and continuous/streaming data inputs. For the bounded/batch case, the file source processes all files under the given path(s). In the continuous/streaming case, the source periodically checks the paths for new files and will start reading those.

When you start creating a file source (via the FileSource.FileSourceBuilder created through one of the above-mentioned methods) the source is by default in bounded/batch mode. Call AbstractFileSource.AbstractFileSourceBuilder.monitorContinuously(Duration) to put the source into continuous streaming mode.
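As a sketch of the two modes (assuming the TextLineInputFormat stream format that ships with the file connector, and a hypothetical input path), the same builder produces either a bounded or a continuously monitoring source:

```java
import java.time.Duration;

import org.apache.flink.connector.file.src.FileSource;
import org.apache.flink.connector.file.src.reader.TextLineInputFormat;
import org.apache.flink.core.fs.Path;

public class FileSourceModes {
    public static void main(String[] args) {
        // Bounded/batch mode (the default): process all files under the path once.
        FileSource<String> bounded =
                FileSource.forRecordStreamFormat(new TextLineInputFormat(), new Path("/data/in"))
                        .build();

        // Continuous/streaming mode: re-check the path for new files every 10 seconds.
        FileSource<String> continuous =
                FileSource.forRecordStreamFormat(new TextLineInputFormat(), new Path("/data/in"))
                        .monitorContinuously(Duration.ofSeconds(10))
                        .build();
    }
}
```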

Format Types

The reading of each file happens through file readers defined by file formats. These define the parsing logic for the contents of the file. The source supports multiple format classes, whose interfaces trade off simplicity of implementation against flexibility and efficiency.

  • A StreamFormat reads the contents of a file from a file stream. It is the simplest format to implement, and provides many features out-of-the-box (like checkpointing logic) but is limited in the optimizations it can apply (such as object reuse, batching, etc.).
  • A BulkFormat reads batches of records from a file at a time. It is the most "low level" format to implement, but offers the greatest flexibility to optimize the implementation.

Discovering / Enumerating Files

The way that the source lists the files to be processed is defined by the FileEnumerator. The FileEnumerator is responsible for selecting the relevant files (for example, filtering out hidden files) and for optionally splitting files into multiple regions (= file source splits) that can be read in parallel.
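For illustration (a sketch, not the only configuration), the enumerator can be swapped on the builder; here the non-splittable default enumerator is set explicitly, so each file yields exactly one split even for a splittable format (the path is hypothetical):

```java
import org.apache.flink.connector.file.src.FileSource;
import org.apache.flink.connector.file.src.reader.TextLineInputFormat;
import org.apache.flink.core.fs.Path;

public class EnumeratorConfig {
    public static void main(String[] args) {
        FileSource<String> source =
                FileSource.forRecordStreamFormat(new TextLineInputFormat(), new Path("/data/in"))
                        // Force one split per file instead of splitting by storage block.
                        .setFileEnumerator(FileSource.DEFAULT_NON_SPLITTABLE_FILE_ENUMERATOR)
                        .build();
    }
}
```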

  • Field Details

    • DEFAULT_SPLIT_ASSIGNER

      public static final FileSplitAssigner.Provider DEFAULT_SPLIT_ASSIGNER
      The default split assigner, a lazy locality-aware assigner.
    • DEFAULT_SPLITTABLE_FILE_ENUMERATOR

      public static final FileEnumerator.Provider DEFAULT_SPLITTABLE_FILE_ENUMERATOR
      The default file enumerator used for splittable formats. The enumerator recursively enumerates files, splits files that consist of multiple distributed storage blocks into multiple splits, and filters out hidden files (files starting with '.' or '_'). Files with suffixes of common compression formats (for example '.gzip', '.bz2', '.xz', '.zip', ...) will not be split.
    • DEFAULT_NON_SPLITTABLE_FILE_ENUMERATOR

      public static final FileEnumerator.Provider DEFAULT_NON_SPLITTABLE_FILE_ENUMERATOR
      The default file enumerator used for non-splittable formats. The enumerator recursively enumerates files, creates one split per file, and filters out hidden files (files starting with '.' or '_').
  • Method Details

    • getSplitSerializer

      public org.apache.flink.core.io.SimpleVersionedSerializer<FileSourceSplit> getSplitSerializer()
      Specified by:
      getSplitSerializer in interface org.apache.flink.api.connector.source.Source<T,FileSourceSplit,PendingSplitsCheckpoint<FileSourceSplit>>
      Specified by:
      getSplitSerializer in class AbstractFileSource<T,FileSourceSplit>
    • inferParallelism

      public int inferParallelism(org.apache.flink.api.connector.source.DynamicParallelismInference.Context dynamicParallelismContext)
      Specified by:
      inferParallelism in interface org.apache.flink.api.connector.source.DynamicParallelismInference
    • forRecordStreamFormat

      public static <T> FileSource.FileSourceBuilder<T> forRecordStreamFormat(StreamFormat<T> streamFormat, org.apache.flink.core.fs.Path... paths)
      Builds a new FileSource using a StreamFormat to read record-by-record from a file stream.

      When possible, stream-based formats are generally preferable to bulk formats: they are easier to implement and provide better default behavior around I/O batching and progress tracking (checkpoints).

      Stream formats also automatically de-compress files based on the file extension. This supports files ending in ".deflate" (Deflate), ".xz" (XZ), ".bz2" (BZip2), ".gz", ".gzip" (GZip).
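      A minimal end-to-end sketch of this entry point, assuming the TextLineInputFormat shipped with the file connector and a hypothetical input path:

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.connector.file.src.FileSource;
import org.apache.flink.connector.file.src.reader.TextLineInputFormat;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class TextLinesJob {
    public static void main(String[] args) throws Exception {
        final StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Read each file line-by-line; compressed files (.gz, .bz2, .xz, ...)
        // are decompressed automatically based on their extension.
        final FileSource<String> source =
                FileSource.forRecordStreamFormat(new TextLineInputFormat(), new Path("/data/lines"))
                        .build();

        DataStream<String> lines =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "file-source");

        lines.print();
        env.execute("text-lines");
    }
}
```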

    • forBulkFileFormat

      public static <T> FileSource.FileSourceBuilder<T> forBulkFileFormat(BulkFormat<T,FileSourceSplit> bulkFormat, org.apache.flink.core.fs.Path... paths)
      Builds a new FileSource using a BulkFormat to read batches of records from files.

      Examples of bulk readers are compressed and vectorized formats such as ORC or Parquet.
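      A sketch of wiring up this entry point; the BulkFormat instance is taken as a parameter here because a concrete implementation (e.g. from Flink's Parquet or ORC format modules) is assumed rather than shown:

```java
import org.apache.flink.connector.file.src.FileSource;
import org.apache.flink.connector.file.src.FileSourceSplit;
import org.apache.flink.connector.file.src.reader.BulkFormat;
import org.apache.flink.core.fs.Path;

public class BulkSourceFactory {
    /**
     * Builds a bounded FileSource over the given directory using a caller-supplied
     * bulk format. The reader created from the format returns batches of records
     * rather than single records, which enables vectorized reading.
     */
    static <T> FileSource<T> buildBulkSource(
            BulkFormat<T, FileSourceSplit> format, Path dir) {
        return FileSource.forBulkFileFormat(format, dir).build();
    }
}
```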