All Classes and Interfaces

Class
Description
AbstractColumnReader<VECTOR extends org.apache.flink.table.data.columnar.vector.writable.WritableColumnVector>
Abstract ColumnReader.
A convenience builder to create AvroParquetRecordFormat instances for the different kinds of Avro record types.
Convenience builder to create ParquetWriterFactory instances for the different Avro types.
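As a sketch of how such a builder is typically used (assuming the flink-parquet and Avro dependencies on the classpath; this is an illustrative fragment, not part of the listing above), a ParquetWriterFactory for Avro generic records can be obtained like this:

```java
// Sketch only: assumes flink-parquet and Avro are on the classpath.
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.formats.parquet.ParquetWriterFactory;
import org.apache.flink.formats.parquet.avro.AvroParquetWriters;

public class AvroParquetFactorySketch {
    public static ParquetWriterFactory<GenericRecord> factoryFor(Schema schema) {
        // One factory method per Avro flavor: forGenericRecord,
        // forSpecificRecord, and forReflectRecord cover the different
        // kinds of Avro record types mentioned above.
        return AvroParquetWriters.forGenericRecord(schema);
    }
}
```

The returned factory can then be handed to a bulk-format file sink.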
Boolean ColumnReader.
Represents a collection's position in a repeated type.
ColumnBatchFactory<SplitT extends org.apache.flink.connector.file.src.FileSourceSplit>
Interface to create VectorizedColumnBatch.
ColumnReader<VECTOR extends org.apache.flink.table.data.columnar.vector.writable.WritableColumnVector>
Reads a batch of records for a column from a Parquet data file into a WritableColumnVector.
Double ColumnReader.
FixedLenBytesColumnReader<VECTOR extends org.apache.flink.table.data.columnar.vector.writable.WritableColumnVector>
Fixed-length bytes ColumnReader, used only for decimals.
Delegates the repetition level and definition level.
This ColumnReader is mainly used to read `Group` types in Parquet, such as `Map`, `Array`, and `Row`.
Utilities for calculating nested type positions.
Reader for nested primitive columns.
Always reads zero.
Reads ints from a RunLengthBitPackingHybridDecoder.
Reads ints from a ValuesReader.
A builder to create a ParquetWriter from a Parquet OutputFile.
A simple BulkWriter implementation that wraps a ParquetWriter.
ParquetColumnarRowInputFormat<SplitT extends org.apache.flink.connector.file.src.FileSourceSplit>
A ParquetVectorizedInputFormat that provides a RowData iterator.
This reader is used to read a VectorizedColumnBatch from an input split.
Interface to generate VectorizedColumnBatch instances.
Interface wrapping the underlying Parquet dictionary-encoded and non-dictionary-encoded page readers.
A Parquet file has a self-describing schema, which may differ from the schema the user requires.
The default data column reader for the existing Parquet page reader; works for both dictionary and non-dictionary encoded types, mirroring the dictionary encoding path.
The reader that reads the underlying Timestamp value.
Parquet writes decimals as int32, int64, or binary; this class wraps the real vector to provide the DecimalColumnVector interface.
Parquet dictionary.
Field that represents a Parquet field type.
Parquet format factory for file system.
A ParquetBulkDecodingFormat that implements FileBasedStatisticsReportableInputFormat.
Utilities for Parquet format statistics reporting.
Field that represents a Parquet group field.
Parquet InputFile implementation; calls to ParquetInputFile.newStream() delegate to Flink's FSDataInputStream.
Field that represents a Parquet primitive field.
Convenience builder for creating ParquetWriterFactory instances for Protobuf classes.
ParquetProtoWriters.ParquetProtoWriterBuilder<T extends com.google.protobuf.Message>
The builder for Protobuf ParquetWriter.
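To sketch how this builder is typically reached (assuming a generated Protobuf class `MyProto` and Flink's FileSink, both outside this listing), a ParquetWriterFactory is created via ParquetProtoWriters.forType and handed to a bulk-format sink:

```java
// Sketch only: assumes a generated Protobuf class MyProto and the
// flink-parquet / flink-connector-files dependencies on the classpath.
import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.parquet.ParquetWriterFactory;
import org.apache.flink.formats.parquet.protobuf.ParquetProtoWriters;

public class ProtoParquetSinkSketch {
    public static FileSink<MyProto> buildSink(String outputDir) {
        // forType(...) creates a ParquetWriterFactory whose writers wrap a
        // Protobuf-aware ParquetWriter for the given message class.
        ParquetWriterFactory<MyProto> factory =
                ParquetProtoWriters.forType(MyProto.class);
        return FileSink.forBulkFormat(new Path(outputDir), factory).build();
    }
}
```

The factory is serializable, so it can be shipped to the sink's writer tasks.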
A ParquetWriter.Builder for RowData.
Flink Row ParquetBuilder.
Writes a record with the expected schema to the Parquet API so that it can be written to a file.
Schema converter that converts Parquet schemas to and from Flink internal types.
Utility for generating a ParquetColumnarRowSplitReader.
ParquetVectorizedInputFormat<T,SplitT extends org.apache.flink.connector.file.src.FileSourceSplit>
A Parquet BulkFormat that reads data from files into a VectorizedColumnBatch in vectorized mode.
Reader batch that provides writing and reading capabilities.
A factory that creates a Parquet BulkWriter.
Represents a struct's position in a repeated type.
Wraps a Configuration in a serializable class.
Timestamp ColumnReader.