Class ParquetRowDataBuilder
java.lang.Object
  org.apache.parquet.hadoop.ParquetWriter.Builder<org.apache.flink.table.data.RowData,ParquetRowDataBuilder>
    org.apache.flink.formats.parquet.row.ParquetRowDataBuilder
public class ParquetRowDataBuilder
extends org.apache.parquet.hadoop.ParquetWriter.Builder<org.apache.flink.table.data.RowData,ParquetRowDataBuilder>
A ParquetWriter.Builder for writing Flink RowData records.
Nested Class Summary
Constructor Summary
Constructors:
  ParquetRowDataBuilder(org.apache.parquet.io.OutputFile path, org.apache.flink.table.types.logical.RowType rowType, boolean utcTimestamp)
Method Summary
  static ParquetWriterFactory<org.apache.flink.table.data.RowData> createWriterFactory(org.apache.flink.table.types.logical.RowType rowType, org.apache.hadoop.conf.Configuration conf, boolean utcTimestamp)
      Create a parquet BulkWriter.Factory.
  protected org.apache.parquet.hadoop.api.WriteSupport<org.apache.flink.table.data.RowData> getWriteSupport(org.apache.hadoop.conf.Configuration conf)
  protected ParquetRowDataBuilder self()
Methods inherited from class org.apache.parquet.hadoop.ParquetWriter.Builder:
build, config, enableDictionaryEncoding, enablePageWriteChecksum, enableValidation, getWriteSupport, withAdaptiveBloomFilterEnabled, withAllocator, withBloomFilterCandidateNumber, withBloomFilterEnabled, withBloomFilterEnabled, withBloomFilterFPP, withBloomFilterNDV, withByteStreamSplitEncoding, withCodecFactory, withColumnIndexTruncateLength, withCompressionCodec, withConf, withConf, withDictionaryEncoding, withDictionaryEncoding, withDictionaryPageSize, withEncryption, withExtraMetaData, withMaxBloomFilterBytes, withMaxPaddingSize, withMaxRowCountForPageSizeCheck, withMinRowCountForPageSizeCheck, withPageRowCountLimit, withPageSize, withPageWriteChecksumEnabled, withRowGroupSize, withRowGroupSize, withStatisticsTruncateLength, withValidation, withWriteMode, withWriterVersion
Constructor Details
ParquetRowDataBuilder
public ParquetRowDataBuilder(org.apache.parquet.io.OutputFile path, org.apache.flink.table.types.logical.RowType rowType, boolean utcTimestamp)
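As a minimal sketch of using this constructor directly, the following builds a ParquetWriter for RowData and writes one row. It assumes the flink-table, flink-parquet, parquet-hadoop, and hadoop-common dependencies are on the classpath; the output path and the two-column schema are illustrative, not part of this API.

```java
import org.apache.flink.formats.parquet.row.ParquetRowDataBuilder;
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.StringData;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.util.HadoopOutputFile;

public class ParquetRowDataExample {
    public static void main(String[] args) throws Exception {
        // Illustrative schema: (id INT, name VARCHAR).
        RowType rowType = RowType.of(
                new LogicalType[] {new IntType(), new VarCharType(VarCharType.MAX_LENGTH)},
                new String[] {"id", "name"});

        // Wrap a Hadoop path as the org.apache.parquet.io.OutputFile the constructor expects.
        HadoopOutputFile out =
                HadoopOutputFile.fromPath(new Path("/tmp/rows.parquet"), new Configuration());

        // utcTimestamp = true: timestamp fields are converted using the UTC timezone.
        try (ParquetWriter<RowData> writer =
                new ParquetRowDataBuilder(out, rowType, true).build()) {
            writer.write(GenericRowData.of(1, StringData.fromString("alice")));
        }
    }
}
```

Because the class extends ParquetWriter.Builder, the inherited methods listed above (withCompressionCodec, withRowGroupSize, etc.) can be chained before build().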
Method Details
self
protected ParquetRowDataBuilder self()
Specified by:
  self in class org.apache.parquet.hadoop.ParquetWriter.Builder<org.apache.flink.table.data.RowData,ParquetRowDataBuilder>
getWriteSupport
protected org.apache.parquet.hadoop.api.WriteSupport<org.apache.flink.table.data.RowData> getWriteSupport(org.apache.hadoop.conf.Configuration conf) - Specified by:
getWriteSupportin classorg.apache.parquet.hadoop.ParquetWriter.Builder<org.apache.flink.table.data.RowData,ParquetRowDataBuilder>
createWriterFactory
public static ParquetWriterFactory<org.apache.flink.table.data.RowData> createWriterFactory(org.apache.flink.table.types.logical.RowType rowType, org.apache.hadoop.conf.Configuration conf, boolean utcTimestamp)
Create a parquet BulkWriter.Factory.
Parameters:
  rowType - row type of the parquet table.
  conf - hadoop configuration.
  utcTimestamp - whether to use the UTC timezone or the local timezone for the conversion between epoch time and LocalDateTime. Hive 0.x/1.x/2.x use the local timezone, but Hive 3.x uses the UTC timezone.
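The factory produced here plugs into Flink's bulk-format file sinks. The following is a hedged sketch of wiring it into a FileSink; the upstream DataStream, the output path, and the schema are assumed for illustration, and the flink-connector-files dependency is required in addition to flink-parquet.

```java
import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.core.fs.Path;
import org.apache.flink.formats.parquet.ParquetWriterFactory;
import org.apache.flink.formats.parquet.row.ParquetRowDataBuilder;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.types.logical.IntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;
import org.apache.hadoop.conf.Configuration;

public class ParquetSinkExample {
    // `rows` is an assumed upstream DataStream<RowData>.
    public static void attachSink(DataStream<RowData> rows) {
        // Illustrative schema: (id INT, name VARCHAR).
        RowType rowType = RowType.of(
                new LogicalType[] {new IntType(), new VarCharType(VarCharType.MAX_LENGTH)},
                new String[] {"id", "name"});

        // The factory creates one BulkWriter<RowData> per in-progress part file;
        // utcTimestamp = true selects UTC for timestamp conversion (Hive 3.x semantics).
        ParquetWriterFactory<RowData> factory =
                ParquetRowDataBuilder.createWriterFactory(rowType, new Configuration(), true);

        rows.sinkTo(FileSink.forBulkFormat(new Path("/tmp/parquet-out"), factory).build());
    }
}
```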