public class AvroParquetWriter<T extends org.apache.avro.generic.IndexedRecord> extends ParquetWriter<T>
Fields inherited from class ParquetWriter:
DEFAULT_BLOCK_SIZE, DEFAULT_COMPRESSION_CODEC_NAME, DEFAULT_IS_DICTIONARY_ENABLED, DEFAULT_IS_VALIDATING_ENABLED, DEFAULT_PAGE_SIZE, DEFAULT_WRITER_VERSION

| Constructor | Description |
|---|---|
| AvroParquetWriter(org.apache.hadoop.fs.Path file, org.apache.avro.Schema avroSchema) | Create a new AvroParquetWriter. |
| AvroParquetWriter(org.apache.hadoop.fs.Path file, org.apache.avro.Schema avroSchema, CompressionCodecName compressionCodecName, int blockSize, int pageSize) | Create a new AvroParquetWriter. |
| AvroParquetWriter(org.apache.hadoop.fs.Path file, org.apache.avro.Schema avroSchema, CompressionCodecName compressionCodecName, int blockSize, int pageSize, boolean enableDictionary) | Create a new AvroParquetWriter. |
| AvroParquetWriter(org.apache.hadoop.fs.Path file, org.apache.avro.Schema avroSchema, CompressionCodecName compressionCodecName, int blockSize, int pageSize, boolean enableDictionary, org.apache.hadoop.conf.Configuration conf) | Create a new AvroParquetWriter. |
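A minimal usage sketch for the two-argument constructor, which writes with the inherited defaults (no compression; default block and page sizes). The schema, file name, and record fields here are illustrative, and the import path assumes the pre-Apache `parquet.avro` package; newer releases relocate the class to `org.apache.parquet.avro`.

```java
import java.io.IOException;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.fs.Path;
import parquet.avro.AvroParquetWriter;

public class WriteExample {
    public static void main(String[] args) throws IOException {
        // Hypothetical two-field record schema; any Avro schema works.
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
          + "{\"name\":\"name\",\"type\":\"string\"},"
          + "{\"name\":\"age\",\"type\":\"int\"}]}");

        // Two-argument constructor: default block size, default page size,
        // and no compression, as documented in the constructor detail.
        AvroParquetWriter<GenericRecord> writer =
            new AvroParquetWriter<GenericRecord>(new Path("users.parquet"), schema);
        try {
            GenericRecord user = new GenericData.Record(schema);
            user.put("name", "alice");
            user.put("age", 30);
            writer.write(user); // write(T) is inherited from ParquetWriter
        } finally {
            writer.close();     // close() flushes the footer; required for a valid file
        }
    }
}
```

Closing the writer is what finalizes the Parquet footer, so the `try`/`finally` (or try-with-resources on versions where ParquetWriter is Closeable) is not optional.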
public AvroParquetWriter(org.apache.hadoop.fs.Path file,
                         org.apache.avro.Schema avroSchema,
                         CompressionCodecName compressionCodecName,
                         int blockSize,
                         int pageSize)
                  throws IOException

Create a new AvroParquetWriter.

Parameters:
file - The file name to write to.
avroSchema - The schema to write with.
compressionCodecName - Compression codec to use, or CompressionCodecName.UNCOMPRESSED.
blockSize - The block size threshold.
pageSize - See the Parquet write-up; blocks are subdivided into pages for alignment and other purposes.

Throws:
IOException

public AvroParquetWriter(org.apache.hadoop.fs.Path file,
                         org.apache.avro.Schema avroSchema,
                         CompressionCodecName compressionCodecName,
                         int blockSize,
                         int pageSize,
                         boolean enableDictionary)
                  throws IOException

Create a new AvroParquetWriter.

Parameters:
file - The file name to write to.
avroSchema - The schema to write with.
compressionCodecName - Compression codec to use, or CompressionCodecName.UNCOMPRESSED.
blockSize - The block size threshold.
pageSize - See the Parquet write-up; blocks are subdivided into pages for alignment and other purposes.
enableDictionary - Whether to use a dictionary to compress columns.

Throws:
IOException

public AvroParquetWriter(org.apache.hadoop.fs.Path file,
org.apache.avro.Schema avroSchema)
throws IOException
Create a new AvroParquetWriter. The default block size is 50 MB. The default page size is 1 MB. Default compression is no compression. (Inherited from ParquetWriter.)

Parameters:
file - The file name to write to.
avroSchema - The schema to write with.

Throws:
IOException

public AvroParquetWriter(org.apache.hadoop.fs.Path file,
org.apache.avro.Schema avroSchema,
CompressionCodecName compressionCodecName,
int blockSize,
int pageSize,
boolean enableDictionary,
org.apache.hadoop.conf.Configuration conf)
throws IOException
Create a new AvroParquetWriter.

Parameters:
file - The file name to write to.
avroSchema - The schema to write with.
compressionCodecName - Compression codec to use, or CompressionCodecName.UNCOMPRESSED.
blockSize - The block size threshold.
pageSize - See the Parquet write-up; blocks are subdivided into pages for alignment and other purposes.
enableDictionary - Whether to use a dictionary to compress columns.
conf - The Configuration to use.

Throws:
IOException

Copyright © 2015 The Apache Software Foundation. All Rights Reserved.
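A sketch of the fully specified constructor, overriding every default. The codec choice, sizes, and schema are illustrative assumptions; the `parquet.avro` / `parquet.hadoop.metadata` import paths assume the pre-Apache package layout and become `org.apache.parquet.*` in newer releases.

```java
import java.io.IOException;

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import parquet.avro.AvroParquetWriter;
import parquet.hadoop.metadata.CompressionCodecName;

public class TunedWriteExample {
    public static void main(String[] args) throws IOException {
        // Hypothetical one-field schema for illustration.
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Event\",\"fields\":["
          + "{\"name\":\"id\",\"type\":\"long\"}]}");

        Configuration conf = new Configuration();

        // Seven-argument constructor: Snappy compression, 128 MB blocks,
        // 1 MB pages, dictionary encoding enabled, explicit Configuration.
        AvroParquetWriter<GenericRecord> writer = new AvroParquetWriter<GenericRecord>(
            new Path("events.parquet"),
            schema,
            CompressionCodecName.SNAPPY,
            128 * 1024 * 1024,  // blockSize threshold
            1024 * 1024,        // pageSize
            true,               // enableDictionary
            conf);
        try {
            GenericRecord event = new GenericData.Record(schema);
            event.put("id", 1L);
            writer.write(event);
        } finally {
            writer.close();
        }
    }
}
```

Passing a Configuration is useful when the Path points at a non-default filesystem (e.g. HDFS), since the writer resolves the filesystem through it.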