
class ParquetPartitionReader extends FilePartitionReaderBase with ParquetPartitionReaderBase

A PartitionReader that reads a Parquet file split on the GPU.

Efficiently reading a Parquet split on the GPU requires re-constructing, in memory, a Parquet file that contains only the column chunks that are needed. This avoids sending unnecessary data to the GPU and saves GPU memory.
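The clipping idea can be sketched with simplified stand-in types. `ColumnChunk` and `Block` here are hypothetical illustrations, not the real parquet-mr `BlockMetaData` API: only the chunks for the requested columns are kept, so the Parquet file reconstructed in host memory is smaller than the original.

```scala
// Hypothetical, simplified model of the clipping step: keep only the
// column chunks whose column names are in the requested set.
case class ColumnChunk(name: String, sizeBytes: Long)
case class Block(chunks: Seq[ColumnChunk])

def clipBlocks(blocks: Seq[Block], wanted: Set[String]): Seq[Block] =
  blocks.map(b => Block(b.chunks.filter(c => wanted.contains(c.name))))
```

For example, clipping a block holding columns `a` and `b` to the set `Set("a")` yields a block holding only `a`, so column `b`'s data is never copied to the GPU.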

Linear Supertypes
ParquetPartitionReaderBase, MultiFileReaderFunctions, FilePartitionReaderBase, Arm, ScanWithMetrics, Logging, PartitionReader[ColumnarBatch], Closeable, AutoCloseable, AnyRef, Any

Instance Constructors

  1. new ParquetPartitionReader(conf: Configuration, split: PartitionedFile, filePath: Path, clippedBlocks: Iterable[BlockMetaData], clippedParquetSchema: MessageType, isSchemaCaseSensitive: Boolean, readDataSchema: StructType, debugDumpPrefix: String, maxReadBatchSizeRows: Integer, maxReadBatchSizeBytes: Long, execMetrics: Map[String, GpuMetric], isCorrectedInt96RebaseMode: Boolean, isCorrectedRebaseMode: Boolean, hasInt96Timestamps: Boolean, useFieldId: Boolean)

    conf

    the Hadoop configuration

    split

    the file split to read

    filePath

    the path to the Parquet file

    clippedBlocks

    the block metadata from the original Parquet file that has been clipped to only contain the column chunks to be read

    clippedParquetSchema

    the Parquet schema from the original Parquet file that has been clipped to contain only the columns to be read

    readDataSchema

    the Spark schema describing what will be read

    debugDumpPrefix

    a path prefix to use for dumping the fabricated Parquet data or null

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. def addPartitionValues(batch: Option[ColumnarBatch], inPartitionValues: InternalRow, partitionSchema: StructType): Option[ColumnarBatch]
    Attributes
    protected
    Definition Classes
    MultiFileReaderFunctions
  5. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  6. var batch: Option[ColumnarBatch]
    Attributes
    protected
    Definition Classes
    FilePartitionReaderBase
  7. def calculateParquetFooterSize(currentChunkedBlocks: Seq[BlockMetaData], schema: MessageType): Long
    Attributes
    protected
    Definition Classes
    ParquetPartitionReaderBase
    Annotations
    @nowarn()
  8. def calculateParquetOutputSize(currentChunkedBlocks: Seq[BlockMetaData], schema: MessageType, handleCoalesceFiles: Boolean): Long
    Attributes
    protected
    Definition Classes
    ParquetPartitionReaderBase
  9. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native() @HotSpotIntrinsicCandidate()
  10. def close(): Unit
    Definition Classes
    FilePartitionReaderBase → Closeable → AutoCloseable
  11. def closeOnExcept[T <: AutoCloseable, V](r: Option[T])(block: (Option[T]) ⇒ V): V

Executes the provided code block, closing the resources only if an exception occurs

    Definition Classes
    Arm
  12. def closeOnExcept[T <: AutoCloseable, V](r: ArrayBuffer[T])(block: (ArrayBuffer[T]) ⇒ V): V

Executes the provided code block, closing the resources only if an exception occurs

    Definition Classes
    Arm
  13. def closeOnExcept[T <: AutoCloseable, V](r: Array[T])(block: (Array[T]) ⇒ V): V

Executes the provided code block, closing the resources only if an exception occurs

    Definition Classes
    Arm
  14. def closeOnExcept[T <: AutoCloseable, V](r: Seq[T])(block: (Seq[T]) ⇒ V): V

Executes the provided code block, closing the resources only if an exception occurs

    Definition Classes
    Arm
  15. def closeOnExcept[T <: AutoCloseable, V](r: T)(block: (T) ⇒ V): V

Executes the provided code block, closing the resource only if an exception occurs

    Definition Classes
    Arm
  16. def computeBlockMetaData(blocks: Seq[BlockMetaData], realStartOffset: Long, copyRangesToUpdate: Option[ArrayBuffer[CopyRange]] = None): Seq[BlockMetaData]

Computes new block metadata to reflect where the blocks and columns will appear in the computed Parquet file.

    blocks

    block metadata from the original file(s) that will appear in the computed file

    realStartOffset

    starting file offset of the first block

    copyRangesToUpdate

    optional buffer to update with ranges of column data to copy

    returns

    updated block metadata

    Attributes
    protected
    Definition Classes
    ParquetPartitionReaderBase
    Annotations
    @nowarn()
  17. val conf: Configuration
  18. def copyBlocksData(in: FSDataInputStream, out: HostMemoryOutputStream, blocks: Seq[BlockMetaData], realStartOffset: Long): Seq[BlockMetaData]

Copies the data corresponding to the clipped blocks in the original file and computes the block metadata for the output. The output blocks will contain the same column chunk metadata but with the file offsets updated to reflect the new position of the column data as written to the output.

    in

    the input stream for the original Parquet file

    out

    the output stream to receive the data

    blocks

    block metadata from the original file that will appear in the computed file

    realStartOffset

    starting file offset of the first block

    returns

    updated block metadata corresponding to the output

    Attributes
    protected
    Definition Classes
    ParquetPartitionReaderBase
  19. val copyBufferSize: Int
    Definition Classes
    ParquetPartitionReaderBase
  20. def copyDataRange(range: CopyRange, in: FSDataInputStream, out: OutputStream, copyBuffer: Array[Byte]): Unit
    Attributes
    protected
    Definition Classes
    ParquetPartitionReaderBase
  21. def currentMetricsValues(): Array[CustomTaskMetric]
    Definition Classes
    PartitionReader
  22. def dumpDataToFile(hmb: HostMemoryBuffer, dataLength: Long, splits: Array[PartitionedFile], debugDumpPrefix: Option[String] = None, format: Option[String] = None): Unit

Dumps the data from a HostMemoryBuffer to a file named by debugDumpPrefix + random + format

    hmb

    host data to be dumped

    dataLength

    data size

    splits

    PartitionedFile to be handled

    debugDumpPrefix

the file name prefix; if None, no data is dumped

    format

the file name suffix; if None, no data is dumped

    Attributes
    protected
    Definition Classes
    FilePartitionReaderBase
  23. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  24. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  25. val execMetrics: Map[String, GpuMetric]
  26. def fileSystemBytesRead(): Long
    Attributes
    protected
    Definition Classes
    MultiFileReaderFunctions
    Annotations
    @nowarn()
  27. def freeOnExcept[T <: RapidsBuffer, V](r: T)(block: (T) ⇒ V): V

Executes the provided code block, freeing the RapidsBuffer only if an exception occurs

    Definition Classes
    Arm
  28. def get(): ColumnarBatch
    Definition Classes
    FilePartitionReaderBase → PartitionReader
  29. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  30. def getParquetOptions(clippedSchema: MessageType, useFieldId: Boolean): ParquetOptions
    Definition Classes
    ParquetPartitionReaderBase
  31. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  32. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  33. def initializeLogIfNecessary(isInterpreter: Boolean): Unit
    Attributes
    protected
    Definition Classes
    Logging
  34. var isDone: Boolean
    Attributes
    protected
    Definition Classes
    FilePartitionReaderBase
  35. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  36. val isSchemaCaseSensitive: Boolean
  37. def isTraceEnabled(): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  38. def log: Logger
    Attributes
    protected
    Definition Classes
    Logging
  39. def logDebug(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  40. def logDebug(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  41. def logError(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  42. def logError(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  43. def logInfo(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  44. def logInfo(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  45. def logName: String
    Attributes
    protected
    Definition Classes
    Logging
  46. def logTrace(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  47. def logTrace(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  48. def logWarning(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  49. def logWarning(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  50. var maxDeviceMemory: Long
    Attributes
    protected
    Definition Classes
    FilePartitionReaderBase
  51. val metrics: Map[String, GpuMetric]
    Definition Classes
    ScanWithMetrics
  52. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  53. def next(): Boolean
    Definition Classes
    ParquetPartitionReader → PartitionReader
  54. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  55. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  56. def populateCurrentBlockChunk(blockIter: BufferedIterator[BlockMetaData], maxReadBatchSizeRows: Int, maxReadBatchSizeBytes: Long): Seq[BlockMetaData]
    Attributes
    protected
    Definition Classes
    ParquetPartitionReaderBase
  57. val readDataSchema: StructType
  58. def readPartFile(blocks: Seq[BlockMetaData], clippedSchema: MessageType, filePath: Path): (HostMemoryBuffer, Long)
    Attributes
    protected
    Definition Classes
    ParquetPartitionReaderBase
  59. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  60. def toCudfColumnNames(readDataSchema: StructType, fileSchema: MessageType, isCaseSensitive: Boolean, useFieldId: Boolean): Seq[String]

Takes case sensitivity into consideration when getting the column names to read before sending the Parquet-formatted buffer to cuDF. Also clips the column names if useFieldId is true.

    readDataSchema

    Spark schema to read

    fileSchema

the schema of the Parquet-formatted buffer, with unmatched columns already removed

    isCaseSensitive

whether column name matching is case sensitive

    useFieldId

whether spark.sql.parquet.fieldId.read.enabled is enabled

    returns

a sequence of column names following the order of readDataSchema

    Attributes
    protected
    Definition Classes
    ParquetPartitionReaderBase
  61. def toString(): String
    Definition Classes
    AnyRef → Any
  62. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  63. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  64. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  65. def withResource[T <: AutoCloseable, V](h: CloseableHolder[T])(block: (CloseableHolder[T]) ⇒ V): V

Executes the provided code block and then closes the resource

    Definition Classes
    Arm
  66. def withResource[T <: AutoCloseable, V](r: ArrayBuffer[T])(block: (ArrayBuffer[T]) ⇒ V): V

Executes the provided code block and then closes the array buffer of resources

    Definition Classes
    Arm
  67. def withResource[T <: AutoCloseable, V](r: Array[T])(block: (Array[T]) ⇒ V): V

Executes the provided code block and then closes the array of resources

    Definition Classes
    Arm
  68. def withResource[T <: AutoCloseable, V](r: Seq[T])(block: (Seq[T]) ⇒ V): V

Executes the provided code block and then closes the sequence of resources

    Definition Classes
    Arm
  69. def withResource[T <: AutoCloseable, V](r: Option[T])(block: (Option[T]) ⇒ V): V

Executes the provided code block and then closes the Option[resource]

    Definition Classes
    Arm
  70. def withResource[T <: AutoCloseable, V](r: T)(block: (T) ⇒ V): V

Executes the provided code block and then closes the resource

    Definition Classes
    Arm
  71. def withResourceIfAllowed[T, V](r: T)(block: (T) ⇒ V): V

Executes the provided code block and then closes the value if it is AutoCloseable

    Definition Classes
    Arm
  72. def writeFooter(out: OutputStream, blocks: Seq[BlockMetaData], schema: MessageType): Unit
    Attributes
    protected
    Definition Classes
    ParquetPartitionReaderBase
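The closeOnExcept and withResource helpers inherited from Arm follow two standard resource-handling patterns. A minimal sketch of the documented contracts, not the actual Arm implementation: withResource always closes the resource after the block runs; closeOnExcept closes it only when the block throws, otherwise the still-open resource remains the caller's responsibility.

```scala
// Sketch of the withResource contract: the resource is closed whether
// the block completes normally or throws.
def withResourceSketch[T <: AutoCloseable, V](r: T)(block: T => V): V =
  try {
    block(r)
  } finally {
    r.close()
  }

// Sketch of the closeOnExcept contract: the resource is closed only on
// failure, with any close() failure attached as a suppressed exception.
def closeOnExceptSketch[T <: AutoCloseable, V](r: T)(block: T => V): V =
  try {
    block(r)
  } catch {
    case t: Throwable =>
      try r.close() catch { case s: Throwable => t.addSuppressed(s) }
      throw t
  }
```

closeOnExcept is the right choice when building up a resource (such as a host buffer) that will be handed to another owner on success but must not leak if construction fails partway through.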
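computeBlockMetaData and copyBlocksData both rewrite column chunk offsets for the reconstructed file. Assuming the chunks are laid out contiguously starting at realStartOffset, the offset arithmetic can be sketched with a hypothetical Chunk type (a stand-in for the real column chunk metadata):

```scala
// Each chunk's new offset is realStartOffset plus the sizes of all
// chunks written before it; the original offsets are discarded.
case class Chunk(size: Long, offset: Long)

def relocate(chunks: Seq[Chunk], realStartOffset: Long): Seq[Chunk] = {
  var pos = realStartOffset
  chunks.map { c =>
    val moved = c.copy(offset = pos)
    pos += c.size
    moved
  }
}
```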
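populateCurrentBlockChunk's signature suggests a batching policy bounded by maxReadBatchSizeRows and maxReadBatchSizeBytes. A hedged sketch of such a policy, using a hypothetical Blk type and assuming at least one block is always taken so the reader makes progress:

```scala
// Take blocks from the buffered iterator until adding the next one
// would exceed either the row or the byte limit.
case class Blk(rows: Long, bytes: Long)

def nextBlockChunk(it: BufferedIterator[Blk], maxRows: Long, maxBytes: Long): Seq[Blk] = {
  val out = scala.collection.mutable.ArrayBuffer[Blk]()
  var rows = 0L
  var bytes = 0L
  while (it.hasNext && (out.isEmpty ||
      (rows + it.head.rows <= maxRows && bytes + it.head.bytes <= maxBytes))) {
    val b = it.next()
    rows += b.rows
    bytes += b.bytes
    out += b
  }
  out.toSeq
}
```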
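toCudfColumnNames resolves read-schema names against the file schema. A simplified sketch of case-insensitive matching on flat Seq[String] schemas (the real method works on StructType and MessageType, and additionally handles field IDs):

```scala
// When matching is case insensitive, map each requested column to the
// file column whose lowercased name matches; otherwise use names as-is.
def resolveColumnNames(readCols: Seq[String], fileCols: Seq[String],
    isCaseSensitive: Boolean): Seq[String] =
  if (isCaseSensitive) {
    readCols
  } else {
    val byLower = fileCols.map(c => c.toLowerCase -> c).toMap
    readCols.map(c => byLower.getOrElse(c.toLowerCase, c))
  }
```

The output preserves the order of readDataSchema, which matters because cuDF is told which columns to decode by name in that order.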

Deprecated Value Members

  1. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] ) @Deprecated
    Deprecated
