com.nvidia.spark.rapids

MultiFileOrcPartitionReader

class MultiFileOrcPartitionReader extends MultiFileCoalescingPartitionReaderBase with OrcCommonFunctions

Linear Supertypes
  1. MultiFileOrcPartitionReader
  2. OrcCommonFunctions
  3. OrcCodecWritingHelper
  4. MultiFileCoalescingPartitionReaderBase
  5. MultiFileReaderFunctions
  6. FilePartitionReaderBase
  7. Arm
  8. ScanWithMetrics
  9. Logging
  10. PartitionReader
  11. Closeable
  12. AutoCloseable
  13. AnyRef
  14. Any

Instance Constructors

  1. new MultiFileOrcPartitionReader(conf: Configuration, files: Array[PartitionedFile], clippedStripes: Seq[OrcSingleStripeMeta], readDataSchema: StructType, debugDumpPrefix: Option[String], maxReadBatchSizeRows: Integer, maxReadBatchSizeBytes: Long, execMetrics: Map[String, GpuMetric], partitionSchema: StructType, numThreads: Int, isCaseSensitive: Boolean)

    conf

    Configuration

    files

    files to be read

    clippedStripes

the stripe metadata from the original ORC file, clipped to contain only the column chunks to be read

    readDataSchema

    the Spark schema describing what will be read

    debugDumpPrefix

a path prefix to use for dumping the fabricated ORC data, or None to disable dumping

    maxReadBatchSizeRows

    soft limit on the maximum number of rows the reader reads per batch

    maxReadBatchSizeBytes

    soft limit on the maximum number of bytes the reader reads per batch

    execMetrics

    metrics

    partitionSchema

    schema of partitions

    numThreads

the size of the thread pool

    isCaseSensitive

whether column name matching should be case sensitive

Type Members

  1. class OrcCopyStripesRunner extends Callable[(Seq[DataBlockBase], Long)]

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. def addPartitionValues(batch: Option[ColumnarBatch], inPartitionValues: InternalRow, partitionSchema: StructType): Option[ColumnarBatch]
    Attributes
    protected
    Definition Classes
    MultiFileReaderFunctions
  5. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  6. var batch: Option[ColumnarBatch]
    Attributes
    protected
    Definition Classes
    FilePartitionReaderBase
  7. def buildReaderSchema(updatedSchema: TypeDescription, requestedMapping: Option[Array[Int]]): TypeDescription
    Attributes
    protected
    Definition Classes
    OrcCommonFunctions
  8. def buildReaderSchema(ctx: OrcPartitionReaderContext): TypeDescription

    Get the ORC schema corresponding to the file being constructed for the GPU

    Attributes
    protected
    Definition Classes
    OrcCommonFunctions
  9. def calculateEstimatedBlocksOutputSize(batchContext: BatchContext): Long

    Calculate the output size according to the block chunks and the schema; the estimated output size will be used as the initial size when allocating the HostMemoryBuffer.

    Please note, the estimated size should be at least equal to the size of HEADER + blocks + FOOTER.

    batchContext

    the batch building context

    returns

    Long, the estimated output size

    Definition Classes
    MultiFileOrcPartitionReader → MultiFileCoalescingPartitionReaderBase
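
As a hedged illustration of the over-estimation rule above, an estimate that satisfies the HEADER + blocks + FOOTER lower bound might look like the following. All names here are hypothetical stand-ins; the real reader derives these sizes from ORC stripe metadata.

```scala
// Illustrative sketch only, not the actual implementation: headerLen,
// stripeSizes, and footerEstimate stand in for values derived from the
// ORC stripe metadata. The sum is an upper bound used to size the
// initial HostMemoryBuffer allocation.
def estimateOutputSize(headerLen: Long, stripeSizes: Seq[Long], footerEstimate: Long): Long =
  headerLen + stripeSizes.sum + footerEstimate
```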
  10. def calculateFinalBlocksOutputSize(footerOffset: Long, stripes: Seq[DataBlockBase], batchContext: BatchContext): Long

    Calculate the final block output size, which will be used to decide whether to re-allocate the HostMemoryBuffer.

    For now, we still don't know the ORC file footer size, so we can't get the final size.

    Since calculateEstimatedBlocksOutputSize has over-estimated the size, it is safe to use that estimate, and it will not cause the HostMemoryBuffer to be re-allocated.

    footerOffset

    footer offset

    stripes

    stripes to be evaluated

    batchContext

    the batch building context

    returns

    the output size

    Definition Classes
    MultiFileOrcPartitionReader → MultiFileCoalescingPartitionReaderBase
  11. def checkIfNeedToSplitDataBlock(currentBlockInfo: SingleDataBlockInfo, nextBlockInfo: SingleDataBlockInfo): Boolean

    Check whether the next block should be split into another ColumnarBatch

    currentBlockInfo

    current SingleDataBlockInfo

    nextBlockInfo

    next SingleDataBlockInfo

    returns

    true if the next block should go into a new ColumnarBatch; false otherwise

    Definition Classes
    MultiFileOrcPartitionReader → MultiFileCoalescingPartitionReaderBase
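
The split decision can be sketched as follows. This is an illustrative guess at the criteria (the soft row/byte limits from the constructor, plus schema compatibility), not the actual implementation; `BlockInfo` and `needNewBatch` are hypothetical names.

```scala
// Hypothetical sketch: start a new ColumnarBatch when adding the next block
// would exceed the soft limits, or when its schema differs from the blocks
// already accumulated in the current batch.
case class BlockInfo(numRows: Long, estBytes: Long, schemaId: Int)

def needNewBatch(current: Seq[BlockInfo], next: BlockInfo,
    maxRows: Long, maxBytes: Long): Boolean = {
  val rows  = current.map(_.numRows).sum + next.numRows
  val bytes = current.map(_.estBytes).sum + next.estBytes
  val schemaMismatch = current.headOption.exists(_.schemaId != next.schemaId)
  rows > maxRows || bytes > maxBytes || schemaMismatch
}
```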
  12. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native() @HotSpotIntrinsicCandidate()
  13. def close(): Unit
    Definition Classes
    FilePartitionReaderBase → Closeable → AutoCloseable
  14. def closeOnExcept[T <: AutoCloseable, V](r: Option[T])(block: (Option[T]) ⇒ V): V

    Executes the provided code block, closing the resources only if an exception occurs

    Definition Classes
    Arm
  15. def closeOnExcept[T <: AutoCloseable, V](r: ArrayBuffer[T])(block: (ArrayBuffer[T]) ⇒ V): V

    Executes the provided code block, closing the resources only if an exception occurs

    Definition Classes
    Arm
  16. def closeOnExcept[T <: AutoCloseable, V](r: Array[T])(block: (Array[T]) ⇒ V): V

    Executes the provided code block, closing the resources only if an exception occurs

    Definition Classes
    Arm
  17. def closeOnExcept[T <: AutoCloseable, V](r: Seq[T])(block: (Seq[T]) ⇒ V): V

    Executes the provided code block, closing the resources only if an exception occurs

    Definition Classes
    Arm
  18. def closeOnExcept[T <: AutoCloseable, V](r: T)(block: (T) ⇒ V): V

    Executes the provided code block, closing the resource only if an exception occurs

    Definition Classes
    Arm
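
The contract of the closeOnExcept overloads above can be sketched as a minimal re-implementation. This is for illustration only (it is not the spark-rapids source); the name `closeOnExceptSketch` is invented here.

```scala
// Sketch of closeOnExcept semantics: the resource is closed ONLY when the
// block throws; on success it is returned to the caller still open. Any
// failure while closing is attached as a suppressed exception.
def closeOnExceptSketch[T <: AutoCloseable, V](r: T)(block: T => V): V =
  try {
    block(r)
  } catch {
    case t: Throwable =>
      try r.close() catch { case s: Throwable => t.addSuppressed(s) }
      throw t
  }
```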
  19. def copyStripeData(ctx: OrcPartitionReaderContext, out: WritableByteChannel, inputDataRanges: DiskRangeList): Unit

    Copy the stripe to the channel

    Attributes
    protected
    Definition Classes
    OrcCommonFunctions
  20. def createBatchContext(chunkedBlocks: LinkedHashMap[Path, ArrayBuffer[DataBlockBase]], clippedSchema: SchemaBase): BatchContext

    Return a batch context which will be shared during the process of building a memory file, i.e., across the following APIs:

    • calculateEstimatedBlocksOutputSize
    • writeFileHeader
    • getBatchRunner
    • calculateFinalBlocksOutputSize
    • writeFileFooter

    It is useful when something is needed by some or all of the above APIs. Children can override this to return a customized batch context.
    chunkedBlocks

    mapping of file path to data blocks

    clippedSchema

    schema info

    Attributes
    protected
    Definition Classes
    MultiFileCoalescingPartitionReaderBase
  21. def currentMetricsValues(): Array[CustomTaskMetric]
    Definition Classes
    PartitionReader
  22. val debugDumpPrefix: Option[String]
  23. def decodeToTable(hostBuf: HostMemoryBuffer, bufSize: Long, memFileSchema: TypeDescription, requestedMapping: Option[Array[Int]], isCaseSensitive: Boolean, splits: Array[PartitionedFile]): Table

    Read the host data to the GPU for ORC decoding, and return it as a cuDF Table. The input host buffer should contain valid data, otherwise the behavior is undefined. 'splits' is used only for debugging.

    Attributes
    protected
    Definition Classes
    OrcCommonFunctions
  24. def dumpDataToFile(hmb: HostMemoryBuffer, dataLength: Long, splits: Array[PartitionedFile], debugDumpPrefix: Option[String] = None, format: Option[String] = None): Unit

    Dump the data from the HostMemoryBuffer to a file named by debugDumpPrefix + random + format

    hmb

    host data to be dumped

    dataLength

    data size

    splits

    PartitionedFile to be handled

    debugDumpPrefix

    file name prefix; if None, nothing is dumped

    format

    file name suffix; if None, nothing is dumped

    Attributes
    protected
    Definition Classes
    FilePartitionReaderBase
  25. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  26. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  27. def fileSystemBytesRead(): Long
    Attributes
    protected
    Definition Classes
    MultiFileReaderFunctions
    Annotations
    @nowarn()
  28. def freeOnExcept[T <: RapidsBuffer, V](r: T)(block: (T) ⇒ V): V

    Executes the provided code block, freeing the RapidsBuffer only if an exception occurs

    Definition Classes
    Arm
  29. def get(): ColumnarBatch
    Definition Classes
    FilePartitionReaderBase → PartitionReader
  30. def getBatchRunner(tc: TaskContext, file: Path, outhmb: HostMemoryBuffer, blocks: ArrayBuffer[DataBlockBase], offset: Long, batchContext: BatchContext): Callable[(Seq[DataBlockBase], Long)]

    The sub-class must implement the real file reading logic in a Callable which will be run in a thread pool

    tc

    task context to use

    file

    file to be read

    outhmb

    the sliced HostMemoryBuffer to hold the blocks; the sub-class implementation is in charge of closing it

    blocks

    blocks meta info to specify which blocks to be read

    offset

    used as the offset adjustment

    batchContext

    the batch building context

    returns

    a Callable[(Seq[DataBlockBase], Long)] which will be submitted to a ThreadPoolExecutor; the Callable returns a tuple where result._1 is the block meta info with the offset adjusted and result._2 is the number of bytes read

    Definition Classes
    MultiFileOrcPartitionReader → MultiFileCoalescingPartitionReaderBase
  31. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  32. final def getFileFormatShortName: String

    File format short name used for logging and other things to uniquely identify which file format is being used.

    returns

    the file format short name

    Definition Classes
    MultiFileOrcPartitionReader → MultiFileCoalescingPartitionReaderBase
  33. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  34. def initializeLogIfNecessary(isInterpreter: Boolean, silent: Boolean): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  35. def initializeLogIfNecessary(isInterpreter: Boolean): Unit
    Attributes
    protected
    Definition Classes
    Logging
  36. var isDone: Boolean
    Attributes
    protected
    Definition Classes
    FilePartitionReaderBase
  37. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  38. def isTraceEnabled(): Boolean
    Attributes
    protected
    Definition Classes
    Logging
  39. def log: Logger
    Attributes
    protected
    Definition Classes
    Logging
  40. def logDebug(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  41. def logDebug(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  42. def logError(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  43. def logError(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  44. def logInfo(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  45. def logInfo(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  46. def logName: String
    Attributes
    protected
    Definition Classes
    Logging
  47. def logTrace(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  48. def logTrace(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  49. def logWarning(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    Logging
  50. def logWarning(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    Logging
  51. var maxDeviceMemory: Long
    Attributes
    protected
    Definition Classes
    FilePartitionReaderBase
  52. val metrics: Map[String, GpuMetric]
    Definition Classes
    ScanWithMetrics
  53. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  54. def next(): Boolean
    Definition Classes
    MultiFileCoalescingPartitionReaderBase → PartitionReader
  55. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  56. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  57. def readBufferToTable(dataBuffer: HostMemoryBuffer, dataSize: Long, clippedSchema: SchemaBase, extraInfo: ExtraInfo): Table

    Send the host memory to the GPU to decode

    dataBuffer

    the data to be decoded on the GPU

    dataSize

    data size

    clippedSchema

    the clipped schema

    extraInfo

    the extra information for specific file format

    returns

    Table

    Definition Classes
    MultiFileOrcPartitionReader → MultiFileCoalescingPartitionReaderBase
  58. val readDataSchema: StructType
  59. lazy val sizeOfStripeInformation: Int
  60. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  61. implicit def toOrcExtraInfo(in: ExtraInfo): OrcExtraInfo
  62. implicit def toOrcStripeWithMetas(stripes: Seq[DataBlockBase]): Seq[OrcStripeWithMeta]
  63. def toString(): String
    Definition Classes
    AnyRef → Any
  64. implicit def toStripe(block: DataBlockBase): OrcStripeWithMeta
  65. implicit def toTypeDescription(schema: SchemaBase): TypeDescription
  66. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  67. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  68. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  69. def withCodecOutputStream[T](ctx: OrcPartitionReaderContext, out: HostMemoryOutputStream)(block: (WritableByteChannel, CodedOutputStream, OutStream) ⇒ T): T

    Executes the provided code block in the codec environment

    Definition Classes
    OrcCodecWritingHelper
  70. def withResource[T <: AutoCloseable, V](h: CloseableHolder[T])(block: (CloseableHolder[T]) ⇒ V): V

    Executes the provided code block and then closes the resource

    Definition Classes
    Arm
  71. def withResource[T <: AutoCloseable, V](r: ArrayBuffer[T])(block: (ArrayBuffer[T]) ⇒ V): V

    Executes the provided code block and then closes the array buffer of resources

    Definition Classes
    Arm
  72. def withResource[T <: AutoCloseable, V](r: Array[T])(block: (Array[T]) ⇒ V): V

    Executes the provided code block and then closes the array of resources

    Definition Classes
    Arm
  73. def withResource[T <: AutoCloseable, V](r: Seq[T])(block: (Seq[T]) ⇒ V): V

    Executes the provided code block and then closes the sequence of resources

    Definition Classes
    Arm
  74. def withResource[T <: AutoCloseable, V](r: Option[T])(block: (Option[T]) ⇒ V): V

    Executes the provided code block and then closes the Option[resource]

    Definition Classes
    Arm
  75. def withResource[T <: AutoCloseable, V](r: T)(block: (T) ⇒ V): V

    Executes the provided code block and then closes the resource

    Definition Classes
    Arm
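
By contrast with closeOnExcept, withResource always closes the resource, on success or failure. A minimal sketch of that contract (illustrative only, not the spark-rapids source; `withResourceSketch` is an invented name):

```scala
// Sketch of withResource semantics: close unconditionally after the block
// runs, whether it returned normally or threw.
def withResourceSketch[T <: AutoCloseable, V](r: T)(block: T => V): V =
  try block(r) finally r.close()
```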
  76. def withResourceIfAllowed[T, V](r: T)(block: (T) ⇒ V): V

    Executes the provided code block and then closes the value if it is AutoCloseable

    Definition Classes
    Arm
  77. def writeFileFooter(buffer: HostMemoryBuffer, bufferSize: Long, footerOffset: Long, stripes: Seq[DataBlockBase], batchContext: BatchContext): (HostMemoryBuffer, Long)

    Write a footer for a specific file format. If there is no footer for the file format, just return (hmb, offset).

    Please note, some file formats may re-allocate the HostMemoryBuffer because the estimated initial buffer size may be a little smaller than the actual size. In that case, the hmb should be closed in the implementation.

    buffer

    The buffer holding (header + data blocks)

    bufferSize

    The total buffer size, which equals the size of (header + blocks + footer)

    footerOffset

    The offset at which to begin writing the footer

    stripes

    The data block meta info

    batchContext

    The batch building context

    returns

    the buffer and the buffer size

    Definition Classes
    MultiFileOrcPartitionReader → MultiFileCoalescingPartitionReaderBase
  78. def writeFileHeader(buffer: HostMemoryBuffer, batchContext: BatchContext): Long

    Write a header for a specific file format. If there is no header for the file format, just ignore it and return 0

    buffer

    where the header will be written

    batchContext

    the batch building context

    returns

    the number of bytes written

    Definition Classes
    MultiFileOrcPartitionReader → MultiFileCoalescingPartitionReaderBase
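
For ORC specifically, a file begins with the 3-byte magic "ORC", so a header writer for this format can be sketched as below. This is an illustration of the writeFileHeader contract against a plain byte array, not the actual HostMemoryBuffer-based implementation; `writeOrcHeaderSketch` is an invented name.

```scala
// Sketch: write the ORC magic bytes at the start of the buffer and return
// the number of bytes written, matching the writeFileHeader contract.
def writeOrcHeaderSketch(buffer: Array[Byte]): Long = {
  val magic = "ORC".getBytes("UTF-8")
  System.arraycopy(magic, 0, buffer, 0, magic.length)
  magic.length.toLong
}
```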
  79. def writeOrcFileFooter(ctx: OrcPartitionReaderContext, fileFooterBuilder: Builder, rawOut: HostMemoryOutputStream, footerStartOffset: Long, numRows: Long, protoWriter: CodedOutputStream, codecStream: OutStream): Unit

    Write the ORC file Footer and PostScript

    Attributes
    protected
    Definition Classes
    OrcCommonFunctions

Deprecated Value Members

  1. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] ) @Deprecated
    Deprecated

Inherited from OrcCommonFunctions

Inherited from OrcCodecWritingHelper

Inherited from MultiFileReaderFunctions

Inherited from FilePartitionReaderBase

Inherited from Arm

Inherited from ScanWithMetrics

Inherited from Logging

Inherited from PartitionReader[ColumnarBatch]

Inherited from Closeable

Inherited from AutoCloseable

Inherited from AnyRef

Inherited from Any

Ungrouped