
com.mapr.db.spark.sql.v2

MapRDBScan

case class MapRDBScan(schema: StructType, tablePath: String, hintedIndexes: Array[String], filters: Array[Filter], readersPerTablet: Int) extends Scan with Batch with LoggingTrait with Product with Serializable
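In practice, a `Scan` like this one is obtained through Spark's DataSource V2 read path rather than constructed directly. The sketch below shows one plausible way of reading a MapR-DB table through this connector; the format string and option names are assumptions inferred from the package name above, not confirmed API, and the table path is hypothetical.

```scala
// Hedged sketch: reading a MapR-DB JSON table through the V2 connector.
// The format name and option key below are assumptions inferred from the
// package com.mapr.db.spark.sql.v2; consult the connector docs for exact values.
import org.apache.spark.sql.SparkSession

object ReadExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("maprdb-scan-example")
      .getOrCreate()

    val df = spark.read
      .format("com.mapr.db.spark.sql.v2")    // assumed format name
      .option("tablePath", "/apps/my_table") // hypothetical table path
      .load()

    // Pushed-down filters and pruned columns end up in MapRDBScan's
    // `filters` and `schema` members, which shape the planned partitions.
    df.filter("age > 21").select("name").show()
  }
}
```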

Linear Supertypes
Serializable, Serializable, Product, Equals, LoggingTrait, Batch, Scan, AnyRef, Any

Instance Constructors

  1. new MapRDBScan(schema: StructType, tablePath: String, hintedIndexes: Array[String], filters: Array[Filter], readersPerTablet: Int)

Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native() @HotSpotIntrinsicCandidate()
  6. def createReaderFactory(): PartitionReaderFactory

    Returns a factory to create a PartitionReader for each InputPartition.

    Definition Classes
    MapRDBScan → Batch
  7. def description(): String
    Definition Classes
    Scan
  8. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  9. val filters: Array[Filter]
  10. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  11. val hintedIndexes: Array[String]
  12. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  13. def log: Logger
    Attributes
    protected
    Definition Classes
    LoggingTrait
  14. def logDebug(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    LoggingTrait
  15. def logDebug(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    LoggingTrait
  16. def logError(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    LoggingTrait
  17. def logError(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    LoggingTrait
  18. def logInfo(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    LoggingTrait
  19. def logInfo(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    LoggingTrait
  20. def logName: String
    Attributes
    protected
    Definition Classes
    LoggingTrait
  21. def logTrace(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    LoggingTrait
  22. def logTrace(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    LoggingTrait
  23. def logWarning(msg: ⇒ String, throwable: Throwable): Unit
    Attributes
    protected
    Definition Classes
    LoggingTrait
  24. def logWarning(msg: ⇒ String): Unit
    Attributes
    protected
    Definition Classes
    LoggingTrait
  25. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  26. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  27. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native() @HotSpotIntrinsicCandidate()
  28. def planInputPartitions(): Array[InputPartition]

    Returns a list of input partitions. Each InputPartition represents a data split that can be processed by one Spark task. The number of input partitions returned here is the same as the number of RDD partitions this scan outputs.

    If the Scan supports filter pushdown, this Batch is likely configured with a filter and is responsible for creating splits for that filter, which is not a full scan.

    This method will be called only once during a data source scan, to launch one Spark job.

    Definition Classes
    MapRDBScan → Batch
  29. def readSchema(): StructType

    Returns the actual schema of this data source scan, which may be different from the physical schema of the underlying storage, as column pruning or other optimizations may happen.

    Definition Classes
    MapRDBScan → Scan
  30. val readersPerTablet: Int
  31. val schema: StructType
  32. def supportedCustomMetrics(): Array[CustomMetric]
    Definition Classes
    Scan
  33. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  34. val tablePath: String
  35. def toBatch(): Batch
    Definition Classes
    MapRDBScan → Scan
  36. def toContinuousStream(arg0: String): ContinuousStream
    Definition Classes
    Scan
  37. def toMicroBatchStream(arg0: String): MicroBatchStream
    Definition Classes
    Scan
  38. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  39. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  40. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
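Taken together, the members above follow Spark's DataSource V2 batch read contract: Spark calls `toBatch()` on the `Scan`, invokes `planInputPartitions()` once to obtain the splits, then ships the `PartitionReaderFactory` from `createReaderFactory()` to executors, where one reader is opened per partition. The following is a minimal sketch of a custom `Scan` implementing that same contract; the fixed-count partitioning is a placeholder, not the MapR-DB tablet logic, which instead derives splits from table tablets and the `readersPerTablet` setting.

```scala
// Minimal sketch of the Scan-with-Batch contract that MapRDBScan follows.
// The numSplits partitioning here is a placeholder standing in for the
// tablet-based split planning done by the real connector.
import org.apache.spark.sql.catalyst.InternalRow
import org.apache.spark.sql.connector.read._
import org.apache.spark.sql.types.StructType

class ExampleScan(schema: StructType, numSplits: Int) extends Scan with Batch {

  // Schema after column pruning and other optimizations.
  override def readSchema(): StructType = schema

  // For batch queries, this Scan exposes itself as the Batch.
  override def toBatch(): Batch = this

  // Called once per scan; one InputPartition per Spark task / RDD partition.
  override def planInputPartitions(): Array[InputPartition] =
    (0 until numSplits).map(i => ExamplePartition(i): InputPartition).toArray

  // The factory is serialized to executors, which open one reader per split.
  override def createReaderFactory(): PartitionReaderFactory =
    new ExampleReaderFactory
}

case class ExamplePartition(id: Int) extends InputPartition

class ExampleReaderFactory extends PartitionReaderFactory {
  // A real implementation returns a PartitionReader that iterates rows
  // for the given split; elided here since it is storage-specific.
  override def createReader(partition: InputPartition): PartitionReader[InternalRow] = ???
}
```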

Deprecated Value Members

  1. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] ) @Deprecated
    Deprecated

Inherited from Serializable

Inherited from Serializable

Inherited from Product

Inherited from Equals

Inherited from LoggingTrait

Inherited from Batch

Inherited from Scan

Inherited from AnyRef

Inherited from Any
