package kafka010
Spark Integration for Kafka 0.10
Type Members
- trait CanCommitOffsets extends AnyRef
Represents any object that can commit a collection of OffsetRanges. The direct Kafka DStream implements this interface (see KafkaUtils.createDirectStream).
```scala
val stream = KafkaUtils.createDirectStream(...)
...
stream.asInstanceOf[CanCommitOffsets].commitAsync(offsets, new OffsetCommitCallback() {
  def onComplete(m: java.util.Map[TopicPartition, OffsetAndMetadata], e: Exception) {
    if (null != e) {
      // error
    } else {
      // success
    }
  }
})
```
- implicit class CanCommitStreamOffsets[K, V] extends AnyRef
This extension provides easy access for committing offsets back to MapR-ES or Kafka.
- abstract class ConsumerStrategy[K, V] extends AnyRef
Choice of how to create and configure underlying Kafka Consumers on driver and executors. See ConsumerStrategies to obtain instances. Kafka 0.10 consumers can require additional, sometimes complex, setup after object instantiation. This interface encapsulates that process, and allows it to be checkpointed.
  - K: type of Kafka message key
  - V: type of Kafka message value
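As a sketch of how a ConsumerStrategy is typically obtained, the snippet below uses ConsumerStrategies.Subscribe with a fixed topic list. The broker address, group id, and topic name are placeholder values, and the import path assumes the standard org.apache.spark.streaming.kafka010 package location.

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010.ConsumerStrategies

// Consumer configuration passed through to the underlying KafkaConsumer.
// Broker address, group id and offset-reset policy are placeholder values.
val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "localhost:9092",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "example-group",
  "auto.offset.reset" -> "latest"
)

// Subscribe to a fixed list of topics. ConsumerStrategies also provides
// SubscribePattern (regex-based) and Assign (fixed partition list).
val strategy = ConsumerStrategies.Subscribe[String, String](
  Seq("example-topic"), kafkaParams)
```

The resulting strategy is passed to KafkaUtils.createDirectStream, which uses it to create consumers on both the driver and the executors.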
- trait HasOffsetRanges extends AnyRef
Represents any object that has a collection of OffsetRanges. This can be used to access the offset ranges in RDDs generated by the direct Kafka DStream (see KafkaUtils.createDirectStream).
```scala
KafkaUtils.createDirectStream(...).foreachRDD { rdd =>
  val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  ...
}
```
- sealed abstract class LocationStrategy extends AnyRef
Choice of how to schedule consumers for a given TopicPartition on an executor. See LocationStrategies to obtain instances. Kafka 0.10 consumers prefetch messages, so it's important for performance to keep cached consumers on appropriate executors, not recreate them for every partition. Choice of location is only a preference, not an absolute; partitions may be scheduled elsewhere.
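A minimal sketch of choosing a location strategy, assuming the standard org.apache.spark.streaming.kafka010 package path:

```scala
import org.apache.spark.streaming.kafka010.LocationStrategies

// PreferConsistent distributes partitions evenly across available
// executors and is the appropriate default in most deployments.
val locationStrategy = LocationStrategies.PreferConsistent

// Alternatives: PreferBrokers is only useful when executors run on the
// same hosts as the Kafka brokers; PreferFixed pins specific
// TopicPartitions to specific hosts via an explicit map.
```

Since location is only a preference, cached consumers may still be recreated elsewhere when the scheduler cannot honor it.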
- final class OffsetRange extends Serializable
Represents a range of offsets from a single Kafka TopicPartition. Instances of this class can be created with OffsetRange.create().
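For illustration, creating an OffsetRange via its companion object might look like the following; the topic name and offset values are placeholders, and the import path assumes the standard org.apache.spark.streaming.kafka010 package location.

```scala
import org.apache.spark.streaming.kafka010.OffsetRange

// Offsets [0, 100) of partition 0 of "example-topic" (placeholder values).
val range = OffsetRange.create("example-topic", 0, 0L, 100L)

// fromOffset is inclusive and untilOffset is exclusive, so this range
// covers range.count() == 100 messages.
```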
- abstract class PerPartitionConfig extends Serializable
  Interface for user-supplied configurations that can't otherwise be set via Spark properties because they need tweaking on a per-partition basis.
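As a hypothetical sketch of a per-partition configuration, the subclass below throttles one hot partition harder than the rest. The topic name, rate limits, and the assumption that maxRatePerPartition is the abstract member to override are illustrative, not taken from this page.

```scala
import org.apache.kafka.common.TopicPartition
import org.apache.spark.streaming.kafka010.PerPartitionConfig

// Illustrative custom config: cap partition 0 of "hot-topic" at
// 100 messages/sec, all other partitions at 1000 messages/sec.
object ThrottledConfig extends PerPartitionConfig {
  override def maxRatePerPartition(tp: TopicPartition): Long =
    if (tp.topic == "hot-topic" && tp.partition == 0) 100L else 1000L
}
```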
Value Members
- object ConsumerStrategies
  Object for obtaining instances of ConsumerStrategy.
- object KafkaUtils extends Logging
  Object for constructing Kafka streams and RDDs.
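Tying the pieces together, a minimal sketch of constructing a direct stream with KafkaUtils might look like this; broker address, group id, application name, and topic are placeholder values, and the import paths assume the standard org.apache.spark.streaming.kafka010 package location.

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010._

// Streaming context with a 5-second batch interval (placeholder app name).
val conf = new SparkConf().setAppName("kafka-example")
val ssc = new StreamingContext(conf, Seconds(5))

val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "localhost:9092",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "example-group"
)

// Direct stream combining a LocationStrategy and a ConsumerStrategy.
val stream = KafkaUtils.createDirectStream[String, String](
  ssc,
  LocationStrategies.PreferConsistent,
  ConsumerStrategies.Subscribe[String, String](Seq("example-topic"), kafkaParams))

stream.foreachRDD { rdd =>
  rdd.foreach(record => println(s"${record.key} -> ${record.value}"))
}

ssc.start()
ssc.awaitTermination()
```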
- object KafkaUtilsPythonHelper
- object LocationStrategies
  Object to obtain instances of LocationStrategy.
- object OffsetRange extends Serializable
  Companion object that provides methods to create instances of OffsetRange.