public class RangeChecksumInputFormat extends TableInputFormat
| Modifier and Type | Field and Description |
|---|---|
| static java.lang.String | INCLUDEDREGIONFILENAME |
| static java.lang.String | SPLITFILENAME |

Fields inherited from class TableInputFormat: cond, COND_OBJ, EXCLUDE_EMBEDDEDFAMILY, FIELD_PATH, GET_DELETES, INPUT_TABLE, jTable, READ_ALL_CFS, START_ROW, STOP_ROW

| Constructor and Description |
|---|
| RangeChecksumInputFormat() |
| Modifier and Type | Method and Description |
|---|---|
| java.util.List<org.apache.hadoop.mapreduce.InputSplit> | getSplits(org.apache.hadoop.mapreduce.JobContext context) Set up the splits which will be served as inputs to map tasks. |
| ByteBufWritableComparable | getSplitStartKey(ByteBufWritableComparable key) |
| protected boolean | includeRegionInSplit(byte[] startKey, byte[] endKey) |
| void | setConf(org.apache.hadoop.conf.Configuration configuration) This function is used to set parameters in the configuration. |

Methods inherited from class TableInputFormat: createRecordReader, getConf

public static final java.lang.String SPLITFILENAME

public static final java.lang.String INCLUDEDREGIONFILENAME
public void setConf(org.apache.hadoop.conf.Configuration configuration)

This function is used to set parameters in the configuration.

Specified by: setConf in interface org.apache.hadoop.conf.Configurable
Overrides: setConf in class TableInputFormat
Parameters: configuration - Configuration object with parameters for TableInputFormat.
See Also: org.apache.hadoop.conf.Configurable.setConf(org.apache.hadoop.conf.Configuration)

public ByteBufWritableComparable getSplitStartKey(ByteBufWritableComparable key)
protected boolean includeRegionInSplit(byte[] startKey,
byte[] endKey)
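The includeRegionInSplit hook lets the format skip regions whose key range falls entirely outside the configured scan range. The page does not show its body, so the following is only an illustrative sketch of the typical overlap test, assuming unsigned lexicographic row-key ordering and the HBase convention that an empty key marks the table's unbounded first/last boundary. The class name `RegionFilterSketch` and method `includeRegion` are hypothetical, not the actual RangeChecksumInputFormat code:

```java
/** Hypothetical sketch: decide whether a region [regionStart, regionEnd)
 *  overlaps a scan range [scanStart, scanStop). Not the actual
 *  RangeChecksumInputFormat implementation. */
public class RegionFilterSketch {

    /** Unsigned lexicographic comparison, matching how HBase orders row keys. */
    public static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int diff = (a[i] & 0xFF) - (b[i] & 0xFF);
            if (diff != 0) return diff;
        }
        return a.length - b.length;
    }

    /** A region overlaps the scan range unless it ends at or before the scan
     *  start, or begins at or after the scan stop. An empty key means the
     *  table's first/last boundary or an unbounded scan. */
    public static boolean includeRegion(byte[] regionStart, byte[] regionEnd,
                                        byte[] scanStart, byte[] scanStop) {
        boolean endsBeforeScan = regionEnd.length > 0
                && scanStart.length > 0
                && compare(regionEnd, scanStart) <= 0;
        boolean startsAfterScan = scanStop.length > 0
                && compare(regionStart, scanStop) >= 0;
        return !endsBeforeScan && !startsAfterScan;
    }

    public static void main(String[] args) {
        // Region ["a","m") overlaps scan ["g","z") -> included.
        System.out.println(includeRegion("a".getBytes(), "m".getBytes(),
                                         "g".getBytes(), "z".getBytes()));
        // Region ["a","f") ends before the scan start "g" -> excluded.
        System.out.println(includeRegion("a".getBytes(), "f".getBytes(),
                                         "g".getBytes(), "z".getBytes()));
    }
}
```

The two-sided exclusion test means an unbounded boundary (empty key) can never rule a region out on that side, which is why the empty-length checks guard each comparison.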
public java.util.List<org.apache.hadoop.mapreduce.InputSplit> getSplits(org.apache.hadoop.mapreduce.JobContext context)
throws java.io.IOException,
java.lang.InterruptedException
Set up the splits which will be served as inputs to map tasks.

Overrides: getSplits in class TableInputFormat
Parameters: context - The current job context.
Throws: java.io.IOException, java.lang.InterruptedException
See Also: InputFormat.getSplits(org.apache.hadoop.mapreduce.JobContext)
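A getSplits override of this kind typically walks the table's regions and emits one split per region, with each region's key range clipped to the configured start/stop rows. The following is a rough, self-contained sketch of that clipping step only; it uses Strings in place of byte[] row keys, the names `SplitSketch`, `KeyRange`, and `clipToScan` are hypothetical, and the real method's handling of the files named by SPLITFILENAME and INCLUDEDREGIONFILENAME is omitted:

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical sketch of the split-clipping step of a getSplits
 *  implementation: intersect each region's [start, end) range with the
 *  scan's [startRow, stopRow). Not the actual RangeChecksumInputFormat code. */
public class SplitSketch {

    /** One split's key range; empty string means unbounded, as in HBase. */
    public record KeyRange(String start, String end) {}

    /** Clip each region range to the scan range; drop regions with no overlap. */
    public static List<KeyRange> clipToScan(List<KeyRange> regions,
                                            String startRow, String stopRow) {
        List<KeyRange> splits = new ArrayList<>();
        for (KeyRange r : regions) {
            String lo = max(r.start(), startRow);
            String hi = minEnd(r.end(), stopRow);
            // An empty hi is unbounded; otherwise the range must be non-empty.
            if (hi.isEmpty() || lo.compareTo(hi) < 0) {
                splits.add(new KeyRange(lo, hi));
            }
        }
        return splits;
    }

    static String max(String a, String b) {
        return a.compareTo(b) >= 0 ? a : b;
    }

    static String minEnd(String a, String b) {
        if (a.isEmpty()) return b;  // region unbounded above -> scan stop wins
        if (b.isEmpty()) return a;  // scan unbounded above -> region end wins
        return a.compareTo(b) <= 0 ? a : b;
    }

    public static void main(String[] args) {
        List<KeyRange> regions = List.of(new KeyRange("", "h"),
                new KeyRange("h", "p"), new KeyRange("p", ""));
        // Scan ["e","k") keeps the first two regions, clipped.
        System.out.println(clipToScan(regions, "e", "k"));
    }
}
```

In the real format, each surviving clipped range would become one InputSplit handed to a map task, which is what ties this step to the START_ROW/STOP_ROW parameters inherited from TableInputFormat.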