Packages that use InputSplit
----------------------------

- org.apache.hadoop.mapred
- org.apache.hadoop.mapred.join
- org.apache.hadoop.mapred.lib
- org.apache.hadoop.mapred.lib.db
- org.apache.hadoop.mapreduce.split
Uses of InputSplit in org.apache.hadoop.mapred
----------------------------------------------

Classes in org.apache.hadoop.mapred that implement InputSplit:

| Modifier and Type | Class and Description |
|---|---|
| `class` | `FileSplit`: A section of an input file. |
| `class` | `MultiFileSplit`: A sub-collection of input files. |

Methods in org.apache.hadoop.mapred that return InputSplit:

| Modifier and Type | Method and Description |
|---|---|
| `InputSplit` | `Reporter.getInputSplit()`: Get the InputSplit object for a map. |
| `InputSplit` | `Task.TaskReporter.getInputSplit()` |
| `InputSplit[]` | `MultiFileInputFormat.getSplits(JobConf job, int numSplits)` |
| `InputSplit[]` | `InputFormat.getSplits(JobConf job, int numSplits)`: Logically split the set of input files for the job. |
| `InputSplit[]` | `FileInputFormat.getSplits(JobConf job, int numSplits)`: Splits files returned by `FileInputFormat.listStatus(JobConf)` when they're too big. |
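`Reporter.getInputSplit()` lets a running map task inspect the split it is processing; with file-based input formats the returned object can usually be cast to `FileSplit` to recover the current file's path. The sketch below shows that pattern against the old `org.apache.hadoop.mapred` API; the mapper class and key/value types are illustrative, not taken from this page.

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Hypothetical mapper that tags each line with the name of the file it came from.
public class FileTaggingMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, Text> {

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, Text> output, Reporter reporter)
      throws IOException {
    // For file-based input formats the split is a FileSplit; the cast is
    // assumed to hold here and would fail for other InputSplit implementations.
    FileSplit split = (FileSplit) reporter.getInputSplit();
    output.collect(new Text(split.getPath().getName()), value);
  }
}
```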
Methods in org.apache.hadoop.mapred with parameters of type InputSplit:

| Modifier and Type | Method and Description |
|---|---|
| `RecordReader<K,V>` | `SequenceFileInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter)` |
| `RecordReader<org.apache.hadoop.io.Text,org.apache.hadoop.io.Text>` | `SequenceFileAsTextInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter)` |
| `RecordReader<org.apache.hadoop.io.Text,org.apache.hadoop.io.Text>` | `KeyValueTextInputFormat.getRecordReader(InputSplit genericSplit, JobConf job, Reporter reporter)` |
| `RecordReader<org.apache.hadoop.io.LongWritable,org.apache.hadoop.io.Text>` | `TextInputFormat.getRecordReader(InputSplit genericSplit, JobConf job, Reporter reporter)` |
| `abstract RecordReader<K,V>` | `MultiFileInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter)` |
| `RecordReader<K,V>` | `SequenceFileInputFilter.getRecordReader(InputSplit split, JobConf job, Reporter reporter)`: Create a record reader for the given split. |
| `RecordReader<K,V>` | `InputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter)`: Get the RecordReader for the given InputSplit. |
| `abstract RecordReader<K,V>` | `FileInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter)` |
| `RecordReader<org.apache.hadoop.io.BytesWritable,org.apache.hadoop.io.BytesWritable>` | `SequenceFileAsBinaryInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter)` |
| `void` | `Task.TaskReporter.setInputSplit(InputSplit split)` |
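Taken together, `getSplits(JobConf, int)` and `getRecordReader(InputSplit, JobConf, Reporter)` are the two halves of the old-API input contract: the first partitions the input into `InputSplit`s, the second turns one split into a stream of records. A minimal sketch of a custom format, assuming the standard `LineRecordReader(Configuration, FileSplit)` constructor and relying on `FileInputFormat`'s inherited splitting:

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileSplit;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.LineRecordReader;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;

// Illustrative line-oriented input format. getSplits(JobConf, int) is inherited
// from FileInputFormat, which breaks each input file into block-sized FileSplits.
public class SimpleLineInputFormat extends FileInputFormat<LongWritable, Text> {

  @Override
  public RecordReader<LongWritable, Text> getRecordReader(
      InputSplit split, JobConf job, Reporter reporter) throws IOException {
    reporter.setStatus(split.toString());
    // FileInputFormat only hands out FileSplits, so the cast is safe here.
    return new LineRecordReader(job, (FileSplit) split);
  }
}
```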
Uses of InputSplit in org.apache.hadoop.mapred.join
---------------------------------------------------

Classes in org.apache.hadoop.mapred.join that implement InputSplit:

| Modifier and Type | Class and Description |
|---|---|
| `class` | `CompositeInputSplit`: This InputSplit contains a set of child InputSplits. |

Methods in org.apache.hadoop.mapred.join that return InputSplit:

| Modifier and Type | Method and Description |
|---|---|
| `InputSplit` | `CompositeInputSplit.get(int i)`: Get ith child InputSplit. |
| `InputSplit[]` | `CompositeInputFormat.getSplits(JobConf job, int numSplits)`: Build a CompositeInputSplit from the child InputFormats by assigning the ith split from each child to the ith composite split. |

Methods in org.apache.hadoop.mapred.join with parameters of type InputSplit:

| Modifier and Type | Method and Description |
|---|---|
| `void` | `CompositeInputSplit.add(InputSplit s)`: Add an InputSplit to this collection. |
| `ComposableRecordReader<K,TupleWritable>` | `CompositeInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter)`: Construct a CompositeRecordReader for the children of this InputFormat as defined in the init expression. |
| `ComposableRecordReader<K,V>` | `ComposableInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter)` |
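Because `CompositeInputFormat.getSplits` pairs the ith split of every child source into one `CompositeInputSplit`, each map task sees the matching partitions of all inputs at once, which is what enables a map-side join. A configuration sketch for the old API, assuming two inputs that are already sorted and identically partitioned on the join key and assuming the `"mapred.join.expr"` property is the expression key; the paths and class names are illustrative:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.SequenceFileInputFormat;
import org.apache.hadoop.mapred.join.CompositeInputFormat;

public class JoinJobSetup {
  public static JobConf configure(JobConf job) {
    job.setInputFormat(CompositeInputFormat.class);
    // Inner-join the ith split of /data/left with the ith split of /data/right.
    // Both inputs must be sorted on the join key and partitioned identically.
    job.set("mapred.join.expr",
        CompositeInputFormat.compose("inner", SequenceFileInputFormat.class,
            new Path("/data/left"), new Path("/data/right")));
    // The mapper then receives the join key and a TupleWritable holding one
    // value per joined source.
    return job;
  }
}
```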
Uses of InputSplit in org.apache.hadoop.mapred.lib
--------------------------------------------------

Classes in org.apache.hadoop.mapred.lib that implement InputSplit:

| Modifier and Type | Class and Description |
|---|---|
| `class` | `CombineFileSplit` |

Methods in org.apache.hadoop.mapred.lib that return InputSplit:

| Modifier and Type | Method and Description |
|---|---|
| `InputSplit[]` | `CombineFileInputFormat.getSplits(JobConf job, int numSplits)` |
| `InputSplit[]` | `DelegatingInputFormat.getSplits(JobConf conf, int numSplits)` |
| `InputSplit[]` | `NLineInputFormat.getSplits(JobConf job, int numSplits)`: Logically splits the set of input files for the job, splits N lines of the input as one split. |

Methods in org.apache.hadoop.mapred.lib with parameters of type InputSplit:

| Modifier and Type | Method and Description |
|---|---|
| `RecordReader<org.apache.hadoop.io.LongWritable,org.apache.hadoop.io.Text>` | `CombineTextInputFormat.getRecordReader(InputSplit split, JobConf conf, Reporter reporter)` |
| `abstract RecordReader<K,V>` | `CombineFileInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter)`: This is not implemented yet. |
| `RecordReader<K,V>` | `DelegatingInputFormat.getRecordReader(InputSplit split, JobConf conf, Reporter reporter)` |
| `RecordReader<K,V>` | `CombineSequenceFileInputFormat.getRecordReader(InputSplit split, JobConf conf, Reporter reporter)` |
| `RecordReader<org.apache.hadoop.io.LongWritable,org.apache.hadoop.io.Text>` | `NLineInputFormat.getRecordReader(InputSplit genericSplit, JobConf job, Reporter reporter)` |
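`NLineInputFormat.getSplits` builds one InputSplit per N lines of input rather than per file block, which helps when each input line represents an expensive unit of work. A driver-side sketch, assuming the old-API property name `"mapred.line.input.format.linespermap"`; the path and the value 10 are illustrative:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.NLineInputFormat;

public class NLineJobSetup {
  public static JobConf configure(JobConf job) {
    job.setInputFormat(NLineInputFormat.class);
    // Each map task processes at most 10 lines: one InputSplit per 10 lines.
    job.setInt("mapred.line.input.format.linespermap", 10);
    FileInputFormat.addInputPath(job, new Path("/data/tasks.txt"));
    return job;
  }
}
```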
Uses of InputSplit in org.apache.hadoop.mapred.lib.db
-----------------------------------------------------

Classes in org.apache.hadoop.mapred.lib.db that implement InputSplit:

| Modifier and Type | Class and Description |
|---|---|
| `protected static class` | `DBInputFormat.DBInputSplit`: An InputSplit that spans a set of rows. |

Methods in org.apache.hadoop.mapred.lib.db that return InputSplit:

| Modifier and Type | Method and Description |
|---|---|
| `InputSplit[]` | `DBInputFormat.getSplits(JobConf job, int chunks)`: Logically split the set of input files for the job. |

Methods in org.apache.hadoop.mapred.lib.db with parameters of type InputSplit:

| Modifier and Type | Method and Description |
|---|---|
| `RecordReader<org.apache.hadoop.io.LongWritable,T>` | `DBInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter)`: Get the RecordReader for the given InputSplit. |
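`DBInputFormat`'s splits carve a database table into row ranges rather than file byte ranges; each `DBInputSplit` is then handed to `getRecordReader`, which issues the corresponding bounded query. A configuration sketch for the old API; the JDBC driver, connection URL, table, columns, and the `EmployeeRecord` value class are all illustrative assumptions:

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.db.DBConfiguration;
import org.apache.hadoop.mapred.lib.db.DBInputFormat;
import org.apache.hadoop.mapred.lib.db.DBWritable;

public class DbJobSetup {

  // Hypothetical value class: one row of the "employees" table.
  public static class EmployeeRecord implements Writable, DBWritable {
    long id;
    String name;

    public void readFields(ResultSet rs) throws SQLException {
      id = rs.getLong("id");
      name = rs.getString("name");
    }
    public void write(PreparedStatement ps) throws SQLException {
      ps.setLong(1, id);
      ps.setString(2, name);
    }
    public void readFields(DataInput in) throws IOException {
      id = in.readLong();
      name = in.readUTF();
    }
    public void write(DataOutput out) throws IOException {
      out.writeLong(id);
      out.writeUTF(name);
    }
  }

  public static JobConf configure(JobConf job) {
    job.setInputFormat(DBInputFormat.class);
    // JDBC driver class, connection URL, and credentials (all illustrative).
    DBConfiguration.configureDB(job, "com.mysql.jdbc.Driver",
        "jdbc:mysql://dbhost/mydb", "user", "password");
    // Read the table ordered by "id" so each DBInputSplit covers a clean row range.
    DBInputFormat.setInput(job, EmployeeRecord.class, "employees",
        null /* conditions */, "id" /* orderBy */, "id", "name");
    return job;
  }
}
```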
Uses of InputSplit in org.apache.hadoop.mapreduce.split
-------------------------------------------------------

Methods in org.apache.hadoop.mapreduce.split with parameters of type InputSplit:

| Modifier and Type | Method and Description |
|---|---|
| `static void` | `JobSplitWriter.createSplitFiles(org.apache.hadoop.fs.Path jobSubmitDir, org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, InputSplit[] splits)` |
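`JobSplitWriter` is a framework-internal helper: during job submission it serializes the `InputSplit[]` returned by `getSplits` into split files under the job's staging directory, which the scheduler and the map tasks later read back. A hedged sketch of the call as listed above, only to show how the pieces fit; application code normally never invokes this directly, and the staging-directory handling here is simplified:

```java
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.InputFormat;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapreduce.split.JobSplitWriter;

public class SplitFileSketch {
  public static void writeSplits(JobConf job, Path jobSubmitDir) throws Exception {
    FileSystem fs = jobSubmitDir.getFileSystem(job);
    // Compute the splits the same way the framework does at submission time...
    InputFormat<?, ?> inputFormat = job.getInputFormat();
    InputSplit[] splits = inputFormat.getSplits(job, job.getNumMapTasks());
    // ...and persist them under the job's submit directory (conventionally the
    // job.split and job.splitmetainfo files) for later use by the map tasks.
    JobSplitWriter.createSplitFiles(jobSubmitDir, job, fs, splits);
  }
}
```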