public static class MultiFileWordCount.MyInputFormat extends org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat<MultiFileWordCount.WordOffset,org.apache.hadoop.io.Text>
To use CombineFileInputFormat, one should extend it to return a (custom) RecordReader. CombineFileInputFormat uses CombineFileSplit.

| Constructor | Description |
|---|---|
| MyInputFormat() | |
| Modifier and Type | Method | Description |
|---|---|---|
| org.apache.hadoop.mapreduce.RecordReader&lt;MultiFileWordCount.WordOffset,org.apache.hadoop.io.Text&gt; | createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context) | |
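Since MyInputFormat exists only to supply a custom RecordReader, a subclass of CombineFileInputFormat typically overrides createRecordReader and hands each CombineFileSplit to a CombineFileRecordReader, which delegates to one per-file reader per path in the split. The sketch below follows that pattern; the key type LongWritable and the per-file reader class MyRecordReader are illustrative assumptions, not part of this API.

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.CombineFileRecordReader;
import org.apache.hadoop.mapreduce.lib.input.CombineFileSplit;

// Sketch of a CombineFileInputFormat subclass, modeled on MyInputFormat.
// MyRecordReader is a hypothetical per-file reader whose constructor takes
// (CombineFileSplit, TaskAttemptContext, Integer), as CombineFileRecordReader requires.
public class MyCombineFormat extends CombineFileInputFormat<LongWritable, Text> {

  @Override
  public RecordReader<LongWritable, Text> createRecordReader(
      InputSplit split, TaskAttemptContext context) throws IOException {
    // CombineFileRecordReader iterates over the files packed into the split,
    // constructing one MyRecordReader per file via reflection.
    return new CombineFileRecordReader<LongWritable, Text>(
        (CombineFileSplit) split, context, MyRecordReader.class);
  }
}
```

In the full MultiFileWordCount example, the key type is the nested WordOffset class and the per-file reader is a line-oriented reader; the structure above is otherwise the same.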
Methods inherited from class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat:
createPool, createPool, getFileBlockLocations, getSplits, isSplitable, setMaxBlocksNum, setMaxSplitSize, setMinSplitSizeNode, setMinSplitSizeRack

Methods inherited from class org.apache.hadoop.mapreduce.lib.input.FileInputFormat:
addInputPath, addInputPathRecursively, addInputPaths, computeSplitSize, getBlockIndex, getFormatMinSplitSize, getInputDirRecursive, getInputPathFilter, getInputPaths, getMaxBlockNum, getMaxSplitSize, getMinSplitSize, listStatus, makeSplit, makeSplit, setInputDirRecursive, setInputPathFilter, setInputPaths, setInputPaths, setMaxInputBlockNum, setMaxInputSplitSize, setMinInputSplitSize, shrinkStatus

public org.apache.hadoop.mapreduce.RecordReader&lt;MultiFileWordCount.WordOffset,org.apache.hadoop.io.Text&gt; createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context) throws java.io.IOException
Specified by:
createRecordReader in class org.apache.hadoop.mapreduce.lib.input.CombineFileInputFormat&lt;MultiFileWordCount.WordOffset,org.apache.hadoop.io.Text&gt;

Throws:
java.io.IOException

Copyright © 2008–2025 Apache Software Foundation. All rights reserved.