public class AuditReplayMapper extends WorkloadMapper<org.apache.hadoop.io.LongWritable,org.apache.hadoop.io.Text,UserCommandKey,CountTimeWritable>
FileInputFormat with splitting disabled is used, so any files present
in the input path directory (given by the "auditreplay.input-path"
configuration) will be used as input, one file per mapper. The expected
format of these files is determined by the value of the
"auditreplay.command-parser.class" configuration, which defaults to
AuditLogDirectParser.
This generates a number of Counter
values which can be used to get information about the replay, including the
number of commands replayed, how many of them were "invalid" (threw an
exception), how many were "late" (replayed later than they should have been),
and the latency (from the client's perspective) of each command. If there are a
large number of "late" commands, you likely need to increase the number of
threads used and/or the number of mappers.
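To make the "late" notion above concrete: a command counts as late when the replay thread reaches it after its scheduled (relative) offset in the trace. A minimal self-contained sketch of that check, with hypothetical names (this is an illustration of the concept, not the mapper's actual implementation):

```java
// Illustrative sketch of the "late command" notion (hypothetical names,
// not part of the AuditReplayMapper API).
public class LatenessDemo {
    /**
     * True if a command scheduled to replay at scheduledOffsetMs into the
     * trace was actually issued at actualOffsetMs, with actual > scheduled.
     */
    static boolean isLate(long scheduledOffsetMs, long actualOffsetMs) {
        return actualOffsetMs > scheduledOffsetMs;
    }

    public static void main(String[] args) {
        // Command scheduled 1 s into the replay, issued at 1.25 s: late.
        System.out.println(isLate(1_000, 1_250)); // true
        // Command issued slightly early: not late.
        System.out.println(isLate(1_000, 990));   // false
    }
}
```

If many commands trip this check, the replay threads cannot keep pace with the trace, which is why the text suggests raising the thread count or mapper count.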
By default, commands will be replayed at the same rate as they were originally performed. However, a rate factor can be specified via the "auditreplay.rate-factor" configuration; all of the (relative) timestamps will be divided by this rate factor, effectively changing the rate at which they are replayed. For example, a rate factor of 2 would make the replay occur twice as fast, and a rate factor of 0.5 would make it occur half as fast.
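The timestamp arithmetic described above can be sketched in isolation. This is a minimal illustration of the division semantics, not code from the mapper; the method name adjustedOffsetMs is hypothetical:

```java
// Illustrative sketch: how a rate factor rescales a command's relative
// replay timestamp (adjusted = original / rateFactor).
public class RateFactorDemo {
    /** Returns the adjusted replay offset in milliseconds. */
    static long adjustedOffsetMs(long originalOffsetMs, double rateFactor) {
        return (long) (originalOffsetMs / rateFactor);
    }

    public static void main(String[] args) {
        // A command originally issued 10 seconds into the trace:
        System.out.println(adjustedOffsetMs(10_000, 2.0)); // 5000  -> twice as fast
        System.out.println(adjustedOffsetMs(10_000, 0.5)); // 20000 -> half as fast
        System.out.println(adjustedOffsetMs(10_000, 1.0)); // 10000 -> original pace
    }
}
```

Because the offset is divided (not multiplied) by the factor, values above 1 speed the replay up and values below 1 slow it down.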
| Modifier and Type | Class | Description |
|---|---|---|
| static class | AuditReplayMapper.CommandType | Define the type of command, either read or write. |
| static class | AuditReplayMapper.ReplayCommand | Definitions of the various commands that can be replayed. |
| static class | AuditReplayMapper.REPLAYCOUNTERS | Counter definitions for replay. |
| Modifier and Type | Field | Description |
|---|---|---|
| static java.lang.Class<AuditLogDirectParser> | COMMAND_PARSER_DEFAULT | |
| static java.lang.String | COMMAND_PARSER_KEY | |
| static boolean | CREATE_BLOCKS_DEFAULT | |
| static java.lang.String | CREATE_BLOCKS_KEY | |
| static java.lang.String | INDIVIDUAL_COMMANDS_COUNT_SUFFIX | |
| static java.lang.String | INDIVIDUAL_COMMANDS_COUNTER_GROUP | |
| static java.lang.String | INDIVIDUAL_COMMANDS_INVALID_SUFFIX | |
| static java.lang.String | INDIVIDUAL_COMMANDS_LATENCY_SUFFIX | |
| static java.lang.String | INPUT_PATH_KEY | |
| static int | NUM_THREADS_DEFAULT | |
| static java.lang.String | NUM_THREADS_KEY | |
| static java.lang.String | OUTPUT_PATH_KEY | |
| static double | RATE_FACTOR_DEFAULT | |
| static java.lang.String | RATE_FACTOR_KEY | |
| Constructor | Description |
|---|---|
| AuditReplayMapper() | |
| Modifier and Type | Method | Description |
|---|---|---|
| void | cleanup(org.apache.hadoop.mapreduce.Mapper.Context context) | |
| void | configureJob(org.apache.hadoop.mapreduce.Job job) | Setup input and output formats and optional reducer. |
| java.util.List<java.lang.String> | getConfigDescriptions() | Get a list of the description of each configuration that this mapper accepts. |
| java.lang.String | getDescription() | Get the description of the behavior of this mapper. |
| void | map(org.apache.hadoop.io.LongWritable lineNum, org.apache.hadoop.io.Text inputLine, org.apache.hadoop.mapreduce.Mapper.Context context) | |
| void | setup(org.apache.hadoop.mapreduce.Mapper.Context context) | |
| boolean | verifyConfigurations(org.apache.hadoop.conf.Configuration conf) | Verify that the provided configuration contains all configurations required by this mapper. |
public static final java.lang.String INPUT_PATH_KEY
public static final java.lang.String OUTPUT_PATH_KEY
public static final java.lang.String NUM_THREADS_KEY
public static final int NUM_THREADS_DEFAULT
public static final java.lang.String CREATE_BLOCKS_KEY
public static final boolean CREATE_BLOCKS_DEFAULT
public static final java.lang.String RATE_FACTOR_KEY
public static final double RATE_FACTOR_DEFAULT
public static final java.lang.String COMMAND_PARSER_KEY
public static final java.lang.Class<AuditLogDirectParser> COMMAND_PARSER_DEFAULT
public static final java.lang.String INDIVIDUAL_COMMANDS_COUNTER_GROUP
public static final java.lang.String INDIVIDUAL_COMMANDS_LATENCY_SUFFIX
public static final java.lang.String INDIVIDUAL_COMMANDS_INVALID_SUFFIX
public static final java.lang.String INDIVIDUAL_COMMANDS_COUNT_SUFFIX
public java.lang.String getDescription()
Get the description of the behavior of this mapper.
Specified by:
getDescription in class WorkloadMapper<org.apache.hadoop.io.LongWritable,org.apache.hadoop.io.Text,UserCommandKey,CountTimeWritable>

public java.util.List<java.lang.String> getConfigDescriptions()
Get a list of the description of each configuration that this mapper accepts.
Specified by:
getConfigDescriptions in class WorkloadMapper<org.apache.hadoop.io.LongWritable,org.apache.hadoop.io.Text,UserCommandKey,CountTimeWritable>

public boolean verifyConfigurations(org.apache.hadoop.conf.Configuration conf)
Verify that the provided configuration contains all configurations required by this mapper.
Specified by:
verifyConfigurations in class WorkloadMapper<org.apache.hadoop.io.LongWritable,org.apache.hadoop.io.Text,UserCommandKey,CountTimeWritable>
Parameters:
conf - configuration.

public void setup(org.apache.hadoop.mapreduce.Mapper.Context context)
           throws java.io.IOException
Overrides:
setup in class org.apache.hadoop.mapreduce.Mapper<org.apache.hadoop.io.LongWritable,org.apache.hadoop.io.Text,UserCommandKey,CountTimeWritable>
Throws:
java.io.IOException

public void map(org.apache.hadoop.io.LongWritable lineNum,
                org.apache.hadoop.io.Text inputLine,
                org.apache.hadoop.mapreduce.Mapper.Context context)
         throws java.io.IOException,
                java.lang.InterruptedException
Overrides:
map in class org.apache.hadoop.mapreduce.Mapper<org.apache.hadoop.io.LongWritable,org.apache.hadoop.io.Text,UserCommandKey,CountTimeWritable>
Throws:
java.io.IOException
java.lang.InterruptedException

public void cleanup(org.apache.hadoop.mapreduce.Mapper.Context context)
             throws java.lang.InterruptedException,
                    java.io.IOException
Overrides:
cleanup in class org.apache.hadoop.mapreduce.Mapper<org.apache.hadoop.io.LongWritable,org.apache.hadoop.io.Text,UserCommandKey,CountTimeWritable>
Throws:
java.lang.InterruptedException
java.io.IOException

public void configureJob(org.apache.hadoop.mapreduce.Job job)
Setup input and output formats and optional reducer.
Specified by:
configureJob in class WorkloadMapper<org.apache.hadoop.io.LongWritable,org.apache.hadoop.io.Text,UserCommandKey,CountTimeWritable>

Copyright © 2008–2025 Apache Software Foundation. All rights reserved.