@Private
public class BlockPlacementPolicyRackFaultTolerant
extends BlockPlacementPolicyDefault
Nested classes/interfaces inherited from class BlockPlacementPolicy:
BlockPlacementPolicy.NotEnoughReplicasException

Fields inherited from class BlockPlacementPolicyDefault:
clusterMap, considerLoad, considerLoadFactor, heartbeatInterval, host2datanodeMap, tolerateHeartbeatMultiplier

Fields inherited from class BlockPlacementPolicy:
LOG

| Constructor | Description |
|---|---|
| BlockPlacementPolicyRackFaultTolerant() |  |
| Modifier and Type | Method | Description |
|---|---|---|
| protected org.apache.hadoop.net.Node | chooseTargetInOrder(int numOfReplicas, org.apache.hadoop.net.Node writer, java.util.Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, java.util.List<DatanodeStorageInfo> results, boolean avoidStaleNodes, boolean newBlock, java.util.EnumMap<org.apache.hadoop.fs.StorageType,java.lang.Integer> storageTypes) | Choose numOfReplicas targets in order. |
| protected int[] | getMaxNodesPerRack(int numOfChosen, int numOfReplicas) | Calculate the maximum number of replicas to allocate per rack. |
| protected java.util.Collection<DatanodeStorageInfo> | pickupReplicaSet(java.util.Collection<DatanodeStorageInfo> moreThanOne, java.util.Collection<DatanodeStorageInfo> exactlyOne, java.util.Map<java.lang.String,java.util.List<DatanodeStorageInfo>> rackMap) | Pick up the replica node set from which to delete a replica when the block is over-replicated. |
| BlockPlacementStatus | verifyBlockPlacement(org.apache.hadoop.hdfs.protocol.DatanodeInfo[] locs, int numberOfReplicas) | Verify that the block's placement meets the requirements of the placement policy, i.e. that replicas are spread across a sufficient number of racks. |
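The per-rack cap returned by getMaxNodesPerRack amounts to a ceiling division of the total replica count over the rack count, with guards for small clusters. The following is a hedged, self-contained sketch of that calculation, not the Hadoop source; the method name, parameter list, and guard order are assumptions for illustration (the real method reads cluster size and rack count from the class's NetworkTopology).

```java
// Illustrative sketch (not the Hadoop implementation): capping replicas
// per rack so that replicas spread across as many racks as possible.
public class MaxNodesPerRackSketch {
    /**
     * Returns {adjustedNumOfReplicas, maxNodesPerRack}, mirroring the
     * int[] contract of getMaxNodesPerRack described on this page.
     */
    static int[] maxNodesPerRack(int numOfChosen, int numOfReplicas,
                                 int clusterSize, int numOfRacks) {
        int total = numOfChosen + numOfReplicas;
        if (total > clusterSize) {           // cannot place more replicas than nodes
            numOfReplicas -= (total - clusterSize);
            total = clusterSize;
        }
        if (numOfRacks == 1 || total <= 1) { // nothing to spread
            return new int[] {numOfReplicas, total};
        }
        if (total < numOfRacks) {            // enough racks for one replica each
            return new int[] {numOfReplicas, 1};
        }
        // Otherwise spread evenly: ceil(total / numOfRacks).
        int maxPerRack = (total - 1) / numOfRacks + 1;
        return new int[] {numOfReplicas, maxPerRack};
    }

    public static void main(String[] args) {
        // 9 replicas over 6 racks -> at most ceil(9/6) = 2 per rack.
        System.out.println(maxNodesPerRack(0, 9, 30, 6)[1]); // prints 2
        // 3 replicas over 6 racks -> one per rack.
        System.out.println(maxNodesPerRack(0, 3, 30, 6)[1]); // prints 1
    }
}
```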
Methods inherited from class BlockPlacementPolicyDefault:
addToExcludedNodes, chooseDataNode, chooseFavouredNodes, chooseLocalOrFavoredStorage, chooseLocalRack, chooseLocalStorage, chooseRandom, chooseRemoteRack, chooseReplicasToDelete, chooseReplicaToDelete, chooseTarget, getExcludeSlowNodesEnabled, getMinBlocksForWrite, initialize, isMovable, setExcludeSlowNodesEnabled, setMinBlocksForWrite

Methods inherited from class BlockPlacementPolicy:
adjustSetsWithChosenReplica, getDatanodeInfo, getRack, splitNodesWithRack

public BlockPlacementPolicyRackFaultTolerant()
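The constructor is not normally called from user code; the NameNode instantiates the configured placement policy reflectively. A hedged configuration sketch for selecting this policy (the property name follows standard HDFS configuration; verify against your release's hdfs-default.xml):

```xml
<!-- hdfs-site.xml: select the rack-fault-tolerant placement policy
     for the NameNode (takes effect on NameNode restart). -->
<property>
  <name>dfs.block.replicator.classname</name>
  <value>org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyRackFaultTolerant</value>
</property>
```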
protected int[] getMaxNodesPerRack(int numOfChosen,
int numOfReplicas)
Calculate the maximum number of replicas to allocate per rack.

Overrides:
getMaxNodesPerRack in class BlockPlacementPolicyDefault

Parameters:
numOfChosen - The number of already chosen nodes.
numOfReplicas - The number of additional nodes to allocate.

protected org.apache.hadoop.net.Node chooseTargetInOrder(int numOfReplicas,
org.apache.hadoop.net.Node writer,
java.util.Set<org.apache.hadoop.net.Node> excludedNodes,
long blocksize,
int maxNodesPerRack,
java.util.List<DatanodeStorageInfo> results,
boolean avoidStaleNodes,
boolean newBlock,
java.util.EnumMap<org.apache.hadoop.fs.StorageType,java.lang.Integer> storageTypes)
throws BlockPlacementPolicy.NotEnoughReplicasException
Choose numOfReplicas targets in order:
1. If the total number of expected replicas is less than the number of racks in the cluster, choose targets randomly.
2. If the total number of expected replicas is greater than the number of racks:
2a. Fill each rack with exactly (maxNodesPerRack - 1) replicas.
2b. Place one more replica on some randomly chosen racks, one per rack, until numOfReplicas targets have been chosen.
If no node can be chosen, a BlockPlacementPolicy.NotEnoughReplicasException is thrown. For normal setups, step 2 suffices, so in the end the numbers of replicas on any two racks differ by at most 1. Either way, the policy always prefers local storage.
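The even spread described above (per-rack replica counts differing by at most 1) can be illustrated with a small round-robin sketch. This is not the Hadoop code: the rack names and the fixed selection order are invented for illustration, and the real method also honors excluded nodes, stale nodes, and storage types.

```java
import java.util.*;

// Illustrative sketch (not the Hadoop implementation): distributing
// replicas round-robin over racks, as in steps 2a/2b above, so that
// any two racks end up differing by at most one replica.
public class RackSpreadSketch {
    static Map<String, Integer> spread(List<String> racks, int numOfReplicas) {
        Map<String, Integer> perRack = new LinkedHashMap<>();
        for (String r : racks) perRack.put(r, 0);
        for (int i = 0; i < numOfReplicas; i++) {
            String rack = racks.get(i % racks.size()); // fill racks in turn
            perRack.merge(rack, 1, Integer::sum);
        }
        return perRack;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts =
            spread(Arrays.asList("/rack1", "/rack2", "/rack3"), 7);
        System.out.println(counts); // prints {/rack1=3, /rack2=2, /rack3=2}
        int max = Collections.max(counts.values());
        int min = Collections.min(counts.values());
        System.out.println(max - min <= 1); // prints true
    }
}
```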
Overrides:
chooseTargetInOrder in class BlockPlacementPolicyDefault

Throws:
BlockPlacementPolicy.NotEnoughReplicasException

public BlockPlacementStatus verifyBlockPlacement(org.apache.hadoop.hdfs.protocol.DatanodeInfo[] locs,
                                                 int numberOfReplicas)
Description copied from class: BlockPlacementPolicy
Verify that the block's placement meets the requirements of the placement policy, i.e. that replicas are spread across a sufficient number of racks.

Overrides:
verifyBlockPlacement in class BlockPlacementPolicyDefault

Parameters:
locs - block with locations
numberOfReplicas - replica number of file to be verified

protected java.util.Collection<DatanodeStorageInfo> pickupReplicaSet(java.util.Collection<DatanodeStorageInfo> moreThanOne,
                                                                     java.util.Collection<DatanodeStorageInfo> exactlyOne,
                                                                     java.util.Map<java.lang.String,java.util.List<DatanodeStorageInfo>> rackMap)
Description copied from class: BlockPlacementPolicyDefault
Pick up the replica node set from which to delete a replica when the block is over-replicated.

Overrides:
pickupReplicaSet in class BlockPlacementPolicyDefault

Copyright © 2008–2025 Apache Software Foundation. All rights reserved.