@Private @Evolving
public class BlockPlacementPolicyWithUpgradeDomain extends BlockPlacementPolicyDefault

Nested classes/interfaces inherited from class BlockPlacementPolicy: BlockPlacementPolicy.NotEnoughReplicasException

Fields inherited from class BlockPlacementPolicyDefault: clusterMap, considerLoad, considerLoadFactor, heartbeatInterval, host2datanodeMap, tolerateHeartbeatMultiplier

Fields inherited from class BlockPlacementPolicy: LOG

| Constructor | Description |
|---|---|
| BlockPlacementPolicyWithUpgradeDomain() | |
| Modifier and Type | Method | Description |
|---|---|---|
| java.lang.String | getUpgradeDomainWithDefaultValue(org.apache.hadoop.hdfs.protocol.DatanodeInfo datanodeInfo) | |
| void | initialize(org.apache.hadoop.conf.Configuration conf, FSClusterStats stats, org.apache.hadoop.net.NetworkTopology clusterMap, org.apache.hadoop.hdfs.server.blockmanagement.Host2NodesMap host2datanodeMap) | Used to set up a BlockPlacementPolicy object. |
| protected boolean | isGoodDatanode(DatanodeDescriptor node, int maxTargetPerRack, boolean considerLoad, java.util.List&lt;DatanodeStorageInfo&gt; results, boolean avoidStaleNodes) | |
| boolean | isMovable(java.util.Collection&lt;org.apache.hadoop.hdfs.protocol.DatanodeInfo&gt; locs, org.apache.hadoop.hdfs.protocol.DatanodeInfo source, org.apache.hadoop.hdfs.protocol.DatanodeInfo target) | Check whether the move is allowed. |
| protected java.util.Collection&lt;DatanodeStorageInfo&gt; | pickupReplicaSet(java.util.Collection&lt;DatanodeStorageInfo&gt; moreThanOne, java.util.Collection&lt;DatanodeStorageInfo&gt; exactlyOne, java.util.Map&lt;java.lang.String,java.util.List&lt;DatanodeStorageInfo&gt;&gt; rackMap) | Pick the replica set from which a replica of an over-replicated block should be deleted. |
| BlockPlacementStatus | verifyBlockPlacement(org.apache.hadoop.hdfs.protocol.DatanodeInfo[] locs, int numberOfReplicas) | Verify that the block's placement meets the requirements of the placement policy. |
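The placement check summarized above can be illustrated with a small, self-contained sketch. This is not the HDFS implementation: the Node record, the isPlacementGood helper, and the upgradeDomainFactor parameter are illustrative stand-ins, assuming the policy treats a placement as good when the replicas span at least min(replicaCount, factor) distinct upgrade domains.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class UpgradeDomainCheck {
    // Illustrative stand-in for a datanode and its upgrade-domain assignment.
    record Node(String host, String upgradeDomain) {}

    // Placement is considered good when the replicas span at least
    // min(replicaCount, upgradeDomainFactor) distinct upgrade domains.
    static boolean isPlacementGood(List<Node> replicas, int upgradeDomainFactor) {
        Set<String> domains = new HashSet<>();
        for (Node n : replicas) {
            domains.add(n.upgradeDomain());
        }
        return domains.size() >= Math.min(replicas.size(), upgradeDomainFactor);
    }

    public static void main(String[] args) {
        // Three replicas spread over three upgrade domains: good.
        List<Node> spread = List.of(
            new Node("dn1", "ud1"), new Node("dn2", "ud2"), new Node("dn3", "ud3"));
        // Two replicas share a domain, so only two domains are covered: bad.
        List<Node> clumped = List.of(
            new Node("dn1", "ud1"), new Node("dn2", "ud1"), new Node("dn3", "ud2"));
        System.out.println(isPlacementGood(spread, 3));   // true
        System.out.println(isPlacementGood(clumped, 3));  // false
    }
}
```

With a spread-out placement, taking any one upgrade domain offline still leaves the other replicas reachable, which is the point of the upgrade-domain policy.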
Methods inherited from class BlockPlacementPolicyDefault: addToExcludedNodes, chooseDataNode, chooseDataNode, chooseFavouredNodes, chooseLocalOrFavoredStorage, chooseLocalRack, chooseLocalStorage, chooseLocalStorage, chooseRandom, chooseRandom, chooseRemoteRack, chooseReplicasToDelete, chooseReplicaToDelete, chooseTarget, chooseTarget, chooseTargetInOrder, getExcludeSlowNodesEnabled, getMaxNodesPerRack, getMinBlocksForWrite, setExcludeSlowNodesEnabled, setMinBlocksForWrite

Methods inherited from class BlockPlacementPolicy: adjustSetsWithChosenReplica, getDatanodeInfo, getRack, splitNodesWithRack

public BlockPlacementPolicyWithUpgradeDomain()
public void initialize(org.apache.hadoop.conf.Configuration conf,
                       FSClusterStats stats,
                       org.apache.hadoop.net.NetworkTopology clusterMap,
                       org.apache.hadoop.hdfs.server.blockmanagement.Host2NodesMap host2datanodeMap)

Used to set up a BlockPlacementPolicy object.

Overrides: initialize in class BlockPlacementPolicyDefault

Parameters:
conf - the configuration object
stats - retrieve cluster status from here
clusterMap - cluster topology

protected boolean isGoodDatanode(DatanodeDescriptor node,
                                 int maxTargetPerRack,
                                 boolean considerLoad,
                                 java.util.List<DatanodeStorageInfo> results,
                                 boolean avoidStaleNodes)
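The upgrade-domain subclass layers one extra constraint on top of the default datanode checks when choosing targets. A minimal sketch of that idea, assuming the extra rule is to avoid placing a new replica in an upgrade domain already used by the replicas chosen so far; all names here are illustrative, not the HDFS code, and passesDefaultChecks is a placeholder for the inherited load/staleness/per-rack checks.

```java
import java.util.List;

public class GoodDatanodeSketch {
    // Illustrative stand-in for a datanode and its upgrade-domain assignment.
    record Node(String host, String upgradeDomain) {}

    // Placeholder for the default suitability checks inherited from the
    // base policy (load, staleness, per-rack limits, ...).
    static boolean passesDefaultChecks(Node node) {
        return node != null;
    }

    // Extra upgrade-domain rule (assumption for illustration): reject a
    // candidate whose upgrade domain is already used by a chosen replica.
    static boolean isGoodDatanode(Node candidate, List<Node> chosenSoFar) {
        if (!passesDefaultChecks(candidate)) {
            return false;
        }
        for (Node chosen : chosenSoFar) {
            if (chosen.upgradeDomain().equals(candidate.upgradeDomain())) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        List<Node> chosen = List.of(new Node("dn1", "ud1"));
        System.out.println(isGoodDatanode(new Node("dn2", "ud2"), chosen)); // true
        System.out.println(isGoodDatanode(new Node("dn3", "ud1"), chosen)); // false
    }
}
```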
public java.lang.String getUpgradeDomainWithDefaultValue(org.apache.hadoop.hdfs.protocol.DatanodeInfo datanodeInfo)
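The method name suggests that datanodes without an explicitly assigned upgrade domain receive a fallback value. A hedged sketch of that pattern follows; the fallback shown (treating each unassigned host as its own upgrade domain by returning the host name) is an assumption for illustration, not necessarily what HDFS returns.

```java
public class UpgradeDomainLookup {
    // Return the node's assigned upgrade domain, or a per-host fallback
    // when none is configured. Using the host name as the fallback means
    // every unassigned node counts as its own upgrade domain (assumption).
    static String getUpgradeDomainWithDefaultValue(String assignedDomain, String host) {
        if (assignedDomain == null || assignedDomain.isEmpty()) {
            return host;
        }
        return assignedDomain;
    }

    public static void main(String[] args) {
        System.out.println(getUpgradeDomainWithDefaultValue("ud1", "dn1")); // ud1
        System.out.println(getUpgradeDomainWithDefaultValue(null, "dn2"));  // dn2
    }
}
```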
public BlockPlacementStatus verifyBlockPlacement(org.apache.hadoop.hdfs.protocol.DatanodeInfo[] locs,
                                                 int numberOfReplicas)

Verify that the block's placement meets the requirements of the placement policy.

Overrides: verifyBlockPlacement in class BlockPlacementPolicyDefault

Parameters:
locs - block with locations
numberOfReplicas - replica number of file to be verified

protected java.util.Collection<DatanodeStorageInfo> pickupReplicaSet(java.util.Collection<DatanodeStorageInfo> moreThanOne,
                                                                     java.util.Collection<DatanodeStorageInfo> exactlyOne,
                                                                     java.util.Map<java.lang.String,java.util.List<DatanodeStorageInfo>> rackMap)

Pick the replica set from which a replica of an over-replicated block should be deleted.

Overrides: pickupReplicaSet in class BlockPlacementPolicyDefault

public boolean isMovable(java.util.Collection<org.apache.hadoop.hdfs.protocol.DatanodeInfo> locs,
                         org.apache.hadoop.hdfs.protocol.DatanodeInfo source,
                         org.apache.hadoop.hdfs.protocol.DatanodeInfo target)

Check whether the move is allowed.

Overrides: isMovable in class BlockPlacementPolicyDefault

Parameters:
locs - all replicas including source and target
source - source replica of the move
target - target replica of the move

Copyright © 2008–2025 Apache Software Foundation. All rights reserved.