| Package | Description |
|---|---|
| org.apache.hadoop.hdfs | |
| org.apache.hadoop.hdfs.client | This package provides the administrative APIs for HDFS. |
| org.apache.hadoop.hdfs.client.impl | |
| org.apache.hadoop.hdfs.protocol | |
| org.apache.hadoop.hdfs.protocol.datatransfer | |
| org.apache.hadoop.hdfs.protocolPB | |
| org.apache.hadoop.hdfs.server.protocol | Classes that allow HDFS to communicate information between the DataNode and the NameNode. |
| org.apache.hadoop.hdfs.shortcircuit | |
| Modifier and Type | Field | Description |
|---|---|---|
| `protected org.apache.hadoop.thirdparty.com.google.common.cache.LoadingCache<DatanodeInfo,DatanodeInfo>` | `DataStreamer.excludedNodes` | |
| Modifier and Type | Method | Description |
|---|---|---|
| `DatanodeInfo[]` | `DFSClient.datanodeReport(HdfsConstants.DatanodeReportType type)` | |
| `DatanodeInfo` | `DFSInputStream.getCurrentDatanode()` | Returns the datanode from which the stream is currently reading. |
| `DatanodeInfo[]` | `DistributedFileSystem.getDataNodeStats()` | |
| `DatanodeInfo[]` | `DistributedFileSystem.getDataNodeStats(HdfsConstants.DatanodeReportType type)` | |
| `DatanodeInfo[]` | `ViewDistributedFileSystem.getDataNodeStats()` | |
| `DatanodeInfo[]` | `ViewDistributedFileSystem.getDataNodeStats(HdfsConstants.DatanodeReportType type)` | |
| `DatanodeInfo[]` | `DFSOutputStream.getPipeline()` | |
| `DatanodeInfo[]` | `DistributedFileSystem.getSlowDatanodeStats()` | Retrieve stats for slow-running datanodes. |
| `DatanodeInfo[]` | `ViewDistributedFileSystem.getSlowDatanodeStats()` | |
| `DatanodeInfo[]` | `DFSClient.slowDatanodeReport()` | |
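As a usage sketch, the `getDataNodeStats` overloads above can be called through `DistributedFileSystem`. This assumes a reachable HDFS cluster; the namenode URI is illustrative:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.DatanodeReportType;

public class DatanodeReportExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Illustrative URI; point this at a real namenode.
        try (FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf)) {
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            // LIVE restricts the report to live datanodes; the no-arg
            // getDataNodeStats() returns the report for all datanodes.
            DatanodeInfo[] live = dfs.getDataNodeStats(DatanodeReportType.LIVE);
            for (DatanodeInfo dn : live) {
                System.out.println(dn.getHostName() + " remaining=" + dn.getRemaining());
            }
        }
    }
}
```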
| Modifier and Type | Method | Description |
|---|---|---|
| `java.util.Set<DatanodeInfo>` | `DeadNodeDetector.clearAndGetDetectedDeadNodes()` | Remove dead nodes that are no longer used by any DFSInputStream from deadNodes. |
| `java.util.Map<ExtendedBlock,java.util.Set<DatanodeInfo>>` | `DFSUtilClient.CorruptedBlocks.getCorruptionMap()` | |
| `java.util.concurrent.ConcurrentHashMap<DatanodeInfo,DatanodeInfo>` | `DFSClient.getDeadNodes(DFSInputStream dfsInputStream)` | If deadNodeDetectionEnabled is true, return the dead nodes detected by all the DFSInputStreams in the same client. |
| `org.apache.hadoop.hdfs.DeadNodeDetector.UniqueQueue<DatanodeInfo>` | `DeadNodeDetector.getDeadNodesProbeQueue()` | |
| `protected java.util.concurrent.ConcurrentHashMap<DatanodeInfo,DatanodeInfo>` | `DFSInputStream.getLocalDeadNodes()` | |
| `org.apache.hadoop.hdfs.DeadNodeDetector.UniqueQueue<DatanodeInfo>` | `DeadNodeDetector.getSuspectNodesProbeQueue()` | |
| Modifier and Type | Method | Description |
|---|---|---|
| `void` | `DFSUtilClient.CorruptedBlocks.addCorruptedBlock(ExtendedBlock blk, DatanodeInfo node)` | Indicate that a block replica on the specified datanode is corrupted. |
| `void` | `DFSClient.addNodeToDeadNodeDetector(DFSInputStream dfsInputStream, DatanodeInfo datanodeInfo)` | Add the given datanode to the DeadNodeDetector. |
| `void` | `DeadNodeDetector.addNodeToDetect(DFSInputStream dfsInputStream, DatanodeInfo datanodeInfo)` | Add a datanode to suspectNodes and suspectAndDeadNodes. |
| `protected void` | `DFSInputStream.addToLocalDeadNodes(DatanodeInfo dnInfo)` | |
| `protected IOStreamPair` | `DFSClient.connectToDN(DatanodeInfo dn, int timeout, org.apache.hadoop.security.token.Token<BlockTokenIdentifier> blockToken)` | |
| `static IOStreamPair` | `DFSUtilClient.connectToDN(DatanodeInfo dn, int timeout, org.apache.hadoop.conf.Configuration conf, SaslDataTransferClient saslClient, javax.net.SocketFactory socketFactory, boolean connectToDnViaHostname, DataEncryptionKeyFactory dekFactory, org.apache.hadoop.security.token.Token<BlockTokenIdentifier> blockToken)` | Connect to the given datanode's data transfer port and return the resulting IOStreamPair. |
| `protected BlockReader` | `DFSInputStream.getBlockReader(LocatedBlock targetBlock, long offsetInBlock, long length, java.net.InetSocketAddress targetAddr, org.apache.hadoop.fs.StorageType storageType, DatanodeInfo datanode)` | |
| `int` | `ClientContext.getNetworkDistance(DatanodeInfo datanodeInfo)` | |
| `protected org.apache.hadoop.util.DataChecksum.Type` | `DFSClient.inferChecksumTypeByReading(LocatedBlock lb, DatanodeInfo dn)` | Infer the checksum type for a replica by sending an OP_READ_BLOCK for the first byte of that replica. |
| `boolean` | `DeadNodeDetector.isDeadNode(DatanodeInfo datanodeInfo)` | |
| `boolean` | `DFSClient.isDeadNode(DFSInputStream dfsInputStream, DatanodeInfo datanodeInfo)` | If deadNodeDetectionEnabled is true, judge based on whether this datanode is included in the DeadNodeDetector. |
| `protected void` | `DFSInputStream.removeFromLocalDeadNodes(DatanodeInfo dnInfo)` | |
| `void` | `DeadNodeDetector.removeNodeFromDeadNodeDetector(DFSInputStream dfsInputStream, DatanodeInfo datanodeInfo)` | Remove a suspect or dead node from suspectAndDeadNodes#dfsInputStream and the local deadNodes. |
| `void` | `DFSClient.removeNodeFromDeadNodeDetector(DFSInputStream dfsInputStream, DatanodeInfo datanodeInfo)` | Remove the given datanode from the DeadNodeDetector. |
| `protected boolean` | `StripedDataStreamer.setupPipelineInternal(DatanodeInfo[] nodes, org.apache.hadoop.fs.StorageType[] nodeStorageTypes, java.lang.String[] nodeStorageIDs)` | |
| Modifier and Type | Method | Description |
|---|---|---|
| `protected org.apache.hadoop.hdfs.DFSInputStream.DNAddrPair` | `DFSInputStream.getBestNodeDNAddrPair(LocatedBlock block, java.util.Collection<DatanodeInfo> ignoredNodes)` | Get the best node from which to stream the data. |
| `protected void` | `DFSInputStream.reportLostBlock(LocatedBlock lostBlock, java.util.Collection<DatanodeInfo> ignoredNodes)` | Warn the user of a lost block. |
| `protected void` | `DFSStripedInputStream.reportLostBlock(LocatedBlock lostBlock, java.util.Collection<DatanodeInfo> ignoredNodes)` | |
| Modifier and Type | Method | Description |
|---|---|---|
| `DatanodeInfo` | `HdfsDataInputStream.getCurrentDatanode()` | Get the datanode from which the stream is currently reading. |
| Modifier and Type | Method | Description |
|---|---|---|
| `BlockReaderFactory` | `BlockReaderFactory.setDatanodeInfo(DatanodeInfo datanode)` | |
| Modifier and Type | Class | Description |
|---|---|---|
| `class` | `DatanodeInfoWithStorage` | |
| Modifier and Type | Field | Description |
|---|---|---|
| `static DatanodeInfo[]` | `DatanodeInfo.EMPTY_ARRAY` | |
| Modifier and Type | Method | Description |
|---|---|---|
| `DatanodeInfo` | `DatanodeInfo.DatanodeInfoBuilder.build()` | |
| `DatanodeInfo[]` | `LocatedBlock.getCachedLocations()` | |
| `DatanodeInfo[]` | `ClientProtocol.getDatanodeReport(HdfsConstants.DatanodeReportType type)` | Get a report on the system's current datanodes. |
| `DatanodeInfo[]` | `StripedBlockInfo.getDatanodes()` | |
| `DatanodeInfo[]` | `ClientProtocol.getSlowDatanodeReport()` | Get a report on all of the slow datanodes. |
| Modifier and Type | Method | Description |
|---|---|---|
| `LocatedBlock` | `ClientProtocol.addBlock(java.lang.String src, java.lang.String clientName, ExtendedBlock previous, DatanodeInfo[] excludeNodes, long fileId, java.lang.String[] favoredNodes, java.util.EnumSet<AddBlockFlag> addBlockFlags)` | A client that wants to write an additional block to the indicated filename (which must currently be open for writing) should call addBlock(). |
| `void` | `LocatedBlock.addCachedLoc(DatanodeInfo loc)` | Add the location of a cached replica of the block. |
| `LocatedBlock` | `ClientProtocol.getAdditionalDatanode(java.lang.String src, long fileId, ExtendedBlock blk, DatanodeInfo[] existings, java.lang.String[] existingStorageIDs, DatanodeInfo[] excludes, int numAdditionalNodes, java.lang.String clientName)` | Get a datanode for an existing pipeline. |
| `DatanodeInfo.DatanodeInfoBuilder` | `DatanodeInfo.DatanodeInfoBuilder.setFrom(DatanodeInfo from)` | |
| Constructor | Description |
|---|---|
| `DatanodeInfo(DatanodeInfo from)` | |
| `DatanodeInfoWithStorage(DatanodeInfo from, java.lang.String storageID, org.apache.hadoop.fs.StorageType storageType)` | |
| `LocatedBlock(ExtendedBlock b, DatanodeInfo[] locs)` | |
| `LocatedBlock(ExtendedBlock b, DatanodeInfo[] locs, java.lang.String[] storageIDs, org.apache.hadoop.fs.StorageType[] storageTypes)` | |
| `LocatedBlock(ExtendedBlock b, DatanodeInfo[] locs, java.lang.String[] storageIDs, org.apache.hadoop.fs.StorageType[] storageTypes, long startOffset, boolean corrupt, DatanodeInfo[] cachedLocs)` | |
| `LocatedBlock(ExtendedBlock b, DatanodeInfoWithStorage[] locs, java.lang.String[] storageIDs, org.apache.hadoop.fs.StorageType[] storageTypes, long startOffset, boolean corrupt, DatanodeInfo[] cachedLocs)` | |
| `LocatedStripedBlock(ExtendedBlock b, DatanodeInfo[] locs, java.lang.String[] storageIDs, org.apache.hadoop.fs.StorageType[] storageTypes, byte[] indices, long startOffset, boolean corrupt, DatanodeInfo[] cachedLocs)` | |
| `StripedBlockInfo(ExtendedBlock block, DatanodeInfo[] datanodes, org.apache.hadoop.security.token.Token<BlockTokenIdentifier>[] blockTokens, byte[] blockIndices, ErasureCodingPolicy ecPolicy)` | |
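The builder and constructors above can be combined without a running cluster; a minimal sketch with illustrative block and datanode values (the pool ID, addresses, and port are made up):

```java
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.ExtendedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;

public class LocatedBlockExample {
    public static void main(String[] args) {
        // Illustrative block identity: (pool ID, block ID, length, generation stamp).
        ExtendedBlock blk = new ExtendedBlock("BP-1", 1001L, 134217728L, 1L);
        // Build a DatanodeInfo via DatanodeInfoBuilder (values are hypothetical).
        DatanodeInfo dn = new DatanodeInfo.DatanodeInfoBuilder()
                .setIpAddr("10.0.0.1")
                .setHostName("dn1.example.com")
                .setXferPort(9866)
                .build();
        // LocatedBlock(ExtendedBlock, DatanodeInfo[]) from the constructor table.
        LocatedBlock lb = new LocatedBlock(blk, new DatanodeInfo[] { dn });
        // Cached locations start empty; addCachedLoc records a cached replica.
        lb.addCachedLoc(dn);
        System.out.println(lb.getCachedLocations().length);
    }
}
```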
| Modifier and Type | Method | Description |
|---|---|---|
| `void` | `DataTransferProtocol.replaceBlock(ExtendedBlock blk, org.apache.hadoop.fs.StorageType storageType, org.apache.hadoop.security.token.Token<BlockTokenIdentifier> blockToken, java.lang.String delHint, DatanodeInfo source, java.lang.String storageId)` | Receive a block from a source datanode and then notify the namenode to remove the copy from the original datanode. |
| `void` | `Sender.replaceBlock(ExtendedBlock blk, org.apache.hadoop.fs.StorageType storageType, org.apache.hadoop.security.token.Token<BlockTokenIdentifier> blockToken, java.lang.String delHint, DatanodeInfo source, java.lang.String storageId)` | |
| `boolean` | `ReplaceDatanodeOnFailure.satisfy(short replication, DatanodeInfo[] existings, boolean isAppend, boolean isHflushed)` | Does the pipeline need a replacement datanode according to the policy? |
| `void` | `DataTransferProtocol.transferBlock(ExtendedBlock blk, org.apache.hadoop.security.token.Token<BlockTokenIdentifier> blockToken, java.lang.String clientName, DatanodeInfo[] targets, org.apache.hadoop.fs.StorageType[] targetStorageTypes, java.lang.String[] targetStorageIDs)` | Transfer a block to another datanode. |
| `void` | `Sender.transferBlock(ExtendedBlock blk, org.apache.hadoop.security.token.Token<BlockTokenIdentifier> blockToken, java.lang.String clientName, DatanodeInfo[] targets, org.apache.hadoop.fs.StorageType[] targetStorageTypes, java.lang.String[] targetStorageIds)` | |
| `void` | `DataTransferProtocol.writeBlock(ExtendedBlock blk, org.apache.hadoop.fs.StorageType storageType, org.apache.hadoop.security.token.Token<BlockTokenIdentifier> blockToken, java.lang.String clientName, DatanodeInfo[] targets, org.apache.hadoop.fs.StorageType[] targetStorageTypes, DatanodeInfo source, BlockConstructionStage stage, int pipelineSize, long minBytesRcvd, long maxBytesRcvd, long latestGenerationStamp, org.apache.hadoop.util.DataChecksum requestedChecksum, CachingStrategy cachingStrategy, boolean allowLazyPersist, boolean pinning, boolean[] targetPinnings, java.lang.String storageID, java.lang.String[] targetStorageIDs)` | Write a block to a datanode pipeline. |
| `void` | `Sender.writeBlock(ExtendedBlock blk, org.apache.hadoop.fs.StorageType storageType, org.apache.hadoop.security.token.Token<BlockTokenIdentifier> blockToken, java.lang.String clientName, DatanodeInfo[] targets, org.apache.hadoop.fs.StorageType[] targetStorageTypes, DatanodeInfo source, BlockConstructionStage stage, int pipelineSize, long minBytesRcvd, long maxBytesRcvd, long latestGenerationStamp, org.apache.hadoop.util.DataChecksum requestedChecksum, CachingStrategy cachingStrategy, boolean allowLazyPersist, boolean pinning, boolean[] targetPinnings, java.lang.String storageId, java.lang.String[] targetStorageIds)` | |
| Modifier and Type | Method | Description |
|---|---|---|
| `static DatanodeInfo[]` | `PBHelperClient.convert(java.util.List<org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.DatanodeInfoProto> list)` | |
| `static DatanodeInfo` | `PBHelperClient.convert(org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.DatanodeInfoProto di)` | |
| `static DatanodeInfo[]` | `PBHelperClient.convert(org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.DatanodeInfoProto[] di)` | |
| `static DatanodeInfo[]` | `PBHelperClient.convert(org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.DatanodeInfosProto datanodeInfosProto)` | |
| `DatanodeInfo[]` | `ClientNamenodeProtocolTranslatorPB.getDatanodeReport(HdfsConstants.DatanodeReportType type)` | |
| `DatanodeInfo[]` | `ClientNamenodeProtocolTranslatorPB.getSlowDatanodeReport()` | |
| Modifier and Type | Method | Description |
|---|---|---|
| `LocatedBlock` | `ClientNamenodeProtocolTranslatorPB.addBlock(java.lang.String src, java.lang.String clientName, ExtendedBlock previous, DatanodeInfo[] excludeNodes, long fileId, java.lang.String[] favoredNodes, java.util.EnumSet<AddBlockFlag> addBlockFlags)` | |
| `static org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.DatanodeInfoProto` | `PBHelperClient.convert(DatanodeInfo info)` | |
| `static java.util.List<? extends org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.DatanodeInfoProto>` | `PBHelperClient.convert(DatanodeInfo[] dnInfos)` | |
| `static java.util.List<? extends org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.DatanodeInfoProto>` | `PBHelperClient.convert(DatanodeInfo[] dnInfos, int startIdx)` | Copy from dnInfos to a target list of the same size, starting at startIdx. |
| `static org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.DatanodeInfoProto` | `PBHelperClient.convertDatanodeInfo(DatanodeInfo di)` | |
| `static org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.DatanodeInfosProto` | `PBHelperClient.convertToProto(DatanodeInfo[] datanodeInfos)` | |
| `LocatedBlock` | `ClientNamenodeProtocolTranslatorPB.getAdditionalDatanode(java.lang.String src, long fileId, ExtendedBlock blk, DatanodeInfo[] existings, java.lang.String[] existingStorageIDs, DatanodeInfo[] excludes, int numAdditionalNodes, java.lang.String clientName)` | |
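The startIdx overload's copy semantics (convert the elements of dnInfos from startIdx onward) can be illustrated with a plain-Java sketch; this stand-in uses strings rather than the real DatanodeInfo/DatanodeInfoProto types, and "proto:" merely marks a converted element:

```java
import java.util.ArrayList;
import java.util.List;

public class ConvertSlice {
    // Stand-in for PBHelperClient.convert(DatanodeInfo[] dnInfos, int startIdx):
    // convert each element from startIdx to the end into a new list.
    static List<String> convert(String[] dnInfos, int startIdx) {
        List<String> protos = new ArrayList<>(dnInfos.length - startIdx);
        for (int i = startIdx; i < dnInfos.length; i++) {
            protos.add("proto:" + dnInfos[i]); // placeholder for the proto conversion
        }
        return protos;
    }

    public static void main(String[] args) {
        String[] nodes = { "dn1", "dn2", "dn3" };
        System.out.println(convert(nodes, 1)); // [proto:dn2, proto:dn3]
    }
}
```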
| Modifier and Type | Method | Description |
|---|---|---|
| `DatanodeInfo` | `DatanodeStorageReport.getDatanodeInfo()` | |

| Constructor | Description |
|---|---|
| `DatanodeStorageReport(DatanodeInfo datanodeInfo, StorageReport[] storageReports)` | |
| Modifier and Type | Method | Description |
|---|---|---|
| `ShortCircuitShm.Slot` | `ShortCircuitCache.allocShmSlot(DatanodeInfo datanode, DomainPeer peer, org.apache.commons.lang3.mutable.MutableBoolean usedPeer, ExtendedBlockId blockId, java.lang.String clientName)` | Allocate a new shared memory slot. |
| `ShortCircuitShm.Slot` | `DfsClientShmManager.allocSlot(DatanodeInfo datanode, DomainPeer peer, org.apache.commons.lang3.mutable.MutableBoolean usedPeer, ExtendedBlockId blockId, java.lang.String clientName)` | |

| Modifier and Type | Method | Description |
|---|---|---|
| `void` | `DfsClientShmManager.Visitor.visit(java.util.HashMap<DatanodeInfo,DfsClientShmManager.PerDatanodeVisitorInfo> info)` | |
Copyright © 2008–2025 Apache Software Foundation. All rights reserved.