| Package | Description |
|---|---|
| org.apache.hadoop.fs | Implementations of AbstractFileSystem for HDFS over RPC and HDFS over web. |
| org.apache.hadoop.hdfs | |
| org.apache.hadoop.hdfs.client | This package provides the administrative APIs for HDFS. |
| org.apache.hadoop.hdfs.protocol | |
| org.apache.hadoop.hdfs.protocolPB | |
| org.apache.hadoop.hdfs.util | |
| Modifier and Type | Method | Description |
|---|---|---|
| LocatedBlock | HdfsBlockLocation.getLocatedBlock() | |
| Constructor | Description |
|---|---|
| HdfsBlockLocation(org.apache.hadoop.fs.BlockLocation loc, LocatedBlock block) | |
| Modifier and Type | Field | Description |
|---|---|---|
| protected LocatedBlock | DFSInputStream.currentLocatedBlock | |
| Modifier and Type | Method | Description |
|---|---|---|
| protected LocatedBlock | DFSInputStream.fetchBlockAt(long offset) | Fetch a block from the namenode and cache it. |
| protected LocatedBlock | DFSInputStream.getBlockAt(long offset) | Get the block at the specified position. |
| protected LocatedBlock | DFSInputStream.refreshLocatedBlock(LocatedBlock block) | Refresh cached block locations. |
| protected LocatedBlock | DFSStripedInputStream.refreshLocatedBlock(LocatedBlock block) | The super method DFSInputStream.refreshLocatedBlock(org.apache.hadoop.hdfs.protocol.LocatedBlock) refreshes the cached LocatedBlock by executing DFSInputStream.getBlockAt(long) again. |
| Modifier and Type | Method | Description |
|---|---|---|
| java.util.List<LocatedBlock> | DFSInputStream.getAllBlocks() | Return the collection of blocks that have already been located. |
| Modifier and Type | Method | Description |
|---|---|---|
| static ClientDatanodeProtocol | DFSUtilClient.createClientDatanodeProtocolProxy(DatanodeID datanodeid, org.apache.hadoop.conf.Configuration conf, int socketTimeout, boolean connectToDnViaHostname, LocatedBlock locatedBlock) | Create a ClientDatanodeProtocol proxy. |
| protected void | DFSInputStream.fetchBlockByteRange(LocatedBlock block, long start, long end, java.nio.ByteBuffer buf, DFSUtilClient.CorruptedBlocks corruptedBlocks) | |
| protected void | DFSStripedInputStream.fetchBlockByteRange(LocatedBlock block, long start, long end, java.nio.ByteBuffer buf, DFSUtilClient.CorruptedBlocks corruptedBlocks) | Real implementation of pread. |
| protected org.apache.hadoop.hdfs.DFSInputStream.DNAddrPair | DFSInputStream.getBestNodeDNAddrPair(LocatedBlock block, java.util.Collection<DatanodeInfo> ignoredNodes) | Get the best node from which to stream the data. |
| protected BlockReader | DFSInputStream.getBlockReader(LocatedBlock targetBlock, long offsetInBlock, long length, java.net.InetSocketAddress targetAddr, org.apache.hadoop.fs.StorageType storageType, DatanodeInfo datanode) | |
| protected org.apache.hadoop.util.DataChecksum.Type | DFSClient.inferChecksumTypeByReading(LocatedBlock lb, DatanodeInfo dn) | Infer the checksum type for a replica by sending an OP_READ_BLOCK for the first byte of that replica. |
| void | DFSClientFaultInjector.onCreateBlockReader(LocatedBlock block, int chunkIndex, long offset, long length) | |
| protected LocatedBlock | DFSInputStream.refreshLocatedBlock(LocatedBlock block) | Refresh cached block locations. |
| protected LocatedBlock | DFSStripedInputStream.refreshLocatedBlock(LocatedBlock block) | The super method DFSInputStream.refreshLocatedBlock(org.apache.hadoop.hdfs.protocol.LocatedBlock) refreshes the cached LocatedBlock by executing DFSInputStream.getBlockAt(long) again. |
| void | DFSClient.reportBadBlocks(LocatedBlock[] blocks) | Report corrupt blocks that were discovered by the client. |
| protected void | DFSInputStream.reportLostBlock(LocatedBlock lostBlock, java.util.Collection<DatanodeInfo> ignoredNodes) | Warn the user of a lost block. |
| protected void | DFSStripedInputStream.reportLostBlock(LocatedBlock lostBlock, java.util.Collection<DatanodeInfo> ignoredNodes) | |
| Modifier and Type | Method | Description |
|---|---|---|
| static org.apache.hadoop.fs.BlockLocation[] | DFSUtilClient.locatedBlocks2Locations(java.util.List<LocatedBlock> blocks) | Convert a List<LocatedBlock> to a BlockLocation[]. |
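The conversion above is the bridge between HDFS's internal LocatedBlock (block offset, length, and replica datanodes) and the generic org.apache.hadoop.fs.BlockLocation that FileSystem callers consume, for example via FileSystem.getFileBlockLocations. A minimal plain-JDK sketch of that flattening, using stand-in record types rather than the real Hadoop classes (the field names below are assumptions; the real types carry more state, such as storage types and corrupt-replica flags):

```java
import java.util.List;

public class Blocks2LocationsSketch {
    // Stand-ins for org.apache.hadoop.hdfs.protocol.LocatedBlock and
    // org.apache.hadoop.fs.BlockLocation; illustrative fields only.
    record Located(long offset, long length, List<String> hosts) {}
    record Location(String[] hosts, long offset, long length) {}

    // Flatten each located block into a host/offset/length triple,
    // which is the essence of what locatedBlocks2Locations returns.
    static Location[] toLocations(List<Located> blocks) {
        return blocks.stream()
                .map(b -> new Location(b.hosts().toArray(new String[0]),
                                       b.offset(), b.length()))
                .toArray(Location[]::new);
    }

    public static void main(String[] args) {
        List<Located> blocks = List.of(
                new Located(0, 128, List.of("dn1", "dn2")),
                new Located(128, 64, List.of("dn3")));
        Location[] locs = toLocations(blocks);
        assert locs.length == 2;
        assert locs[0].offset() == 0 && locs[0].hosts().length == 2;
        assert locs[1].length() == 64;
        System.out.println("ok");
    }
}
```

The real helper additionally deals with cached hosts and corrupt replicas; this sketch keeps only the per-block host/offset/length mapping.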
| Constructor | Description |
|---|---|
| CannotObtainBlockLengthException(LocatedBlock locatedBlock) | Constructs a CannotObtainBlockLengthException with the specified LocatedBlock for which the block length could not be obtained. |
| CannotObtainBlockLengthException(LocatedBlock locatedBlock, java.lang.String src) | Constructs a CannotObtainBlockLengthException with the specified LocatedBlock and source file for which the block length could not be obtained. |
| Modifier and Type | Method | Description |
|---|---|---|
| java.util.List<LocatedBlock> | HdfsDataInputStream.getAllBlocks() | Get the collection of blocks that have already been located. |
| Modifier and Type | Class | Description |
|---|---|---|
| class | LocatedStripedBlock | LocatedBlock with striped block support. |
| Modifier and Type | Method | Description |
|---|---|---|
| LocatedBlock | ClientProtocol.addBlock(java.lang.String src, java.lang.String clientName, ExtendedBlock previous, DatanodeInfo[] excludeNodes, long fileId, java.lang.String[] favoredNodes, java.util.EnumSet<AddBlockFlag> addBlockFlags) | A client that wants to write an additional block to the indicated filename (which must currently be open for writing) should call addBlock(). |
| LocatedBlock | LocatedBlocks.get(int index) | Get the located block at the given index. |
| LocatedBlock | ClientProtocol.getAdditionalDatanode(java.lang.String src, long fileId, ExtendedBlock blk, DatanodeInfo[] existings, java.lang.String[] existingStorageIDs, DatanodeInfo[] excludes, int numAdditionalNodes, java.lang.String clientName) | Get a datanode for an existing pipeline. |
| LocatedBlock | LastBlockWithStatus.getLastBlock() | |
| LocatedBlock | LocatedBlocks.getLastLocatedBlock() | Get the last located block. |
| LocatedBlock | ClientProtocol.updateBlockForPipeline(ExtendedBlock block, java.lang.String clientName) | Get a new generation stamp, together with an access token, for a block under construction. This method is called only when a client needs to recover a failed pipeline or set up a pipeline for appending to a block. |
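The addBlock/getAdditionalDatanode/updateBlockForPipeline trio is the namenode side of the client write path: each call hands back a LocatedBlock naming the datanodes to write to, and the excludeNodes parameter lets the client retry allocation without the datanodes that just failed. A hedged plain-Java sketch of that retry loop, with a toy allocator standing in for the namenode (everything here except the excludeNodes idea is an assumption, not Hadoop code):

```java
import java.util.*;

public class AddBlockRetrySketch {
    // Stand-in for ClientProtocol.addBlock: pick the first live datanode
    // that is not in the exclude set.
    static String allocate(List<String> liveNodes, Set<String> excludeNodes) {
        return liveNodes.stream()
                .filter(dn -> !excludeNodes.contains(dn))
                .findFirst()
                .orElseThrow(() -> new IllegalStateException("no datanode available"));
    }

    // Returns the datanode on which the pipeline was finally established.
    static String writeBlock(List<String> liveNodes, Set<String> failingNodes) {
        Set<String> excludeNodes = new HashSet<>();
        while (true) {
            String dn = allocate(liveNodes, excludeNodes); // the "addBlock" call
            if (!failingNodes.contains(dn)) {
                return dn;                                 // pipeline set up
            }
            excludeNodes.add(dn);                          // retry without the failure
        }
    }

    public static void main(String[] args) {
        List<String> live = List.of("dn1", "dn2", "dn3");
        assert writeBlock(live, Set.of("dn1")).equals("dn2");
        assert writeBlock(live, Set.of()).equals("dn1");
        System.out.println("ok");
    }
}
```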
| Modifier and Type | Method | Description |
|---|---|---|
| java.util.List<LocatedBlock> | LocatedBlocks.getLocatedBlocks() | Get the located blocks. |
| Modifier and Type | Method | Description |
|---|---|---|
| void | ClientProtocol.reportBadBlocks(LocatedBlock[] blocks) | The client wants to report corrupted blocks (blocks with specified locations on datanodes). |
| Modifier and Type | Method | Description |
|---|---|---|
| void | LocatedBlocks.insertRange(int blockIdx, java.util.List<LocatedBlock> newBlocks) | |
| Constructor | Description |
|---|---|
| LastBlockWithStatus(LocatedBlock lastBlock, HdfsFileStatus fileStatus) | |
| LocatedBlocks(long flength, boolean isUnderConstuction, java.util.List<LocatedBlock> blks, LocatedBlock lastBlock, boolean isLastBlockCompleted, org.apache.hadoop.fs.FileEncryptionInfo feInfo, ErasureCodingPolicy ecPolicy) | |
| Modifier and Type | Method | Description |
|---|---|---|
| LocatedBlock | ClientNamenodeProtocolTranslatorPB.addBlock(java.lang.String src, java.lang.String clientName, ExtendedBlock previous, DatanodeInfo[] excludeNodes, long fileId, java.lang.String[] favoredNodes, java.util.EnumSet<AddBlockFlag> addBlockFlags) | |
| static LocatedBlock[] | PBHelperClient.convertLocatedBlock(org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.LocatedBlockProto[] lb) | |
| static LocatedBlock | PBHelperClient.convertLocatedBlockProto(org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.LocatedBlockProto proto) | |
| static LocatedBlock[] | PBHelperClient.convertLocatedBlocks(org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.LocatedBlockProto[] lb) | |
| LocatedBlock | ClientNamenodeProtocolTranslatorPB.getAdditionalDatanode(java.lang.String src, long fileId, ExtendedBlock blk, DatanodeInfo[] existings, java.lang.String[] existingStorageIDs, DatanodeInfo[] excludes, int numAdditionalNodes, java.lang.String clientName) | |
| LocatedBlock | ClientNamenodeProtocolTranslatorPB.updateBlockForPipeline(ExtendedBlock block, java.lang.String clientName) | |
| Modifier and Type | Method | Description |
|---|---|---|
| static java.util.List<LocatedBlock> | PBHelperClient.convertLocatedBlock(java.util.List<org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.LocatedBlockProto> lb) | |
| static java.util.List<LocatedBlock> | PBHelperClient.convertLocatedBlocks(java.util.List<org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.LocatedBlockProto> lb) | |
| Modifier and Type | Method | Description |
|---|---|---|
| static org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.LocatedBlockProto | PBHelperClient.convertLocatedBlock(LocatedBlock b) | |
| static org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.LocatedBlockProto[] | PBHelperClient.convertLocatedBlocks(LocatedBlock[] lb) | |
| void | ClientNamenodeProtocolTranslatorPB.reportBadBlocks(LocatedBlock[] blocks) | |
| Modifier and Type | Method | Description |
|---|---|---|
| static java.util.List<org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.LocatedBlockProto> | PBHelperClient.convertLocatedBlocks2(java.util.List<LocatedBlock> lb) | |
| Constructor | Description |
|---|---|
| ClientDatanodeProtocolTranslatorPB(DatanodeID datanodeid, org.apache.hadoop.conf.Configuration conf, int socketTimeout, boolean connectToDnViaHostname, LocatedBlock locatedBlock) | |
| Modifier and Type | Method | Description |
|---|---|---|
| static LocatedBlock | StripedBlockUtil.constructInternalBlock(LocatedStripedBlock bg, int idxInReturnedLocs, int cellSize, int dataBlkNum, int idxInBlockGroup) | Creates an internal block at the given index of a block group. |
| static LocatedBlock[] | StripedBlockUtil.parseStripedBlockGroup(LocatedStripedBlock bg, int cellSize, int dataBlkNum, int parityBlkNum) | Parses a striped block group into individual blocks. |
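Both utilities rest on the striped layout arithmetic: a block group's bytes are written in cellSize chunks, round-robin across dataBlkNum internal blocks, so each internal block's length follows from the group size alone. A self-contained sketch of that length calculation (an illustration of the layout math only, not the actual StripedBlockUtil code, which also handles parity indices, locations, and tokens; the RS(3,2)-style parameters in main are illustrative assumptions):

```java
// Illustrative layout math behind parseStripedBlockGroup: bytes fill the
// group one cell at a time, round-robin across the data blocks, so an
// internal block's length is its full-stripe share plus its share of the
// final partial stripe.
public class StripedLayoutSketch {
    static long internalBlockLength(long groupSize, int cellSize,
                                    int dataBlkNum, int dataBlkIdx) {
        long stripeSize = (long) cellSize * dataBlkNum;
        long fullStripes = groupSize / stripeSize;
        long remainder = groupSize % stripeSize;      // bytes in the last, partial stripe
        long base = fullStripes * cellSize;           // this block's full-stripe share
        long lastCell = remainder - (long) dataBlkIdx * cellSize;
        if (lastCell <= 0) {
            return base;                              // last stripe never reaches this block
        }
        return base + Math.min(lastCell, cellSize);   // full or partial final cell
    }

    public static void main(String[] args) {
        int cell = 1024, dataBlks = 3;                // assumed RS(3,2)-style layout
        long groupSize = 3L * dataBlks * cell + 1536; // 3 full stripes + 1.5 cells
        assert internalBlockLength(groupSize, cell, dataBlks, 0) == 3 * cell + 1024;
        assert internalBlockLength(groupSize, cell, dataBlks, 1) == 3 * cell + 512;
        assert internalBlockLength(groupSize, cell, dataBlks, 2) == 3 * cell;
        System.out.println("ok");
    }
}
```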
Copyright © 2008–2025 Apache Software Foundation. All rights reserved.