All Implemented Interfaces: java.io.Closeable, java.lang.AutoCloseable, org.apache.hadoop.conf.Configurable, org.apache.hadoop.fs.BulkDeleteSource, org.apache.hadoop.fs.PathCapabilities, org.apache.hadoop.security.token.DelegationTokenIssuer

@Private
@Evolving
public final class OBSFileSystem
extends org.apache.hadoop.fs.FileSystem
This class is marked as private because client code should not create it
directly; use FileSystem.get(Configuration) and variants to create
one.
If cast to OBSFileSystem, extra methods and features may be
accessed; consider those private and unstable.
Because it prints some of the state of the instrumentation, the output of
toString() must also be considered unstable.
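As a minimal sketch of the recommended entry point, an instance is obtained through the FileSystem factory methods rather than the constructor; the bucket name below is a placeholder and a configured, reachable OBS endpoint is assumed:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ObsGetExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // "my-bucket" is a placeholder; credentials and endpoint settings
        // must already be present in the Configuration.
        FileSystem fs = FileSystem.get(URI.create("obs://my-bucket/"), conf);
        try {
            System.out.println(fs.getUri());
            System.out.println(fs.exists(new Path("/tmp/example.txt")));
        } finally {
            fs.close();
        }
    }
}
```

FileSystem.get returns a cached instance per URI scheme and authority, so closing it affects other users of the same cache entry; set fs.obs.impl.disable.cache only if your deployment documents that key.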
| Modifier and Type | Field | Description |
|---|---|---|
| static org.slf4j.Logger | LOG | Class logger. |
| Constructor | Description |
|---|---|
| OBSFileSystem() | |
| Modifier and Type | Method | Description |
|---|---|---|
| org.apache.hadoop.fs.FSDataOutputStream | append(org.apache.hadoop.fs.Path f, int bufferSize, org.apache.hadoop.util.Progressable progress) | Append to an existing file (optional operation). |
| protected java.net.URI | canonicalizeUri(java.net.URI rawUri) | Canonicalize the given URI. |
| void | checkPath(org.apache.hadoop.fs.Path path) | Check that a Path belongs to this FileSystem. |
| void | close() | Close the filesystem. |
| void | copyFromLocalFile(boolean delSrc, boolean overwrite, org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst) | Copy the src file on the local disk to the filesystem at the given dst name. |
| org.apache.hadoop.fs.FSDataOutputStream | create(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, boolean overwrite, int bufferSize, short replication, long blkSize, org.apache.hadoop.util.Progressable progress) | Create an FSDataOutputStream at the indicated Path with write-progress reporting. |
| org.apache.hadoop.fs.FSDataOutputStream | create(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, java.util.EnumSet<org.apache.hadoop.fs.CreateFlag> flags, int bufferSize, short replication, long blkSize, org.apache.hadoop.util.Progressable progress, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt) | Create an FSDataOutputStream at the indicated Path with write-progress reporting. |
| org.apache.hadoop.fs.FSDataOutputStream | createNonRecursive(org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.permission.FsPermission permission, java.util.EnumSet<org.apache.hadoop.fs.CreateFlag> flags, int bufferSize, short replication, long blkSize, org.apache.hadoop.util.Progressable progress) | Open an FSDataOutputStream at the indicated Path with write-progress reporting. |
| boolean | delete(org.apache.hadoop.fs.Path f, boolean recursive) | Delete a Path. |
| boolean | exists(org.apache.hadoop.fs.Path f) | Check if a path exists. |
| java.lang.String | getCanonicalServiceName() | Override getCanonicalServiceName and return null since delegation token is not supported. |
| org.apache.hadoop.fs.ContentSummary | getContentSummary(org.apache.hadoop.fs.Path f) | Return the ContentSummary of a given Path. |
| long | getDefaultBlockSize() | Deprecated. Use getDefaultBlockSize(Path) instead. |
| long | getDefaultBlockSize(org.apache.hadoop.fs.Path f) | Imitate HDFS to return the number of bytes that large input files should be optimally split into to minimize I/O time. |
| int | getDefaultPort() | Return the default port for this FileSystem. |
| org.apache.hadoop.fs.FileStatus | getFileStatus(org.apache.hadoop.fs.Path f) | Return a file status object that represents the path. |
| java.lang.String | getScheme() | Return the protocol scheme for the FileSystem. |
| java.net.URI | getUri() | Return a URI whose scheme and authority identify this FileSystem. |
| org.apache.hadoop.fs.Path | getWorkingDirectory() | Return the current working directory for the given file system. |
| void | initialize(java.net.URI name, org.apache.hadoop.conf.Configuration originalConf) | Initialize a FileSystem. |
| org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.LocatedFileStatus> | listFiles(org.apache.hadoop.fs.Path f, boolean recursive) | List the statuses and block locations of the files in the given path. |
| org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.LocatedFileStatus> | listLocatedStatus(org.apache.hadoop.fs.Path f) | List the statuses of the files/directories in the given path if the path is a directory. |
| org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.LocatedFileStatus> | listLocatedStatus(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.PathFilter filter) | List a directory. |
| org.apache.hadoop.fs.FileStatus[] | listStatus(org.apache.hadoop.fs.Path f) | List the statuses of the files/directories in the given path if the path is a directory. |
| org.apache.hadoop.fs.FileStatus[] | listStatus(org.apache.hadoop.fs.Path f, boolean recursive) | This public interface is provided specially for Huawei MRS. |
| boolean | mkdirs(org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.permission.FsPermission permission) | Make the given path and all non-existent parents into directories. |
| org.apache.hadoop.fs.FSDataInputStream | open(org.apache.hadoop.fs.Path f, int bufferSize) | Open an FSDataInputStream at the indicated Path. |
| boolean | rename(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst) | Rename Path src to Path dst. |
| void | setWorkingDirectory(org.apache.hadoop.fs.Path newDir) | Set the current working directory for the file system. |
| java.lang.String | toString() | Return a string that describes this filesystem instance. |
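As a sketch of the write and read paths summarized in the table above, the following round trip uses the create and open entry points; the bucket name and file path are placeholders, and a reachable OBS endpoint with credentials in the Configuration is assumed:

```java
import java.net.URI;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ObsReadWriteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder bucket; the instance returned is an OBSFileSystem
        // when the obs:// scheme is bound to it in the configuration.
        FileSystem fs = FileSystem.get(URI.create("obs://my-bucket/"), conf);
        Path file = new Path("/demo/hello.txt");

        // create(Path, boolean) overwrites any existing object when true.
        try (FSDataOutputStream out = fs.create(file, true)) {
            out.write("hello, obs".getBytes(StandardCharsets.UTF_8));
        }

        // open() returns a seekable FSDataInputStream.
        try (FSDataInputStream in = fs.open(file)) {
            byte[] buf = new byte[(int) fs.getFileStatus(file).getLen()];
            in.readFully(buf);
            System.out.println(new String(buf, StandardCharsets.UTF_8));
        }
    }
}
```

Because the store is an object store, the written object typically only becomes visible when the output stream is closed; the try-with-resources blocks above make that explicit.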
Methods inherited from class org.apache.hadoop.fs.FileSystem:
access, append, append, append, append, appendFile, areSymlinksEnabled, cancelDeleteOnExit, clearStatistics, closeAll, closeAllForUGI, completeLocalOutput, concat, copyFromLocalFile, copyFromLocalFile, copyFromLocalFile, copyToLocalFile, copyToLocalFile, copyToLocalFile, create, create, create, create, create, create, create, create, create, create, create, createBulkDelete, createDataInputStreamBuilder, createDataInputStreamBuilder, createDataOutputStreamBuilder, createFid, createFile, createMultipartUploader, createNewFile, createNonRecursive, createNonRecursive, createPathHandle, createPathId, createSnapshot, createSnapshot, createSymlink, delete, deleteFid, deleteOnExit, deleteSnapshot, enableSymlinks, fixRelativePart, get, get, get, getAclStatus, getAdditionalTokenIssuers, getAllStatistics, getAllStoragePolicies, getBlockSize, getCanonicalUri, getChildFileSystems, getDefaultReplication, getDefaultReplication, getDefaultUri, getDelegationToken, getEnclosingRoot, getFileBlockLocations, getFileBlockLocations, getFileChecksum, getFileChecksum, getFileLinkStatus, getFileSystemClass, getFSofPath, getGlobalStorageStatistics, getHomeDirectory, getInitialWorkingDirectory, getJobTrackerAddrs, getLength, getLinkTarget, getLocal, getName, getNamed, getPathHandle, getQuotaUsage, getReplication, getServerDefaults, getServerDefaults, getStatistics, getStatistics, getStatus, getStatus, getStoragePolicy, getStorageStatistics, getTrashRoot, getTrashRoots, getUsed, getUsed, getXAttr, getXAttrs, getXAttrs, getZkConnectString, globStatus, globStatus, hasPathCapability, isDirectory, isFile, listCorruptFileBlocks, listStatus, listStatus, listStatus, listStatusBatch, listStatusIterator, listXAttrs, makeQualified, mkdirs, mkdirs, mkdirsFid, mkdirsFid, modifyAclEntries, moveFromLocalFile, moveFromLocalFile, moveToLocalFile, msync, newInstance, newInstance, newInstance, newInstanceLocal, open, open, open, openFid, openFid, openFid2, openFile, openFile, openFileWithOptions,
openFileWithOptions, primitiveCreate, primitiveMkdir, primitiveMkdir, printStatistics, processDeleteOnExit, removeAcl, removeAclEntries, removeDefaultAcl, removeXAttr, rename, renameSnapshot, resolveLink, resolvePath, satisfyStoragePolicy, setAcl, setDefaultUri, setDefaultUri, setOwner, setOwnerFid, setPermission, setQuota, setQuotaByStorageType, setReplication, setStoragePolicy, setTimes, setVerifyChecksum, setWriteChecksum, setXAttr, setXAttr, startLocalOutput, supportsSymlinks, truncate, unsetStoragePolicy

public void initialize(java.net.URI name,
org.apache.hadoop.conf.Configuration originalConf)
throws java.io.IOException
Overrides: initialize in class org.apache.hadoop.fs.FileSystem
Parameters:
name - a URI whose authority section names the host, port, etc. for this FileSystem
originalConf - the configuration to use for the FS. The bucket-specific options are patched over the base ones before any use is made of the config.
Throws: java.io.IOException

public java.lang.String getScheme()
Overrides: getScheme in class org.apache.hadoop.fs.FileSystem

public java.net.URI getUri()
Overrides: getUri in class org.apache.hadoop.fs.FileSystem

public int getDefaultPort()
Overrides: getDefaultPort in class org.apache.hadoop.fs.FileSystem
See Also: URI.getPort()

public void checkPath(org.apache.hadoop.fs.Path path)
Overrides: checkPath in class org.apache.hadoop.fs.FileSystem
Parameters:
path - the path to check
Throws: java.lang.IllegalArgumentException - if there is an FS mismatch

protected java.net.URI canonicalizeUri(java.net.URI rawUri)
Overrides: canonicalizeUri in class org.apache.hadoop.fs.FileSystem
Parameters:
rawUri - the URI to be canonicalized

public org.apache.hadoop.fs.FSDataInputStream open(org.apache.hadoop.fs.Path f,
    int bufferSize)
    throws java.io.IOException
Overrides: open in class org.apache.hadoop.fs.FileSystem
Parameters:
f - the file path to open
bufferSize - the size of the buffer to be used
Throws: java.io.IOException - on any failure to open the file

public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f,
org.apache.hadoop.fs.permission.FsPermission permission,
boolean overwrite,
int bufferSize,
short replication,
long blkSize,
org.apache.hadoop.util.Progressable progress)
throws java.io.IOException
Overrides: create in class org.apache.hadoop.fs.FileSystem
Parameters:
f - the file path to create
permission - the permission to set
overwrite - if a file with this name already exists: if true, the file will be overwritten; if false, an error will be thrown
bufferSize - the size of the buffer to be used
replication - required block replication for the file
blkSize - the requested block size
progress - the progress reporter
Throws: java.io.IOException - on any failure to create the file
See Also: FileSystem.setPermission(Path, FsPermission)

public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f,
org.apache.hadoop.fs.permission.FsPermission permission,
java.util.EnumSet<org.apache.hadoop.fs.CreateFlag> flags,
int bufferSize,
short replication,
long blkSize,
org.apache.hadoop.util.Progressable progress,
org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt)
throws java.io.IOException
Overrides: create in class org.apache.hadoop.fs.FileSystem
Parameters:
f - the file name to create
permission - the permission to set
flags - CreateFlags to use for this stream
bufferSize - the size of the buffer to be used
replication - required block replication for the file
blkSize - block size
progress - progress reporter
checksumOpt - checksum option
Throws: java.io.IOException - IO exception

public org.apache.hadoop.fs.FSDataOutputStream createNonRecursive(org.apache.hadoop.fs.Path path,
org.apache.hadoop.fs.permission.FsPermission permission,
java.util.EnumSet<org.apache.hadoop.fs.CreateFlag> flags,
int bufferSize,
short replication,
long blkSize,
org.apache.hadoop.util.Progressable progress)
throws java.io.IOException
Overrides: createNonRecursive in class org.apache.hadoop.fs.FileSystem
Parameters:
path - the file path to create
permission - file permission
flags - CreateFlags to use for this stream
bufferSize - the size of the buffer to be used
replication - required block replication for the file
blkSize - block size
progress - the progress reporter
Throws: java.io.IOException - IO failure

public org.apache.hadoop.fs.FSDataOutputStream append(org.apache.hadoop.fs.Path f,
int bufferSize,
org.apache.hadoop.util.Progressable progress)
throws java.io.IOException
Overrides: append in class org.apache.hadoop.fs.FileSystem
Parameters:
f - the existing file to be appended
bufferSize - the size of the buffer to be used
progress - for reporting progress if it is not null
Throws: java.io.IOException - indicating that append is not supported

public boolean exists(org.apache.hadoop.fs.Path f)
    throws java.io.IOException
Overrides: exists in class org.apache.hadoop.fs.FileSystem
Parameters:
f - source path
Throws: java.io.IOException - IO failure

public boolean rename(org.apache.hadoop.fs.Path src,
    org.apache.hadoop.fs.Path dst)
    throws java.io.IOException
Overrides: rename in class org.apache.hadoop.fs.FileSystem
Parameters:
src - path to be renamed
dst - new path after rename
Throws: java.io.IOException - on IO failure

public boolean delete(org.apache.hadoop.fs.Path f,
    boolean recursive)
    throws java.io.IOException
Delete a Path. This operation is O(files), with added overheads to enumerate the path. It is also not atomic.
Overrides: delete in class org.apache.hadoop.fs.FileSystem
Parameters:
f - the path to delete
recursive - if the path is a directory and set to true, the directory is deleted; otherwise an exception is thrown. For a file, recursive can be set to either true or false
Throws: java.io.IOException - due to inability to delete a directory or file

public org.apache.hadoop.fs.FileStatus[] listStatus(org.apache.hadoop.fs.Path f)
throws java.io.FileNotFoundException,
java.io.IOException
Overrides: listStatus in class org.apache.hadoop.fs.FileSystem
Parameters:
f - given path
Throws:
java.io.FileNotFoundException - when the path does not exist
java.io.IOException - see specific implementation

public org.apache.hadoop.fs.FileStatus[] listStatus(org.apache.hadoop.fs.Path f,
boolean recursive)
throws java.io.FileNotFoundException,
java.io.IOException
Parameters:
f - given path
recursive - whether to iterate over objects in subdirectories
Throws:
java.io.FileNotFoundException - when the path does not exist
java.io.IOException - see specific implementation

public org.apache.hadoop.fs.Path getWorkingDirectory()
Overrides: getWorkingDirectory in class org.apache.hadoop.fs.FileSystem

public void setWorkingDirectory(org.apache.hadoop.fs.Path newDir)
Overrides: setWorkingDirectory in class org.apache.hadoop.fs.FileSystem
Parameters:
newDir - the new working directory

public boolean mkdirs(org.apache.hadoop.fs.Path path,
org.apache.hadoop.fs.permission.FsPermission permission)
throws java.io.IOException,
org.apache.hadoop.fs.FileAlreadyExistsException
Make the given path and all non-existent parents into directories, with the semantics of Unix 'mkdir -p'. Existence of the directory hierarchy is not an error.
Overrides: mkdirs in class org.apache.hadoop.fs.FileSystem
Parameters:
path - path to create
permission - permission to apply to the path
Throws:
org.apache.hadoop.fs.FileAlreadyExistsException - there is a file at the path specified
java.io.IOException - other IO problems

public org.apache.hadoop.fs.FileStatus getFileStatus(org.apache.hadoop.fs.Path f)
throws java.io.FileNotFoundException,
java.io.IOException
Overrides: getFileStatus in class org.apache.hadoop.fs.FileSystem
Parameters:
f - the path we want information from
Throws:
java.io.FileNotFoundException - when the path does not exist
java.io.IOException - on other problems

public org.apache.hadoop.fs.ContentSummary getContentSummary(org.apache.hadoop.fs.Path f)
throws java.io.FileNotFoundException,
java.io.IOException
Return the ContentSummary of a given Path.
Overrides: getContentSummary in class org.apache.hadoop.fs.FileSystem
Parameters:
f - path to use
Returns: the ContentSummary
Throws:
java.io.FileNotFoundException - if the path does not resolve
java.io.IOException - IO failure

public void copyFromLocalFile(boolean delSrc,
boolean overwrite,
org.apache.hadoop.fs.Path src,
org.apache.hadoop.fs.Path dst)
throws org.apache.hadoop.fs.FileAlreadyExistsException,
java.io.IOException
Copy the src file on the local disk to the filesystem at the given dst name.
Overrides: copyFromLocalFile in class org.apache.hadoop.fs.FileSystem
Parameters:
delSrc - whether to delete the src
overwrite - whether to overwrite an existing file
src - path
dst - path
Throws:
org.apache.hadoop.fs.FileAlreadyExistsException - if the destination file exists and overwrite == false
java.io.IOException - IO problem

public void close()
throws java.io.IOException
Specified by: close in interface java.lang.AutoCloseable; close in interface java.io.Closeable
Overrides: close in class org.apache.hadoop.fs.FileSystem
Throws: java.io.IOException - IO problem

public java.lang.String getCanonicalServiceName()
Override getCanonicalServiceName and return null since delegation token is not supported.
Specified by: getCanonicalServiceName in interface org.apache.hadoop.security.token.DelegationTokenIssuer
Overrides: getCanonicalServiceName in class org.apache.hadoop.fs.FileSystem

public long getDefaultBlockSize()
Deprecated. Use getDefaultBlockSize(Path) instead.
Overrides: getDefaultBlockSize in class org.apache.hadoop.fs.FileSystem

public long getDefaultBlockSize(org.apache.hadoop.fs.Path f)
Overrides: getDefaultBlockSize in class org.apache.hadoop.fs.FileSystem
Parameters:
f - path of file

public java.lang.String toString()
Overrides: toString in class java.lang.Object

public org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.LocatedFileStatus> listFiles(org.apache.hadoop.fs.Path f,
boolean recursive)
throws java.io.FileNotFoundException,
java.io.IOException
If the path is a directory, if recursive is false, returns files in the directory; if recursive is true, return files in the subtree rooted at the path. If the path is a file, return the file's status and block locations.
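A typical consumption pattern for this iterator is sketched below; the helper method is illustrative rather than part of the API, and fs is assumed to be an already-initialized OBS-backed FileSystem:

```java
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class ObsListFilesExample {
    // Prints every file under the given prefix. The iterator fetches
    // listing pages lazily, so large trees are not held in memory at once.
    static void printTree(FileSystem fs, Path root) throws IOException {
        RemoteIterator<LocatedFileStatus> it = fs.listFiles(root, true); // recursive
        while (it.hasNext()) {
            LocatedFileStatus status = it.next();
            System.out.println(status.getPath() + " " + status.getLen() + " bytes");
        }
    }
}
```

Preferring the RemoteIterator forms over listStatus(Path) avoids materializing the whole listing as an array, which matters for object-store prefixes with many keys.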
Overrides: listFiles in class org.apache.hadoop.fs.FileSystem
Parameters:
f - a path
recursive - if the subdirectories need to be traversed recursively
Throws:
java.io.FileNotFoundException - if the path does not exist
java.io.IOException - if any I/O error occurred

public org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.LocatedFileStatus> listLocatedStatus(org.apache.hadoop.fs.Path f)
throws java.io.FileNotFoundException,
java.io.IOException
If a returned status is a file, it contains the file's block locations.
Overrides: listLocatedStatus in class org.apache.hadoop.fs.FileSystem
Parameters:
f - is the path
Throws:
java.io.FileNotFoundException - if f does not exist
java.io.IOException - if an I/O error occurred

public org.apache.hadoop.fs.RemoteIterator<org.apache.hadoop.fs.LocatedFileStatus> listLocatedStatus(org.apache.hadoop.fs.Path f,
    org.apache.hadoop.fs.PathFilter filter)
    throws java.io.FileNotFoundException,
    java.io.IOException
Overrides: listLocatedStatus in class org.apache.hadoop.fs.FileSystem
Parameters:
f - a path
filter - a path filter
Throws:
java.io.FileNotFoundException - if f does not exist
java.io.IOException - if any I/O error occurred

Copyright © 2008–2025 Apache Software Foundation. All rights reserved.