Package com.mapr.fs

Class MapRFileSystem

java.lang.Object
  org.apache.hadoop.conf.Configured
    org.apache.hadoop.fs.FileSystem
      org.apache.hadoop.maprfs.AbstractMapRFileSystem
        com.mapr.fs.MapRFileSystem

- All Implemented Interfaces:
com.mapr.fs.jni.MapRConstants, Closeable, AutoCloseable, org.apache.hadoop.conf.Configurable, org.apache.hadoop.fs.BulkDeleteSource, org.apache.hadoop.fs.PathCapabilities, org.apache.hadoop.maprfs.Fid, org.apache.hadoop.security.token.DelegationTokenIssuer
public class MapRFileSystem
extends org.apache.hadoop.maprfs.AbstractMapRFileSystem
implements com.mapr.fs.jni.MapRConstants
-
Nested Class Summary
Nested classes/interfaces inherited from class org.apache.hadoop.fs.FileSystem
org.apache.hadoop.fs.FileSystem.DirectoryEntries, org.apache.hadoop.fs.FileSystem.DirListingIterator<T extends org.apache.hadoop.fs.FileStatus>, org.apache.hadoop.fs.FileSystem.Statistics
Nested classes/interfaces inherited from interface com.mapr.fs.jni.MapRConstants
com.mapr.fs.jni.MapRConstants.ErrorValue, com.mapr.fs.jni.MapRConstants.JniUsername, com.mapr.fs.jni.MapRConstants.PutConstants, com.mapr.fs.jni.MapRConstants.RowConstants
-
Field Summary
Fields inherited from class org.apache.hadoop.fs.FileSystem
DEFAULT_FS, FS_DEFAULT_NAME_KEY, SHUTDOWN_HOOK_PRIORITY, statistics, TRASH_PREFIX, USER_HOME_PREFIX
Fields inherited from interface org.apache.hadoop.security.token.DelegationTokenIssuer
TOKEN_LOG
Fields inherited from interface com.mapr.fs.jni.MapRConstants
AtimeBit, AuditBit, ChunkSizeBit, ClusterConfDefault, CompressBit, CompressorTypeBit, DEFAULT_USER_IDENTIFIER, DEFAULT_USER_IDENTIFIER_ESCAPED, DefaultChunkSize, DefaultCLDBIp, DefaultCLDBPort, DiskFlushBit, EMPTY_BYTE_ARRAY, EMPTY_END_ROW, EMPTY_START_ROW, FidNameBit, GlobalClusterConfDefault, GroupBit, HADOOP_MAX_BLOCKSIZE, HADOOP_SECURITY_SPOOFED_GID, HADOOP_SECURITY_SPOOFED_GROUP, HADOOP_SECURITY_SPOOFED_UID, HADOOP_SECURITY_SPOOFED_USER, HOSTNAME_IP_SEPARATOR, IP_PORT_SEPARATOR, IPV6_ADDR_ENDER, IPV6_ADDR_STARTER, IPv6DefaultCLDBIp, LAST_ROW, LATEST_TIMESTAMP, MAPR_ENV_VAR, MAPR_PROPERTY_HOME, MapRClusterDir, MapRClusterDirPattern, MapRClusterDirSlash, MAPRFS_PREFIX, MAPRFS_SCHEME, MapRHomeDefault, MAX_CLUSTERS_CROSSED, MAX_PATH_LENGTH, MAX_PORT_NUMBER, MAX_RA_THREADS, MIN_RA_THREADS, MinChunkSize, ModeBit, MtimeBit, MULTI_ADDR_SEPARATOR, NUM_CONTAINERS_PER_RPC, OLDEST_TIMESTAMP, RA_THREADS_PER_STREAM, ReplBit, SSL_TRUSTSTORE, UserBit, UTF8_ENCODING, WireSecureBit
-
Constructor Summary
Constructors
MapRFileSystem(String uname)
MapRFileSystem(String cName, String[] cldbLocations)
MapRFileSystem(String cName, String[] cldbLocations, String uname)
-
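The multi-argument constructors take the CLDB locations as a String[], one host:port entry per element. A minimal, dependency-free sketch of preparing that array from a comma-separated setting (host names are made up, and this plain-Java helper only mirrors what the getTrimmedStrings(String) utility below is documented to do: split a comma-separated value String, trimming whitespace on each value):

```java
import java.util.Arrays;

public class CldbSplit {
    // Hypothetical stand-in for getTrimmedStrings(String): splits a
    // comma-separated value String, trimming leading/trailing whitespace.
    static String[] trimmedStrings(String csv) {
        if (csv == null || csv.trim().isEmpty()) {
            return new String[0];
        }
        String[] parts = csv.split(",");
        for (int i = 0; i < parts.length; i++) {
            parts[i] = parts[i].trim();
        }
        return parts;
    }

    public static void main(String[] args) {
        // Value of a hypothetical CLDB-hosts setting.
        String[] cldbLocations =
            trimmedStrings(" cldb1.example.com:7222 , cldb2.example.com:7222");
        System.out.println(Arrays.toString(cldbLocations));
        // new MapRFileSystem("my.cluster", cldbLocations) would then receive
        // one host:port entry per element (requires the MapR client jars).
    }
}
```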
Method Summary
void access(org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.permission.FsAction mode)
void addAceEntryError(ArrayList<FileAceEntry> acesList, org.apache.hadoop.fs.Path path, int err)
org.apache.hadoop.security.token.Token<?>[] addDelegationTokens(String renewer, org.apache.hadoop.security.Credentials credentials)
int addFidMountPathToCache(String fidStr, String path, boolean forceUpdate)
int addSecurityPolicy(org.apache.hadoop.fs.Path path, String securityPolicyTag, boolean recursive)
    Adds a single security policy tag to the list of existing security policy tags (if any) for the file or directory specified in path.
int addSecurityPolicy(org.apache.hadoop.fs.Path path, List<String> securityPolicyTags, boolean recursive)
    Adds one or more security policy tags to the list of existing security policy tags (if any) for the file or directory specified in path.
boolean addTableEntryToMetadata(String tablePath)
boolean addTableEntryToMetadata(String tablePath, int retryCount)
void addTableReplica(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableReplicaDesc desc, com.mapr.fs.proto.Dbserver.TableReplAutoSetupInfo ainfo)
void addTableUpstream(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableUpstreamDesc desc)
org.apache.hadoop.fs.FSDataOutputStream append(org.apache.hadoop.fs.Path f, int bufferSize, org.apache.hadoop.util.Progressable progress)
    Append to an existing file (optional operation).
void
    Clears the QueryServiceParam for the default cluster.
void clearQueryServiceParam(String clusterName)
    Clears the QueryServiceParam for the specified cluster.
void close()
int copyAce(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst)
    Copies ACEs on source to destination.
org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, boolean overwrite)
    Opens an FSDataOutputStream at the indicated Path.
org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, boolean overwrite, int bufferSize)
    Opens an FSDataOutputStream at the indicated Path.
org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, boolean overwrite, int bufferSize, short replication, long blockSize)
org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, boolean overwrite, int bufferSize, org.apache.hadoop.util.Progressable progress)
    Opens an FSDataOutputStream at the indicated Path with write-progress reporting.
org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, int mask, org.apache.hadoop.fs.permission.FsPermission permission, boolean createIfNonExistant, boolean append, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, boolean createParent)
org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, short replication)
    Opens an FSDataOutputStream at the indicated Path.
org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, short replication, org.apache.hadoop.util.Progressable progress)
    Opens an FSDataOutputStream at the indicated Path with write-progress reporting.
org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress)
org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt)
org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.util.Progressable progress)
    Create an FSDataOutputStream at the indicated Path with write-progress reporting.
void createColumnFamily(org.apache.hadoop.fs.Path tableURI, String name, com.mapr.fs.proto.Dbserver.ColumnFamilyAttr cfAttr)
void createColumnFamily(org.apache.hadoop.fs.Path tableURI, String name, com.mapr.fs.proto.Dbserver.ColumnFamilyAttr cfAttr, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, List<String> securityPolicyTagList)
void createColumnFamily(org.apache.hadoop.fs.Path tableURI, String name, com.mapr.fs.proto.Dbserver.ColumnFamilyAttr cfAttr, List<String> securityPolicyTagList)
org.apache.hadoop.fs.FSDataOutputStream
org.apache.hadoop.fs.FSDataOutputStream
int createHardlink(org.apache.hadoop.fs.Path oldpath, org.apache.hadoop.fs.Path newpath)
org.apache.hadoop.fs.FSDataOutputStream createNonRecursive(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress)
org.apache.hadoop.fs.PathId createPathId()
int createSnapshot(String cluster, String volumeName, int volId, int rootCid, String snapshotName, boolean mirrorSnapshot, long expirationTime, String username)
int createSnapshot(String cluster, String volumeName, int volId, int rootCid, String snapshotName, boolean mirrorSnapshot, long expirationTime, String username, long clusterOps)
void createSymbolicLink(org.apache.hadoop.fs.Path target, org.apache.hadoop.fs.Path link)
int createSymlink(String target, org.apache.hadoop.fs.Path link, boolean createParent)
void createSymlink(org.apache.hadoop.fs.Path target, org.apache.hadoop.fs.Path link, boolean createParent)
void createTable(org.apache.hadoop.fs.Path tableURI)
void createTable(org.apache.hadoop.fs.Path tableURI, byte[][] splitKeys, boolean isBulkLoad, boolean isJson)
void createTable(org.apache.hadoop.fs.Path tableURI, byte[][] splitKeys, boolean isBulkLoad, boolean isJson, boolean insertOrder)
void createTable(org.apache.hadoop.fs.Path tableURI, String user)
void createTable(org.apache.hadoop.fs.Path tableURI, String user, byte[][] splitKeys)
void createTable(org.apache.hadoop.fs.Path tableURI, String user, com.mapr.fs.proto.Dbserver.TableAttr attr, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, byte[][] splitKeys, int auditValue)
void createTable(org.apache.hadoop.fs.Path tableURI, String user, com.mapr.fs.proto.Dbserver.TableAttr attr, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, byte[][] splitKeys, int auditValue, List<String> securityPolicyTagList)
void createTable(org.apache.hadoop.fs.Path tableURI, String user, com.mapr.fs.proto.Dbserver.TableAttr attr, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, byte[][] splitKeys, int auditValue, List<String> securityPolicyTagList, TableCreateOptions options)
void createTable(org.apache.hadoop.fs.Path tableURI, String user, com.mapr.fs.proto.Dbserver.TableAttr attr, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, int auditValue)
void createTable(org.apache.hadoop.fs.Path tableURI, String user, com.mapr.fs.proto.Dbserver.TableAttr attr, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, int auditValue, List<String> securityPolicyTagList)
createTable(org.apache.hadoop.fs.Path tableURI, String user, com.mapr.fs.proto.Dbserver.TableAttr attr, com.mapr.fs.proto.Dbserver.TableAces aces, byte[][] splitKeys, boolean needServerInfo, int auditValue)
createTable(org.apache.hadoop.fs.Path tableURI, String user, com.mapr.fs.proto.Dbserver.TableAttr attr, com.mapr.fs.proto.Dbserver.TableAces aces, byte[][] splitKeys, boolean needServerInfo, int auditValue, List<String> securityPolicyTagList)
createTable(org.apache.hadoop.fs.Path tableURI, String user, com.mapr.fs.proto.Dbserver.TableAttr attr, com.mapr.fs.proto.Dbserver.TableAces aces, byte[][] splitKeys, boolean needServerInfo, int auditValue, List<String> securityPolicyTagList, TableCreateOptions options)
int createVolLink(String cluster, String volName, org.apache.hadoop.fs.Path volLink, boolean writeable, boolean isHidden)
static com.mapr.fs.jni.MapRUserInfo
static com.mapr.fs.jni.MapRUserInfo CurrentUserInfo(String uname)
int delAces(org.apache.hadoop.fs.Path path, boolean recursive)
int delAces(org.apache.hadoop.fs.Path path, boolean recursive, int symLinkHeight)
boolean delete(org.apache.hadoop.fs.Path f)
boolean delete(org.apache.hadoop.fs.Path f, boolean recursive)
    Delete a file.
void deleteAces(org.apache.hadoop.fs.Path path)
    Deletes all ACEs of a file or directory.
void deleteAces(org.apache.hadoop.fs.Path path, boolean recursive)
    Deletes all ACEs of a file or directory recursively.
void deleteColumnFamily(org.apache.hadoop.fs.Path tableURI, String name)
boolean
boolean deleteTable(org.apache.hadoop.fs.Path tablePath)
int deleteVolLink(String cluster, String volLink)
void editTableReplica(org.apache.hadoop.fs.Path tableURI, String clusterName, String replicaPath, boolean allowAllCfs, com.mapr.fs.proto.Dbserver.TableReplicaDesc desc)
void editTableReplica(org.apache.hadoop.fs.Path tableURI, String clusterName, String replicaPath, String topicName, boolean allowAllCfs, com.mapr.fs.proto.Dbserver.TableReplicaDesc desc)
void enablePrivilegedProcessAccess(boolean enable)
<ReturnType> ReturnType executeCommand(com.mapr.fs.FSCommandHandler handler, com.mapr.fs.FSCommandHandler.ICommandExecutor<ReturnType> executor)
int
static String fidToString(com.mapr.fs.proto.Common.FidMsg fid)
    Deprecated.
void forceLocalResolution(URI name)
GatewaySourceToString(int source)
getAces(org.apache.hadoop.fs.Path path)
    Gets the ACEs of a file or directory.
getAces(org.apache.hadoop.fs.Path path, boolean recursive)
getAces(org.apache.hadoop.fs.Path path, boolean recursive, int symLinkHeight)
getAces(org.apache.hadoop.fs.Path path, boolean recursive, int symLinkHeight, int serveridx)
List<com.mapr.fs.proto.Dbserver.DataMask>
    Retrieves the info for all data masks.
int getCidFromPath(org.apache.hadoop.fs.Path path)
static ClusterConf
boolean
byte[] getContainerInfo(org.apache.hadoop.fs.Path path, List<Integer> cidList)
com.mapr.fs.proto.Dbserver.DataMask getDataMask(String dmName)
    Retrieves all the info for a particular data mask given the name.
getDataMaskNameFromId(int id)
    Returns the data mask name given the data mask ID.
long
short
org.apache.hadoop.fs.BlockLocation[] getFileBlockLocations(org.apache.hadoop.fs.FileStatus file, long start, long len)
com.mapr.fs.jni.MapRFileCount getFileCount(org.apache.hadoop.fs.Path f)
org.apache.hadoop.fs.FileStatus getFileLinkStatus(org.apache.hadoop.fs.Path f)
org.apache.hadoop.fs.FileStatus getFileStatus(org.apache.hadoop.fs.Path f)
com.mapr.fs.jni.IPPort[] getGatewayIps(String file)
com.mapr.fs.jni.IPPort[] getGatewayIps(String file, String dstCluster, boolean skipCache, com.mapr.fs.jni.GatewaySource source)
org.apache.hadoop.fs.Path
    utility functions
getJobTrackerAddrs(org.apache.hadoop.conf.Configuration conf)
org.apache.hadoop.fs.Path getLinkTarget(org.apache.hadoop.fs.Path f)
getMapRClient(String uri)
getMapRFileBlockLocations(org.apache.hadoop.fs.FileStatus file, long start, long len, boolean usePrimaryFid, boolean needDiskBlocks, boolean fullBlockInfo)
getMapRFileStatus(org.apache.hadoop.fs.Path f)
getMountPath(String cluster, String username, int pCid, int pCinum, int pUniq)
getMountPath(String cluster, String username, int pCid, int pCinum, int pUniq, boolean useCache)
getMountPath(String cluster, String username, int pCid, int pCinum, int pUniq, boolean useCache, boolean skipFSCall)
getMountPathFid(String fidStr)
getMountPathFidCached(String fidStr)
getMountPathFidCached(String fidStr, boolean skipFSCall)
getName(org.apache.hadoop.fs.Path p)
getNameStr(String str)
Return the QueryServiceParam from the default cluster.
getQueryServiceParam(String clusterName)
    Return the QueryServiceParam from the specified cluster.
static int
com.mapr.fs.proto.Dbserver.TableBasicStats getScanRangeStats(org.apache.hadoop.fs.Path tableURI, String indexFid, byte[] stKey, byte[] endKey)
int getSecurityPolicy(List<org.apache.hadoop.fs.Path> paths, List<MapRPathToSecurityPolicyTags> securityPolicyTags)
    Returns the security policies associated with the files or directories in paths.
int getSecurityPolicy(org.apache.hadoop.fs.Path path, List<String> securityPolicyTags)
    Returns the security policies associated with the file or directory specified in path.
int getSecurityPolicyId(String policyName)
    Gets the security policy id for a single security policy name.
getSecurityPolicyIds(List<String> securityPolicyTagList)
    Gets the security policy ids for the given security policy names from the policy cache maintained in the MapRClient in JNI.
getSecurityPolicyName(int policyID)
    Gets the security policy name for a given security policy id.
getSecurityPolicyNameOrId(int spId)
    Gets the security policy name for a given security policy id.
getServerForCid(int cid)
getServerForCid(int cid, String cluster)
getStat(org.apache.hadoop.fs.Path path)
org.apache.hadoop.fs.FsStatus
org.apache.hadoop.fs.FsStatus getStatus(org.apache.hadoop.fs.Path p)
getTableBasicAttrs(org.apache.hadoop.fs.Path tableURI)
getTableProperties(org.apache.hadoop.fs.Path tableURI)
com.mapr.fs.proto.Dbserver.TableBasicStats getTableStats(org.apache.hadoop.fs.Path tableURI, String indexFid)
com.mapr.fs.proto.Dbserver.TabletLookupResponse getTablets(org.apache.hadoop.fs.Path tableURI, String indexFid, byte[] stKey, byte[] endKey, boolean needSpaceUsage)
com.mapr.fs.proto.Dbserver.TabletLookupResponse getTablets(org.apache.hadoop.fs.Path tableURI, String indexFid, byte[] stKey, byte[] endKey, boolean needSpaceUsage, boolean prefetchTabletMap)
getTabletScanner(org.apache.hadoop.fs.Path tableURI, String indexFid)
getTabletScanner(org.apache.hadoop.fs.Path tableURI, String indexFid, boolean needSpaceUsage, boolean prefetchTabletMap)
getTabletScanner(org.apache.hadoop.fs.Path tableURI, String indexFid, byte[] startKey)
getTabletScanner(org.apache.hadoop.fs.Path tableURI, String indexFid, byte[] startKey, byte[] endKey, boolean needSpaceUsage, boolean prefetchTabletMap)
com.mapr.fs.proto.Dbserver.TabletStatResponse getTabletStat(org.apache.hadoop.fs.Path tablePath, com.mapr.fs.proto.Common.FidMsg tabletFid)
static String[] getTrimmedStrings(String str)
    Splits a comma-separated value String, trimming leading and trailing whitespace on each value.
getUri()
    Returns a URI whose scheme and authority identify this FileSystem.
long getUsed()
    Return the total size of all files in the filesystem.
com.mapr.fs.jni.MapRUserInfo
getVolumeName(int volId)
    Gets the volume name for a given volume id.
getVolumeNameCached(int volId)
    Gets the volume name for a given volume id.
org.apache.hadoop.fs.Path
    Get the current working directory for the given file system.
byte[]
    Get the extended attribute associated with the given name on the given file or directory.
getXAttrs(org.apache.hadoop.fs.Path path)
    Get all the extended attribute name/value pairs associated with the given file or directory.
Get the extended attributes associated with the given list of names on the given file or directory.
void initialize(URI name, org.apache.hadoop.conf.Configuration conf)
void initialize(URI name, org.apache.hadoop.conf.Configuration conf, boolean isExpandAuditTool)
boolean isChangelog(org.apache.hadoop.fs.Path path)
static boolean isFidString(String fid)
    Deprecated.
boolean isJsonTable(org.apache.hadoop.fs.Path path)
boolean isStream(org.apache.hadoop.fs.Path path)
boolean isTable(org.apache.hadoop.fs.Path path)
List<com.mapr.fs.proto.Dbserver.ColumnFamilyAttr> listColumnFamily(org.apache.hadoop.fs.Path tableURI, boolean wantAces)
List<com.mapr.fs.proto.Dbserver.ColumnFamilyAttr> listColumnFamily(org.apache.hadoop.fs.Path tableURI, boolean wantAces, boolean useCached)
int listDirLite(org.apache.hadoop.fs.Path f)
    Path should be a directory.
listMapRStatus(org.apache.hadoop.fs.Path f, boolean showVols, boolean showHidden)
listStatus(org.apache.hadoop.fs.Path f)
    List the statuses of the files/directories in the given path if the path is a directory.
listStatusLite(org.apache.hadoop.fs.Path f, int cid, int cinum, int uniq, int count, long cookie, boolean showHidden)
    Path should be a directory.
com.mapr.fs.proto.Dbserver.TableReplicaListResponse listTableIndexes(org.apache.hadoop.fs.Path tableURI, boolean wantStats, boolean skipFieldsReadPermCheck, boolean refreshNow)
com.mapr.fs.proto.Dbserver.TableReplicaListResponse listTableReplicas(org.apache.hadoop.fs.Path tableURI, boolean wantStats, boolean refreshNow, boolean getCompactInfo)
com.mapr.fs.proto.Dbserver.TableUpstreamListResponse listTableUpstreams(org.apache.hadoop.fs.Path tableURI)
listXAttrs(org.apache.hadoop.fs.Path path)
    Get all the extended attribute names associated with the given file or directory.
com.mapr.fs.proto.Fileserver.KvstoreLookupResponse lookupKV(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Fileserver.KvStoreKey key)
org.apache.hadoop.fs.Path makeAbsolute(org.apache.hadoop.fs.Path path)
void mergeTableRegion(org.apache.hadoop.fs.Path tableURI, String fidstr)
boolean mkdirs(org.apache.hadoop.fs.Path p, boolean createParent, org.apache.hadoop.fs.permission.FsPermission permission)
boolean mkdirs(org.apache.hadoop.fs.Path p, org.apache.hadoop.fs.permission.FsPermission permission)
    Make the given file and all non-existent parents into directories.
boolean mkdirs(org.apache.hadoop.fs.Path p, org.apache.hadoop.fs.permission.FsPermission permission, boolean compress)
    mkdirs with compression option.
boolean mkdirs(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, boolean compress, long chunkSize)
    mkdirs with compression and chunksize option.
boolean mkdirs(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, long chunkSize)
    mkdirs with chunksize option.
mkdirsFid(org.apache.hadoop.fs.Path p)
void modifyAces(org.apache.hadoop.fs.Path path, List<MapRFileAce> aces)
    Modifies ACEs of a file or directory.
int modifyAudit(org.apache.hadoop.fs.Path path, boolean val)
void modifyColumnFamily(org.apache.hadoop.fs.Path tableURI, String name, com.mapr.fs.proto.Dbserver.ColumnFamilyAttr cfAttr)
void modifyColumnFamily(org.apache.hadoop.fs.Path tableURI, String name, com.mapr.fs.proto.Dbserver.ColumnFamilyAttr cfAttr, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm)
void modifyColumnFamily(org.apache.hadoop.fs.Path tableURI, String name, com.mapr.fs.proto.Dbserver.ColumnFamilyAttr cfAttr, com.mapr.fs.proto.Dbserver.SecurityPolicyOperation securityPolicyOp, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm)
void modifyColumnFamily(org.apache.hadoop.fs.Path tableURI, String name, com.mapr.fs.proto.Dbserver.SecurityPolicyOperation securityPolicyOp, com.mapr.fs.proto.Dbserver.ColumnFamilyAttr cfAttr)
void modifyDbCfSecurityPolicy(org.apache.hadoop.fs.Path tableURI, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, List<String> securityPolicyTagList, com.mapr.fs.proto.Dbserver.SecurityPolicyOperation op, String cfname)
void modifyDbSecurityPolicy(org.apache.hadoop.fs.Path tableURI, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, List<String> securityPolicyTagList, com.mapr.fs.proto.Dbserver.SecurityPolicyOperation op)
void modifyTableAttr(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableAttr am, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm)
void modifyTableAttr(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableAttr am, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, boolean genUuid)
void modifyTableAttr(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableAttr am, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, boolean genUuid, com.mapr.fs.proto.Dbserver.SecurityPolicyOperation securityPolicyOp)
void modifyTableAttr(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableAttr am, com.mapr.fs.proto.Dbserver.TableAces aces)
void modifyTableAttr(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableAttr am, com.mapr.fs.proto.Dbserver.TableAces aces, boolean genUuid, com.mapr.fs.proto.Dbserver.SecurityPolicyOperation securityPolicyOp)
int mountVolume(String cluster, String volName, String mountPath, String username)
    Creates a directory entry for volName at mountPath.
org.apache.hadoop.fs.FSDataInputStream open(org.apache.hadoop.fs.Path f, int bufferSize)
org.apache.hadoop.fs.FSDataInputStream
org.apache.hadoop.fs.FSDataInputStream
org.apache.hadoop.fs.FSDataInputStream
openTable(org.apache.hadoop.fs.Path tableURI, MapRHTable htable)
openTableWithFid(org.apache.hadoop.fs.Path priTableURI, String indexFid, MapRHTable htable)
void packTableRegion(org.apache.hadoop.fs.Path tableURI, String fidstr, int ctype)
com.mapr.fs.jni.MapRUserInfo populateAndGetUserInfo(org.apache.hadoop.fs.Path p)
void populateUserInfo(org.apache.hadoop.fs.Path p)
int printSecurityPolicies(org.apache.hadoop.fs.Path path, boolean recursive)
    Prints the security policies associated with the file or directory specified in path.
int removeAllSecurityPolicies(org.apache.hadoop.fs.Path path, boolean recursive)
    Removes all security policy tags associated with the file or directory specified by path.
int removeRecursive(org.apache.hadoop.fs.Path f)
byte[] removeS3Bucket(String bucketName, String domainName)
int removeSecurityPolicy(org.apache.hadoop.fs.Path path, String securityPolicyTag, boolean recursive)
    Removes the security policy named securityPolicyTag from the list of existing security policies (if any) for the file or directory specified in path.
int removeSecurityPolicy(org.apache.hadoop.fs.Path path, List<String> securityPolicyTags, boolean recursive)
    Removes one or more security policy tags from the list of existing security policy tags (if any) for the file or directory specified in path.
void removeTableReplica(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableReplicaDesc desc)
void removeTableUpstream(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableUpstreamDesc desc)
void removeXAttr(org.apache.hadoop.fs.Path path, String name)
    Remove the extended attribute (specified by name) associated with the given file or directory.
boolean rename(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst)
    Renames Path src to Path dst.
protected org.apache.hadoop.fs.Path resolveLink(org.apache.hadoop.fs.Path f)
org.apache.hadoop.fs.Path resolveTablePath(org.apache.hadoop.fs.Path path)
byte[] s3BucketCreate(String path, String bktName, String domain, int aId, boolean worm, long ownwerUid)
byte[]
byte[]
com.mapr.fs.proto.Fileserver.KvstoreScanResponse scanKVGivenFid(org.apache.hadoop.fs.Path URI, com.mapr.fs.proto.Common.FidMsg kvFid, com.mapr.fs.proto.Fileserver.KvStoreKey start, com.mapr.fs.proto.Fileserver.KvStoreKey end)
void setAces(org.apache.hadoop.fs.Path path, String strAces, boolean isSet, int noinherit, int preservemodebits, boolean recursive)
    Fully replaces ACEs of a file or directory recursively, discarding all existing ones.
int setAces(org.apache.hadoop.fs.Path path, ArrayList<com.mapr.fs.proto.Common.FileACE> aces, boolean isSet, int noinherit, int preservemodebits, boolean recursive, org.apache.hadoop.fs.Path hintAcePath)
int setAces(org.apache.hadoop.fs.Path path, ArrayList<com.mapr.fs.proto.Common.FileACE> aces, boolean isSet, int noinherit, int preservemodebits, boolean recursive, org.apache.hadoop.fs.Path hintAcePath, int symLinkHeight)
void setAces(org.apache.hadoop.fs.Path path, List<MapRFileAce> aces)
    Fully replaces ACEs of a file or directory, discarding all existing ones.
void setAces(org.apache.hadoop.fs.Path path, List<MapRFileAce> aces, boolean recursive)
    Fully replaces ACEs of a file or directory recursively, discarding all existing ones.
int setChunkSize(org.apache.hadoop.fs.Path path, long val)
int setCompression(org.apache.hadoop.fs.Path path, boolean val, String compName)
int setDiskFlush(org.apache.hadoop.fs.Path path, boolean val)
void
void setOwnerFid(String pfid, String user, String group)
void setPermission(org.apache.hadoop.fs.Path p, org.apache.hadoop.fs.permission.FsPermission permission)
void
    Sets the QueryServiceParam for the default cluster.
void setQueryServiceParam(String clusterName, QueryServiceParam qsp)
    Sets the QueryServiceParam for the specified cluster.
int setSecurityPolicy(org.apache.hadoop.fs.Path path, String securityPolicyTag, boolean recursive)
    Sets the security policy tag for the file or directory specified in path, replacing all existing tags.
int setSecurityPolicy(org.apache.hadoop.fs.Path path, List<String> securityPolicyTags, boolean recursive)
    Sets one or more security policy tags for the file or directory specified in path, replacing any existing security policies.
void setTimes(org.apache.hadoop.fs.Path p, long mtime, long atime)
int setWireSecurity(org.apache.hadoop.fs.Path path, boolean val)
void setWorkingDirectory(org.apache.hadoop.fs.Path new_dir)
    Set the current working directory for the given file system.
void
    Set or replace an extended attribute on a file or directory.
void setXAttr(org.apache.hadoop.fs.Path path, String name, byte[] value, EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flag)
    Set an extended attribute on a file or directory.
slashReaddir(String authority)
void splitTableRegion(org.apache.hadoop.fs.Path tableURI, String fidstr, boolean ignoreRegionTooSmallError)
boolean
com.mapr.fs.jni.JNIFileTierStatus tierOp(int op, org.apache.hadoop.fs.Path path, boolean verbose, boolean blocking, long shaHigh, long shaLow, long uniq)
boolean truncate(org.apache.hadoop.fs.Path f, long newLength)
    Truncate the file in the indicated path to the indicated size.
int unmountVolume(String cluster, String volName, String mountPath, String username, int pCid, int pCinum, int pUniq)
    Removes the directory entry for volName. Expects an absolute path for mountPath.
static void validateFid(String fid)
    Deprecated.

Methods inherited from class org.apache.hadoop.fs.FileSystem
append, append, append, append, appendFile, areSymlinksEnabled, cancelDeleteOnExit, canonicalizeUri, checkPath, clearStatistics, closeAll, closeAllForUGI, completeLocalOutput, concat, copyFromLocalFile, copyFromLocalFile, copyFromLocalFile, copyFromLocalFile, copyToLocalFile, copyToLocalFile, copyToLocalFile, create, create, create, create, createBulkDelete, createDataInputStreamBuilder, createDataInputStreamBuilder, createDataOutputStreamBuilder, createFile, createMultipartUploader, createNewFile, createNonRecursive, createNonRecursive, createPathHandle, createSnapshot, createSnapshot, deleteOnExit, deleteSnapshot, enableSymlinks, exists, fixRelativePart, get, get, get, getAclStatus, getAdditionalTokenIssuers, getAllStatistics, getAllStoragePolicies, getBlockSize, getCanonicalServiceName, getCanonicalUri, getChildFileSystems, getContentSummary, getDefaultBlockSize, getDefaultPort, getDefaultReplication, getDefaultUri, getDelegationToken, getEnclosingRoot, getFileBlockLocations, getFileChecksum, getFileChecksum, getFileSystemClass, getFSofPath, getGlobalStorageStatistics, getInitialWorkingDirectory, getLength, getLocal, getName, getNamed, getPathHandle, getQuotaUsage, getReplication, getServerDefaults, getServerDefaults, getStatistics, getStatistics, getStoragePolicy, getStorageStatistics, getTrashRoot, getTrashRoots, getUsed, globStatus, globStatus, hasPathCapability, isDirectory, isFile, listCorruptFileBlocks, listFiles, listLocatedStatus, listLocatedStatus, listStatus, listStatus, listStatus, listStatusBatch, listStatusIterator, makeQualified, mkdirs, mkdirs, modifyAclEntries, moveFromLocalFile, moveFromLocalFile, moveToLocalFile, msync, newInstance, newInstance, newInstance, newInstanceLocal, open, open, open, openFid, openFid, openFile, openFile, openFileWithOptions, openFileWithOptions, primitiveCreate, primitiveMkdir, primitiveMkdir, printStatistics, processDeleteOnExit, removeAcl, removeAclEntries, removeDefaultAcl, rename, renameSnapshot, resolvePath, satisfyStoragePolicy, setAcl, setDefaultUri, setDefaultUri, setQuota, setQuotaByStorageType, setReplication, setStoragePolicy, setVerifyChecksum, setWriteChecksum, startLocalOutput, unsetStoragePolicy

Methods inherited from class org.apache.hadoop.conf.Configured
getConf, setConf

Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface org.apache.hadoop.maprfs.Fid
openFid, openFid
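As the summary above notes, getUri() returns a URI whose scheme and authority identify this FileSystem; the maprfs scheme (see the MAPRFS_SCHEME and MAPRFS_PREFIX constants) plays the role that hdfs plays for HDFS. A minimal, pure-JDK sketch of that URI shape, with a made-up cluster authority and no MapR jars required:

```java
import java.net.URI;
import java.net.URISyntaxException;

public class MaprUriDemo {
    public static void main(String[] args) throws URISyntaxException {
        // Hypothetical cluster authority; "maprfs" mirrors MAPRFS_SCHEME.
        URI fsUri = new URI("maprfs", "my.cluster.com:7222", "/user/alice", null, null);
        System.out.println(fsUri.getScheme());     // maprfs
        System.out.println(fsUri.getAuthority());  // my.cluster.com:7222
        System.out.println(fsUri.getPath());       // /user/alice
    }
}
```

In practice such a URI is what you would pass (via fs.defaultFS or FileSystem.get) to have Hadoop select this FileSystem implementation for maprfs:// paths.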
-
Field Details
-
MAPRFS_BASE_URI
-
securityPolicyNameToIdMap_
-
securityPolicyIdToNameMap_
-
MaxNumFileAces
public final int MaxNumFileAces
-
emptyStringArray
-
-
Constructor Details
-
MapRFileSystem
- Throws:
IOException
-
MapRFileSystem
- Throws:
IOException
-
MapRFileSystem
- Throws:
IOException
-
MapRFileSystem
- Throws:
IOException
-
-
Method Details
-
CurrentUserInfo
- Throws:
IOException
-
CurrentUserInfo
- Throws:
IOException
-
initialize
- Overrides:
initialize in class org.apache.hadoop.fs.FileSystem
- Throws:
IOException
-
initialize
public void initialize(URI name, org.apache.hadoop.conf.Configuration conf, boolean isExpandAuditTool) throws IOException
- Throws:
IOException
-
createPathId
public org.apache.hadoop.fs.PathId createPathId()
- Specified by:
createPathId in class org.apache.hadoop.maprfs.AbstractMapRFileSystem
-
forceLocalResolution
- Throws:
IOException
-
access
public void access(org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.permission.FsAction mode) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, IOException - Overrides:
access in class org.apache.hadoop.fs.FileSystem - Throws:
org.apache.hadoop.security.AccessControlException
FileNotFoundException
IOException
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, EnumSet<org.apache.hadoop.fs.CreateFlag> flag, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt) throws IOException - Overrides:
create in class org.apache.hadoop.fs.FileSystem - Throws:
IOException
-
createNonRecursive
public org.apache.hadoop.fs.FSDataOutputStream createNonRecursive(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress) throws IOException - Overrides:
createNonRecursive in class org.apache.hadoop.fs.FileSystem - Throws:
IOException
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, int mask, org.apache.hadoop.fs.permission.FsPermission permission, boolean createIfNonExistant, boolean append, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, boolean createParent) throws IOException - Throws:
IOException
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, boolean overwrite, int bufferSize, short replication, long blockSize) throws IOException - Overrides:
create in class org.apache.hadoop.fs.FileSystem - Throws:
IOException
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress) throws IOException - Specified by:
create in class org.apache.hadoop.fs.FileSystem - Throws:
IOException
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, boolean overwrite) throws IOException Opens an FSDataOutputStream at the indicated Path.- Overrides:
create in class org.apache.hadoop.fs.FileSystem - Throws:
IOException
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.util.Progressable progress) throws IOException Create an FSDataOutputStream at the indicated Path with write-progress reporting. Files are overwritten by default.- Overrides:
create in class org.apache.hadoop.fs.FileSystem - Throws:
IOException
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, short replication) throws IOException Opens an FSDataOutputStream at the indicated Path. Files are overwritten by default.- Overrides:
create in class org.apache.hadoop.fs.FileSystem - Throws:
IOException
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, short replication, org.apache.hadoop.util.Progressable progress) throws IOException Opens an FSDataOutputStream at the indicated Path with write-progress reporting. Files are overwritten by default.- Overrides:
create in class org.apache.hadoop.fs.FileSystem - Throws:
IOException
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, boolean overwrite, int bufferSize) throws IOException Opens an FSDataOutputStream at the indicated Path.- Overrides:
create in class org.apache.hadoop.fs.FileSystem - Parameters:
f - the file name to open
overwrite - if a file with this name already exists, then if true, the file will be overwritten, and if false an error will be thrown.
bufferSize - the size of the buffer to be used.
- Throws:
IOException
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, boolean overwrite, int bufferSize, org.apache.hadoop.util.Progressable progress) throws IOException Opens an FSDataOutputStream at the indicated Path with write-progress reporting.- Overrides:
create in class org.apache.hadoop.fs.FileSystem - Parameters:
f - the file name to open
overwrite - if a file with this name already exists, then if true, the file will be overwritten, and if false an error will be thrown.
bufferSize - the size of the buffer to be used.
- Throws:
IOException
-
truncate
Truncate the file in the indicated path to the indicated size.
- Fails if path is a directory.
- Fails if path does not exist.
- Fails if path is not closed.
- Fails if new size is greater than current size.
- Overrides:
truncate in class org.apache.hadoop.fs.FileSystem
- Parameters:
f - the path to the file to be truncated
newLength - the size the file is to be truncated to
- Returns:
true if the file has been truncated to the desired newLength and is immediately available to be reused for write operations such as append, or false if a background process of adjusting the length of the last block has been started, and clients should wait for it to complete before proceeding with further file updates.
- Throws:
IOException
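The shrink-only contract above can be illustrated with a stand-in sketch in plain java.nio (exercising MapRFileSystem itself requires a live MapR cluster); the class and method names here are illustrative, not part of this API:

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class TruncateSketch {
    // Shrinks the file to newLength and refuses to grow it, matching the
    // "fails if new size is greater than current size" rule above.
    static long truncateTo(Path file, long newLength) throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.WRITE)) {
            if (newLength > ch.size()) {
                throw new IOException("new size exceeds current size");
            }
            ch.truncate(newLength);
            return ch.size(); // size after truncation
        }
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("trunc", ".dat");
        Files.write(f, "hello world".getBytes(StandardCharsets.UTF_8));
        System.out.println(truncateTo(f, 5)); // file now holds "hello"
        Files.delete(f);
    }
}
```

Unlike the local FileChannel case, the MapR-FS method may return false while the last block is still being adjusted in the background, as the Returns note above describes.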
-
open
public org.apache.hadoop.fs.FSDataInputStream open(org.apache.hadoop.fs.Path f, int bufferSize) throws IOException, org.apache.hadoop.security.AccessControlException - Specified by:
open in class org.apache.hadoop.fs.FileSystem - Throws:
IOException
org.apache.hadoop.security.AccessControlException
-
getUri
Returns a URI whose scheme and authority identify this FileSystem.- Specified by:
getUri in class org.apache.hadoop.fs.FileSystem
-
getScheme
- Overrides:
getScheme in class org.apache.hadoop.fs.FileSystem
-
append
public org.apache.hadoop.fs.FSDataOutputStream append(org.apache.hadoop.fs.Path f, int bufferSize, org.apache.hadoop.util.Progressable progress) throws IOException, org.apache.hadoop.security.AccessControlException Append to an existing file (optional operation).- Specified by:
append in class org.apache.hadoop.fs.FileSystem - Parameters:
f - the existing file to be appended.
bufferSize - the size of the buffer to be used.
progress - for reporting progress if it is not null.
- Throws:
IOException
org.apache.hadoop.security.AccessControlException
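The "append to an existing file" contract has the same shape as appending with plain java.nio; a minimal stand-in sketch (illustrative names only, not this API):

```java
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class AppendSketch {
    // Appends bytes to an existing file, failing if it does not exist --
    // append is defined only for files that are already present.
    static void appendTo(Path file, String text) throws IOException {
        if (!Files.exists(file)) {
            throw new FileNotFoundException(file.toString());
        }
        Files.write(file, text.getBytes(StandardCharsets.UTF_8),
                    StandardOpenOption.APPEND);
    }
}
```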
-
getFileStatus
public org.apache.hadoop.fs.FileStatus getFileStatus(org.apache.hadoop.fs.Path f) throws IOException - Specified by:
getFileStatus in class org.apache.hadoop.fs.FileSystem - Throws:
IOException
-
rename
public boolean rename(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst) throws IOException Renames Path src to Path dst. Can take place on local fs or remote DFS.- Specified by:
rename in class org.apache.hadoop.fs.FileSystem - Throws:
IOException
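Hadoop's rename reports some failures (such as a missing source) through its boolean return value rather than an exception; a stand-in sketch of that pattern in plain java.nio (illustrative names, not this API):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class RenameSketch {
    // Renames src to dst, returning false instead of throwing when the
    // source is missing -- mirroring the boolean rename contract.
    static boolean rename(Path src, Path dst) throws IOException {
        if (!Files.exists(src)) {
            return false;
        }
        Files.move(src, dst);
        return true;
    }
}
```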
-
delete
Delete a file.- Specified by:
delete in class org.apache.hadoop.fs.FileSystem - Parameters:
f - the path to delete.
recursive - if path is a directory and set to true, the directory is deleted; otherwise an exception is thrown. In the case of a file, recursive can be set to either true or false.
- Returns:
- true if delete is successful else false.
- Throws:
IOException
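The recursive flag above gates whether a non-empty directory may be removed; a stand-in sketch of that rule in plain java.nio (illustrative names, not this API):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class DeleteSketch {
    // Deletes a path; a non-empty directory is removed only when recursive
    // is true, otherwise an exception is thrown -- the same rule as the
    // delete(f, recursive) contract above.
    static boolean delete(Path p, boolean recursive) throws IOException {
        if (Files.isDirectory(p) && !recursive) {
            Files.delete(p); // throws DirectoryNotEmptyException if non-empty
            return true;
        }
        try (Stream<Path> walk = Files.walk(p)) {
            // children first, so directories are empty by the time we reach them
            for (Path c : walk.sorted(Comparator.reverseOrder()).toArray(Path[]::new)) {
                Files.delete(c);
            }
        }
        return true;
    }
}
```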
-
deleteTable
- Throws:
IOException
-
delete
- Overrides:
delete in class org.apache.hadoop.fs.FileSystem - Throws:
IOException
-
setWorkingDirectory
public void setWorkingDirectory(org.apache.hadoop.fs.Path new_dir) Set the current working directory for the given file system. All relative paths will be resolved relative to it.- Specified by:
setWorkingDirectory in class org.apache.hadoop.fs.FileSystem - Parameters:
new_dir-
-
getWorkingDirectory
public org.apache.hadoop.fs.Path getWorkingDirectory() Get the current working directory for the given file system. - Specified by:
getWorkingDirectory in class org.apache.hadoop.fs.FileSystem - Returns:
- the directory pathname
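The working-directory pair above means every relative path is resolved against the current working directory; a pure-Java sketch of that resolution rule (the class and field names are hypothetical):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class WorkingDirSketch {
    private Path workingDir = Paths.get("/");

    // All relative paths resolve against the working directory, as the
    // setWorkingDirectory/getWorkingDirectory contract describes.
    void setWorkingDirectory(Path newDir) {
        workingDir = newDir;
    }

    Path makeAbsolute(Path p) {
        return p.isAbsolute() ? p : workingDir.resolve(p).normalize();
    }
}
```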
-
getDefaultBlockSize
public long getDefaultBlockSize()- Overrides:
getDefaultBlockSize in class org.apache.hadoop.fs.FileSystem
-
getDefaultReplication
public short getDefaultReplication()- Overrides:
getDefaultReplication in class org.apache.hadoop.fs.FileSystem
-
getUserInfo
public com.mapr.fs.jni.MapRUserInfo getUserInfo() -
populateUserInfo
- Throws:
IOException
-
populateAndGetUserInfo
public com.mapr.fs.jni.MapRUserInfo populateAndGetUserInfo(org.apache.hadoop.fs.Path p) throws IOException - Throws:
IOException
-
mkdirs
public boolean mkdirs(org.apache.hadoop.fs.Path p, org.apache.hadoop.fs.permission.FsPermission permission) throws IOException Make the given file and all non-existent parents into directories. Has the semantics of Unix 'mkdir -p'. Existence of the directory hierarchy is not an error.- Specified by:
mkdirs in class org.apache.hadoop.fs.FileSystem - Throws:
IOException
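The Unix 'mkdir -p' semantics described above (create all missing parents, and treat an already-existing hierarchy as success) can be sketched with plain java.nio (illustrative names, not this API):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class MkdirsSketch {
    // Unix 'mkdir -p' semantics: create all missing parent directories,
    // and an already-existing directory hierarchy is not an error.
    static boolean mkdirs(Path p) throws IOException {
        Files.createDirectories(p); // no-op for directories that already exist
        return Files.isDirectory(p);
    }
}
```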
-
mkdirs
public boolean mkdirs(org.apache.hadoop.fs.Path p, boolean createParent, org.apache.hadoop.fs.permission.FsPermission permission) throws IOException - Throws:
IOException
-
mkdirs
public boolean mkdirs(org.apache.hadoop.fs.Path p, org.apache.hadoop.fs.permission.FsPermission permission, boolean compress) throws IOException mkdirs with compression option- Throws:
IOException
-
mkdirs
public boolean mkdirs(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, long chunkSize) throws IOException mkdirs with chunksize option- Throws:
IOException
-
mkdirs
public boolean mkdirs(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, boolean compress, long chunkSize) throws IOException mkdirs with compression and chunksize option- Throws:
IOException
-
getMapRFileStatus
- Throws:
IOException
-
fetchGNSConf
- Throws:
IOException
-
slashReaddir
- Throws:
IOException
-
listMapRStatus
public MapRFileStatus[] listMapRStatus(org.apache.hadoop.fs.Path f, boolean showVols, boolean showHidden) throws IOException - Throws:
IOException
-
listStatusLite
public MapRReaddirLite listStatusLite(org.apache.hadoop.fs.Path f, int cid, int cinum, int uniq, int count, long cookie, boolean showHidden) throws IOException Path should be a directory.- Throws:
IOException
-
listDirLite
Path should be a directory. Called from hadoop mfs -lsf. Implemented for Bug 31193 - hadoop fs -ls takes hours to finish on millions of files. - Throws:
IOException
-
listStatus
List the statuses of the files/directories in the given path if the path is a directory.- Specified by:
listStatus in class org.apache.hadoop.fs.FileSystem - Parameters:
f - given path
- Returns:
- the statuses of the files/directories in the given path
- Throws:
IOException
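Listing a directory's entries, as listStatus does for a directory argument, can be sketched with plain java.nio (illustrative names, not this API):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class ListStatusSketch {
    // Returns the sorted names of a directory's entries, analogous to
    // listStatus returning one status object per child of the path.
    static List<String> listNames(Path dir) throws IOException {
        List<String> names = new ArrayList<>();
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir)) {
            for (Path p : ds) {
                names.add(p.getFileName().toString());
            }
        }
        names.sort(null); // directory iteration order is unspecified
        return names;
    }
}
```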
-
close
- Specified by:
close in interface AutoCloseable - Specified by:
close in interface Closeable - Overrides:
close in class org.apache.hadoop.fs.FileSystem - Throws:
IOException
-
getUsed
Return the total size of all files in the filesystem.- Overrides:
getUsed in class org.apache.hadoop.fs.FileSystem - Throws:
IOException
-
supportsSymlinks
public boolean supportsSymlinks()- Overrides:
supportsSymlinks in class org.apache.hadoop.fs.FileSystem
-
createSymlink
public void createSymlink(org.apache.hadoop.fs.Path target, org.apache.hadoop.fs.Path link, boolean createParent) throws org.apache.hadoop.security.AccessControlException, org.apache.hadoop.fs.FileAlreadyExistsException, FileNotFoundException, org.apache.hadoop.fs.ParentNotDirectoryException, IOException - Overrides:
createSymlink in class org.apache.hadoop.fs.FileSystem - Throws:
org.apache.hadoop.security.AccessControlException
org.apache.hadoop.fs.FileAlreadyExistsException
FileNotFoundException
org.apache.hadoop.fs.ParentNotDirectoryException
IOException
-
createSymlink
public int createSymlink(String target, org.apache.hadoop.fs.Path link, boolean createParent) throws IOException - Throws:
IOException
-
removeRecursive
public int removeRecursive(org.apache.hadoop.fs.Path f) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, IOException - Throws:
org.apache.hadoop.security.AccessControlException
FileNotFoundException
IOException
-
getMapRClient
- Throws:
IOException
-
createHardlink
public int createHardlink(org.apache.hadoop.fs.Path oldpath, org.apache.hadoop.fs.Path newpath) throws org.apache.hadoop.security.AccessControlException, org.apache.hadoop.fs.FileAlreadyExistsException, FileNotFoundException, org.apache.hadoop.fs.ParentNotDirectoryException, IOException - Throws:
org.apache.hadoop.security.AccessControlException
org.apache.hadoop.fs.FileAlreadyExistsException
FileNotFoundException
org.apache.hadoop.fs.ParentNotDirectoryException
IOException
-
scanDir
- Throws:
IOException
-
scanKV
public byte[] scanKV(String cluster, String fidstr, byte[] start, byte[] end, int maxkeys) throws IOException - Throws:
IOException
-
scanKV
public byte[] scanKV(String cluster, String fidstr, byte[] start, byte[] end, int maxkeys, boolean fromGfsck) throws IOException - Throws:
IOException
-
scanKVGivenFid
public com.mapr.fs.proto.Fileserver.KvstoreScanResponse scanKVGivenFid(org.apache.hadoop.fs.Path URI, com.mapr.fs.proto.Common.FidMsg kvFid, com.mapr.fs.proto.Fileserver.KvStoreKey start, com.mapr.fs.proto.Fileserver.KvStoreKey end) throws IOException - Throws:
IOException
-
lookupKV
public com.mapr.fs.proto.Fileserver.KvstoreLookupResponse lookupKV(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Fileserver.KvStoreKey key) throws IOException - Throws:
IOException
-
createVolLink
public int createVolLink(String cluster, String volName, org.apache.hadoop.fs.Path volLink, boolean writeable, boolean isHidden) throws IOException - Throws:
IOException
-
deleteVolLink
- Throws:
IOException
-
getHomeDirectory
public org.apache.hadoop.fs.Path getHomeDirectory() - Overrides:
getHomeDirectory in class org.apache.hadoop.fs.FileSystem
-
makeAbsolute
public org.apache.hadoop.fs.Path makeAbsolute(org.apache.hadoop.fs.Path path) -
getClusterNameUnchecked
-
getNameStr
-
getName
-
getMapRFileBlockLocations
public MapRBlockLocation[] getMapRFileBlockLocations(org.apache.hadoop.fs.FileStatus file, long start, long len, boolean usePrimaryFid, boolean needDiskBlocks, boolean fullBlockInfo) throws IOException - Throws:
IOException
-
getFileBlockLocations
public org.apache.hadoop.fs.BlockLocation[] getFileBlockLocations(org.apache.hadoop.fs.FileStatus file, long start, long len) throws IOException - Overrides:
getFileBlockLocations in class org.apache.hadoop.fs.FileSystem - Throws:
IOException
-
setOwner
- Overrides:
setOwner in class org.apache.hadoop.fs.FileSystem - Throws:
IOException
-
setOwnerFid
- Specified by:
setOwnerFid in interface org.apache.hadoop.maprfs.Fid - Overrides:
setOwnerFid in class org.apache.hadoop.fs.FileSystem - Throws:
IOException
-
setTimes
- Overrides:
setTimes in class org.apache.hadoop.fs.FileSystem - Throws:
IOException
-
setPermission
public void setPermission(org.apache.hadoop.fs.Path p, org.apache.hadoop.fs.permission.FsPermission permission) throws IOException - Overrides:
setPermission in class org.apache.hadoop.fs.FileSystem - Throws:
IOException
-
getFileLinkStatus
public org.apache.hadoop.fs.FileStatus getFileLinkStatus(org.apache.hadoop.fs.Path f) throws org.apache.hadoop.security.AccessControlException, FileNotFoundException, org.apache.hadoop.fs.UnsupportedFileSystemException, IOException - Overrides:
getFileLinkStatus in class org.apache.hadoop.fs.FileSystem - Throws:
org.apache.hadoop.security.AccessControlException
FileNotFoundException
org.apache.hadoop.fs.UnsupportedFileSystemException
IOException
-
getLinkTarget
- Overrides:
getLinkTarget in class org.apache.hadoop.fs.FileSystem - Throws:
IOException
-
resolveLink
- Overrides:
resolveLink in class org.apache.hadoop.fs.FileSystem - Throws:
IOException
-
getStatus
- Overrides:
getStatus in class org.apache.hadoop.fs.FileSystem - Throws:
IOException
-
getStatus
- Overrides:
getStatus in class org.apache.hadoop.fs.FileSystem - Throws:
IOException
-
setXAttr
public void setXAttr(org.apache.hadoop.fs.Path path, String name, byte[] value, EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flag) throws IOException Set an extended attribute on a file or directory.- Overrides:
setXAttr in class org.apache.hadoop.fs.FileSystem - Parameters:
path - The path to the file or directory.
name - The extended attribute name. This must be prefixed with the namespace followed by ".". For example, "user.attr".
value - The extended attribute value.
flag - The value must be CREATE, to create a new extended attribute, or REPLACE, to replace an existing extended attribute. When creating a new extended attribute, an extended attribute with the given name must not already exist; when replacing, an extended attribute with the given name must already exist. Otherwise, an error is returned.
- Throws:
IOException
-
setXAttr
Set or replace an extended attribute on a file or directory.- Overrides:
setXAttr in class org.apache.hadoop.fs.FileSystem - Parameters:
path - The path to the file or directory.
name - The extended attribute name. This must be prefixed with the namespace followed by ".". For example, "user.attr".
value - The extended attribute value.
- Throws:
IOException
-
getXAttr
Get the extended attribute associated with the given name on the given file or directory.- Overrides:
getXAttr in class org.apache.hadoop.fs.FileSystem - Parameters:
path - The path to the file or directory.
name - The name of the extended attribute (to retrieve). The name must be prefixed with the namespace followed by ".". For example, "user.attr".
- Throws:
IOException
-
getXAttrs
Get all the extended attribute name/value pairs associated with the given file or directory. Only those extended attributes which the logged-in user has access to are returned.- Overrides:
getXAttrs in class org.apache.hadoop.fs.FileSystem - Parameters:
path - The path to the file or directory.
- Throws:
IOException
-
getXAttrs
public Map<String,byte[]> getXAttrs(org.apache.hadoop.fs.Path path, List<String> names) throws IOException Get the extended attributes associated with the given list of names on the given file or directory. Only those extended attributes which the logged-in user has access to are returned.- Overrides:
getXAttrs in class org.apache.hadoop.fs.FileSystem - Parameters:
path - The path to the file or directory.
names - The names of the extended attributes (to retrieve).
- Throws:
IOException
-
listXAttrs
Get all the extended attribute names associated with the given file or directory.- Overrides:
listXAttrs in class org.apache.hadoop.fs.FileSystem - Parameters:
path - The path to the file or directory.
- Throws:
IOException
-
removeXAttr
Remove the extended attribute (specified by name) associated with the given file or directory.- Overrides:
removeXAttr in class org.apache.hadoop.fs.FileSystem - Parameters:
path - The path to the file or directory.
name - The name of the extended attribute (to remove). The name must be prefixed with the namespace followed by ".". For example, "user.attr".
- Throws:
IOException
-
executeCommand
public <ReturnType> ReturnType executeCommand(com.mapr.fs.FSCommandHandler handler, com.mapr.fs.FSCommandHandler.ICommandExecutor<ReturnType> executor) throws IOException - Throws:
IOException
-
setCompression
public int setCompression(org.apache.hadoop.fs.Path path, boolean val, String compName) throws IOException - Throws:
IOException
-
tierOp
public com.mapr.fs.jni.JNIFileTierStatus tierOp(int op, org.apache.hadoop.fs.Path path, boolean verbose, boolean blocking, long shaHigh, long shaLow, long uniq) throws IOException - Throws:
IOException
-
getStat
- Throws:
IOException
-
getFileCount
- Throws:
IOException
-
modifyAudit
- Throws:
IOException
-
setWireSecurity
- Throws:
IOException
-
setDiskFlush
- Throws:
IOException
-
setAces
This method fully replaces the ACEs of a file or directory, discarding all existing ones.- Parameters:
path - path to set ACEs
aces - list of file ACEs
- Throws:
IOException - if an ACE could not be replaced
-
setAces
public void setAces(org.apache.hadoop.fs.Path path, List<MapRFileAce> aces, boolean recursive) throws IOException This method fully replaces the ACEs of a file or directory recursively, discarding all existing ones.- Parameters:
path - path to set ACEs (recursively)
aces - list of file ACEs
recursive - whether to apply ACEs recursively (true/false)
- Throws:
IOException - if an ACE could not be replaced
-
modifyAces
This method modifies the ACEs of a file or directory. It can add new ACEs or replace existing ones. All existing ACEs that are unspecified in the current call are retained without changes.- Parameters:
path - path to set ACEs
aces - list of file ACEs
- Throws:
IOException - if an ACE could not be modified
-
deleteAces
This method deletes all ACEs of a file or directory.- Parameters:
path - path to delete ACEs
- Throws:
IOException - if the ACEs could not be modified
-
deleteAces
This method deletes all ACEs of a file or directory recursively.- Specified by:
deleteAces in class org.apache.hadoop.maprfs.AbstractMapRFileSystem - Parameters:
path - path to delete ACEs (recursively)
recursive - whether to delete ACEs recursively (true/false)
- Throws:
IOException - if the ACEs could not be modified
-
getAces
This method gets the ACEs of a file or directory.- Parameters:
path - path to get ACEs
- Returns:
- list of ACEs of the file or directory
- Throws:
IOException - if an ACE could not be read
-
addAceEntryError
public void addAceEntryError(ArrayList<FileAceEntry> acesList, org.apache.hadoop.fs.Path path, int err) -
getAces
public ArrayList<FileAceEntry> getAces(org.apache.hadoop.fs.Path path, boolean recursive) throws IOException - Throws:
IOException
-
getAces
public ArrayList<FileAceEntry> getAces(org.apache.hadoop.fs.Path path, boolean recursive, int symLinkHeight) throws IOException - Throws:
IOException
-
getAces
public ArrayList<FileAceEntry> getAces(org.apache.hadoop.fs.Path path, boolean recursive, int symLinkHeight, int serveridx) throws IOException - Throws:
IOException
-
copyAce
This method copies ACEs on source to destination.- Specified by:
copyAce in class org.apache.hadoop.maprfs.AbstractMapRFileSystem - Parameters:
src - path of source
dest - path of destination
- Throws:
IOException - if an ACE could not be read/modified
-
delAces
- Throws:
IOException
-
delAces
public int delAces(org.apache.hadoop.fs.Path path, boolean recursive, int symLinkHeight) throws IOException - Throws:
IOException
-
setAces
public void setAces(org.apache.hadoop.fs.Path path, String strAces, boolean isSet, int noinherit, int preservemodebits, boolean recursive) throws IOException This method fully replaces ACEs of a file or directory recursively, discarding all existing ones. Accepts ACE parameter as String in short format. [rf:],[wf: ],[ef: ],[rd: ],[ac: ],[dc: ],[ld: ] - Specified by:
setAces in class org.apache.hadoop.maprfs.AbstractMapRFileSystem - Parameters:
path - path to set ACEs
strAces - string of file ACEs in short format
isSet - fully replace ACEs if true, or merge with existing ACEs if false
noinherit - do not inherit ACEs; use the umask instead
preservemodebits - do not overwrite mode bits
recursive - whether to apply ACEs recursively (true/false)
- Throws:
IOException - if an ACE could not be replaced
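The short format above is a comma-separated list of bracketed tag:expression pairs ([rf:], [wf:], [ef:], [rd:], [ac:], [dc:], [ld:]). A hypothetical parser sketch: the tag names come from the javadoc, but the exact splitting and whitespace rules here are assumptions, not the MapR implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AceStringSketch {
    // Parses e.g. "[rf:u:alice],[wf:g:admins]" into {rf=u:alice, wf=g:admins}.
    // The bracket/comma layout follows the short format quoted above; the
    // handling of whitespace inside expressions is an assumption.
    static Map<String, String> parse(String strAces) {
        Map<String, String> aces = new LinkedHashMap<>();
        // split only on commas that follow a closing bracket, so commas
        // cannot be confused with separators inside an expression
        for (String part : strAces.split("(?<=\\]),")) {
            String body = part.trim();
            body = body.substring(1, body.length() - 1); // strip [ and ]
            int colon = body.indexOf(':');
            aces.put(body.substring(0, colon), body.substring(colon + 1).trim());
        }
        return aces;
    }
}
```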
-
setAces
public int setAces(org.apache.hadoop.fs.Path path, ArrayList<com.mapr.fs.proto.Common.FileACE> aces, boolean isSet, int noinherit, int preservemodebits, boolean recursive, org.apache.hadoop.fs.Path hintAcePath) throws IOException - Throws:
IOException
-
setAces
public int setAces(org.apache.hadoop.fs.Path path, ArrayList<com.mapr.fs.proto.Common.FileACE> aces, boolean isSet, int noinherit, int preservemodebits, boolean recursive, org.apache.hadoop.fs.Path hintAcePath, int symLinkHeight) throws IOException - Throws:
IOException
-
setChunkSize
- Throws:
IOException
-
getCidFromPath
public int getCidFromPath(org.apache.hadoop.fs.Path path) -
mountVolume
Creates directory entry for volName at mountPath. Expects absolute path for mountPath -
unmountVolume
public int unmountVolume(String cluster, String volName, String mountPath, String username, int pCid, int pCinum, int pUniq) Removes the directory entry for volName. Expects an absolute path for mountPath. -
getVolumeName
This method gets the volume name for a given volume id.- Parameters:
volId - volume id
- Returns:
- volume name
- Throws:
IOException - if the input volume id is invalid.
-
getVolumeNameCached
This method gets the volume name for a given volume id.- Parameters:
volId - volume id
- Returns:
- volume name
- Throws:
IOException - if the input volume id is invalid.
-
getMountPathFid
- Throws:
IOException
-
addFidMountPathToCache
-
getMountPathFidCached
- Throws:
IOException
-
getMountPathFidCached
- Throws:
IOException
-
getMountPath
-
getMountPath
-
getMountPath
-
createSnapshot
-
createSnapshot
-
getQueryServiceParam
Return the QueryServiceParam from the default cluster.- Throws:
IOException
InterruptedException
-
getQueryServiceParam
public QueryServiceParam getQueryServiceParam(String clusterName) throws IOException, InterruptedException Return the QueryServiceParam from the specified cluster.- Throws:
IOException
InterruptedException
-
setQueryServiceParam
Sets the QueryServiceParam for the default cluster.- Throws:
IOException
InterruptedException
-
setQueryServiceParam
public void setQueryServiceParam(String clusterName, QueryServiceParam qsp) throws IOException, InterruptedException Sets the QueryServiceParam for the specified cluster.- Throws:
IOException
InterruptedException
-
clearQueryServiceParam
Clears the QueryServiceParam for the default cluster.- Throws:
IOException
InterruptedException
-
clearQueryServiceParam
Clears the QueryServiceParam for the specified cluster.- Throws:
IOException
InterruptedException
-
getZkConnectString
- Overrides:
getZkConnectString in class org.apache.hadoop.fs.FileSystem - Throws:
IOException
-
getMapRClient
- Throws:
IOException
-
getJobTrackerAddrs
public InetSocketAddress[] getJobTrackerAddrs(org.apache.hadoop.conf.Configuration conf) throws IOException - Overrides:
getJobTrackerAddrs in class org.apache.hadoop.fs.FileSystem - Throws:
IOException
-
getClusterConf
-
getTrimmedStrings
Splits a comma-separated value String, trimming leading and trailing whitespace on each value.- Parameters:
str - a comma-separated String with values
- Returns:
- an array of String values
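The behavior described for getTrimmedStrings can be sketched in pure Java; this is a minimal illustrative version, not the MapR implementation (its handling of null/blank input is an assumption):

```java
public class TrimmedSplitSketch {
    // Splits a comma-separated String and trims leading/trailing
    // whitespace on each value, as getTrimmedStrings is documented to do.
    static String[] getTrimmedStrings(String str) {
        if (str == null || str.trim().isEmpty()) {
            return new String[0]; // assumed behavior for empty input
        }
        String[] parts = str.split(",");
        for (int i = 0; i < parts.length; i++) {
            parts[i] = parts[i].trim();
        }
        return parts;
    }
}
```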
-
fidToString
Deprecated. Convert FidMsg to String form -
isFidString
Deprecated. -
validateFid
Deprecated.- Throws:
IOException
-
createSymbolicLink
public void createSymbolicLink(org.apache.hadoop.fs.Path target, org.apache.hadoop.fs.Path link) throws IOException - Throws:
IOException
-
openTable
- Throws:
IOException
-
openTableWithFid
public Inode openTableWithFid(org.apache.hadoop.fs.Path priTableURI, String indexFid, MapRHTable htable) throws IOException - Throws:
IOException
-
getSecurityPolicyNameOrId
This method gets the security policy name for a given security policy id. Results are cached.- Parameters:
spId - security policy id
- Returns:
- security policy name
- Throws:
IOException
-
getSecurityPolicyName
This method gets the security policy name for a given security policy id. Results are cached.- Parameters:
spId - security policy id
- Returns:
- security policy name
- Throws:
IOException
-
getSecurityPolicyIds
This method gets the security policy ids for given security policy names from the policy cache maintained in the MapRClient in JNI. Results are not cached. This method should be called only for admin/non-datapath operations.- Parameters:
spNames - security policy names
- Returns:
- security policy Ids
- Throws:
IOException
-
getSecurityPolicyId
This method gets the security policy id for a single security policy name. Results are cached.- Parameters:
spName - security policy name
- Returns:
- security policy id
- Throws:
IOException
-
getClusterSecurityPolicies
- Throws:
IOException
-
createTable
public String createTable(org.apache.hadoop.fs.Path tableURI, String user, com.mapr.fs.proto.Dbserver.TableAttr attr, com.mapr.fs.proto.Dbserver.TableAces aces, byte[][] splitKeys, boolean needServerInfo, int auditValue) throws IOException - Throws:
IOException
-
createTable
public String createTable(org.apache.hadoop.fs.Path tableURI, String user, com.mapr.fs.proto.Dbserver.TableAttr attr, com.mapr.fs.proto.Dbserver.TableAces aces, byte[][] splitKeys, boolean needServerInfo, int auditValue, List<String> securityPolicyTagList) throws IOException - Throws:
IOException
-
createTable
public String createTable(org.apache.hadoop.fs.Path tableURI, String user, com.mapr.fs.proto.Dbserver.TableAttr attr, com.mapr.fs.proto.Dbserver.TableAces aces, byte[][] splitKeys, boolean needServerInfo, int auditValue, List<String> securityPolicyTagList, TableCreateOptions options) throws IOException - Throws:
IOException
-
createTable
public void createTable(org.apache.hadoop.fs.Path tableURI, String user, com.mapr.fs.proto.Dbserver.TableAttr attr, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, int auditValue) throws IOException - Throws:
IOException
-
createTable
public void createTable(org.apache.hadoop.fs.Path tableURI, String user, com.mapr.fs.proto.Dbserver.TableAttr attr, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, int auditValue, List<String> securityPolicyTagList) throws IOException - Throws:
IOException
-
createTable
public void createTable(org.apache.hadoop.fs.Path tableURI, String user, com.mapr.fs.proto.Dbserver.TableAttr attr, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, byte[][] splitKeys, int auditValue, List<String> securityPolicyTagList) throws IOException - Throws:
IOException
-
createTable
public void createTable(org.apache.hadoop.fs.Path tableURI, String user, com.mapr.fs.proto.Dbserver.TableAttr attr, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, byte[][] splitKeys, int auditValue, List<String> securityPolicyTagList, TableCreateOptions options) throws IOException - Throws:
IOException
-
createTable
public void createTable(org.apache.hadoop.fs.Path tableURI, String user, com.mapr.fs.proto.Dbserver.TableAttr attr, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, byte[][] splitKeys, int auditValue) throws IOException - Throws:
IOException
-
createTable
- Throws:
IOException
-
createTable
- Throws:
IOException
-
createTable
public void createTable(org.apache.hadoop.fs.Path tableURI, String user, byte[][] splitKeys) throws IOException - Throws:
IOException
-
createTable
public void createTable(org.apache.hadoop.fs.Path tableURI, byte[][] splitKeys, boolean isBulkLoad, boolean isJson) throws IOException - Throws:
IOException
-
createTable
public void createTable(org.apache.hadoop.fs.Path tableURI, byte[][] splitKeys, boolean isBulkLoad, boolean isJson, boolean insertOrder) throws IOException - Throws:
IOException
-
getTablets
public com.mapr.fs.proto.Dbserver.TabletLookupResponse getTablets(org.apache.hadoop.fs.Path tableURI, String indexFid, byte[] stKey, byte[] endKey, boolean needSpaceUsage) throws IOException - Throws:
IOException
-
getTablets
public com.mapr.fs.proto.Dbserver.TabletLookupResponse getTablets(org.apache.hadoop.fs.Path tableURI, String indexFid, byte[] stKey, byte[] endKey, boolean needSpaceUsage, boolean prefetchTabletMap) throws IOException - Throws:
IOException
-
createColumnFamily
public void createColumnFamily(org.apache.hadoop.fs.Path tableURI, String name, com.mapr.fs.proto.Dbserver.ColumnFamilyAttr cfAttr) throws IOException - Throws:
IOException
-
createColumnFamily
public void createColumnFamily(org.apache.hadoop.fs.Path tableURI, String name, com.mapr.fs.proto.Dbserver.ColumnFamilyAttr cfAttr, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, List<String> securityPolicyTagList) throws IOException - Throws:
IOException
-
createColumnFamily
public void createColumnFamily(org.apache.hadoop.fs.Path tableURI, String name, com.mapr.fs.proto.Dbserver.ColumnFamilyAttr cfAttr, List<String> securityPolicyTagList) throws IOException - Throws:
IOException
-
modifyColumnFamily
public void modifyColumnFamily(org.apache.hadoop.fs.Path tableURI, String name, com.mapr.fs.proto.Dbserver.ColumnFamilyAttr cfAttr, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm) throws IOException - Throws:
IOException
-
modifyColumnFamily
public void modifyColumnFamily(org.apache.hadoop.fs.Path tableURI, String name, com.mapr.fs.proto.Dbserver.ColumnFamilyAttr cfAttr, com.mapr.fs.proto.Dbserver.SecurityPolicyOperation securityPolicyOp, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm) throws IOException - Throws:
IOException
-
modifyColumnFamily
public void modifyColumnFamily(org.apache.hadoop.fs.Path tableURI, String name, com.mapr.fs.proto.Dbserver.ColumnFamilyAttr cfAttr) throws IOException - Throws:
IOException
-
modifyColumnFamily
public void modifyColumnFamily(org.apache.hadoop.fs.Path tableURI, String name, com.mapr.fs.proto.Dbserver.SecurityPolicyOperation securityPolicyOp, com.mapr.fs.proto.Dbserver.ColumnFamilyAttr cfAttr) throws IOException - Throws:
IOException
-
deleteColumnFamily
- Throws:
IOException
-
listColumnFamily
public List<com.mapr.fs.proto.Dbserver.ColumnFamilyAttr> listColumnFamily(org.apache.hadoop.fs.Path tableURI, boolean wantAces) throws IOException - Throws:
IOException
-
listColumnFamily
public List<com.mapr.fs.proto.Dbserver.ColumnFamilyAttr> listColumnFamily(org.apache.hadoop.fs.Path tableURI, boolean wantAces, boolean useCached) throws IOException - Throws:
IOException
-
modifyDbSecurityPolicy
public void modifyDbSecurityPolicy(org.apache.hadoop.fs.Path tableURI, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, List<String> securityPolicyTagList, com.mapr.fs.proto.Dbserver.SecurityPolicyOperation op) throws IOException - Throws:
IOException
-
modifyDbCfSecurityPolicy
public void modifyDbCfSecurityPolicy(org.apache.hadoop.fs.Path tableURI, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, List<String> securityPolicyTagList, com.mapr.fs.proto.Dbserver.SecurityPolicyOperation op, String cfname) throws IOException - Throws:
IOException
-
modifyTableAttr
public void modifyTableAttr(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableAttr am, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm) throws IOException - Throws:
IOException
-
modifyTableAttr
public void modifyTableAttr(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableAttr am, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, boolean genUuid) throws IOException - Throws:
IOException
-
modifyTableAttr
public void modifyTableAttr(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableAttr am, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, boolean genUuid, com.mapr.fs.proto.Dbserver.SecurityPolicyOperation securityPolicyOp) throws IOException - Throws:
IOException
-
modifyTableAttr
public void modifyTableAttr(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableAttr am, com.mapr.fs.proto.Dbserver.TableAces aces) throws IOException - Throws:
IOException
-
modifyTableAttr
public void modifyTableAttr(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableAttr am, com.mapr.fs.proto.Dbserver.TableAces aces, boolean genUuid, com.mapr.fs.proto.Dbserver.SecurityPolicyOperation securityPolicyOp) throws IOException - Throws:
IOException
-
getTableBasicAttrs
- Throws:
IOException
-
getTableProperties
- Throws:
IOException
-
splitTableRegion
public void splitTableRegion(org.apache.hadoop.fs.Path tableURI, String fidstr, boolean ignoreRegionTooSmallError) throws IOException - Throws:
IOException
-
packTableRegion
public void packTableRegion(org.apache.hadoop.fs.Path tableURI, String fidstr, int ctype) throws IOException - Throws:
IOException
-
mergeTableRegion
- Throws:
IOException
-
getTabletScanner
public MapRTabletScanner getTabletScanner(org.apache.hadoop.fs.Path tableURI, String indexFid) throws IOException - Throws:
IOException
-
getTabletScanner
public MapRTabletScanner getTabletScanner(org.apache.hadoop.fs.Path tableURI, String indexFid, boolean needSpaceUsage, boolean prefetchTabletMap) throws IOException - Throws:
IOException
-
getContainerInfo
public byte[] getContainerInfo(org.apache.hadoop.fs.Path path, List<Integer> cidList) throws IOException - Throws:
IOException
-
getTabletScanner
public MapRTabletScanner getTabletScanner(org.apache.hadoop.fs.Path tableURI, String indexFid, byte[] startKey) throws IOException - Throws:
IOException
-
getTabletScanner
public MapRTabletScanner getTabletScanner(org.apache.hadoop.fs.Path tableURI, String indexFid, byte[] startKey, byte[] endKey, boolean needSpaceUsage, boolean prefetchTabletMap) throws IOException - Throws:
IOException
-
getTabletStat
public com.mapr.fs.proto.Dbserver.TabletStatResponse getTabletStat(org.apache.hadoop.fs.Path tablePath, com.mapr.fs.proto.Common.FidMsg tabletFid) throws IOException - Throws:
IOException
-
addTableReplica
public void addTableReplica(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableReplicaDesc desc, com.mapr.fs.proto.Dbserver.TableReplAutoSetupInfo ainfo) throws IOException, UnsupportedOperationException - Throws:
IOException
UnsupportedOperationException
-
editTableReplica
public void editTableReplica(org.apache.hadoop.fs.Path tableURI, String clusterName, String replicaPath, boolean allowAllCfs, com.mapr.fs.proto.Dbserver.TableReplicaDesc desc) throws IOException - Throws:
IOException
-
editTableReplica
public void editTableReplica(org.apache.hadoop.fs.Path tableURI, String clusterName, String replicaPath, String topicName, boolean allowAllCfs, com.mapr.fs.proto.Dbserver.TableReplicaDesc desc) throws IOException - Throws:
IOException
-
getTableStats
public com.mapr.fs.proto.Dbserver.TableBasicStats getTableStats(org.apache.hadoop.fs.Path tableURI, String indexFid) throws IOException - Throws:
IOException
-
getScanRangeStats
public com.mapr.fs.proto.Dbserver.TableBasicStats getScanRangeStats(org.apache.hadoop.fs.Path tableURI, String indexFid, byte[] stKey, byte[] endKey) throws IOException - Throws:
IOException
-
listTableIndexes
public com.mapr.fs.proto.Dbserver.TableReplicaListResponse listTableIndexes(org.apache.hadoop.fs.Path tableURI, boolean wantStats, boolean skipFieldsReadPermCheck, boolean refreshNow) throws IOException - Throws:
IOException
-
listTableReplicas
public com.mapr.fs.proto.Dbserver.TableReplicaListResponse listTableReplicas(org.apache.hadoop.fs.Path tableURI, boolean wantStats, boolean refreshNow, boolean getCompactInfo) throws IOException - Throws:
IOException
-
removeTableReplica
public void removeTableReplica(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableReplicaDesc desc) throws IOException - Throws:
IOException
-
addTableUpstream
public void addTableUpstream(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableUpstreamDesc desc) throws IOException - Throws:
IOException
-
listTableUpstreams
public com.mapr.fs.proto.Dbserver.TableUpstreamListResponse listTableUpstreams(org.apache.hadoop.fs.Path tableURI) throws IOException - Throws:
IOException
-
removeTableUpstream
public void removeTableUpstream(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableUpstreamDesc desc) throws IOException - Throws:
IOException
-
getServerForCid
- Throws:
IOException
-
getDefaultClusterName
-
getClusterName
- Throws:
IOException
-
getServerForCid
- Throws:
IOException
-
openFid2
public org.apache.hadoop.fs.FSDataInputStream openFid2(org.apache.hadoop.fs.PathId pfid, String file, int readAheadBytesHint) throws IOException - Specified by:
openFid2 in interface org.apache.hadoop.maprfs.Fid- Overrides:
openFid2 in class org.apache.hadoop.fs.FileSystem- Throws:
IOException
-
openFid
public org.apache.hadoop.fs.FSDataInputStream openFid(String fid, byte[][] ips, int[] ports, long chunkSize, long fileSize) throws IOException - Throws:
IOException
-
openFid
public org.apache.hadoop.fs.FSDataInputStream openFid(String pfid, String file, byte[][] ips, int[] ports) throws IOException - Throws:
IOException
-
createFid
public org.apache.hadoop.fs.FSDataOutputStream createFid(String pfid, String file) throws IOException - Specified by:
createFid in interface org.apache.hadoop.maprfs.Fid- Overrides:
createFid in class org.apache.hadoop.fs.FileSystem- Throws:
IOException
-
createFid
public org.apache.hadoop.fs.FSDataOutputStream createFid(String pfid, String file, boolean overwrite) throws IOException - Specified by:
createFid in interface org.apache.hadoop.maprfs.Fid- Specified by:
createFid in class org.apache.hadoop.maprfs.AbstractMapRFileSystem- Throws:
IOException
-
deleteFid
- Specified by:
deleteFid in interface org.apache.hadoop.maprfs.Fid- Overrides:
deleteFid in class org.apache.hadoop.fs.FileSystem- Throws:
IOException
-
mkdirsFid
- Specified by:
mkdirsFid in interface org.apache.hadoop.maprfs.Fid- Overrides:
mkdirsFid in class org.apache.hadoop.fs.FileSystem- Throws:
IOException
-
mkdirsFid
- Specified by:
mkdirsFid in interface org.apache.hadoop.maprfs.Fid- Overrides:
mkdirsFid in class org.apache.hadoop.fs.FileSystem- Throws:
IOException
-
resolveTablePath
public org.apache.hadoop.fs.Path resolveTablePath(org.apache.hadoop.fs.Path path) throws IOException - Throws:
IOException
-
isTable
- Throws:
IOException
-
isJsonTable
- Throws:
IOException
-
isStream
- Throws:
IOException
-
isChangelog
- Throws:
IOException
-
GatewaySourceToString
-
getGatewayIps
public com.mapr.fs.jni.IPPort[] getGatewayIps(String file, String dstCluster, boolean skipCache, com.mapr.fs.jni.GatewaySource source) throws IOException - Throws:
IOException
-
getGatewayIps
- Throws:
IOException
-
getClusterNameUnique
public boolean getClusterNameUnique() -
getClusterList
-
enablePrivilegedProcessAccess
- Throws:
IOException
-
getRAThreads
public static int getRAThreads() -
addDelegationTokens
public org.apache.hadoop.security.token.Token<?>[] addDelegationTokens(String renewer, org.apache.hadoop.security.Credentials credentials) - Specified by:
addDelegationTokens in interface org.apache.hadoop.security.token.DelegationTokenIssuer
-
addSecurityPolicy
public int addSecurityPolicy(org.apache.hadoop.fs.Path path, String securityPolicyTag, boolean recursive) throws IOException This method adds a single security policy tag to the list of existing security policy tags (if any) for the file or directory specified in path. The securityPolicyTag parameter is the security policy name.- Parameters:
path - The full path name of the file or directory to be tagged
securityPolicyTag - The security policy name
recursive - If path is a directory and recursive is true, the tag is also applied to the directory's contents
- Returns:
- This method returns 0 if successful, or a Unix error code otherwise.
- Throws:
IOException - if the file cannot be tagged
-
addSecurityPolicy
public int addSecurityPolicy(org.apache.hadoop.fs.Path path, List<String> securityPolicyTags, boolean recursive) throws IOException This method adds one or more security policy tags to the list of existing security policy tags (if any) for the file or directory specified in path. The securityPolicyTags parameter is a list of one or more security policy names.- Parameters:
path - The full path name of the file or directory to be tagged
securityPolicyTags - The list of security policy names
recursive - If path is a directory and recursive is true, the tags are also applied to the directory's contents
- Returns:
- This method returns 0 if successful, or a Unix error code otherwise.
- Throws:
IOException - if the file cannot be tagged
-
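The add, set, and remove variants of these methods differ only in whether they append to, replace, or prune the existing tag list. A minimal usage sketch, assuming a configured MapR client on the classpath and a hypothetical directory /data/restricted with hypothetical policy names pci and hipaa:

```java
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import com.mapr.fs.MapRFileSystem;

public class SecurityPolicyTagExample {
    public static void main(String[] args) throws Exception {
        // FileSystem.get returns a MapRFileSystem when fs.defaultFS points at maprfs://.
        Configuration conf = new Configuration();
        MapRFileSystem fs = (MapRFileSystem) FileSystem.get(conf);

        Path dir = new Path("/data/restricted");            // hypothetical path
        List<String> tags = Arrays.asList("pci", "hipaa");  // hypothetical policy names

        // Append both policies to any existing tags, recursing into the directory.
        int rc = fs.addSecurityPolicy(dir, tags, true);
        if (rc != 0) {
            // A non-zero return is a Unix error code (e.g. EACCES).
            System.err.println("addSecurityPolicy failed: errno " + rc);
        }
    }
}
```

Note that these methods report failure both through the int return value (a Unix error code) and through IOException, so callers should check both.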
setSecurityPolicy
public int setSecurityPolicy(org.apache.hadoop.fs.Path path, String securityPolicyTag, boolean recursive) throws IOException This method sets the security policy tag for the file or directory specified in path, replacing all existing tags.- Parameters:
path - The full path name of the file or directory to be tagged
securityPolicyTag - The security policy name
recursive - If path is a directory and recursive is true, the tag is also applied to the directory's contents
- Returns:
- This method returns 0 if successful, or a Unix error code otherwise.
- Throws:
IOException - if the file cannot be tagged
-
setSecurityPolicy
public int setSecurityPolicy(org.apache.hadoop.fs.Path path, List<String> securityPolicyTags, boolean recursive) throws IOException This method sets one or more security policy tags for the file or directory specified in path, replacing any existing security policies. The securityPolicyTags parameter is a list of one or more security policy names.- Parameters:
path - The full path name of the file or directory to be tagged
securityPolicyTags - The list of security policy names
recursive - If path is a directory and recursive is true, the tags are also applied to the directory's contents
- Returns:
- This method returns 0 if successful, or a Unix error code otherwise.
- Throws:
IOException - if the file cannot be tagged
-
removeSecurityPolicy
public int removeSecurityPolicy(org.apache.hadoop.fs.Path path, String securityPolicyTag, boolean recursive) throws IOException This method removes the security policy named securityPolicyTag from the list of existing security policies (if any) for the file or directory specified in path.- Parameters:
path - The full path name of the file or directory from which the tag is to be removed
securityPolicyTag - The security policy name
recursive - If path is a directory and recursive is true, the tag is also removed from the directory's contents
- Returns:
- This method returns 0 if successful, or a Unix error code otherwise.
- Throws:
IOException - if the tag cannot be removed
-
removeSecurityPolicy
public int removeSecurityPolicy(org.apache.hadoop.fs.Path path, List<String> securityPolicyTags, boolean recursive) throws IOException This method removes one or more security policy tags from the list of existing security policy tags (if any) for the file or directory specified in path. The securityPolicyTags parameter is a list of one or more security policy names.- Parameters:
path - The full path name of the file or directory from which the tags are to be removed
securityPolicyTags - The security policy tags to remove
- Returns:
- This method returns 0 if successful, or a Unix error code otherwise.
- Throws:
IOException - if the tags cannot be removed
-
removeAllSecurityPolicies
public int removeAllSecurityPolicies(org.apache.hadoop.fs.Path path, boolean recursive) throws IOException This method removes all security policy tags associated with the file or directory specified by path.- Parameters:
path - The full path name of the file or directory from which all tags are to be removed
- Returns:
- This method returns 0 if successful, or a Unix error code otherwise.
- Throws:
IOException - if the tags cannot be removed
-
getSecurityPolicy
public int getSecurityPolicy(org.apache.hadoop.fs.Path path, List<String> securityPolicyTags) throws IOException This method returns the security policies associated with the file or directory specified in path. The security policy names are returned in the caller-supplied securityPolicyTags list.- Parameters:
path - The full path name of the file or directory whose tags are to be retrieved
securityPolicyTags - The caller-supplied list. On return, this contains the security policy tags associated with the file or directory
- Returns:
- This method returns 0 if successful, or a Unix error code otherwise
- Throws:
IOException - if the tags cannot be retrieved
-
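Reading tags back follows the out-parameter convention used by this overload: the caller supplies the list and the method populates it. A sketch, again assuming a configured MapR client and the hypothetical /data/restricted path:

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import com.mapr.fs.MapRFileSystem;

public class GetSecurityPolicyExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        MapRFileSystem fs = (MapRFileSystem) FileSystem.get(conf);

        // Caller-supplied list; getSecurityPolicy fills it in on success.
        List<String> tags = new ArrayList<>();
        int rc = fs.getSecurityPolicy(new Path("/data/restricted"), tags);
        if (rc == 0) {
            tags.forEach(t -> System.out.println("policy: " + t));
        }
    }
}
```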
getSecurityPolicy
public int getSecurityPolicy(List<org.apache.hadoop.fs.Path> paths, List<MapRPathToSecurityPolicyTags> securityPolicyTags) throws IOException This method returns the security policies associated with the files or directories in paths. The mappings are returned in the securityPolicyTags parameter as a list of MapRPathToSecurityPolicyTags objects, each of which contains a path with its associated list of one or more security policy names.- Parameters:
paths - The full path names of the files whose tags are to be retrieved
securityPolicyTags - The caller-supplied list. On return, this contains the path-to-security-policy-tag mappings
- Returns:
- This method returns 0 if successful, or a Unix error code otherwise
- Throws:
IOException - if the tags cannot be retrieved
-
printSecurityPolicies
public int printSecurityPolicies(org.apache.hadoop.fs.Path path, boolean recursive) throws IOException This method prints the security policies associated with the file or directory specified in path.- Parameters:
path - The full path name of the file or directory whose tags are to be printed
recursive - If path is a directory and recursive is true, the security policies of the directory's contents are also printed. For a file, recursive can be set to either true or false
- Returns:
- This method returns 0 if successful, or a Unix error code otherwise
- Throws:
IOException - if the tags cannot be retrieved
-
getAllDataMasks
Retrieves the information for all data masks- Returns:
- List of DataMask objects containing all the information
- Throws:
IOException- if there is an internal error
-
getDataMask
Retrieves the information for a particular data mask, given its name- Parameters:
dmName - the name of the data mask to be retrieved- Returns:
- a DataMask object containing all the information
- Throws:
IOException- if there is an internal error
-
getDataMaskNameFromId
Returns the data mask name given the data mask ID, or null if the data mask is not found- Throws:
IOException
-
s3BucketCreate
public byte[] s3BucketCreate(String path, String bktName, String domain, int aId, boolean worm, long ownwerUid) throws IOException - Throws:
IOException
-
removeS3Bucket
- Throws:
IOException
-
addTableEntryToMetadata
-
addTableEntryToMetadata
-
Fids.fidToString(FidMsg)