Package com.mapr.fs
Class MapRFileSystem
java.lang.Object
  org.apache.hadoop.conf.Configured
    org.apache.hadoop.fs.FileSystem
      org.apache.hadoop.maprfs.AbstractMapRFileSystem
        com.mapr.fs.MapRFileSystem

All Implemented Interfaces:
com.mapr.fs.jni.MapRConstants, java.io.Closeable, java.lang.AutoCloseable, org.apache.hadoop.conf.Configurable, org.apache.hadoop.fs.PathCapabilities, org.apache.hadoop.maprfs.Fid, org.apache.hadoop.security.token.DelegationTokenIssuer
public class MapRFileSystem extends org.apache.hadoop.maprfs.AbstractMapRFileSystem implements com.mapr.fs.jni.MapRConstants
-
Field Summary
Fields

static java.lang.String[] emptyStringArray
static java.net.URI MAPRFS_BASE_URI
int MaxNumFileAces
protected com.google.common.collect.BiMap<java.lang.Integer,java.lang.String> securityPolicyIdToNameMap_
protected com.google.common.collect.BiMap<java.lang.String,java.lang.Integer> securityPolicyNameToIdMap_
Fields inherited from class org.apache.hadoop.fs.FileSystem
DEFAULT_FS, FS_DEFAULT_NAME_KEY, SHUTDOWN_HOOK_PRIORITY, statistics, TRASH_PREFIX, USER_HOME_PREFIX
-
Fields inherited from interface com.mapr.fs.jni.MapRConstants
AtimeBit, AuditBit, ChunkSizeBit, ClusterConfDefault, CompressBit, CompressorTypeBit, DEFAULT_USER_IDENTIFIER, DEFAULT_USER_IDENTIFIER_ESCAPED, DefaultChunkSize, DefaultCLDBIp, DefaultCLDBPort, DiskFlushBit, EMPTY_BYTE_ARRAY, EMPTY_END_ROW, EMPTY_START_ROW, FidNameBit, GroupBit, HADOOP_MAX_BLOCKSIZE, HADOOP_SECURITY_SPOOFED_GID, HADOOP_SECURITY_SPOOFED_GROUP, HADOOP_SECURITY_SPOOFED_UID, HADOOP_SECURITY_SPOOFED_USER, HOSTNAME_IP_SEPARATOR, IP_PORT_SEPARATOR, LAST_ROW, LATEST_TIMESTAMP, MAPR_ENV_VAR, MAPR_PROPERTY_HOME, MapRClusterDir, MapRClusterDirPattern, MapRClusterDirSlash, MAPRFS_PREFIX, MAPRFS_SCHEME, MapRHomeDefault, MAX_CLUSTERS_CROSSED, MAX_PATH_LENGTH, MAX_PORT_NUMBER, MAX_RA_THREADS, MIN_RA_THREADS, MinChunkSize, ModeBit, MtimeBit, MULTI_ADDR_SEPARATOR, NUM_CONTAINERS_PER_RPC, OLDEST_TIMESTAMP, RA_THREADS_PER_STREAM, ReplBit, UserBit, UTF8_ENCODING, WireSecureBit
-
Constructor Summary
Constructors

MapRFileSystem()
MapRFileSystem(java.lang.String uname)
MapRFileSystem(java.lang.String cName, java.lang.String[] cldbLocations)
MapRFileSystem(java.lang.String cName, java.lang.String[] cldbLocations, java.lang.String uname)
-
Method Summary
Methods

void access(org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.permission.FsAction mode)
void addAceEntryError(java.util.ArrayList<FileAceEntry> acesList, org.apache.hadoop.fs.Path path, int err)
org.apache.hadoop.security.token.Token<?>[] addDelegationTokens(java.lang.String renewer, org.apache.hadoop.security.Credentials credentials)
int addSecurityPolicy(org.apache.hadoop.fs.Path path, java.lang.String securityPolicyTag, boolean recursive) - Adds a single security policy tag to the list of existing security policy tags (if any) for the file or directory specified in path.
int addSecurityPolicy(org.apache.hadoop.fs.Path path, java.util.List<java.lang.String> securityPolicyTags, boolean recursive) - Adds one or more security policy tags to the list of existing security policy tags (if any) for the file or directory specified in path.
void addTableReplica(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableReplicaDesc desc, com.mapr.fs.proto.Dbserver.TableReplAutoSetupInfo ainfo)
void addTableUpstream(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableUpstreamDesc desc)
org.apache.hadoop.fs.FSDataOutputStream append(org.apache.hadoop.fs.Path f, int bufferSize, org.apache.hadoop.util.Progressable progress) - Append to an existing file (optional operation).
void clearQueryServiceParam() - Clears the QueryServiceParam for the default cluster.
void clearQueryServiceParam(java.lang.String clusterName) - Clears the QueryServiceParam for the specified cluster.
void close()
int copyAce(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst) - Copies the ACEs on source to destination.
org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, boolean overwrite) - Opens an FSDataOutputStream at the indicated Path.
org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, boolean overwrite, int bufferSize) - Opens an FSDataOutputStream at the indicated Path.
org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, boolean overwrite, int bufferSize, short replication, long blockSize)
org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, boolean overwrite, int bufferSize, org.apache.hadoop.util.Progressable progress) - Opens an FSDataOutputStream at the indicated Path with write-progress reporting.
org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, int mask, org.apache.hadoop.fs.permission.FsPermission permission, boolean createIfNonExistant, boolean append, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, boolean createParent)
org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, short replication) - Opens an FSDataOutputStream at the indicated Path.
org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, short replication, org.apache.hadoop.util.Progressable progress) - Opens an FSDataOutputStream at the indicated Path with write-progress reporting.
org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress)
org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, java.util.EnumSet<org.apache.hadoop.fs.CreateFlag> flag, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt)
org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.util.Progressable progress) - Creates an FSDataOutputStream at the indicated Path with write-progress reporting.
void createColumnFamily(org.apache.hadoop.fs.Path tableURI, java.lang.String name, com.mapr.fs.proto.Dbserver.ColumnFamilyAttr cfAttr)
void createColumnFamily(org.apache.hadoop.fs.Path tableURI, java.lang.String name, com.mapr.fs.proto.Dbserver.ColumnFamilyAttr cfAttr, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, java.util.List<java.lang.String> securityPolicyTagList)
void createColumnFamily(org.apache.hadoop.fs.Path tableURI, java.lang.String name, com.mapr.fs.proto.Dbserver.ColumnFamilyAttr cfAttr, java.util.List<java.lang.String> securityPolicyTagList)
org.apache.hadoop.fs.FSDataOutputStream createFid(java.lang.String pfid, java.lang.String file)
org.apache.hadoop.fs.FSDataOutputStream createFid(java.lang.String pfid, java.lang.String file, boolean overwrite)
int createHardlink(org.apache.hadoop.fs.Path oldpath, org.apache.hadoop.fs.Path newpath)
org.apache.hadoop.fs.FSDataOutputStream createNonRecursive(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress)
org.apache.hadoop.fs.PathId createPathId()
int createSnapshot(java.lang.String cluster, java.lang.String volumeName, int volId, int rootCid, java.lang.String snapshotName, boolean mirrorSnapshot, long expirationTime, java.lang.String username)
void createSymbolicLink(org.apache.hadoop.fs.Path target, org.apache.hadoop.fs.Path link)
int createSymlink(java.lang.String target, org.apache.hadoop.fs.Path link, boolean createParent)
void createSymlink(org.apache.hadoop.fs.Path target, org.apache.hadoop.fs.Path link, boolean createParent)
void createTable(org.apache.hadoop.fs.Path tableURI)
void createTable(org.apache.hadoop.fs.Path tableURI, byte[][] splitKeys, boolean isBulkLoad, boolean isJson)
void createTable(org.apache.hadoop.fs.Path tableURI, byte[][] splitKeys, boolean isBulkLoad, boolean isJson, boolean insertOrder)
void createTable(org.apache.hadoop.fs.Path tableURI, java.lang.String user)
void createTable(org.apache.hadoop.fs.Path tableURI, java.lang.String user, byte[][] splitKeys)
void createTable(org.apache.hadoop.fs.Path tableURI, java.lang.String user, com.mapr.fs.proto.Dbserver.TableAttr attr, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, byte[][] splitKeys, int auditValue)
void createTable(org.apache.hadoop.fs.Path tableURI, java.lang.String user, com.mapr.fs.proto.Dbserver.TableAttr attr, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, byte[][] splitKeys, int auditValue, java.util.List<java.lang.String> securityPolicyTagList)
void createTable(org.apache.hadoop.fs.Path tableURI, java.lang.String user, com.mapr.fs.proto.Dbserver.TableAttr attr, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, int auditValue)
void createTable(org.apache.hadoop.fs.Path tableURI, java.lang.String user, com.mapr.fs.proto.Dbserver.TableAttr attr, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, int auditValue, java.util.List<java.lang.String> securityPolicyTagList)
java.lang.String createTable(org.apache.hadoop.fs.Path tableURI, java.lang.String user, com.mapr.fs.proto.Dbserver.TableAttr attr, com.mapr.fs.proto.Dbserver.TableAces aces, byte[][] splitKeys, boolean needServerInfo, int auditValue)
java.lang.String createTable(org.apache.hadoop.fs.Path tableURI, java.lang.String user, com.mapr.fs.proto.Dbserver.TableAttr attr, com.mapr.fs.proto.Dbserver.TableAces aces, byte[][] splitKeys, boolean needServerInfo, int auditValue, java.util.List<java.lang.String> securityPolicyTagList)
int createVolLink(java.lang.String cluster, java.lang.String volName, org.apache.hadoop.fs.Path volLink, boolean writeable, boolean isHidden)
static com.mapr.fs.jni.MapRUserInfo CurrentUserInfo()
static com.mapr.fs.jni.MapRUserInfo CurrentUserInfo(java.lang.String uname)
int delAces(org.apache.hadoop.fs.Path path, boolean recursive)
int delAces(org.apache.hadoop.fs.Path path, boolean recursive, int symLinkHeight)
boolean delete(org.apache.hadoop.fs.Path f)
boolean delete(org.apache.hadoop.fs.Path f, boolean recursive) - Deletes a file.
void deleteAces(org.apache.hadoop.fs.Path path) - Deletes all ACEs of a file or directory.
void deleteAces(org.apache.hadoop.fs.Path path, boolean recursive) - Deletes all ACEs of a file or directory recursively.
void deleteColumnFamily(org.apache.hadoop.fs.Path tableURI, java.lang.String name)
boolean deleteFid(java.lang.String pfid, java.lang.String dir)
int deleteVolLink(java.lang.String cluster, java.lang.String volLink)
void editTableReplica(org.apache.hadoop.fs.Path tableURI, java.lang.String clusterName, java.lang.String replicaPath, boolean allowAllCfs, com.mapr.fs.proto.Dbserver.TableReplicaDesc desc)
void editTableReplica(org.apache.hadoop.fs.Path tableURI, java.lang.String clusterName, java.lang.String replicaPath, java.lang.String topicName, boolean allowAllCfs, com.mapr.fs.proto.Dbserver.TableReplicaDesc desc)
void enablePrivilegedProcessAccess(boolean enable)
<ReturnType> ReturnType executeCommand(com.mapr.fs.FSCommandHandler handler, com.mapr.fs.FSCommandHandler.ICommandExecutor<ReturnType> executor)
static java.lang.String fidToString(com.mapr.fs.proto.Common.FidMsg fid) - Deprecated.
void forceLocalResolution(java.net.URI name)
java.lang.String GatewaySourceToString(int source)
java.util.List<MapRFileAce> getAces(org.apache.hadoop.fs.Path path) - Gets the ACEs of a file or directory.
java.util.ArrayList<FileAceEntry> getAces(org.apache.hadoop.fs.Path path, boolean recursive)
java.util.ArrayList<FileAceEntry> getAces(org.apache.hadoop.fs.Path path, boolean recursive, int symLinkHeight)
java.util.ArrayList<FileAceEntry> getAces(org.apache.hadoop.fs.Path path, boolean recursive, int symLinkHeight, int serveridx)
java.util.List<com.mapr.fs.proto.Dbserver.DataMask> getAllDataMasks() - Retrieves the info for all data masks.
int getCidFromPath(org.apache.hadoop.fs.Path path)
static ClusterConf getClusterConf()
java.util.List<ClusterConf.ClusterEntry> getClusterList()
java.lang.String getClusterName(java.net.URI p)
java.lang.String getClusterNameUnchecked(java.lang.String str)
boolean getClusterNameUnique()
java.util.Map<java.lang.String,java.lang.Integer> getClusterSecurityPolicies()
byte[] getContainerInfo(org.apache.hadoop.fs.Path path, java.util.List<java.lang.Integer> cidList)
com.mapr.fs.proto.Dbserver.DataMask getDataMask(java.lang.String dmName) - Retrieves the info for a particular data mask, given its name.
java.lang.String getDataMaskNameFromId(int id) - Returns the data mask name given the data mask ID.
long getDefaultBlockSize()
java.lang.String getDefaultClusterName()
short getDefaultReplication()
org.apache.hadoop.fs.BlockLocation[] getFileBlockLocations(org.apache.hadoop.fs.FileStatus file, long start, long len)
com.mapr.fs.jni.MapRFileCount getFileCount(org.apache.hadoop.fs.Path f)
org.apache.hadoop.fs.FileStatus getFileLinkStatus(org.apache.hadoop.fs.Path f)
org.apache.hadoop.fs.FileStatus getFileStatus(org.apache.hadoop.fs.Path f)
com.mapr.fs.jni.IPPort[] getGatewayIps(java.lang.String file)
com.mapr.fs.jni.IPPort[] getGatewayIps(java.lang.String file, java.lang.String dstCluster, boolean skipCache, com.mapr.fs.jni.GatewaySource source)
org.apache.hadoop.fs.Path getHomeDirectory() - utility functions
java.net.InetSocketAddress[] getJobTrackerAddrs(org.apache.hadoop.conf.Configuration conf)
org.apache.hadoop.fs.Path getLinkTarget(org.apache.hadoop.fs.Path f)
MapRBlockLocation[] getMapRFileBlockLocations(org.apache.hadoop.fs.FileStatus file, long start, long len, boolean usePrimaryFid, boolean needDiskBlocks, boolean fullBlockInfo)
MapRFileStatus getMapRFileStatus(org.apache.hadoop.fs.Path f)
java.lang.String getMountPath(java.lang.String cluster, java.lang.String username, int pCid, int pCinum, int pUniq)
java.lang.String getMountPath(java.lang.String cluster, java.lang.String username, int pCid, int pCinum, int pUniq, boolean useCache)
java.lang.String getMountPathFid(java.lang.String fidStr)
java.lang.String getMountPathFidCached(java.lang.String fidStr)
java.lang.String getName(org.apache.hadoop.fs.Path p)
java.lang.String getNameStr(java.lang.String str)
QueryServiceParam getQueryServiceParam() - Returns the QueryServiceParam for the default cluster.
QueryServiceParam getQueryServiceParam(java.lang.String clusterName) - Returns the QueryServiceParam for the specified cluster.
static int getRAThreads()
com.mapr.fs.proto.Dbserver.TableBasicStats getScanRangeStats(org.apache.hadoop.fs.Path tableURI, java.lang.String indexFid, byte[] stKey, byte[] endKey)
java.lang.String getScheme()
int getSecurityPolicy(java.util.List<org.apache.hadoop.fs.Path> paths, java.util.List<MapRPathToSecurityPolicyTags> securityPolicyTags) - Returns the security policies associated with the files or directories in the given list of paths.
int getSecurityPolicy(org.apache.hadoop.fs.Path path, java.util.List<java.lang.String> securityPolicyTags) - Returns the security policies associated with the file or directory specified in path.
int getSecurityPolicyId(java.lang.String policyName) - Gets the security policy ID for a single security policy name.
java.util.List<java.lang.Integer> getSecurityPolicyIds(java.util.List<java.lang.String> securityPolicyTagList) - Gets the security policy IDs for the given security policy names from the policy cache maintained in the MapR client JNI layer.
java.lang.String getSecurityPolicyName(int policyID) - Gets the security policy name for a given security policy ID.
java.lang.String getSecurityPolicyNameOrId(int spId) - Gets the security policy name for a given security policy ID.
java.lang.String getServerForCid(int cid)
java.lang.String getServerForCid(int cid, java.lang.String cluster)
MapRFileStatus getStat(org.apache.hadoop.fs.Path path)
org.apache.hadoop.fs.FsStatus getStatus()
org.apache.hadoop.fs.FsStatus getStatus(org.apache.hadoop.fs.Path p)
TableBasicAttrs getTableBasicAttrs(org.apache.hadoop.fs.Path tableURI)
TableProperties getTableProperties(org.apache.hadoop.fs.Path tableURI)
com.mapr.fs.proto.Dbserver.TableBasicStats getTableStats(org.apache.hadoop.fs.Path tableURI, java.lang.String indexFid)
com.mapr.fs.proto.Dbserver.TabletLookupResponse getTablets(org.apache.hadoop.fs.Path tableURI, java.lang.String indexFid, byte[] stKey, byte[] endKey, boolean needSpaceUsage)
com.mapr.fs.proto.Dbserver.TabletLookupResponse getTablets(org.apache.hadoop.fs.Path tableURI, java.lang.String indexFid, byte[] stKey, byte[] endKey, boolean needSpaceUsage, boolean prefetchTabletMap)
MapRTabletScanner getTabletScanner(org.apache.hadoop.fs.Path tableURI, java.lang.String indexFid)
MapRTabletScanner getTabletScanner(org.apache.hadoop.fs.Path tableURI, java.lang.String indexFid, boolean needSpaceUsage, boolean prefetchTabletMap)
MapRTabletScanner getTabletScanner(org.apache.hadoop.fs.Path tableURI, java.lang.String indexFid, byte[] startKey)
MapRTabletScanner getTabletScanner(org.apache.hadoop.fs.Path tableURI, java.lang.String indexFid, byte[] startKey, byte[] endKey, boolean needSpaceUsage, boolean prefetchTabletMap)
com.mapr.fs.proto.Dbserver.TabletStatResponse getTabletStat(org.apache.hadoop.fs.Path tablePath, com.mapr.fs.proto.Common.FidMsg tabletFid)
static java.lang.String[] getTrimmedStrings(java.lang.String str) - Splits a comma-separated value String, trimming leading and trailing whitespace on each value.
java.net.URI getUri() - Returns a URI whose scheme and authority identify this FileSystem.
long getUsed() - Returns the total size of all files in the filesystem.
com.mapr.fs.jni.MapRUserInfo getUserInfo()
java.lang.String getVolumeName(int volId) - Gets the volume name for a given volume ID.
java.lang.String getVolumeNameCached(int volId) - Gets the volume name for a given volume ID.
org.apache.hadoop.fs.Path getWorkingDirectory() - Gets the current working directory for the given file system.
byte[] getXAttr(org.apache.hadoop.fs.Path path, java.lang.String name) - Gets the extended attribute associated with the given name on the given file or directory.
java.util.Map<java.lang.String,byte[]> getXAttrs(org.apache.hadoop.fs.Path path) - Gets all the extended attribute name/value pairs associated with the given file or directory.
java.util.Map<java.lang.String,byte[]> getXAttrs(org.apache.hadoop.fs.Path path, java.util.List<java.lang.String> names) - Gets the extended attributes associated with the given list of names on the given file or directory.
java.lang.String getZkConnectString()
void initialize(java.net.URI name, org.apache.hadoop.conf.Configuration conf)
void initialize(java.net.URI name, org.apache.hadoop.conf.Configuration conf, boolean isExpandAuditTool)
boolean isChangelog(org.apache.hadoop.fs.Path path)
static boolean isFidString(java.lang.String fid) - Deprecated.
boolean isJsonTable(org.apache.hadoop.fs.Path path)
boolean isStream(org.apache.hadoop.fs.Path path)
boolean isTable(org.apache.hadoop.fs.Path path)
java.util.List<com.mapr.fs.proto.Dbserver.ColumnFamilyAttr> listColumnFamily(org.apache.hadoop.fs.Path tableURI, boolean wantAces)
java.util.List<com.mapr.fs.proto.Dbserver.ColumnFamilyAttr> listColumnFamily(org.apache.hadoop.fs.Path tableURI, boolean wantAces, boolean useCached)
int listDirLite(org.apache.hadoop.fs.Path f) - Path should be a directory.
MapRFileStatus[] listMapRStatus(org.apache.hadoop.fs.Path f, boolean showVols, boolean showHidden)
MapRFileStatus[] listStatus(org.apache.hadoop.fs.Path f) - Lists the statuses of the files/directories in the given path if the path is a directory.
MapRReaddirLite listStatusLite(org.apache.hadoop.fs.Path f, int cid, int cinum, int uniq, int count, long cookie, boolean showHidden) - Path should be a directory.
com.mapr.fs.proto.Dbserver.TableReplicaListResponse listTableIndexes(org.apache.hadoop.fs.Path tableURI, boolean wantStats, boolean skipFieldsReadPermCheck, boolean refreshNow)
com.mapr.fs.proto.Dbserver.TableReplicaListResponse listTableReplicas(org.apache.hadoop.fs.Path tableURI, boolean wantStats, boolean refreshNow, boolean getCompactInfo)
com.mapr.fs.proto.Dbserver.TableUpstreamListResponse listTableUpstreams(org.apache.hadoop.fs.Path tableURI)
java.util.List<java.lang.String> listXAttrs(org.apache.hadoop.fs.Path path) - Gets all the extended attribute names associated with the given file or directory.
com.mapr.fs.proto.Fileserver.KvstoreLookupResponse lookupKV(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Fileserver.KvStoreKey key)
org.apache.hadoop.fs.Path makeAbsolute(org.apache.hadoop.fs.Path path)
void mergeTableRegion(org.apache.hadoop.fs.Path tableURI, java.lang.String fidstr)
boolean mkdirs(org.apache.hadoop.fs.Path p, boolean createParent, org.apache.hadoop.fs.permission.FsPermission permission)
boolean mkdirs(org.apache.hadoop.fs.Path p, org.apache.hadoop.fs.permission.FsPermission permission) - Makes the given file and all non-existent parents into directories.
boolean mkdirs(org.apache.hadoop.fs.Path p, org.apache.hadoop.fs.permission.FsPermission permission, boolean compress) - mkdirs with compression option.
boolean mkdirs(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, boolean compress, long chunkSize) - mkdirs with compression and chunk size options.
boolean mkdirs(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, long chunkSize) - mkdirs with chunk size option.
java.lang.String mkdirsFid(java.lang.String pfid, java.lang.String dir)
java.lang.String mkdirsFid(org.apache.hadoop.fs.Path p)
void modifyAces(org.apache.hadoop.fs.Path path, java.util.List<MapRFileAce> aces) - Modifies the ACEs of a file or directory.
int modifyAudit(org.apache.hadoop.fs.Path path, boolean val)
void modifyColumnFamily(org.apache.hadoop.fs.Path tableURI, java.lang.String name, com.mapr.fs.proto.Dbserver.ColumnFamilyAttr cfAttr)
void modifyColumnFamily(org.apache.hadoop.fs.Path tableURI, java.lang.String name, com.mapr.fs.proto.Dbserver.ColumnFamilyAttr cfAttr, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm)
void modifyColumnFamily(org.apache.hadoop.fs.Path tableURI, java.lang.String name, com.mapr.fs.proto.Dbserver.ColumnFamilyAttr cfAttr, com.mapr.fs.proto.Dbserver.SecurityPolicyOperation securityPolicyOp, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm)
void modifyColumnFamily(org.apache.hadoop.fs.Path tableURI, java.lang.String name, com.mapr.fs.proto.Dbserver.SecurityPolicyOperation securityPolicyOp, com.mapr.fs.proto.Dbserver.ColumnFamilyAttr cfAttr)
void modifyDbCfSecurityPolicy(org.apache.hadoop.fs.Path tableURI, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, java.util.List<java.lang.String> securityPolicyTagList, com.mapr.fs.proto.Dbserver.SecurityPolicyOperation op, java.lang.String cfname)
void modifyDbSecurityPolicy(org.apache.hadoop.fs.Path tableURI, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, java.util.List<java.lang.String> securityPolicyTagList, com.mapr.fs.proto.Dbserver.SecurityPolicyOperation op)
void modifyTableAttr(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableAttr am, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm)
void modifyTableAttr(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableAttr am, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, boolean genUuid)
void modifyTableAttr(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableAttr am, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, boolean genUuid, com.mapr.fs.proto.Dbserver.SecurityPolicyOperation securityPolicyOp)
void modifyTableAttr(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableAttr am, com.mapr.fs.proto.Dbserver.TableAces aces)
void modifyTableAttr(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableAttr am, com.mapr.fs.proto.Dbserver.TableAces aces, boolean genUuid, com.mapr.fs.proto.Dbserver.SecurityPolicyOperation securityPolicyOp)
int mountVolume(java.lang.String cluster, java.lang.String volName, java.lang.String mountPath, java.lang.String username) - Creates a directory entry for volName at mountPath.
org.apache.hadoop.fs.FSDataInputStream open(org.apache.hadoop.fs.Path f, int bufferSize)
org.apache.hadoop.fs.FSDataInputStream openFid(java.lang.String fid, long[] ips, long chunkSize, long fileSize)
org.apache.hadoop.fs.FSDataInputStream openFid(java.lang.String pfid, java.lang.String file, long[] ips)
org.apache.hadoop.fs.FSDataInputStream openFid2(org.apache.hadoop.fs.PathId pfid, java.lang.String file, int readAheadBytesHint)
Inode openTable(org.apache.hadoop.fs.Path tableURI, MapRHTable htable)
Inode openTableWithFid(org.apache.hadoop.fs.Path priTableURI, java.lang.String indexFid, MapRHTable htable)
void packTableRegion(org.apache.hadoop.fs.Path tableURI, java.lang.String fidstr, int ctype)
com.mapr.fs.jni.MapRUserInfo populateAndGetUserInfo(org.apache.hadoop.fs.Path p)
void populateUserInfo(org.apache.hadoop.fs.Path p)
int printSecurityPolicies(org.apache.hadoop.fs.Path path, boolean recursive) - Prints the security policies associated with the file or directory specified in path.
int removeAllSecurityPolicies(org.apache.hadoop.fs.Path path, boolean recursive) - Removes all security policy tags associated with the file or directory specified by path.
int removeRecursive(org.apache.hadoop.fs.Path f)
byte[] removeS3Bucket(java.lang.String bucketName, java.lang.String domainName)
int removeSecurityPolicy(org.apache.hadoop.fs.Path path, java.lang.String securityPolicyTag, boolean recursive) - Removes the named security policy from the list of existing security policies (if any) for the file or directory specified in path.
int removeSecurityPolicy(org.apache.hadoop.fs.Path path, java.util.List<java.lang.String> securityPolicyTags, boolean recursive) - Removes one or more security policy tags from the list of existing security policy tags (if any) for the file or directory specified in path.
void removeTableReplica(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableReplicaDesc desc)
void removeTableUpstream(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableUpstreamDesc desc)
void removeXAttr(org.apache.hadoop.fs.Path path, java.lang.String name) - Removes the extended attribute (specified by name) associated with the given file or directory.
boolean rename(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst) - Renames Path src to Path dst.
protected org.apache.hadoop.fs.Path resolveLink(org.apache.hadoop.fs.Path f)
org.apache.hadoop.fs.Path resolveTablePath(org.apache.hadoop.fs.Path path)
byte[] s3BucketCreate(java.lang.String path, java.lang.String bktName, java.lang.String domain, int aId, boolean worm, long ownwerUid)
MapRFileStatus[] scanDir(java.lang.String cluster, java.lang.String fidstr)
byte[] scanKV(java.lang.String cluster, java.lang.String fidstr, byte[] start, byte[] end, int maxkeys)
byte[] scanKV(java.lang.String cluster, java.lang.String fidstr, byte[] start, byte[] end, int maxkeys, boolean fromGfsck)
com.mapr.fs.proto.Fileserver.KvstoreScanResponse scanKVGivenFid(org.apache.hadoop.fs.Path URI, com.mapr.fs.proto.Common.FidMsg kvFid, com.mapr.fs.proto.Fileserver.KvStoreKey start, com.mapr.fs.proto.Fileserver.KvStoreKey end)
int setAces(org.apache.hadoop.fs.Path path, java.util.ArrayList<com.mapr.fs.proto.Common.FileACE> aces, boolean isSet, int noinherit, int preservemodebits, boolean recursive, org.apache.hadoop.fs.Path hintAcePath)
int setAces(org.apache.hadoop.fs.Path path, java.util.ArrayList<com.mapr.fs.proto.Common.FileACE> aces, boolean isSet, int noinherit, int preservemodebits, boolean recursive, org.apache.hadoop.fs.Path hintAcePath, int symLinkHeight)
void setAces(org.apache.hadoop.fs.Path path, java.util.List<MapRFileAce> aces) - Fully replaces the ACEs of a file or directory, discarding all existing ones.
void setAces(org.apache.hadoop.fs.Path path, java.util.List<MapRFileAce> aces, boolean recursive) - Fully replaces the ACEs of a file or directory recursively, discarding all existing ones.
int setChunkSize(org.apache.hadoop.fs.Path path, long val)
int setCompression(org.apache.hadoop.fs.Path path, boolean val, java.lang.String compName)
int setDiskFlush(org.apache.hadoop.fs.Path path, boolean val)
void setOwner(org.apache.hadoop.fs.Path p, java.lang.String user, java.lang.String group)
void setOwnerFid(java.lang.String pfid, java.lang.String user, java.lang.String group)
void setPermission(org.apache.hadoop.fs.Path p, org.apache.hadoop.fs.permission.FsPermission permission)
void setQueryServiceParam(QueryServiceParam qsp) - Sets the QueryServiceParam for the default cluster.
void setQueryServiceParam(java.lang.String clusterName, QueryServiceParam qsp) - Sets the QueryServiceParam for the specified cluster.
int setSecurityPolicy(org.apache.hadoop.fs.Path path, java.lang.String securityPolicyTag, boolean recursive) - Sets the security policy tag on the file or directory specified in path, replacing all existing tags.
int setSecurityPolicy(org.apache.hadoop.fs.Path path, java.util.List<java.lang.String> securityPolicyTags, boolean recursive) - Sets one or more security policy tags for the file or directory specified in path, replacing any existing security policies.
void setTimes(org.apache.hadoop.fs.Path p, long mtime, long atime)
int setWireSecurity(org.apache.hadoop.fs.Path path, boolean val)
void setWorkingDirectory(org.apache.hadoop.fs.Path new_dir) - Sets the current working directory for the given file system.
void setXAttr(org.apache.hadoop.fs.Path path, java.lang.String name, byte[] value) - Sets or replaces an extended attribute on a file or directory.
void setXAttr(org.apache.hadoop.fs.Path path, java.lang.String name, byte[] value, java.util.EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flag) - Sets an extended attribute on a file or directory.
MapRFileStatus[] slashReaddir(java.lang.String authority)
void splitTableRegion(org.apache.hadoop.fs.Path tableURI, java.lang.String fidstr, boolean ignoreRegionTooSmallError)
boolean supportsSymlinks()
com.mapr.fs.jni.JNIFileTierStatus tierOp(int op, org.apache.hadoop.fs.Path path, boolean verbose, boolean blocking, long shaHigh, long shaLow, long uniq)
boolean truncate(org.apache.hadoop.fs.Path f, long newLength) - Truncates the file in the indicated path to the indicated size.
int unmountVolume(java.lang.String cluster, java.lang.String volName, java.lang.String mountPath, java.lang.String username, int pCid, int pCinum, int pUniq) - Removes the directory entry for volName. Expects an absolute path for mountPath.
static void validateFid(java.lang.String fid) - Deprecated.
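The security-policy and ACE methods summarized above can be exercised roughly as in the following sketch. This is an illustration, not part of the API reference: it assumes a reachable MapR cluster, the MapR client jars on the classpath, and an existing security policy named "pii-policy"; the path is likewise a placeholder, and the exact meaning of the int return codes is not documented on this page.

```java
import java.net.URI;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import com.mapr.fs.MapRFileAce;
import com.mapr.fs.MapRFileSystem;

public class SecurityPolicySketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // maprfs:/// resolves to the default cluster from the client configuration.
        FileSystem fs = FileSystem.get(URI.create("maprfs:///"), conf);
        MapRFileSystem mfs = (MapRFileSystem) fs;

        Path dir = new Path("/data/sensitive");   // placeholder path

        // Tag the whole subtree with one policy (recursive = true);
        // the returned int is a status code (see the method summary).
        int rc = mfs.addSecurityPolicy(dir, "pii-policy", true);
        System.out.println("addSecurityPolicy rc = " + rc);

        // Inspect the ACEs currently set on the directory.
        List<MapRFileAce> aces = mfs.getAces(dir);
        for (MapRFileAce ace : aces) {
            System.out.println(ace);
        }
        mfs.close();
    }
}
```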
Methods inherited from class org.apache.hadoop.fs.FileSystem
append, append, appendFile, areSymlinksEnabled, cancelDeleteOnExit, canonicalizeUri, checkPath, clearStatistics, closeAll, closeAllForUGI, completeLocalOutput, concat, copyFromLocalFile, copyFromLocalFile, copyFromLocalFile, copyFromLocalFile, copyToLocalFile, copyToLocalFile, copyToLocalFile, create, create, create, create, createDataInputStreamBuilder, createDataInputStreamBuilder, createDataOutputStreamBuilder, createFile, createMultipartUploader, createNewFile, createNonRecursive, createNonRecursive, createPathHandle, createSnapshot, createSnapshot, deleteOnExit, deleteSnapshot, enableSymlinks, exists, fixRelativePart, get, get, get, getAclStatus, getAdditionalTokenIssuers, getAllStatistics, getAllStoragePolicies, getBlockSize, getCanonicalServiceName, getCanonicalUri, getChildFileSystems, getContentSummary, getDefaultBlockSize, getDefaultPort, getDefaultReplication, getDefaultUri, getDelegationToken, getFileBlockLocations, getFileChecksum, getFileChecksum, getFileSystemClass, getFSofPath, getGlobalStorageStatistics, getInitialWorkingDirectory, getLength, getLocal, getName, getNamed, getPathHandle, getQuotaUsage, getReplication, getServerDefaults, getServerDefaults, getStatistics, getStatistics, getStoragePolicy, getStorageStatistics, getTrashRoot, getTrashRoots, getUsed, globStatus, globStatus, hasPathCapability, isDirectory, isFile, listCorruptFileBlocks, listFiles, listLocatedStatus, listLocatedStatus, listStatus, listStatus, listStatus, listStatusBatch, listStatusIterator, makeQualified, mkdirs, mkdirs, modifyAclEntries, moveFromLocalFile, moveFromLocalFile, moveToLocalFile, msync, newInstance, newInstance, newInstance, newInstanceLocal, open, open, open, openFile, openFile, openFileWithOptions, openFileWithOptions, primitiveCreate, primitiveMkdir, primitiveMkdir, printStatistics, processDeleteOnExit, removeAcl, removeAclEntries, removeDefaultAcl, rename, renameSnapshot, resolvePath, satisfyStoragePolicy, setAcl, setDefaultUri, setDefaultUri, setQuota, 
setQuotaByStorageType, setReplication, setStoragePolicy, setVerifyChecksum, setWriteChecksum, startLocalOutput, unsetStoragePolicy
-
-
-
-
Field Detail
-
MAPRFS_BASE_URI
public static final java.net.URI MAPRFS_BASE_URI
-
securityPolicyNameToIdMap_
protected com.google.common.collect.BiMap<java.lang.String,java.lang.Integer> securityPolicyNameToIdMap_
-
securityPolicyIdToNameMap_
protected com.google.common.collect.BiMap<java.lang.Integer,java.lang.String> securityPolicyIdToNameMap_
-
MaxNumFileAces
public final int MaxNumFileAces
- See Also:
- Constant Field Values
-
emptyStringArray
public static final java.lang.String[] emptyStringArray
-
-
Constructor Detail
-
MapRFileSystem
public MapRFileSystem() throws java.io.IOException- Throws:
java.io.IOException
-
MapRFileSystem
public MapRFileSystem(java.lang.String uname) throws java.io.IOException- Throws:
java.io.IOException
-
MapRFileSystem
public MapRFileSystem(java.lang.String cName, java.lang.String[] cldbLocations) throws java.io.IOException- Throws:
java.io.IOException
-
MapRFileSystem
public MapRFileSystem(java.lang.String cName, java.lang.String[] cldbLocations, java.lang.String uname) throws java.io.IOException- Throws:
java.io.IOException
-
-
Method Detail
-
CurrentUserInfo
public static com.mapr.fs.jni.MapRUserInfo CurrentUserInfo() throws java.io.IOException- Throws:
java.io.IOException
-
CurrentUserInfo
public static com.mapr.fs.jni.MapRUserInfo CurrentUserInfo(java.lang.String uname) throws java.io.IOException- Throws:
java.io.IOException
-
initialize
public void initialize(java.net.URI name, org.apache.hadoop.conf.Configuration conf) throws java.io.IOException- Overrides:
initializein classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
initialize
public void initialize(java.net.URI name, org.apache.hadoop.conf.Configuration conf, boolean isExpandAuditTool) throws java.io.IOException- Throws:
java.io.IOException
-
createPathId
public org.apache.hadoop.fs.PathId createPathId()
- Specified by:
createPathIdin classorg.apache.hadoop.maprfs.AbstractMapRFileSystem
-
forceLocalResolution
public void forceLocalResolution(java.net.URI name) throws java.io.IOException- Throws:
java.io.IOException
-
access
public void access(org.apache.hadoop.fs.Path path, org.apache.hadoop.fs.permission.FsAction mode) throws org.apache.hadoop.security.AccessControlException, java.io.FileNotFoundException, java.io.IOException- Overrides:
accessin classorg.apache.hadoop.fs.FileSystem- Throws:
org.apache.hadoop.security.AccessControlExceptionjava.io.FileNotFoundExceptionjava.io.IOException
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, java.util.EnumSet<org.apache.hadoop.fs.CreateFlag> flag, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, org.apache.hadoop.fs.Options.ChecksumOpt checksumOpt) throws java.io.IOException- Overrides:
createin classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
createNonRecursive
public org.apache.hadoop.fs.FSDataOutputStream createNonRecursive(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress) throws java.io.IOException- Overrides:
createNonRecursivein classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, int mask, org.apache.hadoop.fs.permission.FsPermission permission, boolean createIfNonExistant, boolean append, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress, boolean createParent) throws java.io.IOException- Throws:
java.io.IOException
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, boolean overwrite, int bufferSize, short replication, long blockSize) throws java.io.IOException- Overrides:
createin classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, boolean overwrite, int bufferSize, short replication, long blockSize, org.apache.hadoop.util.Progressable progress) throws java.io.IOException- Specified by:
createin classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, boolean overwrite) throws java.io.IOExceptionOpens an FSDataOutputStream at the indicated Path.- Overrides:
createin classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, org.apache.hadoop.util.Progressable progress) throws java.io.IOExceptionCreate an FSDataOutputStream at the indicated Path with write-progress reporting. Files are overwritten by default.- Overrides:
createin classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, short replication) throws java.io.IOExceptionOpens an FSDataOutputStream at the indicated Path. Files are overwritten by default.- Overrides:
createin classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, short replication, org.apache.hadoop.util.Progressable progress) throws java.io.IOExceptionOpens an FSDataOutputStream at the indicated Path with write-progress reporting. Files are overwritten by default.- Overrides:
createin classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, boolean overwrite, int bufferSize) throws java.io.IOExceptionOpens an FSDataOutputStream at the indicated Path.- Overrides:
createin classorg.apache.hadoop.fs.FileSystem- Parameters:
f- the file name to openoverwrite- if a file with this name already exists, then if true, the file will be overwritten, and if false an error will be thrown.bufferSize- the size of the buffer to be used.- Throws:
java.io.IOException
-
create
public org.apache.hadoop.fs.FSDataOutputStream create(org.apache.hadoop.fs.Path f, boolean overwrite, int bufferSize, org.apache.hadoop.util.Progressable progress) throws java.io.IOExceptionOpens an FSDataOutputStream at the indicated Path with write-progress reporting.- Overrides:
createin classorg.apache.hadoop.fs.FileSystem- Parameters:
f- the file name to openoverwrite- if a file with this name already exists, then if true, the file will be overwritten, and if false an error will be thrown.bufferSize- the size of the buffer to be used.- Throws:
java.io.IOException
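The overwrite and bufferSize semantics documented above can be sketched as follows. This is an illustrative usage sketch, not part of the class documentation: the file path and buffer size are assumptions, and a reachable MapR (or other Hadoop-compatible) filesystem configured as the default FS is required.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CreateExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Resolves to MapRFileSystem when maprfs:/// is the default filesystem.
        FileSystem fs = FileSystem.get(conf);

        Path file = new Path("/tmp/example.txt");  // hypothetical path
        // overwrite=true replaces any existing file at this path;
        // overwrite=false would make create() throw if the file exists.
        try (FSDataOutputStream out = fs.create(file, true, 64 * 1024)) {
            out.writeUTF("hello");
        }
    }
}
```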
-
truncate
public boolean truncate(org.apache.hadoop.fs.Path f, long newLength) throws java.io.IOExceptionTruncate the file in the indicated path to the indicated size.- Fails if path is a directory.
- Fails if path does not exist.
- Fails if path is not closed.
- Fails if new size is greater than current size.
- Overrides:
truncatein classorg.apache.hadoop.fs.FileSystem- Parameters:
f- The path to the file to be truncatednewLength- The size the file is to be truncated to- Returns:
trueif the file has been truncated to the desirednewLengthand is immediately available to be reused for write operations such asappend, orfalseif a background process of adjusting the length of the last block has been started, and clients should wait for it to complete before proceeding with further file updates.- Throws:
java.io.IOException
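The return-value contract of truncate() described above (false means a background length adjustment is still running) suggests a wait loop before further writes. A minimal sketch, assuming an existing, closed file at a hypothetical path and a new length no greater than the current size:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TruncateExample {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path file = new Path("/tmp/data.bin");  // hypothetical existing, closed file

        long newLength = 1024L;  // must be <= the current file size
        boolean done = fs.truncate(file, newLength);
        while (!done) {
            // Adjustment of the last block is still in progress; poll until
            // the reported length matches the requested length.
            Thread.sleep(100);
            done = fs.getFileStatus(file).getLen() == newLength;
        }
        // The file is now safe to reuse for writes such as append().
    }
}
```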
-
open
public org.apache.hadoop.fs.FSDataInputStream open(org.apache.hadoop.fs.Path f, int bufferSize) throws java.io.IOException, org.apache.hadoop.security.AccessControlException- Specified by:
openin classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOExceptionorg.apache.hadoop.security.AccessControlException
-
getUri
public java.net.URI getUri()
Returns a URI whose scheme and authority identify this FileSystem.- Specified by:
getUriin classorg.apache.hadoop.fs.FileSystem
-
getScheme
public java.lang.String getScheme()
- Overrides:
getSchemein classorg.apache.hadoop.fs.FileSystem
-
append
public org.apache.hadoop.fs.FSDataOutputStream append(org.apache.hadoop.fs.Path f, int bufferSize, org.apache.hadoop.util.Progressable progress) throws java.io.IOException, org.apache.hadoop.security.AccessControlExceptionAppend to an existing file (optional operation).- Specified by:
appendin classorg.apache.hadoop.fs.FileSystem- Parameters:
f- the existing file to be appended.bufferSize- the size of the buffer to be used.progress- for reporting progress if it is not null.- Throws:
java.io.IOExceptionorg.apache.hadoop.security.AccessControlException
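The append() variant above can be illustrated with a short sketch. The log path is an assumption; the file must already exist, and the progress callback here merely prints a marker:

```java
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendExample {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path log = new Path("/tmp/app.log");  // hypothetical, must already exist

        // Progressable is a single-method interface, so a lambda suffices.
        try (FSDataOutputStream out =
                 fs.append(log, 64 * 1024, () -> System.out.print("."))) {
            out.write("one more line\n".getBytes(StandardCharsets.UTF_8));
        }
    }
}
```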
-
getFileStatus
public org.apache.hadoop.fs.FileStatus getFileStatus(org.apache.hadoop.fs.Path f) throws java.io.IOException- Specified by:
getFileStatusin classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
rename
public boolean rename(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst) throws java.io.IOExceptionRenames Path src to Path dst. Can take place on local fs or remote DFS.- Specified by:
renamein classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
delete
public boolean delete(org.apache.hadoop.fs.Path f, boolean recursive) throws java.io.IOExceptionDelete a file.- Specified by:
deletein classorg.apache.hadoop.fs.FileSystem- Parameters:
f- the path to delete.recursive- if path is a directory and set to true, the directory is deleted else throws an exception. In case of a file the recursive can be set to either true or false.- Returns:
- true if delete is successful else false.
- Throws:
java.io.IOException
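The recursive-flag behavior documented above can be sketched as follows; both paths are assumptions for illustration:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DeleteExample {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        // Non-empty directory: recursive=false would throw an IOException.
        boolean removedDir = fs.delete(new Path("/tmp/old-output"), true);

        // Plain file: the recursive flag may be either true or false.
        boolean removedFile = fs.delete(new Path("/tmp/old.log"), false);

        System.out.println(removedDir + " " + removedFile);
    }
}
```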
-
delete
public boolean delete(org.apache.hadoop.fs.Path f) throws java.io.IOException- Overrides:
deletein classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
setWorkingDirectory
public void setWorkingDirectory(org.apache.hadoop.fs.Path new_dir)
Set the current working directory for the given file system. All relative paths will be resolved relative to it.- Specified by:
setWorkingDirectoryin classorg.apache.hadoop.fs.FileSystem- Parameters:
new_dir-
-
getWorkingDirectory
public org.apache.hadoop.fs.Path getWorkingDirectory()
Get the current working directory for the given file system.- Specified by:
getWorkingDirectoryin classorg.apache.hadoop.fs.FileSystem- Returns:
- the directory pathname
-
getDefaultBlockSize
public long getDefaultBlockSize()
- Overrides:
getDefaultBlockSizein classorg.apache.hadoop.fs.FileSystem
-
getDefaultReplication
public short getDefaultReplication()
- Overrides:
getDefaultReplicationin classorg.apache.hadoop.fs.FileSystem
-
getUserInfo
public com.mapr.fs.jni.MapRUserInfo getUserInfo()
-
populateUserInfo
public void populateUserInfo(org.apache.hadoop.fs.Path p) throws java.io.IOException- Throws:
java.io.IOException
-
populateAndGetUserInfo
public com.mapr.fs.jni.MapRUserInfo populateAndGetUserInfo(org.apache.hadoop.fs.Path p) throws java.io.IOException- Throws:
java.io.IOException
-
mkdirs
public boolean mkdirs(org.apache.hadoop.fs.Path p, org.apache.hadoop.fs.permission.FsPermission permission) throws java.io.IOExceptionMake the given file and all non-existent parents into directories. Has the semantics of Unix 'mkdir -p'. Existence of the directory hierarchy is not an error.- Specified by:
mkdirsin classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
mkdirs
public boolean mkdirs(org.apache.hadoop.fs.Path p, boolean createParent, org.apache.hadoop.fs.permission.FsPermission permission) throws java.io.IOException- Throws:
java.io.IOException
-
mkdirs
public boolean mkdirs(org.apache.hadoop.fs.Path p, org.apache.hadoop.fs.permission.FsPermission permission, boolean compress) throws java.io.IOExceptionmkdirs with compression option- Throws:
java.io.IOException
-
mkdirs
public boolean mkdirs(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, long chunkSize) throws java.io.IOExceptionmkdirs with chunksize option- Throws:
java.io.IOException
-
mkdirs
public boolean mkdirs(org.apache.hadoop.fs.Path f, org.apache.hadoop.fs.permission.FsPermission permission, boolean compress, long chunkSize) throws java.io.IOExceptionmkdirs with compression and chunksize option- Throws:
java.io.IOException
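The MapR-specific mkdirs overloads above (compression and chunk-size options) can be sketched as follows. The cast to MapRFileSystem is needed because the base FileSystem class does not declare these overloads; the paths, permission, and chunk size are illustrative assumptions:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import com.mapr.fs.MapRFileSystem;

public class MkdirsExample {
    public static void main(String[] args) throws Exception {
        MapRFileSystem fs =
            (MapRFileSystem) FileSystem.get(new Configuration());

        FsPermission perm = new FsPermission((short) 0755);

        // Plain 'mkdir -p' semantics.
        fs.mkdirs(new Path("/tmp/a/b/c"), perm);

        // Disable compression for files created under this directory.
        fs.mkdirs(new Path("/tmp/uncompressed"), perm, false);

        // Use a 64 MB chunk size for files created under this directory.
        fs.mkdirs(new Path("/tmp/bigchunks"), perm, 64L * 1024 * 1024);
    }
}
```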
-
getMapRFileStatus
public MapRFileStatus getMapRFileStatus(org.apache.hadoop.fs.Path f) throws java.io.IOException
- Throws:
java.io.IOException
-
slashReaddir
public MapRFileStatus[] slashReaddir(java.lang.String authority) throws java.io.IOException
- Throws:
java.io.IOException
-
listMapRStatus
public MapRFileStatus[] listMapRStatus(org.apache.hadoop.fs.Path f, boolean showVols, boolean showHidden) throws java.io.IOException
- Throws:
java.io.IOException
-
listStatusLite
public MapRReaddirLite listStatusLite(org.apache.hadoop.fs.Path f, int cid, int cinum, int uniq, int count, long cookie, boolean showHidden) throws java.io.IOException
Path should be a directory.- Throws:
java.io.IOException
-
listDirLite
public int listDirLite(org.apache.hadoop.fs.Path f) throws java.io.IOExceptionPath should be a directory. Called from hadoop mfs -lsf. Implemented for Bug 31193 - hadoop fs -ls takes hours to finish on millions of files. - Throws:
java.io.IOException
-
listStatus
public MapRFileStatus[] listStatus(org.apache.hadoop.fs.Path f) throws java.io.IOException
List the statuses of the files/directories in the given path if the path is a directory.- Specified by:
listStatusin classorg.apache.hadoop.fs.FileSystem- Parameters:
f- given path- Returns:
- the statuses of the files/directories in the given path
- Throws:
java.io.IOException
-
close
public void close() throws java.io.IOException- Specified by:
closein interfacejava.lang.AutoCloseable- Specified by:
closein interfacejava.io.Closeable- Overrides:
closein classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
getUsed
public long getUsed() throws java.io.IOExceptionReturn the total size of all files in the filesystem.- Overrides:
getUsedin classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
supportsSymlinks
public boolean supportsSymlinks()
- Overrides:
supportsSymlinksin classorg.apache.hadoop.fs.FileSystem
-
createSymlink
public void createSymlink(org.apache.hadoop.fs.Path target, org.apache.hadoop.fs.Path link, boolean createParent) throws org.apache.hadoop.security.AccessControlException, org.apache.hadoop.fs.FileAlreadyExistsException, java.io.FileNotFoundException, org.apache.hadoop.fs.ParentNotDirectoryException, java.io.IOException- Overrides:
createSymlinkin classorg.apache.hadoop.fs.FileSystem- Throws:
org.apache.hadoop.security.AccessControlExceptionorg.apache.hadoop.fs.FileAlreadyExistsExceptionjava.io.FileNotFoundExceptionorg.apache.hadoop.fs.ParentNotDirectoryExceptionjava.io.IOException
-
createSymlink
public int createSymlink(java.lang.String target, org.apache.hadoop.fs.Path link, boolean createParent) throws java.io.IOException- Throws:
java.io.IOException
-
removeRecursive
public int removeRecursive(org.apache.hadoop.fs.Path f) throws org.apache.hadoop.security.AccessControlException, java.io.FileNotFoundException, java.io.IOException- Throws:
org.apache.hadoop.security.AccessControlExceptionjava.io.FileNotFoundExceptionjava.io.IOException
-
createHardlink
public int createHardlink(org.apache.hadoop.fs.Path oldpath, org.apache.hadoop.fs.Path newpath) throws org.apache.hadoop.security.AccessControlException, org.apache.hadoop.fs.FileAlreadyExistsException, java.io.FileNotFoundException, org.apache.hadoop.fs.ParentNotDirectoryException, java.io.IOException- Throws:
org.apache.hadoop.security.AccessControlExceptionorg.apache.hadoop.fs.FileAlreadyExistsExceptionjava.io.FileNotFoundExceptionorg.apache.hadoop.fs.ParentNotDirectoryExceptionjava.io.IOException
-
scanDir
public MapRFileStatus[] scanDir(java.lang.String cluster, java.lang.String fidstr) throws java.io.IOException
- Throws:
java.io.IOException
-
scanKV
public byte[] scanKV(java.lang.String cluster, java.lang.String fidstr, byte[] start, byte[] end, int maxkeys) throws java.io.IOException- Throws:
java.io.IOException
-
scanKV
public byte[] scanKV(java.lang.String cluster, java.lang.String fidstr, byte[] start, byte[] end, int maxkeys, boolean fromGfsck) throws java.io.IOException- Throws:
java.io.IOException
-
scanKVGivenFid
public com.mapr.fs.proto.Fileserver.KvstoreScanResponse scanKVGivenFid(org.apache.hadoop.fs.Path URI, com.mapr.fs.proto.Common.FidMsg kvFid, com.mapr.fs.proto.Fileserver.KvStoreKey start, com.mapr.fs.proto.Fileserver.KvStoreKey end) throws java.io.IOException- Throws:
java.io.IOException
-
lookupKV
public com.mapr.fs.proto.Fileserver.KvstoreLookupResponse lookupKV(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Fileserver.KvStoreKey key) throws java.io.IOException- Throws:
java.io.IOException
-
createVolLink
public int createVolLink(java.lang.String cluster, java.lang.String volName, org.apache.hadoop.fs.Path volLink, boolean writeable, boolean isHidden) throws java.io.IOException- Throws:
java.io.IOException
-
deleteVolLink
public int deleteVolLink(java.lang.String cluster, java.lang.String volLink) throws java.io.IOException- Throws:
java.io.IOException
-
getHomeDirectory
public org.apache.hadoop.fs.Path getHomeDirectory()
- Overrides:
getHomeDirectoryin classorg.apache.hadoop.fs.FileSystem
-
makeAbsolute
public org.apache.hadoop.fs.Path makeAbsolute(org.apache.hadoop.fs.Path path)
-
getClusterNameUnchecked
public java.lang.String getClusterNameUnchecked(java.lang.String str) throws java.io.IOException- Throws:
java.io.IOException
-
getNameStr
public java.lang.String getNameStr(java.lang.String str)
-
getName
public java.lang.String getName(org.apache.hadoop.fs.Path p)
-
getMapRFileBlockLocations
public MapRBlockLocation[] getMapRFileBlockLocations(org.apache.hadoop.fs.FileStatus file, long start, long len, boolean usePrimaryFid, boolean needDiskBlocks, boolean fullBlockInfo) throws java.io.IOException
- Throws:
java.io.IOException
-
getFileBlockLocations
public org.apache.hadoop.fs.BlockLocation[] getFileBlockLocations(org.apache.hadoop.fs.FileStatus file, long start, long len) throws java.io.IOException- Overrides:
getFileBlockLocationsin classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
setOwner
public void setOwner(org.apache.hadoop.fs.Path p, java.lang.String user, java.lang.String group) throws java.io.IOException- Overrides:
setOwnerin classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
setOwnerFid
public void setOwnerFid(java.lang.String pfid, java.lang.String user, java.lang.String group) throws java.io.IOException- Specified by:
setOwnerFidin interfaceorg.apache.hadoop.maprfs.Fid- Overrides:
setOwnerFidin classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
setTimes
public void setTimes(org.apache.hadoop.fs.Path p, long mtime, long atime) throws java.io.IOException- Overrides:
setTimesin classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
setPermission
public void setPermission(org.apache.hadoop.fs.Path p, org.apache.hadoop.fs.permission.FsPermission permission) throws java.io.IOException- Overrides:
setPermissionin classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
getFileLinkStatus
public org.apache.hadoop.fs.FileStatus getFileLinkStatus(org.apache.hadoop.fs.Path f) throws org.apache.hadoop.security.AccessControlException, java.io.FileNotFoundException, org.apache.hadoop.fs.UnsupportedFileSystemException, java.io.IOException- Overrides:
getFileLinkStatusin classorg.apache.hadoop.fs.FileSystem- Throws:
org.apache.hadoop.security.AccessControlExceptionjava.io.FileNotFoundExceptionorg.apache.hadoop.fs.UnsupportedFileSystemExceptionjava.io.IOException
-
getLinkTarget
public org.apache.hadoop.fs.Path getLinkTarget(org.apache.hadoop.fs.Path f) throws java.io.IOException- Overrides:
getLinkTargetin classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
resolveLink
protected org.apache.hadoop.fs.Path resolveLink(org.apache.hadoop.fs.Path f) throws java.io.IOException- Overrides:
resolveLinkin classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
getStatus
public org.apache.hadoop.fs.FsStatus getStatus() throws java.io.IOException- Overrides:
getStatusin classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
getStatus
public org.apache.hadoop.fs.FsStatus getStatus(org.apache.hadoop.fs.Path p) throws java.io.IOException- Overrides:
getStatusin classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
setXAttr
public void setXAttr(org.apache.hadoop.fs.Path path, java.lang.String name, byte[] value, java.util.EnumSet<org.apache.hadoop.fs.XAttrSetFlag> flag) throws java.io.IOExceptionSet an extended attribute on a file or directory.- Overrides:
setXAttrin classorg.apache.hadoop.fs.FileSystem- Parameters:
path- The path to the file or directory.name- The extended attribute name. This must be prefixed with the namespace followed by ".". For example, "user.attr".value- The extended attribute value.flag- The value must be CREATE, to create a new extended attribute, or REPLACE, to replace an existing extended attribute. When creating a new extended attribute, an extended attribute with the given name must not already exist and when replacing, an extended attribute with the given name must already exist; else, an error is returned. - Throws:
java.io.IOException
-
setXAttr
public void setXAttr(org.apache.hadoop.fs.Path path, java.lang.String name, byte[] value) throws java.io.IOExceptionSet or replace an extended attribute on a file or directory.- Overrides:
setXAttrin classorg.apache.hadoop.fs.FileSystem- Parameters:
path- The path to the file or directory.name- The extended attribute name. This must be prefixed with the namespace followed by ".". For example, "user.attr".value- The extended attribute value.- Throws:
java.io.IOException
-
getXAttr
public byte[] getXAttr(org.apache.hadoop.fs.Path path, java.lang.String name) throws java.io.IOExceptionGet the extended attribute associated with the given name on the given file or directory.- Overrides:
getXAttrin classorg.apache.hadoop.fs.FileSystem- Parameters:
path- The path to the file or directory.name- The name of the extended attribute (to retrieve). The name must be prefixed with the namespace followed by ".". For example, "user.attr".- Throws:
java.io.IOException
-
getXAttrs
public java.util.Map<java.lang.String,byte[]> getXAttrs(org.apache.hadoop.fs.Path path) throws java.io.IOExceptionGet all the extended attribute name/value pairs associated with the given file or directory. Only those extended attributes which the logged-in user has access to are returned.- Overrides:
getXAttrsin classorg.apache.hadoop.fs.FileSystem- Parameters:
path- The path to the file or directory.- Throws:
java.io.IOException
-
getXAttrs
public java.util.Map<java.lang.String,byte[]> getXAttrs(org.apache.hadoop.fs.Path path, java.util.List<java.lang.String> names) throws java.io.IOExceptionGet the extended attributes associated with the given list of names on the given file or directory. Only those extended attributes which the logged-in user has access to are returned.- Overrides:
getXAttrsin classorg.apache.hadoop.fs.FileSystem- Parameters:
path- The path to the file or directory.names- The names of the extended attributes (to retrieve).- Throws:
java.io.IOException
-
listXAttrs
public java.util.List<java.lang.String> listXAttrs(org.apache.hadoop.fs.Path path) throws java.io.IOExceptionGet all the extended attribute names associated with the given file or directory.- Overrides:
listXAttrsin classorg.apache.hadoop.fs.FileSystem- Parameters:
path- The path to the file or directory.- Throws:
java.io.IOException
-
removeXAttr
public void removeXAttr(org.apache.hadoop.fs.Path path, java.lang.String name) throws java.io.IOExceptionRemove the extended attribute (specified by name) associated with the given file or directory.- Overrides:
removeXAttrin classorg.apache.hadoop.fs.FileSystem- Parameters:
path- The path to the file or directory.name- The name of the extended attribute (to remove). The name must be prefixed with the namespace followed by ".". For example, "user.attr".- Throws:
java.io.IOException
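The extended-attribute methods above form a set/get/list/remove round trip. A minimal sketch: the "user."-prefixed name follows the required namespace convention, while the path and value are assumptions for illustration.

```java
import java.nio.charset.StandardCharsets;
import java.util.EnumSet;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.XAttrSetFlag;

public class XAttrExample {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path file = new Path("/tmp/tagged.dat");  // hypothetical path

        // CREATE fails if the attribute already exists; REPLACE fails if it
        // does not. Passing both flags gives set-or-replace behavior.
        fs.setXAttr(file, "user.origin",
                    "etl-job-42".getBytes(StandardCharsets.UTF_8),
                    EnumSet.of(XAttrSetFlag.CREATE));

        byte[] value = fs.getXAttr(file, "user.origin");
        System.out.println(new String(value, StandardCharsets.UTF_8));

        for (String name : fs.listXAttrs(file)) {
            System.out.println(name);
        }

        fs.removeXAttr(file, "user.origin");
    }
}
```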
-
executeCommand
public <ReturnType> ReturnType executeCommand(com.mapr.fs.FSCommandHandler handler, com.mapr.fs.FSCommandHandler.ICommandExecutor<ReturnType> executor) throws java.io.IOException- Throws:
java.io.IOException
-
setCompression
public int setCompression(org.apache.hadoop.fs.Path path, boolean val, java.lang.String compName) throws java.io.IOException- Throws:
java.io.IOException
-
tierOp
public com.mapr.fs.jni.JNIFileTierStatus tierOp(int op, org.apache.hadoop.fs.Path path, boolean verbose, boolean blocking, long shaHigh, long shaLow, long uniq) throws java.io.IOException- Throws:
java.io.IOException
-
getStat
public MapRFileStatus getStat(org.apache.hadoop.fs.Path path) throws java.io.IOException
- Throws:
java.io.IOException
-
getFileCount
public com.mapr.fs.jni.MapRFileCount getFileCount(org.apache.hadoop.fs.Path f) throws java.io.IOException- Throws:
java.io.IOException
-
modifyAudit
public int modifyAudit(org.apache.hadoop.fs.Path path, boolean val) throws java.io.IOException- Throws:
java.io.IOException
-
setWireSecurity
public int setWireSecurity(org.apache.hadoop.fs.Path path, boolean val) throws java.io.IOException- Throws:
java.io.IOException
-
setDiskFlush
public int setDiskFlush(org.apache.hadoop.fs.Path path, boolean val) throws java.io.IOException- Throws:
java.io.IOException
-
setAces
public void setAces(org.apache.hadoop.fs.Path path, java.util.List<MapRFileAce> aces) throws java.io.IOExceptionThis method fully replaces the ACEs of a file or directory, discarding all existing ones.- Parameters:
path- path to set ACEsaces- list of file ACEs- Throws:
java.io.IOException- if an ACE could not be replaced
-
setAces
public void setAces(org.apache.hadoop.fs.Path path, java.util.List<MapRFileAce> aces, boolean recursive) throws java.io.IOExceptionThis method fully replaces the ACEs of a file or directory recursively, discarding all existing ones.- Parameters:
path- path to set ACEs (recursively)aces- list of file ACEsrecursive- whether to apply the ACEs recursively- Throws:
java.io.IOException- if an ACE could not be replaced
-
modifyAces
public void modifyAces(org.apache.hadoop.fs.Path path, java.util.List<MapRFileAce> aces) throws java.io.IOExceptionThis method modifies the ACEs of a file or directory. It can add new ACEs or replace existing ones. All existing ACEs that are unspecified in the current call are retained without changes.- Parameters:
path- path to set ACEsaces- list of file ACEs- Throws:
java.io.IOException- if an ACE could not be modified
-
deleteAces
public void deleteAces(org.apache.hadoop.fs.Path path) throws java.io.IOExceptionThis method deletes all ACEs of a file or directory.- Parameters:
path- path to delete ACEs- Throws:
java.io.IOException- if the ACEs could not be deleted
-
deleteAces
public void deleteAces(org.apache.hadoop.fs.Path path, boolean recursive) throws java.io.IOExceptionThis method deletes all ACEs of a file or directory recursively.- Parameters:
path- path to delete ACEs (recursively)recursive- whether to delete the ACEs recursively- Throws:
java.io.IOException- if the ACEs could not be deleted
-
getAces
public java.util.List<MapRFileAce> getAces(org.apache.hadoop.fs.Path path) throws java.io.IOException
This method gets the ACEs of a file or directory.- Parameters:
path- path to get ACEs- Returns:
- list of aces of a file or directory
- Throws:
java.io.IOException- if an ACE could not be read
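The ACE methods above differ in replacement semantics: setAces() discards all existing ACEs, modifyAces() merges with them, and deleteAces() clears them. A hedged sketch follows; the MapRFileAce access type and boolean expression shown here are illustrative assumptions, and the exact expression syntax is defined by the MapR file-ACE documentation.

```java
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import com.mapr.fs.MapRFileAce;
import com.mapr.fs.MapRFileSystem;

public class AceExample {
    public static void main(String[] args) throws Exception {
        MapRFileSystem fs = (MapRFileSystem) FileSystem.get(new Configuration());
        Path file = new Path("/tmp/protected.dat");  // hypothetical path

        List<MapRFileAce> aces = new ArrayList<>();
        MapRFileAce readAce = new MapRFileAce(MapRFileAce.AccessType.READFILE);
        readAce.setBooleanExpression("u:alice | g:analysts");  // assumed expression
        aces.add(readAce);

        fs.setAces(file, aces);     // replaces every existing ACE on the file
        fs.modifyAces(file, aces);  // merges; unspecified ACEs are retained

        List<MapRFileAce> current = fs.getAces(file);
        System.out.println(current.size());

        fs.deleteAces(file);        // removes all ACEs from the file
    }
}
```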
-
addAceEntryError
public void addAceEntryError(java.util.ArrayList<FileAceEntry> acesList, org.apache.hadoop.fs.Path path, int err)
-
getAces
public java.util.ArrayList<FileAceEntry> getAces(org.apache.hadoop.fs.Path path, boolean recursive) throws java.io.IOException
- Throws:
java.io.IOException
-
getAces
public java.util.ArrayList<FileAceEntry> getAces(org.apache.hadoop.fs.Path path, boolean recursive, int symLinkHeight) throws java.io.IOException
- Throws:
java.io.IOException
-
getAces
public java.util.ArrayList<FileAceEntry> getAces(org.apache.hadoop.fs.Path path, boolean recursive, int symLinkHeight, int serveridx) throws java.io.IOException
- Throws:
java.io.IOException
-
copyAce
public int copyAce(org.apache.hadoop.fs.Path src, org.apache.hadoop.fs.Path dst) throws java.io.IOExceptionThis method copies ACEs on source to destination.- Specified by:
copyAcein classorg.apache.hadoop.maprfs.AbstractMapRFileSystem- Parameters:
src- path of sourcedst- path of destination- Throws:
java.io.IOException- if an ACE could not be read/modified
-
delAces
public int delAces(org.apache.hadoop.fs.Path path, boolean recursive) throws java.io.IOException- Throws:
java.io.IOException
-
delAces
public int delAces(org.apache.hadoop.fs.Path path, boolean recursive, int symLinkHeight) throws java.io.IOException- Throws:
java.io.IOException
-
setAces
public int setAces(org.apache.hadoop.fs.Path path, java.util.ArrayList<com.mapr.fs.proto.Common.FileACE> aces, boolean isSet, int noinherit, int preservemodebits, boolean recursive, org.apache.hadoop.fs.Path hintAcePath) throws java.io.IOException- Throws:
java.io.IOException
-
setAces
public int setAces(org.apache.hadoop.fs.Path path, java.util.ArrayList<com.mapr.fs.proto.Common.FileACE> aces, boolean isSet, int noinherit, int preservemodebits, boolean recursive, org.apache.hadoop.fs.Path hintAcePath, int symLinkHeight) throws java.io.IOException- Throws:
java.io.IOException
-
setChunkSize
public int setChunkSize(org.apache.hadoop.fs.Path path, long val) throws java.io.IOException- Throws:
java.io.IOException
-
getCidFromPath
public int getCidFromPath(org.apache.hadoop.fs.Path path)
-
mountVolume
public int mountVolume(java.lang.String cluster, java.lang.String volName, java.lang.String mountPath, java.lang.String username)Creates a directory entry for volName at mountPath. Expects an absolute path for mountPath.
-
unmountVolume
public int unmountVolume(java.lang.String cluster, java.lang.String volName, java.lang.String mountPath, java.lang.String username, int pCid, int pCinum, int pUniq)Removes the directory entry for volName. Expects an absolute path for mountPath.
-
getVolumeName
public java.lang.String getVolumeName(int volId) throws java.io.IOExceptionThis method gets the volume name for a given volume id.- Parameters:
volId- volume id- Returns:
- volume name
- Throws:
java.io.IOException- if input volume id is invalid.
-
getVolumeNameCached
public java.lang.String getVolumeNameCached(int volId) throws java.io.IOExceptionThis method gets the volume name for a given volume id.- Parameters:
volId- volume id- Returns:
- volume name
- Throws:
java.io.IOException- if input volume id is invalid.
-
getMountPathFid
public java.lang.String getMountPathFid(java.lang.String fidStr) throws java.io.IOException- Throws:
java.io.IOException
-
getMountPathFidCached
public java.lang.String getMountPathFidCached(java.lang.String fidStr) throws java.io.IOException- Throws:
java.io.IOException
-
getMountPath
public java.lang.String getMountPath(java.lang.String cluster, java.lang.String username, int pCid, int pCinum, int pUniq, boolean useCache)
-
getMountPath
public java.lang.String getMountPath(java.lang.String cluster, java.lang.String username, int pCid, int pCinum, int pUniq)
-
createSnapshot
public int createSnapshot(java.lang.String cluster, java.lang.String volumeName, int volId, int rootCid, java.lang.String snapshotName, boolean mirrorSnapshot, long expirationTime, java.lang.String username)
-
getQueryServiceParam
public QueryServiceParam getQueryServiceParam() throws java.io.IOException, java.lang.InterruptedException
Returns the QueryServiceParam from the default cluster.- Throws:
java.io.IOException
java.lang.InterruptedException
-
getQueryServiceParam
public QueryServiceParam getQueryServiceParam(java.lang.String clusterName) throws java.io.IOException, java.lang.InterruptedException
Returns the QueryServiceParam from the specified cluster.- Throws:
java.io.IOException
java.lang.InterruptedException
-
setQueryServiceParam
public void setQueryServiceParam(QueryServiceParam qsp) throws java.io.IOException, java.lang.InterruptedException
Sets the QueryServiceParam for the default cluster.- Throws:
java.io.IOException
java.lang.InterruptedException
-
setQueryServiceParam
public void setQueryServiceParam(java.lang.String clusterName, QueryServiceParam qsp) throws java.io.IOException, java.lang.InterruptedException
Sets the QueryServiceParam for the specified cluster.- Throws:
java.io.IOException
java.lang.InterruptedException
-
clearQueryServiceParam
public void clearQueryServiceParam() throws java.io.IOException, java.lang.InterruptedException
Clears the QueryServiceParam for the default cluster.- Throws:
java.io.IOException
java.lang.InterruptedException
-
clearQueryServiceParam
public void clearQueryServiceParam(java.lang.String clusterName) throws java.io.IOException, java.lang.InterruptedException
Clears the QueryServiceParam for the specified cluster.- Throws:
java.io.IOException
java.lang.InterruptedException
-
getZkConnectString
public java.lang.String getZkConnectString() throws java.io.IOException- Overrides:
getZkConnectStringin classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
getJobTrackerAddrs
public java.net.InetSocketAddress[] getJobTrackerAddrs(org.apache.hadoop.conf.Configuration conf) throws java.io.IOException- Overrides:
getJobTrackerAddrsin classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
getClusterConf
public static ClusterConf getClusterConf()
-
getTrimmedStrings
public static java.lang.String[] getTrimmedStrings(java.lang.String str)
Splits a comma-separated value String, trimming leading and trailing whitespace on each value.- Parameters:
str- a comma-separated String with values- Returns:
- an array of String values
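The documented contract can be sketched in plain Java. TrimmedStringsSketch below is a hypothetical stand-in for illustration, not the MapR implementation (which may differ, e.g. in how it treats null or empty input):

```java
import java.util.Arrays;

// Hypothetical stand-in illustrating the documented contract of
// getTrimmedStrings: split on commas, trim each value.
public class TrimmedStringsSketch {
    public static String[] getTrimmedStrings(String str) {
        if (str == null || str.trim().isEmpty()) {
            return new String[0];              // no values to return
        }
        String[] parts = str.split(",");
        for (int i = 0; i < parts.length; i++) {
            parts[i] = parts[i].trim();        // drop surrounding whitespace
        }
        return parts;
    }

    public static void main(String[] args) {
        // " a , b ,c " splits into three trimmed values
        System.out.println(Arrays.toString(getTrimmedStrings(" a , b ,c ")));
    }
}
```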
-
fidToString
@Deprecated public static java.lang.String fidToString(com.mapr.fs.proto.Common.FidMsg fid)
Deprecated. Converts a FidMsg to its String form
-
isFidString
@Deprecated public static boolean isFidString(java.lang.String fid)
Deprecated.
-
validateFid
@Deprecated public static void validateFid(java.lang.String fid) throws java.io.IOException
Deprecated.- Throws:
java.io.IOException
-
createSymbolicLink
public void createSymbolicLink(org.apache.hadoop.fs.Path target, org.apache.hadoop.fs.Path link) throws java.io.IOException- Throws:
java.io.IOException
-
openTable
public Inode openTable(org.apache.hadoop.fs.Path tableURI, MapRHTable htable) throws java.io.IOException
- Throws:
java.io.IOException
-
openTableWithFid
public Inode openTableWithFid(org.apache.hadoop.fs.Path priTableURI, java.lang.String indexFid, MapRHTable htable) throws java.io.IOException
- Throws:
java.io.IOException
-
getSecurityPolicyNameOrId
public java.lang.String getSecurityPolicyNameOrId(int spId) throws java.io.IOException
This method gets the security policy name for a given security policy id. Results are cached.- Parameters:
spId- security policy id- Returns:
- security policy name
- Throws:
java.io.IOException
-
getSecurityPolicyName
public java.lang.String getSecurityPolicyName(int policyID) throws java.io.IOException
This method gets the security policy name for a given security policy id. Results are cached.- Parameters:
policyID- security policy id- Returns:
- security policy name
- Throws:
java.io.IOException
-
getSecurityPolicyIds
public java.util.List<java.lang.Integer> getSecurityPolicyIds(java.util.List<java.lang.String> securityPolicyTagList) throws java.io.IOException
This method gets the security policy ids for the given security policy names from the policy cache maintained by the MapR client in JNI. Results are not cached. This method should be called only for admin/non-datapath operations.- Parameters:
securityPolicyTagList- security policy names- Returns:
- security policy Ids
- Throws:
java.io.IOException
-
getSecurityPolicyId
public int getSecurityPolicyId(java.lang.String policyName) throws java.io.IOException
This method gets the security policy id for a single security policy name. Results are cached.- Parameters:
policyName- security policy name- Returns:
- security policy id
- Throws:
java.io.IOException
-
getClusterSecurityPolicies
public java.util.Map<java.lang.String,java.lang.Integer> getClusterSecurityPolicies() throws java.io.IOException- Throws:
java.io.IOException
-
createTable
public java.lang.String createTable(org.apache.hadoop.fs.Path tableURI, java.lang.String user, com.mapr.fs.proto.Dbserver.TableAttr attr, com.mapr.fs.proto.Dbserver.TableAces aces, byte[][] splitKeys, boolean needServerInfo, int auditValue) throws java.io.IOException- Throws:
java.io.IOException
-
createTable
public java.lang.String createTable(org.apache.hadoop.fs.Path tableURI, java.lang.String user, com.mapr.fs.proto.Dbserver.TableAttr attr, com.mapr.fs.proto.Dbserver.TableAces aces, byte[][] splitKeys, boolean needServerInfo, int auditValue, java.util.List<java.lang.String> securityPolicyTagList) throws java.io.IOException- Throws:
java.io.IOException
-
createTable
public void createTable(org.apache.hadoop.fs.Path tableURI, java.lang.String user, com.mapr.fs.proto.Dbserver.TableAttr attr, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, int auditValue) throws java.io.IOException- Throws:
java.io.IOException
-
createTable
public void createTable(org.apache.hadoop.fs.Path tableURI, java.lang.String user, com.mapr.fs.proto.Dbserver.TableAttr attr, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, int auditValue, java.util.List<java.lang.String> securityPolicyTagList) throws java.io.IOException- Throws:
java.io.IOException
-
createTable
public void createTable(org.apache.hadoop.fs.Path tableURI, java.lang.String user, com.mapr.fs.proto.Dbserver.TableAttr attr, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, byte[][] splitKeys, int auditValue, java.util.List<java.lang.String> securityPolicyTagList) throws java.io.IOException- Throws:
java.io.IOException
-
createTable
public void createTable(org.apache.hadoop.fs.Path tableURI, java.lang.String user, com.mapr.fs.proto.Dbserver.TableAttr attr, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, byte[][] splitKeys, int auditValue) throws java.io.IOException- Throws:
java.io.IOException
-
createTable
public void createTable(org.apache.hadoop.fs.Path tableURI, java.lang.String user) throws java.io.IOException- Throws:
java.io.IOException
-
createTable
public void createTable(org.apache.hadoop.fs.Path tableURI) throws java.io.IOException- Throws:
java.io.IOException
-
createTable
public void createTable(org.apache.hadoop.fs.Path tableURI, java.lang.String user, byte[][] splitKeys) throws java.io.IOException- Throws:
java.io.IOException
-
createTable
public void createTable(org.apache.hadoop.fs.Path tableURI, byte[][] splitKeys, boolean isBulkLoad, boolean isJson) throws java.io.IOException- Throws:
java.io.IOException
-
createTable
public void createTable(org.apache.hadoop.fs.Path tableURI, byte[][] splitKeys, boolean isBulkLoad, boolean isJson, boolean insertOrder) throws java.io.IOException- Throws:
java.io.IOException
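As an illustration of one of the simpler overloads above, a hedged sketch (the path and split keys are made up; requires a live cluster and an initialized MapRFileSystem instance fs):

```java
// Sketch only: create a pre-split JSON table with three initial regions.
byte[][] splitKeys = { "g".getBytes(), "r".getBytes() };
fs.createTable(new org.apache.hadoop.fs.Path("/user/alice/jtable"),
               splitKeys,
               false,   // isBulkLoad: table is immediately available for normal writes
               true);   // isJson: create a JSON table rather than a binary one
```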
-
getTablets
public com.mapr.fs.proto.Dbserver.TabletLookupResponse getTablets(org.apache.hadoop.fs.Path tableURI, java.lang.String indexFid, byte[] stKey, byte[] endKey, boolean needSpaceUsage) throws java.io.IOException- Throws:
java.io.IOException
-
getTablets
public com.mapr.fs.proto.Dbserver.TabletLookupResponse getTablets(org.apache.hadoop.fs.Path tableURI, java.lang.String indexFid, byte[] stKey, byte[] endKey, boolean needSpaceUsage, boolean prefetchTabletMap) throws java.io.IOException- Throws:
java.io.IOException
-
createColumnFamily
public void createColumnFamily(org.apache.hadoop.fs.Path tableURI, java.lang.String name, com.mapr.fs.proto.Dbserver.ColumnFamilyAttr cfAttr) throws java.io.IOException- Throws:
java.io.IOException
-
createColumnFamily
public void createColumnFamily(org.apache.hadoop.fs.Path tableURI, java.lang.String name, com.mapr.fs.proto.Dbserver.ColumnFamilyAttr cfAttr, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, java.util.List<java.lang.String> securityPolicyTagList) throws java.io.IOException- Throws:
java.io.IOException
-
createColumnFamily
public void createColumnFamily(org.apache.hadoop.fs.Path tableURI, java.lang.String name, com.mapr.fs.proto.Dbserver.ColumnFamilyAttr cfAttr, java.util.List<java.lang.String> securityPolicyTagList) throws java.io.IOException- Throws:
java.io.IOException
-
modifyColumnFamily
public void modifyColumnFamily(org.apache.hadoop.fs.Path tableURI, java.lang.String name, com.mapr.fs.proto.Dbserver.ColumnFamilyAttr cfAttr, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm) throws java.io.IOException- Throws:
java.io.IOException
-
modifyColumnFamily
public void modifyColumnFamily(org.apache.hadoop.fs.Path tableURI, java.lang.String name, com.mapr.fs.proto.Dbserver.ColumnFamilyAttr cfAttr, com.mapr.fs.proto.Dbserver.SecurityPolicyOperation securityPolicyOp, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm) throws java.io.IOException- Throws:
java.io.IOException
-
modifyColumnFamily
public void modifyColumnFamily(org.apache.hadoop.fs.Path tableURI, java.lang.String name, com.mapr.fs.proto.Dbserver.ColumnFamilyAttr cfAttr) throws java.io.IOException- Throws:
java.io.IOException
-
modifyColumnFamily
public void modifyColumnFamily(org.apache.hadoop.fs.Path tableURI, java.lang.String name, com.mapr.fs.proto.Dbserver.SecurityPolicyOperation securityPolicyOp, com.mapr.fs.proto.Dbserver.ColumnFamilyAttr cfAttr) throws java.io.IOException- Throws:
java.io.IOException
-
deleteColumnFamily
public void deleteColumnFamily(org.apache.hadoop.fs.Path tableURI, java.lang.String name) throws java.io.IOException- Throws:
java.io.IOException
-
listColumnFamily
public java.util.List<com.mapr.fs.proto.Dbserver.ColumnFamilyAttr> listColumnFamily(org.apache.hadoop.fs.Path tableURI, boolean wantAces) throws java.io.IOException- Throws:
java.io.IOException
-
listColumnFamily
public java.util.List<com.mapr.fs.proto.Dbserver.ColumnFamilyAttr> listColumnFamily(org.apache.hadoop.fs.Path tableURI, boolean wantAces, boolean useCached) throws java.io.IOException- Throws:
java.io.IOException
-
modifyDbSecurityPolicy
public void modifyDbSecurityPolicy(org.apache.hadoop.fs.Path tableURI, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, java.util.List<java.lang.String> securityPolicyTagList, com.mapr.fs.proto.Dbserver.SecurityPolicyOperation op) throws java.io.IOException- Throws:
java.io.IOException
-
modifyDbCfSecurityPolicy
public void modifyDbCfSecurityPolicy(org.apache.hadoop.fs.Path tableURI, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, java.util.List<java.lang.String> securityPolicyTagList, com.mapr.fs.proto.Dbserver.SecurityPolicyOperation op, java.lang.String cfname) throws java.io.IOException- Throws:
java.io.IOException
-
modifyTableAttr
public void modifyTableAttr(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableAttr am, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm) throws java.io.IOException- Throws:
java.io.IOException
-
modifyTableAttr
public void modifyTableAttr(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableAttr am, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, boolean genUuid) throws java.io.IOException- Throws:
java.io.IOException
-
modifyTableAttr
public void modifyTableAttr(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableAttr am, com.mapr.baseutils.utils.AceHelper.DBPermission dbperm, boolean genUuid, com.mapr.fs.proto.Dbserver.SecurityPolicyOperation securityPolicyOp) throws java.io.IOException- Throws:
java.io.IOException
-
modifyTableAttr
public void modifyTableAttr(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableAttr am, com.mapr.fs.proto.Dbserver.TableAces aces) throws java.io.IOException- Throws:
java.io.IOException
-
modifyTableAttr
public void modifyTableAttr(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableAttr am, com.mapr.fs.proto.Dbserver.TableAces aces, boolean genUuid, com.mapr.fs.proto.Dbserver.SecurityPolicyOperation securityPolicyOp) throws java.io.IOException- Throws:
java.io.IOException
-
getTableBasicAttrs
public TableBasicAttrs getTableBasicAttrs(org.apache.hadoop.fs.Path tableURI) throws java.io.IOException
- Throws:
java.io.IOException
-
getTableProperties
public TableProperties getTableProperties(org.apache.hadoop.fs.Path tableURI) throws java.io.IOException
- Throws:
java.io.IOException
-
splitTableRegion
public void splitTableRegion(org.apache.hadoop.fs.Path tableURI, java.lang.String fidstr, boolean ignoreRegionTooSmallError) throws java.io.IOException- Throws:
java.io.IOException
-
packTableRegion
public void packTableRegion(org.apache.hadoop.fs.Path tableURI, java.lang.String fidstr, int ctype) throws java.io.IOException- Throws:
java.io.IOException
-
mergeTableRegion
public void mergeTableRegion(org.apache.hadoop.fs.Path tableURI, java.lang.String fidstr) throws java.io.IOException- Throws:
java.io.IOException
-
getTabletScanner
public MapRTabletScanner getTabletScanner(org.apache.hadoop.fs.Path tableURI, java.lang.String indexFid) throws java.io.IOException
- Throws:
java.io.IOException
-
getTabletScanner
public MapRTabletScanner getTabletScanner(org.apache.hadoop.fs.Path tableURI, java.lang.String indexFid, boolean needSpaceUsage, boolean prefetchTabletMap) throws java.io.IOException
- Throws:
java.io.IOException
-
getContainerInfo
public byte[] getContainerInfo(org.apache.hadoop.fs.Path path, java.util.List<java.lang.Integer> cidList) throws java.io.IOException- Throws:
java.io.IOException
-
getTabletScanner
public MapRTabletScanner getTabletScanner(org.apache.hadoop.fs.Path tableURI, java.lang.String indexFid, byte[] startKey) throws java.io.IOException
- Throws:
java.io.IOException
-
getTabletScanner
public MapRTabletScanner getTabletScanner(org.apache.hadoop.fs.Path tableURI, java.lang.String indexFid, byte[] startKey, byte[] endKey, boolean needSpaceUsage, boolean prefetchTabletMap) throws java.io.IOException
- Throws:
java.io.IOException
-
getTabletStat
public com.mapr.fs.proto.Dbserver.TabletStatResponse getTabletStat(org.apache.hadoop.fs.Path tablePath, com.mapr.fs.proto.Common.FidMsg tabletFid) throws java.io.IOException- Throws:
java.io.IOException
-
addTableReplica
public void addTableReplica(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableReplicaDesc desc, com.mapr.fs.proto.Dbserver.TableReplAutoSetupInfo ainfo) throws java.io.IOException, java.lang.UnsupportedOperationException- Throws:
java.io.IOException
java.lang.UnsupportedOperationException
-
editTableReplica
public void editTableReplica(org.apache.hadoop.fs.Path tableURI, java.lang.String clusterName, java.lang.String replicaPath, boolean allowAllCfs, com.mapr.fs.proto.Dbserver.TableReplicaDesc desc) throws java.io.IOException- Throws:
java.io.IOException
-
editTableReplica
public void editTableReplica(org.apache.hadoop.fs.Path tableURI, java.lang.String clusterName, java.lang.String replicaPath, java.lang.String topicName, boolean allowAllCfs, com.mapr.fs.proto.Dbserver.TableReplicaDesc desc) throws java.io.IOException- Throws:
java.io.IOException
-
getTableStats
public com.mapr.fs.proto.Dbserver.TableBasicStats getTableStats(org.apache.hadoop.fs.Path tableURI, java.lang.String indexFid) throws java.io.IOException- Throws:
java.io.IOException
-
getScanRangeStats
public com.mapr.fs.proto.Dbserver.TableBasicStats getScanRangeStats(org.apache.hadoop.fs.Path tableURI, java.lang.String indexFid, byte[] stKey, byte[] endKey) throws java.io.IOException- Throws:
java.io.IOException
-
listTableIndexes
public com.mapr.fs.proto.Dbserver.TableReplicaListResponse listTableIndexes(org.apache.hadoop.fs.Path tableURI, boolean wantStats, boolean skipFieldsReadPermCheck, boolean refreshNow) throws java.io.IOException- Throws:
java.io.IOException
-
listTableReplicas
public com.mapr.fs.proto.Dbserver.TableReplicaListResponse listTableReplicas(org.apache.hadoop.fs.Path tableURI, boolean wantStats, boolean refreshNow, boolean getCompactInfo) throws java.io.IOException- Throws:
java.io.IOException
-
removeTableReplica
public void removeTableReplica(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableReplicaDesc desc) throws java.io.IOException- Throws:
java.io.IOException
-
addTableUpstream
public void addTableUpstream(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableUpstreamDesc desc) throws java.io.IOException- Throws:
java.io.IOException
-
listTableUpstreams
public com.mapr.fs.proto.Dbserver.TableUpstreamListResponse listTableUpstreams(org.apache.hadoop.fs.Path tableURI) throws java.io.IOException- Throws:
java.io.IOException
-
removeTableUpstream
public void removeTableUpstream(org.apache.hadoop.fs.Path tableURI, com.mapr.fs.proto.Dbserver.TableUpstreamDesc desc) throws java.io.IOException- Throws:
java.io.IOException
-
getServerForCid
public java.lang.String getServerForCid(int cid, java.lang.String cluster) throws java.io.IOException- Throws:
java.io.IOException
-
getDefaultClusterName
public java.lang.String getDefaultClusterName()
-
getClusterName
public java.lang.String getClusterName(java.net.URI p) throws java.io.IOException- Throws:
java.io.IOException
-
getServerForCid
public java.lang.String getServerForCid(int cid) throws java.io.IOException- Throws:
java.io.IOException
-
openFid2
public org.apache.hadoop.fs.FSDataInputStream openFid2(org.apache.hadoop.fs.PathId pfid, java.lang.String file, int readAheadBytesHint) throws java.io.IOException- Specified by:
openFid2in interfaceorg.apache.hadoop.maprfs.Fid- Overrides:
openFid2in classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
openFid
public org.apache.hadoop.fs.FSDataInputStream openFid(java.lang.String fid, long[] ips, long chunkSize, long fileSize) throws java.io.IOException- Specified by:
openFidin interfaceorg.apache.hadoop.maprfs.Fid- Overrides:
openFidin classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
openFid
public org.apache.hadoop.fs.FSDataInputStream openFid(java.lang.String pfid, java.lang.String file, long[] ips) throws java.io.IOException- Specified by:
openFidin interfaceorg.apache.hadoop.maprfs.Fid- Overrides:
openFidin classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
createFid
public org.apache.hadoop.fs.FSDataOutputStream createFid(java.lang.String pfid, java.lang.String file) throws java.io.IOException- Specified by:
createFidin interfaceorg.apache.hadoop.maprfs.Fid- Overrides:
createFidin classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
createFid
public org.apache.hadoop.fs.FSDataOutputStream createFid(java.lang.String pfid, java.lang.String file, boolean overwrite) throws java.io.IOException- Specified by:
createFidin interfaceorg.apache.hadoop.maprfs.Fid- Specified by:
createFidin classorg.apache.hadoop.maprfs.AbstractMapRFileSystem- Throws:
java.io.IOException
-
deleteFid
public boolean deleteFid(java.lang.String pfid, java.lang.String dir) throws java.io.IOException- Specified by:
deleteFidin interfaceorg.apache.hadoop.maprfs.Fid- Overrides:
deleteFidin classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
mkdirsFid
public java.lang.String mkdirsFid(java.lang.String pfid, java.lang.String dir) throws java.io.IOException- Specified by:
mkdirsFidin interfaceorg.apache.hadoop.maprfs.Fid- Overrides:
mkdirsFidin classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
mkdirsFid
public java.lang.String mkdirsFid(org.apache.hadoop.fs.Path p) throws java.io.IOException- Specified by:
mkdirsFidin interfaceorg.apache.hadoop.maprfs.Fid- Overrides:
mkdirsFidin classorg.apache.hadoop.fs.FileSystem- Throws:
java.io.IOException
-
resolveTablePath
public org.apache.hadoop.fs.Path resolveTablePath(org.apache.hadoop.fs.Path path) throws java.io.IOException- Throws:
java.io.IOException
-
isTable
public boolean isTable(org.apache.hadoop.fs.Path path) throws java.io.IOException- Throws:
java.io.IOException
-
isJsonTable
public boolean isJsonTable(org.apache.hadoop.fs.Path path) throws java.io.IOException- Throws:
java.io.IOException
-
isStream
public boolean isStream(org.apache.hadoop.fs.Path path) throws java.io.IOException- Throws:
java.io.IOException
-
isChangelog
public boolean isChangelog(org.apache.hadoop.fs.Path path) throws java.io.IOException- Throws:
java.io.IOException
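A hedged dispatch sketch using the predicates above (the path is illustrative; requires a live cluster and an initialized MapRFileSystem instance fs):

```java
// Sketch only: classify what a MapR-FS path points at.
org.apache.hadoop.fs.Path p = new org.apache.hadoop.fs.Path("/user/alice/data");
if (fs.isTable(p)) {
    boolean json = fs.isJsonTable(p);   // JSON table vs. binary table
    System.out.println(p + " is a " + (json ? "JSON" : "binary") + " table");
} else if (fs.isStream(p)) {
    System.out.println(p + " is a stream");
} else {
    System.out.println(p + " is a regular file or directory");
}
```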
-
GatewaySourceToString
public java.lang.String GatewaySourceToString(int source)
-
getGatewayIps
public com.mapr.fs.jni.IPPort[] getGatewayIps(java.lang.String file, java.lang.String dstCluster, boolean skipCache, com.mapr.fs.jni.GatewaySource source) throws java.io.IOException- Throws:
java.io.IOException
-
getGatewayIps
public com.mapr.fs.jni.IPPort[] getGatewayIps(java.lang.String file) throws java.io.IOException- Throws:
java.io.IOException
-
getClusterNameUnique
public boolean getClusterNameUnique()
-
getClusterList
public java.util.List<ClusterConf.ClusterEntry> getClusterList()
-
enablePrivilegedProcessAccess
public void enablePrivilegedProcessAccess(boolean enable) throws java.io.IOException- Throws:
java.io.IOException
-
getRAThreads
public static int getRAThreads()
-
addDelegationTokens
public org.apache.hadoop.security.token.Token<?>[] addDelegationTokens(java.lang.String renewer, org.apache.hadoop.security.Credentials credentials)- Specified by:
addDelegationTokensin interfaceorg.apache.hadoop.security.token.DelegationTokenIssuer
-
addSecurityPolicy
public int addSecurityPolicy(org.apache.hadoop.fs.Path path, java.lang.String securityPolicyTag, boolean recursive) throws java.io.IOException
This method adds a single security policy tag to the list of existing security policy tags (if any) for the file or directory specified in path. The securityPolicyTag parameter is the security policy name.- Parameters:
path- The full path name to the file to be tagged
securityPolicyTag- The security policy tag- Returns:
- This method returns 0 if successful, or a Unix error code otherwise.
- Throws:
java.io.IOException- if the file cannot be tagged
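The security-policy methods above return 0 on success or a Unix error code otherwise, rather than throwing for every failure. A caller-side pattern can be sketched as below; checkPolicyRc is a hypothetical helper, not part of the MapR API:

```java
// Hypothetical helper for the 0-or-Unix-errno return convention used by
// addSecurityPolicy / setSecurityPolicy / removeSecurityPolicy.
public class PolicyRc {
    public static void checkPolicyRc(int rc, String op) {
        if (rc != 0) {
            // a nonzero return is a Unix error code, e.g. 13 (EACCES)
            throw new IllegalStateException(op + " failed with errno " + rc);
        }
    }

    public static void main(String[] args) {
        checkPolicyRc(0, "addSecurityPolicy");   // success: returns normally
        System.out.println("ok");
    }
}
```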
-
addSecurityPolicy
public int addSecurityPolicy(org.apache.hadoop.fs.Path path, java.util.List<java.lang.String> securityPolicyTags, boolean recursive) throws java.io.IOException
This method adds one or more security policy tags to the list of existing security policy tags (if any) for the file or directory specified in path. The securityPolicyTags parameter is a list of one or more security policy names.- Parameters:
path- The full path name to the file to be tagged
securityPolicyTags- The list of security policy names- Returns:
- This method returns 0 if successful, or a Unix error code otherwise.
- Throws:
java.io.IOException- if the file cannot be tagged
-
setSecurityPolicy
public int setSecurityPolicy(org.apache.hadoop.fs.Path path, java.lang.String securityPolicyTag, boolean recursive) throws java.io.IOException
This method sets the security policy tag for the file or directory specified in path, replacing all existing tags.- Parameters:
path- The full path name to the file to be tagged
securityPolicyTag- The security policy tag- Returns:
- This method returns 0 if successful, or a Unix error code otherwise.
- Throws:
java.io.IOException- if the file cannot be tagged
-
setSecurityPolicy
public int setSecurityPolicy(org.apache.hadoop.fs.Path path, java.util.List<java.lang.String> securityPolicyTags, boolean recursive) throws java.io.IOException
This method sets one or more security policy tags for the file or directory specified in path, replacing any existing security policies. The securityPolicyTags parameter is a list of one or more security policy names.- Parameters:
path- The full path name to the file to be tagged
securityPolicyTags- The list of security policy names- Returns:
- This method returns 0 if successful, or a Unix error code otherwise.
- Throws:
java.io.IOException- if the file cannot be tagged
-
removeSecurityPolicy
public int removeSecurityPolicy(org.apache.hadoop.fs.Path path, java.lang.String securityPolicyTag, boolean recursive) throws java.io.IOException
This method removes the security policy named in securityPolicyTag from the list of existing security policies (if any) for the file or directory specified in path.- Parameters:
path- The full path name to the file where the tag is to be removed
securityPolicyTag- The security policy tag to remove- Returns:
- This method returns 0 if successful, or a Unix error code otherwise.
- Throws:
java.io.IOException- if the tag cannot be removed
-
removeSecurityPolicy
public int removeSecurityPolicy(org.apache.hadoop.fs.Path path, java.util.List<java.lang.String> securityPolicyTags, boolean recursive) throws java.io.IOException
This method removes one or more security policy tags from the list of existing security policy tags (if any) for the file or directory specified in path. The securityPolicyTags parameter is a list of one or more security policy names.- Parameters:
path- The full path name to the file where the tags are to be removed
securityPolicyTags- The security policy tags to remove- Returns:
- This method returns 0 if successful, or a Unix error code otherwise.
- Throws:
java.io.IOException- if the tag cannot be removed
-
removeAllSecurityPolicies
public int removeAllSecurityPolicies(org.apache.hadoop.fs.Path path, boolean recursive) throws java.io.IOException
This method removes all security policy tags associated with the file or directory specified by path.- Parameters:
path- The full path name to the file or directory where all tags are to be removed- Returns:
- This method returns 0 if successful, or a Unix error code otherwise.
- Throws:
java.io.IOException- if the tags cannot be removed
-
getSecurityPolicy
public int getSecurityPolicy(org.apache.hadoop.fs.Path path, java.util.List<java.lang.String> securityPolicyTags) throws java.io.IOException
This method returns the security policies associated with the file or directory specified in path. The securityPolicyTags parameter is a list of one or more security policy names.- Parameters:
path- The full path name to the file where the tags are to be retrieved
securityPolicyTags- The caller-supplied list. On return, this will contain the list of security policy tags associated with the file or directory- Returns:
- This method returns 0 if successful, or a Unix error code otherwise
- Throws:
java.io.IOException- if the tags cannot be retrieved
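getSecurityPolicy fills a caller-supplied list rather than returning one. A minimal sketch, assuming an initialized MapRFileSystem instance fs and an illustrative path:

```java
// Sketch only: the list is an out-parameter; check the int return code.
java.util.List<String> tags = new java.util.ArrayList<>();
int rc = fs.getSecurityPolicy(new org.apache.hadoop.fs.Path("/data/file1"), tags);
if (rc == 0) {
    // tags now holds the security policy names attached to /data/file1
    System.out.println("policies: " + tags);
}
```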
-
getSecurityPolicy
public int getSecurityPolicy(java.util.List<org.apache.hadoop.fs.Path> paths, java.util.List<MapRPathToSecurityPolicyTags> securityPolicyTags) throws java.io.IOException
This method returns the security policies associated with the files or directories specified in paths. The mappings are returned in the securityPolicyTags parameter as a list of MapRPathToSecurityPolicyTags objects, each of which contains a path with its associated list of one or more security policy names.- Parameters:
paths- The full pathnames of the files where the tags are to be retrieved
securityPolicyTags- The caller-supplied list of path-to-security-policy-tag mappings. On return, this will contain the mappings for each path.- Returns:
- This method returns 0 if successful, or a Unix error code otherwise
- Throws:
java.io.IOException- if the tags cannot be retrieved
-
printSecurityPolicies
public int printSecurityPolicies(org.apache.hadoop.fs.Path path, boolean recursive) throws java.io.IOException
This method prints the security policies associated with the file or directory specified in path.- Parameters:
path- The full path name of the file or directory whose tags are to be printed
recursive- if path is a directory and set to true, policies are also printed for its contents- Returns:
- This method returns 0 if successful, or a Unix error code otherwise
- Throws:
java.io.IOException- if the tags cannot be retrieved
-
getAllDataMasks
public java.util.List<com.mapr.fs.proto.Dbserver.DataMask> getAllDataMasks() throws java.io.IOException
Retrieves the info for all data masks.- Returns:
- List of DataMask objects containing all the information
- Throws:
java.io.IOException- if there is an internal error
-
getDataMask
public com.mapr.fs.proto.Dbserver.DataMask getDataMask(java.lang.String dmName) throws java.io.IOException
Retrieves all the info for a particular data mask, given its name.- Parameters:
dmName- the name of the data mask to be retrieved- Returns:
- a DataMask object containing all the information
- Throws:
java.io.IOException- if there is an internal error
-
getDataMaskNameFromId
public java.lang.String getDataMaskNameFromId(int id) throws java.io.IOException
Returns the data mask name given the data mask ID. Returns null if not found.- Throws:
java.io.IOException
-
s3BucketCreate
public byte[] s3BucketCreate(java.lang.String path, java.lang.String bktName, java.lang.String domain, int aId, boolean worm, long ownwerUid) throws java.io.IOException- Throws:
java.io.IOException
-
removeS3Bucket
public byte[] removeS3Bucket(java.lang.String bucketName, java.lang.String domainName) throws java.io.IOException- Throws:
java.io.IOException
-
-