@Private @Unstable
public class LogAggregationTFileController
extends LogAggregationFileController

**Fields inherited from class LogAggregationFileController:**
APP_DIR_PERMISSIONS, APP_LOG_FILE_UMASK, conf, fileControllerName, fsSupportsChmod, maxRetry, remoteOlderRootLogDirSuffix, remoteRootLogDir, remoteRootLogDirSuffix, retentionSize, retryTimeout, TLDIR_PERMISSIONS, usersAclsManager

| Constructor | Description |
|---|---|
| LogAggregationTFileController() | |
| Modifier and Type | Method | Description |
|---|---|---|
| void | closeWriter() | Close the writer. |
| java.util.Map<org.apache.hadoop.yarn.api.records.ApplicationAccessType,java.lang.String> | getApplicationAcls(org.apache.hadoop.fs.Path aggregatedLog, org.apache.hadoop.yarn.api.records.ApplicationId appId) | Returns the ACLs for the application. |
| java.lang.String | getApplicationOwner(org.apache.hadoop.fs.Path aggregatedLog, org.apache.hadoop.yarn.api.records.ApplicationId appId) | Returns the owner of the application. |
| java.util.Map<java.lang.String,java.util.List<ContainerLogFileInfo>> | getLogMetaFilesOfNode(ExtendedLogMetaRequest logRequest, org.apache.hadoop.fs.FileStatus currentNodeFile, org.apache.hadoop.yarn.api.records.ApplicationId appId) | Returns log file metadata for a node, grouped by container. |
| void | initializeWriter(LogAggregationFileControllerContext context) | Initialize the writer. |
| void | initInternal(org.apache.hadoop.conf.Configuration conf) | Derived classes initialize themselves using this method. |
| void | postWrite(LogAggregationFileControllerContext record) | Operations needed after writing the log content. |
| boolean | readAggregatedLogs(ContainerLogsRequest logRequest, java.io.OutputStream os) | Output the container log. |
| java.util.List<ContainerLogMeta> | readAggregatedLogsMeta(ContainerLogsRequest logRequest) | Returns a list of ContainerLogMeta for an application from the remote file system. |
| void | renderAggregatedLogsBlock(HtmlBlock.Block html, View.ViewContext context) | Render the aggregated logs block. |
| void | write(AggregatedLogFormat.LogKey logKey, AggregatedLogFormat.LogValue logValue) | Write the log content. |
**Methods inherited from class LogAggregationFileController:**
aggregatedLogSuffix, belongsToAppAttempt, checkExists, cleanOldLogs, closePrintStream, createAppDir, createDir, extractRemoteOlderRootLogDirSuffix, extractRemoteRootLogDir, extractRemoteRootLogDirSuffix, getApplicationDirectoriesOfUser, getFileControllerName, getFileSystem, getNodeFilesOfApplicationDirectory, getOlderRemoteAppLogDir, getRemoteAppLogDir, getRemoteNodeLogFileForApp, getRemoteOlderRootLogDirSuffix, getRemoteRootLogDir, getRemoteRootLogDirSuffix, initialize, isFsSupportsChmod, verifyAndCreateRemoteLogDir

**initInternal**

public void initInternal(org.apache.hadoop.conf.Configuration conf)

Derived classes initialize themselves using this method.

- Specified by: initInternal in class LogAggregationFileController
- Parameters: conf - the Configuration

**initializeWriter**

public void initializeWriter(LogAggregationFileControllerContext context) throws java.io.IOException

Initialize the writer.

- Specified by: initializeWriter in class LogAggregationFileController
- Parameters: context - the LogAggregationFileControllerContext
- Throws: java.io.IOException - if the writer cannot be initialized

**closeWriter**

public void closeWriter() throws LogAggregationDFSException

Close the writer.

- Specified by: closeWriter in class LogAggregationFileController
- Throws: LogAggregationDFSException - if closing the writer fails (for example, because an HDFS quota was exceeded)

**write**

public void write(AggregatedLogFormat.LogKey logKey, AggregatedLogFormat.LogValue logValue) throws java.io.IOException

Write the log content.

- Specified by: write in class LogAggregationFileController
- Parameters: logKey - the log key; logValue - the log content
- Throws: java.io.IOException - if the logs cannot be written

**postWrite**

public void postWrite(LogAggregationFileControllerContext record) throws java.lang.Exception

Operations needed after writing the log content.

- Specified by: postWrite in class LogAggregationFileController
- Parameters: record - the LogAggregationFileControllerContext
- Throws: java.lang.Exception - if anything fails
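Taken together, initializeWriter, write, closeWriter, and postWrite form the per-application aggregation cycle that a caller drives in that order. The real types live in org.apache.hadoop.yarn.logaggregation and write an actual TFile; the standalone sketch below only mirrors the call order with hypothetical stand-in classes and string "logs", not the real API.

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Stand-in for a log aggregation file controller: the real class writes
// per-container (LogKey, LogValue) pairs into one aggregated TFile.
public class WriterLifecycleSketch {
    private final List<String> calls = new ArrayList<>();
    private StringBuilder out;

    void initializeWriter() {                 // open the aggregated file
        out = new StringBuilder();
        calls.add("init");
    }

    void write(String logKey, String logValue) throws IOException {
        if (out == null) {
            throw new IOException("writer not initialized");
        }
        out.append(logKey).append('=').append(logValue).append('\n');
        calls.add("write");
    }

    void closeWriter() { calls.add("close"); } // flush and close the file

    void postWrite() { calls.add("post"); }    // post-write bookkeeping

    static List<String> run() throws IOException {
        WriterLifecycleSketch c = new WriterLifecycleSketch();
        c.initializeWriter();
        c.write("container_01", "stdout: hello");
        c.closeWriter();
        c.postWrite();
        return c.calls;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(run());
    }
}
```

The point of the sketch is the ordering contract: write is only valid between initializeWriter and closeWriter, and postWrite runs after the file is closed.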
**readAggregatedLogs**

public boolean readAggregatedLogs(ContainerLogsRequest logRequest, java.io.OutputStream os) throws java.io.IOException

Output the container log.

- Specified by: readAggregatedLogs in class LogAggregationFileController
- Parameters: logRequest - the ContainerLogsRequest; os - the output stream
- Throws: java.io.IOException - if the log file cannot be accessed
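readAggregatedLogs streams the matching container logs into the caller's OutputStream and reports through its boolean return whether anything was written. A minimal, self-contained sketch of that contract; the in-memory map stands in for the remote TFile, and all names here are hypothetical, not the real reader:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;

public class ReadLogsSketch {
    // containerId -> aggregated log text; stands in for the remote file.
    private final Map<String, String> store = new LinkedHashMap<>();

    void put(String containerId, String log) { store.put(containerId, log); }

    // Mirrors the readAggregatedLogs contract: write every log that matches
    // the requested container id (null means "all") and say whether any did.
    boolean readAggregatedLogs(String requestedContainerId, OutputStream os)
            throws IOException {
        boolean foundAny = false;
        for (Map.Entry<String, String> e : store.entrySet()) {
            if (requestedContainerId == null
                    || requestedContainerId.equals(e.getKey())) {
                os.write(e.getValue().getBytes(StandardCharsets.UTF_8));
                foundAny = true;
            }
        }
        return foundAny;
    }

    static String demo() throws IOException {
        ReadLogsSketch r = new ReadLogsSketch();
        r.put("container_1", "line-a\n");
        r.put("container_2", "line-b\n");
        ByteArrayOutputStream os = new ByteArrayOutputStream();
        boolean found = r.readAggregatedLogs("container_2", os);
        return found + ":" + os.toString(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo());
    }
}
```

The boolean return matters to callers such as CLI tools: a false result means no log matched the request, even though no exception was thrown.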
**getLogMetaFilesOfNode**

public java.util.Map<java.lang.String,java.util.List<ContainerLogFileInfo>> getLogMetaFilesOfNode(ExtendedLogMetaRequest logRequest, org.apache.hadoop.fs.FileStatus currentNodeFile, org.apache.hadoop.yarn.api.records.ApplicationId appId) throws java.io.IOException

Returns log file metadata for a node, grouped by container.

- Specified by: getLogMetaFilesOfNode in class LogAggregationFileController
- Parameters: logRequest - extended query information holder; currentNodeFile - file status of a node in an application directory; appId - the id of the application, which is the same as in the node path
- Throws: java.io.IOException - if there is no node file
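The Map<java.lang.String,java.util.List<ContainerLogFileInfo>> return shape groups one node's log files by container id. A hedged sketch of just that grouping step, with plain strings standing in for ContainerLogFileInfo (the container ids and file names below are made up):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class LogMetaGroupingSketch {
    // Group (containerId, fileName) pairs the way getLogMetaFilesOfNode
    // groups a node's aggregated log entries: one list per container id.
    static Map<String, List<String>> groupByContainer(String[][] entries) {
        Map<String, List<String>> byContainer = new LinkedHashMap<>();
        for (String[] entry : entries) {
            byContainer.computeIfAbsent(entry[0], k -> new ArrayList<>())
                       .add(entry[1]);
        }
        return byContainer;
    }

    public static void main(String[] args) {
        String[][] entries = {
            {"container_1", "stdout"},
            {"container_1", "stderr"},
            {"container_2", "syslog"},
        };
        System.out.println(groupByContainer(entries));
    }
}
```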
**readAggregatedLogsMeta**

public java.util.List<ContainerLogMeta> readAggregatedLogsMeta(ContainerLogsRequest logRequest) throws java.io.IOException

Returns a list of ContainerLogMeta for an application from the remote file system.

- Specified by: readAggregatedLogsMeta in class LogAggregationFileController
- Parameters: logRequest - the ContainerLogsRequest
- Returns: a list of ContainerLogMeta
- Throws: java.io.IOException - if there is no available log file

**renderAggregatedLogsBlock**

public void renderAggregatedLogsBlock(HtmlBlock.Block html, View.ViewContext context)

Render the aggregated logs block.

- Specified by: renderAggregatedLogsBlock in class LogAggregationFileController
- Parameters: html - the html block; context - the ViewContext

**getApplicationOwner**

public java.lang.String getApplicationOwner(org.apache.hadoop.fs.Path aggregatedLog, org.apache.hadoop.yarn.api.records.ApplicationId appId) throws java.io.IOException

Returns the owner of the application.

- Specified by: getApplicationOwner in class LogAggregationFileController
- Parameters: aggregatedLog - the aggregated log path; appId - the ApplicationId
- Throws: java.io.IOException - if the application owner cannot be determined

**getApplicationAcls**

public java.util.Map<org.apache.hadoop.yarn.api.records.ApplicationAccessType,java.lang.String> getApplicationAcls(org.apache.hadoop.fs.Path aggregatedLog, org.apache.hadoop.yarn.api.records.ApplicationId appId) throws java.io.IOException

Returns the ACLs for the application.

- Specified by: getApplicationAcls in class LogAggregationFileController
- Parameters: aggregatedLog - the aggregated log path; appId - the ApplicationId
- Throws: java.io.IOException - if the application ACLs cannot be obtained

Copyright © 2008–2025 Apache Software Foundation. All rights reserved.