Hadoop MapReduce task log: syslog cannot be viewed
Symptom:
Because multiple map tasks reuse a single JVM, only one set of log files is actually written out:
datanode01:/data/hadoop-x.x.x/logs/userlogs$ ls -R
.:
attempt_201211220735_0001_m_000000_0 attempt_201211220735_0001_m_000002_0 attempt_201211220735_0001_m_000005_0
attempt_201211220735_0001_m_000001_0 attempt_201211220735_0001_m_000003_0
./attempt_201211220735_0001_m_000000_0:
log.index
./attempt_201211220735_0001_m_000001_0:
log.index
./attempt_201211220735_0001_m_000002_0:
log.index stderr stdout syslog
When fetching the task log via http://xxxxxxxx:50060/tasklog?attemptid=attempt_201211220735_0001_m_000000_0, the syslog section cannot be retrieved.
Cause:
1. The TaskLogServlet.doGet() method:
if (filter == null) {
  printTaskLog(response, out, attemptId, start, end, plainText,
               TaskLog.LogName.STDOUT, isCleanup);
  printTaskLog(response, out, attemptId, start, end, plainText,
               TaskLog.LogName.STDERR, isCleanup);
  if (haveTaskLog(attemptId, isCleanup, TaskLog.LogName.SYSLOG)) {
    printTaskLog(response, out, attemptId, start, end, plainText,
                 TaskLog.LogName.SYSLOG, isCleanup);
  }
  if (haveTaskLog(attemptId, isCleanup, TaskLog.LogName.DEBUGOUT)) {
    printTaskLog(response, out, attemptId, start, end, plainText,
                 TaskLog.LogName.DEBUGOUT, isCleanup);
  }
  if (haveTaskLog(attemptId, isCleanup, TaskLog.LogName.PROFILE)) {
    printTaskLog(response, out, attemptId, start, end, plainText,
                 TaskLog.LogName.PROFILE, isCleanup);
  }
} else {
  printTaskLog(response, out, attemptId, start, end, plainText, filter,
               isCleanup);
}
Adding the filter=SYSLOG parameter to the request makes syslog accessible; removing it breaks again. Reading the code, the SYSLOG branch carries an extra guard that the STDOUT/STDERR calls do not:

haveTaskLog(attemptId, isCleanup, TaskLog.LogName.SYSLOG)

Stepping into that method shows it checks whether a syslog file exists directly under the attempt_201211220735_0001_m_000000_0 directory, instead of reading the real location out of log.index and checking there. That is the bug.
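To make the intended lookup concrete, here is a minimal, self-contained sketch of parsing a log.index file. The class and method names are mine, not Hadoop's; the format is assumed from the getAllLogsFileDetails code quoted later in this article: the first line is "LOG_DIR:<real attempt dir>", followed by one "<logname>:<start> <end>" line per tracked log (an end offset of -1 meaning "to end of file").

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper illustrating how log.index should be consulted.
public class LogIndexParser {
    static final String LOCATION_PREFIX = "LOG_DIR:";

    // Returns the real log directory recorded on the first line.
    public static String parseLocation(String indexContents) {
        String firstLine = indexContents.split("\n", 2)[0];
        if (!firstLine.startsWith(LOCATION_PREFIX)) {
            throw new IllegalArgumentException("not a log.index header: " + firstLine);
        }
        return firstLine.substring(LOCATION_PREFIX.length());
    }

    // Returns logname -> {start, end} byte ranges from the remaining lines.
    public static Map<String, long[]> parseOffsets(String indexContents) {
        Map<String, long[]> offsets = new HashMap<>();
        String[] lines = indexContents.split("\n");
        for (int i = 1; i < lines.length; i++) {   // skip the LOG_DIR line
            int colon = lines[i].indexOf(':');
            if (colon < 0) continue;
            String name = lines[i].substring(0, colon);
            String[] range = lines[i].substring(colon + 1).trim().split("\\s+");
            offsets.put(name, new long[] { Long.parseLong(range[0]),
                                           Long.parseLong(range[1]) });
        }
        return offsets;
    }

    public static void main(String[] args) {
        String index = "LOG_DIR:/data/hadoop/logs/userlogs/attempt_201211220735_0001_m_000002_0\n"
                     + "stdout:0 -1\nstderr:0 -1\nsyslog:0 -1\n";
        System.out.println(parseLocation(index));
        System.out.println(parseOffsets(index).containsKey("syslog")); // true
    }
}
```

A correct haveTaskLog would ask parseOffsets-style data "is syslog listed?", or at least check for the file under parseLocation's directory, rather than under the attempt directory itself.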
2. The TaskLogServlet.printTaskLog() method
When actually fetching a log file, it does read from log.index:
InputStream taskLogReader =
    new TaskLog.Reader(taskId, filter, start, end, isCleanup);
In TaskLog.Reader:

public Reader(TaskAttemptID taskid, LogName kind,
              long start, long end, boolean isCleanup) throws IOException {
  // find the right log file
  Map<LogName, LogFileDetail> allFilesDetails =
      getAllLogsFileDetails(taskid, isCleanup);
  ...

static Map<LogName, LogFileDetail> getAllLogsFileDetails(
    TaskAttemptID taskid, boolean isCleanup) throws IOException {
  Map<LogName, LogFileDetail> allLogsFileDetails =
      new HashMap<LogName, LogFileDetail>();
  File indexFile = getIndexFile(taskid, isCleanup);
  BufferedReader fis;
  try {
    fis = new BufferedReader(new InputStreamReader(
        SecureIOUtils.openForRead(indexFile, obtainLogDirOwner(taskid))));
  } catch (FileNotFoundException ex) {
    LOG.warn("Index file for the log of " + taskid + " does not exist.");
    // Assume no task reuse is used and files exist on attempt dir
    StringBuffer input = new StringBuffer();
    input.append(LogFileDetail.LOCATION
        + getAttemptDir(taskid, isCleanup) + "\n");
    for (LogName logName : LOGS_TRACKED_BY_INDEX_FILES) {
      input.append(logName + ":0 -1\n");
    }
    fis = new BufferedReader(new StringReader(input.toString()));
  }
  ...
Fix:
Do what getAllLogsFileDetails does: first read log.index to obtain the real log directory (logdir),
File indexFile = getIndexFile(taskid, isCleanup);
BufferedReader fis;
try {
  fis = new BufferedReader(new InputStreamReader(
      SecureIOUtils.openForRead(indexFile, obtainLogDirOwner(taskid))));
} catch (FileNotFoundException ex) {
  LOG.warn("Index file for the log of " + taskid + " does not exist.");
  // Assume no task reuse is used and files exist on attempt dir
  StringBuffer input = new StringBuffer();
  input.append(LogFileDetail.LOCATION
      + getAttemptDir(taskid, isCleanup) + "\n");
  for (LogName logName : LOGS_TRACKED_BY_INDEX_FILES) {
    input.append(logName + ":0 -1\n");
  }
  fis = new BufferedReader(new StringReader(input.toString()));
}
String str = fis.readLine();
if (str == null) { // the file doesn't have anything
  throw new IOException("Index file for the log of " + taskid + " is empty.");
}
String loc = str.substring(str.indexOf(LogFileDetail.LOCATION) +
    LogFileDetail.LOCATION.length());
and only then check inside that logdir whether the syslog file exists.
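Putting the pieces together, the proposed check can be sketched as follows. This is an illustrative standalone program, not the actual Hadoop patch (the real fix would live in haveTaskLog and go through SecureIOUtils); the main method simulates JVM reuse, where one attempt's log.index points at another attempt's directory:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the proposed fix: resolve the real log directory from
// log.index (first line "LOG_DIR:<path>") before testing for the file.
public class HaveTaskLogFixed {
    static boolean haveTaskLog(File attemptDir, String logName) throws IOException {
        File indexFile = new File(attemptDir, "log.index");
        File logDir = attemptDir; // fallback: no JVM reuse, logs sit in attemptDir
        if (indexFile.exists()) {
            String firstLine = Files.readAllLines(indexFile.toPath()).get(0);
            if (firstLine.startsWith("LOG_DIR:")) {
                logDir = new File(firstLine.substring("LOG_DIR:".length()));
            }
        }
        // The buggy behavior was equivalent to always checking attemptDir.
        return new File(logDir, logName).exists();
    }

    public static void main(String[] args) throws IOException {
        // Simulate JVM reuse: attempt A's log.index points at attempt B's dir.
        Path root = Files.createTempDirectory("userlogs");
        Path a = Files.createDirectory(root.resolve("attempt_A"));
        Path b = Files.createDirectory(root.resolve("attempt_B"));
        Files.write(b.resolve("syslog"), "log line\n".getBytes());
        Files.write(a.resolve("log.index"),
                    ("LOG_DIR:" + b + "\nsyslog:0 -1\n").getBytes());
        System.out.println(haveTaskLog(a.toFile(), "syslog"));          // true
        System.out.println(new File(a.toFile(), "syslog").exists());    // false: the old check
    }
}
```

The last two lines show the difference: resolving through log.index finds the shared syslog, while a direct existence check under the attempt directory (the old behavior) does not.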
Workaround:
Append filter=SYSLOG to the query (e.g. http://xxxxxxxx:50060/tasklog?attemptid=attempt_201211220735_0001_m_000000_0&filter=SYSLOG) and the syslog becomes visible; no code change is required.