
Common Hadoop Errors and Solutions


1: Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out

Answer:
The job needs to open many files for its analysis. The system default limit is usually 1024 (check it with ulimit -a), which is enough for normal use but far too low for this kind of program.
Fix: edit two files.
/etc/security/limits.conf
vi /etc/security/limits.conf
Add:
* soft nofile 102400
* hard nofile 409600

$ cd /etc/pam.d/
$ sudo vi login
Add the line: session required /lib/security/pam_limits.so

A correction to the answer for problem 1:
This error means that during the shuffle, in the pre-processing stage of the reduce, the number of failed attempts to fetch completed map output exceeded the limit, which is 5 by default. It can be triggered in many ways: flaky network connections, connection timeouts, poor bandwidth, blocked ports, and so on. When the network inside the cluster is healthy, this error normally does not occur.

2: Too many fetch-failures
Answer:
This usually means the connectivity between nodes is incomplete.
1) Check /etc/hosts on every machine:
the local IP must map to the server name;
it must contain the IP + name of every server in the cluster.
2) Check .ssh/authorized_keys:
it must contain the public keys of all servers, including the machine itself.

3: Processing is extremely slow: maps finish quickly, but reduces are very slow and the job repeatedly falls back to reduce=0%
Answer:
Apply the checks from problem 2, then raise the heap size in conf/hadoop-env.sh: export HADOOP_HEAPSIZE=4000

4: HDFS: datanodes start, but they cannot be reached and cannot be shut down
When re-formatting a new distributed file system, you need to delete the local path configured as dfs.name.dir on the NameNode (the directory where the NameNode persistently stores the namespace and the transaction log), and also delete the dfs.data.dir directories on every DataNode (the local directories where DataNodes store block data). In this setup that means deleting /home/hadoop/NameData on the NameNode, and /home/hadoop/DataNode1 and /home/hadoop/DataNode2 on the DataNodes. The reason is that when Hadoop formats a new distributed file system, each stored namespace is stamped with the version of its creation time (see the VERSION file under /home/hadoop/NameData/current, which records the version information). So when re-formatting, it is best to delete the NameData directory first, and you must delete dfs.data.dir on every DataNode, so that the version information recorded by the namenode and the datanodes matches.
Note: deleting data is dangerous. Never delete anything you are not sure about, and back up everything you are going to delete!

5: java.io.IOException: Could not obtain block: blk_194219614024901469_1100 file=/user/hive/warehouse/src_20090724_log/src_20090724_log
In most cases this happens because a node went down or could not be reached.

6: java.lang.OutOfMemoryError: Java heap space
This exception clearly means the JVM ran out of memory; increase the JVM heap size on all of the datanodes, e.g.
java -Xms1024m -Xmx4096m
As a rule of thumb the maximum JVM heap should be about half of the machine's total memory. Our machines have 8 GB, so we set 4096m; this may still not be the optimal value.

How to add a node to Hadoop
The steps I actually used to add a node:
1. Set up the environment on the new slave first: ssh, JDK, and copies of the relevant config, lib and bin directories;
2. Add the new datanode's host name to the namenode and to the other datanodes in the cluster;
3. Add the new datanode's IP to conf/slaves on the master;
4. Restart the cluster and check that the new datanode shows up;
5. Run bin/start-balancer.sh; this takes a long time.
Notes:
1. If you do not balance, the cluster puts all new data on the new node, which hurts MapReduce efficiency;
2. You can also run bin/start-balancer.sh manually and pass a threshold, e.g. -threshold 5.
The threshold is the balancing target, 10% by default; a lower value makes the nodes more evenly balanced but takes longer.
3. The balancer can also run while MapReduce jobs are running, but the default dfs.balance.bandwidthPerSec is very low, 1 MB/s. When no jobs are running you can raise it to speed up balancing.

Other notes:
1. Make sure the firewall on the slave is turned off;
2. Make sure the new slave's IP is in /etc/hosts on the master and on the other slaves, and conversely that the master's and the other slaves' IPs are in /etc/hosts on the new slave.
Number of mappers and reducers
URL: http://wiki.apache.org/hadoop/HowManyMapsAndReduces
HowManyMapsAndReduces
Partitioning your job into maps and reduces
Picking the appropriate size for the tasks for your job can radically change the performance of Hadoop. Increasing the number of tasks increases the framework overhead, but increases load balancing and lowers the cost of failures. At one extreme is the 1 map/1 reduce case where nothing is distributed. The other extreme is to have 1,000,000 maps/ 1,000,000 reduces where the framework runs out of resources for the overhead.
Number of Maps
The number of maps is usually driven by the number of DFS blocks in the input files. Although that causes people to adjust their DFS block size to adjust the number of maps. The right level of parallelism for maps seems to be around 10-100 maps/node, although we have taken it up to 300 or so for very cpu-light map tasks. Task setup takes awhile, so it is best if the maps take at least a minute to execute.
Actually controlling the number of maps is subtle. The mapred.map.tasks parameter is just a hint to the InputFormat for the number of maps. The default InputFormat behavior is to split the total number of bytes into the right number of fragments. However, in the default case the DFS block size of the input files is treated as an upper bound for input splits. A lower bound on the split size can be set via mapred.min.split.size. Thus, if you expect 10TB of input data and have 128MB DFS blocks, you'll end up with 82k maps, unless your mapred.map.tasks is even larger. Ultimately the InputFormat determines the number of maps.
The number of map tasks can also be increased manually using the JobConf’s conf.setNumMapTasks(int num). This can be used to increase the number of map tasks, but will not set the number below that which Hadoop determines via splitting the input data.
Number of Reduces
The right number of reduces seems to be 0.95 or 1.75 * (nodes * mapred.tasktracker.tasks.maximum). At 0.95 all of the reduces can launch immediately and start transferring map outputs as the maps finish. At 1.75 the faster nodes will finish their first round of reduces and launch a second round of reduces doing a much better job of load balancing.
Currently the number of reduces is limited to roughly 1000 by the buffer size for the output files (io.buffer.size * 2 * numReduces << heapSize). This will be fixed at some point, but until it is it provides a pretty firm upper bound.
The number of reduces also controls the number of output files in the output directory, but usually that is not important because the next map/reduce step will split them into even smaller splits for the maps.
The number of reduce tasks can also be increased in the same way as the map tasks, via JobConf’s conf.setNumReduceTasks(int num).
My own understanding:
The number of mappers depends on the input files and on the file splits; the upper bound on a split is dfs.block.size, the lower bound can be set with mapred.min.split.size, and in the end the InputFormat makes the decision.
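
A hedged illustration of those two knobs using the old JobConf API; the class name and the numeric values below are made up for the example, and setNumMapTasks is only a hint to the InputFormat:
import org.apache.hadoop.mapred.JobConf;

public class MapCountHints {
  public static void configure(JobConf conf) {
    // Only a hint: the InputFormat may still choose a different number of maps.
    conf.setNumMapTasks(100);
    // Raise the lower bound on the split size to 64 MB (mapred.min.split.size).
    conf.setLong("mapred.min.split.size", 64L * 1024 * 1024);
  }
}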

A better recommendation:
The right number of reduces seems to be 0.95 or 1.75 multiplied by (<no. of nodes> * mapred.tasktracker.reduce.tasks.maximum). Increasing the number of reduces increases the framework overhead, but improves load balancing and lowers the cost of failures.
<property>
<name>mapred.tasktracker.reduce.tasks.maximum</name>
<value>2</value>
<description>The maximum number of reduce tasks that will be run
simultaneously by a task tracker.
</description>
</property>
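
To make the 0.95 rule concrete, here is a minimal sketch with the old JobConf API; the node count and slot count are hypothetical placeholders for your cluster's real numbers:
import org.apache.hadoop.mapred.JobConf;

public class ReduceCountExample {
  public static void configure(JobConf conf) {
    int nodes = 10;               // hypothetical number of tasktracker nodes
    int reduceSlotsPerNode = 2;   // mapred.tasktracker.reduce.tasks.maximum, as above
    // 0.95 lets every reduce start copying map output as soon as the maps finish.
    conf.setNumReduceTasks((int) (0.95 * nodes * reduceSlotsPerNode));
  }
}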

Adding a new disk to a single node
1. On the node that gets the new disk, edit dfs.data.dir and separate the new and old data directories with a comma;
2. Restart DFS.
 


Syncing the Hadoop code across nodes
hadoop-env.sh
# host:path where hadoop code should be rsync'd from. Unset by default.
# export HADOOP_MASTER=master:/home/$USER/src/hadoop

Merging small HDFS files with a single command
hadoop fs -getmerge <src> <dest>
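
The same merge can also be done from Java with the old FileUtil.copyMerge API; a minimal sketch, with hypothetical paths:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class MergeSmallFiles {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Concatenate every file under /user/demo/small into a single HDFS file.
    FileUtil.copyMerge(fs, new Path("/user/demo/small"),
        fs, new Path("/user/demo/merged.txt"),
        false /* keep the source files */, conf, null);
  }
}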

How to restart reduce jobs when the JobTracker restarts
Introduced recovery of jobs when JobTracker restarts. This facility is off by default.
Introduced config parameters "mapred.jobtracker.restart.recover", "mapred.jobtracker.job.history.block.size", and "mapred.jobtracker.job.history.buffer.size".
I have not verified this yet.

Problems with IO write operations
0-1246359584298, infoPort=50075, ipcPort=50020):Got exception while serving blk_-5911099437886836280_1292 to /172.16.100.165:
java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/
172.16.100.165:50010 remote=/172.16.100.165:50930]
at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:185)
at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:293)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:387)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:179)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:94)
at java.lang.Thread.run(Thread.java:619)

It seems there are many reasons that it can timeout, the example given in
HADOOP-3831 is a slow reading client.

Workaround: try setting dfs.datanode.socket.write.timeout=0 in hadoop-site.xml.
My understanding is that this issue should be fixed in Hadoop 0.19.1 so that
we should leave the standard timeout. However until then this can help
resolve issues like the one you're seeing.
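
If you would rather experiment on the client side before editing hadoop-site.xml on every node, the same property can also be set on the client's Configuration. This is only a sketch of that idea; whether it helps depends on which side is hitting the timeout:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class WriteTimeoutExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // 0 disables the datanode socket write timeout for this client.
    conf.set("dfs.datanode.socket.write.timeout", "0");
    FileSystem fs = FileSystem.get(conf);  // writes through this fs use the new timeout
    fs.close();
  }
}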

How to decommission HDFS nodes
The dfsadmin help text in the current version does not explain this clearly (a bug has been filed for it). The correct procedure is:
1. Point dfs.hosts at the current slaves file, using the full path. Note that the host names in the list must be the full names, i.e. what uname -n returns.
2. Put the full names of the nodes to be decommissioned into another file, e.g. slaves.ex, and point dfs.hosts.exclude at the full path of that file.
3. Run bin/hadoop dfsadmin -refreshNodes
4. In the web UI, or via bin/hadoop dfsadmin -report, the nodes being decommissioned show the state "Decommission in progress" until all the blocks that need re-replication have been copied.
5. When that finishes, remove the decommissioned nodes from the slaves file (i.e. the file dfs.hosts points to).

By the way, the -refreshNodes command has three other uses:
1. Adding allowed nodes to the list (add the host name to dfs.hosts);
2. Removing nodes directly, without re-replicating their data (remove the host name from dfs.hosts);
3. The reverse of decommissioning: stop the decommissioning of a node that appears both in the exclude file and in dfs.hosts, turning a node that is "Decommission in progress" back to Normal ("in service" in the web UI).

Notes borrowed from other Hadoop users
1. Fixing the Hadoop OutOfMemoryError problem:
<property>
<name>mapred.child.java.opts</name>
<value>-Xmx800M -server</value>
</property>
With the right JVM size in your hadoop-site.xml, you will have to copy this
to all mapred nodes and restart the cluster.
Or pass it on the command line: hadoop jar jarfile [main class] -D mapred.child.java.opts=-Xmx800M
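
The same knob can also be set per job from the driver, which avoids editing hadoop-site.xml on every node; a minimal sketch with the old JobConf API (the driver class name is a placeholder and the 800 MB value just mirrors the property above):
import org.apache.hadoop.mapred.JobConf;

public class ChildHeapExample {
  public static JobConf configure() {
    JobConf conf = new JobConf(ChildHeapExample.class);
    // Every child task JVM (map and reduce) is launched with these options.
    conf.set("mapred.child.java.opts", "-Xmx800M -server");
    return conf;
  }
}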

2. Hadoop java.io.IOException: Job failed! at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1232) while indexing.
When I use nutch 1.0 I get this error:
Hadoop java.io.IOException: Job failed! at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1232) while indexing.
This one is also easy to fix:
Delete conf/log4j.properties and you will then see the detailed error report.
In my case it was an out of memory error.
The fix was to add the options -Xms64m -Xmx512m when running the main class org.apache.nutch.crawl.Crawl.
Your problem may not be the same, but once you can see the detailed error report it becomes much easier to solve.

Using the distributed cache
It behaves like a global variable, but because the data is too large to put into the config file, you use the distributed cache instead.
Usage (see The Definitive Guide, p. 240):
1. On the command line: pass -files to ship the files you need to look up (local files or HDFS files (hdfs://xxx?)), or -archives for JAR, ZIP, tar, and so on.
% hadoop jar job.jar MaxTemperatureByStationNameUsingDistributedCacheFile \
-files input/ncdc/metadata/stations-fixed-width.txt input/ncdc/all output
2. In the program:
public void configure(JobConf conf) {
metadata = new NcdcStationMetadata();
try {
metadata.initialize(new File("stations-fixed-width.txt"));
} catch (IOException e) {
throw new RuntimeException(e);
}
}
There is also an indirect way (which does not seem to exist in hadoop-0.19.0):
add files with addCacheFile() or addCacheArchive(),
and retrieve them with getLocalCacheFiles() or getLocalCacheArchives(); a minimal sketch follows.
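
A minimal sketch of that indirect route, assuming the old org.apache.hadoop.filecache.DistributedCache API; the HDFS path is hypothetical:
import java.net.URI;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;

public class CacheFileExample {
  // In the driver: ship an HDFS file to every task node before the job starts.
  public static void addToCache(JobConf conf) throws Exception {
    DistributedCache.addCacheFile(new URI("/meta/stations-fixed-width.txt"), conf);
  }

  // In a mapper/reducer configure(): find the local copies of the cached files.
  public static Path[] localCopies(JobConf conf) throws Exception {
    return DistributedCache.getLocalCacheFiles(conf);
  }
}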

Hadoop job web interfaces
There are web-based interfaces to both the JobTracker (MapReduce master) and NameNode (HDFS master) which display status pages about the state of the entire system. By default, these are located at http://job.tracker.addr:50030/ and http://name.node.addr:50070/.

Monitoring Hadoop
Use nagios for alerting and ganglia for the monitoring graphs.

status of 255 error
Error:
java.io.IOException: Task process exit with nonzero status of 255.
at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:424)

Suggested cause/fix:
Set mapred.jobtracker.retirejob.interval and mapred.userlog.retain.hours to higher values. By default, their values are 24 hours. These might be the reason for failure, though I'm not sure.

split size
FileInputFormat input splits (see The Definitive Guide, p. 190):
mapred.min.split.size: default=1, the smallest valid size in bytes for a file split.
mapred.max.split.size: default=Long.MAX_VALUE, the largest valid size.
dfs.block.size: default=64M; set to 128M on our cluster.
If the minimum split size is set larger than the block size, the splits become larger than a block. (My guess is that data then has to be fetched from other nodes and several blocks merged into one split.)
If the maximum split size is set smaller than the block size, blocks are split up further.

split size = max(minimumSize, min(maximumSize, blockSize));
where minimumSize < blockSize < maximumSize.
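
A small worked example of that formula, using the defaults listed above and this cluster's 128 MB block size:
public class SplitSizeExample {
  public static void main(String[] args) {
    long minimumSize = 1L;                  // mapred.min.split.size default
    long maximumSize = Long.MAX_VALUE;      // mapred.max.split.size default
    long blockSize = 128L * 1024 * 1024;    // dfs.block.size, 128M on this cluster

    long splitSize = Math.max(minimumSize, Math.min(maximumSize, blockSize));
    // With the defaults the split size equals the block size: 134217728 bytes.
    System.out.println("split size = " + splitSize);
  }
}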

sort by value
Hadoop does not provide a direct sort-by-value mechanism, because it would lower MapReduce performance.
It can, however, be built out of a few pieces; for the full implementation see The Definitive Guide, p. 250, and the sketch after this list.
Basic idea:
1. Combine the key and the value into a new composite key;
2. Override the partitioner so that partitioning uses only the old key:
conf.setPartitionerClass(FirstPartitioner.class);
3. Define a custom key comparator that sorts on the old key first and then on the old value:
conf.setOutputKeyComparatorClass(KeyComparator.class);
4. Override the grouping comparator so that grouping also uses only the old key:
conf.setOutputValueGroupingComparator(GroupComparator.class);
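
A minimal, hedged sketch of those three pieces with the old mapred API. To stay short, the composite key here is a plain Text of the form oldKey + tab + oldValue rather than the dedicated pair Writable the book uses, and values are compared as strings:
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

public class SecondarySortSketch {

  // Assumes the mapper emits keys of the form "oldKey<TAB>oldValue".
  private static String oldKey(Text t)   { return t.toString().split("\t", 2)[0]; }
  private static String oldValue(Text t) { return t.toString().split("\t", 2)[1]; }

  // 2. Partition on the old key only, so every value of one key reaches the same reducer.
  public static class FirstPartitioner implements Partitioner<Text, Text> {
    public void configure(JobConf job) {}
    public int getPartition(Text key, Text value, int numPartitions) {
      return (oldKey(key).hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
  }

  // 3. Sort on the old key first, then on the old value.
  public static class KeyComparator extends WritableComparator {
    public KeyComparator() { super(Text.class, true); }
    public int compare(WritableComparable a, WritableComparable b) {
      Text t1 = (Text) a, t2 = (Text) b;
      int cmp = oldKey(t1).compareTo(oldKey(t2));
      return cmp != 0 ? cmp : oldValue(t1).compareTo(oldValue(t2));
    }
  }

  // 4. Group reducer input on the old key only.
  public static class GroupComparator extends WritableComparator {
    public GroupComparator() { super(Text.class, true); }
    public int compare(WritableComparable a, WritableComparable b) {
      return oldKey((Text) a).compareTo(oldKey((Text) b));
    }
  }

  public static void wire(JobConf conf) {
    conf.setPartitionerClass(FirstPartitioner.class);
    conf.setOutputKeyComparatorClass(KeyComparator.class);
    conf.setOutputValueGroupingComparator(GroupComparator.class);
  }
}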

Handling small input files
A long series of small files as input lowers Hadoop's efficiency.
There are three ways to handle small files:
1. Merge the small files into a single SequenceFile to speed up MapReduce; see WholeFileInputFormat and SmallFilesToSequenceFileConverter, The Definitive Guide, p. 194.
2. Use CombineFileInputFormat (which builds on FileInputFormat); I have not tried this.
3. Use Hadoop archives (similar to packing the files) to reduce the NameNode metadata memory consumed by small files. (This does not necessarily help, so it is not recommended.)
How to:
Archive the /my/files directory and its subdirectories into files.har and place it under /my:
bin/hadoop archive -archiveName files.har /my/files /my

List the files in the archive:
bin/hadoop fs -lsr har://my/files.har

skip bad records
JobConf conf = new JobConf(ProductMR.class);
conf.setJobName("ProductMR");
conf.setOutputKeyClass(Text.class);
conf.setOutputValueClass(Product.class);
conf.setMapperClass(Map.class);
conf.setReducerClass(Reduce.class);
conf.setMapOutputCompressorClass(DefaultCodec.class);
conf.setInputFormat(SequenceFileInputFormat.class);
conf.setOutputFormat(SequenceFileOutputFormat.class);
String objpath = "abc1";
SequenceFileInputFormat.addInputPath(conf, new Path(objpath));
SkipBadRecords.setMapperMaxSkipRecords(conf, Long.MAX_VALUE);
SkipBadRecords.setAttemptsToStartSkipping(conf, 0);
SkipBadRecords.setSkipOutputPath(conf, new Path("data/product/skip/"));
String output = "abc";
SequenceFileOutputFormat.setOutputPath(conf, new Path(output));
JobClient.runJob(conf);

For skipping failed tasks try : mapred.max.map.failures.percent
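
That last property can also be set from the driver; a short sketch with the old JobConf API (the 10% value is arbitrary):
import org.apache.hadoop.mapred.JobConf;

public class FailureToleranceExample {
  public static void configure(JobConf conf) {
    // Let up to 10% of map tasks fail without failing the whole job
    // (the same as mapred.max.map.failures.percent=10).
    conf.setMaxMapTaskFailuresPercent(10);
  }
}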

Restarting a single datanode
If a datanode runs into trouble and, after fixing it, you want it to rejoin the cluster without restarting the whole cluster, run on that node:
bin/hadoop-daemon.sh start datanode
bin/hadoop-daemon.sh start tasktracker

reduce exceed 100%
"Reduce Task Progress shows > 100% when the total size of map outputs (for a
single reducer) is high"
Cause:
During the merge step on the reduce side the progress check is slightly inaccurate, so the status can exceed 100%, and the web UI statistics then fail with the following error: java.lang.ArrayIndexOutOfBoundsException: 3
at org.apache.hadoop.mapred.StatusHttpServer$TaskGraphServlet.getReduceAvarageProgresses(StatusHttpServer.java:228)
at org.apache.hadoop.mapred.StatusHttpServer$TaskGraphServlet.doGet(StatusHttpServer.java:159)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:689)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:427)
at org.mortbay.jetty.servlet.WebApplicationHandler.dispatch(WebApplicationHandler.java:475)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:567)
at org.mortbay.http.HttpContext.handle(HttpContext.java:1565)
at org.mortbay.jetty.servlet.WebApplicationContext.handle(WebApplicationContext.java:635)
at org.mortbay.http.HttpContext.handle(HttpContext.java:1517)
at org.mortbay.http.HttpServer.service(HttpServer.java:954)

JIRA issue:

counters
There are three kinds of counters:
1. Built-in counters: Map input bytes, Map output records, ...
2. Enum counters
Usage:
enum Temperature {
MISSING,
MALFORMED
}

reporter.incrCounter(Temperature.MISSING, 1)
Output:
09/04/20 06:33:36 INFO mapred.JobClient: Air Temperature Recor
09/04/20 06:33:36 INFO mapred.JobClient: Malformed=3
09/04/20 06:33:36 INFO mapred.JobClient: Missing=66136856
3. Dynamic counters
Usage:
reporter.incrCounter("TemperatureQuality", parser.getQuality(), 1);

Output:
09/04/20 06:33:36 INFO mapred.JobClient: TemperatureQuality
09/04/20 06:33:36 INFO mapred.JobClient: 2=1246032
09/04/20 06:33:36 INFO mapred.JobClient: 1=973422173
09/04/20 06:33:36 INFO mapred.JobClient: 0=1
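
For context, a compact hypothetical mapper that uses both counter styles through the old Reporter API (the Temperature enum mirrors the snippet above; the record-handling logic is made up):
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class CounterMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, NullWritable> {

  enum Temperature { MISSING, MALFORMED }

  public void map(LongWritable key, Text value,
                  OutputCollector<Text, NullWritable> out, Reporter reporter)
      throws IOException {
    String line = value.toString();
    if (line.isEmpty()) {
      reporter.incrCounter(Temperature.MISSING, 1);   // enum counter
    } else {
      // dynamic counter: group name and counter name are plain strings
      reporter.incrCounter("LineLength", Integer.toString(line.length()), 1);
      out.collect(value, NullWritable.get());
    }
  }
}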

7: Namenode in safe mode
Fix:
bin/hadoop dfsadmin -safemode leave

8: java.net.NoRouteToHostException: No route to host
Fix:
sudo /etc/init.d/iptables stop

9: After changing the namenode, SELECT queries run in Hive still point to the old namenode address
This is because: When you create a table, Hive actually stores the location of the table (e.g. hdfs://ip:port/user/root/...) in the SDS and DBS tables in the metastore. So when I bring up a new cluster the master has a new IP, but Hive's metastore is still pointing to the locations within the old cluster. I could modify the metastore to update with the new IP every time I bring up a cluster. But the easier and simpler solution was to just use an elastic IP for the master.
So you need to replace every occurrence of the old namenode address in the metastore with the current namenode address.

10: Your DataNode is started and you can create directories with bin/hadoop dfs -mkdir, but you get an error message when you try to put files into the HDFS (e.g., when you run a command like bin/hadoop dfs -put).
解决方法:
Go to the HDFS info web page (open your web browser and go to http://namenode:dfs_info_port where namenode is the hostname of your NameNode and dfs_info_port is the port you chose for dfs.info.port; if you followed the QuickStart on your personal computer then this URL will be http://localhost:50070). Once at that page click on the number where it tells you how many DataNodes you have to look at a list of the DataNodes in your cluster.
If it says you have used 100% of your space, then you need to free up room on local disk(s) of the DataNode(s).
If you are on Windows then this number will not be accurate (there is some kind of bug either in Cygwin’s df.exe or in Windows). Just free up some more space and you should be okay. On one Windows machine we tried the disk had 1GB free but Hadoop reported that it was 100% full. Then we freed up another 1GB and then it said that the disk was 99.15% full and started writing data into the HDFS again. We encountered this bug on Windows XP SP2.
11: Your DataNodes won't start, and you see something like this in logs/*datanode*:
Incompatible namespaceIDs in /tmp/hadoop-ross/dfs/data
Cause:
Your Hadoop namespaceID became corrupted. Unfortunately the easiest thing to do is to reformat the HDFS.
Fix:
You need to do something like this:
bin/stop-all.sh
rm -Rf /tmp/hadoop-your-username/*
bin/hadoop namenode -format
12: You can run Hadoop jobs written in Java (like the grep example), but your HadoopStreaming jobs (such as the Python example that fetches web page titles) won't work.
Cause:
You might have given only a relative path to the mapper and reducer programs. The tutorial originally just specified relative paths, but absolute paths are required if you are running in a real cluster.
Fix:
Use absolute paths like this from the tutorial:
bin/hadoop jar contrib/hadoop-0.15.2-streaming.jar \
-mapper $HOME/proj/hadoop/multifetch.py \
-reducer $HOME/proj/hadoop/reducer.py \
-input urls/* \
-output titles

13: 2009-01-08 10:02:40,709 ERROR metadata.Hive (Hive.java:getPartitions(499)) – javax.jdo.JDODataStoreException: Required table missing : ""PARTITIONS"" in Catalog "" Schema "". JPOX requires this table to perform its persistence operations. Either your MetaData is incorrect, or you need to enable "org.jpox.autoCreateTables"
Cause: org.jpox.fixedDatastore was set to true in hive-default.xml.
starting namenode, logging to /home/hadoop/HadoopInstall/hadoop/bin/../logs/hadoop-hadoop-namenode-hadoop.out
localhost: starting datanode, logging to /home/hadoop/HadoopInstall/hadoop/bin/../logs/hadoop-hadoop-datanode-hadoop.out
localhost: starting secondarynamenode, logging to /home/hadoop/HadoopInstall/hadoop/bin/../logs/hadoop-hadoop-secondarynamenode-hadoop.out
localhost: Exception in thread "main" java.lang.NullPointerException
localhost: at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:130)
localhost: at org.apache.hadoop.dfs.NameNode.getAddress(NameNode.java:116)
localhost: at org.apache.hadoop.dfs.NameNode.getAddress(NameNode.java:120)
localhost: at org.apache.hadoop.dfs.SecondaryNameNode.initialize(SecondaryNameNode.java:124)
localhost: at org.apache.hadoop.dfs.SecondaryNameNode.<init>(SecondaryNameNode.java:108)
localhost: at org.apache.hadoop.dfs.SecondaryNameNode.main(SecondaryNameNode.java:460)
14: 09/08/31 18:25:45 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink 192.168.1.11:50010
> 09/08/31 18:25:45 INFO hdfs.DFSClient: Abandoning block blk_-8575812198227241296_1001
> 09/08/31 18:25:51 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException:
Bad connect ack with firstBadLink 192.168.1.16:50010
> 09/08/31 18:25:51 INFO hdfs.DFSClient: Abandoning block blk_-2932256218448902464_1001
> 09/08/31 18:25:57 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException:
Bad connect ack with firstBadLink 192.168.1.11:50010
> 09/08/31 18:25:57 INFO hdfs.DFSClient: Abandoning block blk_-1014449966480421244_1001
> 09/08/31 18:26:03 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException:
Bad connect ack with firstBadLink 192.168.1.16:50010
> 09/08/31 18:26:03 INFO hdfs.DFSClient: Abandoning block blk_7193173823538206978_1001
> 09/08/31 18:26:09 WARN hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable
to create new block.
> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2731)
> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:1996)
> at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2182)
>
> 09/08/31 18:26:09 WARN hdfs.DFSClient: Error Recovery for block blk_7193173823538206978_1001
bad datanode[2] nodes == null
> 09/08/31 18:26:09 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/umer/8GB_input"
– Aborting…
> put: Bad connect ack with firstBadLink 192.168.1.16:50010

Fix:
I have resolved the issue:
What I did:

1) '/etc/init.d/iptables stop' --> stopped the firewall
2) SELINUX=disabled in '/etc/selinux/config' --> disabled selinux
It worked for me after these two changes.

How to fix jline.ConsoleReader.readLine not working on Windows
In the main() function of CliDriver.java there is a statement, reader.readLine, that reads from standard input, but on Windows it always returns null. The reader is a jline.ConsoleReader instance, which makes debugging in Eclipse on Windows inconvenient.
We can replace it with java.util.Scanner. Replace the original
while ((line=reader.readLine(curPrompt+"> ")) != null)
with:
Scanner sc = new Scanner(System.in);
while ((line=sc.nextLine()) != null)
Recompile and redeploy, and SQL statements are read normally from standard input again.

Possible causes of the "does not have a scheme" error when debugging Hive in Eclipse on Windows
1. The hive.metastore.local property in the Hive config file is set to false; change it to true, since this is a single-machine setup.
2. The HIVE_HOME environment variable is not set, or is set incorrectly.
3. "does not have a scheme" most likely means hive-default.xml cannot be found. For how to fix the missing hive-default.xml when debugging Hive with Eclipse, see: http://bbs.Hadoopor.com/thread-292-1-1.html

1. Chinese character problems
Chinese text parsed out of a URL still prints as garbage in Hadoop? We used to think Hadoop did not support Chinese at all; after reading the source code it turns out Hadoop simply cannot output Chinese in GBK.
The code below is from TextOutputFormat.class. Hadoop's default outputs all inherit from FileOutputFormat, which has two subclasses: one for binary stream output and one for text output, TextOutputFormat.
public class TextOutputFormat<K, V> extends FileOutputFormat<K, V> {
protected static class LineRecordWriter<K, V>
implements RecordWriter<K, V> {
private static final String utf8 = "UTF-8"; // hard-coded to utf-8 here
private static final byte[] newline;
static {
try {
newline = "\n".getBytes(utf8);
} catch (UnsupportedEncodingException uee) {
throw new IllegalArgumentException("can't find " + utf8 + " encoding");
}
}

public LineRecordWriter(DataOutputStream out, String keyValueSeparator) {
this.out = out;
try {
this.keyValueSeparator = keyValueSeparator.getBytes(utf8);
} catch (UnsupportedEncodingException uee) {
throw new IllegalArgumentException("can't find " + utf8 + " encoding");
}
}

private void writeObject(Object o) throws IOException {
if (o instanceof Text) {
Text to = (Text) o;
out.write(to.getBytes(), 0, to.getLength()); // this line also needs changing
} else {
out.write(o.toString().getBytes(utf8));
}
}

}
As you can see, Hadoop's default output is hard-coded to utf-8. So if the Chinese text was decoded correctly, setting the Linux client's character encoding to utf-8 will display it properly, because Hadoop wrote it out as utf-8.
Most databases, however, define their fields in GBK. What if you want Hadoop to output Chinese in GBK to stay compatible with the database?
We can define a new class:
public class GbkOutputFormat<K, V> extends FileOutputFormat<K, V> {
protected static class LineRecordWriter<K, V>
implements RecordWriter<K, V> {
// simply use gbk here
private static final String gbk = "gbk";
private static final byte[] newline;
static {
try {
newline = "\n".getBytes(gbk);
} catch (UnsupportedEncodingException uee) {
throw new IllegalArgumentException("can't find " + gbk + " encoding");
}
}

public LineRecordWriter(DataOutputStream out, String keyValueSeparator) {
this.out = out;
try {
this.keyValueSeparator = keyValueSeparator.getBytes(gbk);
} catch (UnsupportedEncodingException uee) {
throw new IllegalArgumentException("can't find " + gbk + " encoding");
}
}

private void writeObject(Object o) throws IOException {
if (o instanceof Text) {
// Text to = (Text) o;
// out.write(to.getBytes(), 0, to.getLength());
// } else {
out.write(o.toString().getBytes(gbk));
}
}

}
Then add conf1.setOutputFormat(GbkOutputFormat.class) to the MapReduce code
and the Chinese output will be written in GBK.

2. A MapReduce job that normally runs fine suddenly throws:

java.io.IOException: All datanodes xxx.xxx.xxx.xxx:xxx are bad. Aborting…
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2158)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1400(DFSClient.java:1735)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1889)
java.io.IOException: Could not get block locations. Aborting…
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2143)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1400(DFSClient.java:1735)
at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1889)
It turned out the cause was that the Linux machines had too many open files. The command ulimit -n shows that the default limit on open files is 1024. Edit /etc/security/limits.conf and add a line such as: hadoop soft nofile 65535

Then rerun the program (ideally after changing all the datanodes) and the problem is solved.

3. After running for a while, Hadoop can no longer be stopped with stop-all.sh; it reports:
no tasktracker to stop, no datanode to stop
The cause is that when stopping, Hadoop looks up the mapred and dfs daemons by the process IDs recorded on the nodes. By default those PID files are kept under /tmp, and Linux periodically (roughly every month or every 7 days) cleans out files in that directory. Once files such as hadoop-hadoop-jobtracker.pid and hadoop-hadoop-namenode.pid have been deleted, the stop scripts can no longer find the corresponding processes.
Setting export HADOOP_PID_DIR in the config file (hadoop-env.sh) to a persistent directory solves the problem.

Problem:
Incompatible namespaceIDs in /usr/local/hadoop/dfs/data: namenode namespaceID = 405233244966; datanode namespaceID = 33333244
Cause:
Every time hadoop namenode -format runs, it generates a new namespaceID for the NameNode, but the DataNode data under hadoop.tmp.dir keeps the previous namespaceID. Because the namespaceIDs no longer match, the DataNode cannot start. So before each hadoop namenode -format, delete the hadoop.tmp.dir directory first and the DataNode will start successfully. Note that this means deleting the local directory hadoop.tmp.dir points to, not an HDFS directory.

Problem: Storage directory does not exist
2010-02-09 21:37:53,203 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory D:\hadoop\run\dfs_name_dir does not exist.
2010-02-09 21:37:53,203 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory D:\hadoop\run\dfs_name_dir is in an inconsistent state: storage directory does not exist or is not accessible.
Solution: the storage directory D:\hadoop\run\dfs_name_dir does not exist; simply create the directory by hand.
Problem: NameNode is not formatted
Solution: HDFS has not been formatted yet; run hadoop namenode -format once and then start it again.

 

bin/hadoop jps throws the following exception:
Exception in thread "main" java.lang.NullPointerException
at sun.jvmstat.perfdata.monitor.protocol.local.LocalVmManager.activeVms(LocalVmManager.java:127)
at sun.jvmstat.perfdata.monitor.protocol.local.MonitoredHostProvider.activeVms(MonitoredHostProvider.java:133)
at sun.tools.jps.Jps.main(Jps.java:45)
Cause:
The system /tmp directory was deleted. Recreate the /tmp directory and the problem goes away.
The "unable to create log directory /tmp/..." error from bin/hive may have the same cause.

