A Guide to Configuring Ganglia for Hadoop and HBase (metrics1)
On the server, install Ganglia, the web front end, and their dependencies:
yum install rrdtool ganglia ganglia-gmetad ganglia-gmond ganglia-web httpd php
On each client node, only gmond is needed:
yum install ganglia-gmond
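A quick way to confirm the packages are in place before configuring anything (query only ganglia-gmond on the client nodes):
rpm -q rrdtool ganglia ganglia-gmetad ganglia-gmond ganglia-web httpd php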
Create the RRD directory and hand it over to the ganglia user:
mkdir -p /var/lib/ganglia/rrds
chown ganglia:ganglia /var/lib/ganglia/rrds
Edit /etc/ganglia/gmond.conf:
cluster {
  name = "DFS"
  owner = "unspecified"
  latlong = "unspecified"
  url = "unspecified"
}
udp_send_channel {
  #bind_hostname = yes # Highly recommended, soon to be default.
  # This option tells gmond to use a source address
  # that resolves to the machine's hostname. Without
  # this, the metrics may appear to come from any
  # interface and the DNS names associated with
  # those IPs will be used to create the RRDs.
  mcast_join = master.hadoop.test
  port = 8649
  ttl = 1
}
/* You can specify as many udp_recv_channels as you like as well. */
udp_recv_channel {
  port = 8649
  bind = master.hadoop.test
}
/* You can specify as many tcp_accept_channels as you like to share
   an xml description of the state of the cluster */
tcp_accept_channel {
  bind = master.hadoop.test
  port = 8649
}
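Every node reports through gmond, and the client nodes installed earlier can reuse this same gmond.conf; a minimal way to push it out (slave1.hadoop.test is a placeholder for your worker hostnames):
scp /etc/ganglia/gmond.conf slave1.hadoop.test:/etc/ganglia/gmond.conf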
Edit /etc/ganglia/gmetad.conf:
data_source "DFS" master.hadoop.test:8649
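gmetad polls the hosts listed after a data_source name in order, so more than one gmond can be named for fault tolerance; for example (the second host is hypothetical):
data_source "DFS" master.hadoop.test:8649 slave1.hadoop.test:8649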
Install ganglia-web:
cd /var/www/html
wget http://softlayer-dal.dl.sourceforge.net/project/ganglia/ganglia-web/3.5.10/ganglia-web-3.5.10.tar.gz
tar zxvf ganglia-web-3.5.10.tar.gz
mv ganglia-web-3.5.10 ganglia
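ganglia-web keeps its defaults in conf_default.php and reads local overrides from a conf.php in the same directory. If your gmetad data does not live in the default location, a sketch of an override file follows; the $conf keys shown follow the conf_default.php shipped with 3.5.x, so verify them against your copy:
cat > /var/www/html/ganglia/conf.php <<'EOF'
<?php
# Override only what differs from conf_default.php
$conf['gmetad_root'] = "/var/lib/ganglia";
$conf['rrds'] = "/var/lib/ganglia/rrds";
EOF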
Modify the Apache configuration:
vim /etc/httpd/conf.d/ganglia.conf
<Location /ganglia>
Order deny,allow
Allow from all
</Location>
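The Order/Allow directives above are httpd 2.2 syntax (the CentOS 6 default); if you are running httpd 2.4, the equivalent access rule would be:
<Location /ganglia>
Require all granted
</Location>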
# Run the gmond collection daemon at boot
chkconfig --level 235 gmond on
# Run the gmetad data storage daemon at boot
chkconfig --level 235 gmetad on
# Run the Apache service at boot
chkconfig --level 235 httpd on
Start the services:
service gmond start
service gmetad start
service httpd restart
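A quick sanity check that the daemons are listening and data is flowing (8649 is gmond's tcp_accept_channel, 8651 is gmetad's default xml_port; the DFS directory takes its name from the cluster configured above, and nc is only needed for the middle check):
netstat -tlnp | grep -E '8649|8651'
nc master.hadoop.test 8649 | head   # gmond should dump the cluster state as XML
ls /var/lib/ganglia/rrds/DFS        # per-host RRD directories should appear within a minute or two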
Modify the Hadoop configuration:
vim $HADOOP_HOME/conf/hadoop-metrics.properties
# Configuration of the "dfs" context for ganglia
dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
dfs.period=10
dfs.servers=master.hadoop.test:8649
# Configuration of the "mapred" context for ganglia
mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
mapred.period=10
mapred.servers=master.hadoop.test:8649
# Configuration of the "jvm" context for ganglia
jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
jvm.period=10
jvm.servers=master.hadoop.test:8649
rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
rpc.period=10
rpc.servers=master.hadoop.test:8649
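hadoop-metrics.properties must be present on every node that runs a Hadoop daemon, so copy it out before restarting; a sketch, assuming the same $HADOOP_HOME path everywhere (the slave hostnames are placeholders):
for h in slave1.hadoop.test slave2.hadoop.test; do
  scp $HADOOP_HOME/conf/hadoop-metrics.properties $h:$HADOOP_HOME/conf/
done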
Restart Hadoop so the new metrics configuration takes effect.
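With the conf/ layout used above (Hadoop 1.x), a full restart could be as simple as the following; adjust if you restart the daemons individually:
$HADOOP_HOME/bin/stop-all.sh
$HADOOP_HOME/bin/start-all.sh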
Modify the HBase configuration (typically $HBASE_HOME/conf/hadoop-metrics.properties):
hbase.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
hbase.period=10
hbase.servers=master.hadoop.test:8649
jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
jvm.period=10
jvm.servers=master.hadoop.test:8649
rpc.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
rpc.period=10
rpc.servers=master.hadoop.test:8649
rest.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
rest.period=10
rest.servers=master.hadoop.test:8649
Restart HBase.
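For example, with the standard HBase scripts:
$HBASE_HOME/bin/stop-hbase.sh
$HBASE_HOME/bin/start-hbase.sh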
Open http://${ganglia_home}/ganglia in a browser to view the dashboards.
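A quick command-line check that the web front end responds (assuming it runs on master.hadoop.test):
curl -I http://master.hadoop.test/ganglia/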