At last week's Shanghai Gopher Meetup I heard a talk by ASTA Xie. My company also happens to need a centralized log analysis platform, and ASTA Xie mentioned that he uses the Elasticsearch + Logstash + Kibana combination for log analysis. After I got back I bought a book, did plenty of googling, and got it all configured. Of course, I only have the skeleton up; there are still many features of these three components I haven't explored. This article is just a brief walkthrough of configuring ELK on CentOS (the company servers run CentOS; personally I prefer Ubuntu, haha).
What is ELK:
Elasticsearch + Logstash + Kibana (ELK) is an open-source log management stack. To analyze website traffic we usually embed a JavaScript counter such as Google Analytics, Baidu Tongji, or CNZZ, but when the site misbehaves or is under attack we need to analyze the actual backend logs, such as Nginx's. Tools like Nginx log rotation, GoAccess, and AWStats are relatively simple single-node solutions; they fall short for distributed clusters or large data volumes, and that is where ELK lets us face the challenge with confidence.
- Logstash: collects, processes, and stores logs
- Elasticsearch: indexes and searches the logs
- Kibana: visualizes the logs
Official sites:
JDK – http://www.oracle.com/technetwork/java/javase/downloads/index.html
Elasticsearch – https://www.elastic.co/downloads/elasticsearch
Logstash – https://www.elastic.co/downloads/logstash
Kibana – https://www.elastic.co/downloads/kibana
Nginx – https://www.nginx.com/
Server-side setup:
Install the Java JDK:
cat /etc/redhat-release
# This is the Linux release on my machine
CentOS Linux release 7.1.1503 (Core)
# Install the OpenJDK via yum
yum install java-1.7.0-openjdk
Installing Elasticsearch:
# Download and install
wget https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.7.1.noarch.rpm
yum localinstall elasticsearch-1.7.1.noarch.rpm
# Start the service and check its status
service elasticsearch start
service elasticsearch status
# List Elasticsearch's configuration files
rpm -qc elasticsearch
/etc/elasticsearch/elasticsearch.yml
/etc/elasticsearch/logging.yml
/etc/init.d/elasticsearch
/etc/sysconfig/elasticsearch
/usr/lib/sysctl.d/elasticsearch.conf
/usr/lib/systemd/system/elasticsearch.service
/usr/lib/tmpfiles.d/elasticsearch.conf
# Check which ports are listening
netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1817/master
tcp 0 0 0.0.0.0:5601 0.0.0.0:* LISTEN 27369/node
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 31848/nginx: master
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 16567/sshd
tcp6 0 0 127.0.0.1:8005 :::* LISTEN 8263/java
tcp6 0 0 :::5000 :::* LISTEN 2771/java
tcp6 0 0 :::8009 :::* LISTEN 8263/java
tcp6 0 0 :::3306 :::* LISTEN 28839/mysqld
tcp6 0 0 :::80 :::* LISTEN 31848/nginx: master
tcp6 0 0 :::8080 :::* LISTEN 8263/java
tcp6 0 0 :::9200 :::* LISTEN 25808/java
tcp6 0 0 :::9300 :::* LISTEN 25808/java
tcp6 0 0 :::9301 :::* LISTEN 2771/java
tcp6 0 0 :::22 :::* LISTEN 16567/sshd
If port 9200 shows up, the installation succeeded. We can test it from the terminal:
# Test access
curl -X GET http://localhost:9200/
or open the URL in a browser; either way we should see:
{
  status: 200,
  name: "Pip the Troll",
  cluster_name: "elasticsearch",
  version: {
    number: "1.7.2",
    build_hash: "e43676b1385b8125d647f593f7202acbd816e8ec",
    build_timestamp: "2015-09-14T09:49:53Z",
    build_snapshot: false,
    lucene_version: "4.10.4"
  },
  tagline: "You Know, for Search"
}
which confirms the service is running normally.
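For scripted health checks, the same information can be extracted non-interactively. A minimal sketch; the sample response below is hard-coded to mirror the output above, whereas in practice it would come from `curl -s http://localhost:9200/`:

```shell
# Sample response body, standing in for: resp=$(curl -s http://localhost:9200/)
resp='{"status":200,"name":"Pip the Troll","cluster_name":"elasticsearch","version":{"number":"1.7.2"},"tagline":"You Know, for Search"}'

# Pull the status code and version number out with sed capture groups
status=$(echo "$resp" | sed -E 's/.*"status":([0-9]+).*/\1/')
version=$(echo "$resp" | sed -E 's/.*"number":"([^"]+)".*/\1/')

echo "status=$status version=$version"
[ "$status" = "200" ] && echo "Elasticsearch is up"
```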
Installing Kibana:
# Download the tarball
wget https://download.elastic.co/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz
# Extract it
tar zxf kibana-4.1.1-linux-x64.tar.gz -C /usr/local/
cd /usr/local/
mv kibana-4.1.1-linux-x64 kibana
# Create a kibana init script
vim /etc/rc.d/init.d/kibana
#!/bin/bash
### BEGIN INIT INFO
# Provides: kibana
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Runs kibana daemon
# Description: Runs the kibana daemon as a non-root user
### END INIT INFO

# Process name
NAME=kibana
DESC="Kibana4"
PROG="/etc/init.d/kibana"

# Configure location of Kibana bin
KIBANA_BIN=/usr/local/kibana/bin

# PID Info
PID_FOLDER=/var/run/kibana/
PID_FILE=/var/run/kibana/$NAME.pid
LOCK_FILE=/var/lock/subsys/$NAME
PATH=/bin:/usr/bin:/sbin:/usr/sbin:$KIBANA_BIN
DAEMON=$KIBANA_BIN/$NAME

# Configure User to run daemon process
DAEMON_USER=root

# Configure logging location
KIBANA_LOG=/var/log/kibana.log

# Begin Script
RETVAL=0

if [ "$(id -u)" -ne 0 ]; then
    echo "You need root privileges to run this script"
    exit 1
fi

# Function library
. /etc/init.d/functions

start() {
    echo -n "Starting $DESC : "
    pid=$(pidofproc -p $PID_FILE kibana)
    if [ -n "$pid" ]; then
        echo "Already running."
        exit 0
    else
        # Start Daemon
        if [ ! -d "$PID_FOLDER" ]; then
            mkdir $PID_FOLDER
        fi
        daemon --user=$DAEMON_USER --pidfile=$PID_FILE $DAEMON 1>"$KIBANA_LOG" 2>&1 &
        sleep 2
        pidofproc node > $PID_FILE
        RETVAL=$?
        [ $RETVAL -eq 0 ] && success || failure
        echo
        [ $RETVAL = 0 ] && touch $LOCK_FILE
        return $RETVAL
    fi
}

reload()
{
    echo "Reload command is not implemented for this service."
    return $RETVAL
}

stop() {
    echo -n "Stopping $DESC : "
    killproc -p $PID_FILE $DAEMON
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && rm -f $PID_FILE $LOCK_FILE
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    status)
        status -p $PID_FILE $DAEMON
        RETVAL=$?
        ;;
    restart)
        stop
        start
        ;;
    reload)
        reload
        ;;
    *)
        # Invalid arguments, print the usage message.
        echo "Usage: $0 {start|stop|status|restart}" >&2
        exit 2
        ;;
esac
# Make the script executable
chmod +x /etc/rc.d/init.d/kibana
# Start the kibana service and check its status
service kibana start
service kibana status
# Check the ports
netstat -nltp
Since we already ran netstat -nltp above, I won't paste the output again; if port 5601 shows up, the installation succeeded.
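Rather than eyeballing netstat, the wait for the port can be scripted. A small sketch using bash's built-in /dev/tcp pseudo-device (the function name and retry count are my own; requires bash, not plain sh):

```shell
#!/bin/bash
# Return 0 once host:port accepts TCP connections, 1 after `tries` seconds.
wait_for_port() {
    local host=$1 port=$2 tries=${3:-10}
    local i
    for ((i = 0; i < tries; i++)); do
        # bash opens /dev/tcp/HOST/PORT as a TCP connection; the subshell
        # closes the descriptor again immediately
        if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
            return 0
        fi
        sleep 1
    done
    return 1
}

# Usage: wait_for_port 127.0.0.1 5601 30 && echo "Kibana is up"
```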
Option 1:Generate SSL Certificates:
Generating an SSL certificate lets the server and clients authenticate each other:
sudo vi /etc/pki/tls/openssl.cnf
Find the [v3_ca] section in the file, and add this line under it (substituting in the Logstash server's private IP address):
subjectAltName = IP: logstash_server_private_ip
cd /etc/pki/tls
sudo openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
Option 2: FQDN (DNS):
cd /etc/pki/tls
sudo openssl req -subj '/CN=logstash_server_fqdn/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
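Before touching /etc/pki, the Option 2 command can be dry-run in a scratch directory to see what it produces. A sketch, where the FQDN logstash.example.com and the temp paths are illustrative:

```shell
# Generate the same kind of self-signed certificate into a temp dir
tmp=$(mktemp -d)
openssl req -subj '/CN=logstash.example.com/' -x509 -days 3650 -batch -nodes \
    -newkey rsa:2048 -keyout "$tmp/logstash-forwarder.key" \
    -out "$tmp/logstash-forwarder.crt" 2>/dev/null

# Inspect the subject and expiry to confirm the CN took effect
openssl x509 -in "$tmp/logstash-forwarder.crt" -noout -subject -enddate
```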
Installing Logstash:
Logstash Forwarder (client side):
Install Logstash Forwarder:
wget https://download.elastic.co/logstash-forwarder/binaries/logstash-forwarder-0.4.0-1.x86_64.rpm
yum localinstall logstash-forwarder-0.4.0-1.x86_64.rpm
# Locate logstash-forwarder's configuration file
rpm -qc logstash-forwarder
/etc/logstash-forwarder.conf
# Back up the configuration file
cp /etc/logstash-forwarder.conf /etc/logstash-forwarder.conf.save
# Edit /etc/logstash-forwarder.conf, adjusting the values to your environment
vim /etc/logstash-forwarder.conf
{
  "network": {
    "servers": ["your_logstash_server_ip:5000"],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": [
        "/var/log/messages",
        "/var/log/secure"
      ],
      "fields": {"type": "syslog"}
    }
  ]
}
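A malformed config is a common source of forwarder startup trouble, so it is worth validating the JSON after each edit. A quick sketch using Python's json module; the /tmp path stands in for /etc/logstash-forwarder.conf and the server address is a placeholder:

```shell
# Write a copy of the config (placeholder server address) and validate it
cat > /tmp/logstash-forwarder.conf <<'EOF'
{
  "network": {
    "servers": ["your_logstash_server_ip:5000"],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": ["/var/log/messages", "/var/log/secure"],
      "fields": {"type": "syslog"}
    }
  ]
}
EOF

# json.tool exits nonzero on a syntax error
python3 -m json.tool < /tmp/logstash-forwarder.conf > /dev/null \
    && echo "config is valid JSON" \
    || echo "config has a JSON syntax error" >&2
```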
Logstash Server (server side):
# Download the rpm package
wget https://download.elastic.co/logstash/logstash/packages/centos/logstash-1.5.4-1.noarch.rpm
# Install it
yum localinstall logstash-1.5.4-1.noarch.rpm
# Create a 01-logstash-initial.conf file
vim /etc/logstash/conf.d/01-logstash-initial.conf
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri {}
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
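To see what the syslog grok filter extracts, here is a rough shell approximation of two of its captures (hostname and program) run against a made-up /var/log/messages line; the real pattern also captures the timestamp, PID, and message:

```shell
# A made-up syslog line in the format the filter expects
line='Dec  2 10:20:30 web01 sshd[1234]: Accepted password for root from 10.0.0.5'

# SYSLOGHOST: the token right after the timestamp
host=$(echo "$line" | sed -E 's/^[A-Z][a-z]{2} +[0-9]+ [0-9:]{8} ([^ ]+) .*/\1/')
# DATA:syslog_program: the program name before the optional [pid]
prog=$(echo "$line" | sed -E 's/^[A-Z][a-z]{2} +[0-9]+ [0-9:]{8} [^ ]+ ([^ :[]+).*/\1/')

echo "syslog_hostname=$host syslog_program=$prog"
```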
# Start the logstash service and check its status
service logstash start
service logstash status
# Open Kibana at the URL below and choose @timestamp as the Time-field name. Do this only after finishing the Nginx log configuration in the next step; otherwise there is no data yet and the index pattern cannot be created
http://localhost:5601/
# Additional nodes use the same configuration as the client; remember to copy the certificate over (e.g. via SSH)
/etc/pki/tls/certs/logstash-forwarder.crt
Configuring the Nginx logs:
# Modify the client configuration
vim /etc/logstash-forwarder.conf
{
  "network": {
    "servers": ["your_logstash_server_ip:5000"],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  },
  "files": [
    {
      "paths": [
        "/var/log/messages",
        "/var/log/secure"
      ],
      "fields": {"type": "syslog"}
    }, {
      "paths": [
        "/app/local/nginx/logs/access.log"
      ],
      "fields": {"type": "nginx"}
    }
  ]
}
# On the server, add custom grok patterns
mkdir /opt/logstash/patterns
vim /opt/logstash/patterns/nginx
NGUSERNAME [a-zA-Z\.\@\-\+_\%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:remote_addr} - - \[%{HTTPDATE:time_local}\] "%{WORD:method} %{URIPATH:path}(?:%{URIPARAM:param})? HTTP/%{NUMBER:httpversion}" %{INT:status} %{INT:body_bytes_sent} %{QS:http_referer} %{QS:http_user_agent}
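Expanded, NGINXACCESS is just a regular expression over the access-log line. This sketch checks a simplified equivalent ERE against a made-up log line, which is a handy sanity check before reloading Logstash (the sample line and regex are my own approximation, not the exact grok expansion):

```shell
# A made-up nginx access-log line in the expected format
line='192.168.1.10 - - [02/Dec/2015:10:20:30 +0800] "GET /index.html HTTP/1.1" 200 612 "-" "curl/7.29.0"'

# Simplified equivalent of NGINXACCESS:
# ip - - [time] "method path HTTP/ver" status bytes "referer" "agent"
re='^[0-9.]+ - - \[[^]]+\] "[A-Z]+ [^ ]+ HTTP/[0-9.]+" [0-9]+ [0-9]+ "[^"]*" "[^"]*"$'

echo "$line" | grep -Eq "$re" && echo "line matches the NGINXACCESS-style pattern"
```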
# Give the logstash user ownership of the patterns directory
chown -R logstash:logstash /opt/logstash/patterns
# Modify the server configuration
vim /etc/logstash/conf.d/01-logstash-initial.conf
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri {}
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
  if [type] == "nginx" {
    grok {
      match => { "message" => "%{NGINXACCESS}" }
    }
  }
}
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
Here is the result once everything is configured:
Well, it took me two days of fiddling to get this working, and I felt pretty slow. I'm writing this summary so that next time I can set it all up quickly.