After ElasticSearch was set up in the previous post, we checked the ES service through the elasticsearch-head plugin and found that cluster-health showed YELLOW.
Why?
First, we need to know that the color represents the cluster's health status. The possible colors are:
* RED: some primary shards have not yet been allocated in the cluster;
* YELLOW: all primary shards are allocated, but some replicas are not;
* GREEN: all shards are allocated and the cluster is operating normally.
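Besides the head plugin, the same status can be read from the cluster health API (a quick check; adjust the host to wherever your ES instance listens, e.g. localhost:9200):

```bash
# On a single node, "status" comes back "yellow" and "unassigned_shards"
# is non-zero, because replica shards have no second node to live on.
curl -s 'http://localhost:9200/_cluster/health?pretty'
```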
With that overview in mind, since the YELLOW status here is simply the result of running a single node, it can be improved, so I set out to build an **ES Cluster**.
First, we need to update the earlier configuration. Go to the directory where ES was extracted and open the config file:
vim elasticsearch.yml
We need to update the following parameters:
- node.name: the name of the current node. I have two machines and can therefore run two nodes, so I name one master and the other node1;
- node.master: whether the node is master-eligible. Set it to true on one node and false on the other;
- discovery.zen.ping.unicast.hosts: the list of node IPs; alternatively, add IP mappings to the hosts file and list the hostnames directly;
- discovery.zen.minimum_master_nodes: I have only one master-eligible node, so I set this to 1 (the rule is master-eligible nodes / 2 + 1; with three master-eligible nodes it would be 3/2 + 1 = 2). See the reference docs on discovery.zen.minimum_master_nodes for details.
After the master node is configured, copy the file to node1 with scp and adjust the node-specific parameters, as sketched below: change node.name to node1 and node.master to false. Everything else stays the same.
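A minimal sketch, assuming ES was extracted to /opt/elasticsearch on both machines, a user named es exists there, and node1 is reachable at 192.168.1.54 (path and user are placeholders; substitute your own):

```bash
# Hypothetical path and user: copy the master's config over to node1
scp /opt/elasticsearch/config/elasticsearch.yml es@192.168.1.54:/opt/elasticsearch/config/

# Then edit the copy on node1:
#   node.name: node1
#   node.master: false
#   network.host: 192.168.1.54
```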
My current master node configuration looks like this:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: my-application
cluster.name: es-demo
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
#node.name: node-1
node.name: master
node.master: true
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
path.data: /data/elasticsearch
#
# Path to log files:
#
#path.logs: /path/to/logs
path.logs: /data/logs/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
# bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.1.58 # on node1, change this to its own IP
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["192.168.1.58", "192.168.1.54"]
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
discovery.zen.minimum_master_nodes: 1
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
# ------------------------------- Other settings -------------------------------
http.cors.enabled: true
http.cors.allow-origin: "*"
With the configuration updated, it is time to start the ElasticSearch service on both machines. As a reminder, the startup command looks like this (assuming you run it from the extraction directory; note that ES typically refuses to start as the root user, so use a regular account):
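```bash
# Start Elasticsearch as a daemon on each node
./bin/elasticsearch -d
```

When I started master and node1, the following error appeared: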
max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]
A quick search shows that this error occurs because the system's default max_map_count is too low, so we need to raise it (the error message itself says as much). Set the parameter as follows:
sudo vim /etc/sysctl.conf
Append at the very end:
vm.max_map_count=262144
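To make the change take effect without a reboot, reload the sysctl settings (standard sysctl usage):

```bash
# Reload /etc/sysctl.conf so the new limit applies immediately
sudo sysctl -p

# Verify the value
sysctl vm.max_map_count    # should print: vm.max_map_count = 262144
```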
Once both machines are configured, restart elasticsearch. This time the log reports:
Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[movies][0]] ...])
Done. Refresh elasticsearch-head, and the cluster appears as shown in the figure below:
At this point the cluster environment is set up, and the cluster status has gone from YELLOW to GREEN.
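You can also confirm from the command line that both nodes have joined, using the standard _cat API (adjust the host/IP to your own):

```bash
# List the nodes in the cluster; the elected master is marked with *
curl -s 'http://192.168.1.58:9200/_cat/nodes?v'
```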
