I. Setting up an Elasticsearch cluster on Linux:
Note: create a dedicated non-root user for the installation. Do not install as root; Elasticsearch refuses to start as root and will error out.
1. Download the elasticsearch-7.12.1-linux-x86_64.tar.gz package from the official site
Download page: https://www.elastic.co/cn/downloads/elasticsearch
Choose the LINUX X86_64 package (sha and asc files are available there for verification).
2. Upload the package to a directory on the Linux server
3. Extract it: tar -zxvf elasticsearch-7.12.1-linux-x86_64.tar.gz -C <install directory>
4. Create a symlink: ln -s elasticsearch-7.12.1 elasticsearch
5. Configure environment variables
sudo vim /etc/profile
Note: at this point `elasticsearch -d` will fail to start if the environment still points at JDK 1.8, which is too old; Elasticsearch 7.12 requires JDK 11 or newer. Elasticsearch ships with a bundled JDK (16), so the simplest fix is to use the bundled one.
6. Configure the JDK environment variable for ES
Point it at the jdk directory inside the ES install, e.g. /home/hadoop/apps/elasticsearch/jdk/bin
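For reference, the lines appended to /etc/profile might look like the following. This is a sketch: the install prefix is the one used elsewhere in this note, and ES_JAVA_HOME is the variable Elasticsearch 7.12+ reads to locate a custom JDK, so pointing it at the bundled jdk/ sidesteps the system JDK 8:

```shell
# Lines appended to /etc/profile (run `source /etc/profile` afterwards).
# The install prefix is an assumption matching the examples in this note.
export ES_HOME=/home/hadoop/apps/elasticsearch
export ES_JAVA_HOME=$ES_HOME/jdk          # the bundled JDK
export PATH=$ES_JAVA_HOME/bin:$ES_HOME/bin:$PATH
```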
7. Create the data and logs directories
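With the same install prefix as elsewhere in this note, for example:

```shell
# Create the data and log directories referenced by elasticsearch.yml
# (paths are the ones used throughout this note; adjust as needed).
mkdir -p /home/hadoop/apps/elasticsearch/data \
         /home/hadoop/apps/elasticsearch/logs
```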
8. Configure the cluster
Go to <ES install directory>/config and edit elasticsearch.yml:
# ---------------------------------- Cluster -----------------------------------
# Use a descriptive name for your cluster:
cluster.name: myes
# ------------------------------------ Node ------------------------------------
# Use a descriptive name for the node:
node.name: node-1
# Add custom attributes to the node:
#node.attr.rack: r1
# ----------------------------------- Paths ------------------------------------
# Path to directory where to store the data (separate multiple locations by comma):
path.data: /home/hadoop/apps/elasticsearch/data/
# Path to log files:
path.logs: /home/hadoop/apps/elasticsearch/logs/
# ----------------------------------- Memory -----------------------------------
# Lock the memory on startup:
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
# Elasticsearch performs poorly when the system is swapping the memory.
# ---------------------------------- Network -----------------------------------
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
network.host: 192.168.19.10
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
http.port: 9200
# For more information, consult the network module documentation.
# --------------------------------- Discovery ----------------------------------
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
discovery.seed_hosts: ["192.168.19.10", "192.168.19.11","192.168.19.12"]
# Bootstrap the cluster using an initial set of master-eligible nodes:
cluster.initial_master_nodes: ["node-1"]
# For more information, consult the discovery and cluster formation module documentation.
# ---------------------------------- Various -----------------------------------
# Require explicit names when deleting indices:
#action.destructive_requires_name: true
node.master: true
Notes: cluster.name is the cluster name and must be identical on every node of one cluster; cluster.initial_master_nodes together with node.master: true is what lets the cluster elect a master. (In 7.x, node.master is a legacy setting superseded by node.roles, but the legacy form still works in 7.12.)
9. Distribute the elasticsearch install to every node in the cluster
Then change node.name and network.host on each node; everything else stays the same.
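The distribution step can be scripted. A sketch, where the hostnames hadoop11/hadoop12 for 192.168.19.11/12 are assumptions and passwordless ssh between nodes is assumed:

```shell
# Copy the configured install to the other nodes and recreate the symlink.
for h in hadoop11 hadoop12; do
  rsync -a /home/hadoop/apps/elasticsearch-7.12.1 "$h":/home/hadoop/apps/
  ssh "$h" "ln -sfn /home/hadoop/apps/elasticsearch-7.12.1 /home/hadoop/apps/elasticsearch"
done
# Then, on each node, edit config/elasticsearch.yml:
#   192.168.19.11 -> node.name: node-2, network.host: 192.168.19.11
#   192.168.19.12 -> node.name: node-3, network.host: 192.168.19.12
```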
Note: starting it then produced a pile of errors:
ERROR: [5] bootstrap checks failed. You must address the points described in the following [5] lines before starting Elasticsearch.
bootstrap check failure [1] of [5]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
bootstrap check failure [2] of [5]: memory locking requested for elasticsearch process but memory is not locked
bootstrap check failure [3] of [5]: max number of threads [1024] for user [hadoop] is too low, increase to at least [4096]
bootstrap check failure [4] of [5]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
bootstrap check failure [5] of [5]: system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
These are Elasticsearch bootstrap checks, which become hard errors once the node binds to a non-loopback address. Most of them fire because the install runs as a regular user rather than root, and that user's default OS limits are too low, so the fixes below need root privileges:
1. sudo vim /etc/security/limits.conf
* hard nofile 65536
* soft nofile 65536
2. Edit elasticsearch.yml in the install's config directory (for a tar install it lives there, not in /etc/elasticsearch) and make sure memory locking is disabled; with it set to false this check passes:
bootstrap.memory_lock: false
3. sudo vim /etc/security/limits.d/90-nproc.conf (on some distributions the file is named differently, e.g. 20-nproc.conf on CentOS 7)
* soft nproc 4096
root soft nproc unlimited
4. sudo vim /etc/sysctl.conf
# add the following parameter (the bootstrap check asks for at least 262144)
vm.max_map_count = 262144
Then run sudo sysctl -p /etc/sysctl.conf to apply it; because it lives in /etc/sysctl.conf, the change is permanent.
5. Edit config/elasticsearch.yml under the install directory and, in the Memory section, add:
bootstrap.system_call_filter: false
(the sample config above already contains this line; just make sure it is present on every node)
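After these changes, log out and back in (the limits.conf and limits.d changes only apply to new sessions), then verify that the limits now meet the bootstrap-check minimums quoted above:

```shell
# Run as the elasticsearch user, in a fresh login session:
ulimit -n                   # max open files; expect >= 65535
ulimit -u                   # max user processes; expect >= 4096
sysctl -n vm.max_map_count  # expect >= 262144
```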
10. Start it, ok
bin/elasticsearch -d   # background start; there is no built-in group start/stop command, you have to write your own shell script.
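The missing group start/stop can be sketched as a small helper like this one. Hostnames (hadoop10-12) and the install path are assumptions matching this note, and passwordless ssh between the nodes is assumed:

```shell
#!/bin/bash
# Minimal group start/stop helper for the three-node cluster -- a sketch;
# adjust hosts and paths for your environment.
ES_HOSTS="hadoop10 hadoop11 hadoop12"
ES_HOME=/home/hadoop/apps/elasticsearch

es_cluster() {
  local action=$1 h
  for h in $ES_HOSTS; do
    case "$action" in
      # -d daemonizes; -p writes a pid file we can use to stop it later
      start) ssh "$h" "$ES_HOME/bin/elasticsearch -d -p $ES_HOME/es.pid" ;;
      stop)  ssh "$h" "kill \$(cat $ES_HOME/es.pid)" ;;
      *)     echo "usage: es_cluster {start|stop}" >&2; return 1 ;;
    esac
    echo "$action sent to $h"
  done
}
```

Source the script, then run `es_cluster start` or `es_cluster stop`.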
Request http://hadoop10:9200/_cat/nodes?v&pretty in a browser; the result looks like this:
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.19.11 43 94 1 0.24 0.06 0.02 cdfhilmrstw - node-2
192.168.19.12 28 95 35 0.34 0.12 0.04 cdfhilmrstw - node-3
192.168.19.10 62 53 1 0.00 0.00 0.00 cdfhilmrstw * node-1
Note: a master must be elected (the * in the master column marks it); otherwise the cluster is unusable, even though all the ES processes are running.
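A quick way to confirm that a master was elected and the cluster is usable is the cluster health API; a sketch, assuming the same hostname as above:

```shell
# "status" should be green (yellow can be acceptable on a fresh cluster);
# red, or no response at all, means the cluster is not usable / no master.
curl -s 'http://hadoop10:9200/_cluster/health?pretty'
```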
II. Installing cerebro, a visual cluster management tool
cerebro not only gives you a console for requests (POST, GET, PUT, DELETE, etc.), it also monitors the cluster and gives a clear view of how indices are distributed, and more.
1. Download from GitHub
Download page: https://github.com/lmenezes/cerebro/releases
Pick a release; the latest stable one at the time was v0.9.3, I went with v0.8.5. The zip and tar.gz packages are interchangeable; the contents are identical.
2. Upload it to one of the servers and extract it to the install directory
3. Create the data and logs directories
4. vim conf/application.conf
data.path = "/home/hadoop/apps/cerebro/data/cerebro.db"
hosts = [
  {
    host = "http://hadoop10:9200"  // URL of any one node of the ES cluster
    name = "myes"                  // display name; it does not have to match the ES cluster.name
    # headers-whitelist = [ "x-proxy-user", "x-proxy-roles", "X-Forwarded-For" ]
  }
]
Note: you can skip this configuration entirely and just start cerebro; by default it binds to localhost on port 9000, and you type the ES URL into the web UI when connecting. With hosts configured, you can connect by simply picking the name.
5. Start it
命令:nohup /home/hadoop/apps/cerebro/bin/cerebro -Dhttp.port=9201 -Dhttp.address=192.168.19.10 >> /home/hadoop/apps/cerebro/logs/application.log 2>&1 &
bin/cerebro -Dhttp.port=9201 -Dhttp.address=192.168.19.10 >> logs/application.log 2>&1 &
nohup bin/cerebro >> logs/application.log 2>&1 &
bin/cerebro >> logs/application.log 2>&1 &
bin/cerebro -Dhttp.port=9201 -Dhttp.address=192.168.19.10
Any one of the above commands will do.
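If you use the long nohup form often, it can be wrapped in a tiny script. CEREBRO_HOME is a hypothetical variable introduced for illustration; the default path, bind address and port are the ones used in this note:

```shell
#!/bin/bash
# cerebro-start.sh -- wraps the nohup invocation so starting is one command.
# CEREBRO_HOME is a hypothetical variable; override it for other layouts.
CEREBRO_HOME=${CEREBRO_HOME:-/home/hadoop/apps/cerebro}
nohup "$CEREBRO_HOME/bin/cerebro" \
  -Dhttp.port=9201 -Dhttp.address=192.168.19.10 \
  >> "$CEREBRO_HOME/logs/application.log" 2>&1 &
echo "cerebro starting, pid $!"
```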
6. Log in via the web UI
If the ES cluster is unhealthy, the connection fails with a 503 error; fix the cluster first, then connect again.