Installing the latest ELK 7.7.0 on CentOS 7.4
Posted by 文艺范儿 on 2020-09-22 under linux / service installation; last edited 2020-10-30

I. Prepare the installation packages

1. Environment overview

OS: CentOS 7
ES master node / ES data node / kibana / head      192.168.3.11
ES master node / ES data node / filebeat           192.168.3.12
ES master node / ES data node / logstash           192.168.3.13

2. Download the packages

mkdir -p /home/deploy/elk && cd /home/deploy/elk
wget -c https://github.com/mobz/elasticsearch-head/archive/master.zip
wget -c https://mirrors.huaweicloud.com/elasticsearch/7.7.0/elasticsearch-7.7.0-linux-x86_64.tar.gz
wget -c https://mirrors.huaweicloud.com/kibana/7.7.0/kibana-7.7.0-linux-x86_64.tar.gz
wget -c https://mirrors.huaweicloud.com/logstash/7.7.0/logstash-7.7.0.tar.gz
wget -c https://mirrors.huaweicloud.com/filebeat/7.7.0/filebeat-7.7.0-linux-x86_64.tar.gz

II. Base environment setup (all nodes)

1. Disable the firewall and SELinux

systemctl stop firewalld && systemctl disable firewalld
sed -i 's/=enforcing/=disabled/g' /etc/selinux/config  && setenforce 0

2. Raise the open-file and process limits

vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096

vim /etc/sysctl.conf
vm.max_map_count=655360 #maximum number of memory-mapped areas per process; Elasticsearch requires at least 262144

sysctl -p

3. Runtime environments

Both of the following only need to be unpacked and put on the PATH to be usable.

Download and unpack JDK 11, then link it:

ln -s /home/deploy/jdk/bin/java /usr/bin/java

Install the Node.js environment, then link it:

ln -s /home/deploy/node/bin/node /usr/bin/node
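As an alternative to per-binary symlinks, a login-shell profile snippet can put both toolchains on the PATH. The file name elk-env.sh is arbitrary, and the /home/deploy/jdk and /home/deploy/node locations are the unpack paths assumed above:

```shell
# /etc/profile.d/elk-env.sh -- assumed unpack locations from this guide
export JAVA_HOME=/home/deploy/jdk
export PATH=$JAVA_HOME/bin:/home/deploy/node/bin:$PATH
```

Every new login shell then sees java and node without further symlinks.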

III. Install elasticsearch-head (on 192.168.3.11)

1. Unpack head

cd /home/deploy/elk
unzip master.zip
mv elasticsearch-head-master /home/deploy/elasticsearch-head

2. Install the head plugin dependencies

cd /home/deploy/elasticsearch-head
npm install -g cnpm --registry=https://registry.npm.taobao.org
cnpm install -g grunt-cli
cnpm install -g grunt
cnpm install grunt-contrib-clean
cnpm install grunt-contrib-concat
cnpm install grunt-contrib-watch
cnpm install grunt-contrib-connect
cnpm install grunt-contrib-copy
cnpm install grunt-contrib-jasmine #if this step errors, just run it again

3. Configuration

vim /home/deploy/elasticsearch-head/Gruntfile.js

#find the connect block below and add hostname: '0.0.0.0',
        
                connect: {
                        server: {
                                options: {
                                        hostname: '0.0.0.0',         #don't forget the trailing comma
                                        port: 9100,
                                        base: '.',
                                        keepalive: true
                                }
                        }
                }
4. Start and test

cd /home/deploy/elasticsearch-head && nohup grunt server &

5. Init script

vim /etc/init.d/elasticsearch-head

#!/bin/bash
#chkconfig: 2345 55 24
#description: elasticsearch-head service manager

data="cd /home/deploy/elasticsearch-head/ ; nohup  npm run start >/dev/null 2>&1 &   "
START() {
                eval $data
}

STOP() {
                ps -ef | grep grunt | grep -v "grep" | awk '{print $2}' | xargs kill -s 9 >/dev/null
}


case "$1" in
  start)
        START
        ;;
  stop)
        STOP
        ;;
  restart)
        STOP
        sleep 2
        START
        ;;
  *)
        echo "Usage: elasticsearch-head {start|stop|restart}"
        ;;
esac

6. Manage the service

chmod +x /etc/init.d/elasticsearch-head
chkconfig elasticsearch-head on
service elasticsearch-head restart

7. Test

Open http://192.168.3.11:9100/ in a browser.

IV. Install Elasticsearch (all nodes)

1. Unpack Elasticsearch

cd /home/deploy/elk
tar zxvf elasticsearch-7.7.0-linux-x86_64.tar.gz
mv elasticsearch-7.7.0 /home/deploy/elasticsearch

2. Configuration (values shown for 192.168.3.11; node.name and network.host differ on each node)

vim /home/deploy/elasticsearch/config/elasticsearch.yml

cluster.name: elk
node.name: elk-11
path.data: /home/deploy/elasticsearch/data
path.logs: /home/deploy/elasticsearch/logs
bootstrap.memory_lock: false
network.host: 192.168.3.11
http.port: 9200
discovery.seed_hosts: ["192.168.3.11", "192.168.3.12", "192.168.3.13"]
cluster.initial_master_nodes: ["elk-11", "elk-12", "elk-13"]
transport.tcp.port: 9300
http.cors.enabled: true
http.cors.allow-origin: "*"
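On the other two nodes the same file is used; only these two settings change, matching the node names and IPs from the environment overview above:

```yaml
# on 192.168.3.12:
node.name: elk-12
network.host: 192.168.3.12

# on 192.168.3.13:
node.name: elk-13
network.host: 192.168.3.13
```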

3. Fixing memory-lock errors

If you set bootstrap.memory_lock: true and startup fails with a memory-lock error, add:

#vim /etc/security/limits.conf 
elk soft memlock unlimited
elk hard memlock unlimited

#vim /etc/sysctl.conf 
vm.swappiness=0

In production this option should be set to true.

4. Start and test

useradd elk
su - elk -c "/home/deploy/elasticsearch/bin/elasticsearch -d"
tail -f /home/deploy/elasticsearch/logs/elk.log                 #watch the log (file is named after cluster.name) to confirm the node started

5. Verify

Check the cluster health:

#curl http://192.168.3.11:9200/_cluster/health?pretty

{
  "cluster_name" : "elk",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}


#curl http://192.168.3.12:9200/_cluster/health?pretty

#curl http://192.168.3.13:9200/_cluster/health?pretty

#the responses should match the one above
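For scripted health checks, the status field can be pulled out of the response. The JSON below is an inline sample standing in for the curl output above, so the pipeline can be tried without a live cluster:

```shell
# Extract only the "status" field; in a real check, pipe the curl output in instead
echo '{ "cluster_name" : "elk", "status" : "green", "number_of_nodes" : 3 }' \
  | grep -o '"status" : "[a-z]*"'
```

A cron job or monitoring agent can alert whenever the extracted value is not green.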

Check which node is the elected master:

#curl http://192.168.3.13:9200/_cat/master?v
id                     host           ip             node
k0ToQGFsTieswyMdOklyvg 192.168.3.12 192.168.3.12 elk-12

#curl http://192.168.3.12:9200/_cat/master?v

#curl http://192.168.3.11:9200/_cat/master?v                 #same result as above

Inspect the full cluster state:

#curl http://192.168.3.12:9200/_cluster/state?pretty

Open the head page, connect to any cluster node (e.g. 192.168.3.12:9200), and check the cluster.

6. Configure the service

vim /etc/sysconfig/elasticsearch

################################
# Elasticsearch
################################

# Elasticsearch home directory
#ES_HOME=/usr/share/elasticsearch
ES_HOME=/home/deploy/elasticsearch

# Elasticsearch Java path
#JAVA_HOME=
#JAVA_HOME=/home/deploy/jdk
#CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/jre/lib

# Elasticsearch configuration directory
#ES_PATH_CONF=/etc/elasticsearch
ES_PATH_CONF=/home/deploy/elasticsearch/config

# Elasticsearch PID directory
#PID_DIR=/var/run/elasticsearch
PID_DIR=/home/deploy/elasticsearch/run

# Additional Java OPTS
#ES_JAVA_OPTS=

# Configure restart on package upgrade (true, every other setting will lead to not restarting)
#RESTART_ON_UPGRADE=true

################################
# Elasticsearch service
################################

# SysV init.d
#
# The number of seconds to wait before checking if Elasticsearch started successfully as a daemon process
ES_STARTUP_SLEEP_TIME=5

################################
# System properties
################################

# Specifies the maximum file descriptor number that can be opened by this process
# When using Systemd, this setting is ignored and the LimitNOFILE defined in
# /usr/lib/systemd/system/elasticsearch.service takes precedence
#MAX_OPEN_FILES=65535

# The maximum number of bytes of memory that may be locked into RAM
# Set to "unlimited" if you use the 'bootstrap.memory_lock: true' option
# in elasticsearch.yml.
# When using systemd, LimitMEMLOCK must be set in a unit file such as
# /etc/systemd/system/elasticsearch.service.d/override.conf.
#MAX_LOCKED_MEMORY=unlimited

# Maximum number of VMA (Virtual Memory Areas) a process can own
# When using Systemd, this setting is ignored and the 'vm.max_map_count'
# property is set at boot time in /usr/lib/sysctl.d/elasticsearch.conf
#MAX_MAP_COUNT=262144

#vim /usr/lib/systemd/system/elasticsearch.service

[Unit]
Description=Elasticsearch
Documentation=http://www.elastic.co
Wants=network-online.target
After=network-online.target

[Service]
RuntimeDirectory=elasticsearch
PrivateTmp=true
Environment=ES_HOME=/home/deploy/elasticsearch
Environment=ES_PATH_CONF=/home/deploy/elasticsearch/config
Environment=PID_DIR=/home/deploy/elasticsearch/run
EnvironmentFile=-/etc/sysconfig/elasticsearch

WorkingDirectory=/home/deploy/elasticsearch

User=elk
Group=elk

ExecStart=/home/deploy/elasticsearch/bin/elasticsearch -p ${PID_DIR}/elasticsearch.pid --quiet

# StandardOutput is configured to redirect to journalctl since
# some error messages may be logged in standard output before
# elasticsearch logging system is initialized. Elasticsearch
# stores its logs in /var/log/elasticsearch and does not use
# journalctl by default. If you also want to enable journalctl
# logging, you can simply remove the "quiet" option from ExecStart.
StandardOutput=journal
StandardError=inherit

# Specifies the maximum file descriptor number that can be opened by this process
LimitNOFILE=65535

# Specifies the maximum number of processes
LimitNPROC=4096

# Specifies the maximum size of virtual memory
LimitAS=infinity

# Specifies the maximum file size
LimitFSIZE=infinity

# Disable timeout logic and wait until process is stopped
TimeoutStopSec=0

# SIGTERM signal is used to stop the Java process
KillSignal=SIGTERM

# Send the signal only to the JVM rather than its control group
KillMode=process

# Java process is never killed
SendSIGKILL=no

# When a JVM receives a SIGTERM signal it exits with code 143
SuccessExitStatus=143

[Install]
WantedBy=multi-user.target

# Built for packages-6.7.1 (packages)

7. Manage the service

mkdir /home/deploy/elasticsearch/run
touch /home/deploy/elasticsearch/run/elasticsearch.pid
chown -R elk:elk /home/deploy/elasticsearch
systemctl daemon-reload
systemctl enable elasticsearch
systemctl start elasticsearch                 #kill the previously started elasticsearch process first

8. Test

Open the head page and connect to http://192.168.3.13:9200/.

V. Install Kibana (on 192.168.3.11)

1. Unpack Kibana

cd /home/deploy/elk
tar zxvf kibana-7.7.0-linux-x86_64.tar.gz
mv kibana-7.7.0-linux-x86_64 /home/deploy/kibana

2. Configuration

#vim /home/deploy/kibana/config/kibana.yml

server.port: 5601               #listen port
server.host: "0.0.0.0"              #listen address
elasticsearch.hosts: ["http://192.168.3.11:9200","http://192.168.3.12:9200","http://192.168.3.13:9200"]                #ES cluster addresses
logging.dest: /home/deploy/kibana/logs/kibana.log                 #log path
kibana.index: ".kibana"                 #default index
i18n.locale: "zh-CN"                 #UI language (Chinese; drop this line for English)

#mkdir /home/deploy/kibana/logs && touch /home/deploy/kibana/logs/kibana.log

3. Start and test

/home/deploy/kibana/bin/kibana --allow-root &

Note: the --allow-root flag is required when starting as root.

4. Configure the service

vim /etc/default/kibana

user="elk"
group="elk"
chroot="/"
chdir="/"
nice=""

#If this is set to 1, then when `stop` is called, if the process has
#not exited within a reasonable time, SIGKILL will be sent next.
#The default behavior is to simply log a message "program stop failed; still running"
KILL_ON_STOP_TIMEOUT=0

vim /etc/systemd/system/kibana.service

[Unit]
Description=Kibana
StartLimitIntervalSec=30
StartLimitBurst=3

[Service]
Type=simple
User=elk
Group=elk
#Load env vars from /etc/default/ and /etc/sysconfig/ if they exist.
#Prefixing the path with '-' makes it try to load, but if the file doesn't
#exist, it continues onward.
EnvironmentFile=-/etc/default/kibana
EnvironmentFile=-/etc/sysconfig/kibana
ExecStart=/home/deploy/kibana/bin/kibana "-c /home/deploy/kibana/config/kibana.yml"
Restart=always
WorkingDirectory=/

[Install]
WantedBy=multi-user.target

5. Manage the service

chown -R elk:elk /home/deploy/kibana
systemctl daemon-reload
systemctl enable kibana.service
systemctl start kibana.service

6. Test

Open http://192.168.3.11:5601 in a browser.

VI. Install Logstash (on 192.168.3.13)

Collecting nginx access logs is used as the example.

1. Install and start nginx

Use the following log format in the nginx configuration:

    log_format main '$http_host $remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$upstream_addr" $request_time';
                      
    access_log logs/access.log  main;
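The line below is a hypothetical request rendered in the main format above; the awk call just shows where the interesting fields land (host, client IP, status, request time), which helps when writing the grok pattern later:

```shell
# A made-up access-log line in the "main" format; whitespace-split positions:
# $1 http_host, $2 remote_addr, $10 status, $NF request_time
line='www.example.com 192.168.3.100 - - [22/Sep/2020:10:00:00 +0800] "GET /index.html HTTP/1.1" 200 612 "-" "curl/7.29.0" "-" 0.003'
echo "$line" | awk '{print $1, $2, $10, $NF}'
```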

2. Unpack Logstash

cd /home/deploy/elk
tar zxvf logstash-7.7.0.tar.gz
mv logstash-7.7.0 /home/deploy/logstash
mkdir /home/deploy/logstash/conf.d

3. Configure Logstash

vim /home/deploy/logstash/config/logstash.yml

http.host: "192.168.3.13"
http.port: 9600

4. Create the log collection pipeline

#vim /home/deploy/logstash/conf.d/nginx_access.conf

input {
  file {
    path => "/home/deploy/nginx/logs/access.log"                 #path to the nginx access log
    start_position => "beginning"
    type => "nginx"
  }
}
filter {
    grok {
        match => { "message" => "%{IPORHOST:http_host} %{IPORHOST:clientip} - %{USERNAME:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:response} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{QS:upstream_addr} %{NUMBER:request_time:float}" }
    }
    geoip {
        source => "clientip"
    }
}
output {
    stdout { codec => rubydebug }
    elasticsearch {
        hosts => ["192.168.3.13:9200"]                #any node of the cluster works
        index => "nginx-test-%{+YYYY.MM.dd}"  #index name pattern; one index per day
  }
}
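The %{+YYYY.MM.dd} sprintf reference in the output block above creates one index per day. The equivalent shell date format shows the name Logstash would use today (Logstash formats the index date in UTC, so around midnight local time the two can differ):

```shell
# Daily index name as produced by "nginx-test-%{+YYYY.MM.dd}"
date -u +nginx-test-%Y.%m.%d
```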

5. Start and test

Optionally validate the pipeline file first with --config.test_and_exit, then start Logstash:

/home/deploy/logstash/bin/logstash -f /home/deploy/logstash/conf.d/nginx_access.conf --config.test_and_exit
nohup /home/deploy/logstash/bin/logstash --path.settings /home/deploy/logstash/config -f /home/deploy/logstash/conf.d/nginx_access.conf &

6. Configure the Logstash service

vim /etc/default/logstash

LS_HOME="/home/deploy/logstash"
LS_SETTINGS_DIR="/home/deploy/logstash"
LS_PIDFILE="/home/deploy/logstash/run/logstash.pid"
LS_USER="elk"
LS_GROUP="elk"
LS_GC_LOG_FILE="/home/deploy/logstash/logs/gc.log"
LS_OPEN_FILES="16384"
LS_NICE="19"
SERVICE_NAME="logstash"
SERVICE_DESCRIPTION="logstash"
#vim /etc/systemd/system/logstash.service 

[Unit]
Description=logstash

[Service]
Type=simple
User=elk
Group=elk
#Load env vars from /etc/default/ and /etc/sysconfig/ if they exist.
#Prefixing the path with '-' makes it try to load, but if the file doesn't exist, it continues onward.
EnvironmentFile=-/etc/default/logstash
EnvironmentFile=-/etc/sysconfig/logstash
ExecStart=/home/deploy/logstash/bin/logstash "--path.settings" "/home/deploy/logstash/config" "--path.config" "/home/deploy/logstash/conf.d"
Restart=always
WorkingDirectory=/
Nice=19
LimitNOFILE=16384

[Install]
WantedBy=multi-user.target

7. Manage the service

mkdir /home/deploy/logstash/run && touch /home/deploy/logstash/run/logstash.pid
mkdir -p /home/deploy/logstash/logs && touch /home/deploy/logstash/logs/gc.log && chown -R elk:elk /home/deploy/logstash
systemctl daemon-reload
systemctl enable logstash
systemctl start logstash                  #kill the previously started logstash process first

8. Test

1. Check whether the cluster now has the nginx-test index; if it does, collection works.

2. To verify the grok pattern, create an index pattern in Kibana and inspect the fields: if the message was not split into separate fields, the filter failed and the configuration needs rework; see https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns for the standard patterns.


VII. Install Filebeat (on 192.168.3.12)

1. Unpack

cd /home/deploy/elk
tar zxvf filebeat-7.7.0-linux-x86_64.tar.gz
mv filebeat-7.7.0-linux-x86_64 /home/deploy/filebeat

2. Configuration

#vim /home/deploy/filebeat/filebeat.yml


filebeat.inputs:
- type: log
  enabled: true                 #must be true, otherwise this input is ignored
  paths:
    - /var/log/messages
    - /var/log/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.elasticsearch:
  hosts: ["192.168.3.12:9200"]
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
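If logs should flow through Logstash for parsing rather than straight to Elasticsearch, the output block can be swapped; note the beats port 5044 and a matching beats input on 192.168.3.13 are assumptions, not part of this setup:

```yaml
# Replaces output.elasticsearch above; assumes a Logstash beats input on 5044
output.logstash:
  hosts: ["192.168.3.13:5044"]
```

Only one output may be enabled at a time, so comment out output.elasticsearch when using this.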

3. Start and test

/home/deploy/filebeat/filebeat test config -c /home/deploy/filebeat/filebeat.yml                 #optional sanity check of the config
nohup /home/deploy/filebeat/filebeat -c /home/deploy/filebeat/filebeat.yml &

4. Configure the service

#vim /usr/lib/systemd/system/filebeat.service

[Unit]
Description=Filebeat sends log files to Logstash or directly to Elasticsearch.
Documentation=https://www.elastic.co/products/beats/filebeat
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/home/deploy/filebeat/filebeat -c /home/deploy/filebeat/filebeat.yml -path.home /home/deploy/filebeat -path.config /home/deploy/filebeat -path.data /home/deploy/filebeat/data -path.logs /home/deploy/filebeat/logs
Restart=always

[Install]
WantedBy=multi-user.target

5. Manage the service

#systemctl daemon-reload

#systemctl enable filebeat

#systemctl start filebeat                  #kill the previously started filebeat process first

6. Test

Check whether the cluster now has the filebeat index.



