ELK 7.3 Deployment and Usage - 2. Logstash Deployment

1. Prepare the environment: IP, firewall, SELinux
2. Install Java
3. Configure Logstash
[root@n7 logstash]# ll
total 36
drwxrwxr-x 2 root root 6 Jul 24 16:00 conf.d
-rw-r--r-- 1 root root 1915 Jul 24 16:00 jvm.options
-rw-r--r-- 1 root root 4987 Jul 24 16:00 log4j2.properties
-rw-r--r-- 1 root root 342 Jul 24 16:00 logstash-sample.conf
-rw-r--r-- 1 root root 8236 Aug 21 23:36 logstash.yml
-rw-r--r-- 1 root root 285 Jul 24 16:00 pipelines.yml
-rw------- 1 root root 1696 Jul 24 16:00 startup.options
[root@n7 logstash]# cp logstash.yml logstash.yml.bak
[root@n7 logstash]# vim logstash.yml
[root@n7 logstash]# grep -n ^[a-Z] /etc/logstash/logstash.yml
19:node.name: n7
28:path.data: /var/lib/logstash
77:config.reload.automatic: true
81:config.reload.interval: 10s
190:http.host: "10.1.24.71"
208:path.logs: /var/log/logstash

Notes:
[root@xx ~]# grep -n ^[a-Z] /etc/logstash/logstash.yml
19:node.name: xx #node name, usually the host's FQDN
28:path.data: /var/lib/logstash #persistence directory used by logstash and its plugins
77:config.reload.automatic: true #enable automatic reloading of the config file
81:config.reload.interval: 10s #interval between automatic reload checks
190:http.host: "x.x.x.x" #address the API binds to, usually the local IP or hostname
208:path.logs: /var/log/logstash #log directory
(The range [a-Z] depends on the locale's collation order; [a-zA-Z] is the safer pattern.)
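The pipelines themselves live under conf.d. As a minimal sketch of what such a file looks like (the beats port, index name, and filename here are assumptions, not taken from this setup):

```conf
# /etc/logstash/conf.d/example.conf  (hypothetical)
input {
  beats { port => 5044 }        # e.g. receive events from filebeat
}
filter {
  # parsing / enrichment would go here
}
output {
  elasticsearch {
    hosts => ["10.1.24.172:9200"]
    index => "example-%{+YYYY.MM.dd}"
  }
}
```

With config.reload.automatic enabled above, Logstash picks up changes to files in conf.d without a restart.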

Note: ownership must be logstash:
[root@n7 logstash]# ll /usr/share/logstash/
total 848
drwxr-xr-x 2 logstash logstash 4096 Aug 21 23:36 bin
-rw-r--r-- 1 logstash logstash 2276 Jul 24 16:00 CONTRIBUTORS
drwxrwxr-x 2 logstash logstash 6 Jul 24 16:00 data
-rw-r--r-- 1 logstash logstash 4144 Jul 24 16:00 Gemfile
-rw-r--r-- 1 logstash logstash 23109 Jul 24 16:00 Gemfile.lock
drwxr-xr-x 6 logstash logstash 84 Aug 21 23:36 lib
-rw-r--r-- 1 logstash logstash 13675 Jul 24 16:00 LICENSE.txt
drwxr-xr-x 4 logstash logstash 90 Aug 21 23:36 logstash-core
drwxr-xr-x 3 logstash logstash 86 Aug 21 23:36 logstash-core-plugin-api
drwxr-xr-x 4 logstash logstash 55 Aug 21 23:36 modules
-rw-r--r-- 1 logstash logstash 808305 Jul 24 16:00 NOTICE.TXT
drwxr-xr-x 3 logstash logstash 30 Aug 21 23:36 tools
drwxr-xr-x 4 logstash logstash 33 Aug 21 23:36 vendor
drwxr-xr-x 9 logstash logstash 193 Aug 21 23:36 x-pack

4. Start Logstash
[root@n7 logstash]# systemctl restart logstash.service
[root@n7 logstash]# systemctl status logstash.service
● logstash.service - logstash
Loaded: loaded (/etc/systemd/system/logstash.service; disabled; vendor preset: disabled)
Active: active (running) since Thu 2019-08-22 00:33:30 EDT; 5s ago
Main PID: 8011 (java)
CGroup: /system.slice/logstash.service
└─8011 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -Djruby.jit.thres…

Aug 22 00:33:30 n7 systemd[1]: Started logstash.
Aug 22 00:33:30 n7 logstash[8011]: OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N

Startup takes a moment; the port is not listening right away, so wait a bit and check again.
[root@n7 logstash]# ss -ntlp | grep 9600
[root@n7 logstash]# systemctl status logstash.service
● logstash.service - logstash
Loaded: loaded (/etc/systemd/system/logstash.service; disabled; vendor preset: disabled)
Active: active (running) since Thu 2019-08-22 00:33:30 EDT; 34s ago
Main PID: 8011 (java)
CGroup: /system.slice/logstash.service
└─8011 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djruby.compile.invokedynamic=true -Djruby.jit.thres…

Aug 22 00:33:30 n7 systemd[1]: Started logstash.
Aug 22 00:33:30 n7 logstash[8011]: OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
[root@n7 logstash]# ss -ntlp | grep 9600
[root@n7 logstash]# ss -ntlp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:22 *:* users:(("sshd",pid=6754,fd=3))
LISTEN 0 100 127.0.0.1:25 *:* users:(("master",pid=6905,fd=13))
LISTEN 0 128 :::22 :::* users:(("sshd",pid=6754,fd=4))
LISTEN 0 100 ::1:25 :::* users:(("master",pid=6905,fd=14))
LISTEN 0 50 ::ffff:10.1.24.71:9600 :::* users:(("java",pid=8011,fd=70))

Now port 9600 is listening:
[root@n7 logstash]# ss -ntlp | grep 9600
LISTEN 0 50 ::ffff:10.1.24.71:9600 :::* users:(("java",pid=8011,fd=70))

5. Test standard input and output
[root@n7 logstash]# /usr/share/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug }}'
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Thread.exclusive is deprecated, use Thread::Mutex
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[INFO ] 2019-08-22 00:38:33.206 [main] writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[INFO ] 2019-08-22 00:38:33.278 [main] writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[WARN ] 2019-08-22 00:38:34.363 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2019-08-22 00:38:34.382 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"7.3.0"}
[INFO ] 2019-08-22 00:38:34.501 [LogStash::Runner] agent - No persistent UUID file found. Generating new UUID {:uuid=>"83311aa2-acb8-461b-b2bc-652ae7478fb0", :path=>"/usr/share/logstash/data/uuid"}
[INFO ] 2019-08-22 00:38:38.392 [Converge PipelineAction::Create<main>] Reflections - Reflections took 121 ms to scan 1 urls, producing 19 keys and 39 values
[WARN ] 2019-08-22 00:38:41.351 [[main]-pipeline-manager] LazyDelegatingGauge - A gauge metric of an unknown type (org.jruby.RubyArray) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[INFO ] 2019-08-22 00:38:41.388 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, :thread=>"#<Thread:0x61e4247a run>"}
[INFO ] 2019-08-22 00:38:52.114 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2019-08-22 00:38:52.233 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
The stdin plugin is now waiting for input:
[INFO ] 2019-08-22 00:38:52.869 [Api Webserver] agent – Successfully started Logstash API endpoint {:port=>9600}
hello world!
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{
          "host" => "n7",
       "message" => "hello world!",
      "@version" => "1",
    "@timestamp" => 2019-08-22T04:39:10.744Z
}
^C[WARN ] 2019-08-22 00:39:44.922 [SIGINT handler] runner - SIGINT received. Shutting down.
[INFO ] 2019-08-22 00:39:45.284 [Converge PipelineAction::Stop<main>] javapipeline - Pipeline terminated {"pipeline.id"=>"main"}
[INFO ] 2019-08-22 00:39:45.461 [LogStash::Runner] runner - Logstash shut down.

6. Test output to a file
/usr/share/logstash/bin/logstash -e 'input { stdin{} } output { file { path => "/tmp/log-%{+YYYY.MM.dd}messages.gz"}}'

[root@n7 ~]# /usr/share/logstash/bin/logstash -e 'input { stdin{} } output { file { path => "/tmp/log-%{+YYYY.MM.dd}messages.gz"}}'
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Thread.exclusive is deprecated, use Thread::Mutex
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2019-08-22 00:47:00.721 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2019-08-22 00:47:00.767 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"7.3.0"}
[INFO ] 2019-08-22 00:47:04.366 [Converge PipelineAction::Create<main>] Reflections - Reflections took 120 ms to scan 1 urls, producing 19 keys and 39 values
[WARN ] 2019-08-22 00:47:06.116 [[main]-pipeline-manager] LazyDelegatingGauge - A gauge metric of an unknown type (org.jruby.RubyArray) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[INFO ] 2019-08-22 00:47:06.138 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, :thread=>"#<Thread:0x257170c8 run>"}
[INFO ] 2019-08-22 00:47:06.328 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
The stdin plugin is now waiting for input:
[INFO ] 2019-08-22 00:47:06.560 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2019-08-22 00:47:08.277 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
hello world!
[INFO ] 2019-08-22 00:47:28.878 [[main]>worker0] file - Opening file {:path=>"/tmp/log-2019.08.22messages.gz"}
^C[WARN ] 2019-08-22 00:47:32.916 [SIGINT handler] runner - SIGINT received. Shutting down.
[INFO ] 2019-08-22 00:47:33.319 [Converge PipelineAction::Stop<main>] javapipeline - Pipeline terminated {"pipeline.id"=>"main"}
[INFO ] 2019-08-22 00:47:34.102 [LogStash::Runner] runner - Logstash shut down.
[root@n7 ~]# ll /tmp/log
log-2019.08.22messages.gz logstash2737950492275461580/ logstash2893378069423413922/
[root@n7 ~]# ll /tmp/log-2019.08.22messages.gz
-rw-r--r-- 1 root root 94 Aug 22 00:47 /tmp/log-2019.08.22messages.gz
[root@n7 ~]# tail /tmp/log-2019.08.22messages.gz
{"@version":"1","host":"n7","message":"hello world!","@timestamp":"2019-08-22T04:47:28.156Z"}
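Each event is written as a single JSON document per line. To read it more comfortably you can pretty-print a line with Python's standard json.tool module (assuming a python3 binary is available on the host):

```shell
# pretty-print one event line from the output file
echo '{"@version":"1","host":"n7","message":"hello world!"}' | python3 -m json.tool
```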

7. Test output to Elasticsearch
/usr/share/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch {hosts => ["10.1.24.172:9200"] index => "mytest-%{+YYYY.MM.dd}" }}'

[root@n7 ~]# /usr/share/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch {hosts => ["10.1.24.172:9200"] index => "mytest-%{+YYYY.MM.dd}" }}'
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Thread.exclusive is deprecated, use Thread::Mutex
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2019-08-22 00:50:15.898 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2019-08-22 00:50:15.931 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"7.3.0"}
[INFO ] 2019-08-22 00:50:19.577 [Converge PipelineAction::Create<main>] Reflections - Reflections took 117 ms to scan 1 urls, producing 19 keys and 39 values
[INFO ] 2019-08-22 00:50:22.708 [[main]-pipeline-manager] elasticsearch - Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.1.24.172:9200/]}}
[WARN ] 2019-08-22 00:50:23.592 [[main]-pipeline-manager] elasticsearch - Restored connection to ES instance {:url=>"http://10.1.24.172:9200/"}
[INFO ] 2019-08-22 00:50:24.066 [[main]-pipeline-manager] elasticsearch - ES Output version determined {:es_version=>7}
[WARN ] 2019-08-22 00:50:24.070 [[main]-pipeline-manager] elasticsearch - Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[INFO ] 2019-08-22 00:50:24.185 [[main]-pipeline-manager] elasticsearch - New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//10.1.24.172:9200"]}
[WARN ] 2019-08-22 00:50:24.489 [[main]-pipeline-manager] LazyDelegatingGauge - A gauge metric of an unknown type (org.jruby.RubyArray) has been create for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[INFO ] 2019-08-22 00:50:24.517 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, :thread=>"#<Thread:0x47d956ea run>"}
[INFO ] 2019-08-22 00:50:24.824 [Ruby-0-Thread-5: :1] elasticsearch - Using default mapping template
[INFO ] 2019-08-22 00:50:24.950 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
The stdin plugin is now waiting for input:
[INFO ] 2019-08-22 00:50:25.062 [Ruby-0-Thread-5: :1] elasticsearch - Attempting to install template {:manage_template=>{"index_patterns"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s", "number_of_shards"=>1}, "mappings"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}
[INFO ] 2019-08-22 00:50:25.136 [Ruby-0-Thread-5: :1] elasticsearch - Installing elasticsearch template to _template/logstash
[INFO ] 2019-08-22 00:50:25.203 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2019-08-22 00:50:26.392 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
hello world
my name is fencatn
tis^H^H^H
this is a test
^C[WARN ] 2019-08-22 00:52:10.893 [SIGINT handler] runner - SIGINT received. Shutting down.
[INFO ] 2019-08-22 00:52:11.335 [Converge PipelineAction::Stop<main>] javapipeline - Pipeline terminated {"pipeline.id"=>"main"}
[INFO ] 2019-08-22 00:52:11.520 [LogStash::Runner] runner - Logstash shut down.
[root@n7 ~]#

Check on the Elasticsearch host; the new index directories are there:
[root@n5 ~]#
[root@n5 ~]# cd /var/lib/elasticsearch/nodes/0/indices/
[root@n5 indices]# ls
uZziVR9WR8erEOJxKQDRZA wwhpptkTQ-udtSlW9QFeRA X7luiwh5TsavWGuz5ObL_g
[root@n5 indices]# ll
total 0
drwxr-sr-x 4 elasticsearch elasticsearch 29 Aug 22 12:49 uZziVR9WR8erEOJxKQDRZA
drwxr-sr-x 4 elasticsearch elasticsearch 29 Aug 22 12:13 wwhpptkTQ-udtSlW9QFeRA
drwxr-sr-x 4 elasticsearch elasticsearch 29 Aug 22 11:29 X7luiwh5TsavWGuz5ObL_g
[root@n5 indices]#

Check in the browser as well; the test passes.

Posted in ELK | Leave a comment

ELK 7.3 Deployment and Usage - 1. Elasticsearch + head Deployment

This article is adapted from https://hacpai.com/article/1559892603869.

Preface
What is ELK?
In plain terms, ELK is the combination of three open-source projects: Elasticsearch, Logstash, and Kibana, each of which handles a different part of the job. The combination is also known as the ELK Stack; the official site is elastic.co.

The main advantages of the ELK Stack:

(1) Flexible processing: Elasticsearch is a real-time full-text index with powerful search capabilities.
(2) Relatively simple configuration: Elasticsearch exposes JSON interfaces throughout, Logstash uses modular configuration, and Kibana's configuration is simpler still.
(3) Efficient retrieval: thanks to a well-designed architecture, every query runs in real time yet results over tens of billions of documents can come back in seconds.
(4) Linear cluster scaling: both Elasticsearch and Logstash scale out linearly.
(5) Polished front end: Kibana's UI is attractive and easy to use.

What is Elasticsearch?

Elasticsearch is an open-source distributed search server built on Lucene. Its features include distribution, zero configuration, automatic discovery, automatic index sharding, index replicas, a RESTful interface, multiple data sources, and automatic search load balancing. It provides a distributed, multi-tenant full-text search engine over a RESTful web interface. Written in Java and released as open source under the Apache License, it is a very popular enterprise search engine, designed for the cloud: real-time search, stable, reliable, fast, and easy to install and use.
In an Elasticsearch cluster, every node's data stands on an equal footing.

What is Logstash?

Logstash is a fully open-source tool that collects, filters, and analyzes your logs, supports a large number of input methods, and stores the results for later use (such as search). Speaking of search, Logstash comes with a web interface for searching and displaying all logs. It generally works in a client/server architecture: the client is installed on each host whose logs need collecting, and the server filters and transforms the logs received from the nodes before forwarding them together to Elasticsearch.

What is Kibana?

Kibana is a browser-based front end for Elasticsearch, likewise open source and free. It provides a friendly web UI for the log analysis delivered by Logstash and Elasticsearch, helping you aggregate, analyze, and search important log data.

I. Elasticsearch deployment:
1. Environment preparation
n5 and n6 form the ES cluster, n7 runs Logstash, and n8 runs Kibana.
[root@n6 elasticsearch]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.1.24.172 n5
10.1.24.57 n6
10.1.24.71 n7
10.1.24.186 n8

Disable the firewall and SELinux on all servers, and raise the open-file limit:
[root@n5 ~ ]# systemctl disable NetworkManager
[root@n5 ~ ]# sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config
[root@n5 ~ ]# echo "* soft nofile 65536" >> /etc/security/limits.conf
[root@n5 ~ ]# echo "* hard nofile 65536" >> /etc/security/limits.conf

Set up the epel repository, install basic tools, and synchronize the time:
[root@n5 ~]# yum install -y net-tools vim lrzsz tree screen lsof tcpdump wget ntpdate
[root@n5 ~]# cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
[root@n5 ~]# echo "*/5 * * * * ntpdate time1.aliyun.com &> /dev/null && hwclock -w" >> /var/spool/cron/root
[root@n5 ~]# systemctl restart crond
[root@n5 ~]# reboot

Prepare the Java environment on both ES servers:
[root@n5 ~]# yum install -y java-1.8.0-openjdk

Set up the Java environment (only needed if you installed from source):
[root@n5 ~]# vim /etc/profile
export HISTTIMEFORMAT="%F %T `whoami` "
export JAVA_HOME=<JDK directory>
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
[root@n5 ~]# source /etc/profile
[root@n5 ~]# java -version
openjdk version "1.8.0_222"
OpenJDK Runtime Environment (build 1.8.0_222-b10)
OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)

2. Download Elasticsearch from the official site and install it:
[root@n5 ~]# ll
total 300780
-rw-------. 1 root root 1331 Jul 14 17:40 anaconda-ks.cfg
-rw-r--r-- 1 root root 284575102 Aug 22 09:50 elasticsearch-7.3.0-x86_64.rpm
-rw-r--r-- 1 root root 23415665 Aug 22 10:55 phantomjs-2.1.1-linux-x86_64.tar.bz2
[root@n5 ~]# rpm -ivh elasticsearch-7.3.0-x86_64.rpm

## Edit the service configuration file on each Elasticsearch server
[root@n5 ~]# grep '^[a-Z]' /etc/elasticsearch/elasticsearch.yml
cluster.name: es.cluster
node.name: n5
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["10.1.24.172", "10.1.24.57"]
cluster.initial_master_nodes: ["10.1.24.172", "10.1.24.57"]
http.cors.enabled: true
http.cors.allow-origin: "*"

[root@n6 ~]# grep '^[a-Z]' /etc/elasticsearch/elasticsearch.yml
cluster.name: es.cluster
node.name: n6
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["10.1.24.172", "10.1.24.57"]
cluster.initial_master_nodes: ["10.1.24.172", "10.1.24.57"]
http.cors.enabled: true
http.cors.allow-origin: "*"

[root@linux-host2 ~]# systemctl restart elasticsearch

Access the Elasticsearch service port through a browser.

3. Install the head plugin for Elasticsearch (installing it on one of the two nodes is enough):
Plugins add functionality. The official plugins are mostly paid, but community plugins exist as well; head provides status monitoring and management for an Elasticsearch cluster.
With Elasticsearch 7.x, the head plugin can no longer be installed directly with the plugin command.
# Add the following parameters to /etc/elasticsearch/elasticsearch.yml
# so that the head plugin can reach Elasticsearch
[root@n5 ~]# vim /etc/elasticsearch/elasticsearch.yml
http.cors.enabled: true
http.cors.allow-origin: "*"

Download the head plugin and unpack it under /usr/local:
[root@n5 ~]# cd /usr/local
[root@n5 local ]# wget https://github.com/mobz/elasticsearch-head/archive/master.zip
[root@n5 local ]# unzip master.zip

Install Node.js:
[root@n5 local ]# wget https://npm.taobao.org/mirrors/node/latest-v12.x/node-v12.0.0-linux-x64.tar.gz
[root@n5 local ]# tar -zxvf node-v12.0.0-linux-x64.tar.gz

If you cannot reach the default npm registry, switch to a domestic mirror:
(1) One-off:
npm --registry https://registry.npm.taobao.org install express
(2) Persistent:
npm config set registry https://registry.npm.taobao.org

Add the following to /etc/profile:
[root@n5 local ]# vim /etc/profile
export NODE_HOME=/usr/local/node-v12.0.0-linux-x64
export PATH=$PATH:$NODE_HOME/bin
export NODE_PATH=$NODE_HOME/lib/node_modules

#Apply the changes
[root@n5 local ]# source /etc/profile

Install grunt:
[root@n5 local ]# cd /usr/local/elasticsearch-head-master
[root@n5 local ]# npm install -g grunt-cli
Modify the head plugin source at /usr/local/elasticsearch-head-master/Gruntfile.js:
[root@n5 local ]# vim /usr/local/elasticsearch-head-master/Gruntfile.js

PS: the hostname line is new; don't forget to add a comma after the existing true.

connect: {
	server: {
		options: {
			port: 9100,
			base: '.',
			keepalive: true,
			hostname: '10.1.24.172'
		}
	}
}

Modify the connection address in /usr/local/elasticsearch-head-master/_site/app.js:
[root@n5 local ]# vim /usr/local/elasticsearch-head-master/_site/app.js
4374 this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://10.1.24.172:9200";

Download the file that head needs to run (put it under /tmp/phantomjs/; create the directory if it doesn't exist):
[root@n5 local]# cd /tmp/phantomjs/
[root@n5 phantomjs]# pwd
/tmp/phantomjs
[root@n5 ~]# wget https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.1.1-linux-x86_64.tar.bz2
[root@n5 ~]# yum -y install bzip2

#Run head
[root@n5 ~]# cd /usr/local/elasticsearch-head-master
[root@n5 ~]# npm install

#Start it in the background
[root@n5 ~]# grunt server &

#Enable at boot
Create a startup script:
[root@n5 elasticsearch-head-master]# cd /usr/local/elasticsearch-head-master/
[root@n5 elasticsearch-head]# vim elasticsearch-head
#!/bin/sh
# path to elasticsearch-head
cd /usr/local/elasticsearch-head-master
nohup npm run start >/usr/local/elasticsearch-head-master/nohup.out 2>&1 &

Manage it with systemd:
[root@n5 elasticsearch-head]# cd /etc/systemd/system
[root@n5 system]# vim elasticsearch-head.service
[Unit]
Description=elasticsearch-head
After=network.target

[Service]
Type=forking
ExecStart=/usr/local/elasticsearch-head-master/elasticsearch-head

[Install]
WantedBy=multi-user.target

Enable it at boot:
[root@n5 system]# systemctl enable elasticsearch-head.service

[root@n5 system]# systemctl list-unit-files | grep elasticsearch-head.service
Verify in the browser: http://10.1.24.172:9100

Test submitting data:


Monitoring with the head extension from the Chrome Web Store also works,
provided you can reach the store.

 

4. Monitor the Elasticsearch cluster status:
Get the cluster status with a shell command:
[root@n5 ~]# curl -sXGET http://10.1.24.172:9200/_cluster/health?pretty=true
{
  "cluster_name" : "es.cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 2,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 2,
  "active_shards" : 4,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
[root@n5 ~]#

#The call returns JSON, so the fields can be analyzed with a script, e.g. in Python. Checking status: green means the cluster is healthy, yellow means replica shards are missing, and red means primary shards are missing.

A Python (2) script:

[root@n5 ~]# cat els-cluster-monitor.py
#!/usr/bin/env python
#coding:utf-8

import smtplib
from email.mime.text import MIMEText
from email.utils import formataddr
import subprocess

body = ""
false = "false"  # lets eval() cope with the JSON literal "false"
obj = subprocess.Popen(("curl -sXGET http://10.1.24.172:9200/_cluster/health?pretty=true"), shell=True, stdout=subprocess.PIPE)
data = obj.stdout.read()
data1 = eval(data)
status = data1.get("status")
if status == "green":
    print "50"
else:
    print "100"
[root@n5 ~]#

Verify:
[root@n5 ~]# python els-cluster-monitor.py
50

Posted in ELK | Leave a comment

Using ssh ProxyCommand to log in to a cloud server's private network from a local intranet

A very common need when working with cloud hosts: you have to log in to a cloud server that sits on a private network reachable only through a jump host. Normally you log in to the jump host first and then to the cloud server, but these two steps can be merged into one using ssh's ProxyCommand feature. A detailed example follows.

First, the test environment: n0 is the client, simulating the corporate intranet; n1 is the jump host in the cloud; n4 is the target cloud host. Normally n0 would log in to n1, then from n1 to n4.

The steps are simple; the key points:

(1) n1 can log in to n4 directly, and n0 can log in to n1 directly; that much goes without saying.

(2) n1 has n0's public key, i.e. n0 can log in to n1 without a password.

(3) n4 has n0's public key, i.e. n0 can log in to n4 without a password.

(4) In other words, whether n1 can reach n4 without a password doesn't matter; as long as the network between n1 and n4 is open, n0 simply hops through n1 to n4, nothing more. n1 needs no other setup.

Below is the authorized_keys on n4; you can see n0's public key is in place:

[root@n4 ~]# cat /root/.ssh/authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCwHtKHmZP94Je5axcLe9/tT0XTQvwwCXglrsNvkRwJEtbLYFXU9qqYpqvQ3L1QXmA3oLRKjRHCsTvFPjFnA9mNFTRtEy9CHNJF7Gw57kfI1XIJF1IsWjTzYtya8RAWDflRRZtc+tB6Wkf1TR+51aAhT5fVMXU+AGR/itghwH7qi5Vb5PpsXrE18UnmfeibA+UGZ072ShaTTUBrHiQX7JTPDx5W/iR8KjUs6gj1tS+B030IfNTnkc31NUFQafIlmDD1ZKvqfxKyF0vUFzoUquebhZXYZDoQm7LWH9ZPt7W0nV/QBcXHiFIhRDREEulf0C9YxmBH4QhvacFavj39LuJr root@n0

And the authorized_keys on n1, which also has n0's key:

[root@n1 ~]# cat /root/.ssh/authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCwHtKHmZP94Je5axcLe9/tT0XTQvwwCXglrsNvkRwJEtbLYFXU9qqYpqvQ3L1QXmA3oLRKjRHCsTvFPjFnA9mNFTRtEy9CHNJF7Gw57kfI1XIJF1IsWjTzYtya8RAWDflRRZtc+tB6Wkf1TR+51aAhT5fVMXU+AGR/itghwH7qi5Vb5PpsXrE18UnmfeibA+UGZ072ShaTTUBrHiQX7JTPDx5W/iR8KjUs6gj1tS+B030IfNTnkc31NUFQafIlmDD1ZKvqfxKyF0vUFzoUquebhZXYZDoQm7LWH9ZPt7W0nV/QBcXHiFIhRDREEulf0C9YxmBH4QhvacFavj39LuJr root@n0

Now the key step: edit .ssh/config on n0:

[root@n0 ~]# cat /root/.ssh/config 
Host n1
Hostname xxx.xxx.xxx.n1
Port 22
User root
IdentityFile ~/.ssh/id_rsa

Host n4
Hostname xxx.xxx.xxx.n4
Port 22
User root
ProxyCommand ssh n1 -W %h:%p
IdentityFile ~/.ssh/id_rsa

Here is what the format above means:

Host n1 #any name you like
HostName 192.168.1.1 #the jump host's IP; hostnames are supported
Port 22 #jump host port
User username_jmp #jump host user

Host n4 #again, any name; this entry describes the final target host
HostName 192.168.1.2 #the actual target server; use an IP address here, domain names are not supported
Port 22 #target server port
User username #target server user
ProxyCommand ssh username_jmp@jmp -W %h:%p

Host 10.10.0.* #wildcards work too, so a whole subnet can be proxied
Port 22 #target server port
User username #target server user
ProxyCommand ssh username_jmp@jmp -W %h:%p

 

With that in place, you can jump straight to n4:

[root@n0 ~]# ssh n4
Last login: Mon Aug 19 21:46:13 2019 from 10.1.24.232
[root@n4 ~]# 

See? Under the hood it logs in with the command below; the config file just spells it out once and for all:

ssh username@<target ip> -p 22 -o ProxyCommand='ssh -p 22 username@<jump host ip> -W %h:%p'
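On newer OpenSSH clients (7.3 and later), the same hop can be written more concisely with the built-in ProxyJump option; a sketch of an equivalent config entry:

```conf
Host n4
    Hostname xxx.xxx.xxx.n4
    Port 22
    User root
    ProxyJump n1    # shorthand for ProxyCommand ssh n1 -W %h:%p
```

or as a one-off command: ssh -J root@n1 root@n4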

 

Posted in ssh | Leave a comment

Passwordless SSH with expect and a wrapper script

1. Generate the key pair and create a script directory (details omitted):
[root@n0 sshcopy]#
[root@n0 sshcopy]# pwd
/root/sshcopy
[root@n0 sshcopy]# ll
total 8
-rwxr-xr-x. 1 root root 360 Aug 19 02:23 ssh.exp
-rwxr-xr-x. 1 root root 199 Aug 19 02:46 sshkey.sh
[root@n0 sshcopy]#
2. The expect script:
[root@n0 sshcopy]# cat ssh.exp
#!/usr/bin/expect
set timeout 10
set user_hostname [lindex $argv 0]
set password [lindex $argv 1]
spawn ssh-copy-id $user_hostname
expect {
    "(yes/no)?" {
        send "yes\n"
        expect "*password: " { send "$password\n" }
    }
    "*password: " { send "$password\n" }
}
expect eof
3. The driver script that calls it (adjust the file paths to your setup):
[root@n0 sshcopy]# ls
ssh.exp  sshkey.sh
[root@n0 sshcopy]# cat sshkey.sh
#!/bin/bash
ip=`echo -n "n1,n2,n3,n4" | xargs -d "," -i echo {}`
password="123456"

for i in $ip;do
    /root/sshcopy/ssh.exp root@$i $password >> /root/sshcopy/out.log
    ssh root@$i "echo $i ok"
done
[root@n0 sshcopy]#
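The comma-plus-xargs trick above works, but iterating over a bash array is the more idiomatic sketch (the host names just mirror the example; the password placeholder is hypothetical):

```shell
#!/bin/bash
# loop over hosts with a bash array instead of splitting a comma string
hosts=(n1 n2 n3 n4)
for h in "${hosts[@]}"; do
    echo "would run: /root/sshcopy/ssh.exp root@$h <password>"
done
```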
4. Verify:
[root@n0 sshcopy]# ll
total 8
-rwxr-xr-x. 1 root root 360 Aug 19 02:23 ssh.exp
-rwxr-xr-x. 1 root root 199 Aug 19 02:46 sshkey.sh
[root@n0 sshcopy]# ./sshkey.sh
n1 ok
n2 ok
n3 ok
n4 ok
[root@n0 sshcopy]#
[root@n0 sshcopy]# ssh n1
Last login: Mon Aug 19 14:53:13 2019 from 10.1.24.232
[root@n1 ~]# exit
logout
Connection to n1 closed.
[root@n0 sshcopy]# ssh n2
Last login: Mon Aug 19 02:13:50 2019 from 10.1.24.232
[root@n2 ~]# exit
logout
Connection to n2 closed.
[root@n0 sshcopy]#

Posted in shell, ssh | Leave a comment

The echo -e command explained

echo prints output in PHP, and the Linux echo does the same, except that it takes options that make it considerably more powerful. Below is an introduction to the Linux echo command with details on its -n and -e options.

 

The echo command prints the value of a shell variable, or a literal string directly. It is used constantly in shell scripting, and just as often at the terminal for printing variable values; its typical job is to display a message on the screen as a prompt, so it is worth knowing well.

Syntax

echo [options] [arguments]

Options

-e: enable backslash escapes. With -e, the following sequences in the string are interpreted specially rather than printed as ordinary text:

•\a emit an alert (bell);
•\b backspace (erases the previous character);
•\c suppress the trailing newline;
•\f form feed (advance a line; the cursor stays in the same column);
•\n newline, cursor to the start of the line;
•\r carriage return (cursor to the start of the line, no newline);
•\t insert a tab;
•\v same as \f;
•\\ insert a literal backslash;
•\nnn insert the ASCII character with octal code nnn;
Arguments

variable: the variable(s) to print.

Examples

Print colored text with the echo command:

Foreground color:

echo -e "\e[1;31mThis is red text\e[0m"
This is red text
•\e[1;31m sets the color to red
•\e[0m resets the color
Color codes: reset=0, black=30, red=31, green=32, yellow=33, blue=34, magenta=35, cyan=36, white=37

Background color:

echo -e "\e[1;42mGreen Background\e[0m"
Green Background
Color codes: reset=0, black=40, red=41, green=42, yellow=43, blue=44, magenta=45, cyan=46, white=47

Blinking text:

echo -e '\033[37;31;5m"Warning:Nuclear missile has been launched!.."\033[39;49;0m'
The highlighted positions accept other attribute values too: 0 reset all attributes, 1 bright (bold), 4 underline, 5 blink, 7 reverse video, 8 concealed

echo -n: print without the trailing newline
$echo -n "123"
$echo "456"

The final output is
123456

rather than
123
456
echo -e: interpret special characters

With -e, the escape sequences listed in the options section above are interpreted rather than printed as ordinary text.

A few examples:

$echo -e "a\bdddd"  # the leading a is erased by the backspace
dddd

$echo -e "a\adddd"  # also sounds the terminal bell
adddd

$echo -e "a\ndddd"  # the \n starts a new line
a
dddd

When downloading packages on Linux, the progress percentage that keeps updating in place is rather fun to watch; with echo's -e and -n options we can produce the same effect ourselves.
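The whole trick behind those in-place percentages is a carriage return (\r) with the trailing newline suppressed, so each update overwrites the previous one; a minimal sketch:

```shell
# print an updating percentage on a single line: \r returns the cursor
# to column 0, and printf emits no newline between updates
for i in 25 50 75 100; do
    printf "\rProgress: %3d%%" "$i"
    sleep 0.2    # simulate work between updates
done
printf "\n"
```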

********************************************************************************

Format: echo -e "\033[<background color>;<foreground color>m<string>\033[0m"

For example:
echo -e "\033[41;36m something here \033[0m"

Here 41 sets the background color and 36 the foreground color.

These escape codes bracket the colored span from start to finish:
\033[<codes>m ...... \033[0m

Background colors: 40-47
40: black
41: dark red
42: green
43: yellow
44: blue
45: purple
46: cyan
47: white

Foreground colors: 30-37
30: black
31: red
32: green
33: yellow
34: blue
35: purple
36: cyan
37: white

=============================================== ANSI control codes
\33[0m reset all attributes
\33[1m bright (bold)
\33[4m underline
\33[5m blink
\33[7m reverse video
\33[8m concealed
\33[30m -- \33[37m set the foreground color
\33[40m -- \33[47m set the background color
\33[nA move the cursor up n lines
\33[nB move the cursor down n lines
\33[nC move the cursor right n columns
\33[nD move the cursor left n columns
\33[y;xH set the cursor position
\33[2J clear the screen
\33[K clear from the cursor to the end of the line
\33[s save the cursor position
\33[u restore the cursor position
\33[?25l hide the cursor
\33[?25h show the cursor

Posted in bash, echo, shell | Leave a comment

Cloud host AK/SK concepts

AK/SK authentication
Requests sent to backend services through the API gateway must be signed with an AK (Access Key ID) and SK (Secret Access Key).
Notes:
AK (Access Key ID): the access key ID, a unique identifier associated with the secret access key; the access key ID and the secret access key are used together to cryptographically sign requests.
SK (Secret Access Key): the secret key used together with the access key ID to sign requests; it identifies the sender and prevents the request from being tampered with.
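The mechanics can be sketched with openssl: the SK itself is never sent; only an HMAC signature derived from it travels with the request. The canonical string and header format below are made up for illustration; every cloud vendor defines its own exact canonicalization:

```shell
# hypothetical AK/SK signing sketch: sign a canonical request string with the SK
AK="AKIDEXAMPLE"         # public identifier, sent along with the request
SK="my-secret-key"       # secret, kept on the client side only
string_to_sign="GET
/v1/servers
host:api.example.com"
# HMAC-SHA256 over the canonical string, keyed by the SK
sig=$(printf '%s' "$string_to_sign" | openssl dgst -sha256 -hmac "$SK" | awk '{print $NF}')
echo "Authorization: HMAC-SHA256 Access=$AK, Signature=$sig"
```

The server, which also holds the SK for that AK, recomputes the same HMAC and compares; a mismatch means a forged or modified request.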

Posted in Alibaba Cloud | Leave a comment

Uploading and downloading files from the Xshell command line

Open Xshell and check whether the transfer commands are installed with rpm -qa | grep lrzsz; output like the following means they are:
[root@n1 ~]# rpm -qa | grep lrzsz
lrzsz-0.12.20-36.el7.x86_64
If not, install them with yum install lrzsz -y.
1. Upload with rz: a file dialog pops up; choose a file and click Open, and it is uploaded into the current directory:

2. Download with sz. For example, to download error_logs from the current directory, run sz error_logs; a dialog pops up to choose where to save the file, then click OK to download it.


3. If you don't want to pick a save path every time, set a default download path in the session properties dialog; files downloaded with sz are then saved there automatically. The setting is shown below:
Alternatively, Xftp makes uploads and downloads even easier. Its icon is on Xshell's toolbar (if it's missing, install Xftp and the icon will appear), as shown below:

Note: click the Xftp icon inside Xshell; don't create a new session from within Xftp itself.


To upload, find the file in the left-hand (local) pane and drag it into the directory on the right;
to download, find the file in the right-hand (remote) pane and drag it to the path on the left.

Posted in xshell | Leave a comment

Searching, listing, and displaying package information with Yum

The yum search <term> [more_terms] command searches the names, descriptions, and summaries of all packages in the enabled repositories, and prints the list of matching results.
[root@n2 ~]# yum search java-1.8.0-openjdk
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
======================================================================================================== N/S matched: java-1.8.0-openjdk ========================================================================================================
java-1.8.0-openjdk.i686 : OpenJDK Runtime Environment
java-1.8.0-openjdk.x86_64 : OpenJDK Runtime Environment 8
java-1.8.0-openjdk-accessibility.i686 : OpenJDK accessibility connector
java-1.8.0-openjdk-accessibility.x86_64 : OpenJDK accessibility connector
java-1.8.0-openjdk-accessibility-debug.i686 : OpenJDK accessibility connector for packages with debug on
java-1.8.0-openjdk-accessibility-debug.x86_64 : OpenJDK 8 accessibility connector for packages with debug on
java-1.8.0-openjdk-debug.i686 : OpenJDK Runtime Environment with full debug on
java-1.8.0-openjdk-debug.x86_64 : OpenJDK Runtime Environment 8 with full debug on
java-1.8.0-openjdk-demo.i686 : OpenJDK Demos
java-1.8.0-openjdk-demo.x86_64 : OpenJDK Demos 8
java-1.8.0-openjdk-demo-debug.i686 : OpenJDK Demos with full debug on
java-1.8.0-openjdk-demo-debug.x86_64 : OpenJDK Demos 8 with full debug on
java-1.8.0-openjdk-devel.i686 : OpenJDK Development Environment
java-1.8.0-openjdk-devel.x86_64 : OpenJDK Development Environment 8
java-1.8.0-openjdk-devel-debug.i686 : OpenJDK Development Environment with full debug on
java-1.8.0-openjdk-devel-debug.x86_64 : OpenJDK Development Environment 8 with full debug on
java-1.8.0-openjdk-headless.i686 : OpenJDK Runtime Environment
java-1.8.0-openjdk-headless.x86_64 : OpenJDK Headless Runtime Environment 8
java-1.8.0-openjdk-headless-debug.i686 : OpenJDK Runtime Environment with full debug on
java-1.8.0-openjdk-headless-debug.x86_64 : OpenJDK Runtime Environment with full debug on
java-1.8.0-openjdk-javadoc.noarch : OpenJDK 8 API documentation
java-1.8.0-openjdk-javadoc-debug.noarch : OpenJDK 8 API documentation for packages with debug on
java-1.8.0-openjdk-javadoc-zip.noarch : OpenJDK 8 API documentation compressed in a single archive
java-1.8.0-openjdk-javadoc-zip-debug.noarch : OpenJDK 8 API documentation compressed in a single archive for packages with debug on
java-1.8.0-openjdk-src.i686 : OpenJDK Source Bundle
java-1.8.0-openjdk-src.x86_64 : OpenJDK Source Bundle 8
java-1.8.0-openjdk-src-debug.i686 : OpenJDK Source Bundle for packages with debug on
java-1.8.0-openjdk-src-debug.x86_64 : OpenJDK Source Bundle 8 for packages with debug on

Name and summary matches only, use "search all" for everything.

When you cannot remember a package's exact name but know some related terms, yum search is an effective way to find it.

Listing packages
yum list and its related commands provide information about packages, package groups, and repositories.

All yum list commands accept glob expressions as arguments to filter the output. In a glob expression, * matches any number of characters and ? matches exactly one character. The examples below give a feel for how glob expressions work.
yum list <glob_expr> [more_glob_exprs] ── list all packages matching the glob expressions
yum list all ── list all installed and available packages
yum list installed ── list all packages installed on the system. The rightmost column shows the repository the package came from; packages marked installed were preinstalled as part of the base system.
yum list available ── list all packages available in the enabled repositories
yum grouplist ── list all package groups
yum repolist ── list the ID, name, and package count of every enabled repository
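As a sketch: yum's globs follow the same rules as bash pattern matching, so a pattern can be previewed locally with bash itself before running it against the repositories (the yum invocations in the comments are illustrative, assuming the CentOS 7 repos shown above):

```shell
# Illustrative yum calls (assume the CentOS 7 repos configured above):
#   yum list 'java-1.8.0-openjdk*'    # every package whose name starts that way
#   yum list installed 'kernel?*'     # installed packages: "kernel" + at least one char
# The matching rule itself can be checked with bash's own glob engine:
match() { case "$1" in $2) echo yes ;; *) echo no ;; esac; }
match java-1.8.0-openjdk-devel 'java-1.8.0-openjdk*'   # yes
match kernel 'kernel?*'                                # no: "?" requires one more char
```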
Displaying package information

Use yum info <package_name> [more_names] to display information about one or more packages (glob expressions work here too).
[root@n2 ~]# yum info java-1.8.0-openjdk.x86_64
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
Available Packages
Name : java-1.8.0-openjdk
Arch : x86_64
Epoch : 1
Version : 1.8.0.222.b10
Release : 0.el7_6
Size : 274 k
Repo : updates/7/x86_64
Summary : OpenJDK Runtime Environment 8
URL : http://openjdk.java.net/
License : ASL 1.1 and ASL 2.0 and BSD and BSD with advertising and GPL+ and GPLv2 and GPLv2 with exceptions and IJG and LGPLv2+ and MIT and MPLv2.0 and Public Domain and W3C and zlib
Description : The OpenJDK runtime environment.
yum info <package_name> is very similar to rpm -q --info <package_name>, except that yum also reports which repository a package came from (the From repo line in the output).


Chrony, the NTP Time Service on RHEL 7

The Chrony application has been around for a few years; it is an alternative implementation of the Network Time Protocol (NTP). For a long time ntpd was the standard time service shipped with most distributions; starting with RHEL 7/CentOS 7, Chrony became the default, though the old ntpd is still available there. Chrony can act as both an NTP client and an NTP server. A default installation provides two programs, chronyd and chronyc: chronyd is a daemon that runs in the background, while chronyc is a tool for monitoring chronyd's performance and changing its settings.

1. Installation and enabling
Chrony can be installed and enabled as follows:

# yum install -y chrony                --> install the package
# systemctl start chronyd.service      --> start the service
# systemctl enable chronyd.service     --> enable at boot (enabled by default)
2. Main settings in chrony.conf
The chrony service reads its configuration from /etc/chrony.conf, whose format is largely similar to ntpd's. The default contents are:

[root@n2 ~]# cat /etc/chrony.conf | grep -v "^#" | grep -v "^$"
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony

Here I replace the default sources with an arbitrary reachable NTP server and comment out the others:
[root@n2 ~]# cat /etc/chrony.conf | grep -v "^#" | grep -v "^$"
server 1.cn.pool.ntp.org iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony

If there is a time server on the local network, simply delete the server lines above, add the LAN time server, and restart the chrony service. The main configuration parameters are:

server – Adds a time server; may be given multiple times, in the form "server <address>". In general you can add as many servers as you like.
stratumweight – Sets how much distance each stratum adds to the synchronization distance when chronyd selects a source. On CentOS it defaults to 0, so chronyd ignores stratum when choosing a source.
driftfile – One of chronyd's main jobs is measuring the rate at which the computer's clock gains or loses time. Recording that rate in a file lets chronyd compensate the system clock after a restart, and refine the estimate using the time servers where possible.
rtcsync – Enables a kernel mode in which the system time is copied to the real-time clock (RTC) every 11 minutes.
allow / deny – Specifies hosts, subnets, or networks that are allowed or denied NTP access when this machine acts as a time server.
cmdallow / cmddeny – Similar to the above, but controls which IP addresses or hosts may issue control commands to chronyd.
bindcmdaddress – Restricts which network interfaces chronyd listens on for command packets (sent by chronyc), providing an extra level of access control on top of the cmddeny mechanism.
makestep – Normally chronyd corrects any offset gradually by slowing down or speeding up the clock. In some cases the clock may drift so badly that this adjustment would take a very long time; this directive makes chronyd step the clock whenever the adjustment is larger than a given threshold, but only during the first few clock updates after startup (a negative value disables that limit).
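Putting the directives above together, a minimal /etc/chrony.conf for a hypothetical LAN whose time server is 10.1.24.1, serving clients in 10.1.24.0/24, might look like this (the addresses are examples, not values from this deployment):

```
# upstream: the LAN time server (example address)
server 10.1.24.1 iburst
driftfile /var/lib/chrony/drift
# step the clock if it is off by more than 1.0s, but only in the first 3 updates
makestep 1.0 3
rtcsync
# only needed if this host itself serves time to the LAN
allow 10.1.24.0/24
logdir /var/log/chrony
```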

After editing, restart the service:
[root@n2 ~]# systemctl restart chronyd

3. Checking synchronization status
Check the status of the NTP source servers.
Before the change:
[root@n2 ~]# chronyc sources -v
210 Number of sources = 4

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- a.chl.la 2 6 152 164 +49ms[ +51ms] +/- 186ms
^* electabuzz.felixc.at 3 6 377 103 +20ms[ +22ms] +/- 161ms
^- 119.28.206.193 2 6 37 37 -8960us[-8960us] +/- 49ms
^+ electrode.felixc.at 3 6 377 36 -3904us[-3904us] +/- 149ms

After the change:
[root@n2 ~]# chronyc sourcestats
210 Number of sources = 1
Name/IP Address NP NR Span Frequency Freq Skew Offset Std Dev
==============================================================================
ntp1.flashdance.cx 4 3 9 +418.469 11440.988 +22ms 2051us

Check detailed NTP synchronization status:

[root@n2 ~]# chronyc sources -v
210 Number of sources = 1

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* ntp1.flashdance.cx 2 6 35 23 -1736us[-2342us] +/- 216ms

4. Using chronyc
Settings can also be changed at runtime via the chronyc command; useful subcommands include:

accheck – check whether NTP access is allowed from a given host

activity – show how many NTP sources are online/offline

add server – manually add a new NTP server

clients – report on clients that have accessed this server

delete – manually remove an NTP server or peer

settime – manually set the daemon's time

tracking – display system time information

Type help at the chronyc prompt for the full list of interactive commands.

[root@n2 ~]# chronyc
chrony version 3.2
Copyright (C) 1997-2003, 2007, 2009-2017 Richard P. Curnow and others
chrony comes with ABSOLUTELY NO WARRANTY. This is free software, and
you are welcome to redistribute it under certain conditions. See the
GNU General Public License version 2 for details.

chronyc> activity
200 OK
1 sources online
0 sources offline
0 sources doing burst (return to online)
0 sources doing burst (return to offline)
0 sources with unknown address
chronyc> help
System clock:
tracking Display system time information
makestep Correct clock by stepping immediately
makestep <threshold> <updates>
Configure automatic clock stepping
maxupdateskew <skew> Modify maximum valid skew to update frequency
waitsync [<max-tries> [<max-correction> [<max-skew> [<interval>]]]]
Wait until synchronised in specified limits

Time sources:
sources [-v] Display information about current sources
sourcestats [-v] Display statistics about collected measurements
reselect Force reselecting synchronisation source
reselectdist <dist> Modify reselection distance

NTP sources:
activity Check how many NTP sources are online/offline
ntpdata [<address>] Display information about last valid measurement
add server <address> [options]
Add new NTP server
add peer <address> [options]
Add new NTP peer
delete <address> Remove server or peer
burst <n-good>/<n-max> [<mask>/<address>]
Start rapid set of measurements
maxdelay <address> <delay> Modify maximum valid sample delay
maxdelayratio <address> <ratio>
Modify maximum valid delay/minimum ratio
maxdelaydevratio <address> <ratio>
Modify maximum valid delay/deviation ratio
minpoll <address> <poll> Modify minimum polling interval
maxpoll <address> <poll> Modify maximum polling interval
minstratum <address> <stratum>
Modify minimum stratum
offline [<mask>/<address>] Set sources in subnet to offline status
online [<mask>/<address>] Set sources in subnet to online status
polltarget <address> <target>
Modify poll target
refresh Refresh IP addresses

Manual time input:
manual off|on|reset Disable/enable/reset settime command
manual list Show previous settime entries
manual delete <index> Delete previous settime entry
settime <time> Set daemon time
(e.g. Sep 25, 2015 16:30:05 or 16:30:05)

NTP access:
accheck <address> Check whether address is allowed
clients Report on clients that have accessed the server
serverstats Display statistics of the server
allow [<subnet>] Allow access to subnet as a default
allow all [<subnet>] Allow access to subnet and all children
deny [<subnet>] Deny access to subnet as a default
deny all [<subnet>] Deny access to subnet and all children
local [options] Serve time even when not synchronised
local off Don't serve time when not synchronised
smoothtime reset|activate Reset/activate time smoothing
smoothing Display current time smoothing state

Monitoring access:
cmdaccheck <address> Check whether address is allowed
cmdallow [<subnet>] Allow access to subnet as a default
cmdallow all [<subnet>] Allow access to subnet and all children
cmddeny [<subnet>] Deny access to subnet as a default
cmddeny all [<subnet>] Deny access to subnet and all children

Real-time clock:
rtcdata Print current RTC performance parameters
trimrtc Correct RTC relative to system clock
writertc Save RTC performance parameters to file

Other daemon commands:
cyclelogs Close and re-open log files
dump Dump all measurements to save files
rekey Re-read keys from key file

Client commands:
dns -n|+n Disable/enable resolving IP addresses to hostnames
dns -4|-6|-46 Resolve hostnames only to IPv4/IPv6/both addresses
timeout <milliseconds> Set initial response timeout
retries <retries> Set maximum number of retries
keygen [<id> [<type> [<bits>]]]
Generate key for key file
exit|quit Leave the program
help Generate this help

chronyc>

5. Other time-related commands
Relevant commands:

Show date, time, time zone, and NTP status: # timedatectl
List available time zones: # timedatectl list-timezones
Change the time zone: # timedatectl set-timezone Asia/Shanghai
Change the date and time: # timedatectl set-time "2015-01-21 11:50:00" (either part may be set on its own)
Enable NTP: # timedatectl set-ntp true|false
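As a sketch combining the commands above: time zone names come from /usr/share/zoneinfo, and a zone's UTC offset can be previewed with plain date before changing anything system-wide (the timedatectl calls are left commented because they require root and systemd):

```shell
# Preview a zone's UTC offset without touching system settings:
TZ=Asia/Shanghai date +%z          # +0800 (China Standard Time, no DST)
# Then apply it and hand the clock to chronyd:
# timedatectl set-timezone Asia/Shanghai
# timedatectl set-ntp true
# timedatectl                      # verify "NTP enabled: yes"
```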
There is also another interesting command, system-config-date: RHEL 7 provides it as a graphical tool for configuring the chrony service. Install it as follows:

[root@n2 ~]# yum -y install system-config-date
Loaded plugins: fastestmirror
Determining fastest mirrors
* base: mirrors.aliyun.com
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
base | 3.6 kB 00:00:00
extras | 3.4 kB 00:00:00
updates | 3.4 kB 00:00:00
(1/2): extras/7/x86_64/primary_db | 215 kB 00:00:00
(2/2): updates/7/x86_64/primary_db | 7.4 MB 00:00:03
Resolving Dependencies
--> Running transaction check
(output omitted)
After installation, run the system-config-date command; the interface looks like this:

[Screenshot: the system-config-date interface]

6. Advantages of chrony
Chrony's advantages include:

Faster synchronization: minutes rather than hours, minimizing time and frequency error; useful for desktops and systems that do not run 24 hours a day.
Better response to rapid changes in clock frequency; useful for virtual machines with unstable clocks, or for power-saving technologies that vary the clock frequency.
After initial synchronization it never steps the clock, so applications that need monotonic system time are not disturbed.
Better stability under temporarily asymmetric delays (for example, when a large download saturates the link).
No need to poll servers regularly, so systems with intermittent network connectivity can still synchronize quickly.
References:
Red Hat chrony documentation
chrony official manual

Adapted, with modifications, from http://www.361way.com/rhel7-chrony/4778.html


HISTTIMEFORMAT: Timestamping Bash History

echo 'HISTTIMEFORMAT="%F %T `whoami` "' >>/etc/bashrc

Note the space after `whoami`; without it the username runs straight into the command.

In other words, add this line to /etc/bashrc:

HISTTIMEFORMAT="%F %T `whoami` "

97 2019-08-15 10:20:14 root HISTTIMEFORMAT="%F %T `whoami` "
98 2019-08-15 10:20:16 root ls
99 2019-08-15 10:20:17 root pwd
100 2019-08-15 10:20:20 root cd /
101 2019-08-15 10:20:21 root ls
102 2019-08-15 10:20:24 root cd /etc/
103 2019-08-15 10:20:26 root ls
104 2019-08-15 10:20:29 root history
Each line shows the command number, the time, the user who ran it, and the command itself.
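The %F and %T codes in the format string are standard strftime fields, the same ones date uses, so the prefix each history line will carry can be previewed without touching the history at all:

```shell
# HISTTIMEFORMAT is an strftime string: %F = YYYY-MM-DD, %T = HH:MM:SS.
# Preview what the prefix of each history entry will look like:
prefix=$(date +"%F %T")
echo "$prefix $(whoami) ls -l"     # e.g. 2019-08-15 10:20:16 root ls -l
# Sanity-check the shape of the timestamp:
echo "$prefix" | grep -Eq '^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}$' && echo ok
```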
