Deploying ELK with Docker and enabling SSL authentication

Deploying ELK with docker-compose
  The yml file needed for deployment — docker-compose-elk.yml
  Deployment
Configuring elasticsearch and kibana and enabling SSL
  Configuring basic authentication
  Configuring elasticsearch and kibana for HTTPS access
Configuring logstash
Creating a Spring Boot project for testing
Creating a Kibana view and querying the logs

Deploying ELK with docker-compose
The yml file needed for deployment — docker-compose-elk.yml
version: "3.7"
services:
  elasticsearch:
    image: elasticsearch:8.9.0
    container_name: elasticsearch
    hostname: elasticsearch
    restart: "no"
    volumes:
      - /opt/space/soft/docker/elasticsearch/data:/usr/share/elasticsearch/data:rw
      - /opt/space/soft/docker/elasticsearch/config:/usr/share/elasticsearch/config:rw
      - /opt/space/soft/docker/elasticsearch/logs:/usr/share/elasticsearch/logs:rw
      - /opt/space/soft/docker/elasticsearch/plugins:/usr/share/elasticsearch/plugins:rw
    ports:
      - 21033:9200
      - 21133:9300
    networks:
      elastic:
        aliases:
          - elasticsearch
  kibana:
    image: kibana:8.9.0
    container_name: kibana
    hostname: kibana
    restart: "no"
    volumes:
      - /opt/space/soft/docker/kibana/data:/usr/share/kibana/data:rw
      - /opt/space/soft/docker/kibana/config:/usr/share/kibana/config:rw
      - /opt/space/soft/docker/kibana/logs:/usr/share/kibana/logs:rw
    ports:
      - 21041:5601
    depends_on:
      - elasticsearch
    networks:
      elastic:
        aliases:
          - kibana
  logstash:
    image: logstash:8.9.0
    container_name: logstash
    hostname: logstash
    restart: "no"
    volumes:
      - /opt/space/soft/docker/logstash/config:/usr/share/logstash/config:rw
      - /opt/space/soft/docker/logstash/logs:/usr/share/logstash/logs:rw
      - /opt/space/soft/docker/logstash/pipeline:/usr/share/logstash/pipeline:rw
    ports:
      - 21066:5044
      - 21067:9066
      - 21068:21068
    depends_on:
      - elasticsearch
    networks:
      elastic:
        aliases:
          - logstash
networks:
  elastic:
    name: elastic
    driver: bridge

Port mappings and volume mounts for all three services are configured here.
Deployment
1. Deploy with the docker-compose command:
docker-compose -f docker-compose-elk.yml up -d

Before this first deployment, comment out the volume mounts in the yml file, so that the containers' default configuration files can be copied out to the local disk.
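For that first run, the volumes block of each service can simply be commented out; a sketch for the elasticsearch service (kibana and logstash are handled the same way):

  elasticsearch:
    image: elasticsearch:8.9.0
    container_name: elasticsearch
    # volumes:
    #   - /opt/space/soft/docker/elasticsearch/data:/usr/share/elasticsearch/data:rw
    #   - /opt/space/soft/docker/elasticsearch/config:/usr/share/elasticsearch/config:rw
    #   - /opt/space/soft/docker/elasticsearch/logs:/usr/share/elasticsearch/logs:rw
    #   - /opt/space/soft/docker/elasticsearch/plugins:/usr/share/elasticsearch/plugins:rw
    ports:
      - 21033:9200
      - 21133:9300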
2. Create the required directories, copy the configuration files to the local machine, and set read/write permissions to 777:
cd /opt/space/soft/docker/
sudo mkdir -p elasticsearch/{config,data,logs,plugins}
sudo mkdir -p kibana/{config,data,logs}
sudo mkdir -p logstash/{config,pipeline,logs}

sudo docker cp elasticsearch:/usr/share/elasticsearch/config ./elasticsearch/
sudo docker cp kibana:/usr/share/kibana/config ./kibana/
sudo docker cp logstash:/usr/share/logstash/config ./logstash/
sudo docker cp logstash:/usr/share/logstash/pipeline ./logstash/

sudo chmod 777 -R /opt/space/soft/docker/elasticsearch/*
sudo chmod 777 -R /opt/space/soft/docker/kibana/*
sudo chmod 777 -R /opt/space/soft/docker/logstash/*

3. Modify the JVM parameters (the author's machine does not have much memory, so the heap has to be reduced; a sketch of the settings follows below).
3.1 Modify the elasticsearch JVM parameters: open /opt/space/soft/docker/elasticsearch/config/jvm.options and adjust the heap settings (shown as a screenshot in the original post).
3.2 Modify the logstash JVM parameters: open /opt/space/soft/docker/logstash/config/jvm.options and adjust the heap settings (shown as a screenshot in the original post).
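For reference, a minimal sketch of what the reduced heap settings might look like in both jvm.options files (the 512m values are an assumption; size them to the memory actually available on your host):

# jvm.options (excerpt) - assumed example values, adjust to your host
-Xms512m
-Xmx512m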
4. Uncomment the volume mounts in docker-compose-elk.yml and run the following commands.

docker-compose -f docker-compose-elk.yml stop
docker-compose -f docker-compose-elk.yml rm
docker-compose -f docker-compose-elk.yml up -d
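Optionally, check that all three containers are up before continuing (not a step in the original):

docker-compose -f docker-compose-elk.yml ps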
Configuring elasticsearch and kibana and enabling SSL

Configuring basic authentication
Reference documentation (linked in the original post).
1. Enter the elasticsearch docker container:
docker exec -it elasticsearch bash

2. Generate elastic-stack-ca.p12:
./bin/elasticsearch-certutil ca

3. Generate elastic-certificates.p12:
./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12

4. Move elastic-certificates.p12 into the directory /usr/share/elasticsearch/config/certs, and move elastic-stack-ca.p12 into the config directory (do not delete it; it is needed again later).
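The certs directory may not exist inside the container yet; if it does not, create it first (not shown in the original):

mkdir -p config/certs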
mv elastic-certificates.p12 config/certs/
mv elastic-stack-ca.p12 ./config/

Keep the elastic-stack-ca.p12 file; it will be needed later.
5. Edit the elasticsearch.yml configuration file as follows (shown as a screenshot in the original post):
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/elastic-certificates.p12
  truststore.path: certs/elastic-certificates.p12

6. If a password was set for the certificate, the following two commands also need to be run.
./bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
./bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password

A permissions error may occur here (shown as a screenshot in the original post); if it does, exit the container and change the owner of the file it names:
sudo chown <your current Linux user> elasticsearch/config/elasticsearch.keystore

Exit the elasticsearch container and give the certificates file 777 permissions:
sudo chmod 777 elasticsearch/config/certs/elastic-certificates.p12

Restart the elasticsearch container:
docker restart elasticsearch
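Optionally, tail the elasticsearch logs from the host to confirm the node restarts without TLS errors (not a step in the original):

docker logs --tail 100 elasticsearch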
Configuring elasticsearch and kibana for HTTPS access

Reference documentation (linked in the original post).
1. Enter the elasticsearch docker container again:
docker exec -it elasticsearch bash

2. Generate elasticsearch-ssl-http.zip. The interactive session is long, so it is reproduced as text below rather than as screenshots.
elasticsearch@elasticsearch:~$ ./bin/elasticsearch-certutil http

## Elasticsearch HTTP Certificate Utility

The http command guides you through the process of generating certificates
for use on the HTTP (Rest) interface for Elasticsearch.

This tool will ask you a number of questions in order to generate the right
set of files for your needs.

## Do you wish to generate a Certificate Signing Request (CSR)?

A CSR is used when you want your certificate to be created by an existing
Certificate Authority (CA) that you do not control (that is, you don't have
access to the keys for that CA).

If you are in a corporate environment with a central security team, then you
may have an existing Corporate CA that can generate your certificate for you.
Infrastructure within your organisation may already be configured to trust this
CA, so it may be easier for clients to connect to Elasticsearch if you use a
CSR and send that request to the team that controls your CA.

If you choose not to generate a CSR, this tool will generate a new certificate
for you. That certificate will be signed by a CA under your control. This is a
quick and easy way to secure your cluster with TLS, but you will need to
configure all your clients to trust that custom CA.

## Generate a CSR? Enter n.
Generate a CSR? [y/N]n

## Do you have an existing Certificate Authority (CA) key-pair that you wish to use to sign your certificate?

If you have an existing CA certificate and key, then you can use that CA to
sign your new http certificate. This allows you to use the same CA across
multiple Elasticsearch clusters which can make it easier to configure clients,
and may be easier for you to manage.

If you do not have an existing CA, one will be generated for you.

## Use the existing CA? Enter y (it was generated during the basic authentication setup).
Use an existing CA? [y/N]y

## What is the path to your CA?

Please enter the full pathname to the Certificate Authority that you wish to
use for signing your new http certificate. This can be in PKCS#12 (.p12), JKS
(.jks) or PEM (.crt, .key, .pem) format.
## Enter the path to the CA file.
CA Path: /usr/share/elasticsearch/config/elastic-stack-ca.p12
Reading a PKCS12 keystore requires a password.
It is possible for the keystore's password to be blank,
in which case you can simply press ENTER at the prompt
## Enter the password that was set for the file.
Password for elastic-stack-ca.p12:

## How long should your certificates be valid?

Every certificate has an expiry date. When the expiry date is reached clients
will stop trusting your certificate and TLS connections will fail.

Best practice suggests that you should either:
(a) set this to a short duration (90 - 120 days) and have automatic processes
to generate a new certificate before the old one expires, or
(b) set it to a longer duration (3 - 5 years) and then perform a manual update
a few months before it expires.

You may enter the validity period in years (e.g. 3Y), months (e.g. 18M), or days (e.g. 90D)

## Set the validity period.
For how long should your certificate be valid? [5y] 5y

## Do you wish to generate one certificate per node?

If you have multiple nodes in your cluster, then you may choose to generate a
separate certificate for each of these nodes. Each certificate will have its
own private key, and will be issued for a specific hostname or IP address.

Alternatively, you may wish to generate a single certificate that is valid
across all the hostnames or addresses in your cluster.

If all of your nodes will be accessed through a single domain
(e.g. node01.es.example.com, node02.es.example.com, etc) then you may find it
simpler to generate one certificate with a wildcard hostname (*.es.example.com)
and use that across all of your nodes.

However, if you do not have a common domain name, and you expect to add
additional nodes to your cluster in the future, then you should generate a
certificate per node so that you can more easily generate new certificates when
you provision new nodes.

## Generate a certificate per node? Enter n.
Generate a certificate per node? [y/N]n

## Which hostnames will be used to connect to your nodes?

These hostnames will be added as DNS names in the Subject Alternative Name
(SAN) field in your certificate.

You should list every hostname and variant that people will use to connect to
your cluster over http.
Do not list IP addresses here, you will be asked to enter them later.

If you wish to use a wildcard certificate (for example *.es.example.com) you
can enter that here.

## Hostname of the node; enter elasticsearch and then press Enter twice.
Enter all the hostnames that you need, one per line.
When you are done, press ENTER once more to move on to the next step.

elasticsearch

You entered the following hostnames.

 - elasticsearch

## Is this configuration correct?
Is this correct [Y/n]y

## Which IP addresses will be used to connect to your nodes?

If your clients will ever connect to your nodes by numeric IP address, then you
can list these as valid IP Subject Alternative Name (SAN) fields in your
certificate.

If you do not have fixed IP addresses, or not wish to support direct IP access
to your cluster then you can just press ENTER to skip this step.

## IP address of the node (find it on the host with docker inspect elasticsearch); enter 172.27.0.2 and then press Enter twice.
Enter all the IP addresses that you need, one per line.
When you are done, press ENTER once more to move on to the next step.

172.27.0.2

You entered the following IP addresses.

 - 172.27.0.2
## Is this configuration correct?
Is this correct [Y/n]y

## Other certificate options

The generated certificate will have the following additional configuration
values. These values have been selected based on a combination of the
information you have provided above and secure defaults. You should not need to
change these values unless you have specific requirements.

Key Name: elasticsearch
Subject DN: CN=elasticsearch
Key Size: 2048

## Do you want to change any of these options?
Do you wish to change any of these options? [y/N]n

## What password do you want for your private key(s)?

Your private key(s) will be stored in a PKCS#12 keystore file named http.p12.
This type of keystore is always password protected, but it is possible to use a
blank password.

If you wish to use a blank password, simply press enter at the prompt below.
## Enter a password for the generated file (optional; if you set one, it has to be configured later).
Provide a password for the http.p12 file: [ENTER for none]
## Repeat the password for the generated file.
Repeat password to confirm:

## Where should we save the generated files?

A number of files will be generated including your private key(s),
public certificate(s), and sample configuration options for Elastic Stack products.

These files will be included in a single zip archive.

## Path and filename of the generated zip archive; just press Enter.
What filename should be used for the output zip file? [/usr/share/elasticsearch/elasticsearch-ssl-http.zip]

Zip file written to /usr/share/elasticsearch/elasticsearch-ssl-http.zip

3. Move the elasticsearch-ssl-http.zip archive:

mv elasticsearch-ssl-http.zip ./config/

4. Unzip it, move elasticsearch's http.p12 file, and delete the remaining files:
unzip config/elasticsearch-ssl-http.zip
mv elasticsearch/http.p12 ./config/certs/
rm -rf elasticsearch/ kibana/
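One step the text above does not show is enabling SSL on the HTTP layer in elasticsearch.yml. A minimal sketch of the usual entries (verify them against the sample configuration bundled inside elasticsearch-ssl-http.zip):

xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12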
If a password was set when generating http.p12, the following command also needs to be run; otherwise skip it.

./bin/elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password

7. Exit the elasticsearch container and give the http.p12 file 777 permissions:
sudo chmod 777 elasticsearch/config/certs/http.p12

Restart the elasticsearch container:
docker restart elasticsearch

10. Enter the container and set a password for the elastic user to verify HTTPS access (see the reference documentation on changing passwords, linked in the original post):
bin/elasticsearch-reset-password -u elastic -i

In a browser, visit https://<host IP>:<mapped port>. Here the host IP is 192.168.0.100 and the mapped port is 21033, so the address is https://192.168.0.100:21033/. The page asks for credentials: the account is elastic and the password is the one just set. After logging in you can see the elasticsearch configuration.

11. Copy elasticsearch-ca.pem, found in the kibana folder inside the zip archive generated in step 2, into the kibana configuration directory.
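The copy itself is not shown in the original. Since the kibana/ folder extracted inside the container was deleted above, one way is to pull just that file out of the zip on the host (a sketch; adjust the paths to your layout):

cd /opt/space/soft/docker
sudo unzip -j elasticsearch/config/elasticsearch-ssl-http.zip kibana/elasticsearch-ca.pem -d kibana/config/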
12. Configure the password of the kibana_system user. Enter the elasticsearch container and run the following command:

bin/elasticsearch-reset-password -u kibana_system -i

Generate the public and private key for kibana's own HTTPS access:
./bin/elasticsearch-certutil csr -name kibana-server

This produces a zip archive; after unzipping it contains the following files:
/kibana-server
|_ kibana-server.csr
|_ kibana-server.key

Copy the two files kibana-server.csr and kibana-server.key into the kibana configuration directory, then run the following command to generate the certificate:
openssl x509 -req -days 3650 -in kibana-server.csr -signkey kibana-server.key -out kibana-server.crt

Open the kibana configuration file kibana.yml and add or modify the following settings (substitute your own IP address):
server.host: 0.0.0.0
server.shutdownTimeout: 5s
elasticsearch.hosts: [ "https://172.27.0.2:9200" ]
monitoring.ui.container.elasticsearch.enabled: true
elasticsearch.ssl.certificateAuthorities: [ "/usr/share/kibana/config/elasticsearch-ca.pem" ]
elasticsearch.username: kibana_system
elasticsearch.password: <the password set in step 12>
server.ssl.certificate: /usr/share/kibana/config/kibana-server.crt
server.ssl.key: /usr/share/kibana/config/kibana-server.key
server.ssl.enabled: true
# Serve the Kibana UI in Chinese
i18n.locale: zh-CN

Give the four generated files 777 permissions:
sudo chmod 777 elasticsearch-ca.pem kibana-server.csr kibana-server.key kibana-server.crt

Restart kibana and log in to the kibana console to verify. In a browser, visit https://<host IP>:<mapped port>. Here the host IP is 192.168.0.100 and the mapped port is 21041, so the address is https://192.168.0.100:21041/. The page asks for credentials: the account is elastic and the password is the one set earlier. After logging in, choose to explore on your own.
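The kibana restart can be done from the host, the same way elasticsearch was restarted earlier:

docker restart kibana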
Configuring logstash

1. Set the password of the built-in elasticsearch user logstash_system (two user passwords have already been set above, so treat this one as an exercise).
2. Generate the file used for the secure connection between logstash and elasticsearch (remember the elastic-certificates.p12 file?):
# If a password was set on the certificate, this command will prompt for it
openssl pkcs12 -in elasticsearch/config/certs/elastic-certificates.p12 -cacerts -nokeys -chain -out logstash.pem

Move the logstash.pem file into the logstash configuration directory and give it 777 permissions:
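The move itself is not shown in the original; assuming the openssl command above was run from /opt/space/soft/docker, it might look like this (then run the chmod below from inside logstash/config/, or pass the full path):

sudo mv logstash.pem logstash/config/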
sudo chmod 777 logstash.pem

3. Configure the logstash.yml file:
http.host: 0.0.0.0
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: <the password set in step 1>
# NOTE: the hosts value below MUST use https
xpack.monitoring.elasticsearch.hosts: [ "https://172.27.0.2:9200" ]
# Path to your ca .pem file
xpack.monitoring.elasticsearch.ssl.certificate_authority: /usr/share/logstash/config/logstash.pem
xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
# Sniffing of es nodes; set to false
xpack.monitoring.elasticsearch.sniffing: false

4. Use kibana to create a new elasticsearch user for logstash to use.
5. Add a logstash.conf file under the pipeline directory with the following configuration:
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {
  tcp {
    port => 21068
    codec => json_lines
  }
}

output {
  elasticsearch {
    hosts => ["https://172.27.0.2:9200"]
    index => "<your index name>-%{+YYYY.MM.dd}"
    user => "<your user>"
    password => "<your password>"
    ssl_enabled => true
    ssl_certificate_authorities => ["/usr/share/logstash/config/logstash.pem"]
  }
}

Restart logstash.
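As with the other containers, the restart can be done from the host:

docker restart logstash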
Creating a Spring Boot project for testing
1. Add the dependency to the pom file:

<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>7.4</version>
</dependency>

2. Configure the logback.xml file and add the following appender. Set destination to the address of logstash; customize which fields you need.

<springProperty scope="context" name="appName" source="spring.application.name"/>
<springProperty scope="context" name="port" source="server.port"/>
<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>ip:port</destination>
    <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
        <timestamp>
            <timeZone>Asia/Shanghai</timeZone>
        </timestamp>
        <providers>
            <pattern>
                <pattern>
                    {"index":"index-%d{yyyy-MM-dd}","appName":"${appName}","pid":"${PID:-}","port":"${port}","timestamp":"%d{yyyy-MM-dd HH:mm:ss.SSS}","thread":"%thread","level":"%level","class":"%logger","message":"%message","stack_trace":"%exception"}
                </pattern>
            </pattern>
        </providers>
    </encoder>
    <keepAliveDuration>5 minutes</keepAliveDuration>
</appender>

3. Add the name of the new appender to root:

<root level="INFO">
    <appender-ref ref="ERROR"/>
    <appender-ref ref="WARN"/>
    <appender-ref ref="INFO"/>
    <appender-ref ref="DEBUG"/>
    <appender-ref ref="LOGSTASH"/>
    <appender-ref ref="STDOUT"/>
</root>

4. Start the application and verify.
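One optional way to verify (not shown in the original) is to ask elasticsearch whether the new index has appeared, using the elastic account and the HTTPS endpoint configured earlier:

curl -k -u elastic "https://192.168.0.100:21033/_cat/indices?v"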
Creating a Kibana view and querying the logs