Original article: K8S--部署Nacos - CSDN博客
Introduction
This article describes how to deploy Nacos on K8S. The Nacos version is 2.2.3.
Deployment plan
To keep things simple, this article uses the following approach: a local PV plus a ConfigMap, with standalone Nacos running in embedded storage mode, and ports exposed via NodePort.
For a production environment, you could instead use NFS for storage, run a Nacos cluster backed by MySQL, and expose the ports via Ingress (a minimal Ingress sketch follows).
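As a rough illustration of that production-style setup, a minimal Ingress for the HTTP console could look like the sketch below. The host name and resource name are placeholders I made up; the 9848/9849 gRPC ports are not plain HTTP, so they still need NodePort or a TCP-capable ingress. This is only a sketch, not part of the deployment in this article.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nacos-ingress           # hypothetical name
  namespace: middle
spec:
  rules:
    - host: nacos.example.com   # placeholder host
      http:
        paths:
          - path: /nacos
            pathType: Prefix
            backend:
              service:
                name: nacos-service   # the Service created later in this article
                port:
                  number: 8848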
Official references
Kubernetes Nacos | Nacos
https://github.com/nacos-group/nacos-k8s/blob/master/deploy/nacos/nacos-pvc-nfs.yaml
Deployment result: (screenshot)
My working directory: /work/devops/k8s/app/nacos
1. Create the namespace
Create a namespace.yaml file with the following content:
# Create the namespace
apiVersion: v1
kind: Namespace
metadata:
  name: middle
  labels:
    name: middle
Create the namespace:
kubectl apply -f namespace.yaml
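Optionally, confirm the namespace exists afterwards:
kubectl get namespace middle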
Result: (screenshot)
2. Create the configuration with a ConfigMap
A ConfigMap has to be used here, because a PV can only mount a directory, not a single file, while a ConfigMap can mount an individual configuration file. (If you want to mount a single file another way, hostPath also works, as sketched below.)
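For comparison, here is a sketch of the hostPath variant that mounts a single file straight from the node. The paths reuse this article's working directory; treat this as an illustration only, not part of the manifests used below.
# Pod spec fragment: mount one file from the node via hostPath with type File
volumes:
  - name: nacos-config
    hostPath:
      path: /work/devops/k8s/app/nacos/conf/application.properties
      type: File
containers:
  - name: nacos
    volumeMounts:
      - name: nacos-config
        mountPath: /home/nacos/conf/application.properties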
1. Modify the configuration
First, download the Nacos release archive from GitHub (address: here); an example command follows.
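For example, assuming the usual GitHub release URL layout for Nacos 2.2.3 (check the releases page for the exact asset name):
# Download and unpack the Nacos 2.2.3 server release
wget https://github.com/alibaba/nacos/releases/download/2.2.3/nacos-server-2.2.3.tar.gz
tar -zxvf nacos-server-2.2.3.tar.gz   # unpacks into a nacos/ directory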
Modify the configuration file nacos-server-2.2.3\nacos\conf\application.properties. The modification points are summarized below. Note: the first one must be changed to true; the ones after it can be set to arbitrary values, except that the last one must be longer than 32 characters or an error will be reported.
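In summary, the changed keys are the following (values match the full file that follows; the identity key/value and the token secret are arbitrary examples, and the secret must be longer than 32 characters):
nacos.core.auth.enabled=true
nacos.core.auth.server.identity.key=exampleKey
nacos.core.auth.server.identity.value=exampleValue
nacos.core.auth.plugin.nacos.token.secret.key=Ho9pJlDFurhga1847fhj3jtlsvc18jguehfjgkhh17365jdf8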
The modified configuration file is as follows:
#
# Copyright 1999-2021 Alibaba Group Holding Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

#*************** Spring Boot Related Configurations ***************#
### Default web context path:
server.servlet.contextPath=/nacos

### Include message field
server.error.include-message=ALWAYS

### Default web server port:
server.port=8848

#*************** Network Related Configurations ***************#
### If prefer hostname over ip for Nacos server addresses in cluster.conf:
# nacos.inetutils.prefer-hostname-over-ip=false

### Specify local server's IP:
# nacos.inetutils.ip-address=

#*************** Config Module Related Configurations ***************#
### If use MySQL as datasource:
### Deprecated configuration property, it is recommended to use `spring.sql.init.platform` replaced.
# spring.datasource.platform=mysql
# spring.sql.init.platform=mysql

### Count of DB:
# db.num=1

### Connect URL of DB:
# db.url.0=jdbc:mysql://127.0.0.1:3306/nacos?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=UTC
# db.user.0=nacos
# db.password.0=nacos

### Connection pool configuration: hikariCP
db.pool.config.connectionTimeout=30000
db.pool.config.validationTimeout=10000
db.pool.config.maximumPoolSize=20
db.pool.config.minimumIdle=2

#*************** Naming Module Related Configurations ***************#

### If enable data warmup. If set to false, the server would accept request without local data preparation:
# nacos.naming.data.warmup=true

### If enable the instance auto expiration, kind like of health check of instance:
# nacos.naming.expireInstance=true

### Add in 2.0.0
### The interval to clean empty service, unit: milliseconds.
# nacos.naming.clean.empty-service.interval=60000

### The expired time to clean empty service, unit: milliseconds.
# nacos.naming.clean.empty-service.expired-time=60000

### The interval to clean expired metadata, unit: milliseconds.
# nacos.naming.clean.expired-metadata.interval=5000

### The expired time to clean metadata, unit: milliseconds.
# nacos.naming.clean.expired-metadata.expired-time=60000

### The delay time before push task to execute from service changed, unit: milliseconds.
# nacos.naming.push.pushTaskDelay=500

### The timeout for push task execute, unit: milliseconds.
# nacos.naming.push.pushTaskTimeout=5000

### The delay time for retrying failed push task, unit: milliseconds.
# nacos.naming.push.pushTaskRetryDelay=1000

### Since 2.0.3
### The expired time for inactive client, unit: milliseconds.
# nacos.naming.client.expired.time=180000

#*************** CMDB Module Related Configurations ***************#
### The interval to dump external CMDB in seconds:
# nacos.cmdb.dumpTaskInterval=3600

### The interval of polling data change event in seconds:
# nacos.cmdb.eventTaskInterval=10

### The interval of loading labels in seconds:
# nacos.cmdb.labelTaskInterval=300

### If turn on data loading task:
# nacos.cmdb.loadDataAtStart=false

#*************** Metrics Related Configurations ***************#
### Metrics for prometheus
#management.endpoints.web.exposure.include=*

### Metrics for elastic search
management.metrics.export.elastic.enabled=false
#management.metrics.export.elastic.host=http://localhost:9200

### Metrics for influx
management.metrics.export.influx.enabled=false
#management.metrics.export.influx.db=springboot
#management.metrics.export.influx.uri=http://localhost:8086
#management.metrics.export.influx.auto-create-db=true
#management.metrics.export.influx.consistency=one
#management.metrics.export.influx.compressed=true

#*************** Access Log Related Configurations ***************#
### If turn on the access log:
server.tomcat.accesslog.enabled=true

### The access log pattern:
server.tomcat.accesslog.pattern=%h %l %u %t "%r" %s %b %D %{User-Agent}i %{Request-Source}i

### The directory of access log:
server.tomcat.basedir=file:.

#*************** Access Control Related Configurations ***************#
### If enable spring security, this option is deprecated in 1.2.0:
#spring.security.enabled=false

### The ignore urls of auth
nacos.security.ignore.urls=/,/error,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-ui/public/**,/v1/auth/**,/v1/console/health/**,/actuator/**,/v1/console/server/**

### The auth system to use, currently only 'nacos' and 'ldap' is supported:
nacos.core.auth.system.type=nacos

### If turn on auth system:
#nacos.core.auth.enabled=false
nacos.core.auth.enabled=true

### Turn on/off caching of auth information. By turning on this switch, the update of auth information would have a 15 seconds delay.
nacos.core.auth.caching.enabled=true

### Since 1.4.1, Turn on/off white auth for user-agent: nacos-server, only for upgrade from old version.
nacos.core.auth.enable.userAgentAuthWhite=false

### Since 1.4.1, worked when nacos.core.auth.enabled=true and nacos.core.auth.enable.userAgentAuthWhite=false.
### The two properties is the white list for auth and used by identity the request from other server.
#nacos.core.auth.server.identity.key=
#nacos.core.auth.server.identity.value=
nacos.core.auth.server.identity.key=exampleKey
nacos.core.auth.server.identity.value=exampleValue

### worked when nacos.core.auth.system.type=nacos
### The token expiration in seconds:
nacos.core.auth.plugin.nacos.token.cache.enable=false
nacos.core.auth.plugin.nacos.token.expire.seconds=18000

### The default token (Base64 String):
#nacos.core.auth.plugin.nacos.token.secret.key=
nacos.core.auth.plugin.nacos.token.secret.key=Ho9pJlDFurhga1847fhj3jtlsvc18jguehfjgkhh17365jdf8

### worked when nacos.core.auth.system.type=ldap, {0} is Placeholder, replace login username
#nacos.core.auth.ldap.url=ldap://localhost:389
#nacos.core.auth.ldap.basedc=dc=example,dc=org
#nacos.core.auth.ldap.userDn=cn=admin,${nacos.core.auth.ldap.basedc}
#nacos.core.auth.ldap.password=admin
#nacos.core.auth.ldap.userdn=cn={0},dc=example,dc=org
#nacos.core.auth.ldap.filter.prefix=uid
#nacos.core.auth.ldap.case.sensitive=true

#*************** Istio Related Configurations ***************#
### If turn on the MCP server:
nacos.istio.mcp.server.enabled=false

#*************** Core Related Configurations ***************#

### set the WorkerID manually
# nacos.core.snowflake.worker-id=

### Member-MetaData
# nacos.core.member.meta.site=
# nacos.core.member.meta.adweight=
# nacos.core.member.meta.weight=

### MemberLookup
### Addressing pattern category, If set, the priority is highest
# nacos.core.member.lookup.type=[file,address-server]
## Set the cluster list with a configuration file or command-line argument
# nacos.member.list=192.168.16.101:8847?raft_port=8807,192.168.16.101?raft_port=8808,192.168.16.101:8849?raft_port=8809
## for AddressServerMemberLookup
# Maximum number of retries to query the address server upon initialization
# nacos.core.address-server.retry=5
## Server domain name address of [address-server] mode
# address.server.domain=jmenv.tbsite.net
## Server port of [address-server] mode
# address.server.port=8080
## Request address of [address-server] mode
# address.server.url=/nacos/serverlist

#*************** JRaft Related Configurations ***************#

### Sets the Raft cluster election timeout, default value is 5 second
# nacos.core.protocol.raft.data.election_timeout_ms=5000
### Sets the amount of time the Raft snapshot will execute periodically, default is 30 minute
# nacos.core.protocol.raft.data.snapshot_interval_secs=30
### raft internal worker threads
# nacos.core.protocol.raft.data.core_thread_num=8
### Number of threads required for raft business request processing
# nacos.core.protocol.raft.data.cli_service_thread_num=4
### raft linear read strategy. Safe linear reads are used by default, that is, the Leader tenure is confirmed by heartbeat
# nacos.core.protocol.raft.data.read_index_type=ReadOnlySafe
### rpc request timeout, default 5 seconds
# nacos.core.protocol.raft.data.rpc_request_timeout_ms=5000

#*************** Distro Related Configurations ***************#

### Distro data sync delay time, when sync task delayed, task will be merged for same data key. Default 1 second.
# nacos.core.protocol.distro.data.sync.delayMs=1000

### Distro data sync timeout for one sync data, default 3 seconds.
# nacos.core.protocol.distro.data.sync.timeoutMs=3000

### Distro data sync retry delay time when sync data failed or timeout, same behavior with delayMs, default 3 seconds.
# nacos.core.protocol.distro.data.sync.retryDelayMs=3000

### Distro data verify interval time, verify synced data whether expired for a interval. Default 5 seconds.
# nacos.core.protocol.distro.data.verify.intervalMs=5000

### Distro data verify timeout for one verify, default 3 seconds.
# nacos.core.protocol.distro.data.verify.timeoutMs=3000

### Distro data load retry delay when load snapshot data failed, default 30 seconds.
# nacos.core.protocol.distro.data.load.retryDelayMs=30000

### enable to support prometheus service discovery
#nacos.prometheus.metrics.enabled=true

### Since 2.3
#*************** Grpc Configurations ***************#

## sdk grpc(between nacos server and client) configuration
## Sets the maximum message size allowed to be received on the server.
#nacos.remote.server.grpc.sdk.max-inbound-message-size=10485760

## Sets the time(milliseconds) without read activity before sending a keepalive ping. The typical default is two hours.
#nacos.remote.server.grpc.sdk.keep-alive-time=7200000

## Sets a time(milliseconds) waiting for read activity after sending a keepalive ping. Defaults to 20 seconds.
#nacos.remote.server.grpc.sdk.keep-alive-timeout=20000

## Sets a time(milliseconds) that specify the most aggressive keep-alive time clients are permitted to configure. The typical default is 5 minutes
#nacos.remote.server.grpc.sdk.permit-keep-alive-time=300000

## cluster grpc(inside the nacos server) configuration
#nacos.remote.server.grpc.cluster.max-inbound-message-size=10485760

## Sets the time(milliseconds) without read activity before sending a keepalive ping. The typical default is two hours.
#nacos.remote.server.grpc.cluster.keep-alive-time=7200000

## Sets a time(milliseconds) waiting for read activity after sending a keepalive ping. Defaults to 20 seconds.
#nacos.remote.server.grpc.cluster.keep-alive-timeout=20000

## Sets a time(milliseconds) that specify the most aggressive keep-alive time clients are permitted to configure. The typical default is 5 minutes
#nacos.remote.server.grpc.cluster.permit-keep-alive-time=300000

2. Generate the ConfigMap
Place the file from the previous step at this path: /work/devops/k8s/app/nacos/conf/application.properties
kubectl create configmap nacos-configmap --namespace=middle --from-file=conf/application.properties
Result:
configmap/nacos-configmap created
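If you prefer a declarative manifest over the imperative command, kubectl can generate an equivalent one from the same file:
kubectl create configmap nacos-configmap --namespace=middle \
  --from-file=conf/application.properties --dry-run=client -o yaml > nacos-configmap.yaml
kubectl apply -f nacos-configmap.yaml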
Check the result with a command:
kubectl get configmap -n middle
Result: (screenshot)
Result in the dashboard: (screenshot)
3. Create the K8S configuration
Create a k8s.yaml file with the following content:
# Create storage space with a PV
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume-nacos
  namespace: middle
  labels:
    type: local
    pv-name: pv-volume-nacos
spec:
  storageClassName: manual-nacos
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /work/devops/k8s/app/nacos/pv
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim-nacos
  namespace: middle
spec:
  storageClassName: manual-nacos
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      pv-name: pv-volume-nacos

# Create the Nacos container
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nacos-standalone
  namespace: middle
spec:
  replicas: 1 # In standalone mode, if replicas > 1, registrations land on different replicas; the web UI only shows one replica's registered services, and refreshing the page switches replicas.
  selector:
    matchLabels:
      app: nacos-standalone
  template:
    metadata:
      labels:
        app: nacos-standalone
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      tolerations: # allow scheduling on master nodes
        - key: node-role.kubernetes.io/master
          operator: Exists
      initContainers:
        - name: peer-finder-plugin-install
          image: nacos/nacos-peer-finder-plugin:1.1
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /home/nacos/plugins/peer-finder
              name: volume
              subPath: peer-finder
      containers:
        - name: nacos
          image: nacos/nacos-server:v2.3.0
          imagePullPolicy: IfNotPresent
          env:
            - name: TZ
              value: Asia/Shanghai
            - name: MODE
              value: standalone
            - name: EMBEDDED_STORAGE
              value: embedded
          volumeMounts:
            - name: volume
              mountPath: /home/nacos/plugins/peer-finder
              subPath: peer-finder
            - name: volume
              mountPath: /home/nacos/data
              subPath: data
            - name: volume
              mountPath: /home/nacos/logs
              subPath: logs
            - name: config-map
              mountPath: /home/nacos/conf/application.properties
              subPath: application.properties
          ports:
            - containerPort: 8848
              name: client
            - containerPort: 9848
              name: client-rpc
            - containerPort: 9849
              name: raft-rpc
            - containerPort: 7848
              name: old-raft-rpc
      volumes:
        - name: volume
          persistentVolumeClaim:
            claimName: pv-claim-nacos
        - name: config-map
          configMap:
            name: nacos-configmap
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - nacos-standalone
              topologyKey: kubernetes.io/hostname
---
apiVersion: v1
kind: Service
metadata:
  namespace: middle
  name: nacos-service
  labels:
    app: nacos-service
spec:
  type: NodePort
  ports:
    - port: 8848
      name: server-regist
      targetPort: 8848
      nodePort: 30006
    - port: 9848 # must be exposed, otherwise clients in dev/external networks cannot connect; this is the gRPC port used by callers (main port offset by 1000)
      name: server-grpc
      targetPort: 9848
      nodePort: 30007
    - port: 9849 # must be exposed, otherwise clients in dev/external networks cannot connect; this is the server-to-server gRPC port (main port offset by 1001)
      name: server-grpc-sync
      targetPort: 9849
      nodePort: 30008
  selector:
    app: nacos-standalone
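Because the PV uses hostPath, it is safest to pre-create the directory on the node that will run the pod (same path as the PV above); whether your cluster creates it automatically depends on the kubelet configuration:
# Run on the K8S node
mkdir -p /work/devops/k8s/app/nacos/pv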
4. Start Nacos
kubectl apply -f k8s.yaml
Result:
persistentvolume/pv-volume-nacos created
persistentvolumeclaim/pv-claim-nacos created
deployment.apps/nacos-standalone created
service/nacos-service created
Check the result with a command:
kubectl get all -n middle
Result: (screenshot)
Result in the dashboard: (screenshot)
5. Access the Nacos console
Visit http://192.168.5.193:30006/nacos
Result: (screenshot)
Log in with the default username and password: nacos/nacos.
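Besides the browser, you can sanity-check the auth setup from the command line with the Nacos open API (node IP and NodePort taken from this article):
# A JSON response containing an accessToken indicates login works
curl -X POST 'http://192.168.5.193:30006/nacos/v1/auth/login' -d 'username=nacos&password=nacos'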