1 Highly Available Installation and Deployment of the Container Cloud Platform Cluster

Kubernetes (k8s for short), with its well-designed architecture, flexible extensibility, and rich application orchestration model, has become the de facto standard for container orchestration and the technology of choice for enterprises building container cloud platforms. Whether running in a public or a private cloud, production-grade high availability of the k8s cluster is an unavoidable topic. This chapter describes the highly available deployment of a k8s container cloud platform. The core idea is to make every component on the k8s master nodes (etcd, kube-apiserver, kube-controller-manager, kube-scheduler) highly available, so that no component is a single point of failure. After studying this chapter, you will know how to cluster these components in a k8s environment to achieve high availability, providing reference and guidance for production HA deployments of k8s.

1.1 k8s cluster HA deployment overview

1.1.1 Environment preparation

To make a k8s cluster highly available, the cluster needs at least 3 nodes. This chapter uses the following 3 nodes as the example:

Hostname: k8s-1; IP: 172.16.90.39
Hostname: k8s-2; IP: 172.16.90.40
Hostname: k8s-3; IP: 172.16.90.41

All 3 nodes run CentOS 7. Upgrading the kernel to 4.4 or later is recommended, because the 3.10 kernel shipped with CentOS 7 has bugs that make Docker and Kubernetes run unstably, especially newer versions of Docker and Kubernetes. The nodes need the following configuration.

# Passwordless SSH login: distribute the SSH public key of k8s-1 to the other two nodes
ssh-copy-id root@k8s-2

# On each node, stop and disable the firewall
systemctl stop firewalld && systemctl disable firewalld

# On each node, disable SELinux
setenforce 0 && sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

1.1.2 Deployment strategy and HA principles

The versions deployed below are Kubernetes 1.14.2 and etcd 3.3.13. Kubernetes nodes have two roles, master and node. The node role is already highly available by default: pods are spread across the nodes, and if a node fails, k8s marks it NotReady and recreates its pods on other healthy nodes to restore the expected replica count. Making a Kubernetes cluster highly available therefore really means making the master nodes highly available.

The master nodes run the following components: etcd, kube-apiserver, kube-scheduler, kube-controller-manager.

etcd achieves high availability across the 3 nodes above. etcd replicates the data to every member and guarantees its consistency and correctness. In normal operation the cluster has one leader, and the remaining members are followers. If a follower fails and more than half of the members remain available, the cluster keeps working normally.
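The majority rule can be made concrete with a quick back-of-the-envelope calculation (a standalone sketch, not part of the deployment): a cluster of n members needs a quorum of n/2+1 members, so it tolerates n minus quorum failures.

```shell
# Quorum size and fault tolerance for etcd clusters of 1 to 7 members.
# quorum = n/2 + 1 (integer division); tolerated failures = n - quorum.
for n in 1 2 3 4 5 6 7; do
  quorum=$(( n / 2 + 1 ))
  echo "members=$n quorum=$quorum tolerated_failures=$(( n - quorum ))"
done
```

This is why 3 nodes is the practical minimum: it is the smallest cluster that survives the loss of a member, and growing from 3 to 4 members adds no extra fault tolerance.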
If the leader fails, the followers stop receiving heartbeats and time out, then start an election; the member that wins the vote becomes the new leader and continues serving the cluster.

The kube-apiserver, kube-scheduler, and kube-controller-manager components all run as multiple instances. kube-apiserver is stateless, and its high availability is usually achieved in one of two ways:

(1) Put a load balancer such as haproxy or nginx in front of the kube-apiserver instances; the load balancer itself is made highly available with keepalived and a virtual IP (VIP).

(2) Run an nginx on every master and node, with the multiple kube-apiserver instances as its backends. nginx health-checks and load-balances the instances, and kubelet, kube-proxy, kube-controller-manager, and kube-scheduler reach kube-apiserver through their local nginx.

This chapter uses the second approach as the example for the k8s cluster HA deployment.

kube-scheduler and kube-controller-manager are stateful services. Multiple instances elect a leader by acquiring a lock on an endpoint object in the apiserver; the other instances block. When the leader dies, a new leader is elected, keeping the service available. So simply running multiple instances of these two components is enough for high availability.

1.2 HA deployment of the etcd component

1.2.1 Create the self-signed CA root certificate

The Kubernetes components use x509 certificates to encrypt and authenticate their communication. Before deploying the components, this chapter creates a self-signed CA root certificate, which is then used to sign the certificates of the other components.

# Create the working directories
mkdir -p /opt/k8s/bin && mkdir -p /opt/k8s/cert

# Install the cfssl tools
wget -O /opt/k8s/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget -O /opt/k8s/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64

# Add execute permission and configure PATH
chmod +x /opt/k8s/bin/*
echo 'PATH=/opt/k8s/bin:$PATH' >> /root/.bashrc
source /root/.bashrc

# Create the root certificate configuration files (ca-config.json and ca-csr.json) and generate the CA certificate

1.2.2 Create the etcd certificate and private key

With the CA in place, write the etcd certificate signing request file (etcd-csr.json) and sign it:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd

The command generates 3 files:
etcd.pem (the certificate used by etcd)
etcd-key.pem (etcd's private key)
etcd.csr (etcd's certificate signing request file)
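For reference, a typical etcd-csr.json for this 3-node topology looks roughly like the following. This is a sketch: the hosts list carries the three node IPs from above, while the names fields hold common placeholder values that are assumptions, not values prescribed by this chapter.

```shell
# Illustrative etcd-csr.json; hosts must list every address etcd serves on.
cat > etcd-csr.json <<'EOF'
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "172.16.90.39",
    "172.16.90.40",
    "172.16.90.41"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
EOF
```

cfssl rejects a certificate request whose hosts list omits an address that peers later connect to, so keep this list in sync with the node IPs.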
Copy etcd.pem and etcd-key.pem to the /etc/etcd/cert directory on every node, creating the directory if it does not exist.

1.2.3 Create the etcd systemd unit file

cat > etcd-k8s-1.service <<'EOF'
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/data/k8s/etcd/data
ExecStart=/opt/k8s/bin/etcd \
  --data-dir=/data/k8s/etcd/data \
  --wal-dir=/data/k8s/etcd/wal \
  --cert-file=/etc/etcd/cert/etcd.pem \
  --key-file=/etc/etcd/cert/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-cert-file=/etc/etcd/cert/etcd.pem \
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --listen-peer-urls=https://172.16.90.39:2380 \
  --initial-advertise-peer-urls=https://172.16.90.39:2380 \
  --listen-client-urls=https://172.16.90.39:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://172.16.90.39:2379 \
  --initial-cluster-token=etcd-cluster-0 \
  --initial-cluster=k8s-1=https://172.16.90.39:2380,k8s-2=https://172.16.90.40:2380,k8s-3=https://172.16.90.41:2380 \
  --initial-cluster-state=new \
  --auto-compaction-mode=periodic \
  --auto-compaction-retention=1 \
  --quota-backend-bytes=6442450944 \
  --heartbeat-interval=250 \
  --election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Note: the file above is the etcd service file used on the k8s-1 node. On the other two nodes, replace the highlighted values (the hostname and IP in the listen/advertise URLs) with the node's own. After creating the service files for all 3 nodes this way, copy each file to its node and rename it to /etc/systemd/system/etcd.service.

1.2.4 Start the etcd service on each node

Log in to each node and run the following commands.

# Create etcd's data directories
mkdir -p /data/k8s/etcd/data
mkdir -p /data/k8s/etcd/wal

# Start the etcd service
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd

# Check that the etcd service is active (running)
systemctl status etcd | grep Active

# View the etcd logs
journalctl -u etcd

1.2.5 Verify the service status

Run the following on each node, replacing the IP in endpoints with the node's actual IP.

ETCDCTL_API=3 /opt/k8s/bin/etcdctl \
  --endpoints=https://172.16.90.39:2379 \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  endpoint health

If the cluster is healthy, the nodes print output like:

https://172.16.90.39:2379 is healthy: successfully committed proposal: took = 1.0405ms
https://172.16.90.40:2379 is healthy: successfully committed proposal: took = 1.85477ms
https://172.16.90.41:2379 is healthy: successfully committed proposal: took = 2.614656ms

Check the cluster's current leader.

# Run on any one node
ETCDCTL_API=3 /opt/k8s/bin/etcdctl -w table \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/etcd/cert/etcd.pem \
  --key=/etc/etcd/cert/etcd-key.pem \
  --endpoints=https://172.16.90.39:2379,https://172.16.90.40:2379,https://172.16.90.41:2379 \
  endpoint status
The command prints a table with ENDPOINT, ID, VERSION, DB SIZE, IS LEADER, RAFT TERM, and RAFT INDEX columns. In this example, IS LEADER is false for the first two endpoints and true for the third (https://172.16.90.41:2379), showing that the third node is the current leader.

1.3 HA deployment of the kube-apiserver component

1.3.1 Deploy an nginx proxy on each node

By deploying nginx on every node and letting the k8s components reach kube-apiserver through the local nginx, kube-apiserver becomes highly available. If a yum repository is available (e.g. a public one), nginx can be installed with yum, which is more convenient; it can also be built from source. Building from source is used here.

# Download the nginx stable release, version 1.16.1 in this example
wget https://nginx.org/download/nginx-1.16.1.tar.gz
tar -xzvf nginx-1.16.1.tar.gz

# Configure and build
cd nginx-1.16.1
mkdir nginx-prefix
./configure --with-stream --without-http --prefix=$(pwd)/nginx-prefix --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module
make && make install

The compiled binary ends up in the nginx-prefix/sbin directory. Copy the nginx binary to the 3 nodes and manage the nginx service with a systemd unit file. Create the following directories on each node:

mkdir -p /opt/k8s/kube-nginx/{conf,logs,sbin}

# Create the nginx configuration file
cat > kube-nginx.conf <<'EOF'
worker_processes 1;

events {
    worker_connections 1024;
}

stream {
    upstream backend {
        hash $remote_addr consistent;
        server 172.16.90.39:6443 max_fails=3 fail_timeout=30s;
        server 172.16.90.40:6443 max_fails=3 fail_timeout=30s;
        server 172.16.90.41:6443 max_fails=3 fail_timeout=30s;
    }

    server {
        listen 127.0.0.1:8443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}
EOF

Copy this configuration file to the /opt/k8s/kube-nginx/conf directory on each of the 3 nodes.

# Create the systemd unit file
cat > kube-nginx.service <<'EOF'
[Unit]
Description=kube-apiserver nginx proxy
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=forking
ExecStartPre=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx -t
ExecStart=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx
ExecReload=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx -s reload
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload && systemctl enable kube-nginx && systemctl restart kube-nginx

# Check that kube-nginx is active (running) on all 3 nodes
systemctl status kube-nginx | grep Active

# View the kube-nginx logs
journalctl -u kube-nginx

The kubernetes.pem / kubernetes-key.pem certificate pair used by kube-apiserver below is generated with cfssl from a kubernetes-csr.json request file, in the same way as the etcd certificate.

1.3.4 Create the encryption config file

cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF
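The ${ENCRYPTION_KEY} placeholder must hold a base64-encoded 32-byte key before the file is written, since the unquoted heredoc expands the variable. One common way to generate such a key (a sketch, not a value from the original document) is:

```shell
# Generate a random 32-byte key and base64-encode it for the aescbc provider.
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
echo "$ENCRYPTION_KEY"
```

Export the variable in the same shell session before running the cat command, otherwise the secret field ends up empty and kube-apiserver rejects the config.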
Copy encryption-config.yaml to the /etc/kubernetes directory on all nodes.

1.3.5 Create the audit policy file

cat > audit-policy.yaml <<'EOF'
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
  # The following requests were manually identified as high-volume and low-risk, so drop them.
  - level: None
    resources:
      - group: ""
        resources:
          - endpoints
          - services
          - services/status
    users:
      - 'system:kube-proxy'
    verbs:
      - watch
  - level: None
    resources:
      - group: ""
        resources:
          - nodes
          - nodes/status
    userGroups:
      - 'system:nodes'
    verbs:
      - get
  - level: None
    namespaces:
      - kube-system
    resources:
      - group: ""
        resources:
          - endpoints
    users:
      - 'system:kube-controller-manager'
      - 'system:kube-scheduler'
      - 'system:serviceaccount:kube-system:endpoint-controller'
    verbs:
      - get
      - update
  - level: None
    resources:
      - group: ""
        resources:
          - namespaces
          - namespaces/status
          - namespaces/finalize
    users:
      - 'system:apiserver'
    verbs:
      - get
  # Don't log HPA fetching metrics.
  - level: None
    resources:
      - group: metrics.k8s.io
    users:
      - 'system:kube-controller-manager'
    verbs:
      - get
      - list
  # Don't log these read-only URLs.
  - level: None
    nonResourceURLs:
      - '/healthz*'
      - /version
      - '/swagger*'
  # Don't log events requests.
  - level: None
    resources:
      - group: ""
        resources:
          - events
  # node and pod status calls from nodes are high-volume and can be large, don't log responses for expected updates from nodes
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    users:
      - kubelet
      - 'system:node-problem-detector'
      - 'system:serviceaccount:kube-system:node-problem-detector'
    verbs:
      - update
      - patch
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - nodes/status
          - pods/status
    userGroups:
      - 'system:nodes'
    verbs:
      - update
      - patch
  # deletecollection calls can be large, don't log responses for expected namespace deletions
  - level: Request
    omitStages:
      - RequestReceived
    users:
      - 'system:serviceaccount:kube-system:namespace-controller'
    verbs:
      - deletecollection
  # Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data,
  # so only log at the Metadata level.
  - level: Metadata
    omitStages:
      - RequestReceived
    resources:
      - group: ""
        resources:
          - secrets
          - configmaps
      - group: authentication.k8s.io
        resources:
          - tokenreviews
  # Get responses can be large; skip them.
  - level: Request
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io
    verbs:
      - get
      - list
      - watch
  # Default level for known APIs
  - level: RequestResponse
    omitStages:
      - RequestReceived
    resources:
      - group: ""
      - group: admissionregistration.k8s.io
      - group: apiextensions.k8s.io
      - group: apiregistration.k8s.io
      - group: apps
      - group: authentication.k8s.io
      - group: authorization.k8s.io
      - group: autoscaling
      - group: batch
      - group: certificates.k8s.io
      - group: extensions
      - group: metrics.k8s.io
      - group: networking.k8s.io
      - group: policy
      - group: rbac.authorization.k8s.io
      - group: scheduling.k8s.io
      - group: settings.k8s.io
      - group: storage.k8s.io
  # Default level for all other requests.
  - level: Metadata
    omitStages:
      - RequestReceived
EOF

Copy audit-policy.yaml to the /etc/kubernetes directory on all nodes.

1.3.6 Create the kube-apiserver systemd unit file

cat > kube-apiserver.service <<'EOF'
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/data/k8s/k8s/kube-apiserver
ExecStart=/opt/k8s/bin/kube-apiserver \
  --advertise-address=172.16.90.39 \
  --default-not-ready-toleration-seconds=360 \
  --default-unreachable-toleration-seconds=360 \
  --feature-gates=DynamicAuditing=true \
  --max-mutating-requests-inflight=2000 \
  --max-requests-inflight=4000 \
  --default-watch-cache-size=200 \
  --delete-collection-workers=2 \
  --encryption-provider-config=/etc/kubernetes/encryption-config.yaml \
  --etcd-cafile=/etc/kubernetes/cert/ca.pem \
  --etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \
  --etcd-servers=https://172.16.90.39:2379,https://172.16.90.40:2379,https://172.16.90.41:2379 \
  --bind-address=172.16.90.39 \
  --secure-port=6443 \
  --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \
  --insecure-port=0 \
  --audit-dynamic-configuration \
  --audit-log-maxage=15 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-truncate-enabled \
  --audit-log-path=/data/k8s/k8s/kube-apiserver/audit.log \
  --audit-policy-file=/etc/kubernetes/audit-policy.yaml \
  --profiling \
  --anonymous-auth=false \
  --client-ca-file=/etc/kubernetes/cert/ca.pem \
  --enable-bootstrap-token-auth \
  --requestheader-allowed-names="aggregator" \
  --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem \
  --requestheader-extra-headers-prefix="X-Remote-Extra-" \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --service-account-key-file=/etc/kubernetes/cert/ca.pem \
  --authorization-mode=Node,RBAC \