Installing a Highly Available k8s Master Cluster
Published: 2019-05-25



Host          Role        Components
172.18.6.101  K8S Master  kubelet, kubectl, cni, etcd
172.18.6.102  K8S Master  kubelet, kubectl, cni, etcd
172.18.6.103  K8S Master  kubelet, kubectl, cni, etcd
172.18.6.104  K8S Worker  kubelet, cni
172.18.6.105  K8S Worker  kubelet, cni
172.18.6.106  K8S Worker  kubelet, cni

etcd installation

To keep the k8s master highly available, running the etcd cluster in containers is not recommended: a container can die at any moment, and each etcd node's startup service is stateful. etcd is therefore deployed here from binaries. In production, an etcd cluster of at least 3 nodes is recommended. For the detailed etcd installation steps, see the referenced guide.
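For reference, here is a minimal sketch of a systemd unit for a static three-node etcd cluster, shown for node 172.18.6.101 (the unit name, data directory, and plain-HTTP URLs are assumptions; adjust --name and the advertised IPs on the other two nodes). Also listening on 127.0.0.1:2379 matches the --etcd-servers=http://127.0.0.1:2379 setting used by the kube-apiserver manifest later in this post.

    # /etc/systemd/system/etcd.service (sketch for etcd01 on 172.18.6.101)
    [Unit]
    Description=etcd key-value store
    Documentation=https://github.com/coreos/etcd

    [Service]
    ExecStart=/usr/bin/etcd \
      --name=etcd01 \
      --data-dir=/var/lib/etcd \
      --listen-peer-urls=http://172.18.6.101:2380 \
      --listen-client-urls=http://172.18.6.101:2379,http://127.0.0.1:2379 \
      --initial-advertise-peer-urls=http://172.18.6.101:2380 \
      --advertise-client-urls=http://172.18.6.101:2379 \
      --initial-cluster=etcd01=http://172.18.6.101:2380,etcd02=http://172.18.6.102:2380,etcd03=http://172.18.6.103:2380 \
      --initial-cluster-state=new
    Restart=always
    RestartSec=10

    [Install]
    WantedBy=multi-user.target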

Required components and certificate installation

CA certificate

Create the CA certificate as described in the referenced guide, then place ca-key.pem and ca.pem under /etc/kubernetes/ssl on every node in the k8s cluster.
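If you generate the CA with plain openssl rather than the referenced tooling, a minimal sketch looks like this (the CN kube-ca and the ten-year validity are illustrative choices, not taken from the referenced guide):

    # self-signed CA used to sign the apiserver and worker certificates
    openssl genrsa -out ca-key.pem 2048
    openssl req -x509 -new -nodes -key ca-key.pem -days 3650 -out ca.pem -subj "/CN=kube-ca"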

Worker certificate generation

Generate the worker node certificates as described in the worker certificate section of the referenced guide. Place each IP's certificate under /etc/kubernetes/ssl on the corresponding worker node.
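As a sketch, the openssl flow for one worker could look like the following, assuming a worker-openssl.cnf whose subjectAltName lists that worker's IP (the file name matches the directory layout shown later; the CN is illustrative):

    # key and CSR for one worker; repeat per worker with its own IP in worker-openssl.cnf
    openssl genrsa -out worker-key.pem 2048
    openssl req -new -key worker-key.pem -out worker.csr -subj "/CN=kube-worker" -config worker-openssl.cnf
    # sign the CSR with the cluster CA
    openssl x509 -req -in worker.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out worker.pem -days 365 -extensions v3_req -extfile worker-openssl.cnf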

kubelet.conf configuration

Create /etc/kubernetes/kubelet.conf with the following content:

apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://[load balancer IP]:[apiserver port]
    certificate-authority: /etc/kubernetes/ssl/ca.pem
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/worker.pem
    client-key: /etc/kubernetes/ssl/worker-key.pem
contexts:
- context:
    cluster: local
    user: kubelet
  name: kubelet-context
current-context: kubelet-context

CNI plugin installation

Download the required CNI binaries from the referenced source and place them under /opt/cni/bin on every node in the k8s cluster (a download sketch is given below).

An RPM package for one-step installation will be provided later.
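Until then, a download sketch (the release version and URL below are assumptions; any cni-amd64 release tarball of that era works):

    mkdir -p /opt/cni/bin
    curl -L -o /tmp/cni-amd64.tgz \
      https://github.com/containernetworking/cni/releases/download/v0.4.0/cni-amd64-v0.4.0.tgz
    tar -xzf /tmp/cni-amd64.tgz -C /opt/cni/bin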

kubelet service deployment

Note: an RPM package for one-step installation will be provided later.

  1. Place the kubelet binary of the matching version under /usr/bin on every node in the k8s cluster.

  2. Create /etc/systemd/system/kubelet.service with the following content:

    # /etc/systemd/system/kubelet.service
    [Unit]
    Description=kubelet: The Kubernetes Node Agent
    Documentation=http://kubernetes.io/docs/

    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true"
    Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
    Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
    Environment="KUBELET_DNS_ARGS=--cluster-dns=10.100.0.10 --cluster-domain=cluster.local"
    Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=registry.aliyuncs.com/shenshouer/pause-amd64:3.0"
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_EXTRA_ARGS
    Restart=always
    StartLimitInterval=0
    RestartSec=10

    [Install]
    WantedBy=multi-user.target
  3. Create the following directory layout:

    /etc/kubernetes/
    |-- kubelet.conf
    |-- manifests
    `-- ssl
        |-- ca-key.pem
        |-- ca.pem
        |-- worker.csr
        |-- worker-key.pem
        |-- worker-openssl.cnf
        `-- worker.pem
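With the unit file and directories in place, reload systemd and start kubelet on each master (the same commands are used again for the worker nodes later):

    systemctl daemon-reload
    systemctl enable kubelet
    systemctl restart kubelet
    # follow the log; kubelet starts the static pods once /etc/kubernetes/manifests is populated
    journalctl -fu kubelet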

Master component installation

Configure load balancing

Configure LVS with the VIP 172.18.6.254 pointing to the backends 172.18.6.101, 172.18.6.102, and 172.18.6.103. For a simpler setup, nginx can be used for layer-4 TCP load balancing (a sketch follows).
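If you take the nginx route, a minimal layer-4 (stream) configuration in front of the three apiservers could look like the sketch below; the listen port 6443 and the assumption that nginx is built with the stream module are mine, not from the original setup:

    # /etc/nginx/nginx.conf (sketch)
    worker_processes auto;
    events {
        worker_connections 1024;
    }
    stream {
        upstream kube_apiserver {
            server 172.18.6.101:6443;
            server 172.18.6.102:6443;
            server 172.18.6.103:6443;
        }
        server {
            listen 6443;
            proxy_pass kube_apiserver;
        }
    }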

Certificate generation

The openssl.cnf content is as follows:

[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[req_distinguished_name]

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
# load balancer domain that may be used
DNS.5 = test.example.com.cn
IP.1 = 10.96.0.1
# the three master IPs
IP.2 = 172.18.6.101
IP.3 = 172.18.6.102
IP.4 = 172.18.6.103
# the LVS load balancer VIP
IP.5 = 172.18.6.254

For the detailed certificate generation steps, see the Master certificate and Worker certificate sections of the referenced guide; place the generated certificates at the corresponding paths on all three master nodes.
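A sketch of the apiserver certificate commands using the openssl.cnf above (the output file names match the --tls-cert-file/--tls-private-key-file flags in the kube-apiserver manifest below; the CN is illustrative):

    openssl genrsa -out apiserver-key.pem 2048
    openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=kube-apiserver" -config openssl.cnf
    openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out apiserver.pem -days 365 -extensions v3_req -extfile openssl.cnf
    # copy apiserver.pem and apiserver-key.pem to /etc/kubernetes/ssl on all three masters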

Control plane component installation

On each master node, place the following three files under /etc/kubernetes/manifests:

  1. kube-apiserver.manifest:
# /etc/kubernetes/manifests/kube-apiserver.manifest
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "kube-apiserver",
    "namespace": "kube-system",
    "creationTimestamp": null,
    "labels": {
      "component": "kube-apiserver",
      "tier": "control-plane"
    }
  },
  "spec": {
    "volumes": [
      {
        "name": "k8s",
        "hostPath": {
          "path": "/etc/kubernetes"
        }
      },
      {
        "name": "certs",
        "hostPath": {
          "path": "/etc/ssl/certs"
        }
      }
    ],
    "containers": [
      {
        "name": "kube-apiserver",
        "image": "registry.aliyuncs.com/shenshouer/kube-apiserver:v1.5.2",
        "command": [
          "kube-apiserver",
          "--insecure-bind-address=127.0.0.1",
          "--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota",
          "--service-cluster-ip-range=10.96.0.0/12",
          "--service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem",
          "--client-ca-file=/etc/kubernetes/ssl/ca.pem",
          "--tls-cert-file=/etc/kubernetes/ssl/apiserver.pem",
          "--tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem",
          "--secure-port=6443",
          "--allow-privileged",
          "--advertise-address=[current master node IP]",
          "--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname",
          "--anonymous-auth=false",
          "--etcd-servers=http://127.0.0.1:2379"
        ],
        "resources": {
          "requests": {
            "cpu": "250m"
          }
        },
        "volumeMounts": [
          {
            "name": "k8s",
            "readOnly": true,
            "mountPath": "/etc/kubernetes/"
          },
          {
            "name": "certs",
            "mountPath": "/etc/ssl/certs"
          }
        ],
        "livenessProbe": {
          "httpGet": {
            "path": "/healthz",
            "port": 8080,
            "host": "127.0.0.1"
          },
          "initialDelaySeconds": 15,
          "timeoutSeconds": 15,
          "failureThreshold": 8
        }
      }
    ],
    "hostNetwork": true
  },
  "status": {}
}
  2. kube-controller-manager.manifest:

{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "kube-controller-manager",
    "namespace": "kube-system",
    "creationTimestamp": null,
    "labels": {
      "component": "kube-controller-manager",
      "tier": "control-plane"
    }
  },
  "spec": {
    "volumes": [
      {
        "name": "k8s",
        "hostPath": {
          "path": "/etc/kubernetes"
        }
      },
      {
        "name": "certs",
        "hostPath": {
          "path": "/etc/ssl/certs"
        }
      }
    ],
    "containers": [
      {
        "name": "kube-controller-manager",
        "image": "registry.aliyuncs.com/shenshouer/kube-controller-manager:v1.5.2",
        "command": [
          "kube-controller-manager",
          "--address=127.0.0.1",
          "--leader-elect",
          "--master=127.0.0.1:8080",
          "--cluster-name=kubernetes",
          "--root-ca-file=/etc/kubernetes/ssl/ca.pem",
          "--service-account-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem",
          "--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem",
          "--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem",
          "--insecure-experimental-approve-all-kubelet-csrs-for-group=system:kubelet-bootstrap",
          "--allocate-node-cidrs=true",
          "--cluster-cidr=10.244.0.0/16"
        ],
        "resources": {
          "requests": {
            "cpu": "200m"
          }
        },
        "volumeMounts": [
          {
            "name": "k8s",
            "readOnly": true,
            "mountPath": "/etc/kubernetes/"
          },
          {
            "name": "certs",
            "mountPath": "/etc/ssl/certs"
          }
        ],
        "livenessProbe": {
          "httpGet": {
            "path": "/healthz",
            "port": 10252,
            "host": "127.0.0.1"
          },
          "initialDelaySeconds": 15,
          "timeoutSeconds": 15,
          "failureThreshold": 8
        }
      }
    ],
    "hostNetwork": true
  },
  "status": {}
}

  3. kube-scheduler.manifest:

{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "kube-scheduler",
    "namespace": "kube-system",
    "creationTimestamp": null,
    "labels": {
      "component": "kube-scheduler",
      "tier": "control-plane"
    }
  },
  "spec": {
    "containers": [
      {
        "name": "kube-scheduler",
        "image": "registry.aliyuncs.com/shenshouer/kube-scheduler:v1.5.2",
        "command": [
          "kube-scheduler",
          "--address=127.0.0.1",
          "--leader-elect",
          "--master=127.0.0.1:8080"
        ],
        "resources": {
          "requests": {
            "cpu": "100m"
          }
        },
        "livenessProbe": {
          "httpGet": {
            "path": "/healthz",
            "port": 10251,
            "host": "127.0.0.1"
          },
          "initialDelaySeconds": 15,
          "timeoutSeconds": 15,
          "failureThreshold": 8
        }
      }
    ],
    "hostNetwork": true
  },
  "status": {}
}
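Once kubelet has started these static pods, each component can be checked locally on a master via the same endpoints its liveness probe uses:

    curl http://127.0.0.1:8080/healthz      # kube-apiserver (insecure port)
    curl http://127.0.0.1:10252/healthz     # kube-controller-manager
    curl http://127.0.0.1:10251/healthz     # kube-scheduler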

Other component installation

kube-proxy installation

On any master, run kubectl create -f kube-proxy-ds.yaml, where kube-proxy-ds.yaml contains the following:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    component: kube-proxy
    k8s-app: kube-proxy
    kubernetes.io/cluster-service: "true"
    name: kube-proxy
    tier: node
  name: kube-proxy
  namespace: kube-system
spec:
  selector:
    matchLabels:
      component: kube-proxy
      k8s-app: kube-proxy
      kubernetes.io/cluster-service: "true"
      name: kube-proxy
      tier: node
  template:
    metadata:
      labels:
        component: kube-proxy
        k8s-app: kube-proxy
        kubernetes.io/cluster-service: "true"
        name: kube-proxy
        tier: node
    spec:
      containers:
      - command:
        - kube-proxy
        - --kubeconfig=/run/kubeconfig
        - --cluster-cidr=10.244.0.0/16
        image: registry.aliyuncs.com/shenshouer/kube-proxy:v1.5.2
        imagePullPolicy: IfNotPresent
        name: kube-proxy
        resources: {}
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        volumeMounts:
        - mountPath: /var/run/dbus
          name: dbus
        - mountPath: /run/kubeconfig
          name: kubeconfig
        - mountPath: /etc/kubernetes/ssl
          name: ssl
      dnsPolicy: ClusterFirst
      hostNetwork: true
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - hostPath:
          path: /etc/kubernetes/kubelet.conf
        name: kubeconfig
      - hostPath:
          path: /var/run/dbus
        name: dbus
      - hostPath:
          path: /etc/kubernetes/ssl
        name: ssl
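After creating the DaemonSet, a quick sanity check from a master (kubectl pointed at the local insecure port) should show one kube-proxy pod per node:

    kubectl -s http://127.0.0.1:8080 get pods -n kube-system -l k8s-app=kube-proxy -o wide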

Network component installation

On any master, run kubectl apply -f kube-flannel.yaml, where kube-flannel.yaml contains the following. Note: if you are running in VMs started by Vagrant, modify the flanneld startup arguments so that --iface points to the actual communication NIC.

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  namespace: kube-system
  name: kube-flannel-cfg
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "ipMasq": true,
        "bridge": "cbr0",
        "hairpinMode": true,
        "forceAddress": true,
        "isDefaultGateway": true
      }
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  namespace: kube-system
  name: kube-flannel-ds
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      serviceAccountName: flannel
      containers:
      - name: kube-flannel
        image: registry.aliyuncs.com/shenshouer/flannel:v0.7.0
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr", "--iface=eth0" ]
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      - name: install-cni
        image: registry.aliyuncs.com/shenshouer/flannel:v0.7.0
        command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done" ]
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

DNS deployment

On any master, run kubectl create -f skydns.yaml, where skydns.yaml contains the following:

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.100.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      containers:
      - name: kubedns
        image: registry.aliyuncs.com/shenshouer/kubedns-amd64:1.9
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthz-kubedns
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-map=kube-dns
        # This should be set to v=2 only after the new image (cut from 1.5) has
        # been released, otherwise we will flood the logs.
        - --v=0
        - --federations=myfederation=federation.test
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
      - name: dnsmasq
        image: registry.aliyuncs.com/shenshouer/kube-dnsmasq-amd64:1.4
        livenessProbe:
          httpGet:
            path: /healthz-dnsmasq
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --cache-size=1000
        - --no-resolv
        - --server=127.0.0.1#10053
        - --log-facility=-
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 10Mi
      - name: dnsmasq-metrics
        image: registry.aliyuncs.com/shenshouer/dnsmasq-metrics-amd64:1.0
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 10Mi
      - name: healthz
        image: registry.aliyuncs.com/shenshouer/exechealthz-amd64:1.2
        resources:
          limits:
            memory: 50Mi
          requests:
            cpu: 10m
            # Note that this container shouldn't really need 50Mi of memory. The
            # limits are set higher than expected pending investigation on #29688.
            # The extra memory was stolen from the kubedns container to keep the
            # net memory requested by the pod constant.
            memory: 50Mi
        args:
        - --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
        - --url=/healthz-dnsmasq
        - --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
        - --url=/healthz-kubedns
        - --port=8080
        - --quiet
        ports:
        - containerPort: 8080
          protocol: TCP
      dnsPolicy: Default  # Don't use cluster DNS.
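A simple way to verify DNS once the pods are up: confirm the kube-dns pods are running, then query the cluster DNS service address (10.100.0.10, the --cluster-dns value passed to kubelet) directly from any node:

    kubectl -s http://127.0.0.1:8080 get pods -n kube-system -l k8s-app=kube-dns
    # from any node; should resolve to the kubernetes service IP 10.96.0.1
    nslookup kubernetes.default.svc.cluster.local 10.100.0.10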

Node installation

  • Install Docker.

  • Create the /etc/kubernetes/ directory with the following layout:

    |-- kubelet.conf
    |-- manifests
    `-- ssl
        |-- ca-key.pem
        |-- ca.pem
        |-- ca.srl
        |-- worker.csr
        |-- worker-key.pem
        |-- worker-openssl.cnf
        `-- worker.pem
  • Create the /etc/kubernetes/kubelet.conf configuration; see the kubelet.conf configuration section above.

  • Create /etc/kubernetes/ssl; for certificate generation, see the worker certificate section above.

  • Create /etc/kubernetes/manifests.

  • Create /opt/cni/bin; for CNI installation, see the CNI installation steps above.

  • Install kubelet as described in the kubelet service deployment section above, then enable and start it:

    systemctl enable kubelet && systemctl restart kubelet && journalctl -fu kubelet
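Once kubelet is running on the workers, they should register with the apiserver; from any master you can confirm with:

    kubectl -s http://127.0.0.1:8080 get nodes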

