
Getting a Docker container's IP in k8s with Golang (docker, k8s, devops)

Core components

  • ETCD: a distributed, high-performance key-value store that holds all of the cluster's metadata
  • APIServer: the API server, the access-control entry point for cluster resources; it provides the REST API plus security and access control
  • Scheduler: schedules business containers onto the most suitable Node
  • Controller Manager: runs the controllers that keep cluster resources in their desired state
    Replication Controller
    Node Controller
    ResourceQuota Controller
    Namespace Controller
    ServiceAccount Controller
    Token Controller
    Service Controller
    Endpoints Controller
  • kubelet: the node agent that runs on every node
    Pod management: kubelet periodically fetches the desired state of the pods/containers on its node from the data sources it watches
    Container health checks: after creating a container, kubelet also checks that it is running correctly; if a container fails, kubelet handles it according to the container's restart policy
    Container monitoring: kubelet monitors resource usage on its node and reports to the master periodically
  • kubectl: the command-line interface for running commands against a Kubernetes cluster
  • CNI implementation: the network interface; we use flannel as the cluster's network plugin to enable cross-node communication

Main architecture:

[Figure: Kubernetes architecture diagram]

Workflow:

[Figure: Pod creation workflow diagram]

  1. The user prepares a resource file (recording the application's name, image address, etc.) and calls the APIServer to create a Pod
  2. The APIServer receives the Pod-creation request and writes the Pod's information into etcd
  3. Via list-watch, the scheduler notices a new pod that is not yet bound to any node
  4. The scheduler runs its scheduling algorithm to pick the most suitable node for the pod, then calls the APIServer to update the information in etcd
  5. Also via list-watch, the kubelet on that node notices a new pod scheduled to its machine, so it calls the container runtime to pull the image and start the container per the pod's description, generating events along the way
  6. Meanwhile, the container's information, events and status are also written to etcd through the APIServer

Setting up the k8s cluster with three machines:
172.16.10.189 master
172.16.10.185 slave1
172.16.10.184 slave2
First set the hostname (do the same on the other two machines):

hostnamectl set-hostname k8s-master


Add host entries

cat >> /etc/hosts<<EOF
172.16.10.189 k8s-master
172.16.10.185 k8s-slave1
172.16.10.184 k8s-slave2
EOF
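
A quick grep can confirm that all three entries made it into the hosts file. A minimal sketch; it runs here against a temporary copy for safety, but on a real node the file argument would be /etc/hosts:

```shell
# Verify that every expected hostname is present in a hosts-format file.
# Usage: check_hosts FILE HOSTNAME...  (non-zero exit if any entry is missing)
check_hosts() {
  file=$1; shift
  for h in "$@"; do
    grep -q "[[:space:]]$h\$" "$file" || { echo "missing: $h"; return 1; }
  done
  echo "all entries present"
}

# Example against a temporary copy (use /etc/hosts on a real node):
tmp=$(mktemp)
cat > "$tmp" <<EOF
172.16.10.189 k8s-master
172.16.10.185 k8s-slave1
172.16.10.184 k8s-slave2
EOF
check_hosts "$tmp" k8s-master k8s-slave1 k8s-slave2
rm -f "$tmp"
```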

Configure the yum repositories

scp -r docker-ce.repo root@172.16.10.189:/etc/yum.repos.d/
cat << EOF> /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
	http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install the kubelet packages. Every master and slave node needs all three components: kubelet, kubectl and kubeadm.

yum install -y kubelet-1.16.2 kubeadm-1.16.2 kubectl-1.16.2 --disableexcludes=kubernetes

Check the kubeadm version:

kubeadm version

Generate the init configuration file
Run on the master node only:

kubeadm config print init-defaults > kubeadm.yaml
vim kubeadm.yaml
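
The fields typically edited in kubeadm.yaml for a setup like this are the advertise address, the image repository, the Kubernetes version and the pod subnet. A sketch of such edits; the exact values below are assumptions chosen to be consistent with the rest of this article (master IP 172.16.10.189, version 1.16.2, flannel's 10.244.0.0/16 subnet):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.10.189       # the master's IP
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.2              # match the installed kubeadm/kubelet
imageRepository: registry.aliyuncs.com/google_containers  # mirror reachable from CN
networking:
  podSubnet: 10.244.0.0/16              # must match "Network" in flannel's net-conf.json
  serviceSubnet: 10.96.0.0/12
```

Then verify the config resolves to a pullable image list: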

[Screenshots: kubeadm.yaml after editing]

kubeadm config images list --config kubeadm.yaml

If the config is fine, this prints the list of images that will be pulled.


# pre-pull the images to the local machine
kubeadm config images pull --config kubeadm.yaml

Initialize the master node (run on the master only):

kubeadm init --config kubeadm.yaml

During initialization I got this error:

[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty


Fix: just delete that directory and re-run the init.


After a successful init, follow the instructions printed in its output to configure the kubectl client's credentials.
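
The credential-setup commands kubeadm prints are the standard kubeconfig copy; a small sketch of them as a helper (the source path is kubeadm's default admin.conf location, and the tutorial runs as root so no sudo is used):

```shell
# Copy the admin kubeconfig into a user's home, as "kubeadm init" instructs.
# Usage: setup_kubectl_config [HOME_DIR] [ADMIN_CONF]
setup_kubectl_config() {
  home=${1:-$HOME}                       # target home directory
  src=${2:-/etc/kubernetes/admin.conf}   # kubeconfig written by kubeadm
  mkdir -p "$home/.kube"
  cp -i "$src" "$home/.kube/config"
  chown "$(id -u):$(id -g)" "$home/.kube/config"
}
```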


Then use the join command from the output to add the two slave nodes to the master:

kubeadm join 172.16.10.189:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:eecde65b9f9ecea275e9f03d0224e8ba4431d76c39fa3bd0a6ec101d3431329c
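
If the CA hash from the init output is lost, it can be recomputed from the cluster CA certificate; this is the standard procedure from the kubeadm documentation (sha256 over the DER encoding of the CA public key; the default kubeadm path is assumed):

```shell
# Recompute the value for --discovery-token-ca-cert-hash.
# Usage: ca_cert_hash [CA_CRT_PATH]   (default: kubeadm's CA location)
ca_cert_hash() {
  ca=${1:-/etc/kubernetes/pki/ca.crt}
  openssl x509 -pubkey -in "$ca" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}

# On the master: echo "sha256:$(ca_cert_hash)"
```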

When adding the nodes I got an error telling me to disable the swap partition.


Disable it with swapoff -a; after that the nodes join successfully.
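
Note that swapoff -a only lasts until the next reboot; the swap entry in /etc/fstab should also be commented out so kubelet keeps running after a restart. A sketch, written to operate on a file argument so it can be tried on a copy first:

```shell
# Comment out every swap entry in an fstab-style file so swap stays off
# across reboots (kubelet refuses to run with swap enabled).
# Usage: disable_swap_in_fstab [FSTAB_PATH]   (default: /etc/fstab)
disable_swap_in_fstab() {
  fstab=${1:-/etc/fstab}
  # prefix '#' to any not-yet-commented line whose fields include "swap"
  sed -i.bak '/[[:space:]]swap[[:space:]]/ s/^[^#]/#&/' "$fstab"
}
```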


kubectl get node # check the node information

Three node entries show up, but their status is NotReady because the flannel plugin is not installed yet.


Install the flannel plugin

Download flannel's yaml manifest (kube-flannel.yml):

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
             add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

Create it:

kubectl create -f kube-flannel.yml
kubectl delete -f kube-flannel.yml # if something goes wrong, delete and re-create
kubectl get po -n kube-system -o wide # check the pods' status


Two pods show the status Evicted, which usually means the node is running out of disk space. After freeing up disk space on the slaves, the status returns to normal.


Create a test nginx service

kubectl run test-nginx --image=nginx:latest
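
To fetch the pod's IP (the container IP from this article's title), kubectl's jsonpath output works on the master; when only the raw JSON of a pod is at hand, a small text extractor does the same job. The extractor below is an illustration, not a kubectl feature:

```shell
# On the master:
#   kubectl get pod test-nginx -o jsonpath='{.status.podIP}'
#   curl "http://$(kubectl get pod test-nginx -o jsonpath='{.status.podIP}')"

# Portable fallback: pull .status.podIP out of "kubectl get pod ... -o json"
# output read from stdin.
pod_ip_from_json() {
  grep -o '"podIP": *"[0-9.]*"' | head -n1 | grep -o '[0-9.]*[0-9]'
}
```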


Check the pod's status and verify that it is reachable.


Deploy the dashboard

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta5/aio/deploy/recommended.yaml

Add one line to the Service section: by default the Dashboard is only reachable from inside the cluster, so change the Service type to NodePort to expose it externally.


kubectl create -f recommended.yaml # create the Dashboard

Check the status; everything is normal.


Check the dashboard's port:


Open it in Firefox (the IP is the master's external IP) and choose the Token login mode.


Get the token with the following three commands:

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
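
The awk step in the last command just picks the secret's name (the first column) from kubectl's tabular output. Demonstrated here on made-up sample rows, since the selection logic is plain text processing:

```shell
# Select column 1 of the row whose name contains "dashboard-admin",
# mimicking the awk step of the token-lookup pipeline above.
dashboard_secret_name() {
  awk '/dashboard-admin/{print $1}'
}

# Illustrative "kubectl -n kube-system get secret" output (rows are made up):
printf '%s\n' \
  'NAME                          TYPE                                  DATA   AGE' \
  'dashboard-admin-token-x7k2p   kubernetes.io/service-account-token   3      5m' \
  'default-token-abcde           kubernetes.io/service-account-token   3      1h' \
  | dashboard_secret_name
# -> dashboard-admin-token-x7k2p
```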

After entering the token you will see the dashboard UI.


At this point, the k8s cluster is up and running. Perfect!

Summary
Static Pods:

## etcd, apiserver, controller-manager and kube-scheduler run as static Pods
kubectl -n kube-system get po
# kubectl: a binary command-line tool

Understanding cluster resources
Components are the installed software that keeps the k8s platform running.
Resources define how k8s's capabilities are used. For example, k8s manages business applications through Pods, so Pod is one resource type of the cluster. All the cluster's resource types can be listed with:

kubectl api-resources

A namespace is a virtual concept within the cluster, similar to a resource pool: one pool can hold resources of every type, and the vast majority of resources must belong to some namespace. A freshly initialized cluster comes with several default namespaces.
To view detailed pod information within one namespace:

kubectl -n kube-system get po

All NAMESPACED resources must be given a namespace at creation time; if none is specified, they end up in the default namespace.

Using kubectl is similar to using docker: it is a command-line tool that talks to the APIServer, ships with a rich set of subcommands, and is extremely powerful.


