
Kubernetes container time out of sync with the host machine: setting the timezone in k8s

Deployment method: kubeadm (the Kubernetes components themselves run as containers)

Deployment details:

IP address       Hostname      OS version   Role
192.168.10.10    k8s-master    7.5          master
192.168.10.20    k8s-node01    7.5          node
192.168.10.30    k8s-node02    7.5          node

 

I. Basic system configuration

1. Configure time synchronization:

# yum install chrony -y

# timedatectl set-timezone Asia/Shanghai    # set the system timezone to Asia/Shanghai

# Start and enable the service
# systemctl start chronyd.service
# systemctl enable chronyd.service
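
A quick sanity check (optional, not part of the original notes): confirm that chrony has picked up an upstream server and that the timezone change took effect.

# chronyc sources -v    # list the NTP sources chrony is tracking
# timedatectl status    # "Time zone: Asia/Shanghai" should now be reported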

2. Configure hostname resolution

# hostnamectl set-hostname k8s-master    # set k8s-node01 / k8s-node02 on the respective nodes

cat >> /etc/hosts << EOF
192.168.10.10  k8s-master
192.168.10.20  k8s-node01
192.168.10.30  k8s-node02
EOF
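
The same /etc/hosts entries can also be pushed to the other two machines from the master (a sketch that assumes root SSH access to the nodes):

for h in 192.168.10.20 192.168.10.30; do
  scp /etc/hosts root@$h:/etc/hosts    # overwrite each node's hosts file with the master's copy
done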

3. Disable the firewall (firewalld)

# systemctl stop firewalld.service
# systemctl disable firewalld.service

4. Disable SELinux

# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config  && setenforce 0

5. Disable swap

# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab    # permanent: comments out the swap entry

# swapoff -a    # immediate, lasts until reboot

6. Kernel tuning: pass bridged IPv4 traffic to the iptables chains

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# sysctl --system    # reloads every sysctl config file, including /etc/sysctl.d/k8s.conf
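
These two sysctls only exist once the br_netfilter kernel module is loaded, so on a minimal install it may need to be loaded explicitly (a small addition, not in the original notes):

# modprobe br_netfilter                                       # load the module now
# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # load it automatically at boot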

II. Install the Docker service

# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
# yum -y install docker-ce-18.06.1.ce-3.el7

# systemctl enable docker && systemctl start docker
# docker --version

III. Install Kubernetes

1. Configure the Kubernetes yum repository (using the Aliyun mirror here)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2. Install kubeadm, kubelet and kubectl

# yum -y install kubelet-1.15.2 kubeadm-1.15.2 kubectl-1.15.2

# rpm -qa kubelet kubectl kubeadm    # confirm the installed versions
 
# systemctl enable kubelet

3. Configure kubelet to tolerate enabled swap

# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
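
The same setting can be written without opening an editor, for example:

cat > /etc/sysconfig/kubelet << EOF
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
EOF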

4. Initialize the Kubernetes master (run on the master node only)

kubeadm init \
--apiserver-advertise-address=192.168.10.10 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.15.2 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16 \
--ignore-preflight-errors=Swap


......
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.10:6443 --token 8nvmlq.wbmmws0ymbszk1yb \
    --discovery-token-ca-cert-hash sha256:ec5385c0abbc1cc14c3c9e40f6ca021e8ae24a11e1c557c285882ffe0b20124d

Parameter notes:

--kubernetes-version              # pin the Kubernetes version
--image-repository                # kubeadm pulls from k8s.gcr.io by default, which is unreachable from mainland China, so point it at the Aliyun mirror instead
--pod-network-cidr                # pod network CIDR
--service-cidr                    # service network CIDR
--ignore-preflight-errors=Swap    # ignore the preflight error about enabled swap

5. Create the kubeconfig as instructed by the init output above

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
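
kubectl should now be able to talk to the API server; a quick check (output omitted, addresses will differ):

# kubectl cluster-info             # prints the API server and CoreDNS endpoints
# kubectl get componentstatuses    # scheduler, controller-manager and etcd should all report Healthy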

6. After initialization, the required images have all been pulled locally:

[root@k8s-master ~]# docker image ls 
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.15.2             9f5df470155d        12 months ago       159MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.15.2             88fa9cb27bd2        12 months ago       81.1MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.15.2             167bbf6c9338        12 months ago       82.4MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.15.2             34a53be6c9a7        12 months ago       207MB
registry.aliyuncs.com/google_containers/coredns                   1.3.1               eb516548c180        19 months ago       40.3MB
registry.aliyuncs.com/google_containers/etcd                      3.3.10              2c4adeb21b4f        21 months ago       258MB
registry.aliyuncs.com/google_containers/pause                     3.1                 da86e6ba6ca1        2 years ago         742kB
[root@k8s-master ~]#

7. Check the master's status:

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   60m   v1.15.2

8. Deploy the flannel network add-on

Apply the flannel add-on on the master. In practice, every node must also have the quay.io/coreos/flannel image locally, otherwise its status stays NotReady.

flannel repository: https://github.com/coreos/flannel

[root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Possible error: The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?

Workaround:

# Look up the current IP of raw.githubusercontent.com at https://www.ipaddress.com and add it to /etc/hosts.
sudo vim /etc/hosts
199.232.28.133 raw.githubusercontent.com

If the flannel pods still fail to come up, the network add-on can be removed and re-applied:

# kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
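
Before deleting and re-applying it is usually worth checking why the flannel pod is unhappy (a sketch; substitute the real pod name reported by the first command):

# kubectl get pods -n kube-system -o wide | grep flannel     # find the flannel pods and the nodes they run on
# kubectl describe pod <flannel-pod-name> -n kube-system     # the events at the bottom usually show image-pull or CNI errors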

IV. Join the nodes to the cluster:

[root@k8s-node01 ~]# kubeadm join 192.168.10.10:6443 --token 8nvmlq.wbmmws0ymbszk1yb     \
    --discovery-token-ca-cert-hash sha256:ec5385c0abbc1cc14c3c9e40f6ca021e8ae24a11e1c557c285882ffe0b20124d \
    --ignore-preflight-errors=Swap
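
The bootstrap token in the join command expires after 24 hours by default; if joining later fails with an authentication error, generate a fresh join command on the master:

# kubeadm token create --print-join-command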

Check the status:

#  kubectl config view

# kubectl get pods -n kube-system|grep flannel

When listing the nodes, the newly joined nodes show STATUS NotReady:

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE    VERSION
k8s-master   Ready      master   108m   v1.15.2
k8s-node01   NotReady   <none>   16m    v1.15.2
k8s-node02   NotReady   <none>   12m    v1.15.2

When this happens, run journalctl -f -u kubelet on the affected node to inspect the kubelet logs:

# journalctl -f -u kubelet

The log contains the following error:

Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Run docker images | grep flannel to check whether the flannel image has been pulled. In this case the master already had it but the nodes did not; loading the image on the nodes fixes the problem.

Both the master and every node need the flannel image.

# docker save -o flannel.tar quay.io/coreos/flannel    # on the master, which already has the image

# docker load -i flannel.tar    # on each node, after copying flannel.tar over
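
Put together, distributing the image looks roughly like this (a sketch that assumes root SSH access to the nodes):

# On the master:
docker save -o flannel.tar quay.io/coreos/flannel
for h in 192.168.10.20 192.168.10.30; do
  scp flannel.tar root@$h:/root/
done

# On each node:
docker load -i /root/flannel.tar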

 

 

kubeadm deployment of a Kubernetes 1.18 cluster (a second set of notes)

Prepare three servers:

Server IP          Hostname
192.168.10.20      k8s-master
192.168.10.30      k8s-node01
192.168.10.40      k8s-node02

1. Initialize the system environment; run the following on all three servers
1.1 Disable the firewall

# systemctl stop firewalld.service
# systemctl disable firewalld.service

1.2 Disable SELinux and swap

# setenforce 0
# swapoff -a
# sed -i 's/enforcing/disabled/' /etc/selinux/config

1.3 Set the hostnames and add them to /etc/hosts on all three servers

# hostnamectl set-hostname k8s-master     # run the matching command on each server
# hostnamectl set-hostname k8s-node01
# hostnamectl set-hostname k8s-node02

cat >> /etc/hosts << EOF
192.168.10.20     k8s-master
192.168.10.30     k8s-node01
192.168.10.40     k8s-node02
EOF

1.4 Pass bridged IPv4 traffic to the iptables chains:

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# sysctl --system    # apply the settings

1.5 Synchronize the time
# ntpdate time.windows.com

# If the timezone is wrong, fix it first and then synchronize again
cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
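
Note that fixing the host timezone does not automatically fix the time shown inside containers: most images default to UTC, which is the container/host time mismatch mentioned in the title. Once the cluster is up, the two usual fixes are setting the TZ environment variable and/or mounting the node's /etc/localtime into the pod. A minimal sketch (the pod name and image are only for illustration):

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: tz-demo                      # hypothetical pod, for illustration only
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "date; sleep 3600"]
    env:
    - name: TZ                       # honoured by most glibc/musl based images
      value: Asia/Shanghai
    volumeMounts:
    - name: localtime                # share the node's timezone file with the container
      mountPath: /etc/localtime
      readOnly: true
  volumes:
  - name: localtime
    hostPath:
      path: /etc/localtime
EOF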


1.6 Configure the Aliyun repository and install the packages

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF


# yum  install -y   kubelet-1.18.0  kubeadm-1.18.0  kubectl-1.18.0

# kubectl 1.18.0 is the command-line client, kubeadm 1.18.0 bootstraps the cluster, kubelet 1.18.0 manages the containers on each node

# systemctl  enable  kubelet

2. Deploy the Kubernetes master (run on the master node)

kubeadm init \
--apiserver-advertise-address=192.168.10.20 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.18.0 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16 \
--ignore-preflight-errors=Swap

# --apiserver-advertise-address   address the API server advertises on (the master's IP)
# --image-repository              registry to pull the control-plane images from
# --kubernetes-version            Kubernetes version to install
# --service-cidr                  cluster-internal Service network
# --pod-network-cidr              pod network


......

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.20:6443 --token z5dftx.kfgw69e9v3bpt63c \
    --discovery-token-ca-cert-hash sha256:6832b66e03e1f58e782d24ce83a32ae501f59f2e88fbd845604e9713dbbad059 
[root@k8s-master ~]#



[root@k8s-master ~]# kubectl get node
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   11m   v1.18.0
[root@k8s-master ~]# kubectl  get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-4d2wt             0/1     Pending   0          11m
coredns-7ff77c879f-kjvtg             0/1     Pending   0          11m
etcd-k8s-master                      1/1     Running   0          11m
kube-apiserver-k8s-master            1/1     Running   0          11m
kube-controller-manager-k8s-master   1/1     Running   0          11m
kube-proxy-dj6gz                     1/1     Running   0          11m
kube-scheduler-k8s-master            1/1     Running   0          11m
[root@k8s-master ~]# 

2.1 Install the pod network add-on

# wget http://120.78.77.38/file/kube-flannel.yaml    # the upstream manifest is hosted abroad; this is a local mirror
# kubectl apply -f kube-flannel.yaml                 # the image referenced in the original manifest cannot be pulled directly; change it to the lizhenliang/flannel image used below

After the manifest is applied, a flannel image shows up locally; it provides the multi-host container network.

# docker pull lizhenliang/flannel:v0.11.0-amd64    # pulling the image first is recommended
# kubectl apply -f kube-flannel.yaml
# kubectl get pods -n kube-system
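
Editing the manifest by hand can be replaced by a one-line substitution (a sketch; it assumes the manifest references quay.io/coreos/flannel:v0.11.0-amd64, so adjust the pattern to whatever your copy actually contains):

# sed -i 's#quay.io/coreos/flannel:v0.11.0-amd64#lizhenliang/flannel:v0.11.0-amd64#g' kube-flannel.yaml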


After deploying the network add-on:

[root@k8s-master ~]# kubectl  get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-4d2wt             1/1     Running   0          16m
coredns-7ff77c879f-kjvtg             1/1     Running   0          16m
etcd-k8s-master                      1/1     Running   0          17m
kube-apiserver-k8s-master            1/1     Running   0          17m
kube-controller-manager-k8s-master   1/1     Running   0          17m
kube-flannel-ds-amd64-49dr9          1/1     Running   0          73s
kube-proxy-dj6gz                     1/1     Running   0          16m
kube-scheduler-k8s-master            1/1     Running   0          17m
[root@k8s-master ~]# 
[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   17m   v1.18.0
[root@k8s-master ~]# 

As shown above, the control-plane pods on the master are all running normally.

3. Join the worker nodes to the master (run on each of the two nodes)

kubeadm join 192.168.10.20:6443 --token z5dftx.kfgw69e9v3bpt63c \
    --discovery-token-ca-cert-hash sha256:6832b66e03e1f58e782d24ce83a32ae501f59f2e88fbd845604e9713dbbad059

# kubectl get nodes     # run on the master node

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   29m     v1.18.0
k8s-node01   Ready    <none>   7m7s    v1.18.0
k8s-node02   Ready    <none>   6m56s   v1.18.0
[root@k8s-master ~]# 


At this point the Kubernetes cluster deployment is complete.

4. Install the dashboard web UI on the master

The dashboard images can be pulled manually ahead of time:

# docker pull kubernetesui/dashboard:v2.0.0-beta8
# docker pull kubernetesui/metrics-scraper:v1.0.1


# wget   http://120.78.77.38/file/kubernetes-dashboard.yaml
# kubectl apply -f  kubernetes-dashboard.yaml

[root@k8s-master ~]# kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-694557449d-nhxc9   1/1     Running   0          8m44s
kubernetes-dashboard-9774cc786-r2qnx         1/1     Running   0          8m44s
[root@k8s-master ~]# 

[root@k8s-master ~]# kubectl get pod -n kubernetes-dashboard  -o wide
NAME                                         READY   STATUS    RESTARTS   AGE     IP           NODE         NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-694557449d-nhxc9   1/1     Running   0          9m20s   10.244.1.2   k8s-node01   <none>           <none>
kubernetes-dashboard-9774cc786-r2qnx         1/1     Running   0          9m20s   10.244.2.4   k8s-node02   <none>           <none>
[root@k8s-master ~]# 


Login URL:
https://192.168.10.20:30001        # open it in Firefox (other browsers tend to reject the self-signed certificate)
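
Port 30001 is the NodePort defined in kubernetes-dashboard.yaml (assuming the manifest exposes the dashboard as a NodePort Service); it can be confirmed with:

# kubectl get svc -n kubernetes-dashboard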

# Create a login token

[root@k8s-master ~]# kubectl create serviceaccount  dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@k8s-master ~]# kubectl create  clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
[root@k8s-master ~]# 
[root@k8s-master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

......

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlZQeU15aWgyeFZNOS14aTQ0ZVFHaGM5WXMyb2sxMkNMVWRMdkJ1cDBKbncifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJ0dGwtY29udHJvbGxlci10b2tlbi1zcXM2eiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJ0dGwtY29udHJvbGxlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjZlY2I3ZjM3LTFkMWQtNDE0Ny1hMGM4LTkyYmMwYzZlZmM0OSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTp0dGwtY29udHJvbGxlciJ9.k1VoJ6UJD0X5tt-ldVNiBLGVf_1PL791_ccNkMWGXQg2YPj_ZJR0G-jaQ8V336JHNtv5LlxPnmGpZhSMcxiMqGDukKIozWoqfiZVCysbPzsqN-NoVtCe90su2apjuHJhDB-2hFUZYJ481p7Q69SQ9pCf5QQv1FORyoHRvWG4a5M_QCgRXnLsNdcIHb56bvs2sA18n6EHDYDr4bLFWKxlEe6eHNgIyQeBJZ4jr7kEJ1DrDrU1Gr5fGAhCAsyONeFoJvv2Fcpk4o_CR1eIAxZHV4JiODl14tDTn5zMCLRYcU2X3QMXB9fc5JE7TI_nGl2INIQ0asYvVTJxmKxp8gFdBA
[root@k8s-master ~]# 

Test the Kubernetes cluster

(1) Create an nginx pod
Create an nginx pod in the cluster to verify that workloads run correctly.
Run the following steps on the master node:

[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@k8s-master ~]# 

Now look at the pod and the service:

[root@k8s-master ~]# kubectl get pod,svc -o wide
NAME                        READY   STATUS    RESTARTS   AGE     IP           NODE         NOMINATED NODE   READINESS GATES
pod/nginx-f89759699-rrg5g   1/1     Running   0          3m33s   10.244.2.5   k8s-node02   <none>           <none>

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE     SELECTOR
service/kubernetes   ClusterIP   10.1.0.1      <none>        443/TCP        71m     <none>
service/nginx        NodePort    10.1.12.175   <none>        80:30990/TCP   3m22s   app=nginx
[root@k8s-master ~]# 

In the output, the first half lists the pods and the second half lists the services. The service/nginx line shows that the service is exposed to the outside on NodePort 30990; note this port.

The pod details also show that the pod is running on k8s-node02, whose IP address is 192.168.10.40.

(2) Access nginx to verify the cluster
Open a browser (Firefox recommended) and visit:

http://192.168.10.40:30990
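
The same check works from the command line, and any node IP can be used because a NodePort is opened on every node:

# curl -I http://192.168.10.40:30990    # an HTTP/1.1 200 OK response confirms nginx is reachable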




k8s monitoring endpoints:

http://192.168.10.20:31672/metrics

http://192.168.10.20:30003/targets

http://192.168.10.20:30106/?orgId=1


 

Tearing down a kubeadm-deployed cluster:

# Reset the node and wipe the cluster state
kubeadm reset
# Remove the rpm packages
rpm -qa | grep kube | xargs rpm --nodeps -e
# Remove the containers and images
docker images -qa | xargs docker rmi -f
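
kubeadm reset deliberately leaves a few things behind and prints a reminder about them; the usual extra cleanup is:

# Remove leftover CNI configuration and the admin kubeconfig
rm -rf /etc/cni/net.d $HOME/.kube
# Flush the iptables rules added by kube-proxy and the CNI plugin
iptables -F && iptables -t nat -F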




