Installing k8s 1.9 by hand

2018-01-20 Lingxian Kong

Original post: https://lingxiankong.github.io/2018-01-20-install-k8s.html


Update history:

  • 2018.01.20, initial draft
  • 2018.02.14, updated the ansible script

A couple of days ago I was fiddling with Qinling's devstack. Qinling depends on k8s by default, so k8s has to be installed as part of the devstack setup. The original script was lifted from the openstack-helm project, which was still installing 1.7 at the time; after Qinling upgraded its kubernetes python client, that client turned out to be incompatible with 1.7, so I started digging into how to install k8s. Along the way I tried just about every open-source k8s installer I could find, but each of them has some limitation (or hassle, if you like) when used together with devstack, and in the end I had to go back to openstack-helm's k8s-aio script (openstack-helm itself has actually moved on to installing k8s with ansible under zuul3 by now). The k8s-aio script is essentially the manual installation steps bundled together; its special twist is that it runs kubelet inside a container as well.

Once my own problem was solved, I realized I was completely in the dark about how k8s actually gets installed; if another devstack issue comes up later I would be stuck fumbling around again. So I tried installing k8s by hand to get a feel for it, and to lay a bit of groundwork for reading the k8s source code.

I simply followed the official k8s documentation:

  • https://kubernetes.io/docs/setup/independent/install-kubeadm/ # includes the required-ports matrix
  • https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/

Install docker

apt-get update -qq >/dev/null
apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null
curl -fsSL "https://download.docker.com/linux/ubuntu/gpg" | apt-key add -qq - >/dev/null
add-apt-repository "deb https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") $(lsb_release -cs) stable"
apt-get update -qq >/dev/null && apt-get install -y docker-ce=$(apt-cache madison docker-ce | grep 17.04 | head -1 | awk '{print $3}')
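
Not strictly required, but a quick sanity check at this point is to make sure the Docker daemon is running and note which cgroup driver it uses; the kubelet is expected to use the same driver as Docker (on Ubuntu this is usually cgroupfs).

docker version
docker info 2>/dev/null | grep -i 'cgroup driver'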

Install kube* toolset

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -qq - >/dev/null
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update > /dev/null
apt-get install -y kubelet kubeadm kubectl
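
One optional extra worth doing here: pin the three packages so that an unattended apt upgrade does not bump the cluster components behind your back.

apt-mark hold kubelet kubeadm kubectl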

As for the three packages just installed:

  • kubectl, as most people know, is the command-line tool for talking to the k8s services.
  • kubeadm is the command-line tool for standing up a k8s test environment.
  • kubelet is the important one: without kubelet, kubeadm can do nothing. kubelet is roughly the counterpart of the nova-compute process in Nova (which manages VMs), in that it is responsible for managing containers. Once kubelet is installed, a kubelet service shows up on the system; a quick way to check on it is sketched below.
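
For instance, to confirm the service is there and see what it is doing (before kubeadm init runs, the kubelet simply restarts in a crash loop, waiting for kubeadm to give it a configuration):

systemctl status kubelet
journalctl -u kubelet -f    # follow the kubelet logs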

Init k8s master

Simply run:

root@lingxian-test-kubeadm:~# kubeadm init

What this command does:

  • runs the system preflight checks
  • generates a token
  • generates self-signed certificates
  • generates kubeconfig files for talking to the apiserver (see the note after this list)
  • generates manifests for the control-plane service containers and puts them under /etc/kubernetes/manifests
  • configures RBAC and taints the master node so that it only runs control-plane containers
  • creates add-on services such as kube-proxy and kube-dns
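
One practical detail about that kubeconfig: kubeadm writes the admin credentials to /etc/kubernetes/admin.conf, and kubectl has to be pointed at them before the commands below will work. Running as root, either of the following is enough (the second form is what kubeadm init itself prints for non-root users):

export KUBECONFIG=/etc/kubernetes/admin.conf

# or:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config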

Once the install succeeds, you can look at the containers and pods that have been created:

root@lingxian-test-kubeadm:~# docker ps | grep -v '/pause'
CONTAINER ID        IMAGE                                                    COMMAND                  CREATED             STATUS              PORTS               NAMES
c4a99477f9f8        gcr.io/google_containers/kube-proxy-amd64                "/usr/local/bin/ku..."   3 minutes ago       Up 3 minutes                            k8s_kube-proxy_kube-proxy-zflss_kube-system_fd04b41e-fdc1-11e7-a8b1-fa163e3fffda_0
cbb31b4900e4        gcr.io/google_containers/kube-apiserver-amd64            "kube-apiserver --..."   3 minutes ago       Up 3 minutes                            k8s_kube-apiserver_kube-apiserver-lingxian-test-kubeadm_kube-system_404a8fb3a79bc1000957873bdade26b5_0
60e0266dc272        gcr.io/google_containers/kube-scheduler-amd64            "kube-scheduler --..."   4 minutes ago       Up 4 minutes                            k8s_kube-scheduler_kube-scheduler-lingxian-test-kubeadm_kube-system_69c12074e336b0dbbd0a1666ce05226a_0
e3729e63ceec        gcr.io/google_containers/kube-controller-manager-amd64   "kube-controller-m..."   4 minutes ago       Up 4 minutes                            k8s_kube-controller-manager_kube-controller-manager-lingxian-test-kubeadm_kube-system_8e301ae2c343e960ad9d90471f03cf52_0
6f60fa52200d        gcr.io/google_containers/etcd-amd64                      "etcd --listen-cli..."   4 minutes ago       Up 4 minutes                            k8s_etcd_etcd-lingxian-test-kubeadm_kube-system_7278f85057e8bf5cb81c9f96d3b25320_0
root@lingxian-test-kubeadm:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                                            READY     STATUS    RESTARTS   AGE
kube-system   etcd-lingxian-test-kubeadm                      1/1       Running   0          4m
kube-system   kube-apiserver-lingxian-test-kubeadm            1/1       Running   0          3m
kube-system   kube-controller-manager-lingxian-test-kubeadm   1/1       Running   0          4m
kube-system   kube-dns-6f4fd4bdf-5lhmb                        0/3       Pending   0          4m
kube-system   kube-proxy-zflss                                1/1       Running   0          4m
kube-system   kube-scheduler-lingxian-test-kubeadm            1/1       Running   0          3m

One more remark: readers familiar with k8s already know what those pause containers (filtered out of the docker ps output above) are for. When you create a pod with a single container in k8s, k8s quietly creates an extra container in the pod, the so-called infra-container, which initializes the pod's network and namespaces; the other containers in the pod then share that network and those namespaces. So once the network initialization is done, these infra-containers simply sleep forever, until they receive SIGINT or SIGTERM.
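
You can actually see this sharing from the Docker side: an application container's network mode points back at its pod's pause container instead of at a namespace of its own. A rough check, using kube-proxy as an example (exact output will differ per environment):

docker inspect --format '{{ .HostConfig.NetworkMode }}' $(docker ps -q --filter name=k8s_kube-proxy | head -1)
# typically prints something like: container:<id of the pause container>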

kube-dns is stuck in Pending here because it cannot start until a pod network add-on has been installed.
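
To confirm that, the events at the bottom of a describe on the kube-dns pod should show it cannot be scheduled while the node is still NotReady (the label selector below assumes the default k8s-app=kube-dns label):

kubectl -n kube-system describe pod -l k8s-app=kube-dns | tail -n 10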

Revert k8s master

When I moved on to installing the network add-on, I found that calico requires kubeadm init to be run with an extra argument, so I reverted the kubeadm installation:

root@lingxian-test-kubeadm:~# kubectl get nodes
NAME                    STATUS     ROLES     AGE       VERSION
lingxian-test-kubeadm   NotReady   master    8m        v1.9.2
root@lingxian-test-kubeadm:~# kubectl drain lingxian-test-kubeadm --delete-local-data --force --ignore-daemonsets
node "lingxian-test-kubeadm" cordoned
WARNING: Deleting pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: etcd-lingxian-test-kubeadm, kube-apiserver-lingxian-test-kubeadm, kube-controller-manager-lingxian-test-kubeadm, kube-scheduler-lingxian-test-kubeadm; Ignoring DaemonSet-managed pods: kube-proxy-zflss
node "lingxian-test-kubeadm" drained
root@lingxian-test-kubeadm:~# kubectl delete node lingxian-test-kubeadm
node "lingxian-test-kubeadm" deleted
root@lingxian-test-kubeadm:~# kubeadm reset
[preflight] Running pre-flight checks.
[reset] Stopping the kubelet service.
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers.
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]

Re-run the kubeadm command:

kubeadm init --pod-network-cidr=192.168.0.0/16

Install calico

kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml

Then check that the control-plane pods are all Running:

root@lingxian-test-kubeadm:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                                            READY     STATUS    RESTARTS   AGE
kube-system   calico-etcd-xlcd2                               1/1       Running   0          3m
kube-system   calico-kube-controllers-d554689d5-xz7xz         1/1       Running   0          3m
kube-system   calico-node-zgm89                               2/2       Running   0          3m
kube-system   etcd-lingxian-test-kubeadm                      1/1       Running   0          3m
kube-system   kube-apiserver-lingxian-test-kubeadm            1/1       Running   0          3m
kube-system   kube-controller-manager-lingxian-test-kubeadm   1/1       Running   0          3m
kube-system   kube-dns-6f4fd4bdf-mr7dh                        3/3       Running   0          4m
kube-system   kube-proxy-z2svf                                1/1       Running   0          4m
kube-system   kube-scheduler-lingxian-test-kubeadm            1/1       Running   0          3m

Allow pods to schedule on the master

I only have one node, so I just use the master as a worker too:

root@lingxian-test-kubeadm:~# kubectl taint nodes --all node-role.kubernetes.io/master-
node "lingxian-test-kubeadm" untainted

Test

root@lingxian-test-kubeadm:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T10:09:24Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T09:42:01Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
root@lingxian-test-kubeadm:~# kubectl run mynginx --image=nginx --expose --port 80
service "mynginx" created
deployment "mynginx" created
root@lingxian-test-kubeadm:~# kubectl get services
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   25m
mynginx      ClusterIP   10.104.113.58   <none>        80/TCP    12m
root@lingxian-test-kubeadm:~# curl 10.104.113.58
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Join nodes

Refer to https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#44-joining-your-nodes
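
For reference, the join command that kubeadm init prints at the end looks roughly like this (the token, master IP and hash are placeholders; copy the exact line from your own kubeadm init output):

kubeadm join --token <token> <master-ip>:6443 --discovery-token-ca-cert-hash sha256:<hash>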

Automation

Running commands by hand line by line is not my style, so I automated these steps with ansible. I'm an ansible newbie; the playbook was written by googling and testing as I went, and I ended up with a first version that is reasonably stable. The k8s nodes are, of course, openstack virtual machines. The script can be found here.