Kubernetes 1.5.2 has been released, so let's update the deployment walkthrough accordingly.
1 Environment Preparation
Three machines were prepared for installation and testing:
| IP | Name | Role | OS |
|---|---|---|---|
| 172.16.1.101 | Master01 | Controller | CentOS 7.2 |
| 172.16.1.106 | Minion01 | Compute | CentOS 7.2 |
| 172.16.1.107 | Minion02 | Compute | CentOS 7.2 |
2 Install Docker
```bash
tee /etc/yum.repos.d/docker.repo <<-'EOF'
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/$releasever/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg
EOF
yum update -y && yum upgrade -y
yum install docker-engine -y
systemctl start docker
systemctl enable docker.service
```
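Before moving on, it's worth a quick check that the daemon actually came up; a minimal sanity check (the `--format` flag assumes Docker 1.12 or later):

```bash
# Confirm the service is active and print the installed engine version
systemctl is-active docker
docker version --format '{{.Server.Version}}'
```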
3 Install the k8s Toolkit
There are three ways to install: the official repo, an unofficial repo, or building the release project. With yum you cannot reach Google's repo directly, and most unofficial repos carry fairly old versions (mritd's repo is a notable exception and stays current). If you need the newest packages, try building the release project.
Provided by this site
For the lazier among you :-D, the RPM packages can be downloaded and installed directly from the location below.

They are available from the Linux 公社 resource site: the free download address is http://linux.linuxidc.com/ (username and password are both www.linuxidc.com), in the directory /2017年资料/2月/17日/Kubeadm快速部署Kubernetes1.5.2/. Download instructions: http://www.linuxidc.com/Linux/2013-07/87684.htm
```bash
yum install -y socat
rpm -ivh kubeadm-1.6.0-0.alpha.0.2074.a092d8e0f95f52.x86_64.rpm kubectl-1.5.1-0.x86_64.rpm kubelet-1.5.1-0.x86_64.rpm kubernetes-cni-0.3.0.1-0.07a8a2.x86_64.rpm
systemctl enable kubelet.service
```
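To double-check what actually landed on the system, query the installed packages (names as in the RPMs above):

```bash
rpm -qa | grep -E 'kubeadm|kubectl|kubelet|kubernetes-cni'
kubelet --version
```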
Installing from the official repo
I won't go into the details of getting across the GFW; you know how. It is recommended to fetch the RPM packages with `yumdownloader`; otherwise the download speed will make you lose all interest in playing with k8s.
```bash
yum install -y yum-utils
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yumdownloader kubelet kubeadm kubectl kubernetes-cni
rpm -ivh *.rpm
systemctl enable kubelet.service && systemctl start kubelet
```
Installing from an unofficial repo
```bash
# Thanks to mritd for maintaining this yum repo
tee /etc/yum.repos.d/mritd.repo << EOF
[mritdrepo]
name=Mritd Repository
baseurl=https://rpm.mritd.me/centos/7/x86_64
enabled=1
gpgcheck=1
gpgkey=https://cdn.mritd.me/keys/rpm.public.key
EOF
yum makecache
yum install -y kubelet kubectl kubernetes-cni kubeadm
systemctl enable kubelet && systemctl start kubelet
```
Building the release project
```bash
git clone https://github.com/kubernetes/release.git
cd release/rpm
./docker-build.sh
```
When the build finishes, the RPM packages are written to /output/x86_64. Go into that directory and install them, taking care to pick the amd64 packages (most of you will be on a 64-bit environment; if you are on 32-bit or an ARM architecture, choose accordingly).
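As a sketch, assuming the build output layout described above, installation on a 64-bit host would look like:

```bash
# Install the freshly built amd64 packages; adjust the path if your output dir differs
cd output/x86_64
rpm -ivh kubeadm-*.x86_64.rpm kubectl-*.x86_64.rpm kubelet-*.x86_64.rpm kubernetes-cni-*.x86_64.rpm
systemctl enable kubelet.service
```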
4 Download the Docker Images
The images a kubeadm-based Kubernetes cluster needs are not available on Docker Hub; they are only published in Google's registry, gcr.io. What about the GFW? Climb over it! You can also build them yourself using Docker Hub as a relay. For k8s 1.5.2 I have already prepared the images, so you can pull them directly. The dashboard does not track the kubelet mainline version, so any recent release will do; this article uses kubernetes-dashboard-amd64:v1.5.0.
Images required by kubernetes-1.5.2:
- etcd-amd64:2.2.5
- kubedns-amd64:1.9
- kube-dnsmasq-amd64:1.4
- dnsmasq-metrics-amd64:1.0
- exechealthz-amd64:1.2
- pause-amd64:3.0
- kube-discovery-amd64:1.0
- kube-proxy-amd64:v1.5.2
- kube-scheduler-amd64:v1.5.2
- kube-controller-manager-amd64:v1.5.2
- kube-apiserver-amd64:v1.5.2
- kubernetes-dashboard-amd64:v1.5.0
To save yourself the typing, just run the following script:
```bash
images=(kube-proxy-amd64:v1.5.2 kube-discovery-amd64:1.0 kubedns-amd64:1.9 kube-scheduler-amd64:v1.5.2 kube-controller-manager-amd64:v1.5.2 kube-apiserver-amd64:v1.5.2 etcd-amd64:2.2.5 kube-dnsmasq-amd64:1.4 dnsmasq-metrics-amd64:1.0 exechealthz-amd64:1.2 pause-amd64:3.0 kubernetes-dashboard-amd64:v1.5.0 nginx-ingress-controller:0.8.3)
for imageName in "${images[@]}"; do
  docker pull linuxidc/$imageName
  docker tag linuxidc/$imageName gcr.io/google_containers/$imageName
  docker rmi linuxidc/$imageName
done
```
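When the loop finishes, all 13 images should be retagged under gcr.io/google_containers; a quick way to confirm:

```bash
# Every image kubeadm needs should appear with a gcr.io/google_containers prefix
docker images | grep gcr.io/google_containers
```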
5 Install the Master Node
Installing kubeadm and kubelet creates the /etc/kubernetes directory, and `kubeadm init` first checks whether that directory already exists, so we reset the environment with kubeadm before initializing.
```bash
kubeadm reset && systemctl start kubelet
kubeadm init --api-advertise-addresses=172.16.1.101 --use-kubernetes-version v1.5.2
# If using an external etcd cluster:
kubeadm init --api-advertise-addresses=172.16.1.101 --use-kubernetes-version v1.5.2 --external-etcd-endpoints http://172.16.1.107:2379,http://172.16.1.107:4001
```
Note: if you intend to use the flannel network, add `--pod-network-cidr=10.244.0.0/16`. If the machine has multiple NICs, set `--api-advertise-addresses=<ip-address>` to match your actual setup; with a single NIC it can be omitted.
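For example, a flannel-ready init on this topology would combine the flags like so (a sketch; substitute your own master IP):

```bash
kubeadm init --api-advertise-addresses=172.16.1.101 \
  --use-kubernetes-version v1.5.2 \
  --pod-network-cidr=10.244.0.0/16
```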
If you get an `ebtables not found in system path` error, install the `ebtables` package first. I was not prompted during my install; the package already ships with the system.

```bash
yum install -y ebtables
```
The installation takes about 2-3 minutes, and the output looks like this:
```
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[init] Using Kubernetes version: v1.5.2
[tokens] Generated token: "064158.548b9ddb1d3fad3e"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 61.317580 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 6.556101 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 6.020980 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=de3d61.504a049ec342e135 172.16.1.101
```
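If you want to drive the cluster from a machine other than the master, point kubectl at the admin.conf the init step just wrote; a sketch, assuming root SSH access to the master:

```bash
# Fetch the admin kubeconfig from the master and use it explicitly
scp root@172.16.1.101:/etc/kubernetes/admin.conf .
kubectl --kubeconfig ./admin.conf get nodes
```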
6 Install the Minion Nodes
With the master node installed, the minion nodes are simple.
```bash
kubeadm reset && systemctl start kubelet
kubeadm join --token=de3d61.504a049ec342e135 172.16.1.101
```
The output looks like this:
```
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[tokens] Validating provided token
[discovery] Created cluster info discovery client, requesting info from "http://172.16.1.101:9898/cluster-info/v1/?token-id=f11877"
[discovery] Cluster info object received, verifying signature using given token
[discovery] Cluster info signature and contents are valid, will use API endpoints [https://172.16.1.101:6443]
[bootstrap] Trying to connect to endpoint https://172.16.1.101:6443
[bootstrap] Detected server version: v1.5.2
[bootstrap] Successfully established connection with endpoint "https://172.16.1.101:6443"
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server:
Issuer: CN=kubernetes | Subject: CN=system:node:yournode | CA: false
Not before: 2016-12-15 19:44:00 +0000 UTC Not After: 2017-12-15 19:44:00 +0000 UTC
[csr] Generating kubelet configuration
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
```
Once the install completes, check the status:
```
[root@master ~]# kubectl get nodes
NAME       STATUS         AGE
master     Ready,master   6m
minion01   Ready          2m
minion02   Ready          2m
```
7 Install the Calico Network
There are plenty of network components to choose from; pick calico, weave, or flannel according to your needs. Calico has the best performance; weave and flannel are roughly equal. The addons come with ready-made yaml for each. This deployment runs on an Alibaba Cloud VPC, where the flannel network created by the official flannel.yaml is broken, so this article goes with calico.
```bash
kubectl apply -f http://docs.projectcalico.org/v2.0/getting-started/kubernetes/installation/hosted/kubeadm/calico.yaml
```
If you are using an external etcd, remove the following section from the manifest and change `etcd_endpoints: [ETCD_ENDPOINTS]` to point at it:
```yaml
# This manifest installs the Calico etcd on the kubeadm master. This uses a DaemonSet
# to force it to run on the master even when the master isn't schedulable, and uses
# nodeSelector to ensure it only runs on the master.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: calico-etcd
  namespace: kube-system
  labels:
    k8s-app: calico-etcd
spec:
  template:
    metadata:
      labels:
        k8s-app: calico-etcd
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: |
          [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
           {"key": "CriticalAddonsOnly", "operator": "Exists"}]
    spec:
      # Only run this pod on the master.
      nodeSelector:
        kubeadm.alpha.kubernetes.io/role: master
      hostNetwork: true
      containers:
        - name: calico-etcd
          image: gcr.io/google_containers/etcd:2.2.1
          env:
            - name: CALICO_ETCD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          command: ["/bin/sh", "-c"]
          args: ["/usr/local/bin/etcd --name=calico --data-dir=/var/etcd/calico-data --advertise-client-urls=http://$CALICO_ETCD_IP:6666 --listen-client-urls=http://0.0.0.0:6666 --listen-peer-urls=http://0.0.0.0:6667"]
          volumeMounts:
            - name: var-etcd
              mountPath: /var/etcd
      volumes:
        - name: var-etcd
          hostPath:
            path: /var/etcd

---

# This manifest installs the Service which gets traffic to the Calico
# etcd.
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: calico-etcd
  name: calico-etcd
  namespace: kube-system
spec:
  # Select the calico-etcd pod running on the master.
  selector:
    k8s-app: calico-etcd
  # This ClusterIP needs to be known in advance, since we cannot rely
  # on DNS to get access to etcd.
  clusterIP: 10.96.232.136
  ports:
    - port: 6666
```
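For the external etcd from the init example above, the endpoint change could be applied with a one-liner; a sketch, assuming you saved the manifest locally as calico.yaml (hypothetical filename):

```bash
# Point calico's etcd_endpoints at the external etcd instead of the in-cluster one
sed -i 's#etcd_endpoints: ".*"#etcd_endpoints: "http://172.16.1.107:2379"#' calico.yaml
```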
Check that the components are running on every node:
```
[root@master work]# kubectl get po -n=kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE   IP                NODE
calico-node-0jkjn                          2/2     Running   0          25m   172.16.1.101      master
calico-node-w1kmx                          2/2     Running   2          25m   172.16.1.106      minion01
calico-node-xqch6                          2/2     Running   0          25m   172.16.1.107      minion02
calico-policy-controller-807063459-d7z47   1/1     Running   0          11m   172.16.1.107      minion02
dummy-2088944543-qw3vr                     1/1     Running   0          29m   172.16.1.101      master
kube-apiserver-master                      1/1     Running   0          28m   172.16.1.101      master
kube-controller-manager-master             1/1     Running   0          29m   172.16.1.101      master
kube-discovery-1769846148-lzlff            1/1     Running   0          29m   172.16.1.101      master
kube-dns-2924299975-jfvrd                  4/4     Running   0          29m   192.168.228.193   master
kube-proxy-6bk7n                           1/1     Running   0          28m   172.16.1.107      minion02
kube-proxy-6pgqz                           1/1     Running   1          29m   172.16.1.106      minion01
kube-proxy-7ms6m                           1/1     Running   0          29m   172.16.1.101      master
kube-scheduler-master                      1/1     Running   0          28m   172.16.1.101      master
```
Note: kube-dns only reaches the Running state once the calico configuration has completed.
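Rather than polling by hand, you can watch the kube-system pods converge:

```bash
# -w streams updates; kube-dns should flip to Running once calico is up
kubectl get pods --namespace=kube-system -w
```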
8 Deploy the Dashboard
Download kubernetes-dashboard.yaml:
```bash
curl -O https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
```
Edit the configuration; the changes sit between the `#----------` marker comments. The goals: deploy kubernetes-dashboard into the default namespace, avoid exposing a port on the host node, pin the version to 1.5.0, and set imagePullPolicy to IfNotPresent.
```yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
#----------
#  namespace: kube-system
#----------
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
      annotations:
        scheduler.alpha.kubernetes.io/tolerations: |
          [
            {
              "key": "dedicated",
              "operator": "Equal",
              "value": "master",
              "effect": "NoSchedule"
            }
          ]
    spec:
      containers:
      - name: kubernetes-dashboard
#----------
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.0
        imagePullPolicy: IfNotPresent
#----------
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
#----------
#  namespace: kube-system
#----------
spec:
#----------
#  type: NodePort
#----------
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard
```
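With the edits in place, deploy the file and confirm the pod schedules; the label selector below matches the `app: kubernetes-dashboard` label in the manifest:

```bash
kubectl apply -f kubernetes-dashboard.yaml
kubectl get pods -l app=kubernetes-dashboard -o wide
```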
9 Exposing the Dashboard Service to the Public Internet
A Service in kubernetes can be exposed externally in three ways:

- LoadBalancer Service
- NodePort Service
- Ingress
A LoadBalancer Service is where kubernetes integrates deeply with a cloud platform: exposing a service this way asks the underlying cloud to provision a load balancer in front of it. Cloud platform support is by now fairly complete, covering GCE and DigitalOcean abroad, Alibaba Cloud domestically, and private OpenStack clouds. Because it is tied to the cloud platform, a LoadBalancer Service can only be used on such platforms.
A NodePort Service, as the name suggests, opens a port on every node of the cluster and maps that port to a specific service. Although each node has plenty of ports (0-65535), this sees limited use in practice for security and usability reasons (with many services it gets messy, and ports collide). A minimal example follows.
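For instance, exposing an existing deployment as a NodePort is a one-liner (my-nginx is a hypothetical deployment name; the allocated port falls in the default 30000-32767 range):

```bash
# Map port 80 of the my-nginx pods to a high port on every node
kubectl expose deployment my-nginx --type=NodePort --port=80
```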
Ingress exposes services through an open-source reverse proxy / load balancer such as nginx. You can think of an Ingress as the piece that configures domain-based forwarding, much like an upstream block in nginx. It works together with an ingress-controller: the controller watches for pod and service changes and dynamically writes the forwarding rules from the Ingress into nginx, apache, haproxy, or similar, which then perform the reverse proxying and load balancing.
9.1 Deploy the Nginx-ingress-controller
Nginx-ingress-controller is an official kubernetes Docker image that bundles an Ingress controller with Nginx.
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-lb
spec:
  replicas: 1
  selector:
    k8s-app: nginx-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-lb
        name: nginx-ingress-lb
    spec:
      terminationGracePeriodSeconds: 60
      hostNetwork: true
      # In this environment the minion02 node has a public IP and carries the label External-IP=true
      nodeSelector:
        External-IP: "true"
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
        name: nginx-ingress-lb
        imagePullPolicy: IfNotPresent
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/kubernetes-dashboard
```
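The nodeSelector above only matches if minion02 actually carries the label, so label the node before creating the controller (nginx-ingress-rc.yaml is simply whatever filename you saved the manifest under):

```bash
# Label the node that owns the public IP, then create the ingress controller
kubectl label node minion02 External-IP=true
kubectl create -f nginx-ingress-rc.yaml
```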
9.2 Deploy the Ingress
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: k8s-dashboard
spec:
  rules:
  - host: dashboard.linuxidc.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 80
```
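Create it the same way and check that it registered (dashboard-ingress.yaml is a hypothetical filename for the manifest above):

```bash
kubectl create -f dashboard-ingress.yaml
kubectl get ing k8s-dashboard
```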
Once the Ingress is deployed, point the DNS record for dashboard.linuxidc.com at minion02's public IP, and the dashboard becomes reachable at dashboard.linuxidc.com.
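Before the DNS change propagates, you can still test the path end to end by faking the Host header against the node's public IP (the address below is a placeholder):

```bash
curl -H "Host: dashboard.linuxidc.com" http://<minion02-public-ip>/
```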