This tutorial shows how to build and deploy Guestbook, a simple multi-tier web application, using Kubernetes.
Objectives
- Start a Redis master
- Start Redis slaves
- Start the guestbook frontend
- Expose and view the frontend Service
- Clean up
Prerequisites
You need a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with the cluster.
- Reference: https://www.linuxidc.com/Linux/2018-03/151479.htm
Check the versions of kubectl and the cluster:
kubectl version
[root@aniu-k8s ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T10:09:24Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T09:42:01Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
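Before applying any manifests, it is also worth confirming that kubectl can actually reach the cluster. A minimal check (the exact output depends on your cluster) looks like this:
# Show the API server and add-on endpoints kubectl is configured against
kubectl cluster-info
# List the nodes; every node should report STATUS Ready
kubectl get nodes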
Download the configuration files used in this tutorial:
- redis-master-deployment.yaml
- redis-master-service.yaml
- redis-slave-deployment.yaml
- redis-slave-service.yaml
- frontend-deployment.yaml
- frontend-service.yaml
I created the /opt/k8s/guestbook directory on the Kubernetes cluster's master node and downloaded the files above into it:
[root@aniu-k8s ~]# cd /opt/k8s/guestbook/
[root@aniu-k8s guestbook]# ll
total 24
-rw-r--r-- 1 root root 1086 Feb 6 16:28 frontend-deployment.yaml
-rw-r--r-- 1 root root 438 Feb 6 16:29 frontend-service.yaml
-rw-r--r-- 1 root root 561 Feb 6 17:01 redis-master-deployment.yaml
-rw-r--r-- 1 root root 233 Feb 6 17:14 redis-master-service.yaml
-rw-r--r-- 1 root root 1117 Feb 6 16:28 redis-slave-deployment.yaml
-rw-r--r-- 1 root root 209 Feb 6 16:28 redis-slave-service.yaml
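If you still need to fetch the files, one possible way is a small shell loop. The BASE_URL below is only a placeholder for wherever the guestbook manifests are hosted (for example, the Kubernetes documentation examples) and must be replaced with your actual source:
# Placeholder: point BASE_URL at the location hosting the guestbook manifests
BASE_URL=<url-of-guestbook-manifests>
for f in redis-master-deployment.yaml redis-master-service.yaml \
         redis-slave-deployment.yaml redis-slave-service.yaml \
         frontend-deployment.yaml frontend-service.yaml; do
  # Download each manifest into the current directory
  curl -sSLO "$BASE_URL/$f"
done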
Start the Redis Master
The guestbook application uses Redis to store its data. It writes its data to a Redis master instance and reads data from multiple Redis slave instances.
Create the Redis Master Deployment
The manifest file (shown below) specifies a Deployment controller that runs a single-replica Redis master Pod.
- Open a terminal window in the directory where you downloaded the manifest files
- Apply the Redis master Deployment from the redis-master-deployment.yaml file:
[root@aniu-k8s guestbook]# kubectl apply -f redis-master-deployment.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment "redis-master" configured
- redis-master-deployment.yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: redis-master
spec:
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: k8s.gcr.io/redis:e2e # or just image: redis
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
- Query the list of Pods to verify that the Redis master Pod is running:
[root@aniu-k8s guestbook]# kubectl get pods
NAME READY STATUS RESTARTS AGE
redis-master-585798d8ff-g69wc 1/1 Running 0 28m
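You can also ask the Deployment itself whether the rollout has completed; a quick sketch:
# Wait until the redis-master Deployment reports a successful rollout
kubectl rollout status deployment/redis-master
# Show desired, current, and available replica counts for the Deployment
kubectl get deployment redis-master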
- Run the following command to view the logs from the Redis master Pod:
[root@aniu-k8s guestbook]# kubectl logs -f redis-master-585798d8ff-g69wc
(Redis ASCII-art startup banner: Redis 2.8.19 (00000000/0) 64 bit, running in stand alone mode, Port: 6379, PID: 1)
[1] 06 Feb 09:14:33.096 # Server started, Redis version 2.8.19
[1] 06 Feb 09:14:33.097 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
[1] 06 Feb 09:14:33.097 * The server is now ready to accept connections on port 6379
Create the Redis Master Service
The guestbook application needs to communicate with the Redis master to write its data. You need to apply a Service to proxy the traffic to the Redis master Pod. A Service defines a policy for accessing the Pods.
- Apply the Redis master Service from the following redis-master-service.yaml file:
[root@aniu-k8s guestbook]# kubectl apply -f redis-master-service.yaml
service "redis-master" created
- redis-master-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
- Query the list of Services to verify that the Redis master Service is running:
[root@aniu-k8s guestbook]# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d
redis-master ClusterIP 10.109.193.22 <none> 6379/TCP 6s
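To confirm that the Service really routes traffic to the master, a minimal check is to run a throwaway Pod with redis-cli and ping the Service by name (redis-test is an arbitrary Pod name; this assumes cluster DNS is working):
# Start a temporary Pod, ping the redis-master Service with redis-cli, then remove the Pod
kubectl run -it --rm redis-test --image=redis --restart=Never -- redis-cli -h redis-master ping
# A healthy master answers with: PONG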
Start the Redis Slaves
Although the Redis master is a single Pod, you can make it highly available and meet traffic demands by adding replica Redis slaves.
Create the Redis Slave Deployment
Deployments scale based on the configuration set in the manifest file. In this case, the Deployment object specifies two replicas.
If no replicas are running, this Deployment starts the two replicas on your container cluster. Conversely, if more than two replicas are running, it scales down until two replicas are running.
- Apply the Redis slave Deployment from the redis-slave-deployment.yaml file:
[root@aniu-k8s guestbook]# kubectl apply -f redis-slave-deployment.yaml
deployment "redis-slave" created
- redis-slave-deployment.yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: redis-slave
spec:
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  replicas: 2
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google_samples/gb-redisslave:v1
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # Using `GET_HOSTS_FROM=dns` requires your cluster to
          # provide a dns service. As of Kubernetes 1.3, DNS is a built-in
          # service launched automatically. However, if the cluster you are using
          # does not have a built-in DNS service, you can instead
          # access an environment variable to find the master
          # service's host. To do so, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
- Query the list of Pods to verify that the Redis slave Pods are running:
[root@aniu-k8s guestbook]# kubectl get pods
NAME READY STATUS RESTARTS AGE
redis-master-585798d8ff-g69wc 1/1 Running 0 3m
redis-slave-865486c9df-tjvfn 0/1 ContainerCreating 0 27s
redis-slave-865486c9df-x76gb 1/1 Running
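The slaves locate the master through cluster DNS because GET_HOSTS_FROM is set to dns. If a slave Pod stays unhealthy, one way to check name resolution from inside the cluster is a throwaway busybox Pod (dns-test is an arbitrary name; this assumes busybox's nslookup works against your cluster DNS):
# Resolve the redis-master Service name from inside the cluster, then remove the Pod
kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup redis-master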
Create the Redis Slave Service
The guestbook application needs to communicate with the Redis slaves to read data. To make the Redis slaves discoverable, you need to set up a Service. A Service provides transparent load balancing for a set of Pods.
- Apply the Redis slave Service from the following redis-slave-service.yaml file:
[root@aniu-k8s guestbook]# kubectl apply -f redis-slave-service.yaml
service "redis-slave" created
- redis-slave-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
- Query the list of Services to verify that the Redis slave Service is running:
[root@aniu-k8s guestbook]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d
redis-master ClusterIP 10.109.193.22 <none> 6379/TCP 2m
redis-slave ClusterIP 10.101.252.227 <none> 6379/TCP 21s
Set Up and Expose the Guestbook Frontend
The guestbook application has a web frontend, written in PHP, that serves HTTP requests. It is configured to connect to the redis-master Service for write requests and to the redis-slave Service for read requests.
Create the Guestbook Frontend Deployment
- Apply the frontend Deployment from the following frontend-deployment.yaml file:
[root@aniu-k8s guestbook]# kubectl apply -f frontend-deployment.yaml
deployment "frontend" created
- frontend-deployment.yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  replicas: 3
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v4
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # Using `GET_HOSTS_FROM=dns` requires your cluster to
          # provide a dns service. As of Kubernetes 1.3, DNS is a built-in
          # service launched automatically. However, if the cluster you are using
          # does not have a built-in DNS service, you can instead
          # access an environment variable to find the master
          # service's host. To do so, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 80
- Query the list of Pods to verify that the three frontend replicas are running:
[root@aniu-k8s guestbook]# kubectl get pods -l app=guestbook -l tier=frontend
NAME READY STATUS RESTARTS AGE
frontend-67f65745c-lr25s 1/1 Running 0 1m
frontend-67f65745c-n798g 1/1 Running 0 1m
frontend-67f65745c-n92r4 1/1 Running 0 1m
Create the Frontend Service
The redis-slave and redis-master Services you applied are only accessible within the container cluster, because the default type of a Service is ClusterIP. ClusterIP provides a single IP address for the set of Pods the Service points to. This IP address is accessible only from within the cluster.
If you want guests to be able to access your guestbook, you must configure the frontend Service to be externally visible, so that a client can request the Service from outside the container cluster.
Minikube can only expose Services through NodePort.
- Apply the frontend Service from the following frontend-service.yaml file:
[root@aniu-k8s guestbook]# kubectl apply -f frontend-service.yaml
service "frontend" created
- frontend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # comment or delete the following line if you want to use a LoadBalancer
  type: NodePort
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
- Query the list of Services to verify that the frontend Service is running:
[root@aniu-k8s guestbook]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend NodePort 10.103.1.159 <none> 80:31546/TCP 7s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6d
redis-master ClusterIP 10.109.193.22 <none> 6379/TCP 5m
redis-slave ClusterIP 10.101.252.227 <none> 6379/TCP 4m
Viewing the Frontend Service via NodePort
If you deployed this application to Minikube or a local cluster, you need to find the IP address to view your guestbook.
- Run the following command to get the IP address for the frontend Service:
minikube service frontend --url
Copy the IP address, and load the page in your browser to view your guestbook.
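A NodePort Service is also reachable on any cluster node's IP at the allocated port; in the Service listing above, 80:31546/TCP means node port 31546. As a quick check from outside the cluster (replace <node-ip> with the address of one of your nodes; the node port will differ on your cluster):
# Fetch the guestbook page through the NodePort
curl http://<node-ip>:31546/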
Viewing the Frontend Service via LoadBalancer
- Run the following command to get the IP address for the frontend Service:
[root@aniu-k8s guestbook]# kubectl get service frontend
Copy the external IP address, and load the page in your browser to view your guestbook.
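On clusters that provision load balancers asynchronously, EXTERNAL-IP may show <pending> for a while; you can watch the Service until an address is assigned:
# Watch the frontend Service until the external IP is provisioned (Ctrl-C to stop)
kubectl get service frontend -w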
Scale the Web Frontend
Scaling up or down is easy because your servers are defined as a Service that uses a Deployment controller.
- Run the following command to scale up the number of frontend Pods:
[root@aniu-k8s guestbook]# kubectl scale deployment frontend --replicas=5
deployment "frontend" scaled
- Query the list of Pods to verify the number of frontend Pods running:
[root@aniu-k8s guestbook]# kubectl get pods
NAME READY STATUS RESTARTS AGE
frontend-67f65745c-lr25s 1/1 Running 0 5m
frontend-67f65745c-n798g 1/1 Running 0 5m
frontend-67f65745c-n92r4 1/1 Running 0 5m
frontend-67f65745c-nsnfs 1/1 Running 0 7s
frontend-67f65745c-spjnm 1/1 Running 0 7s
redis-master-585798d8ff-g69wc 1/1 Running 0 10m
redis-slave-865486c9df-tjvfn 1/1 Running 0 8m
redis-slave-865486c9df-x76gb 1/1 Running 0 8m
- Run the following command to scale down the number of frontend Pods:
[root@aniu-k8s guestbook]# kubectl scale deployment frontend --replicas=2
deployment "frontend" scaled
[root@aniu-k8s guestbook]# kubectl get pods
NAME READY STATUS RESTARTS AGE
frontend-67f65745c-lr25s 1/1 Running 0 50m
frontend-67f65745c-n798g 1/1 Running 0 50m
frontend-67f65745c-n92r4 0/1 Terminating 0 50m
frontend-67f65745c-spjnm 0/1 Terminating 0 45m
redis-master-585798d8ff-g69wc 1/1 Running 0 56m
redis-slave-865486c9df-tjvfn 1/1 Running 0 53m
redis-slave-865486c9df-x76gb 1/1 Running 0 53m
Clean Up Pods and Services
Deleting the Deployments and Services also deletes any running Pods. Use labels to delete multiple resources with one command.
- Run the following commands to delete all Pods, Deployments, and Services:
kubectl delete deployment -l app=redis
kubectl delete service -l app=redis
kubectl delete deployment -l app=guestbook
kubectl delete service -l app=guestbook
- The output is similar to:
deployment "redis-master" deleted
deployment "redis-slave" deleted
service "redis-master" deleted
service "redis-slave" deleted
deployment "frontend" deleted
service "frontend" deleted
- Query the list of Pods to verify that no Pods are running:
kubectl get pods
No resources found.
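You can also confirm that only the built-in kubernetes Service is left in the default namespace:
# After cleanup, only the kubernetes ClusterIP Service should remain
kubectl get services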