k8s Installation and Deployment (updated 2020-09-21)


Kubernetes cluster components:
  - etcd: a highly available key/value store and service-discovery system
  - flannel: provides cross-host container network communication
  - kube-apiserver: exposes the Kubernetes cluster API
  - kube-controller-manager: runs the control loops that keep the cluster's services in their desired state
  - kube-scheduler: schedules containers and assigns them to Nodes
  - kubelet: starts containers on a Node according to the container specs it is given
  - kube-proxy: provides network proxying for Services


master:
    etcd: a distributed data store that persists the k8s cluster's configuration and state
    kube-apiserver: exposes the cluster's HTTP REST API and is the single entry point to the cluster. The other k8s components never talk to each other directly; they all communicate through the API server. (Only the API server is connected to etcd, so when any component needs to read or update cluster state, it must go through the API server.)
    kube-scheduler: responsible for resource scheduling
    kube-controller-manager: the cluster's management control center. It is composed of multiple controllers, e.g. the Replication Manager (which manages ReplicationController resources), the ReplicaSet controller, and the PersistentVolume controller. Its main jobs are replicating components, tracking worker-node status, and handling node failures.
node:
    flannel: provides the overlay network for cross-host container communication
    kube-proxy: load-balances network traffic to Services
    kubelet: manages the containers on the node machine
    docker: the container runtime that runs the project's image containers


1. Prepare the machine nodes. This walkthrough uses two: 192.168.1.102 (master) and 192.168.1.103 (node).


Run the following on every machine to prepare the environment (note: all machines, both the master and the nodes):

1.1 Install the epel-release repository (optional)

yum -y install epel-release

This package automatically configures the extra yum repository.


1.2 Disable the firewall on all machines (the components need to talk to each other freely; Flannel 0.7.0 in particular requires the firewall to be off and the network to be up.)


systemctl stop firewalld
systemctl disable firewalld
setenforce 0
# check the firewall state
firewall-cmd --state
not running
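One caveat: `setenforce 0` only switches SELinux off until the next reboot. To make the change survive a reboot you would also edit /etc/selinux/config. A minimal sketch, run here against a temporary copy of the file so the edit can be inspected; on a real host the target would be /etc/selinux/config itself:

```shell
# Sketch: make the SELinux change persistent. Shown against a temp copy;
# on the real host the target file would be /etc/selinux/config.
conf=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$conf"   # sample contents
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' "$conf"        # flip the mode
result=$(grep '^SELINUX=' "$conf")
echo "$result"
rm -f "$conf"
```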

Now install the Kubernetes master on the master host (192.168.1.102). Use yum to install etcd and kubernetes-master:

yum -y install etcd kubernetes-master
=============================================================================================================================================
Installing 2 packages (+1 dependent package)
Total download size: 49 M
Installed size: 269 M
Downloading packages:
(1/3): etcd-3.3.11-2.el7.centos.x86_64.rpm                                                                            |  10 MB  00:00:01
(2/3): kubernetes-client-1.5.2-0.7.git269f928.el7.x86_64.rpm                                                          |  14 MB  00:00:01
(3/3): kubernetes-master-1.5.2-0.7.git269f928.el7.x86_64.rpm                                                          |  25 MB  00:00:02
---------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                         17 MB/s |  49 MB  00:00:02
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : kubernetes-client-1.5.2-0.7.git269f928.el7.x86_64                                                                        1/3
  Installing : kubernetes-master-1.5.2-0.7.git269f928.el7.x86_64                                                                        2/3
  Installing : etcd-3.3.11-2.el7.centos.x86_64                                                                                          3/3
  Verifying  : kubernetes-client-1.5.2-0.7.git269f928.el7.x86_64                                                                        1/3
  Verifying  : kubernetes-master-1.5.2-0.7.git269f928.el7.x86_64                                                                        2/3
  Verifying  : etcd-3.3.11-2.el7.centos.x86_64                                                                                          3/3
Installed:
  etcd.x86_64 0:3.3.11-2.el7.centos                            kubernetes-master.x86_64 0:1.5.2-0.7.git269f928.el7
Installed as a dependency:
  kubernetes-client.x86_64 0:1.5.2-0.7.git269f928.el7
Complete!

Edit /etc/etcd/etcd.conf (e.g. vi /etc/etcd/etcd.conf) so that it reads:

#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_LISTEN_PEER_URLS="http://localhost:2380"
# listen address
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
# name
ETCD_NAME="default"
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
# advertised client URL
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"
#
#[Proxy]
#ETCD_PROXY="off"
#ETCD_PROXY_FAILURE_WAIT="5000"
#ETCD_PROXY_REFRESH_INTERVAL="30000"
#ETCD_PROXY_DIAL_TIMEOUT="1000"
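The two lines that matter most in this file are ETCD_LISTEN_CLIENT_URLS and ETCD_ADVERTISE_CLIENT_URLS; binding the listen URL to 0.0.0.0 is what lets the node reach etcd later. A small sanity-check sketch, parsed here from an inline sample of the file rather than the live /etc/etcd/etcd.conf:

```shell
# Sketch: confirm etcd will accept client connections from other hosts.
# Parsed from an inline sample; on the real master you would read
# /etc/etcd/etcd.conf instead.
sample='ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"'
url=$(printf '%s\n' "$sample" | sed -n 's/^ETCD_LISTEN_CLIENT_URLS="\(.*\)"$/\1/p')
case "$url" in
  http://0.0.0.0:*) verdict="reachable from other hosts" ;;
  *)                verdict="only reachable locally" ;;
esac
echo "$url -> $verdict"
```

After etcd is started, `curl http://127.0.0.1:2379/version` on the master should return etcd's version JSON.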

Edit /etc/kubernetes/apiserver so that it reads:

###
# This file sets: the bind address and port, the etcd endpoints, the cluster IP pool for Services, and a list of admission-control policies
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies (ServiceAccount has been removed here)
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
# Add your own!
KUBE_API_ARGS=""
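A note on --service-cluster-ip-range: Service virtual IPs are allocated from this 10.254.0.0/16 pool, so they never collide with Pod addresses or the host network. Since the range sits on an octet boundary, membership is a simple prefix match; a throwaway sketch:

```shell
# Sketch: check whether an address belongs to the Service IP pool
# (10.254.0.0/16). A /16 on an octet boundary is just a prefix match.
in_service_range() {
  case "$1" in
    10.254.*) echo "in range" ;;
    *)        echo "out of range" ;;
  esac
}
a=$(in_service_range 10.254.0.10)   # a Service cluster IP
b=$(in_service_range 172.17.29.5)   # a Pod address
echo "$a / $b"
```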

Start etcd, kube-apiserver, kube-controller-manager and kube-scheduler, and enable them at boot:


systemctl start etcd
systemctl start kube-apiserver.service
systemctl start kube-controller-manager.service
systemctl start kube-scheduler.service
systemctl enable etcd
systemctl enable kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl enable kube-scheduler.service

The commands above can also be replaced with a single loop:

for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done

Define the flannel network in etcd:

[root@localhost ~]# etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'
{"Network":"172.17.0.0/16"}
[root@localhost ~]#
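The value stored at /atomic.io/network/config is plain JSON, and a malformed value will quietly break subnet allocation when flanneld starts on the nodes. A quick shape check before writing it (a sketch using a grep pattern, not a full JSON parser):

```shell
# Sketch: sanity-check the flannel network config before handing it to
# etcdctl. The grep pattern only checks the expected {"Network":"CIDR"} shape.
NET_CONF='{"Network":"172.17.0.0/16"}'
if printf '%s' "$NET_CONF" | grep -Eq '^\{"Network":"([0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]{1,2}"\}$'; then
  status=valid
else
  status=invalid
fi
echo "$status"
# then, as above: etcdctl mk /atomic.io/network/config "$NET_CONF"
```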


Next, switch to the node machine.

On the node (192.168.1.103), install the flannel and kubernetes-node components:

yum -y install flannel kubernetes-node
Error: docker-ce conflicts with 2:docker-1.13.1-162.git64e9980.el7.centos.x86_64
Error: docker-ce-cli conflicts with 2:docker-1.13.1-162.git64e9980.el7.centos.x86_64
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

The install fails because docker-ce is already present on this node and conflicts with the distro docker package that kubernetes-node pulls in. Remove docker-ce and docker-ce-cli, then retry:

[root@localhost ~]# yum list installed | grep docker
containerd.io.x86_64                 1.2.13-3.2.el7                 @docker-ce-stable
docker-ce.x86_64                     3:19.03.12-3.el7               @docker-ce-stable
docker-ce-cli.x86_64                 1:19.03.12-3.el7               @docker-ce-stable
[root@localhost ~]# yum remove -y docker-ce.x86_64
[root@localhost ~]# yum remove -y docker-ce-cli.x86_64
yum -y install flannel kubernetes-node
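The conflict is predictable, so it can also be detected before running the install. A pre-flight sketch: `has_docker_ce` (a made-up helper, not a real tool) reads `yum list installed`-style output on stdin, demonstrated here against an inline sample copied from the node above:

```shell
# Sketch: detect docker-ce packages that conflict with the docker build
# that kubernetes-node depends on. has_docker_ce is a made-up helper.
has_docker_ce() {
  if grep -qE '^docker-ce(-cli)?\.'; then
    echo "conflict: yum remove docker-ce docker-ce-cli first"
  else
    echo "ok to install kubernetes-node"
  fi
}
# sample of `yum list installed | grep docker` from the node:
sample='containerd.io.x86_64                 1.2.13-3.2.el7                 @docker-ce-stable
docker-ce.x86_64                     3:19.03.12-3.el7               @docker-ce-stable'
verdict=$(printf '%s\n' "$sample" | has_docker_ce)
echo "$verdict"
```

On the real node you would pipe the live output in: `yum list installed | has_docker_ce`.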

Point flannel at the etcd service: edit /etc/sysconfig/flanneld so that it reads:

# Flanneld configuration options
# etcd url location.  Point this to the server where etcd runs
# (used by flannel; change this to the master's IP, port 2379)
FLANNEL_ETCD_ENDPOINTS="http://192.168.1.102:2379"
# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"
# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
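Before starting flanneld it is worth confirming the node can actually reach the etcd endpoint configured above; a wrong IP here is a common reason flanneld hangs at startup. A sketch that extracts the endpoint (parsed from an inline sample rather than the live /etc/sysconfig/flanneld):

```shell
# Sketch: pull the etcd endpoint out of the flanneld config. Shown against
# an inline sample; on the node you would read /etc/sysconfig/flanneld.
sample='FLANNEL_ETCD_ENDPOINTS="http://192.168.1.102:2379"'
endpoint=$(printf '%s\n' "$sample" | sed -n 's/^FLANNEL_ETCD_ENDPOINTS="\(.*\)"$/\1/p')
echo "$endpoint"
# connectivity check on the real node:
#   curl -s "$endpoint/version"   # should return etcd's version JSON
```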

Edit /etc/kubernetes/config:

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
# (set to the master's IP)
KUBE_MASTER="--master=http://192.168.1.102:8080"

Edit the kubelet configuration on the node, /etc/kubernetes/kubelet:

###
# kubernetes kubelet (minion) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname (set to this node's IP here)
KUBELET_HOSTNAME="--hostname-override=192.168.1.103"
# location of the api-server (the master's IP)
KUBELET_API_SERVER="--api-servers=http://192.168.1.102:8080"
# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
# Add your own!
KUBELET_ARGS=""
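The kubelet systemd unit sources this file and splices the variables into a single command line, which is why each value already carries its flag name. Roughly what the unit ends up executing (a sketch; it mirrors the kubelet process visible in the service status output later):

```shell
# Sketch: how the EnvironmentFile variables combine into the kubelet
# command line started by the systemd unit.
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=192.168.1.103"
KUBELET_API_SERVER="--api-servers=http://192.168.1.102:8080"
cmdline="/usr/bin/kubelet $KUBELET_ADDRESS $KUBELET_PORT $KUBELET_HOSTNAME $KUBELET_API_SERVER"
echo "$cmdline"
```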

On the node, start kube-proxy, kubelet, docker and flanneld, and enable them at boot:

for SERVICES in kube-proxy kubelet docker flanneld; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
[root@localhost ~]# for SERVICES in kube-proxy kubelet docker flanneld;do systemctl restart $SERVICES;systemctl enable $SERVICES;systemctl status $SERVICES; done
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
● kube-proxy.service - Kubernetes Kube-Proxy Server
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2020-09-13 08:07:45 CST; 253ms ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 3866 (kube-proxy)
   CGroup: /system.slice/kube-proxy.service
           ├─3866 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://192.168.1.102:8080
           └─3890 iptables -w -C OUTPUT -t nat -m comment --comment handle ClusterIPs; NOTE: this must be before the NodePort rules -j KUB...
Sep 13 08:07:45 localhost.localdomain systemd[1]: Started Kubernetes Kube-Proxy Server.
Sep 13 08:07:45 localhost.localdomain kube-proxy[3866]: E0913 08:07:45.362710    3866 server.go:421] Can't get Node "localhost.loca... found
Sep 13 08:07:45 localhost.localdomain kube-proxy[3866]: I0913 08:07:45.365496    3866 server.go:215] Using iptables Proxier.
Sep 13 08:07:45 localhost.localdomain kube-proxy[3866]: W0913 08:07:45.368580    3866 server.go:468] Failed to retrieve node info: ... found
Sep 13 08:07:45 localhost.localdomain kube-proxy[3866]: W0913 08:07:45.368682    3866 proxier.go:248] invalid nodeIP, initialize ku...nodeIP
Sep 13 08:07:45 localhost.localdomain kube-proxy[3866]: W0913 08:07:45.368689    3866 proxier.go:253] clusterCIDR not specified, un...raffic
Sep 13 08:07:45 localhost.localdomain kube-proxy[3866]: I0913 08:07:45.368699    3866 server.go:227] Tearing down userspace rules.
Hint: Some lines were ellipsized, use -l to show in full.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
● kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2020-09-13 08:07:46 CST; 150ms ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 4150 (kubelet)
   CGroup: /system.slice/kubelet.service
           └─4150 /usr/bin/kubelet --logtostderr=true --v=0 --api-servers=http://192.168.1.102:8080 --address=0.0.0.0 --port=10250 --hostn...
Sep 13 08:07:46 localhost.localdomain systemd[1]: Started Kubernetes Kubelet Server.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/docker.service.d
           └─flannel.conf
   Active: active (running) since Sun 2020-09-13 08:07:49 CST; 139ms ago
     Docs: http://docs.docker.com
 Main PID: 4238 (dockerd-current)
   CGroup: /system.slice/docker.service
           ├─4238 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc...
           └─4243 /usr/bin/docker-containerd-current -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 -...
Sep 13 08:07:49 localhost.localdomain dockerd-current[4238]: time="2020-09-13T08:07:49.295365720+08:00" level=warning msg="overlay2: the ...
Sep 13 08:07:49 localhost.localdomain dockerd-current[4238]: time="2020-09-13T08:07:49.340199748+08:00" level=info msg="Graph migrat...onds"
Sep 13 08:07:49 localhost.localdomain dockerd-current[4238]: time="2020-09-13T08:07:49.340686273+08:00" level=info msg="Loading cont...art."
Sep 13 08:07:49 localhost.localdomain dockerd-current[4238]: time="2020-09-13T08:07:49.347235461+08:00" level=info msg="Firewalld ru...alse"
Sep 13 08:07:49 localhost.localdomain dockerd-current[4238]: time="2020-09-13T08:07:49.417613975+08:00" level=info msg="Default brid...ress"
Sep 13 08:07:49 localhost.localdomain dockerd-current[4238]: time="2020-09-13T08:07:49.435988926+08:00" level=info msg="Loading cont...one."
Sep 13 08:07:49 localhost.localdomain dockerd-current[4238]: time="2020-09-13T08:07:49.443345913+08:00" level=info msg="Daemon has c...tion"
Sep 13 08:07:49 localhost.localdomain dockerd-current[4238]: time="2020-09-13T08:07:49.443359161+08:00" level=info msg="Docker daemo....13.1
Sep 13 08:07:49 localhost.localdomain systemd[1]: Started Docker Application Container Engine.
Sep 13 08:07:49 localhost.localdomain dockerd-current[4238]: time="2020-09-13T08:07:49.453920665+08:00" level=info msg="API listen o...sock"
Hint: Some lines were ellipsized, use -l to show in full.
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
● flanneld.service - Flanneld overlay address etcd agent
   Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2020-09-13 08:07:49 CST; 175ms ago
 Main PID: 4400 (flanneld)
   CGroup: /system.slice/flanneld.service
           └─4400 /usr/bin/flanneld -etcd-endpoints=http://192.168.1.102:2379 -etcd-prefix=/atomic.io/network
Sep 13 08:07:49 localhost.localdomain systemd[1]: Starting Flanneld overlay address etcd agent...
Sep 13 08:07:49 localhost.localdomain flanneld-start[4400]: I0913 08:07:49.776700    4400 main.go:132] Installing signal handlers
Sep 13 08:07:49 localhost.localdomain flanneld-start[4400]: I0913 08:07:49.777052    4400 manager.go:136] Determining IP address of ...rface
Sep 13 08:07:49 localhost.localdomain flanneld-start[4400]: I0913 08:07:49.783183    4400 manager.go:149] Using interface with name ...1.103
Sep 13 08:07:49 localhost.localdomain flanneld-start[4400]: I0913 08:07:49.783212    4400 manager.go:166] Defaulting external addres....103)
Sep 13 08:07:49 localhost.localdomain flanneld-start[4400]: I0913 08:07:49.799237    4400 local_manager.go:179] Picking subnet in ra...255.0
Sep 13 08:07:49 localhost.localdomain flanneld-start[4400]: I0913 08:07:49.854985    4400 manager.go:250] Lease acquired: 172.17.29.0/24
Sep 13 08:07:49 localhost.localdomain flanneld-start[4400]: I0913 08:07:49.860685    4400 network.go:98] Watching for new subnet leases
Sep 13 08:07:49 localhost.localdomain systemd[1]: Started Flanneld overlay address etcd agent.
Hint: Some lines were ellipsized, use -l to show in full.
[root@localhost ~]#

Then, back on the master, verify that the node has registered:

[root@localhost ~]# kubectl get nodes
NAME            STATUS    AGE
192.168.1.103   Ready     1m
[root@localhost ~]#
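The node showing up as Ready is the main success signal. For scripting, a small sketch that counts Ready nodes in `kubectl get nodes` output (`ready_count` is a made-up helper, demonstrated on the output captured above):

```shell
# Sketch: count Ready nodes in `kubectl get nodes` output.
# ready_count is a made-up helper for illustration.
ready_count() { awk 'NR > 1 && $2 == "Ready"' | wc -l; }
sample='NAME            STATUS    AGE
192.168.1.103   Ready     1m'
n=$(printf '%s\n' "$sample" | ready_count)
echo "ready nodes: $n"
```

On the master you would pipe the live output in: `kubectl get nodes | ready_count`.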