Binary deployment of a production-grade Kubernetes 1.20.x cluster, with etcd and master split and 100-year cluster certificates (rock solid)

Deng YongJie's blog 1,347 2022-05-16

Kubernetes 1.20 quick installation
Important: read this first!
Plan the cluster network carefully before you begin:

Host plan:

| IP | Hostname | Role / components |
| --- | --- | --- |
| 10.7.0.61 | etcd-01 | etcd |
| 10.7.0.62 | etcd-02 | etcd |
| 10.7.0.63 | etcd-03 | etcd |
| 10.7.0.65 | master-01 | VIP: 10.7.0.64 primary (keepalived); api, controller, scheduler |
| 10.7.0.66 | master-02 | VIP: 10.7.0.64 backup (keepalived); api, controller, scheduler |
| 10.7.0.67 | master-03 | api, controller, scheduler |
| 10.7.0.68 | reserved master | |
| 10.7.0.69 | reserved master | |
| 10.7.0.70 | reserved master | |
| 10.7.0.71 | worker | kubelet, flannel, kube-proxy |
| 10.7.0.72 | | |

cluster-cidr: 172.17.0.0/16 — the pod network; it must match the flannel network stored in etcd!
kubernetes cluster ip: 10.43.0.1 — the cluster IP of the kubernetes Service
service-cluster-ip-range: 10.43.0.0/16 — the Service network
clusterDNS: 10.43.0.10 — the cluster IP of the DNS Service
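The three ranges above must not overlap, and clusterDNS must fall inside service-cluster-ip-range. A minimal sanity check of the plan (a sketch using the values above; for a /16 range, comparing the first two octets is enough):

```shell
# Sketch: verify clusterDNS lies inside the /16 service range
SERVICE_CIDR="10.43.0.0/16"
CLUSTER_DNS="10.43.0.10"
range_prefix=$(echo "$SERVICE_CIDR" | cut -d. -f1-2)   # first two octets of the range
dns_prefix=$(echo "$CLUSTER_DNS" | cut -d. -f1-2)      # first two octets of the DNS IP
if [ "$range_prefix" = "$dns_prefix" ]; then
  echo "clusterDNS is inside the service range: ok"
else
  echo "clusterDNS is OUTSIDE the service range" >&2
fi
```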

The in-cluster certificates below are valid for 100 years; the CA certificate is valid for 5 years.

If you later add a new master node, there is no need to re-issue certificates: the CSR files below reserve several extra IPs from the start, so the issued certificates already cover them.
You only need to regenerate each component's kubeconfig file — that is, the cluster parameters for talking to the api-server, plus the client credentials, context parameters, context binding, and role binding.
Then distribute the corresponding *.pem certificates to the right nodes and directories, create the component config files, and start the services.
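The "100 years" is expressed to cfssl in hours (the "expiry" fields in the ca-config.json files below). The arithmetic, for reference:

```shell
# 100 years converted to the hour value cfssl expects in "expiry"
years=100
hours=$((years * 365 * 24))
echo "${hours}h"   # matches the 876000h used in ca-config.json below
```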

The etcd, api-server, controller-manager, and scheduler configurations below all include 3 reserved master IPs in advance, as shown in the steps that follow.
Important: read this first!

Master nodes do not need the flannel plugin.
Service start order for a binary K8s cluster
Start the Master nodes in this order:
service keepalived start
service etcd start
service kube-scheduler start
service kube-controller-manager start
service kube-apiserver restart
kubectl get cs

Start the worker nodes:
service flanneld start
service docker start
service kubelet start
service kube-proxy start

Stop order:
Stop the worker nodes:
service kubelet stop
service kube-proxy stop
service docker stop
service flanneld stop

Stop the Master nodes:
service kube-controller-manager stop
service kube-scheduler stop
service etcd stop
service keepalived stop

4 System initialization
4.1 Install base tools
# All nodes
[root@master-1 ~]# apt install net-tools vim wget lrzsz git conntrack-tools -y

4.2 Disable the firewall and SELinux
# All nodes
[root@master-1 ~]# systemctl stop firewalld
[root@master-1 ~]# systemctl disable firewalld
[root@master-1 ~]# sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
[root@master-1 ~]# reboot

4.3 Set the time zone
# All nodes
[root@master-1 ~]# timedatectl set-timezone Asia/Shanghai

# Set the hostname
[root@master-1 ~]# hostnamectl set-hostname master-1

4.4 Disable swap
# All nodes
[root@master-1 ~]# swapoff -a
[root@master-1 ~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

4.5 Configure system time synchronization
# All nodes
[root@master-1 ~]# apt install -y chrony
[root@master-1 ~]# systemctl start chronyd
[root@master-1 ~]# systemctl enable chronyd
[root@master-1 ~]# chronyc sources

4.6 Configure /etc/hosts
# All nodes
[root@master-1 ~]# cat >> /etc/hosts <<EOF
10.17.1.47 crm-etcd-k8s-b1
10.17.1.48 crm-etcd-k8s-b2
10.17.1.49 crm-etcd-k8s-b3
10.17.1.51 crm-ma01-k8s-b1
10.17.1.52 crm-ma01-k8s-b2
10.17.1.53 crm-ma01-k8s-b3
10.17.1.57 crm-wrk01-k8s-b1
10.17.1.58 crm-wrk01-k8s-b2
10.17.1.59 crm-wrk01-k8s-b3

# Reserved masters
10.17.1.54 crm-ma01-k8s-b4
10.17.1.55 crm-ma01-k8s-b5
10.17.1.56 crm-ma01-k8s-b6
EOF

4.7 Set up passwordless SSH login
# Distribute the key from any one master node to all other nodes (including the other masters and workers)
# This example distributes from master-01
[root@master-1 ~]# apt install -y expect
[root@master-1 ~]# ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa
# Replace with your own root password
[root@master-1 ~]# export mypass=root_passwd
[root@master-1 ~]# name=(xds-ma01-etcd-p1 xds-ma01-etcd-p2 xds-ma01-etcd-p3 xds-ma01-k8s-p1 xds-ma01-k8s-p2 xds-ma01-k8s-p3 xds-wrk01-k8s-p1 xds-wrk01-k8s-p2 xds-wrk01-k8s-p3)
[root@master-1 ~]# for i in "${name[@]}";do expect -c "
spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@$i
expect {
  \"*yes/no*\" {send \"yes\r\"; exp_continue}
  \"*password*\" {send \"$mypass\r\"; exp_continue}
  \"*Password*\" {send \"$mypass\r\";}
}"
done

# Test the connection
[root@master-1 ~]# ssh master-02

4.8 Tune kernel parameters
# All nodes
[root@master-1 ~]# cat >>/etc/sysctl.conf<<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
vm.swappiness=0
fs.file-max=52706963
fs.nr_open=52706963
EOF

# Apply the kernel settings
[root@master-2 ~]# sysctl -p
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
net.ipv4.ip_forward = 1
vm.swappiness = 0
fs.file-max = 52706963
fs.nr_open = 52706963

# Run on the worker nodes — the bridge-nf errors above occur because br_netfilter is not yet loaded; they disappear after modprobe
[root@demo ~]# cat <<EOF | tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
[root@demo ~]# modprobe overlay
[root@demo ~]# modprobe br_netfilter
[root@master-2 ~]# sysctl -p

4.9 Install Keepalived on the HA nodes
#10.7.0.13
[root@master-1 ~]# apt install -y keepalived

# Note: adjust the network interface name and, on the SLAVE node, the priority
[root@master-1 ~]# cat >/etc/keepalived/keepalived.conf <<EOL
global_defs {
router_id KUB_LVS
}
vrrp_script check_api {
script "/etc/keepalived/check_api.sh"
interval 5
}
vrrp_instance VI_1 {
state MASTER
interface ens192
virtual_router_id 61
priority 100
advert_int 1
nopreempt
authentication {
auth_type PASS
auth_pass 111111
}
unicast_src_ip 10.17.1.51
unicast_peer {
10.17.1.52
}
virtual_ipaddress {
10.17.1.50/22 dev ens192
}
track_script {
check_api
}
}
EOL

chmod 644 /etc/keepalived/keepalived.conf

# Health-check script on the master
cat > /etc/keepalived/check_api.sh <<'EOF'
#!/bin/bash

# If the apiserver process is gone, restart it; if the restart fails, stop keepalived
API_STATUS=$(ps -ef|grep [k]ube-apiserver|wc -l)
if [ ${API_STATUS} -eq 0 ]
then
systemctl restart kube-apiserver
if [ $? -ne 0 ]
then
systemctl stop keepalived
fi
fi
EOF
chmod +x /etc/keepalived/check_api.sh
systemctl enable keepalived && systemctl restart keepalived
service keepalived status
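A note on the `grep [k]ube-apiserver` pattern used in check_api.sh above: the brackets keep grep from counting its own process-table entry, so a dead apiserver really yields 0. A small self-contained demonstration (the ps output here is simulated, not taken from a live node):

```shell
# Simulated ps output: one real apiserver line plus the grep command itself
PS_OUT='root 100 /usr/local/bin/kube-apiserver --v=4
root 200 grep [k]ube-apiserver'
# The regex [k]ube-apiserver matches the literal text "kube-apiserver",
# which the grep command line itself does not contain ("[k]ube-..." is not "kube-...")
COUNT=$(echo "$PS_OUT" | grep "[k]ube-apiserver" | wc -l)
echo "$COUNT"
```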

#SLAVE
# Change state to SLAVE and priority to 90
#10.7.0.14 — run on the backup node
[root@master-02 ~]# apt install -y keepalived

# Note: adjust the network interface name and the SLAVE node's priority
[root@master-02 ~]# cat >/etc/keepalived/keepalived.conf <<EOL
global_defs {
router_id KUB_LVS_SLAVE
}
vrrp_script check_vip {
script "/etc/keepalived/check_vip.sh"
interval 5
}
vrrp_instance VI_1 {
state SLAVE
interface ens192
virtual_router_id 61
priority 90
advert_int 1
nopreempt
authentication {
auth_type PASS
auth_pass 111111
}
unicast_src_ip 10.17.1.52
unicast_peer {
10.17.1.51
}
virtual_ipaddress {
10.17.1.50/22 dev ens192
}
track_script {
check_vip
}
}
EOL

chmod 644 /etc/keepalived/keepalived.conf

# Health-check script on the slave
cat > /etc/keepalived/check_vip.sh <<'EOF'
#!/bin/bash
# .50 is the VIP, .51 is the master
# If the master holds the VIP and this SLAVE also holds it, stop the local keepalived
MASTER_VIP=$(ssh 10.17.1.51 ip a|grep 10.17.1.50|wc -l)
MY_VIP=$(ip a|grep 10.17.1.50|wc -l)

if [ ${MASTER_VIP} -eq 1 -a ${MY_VIP} -eq 1 ]
then
systemctl stop keepalived
fi
EOF

chmod +x /etc/keepalived/check_vip.sh

4.10 Start keepalived
[root@master-1 ~]# systemctl enable keepalived && systemctl restart keepalived
[root@master-1 ~]# service keepalived status

4.11 Configure certificates
4.12 Download the self-signed certificate tooling
# Run on the distribution host, Master-01
[root@master-1 ~]# mkdir /soft && cd /soft
[root@master-1 ~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@master-1 ~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@master-1 ~]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
[root@master-1 ~]# chmod +x cf*
[root@master-1 ~]# mv cfssl_linux-amd64 /usr/local/bin/cfssl
[root@master-1 ~]# mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@master-1 ~]# mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

5.2 Generate the ETCD certificates
# Create the working directory (Master-1)
[root@master-1 ~]# mkdir /root/etcd && cd /root/etcd

5.2.1 CA certificate configuration (Master-1)
[root@master-1 ~]# cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "www": {
        "expiry": "876000h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

5.2.2 Create the CA certificate signing request (Master-1)
[root@master-1 ~]# cat << EOF | tee ca-csr.json
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF

5.2.3 Create the ETCD certificate signing request
# All master IPs can be added to the csr file (Master-1)
# Reserve a few extra master IPs in advance to make scaling out masters easier later
[root@master-1 ~]# cat << EOF | tee server-csr.json
{
  "CN": "etcd",
  "hosts": [
    "crm-ma01-k8s-b1",
    "crm-ma01-k8s-b2",
    "crm-ma01-k8s-b3",
    "crm-ma01-k8s-b4",
    "crm-ma01-k8s-b5",
    "crm-ma01-k8s-b6",
    "crm-etcd-k8s-b1",
    "crm-etcd-k8s-b2",
    "crm-etcd-k8s-b3",
    "10.17.1.51",
    "10.17.1.52",
    "10.17.1.53",
    "10.17.1.54",
    "10.17.1.55",
    "10.17.1.56",
    "10.17.1.47",
    "10.17.1.48",
    "10.17.1.49",
    "10.17.1.50"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF

5.2.4 Generate the ETCD CA certificate and ETCD key pair (Master-1)
[root@master-1 ~]# cd /root/etcd/

# Generate the CA certificate (Master-1)
[root@master-1 ~]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
[root@master-1 etcd]# ll
total 24
-rw-r--r-- 1 root root 287 Mar 4 10:16 ca-config.json
-rw-r--r-- 1 root root 956 Mar 4 10:18 ca.csr
-rw-r--r-- 1 root root 209 Mar 4 10:17 ca-csr.json
-rw------- 1 root root 1679 Mar 4 10:18 ca-key.pem
-rw-r--r-- 1 root root 1265 Mar 4 10:18 ca.pem
-rw-r--r-- 1 root root 338 Mar 4 10:18 server-csr.json

# Generate the etcd certificate (Master-1)
[root@master-1 etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
[root@master-1 etcd]# ll
total 36
-rw-r--r-- 1 root root 287 Mar 4 10:16 ca-config.json
-rw-r--r-- 1 root root 956 Mar 4 10:18 ca.csr
-rw-r--r-- 1 root root 209 Mar 4 10:17 ca-csr.json
-rw------- 1 root root 1679 Mar 4 10:18 ca-key.pem
-rw-r--r-- 1 root root 1265 Mar 4 10:18 ca.pem
-rw-r--r-- 1 root root 1054 Mar 4 10:19 server.csr
-rw-r--r-- 1 root root 338 Mar 4 10:18 server-csr.json
-rw------- 1 root root 1679 Mar 4 10:19 server-key.pem
-rw-r--r-- 1 root root 1379 Mar 4 10:19 server.pem

# Deploy etcd on the three etcd nodes
Download the etcd release
[root@etcd-01 soft]# cd /soft
[root@etcd-01 soft]# wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
[root@etcd-01 soft]# tar -xvf etcd-v3.4.13-linux-amd64.tar.gz
[root@etcd-01 soft]# cp -p etcd-v3.4.13-linux-amd64/etcd* /usr/local/bin/
[root@etcd-01 soft]# for i in crm-etcd-k8s-b1 crm-etcd-k8s-b2 crm-etcd-k8s-b3;do scp /usr/local/bin/etc* $i:/usr/local/bin/;done

6.1 Edit the etcd config file (all etcd nodes)
# Note: change ETCD_NAME on each node
# Note: change the listen addresses on each node
[root@etcd-01 ~]# mkdir -p /etc/etcd/cfg/
[root@etcd-01 ~]# cat >/etc/etcd/cfg/etcd.conf<<EOFL
#[Member]
ETCD_NAME="crm-etcd-k8s-b1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.17.1.47:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.17.1.47:2379,https://127.0.0.1:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.17.1.47:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.17.1.47:2379"
ETCD_INITIAL_CLUSTER="crm-etcd-k8s-b1=https://10.17.1.47:2380,crm-etcd-k8s-b2=https://10.17.1.48:2380,crm-etcd-k8s-b3=https://10.17.1.49:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ENABLE_V2="true"
EOFL

[root@etcd-02 ~]# mkdir -p /etc/etcd/cfg/
[root@etcd-02 ~]# cat >/etc/etcd/cfg/etcd.conf<<EOFL
#[Member]
ETCD_NAME="crm-etcd-k8s-b2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.17.1.48:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.17.1.48:2379,https://127.0.0.1:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.17.1.48:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.17.1.48:2379"
ETCD_INITIAL_CLUSTER="crm-etcd-k8s-b1=https://10.17.1.47:2380,crm-etcd-k8s-b2=https://10.17.1.48:2380,crm-etcd-k8s-b3=https://10.17.1.49:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ENABLE_V2="true"
EOFL

[root@etcd-03 ~]# mkdir -p /etc/etcd/cfg/
[root@etcd-03 ~]# cat >/etc/etcd/cfg/etcd.conf<<EOFL
#[Member]
ETCD_NAME="crm-etcd-k8s-b3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://10.17.1.49:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.17.1.49:2379,https://127.0.0.1:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.17.1.49:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.17.1.49:2379"
ETCD_INITIAL_CLUSTER="crm-etcd-k8s-b1=https://10.17.1.47:2380,crm-etcd-k8s-b2=https://10.17.1.48:2380,crm-etcd-k8s-b3=https://10.17.1.49:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ENABLE_V2="true"
EOFL

Note:
# Parameter explanation:
ETCD_NAME — node name; with multiple nodes, each node must use its own name.
ETCD_DATA_DIR — data directory
ETCD_LISTEN_PEER_URLS — peer (cluster) listen address
ETCD_LISTEN_CLIENT_URLS — client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS — advertised peer address
ETCD_ADVERTISE_CLIENT_URLS — advertised client address
ETCD_INITIAL_CLUSTER — cluster member addresses, comma-separated with multiple nodes, e.g.:
ETCD_INITIAL_CLUSTER="master1=https://192.168.91.200:2380,master2=https://192.168.91.201:2380,master3=https://192.168.91.202:2380"
ETCD_INITIAL_CLUSTER_TOKEN — cluster token
ETCD_INITIAL_CLUSTER_STATE — join state: new for a brand-new cluster, existing to join an existing one
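ETCD_INITIAL_CLUSTER must be byte-identical on every node, so it can be safer to generate the string once from the host plan than to type it three times. A sketch (POSIX sh; the names and IPs are the ones from this guide's plan):

```shell
# Build the ETCD_INITIAL_CLUSTER value from matching name/IP lists
NAMES="crm-etcd-k8s-b1 crm-etcd-k8s-b2 crm-etcd-k8s-b3"
IPS="10.17.1.47 10.17.1.48 10.17.1.49"
cluster=""
set -- $IPS                      # IPs become positional parameters
for name in $NAMES; do
  cluster="${cluster}${name}=https://$1:2380,"
  shift
done
cluster="${cluster%,}"           # drop the trailing comma
echo "ETCD_INITIAL_CLUSTER=\"${cluster}\""
```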

2.3.1 Create the etcd systemd service (all etcd nodes)
Pitfall:
etcd 3.4 automatically reads ETCD_* environment variables, so any parameter present in the EnvironmentFile must not also be passed in ExecStart — use one or the other.
Configuring both triggers an error like: "etcd: conflicting environment variable "ETCD_NAME"
is shadowed by corresponding command-line flag (either unset environment variable or disable flag)"

[root@etcd-01 ~]# cat > /usr/lib/systemd/system/etcd.service<<EOFL
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/etc/etcd/cfg/etcd.conf
ExecStart=/usr/local/bin/etcd \\
  --initial-cluster-state=new \\
  --cert-file=/etc/etcd/ssl/server.pem \\
  --key-file=/etc/etcd/ssl/server-key.pem \\
  --peer-cert-file=/etc/etcd/ssl/server.pem \\
  --peer-key-file=/etc/etcd/ssl/server-key.pem \\
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \\
  --heartbeat-interval=100 \\
  --election-timeout=500 \\
  --quota-backend-bytes=8589934592 \\
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOFL

6.3 Copy the etcd certificates into place
[root@etcd-01 ~]# mkdir -p /etc/etcd/ssl/
[root@etcd-01 ~]# \cp /root/etcd/*pem /etc/etcd/ssl/ -rf

# Copy the etcd certificates to every node: all master, worker, and etcd nodes
[root@etcd-01 ~]# for i in crm-etcd-k8s-b1 crm-etcd-k8s-b2 crm-etcd-k8s-b3 crm-ma01-k8s-b2 crm-ma01-k8s-b3 crm-wrk01-k8s-b1 crm-wrk01-k8s-b2 crm-wrk01-k8s-b3;do ssh $i mkdir -p /etc/etcd/{cfg,ssl};done
[root@etcd-01 ~]# for i in crm-etcd-k8s-b1 crm-etcd-k8s-b2 crm-etcd-k8s-b3 crm-ma01-k8s-b2 crm-ma01-k8s-b3 crm-wrk01-k8s-b1 crm-wrk01-k8s-b2 crm-wrk01-k8s-b3;do scp /etc/etcd/ssl/* $i:/etc/etcd/ssl/;done
[root@etcd-01 ~]# for i in crm-etcd-k8s-b1 crm-etcd-k8s-b2 crm-etcd-k8s-b3 crm-ma01-k8s-b2 crm-ma01-k8s-b3 crm-wrk01-k8s-b1 crm-wrk01-k8s-b2 crm-wrk01-k8s-b3;do echo $i “------>”; ssh $i ls /etc/etcd/ssl;done

6.4 Start etcd (all etcd nodes)
[root@etcd-01 ~]# systemctl daemon-reload
[root@etcd-01 ~]# systemctl start etcd
[root@etcd-01 ~]# systemctl enable etcd
[root@etcd-01 ~]# service etcd status

2.3.5 # Run in the foreground for debugging
/usr/local/bin/etcd \
  --name=etcd-01 \
  --data-dir=/var/lib/etcd/default.etcd \
  --listen-peer-urls=https://10.7.0.10:2380 \
  --listen-client-urls=https://10.7.0.10:2379,http://10.7.0.10:2390 \
  --advertise-client-urls=https://10.7.0.10:2379 \
  --initial-advertise-peer-urls=https://10.7.0.10:2380 \
  --initial-cluster=etcd-01=https://10.7.0.10:2380,etcd-02=https://10.7.0.11:2380,etcd-03=https://10.7.0.12:2380 \
  --initial-cluster-token=etcd-cluster \
  --initial-cluster-state=new \
  --cert-file=/etc/etcd/ssl/server.pem \
  --key-file=/etc/etcd/ssl/server-key.pem \
  --peer-cert-file=/etc/etcd/ssl/server.pem \
  --peer-key-file=/etc/etcd/ssl/server-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem

2.3.6 # Check cluster health
[root@master-2 ~]# etcdctl --cacert=/etc/etcd/ssl/ca.pem \
  --cert=/etc/etcd/ssl/server.pem \
  --key=/etc/etcd/ssl/server-key.pem \
  --endpoints="https://10.17.1.47:2379,https://10.17.1.48:2379,https://10.17.1.49:2379" \
  endpoint health

# v3 API
ETCDCTL_API=3 /usr/local/bin/etcdctl \
  --write-out=table --cacert=/etc/etcd/ssl/ca.pem \
  --cert=/etc/etcd/ssl/server.pem \
  --key=/etc/etcd/ssl/server-key.pem \
  --endpoints="https://10.17.1.47:2379,https://10.17.1.48:2379,https://10.17.1.49:2379" \
  endpoint health

# Sample output
+----------------------------+--------+-------------+-------+
|          ENDPOINT          | HEALTH |    TOOK     | ERROR |
+----------------------------+--------+-------------+-------+
| https://10.7.0.10:2379     | true   | 16.669768ms |       |
| https://10.7.0.11:2379     | true   | 23.794203ms |       |
| https://10.7.0.12:2379     | true   | 23.658683ms |       |
+----------------------------+--------+-------------+-------+

2.3.7 Set the flannel network — run on any one etcd node
Note: the network set here must match the cluster-cidr in the kube-controller-manager and kube-proxy configs — that is, the pod network.
ETCDCTL_API=2 etcdctl \
  --endpoints="https://10.17.1.47:2379,https://10.17.1.48:2379,https://10.17.1.49:2379" \
  --ca-file=/etc/etcd/ssl/ca.pem \
  --key-file=/etc/etcd/ssl/server-key.pem \
  --cert-file=/etc/etcd/ssl/server.pem \
  set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'

# To change the value later, simply run set again
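A mismatch between this Network value and the cluster-cidr passed to kube-controller-manager and kube-proxy breaks pod routing silently, so it is worth comparing the two strings before continuing. A local sketch of that check (values from this guide):

```shell
# The JSON stored at /coreos.com/network/config and the cluster-cidr flag value
FLANNEL_CONFIG='{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
CLUSTER_CIDR="172.17.0.0/16"
# Extract the Network field; sed is enough for this fixed shape
net=$(echo "$FLANNEL_CONFIG" | sed -n 's/.*"Network": "\([^"]*\)".*/\1/p')
if [ "$net" = "$CLUSTER_CIDR" ]; then
  echo "flannel Network matches cluster-cidr: ok"
else
  echo "MISMATCH: flannel=$net cluster-cidr=$CLUSTER_CIDR" >&2
fi
```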

# Verify the network was written
[root@master-1 etcd-v3.3.10-linux-amd64]# ETCDCTL_API=2 etcdctl \
  --endpoints="https://10.17.1.47:2379,https://10.17.1.48:2379,https://10.17.1.49:2379" \
  --ca-file=/etc/etcd/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/server.pem \
  --key-file=/etc/etcd/ssl/server-key.pem \
  get /coreos.com/network/config

# Result

# Everything above is performed on the etcd nodes

# The steps below deploy the components on the 3 master nodes
# Distribute the binaries
[root@master-1 soft]# wget https://dl.k8s.io/v1.20.1/kubernetes-server-linux-amd64.tar.gz
[root@master-1 soft]# tar xvf kubernetes-server-linux-amd64.tar.gz
[root@master-1 soft]# cd kubernetes/server/bin/
[root@master-1 bin]# \cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
[root@master-1 bin]# for i in crm-ma01-k8s-b2 crm-ma01-k8s-b3;do scp kube-apiserver kube-controller-manager kube-scheduler kubectl $i:/usr/local/bin/;done
[root@master-1 bin]# for i in crm-wrk01-k8s-b1 crm-wrk01-k8s-b2 crm-wrk01-k8s-b3;do scp kubelet kube-proxy $i:/usr/local/bin/;done

3.0 Generate the Kubernetes certificates
# Create the Kubernetes-related certificates
# These certificates secure communication between Kubernetes components; they are separate from the earlier ETCD certificates. (Master-1)

[root@master-1 ~]# mkdir /root/kubernetes/ && cd /root/kubernetes/

3.1 Configure the CA config file (Master-1)
[root@master-1 ~]# cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "876000h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

3.2 Create the CA certificate signing request (Master-1)
[root@master-1 ~]# cat << EOF | tee ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

# Meaning of the certificate fields
CN=commonName (site domain name)
OU=organizationUnit (organizational unit)
O=organizationName (organization)
L=localityName (city)
S=stateName (province/state)
C=country (country)

3.3 Create the API server certificate signing request (Master-1)
# Remember to change the VIP address
# As before, 3 extra IPs are reserved for master nodes added later
Note: list the IPs of ALL master and worker nodes here!
10.43.0.1 is the cluster IP of the kubernetes Service
[root@master-1 ~]# cat << EOF | tee server-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "10.43.0.1",
    "127.0.0.1",
    "10.17.1.50",
    "10.17.1.51",
    "10.17.1.52",
    "10.17.1.53",
    "10.17.1.54",
    "10.17.1.55",
    "10.17.1.56",
    "10.17.1.57",
    "10.17.1.58",
    "10.17.1.59",
    "crm-ma01-k8s-b1",
    "crm-ma01-k8s-b2",
    "crm-ma01-k8s-b3",
    "crm-ma01-k8s-b4",
    "crm-ma01-k8s-b5",
    "crm-ma01-k8s-b6",
    "crm-wrk01-k8s-b1",
    "crm-wrk01-k8s-b2",
    "crm-wrk01-k8s-b3",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

3.4 Create the Kubernetes Proxy certificate signing request (Master-1)
[root@master-1 ~]# cat << EOF | tee kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

3.5 Generate the Kubernetes CA certificate and key pair

Generate the CA certificate (Master-1)

[root@master-1 ~]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

3.5.1 # Generate the api-server certificate (Master-1)
[root@master-1 ~]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes server-csr.json | cfssljson -bare server

cfssl parameters

gencert: generate a new key and signed certificate
-initca: initialize a new CA
-ca: the CA certificate
-ca-key: the CA private key file
-config: the JSON file describing the signing profiles
-profile: which profile from -config to use when generating the certificate

3.5.2 # Generate the kube-proxy certificate (Master-1)
[root@master-1 ~]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
  -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

4 Install Docker
# Install the CE edition (worker nodes)
[root@node-1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@node-1 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
[root@node-1 ~]# yum install -y docker-ce-19.03.6 docker-ce-cli-19.03.6 containerd.io

4.1 Start the Docker service
[root@node-1 ~]# systemctl restart docker
[root@node-1 ~]# systemctl enable docker
[root@node-1 ~]# service docker status

On Debian systems:
apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common -y
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
apt-get update
apt-cache madison docker-ce # list available docker versions
apt install docker-ce=5:19.03.153-0debian-buster containerd.io -y
systemctl enable docker
systemctl status docker
docker info

# Install on all worker nodes only; etcd and master nodes do not need it
5 Install flannel
5.1 Download the flannel binary release
# For all worker nodes; download on master-1 first
[root@master-1 ~]# mkdir /soft ; cd /soft
[root@master-1 ~]# wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
[root@master-1 ~]# tar xvf flannel-v0.11.0-linux-amd64.tar.gz
[root@master-1 ~]# mv flanneld mk-docker-opts.sh /usr/local/bin/

# Copy the flannel tarball to all the worker nodes
[root@master-1 ~]# for i in crm-wrk01-k8s-b1 crm-wrk01-k8s-b2 crm-wrk01-k8s-b3;do scp flannel-v0.11.0-linux-amd64.tar.gz $i:/root;done

On each worker node:
tar xvf flannel-v0.11.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /usr/local/bin/

5.2 Configure flannel (all worker nodes)
[root@node-1 ~]# mkdir -p /etc/flannel
[root@node-1 ~]# cat > /etc/flannel/flannel.cfg<<EOF
FLANNEL_OPTIONS="-etcd-endpoints=https://10.17.1.47:2379,https://10.17.1.48:2379,https://10.17.1.49:2379 -etcd-cafile=/etc/etcd/ssl/ca.pem -etcd-certfile=/etc/etcd/ssl/server.pem -etcd-keyfile=/etc/etcd/ssl/server-key.pem"
EOF
# List only the etcd node IPs

5.3 Create the flanneld service unit
cat > /usr/lib/systemd/system/flanneld.service <<EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/flannel/flannel.cfg
ExecStart=/usr/local/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

5.4 Start flannel on all worker nodes
systemctl daemon-reload
service flanneld restart
systemctl enable flanneld
service flanneld status

# Once flannel runs successfully, stop it, modify the docker unit file, then start flannel again
5.5 # Stop flanneld on all worker nodes
service flanneld stop

5.6 Modify the docker unit file (all worker nodes)
cat >/usr/lib/systemd/system/docker.service<<EOFL
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOFL

5.7 Restart the Docker service
systemctl daemon-reload
service flanneld restart
service docker restart

6 Install the Master components
6.1 Install the API Server service
6.1.1 Download the Kubernetes binary release (1.20.2) (master-1)
[root@master-1 soft]# cd /soft
[root@master-1 soft]# tar xvf kubernetes-server-linux-amd64.tar.gz
[root@master-1 soft]# cd kubernetes/server/bin/
[root@master-1 soft]# cp kube-scheduler kube-apiserver kube-controller-manager kubectl /usr/local/bin/

# Copy the binaries to the other master nodes
[root@master-1 bin]# for i in crm-ma01-k8s-b2 crm-ma01-k8s-b3;do scp /usr/local/bin/kube* $i:/usr/local/bin/;done

6.1.2 Distribute the Kubernetes certificates
# The Kubernetes components authenticate to each other with these certificates; copy them to every master node (master-1)
[root@master-1 soft]# mkdir -p /etc/kubernetes/{cfg,ssl}
[root@master-1 soft]# cp /root/kubernetes/*.pem /etc/kubernetes/ssl/

# Copy to the other nodes
[root@master-1 soft]# for i in crm-ma01-k8s-b2 crm-ma01-k8s-b3 crm-wrk01-k8s-b1 crm-wrk01-k8s-b2 crm-wrk01-k8s-b3;do ssh $i mkdir -p /etc/kubernetes/{cfg,ssl};done
[root@master-1 soft]# for i in crm-ma01-k8s-b2 crm-ma01-k8s-b3 crm-wrk01-k8s-b1 crm-wrk01-k8s-b2 crm-wrk01-k8s-b3;do scp /etc/kubernetes/ssl/* $i:/etc/kubernetes/ssl/;done
[root@master-1 bin]# for i in crm-ma01-k8s-b2 crm-ma01-k8s-b3 crm-wrk01-k8s-b1 crm-wrk01-k8s-b2 crm-wrk01-k8s-b3;do echo $i “---------->”; ssh $i ls /etc/kubernetes/ssl;done

6.1.4 Create the token file (master-1)
# f89a76f197526a0d4bc2bf9c86e871c3: random string, generate your own; kubelet-bootstrap: user name; 10001: UID; system:kubelet-bootstrap: group
# You can also paste this line unchanged
[root@master-1 soft]# vim /etc/kubernetes/cfg/token.csv
f89a76f197526a0d4bc2bf9c86e871c3,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

# Copy to the other master nodes
[root@master-1 bin]# for i in crm-ma01-k8s-b2 crm-ma01-k8s-b3 ;do scp /etc/kubernetes/cfg/token.csv $i:/etc/kubernetes/cfg/token.csv;done
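Rather than reusing the sample token, a fresh 32-character hex token can be generated locally; any random string of that shape works for the token file:

```shell
# Generate a random bootstrap token and print a ready-made token.csv line
TOKEN=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\""
```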

# Create the api-server config file; run on every master node
# Adjust the cluster network and mask as needed
[root@master-1 soft]# cat >/etc/kubernetes/cfg/kube-apiserver.cfg <<EOFL
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount \\
  --anonymous-auth=false \\
  --bind-address=0.0.0.0 \\
  --secure-port=6443 \\
  --advertise-address=0.0.0.0 \\
  --insecure-port=0 \\
  --authorization-mode=Node,RBAC \\
  --runtime-config=api/all=true \\
  --enable-bootstrap-token-auth \\
  --service-cluster-ip-range=10.43.0.0/16 \\
  --token-auth-file=/etc/kubernetes/cfg/token.csv \\
  --service-node-port-range=30000-32767 \\
  --tls-cert-file=/etc/kubernetes/ssl/server.pem \\
  --tls-private-key-file=/etc/kubernetes/ssl/server-key.pem \\
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \\
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \\
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
  --etcd-cafile=/etc/etcd/ssl/ca.pem \\
  --etcd-certfile=/etc/etcd/ssl/server.pem \\
  --etcd-keyfile=/etc/etcd/ssl/server-key.pem \\
  --etcd-servers=https://10.17.1.47:2379,https://10.17.1.48:2379,https://10.17.1.49:2379 \\
  --enable-swagger-ui=true \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/kube-apiserver-audit.log \\
  --alsologtostderr=true \\
  --logtostderr=false \\
  --log-dir=/var/log/kubernetes \\
  --watch-cache=true \\
  --default-watch-cache-size=1500 \\
  --event-ttl=1h0m0s \\
  --max-requests-inflight=800 \\
  --max-mutating-requests-inflight=400 \\
  --feature-gates=RemoveSelfLink=false \\
  --default-not-ready-toleration-seconds=60 \\
  --default-unreachable-toleration-seconds=60 \\
  --v=4"
EOFL

Note:
--logtostderr: log to stderr
--v: log level
--log-dir: log directory
--etcd-servers: etcd cluster addresses
--bind-address: listen address
--secure-port: https secure port
--advertise-address: advertised cluster address
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP range
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization; enables RBAC and node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default NodePort range for Services
--kubelet-client-xxx: client certificate for apiserver-to-kubelet access
--tls-xxx-file: apiserver https certificates
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings

Parameter reference: https://kubernetes.io/zh/docs/reference/command-line-tools-reference/kube-apiserver/

# Create the service unit
cat >/usr/lib/systemd/system/kube-apiserver.service<<EOFL
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-apiserver.cfg
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOFL

# Enable on boot and start
systemctl daemon-reload
service kube-apiserver restart
systemctl enable kube-apiserver
service kube-apiserver status

# Debug mode (run in the foreground)
/usr/local/bin/kube-apiserver \
  --enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount \
  --anonymous-auth=false \
  --bind-address=0.0.0.0 \
  --secure-port=6443 \
  --advertise-address=0.0.0.0 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.43.0.0/16 \
  --token-auth-file=/etc/kubernetes/cfg/token.csv \
  --service-node-port-range=30000-50000 \
  --tls-cert-file=/etc/kubernetes/ssl/server.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/server-key.pem \
  --client-ca-file=/etc/kubernetes/ssl/ca.pem \
  --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --etcd-cafile=/etc/etcd/ssl/ca.pem \
  --etcd-certfile=/etc/etcd/ssl/server.pem \
  --etcd-keyfile=/etc/etcd/ssl/server-key.pem \
  --etcd-servers=https://10.7.0.10:2379,https://10.7.0.11:2379,https://10.7.0.12:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --watch-cache=true \
  --default-watch-cache-size=1500 \
  --event-ttl=1h0m0s \
  --max-requests-inflight=800 \
  --max-mutating-requests-inflight=400 \
  --feature-gates=RemoveSelfLink=false \
  --v=4

# Verify the endpoint; the IP is the VIP
[root@master-1 bin]# curl --insecure https://10.17.1.50:6443

# Result
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
# A 401 here is expected: anonymous auth is disabled and curl presented no client certificate.

# Deploy kubectl
Create the csr request file
[root@master-1 ~]# cd /root/kubernetes

# Create the client (admin) certificate request
[root@master-1 ~]# cat > admin-csr.json <<'EOF'
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

Notes:
kube-apiserver later uses RBAC to authorize client requests (from kubelet, kube-proxy, Pods);
kube-apiserver predefines some RBAC RoleBindings, e.g. cluster-admin binds the Group system:masters to the Role cluster-admin, which grants permission to call every kube-apiserver API;
O sets this certificate's Group to system:masters: because the certificate is signed by the CA, authentication succeeds, and because the group is the pre-authorized system:masters, the client is granted access to all APIs;
Note:
This admin certificate is used later to generate the administrator's kubeconfig file. RBAC is the recommended way to control roles and permissions in Kubernetes, which takes the certificate's CN field as the User and the O field as the Group;
"O": "system:masters" must be exactly system:masters, otherwise the later kubectl create clusterrolebinding fails.

# Generate the client certificate
[root@master-1 kubernetes]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \
  -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

# Copy the certificates
[root@master-1 kubernetes]# \cp /root/kubernetes/admin*.pem /etc/kubernetes/ssl/
[root@master-1 kubernetes]# for i in crm-ma01-k8s-b2 crm-ma01-k8s-b3;do
scp /etc/kubernetes/ssl/admin*.pem $i:/etc/kubernetes/ssl/;done

Set the cluster parameters
[root@master-1 kubernetes]# export KUBE_APISERVER="https://10.17.1.50:6443" # the VIP address
[root@master-1 kubernetes]# kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube.config

Set the client credentials
[root@master-1 kubernetes]# kubectl config set-credentials admin \
  --client-certificate=/etc/kubernetes/ssl/admin.pem \
  --client-key=/etc/kubernetes/ssl/admin-key.pem \
  --embed-certs=true --kubeconfig=kube.config

Set the context parameters
[root@master-1 kubernetes]# kubectl config set-context kubernetes \
  --cluster=kubernetes --user=admin --kubeconfig=kube.config

Set the default context
[root@master-1 kubernetes]# kubectl config use-context kubernetes --kubeconfig=kube.config
[root@master-1 kubernetes]# mkdir ~/.kube
[root@master-1 kubernetes]# \cp kube.config ~/.kube/config

Grant the kubernetes certificate access to the kubelet API
[root@master1 work]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis \
  --clusterrole=system:kubelet-api-admin --user kubernetes

# Get cluster info
[root@master-1 kubernetes]# kubectl cluster-info
Kubernetes control plane is running at https://10.7.0.59:6443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

# Get component status (scheduler and controller-manager report Unhealthy here because they have not been deployed yet)
[root@master-1 ~]# kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy

Configure kubectl shell completion
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.20.15/bin/linux/amd64/kubectl
chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl
mkdir -p ~/.local/bin/kubectl
kubectl version --client

apt install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
kubectl completion bash >/etc/bash_completion.d/kubectl

3.4.5 Deploy kube-controller-manager (run on all master nodes)
Create the CSR request file
[root@master-1 ~]# cd /root/kubernetes
[root@master-1 kubernetes]# cat > kube-controller-manager-csr.json <<'EOF'
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "127.0.0.1",
    "10.17.1.50",
    "10.17.1.51",
    "10.17.1.52",
    "10.17.1.53",
    "10.17.1.54",
    "10.17.1.55",
    "10.17.1.56"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:kube-controller-manager",
      "OU": "system"
    }
  ]
}
EOF

Note:
The hosts list includes every kube-controller-manager node IP (spare master IPs are reserved in advance);
CN is system:kube-controller-manager and O is system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs to work.

Generate the certificate
[root@master-1 kubernetes]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem \
-config=ca-config.json -profile=kubernetes \
kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

# View the certificates
[root@master-1 kubernetes]# ls kube-controller-manager*.pem
kube-controller-manager-key.pem kube-controller-manager.pem
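Before distributing the certs, it is worth confirming that the SAN list really contains every planned IP. The sketch below is a stand-in: it generates a throwaway self-signed cert carrying a SAN list (paths under /tmp are hypothetical; requires OpenSSL 1.1.1+ for `-addext`), then inspects it exactly the way you would inspect kube-controller-manager.pem:

```shell
# Demo: build a throwaway cert with a SAN list, then inspect it the same way
# you would inspect kube-controller-manager.pem after cfssl signs it.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/cm-demo-key.pem -out /tmp/cm-demo.pem \
  -subj "/CN=system:kube-controller-manager" \
  -addext "subjectAltName=IP:127.0.0.1,IP:10.17.1.50,IP:10.17.1.51,IP:10.17.1.52" \
  2>/dev/null
# Every master IP (including the reserved spares) must appear in this SAN
# block; a missing IP means TLS verification fails when that node connects.
openssl x509 -in /tmp/cm-demo.pem -noout -text | grep -A1 'Subject Alternative Name'
```

Run the same `openssl x509 -noout -text` inspection against the real kube-controller-manager.pem to confirm all eight hosts from the CSR made it in.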

# Copy the certificates
[root@master-1 kubernetes]# \cp kube-controller-manager*.pem /etc/kubernetes/ssl/
[root@master-1 kubernetes]# for i in crm-ma01-k8s-b2 crm-ma01-k8s-b3;do
scp /etc/kubernetes/ssl/kube-controller-manager*.pem $i:/etc/kubernetes/ssl/;done

Create the kubeconfig for kube-controller-manager

Set the cluster parameters (use the VIP address)
[root@master-1 kubernetes]# kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://10.17.1.50:6443 \
--kubeconfig=kube-controller-manager.kubeconfig

Set the client authentication parameters
[root@master-1 kubernetes]# kubectl config set-credentials \
system:kube-controller-manager \
--client-certificate=/etc/kubernetes/ssl/kube-controller-manager.pem \
--client-key=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig

Set the context parameters
[root@master-1 kubernetes]# kubectl config set-context \
system:kube-controller-manager --cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig

Set the default context
[root@master-1 kubernetes]# kubectl config use-context \
system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig

# Copy the file
[root@master-1 kubernetes]# \cp kube-controller-manager.kubeconfig /etc/kubernetes/cfg/
[root@master-1 kubernetes]# for i in crm-ma01-k8s-b2 crm-ma01-k8s-b3;do
scp /etc/kubernetes/cfg/kube-controller-manager.kubeconfig $i:/etc/kubernetes/cfg;done

# Write the configuration file; run on every master node
Note: cluster-cidr is the pod network range and must match the flannel network set in etcd earlier.
[root@master-1 bin]# cat >/etc/kubernetes/cfg/kube-controller-manager.cfg<<EOFL
KUBE_CONTROLLER_MANAGER_OPTS="--bind-address=0.0.0.0 \
--secure-port=10257 \
--port=10252 \
--address=0.0.0.0 \
--kubeconfig=/etc/kubernetes/cfg/kube-controller-manager.kubeconfig \
--service-cluster-ip-range=10.43.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--allocate-node-cidrs=true \
--cluster-cidr=172.17.0.0/16 \
--experimental-cluster-signing-duration=876000h \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
--leader-elect=true \
--feature-gates=RotateKubeletServerCertificate=true \
--controllers=*,bootstrapsigner,tokencleaner \
--horizontal-pod-autoscaler-use-rest-clients=true \
--horizontal-pod-autoscaler-sync-period=10s \
--tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
--use-service-account-credentials=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--node-monitor-period=2s \
--node-monitor-grace-period=20s \
--node-startup-grace-period=30s \
--pod-eviction-timeout=1m \
--v=2"
EOFL
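Since the unit file below loads this via EnvironmentFile=, the cfg must parse as a KEY="value" assignment. A quick pre-flight check (demo path and trimmed flag list assumed) is to source it in a shell and confirm the variable comes out non-empty:

```shell
# Minimal stand-in for /etc/kubernetes/cfg/kube-controller-manager.cfg;
# the real file carries the full flag list from above.
cat > /tmp/kube-controller-manager.cfg <<'EOF'
KUBE_CONTROLLER_MANAGER_OPTS="--bind-address=0.0.0.0 --secure-port=10257 --v=2"
EOF
# EnvironmentFile syntax is close enough to shell that sourcing the file
# catches unbalanced quotes before systemd ever sees them.
. /tmp/kube-controller-manager.cfg
[ -n "$KUBE_CONTROLLER_MANAGER_OPTS" ] && echo "cfg parses: $KUBE_CONTROLLER_MANAGER_OPTS"
```

If the variable comes back empty or the source fails, fix the quoting before starting the service; systemd's error message for a bad EnvironmentFile is far less direct.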

# Notes
Flag reference: https://kubernetes.io/zh/docs/reference/command-line-tools-reference/kube-controller-manager/
kube-proxy uses --cluster-cidr to distinguish cluster-internal from external traffic; it only applies SNAT to requests hitting Service IPs when --cluster-cidr or --masquerade-all is set.
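The "100-year" certificate claim in this guide comes from --experimental-cluster-signing-duration=876000h; the arithmetic is easy to verify:

```shell
# 876000 hours -> years (ignoring leap days): 876000 / 24 / 365 = 100
hours=876000
years=$(( hours / 24 / 365 ))
echo "--experimental-cluster-signing-duration=${hours}h is ${years} years"   # prints "... is 100 years"
```

This flag only governs certs signed by the controller-manager (e.g. rotated kubelet serving certs); the cluster certs generated earlier with cfssl get their 100-year lifetime from the ca-config.json expiry instead.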

9.3.2 Create the kube-controller-manager unit file
[root@master-1 bin]# cat >/usr/lib/systemd/system/kube-controller-manager.service<<EOFL
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/etc/kubernetes/cfg/kube-controller-manager.cfg
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOFL

9.3.3 Start the kube-controller-manager service
[root@master-1 bin]# systemctl daemon-reload && systemctl enable kube-controller-manager
[root@master-1 bin]# service kube-controller-manager restart
[root@master-1 bin]# service kube-controller-manager status

# Debug run (start in the foreground)
/usr/local/bin/kube-controller-manager \
--bind-address=0.0.0.0 \
--secure-port=10257 \
--port=10252 \
--address=0.0.0.0 \
--kubeconfig=/etc/kubernetes/cfg/kube-controller-manager.kubeconfig \
--service-cluster-ip-range=10.43.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--allocate-node-cidrs=true \
--cluster-cidr=172.17.0.0/16 \
--experimental-cluster-signing-duration=876000h \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
--leader-elect=true \
--feature-gates=RotateKubeletServerCertificate=true \
--controllers=*,bootstrapsigner,tokencleaner \
--horizontal-pod-autoscaler-use-rest-clients=true \
--horizontal-pod-autoscaler-sync-period=10s \
--tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
--use-service-account-credentials=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--node-monitor-grace-period=20s \
--node-startup-grace-period=30s \
--pod-eviction-timeout=1m \
--v=2

For the detailed version, contact QQ: 1043018380 (mention where you found this).