Debian 11 operating system
| Role | IP |
| --- | --- |
| master1 & etcd1 | 192.168.2.135 |
| master2 & etcd2 | 192.168.2.136 |
| node1 | 192.168.2.137 |
| node2 | 192.168.2.138 |
| vip | 192.168.2.139 |
| etcd3 | 192.168.2.140 |
Configure hostnames:
Run the following on 192.168.2.135, 136, 137, 138, and 140 respectively:
hostnamectl set-hostname k8s-master1
hostnamectl set-hostname k8s-master2
hostnamectl set-hostname node1
hostnamectl set-hostname node2
hostnamectl set-hostname etcd3
Configure the hosts file:
cat >>/etc/hosts <<EOF
192.168.2.135 k8s-master1 master1
192.168.2.136 k8s-master2 master2
192.168.2.137 k8s-node1 node1
192.168.2.138 k8s-node2 node2
192.168.2.140 etcd3
EOF
Configure passwordless SSH login between the hosts; run the following on every machine:
ssh-keygen -t rsa (press Enter at every prompt; leave the passphrase empty)
for i in master1 master2 node1 node2 etcd3;do ssh-copy-id $i;done (enter each host's root password in turn)
Disable the firewalld firewall and SELinux (optional):
systemctl stop firewalld && systemctl disable firewalld
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
Disable the swap partition:
Temporarily: swapoff -a
Permanently: comment out the swap mount; in /etc/fstab, prefix the swap line with #
#/dev/mapper/debian--vg-swap_1 none swap sw 0 0
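A quick way to verify swap is fully off (swapon --show should print nothing, and free should report 0B of swap):
swapon --show
free -h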
Adjust the kernel parameters:
modprobe br_netfilter (load the br_netfilter module)
echo "br_netfilter" >>/etc/modules (load it automatically at boot)
(If you skip the steps above, then after editing /etc/sysctl.d/k8s.conf, running sysctl -p /etc/sysctl.d/k8s.conf fails with:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory)
lsmod |grep br_netfilter (verify the module loaded)
(net.ipv4.ip_forward controls packet forwarding:
For security, Linux disables packet forwarding by default. Forwarding means that when a host has more than one network interface and one of them receives a packet, the host sends the packet out through another interface according to the packet's destination IP, and that interface forwards it on according to the routing table; this is normally a router's job.
To give a Linux system forwarding capability, set the kernel parameter net.ipv4.ip_forward. It reports the system's current support for forwarding: 0 means IP forwarding is disabled; 1 means it is enabled.)
cat > /etc/sysctl.d/k8s.conf <<EOF  # set the kernel parameters
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf (apply the kernel parameters just set)
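A quick verification that all three parameters took effect (each should print 1):
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward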
Configure time synchronization (optional):
cat >/etc/chrony/chrony.conf <<EOF
pool ntp.iftop.top iburst minpoll 3 maxpoll 3 maxsources 1 prefer
pool ntp.ubuntu.com iburst maxsources 4
pool 0.ubuntu.pool.ntp.org iburst maxsources 1
pool 1.ubuntu.pool.ntp.org iburst maxsources 1
pool 2.ubuntu.pool.ntp.org iburst maxsources 2
stratumweight 0.05
driftfile /var/lib/chrony/drift
rtcsync
makestep 0.5 3
bindcmdaddress 127.0.0.1
bindcmdaddress ::1
noclientlog
logchange 0.5
logdir /var/log/chrony
EOF
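After writing the config, restart chrony and confirm it can reach a time source (a quick sanity check; chronyc ships with the chrony package):
systemctl restart chrony
chronyc sources -v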
Install iptables:
apt-get install iptables
Install the Docker environment (offline deployment; required on every machine):
Change the Docker cgroup driver to systemd (the default is cgroupfs); the kubelet defaults to systemd and the two must match.
1. Download the package and upload it to the servers:
https://download.docker.com/linux/static/stable/x86_64/docker-24.0.6.tgz
2. Install (the tgz extracts to a docker/ directory)
tar zxf docker-24.0.6.tgz
cp docker/* /usr/bin
3. Register the systemd service
cat >/lib/systemd/system/docker.service <<'EOF'
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=65535
LimitNPROC=65535
LimitCORE=65535
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF
4. Configure daemon.json
mkdir /etc/docker
cat >/etc/docker/daemon.json <<'EOF'
{
"insecure-registries":["210.14.75.1:5000"],
"registry-mirrors" :[
"https://hub.docker.com",
"https://dockerproxy.com",
"https://docker.nju.edu.cn",
"https://mirror.baidubce.com",
"https://docker.mirrors.sjtug.sjtu.edu.cn",
"https://mirror.iscas.ac.cn"
],
"proxies": {
"http-proxy": "http://relay-acting.iftop.top:11969",
"https-proxy": "http://relay-acting.iftop.top:11969",
"no-proxy": "*.cn,127.0.0.0/8,192.168.0.0/16,172.16.0.0/12,10.0.0.0/8"
},
"data-root": "/var/lib/docker",
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
5. Start Docker and enable it at boot
apt-get install iptables (fixes the error: dockerd[7870]: failed to start daemon: Error initializing network controller: error obtaining controller instance: failed)
systemctl daemon-reload
systemctl enable --now docker
docker info
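Confirm the cgroup driver really switched to systemd (it must match the kubelet):
docker info | grep -i 'cgroup driver'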
Deploy the k8s environment
Build the etcd cluster
Create the etcd working directory (on master1, master2, and etcd3):
mkdir -p /etc/etcd/ssl
Upload etcd, etcdctl, and etcdutl to /usr/local/bin, then sync them:
scp /usr/local/bin/etcd* master2:/usr/local/bin
scp /usr/local/bin/etcd* etcd3:/usr/local/bin
Install the certificate-signing tool cfssl
Download from: https://github.com/cloudflare/cfssl/releases/tag/v1.6.5
On master1:
mkdir /data/work -p
cd /data/work/
Upload: cfssl_1.6.5_linux_amd64 cfssl-certinfo_1.6.5_linux_amd64 cfssljson_1.6.5_linux_amd64
mv cfssl_1.6.5_linux_amd64 /usr/local/bin/cfssl
mv cfssljson_1.6.5_linux_amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_1.6.5_linux_amd64 /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl*
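Verify the tool is installed and on PATH:
cfssl version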
Configure the CA certificate
Generate the CA certificate signing request (CSR) file:
root@k8s-master1:/data/work# cat ca-csr.json
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Hubei",
"L": "Wuhan",
"O": "k8s",
"OU": "system"
}
],
"ca": {
"expiry": "87600h"
}
}
[root@k8s-master1 work]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
Notes:
CN: Common Name. kube-apiserver extracts this field from the certificate as the request's user name (User Name); browsers use it to verify whether a site is legitimate. For an SSL certificate it is usually the site's domain name; for a code-signing certificate it is the applicant organization's name; for a client certificate it is the applicant's name.
O: Organization. kube-apiserver extracts this field from the certificate as the requesting user's group (Group). For an SSL certificate it is usually the site's domain name; for a code-signing certificate it is the applicant organization's name; for a client certificate it is the organization the applicant belongs to.
L field: city
ST field: province/state
C field: two-letter country code only, e.g. CN for China
Create the CA certificate signing configuration
root@k8s-master1:/data/work# cat ca-config.json
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "87600h"
}
}
}
}
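Optionally inspect the CA certificate generated above (either tool works; both were installed earlier, openssl assuming it is present):
cfssl-certinfo -cert ca.pem
openssl x509 -in ca.pem -noout -subject -dates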
Generate the etcd certificate
Configure the etcd CSR. Change the hosts IPs to the IPs of your own etcd nodes; the IPs in the hosts field are the internal cluster-communication IPs of all etcd nodes. You can reserve a few extra for future scale-out.
root@k8s-master1:/data/work# cat etcd-csr.json
{
"CN": "etcd",
"hosts": [
"127.0.0.1",
"192.168.2.135",
"192.168.2.136",
"192.168.2.137",
"192.168.2.138",
"192.168.2.139",
"192.168.2.140"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Hubei",
"L": "Wuhan",
"O": "k8s",
"OU": "system"
}
]
}
[root@k8s-master1 work]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
root@k8s-master1:/data/work# ls etcd*.pem
etcd-key.pem etcd.pem
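Optionally confirm the SANs in the new certificate cover every etcd node IP (assumes openssl is available):
openssl x509 -in etcd.pem -noout -text | grep -A1 'Subject Alternative Name'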
Create the etcd configuration file
root@k8s-master1:/data/work# cat etcd.conf
#[Member]
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.2.135:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.2.135:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.2.135:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.2.135:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.2.135:2380,etcd2=https://192.168.2.136:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_NAME: node name, unique within the cluster
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: cluster (peer) communication listen address
ETCD_LISTEN_CLIENT_URLS: client access listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
ETCD_ADVERTISE_CLIENT_URLS: advertised client address
ETCD_INITIAL_CLUSTER: cluster node addresses
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; new for a new cluster, existing to join an existing cluster
Create the systemd service file:
root@k8s-master1:/data/work# cat etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd \
--cert-file=/etc/etcd/ssl/etcd.pem \
--key-file=/etc/etcd/ssl/etcd-key.pem \
--trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-cert-file=/etc/etcd/ssl/etcd.pem \
--peer-key-file=/etc/etcd/ssl/etcd-key.pem \
--peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
--peer-client-cert-auth \
--client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Copy the etcd certificates into /etc/etcd/ssl, then sync everything to master2:
cp ca*.pem etcd*.pem /etc/etcd/ssl/
cp etcd.conf /etc/etcd/
cp etcd.service /usr/lib/systemd/system/
scp -r /etc/etcd master2:/etc/
scp -r /usr/lib/systemd/system/etcd.service master2:/usr/lib/systemd/system/
Start the etcd cluster
[root@k8s-master1 work]# mkdir -p /var/lib/etcd/default.etcd
[root@k8s-master2 work]# mkdir -p /var/lib/etcd/default.etcd
Edit the etcd config file on master2:
root@k8s-master2:~# cat /etc/etcd/etcd.conf
#[Member]
ETCD_NAME="etcd2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.2.136:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.2.136:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.2.136:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.2.136:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.2.135:2380,etcd2=https://192.168.2.136:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Start the etcd services (start etcd on k8s-master1 first; it will hang in a starting state, then start etcd on k8s-master2, after which the etcd on k8s-master1 comes up normally):
[root@k8s-master1 work]#systemctl daemon-reload && systemctl enable --now etcd.service
[root@k8s-master2 work]#systemctl daemon-reload && systemctl enable --now etcd.service
Both services start normally.

Check the etcd cluster
root@k8s-master1:/data/work# ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.2.135:2379,https://192.168.2.136:2379 endpoint health
+----------------------------+--------+-------------+-------+
| ENDPOINT | HEALTH | TOOK | ERROR |
+----------------------------+--------+-------------+-------+
| https://192.168.2.135:2379 | true | 15.279648ms | |
| https://192.168.2.136:2379 | true | 22.56428ms | |
+----------------------------+--------+-------------+-------+
root@k8s-master1:/data/work#
root@k8s-master1:/data/work# ETCDCTL_API=3 etcdctl --endpoints=https://192.168.2.135:2379,https://192.168.2.136:2379 --cacert=ca.pem --cert=etcd.pem --key=etcd-key.pem member list --write-out=table
+------------------+---------+-------+----------------------------+----------------------------+------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER |
+------------------+---------+-------+----------------------------+----------------------------+------------+
| 1f07f058a7c8ed46 | started | etcd1 | https://192.168.2.135:2380 | https://192.168.2.135:2379 | false |
| bcc3f105d45e0ff7 | started | etcd2 | https://192.168.2.136:2380 | https://192.168.2.136:2379 | false |
+------------------+---------+-------+----------------------------+----------------------------+------------+
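As an optional smoke test, write a key through one member and read it back through the other (the key name foo is arbitrary):
etcdctl --endpoints=https://192.168.2.135:2379 --cacert=ca.pem --cert=etcd.pem --key=etcd-key.pem put foo bar
etcdctl --endpoints=https://192.168.2.136:2379 --cacert=ca.pem --cert=etcd.pem --key=etcd-key.pem get foo
etcdctl --endpoints=https://192.168.2.135:2379 --cacert=ca.pem --cert=etcd.pem --key=etcd-key.pem del foo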
Scale out etcd
etcd needs 3 nodes to tolerate the failure of 1 node.
On master1:
scp -r /etc/etcd etcd3:/etc/
scp -r /usr/lib/systemd/system/etcd.service etcd3:/usr/lib/systemd/system/
scp /usr/local/bin/etcd* etcd3:/usr/local/bin/
Edit /etc/etcd/etcd.conf on etcd3:
root@etcd3:~# cat /etc/etcd/etcd.conf
#[Member]
ETCD_NAME="etcd3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.2.140:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.2.140:2379,http://127.0.0.1:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.2.140:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.2.140:2379"
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.2.135:2380,etcd2=https://192.168.2.136:2380,etcd3=https://192.168.2.140:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="existing"
[root@etcd3 ~]# mkdir -p /var/lib/etcd/default.etcd
Do NOT start the etcd service yet!!!
On master1, add the member into the cluster:
cd /data/work
etcdctl member add etcd3 --cacert=ca.pem --cert=etcd.pem --key=etcd-key.pem --peer-urls="https://192.168.2.140:2380"
root@k8s-master1:/data/work# ETCDCTL_API=3 etcdctl --endpoints=https://192.168.2.135:2379 --cacert=ca.pem --cert=etcd.pem --key=etcd-key.pem member list -w table
+------------------+---------+-------+----------------------------+----------------------------+------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER |
+------------------+---------+-------+----------------------------+----------------------------+------------+
| 1f07f058a7c8ed46 | started | etcd1 | https://192.168.2.135:2380 | https://192.168.2.135:2379 | false |
| 6caf8b3ebfbf22e3 | started | etcd3 | https://192.168.2.140:2380 | | false |
| bcc3f105d45e0ff7 | started | etcd2 | https://192.168.2.136:2380 | https://192.168.2.136:2379 | false |
+------------------+---------+-------+----------------------------+----------------------------+------------+
Then, in the etcd.conf of etcd1 and etcd2, change ETCD_INITIAL_CLUSTER to:
ETCD_INITIAL_CLUSTER="etcd1=https://192.168.2.135:2380,etcd2=https://192.168.2.136:2380,etcd3=https://192.168.2.140:2380"
On etcd1 and etcd2:
systemctl restart etcd
On etcd3:
systemctl daemon-reload && systemctl enable --now etcd
root@k8s-master1:/data/work# ETCDCTL_API=3 etcdctl --endpoints=https://192.168.2.135:2379 --cacert=ca.pem --cert=etcd.pem --key=etcd-key.pem member list -w table
+------------------+---------+-------+----------------------------+----------------------------+------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER |
+------------------+---------+-------+----------------------------+----------------------------+------------+
| 1f07f058a7c8ed46 | started | etcd1 | https://192.168.2.135:2380 | https://192.168.2.135:2379 | false |
| 6caf8b3ebfbf22e3 | started | etcd3 | https://192.168.2.140:2380 | https://192.168.2.140:2379 | false |
| bcc3f105d45e0ff7 | started | etcd2 | https://192.168.2.136:2380 | https://192.168.2.136:2379 | false |
+------------------+---------+-------+----------------------------+----------------------------+------------+
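With all three members started, re-check endpoint health across the full cluster:
ETCDCTL_API=3 etcdctl --write-out=table --cacert=ca.pem --cert=etcd.pem --key=etcd-key.pem --endpoints=https://192.168.2.135:2379,https://192.168.2.136:2379,https://192.168.2.140:2379 endpoint health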
Download the k8s binary package:
1. Download the source package: https://github.com/kubernetes/kubernetes/releases/tag/v1.23.14
https://github.com/kubernetes/kubernetes/archive/refs/tags/v1.23.14.tar.gz
2. Extract the binary package
tar zxf kubernetes-1.23.14.tar.gz
cd kubernetes-1.23.14/
echo "v1.23.14" >./version
cd cluster/
vi get-kube-binaries.sh (at the top add: export https_proxy=http://relay-acting.iftop.top:11969;export http_proxy=http://relay-acting.iftop.top:11969)
bash ./get-kube-binaries.sh
cd ../server/
sz kubernetes-server-linux-amd64.tar.gz
Upload kubernetes-server-linux-amd64.tar.gz to the /data/work directory on master1:
root@k8s-master1:/data/work# rz
rz waiting to receive.
Starting zmodem transfer. Press Ctrl+C to cancel.
root@k8s-master1:/data/work# rz -bye
rz waiting to receive.
Starting zmodem transfer. Press Ctrl+C to cancel.
Transferring kubernetes-server-linux-amd64.tar.gz...
100% 333979 KB 6957 KB/sec 00:00:48 0 Errors
root@k8s-master1:/data/work# tar zxf kubernetes-server-linux-amd64.tar.gz
root@k8s-master1:/data/work# cd kubernetes/server/bin/
root@k8s-master1:/data/work/kubernetes/server/bin# cp kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
root@k8s-master1:/data/work/kubernetes/server/bin# scp kube-apiserver kube-controller-manager kube-scheduler kubectl master2:/usr/local/bin/
kube-apiserver 100% 125MB 99.3MB/s 00:01
kube-controller-manager 100% 116MB 125.0MB/s 00:00
kube-scheduler 100% 47MB 112.8MB/s 00:00
kubectl 100% 44MB 115.1MB/s 00:00
root@k8s-master1:/data/work/kubernetes/server/bin# cd /data/work/
root@k8s-master1:/data/work# mkdir -p /etc/kubernetes/ssl
root@k8s-master1:/data/work# mkdir /var/log/kubernetes
root@k8s-master1:/data/work#
Deploy the apiserver component
Enabling the TLS bootstrapping mechanism
Once the master apiserver has TLS authentication enabled, each node's kubelet must use a valid certificate issued by the CA the apiserver uses in order to communicate with the apiserver. When there are many nodes, issuing these client certificates by hand is a lot of work and also adds complexity to scaling the cluster. To simplify the process, Kubernetes introduced the TLS bootstrapping mechanism to issue client certificates automatically: the kubelet applies to the apiserver for a certificate as a low-privileged user, and the apiserver signs the kubelet's certificate dynamically.
How TLS bootstrapping works
1. The role of TLS
TLS encrypts the traffic and prevents man-in-the-middle eavesdropping; moreover, if the certificate is not trusted, a connection to the apiserver cannot be established at all, never mind being authorized to request anything from it.
2. The role of RBAC
Once TLS secures the communication, authorization is RBAC's job (other authorization models such as ABAC also work). RBAC specifies which APIs a user or group (subject) may request; combined with TLS, the apiserver in fact reads the client certificate's CN field as the user name and its O field as the group.
This means two things: first, to talk to the apiserver a client must present a certificate issued by the apiserver's CA, which establishes trust and the TLS connection; second, the certificate's CN and O fields supply the user and group that RBAC needs.
kubelet first-start flow
TLS bootstrapping exists so the kubelet can request a certificate from the apiserver and then use it to connect; so how does the kubelet connect on its very first start, when it has no certificate yet?
The apiserver configuration references a token.csv file containing a preset user; that user's token, together with the CA certificate used by the apiserver, is written into the bootstrap.kubeconfig file the kubelet uses. On the first request, the kubelet establishes TLS with the apiserver by trusting the apiserver CA from bootstrap.kubeconfig, and declares its RBAC identity to the apiserver with the user token from bootstrap.kubeconfig.
token.csv format:
3940fd7fbb391d1b4d861ad17a1f0613,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
On first start the kubelet may report a 401 Unauthorized error from the apiserver. By default the kubelet declares its identity with the preset user token from bootstrap.kubeconfig and then creates a CSR request; but do not forget that, until we intervene, this user has no permissions at all, including permission to create CSR requests. So a ClusterRoleBinding is needed that binds the preset user kubelet-bootstrap to the built-in ClusterRole system:node-bootstrapper so it can submit CSR requests (an example command follows).
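For reference, the binding described above is created with a command like the following (run it later, once kubectl is configured, during the kubelet deployment steps):
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap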
Create the token.csv file; format: token,user name,UID,group
root@k8s-master1:/data/work# cat >token.csv <<EOF
>$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
>EOF
root@k8s-master1:/data/work# cat token.csv
8c06cbc832b6ca19f349cebc82fc74b0,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
root@k8s-master1:/data/work#
Create the CSR request file; replace the IPs with your own machines'.
If the hosts field is non-empty, it must list the IPs or domain names authorized to use this certificate. Since this certificate will be used by the kubernetes master cluster, include all master node IPs, plus the first IP of the service network (normally the first IP of the service-cluster-ip-range passed to kube-apiserver, e.g. 10.255.0.1).
root@k8s-master1:/data/work# cat kube-apiserver-csr.json
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"192.168.2.135",
"192.168.2.136",
"192.168.2.138",
"192.168.2.139",
"192.168.2.140",
"10.255.0.1",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Hubei",
"L": "Wuhan",
"O": "k8s",
"OU": "system"
}
]
}
Generate the certificate
root@k8s-master1:/data/work# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json |cfssljson -bare kube-apiserver
2024/08/07 14:51:10 [INFO] generate received request
2024/08/07 14:51:10 [INFO] received CSR
2024/08/07 14:51:10 [INFO] generating key: rsa-2048
2024/08/07 14:51:10 [INFO] encoded CSR
2024/08/07 14:51:10 [INFO] signed certificate with serial number 695892646101567378378411579612735881419826252228
root@k8s-master1:/data/work# ls kube-apiserver*.pem
kube-apiserver-key.pem kube-apiserver.pem
Create the api-server config file; substitute your own IPs
root@k8s-master1:/data/work# cat kube-apiserver.conf
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--anonymous-auth=false \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--advertise-address=192.168.2.135 \
--insecure-port=0 \
--authorization-mode=Node,RBAC \
--runtime-config=api/all=true \
--enable-bootstrap-token-auth \
--service-cluster-ip-range=10.255.0.0/16 \
--token-auth-file=/etc/kubernetes/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--etcd-servers=https://192.168.2.135:2379,https://192.168.2.136:2379,https://192.168.2.140:2379 \
--enable-swagger-ui=true \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kube-apiserver-audit.log \
--event-ttl=1h \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=4"
Notes
--logtostderr: enable logging
--v: log level
--log-dir: log directory
--etcd-servers: etcd cluster addresses
--bind-address: listen address (with the keepalived HA scheme, set this to 0.0.0.0 so the apiserver can be reached through the VIP)
--secure-port: https secure port
--advertise-address: cluster advertise address
--allow-privileged: allow privileged containers
--service-cluster-ip-range: Service virtual IP address range
--enable-admission-plugins: admission control plugins
--authorization-mode: authentication/authorization mode; enables RBAC authorization and node self-management
--enable-bootstrap-token-auth: enable the TLS bootstrap mechanism
--token-auth-file: bootstrap token file
--service-node-port-range: default port range allocated to NodePort Services
--kubelet-client-xxx: client certificate the apiserver uses to access the kubelet
--tls-xxx-file: apiserver https certificates
--etcd-xxxfile: certificates for connecting to the etcd cluster
--audit-log-xxx: audit log settings
Create the service unit file
root@k8s-master1:/data/work# cat >kube-apiserver.service <<'EOF'
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service
[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
Copy the certificate files into the proper directories, and also copy them to the master2 node
root@k8s-master1:/data/work# cp ca*.pem kube-apiserver*.pem /etc/kubernetes/ssl/
root@k8s-master1:/data/work# cp token.csv /etc/kubernetes/
root@k8s-master1:/data/work# cp kube-apiserver.conf /etc/kubernetes/
root@k8s-master1:/data/work# cp kube-apiserver.service /usr/lib/systemd/system/
root@k8s-master1:/data/work# scp -r /etc/kubernetes master2:/etc/
token.csv 100% 84 44.3KB/s 00:00
kube-apiserver.conf 100% 1584 885.2KB/s 00:00
kube-apiserver.pem 100% 1590 1.5MB/s 00:00
ca-key.pem 100% 1679 2.0MB/s 00:00
ca.pem 100% 1298 1.9MB/s 00:00
kube-apiserver-key.pem 100% 1675 2.4MB/s 00:00
root@k8s-master1:/data/work# scp /usr/lib/systemd/system/kube-apiserver.service master2:/usr/lib/systemd/system/
kube-apiserver.service
NOTE!!!! In k8s-master2's kube-apiserver.conf, change the IP address to that machine's actual IP:
root@k8s-master2:/etc/kubernetes# cat kube-apiserver.conf
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
--anonymous-auth=false \
--bind-address=0.0.0.0 \
--secure-port=6443 \
--advertise-address=192.168.2.136 \
--insecure-port=0 \
--authorization-mode=Node,RBAC \
--runtime-config=api/all=true \
--enable-bootstrap-token-auth \
--service-cluster-ip-range=10.255.0.0/16 \
--token-auth-file=/etc/kubernetes/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-issuer=https://kubernetes.default.svc.cluster.local \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--etcd-servers=https://192.168.2.135:2379,https://192.168.2.136:2379,https://192.168.2.140:2379 \
--enable-swagger-ui=true \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kube-apiserver-audit.log \
--event-ttl=1h \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=4"
Start kube-apiserver
root@k8s-master1:/data/work# systemctl daemon-reload && systemctl enable --now kube-apiserver
Created symlink /etc/systemd/system/multi-user.target.wants/kube-apiserver.service → /lib/systemd/system/kube-apiserver.service.
root@k8s-master1:/data/work# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2024-08-07 15:11:14 CST; 14s ago
root@k8s-master2:~# systemctl daemon-reload && systemctl enable --now kube-apiserver
Created symlink /etc/systemd/system/multi-user.target.wants/kube-apiserver.service → /lib/systemd/system/kube-apiserver.service.
root@k8s-master2:~# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2024-08-07 15:13:45 CST; 11s ago
root@k8s-master1:/data/work# curl -k https://192.168.2.135:6443/
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "Unauthorized",
"reason": "Unauthorized",
"code": 401
}
The 401 above is the expected state; we have not authenticated yet.
Deploy the kubectl component
About kubectl
kubectl is the client tool for operating on k8s resources: create, delete, update, query, and so on.
How does kubectl know which cluster to connect to when it operates on resources? It needs the file /etc/kubernetes/admin.conf; kubectl accesses k8s resources according to that file's configuration.
/etc/kubernetes/admin.conf records which k8s cluster to access and which certificates to use. You can set the environment variable KUBECONFIG:
[root@k8s-master1 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
Then when you run kubectl, it automatically loads KUBECONFIG to decide which cluster's k8s resources to manage.
You can also use the method below, which is what kubeadm suggests after initializing a cluster:
[root@k8s-master1 ~]# cp /etc/kubernetes/admin.conf /root/.kube/config
Then kubectl loads /root/.kube/config to operate on k8s resources.
If KUBECONFIG is set it takes precedence; without a KUBECONFIG variable, /root/.kube/config decides which k8s cluster's resources are managed. Note: admin.conf does not exist yet; it is created in the steps below.
Create the CSR request file
root@k8s-master1:/data/work# cat admin-csr.json
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Hubei",
"L": "Wuhan",
"O": "system:masters",
"OU": "system"
}
]
}
Afterwards, kube-apiserver uses RBAC to authorize client requests (from kubelet, kube-proxy, Pods). kube-apiserver predefines some RoleBindings used by RBAC; for example, cluster-admin binds the Group system:masters to the Role cluster-admin, which grants permission to call all kube-apiserver APIs. O sets this certificate's Group to system:masters; when this certificate is used to access kube-apiserver, authentication succeeds because the certificate is signed by the CA, and because the certificate's group is the pre-authorized system:masters, it is granted access to all APIs.
Note: this admin certificate is what the administrator's kubeconfig file is later generated from. RBAC is now the generally recommended way to control role permissions in kubernetes; kubernetes takes the certificate's CN field as the User and the O field as the Group. "O": "system:masters" must be exactly system:masters, otherwise the later kubectl create clusterrolebinding fails.
With O set to system:masters, the in-cluster cluster-admin clusterrolebinding ties the system:masters group to the cluster-admin clusterrole.
Generate the client certificate
root@k8s-master1:/data/work# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json |cfssljson -bare admin
2024/08/07 15:28:54 [INFO] generate received request
2024/08/07 15:28:54 [INFO] received CSR
2024/08/07 15:28:54 [INFO] generating key: rsa-2048
2024/08/07 15:28:54 [INFO] encoded CSR
2024/08/07 15:28:54 [INFO] signed certificate with serial number 202934663019094511269755251183590216779451529437
2024/08/07 15:28:54 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
root@k8s-master1:/data/work# ls admin*.pem
admin-key.pem admin.pem
root@k8s-master1:/data/work# cp admin*.pem /etc/kubernetes/ssl/
root@k8s-master1:/data/work#
Configure the security context
Create the kubeconfig file (this part matters)
A kubeconfig is kubectl's configuration file; it contains all the information needed to access the apiserver: the apiserver address, the CA certificate, and the client's own certificate.
1. Set the cluster parameters
root@k8s-master1:/data/work# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.2.135:6443 --kubeconfig=kube.config
Cluster "kubernetes" set.
View the contents of kube.config:
root@k8s-master1:/data/work# cat kube.config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURrakNDQW5xZ0F3SUJBZ0lVWk1XQ2lJT3NGVGVKYm9WR2VQOG8vOTYyb1Y0d0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qUXdPREEzTURFeE5EQXdXaGNOTXpRd09EQTFNREV4TkRBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU1VN2FTanJQOExsT2FLcjRCdUIrRVlLSDA4Q3lrOHAKUTc1WUJHWUVINWxrVVpRVkFBSG9wM3p3SUxuMVczL2dRNVdURXY1dmpzbVAzY3JBRk5EVWU0U0pTNTZPQjlMRApsVjdNWVNpeGRHMERLeGdQZjVVNVNBQTFrbWg1L2h3R25TK0FnSTBlZzhnWHMrTms3Ym5rSFNFazZHRlZGczVEClk5NmlmTVMrOFFaVWhMOHpKcmlQYUc3NjZ1MXZRRTZUVjcyUytOVnNVNlB1SmlGTnorbC9YeHNNV21VV3R0SDYKM1ZxWTZrSTBUUDdwZ3BDV3VabkoyYTEzLzdGWVlVNE5sd1A2MDlvZnBkMHNndjV3NitwZGptdnNnNWFSRWk4VgpMUUM3N3IrSHgxVER5L1QzQ2J4Y3E3UytwY29LV25ndXNLUDRTMmt4U283VUE4L0dzak9vS2o4Q0F3RUFBYU5DCk1FQXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZFSkkKbTd6MG9HV09HRzl5NEpvYjJCVnp2R3UwTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFBOGhNVXAyemsva1h1UQpzR0Q5b3VLOEwvVEZWTWJlK2oyOXBtUmpkcThVUjJ0Q3VEa3A1QUhVMFZjbHlyQmd5U282UW5TY2t3bG9oMTN0CmFiMDNKYi9VNit6clR3U3VBdG9oeGlObWI3NXluZjJmanZ4djhGS0RYZnZvZ3R0aVBHNzNuWXlPcEcwODdqZCsKaHRzcERDYmRTYlJ4eGUrejZsb2lZZ1F0TExSOW10WnhjWTZhSXdsVkZ5MmQyekpZdFNJSnNLeHFCMk4zT0t1MgpsNUlBTGYyaGhjNENya3RrTnR5ZHRIUitrdll2Ny95UjloTWF3MTBPWVUxcUpaQnU3UEdlTXc4emtUWXRka0NjCmo3UlEwRjN5RSs4RVVqdmZ2RGVsUERZUWRqWkJwTlNjUmc1MHdzYXRwN0krRXhvVjFiUGtvM2ZsQkZ0N1lWYzkKVEloZWVndWEKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://192.168.2.135:6443
name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
2. Set the client authentication parameters
root@k8s-master1:/data/work# kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config
User "admin" set.
root@k8s-master1:/data/work# cat kube.config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURrakNDQW5xZ0F3SUJBZ0lVWk1XQ2lJT3NGVGVKYm9WR2VQOG8vOTYyb1Y0d0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qUXdPREEzTURFeE5EQXdXaGNOTXpRd09EQTFNREV4TkRBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU1VN2FTanJQOExsT2FLcjRCdUIrRVlLSDA4Q3lrOHAKUTc1WUJHWUVINWxrVVpRVkFBSG9wM3p3SUxuMVczL2dRNVdURXY1dmpzbVAzY3JBRk5EVWU0U0pTNTZPQjlMRApsVjdNWVNpeGRHMERLeGdQZjVVNVNBQTFrbWg1L2h3R25TK0FnSTBlZzhnWHMrTms3Ym5rSFNFazZHRlZGczVEClk5NmlmTVMrOFFaVWhMOHpKcmlQYUc3NjZ1MXZRRTZUVjcyUytOVnNVNlB1SmlGTnorbC9YeHNNV21VV3R0SDYKM1ZxWTZrSTBUUDdwZ3BDV3VabkoyYTEzLzdGWVlVNE5sd1A2MDlvZnBkMHNndjV3NitwZGptdnNnNWFSRWk4VgpMUUM3N3IrSHgxVER5L1QzQ2J4Y3E3UytwY29LV25ndXNLUDRTMmt4U283VUE4L0dzak9vS2o4Q0F3RUFBYU5DCk1FQXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZFSkkKbTd6MG9HV09HRzl5NEpvYjJCVnp2R3UwTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFBOGhNVXAyemsva1h1UQpzR0Q5b3VLOEwvVEZWTWJlK2oyOXBtUmpkcThVUjJ0Q3VEa3A1QUhVMFZjbHlyQmd5U282UW5TY2t3bG9oMTN0CmFiMDNKYi9VNit6clR3U3VBdG9oeGlObWI3NXluZjJmanZ4djhGS0RYZnZvZ3R0aVBHNzNuWXlPcEcwODdqZCsKaHRzcERDYmRTYlJ4eGUrejZsb2lZZ1F0TExSOW10WnhjWTZhSXdsVkZ5MmQyekpZdFNJSnNLeHFCMk4zT0t1MgpsNUlBTGYyaGhjNENya3RrTnR5ZHRIUitrdll2Ny95UjloTWF3MTBPWVUxcUpaQnU3UEdlTXc4emtUWXRka0NjCmo3UlEwRjN5RSs4RVVqdmZ2RGVsUERZUWRqWkJwTlNjUmc1MHdzYXRwN0krRXhvVjFiUGtvM2ZsQkZ0N1lWYzkKVEloZWVndWEKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://192.168.2.135:6443
name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users:
- name: admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQxVENDQXIyZ0F3SUJBZ0lVSTR2bnJ1bVV0c3NmeTdydUVFdzdPZklLK04wd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qUXdPREEzTURjeU5EQXdXaGNOTXpRd09EQTFNRGN5TkRBd1dqQm5NUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVJjd0ZRWURWUVFLRXc1egplWE4wWlcwNmJXRnpkR1Z5Y3pFUE1BMEdBMVVFQ3hNR2MzbHpkR1Z0TVE0d0RBWURWUVFERXdWaFpHMXBiakNDCkFTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS3dGL0dWdGlLNTdHYTgxRURaOWhEbzAKRGxpcVlWUHFxQlRmbENOcmZUS1YrNnNNMnJaZWNzRFlHb01kK3orbTNoNmlDNEtTVGlIaGpUbHd6RVNEVWdHWApWazBUYmUxdTZQRHlGK2FWeXoyVlFhaURTOVN4Y002RStyMWszVkdXbURJV2VjM3NhZGRGZ1ZKa2V4VVFqN2E2CmljRC94WjNoUERMUEdoaUYxSVZPK3BPSExXS1BKeDVhR3hlUFZCdUdrQnloTTIrMkl5UmMzZ3JRWjlGRkYwbTQKWXhtT1VET2lWbE5VVlp1ek1wZnY3VmUzRUdMbHZ6NkY3NStLdXlxREFmbDFIZS9USm0zRmtubDlkSU1rSW9kcgovUkE0aWZSSnZhQzVBa2ZQbDYrNWExZTJGWmhmdWlRVGxkTjF3aFRFTzhoNmpQQWo4T0pnNS9oa2RRVktKeUVDCkF3RUFBYU4vTUgwd0RnWURWUjBQQVFIL0JBUURBZ1dnTUIwR0ExVWRKUVFXTUJRR0NDc0dBUVVGQndNQkJnZ3IKQmdFRkJRY0RBakFNQmdOVkhSTUJBZjhFQWpBQU1CMEdBMVVkRGdRV0JCUkhQSEFhUytUUGZyL1I4NkVQVE82SQpkSDJTeERBZkJnTlZIU01FR0RBV2dCUkNTSnU4OUtCbGpoaHZjdUNhRzlnVmM3eHJ0REFOQmdrcWhraUc5dzBCCkFRc0ZBQU9DQVFFQWg5RHlONFFjQXUveWVBRDM5NjE4OVVlVzBnZ0dsQk1rYlNhbFJ4N0EvR0VxNkp6RVNpMVMKM2FpZWhtS0JndlFHRHZYbFQvQjI3ZjZqaUw4RSthSzVGMTdiZDhnUWtQeGovSnRvUll0bWRnTkNENnN6REozaQorS2twVlY0d0NjSTRpTHkzTS9UcHNmbGJ1OXRjQjNCZEdyY2FQTEZ4SEk4emFWU0xMQ2JmcWJIcTNoMnNKNWZZCnBuN0hTV21jbWhsMDRESy9sT2dydS9EcGkvdnhoclRSdWZHTmdrQ2w3REVSZ2RIS3pWUmhwS3Q1VGJuV2ZoNEgKU29kWkx4OG5sbjJJZDRnL01XTkRJQjJHT3NKMS93YTdFVndjWWFFUkw4NndWNDRiNkVVQzJkL2k1aDY3YjF1MQpNZ1Q3T0RldURTVVNwT2JubmtHd3AwVzVnVCtIeU9WbTZ3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBckFYOFpXMklybnNacnpVUU5uMkVPalFPV0twaFUrcW9GTitVSTJ0OU1wWDdxd3phCnRsNXl3TmdhZ3gzN1A2YmVIcUlMZ3BKT0llR05PWERNUklOU0FaZFdUUk50N1c3bzhQSVg1cFhMUFpWQnFJTkwKMUxGd3pvVDZ2V1RkVVphWU1oWjV6ZXhwMTBXQlVtUjdGUkNQdHJxSndQL0ZuZUU4TXM4YUdJWFVoVTc2azRjdApZbzhuSGxvYkY0OVVHNGFRSEtFemI3WWpKRnplQ3RCbjBVVVhTYmhqR1k1UU02SldVMVJWbTdNeWwrL3RWN2NRCll1Vy9Qb1h2bjRxN0tvTUIrWFVkNzlNbWJjV1NlWDEwZ3lRaWgydjlFRGlKOUVtOW9Ma0NSOCtYcjdsclY3WVYKbUYrNkpCT1YwM1hDRk1RN3lIcU04Q1B3NG1EbitHUjFCVW9uSVFJREFRQUJBb0lCQUdwcnlabDJDZmpuYnh4VgpWNUplVkU4dHBUSjFOWUVVeXFjZktpWS9lVlN5Tk4rOU5CRmVuTjl3MGZZTHRrUEtsOStib0VORy84ODJHb2hPCm9CQkNyWmtPWnZXSDc1blQ0NGdzUFYwSmpwS3FvOVA4WmcxUE9OcUtxaFJCTWlvbllFQ2NadjVlSTV4cUEzZFYKY2srMXp6TGNkQnhTSDQ5c3FERkdybjQ5VFJ5cW0vRHV6aDdJUFBFY2QvZloyZXFPMGg5S3NZK0JERG9ObFV6bwpMcFIvZ2xMeGIxN3M0M05lZEYvMUFFM2hRQmdrVCtQVHEyTDQyNlFtRTZWalN5QUpJYUt4R3pDRUxGUklrTjRlCmplMVV5WWdPczUyUU01SGtndERKYi9lZEpKWVpIZWR1dFBFSHpheFg2Vlp4L3ZkZkNBMUlyTUZPTjF6T2NWL3MKbUFOZGNZRUNnWUVBMDhZcCtHVmwzVy92a1MvM3lxR3F3QlVjcUpyLzBTWUVEYWdJRDhQUkJ0UVdQYXRmVzlINApFVVRNQTdRRElpSUxFNEFXcnhsSWUzanZwOFdvMmYvOHNCZ08wSStrOU4vNnJYNmc2Zlh1MWp5QmExOUxKY2lBClA0eHhxMWNRSjlGNXVxc2VHQ1VPSGxlc2dSbDhNZDFuOFkvU283a2hqWmp6dUNzWjNRMDdwRThDZ1lFQXovS3QKQmdicnNGMmpEbEtSS1VWNDAvczJjWnhzbVNXVmlPVHJoNDVHRGVzYVdrUEdya1NPdmVUWnZzcjhqY1pJYVhNdgpJb0lwZjJWbVBudjhQbjQvbDAvTnhDd09GbE1ReFo2ZzNCc0lHSlpVckh5U1hCTmpod2wrRjBtUlpLSkErbXQwCkRET3hZN1ByU2NhSUU3RnBvUWFrdHo2NEZEY0F6Qmw2VkM1RjhZOENnWUExU1FWQ2RQRCttSzJrMEhiK3kxTFYKWmZxQ0NnNFlLQUtaRlJDQ052a2ZTTG9YNWtqbUo1ek5hNHdSMm5kM1hTMkFTSmhza21ZRWUxZUIxV0E1Q2dvZwpuTTBOZVRjK1RpVWJCbU9pdXJqUHV3V3RhSnJWOU84Z0RreURtako4Y2w2NHMxbXRKWlc1Mk1HVThqNm5wVmdFCkZmWWdML0xiV0FMcThoMWQyM2lJVFFLQmdENnhIUFRLTlZnd2dxNFl1bWJFNlE2UGwvUmNnbWtSYWFtaHlsaE4KemxUMzRqUUFadSsyLzRuRWF0a1lmVmVJeGQvMHQrc2hicjFYcHFHRDQ2STdrWlJlbk54ZG84bWJOVjArMjZSQQpDZ3JQbDZ1QXl1Y3plVGdHNXByQ3RUQ3ZzZ05OVGVrMzFHMElteERjNTcxNEtTNUF3SHYyVHF6WmdFWUlFRmM4CnRCMkZBb0dCQU1TRC9wbkN0RTRISEhXWkRGZXNFRmZXZGFnMTJrOFlwOFFGcFQvZWpjazcyL3YvN3IwS0hVeFIKd2JySGdZbHZONnNlSXVNNUlZS1dJbElFVUc4ZWVEZjd3SFRxT3pWc292QnhSeFZQSU5EQTlxNno4b2ZtM1JkegpRSDhITksveTZWMjFFTm5XN0hYZGYyVHEvWHJCR01CaFlGbnVkVm41Z0ZPMjVWQ0xRdW1pCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
3. Set the context parameters
root@k8s-master1:/data/work# kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config
Context "kubernetes" created.
root@k8s-master1:/data/work# cat kube.config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURrakNDQW5xZ0F3SUJBZ0lVWk1XQ2lJT3NGVGVKYm9WR2VQOG8vOTYyb1Y0d0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qUXdPREEzTURFeE5EQXdXaGNOTXpRd09EQTFNREV4TkRBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU1VN2FTanJQOExsT2FLcjRCdUIrRVlLSDA4Q3lrOHAKUTc1WUJHWUVINWxrVVpRVkFBSG9wM3p3SUxuMVczL2dRNVdURXY1dmpzbVAzY3JBRk5EVWU0U0pTNTZPQjlMRApsVjdNWVNpeGRHMERLeGdQZjVVNVNBQTFrbWg1L2h3R25TK0FnSTBlZzhnWHMrTms3Ym5rSFNFazZHRlZGczVEClk5NmlmTVMrOFFaVWhMOHpKcmlQYUc3NjZ1MXZRRTZUVjcyUytOVnNVNlB1SmlGTnorbC9YeHNNV21VV3R0SDYKM1ZxWTZrSTBUUDdwZ3BDV3VabkoyYTEzLzdGWVlVNE5sd1A2MDlvZnBkMHNndjV3NitwZGptdnNnNWFSRWk4VgpMUUM3N3IrSHgxVER5L1QzQ2J4Y3E3UytwY29LV25ndXNLUDRTMmt4U283VUE4L0dzak9vS2o4Q0F3RUFBYU5DCk1FQXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZFSkkKbTd6MG9HV09HRzl5NEpvYjJCVnp2R3UwTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFBOGhNVXAyemsva1h1UQpzR0Q5b3VLOEwvVEZWTWJlK2oyOXBtUmpkcThVUjJ0Q3VEa3A1QUhVMFZjbHlyQmd5U282UW5TY2t3bG9oMTN0CmFiMDNKYi9VNit6clR3U3VBdG9oeGlObWI3NXluZjJmanZ4djhGS0RYZnZvZ3R0aVBHNzNuWXlPcEcwODdqZCsKaHRzcERDYmRTYlJ4eGUrejZsb2lZZ1F0TExSOW10WnhjWTZhSXdsVkZ5MmQyekpZdFNJSnNLeHFCMk4zT0t1MgpsNUlBTGYyaGhjNENya3RrTnR5ZHRIUitrdll2Ny95UjloTWF3MTBPWVUxcUpaQnU3UEdlTXc4emtUWXRka0NjCmo3UlEwRjN5RSs4RVVqdmZ2RGVsUERZUWRqWkJwTlNjUmc1MHdzYXRwN0krRXhvVjFiUGtvM2ZsQkZ0N1lWYzkKVEloZWVndWEKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://192.168.2.135:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: admin
name: kubernetes
current-context: ""
kind: Config
preferences: {}
users:
- name: admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQxVENDQXIyZ0F3SUJBZ0lVSTR2bnJ1bVV0c3NmeTdydUVFdzdPZklLK04wd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qUXdPREEzTURjeU5EQXdXaGNOTXpRd09EQTFNRGN5TkRBd1dqQm5NUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVJjd0ZRWURWUVFLRXc1egplWE4wWlcwNmJXRnpkR1Z5Y3pFUE1BMEdBMVVFQ3hNR2MzbHpkR1Z0TVE0d0RBWURWUVFERXdWaFpHMXBiakNDCkFTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS3dGL0dWdGlLNTdHYTgxRURaOWhEbzAKRGxpcVlWUHFxQlRmbENOcmZUS1YrNnNNMnJaZWNzRFlHb01kK3orbTNoNmlDNEtTVGlIaGpUbHd6RVNEVWdHWApWazBUYmUxdTZQRHlGK2FWeXoyVlFhaURTOVN4Y002RStyMWszVkdXbURJV2VjM3NhZGRGZ1ZKa2V4VVFqN2E2CmljRC94WjNoUERMUEdoaUYxSVZPK3BPSExXS1BKeDVhR3hlUFZCdUdrQnloTTIrMkl5UmMzZ3JRWjlGRkYwbTQKWXhtT1VET2lWbE5VVlp1ek1wZnY3VmUzRUdMbHZ6NkY3NStLdXlxREFmbDFIZS9USm0zRmtubDlkSU1rSW9kcgovUkE0aWZSSnZhQzVBa2ZQbDYrNWExZTJGWmhmdWlRVGxkTjF3aFRFTzhoNmpQQWo4T0pnNS9oa2RRVktKeUVDCkF3RUFBYU4vTUgwd0RnWURWUjBQQVFIL0JBUURBZ1dnTUIwR0ExVWRKUVFXTUJRR0NDc0dBUVVGQndNQkJnZ3IKQmdFRkJRY0RBakFNQmdOVkhSTUJBZjhFQWpBQU1CMEdBMVVkRGdRV0JCUkhQSEFhUytUUGZyL1I4NkVQVE82SQpkSDJTeERBZkJnTlZIU01FR0RBV2dCUkNTSnU4OUtCbGpoaHZjdUNhRzlnVmM3eHJ0REFOQmdrcWhraUc5dzBCCkFRc0ZBQU9DQVFFQWg5RHlONFFjQXUveWVBRDM5NjE4OVVlVzBnZ0dsQk1rYlNhbFJ4N0EvR0VxNkp6RVNpMVMKM2FpZWhtS0JndlFHRHZYbFQvQjI3ZjZqaUw4RSthSzVGMTdiZDhnUWtQeGovSnRvUll0bWRnTkNENnN6REozaQorS2twVlY0d0NjSTRpTHkzTS9UcHNmbGJ1OXRjQjNCZEdyY2FQTEZ4SEk4emFWU0xMQ2JmcWJIcTNoMnNKNWZZCnBuN0hTV21jbWhsMDRESy9sT2dydS9EcGkvdnhoclRSdWZHTmdrQ2w3REVSZ2RIS3pWUmhwS3Q1VGJuV2ZoNEgKU29kWkx4OG5sbjJJZDRnL01XTkRJQjJHT3NKMS93YTdFVndjWWFFUkw4NndWNDRiNkVVQzJkL2k1aDY3YjF1MQpNZ1Q3T0RldURTVVNwT2JubmtHd3AwVzVnVCtIeU9WbTZ3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBckFYOFpXMklybnNacnpVUU5uMkVPalFPV0twaFUrcW9GTitVSTJ0OU1wWDdxd3phCnRsNXl3TmdhZ3gzN1A2YmVIcUlMZ3BKT0llR05PWERNUklOU0FaZFdUUk50N1c3bzhQSVg1cFhMUFpWQnFJTkwKMUxGd3pvVDZ2V1RkVVphWU1oWjV6ZXhwMTBXQlVtUjdGUkNQdHJxSndQL0ZuZUU4TXM4YUdJWFVoVTc2azRjdApZbzhuSGxvYkY0OVVHNGFRSEtFemI3WWpKRnplQ3RCbjBVVVhTYmhqR1k1UU02SldVMVJWbTdNeWwrL3RWN2NRCll1Vy9Qb1h2bjRxN0tvTUIrWFVkNzlNbWJjV1NlWDEwZ3lRaWgydjlFRGlKOUVtOW9Ma0NSOCtYcjdsclY3WVYKbUYrNkpCT1YwM1hDRk1RN3lIcU04Q1B3NG1EbitHUjFCVW9uSVFJREFRQUJBb0lCQUdwcnlabDJDZmpuYnh4VgpWNUplVkU4dHBUSjFOWUVVeXFjZktpWS9lVlN5Tk4rOU5CRmVuTjl3MGZZTHRrUEtsOStib0VORy84ODJHb2hPCm9CQkNyWmtPWnZXSDc1blQ0NGdzUFYwSmpwS3FvOVA4WmcxUE9OcUtxaFJCTWlvbllFQ2NadjVlSTV4cUEzZFYKY2srMXp6TGNkQnhTSDQ5c3FERkdybjQ5VFJ5cW0vRHV6aDdJUFBFY2QvZloyZXFPMGg5S3NZK0JERG9ObFV6bwpMcFIvZ2xMeGIxN3M0M05lZEYvMUFFM2hRQmdrVCtQVHEyTDQyNlFtRTZWalN5QUpJYUt4R3pDRUxGUklrTjRlCmplMVV5WWdPczUyUU01SGtndERKYi9lZEpKWVpIZWR1dFBFSHpheFg2Vlp4L3ZkZkNBMUlyTUZPTjF6T2NWL3MKbUFOZGNZRUNnWUVBMDhZcCtHVmwzVy92a1MvM3lxR3F3QlVjcUpyLzBTWUVEYWdJRDhQUkJ0UVdQYXRmVzlINApFVVRNQTdRRElpSUxFNEFXcnhsSWUzanZwOFdvMmYvOHNCZ08wSStrOU4vNnJYNmc2Zlh1MWp5QmExOUxKY2lBClA0eHhxMWNRSjlGNXVxc2VHQ1VPSGxlc2dSbDhNZDFuOFkvU283a2hqWmp6dUNzWjNRMDdwRThDZ1lFQXovS3QKQmdicnNGMmpEbEtSS1VWNDAvczJjWnhzbVNXVmlPVHJoNDVHRGVzYVdrUEdya1NPdmVUWnZzcjhqY1pJYVhNdgpJb0lwZjJWbVBudjhQbjQvbDAvTnhDd09GbE1ReFo2ZzNCc0lHSlpVckh5U1hCTmpod2wrRjBtUlpLSkErbXQwCkRET3hZN1ByU2NhSUU3RnBvUWFrdHo2NEZEY0F6Qmw2VkM1RjhZOENnWUExU1FWQ2RQRCttSzJrMEhiK3kxTFYKWmZxQ0NnNFlLQUtaRlJDQ052a2ZTTG9YNWtqbUo1ek5hNHdSMm5kM1hTMkFTSmhza21ZRWUxZUIxV0E1Q2dvZwpuTTBOZVRjK1RpVWJCbU9pdXJqUHV3V3RhSnJWOU84Z0RreURtako4Y2w2NHMxbXRKWlc1Mk1HVThqNm5wVmdFCkZmWWdML0xiV0FMcThoMWQyM2lJVFFLQmdENnhIUFRLTlZnd2dxNFl1bWJFNlE2UGwvUmNnbWtSYWFtaHlsaE4KemxUMzRqUUFadSsyLzRuRWF0a1lmVmVJeGQvMHQrc2hicjFYcHFHRDQ2STdrWlJlbk54ZG84bWJOVjArMjZSQQpDZ3JQbDZ1QXl1Y3plVGdHNXByQ3RUQ3ZzZ05OVGVrMzFHMElteERjNTcxNEtTNUF3SHYyVHF6WmdFWUlFRmM4CnRCMkZBb0dCQU1TRC9wbkN0RTRISEhXWkRGZXNFRmZXZGFnMTJrOFlwOFFGcFQvZWpjazcyL3YvN3IwS0hVeFIKd2JySGdZbHZONnNlSXVNNUlZS1dJbElFVUc4ZWVEZjd3SFRxT3pWc292QnhSeFZQSU5EQTlxNno4b2ZtM1JkegpRSDhITksveTZWMjFFTm5XN0hYZGYyVHEvWHJCR01CaFlGbnVkVm41Z0ZPMjVWQ0xRdW1pCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
4. Set the current context
root@k8s-master1:/data/work# kubectl config use-context kubernetes --kubeconfig=kube.config
Switched to context "kubernetes".
root@k8s-master1:/data/work# cat kube.config
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURrakNDQW5xZ0F3SUJBZ0lVWk1XQ2lJT3NGVGVKYm9WR2VQOG8vOTYyb1Y0d0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qUXdPREEzTURFeE5EQXdXaGNOTXpRd09EQTFNREV4TkRBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU1VN2FTanJQOExsT2FLcjRCdUIrRVlLSDA4Q3lrOHAKUTc1WUJHWUVINWxrVVpRVkFBSG9wM3p3SUxuMVczL2dRNVdURXY1dmpzbVAzY3JBRk5EVWU0U0pTNTZPQjlMRApsVjdNWVNpeGRHMERLeGdQZjVVNVNBQTFrbWg1L2h3R25TK0FnSTBlZzhnWHMrTms3Ym5rSFNFazZHRlZGczVEClk5NmlmTVMrOFFaVWhMOHpKcmlQYUc3NjZ1MXZRRTZUVjcyUytOVnNVNlB1SmlGTnorbC9YeHNNV21VV3R0SDYKM1ZxWTZrSTBUUDdwZ3BDV3VabkoyYTEzLzdGWVlVNE5sd1A2MDlvZnBkMHNndjV3NitwZGptdnNnNWFSRWk4VgpMUUM3N3IrSHgxVER5L1QzQ2J4Y3E3UytwY29LV25ndXNLUDRTMmt4U283VUE4L0dzak9vS2o4Q0F3RUFBYU5DCk1FQXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZFSkkKbTd6MG9HV09HRzl5NEpvYjJCVnp2R3UwTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFBOGhNVXAyemsva1h1UQpzR0Q5b3VLOEwvVEZWTWJlK2oyOXBtUmpkcThVUjJ0Q3VEa3A1QUhVMFZjbHlyQmd5U282UW5TY2t3bG9oMTN0CmFiMDNKYi9VNit6clR3U3VBdG9oeGlObWI3NXluZjJmanZ4djhGS0RYZnZvZ3R0aVBHNzNuWXlPcEcwODdqZCsKaHRzcERDYmRTYlJ4eGUrejZsb2lZZ1F0TExSOW10WnhjWTZhSXdsVkZ5MmQyekpZdFNJSnNLeHFCMk4zT0t1MgpsNUlBTGYyaGhjNENya3RrTnR5ZHRIUitrdll2Ny95UjloTWF3MTBPWVUxcUpaQnU3UEdlTXc4emtUWXRka0NjCmo3UlEwRjN5RSs4RVVqdmZ2RGVsUERZUWRqWkJwTlNjUmc1MHdzYXRwN0krRXhvVjFiUGtvM2ZsQkZ0N1lWYzkKVEloZWVndWEKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://192.168.2.135:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: admin
name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQxVENDQXIyZ0F3SUJBZ0lVSTR2bnJ1bVV0c3NmeTdydUVFdzdPZklLK04wd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qUXdPREEzTURjeU5EQXdXaGNOTXpRd09EQTFNRGN5TkRBd1dqQm5NUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVJjd0ZRWURWUVFLRXc1egplWE4wWlcwNmJXRnpkR1Z5Y3pFUE1BMEdBMVVFQ3hNR2MzbHpkR1Z0TVE0d0RBWURWUVFERXdWaFpHMXBiakNDCkFTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS3dGL0dWdGlLNTdHYTgxRURaOWhEbzAKRGxpcVlWUHFxQlRmbENOcmZUS1YrNnNNMnJaZWNzRFlHb01kK3orbTNoNmlDNEtTVGlIaGpUbHd6RVNEVWdHWApWazBUYmUxdTZQRHlGK2FWeXoyVlFhaURTOVN4Y002RStyMWszVkdXbURJV2VjM3NhZGRGZ1ZKa2V4VVFqN2E2CmljRC94WjNoUERMUEdoaUYxSVZPK3BPSExXS1BKeDVhR3hlUFZCdUdrQnloTTIrMkl5UmMzZ3JRWjlGRkYwbTQKWXhtT1VET2lWbE5VVlp1ek1wZnY3VmUzRUdMbHZ6NkY3NStLdXlxREFmbDFIZS9USm0zRmtubDlkSU1rSW9kcgovUkE0aWZSSnZhQzVBa2ZQbDYrNWExZTJGWmhmdWlRVGxkTjF3aFRFTzhoNmpQQWo4T0pnNS9oa2RRVktKeUVDCkF3RUFBYU4vTUgwd0RnWURWUjBQQVFIL0JBUURBZ1dnTUIwR0ExVWRKUVFXTUJRR0NDc0dBUVVGQndNQkJnZ3IKQmdFRkJRY0RBakFNQmdOVkhSTUJBZjhFQWpBQU1CMEdBMVVkRGdRV0JCUkhQSEFhUytUUGZyL1I4NkVQVE82SQpkSDJTeERBZkJnTlZIU01FR0RBV2dCUkNTSnU4OUtCbGpoaHZjdUNhRzlnVmM3eHJ0REFOQmdrcWhraUc5dzBCCkFRc0ZBQU9DQVFFQWg5RHlONFFjQXUveWVBRDM5NjE4OVVlVzBnZ0dsQk1rYlNhbFJ4N0EvR0VxNkp6RVNpMVMKM2FpZWhtS0JndlFHRHZYbFQvQjI3ZjZqaUw4RSthSzVGMTdiZDhnUWtQeGovSnRvUll0bWRnTkNENnN6REozaQorS2twVlY0d0NjSTRpTHkzTS9UcHNmbGJ1OXRjQjNCZEdyY2FQTEZ4SEk4emFWU0xMQ2JmcWJIcTNoMnNKNWZZCnBuN0hTV21jbWhsMDRESy9sT2dydS9EcGkvdnhoclRSdWZHTmdrQ2w3REVSZ2RIS3pWUmhwS3Q1VGJuV2ZoNEgKU29kWkx4OG5sbjJJZDRnL01XTkRJQjJHT3NKMS93YTdFVndjWWFFUkw4NndWNDRiNkVVQzJkL2k1aDY3YjF1MQpNZ1Q3T0RldURTVVNwT2JubmtHd3AwVzVnVCtIeU9WbTZ3PT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBckFYOFpXMklybnNacnpVUU5uMkVPalFPV0twaFUrcW9GTitVSTJ0OU1wWDdxd3phCnRsNXl3TmdhZ3gzN1A2YmVIcUlMZ3BKT0llR05PWERNUklOU0FaZFdUUk50N1c3bzhQSVg1cFhMUFpWQnFJTkwKMUxGd3pvVDZ2V1RkVVphWU1oWjV6ZXhwMTBXQlVtUjdGUkNQdHJxSndQL0ZuZUU4TXM4YUdJWFVoVTc2azRjdApZbzhuSGxvYkY0OVVHNGFRSEtFemI3WWpKRnplQ3RCbjBVVVhTYmhqR1k1UU02SldVMVJWbTdNeWwrL3RWN2NRCll1Vy9Qb1h2bjRxN0tvTUIrWFVkNzlNbWJjV1NlWDEwZ3lRaWgydjlFRGlKOUVtOW9Ma0NSOCtYcjdsclY3WVYKbUYrNkpCT1YwM1hDRk1RN3lIcU04Q1B3NG1EbitHUjFCVW9uSVFJREFRQUJBb0lCQUdwcnlabDJDZmpuYnh4VgpWNUplVkU4dHBUSjFOWUVVeXFjZktpWS9lVlN5Tk4rOU5CRmVuTjl3MGZZTHRrUEtsOStib0VORy84ODJHb2hPCm9CQkNyWmtPWnZXSDc1blQ0NGdzUFYwSmpwS3FvOVA4WmcxUE9OcUtxaFJCTWlvbllFQ2NadjVlSTV4cUEzZFYKY2srMXp6TGNkQnhTSDQ5c3FERkdybjQ5VFJ5cW0vRHV6aDdJUFBFY2QvZloyZXFPMGg5S3NZK0JERG9ObFV6bwpMcFIvZ2xMeGIxN3M0M05lZEYvMUFFM2hRQmdrVCtQVHEyTDQyNlFtRTZWalN5QUpJYUt4R3pDRUxGUklrTjRlCmplMVV5WWdPczUyUU01SGtndERKYi9lZEpKWVpIZWR1dFBFSHpheFg2Vlp4L3ZkZkNBMUlyTUZPTjF6T2NWL3MKbUFOZGNZRUNnWUVBMDhZcCtHVmwzVy92a1MvM3lxR3F3QlVjcUpyLzBTWUVEYWdJRDhQUkJ0UVdQYXRmVzlINApFVVRNQTdRRElpSUxFNEFXcnhsSWUzanZwOFdvMmYvOHNCZ08wSStrOU4vNnJYNmc2Zlh1MWp5QmExOUxKY2lBClA0eHhxMWNRSjlGNXVxc2VHQ1VPSGxlc2dSbDhNZDFuOFkvU283a2hqWmp6dUNzWjNRMDdwRThDZ1lFQXovS3QKQmdicnNGMmpEbEtSS1VWNDAvczJjWnhzbVNXVmlPVHJoNDVHRGVzYVdrUEdya1NPdmVUWnZzcjhqY1pJYVhNdgpJb0lwZjJWbVBudjhQbjQvbDAvTnhDd09GbE1ReFo2ZzNCc0lHSlpVckh5U1hCTmpod2wrRjBtUlpLSkErbXQwCkRET3hZN1ByU2NhSUU3RnBvUWFrdHo2NEZEY0F6Qmw2VkM1RjhZOENnWUExU1FWQ2RQRCttSzJrMEhiK3kxTFYKWmZxQ0NnNFlLQUtaRlJDQ052a2ZTTG9YNWtqbUo1ek5hNHdSMm5kM1hTMkFTSmhza21ZRWUxZUIxV0E1Q2dvZwpuTTBOZVRjK1RpVWJCbU9pdXJqUHV3V3RhSnJWOU84Z0RreURtako4Y2w2NHMxbXRKWlc1Mk1HVThqNm5wVmdFCkZmWWdML0xiV0FMcThoMWQyM2lJVFFLQmdENnhIUFRLTlZnd2dxNFl1bWJFNlE2UGwvUmNnbWtSYWFtaHlsaE4KemxUMzRqUUFadSsyLzRuRWF0a1lmVmVJeGQvMHQrc2hicjFYcHFHRDQ2STdrWlJlbk54ZG84bWJOVjArMjZSQQpDZ3JQbDZ1QXl1Y3plVGdHNXByQ3RUQ3ZzZ05OVGVrMzFHMElteERjNTcxNEtTNUF3SHYyVHF6WmdFWUlFRmM4CnRCMkZBb0dCQU1TRC9wbkN0RTRISEhXWkRGZXNFRmZXZGFnMTJrOFlwOFFGcFQvZWpjazcyL3YvN3IwS0hVeFIKd2JySGdZbHZONnNlSXVNNUlZS1dJbElFVUc4ZWVEZjd3SFRxT3pWc292QnhSeFZQSU5EQTlxNno4b2ZtM1JkegpRSDhITksveTZWMjFFTm5XN0hYZGYyVHEvWHJCR01CaFlGbnVkVm41Z0ZPMjVWQ0xRdW1pCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
root@k8s-master1:/data/work# mkdir ~/.kube -p
root@k8s-master1:/data/work# cp kube.config /root/.kube/config
root@k8s-master1:/data/work# cp kube.config /etc/kubernetes/admin.conf
5. Authorize the kubernetes certificate to access the kubelet API
root@k8s-master1:/data/work# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
clusterrolebinding.rbac.authorization.k8s.io/kube-apiserver:kubelet-apis created
root@k8s-master1:/data/work# kubectl cluster-info
Kubernetes control plane is running at https://192.168.2.135:6443
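Now that kubectl authenticates with the admin certificate, the health endpoint that returned 401 to the anonymous curl earlier answers normally (it should print ok):
kubectl get --raw='/healthz'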
Query cluster information
root@k8s-master1:/data/work# kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Unhealthy Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused
scheduler Unhealthy Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused
etcd-0 Healthy {"health":"true","reason":""}
etcd-1 Healthy {"health":"true","reason":""}
root@k8s-master1:/data/work# kubectl get all --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.255.0.1 <none> 443/TCP 144m
Sync the kubectl config file to the other nodes
root@k8s-master2:~# mkdir -p /root/.kube
root@k8s-master1:~# scp -r /root/.kube/config master2:/root/.kube/
Configure kubectl sub-command completion (on master1 and master2):
root@k8s-master1:~# apt-get install bash-completion
root@k8s-master1:~# source /usr/share/bash-completion/bash_completion
root@k8s-master1:~# source <(kubectl completion bash)
root@k8s-master1:~# kubectl completion bash > ~/.kube/completion.bash.inc
root@k8s-master1:~# source '/root/.kube/completion.bash.inc'
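To make completion survive new shells, a common approach is to source both files from ~/.bashrc:
echo 'source /usr/share/bash-completion/bash_completion' >>~/.bashrc
echo 'source /root/.kube/completion.bash.inc' >>~/.bashrc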
Deploy the kube-controller-manager component
Create the kube-controller-manager CSR request file
root@k8s-master1:/data/work# cat kube-controller-manager-csr.json
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"hosts": [
"127.0.0.1",
"192.168.2.135",
"192.168.2.136",
"192.168.2.137",
"192.168.2.138",
"192.168.2.139",
"192.168.2.140"
],
"names": [
{
"C": "CN",
"ST": "Hubei",
"L": "Wuhan",
"O": "system:kube-controller-manager",
"OU": "system"
}
]
}
The hosts list contains the IPs of all kube-controller-manager nodes;
CN is system:kube-controller-manager;
O is system:kube-controller-manager;
the kubernetes built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions its work requires.
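Since kube-apiserver is already up, you can confirm that this built-in binding exists (an optional check; the name is the upstream default):
kubectl get clusterrolebinding system:kube-controller-manager -o wide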
Generate the kube-controller-manager certificate
root@k8s-master1:/data/work# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
2024/08/08 09:20:54 [INFO] generate received request
2024/08/08 09:20:54 [INFO] received CSR
2024/08/08 09:20:54 [INFO] generating key: rsa-2048
2024/08/08 09:20:54 [INFO] encoded CSR
2024/08/08 09:20:54 [INFO] signed certificate with serial number 647757414703145155970563093502545002451022037613
Create the kube-controller-manager kubeconfig
1. Set the cluster parameters
root@k8s-master1:/data/work# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.2.135:6443 --kubeconfig=kube-controller-manager.kubeconfig
Cluster "kubernetes" set.
root@k8s-master1:/data/work# cat kube-controller-manager.kubeconfig
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURrakNDQW5xZ0F3SUJBZ0lVWk1XQ2lJT3NGVGVKYm9WR2VQOG8vOTYyb1Y0d0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qUXdPREEzTURFeE5EQXdXaGNOTXpRd09EQTFNREV4TkRBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU1VN2FTanJQOExsT2FLcjRCdUIrRVlLSDA4Q3lrOHAKUTc1WUJHWUVINWxrVVpRVkFBSG9wM3p3SUxuMVczL2dRNVdURXY1dmpzbVAzY3JBRk5EVWU0U0pTNTZPQjlMRApsVjdNWVNpeGRHMERLeGdQZjVVNVNBQTFrbWg1L2h3R25TK0FnSTBlZzhnWHMrTms3Ym5rSFNFazZHRlZGczVEClk5NmlmTVMrOFFaVWhMOHpKcmlQYUc3NjZ1MXZRRTZUVjcyUytOVnNVNlB1SmlGTnorbC9YeHNNV21VV3R0SDYKM1ZxWTZrSTBUUDdwZ3BDV3VabkoyYTEzLzdGWVlVNE5sd1A2MDlvZnBkMHNndjV3NitwZGptdnNnNWFSRWk4VgpMUUM3N3IrSHgxVER5L1QzQ2J4Y3E3UytwY29LV25ndXNLUDRTMmt4U283VUE4L0dzak9vS2o4Q0F3RUFBYU5DCk1FQXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZFSkkKbTd6MG9HV09HRzl5NEpvYjJCVnp2R3UwTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFBOGhNVXAyemsva1h1UQpzR0Q5b3VLOEwvVEZWTWJlK2oyOXBtUmpkcThVUjJ0Q3VEa3A1QUhVMFZjbHlyQmd5U282UW5TY2t3bG9oMTN0CmFiMDNKYi9VNit6clR3U3VBdG9oeGlObWI3NXluZjJmanZ4djhGS0RYZnZvZ3R0aVBHNzNuWXlPcEcwODdqZCsKaHRzcERDYmRTYlJ4eGUrejZsb2lZZ1F0TExSOW10WnhjWTZhSXdsVkZ5MmQyekpZdFNJSnNLeHFCMk4zT0t1MgpsNUlBTGYyaGhjNENya3RrTnR5ZHRIUitrdll2Ny95UjloTWF3MTBPWVUxcUpaQnU3UEdlTXc4emtUWXRka0NjCmo3UlEwRjN5RSs4RVVqdmZ2RGVsUERZUWRqWkJwTlNjUmc1MHdzYXRwN0krRXhvVjFiUGtvM2ZsQkZ0N1lWYzkKVEloZWVndWEKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://192.168.2.135:6443
name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
2. Set the client authentication parameters
root@k8s-master1:/data/work# kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.kubeconfig
User "system:kube-controller-manager" set.
root@k8s-master1:/data/work# cat kube-controller-manager.kubeconfig
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURrakNDQW5xZ0F3SUJBZ0lVWk1XQ2lJT3NGVGVKYm9WR2VQOG8vOTYyb1Y0d0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qUXdPREEzTURFeE5EQXdXaGNOTXpRd09EQTFNREV4TkRBd1dqQmhNUXN3Q1FZRFZRUUcKRXdKRFRqRU9NQXdHQTFVRUNCTUZTSFZpWldreERqQU1CZ05WQkFjVEJWZDFhR0Z1TVF3d0NnWURWUVFLRXdOcgpPSE14RHpBTkJnTlZCQXNUQm5ONWMzUmxiVEVUTUJFR0ExVUVBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU1VN2FTanJQOExsT2FLcjRCdUIrRVlLSDA4Q3lrOHAKUTc1WUJHWUVINWxrVVpRVkFBSG9wM3p3SUxuMVczL2dRNVdURXY1dmpzbVAzY3JBRk5EVWU0U0pTNTZPQjlMRApsVjdNWVNpeGRHMERLeGdQZjVVNVNBQTFrbWg1L2h3R25TK0FnSTBlZzhnWHMrTms3Ym5rSFNFazZHRlZGczVEClk5NmlmTVMrOFFaVWhMOHpKcmlQYUc3NjZ1MXZRRTZUVjcyUytOVnNVNlB1SmlGTnorbC9YeHNNV21VV3R0SDYKM1ZxWTZrSTBUUDdwZ3BDV3VabkoyYTEzLzdGWVlVNE5sd1A2MDlvZnBkMHNndjV3NitwZGptdnNnNWFSRWk4VgpMUUM3N3IrSHgxVER5L1QzQ2J4Y3E3UytwY29LV25ndXNLUDRTMmt4U283VUE4L0dzak9vS2o4Q0F3RUFBYU5DCk1FQXdEZ1lEVlIwUEFRSC9CQVFEQWdFR01BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZFSkkKbTd6MG9HV09HRzl5NEpvYjJCVnp2R3UwTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFBOGhNVXAyemsva1h1UQpzR0Q5b3VLOEwvVEZWTWJlK2oyOXBtUmpkcThVUjJ0Q3VEa3A1QUhVMFZjbHlyQmd5U282UW5TY2t3bG9oMTN0CmFiMDNKYi9VNit6clR3U3VBdG9oeGlObWI3NXluZjJmanZ4djhGS0RYZnZvZ3R0aVBHNzNuWXlPcEcwODdqZCsKaHRzcERDYmRTYlJ4eGUrejZsb2lZZ1F0TExSOW10WnhjWTZhSXdsVkZ5MmQyekpZdFNJSnNLeHFCMk4zT0t1MgpsNUlBTGYyaGhjNENya3RrTnR5ZHRIUitrdll2Ny95UjloTWF3MTBPWVUxcUpaQnU3UEdlTXc4emtUWXRka0NjCmo3UlEwRjN5RSs4RVVqdmZ2RGVsUERZUWRqWkJwTlNjUmc1MHdzYXRwN0krRXhvVjFiUGtvM2ZsQkZ0N1lWYzkKVEloZWVndWEKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://192.168.2.135:6443
name: kubernetes
contexts: null
current-context: ""
kind: Config
preferences: {}
users:
- name: system:kube-controller-manager
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVLakNDQXhLZ0F3SUJBZ0lVY1haeWNqZU16Vnh4WVFxUmxQc1kwbmdhd20wd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1lURUxNQWtHQTFVRUJoTUNRMDR4RGpBTUJnTlZCQWdUQlVoMVltVnBNUTR3REFZRFZRUUhFd1ZYZFdoaApiakVNTUFvR0ExVUVDaE1EYXpoek1ROHdEUVlEVlFRTEV3WnplWE4wWlcweEV6QVJCZ05WQkFNVENtdDFZbVZ5CmJtVjBaWE13SGhjTk1qUXdPREE0TURFeE5qQXdXaGNOTXpRd09EQTJNREV4TmpBd1dqQ0JrREVMTUFrR0ExVUUKQmhNQ1EwNHhEakFNQmdOVkJBZ1RCVWgxWW1WcE1RNHdEQVlEVlFRSEV3VlhkV2hoYmpFbk1DVUdBMVVFQ2hNZQpjM2x6ZEdWdE9tdDFZbVV0WTI5dWRISnZiR3hsY2kxdFlXNWhaMlZ5TVE4d0RRWURWUVFMRXdaemVYTjBaVzB4Ckp6QWxCZ05WQkFNVEhuTjVjM1JsYlRwcmRXSmxMV052Ym5SeWIyeHNaWEl0YldGdVlXZGxjakNDQVNJd0RRWUoKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTnhJL3lPUGpBQThrWFQvbytLWEpVMUZHZzNmMVM0SQpRV3BXMGVVODdtTCtHVG01SGVxekppcjRUb1lUQnR6T09rKzZXSFFGVjlJOVdvZVNHa0h6VjJGUGhOam5SODRoCjJpc3liQ05tZTZLdHMxRGxMNW83THYyeVg2d0JRQ3NhSG1HeERhdUZINkdDSmxqVEhrblAwbVlRTXlabVl0aWIKbHEvUnd6OVZsVm1MYWd5K3lkV0M3Q0lSNnlQbXdPNWIwUkRoZjQ4S0VVbWpZWlp4QWJBYmVCakhiL1NFYnFsaApqYkVLS3IyeEtpZnhITG8waVlDbnArdk1GcUVNNWJWQnBNY1BrU2kyME85d1hPN0JSK25rV2NaUHRlUkZ1aWltCjh0OEpFYTNTQk5jZHE4bXBnUnI5bU0vWDJQMG94VE51V2JlbFdLdlVEcUJEVVBaZFhSR0FiUmtDQXdFQUFhT0IKcVRDQnBqQU9CZ05WSFE4QkFmOEVCQU1DQmFBd0hRWURWUjBsQkJZd0ZBWUlLd1lCQlFVSEF3RUdDQ3NHQVFVRgpCd01DTUF3R0ExVWRFd0VCL3dRQ01BQXdIUVlEVlIwT0JCWUVGRm1ZUGVsUXRWTHlUUzc2cERXb1U5QzNNMFlZCk1COEdBMVVkSXdRWU1CYUFGRUpJbTd6MG9HV09HRzl5NEpvYjJCVnp2R3UwTUNjR0ExVWRFUVFnTUI2SEJIOEEKQUFHSEJNQ29Cd3FIQk1Db0J3dUhCTUNvQncySEJNQ29CMk13RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUNuNApGQ1d2SDBHN3V6YnJMcXd1ZklIenhSZ1M4UlJjZXdiWEhVOHFYVEtQZVBYZXFub3h2L1NaLzhIN2lKM2hOa2FNCk5Bbmc3VGF5RVdHRGtHeitibVMxemJGb2k5Y0R0TngyNTk1bWcxcnFWL2xmZWJLTThJTm5wcUxkN2ZSMkNPSWEKMTJVNVdXOVhBa0xXeUp4TXBZUytlRjc3UU52NTYvU3pidHpFbzA4VnFIVG1ydDVSUXRDTGNoajI1RDNodW53UgpEUGlzei9jcXluWGR3bC9TQjlnNUEwWWRveXloKzhqMko0eDNaSTU1UURkeDZieFAyQTEyR0FqRGYwY0diN0lSCjJIRlhlUDl6azRnWXUzWHVRRkxlb1MwL3Z3V3pkMTFjOEpLeEs4UE16TndsckpzR1VMOTk5eUZBakY5Y3RNcnQKdFFuc0FBbjY1Um43cS9jdXlzST0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBM0VqL0k0K01BRHlSZFArajRwY2xUVVVhRGQvVkxnaEJhbGJSNVR6dVl2NFpPYmtkCjZyTW1LdmhPaGhNRzNNNDZUN3BZZEFWWDBqMWFoNUlhUWZOWFlVK0UyT2RIemlIYUt6SnNJMlo3b3EyelVPVXYKbWpzdS9iSmZyQUZBS3hvZVliRU5xNFVmb1lJbVdOTWVTYy9TWmhBekptWmkySnVXcjlIRFAxV1ZXWXRxREw3SgoxWUxzSWhIckkrYkE3bHZSRU9GL2p3b1JTYU5obG5FQnNCdDRHTWR2OUlSdXFXR05zUW9xdmJFcUovRWN1alNKCmdLZW42OHdXb1F6bHRVR2t4dytSS0xiUTczQmM3c0ZINmVSWnhrKzE1RVc2S0tieTN3a1JyZElFMXgycnlhbUIKR3YyWXo5ZlkvU2pGTTI1WnQ2VllxOVFPb0VOUTlsMWRFWUJ0R1FJREFRQUJBb0lCQUhITi9KSWduUkdhT1FPYQo5czRmakJQcGVWWmxwenNLNU5ETlhjN3l0YTNLM0xsbm03OGZJcjdjWGFVQ3UyN2oxRmhRUzFaVlZGTzNnc2U3CmdYbEZBSVd6a1V5RjRDRHNlRXdNMXJWTFF1QitvTDlRU0ZHRDlmajNhRm55bzNZaEhrVVdOWnZCUU9BdDN5WFEKbkR0QjlNN3AyNk1oRGp3ZDFiR3J5eFV6WDk5TUkvSTJaREZZakwvMkpQMDNsb0JaRWltbG94cGRaQ1UzWHhnUQp5NFFDdHp1ZWV5RFVvVzFJTm5DM0VIeHcyalZKWmp4d29PNmUvMHpUTTBxeGN2Vm85bDhkUlk1dk5BSnhydVh6CitES3Z6MDA0b2Q4b3loNXF0MGEyS2l4dWdqRnJuOVNLVnI3anJlNDc4VFNkSllBeGp0SFJQVVRZRmVEQUNOM0cKMTJSZEF3RUNnWUVBNzVkemVFMmVLamlkaUY3Rk9PMXlhMVpudEJQQkRLamRPOTZYV1JpVmw0dU85UGhGdkZnVwpMQ3pJelhXa05Ic1V1N1YybDhnanh2MTZ0SzlQL2ZiN1NVMEJxRmpQOGY3aXVESWRWSHJVNVZLUm9qRWJOYUlJCk95KzMyVzMwaTkvRkRkSDFBM0J3bUt1MXNVMXlPVHU1cVpUQUZIczdveTZrUWc0ZzcrR2U4bkVDZ1lFQTYxOFAKOWk1U0N4VlU5SVZBV2h2Y3dHVDJaUEI5OE85Q3dBYndZOXM4eUlGU1lQTlRJN2ZMUVNZSXRWY3BtSVhWZ0lpdwptS1NSM2ZmbXhaR1dOZktZOUlwMjRaSW5MbWlJaldnS29NV3A5TXk0Z1BrT0RHY2hOUmlDamxsNkFyS2VQb3Z4CnpRQ0FPejBPTHVGWE8yY1dHSjVwVldJRTFyNDYzT1owTUFKdnFTa0NnWUVBaGpWNU9pK0laTEE0RmxhMzlXNlYKQkdsdlIrRTA1NG1EKy9CeEtUaHJPMnV5bGFpcEw1ck1PTXlSWXYzK0VHUE50bVFzM1ZNQUw0eDMrdFNsWTJiQgpWa3NybllpNld4MWpGTGtGMHZmSFgvb0RtQzRYeHRCUCtnOTkxZThRNkhWZHBhTXhzMDU5MUJlRGZLRWNWZEVOCjdGOWx4Vk5Pa2RjanJkaktQSFZQR3hFQ2dZRUFyaFpVdnZnSnRLcmxlQ25xcS9zNXJvKytjbkF5Sm5kQS9yamoKS21ob3I4Qi9CcmhTUVBQYkFPZTV2eTZsMUdzQXZCM2R5RGpJcnMyQndaVnA3YUx1b01pZEgwQXpmSzdTZVF4Lwo5K1BiVGZYeGJXdElpY0hwbk5UeEU0cDRwUEFwL1FjVEpGWi9nZEVwNFdESVhXWmt3SGJDWCtXc3dJeFpDelBrCnNmSExWdWtDZ1lBVTIycVNncjUxdFd0TnFIT0d6MVdPTHRwYllNSEo2QWR1OG9oUUhORU9yMHhJU3VHWGdRRnAKZm9TZ3BEcXVTRWZQVUt2cGI2ZkIxQVUra1F3NUJNTGJpOFEzbVRFM3lRYnRPZ3dhODZjL3lURWhZZGJDNGVJOQpqVlBDR3NJcWtaL0NQZU84cGhKTEI0VjFIK1ljSGJPU1I2VTJPS01BSFdob1NrNm5iaHMxWmc9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
3. Set the context parameters
root@k8s-master1:/data/work# kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Context "system:kube-controller-manager" created.
root@k8s-master1:/data/work# cat kube-controller-manager.kubeconfig
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: (same base64 block as in the previous dump, omitted)
server: https://192.168.2.135:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: system:kube-controller-manager
name: system:kube-controller-manager
current-context: ""
kind: Config
preferences: {}
users:
- name: system:kube-controller-manager
user:
client-certificate-data: (same base64 block as in the previous dump, omitted)
client-key-data: (same base64 block as in the previous dump, omitted)
4. Set the current context
root@k8s-master1:/data/work# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig
Switched to context "system:kube-controller-manager".
root@k8s-master1:/data/work# cat kube-controller-manager.kubeconfig
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: (same base64 block as in the first dump, omitted)
server: https://192.168.2.135:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: system:kube-controller-manager
name: system:kube-controller-manager
current-context: system:kube-controller-manager
kind: Config
preferences: {}
users:
- name: system:kube-controller-manager
user:
client-certificate-data: (same base64 block as in the first dump, omitted)
client-key-data: (same base64 block as in the first dump, omitted)
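Tip: instead of cat, which prints the embedded certificates in full, kubectl config view shows the same structure with the sensitive data redacted:
root@k8s-master1:/data/work# kubectl config view --kubeconfig=kube-controller-manager.kubeconfig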
Create the kube-controller-manager configuration file
root@k8s-master1:/data/work# cat >kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--port=0 \
--secure-port=10257 \
--bind-address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \
--service-cluster-ip-range=10.255.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--allocate-node-cidrs=true \
--cluster-cidr=10.0.0.0/16 \
--experimental-cluster-signing-duration=87600h \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
--leader-elect=true \
--feature-gates=RotateKubeletServerCertificate=true \
--controllers=*,bootstrapsigner,tokencleaner \
--horizontal-pod-autoscaler-sync-period=10s \
--tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
--use-service-account-credentials=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"
Create the kube-controller-manager service unit file
root@k8s-master1:/data/work# cat >kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Start the kube-controller-manager service
root@k8s-master1:/data/work# cp kube-controller-manager*.pem /etc/kubernetes/ssl/
root@k8s-master1:/data/work# cp kube-controller-manager.kubeconfig /etc/kubernetes/
root@k8s-master1:/data/work# cp kube-controller-manager.conf /etc/kubernetes/
root@k8s-master1:/data/work# cp kube-controller-manager.service /usr/lib/systemd/system/
root@k8s-master1:/data/work# scp -r kube-controller-manager*.pem master2:/etc/kubernetes/ssl/
kube-controller-manager-key.pem 100% 1679 3.0MB/s 00:00
kube-controller-manager.pem 100% 1505 1.8MB/s 00:00
root@k8s-master1:/data/work# scp -r kube-controller-manager.kubeconfig master2:/etc/kubernetes/
kube-controller-manager.kubeconfig 100% 6415 3.7MB/s 00:00
root@k8s-master1:/data/work# scp -r kube-controller-manager.conf master2:/etc/kubernetes/
kube-controller-manager.conf 100% 1048 721.8KB/s 00:00
root@k8s-master1:/data/work# scp -r kube-controller-manager.service master2:/usr/lib/systemd/system/
kube-controller-manager.service 100% 324 288.2KB/s 00:00
root@k8s-master1:/data/work# systemctl daemon-reload && systemctl enable --now kube-controller-manager && systemctl status kube-controller-manager
root@k8s-master2:~# systemctl daemon-reload && systemctl enable --now kube-controller-manager && systemctl status kube-controller-manager
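A quick local health check for the controller-manager (the /healthz path on the secure port 10257 is anonymously accessible by default, and --bind-address is 127.0.0.1, so this must be run on the master itself):
root@k8s-master1:~# curl -sk https://127.0.0.1:10257/healthz
ok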

Deploy the kube-scheduler component
Create the CSR request for kube-scheduler
root@k8s-master1:/data/work# cat kube-scheduler-csr.json
{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1",
    "192.168.2.135",
    "192.168.2.136",
    "192.168.2.137",
    "192.168.2.138",
    "192.168.2.139",
    "192.168.2.140"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "system:kube-scheduler",
      "OU": "system"
    }
  ]
}
The hosts list contains the IPs of all kube-scheduler nodes; CN is system:kube-scheduler and O is system:kube-scheduler, so the ClusterRoleBinding system:kube-scheduler built into kubernetes grants kube-scheduler the permissions it needs to work.
Generate the kube-scheduler certificate
root@k8s-master1:/data/work# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
2024/08/08 10:02:11 [INFO] generate received request
2024/08/08 10:02:11 [INFO] received CSR
2024/08/08 10:02:11 [INFO] generating key: rsa-2048
2024/08/08 10:02:11 [INFO] encoded CSR
2024/08/08 10:02:11 [INFO] signed certificate with serial number 512402351679637476564588241239497857472370192274
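Optionally inspect the subject and validity period of the new certificate (assumes openssl is installed):
root@k8s-master1:/data/work# openssl x509 -in kube-scheduler.pem -noout -subject -dates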
Create the kubeconfig file for kube-scheduler
1. Set the cluster parameters
root@k8s-master1:/data/work# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.2.135:6443 --kubeconfig=kube-scheduler.kubeconfig
Cluster "kubernetes" set.
2. Set the client parameters
root@k8s-master1:/data/work# kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.kubeconfig
User "system:kube-scheduler" set.
3. Set the context parameters
root@k8s-master1:/data/work# kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Context "system:kube-scheduler" created.
4. Set the current context
root@k8s-master1:/data/work# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig
Switched to context "system:kube-scheduler".
Create the kube-scheduler configuration file
root@k8s-master1:/data/work# cat >kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \
--leader-elect=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"
Create the kube-scheduler service unit file
root@k8s-master1:/data/work# cat >kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
Copy the files to the master2 node and start the service
root@k8s-master1:/data/work# cp kube-scheduler*.pem /etc/kubernetes/ssl/
root@k8s-master1:/data/work# cp kube-scheduler.kubeconfig /etc/kubernetes/
root@k8s-master1:/data/work# cp kube-scheduler.conf /etc/kubernetes/
root@k8s-master1:/data/work# cp kube-scheduler.service /usr/lib/systemd/system/
root@k8s-master1:/data/work# scp kube-scheduler*.pem master2:/etc/kubernetes/ssl/
kube-scheduler-key.pem 100% 1679 890.1KB/s 00:00
kube-scheduler.pem 100% 1497 977.8KB/s 00:00
root@k8s-master1:/data/work# scp kube-scheduler.kubeconfig kube-scheduler.conf master2:/etc/kubernetes/
kube-scheduler.kubeconfig 100% 6367 4.4MB/s 00:00
kube-scheduler.conf 100% 208 287.6KB/s 00:00
root@k8s-master1:/data/work# scp kube-scheduler.service master2:/usr/lib/systemd/system/
kube-scheduler.service 100% 292 225.7KB/s 00:00
root@k8s-master1:/data/work# systemctl daemon-reload && systemctl enable --now kube-scheduler && systemctl status kube-scheduler
root@k8s-master2:~# systemctl daemon-reload && systemctl enable --now kube-scheduler && systemctl status kube-scheduler
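Because both masters run kube-scheduler and kube-controller-manager with --leader-elect=true, only one instance of each is active at a time. The current leaders can be checked via their leases (lease-based leader election is the default in this version, so this is only a sanity check):
root@k8s-master1:~# kubectl -n kube-system get lease kube-scheduler kube-controller-manager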

Deploy the kubelet component
kubelet: the kubelet on each Node periodically calls the API Server's REST interface to report its own status; the API Server receives this information and updates the node status in etcd. The kubelet also watches Pod information through the API Server and manages the Pods on its Node accordingly, creating, deleting, and updating them.
Perform the following operations on k8s-master1.
Create kubelet-bootstrap.kubeconfig
root@k8s-master1:/data/work# BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
root@k8s-master1:/data/work# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.2.135:6443 --kubeconfig=kubelet-bootstrap.kubeconfig
Cluster "kubernetes" set.
root@k8s-master1:/data/work# kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.kubeconfig
User "kubelet-bootstrap" set.
root@k8s-master1:/data/work# kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.kubeconfig
Context "default" created.
root@k8s-master1:/data/work# kubectl config use-context default --kubeconfig=kubelet-bootstrap.kubeconfig
Switched to context "default".
root@k8s-master1:/data/work# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
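The BOOTSTRAP_TOKEN extracted above is simply the first comma-separated field of token.csv, which was generated earlier in this walkthrough for the kube-apiserver. As a sanity check, the file should contain one line of roughly this shape (the uid and group columns here are illustrative; your values may differ):
root@k8s-master1:/data/work# cat /etc/kubernetes/token.csv
<32-character-hex-token>,kubelet-bootstrap,10001,"system:kubelet-bootstrap"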
There is no need to create kubelet.kubeconfig manually; that file is generated automatically.
Create the configuration file kubelet.json ("cgroupDriver": "systemd" must match the docker driver; replace address with the IP of your own k8s-node1)
root@k8s-master1:/data/work# cat kubelet.json
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.2.135",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": [
    "10.255.0.2"
  ]
}
Create the kubelet service unit file
root@k8s-master1:/data/work# cat >kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
--cert-dir=/etc/kubernetes/ssl \
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
--config=/etc/kubernetes/kubelet.json \
--network-plugin=cni \
--pod-infra-container-image=k8s.gcr.io/pause:3.2 \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
--hostname-override: display name, unique within the cluster
--network-plugin: enables CNI
--kubeconfig: empty path; the file is generated automatically and later used to connect to the apiserver
--bootstrap-kubeconfig: used on first start to request a certificate from the apiserver
--config: configuration parameter file
--cert-dir: directory where the kubelet certificates are generated
--pod-infra-container-image: image of the container that manages the Pod network
In the kubelet.json configuration file, change address to each node's own IP, then start the service on each worker node.
Upload pause-3.2.tar to the nodes and import the image (k8s.gcr.io/pause:3.2)
docker load -i pause-3.2.tar
Copy the kubelet binary, certificates and configuration files to the worker nodes
root@k8s-master1:/data/work# scp -r kubernetes/server/bin/kubelet node1:/usr/local/bin/
kubelet 100% 118MB 69.1MB/s 00:01
root@k8s-master1:/data/work# scp -r kubernetes/server/bin/kubelet node2:/usr/local/bin/
kubelet 100% 118MB 69.2MB/s 00:01
root@k8s-node1:~# mkdir /etc/kubernetes/ssl -p
root@k8s-node2:~# mkdir /etc/kubernetes/ssl -p
root@k8s-master1:/data/work# scp -r kubelet-bootstrap.kubeconfig kubelet.kubeconfig kubelet.json node1:/etc/kubernetes/
kubelet-bootstrap.kubeconfig 100% 2087 1.5MB/s 00:00
kubelet.kubeconfig 100% 2087 1.3MB/s 00:00
kubelet.json 100% 766 396.4KB/s 00:00
root@k8s-master1:/data/work# scp -r kubelet-bootstrap.kubeconfig kubelet.kubeconfig kubelet.json node2:/etc/kubernetes/
kubelet-bootstrap.kubeconfig 100% 2087 3.4MB/s 00:00
kubelet.kubeconfig 100% 2087 3.0MB/s 00:00
kubelet.json 100% 766 992.3KB/s 00:00
root@k8s-master1:/data/work# scp ca.pem node1:/etc/kubernetes/ssl/
ca.pem 100% 1298 1.2MB/s 00:00
root@k8s-master1:/data/work# scp ca.pem node2:/etc/kubernetes/ssl/
ca.pem 100% 1298 759.7KB/s 00:00
root@k8s-master1:/data/work# scp kubelet.service node1:/usr/lib/systemd/system/
kubelet.service 100% 649 1.1MB/s 00:00
root@k8s-master1:/data/work# scp kubelet.service node2:/usr/lib/systemd/system/
kubelet.service 100% 649 502.4KB/s 00:00
Start the kubelet service on the worker nodes (if the service fails with: Aug 08 14:21:22 k8s-node2 systemd[1733]: kubelet.service: Failed at step CHDIR spawning /usr/local/bin/kubelet: No such file or directory, the two directories below have not been created)
root@k8s-node1:~# mkdir /var/lib/kubelet /var/log/kubernetes -p
root@k8s-node2:~# mkdir /var/lib/kubelet /var/log/kubernetes -p
In /etc/kubernetes/kubelet.json, replace address with each node's own IP address, for example:
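(node1 is 192.168.2.137 and node2 is 192.168.2.138 in this setup; that IP is the only occurrence of 192.168.2.135 in the file, so a plain sed is safe)
root@k8s-node1:~# sed -i 's/192.168.2.135/192.168.2.137/' /etc/kubernetes/kubelet.json
root@k8s-node2:~# sed -i 's/192.168.2.135/192.168.2.138/' /etc/kubernetes/kubelet.json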

root@k8s-node1:~# systemctl daemon-reload && systemctl enable --now kubelet && systemctl status kubelet
root@k8s-node2:~# systemctl daemon-reload && systemctl enable --now kubelet && systemctl status kubelet

After confirming that the kubelet service started successfully, go to the k8s-master1 node and approve the bootstrap requests.
The following command shows that the worker nodes have sent CSR requests:
root@k8s-master1:/data/work# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
node-csr-AqEx4L9s5zC2wg1nJkrWAouj3aRX2mEYENhnC6x-gko 16m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap <none> Pending
node-csr-ge83doNA568ZQMe03QJOTXfeC-sqL6-G41AFSAqwwJQ 3m59s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap <none> Pending
Approve the node requests on the master node
root@k8s-master1:/data/work# kubectl certificate approve node-csr-AqEx4L9s5zC2wg1nJkrWAouj3aRX2mEYENhnC6x-gko
certificatesigningrequest.certificates.k8s.io/node-csr-AqEx4L9s5zC2wg1nJkrWAouj3aRX2mEYENhnC6x-gko approved
root@k8s-master1:/data/work# kubectl certificate approve node-csr-ge83doNA568ZQMe03QJOTXfeC-sqL6-G41AFSAqwwJQ
certificatesigningrequest.certificates.k8s.io/node-csr-ge83doNA568ZQMe03QJOTXfeC-sqL6-G41AFSAqwwJQ approved
Check the requests again
root@k8s-master1:/data/work# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
node-csr-AqEx4L9s5zC2wg1nJkrWAouj3aRX2mEYENhnC6x-gko 20m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap <none> Approved,Issued
node-csr-ge83doNA568ZQMe03QJOTXfeC-sqL6-G41AFSAqwwJQ 8m11s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap <none> Approved,Issued
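In a lab environment the pending requests can also be approved in one go (not recommended in production, since it approves everything blindly):
root@k8s-master1:/data/work# kubectl get csr -o name | xargs kubectl certificate approve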
Problem: CONDITION shows only Approved, without Issued
Cause: approved but not issued means the request passed authentication but the certificate was never actually signed and handed out.
Verify: kubectl get csr node-csr-H_RP_EgvacATe0bfEhlr_rPLTS4EVRjEr-0XukFgg3A -o yaml; status.certificate: prints no certificate
Troubleshoot: check whether the kube-controller-manager log reports errors: systemctl status kube-controller-manager,
for example: Jan 25 21:14:50 k8s-master1 kube-controller-manager[315]: E0125 21:14:50.897855 315 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://192.168.2.135:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": net/http: request canceled while waiting for connection. It cannot reach the api-server; inspection showed the API address in kube-controller-manager.kubeconfig was wrong.
After fixing the problem and restarting controller-manager, Issued appeared automatically.
Check on the master whether the nodes have joined properly
root@k8s-master1:/data/work# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-node1 NotReady <none> 2m16s v1.23.14
k8s-node2 NotReady <none> 95s v1.23.14
STATUS NotReady means the network plugin has not been installed yet
Deploy the kube-proxy component
Create the CSR request for kube-proxy
root@k8s-master1:/data/work# cat kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Hubei",
      "L": "Wuhan",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
Generate the certificate
root@k8s-master1:/data/work# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2024/08/08 14:46:42 [INFO] generate received request
2024/08/08 14:46:42 [INFO] received CSR
2024/08/08 14:46:42 [INFO] generating key: rsa-2048
2024/08/08 14:46:42 [INFO] encoded CSR
2024/08/08 14:46:42 [INFO] signed certificate with serial number 99906890704076854292872130320716163640275398132
2024/08/08 14:46:42 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Create the kubeconfig file
root@k8s-master1:/data/work# kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.2.135:6443 --kubeconfig=kube-proxy.kubeconfig
Cluster "kubernetes" set.
root@k8s-master1:/data/work# kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.
root@k8s-master1:/data/work# kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
Context "default" created.
root@k8s-master1:/data/work# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Switched to context "default".
Create the kube-proxy configuration file (on 2.138, remember to replace the IP addresses with its own)
root@k8s-master1:/data/work# cat kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.2.137
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.0.0.0/16
healthzBindAddress: 192.168.2.137:10256
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.2.137:10249
mode: "ipvs"
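mode "ipvs" only takes effect if the IPVS kernel modules are loaded; otherwise kube-proxy silently falls back to iptables mode. Load them on each worker node and persist them in /etc/modules (module names assume the Debian 11 5.10 kernel, where nf_conntrack replaced the older nf_conntrack_ipv4):
modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack; do echo $m >>/etc/modules; done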
Create the kube-proxy service unit file
root@k8s-master1:/data/work# cat >kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
--config=/etc/kubernetes/kube-proxy.yaml \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
Copy the kube-proxy files to the worker nodes
root@k8s-master1:/data/work# scp kube-proxy.kubeconfig kube-proxy.yaml node1:/etc/kubernetes/
kube-proxy.kubeconfig 100% 6173 4.6MB/s 00:00
kube-proxy.yaml 100% 292 228.4KB/s 00:00
root@k8s-master1:/data/work# scp kube-proxy.service node1:/usr/lib/systemd/system/
kube-proxy.service 100% 438 143.6KB/s 00:00
root@k8s-master1:/data/work# scp kube-proxy.kubeconfig kube-proxy.yaml node2:/etc/kubernetes/
kube-proxy.kubeconfig 100% 6173 5.0MB/s 00:00
kube-proxy.yaml 100% 292 199.8KB/s 00:00
root@k8s-master1:/data/work# scp kube-proxy.service node2:/usr/lib/systemd/system/
kube-proxy.service 100% 438 378.8KB/s 00:00
root@k8s-master1:/data/work# scp kubernetes/server/bin/kube-proxy node1:/usr/local/bin/
kube-proxy 100% 42MB 72.7MB/s 00:00
root@k8s-master1:/data/work# scp kubernetes/server/bin/kube-proxy node2:/usr/local/bin/
kube-proxy 100% 42MB 61.2MB/s 00:00
Start the kube-proxy service
root@k8s-node1:/etc/kubernetes# mkdir -p /var/lib/kube-proxy (without this directory the service fails with: Oct 18 11:05:43 m2 (be-proxy)[14064]: kube-proxy.service: Failed at step CHDIR spawning /usr/local/bin/kube-proxy: No such file or directory)
root@k8s-node1:/etc/kubernetes# systemctl daemon-reload && systemctl enable --now kube-proxy && systemctl status kube-proxy
root@k8s-node2:/etc/kubernetes# mkdir -p /var/lib/kube-proxy
root@k8s-node2:/etc/kubernetes# systemctl daemon-reload && systemctl enable --now kube-proxy && systemctl status kube-proxy
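To confirm ipvs mode is actually in use, list the virtual servers (install the tool first with apt-get install ipvsadm if it is missing):
root@k8s-node1:~# ipvsadm -Ln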

Deploy the calico component (curl -sLk -o calico.yaml https://calico-v3-25.netlify.app/archive/v3.25/manifests/calico.yaml)
Upload the offline image package calico.tar to the nodes and import it (node1 and node2)
docker load -i ./calico.tar
Upload the calico.yaml file to the /data/work directory on master1 (to change the Pod IP range, find the CALICO_IPV4POOL_CIDR variable, uncomment it and modify it as shown below)
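With --cluster-cidr in kube-controller-manager.conf and clusterCIDR in kube-proxy.yaml both set to 10.0.0.0/16 above, the pool should read:
- name: CALICO_IPV4POOL_CIDR
  value: "10.0.0.0/16"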

root@k8s-master1:/data/work# kubectl apply -f calico.yaml

*A strange problem: the pods stay Pending, kubectl describe pod calico-node-gwlzz -n kube-system shows <none> for events, and docker ps -a on the node shows no calico-related containers.
*Troubleshooting approach: check the kube-scheduler.ERROR log under /var/log/kubernetes on master1.

It clearly fails to reach port 6443 (kube-apiserver) on 192.168.2.135. Check connectivity, the service, and whether the IP address is correct (the server: entry in kube-scheduler.kubeconfig):
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://<correct ip>:6443 --kubeconfig=kube-scheduler.kubeconfig
Then copy kube-scheduler.kubeconfig to /etc/kubernetes on master1 and master2 and restart the kube-scheduler service.
Problem 2:


root@node1:~# docker logs -f a1 reports:
2025-01-25 15:20:07.201 [ERROR][1] cni-installer/<nil> <nil>: Unable to create token for CNI kubeconfig error=Post "https://10.255.0.1:443/api/v1/namespaces/kube-system/serviceaccounts/calico-node/token": dial tcp 10.255.0.1:443: connect: connection refused
2025-01-25 15:20:07.201 [FATAL][1] cni-installer/<nil> <nil>: Unable to create token for CNI kubeconfig error=Post "https://10.255.0.1:443/api/v1/namespaces/kube-system/serviceaccounts/calico-node/token": dial tcp 10.255.0.1:443: connect: connection refused

Check: kube-proxy

It turned out the apiserver address was wrong; fix the address in kube-proxy.kubeconfig and restart the service.
Problem solved:

Deploy coredns (https://github.com/coredns/deployment/blob/master/kubernetes/coredns.yaml.sed)
- Change CLUSTER_DOMAIN to cluster.local
- Change REVERSE_CIDRS to in-addr.arpa ip6.arpa
- Change UPSTREAMNAMESERVER to /etc/resolv.conf; if that produces errors, use the DNS address of the current network instead
- Remove STUBDOMAINS
- Change CLUSTER_DNS_IP to 10.255.0.2 (it must match the clusterDNS configured in /etc/kubernetes/kubelet.json); all of these substitutions can be scripted with the sed sketch below
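A sketch of those substitutions with sed (the file name coredns.yaml.sed matches the upstream repository; adjust the values to your environment):
sed -e 's/CLUSTER_DOMAIN/cluster.local/g' \
    -e 's/REVERSE_CIDRS/in-addr.arpa ip6.arpa/g' \
    -e 's@UPSTREAMNAMESERVER@/etc/resolv.conf@g' \
    -e 's/CLUSTER_DNS_IP/10.255.0.2/g' \
    -e '/STUBDOMAINS/d' \
    coredns.yaml.sed > coredns.yaml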
Upload and import the coredns offline image package (node1 and node2)
docker load -i coredns.tar
Upload coredns.yaml to the master1 node
root@k8s-master1:/data/work# kubectl apply -f coredns.yaml
root@k8s-master1:/data/work# kubectl -n kube-system get pod
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-64cc74d646-7lfjt 1/1 Running 1 (43m ago) 55m
calico-node-gwlzz 1/1 Running 0 55m
calico-node-tcwgg 1/1 Running 0 55m
coredns-6fb76d9459-rdjkm 1/1 Running 0 16m
Verify DNS functionality:
root@m1:/data/work# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it -- sh
nslookup kubernetes.default.svc.cluster.local

Functional verification of the cluster components
Grant authorization to the system user kubernetes
root@k8s-master1:/data/work# kubectl create clusterrolebinding kubernetes-kubectl --clusterrole=cluster-admin --user=kubernetes
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-kubectl created
Test deploying a tomcat service on the k8s cluster
Prepare tomcat.yaml
kubectl create deployment tomcat --image=tomcat --port=8080 --replicas=2 --dry-run=client -o yaml >tomcat.yaml
root@k8s-master1:/data/work# cat tomcat.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: tomcat
  name: tomcat
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tomcat
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: tomcat
    spec:
      containers:
      - image: tomcat
        name: tomcat
        ports:
        - containerPort: 8080
        resources: {}
status: {}
kubectl expose deployment tomcat --type=NodePort
root@k8s-master1:/data/work# kubectl get pod
NAME READY STATUS RESTARTS AGE
tomcat-6b89757df7-7fxf4 1/1 Running 1 (23m ago) 17h
tomcat-6b89757df7-vs6sg 1/1 Running 1 (23m ago) 17h
root@k8s-master1:/data/work# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.255.0.1 <none> 443/TCP 43h
tomcat NodePort 10.255.110.227 <none> 8080:44749/TCP 17h
Open port 44749 on http://192.168.2.137 and http://192.168.2.138 in a browser:

Test whether coredns works properly
root@k8s-master1:/data/work# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it -- sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes.default.svc.cluster.local
Server: 10.255.0.2
Address 1: 10.255.0.2 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default.svc.cluster.local
Address 1: 10.255.0.1 kubernetes.default.svc.cluster.local
/ # ping www.baidu.com
PING www.baidu.com (180.101.50.188): 56 data bytes
64 bytes from 180.101.50.188: seq=0 ttl=51 time=228.038 ms
busybox must be the specified 1.28 version, not the latest; with the latest version, nslookup fails to resolve the DNS name and IPs.
10.255.0.2 is the clusterIP of our coreDNS, which shows coreDNS is configured correctly. Internal Service names are resolved through coreDNS.
Implement HA for the k8s apiserver
Upload the keepalived.tar image package to master1 and master2
Import the image package
docker load -i keepalived.tar
Start the container on master1
root@k8s-master1:~# docker run -d --name keepalived-master --net=host --restart unless-stopped -e KEEPALIVED_INTERFACE=enp0s3 -e KEEPALIVED_PRIORITY=100 -e KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['192.168.2.135','192.168.2.136']" -e KEEPALIVED_VIRTUAL_IPS="192.168.2.139" -e KEEPALIVED_STATE="MASTER" --privileged=true osixia/keepalived --loglevel debug


Start the container on master2
root@k8s-master2:~# docker run -d --name keepalived-backup --net=host --restart unless-stopped -e KEEPALIVED_INTERFACE=enp0s3 -e KEEPALIVED_PRIORITY=50 -e KEEPALIVED_UNICAST_PEERS="#PYTHON2BASH:['192.168.2.135','192.168.2.136']" -e KEEPALIVED_VIRTUAL_IPS="192.168.2.139" -e KEEPALIVED_STATE="BACKUP" --privileged=true osixia/keepalived --loglevel debug
Failover simulation
Simulate a keepalived failure on the primary node with docker stop keepalived-master; the VIP can be seen floating over to the backup node.
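The VIP's current location can be checked on either master:
root@k8s-master2:~# ip addr show enp0s3 | grep 192.168.2.139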
At this point all Worker Node components still connect to k8s-master1; if they are not switched over to the VIP behind the load balancer, the Master remains a single point of failure.
So the next step is to change the configuration files of all Worker Node components (the nodes listed by kubectl get node) from 192.168.2.135 to 192.168.2.139 (the VIP).
Run on all Worker Nodes:
sed -i 's/192.168.2.135:6443/192.168.2.139:6443/g' /etc/kubernetes/kubelet-bootstrap.kubeconfig /etc/kubernetes/kubelet.json /etc/kubernetes/kubelet.kubeconfig /etc/kubernetes/kube-proxy.yaml /etc/kubernetes/kube-proxy.kubeconfig
Restart the services
systemctl restart kubelet kube-proxy
Run on all masters:
sed -i 's/192.168.2.135:6443/192.168.2.139:6443/g' /etc/kubernetes/kube-scheduler.kubeconfig /etc/kubernetes/kube-controller-manager.kubeconfig /etc/kubernetes/admin.conf /root/.kube/config
Restart the services
systemctl restart kube-scheduler kube-controller-manager
Two problems encountered: 1. After simulating a master1 failure (systemctl stop networking), master2 became unusable: the etcd cluster had lost quorum; another member is needed, since an etcd cluster needs at least three members to tolerate the loss of one.
2. DUP! replies when pinging: the VMs' NIC MAC addresses were duplicated (the clone was not handled properly); shut the VM down and regenerate the MAC address in its settings:

Attachment: