# [K8s+Docker Complete Guide] 11. Node Services: Deploying kubelet
About the Docker & Kubernetes Complete Guide series: Kubernetes is a portable, extensible open-source platform for managing containerized workloads and services that facilitates declarative configuration and automation. Kubernetes has a large and rapidly growing ecosystem, and Kubernetes services, support, and tools are widely available. These notes are a study summary of a course from Old Boy Education (老男孩教育).
## Deploying kubelet on 99.151/152

### What is kubelet?

kubelet is an agent that runs on every node in the cluster. It takes a set of PodSpecs, provided through various mechanisms, and makes sure the containers described in those PodSpecs are running and healthy. kubelet does not manage containers that were not created by Kubernetes.

In short, it has three main jobs:

- Receive the desired state of pods (replica count, image, network, and so on) and call the container runtime to realize that state; today the container runtime is almost always Docker CE. Note that **pod networking is managed by kubelet, not by kube-proxy**.
- Periodically report node status to the apiserver, which the scheduler uses as the basis for scheduling decisions.
- Clean up unused images and containers so stale files do not waste disk space.

Of all the services on a worker node, kubelet is the most complex in the cluster.

### Signing the kubelet certificate on the ops host

#### Create the JSON config for the certificate signing request (CSR)

```bash
# 192.168.99.200
[root@k8s99-200 certs]# vim kubelet-csr.json
```

Contents:

```json
{
    "CN": "k8s_kubelet",
    "hosts": [
        "127.0.0.1",
        "192.168.0.1",
        "192.168.99.151",
        "192.168.99.152",
        "192.168.99.153",
        "192.168.99.100"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "sichuan",
            "L": "chengdu",
            "O": "study",
            "OU": "ops"
        }
    ]
}
```

`hosts` lists every address kubelet might be deployed on. Put all of those IPs in, and also add the virtual IP (VIP) `192.168.99.100`.

#### Generate the kubelet certificate and private key

```bash
# 192.168.99.200
[root@k8s99-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssljson -bare kubelet

# Check the signed certificate
[root@k8s99-200 certs]# ll | grep kubelet
-rw-r--r--. 1 root root 1086 Jun  2 21:55 kubelet.csr
-rw-r--r--. 1 root root  433 Jun  2 21:55 kubelet-csr.json
-rw-------. 1 root root 1679 Jun  2 21:55 kubelet-key.pem
-rw-r--r--. 1 root root 1456 Jun  2 21:55 kubelet.pem
```
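Before distributing the certificate, you may want to confirm that all of the `hosts` entries actually made it into the signed certificate as Subject Alternative Names. This is an optional sanity check, not part of the original procedure:

```bash
# Optional: decode kubelet.pem and show the SANs that were baked in
[root@k8s99-200 certs]# openssl x509 -in kubelet.pem -noout -text | grep -A1 'Subject Alternative Name'
# Expect 127.0.0.1, 192.168.0.1, 192.168.99.151/152/153 and the VIP 192.168.99.100
```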
### Preparing the pause base image on ops host 99.200

The pause container, full name infrastructure container (also called infra), is the base container of a pod.

The kubelet startup configuration always carries the flag `--pod-infra-container-image harbor.study.com/public/pause:latest`, which tells kubelet where to pull the pause image from.

#### What pause does

Nodes run many pause containers, one for each pod.

Every pod runs one special container called pause; the rest are the business containers. The business containers share the pause container's network stack and volume mounts, so communication and data exchange between them is efficient. You can exploit this at design time by placing a group of closely related service processes into the same pod. Containers in the same pod can reach each other simply via localhost. You can see the pause containers with `docker ps`.

#### What pause provides

The pause container in Kubernetes provides each business container with:

- PID namespace: different applications in the pod can see each other's process IDs.
- Network namespace: the containers in a pod share the same IP and port range.
- IPC namespace: the containers in a pod can communicate via System V IPC or POSIX message queues.
- UTS namespace: the containers in a pod share one hostname.
- Volumes (shared storage): each container in a pod can access volumes defined at the pod level.

#### Worked example (for illustration; no need to run it)

![](_v_images/20200604131234446_12672.png =564x)

1. First run a pause container on the node.

```bash
# docker run -d --name pause -p 8880:80 registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1
```

2. Then run an nginx container; nginx will proxy to `localhost:2368`.

```bash
# cat << EOF >> nginx.conf
error_log stderr;
events { worker_connections  1024; }
http {
    access_log /dev/stdout combined;
    server {
        listen 80 default_server;
        server_name example.com www.example.com;
        location / {
            proxy_pass http://127.0.0.1:2368;
        }
    }
}
EOF
# docker run -d --name nginx -v `pwd`/nginx.conf:/etc/nginx/nginx.conf --net=container:pause --ipc=container:pause --pid=container:pause nginx
```

3. Then create the application container for ghost, a blogging platform.

```bash
# docker run -d --name ghost --net=container:pause --ipc=container:pause --pid=container:pause ghost
```

Now visiting http://localhost:8880/ shows the ghost blog UI.

4. What happened

The pause container maps its internal port 80 to port 8880 on the host. Once pause has set up the network namespace on the host, the nginx container joins it (note the `--net=container:pause` on its docker run), and the ghost container joins it the same way. The three containers therefore share one network and can talk to each other over `localhost`. Likewise, `--ipc=container:pause --pid=container:pause` puts all three containers in the same IPC and PID namespaces, with `pause` as the init process. Now look at the processes from inside the ghost container:

```bash
# ps aux
USER       PID %CPU %MEM     VSZ   RSS TTY   STAT START  TIME COMMAND
root         1  0.0  0.0    1024     4 ?     Ss   13:49  0:00 /pause
root         5  0.0  0.1   32432  5736 ?     Ss   13:51  0:00 nginx: master p
systemd+     9  0.0  0.0   32980  3304 ?     S    13:51  0:00 nginx: worker p
node        10  0.3  2.0 1254200 83788 ?     Ssl  13:53  0:03 node current/in
root        79  0.1  0.0    4336   812 pts/0 Ss   14:09  0:00 sh
root        87  0.0  0.0   17500  2080 pts/0 R+   14:10  0:00 ps aux
```

From inside the ghost container you can also see the pause and nginx processes, and pause has PID 1. In Kubernetes, by contrast, PID 1 inside a container is the container's own business process.
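Another quick way to confirm the namespace sharing (an optional check, not in the original walkthrough) is to ask Docker which namespaces nginx and ghost joined:

```bash
# Both containers should report "container:<pause container id>" for their
# network, IPC and PID modes, i.e. they joined pause's namespaces.
# docker inspect --format '{{.HostConfig.NetworkMode}} {{.HostConfig.IpcMode}} {{.HostConfig.PidMode}}' nginx ghost
```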
#### Prepare the pause image

```bash
# Log in to the Harbor registry
[root@k8s99-200 ~]# docker login harbor.study.com
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded

# Pull the pause image
[root@k8s99-200 ~]# docker pull kubernetes/pause
Using default tag: latest
latest: Pulling from kubernetes/pause
4f4fb700ef54: Pull complete
b9c8ec465f6b: Pull complete
Digest: sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105
Status: Downloaded newer image for kubernetes/pause:latest
docker.io/kubernetes/pause:latest
[root@k8s99-200 ~]# docker images | grep pause
kubernetes/pause   latest   f9d5de079539   5 years ago   240kB

# Tag it
[root@k8s99-200 ~]# docker tag f9d5de079539 harbor.study.com/public/pause:latest

# Push it to the Harbor registry
[root@k8s99-200 ~]# docker push harbor.study.com/public/pause:latest
The push refers to repository [harbor.study.com/public/pause]
5f70bf18a086: Pushed
e16a89738269: Pushed
latest: digest: sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 size: 938
```

Open http://harbor.study.com/harbor/projects/2/repositories and you will see the `pause` image that was just pushed.
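Since every node's kubelet will pull this image through `--pod-infra-container-image`, it is worth confirming a node can actually reach it in Harbor. An optional check, assuming the node can resolve harbor.study.com and either has logged in or the public project allows anonymous pulls:

```bash
# On a node, verify the pause image can be pulled from Harbor
[root@k8s99-151 ~]# docker pull harbor.study.com/public/pause:latest
```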
### Deploying kubelet on 99.151

#### Copy the certificates from the ops host into certs

```bash
[root@k8s99-151 certs]# scp k8s99-200:/opt/certs/kubelet.pem .
[root@k8s99-151 certs]# scp k8s99-200:/opt/certs/kubelet-key.pem .

# Check the copied kubelet certificate and private key
[root@k8s99-151 certs]# ll
total 32
-rw-------. 1 root root 1675 Jun  1 23:03 apiserver-key.pem
-rw-r--r--. 1 root root 1619 Jun  1 23:03 apiserver.pem
-rw-------. 1 root root 1675 Jun  1 23:03 ca-key.pem
-rw-r--r--. 1 root root 1354 Jun  1 23:03 ca.pem
-rw-------. 1 root root 1679 Jun  1 23:03 client-key.pem
-rw-r--r--. 1 root root 1387 Jun  1 23:03 client.pem
-rw-------. 1 root root 1679 Jun  2 22:00 kubelet-key.pem
-rw-r--r--. 1 root root 1456 Jun  2 22:00 kubelet.pem
```

#### Create the configuration

With the certificates distributed to 99.151, create a config file for kubelet. This takes four steps; once generated, the file can be copied straight to other hosts, because these commands produce the same file everywhere.

```bash
[root@k8s99-151 certs]# cd ../conf/
[root@k8s99-151 conf]# ls
audit.yaml
[root@k8s99-151 conf]# pwd
/opt/kubernetes/server/bin/conf
```

The following steps generate the `kubelet.kubeconfig` config file in `/opt/kubernetes/server/bin/conf`.

kubeconfig files are modified with the `config` subcommands, `kubectl config SUBCOMMAND`, for example `kubectl config set current-context my-context`.

Option:

```bash
--kubeconfig="": use a specific config file.
```

Options inherited from the parent command:

```bash
--alsologtostderr[=false]: log to standard error as well as to files.
--api-version="": API version to use when talking to the server.
--certificate-authority="": path to the certificate authority file used for authentication.
--client-certificate="": path to the client certificate for TLS.
--client-key="": path to the client key for TLS.
--cluster="": name of the cluster entry in the kubeconfig file to use.
--context="": name of the context entry in the kubeconfig file to use.
--insecure-skip-tls-verify[=false]: if true, the server certificate is not checked for validity, which makes your HTTPS connections insecure.
--kubeconfig="": path to the config file used for CLI requests.
--log-backtrace-at=:0: when logging hits line file:N, emit a stack trace.
--log-dir="": if non-empty, write log files to this directory.
--log-flush-frequency=5s: maximum interval between log flushes.
--logtostderr[=true]: log to standard error instead of to files.
--match-server-version[=false]: require the server and client versions to match.
--namespace="": if non-empty, run the command in this namespace.
--password="": password for basic authentication against the API server.
-s, --server="": address and port of the Kubernetes API server.
--stderrthreshold=2: logs at or above this threshold go to the error console.
--token="": bearer token used to authenticate to the API server.
--user="": name of the user entry in the kubeconfig file to use.
--username="": username for basic authentication against the API server.
--v=0: log verbosity level.
--vmodule=: per-module logging, as a comma-separated list of pattern=N settings.
```

##### set-cluster

Set a cluster entry in the `kubeconfig` file. If the given name already exists, new fields are merged in and conflicting fields are overwritten.

###### Options

```bash
kubectl config set-cluster [cluster name]
--api-version=""                  # set the api-version for this cluster entry.
--certificate-authority=""        # path to the CA that issued the cluster certificate.
--embed-certs=false               # embed-certs switch for this cluster entry: embed the certificate data itself instead of a file path.
--insecure-skip-tls-verify=false  # insecure-skip-tls-verify switch for this cluster entry.
--server=""                       # server for this cluster entry: the kube-apiserver address.
--kubeconfig=kubelet.kubeconfig   # write the generated content into this file, here kubelet.kubeconfig.
```

###### Run it

Run this in the `conf` directory above:

```bash
# Set the cluster parameters
[root@k8s99-151 conf]# kubectl config set-cluster myk8s \
  --certificate-authority=/opt/kubernetes/server/bin/certs/ca.pem \
  --embed-certs=true \
  --server=https://192.168.99.100:7443 \
  --kubeconfig=kubelet.kubeconfig
Cluster "myk8s" set.

[root@k8s99-151 conf]# ls
audit.yaml  kubelet.kubeconfig

# View the generated file
[root@k8s99-151 conf]# cat kubelet.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0t...<base64 data truncated>
    server: https://192.168.99.100:7443
  name: myk8s
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
```

##### set-credentials

Set a user entry in the kubeconfig file. If the given name already exists, new fields are merged in and conflicting fields are overwritten.

Client certificate settings: `--client-certificate=certfile --client-key=keyfile`
Bearer token settings: `--token=bearer_token`
Basic auth settings: `--username=basic_user --password=basic_password`
Bearer token and basic auth cannot be used at the same time.

###### Options

```bash
kubectl config set-credentials [user name; must match the CN in the client-csr.json request config]
--client-certificate="": path to the certificate file for this user entry.
--client-key="": path to the key file for this user entry.
--embed-certs=false: embed-certs switch for this user entry.
--password="": password for this user entry.
--token="": token for this user entry.
--username="": username for this user entry.
```

###### Run it

Run this in the same `conf` directory. It creates the **user** `k8s_node`; an RBAC rule will later grant that user the permission to act as a worker node in the cluster.

```bash
# Set the client authentication parameters
[root@k8s99-151 conf]# kubectl config set-credentials k8s_node \
  --client-certificate=/opt/kubernetes/server/bin/certs/client.pem \
  --client-key=/opt/kubernetes/server/bin/certs/client-key.pem \
  --embed-certs=true \
  --kubeconfig=kubelet.kubeconfig
User "k8s_node" set.
```
Check the file again:

```bash
[root@k8s99-151 conf]# cat kubelet.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0t...<base64 data truncated>
    server: https://192.168.99.100:7443
  name: myk8s
contexts: null
current-context: ""
kind: Config
preferences: {}
users:
- name: k8s_node
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0t...<base64 data truncated>
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVkt...<base64 data truncated>
```

##### set-context

Set a context entry in the kubeconfig file. If the given name already exists, new fields are merged in and conflicting fields are overwritten.

###### Options

```bash
kubectl config set-context [context name]
--cluster=""    # cluster for this context entry (the cluster name set above).
--namespace=""  # namespace for this context entry.
--user=""       # user for this context entry (the user that will access the cluster).
```

###### Run it

```bash
[root@k8s99-151 conf]# kubectl config set-context myk8s_context \
  --cluster=myk8s \
  --user=k8s_node \
  --kubeconfig=kubelet.kubeconfig
Context "myk8s_context" created.
```
Check the file again:

```bash
[root@k8s99-151 conf]# cat kubelet.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0t...<base64 data truncated>
    server: https://192.168.99.100:7443
  name: myk8s
contexts:
- context:
    cluster: myk8s
    user: k8s_node
  name: myk8s_context
current-context: ""
kind: Config
preferences: {}
users:
- name: k8s_node
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0t...<base64 data truncated>
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVkt...<base64 data truncated>
```

##### use-context

Make one of the context entries in the kubeconfig the current context.

###### Run it

```bash
[root@k8s99-151 conf]# kubectl config use-context myk8s_context \
  --kubeconfig=kubelet.kubeconfig
Switched to context "myk8s_context".
```
Check the final file:

```bash
[root@k8s99-151 conf]# cat kubelet.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0t...<base64 data truncated>
    server: https://192.168.99.100:7443
  name: myk8s
contexts:
- context:
    cluster: myk8s
    user: k8s_node
  name: myk8s_context
current-context: myk8s_context
kind: Config
preferences: {}
users:
- name: k8s_node
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0t...<base64 data truncated>
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVkt...<base64 data truncated>
```

As for the `certificate-authority-data:` value: piping it through `echo "LS0tLS1C*****tLS0tCg==" | base64 -d` prints the contents of `ca.pem`. In other words, the root certificate is now embedded in this config file.
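Because all four commands write deterministically into the same file, regenerating the kubeconfig on another host gives the same result as copying it over. For convenience, here is the exact four-step sequence from above collected into one script (a sketch using the same names and paths as this section):

```bash
#!/bin/bash
# Build kubelet.kubeconfig in one go: cluster -> user -> context -> current context.
cd /opt/kubernetes/server/bin/conf

kubectl config set-cluster myk8s \
  --certificate-authority=/opt/kubernetes/server/bin/certs/ca.pem \
  --embed-certs=true \
  --server=https://192.168.99.100:7443 \
  --kubeconfig=kubelet.kubeconfig

kubectl config set-credentials k8s_node \
  --client-certificate=/opt/kubernetes/server/bin/certs/client.pem \
  --client-key=/opt/kubernetes/server/bin/certs/client-key.pem \
  --embed-certs=true \
  --kubeconfig=kubelet.kubeconfig

kubectl config set-context myk8s_context \
  --cluster=myk8s \
  --user=k8s_node \
  --kubeconfig=kubelet.kubeconfig

kubectl config use-context myk8s_context \
  --kubeconfig=kubelet.kubeconfig
```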
#### Create the k8s-node.yaml role binding

```bash
[root@k8s99-151 conf]# vim k8s-node.yaml
```

Add the following:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s_node
```

RBAC authorization objects are resources too, with roughly five stanzas. This binding gives the User `k8s_node` (the name must match the CN in the client certificate request file) the `ClusterRole` named `system:node`, i.e. the permission to act as a worker node in the cluster.

```bash
[root@k8s99-151 conf]# ls
audit.yaml  k8s-node.yaml  kubelet.kubeconfig
[root@k8s99-151 conf]# kubectl create -f k8s-node.yaml
clusterrolebinding.rbac.authorization.k8s.io/k8s-node created
```

Check it:

```bash
# k8s-node here is the name from the binding's metadata
[root@k8s99-151 conf]# kubectl get clusterrolebinding k8s-node
NAME       ROLE                      AGE
k8s-node   ClusterRole/system:node   11s
[root@k8s99-151 conf]# kubectl get clusterrolebinding k8s-node -o yaml
```

The output looks like this:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: "2020-06-17T14:57:55Z"
  managedFields:
  - apiVersion: rbac.authorization.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:roleRef:
        f:apiGroup: {}
        f:kind: {}
        f:name: {}
      f:subjects: {}
    manager: kubectl
    operation: Update
    time: "2020-06-17T14:57:55Z"
  name: k8s-node
  resourceVersion: "174450"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/k8s-node
  uid: 006f0fa4-51e5-4f98-bb2c-7f649fca3a17
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s_node
```
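To confirm the binding actually took effect, you can ask the API server whether `k8s_node` is now allowed node-level actions. An optional hedged check using kubectl impersonation, which assumes your current kubectl context has impersonation rights (cluster-admin does):

```bash
# Expect "yes" once the ClusterRoleBinding is in place; system:node
# includes creating node objects, which kubelet needs to self-register.
[root@k8s99-151 conf]# kubectl auth can-i create nodes --as k8s_node
```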
If you create it a second time:

```bash
[root@k8s99-151 conf]# kubectl create -f k8s-node.yaml
Error from server (AlreadyExists): error when creating "k8s-node.yaml": clusterrolebindings.rbac.authorization.k8s.io "k8s-node" already exists
# It reports that the resource already exists; kubectl apply gives a similar warning and makes no change
[root@k8s99-151 conf]# kubectl apply -f k8s-node.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
clusterrolebinding.rbac.authorization.k8s.io/k8s-node configured
```

#### Create the kubelet startup script

##### kubelet reference

https://kubernetes.io/zh/docs/reference/command-line-tools-reference/kubelet/

The pause image must be prepared before running the commands below.

##### Create kubelet-startup.sh

```bash
[root@k8s99-151 conf]# cd ..
[root@k8s99-151 bin]# vim kubelet-startup.sh
```

Add the following:

```bash
#!/bin/bash
./kubelet \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 192.168.0.2 \
  --cluster-domain cluster.local \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice \
  --fail-swap-on="false" \
  --client-ca-file ./certs/ca.pem \
  --tls-cert-file ./certs/kubelet.pem \
  --tls-private-key-file ./certs/kubelet-key.pem \
  --hostname-override k8s99-151.host.com \
  --image-gc-high-threshold 20 \
  --image-gc-low-threshold 10 \
  --kubeconfig ./conf/kubelet.kubeconfig \
  --log-dir /var/log/kubelet \
  --pod-infra-container-image harbor.study.com/public/pause:latest \
  --root-dir /data/kubelet
```

```bash
# Create the required directories
[root@k8s99-151 bin]# mkdir -p /var/log/kubelet /data/kubelet
# Make the script executable
[root@k8s99-151 bin]# chmod +x kubelet-startup.sh
```

Test run:

```bash
[root@k8s99-151 bin]# ./kubelet-startup.sh
```

Errors you may run into:

```bash
# Error: check that the kubectl config set-credentials step was done correctly
I0614 20:45:33.030018 2405 kubelet_node_status.go:70] Attempting to register node k8s99-151.host.com
E0614 20:45:33.032410 2405 kubelet_node_status.go:92] Unable to register node "k8s99-151.host.com" with API server: nodes is forbidden: User "system:anonymous" cannot create resource "nodes" in API group "" at the cluster scope
E0614 20:45:33.120180 2405 kubelet.go:2267] node "k8s99-151.host.com" not found
E0614 20:45:33.404607 2405 csi_plugin.go:271] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condition; caused by: csinodes.storage.k8s.io "k8s99-151.host.com" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
F0614 20:45:33.404624 2405 csi_plugin.go:285] Failed to initialize CSINodeInfo after retrying
# =====================
# Error: the k8s_node user lacks permissions. After repeated testing: when creating the
# role binding, the user name must match the CN in the client certificate request file.
E0617 22:29:34.041534 2484 controller.go:136] failed to ensure node lease exists, will retry in 7s, error: leases.coordination.k8s.io "k8s99-151.host.com" is forbidden: User "k8s_node" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
E0617 22:29:34.093463 2484 kubelet.go:2267] node "k8s99-151.host.com" not found
E0617 22:29:35.502577 2484 kubelet.go:2267] node "k8s99-151.host.com" not found
E0617 22:29:35.524773 2484 csi_plugin.go:271] Failed to initialize CSINodeInfo: error updating CSINode annotation: timed out waiting for the condition; caused by: csinodes.storage.k8s.io "k8s99-151.host.com" is forbidden: User "k8s_node" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
F0617 22:29:35.524848 2484 csi_plugin.go:285] Failed to initialize CSINodeInfo after retrying
```

Because this is a newer kubelet release, some of these flags have been migrated to a config file passed via `--config`; see https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/
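As the reference above describes, newer kubelets prefer such a config file over many of these flags. A minimal sketch of what the flag-based setup here might look like as a `--config` file follows; the field names are from the KubeletConfiguration v1beta1 schema, but the file path is hypothetical and the flag-to-field mapping should be treated as illustrative, not a drop-in replacement:

```yaml
# Hypothetical /opt/kubernetes/server/bin/conf/kubelet-config.yaml, passed via --config.
# Roughly mirrors the flags used in kubelet-startup.sh above.
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd                 # --cgroup-driver
clusterDNS:
  - 192.168.0.2                       # --cluster-dns
clusterDomain: cluster.local          # --cluster-domain
failSwapOn: false                     # --fail-swap-on
authentication:
  anonymous:
    enabled: false                    # --anonymous-auth
  x509:
    clientCAFile: /opt/kubernetes/server/bin/certs/ca.pem        # --client-ca-file
tlsCertFile: /opt/kubernetes/server/bin/certs/kubelet.pem        # --tls-cert-file
tlsPrivateKeyFile: /opt/kubernetes/server/bin/certs/kubelet-key.pem  # --tls-private-key-file
imageGCHighThresholdPercent: 20       # --image-gc-high-threshold
imageGCLowThresholdPercent: 10        # --image-gc-low-threshold
```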
#### Run kubelet under supervisor (kubelet.ini)

```bash
[root@k8s99-151 bin]# vim /etc/supervisord.d/kubelet.ini
```

Write this configuration:

```ini
[program:kubelet-k8s99-151]
command=/opt/kubernetes/server/bin/kubelet-startup.sh
numprocs=1
directory=/opt/kubernetes/server/bin/
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/var/log/kubelet/kubelet.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
```

```bash
[root@k8s99-151 bin]# supervisorctl update
kubelet-k8s99-151: added process group
[root@k8s99-151 bin]# supervisorctl status
etcd-server-k8s99-151               RUNNING   pid 1377, uptime 1:43:42
kube-apiserver-k8s99-151            RUNNING   pid 1367, uptime 1:43:42
kube-controller-manager-k8s99-151   RUNNING   pid 1373, uptime 1:43:42
kube-scheduler-k8s99-151            RUNNING   pid 1375, uptime 1:43:42
kubelet-k8s99-151                   RUNNING   pid 10628, uptime 0:01:23
```

#### Check the ports kubelet listens on

```bash
[root@k8s99-151 bin]# netstat -luntp | grep kubelet
tcp   0   0 127.0.0.1:45855   0.0.0.0:*   LISTEN   10629/./kubelet
tcp   0   0 127.0.0.1:10248   0.0.0.0:*   LISTEN   10629/./kubelet
tcp6  0   0 :::10250          :::*        LISTEN   10629/./kubelet
tcp6  0   0 :::10255          :::*        LISTEN   10629/./kubelet
```
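Of these ports, 10250 is the kubelet secure API, 10255 the read-only port, and 10248 the local healthz endpoint. An optional quick liveness check against healthz:

```bash
# The healthz endpoint is plain HTTP on localhost and should answer "ok"
[root@k8s99-151 bin]# curl http://127.0.0.1:10248/healthz
ok
```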
#### Allow the external ports through the firewall

```bash
[root@k8s99-151 bin]# firewall-cmd --zone=public --add-port=10250/tcp --permanent
success
[root@k8s99-151 bin]# firewall-cmd --zone=public --add-port=10255/tcp --permanent
success
[root@k8s99-151 bin]# firewall-cmd --reload
success
```

### Deploying kubelet on 99.152

```bash
# Copy kubelet.pem and kubelet-key.pem from the ops host; they are the same files k8s99-151 uses
[root@k8s99-152 ~]# cd /opt/kubernetes/server/bin/certs/
[root@k8s99-152 certs]# scp k8s99-200:/opt/certs/kubelet.pem .
root@k8s99-200's password:
kubelet.pem                100% 1456   118.7KB/s   00:00
[root@k8s99-152 certs]# scp k8s99-200:/opt/certs/kubelet-key.pem .
root@k8s99-200's password:
kubelet-key.pem            100% 1679   230.6KB/s   00:00
[root@k8s99-152 certs]# ll
total 32
-rw-------. 1 root root 1675 Jun  1 23:13 apiserver-key.pem
-rw-r--r--. 1 root root 1619 Jun  1 23:12 apiserver.pem
-rw-------. 1 root root 1675 Jun  1 23:12 ca-key.pem
-rw-r--r--. 1 root root 1354 Jun  1 23:12 ca.pem
-rw-------. 1 root root 1679 Jun  1 23:12 client-key.pem
-rw-r--r--. 1 root root 1387 Jun  1 23:12 client.pem
-rw-------. 1 root root 1679 Jun 14 20:16 kubelet-key.pem
-rw-r--r--. 1 root root 1456 Jun 14 20:16 kubelet.pem

# Copy the generated kubelet.kubeconfig from 99.151
[root@k8s99-152 certs]# cd ../conf/
[root@k8s99-152 conf]# ls
audit.yaml
[root@k8s99-152 conf]# scp k8s99-151:/opt/kubernetes/server/bin/conf/kubelet.kubeconfig .
The authenticity of host 'k8s99-151 (192.168.99.151)' can't be established.
ECDSA key fingerprint is SHA256:sZ8YJcYAarwkAZg1GiHrQJpVRdzLBLtTma6o8Q8nSt4.
ECDSA key fingerprint is MD5:6f:83:14:23:29:e2:3e:33:0a:a1:69:cd:dc:63:5b:df.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'k8s99-151,192.168.99.151' (ECDSA) to the list of known hosts.
root@k8s99-151's password:
kubelet.kubeconfig         100% 6248   935.4KB/s   00:00
[root@k8s99-152 conf]# ls
audit.yaml  kubelet.kubeconfig

# Copy the k8s-node.yaml role binding
[root@k8s99-152 conf]# scp k8s99-151:/opt/kubernetes/server/bin/conf/k8s-node.yaml .
root@k8s99-151's password:
k8s-node.yaml              100%  258   119.5KB/s   00:00
[root@k8s99-152 conf]# ls
audit.yaml  k8s-node.yaml  kubelet.kubeconfig
```

Create the startup script (identical to the one on 99.151 except for `--hostname-override`):

```bash
[root@k8s99-152 conf]# cd ..
[root@k8s99-152 bin]# vim kubelet-startup.sh
```

```bash
#!/bin/bash
./kubelet \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 192.168.0.2 \
  --cluster-domain cluster.local \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice \
  --fail-swap-on="false" \
  --client-ca-file ./certs/ca.pem \
  --tls-cert-file ./certs/kubelet.pem \
  --tls-private-key-file ./certs/kubelet-key.pem \
  --hostname-override k8s99-152.host.com \
  --image-gc-high-threshold 20 \
  --image-gc-low-threshold 10 \
  --kubeconfig ./conf/kubelet.kubeconfig \
  --log-dir /var/log/kubelet \
  --pod-infra-container-image harbor.study.com/public/pause:latest \
  --root-dir /data/kubelet
```

```bash
# Create the required directories and make the script executable
[root@k8s99-152 bin]# mkdir -p /var/log/kubelet /data/kubelet
[root@k8s99-152 bin]# chmod +x kubelet-startup.sh
[root@k8s99-152 bin]# ./kubelet-startup.sh

# Run kubelet under supervisor
[root@k8s99-152 bin]# vim /etc/supervisord.d/kubelet.ini
```

```ini
[program:kubelet-k8s99-152]
command=/opt/kubernetes/server/bin/kubelet-startup.sh
numprocs=1
directory=/opt/kubernetes/server/bin/
autostart=true
autorestart=true
startsecs=30
startretries=3
exitcodes=0,2
stopsignal=QUIT
stopwaitsecs=10
user=root
redirect_stderr=true
stdout_logfile=/var/log/kubelet/kubelet.stdout.log
stdout_logfile_maxbytes=64MB
stdout_logfile_backups=4
stdout_capture_maxbytes=1MB
stdout_events_enabled=false
```

```bash
[root@k8s99-152 bin]# supervisorctl update
[root@k8s99-152 bin]# supervisorctl status
etcd-server-k8s99-152               RUNNING   pid 1374, uptime 1:47:43
kube-apiserver-k8s99-152            RUNNING   pid 1373, uptime 1:47:43
kube-controller-manager-k8s99-152   RUNNING   pid 1372, uptime 1:47:43
kube-scheduler-k8s99-152            RUNNING   pid 1375, uptime 1:47:43
kubelet-k8s99-152                   RUNNING   pid 3414, uptime 0:01:31

# Allow the ports through the firewall
[root@k8s99-152 bin]# firewall-cmd --zone=public --add-port=10250/tcp --permanent
success
[root@k8s99-152 bin]# firewall-cmd --zone=public --add-port=10255/tcp --permanent
success
[root@k8s99-152 bin]# firewall-cmd --reload
success
```
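If kubelet fails to reach RUNNING under supervisor on either node, an easy first troubleshooting step is to follow the stdout log configured in kubelet.ini:

```bash
# Follow the log that supervisord captures for kubelet (path set in kubelet.ini)
[root@k8s99-152 bin]# tail -f /var/log/kubelet/kubelet.stdout.log
```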
### Checking node health

Check whether the nodes have joined the cluster:

```bash
[root@k8s99-151 bin]# kubectl get nodes
NAME                 STATUS   ROLES    AGE   VERSION
k8s99-151.host.com   Ready    <none>   45h   v1.18.2
k8s99-152.host.com   Ready    <none>   21h   v1.18.2
[root@k8s99-152 bin]# kubectl get nodes
NAME                 STATUS   ROLES    AGE   VERSION
k8s99-151.host.com   Ready    <none>   45h   v1.18.2
k8s99-152.host.com   Ready    <none>   21h   v1.18.2
```

The node name is whatever `--hostname-override` was set to when `./kubelet` ran.

ROLES shows `<none>`, unlike a kubeadm-built cluster: nodes created this way carry no role labels yet.

```bash
# Label k8s99-151.host.com as a master node
[root@k8s99-151 bin]# kubectl label node k8s99-151.host.com node-role.kubernetes.io/master=
node/k8s99-151.host.com labeled
[root@k8s99-151 bin]# kubectl get nodes
NAME                 STATUS   ROLES    AGE   VERSION
k8s99-151.host.com   Ready    master   45h   v1.18.2
k8s99-152.host.com   Ready    <none>   21h   v1.18.2

# The plan, however, is for this host to be both a master and a worker node
[root@k8s99-151 bin]# kubectl label node k8s99-151.host.com node-role.kubernetes.io/node=
node/k8s99-151.host.com labeled
[root@k8s99-151 bin]# kubectl get nodes
NAME                 STATUS   ROLES         AGE   VERSION
k8s99-151.host.com   Ready    master,node   45h   v1.18.2
k8s99-152.host.com   Ready    <none>        21h   v1.18.2
```

Label the second node:

```bash
[root@k8s99-151 bin]# kubectl label node k8s99-152.host.com node-role.kubernetes.io/master=
node/k8s99-152.host.com labeled
[root@k8s99-151 bin]# kubectl label node k8s99-152.host.com node-role.kubernetes.io/node=
node/k8s99-152.host.com labeled
[root@k8s99-151 bin]# kubectl get nodes
NAME                 STATUS   ROLES         AGE   VERSION
k8s99-151.host.com   Ready    master,node   45h   v1.18.2
k8s99-152.host.com   Ready    master,node   21h   v1.18.2

# Seen from the other host, the change shows up automatically
[root@k8s99-152 bin]# kubectl get nodes
NAME                 STATUS   ROLES         AGE   VERSION
k8s99-151.host.com   Ready    master,node   45h   v1.18.2
k8s99-152.host.com   Ready    master,node   21h   v1.18.2
```

You can also filter nodes by these labels, as shown below.
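For instance, a small sketch using the role labels just applied:

```bash
# List only the nodes that carry the node role label
[root@k8s99-151 bin]# kubectl get nodes -l node-role.kubernetes.io/node
NAME                 STATUS   ROLES         AGE   VERSION
k8s99-151.host.com   Ready    master,node   45h   v1.18.2
k8s99-152.host.com   Ready    master,node   21h   v1.18.2

# Show every label on each node, to see what is available to filter on
[root@k8s99-151 bin]# kubectl get nodes --show-labels
```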