Kubernetes Installation and Configuration

Introduction
These are simple steps for installing Kubernetes on Ubuntu servers using kubeadm. I have used kubespray before, which is easier, but building the cluster with kubeadm gives a better feel for the details; choose whichever fits your needs. The cluster runs in Ubuntu virtual machines and is meant for testing and learning. Downloading, installing, and configuring everything takes roughly one to two hours. -- 2022-02-10 20:37:43

Environment

- Ubuntu 20 virtual machines
- Kubernetes 1.23.0

Plan
| Hostname | Static IP | Account | Password |
|----------|-----------|---------|----------|
| master01 | 172.16.106.11 | root | |
| node01 | 172.16.106.12 | root | |
| node02 | 172.16.106.13 | root | |
| master02 | 172.16.106.14 | root | |
Steps

01. Enable root login

Run on all hosts.
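The manual edit below can also be scripted. A self-contained sketch of the same change, run against a throwaway copy so nothing real is touched (on a real host the target would be /etc/ssh/sshd_config):

```shell
# Demo on a temporary copy of sshd_config.
cat > /tmp/sshd_config.demo <<'EOF'
#PermitRootLogin prohibit-password
PasswordAuthentication yes
EOF

# Replace any existing (possibly commented-out) PermitRootLogin line.
sed -E -i 's/^#?PermitRootLogin.*/PermitRootLogin yes/' /tmp/sshd_config.demo
grep PermitRootLogin /tmp/sshd_config.demo
# → PermitRootLogin yes
```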
```shell
sudo passwd root
sudo vim /etc/ssh/sshd_config
PermitRootLogin yes    # add this line
sudo systemctl restart sshd.service
```

02. Configure static IPs
Run on all hosts.

Set the static IPs according to the plan.

Adjust the static IP to your own needs; look up the DNS servers configured on your own host.
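For the other hosts only the address changes. For example, node01's file under the plan above would look like this (the interface name enp0s3 and the DNS servers are copied from the master01 example below; adjust both to your environment):

```yaml
# /etc/netplan/00-installer-config.yaml on node01 (IP taken from the plan table)
network:
  ethernets:
    enp0s3:
      dhcp4: no
      addresses: [172.16.106.12/24]
      gateway4: 172.16.106.1
      nameservers:
        addresses: [202.106.1.20, 202.106.111.120]
  version: 2
```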
```shell
sudo nano /etc/netplan/00-installer-config.yaml
```

```yaml
network:
  ethernets:
    enp0s3:
      dhcp4: no
      addresses: [172.16.106.11/24]   # static IP
      gateway4: 172.16.106.1          # gateway
      nameservers:
        addresses: [202.106.1.20, 202.106.111.120]  # DNS; adjust to your environment (e.g. the DNS of your Windows 10 host)
  version: 2
```

```shell
sudo netplan apply
```

03. Set hostnames
Run on all hosts.

Set the hostnames according to the plan.

master01 is used as the example here.
```shell
sudo hostnamectl set-hostname master01
sudo nano /etc/hosts
```

```
127.0.0.1 localhost
#127.0.1.1 master01

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

172.16.106.11 master01
172.16.106.12 node01
172.16.106.13 node02
172.16.106.14 master02
```

04. Check and disable SELinux
Run on all hosts.

Note: Ubuntu does not ship this module by default (it would have to be installed separately), so this step can be skipped there; on CentOS and similar distributions SELinux must be disabled.

The steps below apply only if the module is installed.
```shell
sestatus                       # check status
sudo nano /etc/selinux/config  # edit the config: set SELINUX=disabled
sestatus                       # check again
reboot
sestatus                       # verify after reboot
```

05. Disable swap
Run on all hosts.
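The `sed -i '/swap/d' /etc/fstab` part of the one-liner below deletes every fstab line that mentions swap, so it stays off after a reboot. A dry run on a sample file shows the effect (file path and contents are illustrative):

```shell
cat > /tmp/fstab.demo <<'EOF'
UUID=1111-2222 /          ext4 defaults 0 1
/swap.img      none       swap sw       0 0
EOF

sed -i '/swap/d' /tmp/fstab.demo
cat /tmp/fstab.demo
# → only the ext4 root line remains
```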
```shell
swapoff -a && sed -i '/swap/d' /etc/fstab
```

06. Open ports
Run on all hosts.

Master (control-plane) nodes:

| Protocol | Direction | Port Range | Purpose | Used By |
|---|---|---|---|---|
| TCP | Inbound | 6443 | Kubernetes API server | All |
| TCP | Inbound | 2379-2380 | etcd server client API | kube-apiserver, etcd |
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
| TCP | Inbound | 10259 | kube-scheduler | Self |
| TCP | Inbound | 10257 | kube-controller-manager | Self |
Worker nodes:

| Protocol | Direction | Port Range | Purpose | Used By |
|---|---|---|---|---|
| TCP | Inbound | 10250 | Kubelet API | Self, Control plane |
| TCP | Inbound | 30000-32767 | NodePort Services | All |
In a learning environment you can simply disable the firewall.

In production, keep the firewall enabled and open only the required ports.
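For the production case, the port tables above translate directly into ufw rules. A sketch that only prints the commands so they can be reviewed before running (the port list is taken from the control-plane table; ufw writes ranges with `:`):

```shell
# Generate, but do not run, the ufw rules for a control-plane node.
for p in 6443 2379:2380 10250 10257 10259; do
  echo "sudo ufw allow ${p}/tcp"
done
# → prints one "sudo ufw allow …/tcp" line per entry
```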
```shell
sudo ufw disable                                          # Ubuntu
systemctl stop firewalld && systemctl disable firewalld   # CentOS
```

07. Adjust kernel settings
Run on all hosts.
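One caveat: `modprobe` alone does not survive a reboot. On systemd distributions such as Ubuntu 20.04 the modules can additionally be declared in a modules-load.d fragment so they load at boot (demonstrated here with a /tmp path and plain `tee` so no root is needed; the real file would be /etc/modules-load.d/k8s.conf, and the filename is arbitrary):

```shell
# Persist the module list; on a real host: sudo tee /etc/modules-load.d/k8s.conf
tee /tmp/k8s-modules.conf <<EOF
overlay
br_netfilter
EOF
```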
```shell
# Enable kernel modules
sudo modprobe overlay && \
sudo modprobe br_netfilter

# Add some settings to sysctl
sudo tee /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Reload sysctl
sudo sysctl --system
```

08. Install Docker
Run on all hosts.
```shell
sudo apt update && \
sudo apt install apt-transport-https ca-certificates curl software-properties-common && \
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - && \
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable" && \
apt-cache policy docker-ce && \
sudo apt install -y containerd.io docker-ce docker-ce-cli && \
sudo systemctl status docker
```

09. Install Kubernetes
Install kubelet, kubeadm, and kubectl.
Version that requires access to Google's servers (a proxy, for users behind the firewall):

```shell
sudo apt update
sudo apt -y install curl apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
```

Note: if you need a proxy, you can download the key file first, copy it to master01 and node01, and then run:

```shell
sudo apt-key add apt-key.gpg
```

If you cannot reach the Google servers, switch to the Aliyun mirror instead:

```shell
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
```
9.1 Aliyun mirror version

Run on all hosts.
```shell
sudo apt update && \
sudo apt -y install curl apt-transport-https
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
```

9.2 Update and install
Run on all hosts.
```shell
sudo apt update && \
sudo apt -y install vim git curl wget kubelet kubeadm kubectl && \
sudo apt-mark hold kubelet kubeadm kubectl
```

9.3 Check the installation and versions
Run on all hosts.
```shell
kubectl version --client && kubeadm version
```

9.4 Configure Docker
Run on all hosts.
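A syntax error in /etc/docker/daemon.json will stop Docker from starting at all, so it is worth validating the JSON before restarting the daemon. A self-contained check using Python's json.tool (demonstrated on a temporary copy; on a real host point it at /etc/docker/daemon.json):

```shell
cat > /tmp/daemon.json.demo <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2"
}
EOF

# json.tool exits non-zero (and prints an error) if the JSON is malformed.
python3 -m json.tool /tmp/daemon.json.demo > /dev/null && echo "daemon.json OK"
# → daemon.json OK
```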
```shell
# Create required directories
sudo mkdir -p /etc/systemd/system/docker.service.d

# Create daemon json config file
sudo tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

# Start and enable Services
sudo systemctl daemon-reload && \
sudo systemctl restart docker && \
sudo systemctl enable docker
```

9.5 Pull the Kubernetes images (important)
Run on all hosts.
```shell
sudo kubeadm config images list
```

```
root@master01:~# sudo kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.23.3
k8s.gcr.io/kube-controller-manager:v1.23.3
k8s.gcr.io/kube-scheduler:v1.23.3
k8s.gcr.io/kube-proxy:v1.23.3
k8s.gcr.io/pause:3.6
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
```

```shell
sudo kubeadm config images pull
```

Note: this requires access to Google's registry (a proxy).
If you have no proxy, you can switch to a domestic mirror and pull from there:
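The heart of the mirror workaround is a single `sed` substitution that rewrites each `k8s.gcr.io` image name to its Aliyun equivalent. A quick self-contained check of that rewrite on one sample image name:

```shell
echo "k8s.gcr.io/kube-apiserver:v1.23.0" \
  | sed 's/k8s.gcr.io/registry.aliyuncs.com\/google_containers/g'
# → registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0
```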
```shell
# Pull from the Aliyun mirror
kubeadm config print init-defaults > kubeadm.conf
sed -i 's/k8s.gcr.io/registry.aliyuncs.com\/google_containers/g' kubeadm.conf
sudo kubeadm config images list --config kubeadm.conf
```

```
root@master01:~# sudo kubeadm config images list --config kubeadm.conf
registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.23.0
registry.aliyuncs.com/google_containers/pause:3.6
registry.aliyuncs.com/google_containers/etcd:3.5.1-0
registry.aliyuncs.com/google_containers/coredns:v1.8.6
```

```shell
sudo kubeadm config images pull --config kubeadm.conf

# Re-tag the pulled images with the Google names:
# the left-hand names come from `sudo kubeadm config images list --config kubeadm.conf`,
# the right-hand names from `sudo kubeadm config images list`.
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0 k8s.gcr.io/kube-apiserver:v1.23.3 && \
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.0 k8s.gcr.io/kube-controller-manager:v1.23.3 && \
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.0 k8s.gcr.io/kube-scheduler:v1.23.3 && \
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.23.0 k8s.gcr.io/kube-proxy:v1.23.3 && \
docker tag registry.aliyuncs.com/google_containers/pause:3.6 k8s.gcr.io/pause:3.6 && \
docker tag registry.aliyuncs.com/google_containers/etcd:3.5.1-0 k8s.gcr.io/etcd:3.5.1-0 && \
docker tag registry.aliyuncs.com/google_containers/coredns:v1.8.6 k8s.gcr.io/coredns/coredns:v1.8.6
```

9.6 Create the cluster with kubeadm
Run on master01.
```shell
sudo kubeadm init \
  --pod-network-cidr=192.168.0.0/16 \
  --upload-certs \
  --control-plane-endpoint=master01
```

The output looks like this:

```
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join master01:6443 --token 29vzft.qgxppbi7k5doadni \
    --discovery-token-ca-cert-hash sha256:747bc65f0954b6e715fe5ba2444cfe23f6f55a6cbefe9 \
    --control-plane --certificate-key 8ca22421db8958b89a628f2324da23eab0f008de97af427b698b

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join master01:6443 --token 29vzft.qgxppbi7k5doadni \
    --discovery-token-ca-cert-hash sha256:747bc65f0954b6e715fe5ba2444cfe23f6f55a6cbefe9
```

Then set up kubectl access on master01:

```shell
mkdir -p $HOME/.kube && \
sudo cp -f /etc/kubernetes/admin.conf $HOME/.kube/config && \
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl cluster-info
```

Run on node01 and node02:
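If the `--discovery-token-ca-cert-hash` value used in the join commands gets lost, it can be recomputed at any time: per the kubeadm documentation it is the SHA-256 digest of the cluster CA's DER-encoded public key. A self-contained demo of the computation (it generates a throwaway certificate so it can run anywhere; on a real control plane the input would be /etc/kubernetes/pki/ca.crt):

```shell
# Generate a throwaway self-signed cert standing in for the cluster CA.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null

# SHA-256 of the DER-encoded public key: the value kubeadm expects after "sha256:".
openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```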
```shell
kubeadm join master01:6443 --token 29vzft.qgxppbi7k5doadni \
  --discovery-token-ca-cert-hash sha256:747bc65f0954b6e715fe5ba2444cfe23f6f55a6cbefe9
```

Run on master02:
```shell
kubeadm join master01:6443 --token 29vzft.qgxppbi7k5doadni \
  --discovery-token-ca-cert-hash sha256:747bc65f0954b6e715fe5ba2444cfe23f6f55a6cbefe9 \
  --control-plane --certificate-key 8ca22421db8958b89a628f2324da23eab0f008de97af427b698b
```

9.7 Install the network plugin
Run on master01.
Install the Calico network plugin:

```shell
kubectl create -f https://projectcalico.docs.tigera.io/manifests/tigera-operator.yaml
kubectl create -f https://projectcalico.docs.tigera.io/manifests/custom-resources.yaml
```

Note: this takes a few minutes.
Check pod and node status:

```shell
watch kubectl get pods --all-namespaces
kubectl get nodes -o wide
```

Note: when every pod shows Running and every node is Ready, the installation succeeded and the cluster is ready to use.
Miscellaneous
Date of writing: 2022-02-10

Kubernetes version: 1.23.0

Questions and corrections are welcome.

If this helped, feel free to leave a comment.