Deploy kubernetes cluster on 2 or more servers ; k8s on-premise
Latest revision as of 23:55, 4 December 2020
DRAFT
https://dzone.com/articles/kubespray-10-simple-steps-for-installing-a-product
To back up all resource YAMLs: kubectl get po,deployment,rc,rs,ds,no,job -o yaml
Please note that swap must be disabled on the k8s nodes. Ansible/Kubespray should disable it during install, but remember to check it in case of issues.
All actions are done as root.
Make the servers able to communicate via SSH with keys, like:
On master, as root:
ssh-keygen -t rsa -b 4096
ssh-copy-id root@<slaveIP>
On slave, as root:
ssh-keygen -t rsa -b 4096
ssh-copy-id root@<masterIP>
On the master, remember to append the content of /root/.ssh/id_rsa.pub to /root/.ssh/authorized_keys. This lets the master SSH to itself, which is needed because we run Ansible from the master.
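The append-to-authorized_keys step above can be made idempotent with a small helper. This is only a sketch: the function name ensure_key is invented here, and on the master you would call it with the real paths /root/.ssh/id_rsa.pub and /root/.ssh/authorized_keys.

```shell
# Invented helper: append a public key to an authorized_keys file only
# if that exact key line is not already present (safe to re-run).
# Usage: ensure_key <pubkey-file> <authorized_keys-file>
ensure_key() {
  touch "$2" && chmod 600 "$2"
  # -x: match whole lines, -F: fixed strings, -f: patterns from file
  grep -qxFf "$1" "$2" || cat "$1" >> "$2"
}
# On the master (requires root):
# ensure_key /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys
```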
On master and slave:
yum upgrade
Disable SELinux:
setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
Stop and disable firewalld:
systemctl stop firewalld
systemctl disable firewalld
On master:
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
sysctl -w net.ipv4.ip_forward=1
On slave:
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
sysctl -w net.ipv4.ip_forward=1
Next, on both slave and master again:
sudo yum install epel-release
sudo yum install ansible
dnf install python3
# alternatively: easy_install pip
pip3 install jinja2 --upgrade
On master, install Kubespray:
git clone https://github.com/kubernetes-incubator/kubespray.git
cd kubespray
sudo pip install -r requirements.txt
Copy inventory/sample to inventory/ndicluster (change "ndicluster" to any name you want for the cluster):
cp -avr inventory/sample inventory/ndicluster
Update the Ansible inventory file with inventory builder
declare -a IPS=( 213.108.199.13* 213.108.199.14* )
CONFIG_FILE=inventory/ndicluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}
Check hosts.ini: it may be generated in different variants, but I've put mine into the following form:
[all]
node1 ansible_host=213.108.199.13* ip=213.108.199.13*
node2 ansible_host=213.108.199.14* ip=213.108.199.14*

[kube-master]
node1

[kube-node]
node1   # should not be there!??
node2

[etcd]
node1

[k8s-cluster:children]
kube-master
kube-node

[calico-rr]

[vault]   # probably add!??
node1
node2
BETTER USE CENTOS7 !
ansible-playbook -i inventory/ndicluster/hosts.ini cluster.yml
Check:
kubectl get pods -A
NOTE: if you want to reinstall your cluster, you can do it with reset.yml:
ansible-playbook -i inventory/ndicluster/hosts.ini reset.yml
then run the install again afterwards.
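The reset-then-reinstall cycle can be wrapped in a small script. This is a sketch: the function name redeploy is invented here, and the -e reset_confirmation=yes extra-var (which answers Kubespray's interactive "are you sure" prompt non-interactively) should be checked against your Kubespray version.

```shell
# Invented wrapper: wipe the cluster and redeploy it in one go.
# Usage: redeploy <inventory-file>
redeploy() {
  ansible-playbook -i "$1" reset.yml -e reset_confirmation=yes &&
  ansible-playbook -i "$1" cluster.yml
}
# redeploy inventory/ndicluster/hosts.ini
```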
---Dashboard---
References:
https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/
This is insecure, but it lets you access the dashboard quickly without a proxy. Allow dashboard access:
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
Create a user:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
EOF
Apply permissions:
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF
Get the token:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
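As a variation on the describe command above, you can print just the bearer token. This is a sketch: the helper name dashboard_token is invented here, and it assumes the admin-user secret created above exists in kube-system.

```shell
# Invented helper: extract and decode only the token field of the
# admin-user secret (requires a running cluster with the secret above).
dashboard_token() {
  local secret
  secret=$(kubectl -n kube-system get secret | awk '/admin-user/ {print $1}')
  kubectl -n kube-system get secret "$secret" \
    -o jsonpath='{.data.token}' | base64 --decode
}
# dashboard_token
```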
Log in with the token at:
https://213.108.19*.*:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#/overview?namespace=default
---HELM---
We will install Helm directly on our master node, but you can easily configure kubectl and Helm to work from your PC.
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
Init (note: helm init and the Tiller steps below are Helm 2 concepts; Helm 3, which the get-helm-3 script installs, removed them):
helm init
RBAC is enabled by default, so you need to grant proper permissions to the Helm Tiller:
kubectl create serviceaccount tiller --namespace kube-system
Create tiller-perm.yaml:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""
Apply:
kubectl create -f tiller-perm.yaml
clusterrolebinding.rbac.authorization.k8s.io/tiller-clusterrolebinding created
Update the Tiller deployment:
helm init --service-account tiller --upgrade
Wait a few minutes, then test:
helm ls  # should show no errors
Example: install Elasticsearch.
Optional: you may want to switch the current context to another namespace first, for example:
kubectl config set-context --current --namespace=jx-production
Add the repo:
helm repo add elastic https://helm.elastic.co
"elastic" has been added to your repositories
# Optional: download the helm chart and modify it before applying, or apply the default from the repo:
# curl -O https://raw.githubusercontent.com/elastic/Helm-charts/master/elasticsearch/examples/minikube/values.yaml
# helm install --name elasticsearch elastic/elasticsearch -f ./values.yaml
helm install --name elasticsearch elastic/elasticsearch
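To watch the release come up, something like the following should work. This is a sketch: the statefulset name elasticsearch-master is the chart's usual default and is an assumption here (it changes if you override it in values.yaml), and check_es is an invented helper name.

```shell
# Invented helper: wait for the chart's default statefulset to finish
# rolling out, then list its pods. Requires a running cluster and the
# release installed above; the resource names are assumptions.
check_es() {
  kubectl rollout status statefulset/elasticsearch-master --timeout=5m
  kubectl get pods -l app=elasticsearch-master
}
# check_es
```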