Deploy kubernetes cluster on 2 or more servers; k8s on-premise

DRAFT

https://dzone.com/articles/kubespray-10-simple-steps-for-installing-a-product

Get a YAML backup of all the main resources:

 kubectl get po,deployment,rc,rs,ds,no,job -o yaml


PLEASE note that swap should be disabled on the k8s nodes. This should be done during the install with Ansible Kubespray, however remember to check it in case of issues.
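A minimal sketch for checking swap and disabling it by hand, in case Kubespray did not:

 free -h        # the Swap: line should show 0B
 swapoff -a     # turn swap off immediately
 sed -i '/ swap / s/^/#/' /etc/fstab    # comment out swap entries so it stays off after reboot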

All actions are performed as root.

Make the servers able to communicate via SSH with keys, like:

on master as root

ssh-keygen -t rsa -b 4096
ssh-copy-id root@<slaveIP>

on slave as root

ssh-keygen -t rsa -b 4096 
ssh-copy-id root@<masterIP>

On the master, remember to add the content of /root/.ssh/id_rsa.pub into /root/.ssh/authorized_keys. This allows the master to SSH to itself, which is needed if we run Ansible from the master.
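For example (assuming the default key path):

 cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
 ssh root@localhost hostname    # should run without asking for a password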


On master + slave:

 yum upgrade

Disable SELinux:

 setenforce 0
 sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
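Verify:

 getenforce    # should print Permissive now, Disabled after a reboot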

Stop and disable firewalld:

systemctl stop firewalld
systemctl disable firewalld

On master:

 modprobe br_netfilter
 echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
 sysctl -w net.ipv4.ip_forward=1

On slave:

 echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
 sysctl -w net.ipv4.ip_forward=1
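Note that sysctl values set this way do not survive a reboot. A minimal sketch to persist them (the file name 99-k8s.conf is an arbitrary choice):

cat <<EOF > /etc/sysctl.d/99-k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system    # reload all sysctl configuration files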


Next, on slave and master again:

 sudo yum install epel-release
 sudo yum install ansible
 dnf install python3
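It's worth checking the installed versions before proceeding, since Kubespray expects a reasonably recent Ansible and Python 3:

 ansible --version
 python3 --version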


# easy_install pip   (commented out; pip3 is used below)

pip3 install jinja2 --upgrade


On master, install Kubespray:

git clone https://github.com/kubernetes-incubator/kubespray.git
cd kubespray
sudo pip install -r requirements.txt


Copy inventory/sample as inventory/ndicluster (change “ndicluster” to any name you want for the cluster):

cp -avr inventory/sample  inventory/ndicluster


Update the Ansible inventory file with inventory builder

declare -a IPS=( 213.108.199.13* 213.108.199.14*)
CONFIG_FILE=inventory/ndicluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}


Check the hosts.ini ... it may be generated in different variants, but I've put mine into the following form:

[all]
node1 ansible_host=213.108.199.13* ip=213.108.199.13*
node2 ansible_host=213.108.199.14*  ip=213.108.199.14*  

[kube-master]
node1 

[kube-node]
node1    # should not be there!?? (listing the master here also makes it schedulable as a worker)
node2 

[etcd]
node1 

[k8s-cluster:children]
kube-master
kube-node 

[calico-rr]

[vault] #probably add!??
node1
node2


BETTER USE CENTOS 7!


ansible-playbook -i inventory/ndicluster/hosts.ini cluster.yml


Check:

kubectl get pods -A
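You can also confirm that both nodes have joined and are Ready:

 kubectl get nodes -o wide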


NOTE: if you'd like to reinstall your cluster, you may do it with reset.yml:

ansible-playbook -i inventory/ndicluster/hosts.ini reset.yml 

and then run the install again afterwards.


---Dashboard---

References:

 https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md
 https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/

Insecure way, but it will allow you to access the dashboard quickly w/o a proxy: allow anonymous dashboard access:

kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous

Create a user:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
EOF

Apply permissions:

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF

Get the token:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

Log in with the token:

https://213.108.19*.*:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#/overview?namespace=default
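A more conventional alternative (per the referenced docs, assuming the dashboard service lives in kube-system as in the URL above) is to tunnel through kubectl proxy:

 kubectl proxy

then open http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ in a browser on the same machine.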


---HELM---

We will install Helm right on our master node, but you may easily configure kubectl and Helm to work from your PC.

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

Init (note: helm init and Tiller apply to Helm 2 only; if the get-helm-3 script above installed Helm 3, this step and the Tiller permissions below are not needed):

helm init
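Check that the Tiller pod came up in kube-system:

 kubectl -n kube-system get pods | grep -i tiller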


By default RBAC is enabled, so you need to grant proper permissions to the Helm Tiller:

kubectl create serviceaccount tiller --namespace kube-system

Create tiller-perm.yaml:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tiller-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: ""

Apply

kubectl create -f tiller-perm.yaml 
 clusterrolebinding.rbac.authorization.k8s.io/tiller-clusterrolebinding created

Update tiller deployment:

helm init --service-account tiller --upgrade

Wait a few minutes, then test:

helm ls    # should complete with no errors


Example: install Elasticsearch:

Optional: you may like to switch the current context to another namespace, for example:
 kubectl config set-context --current --namespace=jx-production

Add the repo:

helm repo add elastic https://helm.elastic.co
"elastic" has been added to your repositories

#optional: you may download the Helm chart and modify it before applying, or just apply the default from the repo
#curl -O https://raw.githubusercontent.com/elastic/Helm-charts/master/elasticsearch/examples/minikube/values.yaml
#helm install --name elasticsearch elastic/elasticsearch -f ./values.yaml
helm install --name elasticsearch elastic/elasticsearch
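To watch the Elasticsearch pods start (app=elasticsearch-master is the label the elastic chart applies by default; adjust it if you customized values):

 kubectl get pods --namespace=default -l app=elasticsearch-master -w

[[Category:Linux]]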