Deploy Kubernetes cluster on 2 or more servers; k8s on-premise


DRAFT

https://dzone.com/articles/kubespray-10-simple-steps-for-installing-a-product

To get a backup of all the YAMLs: kubectl get po,deployment,rc,rs,ds,no,job -o yaml
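For example, a minimal sketch of dumping those resources to a single file (the -A flag and the file name are my additions, not from the original note):

 # dump the listed resource types from all namespaces into one dated YAML file
 kubectl get po,deployment,rc,rs,ds,no,job -A -o yaml > k8s-backup-$(date +%F).yaml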


All actions are performed as root.

Make the servers able to communicate via SSH with keys, like:

On master, as root:

ssh-keygen -t rsa -b 4096
ssh-copy-id root@<slaveIP>

On slave, as root:

ssh-keygen -t rsa -b 4096 
ssh-copy-id root@<masterIP>

On the master, remember to add the /root/.ssh/id_rsa.pub content into /root/.ssh/authorized_keys.

This allows the master to SSH to itself; it's needed because we start Ansible from the master.
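A quick sketch of that self-append plus a verification that passwordless SSH works in every direction Ansible will use:

 # let the master SSH to itself
 cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
 # from the master: both should print a hostname without asking for a password
 ssh root@<slaveIP> hostname
 ssh root@<masterIP> hostname    # tests the self-connection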

Master + slave: run yum upgrade, then disable SELinux:

 setenforce 0
 sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
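To confirm it took effect (should print Permissive now, and Disabled after a reboot):

 getenforce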

Stop and disable the firewall:

systemctl stop firewalld
systemctl disable firewalld

On master:

 modprobe br_netfilter
 echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
 sysctl -w net.ipv4.ip_forward=1

On slave (run modprobe br_netfilter here too, otherwise the bridge-nf-call-iptables entry does not exist yet):

 modprobe br_netfilter
 echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
 sysctl -w net.ipv4.ip_forward=1
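These settings do not survive a reboot; a minimal sketch of making them persistent (the file names k8s.conf and br_netfilter.conf are my choices, not from the original):

cat > /etc/sysctl.d/k8s.conf <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# load br_netfilter automatically on boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
# re-read all sysctl config files now
sysctl --system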


Next, on both slave and master again:

 sudo yum install epel-release
 sudo yum install ansible
 sudo yum install python3    # on CentOS 8 use dnf instead of yum


(if pip is missing, install it first, e.g. easy_install pip)

pip3 install jinja2 --upgrade
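A quick sanity check that the toolchain is in place before moving on:

 ansible --version
 python3 -c 'import jinja2; print(jinja2.__version__)'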


On the master, install Kubespray:

git clone https://github.com/kubernetes-incubator/kubespray.git
cd kubespray
sudo pip3 install -r requirements.txt


Copy inventory/sample as inventory/ndicluster (change "ndicluster" to any name you want for the cluster):

cp -avr inventory/sample  inventory/ndicluster


Update the Ansible inventory file with the inventory builder:

declare -a IPS=(213.108.199.135 213.108.199.147)
CONFIG_FILE=inventory/ndicluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}


Check hosts.ini ... it may be generated in different variants, but I've set mine to the following:

[all]
node1 ansible_host=213.108.199.135  ip=213.108.199.135
node2 ansible_host=213.108.199.147  ip=213.108.199.147  

[kube-master]
node1 

[kube-node]
node1    # questionable: with only 2 nodes this makes the master also run workloads; drop it for a dedicated master
node2 

[etcd]
node1 

[k8s-cluster:children]
kube-master
kube-node 

[calico-rr]

[vault]    # probably needed!?
node1
node2
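Before running the full playbook, it's worth checking that Ansible can reach every host in this inventory:

 ansible -i inventory/ndicluster/hosts.ini all -m ping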


BETTER USE CENTOS 7!


ansible-playbook -i inventory/ndicluster/hosts.ini cluster.yml


Check:

kubectl get pods -A
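And it doesn't hurt to confirm that both nodes registered and are Ready:

 kubectl get nodes -o wide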


NOTE: if you'd like to reinstall your cluster, you can do it with reset.yml:

ansible-playbook -i inventory/ndicluster/hosts.ini reset.yml 

and then install again afterwards.


---dashboard---

kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: build-robot-secret
  annotations:
    kubernetes.io/service-account.name: build-robot
type: kubernetes.io/service-account-token
EOF
IMPORTANT: also create the build-robot service account itself, bind it, and pull its token:

kubectl create serviceaccount build-robot
kubectl create clusterrolebinding ndi-build-robotn --clusterrole=cluster-admin --serviceaccount=default:build-robot
kubectl create clusterrolebinding ndi-build-robot --clusterrole=admin --serviceaccount=default:build-robot
kubectl -n default describe secret $(kubectl -n default get secret | awk '/^build-robot-token-/{print $1}') | awk '$1=="token:"{print $2}'
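Once you have the token, a rough sketch of wiring it into a kubeconfig user (the user/context names and the Kubespray default cluster name cluster.local are my assumptions):

 # read the token out of the secret we created above
 TOKEN=$(kubectl -n default describe secret build-robot-secret | awk '$1=="token:"{print $2}')
 # register a kubeconfig user and context that authenticate with it
 kubectl config set-credentials build-robot --token="$TOKEN"
 kubectl config set-context build-robot-ctx --cluster=cluster.local --user=build-robot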

JUST TRY:

https://213.108.199.163:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#/pod?namespace=_all

!!!INSECURE BUT WORKS:

kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous


  1. kubectl proxy --address='213.108.199.163'
  2. It should proxy you to the console!
  3. OK, if that works, you still need to fix the dashboard setup:
  4. configure an nginx reverse proxy in front of kubectl proxy; run kubectl proxy in screen (see the sketch below)
  5. create tokens for the service account users
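A rough sketch of that nginx + screen setup, assuming nginx is installed and kubectl proxy listens on localhost:8001 (the listen port 8080, server_name and config path are my assumptions, not from the original):

# keep kubectl proxy running in a detached screen session
screen -dmS kubeproxy kubectl proxy --port=8001

# hypothetical nginx reverse proxy config in front of it
cat > /etc/nginx/conf.d/k8s-dashboard.conf <<'EOF'
server {
    listen 8080;
    server_name _;
    location / {
        proxy_pass http://127.0.0.1:8001;
        proxy_set_header Host $host;
    }
}
EOF
nginx -s reload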



Seems like we also need:

https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/