
k8s Dashboard installation

Deploy the Dashboard

  1. Install the Kubernetes dashboard
[k8sadm@test-vm1 ~]$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

secret "kubernetes-dashboard-certs" created
serviceaccount "kubernetes-dashboard" created
role "kubernetes-dashboard-minimal" created
rolebinding "kubernetes-dashboard-minimal" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
  2. Deploy heapster to enable container cluster monitoring and performance analysis on your cluster
[k8sadm@test-vm1 ~]$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml

serviceaccount "heapster" created
deployment "heapster" created
service "heapster" created
  3. Deploy the influxdb backend for heapster to your cluster
[k8sadm@test-vm1 ~]$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml

deployment "monitoring-influxdb" created
service "monitoring-influxdb" created
  4. Create the heapster cluster role binding for the dashboard:
[k8sadm@test-vm1 ~]$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml

clusterrolebinding "heapster" created

Create an admin Service Account and Cluster Role Binding

  1. Create a file called k8s-admin-service-account.yaml with the text below
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-admin
  namespace: kube-system
  2. Apply the service account to your cluster
[k8sadm@test-vm1 ~]$ kubectl apply -f k8s-admin-service-account.yaml

serviceaccount "k8s-admin" created
  3. Create a file called k8s-admin-cluster-role-binding.yaml with the text below
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: k8s-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: k8s-admin
  namespace: kube-system
  4. Apply the cluster role binding to your cluster
[k8sadm@test-vm1 ~]$ kubectl apply -f k8s-admin-cluster-role-binding.yaml

clusterrolebinding "k8s-admin" created

Connect to the Dashboard

  1. Retrieve an authentication token for the k8s-admin service account. Copy the <authentication_token> value from the output; you'll use this token to connect to the dashboard
[k8sadm@test-vm1 ~]$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep k8s-admin | awk '{print $1}')

Name:         k8s-admin-token-b5zv4
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=k8s-admin
              kubernetes.io/service-account.uid=bcfe66ac-39be-11e8-97e8-026dce96b6e8

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      <authentication_token>
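
If you'd rather extract only the token, it is stored base64-encoded in the secret, so a one-liner along these lines should also work:

[k8sadm@test-vm1 ~]$ kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep k8s-admin | awk '{print $1}') -o jsonpath='{.data.token}' | base64 --decode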
  2. Start the kubectl proxy
[k8sadm@test-vm1 ~]$ kubectl proxy

Starting to serve on 127.0.0.1:8001
  3. Open the following link with a web browser to access the dashboard endpoint: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

  4. Choose Token, paste the <authentication_token> output from the previous command into the Token field, and choose SIGN IN.
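
Note that kubectl proxy binds to 127.0.0.1 only, so the URL above has to be opened on the machine running the proxy. If your browser runs elsewhere, one option (assuming you can SSH to the VM) is a local port forward; the same URL then works in your local browser:

$ ssh -L 8001:127.0.0.1:8001 k8sadm@test-vm1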


sources:

  1. https://docs.aws.amazon.com/eks/latest/userguide/dashboard-tutorial.html

Continue with:

  1. K8s rook-ceph Install https://sunwfrk.com/rook-ceph-on-k8s/

udev rules for ASM disks

Make sure you have sg3_utils installed.

# yum install -y sg3_utils

After the LUNs have been added to the server, run:

# rescan-scsi-bus.sh

This will generate a lot of output and will tell you if it found new disks.
If you've received the WWIDs from your SAN administrator you can skip the next step; if not, we'll have to figure out which disks were added using:

# dmesg

Record the new disks for further reference; if you asked for 2 LUNs with different sizes, you can match the 2 new disks to the LUNs by their sizes. I'm noting:

[1808189.173460] sd 0:0:0:9: [sdak] 209715200 512-byte logical blocks: (107 GB/100 GiB)
[1808189.213339] sd 0:0:0:10: [sdal] 104857600 512-byte logical blocks: (53.6 GB/50.0 GiB)

I will assume you have multipath. If you are blacklisting all LUNs by default, you will also need to modify your multipath configuration; I will not cover that here.

Now run:

# multipath -ll
...
mpathk (36006016056a04000e9113c6d9189e811) dm-21 DGC     ,VRAID
size=50G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 0:0:0:10 sdal 66:80  active ready running
| `- 1:0:0:10 sdap 66:144 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 0:0:1:10 sdan 66:112 active ready running
  `- 1:0:1:10 sdar 66:176 active ready running
mpathj (36006016056a04000ea81ef4f9189e811) dm-20 DGC     ,VRAID
size=100G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 0:0:1:9  sdam 66:96  active ready running
| `- 1:0:1:9  sdaq 66:160 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
  |- 0:0:0:9  sdak 66:64  active ready running
  `- 1:0:0:9  sdao 66:128 active ready running
...

I'm only showing the mpath devices I need. What's important now are the WWIDs:

  • 36006016056a04000e9113c6d9189e811
  • 36006016056a04000ea81ef4f9189e811
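
If you want to double-check a WWID yourself, you can run the same scsi_id call the udev rules below will use against one of the underlying paths; for sdak (a path behind mpathj above) it should print 36006016056a04000ea81ef4f9189e811:

# /usr/lib/udev/scsi_id -g -u -d /dev/sdak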

Now we'll edit /etc/udev/rules.d/99-oracle-asmdevices.rules

# vi /etc/udev/rules.d/99-oracle-asmdevices.rules

and add

#100G mpathj asm-data-example
KERNEL=="dm-*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $tempnode", RESULT=="36006016056a04000ea81ef4f9189e811", SYMLINK+="asm-data-example", OWNER="oracle", GROUP="dba", MODE="0660"
 
#50G mpathk asm-fra-example
KERNEL=="dm-*", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $tempnode", RESULT=="36006016056a04000e9113c6d9189e811", SYMLINK+="asm-fra-example", OWNER="oracle", GROUP="dba", MODE="0660"

Now, a very important step that you won't succeed without:

# partprobe /dev/mapper/mpathk
# partprobe /dev/mapper/mpathj

The last step is to reload the udev config and activate it:

# udevadm control --reload-rules
# udevadm trigger --type=devices --action=change

Verify that our new devices have been created:

# ls -lrt /dev/asm*example

lrwxrwxrwx. 1 root root 5 Jul 17 10:38 /dev/asm-fra-example -> dm-21
lrwxrwxrwx. 1 root root 5 Jul 17 10:38 /dev/asm-data-example -> dm-20
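
The symlinks themselves will always show root as owner; the OWNER, GROUP and MODE from the rules are applied to the underlying dm devices, so to confirm that oracle:dba took effect, check those directly (device names will differ on your system):

# ls -l /dev/dm-20 /dev/dm-21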

Syncing an RPM repo for offline use

For example, we want to sync the EPEL repo for offline use.

If you are on CentOS 7, you can just type:

# yum install epel-release

If not, add the EPEL repo this way:

# wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# rpm -ivh epel-release-latest-7.noarch.rpm

Install the reposync utility, which is part of 'yum-utils', together with createrepo:

# yum install yum-utils createrepo

Create an offline copy with only the newest packages (the '-n' option):

# reposync -n --repoid=epel --download_path=/data

Create the repomd (XML RPM metadata) repository:

# createrepo /data/epel
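
To consume the offline copy, clients need a repo file pointing at it. A minimal sketch, assuming the synced tree is reachable at /data/epel on the client; use an http:// baseurl instead if you serve it with a web server, and point gpgcheck/gpgkey at the EPEL key if you want signature verification (disabled here for brevity):

# cat /etc/yum.repos.d/epel-offline.repo
[epel-offline]
name=EPEL offline mirror
baseurl=file:///data/epel
enabled=1
gpgcheck=0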

When you later want to update the repo, just resync it:

# reposync -n --repoid=epel --download_path=/data

Remove older RPMs from the updated repo:

# repomanage -k1 -c -o /data/epel/ | xargs rm

Run createrepo with the --update flag to speed things up:

# createrepo --update /data/epel
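
If you want the mirror to keep itself current, the three maintenance commands can go into a small cron script; a sketch, with the path and daily schedule purely as examples:

# cat /etc/cron.daily/sync-epel
#!/bin/bash
# resync the newest packages, prune superseded ones, refresh the metadata
reposync -n --repoid=epel --download_path=/data
repomanage -k1 -c -o /data/epel/ | xargs -r rm
createrepo --update /data/epel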