/root/.blog sunwfrk

"GPT PMBR size mismatch" when growing LogicalDrive

What I did..

I sequentially hot-swapped the two drives that form a mirror on my HP P420 RAID controller. To actually use that extra space you then need to grow the logical drive.

# ssacli ctrl slot=1 ld 1 modify size=max

Warning: Extension may not be supported on certain operating systems.
         Performing extension on these operating systems can cause data to
         become inaccessible. See SSA documentation for details. Continue?
         (y/n) y

I typed "y" because this was going through my mind:

I'm on Linux... this will just work, right?

Wrong. Even after rescanning, nothing made the drive appear bigger. I even did a reboot.
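For reference, this is roughly what I mean by rescanning; the controller slot, LD number and device path are the ones from this box, so adjust them to yours. The first command only shows what the controller thinks the logical drive size is, and the last one comes from the sg3_utils package, where the -s option specifically looks for resized disks:

# ssacli ctrl slot=1 ld 1 show
# echo 1 > /sys/class/block/sda/device/rescan
# rescan-scsi-bus.sh -s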

Ok, let's check fdisk:

# fdisk /dev/sda

Welcome to fdisk (util-linux 2.29.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

GPT PMBR size mismatch (286677119 != 2344160431) will be corrected by w(rite).
GPT PMBR size mismatch (286677119 != 2344160431) will be corrected by w(rite).

Command (m for help): p

Disk /dev/sda: 1.1 TiB, 1200210141184 bytes, 2344160432 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 262144 bytes / 262144 bytes
Disklabel type: gpt
Disk identifier: B0426352-B2AA-4C55-A328-F6C29271B05F

Device      Start       End   Sectors   Size Type
/dev/sda1    2048      4095      2048     1M BIOS boot
/dev/sda2    4096    528383    524288   256M EFI System
/dev/sda3  528384 286677086 286148703 136.5G Linux LVM

Command (m for help):

Yup, something is wrong. OK, I thought, fdisk knows what's wrong: the GPT label and the protective MBR don't match. Let's just punch 'w' and reboot (because this is the boot disk, partprobe won't help you here), right?

Nope, that doesn't work either (I can't seem to find that output anymore). fdisk wasn't able to write anything.

Ok, I could go on about the other things that failed, but that would make the story too long. So what did help?
=> parted

# parted
GNU Parted 3.2
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p                                                                
Warning: Not all of the space available to /dev/sda appears to be used, you can fix the GPT to use all of the space (an
extra 2057483312 blocks) or continue with the current setting? 
Fix/Ignore? Fix                                                           
Model: HP LOGICAL VOLUME (scsi)
Disk /dev/sda: 1200GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  2097kB  1049kB                     bios_grub
 2      2097kB  271MB   268MB   fat32              boot, esp
 3      271MB   147GB   147GB                      lvm

(parted) q

And now I could use my extra diskspace! Hooray!
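For completeness: fixing the GPT only makes the disk itself report its full size. To actually put the space to use you typically still have to grow the LVM partition, the PV and the LV on top of it. A rough sketch for the layout above (the volume group and logical volume names are placeholders, use your own; the -r flag also grows the filesystem):

# parted /dev/sda resizepart 3 100%
# pvresize /dev/sda3
# lvextend -r -l +100%FREE <vg>/<lv>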

AYBABTU

All Your Base Are Belong To Us

Know Your Meme: https://knowyourmeme.com/memes/all-your-base-are-belong-to-us

Captain: What happen ?
Mechanic: Somebody set up us the bomb.
Operator: We get signal.
Captain: What !
Operator: Main screen turn on.
Captain: It's you !!
CATS: How are you gentlemen !!
CATS: All your base are belong to us.
CATS: You are on the way to destruction.
Captain: What you say !!
CATS: You have no chance to survive make your time.
CATS: Ha ha ha ha …
Operator: Captain !!
Captain: Take off every 'ZIG'!!
Captain: You know what you doing.
Captain: Move 'ZIG'.
Captain: For great justice.

Enable watchdog on Raspberry PI

I'm running Raspbian Linux on two Pis, and I finally managed to enable the hardware watchdog after a lot of failed attempts caused by incorrect guides on the net.

Edit the Raspbian boot config file to enable the watchdog

root@raspbianpi:~# vi /boot/config.txt

Add the following somewhere at the end:

dtparam=watchdog=on

Reboot your Pi

root@raspbianpi:~# reboot

After the reboot install the watchdog software

root@raspbianpi:~# apt install watchdog

Edit the watchdog configuration file

root@raspbianpi:~# vi /etc/watchdog.conf

and add:

watchdog-device = /dev/watchdog
watchdog-timeout = 15

Start the watchdog daemon and make sure it starts at boot

root@raspbianpi:~# systemctl start watchdog
root@raspbianpi:~# systemctl enable watchdog
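To make sure everything stuck after the reboot, this is a rough way to verify it. The last command deliberately crashes the kernel, so with a working watchdog the Pi should hard-reset and come back on its own within the 15 second timeout; only try that when nothing important is running:

root@raspbianpi:~# ls -l /dev/watchdog*
root@raspbianpi:~# systemctl status watchdog
root@raspbianpi:~# echo c > /proc/sysrq-trigger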

rook-ceph on k8s

DRAFT

  1. Add 3 worker nodes with a dedicated block device to use with ceph
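On each of those workers it's worth checking that the dedicated device is really there and empty before handing it to Ceph. A quick sanity check, assuming the device shows up as vdb like on my nodes (wipefs without options only lists existing signatures, so it should print nothing for a clean device):

[root@test-vm4 ~]$ lsblk /dev/vdb
[root@test-vm4 ~]$ wipefs /dev/vdb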

  2. Install git

[root@test-vm1 ~]$ yum install -y git
  3. Install Rook
[root@test-vm1 ~]$ su - k8sadm
[k8sadm@test-vm1 ~]$ git clone https://github.com/rook/rook.git
[k8sadm@test-vm1 ~]$ kubectl apply -f  rook/cluster/examples/kubernetes/ceph/operator.yaml
namespace/rook-ceph-system created
customresourcedefinition.apiextensions.k8s.io/clusters.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/filesystems.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/objectstores.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/pools.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/volumes.rook.io created
clusterrole.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created
role.rbac.authorization.k8s.io/rook-ceph-system created
clusterrole.rbac.authorization.k8s.io/rook-ceph-global created
serviceaccount/rook-ceph-system created
rolebinding.rbac.authorization.k8s.io/rook-ceph-system created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-global created
deployment.apps/rook-ceph-operator created 
  4. Label these worker nodes as 'storage-node'
[k8sadm@test-vm1 ~]$ kubectl label node test-vm4.home.lcl role=storage-node
node/test-vm4.home.lcl labeled

[k8sadm@test-vm1 ~]$ kubectl label node test-vm5.home.lcl role=storage-node
node/test-vm5.home.lcl labeled

[k8sadm@test-vm1 ~]$ kubectl label node test-vm6.home.lcl role=storage-node
node/test-vm6.home.lcl labeled
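Just to be sure the labels ended up where you expect, this should list exactly the three storage nodes:

[k8sadm@test-vm1 ~]$ kubectl get nodes -l role=storage-node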
  5. Create a cluster config
[k8sadm@test-vm1 ~]$ vi cluster.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: rook-ceph
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rook-ceph-cluster
  namespace: rook-ceph
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-ceph-cluster
  namespace: rook-ceph
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: [ "get", "list", "watch", "create", "update", "delete" ]
---
# Allow the operator to create resources in this cluster's namespace
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-ceph-cluster-mgmt
  namespace: rook-ceph
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: rook-ceph-cluster-mgmt
subjects:
- kind: ServiceAccount
  name: rook-ceph-system
  namespace: rook-ceph-system
---
# Allow the pods in this namespace to work with configmaps
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: rook-ceph-cluster
  namespace: rook-ceph
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rook-ceph-cluster
subjects:
- kind: ServiceAccount
  name: rook-ceph-cluster
  namespace: rook-ceph
---
apiVersion: ceph.rook.io/v1beta1
kind: Cluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  dataDirHostPath: /var/lib/rook
  serviceAccount: rook-ceph-cluster
  mon:
    count: 3
    allowMultiplePerNode: true
  dashboard:
    enabled: true
  network:
    hostNetwork: false
  placement:
    all:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: role
              operator: In
              values:
              - storage-node
      podAffinity:
      podAntiAffinity:
      tolerations:
      - key: storage-node
        operator: Exists
  resources:
  storage:
    useAllNodes: false
    useAllDevices: false
    deviceFilter:
    location:
    config:
      databaseSizeMB: "1024"
      journalSizeMB: "1024"
    nodes:
    - name: "test-vm4.home.lcl"
      devices:
      - name: "vdb"
    - name: "test-vm5.home.lcl"
      devices:
      - name: "vdb"
    - name: "test-vm6.home.lcl"
      devices:
      - name: "vdb"

Things you might want to change in the above yaml:

  • the number of mons
    • use an odd number!
    • between 1 and 9
  • the node names
  • the device names (it could be vdc or sdb in your case)
  6. Apply the cluster configuration
[k8sadm@test-vm1 ~]$ kubectl apply -f cluster.yaml
namespace/rook-ceph created
serviceaccount/rook-ceph-cluster created
role.rbac.authorization.k8s.io/rook-ceph-cluster created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cluster created
cluster.ceph.rook.io/rook-ceph created
  7. Check the status
[k8sadm@test-vm1 ~]$ kubectl -n rook-ceph get pods
NAME                                            READY     STATUS      RESTARTS   AGE
rook-ceph-mgr-a-77f86598dd-clsqw                1/1       Running     0          5m
rook-ceph-mon-a-c8b6b9c78-f54px                 1/1       Running     0          5m
rook-ceph-mon-b-85c677b6b4-wg9xb                1/1       Running     0          5m
rook-ceph-mon-c-5fbd645bc4-gwq4v                1/1       Running     0          5m
rook-ceph-osd-0-bc94cf68d-tz7pg                 1/1       Running     0          4m
rook-ceph-osd-1-858b858874-bktlk                1/1       Running     0          4m
rook-ceph-osd-2-6c54c75878-m2zpx                1/1       Running     0          4m
rook-ceph-osd-prepare-test-vm4.home.lcl-fdbnx   0/1       Completed   0          5m
rook-ceph-osd-prepare-test-vm5.home.lcl-m2k75   0/1       Completed   0          5m
rook-ceph-osd-prepare-test-vm6.home.lcl-qcqk5   0/1       Completed   0          5m
  8. Install the Ceph toolbox
[k8sadm@test-vm1 ~]$ kubectl apply -f  rook/cluster/examples/kubernetes/ceph/toolbox.yaml 
deployment.apps/rook-ceph-tools created

[k8sadm@test-vm1 ~]$ kubectl -n rook-ceph get pods
NAME                                            READY     STATUS      RESTARTS   AGE
rook-ceph-mgr-a-77f86598dd-clsqw                1/1       Running     0          5m
rook-ceph-mon-a-c8b6b9c78-f54px                 1/1       Running     0          5m
rook-ceph-mon-b-85c677b6b4-wg9xb                1/1       Running     0          5m
rook-ceph-mon-c-5fbd645bc4-gwq4v                1/1       Running     0          5m
rook-ceph-osd-0-bc94cf68d-tz7pg                 1/1       Running     0          4m
rook-ceph-osd-1-858b858874-bktlk                1/1       Running     0          4m
rook-ceph-osd-2-6c54c75878-m2zpx                1/1       Running     0          4m
rook-ceph-osd-prepare-test-vm4.home.lcl-fdbnx   0/1       Completed   0          5m
rook-ceph-osd-prepare-test-vm5.home.lcl-m2k75   0/1       Completed   0          5m
rook-ceph-osd-prepare-test-vm6.home.lcl-qcqk5   0/1       Completed   0          5m
rook-ceph-tools-856cd87f69-9tznz                1/1       Running     0          4m
  9. Check the Ceph status
[k8sadm@test-vm1 ~]$ kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') ceph status
  cluster:
    id:     2afbac2e-0df9-43a5-821a-c08bdbff3584
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum b,c,a
    mgr: a(active)
    osd: 3 osds: 3 up, 3 in
 
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage:   3077 MB used, 289 GB / 292 GB avail
    pgs:
  10. Check the Ceph OSD status
[k8sadm@test-vm1 ~]$ kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') ceph osd status
+----+----------------------------------+-------+-------+--------+---------+--------+---------+-----------+
| id |               host               |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+----------------------------------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | rook-ceph-osd-0-bc94cf68d-tz7pg  | 1025M | 96.4G |    0   |     0   |    0   |     0   | exists,up |
| 1  | rook-ceph-osd-1-858b858874-bktlk | 1025M | 96.4G |    0   |     0   |    0   |     0   | exists,up |
| 2  | rook-ceph-osd-2-6c54c75878-m2zpx | 1025M | 96.4G |    0   |     0   |    0   |     0   | exists,up |
+----+----------------------------------+-------+-------+--------+---------+--------+---------+-----------+

Now just follow the manual to use ceph:

https://rook.io/docs/rook/master/block.html
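As a starting point, the first thing that guide has you create is a replicated pool and a StorageClass that uses it. Roughly like this for the v1beta1 API used above; the pool name, fstype and file name are just the upstream example values, so double-check the kinds and the provisioner against the docs for your Rook version:

[k8sadm@test-vm1 ~]$ vi storageclass.yaml
apiVersion: ceph.rook.io/v1beta1
kind: Pool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  replicated:
    size: 3
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: ceph.rook.io/block
parameters:
  pool: replicapool
  clusterNamespace: rook-ceph
  fstype: xfs

[k8sadm@test-vm1 ~]$ kubectl apply -f storageclass.yaml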

[k8sadm@test-vm1 kubernetes]$ kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') ceph status
  cluster:
    id:     2afbac2e-0df9-43a5-821a-c08bdbff3584
    health: HEALTH_OK
 
  services:
    mon: 4 daemons, quorum b,c,d,a
    mgr: a(active)
    osd: 3 osds: 3 up, 3 in
 
  data:
    pools:   1 pools, 100 pgs
    objects: 62 objects, 95040 kB
    usage:   3151 MB used, 289 GB / 292 GB avail
    pgs:     100 active+clean
 
  io:
    client:   71023 B/s rd, 5648 kB/s wr, 10 op/s rd, 18 op/s wr

sources:

  1. http://docs.ceph.com/docs/master/dev/kubernetes/
  2. https://rook.io/docs/rook/master/
  3. https://medium.com/@zhimin.wen/deploy-rook-ceph-on-icp-2-1-0-3-63ec16787093

k8s Dashboard installation

Deploy the Dashboard

  1. Install the Kubernetes dashboard
[k8sadm@test-vm1 ~]$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

secret "kubernetes-dashboard-certs" created
serviceaccount "kubernetes-dashboard" created
role "kubernetes-dashboard-minimal" created
rolebinding "kubernetes-dashboard-minimal" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
  2. Deploy Heapster to enable container cluster monitoring and performance analysis on your cluster
[k8sadm@test-vm1 ~]$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml

serviceaccount "heapster" created
deployment "heapster" created
service "heapster" created
  3. Deploy the InfluxDB backend for Heapster to your cluster
[k8sadm@test-vm1 ~]$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml

deployment "monitoring-influxdb" created
service "monitoring-influxdb" created
  4. Create the Heapster cluster role binding for the dashboard:
[k8sadm@test-vm1 ~]$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml

clusterrolebinding "heapster" created
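Before moving on, it doesn't hurt to check that the dashboard, Heapster and InfluxDB pods actually came up; they all land in kube-system:

[k8sadm@test-vm1 ~]$ kubectl -n kube-system get pods | grep -E 'dashboard|heapster|influxdb'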

Create an admin Service Account and Cluster Role Binding

  1. Create a file called k8s-admin-service-account.yaml with the text below
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-admin
  namespace: kube-system
  2. Apply the service account to your cluster
[k8sadm@test-vm1 ~]$ kubectl apply -f k8s-admin-service-account.yaml

serviceaccount "k8s-admin" created
  3. Create a file called k8s-admin-cluster-role-binding.yaml with the text below
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: k8s-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: k8s-admin
  namespace: kube-system
  4. Apply the cluster role binding to your cluster
[k8sadm@test-vm1 ~]$ kubectl apply -f k8s-admin-cluster-role-binding.yaml

clusterrolebinding "k8s-admin" created

Connect to the Dashboard

  1. Retrieve an authentication token for the k8s-admin service account. Copy the <authentication_token> value from the output. You will use this token to connect to the dashboard.
[k8sadm@test-vm1 ~]$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep k8s-admin | awk '{print $1}')

Name:         k8s-admin-token-b5zv4
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=k8s-admin
              kubernetes.io/service-account.uid=bcfe66ac-39be-11e8-97e8-026dce96b6e8

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      <authentication_token>
  2. Start the kubectl proxy
[k8sadm@test-vm1 ~]$ kubectl proxy

Starting to serve on 127.0.0.1:8001
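Note that kubectl proxy only listens on 127.0.0.1. If you started it on the master (test-vm1 here) instead of on your own workstation, an SSH tunnel run from the workstation gets you to it, something along these lines:

ssh -L 8001:127.0.0.1:8001 k8sadm@test-vm1

The URL below then works from your own browser.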
  3. Open the following link with a web browser to access the dashboard endpoint: http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

  4. Choose Token, paste the <authentication_token> output from the previous command into the Token field, and choose SIGN IN.


sources:

  1. https://docs.aws.amazon.com/eks/latest/userguide/dashboard-tutorial.html

Continue with:

  1. K8s rook-ceph Install https://sunwfrk.com/rook-ceph-on-k8s/