- Define the `TOKEN` and `CLUSTER_URL` environment variables.
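
  For example, on an OpenShift hub they might be set as follows (a minimal sketch; the exact base domain and the way you obtain a token depend on your cluster):

  ```
  # CLUSTER_URL is the cluster's base domain, i.e. the part after "api." in the API server URL
  export CLUSTER_URL=my-cluster.example.com

  # TOKEN is an API token for the current user
  export TOKEN=$(oc whoami -t)
  ```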
- Connect to the database. If it runs inside the hub cluster, run:

  ```
  kubectl exec -it $(kubectl get pods -n hoh-postgres -l postgres-operator.crunchydata.com/role=master -o jsonpath='{.items..metadata.name}') -n hoh-postgres -c database -- psql -d hoh
  ```
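
  Once inside `psql`, the standard meta-commands can be used to explore the schema before querying it, for example:

  ```
  -- list the tables in the status schema
  \dt status.*

  -- describe the managed clusters table
  \d status.managed_clusters
  ```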
- Show the clusters in the DB:

  ```
  select leaf_hub_name, payload->'metadata'->'name' as cluster_name from status.managed_clusters ORDER BY cluster_name;

  select payload -> 'metadata' ->>'name' as cluster_name, payload -> 'metadata' -> 'labels' -> 'env' as env from status.managed_clusters ORDER BY cluster_name;
  ```
  Output:

  ```
                               labels
  ---------------------------------------------------------------------
   {"name": "cluster0", "vendor": "Kind", "env": "production"}
   {"name": "cluster1", "vendor": "Kind", "env": "production"}
   {"name": "cluster2", "vendor": "Kind", "env": "production"}
   {"name": "cluster3", "vendor": "Kind", "env": "dev"}
   {"name": "cluster4", "vendor": "Kind"}
   {"name": "cluster5", "vendor": "Kind", "env": "production"}
   {"name": "cluster6", "vendor": "Kind", "env": "production"}
   {"name": "cluster7", "vendor": "Kind"}
   {"name": "cluster8", "vendor": "Kind"}
   {"name": "cluster9", "vendor": "Kind", "env": "dev"}
  ```
- Show some SQL queries on the table:

  ```
  SELECT payload -> 'metadata' ->> 'name' FROM status.managed_clusters WHERE payload -> 'metadata' -> 'labels' ->> 'env' = 'dev';

  SELECT payload -> 'metadata' ->> 'name' FROM status.managed_clusters WHERE payload -> 'metadata' -> 'labels' ->> 'env' = 'production';
  ```
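
  The same JSONB operators also support aggregates; the following sketch (not part of the original demo) counts the clusters per `env` label value:

  ```
  -- ->> extracts the label as text; clusters without an env label fall into a NULL group
  SELECT payload -> 'metadata' -> 'labels' ->> 'env' AS env, count(*)
  FROM status.managed_clusters
  GROUP BY env
  ORDER BY env;
  ```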
- Show that there are no managed cluster CRs defined in the Hub-of-Hubs:

  ```
  kubectl config view | grep server
  kubectl get managedcluster -A
  kubectl get ns cluster0
  ```
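
  To make the same check scriptable, the listing can be reduced to a count (a sketch; messages such as "No resources found" go to stderr and are discarded):

  ```
  # count managed cluster CRs on the hub-of-hubs cluster; expect 0
  kubectl get managedcluster -A --no-headers 2>/dev/null | wc -l
  ```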
- Show the managed clusters on the leaf hub `hub1`:

  ```
  kubectl get managedcluster --kubeconfig $HUB1_CONFIG
  kubectl get ns cluster0 --kubeconfig $HUB1_CONFIG
  ```
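
  To see the `env` labels that back the SQL queries above, `--show-labels` can be added:

  ```
  kubectl get managedcluster --kubeconfig $HUB1_CONFIG --show-labels
  ```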
- Show the current identity:

  ```
  curl -k https://api.$CLUSTER_URL:6443/apis/user.openshift.io/v1/users/~ -H "Authorization: Bearer $TOKEN"
  ```
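
  The call returns the OpenShift `User` object for the token's owner; to print only the user name (assuming `jq` is available, as in the steps below):

  ```
  curl -ks https://api.$CLUSTER_URL:6443/apis/user.openshift.io/v1/users/~ -H "Authorization: Bearer $TOKEN" | jq -r .metadata.name
  ```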
- Show the managed clusters in the ACM UI.
- Show the managed clusters in the non-Kubernetes REST API:

  ```
  curl -ks https://multicloud-console.apps.$CLUSTER_URL/multicloud/hub-of-hubs-nonk8s-api/managedclusters -H "Authorization: Bearer $TOKEN" | jq .[].metadata.name | sort
  ```
- Show the managed clusters in the Kubernetes REST API:

  ```
  curl -ks https://api.$CLUSTER_URL:6443/apis/cluster.open-cluster-management.io/v1/managedclusters -H "Authorization: Bearer $TOKEN" | jq .items[].metadata.name
  ```
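
  To compare what the two endpoints return for the current user, a quick sketch using bash process substitution (both lists are sorted so the diff is order-independent):

  ```
  diff <(curl -ks https://multicloud-console.apps.$CLUSTER_URL/multicloud/hub-of-hubs-nonk8s-api/managedclusters -H "Authorization: Bearer $TOKEN" | jq -r .[].metadata.name | sort) \
       <(curl -ks https://api.$CLUSTER_URL:6443/apis/cluster.open-cluster-management.io/v1/managedclusters -H "Authorization: Bearer $TOKEN" | jq -r .items[].metadata.name | sort)
  ```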
- Show the partial evaluation returned by the RBAC component:

  ```
  kubectl exec -it $(kubectl get pod -l name=hub-of-hubs-rbac -o jsonpath='{.items[0].metadata.name}' -n open-cluster-management) -n open-cluster-management -- curl -ks https://localhost:8181/v1/compile?pretty -H 'Content-Type: application/json' -d '{"query":"data.rbac.clusters.allow == true","input":{"user":"VADIME"},"unknowns":["input.cluster"]}'
  ```
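
  Partial evaluation answers "which clusters may this user see" without enumerating them, by leaving `input.cluster` unknown. To evaluate the same rule for one fully-specified input instead, OPA's standard Data API can be used; the shape of `input.cluster` below is a guess for illustration only:

  ```
  # evaluate data.rbac.clusters.allow for a single, concrete input
  kubectl exec -it $(kubectl get pod -l name=hub-of-hubs-rbac -o jsonpath='{.items[0].metadata.name}' -n open-cluster-management) -n open-cluster-management -- \
    curl -ks https://localhost:8181/v1/data/rbac/clusters/allow?pretty -H 'Content-Type: application/json' \
    -d '{"input":{"user":"VADIME","cluster":{"metadata":{"labels":{"env":"dev"}}}}}'
  ```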
- Show the SQL query performed by the non-Kubernetes REST API:

  ```
  kubectl logs -l name=hub-of-hubs-nonk8s-api -n open-cluster-management
  ```
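
  If the log is noisy, the generated SQL can be filtered out of it (a sketch; it assumes the query text appears verbatim in the log lines):

  ```
  kubectl logs -l name=hub-of-hubs-nonk8s-api -n open-cluster-management | grep -i select
  ```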
-
Edit
role_bindings.yaml
, change your role to be one of:developer
,SRE
,devops
,highClearance
,highClearanceDevops
,admin
. -
- Redefine the secret with the role bindings and restart the RBAC component:

  ```
  kubectl delete secret opa-data -n open-cluster-management --ignore-not-found
  kubectl create secret generic opa-data -n open-cluster-management --from-file=testdata/data.json --from-file=role_bindings.yaml --from-file=opa_authorization.rego
  kubectl rollout restart deployment hub-of-hubs-rbac -n open-cluster-management
  ```
- Watch the RBAC pods being recreated:

  ```
  watch kubectl get pod -l name=hub-of-hubs-rbac -n open-cluster-management
  ```
- Check the separation-of-duties (SOD) violation (for the `developer` and `highClearance` roles):

  ```
  kubectl exec -it $(kubectl get pod -l name=hub-of-hubs-rbac -o jsonpath='{.items[0].metadata.name}' -n open-cluster-management) \
    -n open-cluster-management -- curl -ks https://localhost:8181/v1/data/rbac/sod?pretty -H 'Content-Type: application/json'
  ```